
Mistral: Mixtral 8x7B Instruct

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts model from Mistral AI, fine-tuned for chat and instruction following. Each layer incorporates 8 experts (feed-forward networks), for a total of roughly 47 billion parameters. The Instruct variant was fine-tuned by Mistral. #moe
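For readers unfamiliar with the architecture, here is a minimal sketch of a sparse Mixture-of-Experts feed-forward layer with top-2 routing in PyTorch, in the spirit of Mixtral's design. The dimensions and the plain two-layer experts are simplified placeholders, not Mixtral's actual configuration.

```python
# Minimal sketch of a sparse MoE feed-forward layer with top-2 gating.
# Sizes and expert structure are illustrative, not Mixtral's real config.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoEFFN(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (tokens, d_model)
        logits = self.gate(x)                           # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # route each token to its top-k experts
        weights = F.softmax(weights, dim=-1)            # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

x = torch.randn(4, 512)
print(SparseMoEFFN()(x).shape)  # torch.Size([4, 512])
```

Only the selected experts run for each token, which is why the model's active parameter count per token is much smaller than its 47B total.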

Context Window: 33K tokens
Max Output: 16K tokens
Released: Dec 10, 2023
Arena Rank: #238 of 305 models

Capabilities

👁 Vision
🧠 Reasoning
🔧 Tool Calling
Prompt Caching
🖥 Computer Use
🎨 Image Generation

Supported Parameters

Frequency Penalty: Reduce repetition
Logit Bias: Adjust token weights
Max Tokens: Output length limit
min_p: Minimum-probability sampling threshold
Presence Penalty: Encourage new topics
Repetition Penalty: Penalize repeated tokens
Response Format: JSON mode / structured output
Seed: Deterministic outputs
Stop Sequences: Custom stop tokens
Temperature: Controls randomness
Tool Choice: Control tool usage
Tool Calling: Function calling support
Top K: Top-K sampling
Top P: Nucleus sampling

Pricing Comparison

Router      Input / 1M   Output / 1M   Cached Input / 1M
OpenRouter  $0.54        $0.54         —
Martian     $0.54        $0.54         —
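Since both routers list the same flat per-million-token rate here, estimating a request's cost is simple arithmetic. The token counts in this sketch are hypothetical.

```python
# Quick cost estimate from the per-1M-token prices above.
PRICE_IN = PRICE_OUT = 0.54  # USD per 1M tokens, per the table

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

# e.g. a 2,000-token prompt with an 800-token completion:
print(f"${request_cost(2_000, 800):.6f}")  # $0.001512
```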

Benchmarks

Open LLM Leaderboard

Benchmark    Score
Average      23.82/100
IFEval       55.99/100
BBH          29.74/100
MATH Lvl 5   9.14/100
GPQA         7.05/100
MUSR         11.07/100
MMLU-PRO     29.91/100

Model IDs

OpenRouter: mistralai/mixtral-8x7b-instruct
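Given the Tool Calling capability and the tool-calling tag below, here is a hedged sketch of a function-calling request using this model ID. The get_weather tool and its schema are illustrative assumptions; the request shape follows the OpenAI-compatible API that OpenRouter exposes.

```python
# Sketch: tool calling with the OpenRouter model ID above. The tool name and
# schema are hypothetical; endpoint details are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
    tool_choice="auto",  # Tool Choice: let the model decide whether to call
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model asked to invoke the tool
    print(msg.tool_calls[0].function.name, msg.tool_calls[0].function.arguments)
else:
    print(msg.content)
```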

Tags

tool-calling