Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means that developers can stay in the flow while coding, enjoying rapid chat-based iteration and responsive code completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the announcement blog post.
| Router | Input ($ / 1M tokens) | Output ($ / 1M tokens) | Cached Input ($ / 1M tokens) |
|---|---|---|---|
| OpenRouter | $0.25 | $1.00 | — |
| Martian | $0.25 | $1.00 | — |
Model ID: `inception/mercury-coder`
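As a reference for getting started, here is a minimal sketch of calling the model through OpenRouter's OpenAI-compatible chat completions endpoint. The environment variable name and prompt are illustrative assumptions, not part of this listing; Martian exposes a similar OpenAI-compatible interface, so the same pattern applies with its base URL and key.

```python
# Minimal sketch: calling inception/mercury-coder via OpenRouter's
# OpenAI-compatible API. The env var name and prompt below are
# illustrative assumptions; check the provider docs for specifics.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",          # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],         # assumed env var holding your API key
)

response = client.chat.completions.create(
    model="inception/mercury-coder",                   # model ID from the listing above
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        },
    ],
)

print(response.choices[0].message.content)
```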