LLM Router

Independent comparison platform for LLM routing infrastructure.

© 2026 LLM Router

Data from public sources. May not reflect real-time pricing.


Qwen


Qwen (通义千问) is Alibaba Cloud's family of large language models. The Qwen series includes both open-source and commercial models with strong multilingual support, coding abilities, and competitive benchmark performance across multiple model sizes.

Pricing available from Requesty, OpenRouter, Vercel AI, Martian, DeepInfra.

Total Models: 60
Arena Ranked: 16 of 60
Open Source: 60 of 60
Cheapest Input: $0.01 per 1M tokens

$ Pricing Summary (per 1M tokens)

Metric           Input    Output
Cheapest         $0.01    $0.09
Average          $0.32    $1.43
Most Expensive   $1.60    $6.40
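Per-1M-token prices translate to per-request costs by simple proportion. A minimal sketch (the token counts are hypothetical; prices are the average Qwen rates from the summary above):

```python
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """USD cost of one request, given per-1M-token input/output prices."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A 4,000-token prompt with a 1,000-token reply at $0.32 / $1.43 per 1M:
cost = request_cost(4_000, 1_000, 0.32, 1.43)
print(f"${cost:.6f}")  # $0.002710
```

At these rates even a long prompt costs fractions of a cent, which is why provider-to-provider differences only matter at volume.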

⚙ Capabilities

👁 Vision: 17 of 60 models
🧠 Reasoning: 20 of 60 models
🔧 Tool Calling: 49 of 60 models
⚡ Prompt Caching: 15 of 60 models
🖥 Computer Use: 0 of 60 models
🎨 Image Generation: 0 of 60 models

🤖 All Qwen Models (60)

Qwen · Qwen 3 · OSS · #34

Qwen3 Max

Qwen3-Max is an updated release built on the Qwen3 series, offering major improvements in reasoning, instruction following, multilingual support, and long-tail knowledge coverage compared to the January 2025 version. It delivers higher accuracy in math, coding, logic, and science tasks, follows complex instructions in Chinese and English more reliably, reduces hallucinations, and produces higher-quality responses for open-ended Q&A, writing, and conversation. The model supports over 100 languages with stronger translation and commonsense reasoning, and is optimized for retrieval-augmented generation (RAG) and tool calling, though it does not include a dedicated “thinking” mode.

Context: 262K · Max Output: 66K · Input/1M: $0.84
👁 Vision · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  Requesty ★: $0.86 / $3.44
  OpenRouter: $1.20 / $6.00
  Vercel AI: $0.84 / $3.38
  Martian: $1.20 / $6.00
  DeepInfra: $1.20 / $6.00
Released: 2025-09-23
Qwen · Qwen 3 · OSS · #51

Qwen: Qwen3 VL 235B A22B Instruct

Qwen3-VL-235B-A22B Instruct is an open-weight multimodal model that unifies strong text generation with visual understanding across images and video. The Instruct model targets general vision-language use (VQA, document parsing, chart/table extraction, multilingual OCR). The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning. Beyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows—turning sketches or mockups into code and assisting with UI debugging—while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.

Context: 262K · Max Output: — · Input/1M: $0.20
👁 Vision · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.20 / $0.88
  Martian: $0.20 / $0.88
  DeepInfra: $0.20 / $0.88
Released: 2025-09-23
Qwen · Qwen 3 · OSS · #64

Qwen: Qwen3 Next 80B A3B Instruct (free)

Qwen3-Next-80B-A3B-Instruct is an instruction-tuned chat model in the Qwen3-Next series optimized for fast, stable responses without “thinking” traces. It targets complex tasks across reasoning, code generation, knowledge QA, and multilingual use, while remaining robust on alignment and formatting. Compared with prior Qwen3 instruct variants, it focuses on higher throughput and stability on ultra-long inputs and multi-turn dialogues, making it well-suited for RAG, tool use, and agentic workflows that require consistent final answers rather than visible chain-of-thought. The model employs scaling-efficient training and decoding to improve parameter efficiency and inference speed, and has been validated on a broad set of public benchmarks where it reaches or approaches larger Qwen3 systems in several categories while outperforming earlier mid-sized baselines. It is best used as a general assistant, code helper, and long-context task solver in production settings where deterministic, instruction-following outputs are preferred.

Context: 262K · Max Output: — · Input/1M: Free
🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
  Vercel AI: $0.09 / $1.10
  Martian: $0.09 / $1.10
  DeepInfra: $0.09 / $1.10
Released: 2025-09-11
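Because routers such as OpenRouter expose an OpenAI-compatible chat endpoint, calling a Qwen model is mostly a matter of setting the base URL and a model slug. A sketch under those assumptions (the slug `qwen/qwen3-next-80b-a3b-instruct:free` and the endpoint path are taken from OpenRouter's conventions; verify against the router's docs):

```python
import json
import os
import urllib.request

# Assumed OpenRouter OpenAI-compatible endpoint; other routers differ.
BASE_URL = "https://openrouter.ai/api/v1/chat/completions"
payload = {
    "model": "qwen/qwen3-next-80b-a3b-instruct:free",  # slug is an assumption
    "messages": [{"role": "user", "content": "Summarize RAG in one sentence."}],
    "max_tokens": 256,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request without sending it."""
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Only hits the network when a key is configured.
if os.environ.get("OPENROUTER_API_KEY"):
    with urllib.request.urlopen(build_request(os.environ["OPENROUTER_API_KEY"])) as r:
        print(json.load(r)["choices"][0]["message"]["content"])
```

Swapping providers from the pricing lists above is then a one-line change to `BASE_URL` and the model slug.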
Qwen · Qwen 3 · OSS · #67

Qwen3 235B A22B Thinking 2507

Qwen3-235B-A22B-Thinking-2507 is a Qwen3 release that scales up the thinking capability of Qwen3-235B-A22B, improving both the quality and depth of its reasoning.

Context: 262K · Max Output: 262K · Input/1M: $0.30
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  Vercel AI: $0.30 / $2.90
Released: 2025-04-01
Qwen · Qwen 3 · OSS · #69

Qwen: Qwen3 VL 235B A22B Thinking

Qwen3-VL-235B-A22B Thinking is a multimodal model that unifies strong text generation with visual understanding across images and video. The Thinking model is optimized for multimodal reasoning in STEM and math. The series emphasizes robust perception (recognition of diverse real-world and synthetic categories), spatial understanding (2D/3D grounding), and long-form visual comprehension, with competitive results on public multimodal benchmarks for both perception and reasoning. Beyond analysis, Qwen3-VL supports agentic interaction and tool use: it can follow complex instructions over multi-image, multi-turn dialogues; align text to video timelines for precise temporal queries; and operate GUI elements for automation tasks. The models also enable visual coding workflows, turning sketches or mockups into code and assisting with UI debugging, while maintaining strong text-only performance comparable to the flagship Qwen3 language models. This makes Qwen3-VL suitable for production scenarios spanning document AI, multilingual OCR, software/UI assistance, spatial/embodied tasks, and research on vision-language agents.

Context: 131K · Max Output: 33K · Input/1M: Free
👁 Vision · 🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
  Martian: $0.45 / $3.50
Released: 2025-09-23
Qwen · Qwen 3 · OSS · #90

Qwen: Qwen3 235B A22B

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.

Context: 41K · Max Output: 41K · Input/1M: $0.22
🧠 Reasoning · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.30 / $1.20
  Martian: $0.22 / $0.88
Released: 2025-04-28
Qwen · Qwen 3 · OSS · #96

Qwen: Qwen3 Next 80B A3B Thinking

Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured “thinking” traces by default. It is designed for hard multi-step problems: math proofs, code synthesis and debugging, logic, and agentic planning. It reports strong results across knowledge, reasoning, coding, alignment, and multilingual evaluations. Compared with prior Qwen3 variants, it emphasizes stability under long chains of thought and efficient scaling during inference, and it is tuned to follow complex instructions while reducing repetitive or off-task behavior. The model is suitable for agent frameworks and tool use (function calling), retrieval-heavy workflows, and standardized benchmarking where step-by-step solutions are required. It supports long, detailed completions and leverages throughput-oriented techniques (e.g., multi-token prediction) for faster generation. Note that it operates in thinking-only mode.

Context: 128K · Max Output: — · Input/1M: $0.15
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.15 / $1.20
  Vercel AI: $0.15 / $1.50
  Martian: $0.15 / $1.20
Released: 2025-09-11
Qwen · Qwen 3 · OSS · #119

Qwen: Qwen3 8B

Qwen3-8B is a dense 8.2B parameter causal language model from the Qwen3 series, designed for both reasoning-heavy tasks and efficient dialogue. It supports seamless switching between "thinking" mode for math, coding, and logical inference, and "non-thinking" mode for general conversation. The model is fine-tuned for instruction-following, agent integration, creative writing, and multilingual use across 100+ languages and dialects. It natively supports a 32K token context window and can extend to 131K tokens with YaRN scaling.

Context: 32K · Max Output: 8K · Input/1M: $0.05
🧠 Reasoning · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.05 / $0.40
  Martian: $0.05 / $0.40
Released: 2025-04-28
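The 32K-native, 131K-extended context mentioned for these Qwen3 dense models comes from YaRN RoPE scaling: the extended window is the native window times the scaling factor. A sketch of the arithmetic (the `rope_scaling` dict mirrors the pattern Qwen documents for Hugging Face-style configs; treat the exact keys as an assumption to check against the model card):

```python
# YaRN extension: extended context = scaling factor × native context.
native_ctx = 32_768  # Qwen3-8B's native window (~32K tokens)
rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": native_ctx,
}
extended_ctx = int(rope_scaling["factor"] * native_ctx)
print(extended_ctx)  # 131072  (~131K, matching the listed extended window)
```

A larger factor buys more context but can degrade short-context quality, which is why routers often advertise the native window by default.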
Qwen · Qwen 3 · OSS · #119

Qwen: Qwen3 14B

Qwen3-14B is a dense 14.8B parameter causal language model from the Qwen3 series, designed for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, programming, and logical inference, and a "non-thinking" mode for general-purpose conversation. The model is fine-tuned for instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling.

Context: 41K · Max Output: 41K · Input/1M: $0.05
🧠 Reasoning · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.05 / $0.22
  Martian: $0.05 / $0.22
  DeepInfra: $0.12 / $0.24
Released: 2025-04-28
Qwen · Qwen 3 · OSS · #119

Qwen: Qwen3 32B

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling.

Context: 41K · Max Output: 41K · Input/1M: $0.08
🧠 Reasoning · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.08 / $0.24
  DeepInfra: $0.08 / $0.28
Released: 2025-04-28
Qwen · Qwen · OSS · #122

Qwen Plus 0728

Qwen Plus 0728 is a snapshot of Qwen-Plus, Alibaba Cloud's balanced commercial model in the Qwen3 generation, positioned between Qwen-Turbo and Qwen-Max on cost and capability. It targets reasoning, coding, multilingual, and agent tool-calling tasks at moderate cost, and supports an ultra-long context window (up to 1M tokens as listed below). (The original listing reused the Qwen3-30B-A3B description here, which belongs to a different model.)

Context: 1.0M · Max Output: 33K · Input/1M: $0.40
🔧 Tools
Pricing (per 1M tokens):
  Requesty ★: $0.40 / $1.20
  OpenRouter: $0.40 / $1.20
  Martian: $0.40 / $1.20
Released: 2025-09-08
Qwen · Qwen 3 · OSS · #141

Qwen: Qwen3 30B A3B

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance. Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. The Qwen3-30B-A3B variant includes 30.5 billion parameters (3.3 billion activated), 48 layers, 128 experts (8 activated per task), and supports up to 131K token contexts with YaRN, setting a new standard among open-source models.

Context: 41K · Max Output: 41K · Input/1M: $0.06
🧠 Reasoning · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.06 / $0.22
  Martian: $0.06 / $0.22
  DeepInfra: $0.08 / $0.28
Released: 2025-04-28
Qwen · Qwen · OSS · #157

Qwen-Max

Qwen-Max is Alibaba Cloud's flagship commercial model in the Qwen series, tuned for maximum quality on complex, multi-step reasoning, coding, and multilingual tasks. It trades the highest per-token cost in the Qwen commercial lineup for the strongest overall performance. (The original listing reused the Qwen3-30B-A3B description here, which belongs to a different model.)

Context: 33K · Max Output: 8K · Input/1M: $1.60
🔧 Tools
Pricing (per 1M tokens):
  Requesty ★: $1.60 / $6.40
  OpenRouter: $1.60 / $6.40
  Martian: $1.60 / $6.40
Released: 2025-02-01
Qwen · Qwen 2.5 · OSS · #200

Qwen: Qwen2.5 Coder 7B Instruct

Qwen2.5-Coder-7B-Instruct is a 7B parameter instruction-tuned language model optimized for code-related tasks such as code generation, reasoning, and bug fixing. Based on the Qwen2.5 architecture, it incorporates enhancements like RoPE, SwiGLU, RMSNorm, and GQA attention with support for up to 128K tokens using YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding, providing robust performance across programming languages and agentic coding workflows. This model is part of the Qwen2.5-Coder family and offers strong compatibility with tools like vLLM for efficient deployment. Released under the Apache 2.0 license.

Context: 33K · Max Output: — · Input/1M: $0.03
Pricing (per 1M tokens):
  OpenRouter: $0.03 / $0.09
  Martian: $0.03 / $0.09
Released: 2025-04-15
Qwen · Qwen 3 · OSS · #260

Qwen: QwQ 32B

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance in downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, which is capable of achieving competitive performance against state-of-the-art reasoning models, e.g., DeepSeek-R1, o1-mini.

Context: 33K · Max Output: 33K · Input/1M: $0.15
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.15 / $0.40
  Martian: $0.15 / $0.40
Released: 2025-03-05
Qwen · Qwen · OSS · #273

Qwen Turbo

Qwen Turbo is Alibaba Cloud's fastest, lowest-cost commercial Qwen model, aimed at simple, high-volume tasks. It supports an ultra-long context window (up to 1M tokens as listed below) while keeping per-token pricing at the bottom of the commercial lineup. (The original listing reused the Qwen3-30B-A3B description here, which belongs to a different model.)

Context: 1.0M · Max Output: 8K · Input/1M: $0.05
🔧 Tools
Pricing (per 1M tokens):
  Requesty ★: $0.05 / $0.20
  OpenRouter: $0.05 / $0.20
  Martian: $0.05 / $0.20
Released: 2025-02-01
Qwen · Qwen 3 · OSS

Qwen: Qwen3 Max Thinking

Qwen3-Max-Thinking is the flagship reasoning model in the Qwen3 series, designed for high-stakes cognitive tasks that require deep, multi-step reasoning. By significantly scaling model capacity and reinforcement learning compute, it delivers major gains in factual accuracy, complex reasoning, instruction following, alignment with human preferences, and agentic behavior.

Context: 262K · Max Output: 66K · Input/1M: $1.20
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $1.20 / $6.00
  Vercel AI: $1.20 / $6.00
  Martian: $1.20 / $6.00
  DeepInfra: $1.20 / $6.00
Released: 2026-02-09
Qwen · Qwen 3 · OSS

Qwen: Qwen3 Coder Next

Qwen3-Coder-Next is an open-weight causal language model optimized for coding agents and local development workflows. It uses a sparse MoE design with 80B total parameters and only 3B activated per token, delivering performance comparable to models with 10 to 20x higher active compute, which makes it well suited for cost-sensitive, always-on agent deployment. The model is trained with a strong agentic focus and performs reliably on long-horizon coding tasks, complex tool usage, and recovery from execution failures. With a native 256k context window, it integrates cleanly into real-world CLI and IDE environments and adapts well to common agent scaffolds used by modern coding tools. The model operates exclusively in non-thinking mode and does not emit <think> blocks, simplifying integration for production coding agents.

Context: 262K · Max Output: 66K · Input/1M: $0.07
🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  OpenRouter: $0.07 / $0.30
  Vercel AI: $0.50 / $1.20
  Martian: $0.07 / $0.30
Released: 2026-02-04
Qwen · Qwen 3 · OSS

Qwen3 Embedding 0.6B

The Qwen3 Embedding series is the latest model family from Qwen designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in multiple sizes (0.6B, 4B, and 8B).

Context: 33K · Max Output: 33K · Input/1M: $0.01
Pricing (per 1M tokens):
  Vercel AI: $0.01 / Free
Released: 2025-11-14
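Embedding models are billed on input only (output shown as Free in the pricing line above). Once vectors come back, ranking documents against a query is plain cosine similarity; a self-contained sketch with made-up 4-dimensional vectors standing in for real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; a real call would return 1,024+ dimensions.
query = [0.1, 0.3, 0.5, 0.1]
docs = {"doc_a": [0.1, 0.3, 0.5, 0.1], "doc_b": [0.9, 0.0, 0.1, 0.0]}
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # doc_a
```

The 0.6B/4B/8B sizes trade embedding quality against the per-token cost shown on each card.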
Qwen · Qwen 3 · OSS

Qwen: Qwen3 VL 32B Instruct

Qwen3-VL-32B-Instruct is a large-scale multimodal vision-language model designed for high-precision understanding and reasoning across text, images, and video. With 32 billion parameters, it combines deep visual perception with advanced text comprehension, enabling fine-grained spatial reasoning, document and scene analysis, and long-horizon video understanding. It offers robust OCR in 32 languages and enhanced multimodal fusion through the Interleaved-MRoPE and DeepStack architectures. Optimized for agentic interaction and visual tool use, Qwen3-VL-32B delivers state-of-the-art performance on complex real-world multimodal tasks.

Context: 131K · Max Output: 33K · Input/1M: $0.10
👁 Vision · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.10 / $0.42
  Vercel AI: $0.22 / $0.88
Released: 2025-10-23
Qwen · Qwen 3 · OSS

Qwen: Qwen3 VL 8B Thinking

Qwen3-VL-8B-Thinking is the reasoning-optimized variant of the Qwen3-VL-8B multimodal model, designed for advanced visual and textual reasoning across complex scenes, documents, and temporal sequences. It integrates enhanced multimodal alignment and long-context processing (native 256K, expandable to 1M tokens) for tasks such as scientific visual analysis, causal inference, and mathematical reasoning over image or video inputs. Compared to the Instruct edition, the Thinking version introduces deeper visual-language fusion and deliberate reasoning pathways that improve performance on long-chain logic tasks, STEM problem-solving, and multi-step video understanding. It achieves stronger temporal grounding via Interleaved-MRoPE and timestamp-aware embeddings, while maintaining robust OCR, multilingual comprehension, and text generation on par with large text-only LLMs.

Context: 131K · Max Output: 33K · Input/1M: $0.12
👁 Vision · 🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.12 / $1.36
  Martian: $0.18 / $2.10
Released: 2025-10-14
Qwen · Qwen 3 · OSS

Qwen: Qwen3 VL 8B Instruct

Qwen3-VL-8B-Instruct is a multimodal vision-language model from the Qwen3-VL series, built for high-fidelity understanding and reasoning across text, images, and video. It features improved multimodal fusion with Interleaved-MRoPE for long-horizon temporal reasoning, DeepStack for fine-grained visual-text alignment, and text-timestamp alignment for precise event localization. The model supports a native 256K-token context window, extensible to 1M tokens, and handles both static and dynamic media inputs for tasks like document parsing, visual question answering, spatial reasoning, and GUI control. It achieves text understanding comparable to leading LLMs while expanding OCR coverage to 32 languages and enhancing robustness under varied visual conditions.

Context: 131K · Max Output: 33K · Input/1M: $0.08
👁 Vision · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.08 / $0.50
  Martian: $0.08 / $0.50
Released: 2025-10-14
Qwen · Qwen 3 · OSS

Qwen: Qwen3 VL 30B A3B Thinking

Qwen3-VL-30B-A3B-Thinking is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Thinking variant enhances reasoning in STEM, math, and complex tasks. It excels in perception of real-world/synthetic categories, 2D/3D spatial grounding, and long-form visual comprehension, achieving competitive multimodal benchmark results. For agentic use, it handles multi-image multi-turn instructions, video timeline alignments, GUI automation, and visual coding from sketches to debugged UI. Text performance matches flagship Qwen3 models, suiting document AI, OCR, UI assistance, spatial tasks, and agent research.

Context: 131K · Max Output: 33K · Input/1M: Free
👁 Vision · 🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
  Martian: $0.20 / $1.00
Released: 2025-10-06
Qwen · Qwen 3 · OSS

Qwen: Qwen3 VL 30B A3B Instruct

Qwen3-VL-30B-A3B-Instruct is a multimodal model that unifies strong text generation with visual understanding for images and videos. Its Instruct variant optimizes instruction-following for general multimodal tasks. It excels in perception of real-world/synthetic categories, 2D/3D spatial grounding, and long-form visual comprehension, achieving competitive multimodal benchmark results. For agentic use, it handles multi-image multi-turn instructions, video timeline alignments, GUI automation, and visual coding from sketches to debugged UI. Text performance matches flagship Qwen3 models, suiting document AI, OCR, UI assistance, spatial tasks, and agent research.

Context: 131K · Max Output: 33K · Input/1M: $0.13
👁 Vision · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.13 / $0.52
  Martian: $0.15 / $0.60
  DeepInfra: $0.15 / $0.60
Released: 2025-10-06
Qwen · Qwen 3 · OSS

Qwen3 VL 235B A22B Thinking

Qwen3 series VL models feature significantly enhanced multimodal reasoning capabilities, with a particular focus on optimizing the model for STEM and mathematical reasoning. Visual perception and recognition abilities have been comprehensively improved, and OCR capabilities have undergone a major upgrade.

Context: 256K · Max Output: 256K · Input/1M: $0.22
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  Vercel AI: $0.22 / $0.88
Released: 2025-09-24
Qwen · Qwen 3 · OSS

Qwen3 Coder Plus

Qwen3 Coder Plus is Alibaba's proprietary version of the open-source Qwen3 Coder 480B A35B. It is a powerful coding agent model specializing in autonomous programming via tool calling and environment interaction, combining coding proficiency with versatile general-purpose abilities.

Context: 1.0M · Max Output: 66K · Input/1M: $1.00
👁 Vision · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  Requesty ★: $1.00 / $5.00
  OpenRouter: $1.00 / $5.00
  Vercel AI: $1.00 / $5.00
  Martian: $1.00 / $5.00
Released: 2025-09-23
Qwen · Qwen 3 · OSS

Qwen3 Coder Flash

Qwen3 Coder Flash is Alibaba's fast, cost-efficient version of its proprietary Qwen3 Coder Plus. It is a powerful coding agent model specializing in autonomous programming via tool calling and environment interaction, combining coding proficiency with versatile general-purpose abilities.

Context: 1.0M · Max Output: 66K · Input/1M: $0.30
👁 Vision · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  Requesty ★: $0.30 / $1.50
  OpenRouter: $0.30 / $1.50
  Martian: $0.30 / $1.50
Released: 2025-09-17
Qwen · Qwen 3 · OSS

Qwen: Qwen3 30B A3B Thinking 2507

Qwen3-30B-A3B-Thinking-2507 is a 30B parameter Mixture-of-Experts reasoning model optimized for complex tasks requiring extended multi-step thinking. The model is designed specifically for “thinking mode,” where internal reasoning traces are separated from final answers. Compared to earlier Qwen3-30B releases, this version improves performance across logical reasoning, mathematics, science, coding, and multilingual benchmarks. It also demonstrates stronger instruction following, tool use, and alignment with human preferences. With higher reasoning efficiency and extended output budgets, it is best suited for advanced research, competitive problem solving, and agentic applications requiring structured long-context reasoning.

Context: 33K · Max Output: — · Input/1M: $0.05
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.05 / $0.34
  Martian: $0.05 / $0.34
Released: 2025-08-28
Qwen · Qwen 3 · OSS

Qwen: Qwen3 Coder 30B A3B Instruct

Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with Yarn) and performs strongly in tasks involving function calls, browser use, and structured code completion. This model is optimized for instruction-following without “thinking mode”, and integrates well with OpenAI-compatible tool-use formats.

Context: 160K · Max Output: 33K · Input/1M: $0.07
🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.07 / $0.27
  Vercel AI: $0.07 / $0.27
  Martian: $0.07 / $0.27
Released: 2025-07-31
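"OpenAI-compatible tool-use formats," as mentioned in the Qwen3-Coder-30B-A3B card above, means tools are declared as JSON-schema function definitions in the request and the model replies with structured tool calls. A minimal sketch (the `run_tests` function, its parameters, and the model slug are hypothetical illustrations, not part of any real API):

```python
# One tool, declared in the OpenAI-compatible "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool name
        "description": "Run the project's test suite and return the summary.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Test file or directory to run."},
            },
            "required": ["path"],
        },
    },
}]

request_body = {
    "model": "qwen/qwen3-coder-30b-a3b-instruct",  # slug is an assumption
    "messages": [{"role": "user", "content": "Run the unit tests in tests/."}],
    "tools": tools,
}
```

The model's reply would then carry a `tool_calls` entry naming `run_tests` with JSON arguments, which the agent executes and feeds back as a `tool`-role message.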
Qwen · Qwen 3 · OSS

Qwen3 30B A3B Instruct 2507

Qwen3-30B-A3B-Instruct-2507 is a 30.5B-parameter mixture-of-experts language model from Qwen, with 3.3B active parameters per inference. It operates in non-thinking mode and is designed for high-quality instruction following, multilingual understanding, and agentic tool use. Post-trained on instruction data, it demonstrates competitive performance across reasoning (AIME, ZebraLogic), coding (MultiPL-E, LiveCodeBench), and alignment (IFEval, WritingBench) benchmarks. It outperforms its non-instruct variant on subjective and open-ended tasks while retaining strong factual and coding performance.

Context: 262K · Max Output: 262K · Input/1M: $0.08
👁 Vision · 🔧 Tools · ⚡ Cache
Pricing (per 1M tokens):
  Requesty ★: $0.20 / $0.80
  OpenRouter: $0.08 / $0.33
  Martian: $0.08 / $0.33
Released: 2025-07-29
Qwen · Qwen 3 · OSS

Qwen: Qwen3 235B A22B Thinking 2507

Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant enhances structured logical reasoning, mathematics, science, and long-form generation, showing strong benchmark performance across AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It operates in a dedicated reasoning mode and is designed for high-token outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release represents the most capable open-source variant in the Qwen3-235B series, surpassing many closed models in structured reasoning use cases.

Context: 131K · Max Output: — · Input/1M: Free
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
  Martian: $0.11 / $0.60
  DeepInfra: $0.23 / $2.30
Released: 2025-07-25
Qwen · Qwen 3 · OSS

Qwen: Qwen3 Coder 480B A35B (free)

Qwen3-Coder-480B-A35B-Instruct is a Mixture-of-Experts (MoE) code generation model developed by the Qwen team. It is optimized for agentic coding tasks such as function calling, tool use, and long-context reasoning over repositories. The model features 480 billion total parameters, with 35 billion active per forward pass (8 out of 160 experts). Pricing for the Alibaba endpoints varies by context length. Once a request is greater than 128k input tokens, the higher pricing is used.

Context: 262K · Max Output: 262K · Input/1M: Free
🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
  Vercel AI: $0.40 / $1.60
  Martian: $0.22 / $1.00
Released: 2025-07-23
Qwen · Qwen 3 · OSS

Qwen: Qwen3 235B A22B Instruct 2507

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

Context: 262K · Max Output: — · Input/1M: $0.07
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: $0.07 / $0.10
  Martian: $0.07 / $0.10
Released: 2025-07-21
Qwen · Qwen 3 · OSS

Qwen3 Embedding 4B

The Qwen3 Embedding series is the latest model family from Qwen designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in multiple sizes (0.6B, 4B, and 8B).

Context: 33K · Max Output: 33K · Input/1M: $0.02
Pricing (per 1M tokens):
  Vercel AI: $0.02 / Free
Released: 2025-06-05
Qwen · Qwen 3 · OSS

Qwen3 Embedding 8B

The Qwen3 Embedding series is the latest model family from Qwen designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in multiple sizes (0.6B, 4B, and 8B).

Context: 33K · Max Output: 33K · Input/1M: $0.05
Pricing (per 1M tokens):
  Vercel AI: $0.05 / Free
Released: 2025-06-05
Qwen · Qwen 3 · OSS

Qwen: Qwen3 4B (free)

Qwen3-4B is a 4 billion parameter dense language model from the Qwen3 series, designed to support both general-purpose and reasoning-intensive tasks. It introduces a dual-mode architecture—thinking and non-thinking—allowing dynamic switching between high-precision logical reasoning and efficient dialogue generation. This makes it well-suited for multi-turn chat, instruction following, and complex agent workflows.

Context: 41K · Max Output: — · Input/1M: Free
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  OpenRouter: Free / Free
Released: 2025-04-30
Qwen · Qwen 3 · OSS

Qwen3-14B

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support.

Context: 41K · Max Output: 16K · Input/1M: $0.06
🧠 Reasoning · 🔧 Tools
Pricing (per 1M tokens):
  Vercel AI: $0.06 / $0.24
Released: 2025-04-01
Qwen · Qwen 3 · OSS

Qwen3-235B-A22B

Qwen3-235B-A22B-Instruct-2507 is the updated version of Qwen3-235B-A22B's non-thinking mode, featuring significant improvements in general capabilities, including instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage.

Context
41K
Max Output
16K
Input/1M
$0.07
🔧 Tools
Pricing (per 1M tokens)
Vercel AI: $0.07 / $0.46
2025-04-01 · View details →
QwenQwen 3OSS

Qwen3-30B-A3B

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support.

Context
41K
Max Output
16K
Input/1M
$0.08
🧠 Reasoning🔧 Tools
Pricing (per 1M tokens)
Vercel AI: $0.08 / $0.29
2025-04-01 · View details →
QwenQwen 3OSS

Qwen 3 32B

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction following, agent capabilities, and multilingual support.

Context
41K
Max Output
16K
Input/1M
$0.10
🧠 Reasoning🔧 Tools
Pricing (per 1M tokens)
Vercel AI: $0.10 / $0.30
2025-04-01 · View details →
QwenQwen 2.5OSS

Qwen: Qwen2.5 VL 32B Instruct

Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual interpretation within images, and precise event localization in extended videos. Qwen2.5-VL-32B demonstrates state-of-the-art performance across multimodal benchmarks such as MMMU, MathVista, and VideoMME, while maintaining strong reasoning and clarity in text-based tasks like MMLU, mathematical problem-solving, and code generation.

Context
16K
Max Output
16K
Input/1M
$0.05
👁 Vision⚡ Cache
Pricing (per 1M tokens)
OpenRouter: $0.05 / $0.22
Martian: $0.05 / $0.22
DeepInfra: $0.20 / $0.60
2025-03-24 · View details →
QwenQwenOSS

Qwen: Qwen VL Plus

Qwen's enhanced large visual language model, significantly upgraded for detailed recognition and text recognition, supporting ultra-high pixel resolutions up to millions of pixels and extreme aspect ratios for image input. It delivers strong performance across a broad range of visual tasks.

Context
131K
Max Output
8K
Input/1M
$0.21
👁 Vision⚡ Cache
Pricing (per 1M tokens)
OpenRouter: $0.21 / $0.63
2025-02-05 · View details →
QwenQwenOSS

Qwen: Qwen VL Max

Qwen VL Max is a visual understanding model that delivers optimal performance across a broad spectrum of complex tasks.

Context
131K
Max Output
33K
Input/1M
$0.80
👁 Vision🔧 Tools
Pricing (per 1M tokens)
OpenRouter: $0.80 / $3.20
Martian: $0.80 / $3.20
2025-02-01 · View details →
QwenQwen 2.5OSS

Qwen: Qwen2.5 VL 72B Instruct

Qwen2.5-VL is proficient in recognizing common objects such as flowers, birds, fish, and insects. It is also highly capable of analyzing texts, charts, icons, graphics, and layouts within images.

Context
33K
Max Output
33K
Input/1M
$0.15
👁 Vision⚡ Cache
Pricing (per 1M tokens)
OpenRouter: $0.15 / $0.60
Martian: $0.15 / $0.60
2025-02-01 · View details →
QwenQwenOSS

Qwen2.5 Coder 32B Instruct

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Qwen2.5-Coder brings the following improvements over CodeQwen1.5:
- Significant improvements in code generation, code reasoning, and code fixing.
- A more comprehensive foundation for real-world applications such as Code Agents, enhancing coding capabilities while maintaining its strengths in mathematics and general competencies.
To read more about its evaluation results, check out [Qwen 2.5 Coder's blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).

Context
33K
Max Output
33K
Input/1M
$0.03
⚡ Cache
Pricing (per 1M tokens)
OpenRouter: $0.03 / $0.11
Martian: $0.03 / $0.11
2024-11-11 · View details →
QwenQwenOSS

Qwen: Qwen2.5 7B Instruct

Qwen2.5 7B is part of the latest series of Qwen large language models. Qwen2.5 brings the following improvements over Qwen2:
- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
Usage of this model is subject to the [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).

Context
33K
Max Output
—
Input/1M
$0.04
🔧 Tools
Pricing (per 1M tokens)
OpenRouter: $0.04 / $0.10
Martian: $0.04 / $0.10
2024-10-16 · View details →
QwenQwenOSS

Qwen2.5 72B Instruct

Qwen2.5 72B is part of the latest series of Qwen large language models. Qwen2.5 brings the following improvements over Qwen2:
- Significantly more knowledge and greatly improved capabilities in coding and mathematics, thanks to specialized expert models in these domains.
- Significant improvements in instruction following, generating long texts (over 8K tokens), understanding structured data (e.g., tables), and generating structured outputs, especially JSON. More resilient to diverse system prompts, enhancing role-play implementation and condition-setting for chatbots.
- Long-context support up to 128K tokens, with generation of up to 8K tokens.
- Multilingual support for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic.
Usage of this model is subject to the [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).

Context
33K
Max Output
16K
Input/1M
$0.12
🔧 Tools
Pricing (per 1M tokens)
OpenRouter: $0.12 / $0.39
Martian: $0.12 / $0.39
2024-09-19 · View details →
QwenQwenOSS

Qwen: Qwen2.5-VL 7B Instruct

Qwen2.5 VL 7B is a multimodal LLM from the Qwen Team with the following key enhancements:
- SoTA understanding of images of various resolutions and ratios: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- Understanding videos of 20min+: Qwen2.5-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
- Agent that can operate your mobiles, robots, etc.: with complex reasoning and decision making, Qwen2.5-VL can be integrated with devices like mobile phones and robots for automatic operation based on visual environment and text instructions.
- Multilingual support: to serve global users, besides English and Chinese, Qwen2.5-VL supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.
For more details, see this [blog post](https://qwenlm.github.io/blog/qwen2-vl/) and the [GitHub repo](https://github.com/QwenLM/Qwen2-VL). Usage of this model is subject to the [Tongyi Qianwen LICENSE AGREEMENT](https://huggingface.co/Qwen/Qwen1.5-110B-Chat/blob/main/LICENSE).

Context
33K
Max Output
—
Input/1M
$0.20
👁 Vision
Pricing (per 1M tokens)
OpenRouter: $0.20 / $0.20
2024-08-28 · View details →
QwenQwen 3OSS

Qwen/Qwen3 Coder 480B A35B Instruct

Qwen3-Coder-480B-A35B-Instruct is Qwen's mixture-of-experts coding model, with 480B total parameters and 35B activated per token, optimized for agentic coding and tool use over long contexts.

Context
262K
Max Output
—
Input/1M
$0.40
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.40 / $1.80
View details →
QwenQwen 3OSS

Parasail Qwen3 235b A22b Instruct 2507

Qwen3-235B-A22B-Instruct-2507, served via Parasail: the updated non-thinking release of Qwen3-235B-A22B, with significant improvements in instruction following, logical reasoning, text comprehension, mathematics, science, coding, and tool usage.

Context
262K
Max Output
8K
Input/1M
$0.15
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.15 / $0.85
View details →
QwenQwenOSS

Parasail Qwen25 Vl 72b Instruct

Qwen2.5-VL 72B Instruct, served via Parasail: a multimodal vision-language model proficient in recognizing common objects and analyzing texts, charts, icons, graphics, and layouts within images.

Context
33K
Max Output
8K
Input/1M
$0.70
👁 Vision🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.70 / $0.70
View details →
QwenQwen 3OSS

Qwen3 235b A22b Fp8

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance. Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. This FP8-quantized Qwen3-235B-A22B variant has 235 billion total parameters with 22 billion activated per token, 94 layers, and 128 experts (8 activated per token), and supports up to 131K-token contexts with YaRN.

Context
128K
Max Output
—
Input/1M
$0.20
Pricing (per 1M tokens)
Requesty★: $0.20 / $0.80
View details →
QwenQwen 2.5OSS

Qwen2.5 Vl 72b Instruct

Qwen2.5 VL 72B is a multimodal LLM from the Qwen Team with the following key enhancements:
- SoTA understanding of images of various resolutions and ratios: Qwen2.5-VL achieves state-of-the-art performance on visual understanding benchmarks, including MathVista, DocVQA, RealWorldQA, MTVQA, etc.
- Understanding videos of 20min+: Qwen2.5-VL can understand videos over 20 minutes for high-quality video-based question answering, dialog, content creation, etc.
- Agent that can operate your mobiles, robots, etc.: with complex reasoning and decision making, Qwen2.5-VL can be integrated with devices like mobile phones and robots for automatic operation based on visual environment and text instructions.
- Multilingual support: to serve global users, besides English and Chinese, Qwen2.5-VL supports the understanding of texts in different languages inside images, including most European languages, Japanese, Korean, Arabic, Vietnamese, etc.

Context
96K
Max Output
—
Input/1M
$0.80
Pricing (per 1M tokens)
Requesty★: $0.80 / $0.80
View details →
QwenQwenOSS

Qwen 2.5 72b Instruct

Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters.

Context
32K
Max Output
—
Input/1M
$0.38
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.38 / $0.40
View details →
QwenQwen 2.5OSS

Qwen/Qwen2.5 7B Instruct Turbo

Qwen2.5 is the latest series of Qwen large language models, released as base and instruction-tuned models ranging from 0.5 to 72 billion parameters. This listing is the 7B instruction-tuned "Turbo" variant.

Context
33K
Max Output
—
Input/1M
$0.30
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.30 / $0.30
View details →
QwenQwen 2.5OSS

Qwen/Qwen2.5 72B Instruct Turbo

Qwen2.5 is the latest series of Qwen large language models, released as base and instruction-tuned models ranging from 0.5 to 72 billion parameters. This listing is the 72B instruction-tuned "Turbo" variant.

Context
33K
Max Output
—
Input/1M
$1.20
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $1.20 / $1.20
View details →
QwenQwen 3OSS

Qwen/Qwen3 235B A22B

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance. Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. The Qwen3-235B-A22B variant has 235 billion total parameters with 22 billion activated per token, 94 layers, and 128 experts (8 activated per token).

Context
40K
Max Output
4K
Input/1M
$0.20
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.20 / $0.60
View details →
QwenQwen 3OSS

Qwen/Qwen3 32B

Qwen3, the latest generation in the Qwen large language model series, features both dense and mixture-of-experts (MoE) architectures to excel in reasoning, multilingual support, and advanced agent tasks. Its unique ability to switch seamlessly between a thinking mode for complex reasoning and a non-thinking mode for efficient dialogue ensures versatile, high-quality performance. Significantly outperforming prior models like QwQ and Qwen2.5, Qwen3 delivers superior mathematics, coding, commonsense reasoning, creative writing, and interactive dialogue capabilities. Qwen3-32B is a dense 32.8-billion-parameter model in this series and supports up to 131K-token contexts with YaRN.

Context
40K
Max Output
—
Input/1M
$0.10
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.10 / $0.30
View details →
QwenQwen 2.5OSS

Qwen/Qwen2.5 72B Instruct

Qwen2.5 is the latest series of Qwen large language models, released as base and instruction-tuned models ranging from 0.5 to 72 billion parameters. The 72B instruction-tuned model brings significantly more knowledge, improved coding and mathematics, stronger instruction following, long-context support up to 128K tokens, and multilingual support for over 29 languages.

Context
131K
Max Output
—
Input/1M
$0.23
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.23 / $0.40
View details →
QwenQwen 2.5OSS

Qwen/Qwen2.5 Coder 32B Instruct

Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen), with significant improvements in code generation, code reasoning, and code fixing over CodeQwen1.5, and a more comprehensive foundation for real-world applications such as Code Agents.

Context
16K
Max Output
—
Input/1M
$0.07
🔧 Tools
Pricing (per 1M tokens)
Requesty★: $0.07 / $0.16
View details →
← Back to all providers