LLM Router

Independent comparison platform for LLM routing infrastructure.

© 2026 LLM Router

Data from public sources. May not reflect real-time pricing.


Google


Google develops the Gemini and Gemma families of AI models, building on decades of AI research at Google DeepMind. Their models range from lightweight Gemma open-source models to the powerful Gemini Pro and Ultra, offering multimodal capabilities across text, code, and images.

Pricing available from Requesty, OpenRouter, Vercel AI, Martian, and DeepInfra.

  • Total Models: 23
  • Arena Ranked: 6 of 23
  • Open Source: 8 of 23
  • Cheapest Input: $0.03 per 1M tokens

Pricing Summary (per 1M tokens)

  Metric          Input   Output
  Cheapest        $0.03   $0.08
  Average         $0.58   $6.53
  Most Expensive  $2.00   $120.00
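The per-1M-token rates above translate to per-request cost with simple arithmetic. A minimal sketch (the token counts are illustrative; the rates are taken from this page):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars of one request, given per-1M-token input/output rates."""
    return (input_tokens * input_per_m + output_tokens * output_per_m) / 1_000_000

# Example: 50K input + 2K output tokens at Gemini 2.5 Flash rates
# ($0.30 input / $2.50 output per 1M tokens)
print(f"${request_cost(50_000, 2_000, 0.30, 2.50):.4f}")  # $0.0200
```

Note how heavily the output rate dominates for long generations: the average output rate above ($6.53) is more than ten times the average input rate ($0.58).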

⚙ Capabilities

  • 👁 Vision: 15 of 23 models
  • 🧠 Reasoning: 10 of 23 models
  • 🔧 Tool Calling: 14 of 23 models
  • ⚡ Prompt Caching: 10 of 23 models
  • 🖥 Computer Use: 1 of 23 models
  • 🎨 Image Generation: 2 of 23 models

🤖 All Google Models (23)

Google · Gemini 3 · Rank #3

Gemini 3 Pro Preview

Gemini 3 Pro is Google's flagship frontier model for high-precision multimodal reasoning, combining strong performance across text, image, video, audio, and code with a 1M-token context window. Reasoning details must be preserved when using multi-turn tool calling; see the OpenRouter docs: https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks. It delivers state-of-the-art benchmark results in general reasoning, STEM problem solving, factual QA, and multimodal understanding, including leading scores on LMArena, GPQA Diamond, MathArena Apex, MMMU-Pro, and Video-MMMU. Interactions emphasize depth and interpretability: the model is designed to infer intent with minimal prompting and produce direct, insight-focused responses. Built for advanced development and agentic workflows, Gemini 3 Pro provides robust tool calling, long-horizon planning stability, and strong zero-shot generation for complex UI, visualization, and coding tasks. It excels at agentic coding (SWE-Bench Verified, Terminal-Bench 2.0), multimodal analysis, and structured long-form tasks such as research synthesis, planning, and interactive learning experiences. Suitable applications include autonomous agents, coding assistants, multimodal analytics, scientific reasoning, and high-context information processing.

Context: 1.0M · Max Output: 66K · Input/1M: $2.00
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  Requesty★   $2.00 / $12.00
  OpenRouter  $2.00 / $12.00
  Vercel AI   $2.00 / $12.00
  Martian     $2.00 / $12.00

Released: 2025-11-18
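The note above about preserving reasoning details across tool-calling turns can be sketched as follows. This is a hedged illustration of OpenRouter's OpenAI-compatible message format: the `reasoning_details` field name comes from the linked docs, and everything else here is an assumption, not a verified client.

```python
def build_followup_messages(history, assistant_msg, tool_result):
    """Echo the assistant turn back with its reasoning_details intact, then
    append the tool result, so the model's reasoning survives the round trip."""
    return history + [
        {
            "role": "assistant",
            "content": assistant_msg.get("content"),
            "tool_calls": assistant_msg.get("tool_calls"),
            # The key point: do NOT strip this field when replaying the turn.
            "reasoning_details": assistant_msg.get("reasoning_details"),
        },
        {
            "role": "tool",
            "tool_call_id": assistant_msg["tool_calls"][0]["id"],
            "content": tool_result,
        },
    ]
```

The resulting list would be posted back as the `messages` array of the next chat-completions request.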
Google · Gemini 3 · Rank #5

Gemini 3 Flash Preview

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.

Context: 1.0M · Max Output: 66K · Input/1M: $0.50
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  Requesty★   $0.50 / $3.00
  OpenRouter  $0.50 / $3.00
  Vercel AI   $0.50 / $3.00
  Martian     $0.50 / $3.00

Released: 2025-12-17
Google · Gemini 2.5 · Rank #16

Gemini 2.5 Pro

Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.

Context: 1.0M · Max Output: 66K · Input/1M: $1.25
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache · 🖥 Computer
Regions: EU, US

Pricing (per 1M tokens)
  Requesty★   $1.25 / $10.00
  OpenRouter  $1.25 / $10.00
  Vercel AI   $1.25 / $10.00
  Martian     $1.25 / $10.00
  DeepInfra   $1.25 / $10.00

Released: 2025-06-17
Google · Gemini 2.5 · Rank #59

Gemini 2.5 Flash

Gemini 2.5 Flash is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling. Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning).

Context: 1.0M · Max Output: 66K · Input/1M: $0.30
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache
Regions: EU, US

Pricing (per 1M tokens)
  Requesty★   $0.30 / $2.50
  OpenRouter  $0.30 / $2.50
  Vercel AI   $0.30 / $2.50
  Martian     $0.30 / $2.50
  DeepInfra   $0.30 / $2.50

Released: 2025-06-17
Google · Gemini 2.0 · Rank #103

Gemini 2.0 Flash

Gemini 2.0 Flash delivers next-gen features and improved capabilities, including superior speed, native tool use, multimodal generation, and a 1M token context window.

Context: 1.0M · Max Output: 8K · Input/1M: $0.10

Pricing (per 1M tokens)
  Vercel AI  $0.10 / $0.40
  Martian    $0.10 / $0.40

Released: 2024-12-11
Google · Gemini 2.0 · Rank #109

Gemini 2.0 Flash Lite

Gemini 2.0 Flash Lite delivers next-gen features and improved capabilities, including superior speed, built-in tool use, multimodal generation, and a 1M-token context window.

Context: 1.0M · Max Output: 8K · Input/1M: $0.07

Pricing (per 1M tokens)
  Vercel AI  $0.07 / $0.30
  Martian    $0.07 / $0.30

Released: 2024-12-11
Google · Gemini 3

Nano Banana Pro (Gemini 3 Pro Image Preview)

Nano Banana Pro is Google’s most advanced image-generation and editing model, built on Gemini 3 Pro. It extends the original Nano Banana with significantly improved multimodal reasoning, real-world grounding, and high-fidelity visual synthesis. The model generates context-rich graphics, from infographics and diagrams to cinematic composites, and can incorporate real-time information via Search grounding. It offers industry-leading text rendering in images (including long passages and multilingual layouts), consistent multi-image blending, and accurate identity preservation across up to five subjects. Nano Banana Pro adds fine-grained creative controls such as localized edits, lighting and focus adjustments, camera transformations, and support for 2K/4K outputs and flexible aspect ratios. It is designed for professional-grade design, product visualization, storyboarding, and complex multi-element compositions while remaining efficient for general image creation workflows.

Context: 1.0M · Max Output: 33K · Input/1M: $2.00
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  Requesty★   $2.00 / $12.00
  OpenRouter  $2.00 / $12.00
  Vercel AI   $2.00 / $120.00

Released: 2025-11-20
Google · Gemini 2.5

Gemini 2.5 Flash Image (Nano Banana)

Gemini 2.5 Flash Image, a.k.a. "Nano Banana," is a generally available, state-of-the-art image-generation model with contextual understanding. It is capable of image generation, edits, and multi-turn conversations. Aspect ratios can be controlled with the [image_config API parameter](https://openrouter.ai/docs/features/multimodal/image-generation#image-aspect-ratio-configuration).

Context: 1.0M · Max Output: 66K · Input/1M: $0.30
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  Requesty★   $0.30 / $30.00
  OpenRouter  $0.30 / $2.50
  Vercel AI   $0.30 / $2.50

Released: 2025-10-07
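The `image_config` parameter referenced in the description controls output aspect ratio. A hedged sketch of the request body: `image_config` and `aspect_ratio` follow the OpenRouter docs linked in the description, while the `modalities` field is an assumption about how image output is requested.

```python
def image_request(prompt: str, aspect_ratio: str = "16:9") -> dict:
    """Request body for image generation with a pinned aspect ratio."""
    return {
        "model": "google/gemini-2.5-flash-image",
        "messages": [{"role": "user", "content": prompt}],
        # Ask for image (plus text) output and fix the output aspect ratio.
        "modalities": ["image", "text"],
        "image_config": {"aspect_ratio": aspect_ratio},
    }
```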
Google · Gemini 2.5

Gemini 2.5 Flash Preview 09-2025

Gemini 2.5 Flash Preview September 2025 Checkpoint is Google's state-of-the-art workhorse model, specifically designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities, enabling it to provide responses with greater accuracy and nuanced context handling. Additionally, Gemini 2.5 Flash is configurable through the "max tokens for reasoning" parameter, as described in the documentation (https://openrouter.ai/docs/use-cases/reasoning-tokens#max-tokens-for-reasoning).

Context: 1.0M · Max Output: 66K · Input/1M: $0.30
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  OpenRouter  $0.30 / $2.50
  Vercel AI   $0.30 / $2.50
  Martian     $0.30 / $2.50

Released: 2025-09-25
Google · Gemini 2.5

Gemini 2.5 Flash Lite Preview 09-2025

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.

Context: 1.0M · Max Output: 66K · Input/1M: $0.10
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  OpenRouter  $0.10 / $0.40
  Vercel AI   $0.10 / $0.40
  Martian     $0.10 / $0.40

Released: 2025-09-25
Google · Gemini 2.5

Gemini 2.5 Flash Lite

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.

Context: 1.0M · Max Output: 66K · Input/1M: $0.10
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache
Regions: EU, US

Pricing (per 1M tokens)
  Requesty★   $0.10 / $0.40
  OpenRouter  $0.10 / $0.40
  Vercel AI   $0.10 / $0.40
  Martian     $0.10 / $0.40

Released: 2025-07-22
Google · Gemma 3 · OSS

Gemma 3n 2B (free)

Gemma 3n E2B IT is a multimodal, instruction-tuned model developed by Google DeepMind, designed to operate efficiently at an effective parameter size of 2B while leveraging a 6B architecture. Based on the MatFormer architecture, it supports nested submodels and modular composition via the Mix-and-Match framework. Gemma 3n models are optimized for low-resource deployment, offering 32K context length and strong multilingual and reasoning performance across common benchmarks. This variant is trained on a diverse corpus including code, math, web, and multimodal data.

Context: 8K · Max Output: 2K · Input/1M: Free

Pricing (per 1M tokens)
  OpenRouter  Free / Free

Released: 2025-07-09
Google · Gemma 3 · OSS

Gemma 3n 4B (free)

Gemma 3n E4B-it is optimized for efficient execution on mobile and low-resource devices, such as phones, laptops, and tablets. It supports multimodal inputs, including text, visual data, and audio, enabling diverse tasks such as text generation, speech recognition, translation, and image analysis. Leveraging innovations like Per-Layer Embedding (PLE) caching and the MatFormer architecture, Gemma 3n dynamically manages memory usage and computational load by selectively activating model parameters, significantly reducing runtime resource requirements. The model supports a wide linguistic range (trained in over 140 languages) and features a flexible 32K-token context window. Gemma 3n can selectively load parameters, optimizing memory and computational efficiency based on the task or device capabilities, making it well suited for privacy-focused, offline-capable applications and on-device AI solutions. [Read more in the blog post](https://developers.googleblog.com/en/introducing-gemma-3n/).

Context: 8K · Max Output: 2K · Input/1M: Free

Pricing (per 1M tokens)
  OpenRouter  Free / Free

Released: 2025-05-20
Google

Gemini Embedding 001

State-of-the-art embedding model with excellent performance across English, multilingual and code tasks.

Context: 0 · Max Output: — · Input/1M: $0.15

Pricing (per 1M tokens)
  Vercel AI  $0.15 / Free

Released: 2025-05-20
Google · Gemini 2.5

Gemini 2.5 Pro Preview 05-06

Gemini 2.5 Pro is Google’s state-of-the-art AI model designed for advanced reasoning, coding, mathematics, and scientific tasks. It employs “thinking” capabilities, enabling it to reason through responses with enhanced accuracy and nuanced context handling. Gemini 2.5 Pro achieves top-tier performance on multiple benchmarks, including first-place positioning on the LMArena leaderboard, reflecting superior human-preference alignment and complex problem-solving abilities.

Context: 1.0M · Max Output: 66K · Input/1M: $1.25
Capabilities: 👁 Vision · 🧠 Reasoning · 🔧 Tools · ⚡ Cache

Pricing (per 1M tokens)
  OpenRouter  $1.25 / $10.00

Released: 2025-05-07
Google · Gemma 3 · OSS

Gemma 3 4B (free)

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling.

Context: 33K · Max Output: 8K · Input/1M: Free
Capabilities: 👁 Vision

Pricing (per 1M tokens)
  OpenRouter  Free / Free
  DeepInfra   $0.04 / $0.08

Released: 2025-03-13
Google · Gemma 3 · OSS

Gemma 3 12B (free)

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 12B is the second largest in the family of Gemma 3 models after [Gemma 3 27B](google/gemma-3-27b-it).

Context: 33K · Max Output: 8K · Input/1M: Free
Capabilities: 👁 Vision

Pricing (per 1M tokens)
  OpenRouter  Free / Free
  DeepInfra   $0.04 / $0.13

Released: 2025-03-13
Google · Gemma 3 · OSS

Gemma 3 27B (free)

Gemma 3 introduces multimodality, supporting vision-language input and text outputs. It handles context windows up to 128k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Gemma 3 27B is Google's latest open-source model, successor to [Gemma 2](google/gemma-2-27b-it).

Context: 131K · Max Output: 8K · Input/1M: Free
Capabilities: 👁 Vision · 🔧 Tools

Pricing (per 1M tokens)
  OpenRouter  Free / Free
  DeepInfra   $0.08 / $0.16

Released: 2025-03-12
Google · Gemini 2.0

Gemini 2.0 Flash Lite

Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5), all at extremely economical token prices.

Context: 1.0M · Max Output: 8K · Input/1M: $0.07
Capabilities: 👁 Vision · 🔧 Tools

Pricing (per 1M tokens)
  OpenRouter  $0.07 / $0.30
  Martian     $0.07 / $0.30

Released: 2025-02-25
Google · Gemini 2.0

Gemini 2.0 Flash 001

Gemini 2.0 Flash offers a significantly faster time to first token (TTFT) compared to [Gemini Flash 1.5](/google/gemini-flash-1.5), while maintaining quality on par with larger models like [Gemini Pro 1.5](/google/gemini-pro-1.5). It introduces notable enhancements in multimodal understanding, coding capabilities, complex instruction following, and function calling. These advancements come together to deliver more seamless and robust agentic experiences.

Context: 1.0M · Max Output: 8K · Input/1M: $0.10
Capabilities: 👁 Vision · 🔧 Tools

Pricing (per 1M tokens)
  Requesty★   $0.10 / $0.40
  OpenRouter  $0.10 / $0.40
  Martian     $0.10 / $0.40

Released: 2025-02-05
Google · Gemma 2 · OSS

Gemma 2 27B

Gemma 2 27B by Google is an open model built from the same research and technology used to create the [Gemini models](/models?q=gemini). Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. See the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).

Context: 8K · Max Output: 2K · Input/1M: $0.65

Pricing (per 1M tokens)
  OpenRouter  $0.65 / $0.65

Released: 2024-07-13
Google · Gemma 2 · OSS

Gemma 2 9B

Gemma 2 9B by Google is an advanced, open-source language model that sets a new standard for efficiency and performance in its size class. Designed for a wide variety of tasks, it empowers developers and researchers to build innovative applications, while maintaining accessibility, safety, and cost-effectiveness. See the [launch announcement](https://blog.google/technology/developers/google-gemma-2/) for more details. Usage of Gemma is subject to Google's [Gemma Terms of Use](https://ai.google.dev/gemma/terms).

Context: 8K · Max Output: — · Input/1M: $0.03

Pricing (per 1M tokens)
  OpenRouter  $0.03 / $0.09

Released: 2024-06-28
Google · Gemma · OSS

Parasail Gemma 3 27B IT

Gemma 3 1B is the smallest of the new Gemma 3 family. It handles context windows up to 32k tokens, understands over 140 languages, and offers improved math, reasoning, and chat capabilities, including structured outputs and function calling. Note: Gemma 3 1B is not multimodal. For the smallest multimodal Gemma 3 model, please see [Gemma 3 4B](google/gemma-3-4b-it)

Context: 128K · Max Output: 8K · Input/1M: $0.30
Capabilities: 🔧 Tools

Pricing (per 1M tokens)
  Requesty★  $0.30 / $0.50