Z.ai: GLM 4.5V
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters and 12B activated parameters, it achieves state-of-the-art results in video understanding, image Q&A, OCR, and document parsing, with strong gains in front-end web coding, grounding, and spatial reasoning. It offers a hybrid inference mode: a "thinking mode" for deep reasoning and a "non-thinking mode" for fast responses. Reasoning behavior can be toggled via the `reasoning.enabled` boolean. Learn more in our docs.
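A minimal sketch of toggling the thinking mode via `reasoning.enabled`, assuming an OpenRouter-style OpenAI-compatible chat completions endpoint, an `OPENROUTER_API_KEY` environment variable, and a placeholder image URL:

```python
import os
import requests

# Sketch: call GLM-4.5V with a text + image prompt and toggle thinking mode.
# Payload shape follows the OpenAI-compatible chat-completions convention
# used by OpenRouter; adjust endpoint and auth for your router.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "z-ai/glm-4.5v",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    # Placeholder image URL for illustration only.
                    {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
                ],
            }
        ],
        # Set to False to use the fast "non-thinking" mode instead.
        "reasoning": {"enabled": True},
    },
)
print(response.json()["choices"][0]["message"]["content"])
```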
- Context Window: 66K tokens
- Max Output: 16K tokens
- Released: Aug 11, 2025
- Arena Rank: #110 of 305 models
Capabilities
👁 Vision
🧠 Reasoning
🔧 Tool Calling
⚡ Prompt Caching
🖥 Computer Use
🎨 Image Generation
Supported Parameters
| Parameter | Description |
|---|---|
| Frequency Penalty | Reduce repetition |
| Include Reasoning | Show reasoning tokens |
| Max Tokens | Output length limit |
| Presence Penalty | Encourage new topics |
| Reasoning | Extended thinking |
| Repetition Penalty | Penalize repeated tokens |
| Response Format | JSON mode / structured output |
| Seed | Deterministic outputs |
| Stop Sequences | Custom stop tokens |
| Structured Outputs | JSON Schema enforcement |
| Temperature | Controls randomness |
| Tool Choice | Control tool usage |
| Tool Calling | Function calling support |
| Top K | Top-K sampling |
| Top P | Nucleus sampling |
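A sketch of how several of these parameters can be combined in a single request body, assuming the OpenAI-compatible field names (`temperature`, `top_p`, `max_tokens`, `frequency_penalty`, `stop`, `response_format`, `tool_choice`); exact names and accepted values can vary by router:

```python
# Sketch: request body combining several supported parameters; send it with
# the same requests.post call shown above.
payload = {
    "model": "z-ai/glm-4.5v",
    "messages": [{"role": "user", "content": "Summarize this document as JSON."}],
    "temperature": 0.2,          # Controls randomness
    "top_p": 0.9,                # Nucleus sampling
    "max_tokens": 1024,          # Output length limit
    "frequency_penalty": 0.1,    # Reduce repetition
    "stop": ["</answer>"],       # Custom stop sequences
    "response_format": {"type": "json_object"},  # JSON mode
    "tool_choice": "auto",       # Let the model decide whether to call tools
}
```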
Pricing Comparison
| Router | Input / 1M | Output / 1M | Cached Input / 1M |
|---|---|---|---|
| OpenRouter | $0.60 | $1.80 | $0.11 |
| Vercel AI | $0.60 | $1.80 | — |
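As a rough illustration of per-request cost at the OpenRouter rates above (the token counts below are made up for the example):

```python
# Worked example: estimate USD cost for one request at the listed OpenRouter rates.
INPUT_PER_M, OUTPUT_PER_M, CACHED_PER_M = 0.60, 1.80, 0.11

def cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated price in USD for a single request."""
    uncached = input_tokens - cached_tokens
    return (uncached * INPUT_PER_M
            + cached_tokens * CACHED_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# 10K prompt tokens (2K of them cache hits) and 2K completion tokens:
print(f"${cost(10_000, 2_000, cached_tokens=2_000):.4f}")  # ≈ $0.0086
```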
Model IDs
- OpenRouter: `z-ai/glm-4.5v`
- Hugging Face: `zai-org/GLM-4.5V`
Tags
vision · reasoning · tool-calling