LFM2-8B-A1B is an efficient on-device Mixture-of-Experts (MoE) model from Liquid AI’s LFM2 family, built for fast, high-quality inference on edge hardware. It uses 8.3B total parameters with only ~1.5B active per token, delivering strong performance while keeping compute and memory usage low, which makes it well suited to phones, tablets, and laptops.
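The total-versus-active split is the core MoE idea: a router picks a small subset of experts per token, so only that subset's weights are used in each forward pass. The following is a minimal, scaled-down sketch of top-k expert routing; the expert count, top-k value, and layer sizes are illustrative assumptions, not LFM2-8B-A1B's actual configuration.

```python
import numpy as np

# Illustrative, toy-scale numbers; NOT LFM2-8B-A1B's real configuration.
NUM_EXPERTS = 8   # assumed expert count
TOP_K = 2         # assumed experts activated per token
D_MODEL = 64      # toy hidden size
D_FF = 256        # toy expert feed-forward size

rng = np.random.default_rng(0)

# One router plus NUM_EXPERTS independent feed-forward experts.
router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02
experts_w1 = rng.standard_normal((NUM_EXPERTS, D_MODEL, D_FF)) * 0.02
experts_w2 = rng.standard_normal((NUM_EXPERTS, D_FF, D_MODEL)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix their outputs."""
    logits = x @ router_w                      # router score per expert
    top = np.argsort(logits)[-TOP_K:]          # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over the chosen experts
    out = np.zeros_like(x)
    for w, e in zip(weights, top):
        h = np.maximum(x @ experts_w1[e], 0.0) # expert MLP with ReLU
        out += w * (h @ experts_w2[e])
    return out

token = rng.standard_normal(D_MODEL)
y = moe_forward(token)

# Only TOP_K of NUM_EXPERTS experts contribute compute per token,
# which is why active parameters are far fewer than total parameters.
total = experts_w1.size + experts_w2.size
active = TOP_K * (experts_w1[0].size + experts_w2[0].size)
print(f"expert params total: {total:,}, active per token: {active:,}")
```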
Pricing for `liquid/lfm2-8b-a1b`, ranked by provider, pricing, capabilities, and arena performance:

| Router | Input ($ / 1M tokens) | Output ($ / 1M tokens) | Cached Input ($ / 1M tokens) |
|---|---|---|---|
| OpenRouter | $0.01 | $0.02 | — |
| Martian | $0.01 | $0.02 | — |
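Token pricing is linear, so per-request cost is just each token count divided by one million, times the corresponding rate. A minimal sketch using the OpenRouter rates above; the request sizes are made up for illustration:

```python
INPUT_PER_M = 0.01    # $ per 1M input tokens (OpenRouter rate above)
OUTPUT_PER_M = 0.02   # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Linear token pricing: (tokens / 1M) * rate, summed over directions."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Hypothetical request: 100K tokens in, 10K tokens out.
print(f"${request_cost(100_000, 10_000):.6f}")  # -> $0.001200
```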