Open Weight
Mistral
Mistral Small
Context
32K tokens
Pricing
$0.10/M input, $0.30/M output
Modalities
text, code
Released
Sep 2024
Overview
Mistral's efficient model, optimized for fast, cost-effective inference while maintaining strong performance on core tasks. Mistral Small is designed for high-volume production deployments where speed and cost matter more than peak capability.
Why it matters
Mistral Small competes in the critical efficient tier where most production API calls actually land. Its strong performance on classification, extraction, and structured-output tasks, combined with Mistral's EU-based data processing, makes it attractive for European enterprises running high-volume workloads. The model's function-calling reliability and JSON-mode accuracy are particularly valued for building dependable agentic pipelines where consistency matters more than raw intelligence. It serves as the entry point to the Mistral ecosystem.
Key strengths
- Fast inference with low latency
- Strong function calling and JSON mode
- EU-based data processing
- Cost-effective for high-volume workloads
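To make the JSON-mode strength above concrete, here is a minimal sketch of how a structured-extraction request might be assembled. The model name `mistral-small-latest`, the `response_format` field, and the `https://api.mistral.ai/v1/chat/completions` endpoint follow Mistral's OpenAI-compatible chat API as commonly documented, but verify them against Mistral's current docs before relying on them:

```python
import json

# Build a chat-completions payload that asks Mistral Small for strict JSON.
# The model name and response_format field are assumptions based on
# Mistral's OpenAI-compatible API -- check the official docs before use.
def build_extraction_request(text: str) -> dict:
    return {
        "model": "mistral-small-latest",
        # JSON mode: constrains the model to emit a valid JSON object
        "response_format": {"type": "json_object"},
        "messages": [
            {
                "role": "system",
                "content": 'Extract {"name": str, "email": str} as a JSON object.',
            },
            {"role": "user", "content": text},
        ],
    }

payload = build_extraction_request("Contact: Jane Doe <jane@example.com>")
print(json.dumps(payload, indent=2))
# This payload would be POSTed to https://api.mistral.ai/v1/chat/completions
# with an "Authorization: Bearer $MISTRAL_API_KEY" header.
```

Keeping the request in JSON mode with a tight system prompt is what makes this tier suitable for high-volume extraction pipelines: the output can be parsed with `json.loads` instead of fragile regexes.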