
Magistral Medium 1.2: Vision-Enhanced Reasoning Power Unveiled

Mistral AI releases Magistral Medium 1.2, adding vision capabilities to its 45B-parameter reasoning model, available exclusively through a closed API.

September 1, 2025
Tags: Model Release, Magistral Medium 1.2

Introduction

Mistral AI has officially unveiled Magistral Medium 1.2 on September 1, 2025, marking a significant evolution in their reasoning model lineup. This release is not merely an incremental update but a strategic leap towards multimodal frontier reasoning, addressing the critical gap between language understanding and visual comprehension. For developers building complex AI agents, this model represents a consolidation of high-level logic with visual perception.

The decision to keep Magistral Medium 1.2 closed-source underscores Mistral's focus on premium enterprise-grade reliability rather than community iteration. With the release, Mistral continues to solidify its position as Europe's leading AI challenger, complemented by their recent €1.2 billion investment in Swedish infrastructure to ensure sovereign compute capacity. This model is designed for scenarios where reasoning requires visual context, such as diagram analysis or code visualization.

  • Release Date: September 1, 2025
  • Architecture: Closed API Only
  • Focus: Multimodal Reasoning

Key Features & Architecture

Magistral Medium 1.2 operates with approximately 45 billion parameters, utilizing a sophisticated Mixture of Experts (MoE) architecture to optimize inference speed without sacrificing reasoning depth. The model features a native context window of 128,000 tokens, allowing it to process long-form documents alongside visual inputs seamlessly. Unlike previous iterations, this version integrates a high-resolution vision encoder directly into the transformer backbone, enabling true multimodal reasoning rather than simple image captioning.

The integration of vision capabilities allows the model to interpret charts, graphs, and code snippets with significantly higher accuracy. This architecture supports dynamic tokenization for visual data, ensuring that complex mathematical derivations shown in images are understood correctly. The closed nature of the API ensures that the proprietary vision-language alignment remains secure for enterprise clients requiring strict data governance.

  • Parameters: ~45 Billion
  • Context Window: 128k Tokens
  • Vision: Native Multimodal Encoder
  • Architecture: MoE (Mixture of Experts)

Performance & Benchmarks

In terms of performance, Magistral Medium 1.2 demonstrates substantial improvements over the previous Medium 1.0 variant. On the MMLU benchmark it scores 84.5, up from 82.1. For developers evaluating coding capabilities, the HumanEval score reaches 88.2, indicating strong proficiency in generating and debugging Python functions. On SWE-bench it reaches 68.4, roughly a 15% improvement over the baseline at solving real-world software issues.

Multimodal reasoning tasks show the most dramatic gains. On the ScienceQA benchmark, the model scores 79.4, proving its ability to reason through visual data. Compared to competitors, it offers a balanced trade-off between the speed of smaller models and the reasoning power of larger 70B+ models. The latency remains under 200ms for text-only queries and under 500ms for multimodal inputs, making it viable for real-time agent applications.

  • MMLU: 84.5
  • HumanEval: 88.2
  • SWE-bench: 68.4
  • Multimodal Latency: <500ms

API Pricing

Access to Magistral Medium 1.2 is exclusively available through the Mistral API platform, with no free tier available for this specific model. Pricing is structured on a per-million-token basis to accommodate enterprise billing cycles. The input cost is set at $3.50 per million tokens, while the output cost is $10.00 per million tokens. This pricing reflects the computational intensity of running a 45B parameter model with integrated vision processing.

For developers, this pricing model is competitive against other closed-source reasoning models while offering better value than general-purpose large language models. Volume discounts are available for enterprise contracts exceeding 100 million tokens per month. The cost structure incentivizes efficient prompt engineering and optimized context usage to manage operational expenditure.

  • Input Cost: $3.50 / M tokens
  • Output Cost: $10.00 / M tokens
  • Free Tier: None
  • Volume Discounts: Available
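To make the per-million-token rates concrete, here is a minimal cost estimator based on the listed prices. The rates are taken from the article; the example token counts are illustrative.

```python
# Rough cost estimator for Magistral Medium 1.2 API usage,
# using the listed rates: $3.50 input / $10.00 output per million tokens.

INPUT_RATE = 3.50 / 1_000_000    # USD per input token
OUTPUT_RATE = 10.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 50k-token multimodal prompt producing a 2k-token report.
cost = estimate_cost(50_000, 2_000)
print(f"${cost:.4f}")  # → $0.1950
```

Because output tokens cost almost three times as much as input tokens, trimming verbose generations usually saves more than trimming context.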

Comparison Table

When comparing Magistral Medium 1.2 against other leading models in the market, it stands out for its specialized reasoning capabilities and multimodal integration. While open-source models like Llama 3.1 70B offer flexibility, they often lack the native vision integration found in Magistral Medium 1.2. Proprietary models like GPT-4o remain strong competitors, but Magistral offers a more cost-effective alternative for European enterprises seeking sovereign AI solutions.

The comparison below highlights the key differences in context handling, pricing, and primary strengths. Developers must weigh input cost against the quality of reasoning required for their specific application. For tasks requiring deep visual analysis combined with logical deduction, Magistral Medium 1.2 currently holds a distinct advantage.

  • Subject: Magistral Medium 1.2
  • Open-source competitor: Llama 3.1 70B
  • Proprietary competitor: GPT-4o

Use Cases

Magistral Medium 1.2 is best suited for applications requiring complex reasoning over visual data. Ideal use cases include automated technical documentation generation from screenshots, financial analysis of charts and graphs, and debugging assistance where code snippets are displayed visually. The model excels in RAG (Retrieval-Augmented Generation) pipelines where retrieved documents contain mixed text and image data.

In the realm of AI agents, this model can serve as a 'brain' for agents that need to inspect the physical or digital environment. For example, an autonomous agent tasked with analyzing a dashboard could use Magistral Medium 1.2 to read the data and write a summary report. Its 128k context window also makes it suitable for processing long codebases alongside architectural diagrams.

  • Technical Documentation Generation
  • Financial Chart Analysis
  • Visual Debugging
  • Multimodal RAG Pipelines
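A multimodal RAG step like the dashboard example above boils down to assembling one user message that interleaves retrieved text chunks with an image. The sketch below shows that assembly; the content-part field names (`type`, `text`, `image_url`) follow the common multimodal chat format and are illustrative, not taken from Mistral's documentation.

```python
import base64

def build_multimodal_message(question: str, text_chunks: list[str],
                             image_bytes: bytes) -> dict:
    """Assemble one user message mixing retrieved text and an image.

    The data-URL/base64 image part mirrors the usual multimodal chat
    payload shape; exact field names may differ in the real API.
    """
    parts = [{"type": "text", "text": question}]
    for chunk in text_chunks:
        parts.append({"type": "text", "text": f"Context:\n{chunk}"})
    encoded = base64.b64encode(image_bytes).decode("ascii")
    parts.append({
        "type": "image_url",
        "image_url": f"data:image/png;base64,{encoded}",
    })
    return {"role": "user", "content": parts}

msg = build_multimodal_message(
    "Summarize the trend in this chart.",
    ["Q2 revenue grew 12% quarter over quarter."],
    b"\x89PNG...",  # placeholder bytes; pass real image data in practice
)
print(len(msg["content"]))  # → 3 (question, one chunk, one image)
```

With the 128k-token window, several long retrieved chunks can ride alongside the image in a single request.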

Getting Started

To access Magistral Medium 1.2, developers must register for an account on the Mistral AI platform and generate an API key. The model is accessible via the standard REST API endpoint or through the official Python SDK. Authentication is handled via Bearer tokens in the request headers. Documentation is available on the official Mistral developer portal, providing examples for both text and multimodal inputs.

For immediate integration, the Python SDK simplifies the process with built-in support for vision inputs. Developers can pass image URLs or base64 encoded strings directly alongside the text prompt. Rate limits are applied per API key, with standard limits allowing for high throughput suitable for production workloads. Monitoring usage via the dashboard is essential to manage costs effectively.

  • API Endpoint: https://api.mistral.ai
  • SDK: Python, Node.js, Go
  • Auth: Bearer Token
  • Docs: mistral.ai/docs
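As a minimal sketch of the Bearer-token authentication described above, the helper below builds the headers and JSON body for a chat completion request without sending it. The endpoint path and the model identifier `magistral-medium-1.2` are assumptions based on this article; confirm both against the official docs before use.

```python
import json

API_KEY = "YOUR_API_KEY"  # generated from the Mistral platform dashboard

def build_chat_request(prompt: str,
                       model: str = "magistral-medium-1.2"):
    """Build (headers, body) for a chat completion call.

    Model name and endpoint are illustrative assumptions, not
    values confirmed by Mistral's documentation.
    """
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_chat_request("Explain MoE routing in two sentences.")
# POST these to the chat completions endpoint under https://api.mistral.ai
# with any HTTP client (e.g. requests or httpx).
```

Keeping request construction separate from transport like this also makes it easy to log payloads when debugging rate-limit or auth errors.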



Sources

  • Mistral AI Infrastructure Investment
  • Mistral Invests in Sweden
  • Mistral AI Techarena 2026