
Command A: Cohere's 111B Open-Weights Enterprise Model

Cohere releases Command A, a 111B-parameter open-weights model optimized for enterprise RAG and agentic workflows, running efficiently on just two GPUs.

March 13, 2025

Introduction

On March 13, 2025, Cohere officially unveiled Command A, a significant milestone in the open-weights large language model landscape. The latest entrant in the Command family, it is Cohere's most performant model to date, engineered specifically for real-world enterprise tasks. Unlike many proprietary counterparts, Command A ships with openly available weights (released under a CC BY-NC license), so developers can fine-tune and deploy the model on their own infrastructure with full control over data privacy and security.

The release comes at a critical time for the AI industry, where efficiency and cost-effectiveness are paramount. While competitors focus on massive parameter counts that require extensive hardware, Command A demonstrates that high performance does not always necessitate a data center's worth of compute. This shift towards accessible, high-capacity models is reshaping how engineering teams approach agentic workflows and Retrieval Augmented Generation (RAG) pipelines.

  • Released: 2025-03-13
  • Provider: Cohere
  • License: Open weights (CC BY-NC)
  • Focus: Enterprise RAG and Agentic Tasks

Key Features & Architecture

Command A is built on a dense 111B-parameter architecture that balances capability with efficiency. The model is designed to handle complex reasoning tasks while keeping a low inference footprint: it can run on as few as two GPUs (A100s or H100s) in certain configurations. That hardware profile makes it particularly attractive for startups and mid-sized enterprises looking to deploy LLMs without prohibitive cloud costs.

Beyond raw parameter count, Command A offers a massive 256K-token context window, enabling it to process extensive documents and large codebases in a single pass. The model is multilingual, easing integration into global applications, and its open-weight release allows community-driven improvements and customization, fostering a collaborative ecosystem around Cohere's technology.

  • Parameters: 111B
  • Context Window: 256K tokens
  • Hardware: Runs on as few as 2 GPUs
  • Languages: Multilingual support
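As a rough back-of-the-envelope check on the two-GPU claim, the memory needed just to hold a dense 111B-parameter model's weights follows directly from the bytes used per parameter. This sketch deliberately ignores KV cache and activation overhead, which add to the real total:

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate VRAM needed to hold the weights alone, in gigabytes.

    1 billion parameters at 1 byte each is ~1 GB, so the arithmetic
    reduces to a simple product.
    """
    return params_billions * bytes_per_param

# Command A: 111B parameters at different precisions
print(weight_memory_gb(111, 2))    # bf16/fp16: 222.0 GB -> weights alone exceed 2x80GB
print(weight_memory_gb(111, 1))    # fp8/int8:  111.0 GB -> borderline on 2x80GB
print(weight_memory_gb(111, 0.5))  # 4-bit:      55.5 GB -> fits comfortably on 2x80GB
```

At 8-bit or lower precision the weights fit within a pair of 80 GB accelerators, which is consistent with the two-GPU deployment figure above; the exact serving precision Cohere uses is not stated here.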

Performance & Benchmarks

In terms of raw capability, Command A excels in standard LLM benchmarks. It has been evaluated on MMLU, HumanEval, and SWE-bench, where it demonstrates competitive performance against closed-source models. The model's strength lies in its ability to maintain coherence and accuracy over long contexts, a common failure point for other architectures.

Cohere's benchmarks indicate that Command A achieves strong results in agentic reasoning and tool use. It outperforms many smaller models on logic-heavy tasks while remaining efficient, making it a compelling choice for applications that require deep reasoning, such as automated coding assistants or data-analysis agents that must interpret large datasets accurately.

  • MMLU Score: Top Tier
  • HumanEval: High Accuracy
  • SWE-bench: Strong Performance
  • Latency: Optimized for Speed

API Pricing

Cohere has structured pricing for Command A to be accessible for both experimentation and production workloads. The model is priced competitively compared to industry leaders, with a focus on value per token. This pricing structure encourages adoption for heavy RAG workloads where context volume drives costs.

Developers can access the model via Cohere's API, which supports standard streaming and batch processing. A free tier is available for testing, letting engineers validate performance before committing to paid plans. For enterprise customers, volume discounts are available through direct negotiation, ensuring scalability as usage grows.

  • Input Price: $2.50 per 1M tokens
  • Output Price: $10.00 per 1M tokens
  • Free Tier: Available for Testing
  • Enterprise: Custom Pricing
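To estimate what a RAG-heavy workload costs at per-token rates, a small calculator helps. The defaults below assume Cohere's published Command A rates of $2.50 per 1M input tokens and $10.00 per 1M output tokens; actual billing may differ by plan:

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_rate: float = 2.50, output_rate: float = 10.00) -> float:
    """Cost of one API call in USD; rates are expressed per 1M tokens."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A typical RAG call: 20K tokens of retrieved context + prompt, 1K tokens generated.
cost = request_cost_usd(20_000, 1_000)
print(f"${cost:.3f} per call")  # 20K input -> $0.05, 1K output -> $0.01
```

Note that for RAG workloads the input side dominates: retrieved context is billed on every call, which is exactly why context volume drives costs in these pipelines.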

Comparison Table

When placed against other leading models in the market, Command A stands out for its balance of context window and cost. While some models offer faster inference, they often sacrifice context depth or require significantly higher hardware overhead. Command A bridges the gap between high-end proprietary models and efficient open-weight alternatives.

  • Competitive Pricing
  • Large Context Window
  • Open-Weights Flexibility

Use Cases

Command A is best suited for applications requiring deep understanding and long-context retention. Coding assistants benefit from the model's ability to parse entire repositories, while customer service bots leverage its multilingual capabilities to serve diverse audiences. The agentic features allow the model to autonomously execute tasks, such as querying databases or generating reports based on retrieved information.

  • Enterprise RAG Systems
  • Automated Coding Agents
  • Multilingual Customer Support
  • Document Analysis
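The retrieval step behind such a RAG system can be sketched minimally as below. The keyword-overlap scoring stands in for a real vector store, and all names here are illustrative rather than part of Cohere's API:

```python
def score(query: str, doc: str) -> int:
    """Naive relevance: count distinct query words that appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in set(query.lower().split()) if w in doc_words)

def build_grounded_prompt(query: str, corpus: list[str], top_k: int = 2) -> str:
    """Retrieve the top_k most relevant snippets and prepend them to the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:top_k])
    return (f"Use only the context below to answer.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

corpus = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refund requests require an order number.",
]
print(build_grounded_prompt("How long do refunds take?", corpus))
```

In production the scoring function would be replaced by embedding similarity, and the assembled prompt sent to Command A, whose 256K window leaves ample room for retrieved context.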

Getting Started

Accessing Command A is straightforward for developers familiar with Cohere's ecosystem. The model is served through the Cohere API, with standard SDKs for Python, Node.js, and other languages. Documentation is comprehensive, providing examples for chat, streaming, and tool-use tasks.

  • API Endpoint: api.cohere.com
  • SDKs: Python, Node.js, Go
  • Docs: docs.cohere.com
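A minimal call through the Python SDK might look like the sketch below. The model ID `command-a-03-2025` is assumed from Cohere's release-date naming convention, so verify it against the official docs before relying on it:

```python
import os

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the messages list in the shape Cohere's v2 Chat API expects."""
    return [{"role": "user", "content": user_prompt}]

def ask_command_a(prompt: str) -> str:
    """Send one chat request to Command A.

    Requires `pip install cohere` and a CO_API_KEY environment variable.
    """
    import cohere  # imported lazily so the helper above works without the SDK
    co = cohere.ClientV2(api_key=os.environ["CO_API_KEY"])
    response = co.chat(model="command-a-03-2025",
                       messages=build_messages(prompt))
    return response.message.content[0].text
```

For streaming responses, the SDK exposes a corresponding streaming chat method; consult docs.cohere.com for the current signature.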



Sources

Announcing Command A | Cohere

Command A - Intelligence, Performance & Price Analysis

Cohere: Command A Review - Pricing, Benchmarks & Capabilities