
Poolside Laguna-M.1: The 225B Coding Giant Arrives in 2026

Poolside unveils Laguna-M.1, a massive MoE model designed for agentic software engineering with 225B parameters.

April 28, 2026

Introduction: A Milestone for Agentic Coding

Poolside has officially unveiled Laguna-M.1, a major step forward in generative AI for software engineering. Released on April 28, 2026, the model is a significant milestone for the company and the industry at large: not an incremental update, but a foundational shift in how agentic coding is approached. For developers, it sets a new standard for complex, long-horizon tasks that previous models struggled with. Laguna-M.1 anchors the broader Laguna family, designed to push the boundaries of what large language models can achieve in real-world production environments.

Its architecture is built on lessons learned from previous iterations, with a specific focus on efficiency and raw capability. The most capable model in the family to date, it completed pre-training at the end of 2025. The release marks a transition from experimental agentic tools to enterprise-grade solutions, and it sets the stage for future Laguna releases that will likely inherit this robust foundation.

  • Released on 2026-04-28 by Poolside.
  • Most capable model to date in the Laguna family.
  • Foundation for the entire Laguna model family.
  • Completed pre-training at the end of 2025.

Key Features & Architecture

The architecture of Laguna-M.1 is built for high-scale training. It is a 225B-parameter Mixture-of-Experts (MoE) model that activates only 23B parameters per token, giving it massive capacity without prohibitive inference costs. The model was trained from scratch on 30T tokens with the Muon optimizer, on 6,144 interconnected NVIDIA Hopper GPUs operated entirely in-house. The context window supports 128K input tokens with up to 8K output tokens, enough to handle massive codebases and extensive documentation contexts.

Beyond standard inference, the model incorporates a custom async on-policy RL system with an Agent Client Protocol (ACP) server. This infrastructure supports advanced reasoning capabilities required for complex software tasks. The 128K context window is particularly vital for developers managing large legacy codebases where information density is high.

  • 225B total parameters with 23B activated per token.
  • 128K context window with 8K output tokens.
  • Trained on 30T tokens using Muon optimizer.
  • Custom async on-policy RL system with ACP server.

Performance & Benchmarks

Performance metrics are crucial for any serious engineering model. On SWE-bench Verified, Laguna-M.1 scores 72.5%; it reaches 67.3% on SWE-bench Multilingual and 46.9% on SWE-bench Pro. On Terminal-Bench 2.0 it scores 40.7%. These numbers place it above many current competitors. The model is specifically tuned for agentic workflows, meaning it can plan and execute multi-step tasks autonomously.

While Qwen3.6 35B shows competitive results on some benchmarks, the raw parameter count and specialized training of Laguna-M.1 offer distinct advantages for heavy lifting. The model excels in reasoning tasks that require understanding the entire flow of a software project. This is critical for enterprise adoption where reliability is paramount.

  • 72.5% on SWE-bench Verified.
  • 67.3% on SWE-bench Multilingual.
  • 46.9% on SWE-bench Pro.
  • 40.7% on Terminal-Bench 2.0.

API Pricing

Access to the model is currently free for a limited time, a strategic move to gain market traction and developer adoption. Both input and output tokens are priced at $0 per million, via the poolside API and OpenRouter. This pricing gives developers a risk-free environment to test the model's capabilities, and lets startups and institutions experiment without financial commitment.

However, the offer is time-limited, and pricing may change once the promotion ends; developers should monitor the poolside API documentation for updates. The free tier is available via both the poolside API and OpenRouter integrations, making it easy to adopt immediately.

  • Input: $0/M tokens (Free for limited time).
  • Output: $0/M tokens (Free for limited time).
  • Context Window: 128K.
  • Available via poolside API and OpenRouter.

Use Cases

Laguna-M.1 is best suited for agentic coding, long-horizon software engineering tasks, and complex debugging. It also handles RAG workloads where a large context window is critical, and it fits enterprise environments that need reliable code generation. Its agentic design lets it break complex requirements into actionable steps, reducing the need for manual intervention.

Startups and institutions can use the model for internal tools and external applications alike. Because Laguna-M.1 is the foundation for the entire Laguna model family, future versions are likely to maintain the same standard. It is particularly useful for teams working on large-scale software projects where context management is a frequent bottleneck.

  • Agentic coding and long-horizon tasks.
  • RAG applications requiring large context.
  • Enterprise software engineering workflows.
  • Complex debugging and refactoring.

Getting Started

The model is accessible via API endpoint and SDK, through the poolside API and OpenRouter. It is not open source, but weights are available on request for startups, institutions, and universities, so organizations with the right need can deploy the model locally for security and compliance reasons.

Developers can try Laguna XS.2 on Hugging Face for local experimentation before moving to the larger model. The official blog provides a deeper dive into the training process. Integration is straightforward through standard REST APIs and SDKs provided by Poolside.

  • API Endpoint via poolside.ai.
  • OpenRouter integration available.
  • Weights available on request for institutions.
  • Hugging Face for local model exploration.



Sources

Poolside API Platform

Laguna M.1 on OpenRouter

Shimmer by poolside

pool - Agent harness

Laguna XS.2 and M.1: A Deeper Dive

The finishing touches - Poolside

Laguna M.1 vs Mistral Small 4 - AI Model Comparison