Model Releases

GLM-4.7 Release: Zhipu AI Open-Source Coding Model

Zhipu AI releases GLM-4.7, an open-weights coding model topping leaderboards with a cost-effective Flash variant.

December 1, 2025

Introduction

Zhipu AI has officially launched the GLM-4.7 model on December 1, 2025, marking a significant milestone for the open-source community. This release democratizes access to cutting-edge reasoning capabilities, directly challenging established market leaders in both coding and general reasoning tasks. As an open-weights model, GLM-4.7 allows developers to deploy locally or via API without the enterprise licensing costs associated with Western giants.

The model's architecture is designed to handle complex development workflows, offering a robust alternative for teams looking to integrate advanced AI into their software engineering pipelines. By topping global coding and reasoning leaderboards immediately upon release, GLM-4.7 signals a shift in the competitive landscape for AI-driven development tools.

  • Release Date: 2025-12-01
  • Status: Open Weights
  • Category: Coding & Reasoning

Key Features & Architecture

The GLM-4.7 architecture utilizes a Mixture of Experts (MoE) structure to optimize inference speed while maintaining high accuracy. It includes a specialized GLM-4.7 Flash variant, which is optimized for low-latency tasks and cost-sensitive applications. This dual-tier approach ensures users can choose between maximum performance or budget-friendly efficiency based on their specific project requirements.

The model supports a massive context window, allowing it to ingest entire codebases or extensive documentation in a single prompt. Additionally, it features multimodal capabilities, enabling it to process code screenshots and generate text explanations seamlessly. These technical specifications make it highly versatile for modern development environments where context retention is critical.

  • Architecture: MoE (Mixture of Experts)
  • Flash Variant: Low-latency optimization
  • Context Window: 256k tokens
  • Multimodal: Code image processing
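A 256k-token window is large, but whole-repository prompts can still overflow it. The sketch below estimates whether a set of files fits, using the common 4-characters-per-token heuristic; that ratio and the `fits_in_context` helper are illustrative assumptions, not part of any official tooling, and real tokenizer counts will differ.

```python
# Rough check of whether source text fits in GLM-4.7's advertised 256k-token
# context window. The 4-chars-per-token ratio is a heuristic, not an official
# figure; actual tokenizer counts will differ.

CONTEXT_WINDOW = 256_000  # tokens, per the release specs above
CHARS_PER_TOKEN = 4       # rough heuristic for English text and code

def estimate_tokens(text: str) -> int:
    """Crudely estimate the token count of a string."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(files: list[str], reserve_for_output: int = 8_000) -> bool:
    """True if the concatenated files likely fit, leaving room for the
    8k-token maximum output listed in the comparison below."""
    total = sum(estimate_tokens(f) for f in files)
    return total + reserve_for_output <= CONTEXT_WINDOW

# Example: ~400 KB of source text comfortably fits under this heuristic.
print(fits_in_context(["x" * 400_000]))  # True
```

Reserving output headroom up front avoids requests that ingest fine but get truncated mid-generation.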

Performance & Benchmarks

GLM-4.7 achieved a HumanEval score of 92.5%, surpassing previous industry standards for code generation tasks. On the MMLU benchmark, it scored 89.1%, indicating strong general reasoning capabilities across diverse domains.

Furthermore, it demonstrated a 76.8% pass rate on the SWE-bench Verified leaderboard, proving its ability to resolve real-world software issues autonomously. These concrete numbers validate its claim as a top-tier coding model, outperforming many closed-source alternatives in specific technical domains and establishing Zhipu AI as a serious contender in the global AI race.

  • HumanEval: 92.5%
  • MMLU: 89.1%
  • SWE-bench Verified: 76.8%

API Pricing

Zhipu AI has positioned GLM-4.7 as a highly cost-effective solution compared to Western competitors. The API pricing for the standard model starts at $0.08 per million input tokens and $0.24 per million output tokens. This structure is designed to lower the barrier to entry for high-volume development tasks.

The Flash variant offers even greater value, with input costs dropping to $0.02 per million tokens. This pricing structure is approximately 42 times cheaper than some of the biggest names in the industry, making it accessible for startups and individual developers who need high performance without breaking the bank. Free tier availability is also provided for testing purposes.

  • Standard Input: $0.08 / 1M tokens
  • Standard Output: $0.24 / 1M tokens
  • Flash Input: $0.02 / 1M tokens
  • Free Tier: Available for testing
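To make the rates concrete, here is a back-of-the-envelope cost calculator using the per-million-token prices quoted above. The Flash output rate is an assumption (only its input rate is quoted in this post); check Zhipu's pricing page before relying on these numbers.

```python
# Back-of-the-envelope API cost comparison using the per-million-token rates
# quoted above. Prices are from this post, not an authoritative rate card.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "glm-4.7":       (0.08, 0.24),
    "glm-4.7-flash": (0.02, 0.24),  # ASSUMPTION: Flash output rate not quoted
                                    # in this post; standard rate used here
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# 100k input + 4k output on the standard model:
print(round(request_cost("glm-4.7", 100_000, 4_000), 5))  # 0.00896
```

At these rates, a large 100k-token prompt costs well under a cent, which is what makes high-volume tasks like repository-wide refactoring economically plausible.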

Comparison Table

When compared to other leading models, GLM-4.7 stands out for its balance of performance and price. GPT-4o remains a strong competitor in general reasoning but lacks the open-weight flexibility of GLM-4.7. Claude 3.5 Sonnet offers excellent coding capabilities but at a significantly higher operational cost.

GLM-4.7 bridges the gap between performance and accessibility, making it a preferred choice for open-source enthusiasts and cost-conscious engineering teams alike. The table below highlights the key differentiators between these top-tier models for developers evaluating their next infrastructure investment.

  • Open Source: Yes
  • Coding Focus: High
  • Cost Efficiency: Superior

Use Cases

Developers can utilize GLM-4.7 for a wide range of applications including full-stack development, automated code refactoring, and intelligent debugging agents. It is particularly well-suited for RAG (Retrieval-Augmented Generation) systems where long context windows are required to retrieve relevant documentation from private repositories.

Additionally, the model supports agentic-style execution, allowing it to perform multi-step tasks autonomously within a development environment. This versatility ensures it can be integrated into CI/CD pipelines for automated testing and code review processes, significantly reducing manual overhead for engineering teams.

  • Full-Stack Development
  • Automated Refactoring
  • Intelligent Debugging Agents
  • RAG Systems
  • CI/CD Integration
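As one illustration of the CI/CD use case, a pipeline step might assemble a diff-review prompt like the sketch below. The prompt wording and the `build_review_prompt` helper are hypothetical, not part of any official SDK; a real pipeline would send the result to the model API and post the reply as a pull-request comment.

```python
# Sketch of a CI step assembling a code-review prompt for GLM-4.7.
# The helper and prompt text are illustrative assumptions, not an official API.

def build_review_prompt(diff: str, max_diff_chars: int = 100_000) -> str:
    """Wrap a unified diff in review instructions, truncating oversized diffs
    so the request stays well inside the context window."""
    truncated = diff[:max_diff_chars]
    return (
        "You are a code reviewer. Examine the following diff for bugs, "
        "security issues, and style problems. Reply with a bullet list.\n\n"
        f"```diff\n{truncated}\n```"
    )

prompt = build_review_prompt("- old_line\n+ new_line")
print("code reviewer" in prompt)  # True
```

Truncating (or chunking) the diff client-side keeps the step deterministic even when a merge touches generated files or vendored dependencies.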

Getting Started

Accessing GLM-4.7 is straightforward for developers. You can find the model on Hugging Face under the Zhipu AI organization, where the weights are available for local deployment. For API access, Zhipu provides a dedicated endpoint that supports standard authentication protocols, ensuring secure integration into production environments.

The SDK is available in Python and JavaScript, simplifying integration into existing workflows. Documentation is hosted on their official developer portal, providing comprehensive guides on prompt engineering and model fine-tuning. Developers can start experimenting with the model immediately using the provided sandbox environments.

  • Platform: Hugging Face
  • API: Zhipu Cloud
  • SDK: Python, JavaScript
  • Docs: Official Portal
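For API access, a minimal request body in the common OpenAI-compatible chat format might look like the sketch below. The endpoint URL and the `glm-4.7` model identifier are assumptions based on this post; consult Zhipu's official API documentation for the actual values. No network call is made here.

```python
# Minimal sketch of an OpenAI-style chat request body for the Zhipu API.
# Endpoint URL and model name are ASSUMPTIONS; verify against official docs.
import json

API_URL = "https://open.bigmodel.cn/api/paas/v4/chat/completions"  # assumed

def make_chat_request(prompt: str, model: str = "glm-4.7") -> dict:
    """Build a chat-completions payload; send it with any HTTP client
    plus a Bearer token in the Authorization header."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,   # low temperature suits code generation
        "max_tokens": 8_000,  # the comparison below lists an 8k max output
    }

body = json.dumps(make_chat_request("Write a binary search in Python."))
print("glm-4.7" in body)  # True
```

Because the shape matches the widely used chat-completions convention, existing client libraries can typically be pointed at the endpoint by swapping the base URL and API key.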

Comparison

| Model | Context | Max Output | Input $/1M | Output $/1M | Strength |
| --- | --- | --- | --- | --- | --- |
| GLM-4.7 | 256k | 8k | $0.08 | $0.24 | Open source & cost |
| GPT-4o | 128k | 4k | $5.00 | $15.00 | General reasoning |
| Claude 3.5 Sonnet | 200k | 4k | $3.00 | $15.00 | Long context |

