User Guide

Discover how to get the most out of the RAG, Memory, MCP Servers, Full-Stack Mode, API, PWA, and BYOK features in Chat LLM.

Welcome to Chat LLM

Chat LLM is a powerful AI platform that gives you access to over 1,200 AI models from leading providers including OpenAI, Anthropic, Google, xAI, Mistral, and many more. Whether you're a developer, writer, researcher, or business professional, our platform provides the tools you need to work smarter with AI.

This comprehensive guide will walk you through all the features available on our platform. From basic chat functionality to advanced features like RAG, Memory, and Full-Stack Mode, you'll find everything you need to maximize your productivity and get the most out of AI assistance.

Quick Start Guide

Get started with Chat LLM in just a few simple steps. No registration required to try free models!

Choose Your Model

Select from 50+ free models or 1,200+ premium models. Free models like DeepSeek, Qwen, and Llama are available without any sign-up.

Start Chatting

Simply type your message and press Enter. The AI responds in real time, streaming its reply to you as it is generated for a smooth experience.
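Streaming means the reply arrives piece by piece rather than all at once. Conceptually, it works like iterating over a generator of tokens; the sketch below is a simplified Python illustration, not the platform's actual client code:

```python
def stream_reply(tokens):
    """Yield a reply one chunk at a time, the way a streaming API delivers it."""
    for token in tokens:
        yield token  # in a real client, each chunk arrives over the network

reply = ""
for chunk in stream_reply(["Hello", ", ", "world", "!"]):
    reply += chunk  # the UI appends each chunk to the message as it arrives

print(reply)
```

This is why you see the answer appear word by word instead of waiting for the full response.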

Explore Features

Enable RAG for document-based answers, set up Memory for personalized responses, or try Full-Stack Mode to build applications.

Create an Account

Sign up for free to save your chat history, access more features, and unlock premium models with a subscription.

Getting Started

Chat LLM offers advanced AI features to enhance your experience. Here's what you need to know:

What are the Key AI Features?

  • RAG (Retrieval Augmented Generation): Enhances AI responses by incorporating knowledge from your documents
  • Memory: Enables the AI to remember important information about you across multiple conversations
  • MCP Servers: Specialized tools that extend your AI with external knowledge sources and capabilities
  • Context7: An MCP server providing up-to-date technical documentation from public libraries for accurate development guidance
  • Full-Stack Mode: Build complete web applications with AI-powered file editing, terminal, and live preview
  • API: Provides programmatic access to Chat LLM features for building custom integrations and applications
  • PWA (Progressive Web App): Lets you install Chat LLM as a standalone app on your device
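To make RAG concrete, here is a toy Python sketch of the retrieve-then-generate flow. The word-overlap scoring and prompt format are illustrative assumptions (production RAG systems use embeddings and vector search), not Chat LLM internals:

```python
documents = [
    "Invoices are due within 30 days of receipt.",
    "Refunds are processed in 5-7 business days.",
    "Support is available Monday through Friday.",
]

def retrieve(question, docs, top_k=1):
    """Rank documents by naive word overlap with the question (real RAG uses embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question, docs):
    """Prepend the retrieved passages so the model answers from your documents."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When are refunds processed?", documents)
print(prompt)
```

The key idea: the most relevant passages from your documents are placed into the prompt, so the model grounds its answer in your content rather than its training data alone.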

When to Use Each Feature

  • Use RAG when you need the AI to reference specific documents or knowledge bases
  • Use Memory when you want the AI to remember personal preferences or details across conversations
  • Use MCP Servers when you need specialized tools or external knowledge sources for enhanced capabilities
  • Use Context7 when you need up-to-date technical documentation and best practices for development
  • Use Full-Stack Mode when you want to build complete web applications with AI-powered development tools
  • Use API when you need programmatic access to Chat LLM features for custom applications
  • Use PWA when you want to install Chat LLM as a standalone application on your device
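As an example of what programmatic access typically looks like, the sketch below builds a chat-completion request body in Python. The endpoint URL, model name, and field names are assumptions based on common OpenAI-compatible APIs; check the platform's API reference for the actual details:

```python
import json

API_URL = "https://example.com/v1/chat/completions"  # hypothetical endpoint

def build_request(model, user_message, stream=False):
    """Assemble the JSON body for a chat-completion call (OpenAI-style shape, assumed)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }

body = build_request("deepseek-chat", "Summarize RAG in one sentence.")
print(json.dumps(body, indent=2))
# Send this with your HTTP client of choice, passing your API key
# in the Authorization header as the API documentation specifies.
```

This is the pattern most custom integrations follow: build a request, authenticate with your key, and consume the response (streamed or not) in your own application.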

Popular Use Cases

Discover how different professionals use Chat LLM to enhance their workflow and productivity.

👨‍💻 Software Development

Use Full-Stack Mode to build complete web applications, Context7 for up-to-date documentation, and RAG for project-specific code references. Perfect for debugging, code generation, and architecture planning.

✍️ Content Creation

Leverage Memory to maintain your writing style across sessions, use RAG with your style guides, and generate creative content with various AI models specialized in writing.

🔬 Research & Analysis

Upload documents via RAG for in-depth analysis, use Memory to track research preferences, and leverage multiple AI models for comprehensive literature reviews.

💼 Business Operations

Automate customer support with RAG-powered responses, generate reports, create presentations, and use the API for seamless integration with your existing tools.

Tips & Best Practices

Follow these recommendations to get the most out of Chat LLM:

  1. Be specific in your prompts - The more detailed your question, the better the AI can assist you. Include context, requirements, and any constraints.
  2. Use Memory for personalization - Store your preferences, common requirements, and frequently used information so the AI remembers them across conversations.
  3. Combine features for power users - Enable RAG with your documents while using Memory and MCP Servers together for the most powerful AI experience.
  4. Try different models - Each AI model has different strengths. Experiment with various models to find the best one for your specific use case.
  5. Install the PWA - Install Chat LLM as a Progressive Web App on your device for faster access, offline capabilities, and a native app-like experience.
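Conceptually, Memory behaves like a small store of saved facts that is injected at the start of each new conversation. The Python sketch below illustrates that idea; it is a simplified model, not the platform's actual mechanism:

```python
memory = {}

def remember(key, value):
    """Save a user preference so later conversations can use it."""
    memory[key] = value

def system_prompt():
    """Inject saved facts at the start of each new conversation."""
    facts = "; ".join(f"{k}: {v}" for k, v in memory.items())
    return f"Known user preferences: {facts}" if facts else "No saved preferences."

remember("tone", "detailed technical explanations")
remember("language", "Python")
print(system_prompt())
```

Because the saved facts travel with every new conversation, the AI can tailor its answers without you restating your preferences each time.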

Combining All Features

For the best experience, you can use all features together:

Example: Complete Technical Support Experience

  • Memory knows you prefer detailed technical explanations
  • RAG finds relevant technical documentation about your specific issue from your knowledge base
  • MCP Servers provide specialized tools for extended capabilities
  • Context7 adds up-to-date best practices and library documentation for production-ready solutions
  • Full-Stack Mode enables AI to build and edit complete web applications with live preview
  • Result: You receive a personalized, comprehensive, and actionable solution, with best practices and executable code tailored to your specific problem

See the dedicated guide for each feature to learn how to set it up and use it.