
# 🚀 Model List

Chaterm supports multiple model providers, giving you a flexible AI programming experience: from built-in models to custom integrations, there is an option for every scenario.

## ✨ Built-in Models

Chaterm comes with multiple high-quality code models out of the box, ready to use without additional configuration:

### 🧠 Chain-of-Thought Models

These models have deep reasoning capabilities and can analyze problems step by step to provide detailed solutions:

| Model | Features | Use Cases | Reasoning Ability |
| --- | --- | --- | --- |
| DeepSeek-R1 (thinking) | 🎯 Advanced model with deep reasoning capabilities | Complex algorithm design, architecture analysis | ⭐⭐⭐⭐⭐ |
| DeepSeek-V3.1 (thinking) | 💡 Supports complex code analysis | Large project refactoring, performance optimization | ⭐⭐⭐⭐⭐ |
| GLM-4.5 (thinking) | 🔍 Powerful logical reasoning capabilities | Code review, problem diagnosis | ⭐⭐⭐⭐ |
| Qwen-Plus (thinking) | 🚀 Alibaba Cloud's Tongyi Qianwen chain-of-thought model | Multi-language development, cross-platform projects | ⭐⭐⭐⭐ |

### ⚡ Standard Models

Fast-response standard models suitable for daily programming tasks:

| Model | Features | Use Cases | Response Speed |
| --- | --- | --- | --- |
| GLM-4.5 | 🎨 Excellent code generation capabilities | Rapid prototyping, feature implementation | ⚡⚡⚡⚡ |
| Qwen-Plus | 🏆 High-performance code generation model | Enterprise application development | ⚡⚡⚡ |
| Qwen-Turbo | ⚡ Fast-response lightweight model | Real-time programming assistance, rapid iteration | ⚡⚡⚡⚡⚡ |

## 🔧 Add Custom Models

You can add more model providers in Settings to extend Chaterm's functionality. Several integration methods are supported to meet different needs:

๐ŸŒ Cloud Model Integration โ€‹

#### 1. 🔗 LiteLLM Integration

Connect to various model providers through LiteLLM's unified API interface:

| Configuration Item | Description | Required |
| --- | --- | --- |
| URL Address | LiteLLM service endpoint | ✅ Required |
| API Key | Access key | ✅ Required |
| Model Name | Specific model to use | ✅ Required |

**Advantages:** Unified interface, supports multiple model providers
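As a sketch of how the three settings above fit together, the snippet below builds an OpenAI-compatible chat request against a LiteLLM endpoint. The URL, key, and model name are placeholders, not values Chaterm ships with:

```python
import json
import urllib.request

# Placeholder values — substitute your own LiteLLM endpoint, key, and model.
LITELLM_URL = "http://localhost:4000/v1/chat/completions"  # URL Address
API_KEY = "sk-example"                                     # API Key
MODEL = "deepseek-r1"                                      # Model Name

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat-completions request for LiteLLM."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LITELLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Explain what `tar -xzf backup.tgz` does.")
```

Sending `req` with `urllib.request.urlopen(req)` returns a standard chat-completions response; because every provider behind LiteLLM speaks this same shape, one interface covers them all.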

#### 2. 🤖 OpenAI Integration

Connect directly to OpenAI services, with official support:

| Configuration Item | Description | Required |
| --- | --- | --- |
| OpenAI URL Address | OpenAI API endpoint | ✅ Required |
| OpenAI API Key | OpenAI access key | ✅ Required |
| Model Name | GPT-4, GPT-3.5, etc. | ✅ Required |

**Advantages:** Official support, stable and reliable

3. โ˜๏ธ Amazon Bedrock โ€‹

Use AWS Bedrock services for an enterprise-grade solution:

| Configuration Item | Description | Required |
| --- | --- | --- |
| AWS Access Key | AWS access key | ✅ Required |
| AWS Secret Key | AWS secret key | ✅ Required |
| AWS Session Token | Session token | 🔶 Optional |
| AWS Region | Service region | ✅ Required |
| Custom VPC Endpoint | Private network endpoint | 🔶 Optional |
| Cross-Region Inference | Multi-region deployment | 🔶 Optional |
| Model Name | Bedrock model | ✅ Required |

**Advantages:** Enterprise-grade security, high availability

#### 4. 🚀 DeepSeek Integration

Connect to the official DeepSeek API to use its advanced models:

| Configuration Item | Description | Required |
| --- | --- | --- |
| DeepSeek API Key | DeepSeek access key | ✅ Required |
| Model Name | DeepSeek model | ✅ Required |

**Advantages:** Advanced models, strong reasoning capabilities

๐Ÿ  Local Model Deployment โ€‹

#### 5. 🦙 Ollama Local Deployment

Use locally deployed Ollama models to keep your data private:

| Configuration Item | Description | Required |
| --- | --- | --- |
| Ollama URL Address | Local Ollama service address | ✅ Required |
| Model Name | Local model name | ✅ Required |

**Advantages:** Data privacy, offline available
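A quick way to see what model names your local instance offers is Ollama's `/api/tags` endpoint, which lists the models you have pulled. The sketch below assumes Ollama's default address and returns an empty list when the service is not running:

```python
import json
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local address

def list_local_models(base_url: str = OLLAMA_URL) -> list:
    """Return names of locally pulled models, or [] if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return []

models = list_local_models()
```

Any name this returns (for example a tag like `llama3:8b`) is what goes into the Model Name field above.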

## 📋 Usage Instructions

### Quick Start

1. **Go to the Settings page**: click the settings icon in the top-right corner.
2. **Select the "Models" tab**: find model settings in the left-hand menu.
3. **Click the "Add Model" button**: start a new model configuration.
4. **Choose a provider**: pick the model provider that fits your needs.
5. **Fill in the required configuration**: complete the items listed in the tables above.
6. **Save and test the connection**: verify that the configuration works.

### 🔧 Configuration Tips

- **API key security**: store sensitive keys in environment variables rather than in plain-text config.
- **Connection testing**: always run a connection test after configuring a model.
- **Model switching**: configure several models and switch between them as needed.
- **Performance monitoring**: keep an eye on model response times and usage costs.
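The first tip (keeping keys out of config files) can look like this in practice; the variable name `DEEPSEEK_API_KEY` is only an example:

```python
import os

def load_api_key(var_name: str = "DEEPSEEK_API_KEY") -> str:
    """Read an API key from the environment instead of hard-coding it."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it in your shell before launching the app."
        )
    return key
```

Failing loudly when the variable is missing beats silently falling back to an empty key, which would only surface later as a confusing authentication error.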

## 🎯 Model Selection Recommendations

### Select by Use Case

| Use Case | Recommended Model | Reason |
| --- | --- | --- |
| Daily Programming | Qwen-Turbo | ⚡ Fast response, low cost |
| Complex Tasks | DeepSeek-R1 (thinking) | 🧠 Strong reasoning capabilities, deep analysis |
| Local Deployment | Ollama | 🔒 Data privacy, offline available |
| Enterprise Applications | Amazon Bedrock | 🏢 Stable and reliable, security compliant |
| Multi-language Development | Qwen-Plus (thinking) | 🌍 Multi-language support, strong understanding |
| Rapid Prototyping | GLM-4.5 | 🚀 Fast generation, suitable for iteration |

### Select by Performance Needs

#### 🚀 Pursuing Speed

- **Qwen-Turbo**: fastest response
- **GLM-4.5**: balanced performance and quality

#### 🧠 Pursuing Quality

- **DeepSeek-R1 (thinking)**: strongest reasoning
- **DeepSeek-V3.1 (thinking)**: complex analysis

#### 💰 Pursuing Cost Efficiency

- **Qwen-Turbo**: lowest cost
- **Ollama (local)**: no usage fees

#### 🔒 Pursuing Privacy

- **Ollama (local)**: fully local, nothing leaves your machine
- **Amazon Bedrock**: enterprise-grade security

## 💡 Best Practices

### Combining Models

- **Development phase**: use fast models for rapid iteration.
- **Code review**: use chain-of-thought models for deep analysis.
- **Production environment**: use stable, enterprise-grade models.

### Cost Optimization

- **Local models**: best for frequent, high-volume use.
- **Cloud models**: best for occasional complex tasks.
- **Hybrid usage**: match the model to each task's complexity.
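The hybrid-usage idea can be sketched as a tiny router that picks a model per task category. The categories and model names below mirror the recommendation table above; the function itself is illustrative, not part of Chaterm:

```python
def pick_model(task_type: str) -> str:
    """Route a task category to a recommended model (see the table above)."""
    routes = {
        "daily": "Qwen-Turbo",                # fast, low cost
        "complex": "DeepSeek-R1 (thinking)",  # deep reasoning
        "private": "Ollama (local)",          # data stays on-machine
        "enterprise": "Amazon Bedrock",       # compliance, availability
    }
    return routes.get(task_type, "GLM-4.5")   # balanced default

model = pick_model("complex")
```

Routing cheap tasks to fast models and reserving chain-of-thought models for hard ones keeps both latency and spend down.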

### Security Considerations

- **Sensitive data**: prefer local models.
- **Enterprise environments**: use models that satisfy your compliance requirements.
- **API security**: rotate API keys regularly.