Model List

Chaterm supports multiple model providers, from built-in models to custom integrations, offering a flexible AI programming experience for different scenarios.

Built-in Models

Chaterm includes multiple high-quality code models out of the box, requiring no additional configuration:

Chain-of-Thought Models

These models have deep reasoning capabilities, able to analyze problems step by step and provide detailed solutions:

| Model | Features | Use Cases | Reasoning Ability |
| --- | --- | --- | --- |
| DeepSeek-R1 (thinking) | Advanced model with deep reasoning | Complex algorithm design, architecture analysis | High |
| GLM-4.6 (thinking) | Strong logical reasoning ability | Code review, problem diagnosis | Medium-High |
| Qwen-Plus (thinking) | Alibaba Cloud Qwen chain-of-thought model | Multi-language development, cross-platform projects | Medium-High |

Standard Models

Fast-responding standard models, suitable for daily programming tasks:

| Model | Features | Use Cases | Response Speed |
| --- | --- | --- | --- |
| DeepSeek-V3.2 | Supports complex code analysis | Large project refactoring, performance optimization | Fast |
| Qwen-Plus | High-performance code generation model | Enterprise application development | Fast |
| GLM-4.6 | Excellent code generation capability | Rapid prototyping, feature implementation | Medium |
| Qwen-Turbo | Fast-responding lightweight model | Real-time programming assistance, rapid iteration | Very Fast |

Adding Custom Models

You can add more model providers in Settings to extend Chaterm's functionality. Multiple integration methods are supported to meet different needs:

Model Integration

1. LiteLLM Integration

Connect to multiple model providers through LiteLLM, which provides a unified API interface:

| Configuration Item | Description | Required |
| --- | --- | --- |
| URL Address | LiteLLM service endpoint | Required |
| API Key | Access key | Required |
| Model Name | Specific model to use | Required |

Advantages: Unified interface, supports multiple model providers
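As a quick sanity check outside Chaterm, you can exercise the same three configuration items against a LiteLLM proxy, which serves an OpenAI-compatible `/chat/completions` route. A minimal sketch using only the Python standard library; the URL, key, and model name below are placeholders for your own values:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Assemble an OpenAI-compatible /chat/completions request for a LiteLLM proxy."""
    endpoint = base_url.rstrip("/") + "/chat/completions"  # URL Address item
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # API Key item
    }
    body = {
        "model": model,  # Model Name item
        "messages": [{"role": "user", "content": prompt}],
    }
    return endpoint, headers, body

def send_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> str:
    """Send the request; requires a reachable LiteLLM service."""
    endpoint, headers, body = build_chat_request(base_url, api_key, model, prompt)
    req = urllib.request.Request(endpoint, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If this call succeeds from your machine, the same URL, key, and model name should work in Chaterm's LiteLLM configuration.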

2. OpenAI Integration

Connect directly to the OpenAI service using the official API:

| Configuration Item | Description | Required |
| --- | --- | --- |
| OpenAI URL Address | OpenAI API endpoint | Required |
| OpenAI API Key | OpenAI access key | Required |
| Model Name | GPT-5, GPT-4, etc. | Required |

Advantages: Official support, stable and reliable
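In line with the configuration tip below about keeping keys in environment variables, a minimal sketch that calls the official endpoint with the key read from `OPENAI_API_KEY` rather than hard-coded; the model name is a placeholder:

```python
import json
import os
import urllib.request

def openai_chat(model: str, prompt: str,
                base_url: str = "https://api.openai.com/v1") -> str:
    """Call the official OpenAI chat completions API.

    The key is read from the OPENAI_API_KEY environment variable so it never
    appears in source code or configuration files.
    """
    api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if the key is not set
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Export the key once (`export OPENAI_API_KEY=...`) and paste the same value into Chaterm's OpenAI API Key field.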

3. Amazon Bedrock

Use the AWS Bedrock service, an enterprise-grade solution:

| Configuration Item | Description | Required |
| --- | --- | --- |
| AWS Access Key | AWS access key | Required |
| AWS Secret Key | AWS secret key | Required |
| AWS Session Token | Session token | Optional |
| AWS Region | Service region | Required |
| Custom VPC Endpoint | Private network endpoint | Optional |
| Cross-Region Inference | Multi-region deployment | Optional |
| Model Name | Bedrock model | Required |

Advantages: Enterprise-grade security, high availability
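With the credentials from the table available in your environment or `~/.aws/credentials`, you can verify Bedrock access using the boto3 Converse API. This is a sketch, not Chaterm's own implementation; `boto3` must be installed separately (`pip install boto3`), and the model ID and region are placeholders:

```python
def bedrock_converse(model_id: str, prompt: str, region: str = "us-east-1") -> str:
    """Send one message to a Bedrock model via the Converse API.

    Credentials (access key, secret key, optional session token) are resolved
    by boto3 from the environment or ~/.aws/credentials; `region` corresponds
    to the AWS Region configuration item.
    """
    import boto3  # imported here so the sketch loads even without boto3 installed

    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(
        modelId=model_id,  # the Model Name configuration item
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Custom VPC endpoints and cross-region inference are configured on the AWS side and need no changes to this call.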

4. DeepSeek Integration

Connect to the official DeepSeek API to use its advanced models:

| Configuration Item | Description | Required |
| --- | --- | --- |
| DeepSeek API Key | DeepSeek access key | Required |
| Model Name | DeepSeek model | Required |

Advantages: Advanced models, strong reasoning capability
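The DeepSeek API is OpenAI-compatible, which is why only the two items above are needed. A minimal sketch; the model identifiers shown are DeepSeek's published names (`deepseek-chat`, and `deepseek-reasoner` for the chain-of-thought model), but treat them as assumptions if your account offers different ones:

```python
import json
import urllib.request

DEEPSEEK_BASE_URL = "https://api.deepseek.com"  # official OpenAI-compatible endpoint

def deepseek_chat(api_key: str, prompt: str, model: str = "deepseek-chat") -> str:
    """Minimal DeepSeek call; pass model="deepseek-reasoner" for the R1
    chain-of-thought model."""
    req = urllib.request.Request(
        DEEPSEEK_BASE_URL + "/chat/completions",
        data=json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```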

Local Model Deployment

5. Ollama Local Deployment

Use locally deployed Ollama models to protect data privacy:

| Configuration Item | Description | Required |
| --- | --- | --- |
| Ollama URL Address | Local Ollama service address | Required |
| Model Name | Local model name | Required |

Advantages: Data privacy, offline available
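To confirm a local Ollama service is reachable before pointing Chaterm at it, the standard library is enough and no API key is required. The default URL below matches Ollama's standard port; the model name is a placeholder for whatever you have pulled locally:

```python
import json
import urllib.request

def ollama_models(base_url: str = "http://localhost:11434"):
    """List locally installed models via GET /api/tags."""
    with urllib.request.urlopen(base_url + "/api/tags") as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def ollama_generate(prompt: str, model: str = "llama3",
                    base_url: str = "http://localhost:11434") -> str:
    """Run one non-streaming completion against the local Ollama server."""
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

Any name returned by `ollama_models()` is valid as the Model Name configuration item.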

Usage Instructions

Quick Start

  1. Enter Settings Page - Click the settings icon in the top right corner
  2. Select "Models" Tab - Find model settings in the left menu
  3. Click "Add Model" Button - Start adding new model configuration
  4. Select Corresponding Provider - Choose appropriate model provider based on needs
  5. Fill in Required Configuration - Fill in configuration items according to the table requirements
  6. Save and Test Connection - Verify configuration is correct

Configuration Tips

  • API Key Security: Use environment variables to store sensitive information such as API keys
  • Connection Testing: Always test the connection after saving a configuration
  • Model Switching: You can configure multiple models and switch between them as needed
  • Performance Monitoring: Keep an eye on model response time and usage costs

Model Selection Recommendations

By Use Case

| Use Case | Recommended Model | Reasons |
| --- | --- | --- |
| Daily Programming | Qwen-Turbo | Fast response, low cost |
| Complex Tasks | DeepSeek-R1 (thinking) | Strong reasoning, deep analysis |
| Local Deployment | Ollama | Data privacy, offline availability |
| Enterprise Applications | Amazon Bedrock | Stable and reliable, secure and compliant |
| Multi-language Development | Qwen-Plus (thinking) | Multi-language support, strong understanding |
| Rapid Prototyping | GLM-4.6 | Fast generation, suitable for iteration |

By Performance Requirements

Pursuing Speed

  • Qwen-Turbo - Fastest response
  • GLM-4.6 - Balanced performance and quality

Pursuing Quality

  • DeepSeek-R1 (thinking) - Strongest reasoning
  • DeepSeek-V3.2 - Complex analysis

Pursuing Cost Efficiency

  • Qwen-Turbo - Lowest cost
  • Ollama Local - No usage fees

Pursuing Privacy

  • Ollama Local - Fully localized
  • Amazon Bedrock - Enterprise-grade security