Model List
Chaterm supports multiple model providers, offering a flexible AI programming experience: from built-in models to custom integrations, it covers the needs of different scenarios.
Built-in Models
Chaterm ships with several high-quality code models out of the box; no additional configuration is required:
Chain-of-Thought Models
These models have deep reasoning capabilities: they analyze problems step by step and provide detailed solutions.
| Model | Features | Use Cases | Reasoning Ability |
|---|---|---|---|
| DeepSeek-R1 (thinking) | Advanced model with deep reasoning | Complex algorithm design, architecture analysis | High |
| GLM-4.6 (thinking) | Strong logical reasoning ability | Code review, problem diagnosis | Medium-High |
| Qwen-Plus (thinking) | Alibaba Cloud Qwen chain-of-thought model | Multi-language development, cross-platform projects | Medium-High |
Standard Models
Fast-responding standard models, suitable for daily programming tasks:
| Model | Features | Use Cases | Response Speed |
|---|---|---|---|
| DeepSeek-V3.2 | Supports complex code analysis | Large project refactoring, performance optimization | Fast |
| Qwen-Plus | High-performance code generation model | Enterprise application development | Fast |
| GLM-4.6 | Excellent code generation capability | Rapid prototyping, feature implementation | Medium |
| Qwen-Turbo | Fast-responding lightweight model | Real-time programming assistance, rapid iteration | Very Fast |
Adding Custom Models
You can add more model providers in Settings to extend Chaterm's functionality. Several integration methods are supported to meet different needs:
Model Integration
1. LiteLLM Integration
Connect to multiple model providers through LiteLLM's unified API interface:
| Configuration Item | Description | Required |
|---|---|---|
| URL Address | LiteLLM service endpoint | Required |
| API Key | Access key | Required |
| Model Name | Specific model to use | Required |
Advantages: Unified interface, supports multiple model providers
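The three configuration items above map directly onto an OpenAI-compatible chat-completions request, which is the interface a LiteLLM proxy typically exposes. A minimal sketch of how the URL, key, and model fit together (the endpoint address, key, and model name below are placeholders, not real values):

```python
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble the pieces of an OpenAI-compatible chat-completions call:
    URL, auth header, and JSON body. Nothing is sent over the network here."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # the API Key from the table
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # the Model Name from the table
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Placeholder values; substitute your own LiteLLM endpoint, key, and model.
url, headers, body = build_chat_request(
    "http://localhost:4000", "sk-litellm-example", "gpt-4o", "Hello"
)
print(url)  # http://localhost:4000/v1/chat/completions
```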
2. OpenAI Integration
Connect directly to the OpenAI service using the official API:
| Configuration Item | Description | Required |
|---|---|---|
| OpenAI URL Address | OpenAI API endpoint | Required |
| OpenAI API Key | OpenAI access key | Required |
| Model Name | GPT-5, GPT-4, etc. | Required |
Advantages: Official support, stable and reliable
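With the official Python SDK, the table's configuration items become the client's `api_key` and the request's `model`. A sketch, assuming the `openai` package is installed and the key is stored in an environment variable; the model name here is only illustrative:

```python
import os

# Request parameters mirroring the configuration table: the model name plus
# the message payload. "gpt-4o" is an illustrative model name.
params = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Write a one-line docstring for a sort function."}],
}

def send(params):
    """Send the request with the official SDK (requires `pip install openai`).
    The API key is read from the environment rather than hard-coded."""
    from openai import OpenAI  # imported lazily so this sketch loads without the package
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    return client.chat.completions.create(**params)
```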
3. Amazon Bedrock
Use Amazon Bedrock, an enterprise-grade managed service:
| Configuration Item | Description | Required |
|---|---|---|
| AWS Access Key | AWS access key | Required |
| AWS Secret Key | AWS secret key | Required |
| AWS Session Token | Session token | Optional |
| AWS Region | Service region | Required |
| Custom VPC Endpoint | Private network endpoint | Optional |
| Cross-Region Inference | Multi-region deployment | Optional |
| Model Name | Bedrock model | Required |
Advantages: Enterprise-grade security, high availability
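The access key, secret key, session token, and region from the table are the standard AWS credential inputs, which boto3 resolves from environment variables, a profile, or an attached role. A sketch using boto3's `bedrock-runtime` client and the Converse API; the model ID and region below are illustrative defaults:

```python
def converse(prompt,
             model_id="anthropic.claude-3-5-sonnet-20240620-v1:0",
             region="us-east-1"):
    """Call a Bedrock model through the Converse API (requires `pip install boto3`
    and AWS credentials resolved the usual way: env vars, profile, or role)."""
    import boto3  # imported lazily so this sketch loads without boto3 installed
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # The Converse API returns the reply nested under output -> message -> content.
    return response["output"]["message"]["content"][0]["text"]
```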
4. DeepSeek Integration
Connect to the official DeepSeek API to use its advanced models:
| Configuration Item | Description | Required |
|---|---|---|
| DeepSeek API Key | DeepSeek access key | Required |
| Model Name | DeepSeek model | Required |
Advantages: Advanced models, strong reasoning capability
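DeepSeek exposes an OpenAI-compatible API, so the same SDK pattern works by pointing `base_url` at DeepSeek. A sketch; the model names (`deepseek-chat` for the standard line, `deepseek-reasoner` for the chain-of-thought line) follow DeepSeek's published naming, but verify them against the current docs:

```python
import os

# "deepseek-reasoner" maps to the chain-of-thought R1 line, "deepseek-chat"
# to the standard line; check DeepSeek's docs for current names.
MODEL = "deepseek-reasoner"

def ask_deepseek(prompt, model=MODEL):
    """DeepSeek's endpoint is OpenAI-compatible, so the official OpenAI SDK
    works with a different base_url (requires `pip install openai`)."""
    from openai import OpenAI  # imported lazily so the sketch loads without the package
    client = OpenAI(
        base_url="https://api.deepseek.com",
        api_key=os.environ["DEEPSEEK_API_KEY"],
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```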
Local Model Deployment
5. Ollama Local Deployment
Use locally deployed Ollama models to keep your data private:
| Configuration Item | Description | Required |
|---|---|---|
| Ollama URL Address | Local Ollama service address | Required |
| Model Name | Local model name | Required |
Advantages: Data privacy, offline available
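Before saving the Ollama configuration, you can confirm the local service is reachable and see which models have been pulled. A small sketch using Ollama's `/api/tags` endpoint; `http://localhost:11434` is Ollama's default local address:

```python
import json
import urllib.request

def tags_url(base_url="http://localhost:11434"):
    """Ollama's /api/tags endpoint lists the models pulled locally."""
    return base_url.rstrip("/") + "/api/tags"

def list_local_models(base_url="http://localhost:11434"):
    """Return the names of locally available models, e.g. to verify the
    'Ollama URL Address' setting before saving it in Chaterm."""
    with urllib.request.urlopen(tags_url(base_url)) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]
```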
Usage Instructions
Quick Start
1. Enter the Settings page: click the settings icon in the top right corner
2. Select the "Models" tab: find model settings in the left menu
3. Click the "Add Model" button: start adding a new model configuration
4. Select a provider: choose the model provider that fits your needs
5. Fill in the required configuration: complete the items listed in the tables above
6. Save and test the connection: verify the configuration is correct
Configuration Tips
- API Key Security: store keys in environment variables rather than hard-coding them
- Connection Testing: always test the connection after saving a configuration
- Model Switching: multiple models can be configured and switched as needed
- Performance Monitoring: keep an eye on model response times and usage costs
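The first tip, keeping keys in environment variables, can be applied with a small fail-fast helper; the variable name below is just an example:

```python
import os

def get_api_key(var_name="DEEPSEEK_API_KEY"):
    """Read an API key from the environment and fail fast with a clear
    message instead of sending an empty key to the provider."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(f"Set {var_name} before starting Chaterm, "
                           f"e.g. export {var_name}=sk-...")
    return key
```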
Model Selection Recommendations
By Use Case
| Use Case | Recommended Model | Reasons |
|---|---|---|
| Daily Programming | Qwen-Turbo | Fast response, low cost |
| Complex Tasks | DeepSeek-R1 (thinking) | Strong reasoning, deep analysis |
| Local Deployment | Ollama | Data privacy, offline available |
| Enterprise Applications | Amazon Bedrock | Stable and reliable, secure and compliant |
| Multi-language Development | Qwen-Plus (thinking) | Multi-language support, strong understanding |
| Rapid Prototyping | GLM-4.6 | Fast generation, suitable for iteration |
By Performance Requirements
Pursuing Speed
- Qwen-Turbo - Fastest response
- GLM-4.6 - Balanced performance and quality
Pursuing Quality
- DeepSeek-R1 (thinking) - Strongest reasoning
- DeepSeek-V3.2 - Complex analysis
Pursuing Cost Efficiency
- Qwen-Turbo - Lowest cost
- Ollama Local - No usage fees
Pursuing Privacy
- Ollama Local - Fully localized
- Amazon Bedrock - Enterprise-grade security