# Model List

Chaterm supports multiple model providers, offering you a flexible AI programming experience: from built-in models to custom integrations, it covers a wide range of scenarios.
## Built-in Models

Chaterm ships with multiple high-quality code models out of the box, ready to use without additional configuration.
### Chain-of-Thought Models
These models have deep reasoning capabilities and can analyze problems step by step to provide detailed solutions:

| Model | Features | Use Cases | Reasoning Ability |
|---|---|---|---|
| DeepSeek-R1 (thinking) | Advanced model with deep reasoning capabilities | Complex algorithm design, architecture analysis | ⭐⭐⭐⭐⭐ |
| DeepSeek-V3.1 (thinking) | Supports complex code analysis | Large project refactoring, performance optimization | ⭐⭐⭐⭐⭐ |
| GLM-4.5 (thinking) | Powerful logical reasoning capabilities | Code review, problem diagnosis | ⭐⭐⭐⭐ |
| Qwen-Plus (thinking) | Alibaba Cloud's Tongyi Qianwen chain-of-thought model | Multi-language development, cross-platform projects | ⭐⭐⭐⭐ |

### Standard Models

Fast-response standard models suitable for daily programming tasks:

| Model | Features | Use Cases | Response Speed |
|---|---|---|---|
| GLM-4.5 | Excellent code generation capabilities | Rapid prototyping, feature implementation | ⚡⚡⚡⚡ |
| Qwen-Plus | High-performance code generation model | Enterprise application development | ⚡⚡⚡ |
| Qwen-Turbo | Fast-response lightweight model | Real-time programming assistance, rapid iteration | ⚡⚡⚡⚡⚡ |

## Add Custom Models

You can add more model providers in Settings to extend Chaterm's functionality. Several integration methods are supported to meet different needs.
### Cloud Model Integration
#### 1. LiteLLM Integration

Connect to various model providers through LiteLLM's unified API interface:

| Configuration Item | Description | Required |
|---|---|---|
| URL | LiteLLM service endpoint | ✅ Required |
| API Key | Access key | ✅ Required |
| Model Name | Specific model to use | ✅ Required |

Advantages: Unified interface, supports multiple model providers
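
If you want to confirm that a LiteLLM endpoint and key are valid before adding them in Chaterm, a quick request to the proxy's OpenAI-compatible model list works as a sanity check. The following is a minimal sketch, assuming a Node.js 18+ environment; `LITELLM_URL` and `LITELLM_API_KEY` are example environment variable names you define yourself, not Chaterm settings.

```typescript
// Minimal connectivity check against a LiteLLM proxy (OpenAI-compatible API).
// LITELLM_URL and LITELLM_API_KEY are example variable names, not Chaterm settings.
async function checkLiteLLM(): Promise<void> {
  const baseUrl = process.env.LITELLM_URL ?? "http://localhost:4000";
  const apiKey = process.env.LITELLM_API_KEY ?? "";

  // LiteLLM proxies expose an OpenAI-style /v1/models endpoint.
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`LiteLLM endpoint returned HTTP ${res.status}`);
  }

  const body = (await res.json()) as { data: Array<{ id: string }> };
  console.log("Models exposed by the proxy:", body.data.map((m) => m.id).join(", "));
}

checkLiteLLM().catch((err) => {
  console.error("LiteLLM check failed:", err);
  process.exit(1);
});
```
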
#### 2. OpenAI Integration

Connect directly to the official OpenAI API:

| Configuration Item | Description | Required |
|---|---|---|
| OpenAI URL | OpenAI API endpoint | ✅ Required |
| OpenAI API Key | OpenAI access key | ✅ Required |
| Model Name | GPT-4, GPT-3.5, etc. | ✅ Required |

Advantages: Official support, stable and reliable
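
As a reference, the same kind of check against OpenAI can use the official `openai` npm package. This is a sketch, not part of Chaterm: the model name below is only an example, and `OPENAI_API_KEY` is assumed to be set in your environment.

```typescript
// Quick test of an OpenAI configuration using the official openai npm package.
// The model name is an example; use whichever model you configured in Chaterm.
import OpenAI from "openai";

async function checkOpenAI(): Promise<void> {
  const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Reply with a single word: ok" }],
  });

  console.log("Response:", completion.choices[0]?.message?.content);
}

checkOpenAI().catch((err) => {
  console.error("OpenAI check failed:", err);
  process.exit(1);
});
```
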
#### 3. Amazon Bedrock

Use AWS Bedrock for an enterprise-grade solution:

| Configuration Item | Description | Required |
|---|---|---|
| AWS Access Key | AWS access key | ✅ Required |
| AWS Secret Key | AWS secret key | ✅ Required |
| AWS Session Token | Session token | 🔶 Optional |
| AWS Region | Service region | ✅ Required |
| Custom VPC Endpoint | Private network endpoint | 🔶 Optional |
| Cross-Region Inference | Multi-region deployment | 🔶 Optional |
| Model Name | Bedrock model | ✅ Required |

Advantages: Enterprise-grade security, high availability
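
For Bedrock, the AWS SDK picks up the access key, secret key, session token, and region from the standard environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION`), so a short script can confirm the credentials and model ID before you enter them in Chaterm. This is a sketch assuming the `@aws-sdk/client-bedrock-runtime` package; the model ID is an example.

```typescript
// Sanity check for AWS Bedrock credentials and a model ID.
// Credentials and region are read from the standard AWS environment variables
// by the SDK's default credential provider chain.
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

async function checkBedrock(): Promise<void> {
  const client = new BedrockRuntimeClient({ region: process.env.AWS_REGION ?? "us-east-1" });

  const response = await client.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-haiku-20240307-v1:0", // example model ID
      messages: [{ role: "user", content: [{ text: "Reply with a single word: ok" }] }],
    })
  );

  console.log("Response:", response.output?.message?.content?.[0]?.text);
}

checkBedrock().catch((err) => {
  console.error("Bedrock check failed:", err);
  process.exit(1);
});
```
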
#### 4. DeepSeek Integration

Connect to the official DeepSeek API to use its advanced models:

| Configuration Item | Description | Required |
|---|---|---|
| DeepSeek API Key | DeepSeek access key | ✅ Required |
| Model Name | DeepSeek model | ✅ Required |

Advantages: Advanced models, strong reasoning capabilities
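
DeepSeek's API is OpenAI-compatible, so a plain HTTP request is enough to verify a key. This sketch assumes the documented `https://api.deepseek.com` base URL and the `deepseek-chat` model name; check the DeepSeek documentation for current values, and set `DEEPSEEK_API_KEY` yourself.

```typescript
// Verify a DeepSeek API key against the OpenAI-compatible chat endpoint.
// Base URL and model name follow DeepSeek's public documentation; confirm them there.
async function checkDeepSeek(): Promise<void> {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-chat",
      messages: [{ role: "user", content: "Reply with a single word: ok" }],
    }),
  });
  if (!res.ok) {
    throw new Error(`DeepSeek API returned HTTP ${res.status}`);
  }

  const body = (await res.json()) as { choices: Array<{ message: { content: string } }> };
  console.log("Response:", body.choices[0]?.message.content);
}

checkDeepSeek().catch((err) => {
  console.error("DeepSeek check failed:", err);
  process.exit(1);
});
```
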
### Local Model Deployment

#### 5. Ollama Local Deployment

Use locally deployed Ollama models to keep your data private:

| Configuration Item | Description | Required |
|---|---|---|
| Ollama URL | Local Ollama service address | ✅ Required |
| Model Name | Local model name | ✅ Required |

Advantages: Data privacy, offline available
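
To confirm that a local Ollama instance is reachable and to see which models it has pulled, you can query its REST API; by default it listens on `http://localhost:11434`, and `/api/tags` lists the local models. A minimal sketch (`OLLAMA_URL` is an example variable name, not a Chaterm setting):

```typescript
// List the models available on a local Ollama instance.
// Ollama's REST API listens on http://localhost:11434 by default.
async function listOllamaModels(): Promise<void> {
  const baseUrl = process.env.OLLAMA_URL ?? "http://localhost:11434";

  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama returned HTTP ${res.status}; is the service running?`);
  }

  const body = (await res.json()) as { models: Array<{ name: string }> };
  console.log("Local models:", body.models.map((m) => m.name).join(", "));
}

listOllamaModels().catch((err) => {
  console.error("Ollama check failed:", err);
  process.exit(1);
});
```
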
## Usage Instructions

### Quick Start
1. Go to the Settings page: click the settings icon in the top-right corner.
2. Select the "Models" tab: find the model settings in the left menu.
3. Click the "Add Model" button to start a new model configuration.
4. Choose a provider based on your needs.
5. Fill in the required configuration items listed in the tables above.
6. Save and test the connection to verify that the configuration is correct.
### Configuration Tips

- API Key Security: Store sensitive credentials in environment variables rather than hard-coding them (see the sketch after this list)
- Connection Testing: Always run a connection test after configuring a model
- Model Switching: Configure multiple models and switch between them as needed
- Performance Monitoring: Keep an eye on model response times and usage costs
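
As an illustration of the first tip, a small helper that reads keys from environment variables and fails fast keeps secrets out of source files and screenshots. The variable names below are examples, not settings that Chaterm itself reads:

```typescript
// Load provider credentials from environment variables instead of hard-coding them.
// The variable names are illustrative; pick whatever naming scheme your team uses.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Fail fast at startup if a key is missing, rather than at the first API call.
const openaiApiKey = requireEnv("OPENAI_API_KEY");
const deepseekApiKey = requireEnv("DEEPSEEK_API_KEY");

console.log("Loaded keys for OpenAI and DeepSeek.");
```
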
## Model Selection Recommendations

### Select by Use Case

| Use Case | Recommended Model | Reason |
|---|---|---|
| Daily Programming | Qwen-Turbo | Fast response, low cost |
| Complex Tasks | DeepSeek-R1 (thinking) | Strong reasoning capabilities, deep analysis |
| Local Deployment | Ollama | Data privacy, available offline |
| Enterprise Applications | Amazon Bedrock | Stable and reliable, security compliant |
| Multi-language Development | Qwen-Plus (thinking) | Multi-language support, strong understanding |
| Rapid Prototyping | GLM-4.5 | Fast generation, suitable for iteration |

### Select by Performance Needs

#### Pursuing Speed
- Qwen-Turbo - Fastest response
- GLM-4.5 - Balanced performance and quality
#### Pursuing Quality
- DeepSeek-R1 (thinking) - Strongest reasoning
- DeepSeek-V3.1 (thinking) - Complex analysis
#### Pursuing Cost Efficiency
- Qwen-Turbo - Lowest cost
- Ollama Local - No usage fees
#### Pursuing Privacy
- Ollama Local - Runs entirely on your machine
- Amazon Bedrock - Enterprise-grade security
## Best Practices

### Model Combination Usage
- Development Phase: Use fast models for rapid iteration
- Code Review: Use chain-of-thought models for deep analysis
- Production Environment: Use stable and reliable enterprise-grade models
### Cost Optimization
- Local Models: Suitable for frequently used, everyday scenarios
- Cloud Models: Suitable for occasional complex tasks
- Hybrid Usage: Route requests to different models based on task complexity (see the sketch below)
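
One way to picture hybrid usage is a small routing function that sends routine requests to a fast, inexpensive model and escalates complex ones to a chain-of-thought model. The complexity levels and model names here are illustrative assumptions, not Chaterm defaults:

```typescript
// Illustrative model routing: cheap/fast models for routine work,
// chain-of-thought models for complex analysis. Names are examples only.
type Complexity = "low" | "medium" | "high";

interface Task {
  prompt: string;
  complexity: Complexity;
}

function pickModel(task: Task): string {
  switch (task.complexity) {
    case "low":
      return "Qwen-Turbo"; // fastest response, lowest cost
    case "medium":
      return "GLM-4.5"; // balanced speed and quality
    case "high":
      return "DeepSeek-R1 (thinking)"; // deep reasoning for complex tasks
  }
}

console.log(pickModel({ prompt: "Rename a variable", complexity: "low" }));
console.log(pickModel({ prompt: "Design a sharding strategy", complexity: "high" }));
```
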
### Security Considerations
- Sensitive Data: Prioritize local models
- Enterprise Environment: Use models that meet your compliance requirements
- API Security: Regularly rotate API keys