# Choosing the Right AI Model
How to select the best AI model for your project.
## Available Models
VULK provides access to cutting-edge AI models from the world's leading providers. Each model is optimized for different use cases, speeds, and budgets.
## Model Tiers

### Free Tier (Available to Everyone)
| Model | Provider | Speed | Quality | Context | Cost |
|---|---|---|---|---|---|
| Gemini 3 Flash | Google | Ultra Fast | Excellent | 1M tokens | 0.25x |
| Gemini 3 Pro | Google | Fast | Excellent | 1M tokens | 1x |
| GLM 4.7 | Zhipu AI | Fast | Very Good | 200K tokens | 0.08x |
| DeepSeek V3.2 | DeepSeek | Fast | Very Good | 163K tokens | 0.13x |
| DeepSeek V3.2 Speciale | DeepSeek | Fast | Very Good | 163K tokens | 0.14x |
| Amazon Nova 2 Lite | Amazon | Fast | Good | 1M tokens | 0.15x |
### Premium Tier (Subscription Required)
| Model | Provider | Speed | Quality | Context | Cost |
|---|---|---|---|---|---|
| Claude Opus 4.5 | Anthropic | Standard | Elite | 1M tokens | 5x |
| GPT-5.2 | OpenAI | Fast | Elite | 400K tokens | 3x |
| GPT-5.1 Codex Max | OpenAI | Fast | Elite | 400K tokens | 2x |
| Mistral Devstral 2 | Mistral | Ultra Fast | Excellent | 262K tokens | 0.025x |
| MiniMax M2.1 | MiniMax | Fast | Very Good | 196K tokens | 0.06x |
| Grok 4.1 Fast | xAI | Ultra Fast | Excellent | 2M tokens | 0.1x |
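The tables above can be compared programmatically. The sketch below is purely illustrative: the record layout and the `pick_models` helper are assumptions for this example, not part of VULK; the data is a subset of the models listed above.

```python
# Illustrative only: a plain-data view of a few rows from the tables above.
# The dict layout and pick_models() are assumptions, not a VULK API.
MODELS = [
    {"name": "Gemini 3 Flash", "tier": "free", "context_tokens": 1_000_000, "cost": 0.25},
    {"name": "GLM 4.7", "tier": "free", "context_tokens": 200_000, "cost": 0.08},
    {"name": "DeepSeek V3.2", "tier": "free", "context_tokens": 163_000, "cost": 0.13},
    {"name": "Claude Opus 4.5", "tier": "premium", "context_tokens": 1_000_000, "cost": 5.0},
    {"name": "Grok 4.1 Fast", "tier": "premium", "context_tokens": 2_000_000, "cost": 0.1},
]

def pick_models(tier: str, min_context: int):
    """Models in a tier that meet a context-window floor, cheapest first."""
    hits = [m for m in MODELS
            if m["tier"] == tier and m["context_tokens"] >= min_context]
    return sorted(hits, key=lambda m: m["cost"])

# Of the free models sampled here, which can hold ~1M tokens of context?
print([m["name"] for m in pick_models("free", 1_000_000)])  # → ['Gemini 3 Flash']
```

Sorting by cost surfaces the cheapest qualifying model first, which mirrors how the recommendations below weigh quality against credit usage.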
## Recommendations

### Landing Pages & Marketing Sites
- Best: Gemini 3 Flash (fast, cost-effective)
- Alternative: GLM 4.7 (lowest cost)
### Full-Stack Applications
- Best: Gemini 3 Pro (balanced quality/speed)
- Premium: Claude Opus 4.5 (highest quality)
### Complex Business Logic
- Best: Claude Opus 4.5 (superior reasoning)
- Alternative: GPT-5.2 (excellent at architecture)
### Mobile Apps (React Native / Flutter)
- Best: Gemini 3 Pro (great multimodal understanding)
- Fast: Gemini 3 Flash
### API-Heavy Backends
- Best: GPT-5.1 Codex Max (optimized for code)
- Alternative: DeepSeek V3.2 (cost-effective)
### Budget-Conscious Development
- Best: GLM 4.7 (0.08x cost)
- Alternative: Mistral Devstral 2 (0.025x cost)
## Default Model
New projects use Gemini 3 Flash by default:
- 1M token context window
- 0.25x credit cost
- Excellent quality for most use cases
## Changing Models

1. Click the model name in the chat header.
2. Search or scroll to find your preferred model.
3. Select it; the change applies immediately.
## Credit Multipliers
The cost multiplier shows relative credit usage:
- 0.25x = Uses 25% of standard credits
- 1x = Standard credit usage
- 5x = Uses 5x standard credits (premium models)
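The multiplier arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical per-task figure of 100 standard credits; `credits_used` is not a VULK function.

```python
# Minimal sketch of the multiplier arithmetic described above.
# base_credits is an assumed per-task figure, not a VULK constant.
def credits_used(base_credits: float, multiplier: float) -> float:
    """Credits consumed = standard cost times the model's multiplier."""
    return base_credits * multiplier

base = 100  # a task costing 100 credits on a 1x model
print(credits_used(base, 0.25))  # Gemini 3 Flash  → 25.0 (25% of standard)
print(credits_used(base, 1.0))   # Gemini 3 Pro    → 100.0
print(credits_used(base, 5.0))   # Claude Opus 4.5 → 500.0
```

So the same task costs 20x more on a 5x model than on a 0.25x model, which is why the free tier is the sensible starting point.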
Premium models typically produce higher quality code but consume more credits. Start with free models and upgrade when needed.