Model Catalog

Explore the range of supported models and find the right one for your needs


Phi-3 Mini

4B parameters

A compact but powerful model ideal for consumer hardware, offering excellent reasoning capabilities for its size.

Size: 4B
RAM Required: 8GB+
Context Length: 8K tokens
Best For: Low-resource environments
Performance Rating: 8.2/10

DeepSeek Coder

7B parameters

Specialized model for code generation and understanding, with exceptional performance on coding tasks.

Size: 7B
RAM Required: 16GB+
Context Length: 16K tokens
Best For: Coding, programming tasks
Performance Rating: 9.4/10

Mistral

7B parameters

General-purpose model with excellent reasoning capabilities and instruction following, optimized for chat.

Size: 7B
RAM Required: 16GB+
Context Length: 8K tokens
Best For: General use, chat
Performance Rating: 9.2/10

Llama 2

7B parameters

Meta's open-weight model with a good balance of reasoning and knowledge, suitable for a variety of tasks.

Size: 7B
RAM Required: 16GB+
Context Length: 4K tokens
Best For: General use, content creation
Performance Rating: 9.0/10

Mixtral

8x7B parameters

A Mixture of Experts (MoE) model that delivers markedly higher output quality than dense 7B models while keeping hardware requirements reasonable.

Size: 8x7B
RAM Required: 24GB+
Context Length: 32K tokens
Best For: Complex reasoning, code, writing
Performance Rating: 9.8/10

Llama 3

70B parameters

Latest model from Meta with state-of-the-art performance across reasoning, coding, and general knowledge tasks.

Size: 70B
RAM Required: 40GB+
Context Length: 8K tokens
Best For: High-demand tasks, research
Performance Rating: 9.7/10
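The cards above can be treated as structured records and filtered by a hardware budget. The sketch below is illustrative only: the `ModelEntry` class and `models_for_ram` helper are hypothetical names (not part of any real catalog API), and the values are transcribed directly from the listings above, with context lengths kept as their "K" labels.

```python
from dataclasses import dataclass

@dataclass
class ModelEntry:
    # One catalog card: model name, parameter-count label, minimum RAM
    # in GB, context window in K tokens, and the listed rating out of 10.
    name: str
    size: str
    ram_gb: int
    context_k: int
    rating: float

# Entries transcribed from the catalog above.
CATALOG = [
    ModelEntry("Phi-3 Mini",     "4B",   8,  8,  8.2),
    ModelEntry("DeepSeek Coder", "7B",   16, 16, 9.4),
    ModelEntry("Mistral",        "7B",   16, 8,  9.2),
    ModelEntry("Llama 2",        "7B",   16, 4,  9.0),
    ModelEntry("Mixtral",        "8x7B", 24, 32, 9.8),
    ModelEntry("Llama 3",        "70B",  40, 8,  9.7),
]

def models_for_ram(budget_gb: int) -> list[ModelEntry]:
    """Return entries that fit within a RAM budget, best-rated first."""
    fits = [m for m in CATALOG if m.ram_gb <= budget_gb]
    return sorted(fits, key=lambda m: m.rating, reverse=True)
```

With a 16 GB budget, for example, this surfaces DeepSeek Coder, Mistral, and Llama 2 ahead of Phi-3 Mini, mirroring how the catalog's filter and sort controls would narrow the list.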