Your guide to running local LLMs in minutes
Setting up PearBerry CLI
Install PearBerry CLI globally using npm:
npm install -g pearberry-cli
This makes the pearberry command available system-wide.
Verify the installation:
pearberry --version
Core Commands
pearberry install deepseek-7b
This will download the DeepSeek Coder 7B quantized model and prepare it for use.
Available models: deepseek-7b, llama-7b, mistral-7b, phi-3-mini
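"Quantized" means the model's weights are stored at reduced numeric precision to shrink download size and memory use. As a rough illustration only (not PearBerry's or DeepSeek's actual quantization scheme), here is a minimal sketch of symmetric int8 quantization:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.12, -1.5, 0.73, 0.04]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight is recovered to within half a quantization step (`scale / 2`), which is why quantized models are much smaller with only a modest accuracy loss.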
pearberry list
Shows all installed models along with their versions and other details.
pearberry run
Starts the server with the default model; pass --model to run a specific one:
pearberry run --model deepseek-7b
pearberry chat
Opens an interactive chat interface in your terminal to converse with the model.
You must first have a model running with pearberry run.
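Because chat depends on a running server, a script can check for one before opening a session. This is a generic reachability check, not part of the PearBerry CLI, and the port number is purely a placeholder assumption:

```python
import socket

def server_is_up(host="127.0.0.1", port=8080, timeout=0.5):
    """Return True if something is listening on host:port.

    The default port here is a placeholder; check your PearBerry
    server's startup output for the actual address.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```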
Customizing PearBerry
PearBerry can be configured to suit your specific needs through the config.yaml file.
# PearBerry Configuration File
# Located at ~/.pearberry/config.yaml

# Model settings
models:
  default: "deepseek-7b"
  storage_path: "~/pearberry/models"

# Runtime settings
runtime:
  threads: 4
  context_length: 4096
  temperature: 0.7

# Interface settings
interface:
  theme: "dark"
  save_history: true
  history_file: "~/pearberry/history.json"
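The temperature setting controls how "adventurous" sampling is: logits are divided by the temperature before being turned into probabilities, so values below 1.0 sharpen the distribution and values above 1.0 flatten it. A minimal sketch of this standard mechanism (illustrative only, not PearBerry's internals):

```python
import math

def sample_probs(logits, temperature=0.7):
    """Convert raw model logits to token probabilities.

    Lower temperature concentrates probability on the top token;
    higher temperature spreads it out.
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

For example, with logits [2.0, 1.0, 0.1], a temperature of 0.5 assigns the top token a noticeably higher probability than a temperature of 1.5 does.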
You can edit this file manually or use the config command:
pearberry config set runtime.threads 8
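The dotted key path mirrors the YAML nesting: runtime.threads addresses the threads key inside the runtime section. A sketch of how such a dotted update maps onto a nested config structure (this is illustrative, not PearBerry's implementation):

```python
def set_by_path(config, dotted_key, value):
    """Apply an update like ('runtime.threads', 8) to a nested config dict."""
    *parents, leaf = dotted_key.split(".")
    node = config
    for part in parents:
        node = node.setdefault(part, {})  # create intermediate sections as needed
    node[leaf] = value
    return config

cfg = {"runtime": {"threads": 4, "context_length": 4096}}
set_by_path(cfg, "runtime.threads", 8)
```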
For more advanced configuration options, check out the full documentation.
Common Issues & Solutions