Getting Started with PearBerry

Your guide to running local LLMs in minutes

Installation

Setting up PearBerry CLI

Install PearBerry CLI globally using npm:

npm install -g pearberry-cli

This will make the pearberry command available globally on your system.

Verify the installation:

pearberry --version

Usage

Core Commands

Install a model
Download and install a specific LLM model onto your system.
pearberry install deepseek-7b

This will download the DeepSeek Coder 7B quantized model and prepare it for use.

Available models: deepseek-7b, llama-7b, mistral-7b, phi-3-mini
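
If you want several of the documented models at once, the install command can be wrapped in a small shell loop. This is a sketch using only the commands shown in this guide; it skips the installs entirely if the pearberry CLI is not yet on your PATH:

```shell
#!/usr/bin/env sh
# Sketch: install every model documented above in one pass.
# Skips the loop body if the pearberry CLI is not installed.
MODELS="deepseek-7b llama-7b mistral-7b phi-3-mini"

for model in $MODELS; do
  if command -v pearberry >/dev/null 2>&1; then
    pearberry install "$model"
  fi
done
```

Installing everything up front trades disk space for the ability to switch models later without waiting for a download.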

List installed models
View all the LLM models currently installed on your system.
pearberry list

Shows all installed models along with their versions and other details.

Run a model
Start an LLM server with the specified model, or with the default model if none is given.
pearberry run

Starts the server with the default model, or use a specific model:

pearberry run --model deepseek-7b

Chat with a model
Start an interactive chat session with the running model.
pearberry chat

Opens an interactive chat interface in your terminal to converse with the model.

Note: a model must already be running (started with pearberry run) before you can chat.
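
The run-then-chat workflow can be sketched as a single script. This is a minimal sketch using only the commands documented here; it assumes the server keeps running in the background while the chat session is open, and the 5-second startup wait is an arbitrary placeholder:

```shell
#!/usr/bin/env sh
# Sketch of the two-step workflow: start a model server in the
# background, then open an interactive chat session against it.
MODEL="deepseek-7b"

if command -v pearberry >/dev/null 2>&1; then
  pearberry run --model "$MODEL" &   # start the server in the background
  SERVER_PID=$!
  sleep 5                            # give the model time to load (arbitrary wait)
  pearberry chat                     # interactive session; blocks until you quit
  kill "$SERVER_PID"                 # stop the server when the chat ends
else
  echo "pearberry not found; install with: npm install -g pearberry-cli"
fi
```

In day-to-day use it is often simpler to keep pearberry run open in one terminal and pearberry chat in another.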

Configuration

Customizing PearBerry

PearBerry can be configured through the config.yaml file, located at ~/.pearberry/config.yaml.

# PearBerry Configuration File
# Located at ~/.pearberry/config.yaml

# Model settings
models:
  default: "deepseek-7b"
  storage_path: "~/pearberry/models"

# Runtime settings
runtime:
  threads: 4
  context_length: 4096
  temperature: 0.7

# Interface settings
interface:
  theme: "dark"
  save_history: true
  history_file: "~/pearberry/history.json"

You can edit this file manually or use the config command:

pearberry config set runtime.threads 8
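
The same pattern should work for the other keys shown in config.yaml above. This sketch assumes config set accepts any dotted key that mirrors the file's structure, which this guide only demonstrates for runtime.threads; the values are illustrative:

```shell
#!/usr/bin/env sh
# Sketch: adjust settings from config.yaml via the CLI.
# Assumes `config set` takes dotted keys mirroring the file layout.
KEY="runtime.context_length"

if command -v pearberry >/dev/null 2>&1; then
  pearberry config set "$KEY" 8192           # illustrative value
  pearberry config set interface.save_history false
fi
```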

For more advanced configuration options, check out the full documentation.

Troubleshooting

Common Issues & Solutions