This prompt defines how Brain behaves across all models. It is sent as the system message with every query.
Leave it empty to use the default. You can instruct the model to adopt a tone, focus on specific topics, take on a persona, and so on.
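As a sketch of what "sent as the system message" means, a typical OpenAI-style chat request places the custom instructions first, before the user's query. The field names below follow the common chat-completions shape; the exact payload Brain sends is an assumption:

```json
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "Your custom instructions go here." },
    { "role": "user", "content": "The query you typed." }
  ]
}
```

Because the system message is included with every query, changes to it take effect on the next message you send.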
Model Preferences
Show local models (Ollama) in model selector
When off, only cloud models (GPT, Claude, Gemini, etc.) appear in the dropdown. Local models typically respond more slowly, but they cost nothing and keep your data on your machine.
Local Models (Ollama)
Manage which models are loaded into memory. Loaded models respond faster but consume RAM; unload models you aren't using to free it.
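The same load state can be inspected and changed from the Ollama CLI, if you prefer the terminal over this panel. A minimal sketch, assuming Ollama is installed and its server is running (the model name `llama3` is just an example):

```shell
# List models currently loaded into memory, with their RAM usage
ollama ps

# Unload a model you're not using to free that RAM
ollama stop llama3

# Running a model loads it back into memory on first use
ollama run llama3 "hello"
```

Ollama also unloads idle models on its own after a keep-alive timeout, so manual unloading mainly matters when you need the RAM back right away.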