The AI System Daemon runs silently in the background, continuously watching your system resources and stepping in the moment something looks wrong. When CPU usage, RAM consumption, or hardware temperature crosses a threshold, the daemon collects a snapshot of the top offending processes, sends that data to your configured AI provider, and surfaces a plain-language root-cause analysis, all without you having to open a terminal.

## Documentation Index
Fetch the complete documentation index at: https://mintlify.com/AhmedSaadi0/NibrasShell/llms.txt
Use this file to discover all available pages before exploring further.
## How it works
NibrasShell wires together two QML singletons, `SystemService` and `AiAnalysisService`, to build a closed monitoring loop.
### Continuous sampling
`SystemService` runs `system_monitor.py` as a persistent background process. Every tick it emits updated CPU usage, RAM usage, and maximum temperature values. Separate temperature readings are gathered for CPU, GPU, and storage via `system_diagnostics.py`.
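The per-tick computation can be sketched as a few pure functions. The helper names and the `/proc`-style inputs below are assumptions for illustration, not the actual internals of `system_monitor.py`:

```python
def cpu_usage(prev: tuple, curr: tuple) -> float:
    """CPU busy percentage from two consecutive samples of counters
    read from /proc/stat. Each sample is (idle_jiffies, total_jiffies)."""
    idle = curr[0] - prev[0]
    total = curr[1] - prev[1]
    return 0.0 if total == 0 else 100.0 * (1 - idle / total)

def ram_usage(mem_total_kb: int, mem_available_kb: int) -> float:
    """RAM usage percentage from /proc/meminfo MemTotal/MemAvailable."""
    return 100.0 * (1 - mem_available_kb / mem_total_kb)

def max_temp(readings: dict) -> float:
    """Hottest sensor across the per-device readings (in degrees C)."""
    return max(readings.values())
```

Each tick, the triple `(cpu_usage, ram_usage, max_temp)` is what the service emits to the rest of the loop.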
### Threshold detection

Each metric is compared against your configured thresholds (`cpuHighLoadThreshold`, `ramHighLoadThreshold`, `tempHighThreshold`). When a value crosses its threshold, `SystemService` emits a typed alert signal: `cpuAlert`, `ramAlert`, or `tempAlert`.
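A minimal mirror of that check (threshold values are illustrative, and the metric-to-signal-name mapping is an assumption):

```python
# Illustrative defaults; the real values come from the documented config keys
# cpuHighLoadThreshold, ramHighLoadThreshold, and tempHighThreshold.
THRESHOLDS = {"cpu": 90.0, "ram": 85.0, "temp": 85.0}

def classify(metric: str, value: float, thresholds=THRESHOLDS):
    """Return the typed alert name ('cpuAlert', 'ramAlert', 'tempAlert')
    when the value crosses its threshold, else None."""
    if value >= thresholds[metric]:
        return f"{metric}Alert"
    return None
```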
### Process diagnostics

`AiAnalysisService` receives the alert and immediately calls back into `SystemService` to run `system_diagnostics.py --action cpu` or `--action ram`. This yields a ranked list of the top processes currently consuming the resource. Temperature spikes additionally collect per-device readings for CPU, GPU, and storage.
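The ranking step can be sketched as a pure function over process snapshots (the field names here are assumptions about the shape of the script's output):

```python
def top_offenders(processes: list, key: str = "cpu_percent", n: int = 5) -> list:
    """Return the n processes consuming the most of the given resource,
    highest first -- a sketch of what --action cpu / --action ram yields."""
    return sorted(processes, key=lambda p: p[key], reverse=True)[:n]
```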
### AI root-cause analysis

The collected snapshot (current value, previous value, delta, threshold, top processes, and temperature data) is serialised as JSON and sent to `ai/main.py` via `AiService`. Your AI provider receives the payload and returns a structured analysis including a title, narrative, root-cause hypothesis, confidence score, thermal risk level, process anomaly details, and a list of recommended actions.
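For illustration, a snapshot serialised along those lines might look like the following; the exact field names expected by `ai/main.py` are an assumption:

```python
import json

# Hypothetical spike snapshot; the fields follow the docs' description
# (current value, previous value, delta, threshold, top processes, temps),
# but the real JSON schema may differ.
snapshot = {
    "type": "CPU",
    "value": 96.4,
    "previousValue": 31.2,
    "delta": 65.2,
    "threshold": 90.0,
    "topProcesses": [{"pid": 4242, "name": "chromium", "cpu_percent": 81.3}],
    "temperatures": {"cpu": 71.0, "gpu": 63.5, "storage": 41.0},
}
payload = json.dumps(snapshot)  # sent to ai/main.py via AiService
```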
### Smart cooldowns

Two independent cooldown layers prevent alert floods. A spike cooldown (`resourceAlertCooldownMs`) gates how often a new spike event can be created for a given resource type. A process cooldown tracks the top process by `pid:name` key: if the same process already triggered an alert recently, the duplicate is silently dropped. Temperature spikes use a longer fixed cooldown of 300 seconds.
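The two layers can be sketched as a small class. The class and method names are hypothetical; only the 300-second temperature cooldown and the `pid:name` key come from the description above:

```python
import time

class AlertCooldowns:
    """Sketch of the spike cooldown plus per-process cooldown layers."""

    def __init__(self, resource_cooldown_s=60.0, process_cooldown_s=60.0,
                 temp_cooldown_s=300.0):
        self.resource_cooldown_s = resource_cooldown_s
        self.process_cooldown_s = process_cooldown_s
        self.temp_cooldown_s = temp_cooldown_s
        self._last_resource = {}   # resource type -> last alert timestamp
        self._last_process = {}    # "pid:name"    -> last alert timestamp

    def allow(self, resource: str, pid: int, name: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        cooldown = (self.temp_cooldown_s if resource == "temp"
                    else self.resource_cooldown_s)
        if now - self._last_resource.get(resource, -1e18) < cooldown:
            return False                       # spike cooldown still active
        key = f"{pid}:{name}"
        if now - self._last_process.get(key, -1e18) < self.process_cooldown_s:
            return False                       # same process alerted recently
        self._last_resource[resource] = now
        self._last_process[key] = now
        return True
```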
### Boot analysis

Three seconds after shell startup, `SystemService` calls `AiService` with `callBootAnalysisAi` to summarise what ran at startup. The result populates `bootStatusTitle`, `aiBootSummary`, `bootTimeText`, and `bootLogsModel`, giving you an at-a-glance picture of your last boot without digging through journal logs.
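The real delay lives in QML, but the one-shot scheduling pattern can be sketched in plain Python (function names are hypothetical):

```python
import threading

def schedule_boot_analysis(callback, delay_s: float = 3.0) -> threading.Timer:
    """Fire `callback` once after `delay_s` seconds, mirroring the
    3-second boot-analysis delay described above."""
    timer = threading.Timer(delay_s, callback)
    timer.daemon = True
    timer.start()
    return timer
```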
## Event model

`AiAnalysisService` keeps an in-memory list of up to 50 analysis events (`maxEventsCount`). Each entry starts with `isLoading: true` and `aiAnalysis: "Analyzing... Please wait."`. Once the AI responds, the entry is updated in place with the full structured result. The event list is available to any UI component that imports `AiAnalysisService`.
Each event carries:
| Field | Description |
|---|---|
| `type` | `CPU`, `RAM`, or `TEMP` |
| `value` | Current metric value at alert time |
| `severity` | `NORMAL`, `WARNING`, or `CRITICAL` |
| `aiTitle` | Short AI-generated headline |
| `aiNarrative` | Plain-language description of the event |
| `aiRootCause` | Root-cause hypothesis |
| `aiConfidence` | Confidence score (0–100) |
| `aiThermalRisk` | Risk level for temperature events |
| `aiProcessName` | Name of the anomalous process |
| `aiActions` | List of recommended remediation actions |
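The lifecycle of an entry (placeholder first, updated in place, capped at 50) can be sketched as follows; the newest-first ordering and helper names are assumptions:

```python
MAX_EVENTS = 50  # maxEventsCount

events: list = []

def push_event(event: dict) -> None:
    """Insert a new event with the documented loading placeholder,
    keeping at most MAX_EVENTS entries (newest first, assumed)."""
    events.insert(0, {"isLoading": True,
                      "aiAnalysis": "Analyzing... Please wait.",
                      **event})
    del events[MAX_EVENTS:]

def resolve_event(index: int, analysis: dict) -> None:
    """Update the entry in place once the AI response arrives."""
    events[index].update(analysis, isLoading=False)
```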
## Configuration
All daemon settings live in `~/.nibrasshell.json`. The relevant keys are:
### AI provider settings
| Key | Description |
|---|---|
| `aiProvider` | Your AI provider name: `gemini`, `openai`, `deepseek`, `openrouter`, `ollama`, or `local` |
| `aiApiKey` | Default API key used across all AI features |
| `systemAiApiKey` | Override API key used specifically for system analysis requests |
| `systemAiModel` | Model to use for system spike analysis (e.g. a fast, low-latency model) |
### Alert toggles
| Key | Type | Description |
|---|---|---|
| `enableHighCpuAlert` | boolean | Enable or disable CPU high-load alerts |
| `enableHighRamAlert` | boolean | Enable or disable RAM high-load alerts |
### Thresholds
| Key | Type | Description |
|---|---|---|
| `cpuHighLoadThreshold` | number (0–100) | CPU percentage that triggers an alert |
| `ramHighLoadThreshold` | number (0–100) | RAM percentage that triggers an alert |
| `tempHighThreshold` | number (°C) | Temperature that triggers a thermal alert (default: 85 °C) |
| `resourceAlertCooldownMs` | number (ms) | Minimum time between repeated alerts for the same resource |
### Sound alerts
| Key | Type | Description |
|---|---|---|
| `playCpuAlarmSound` | boolean | Play an alarm sound when a CPU alert fires |
| `playRamAlarmSound` | boolean | Play an alarm sound when a RAM alert fires |
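Putting the keys above together, a `~/.nibrasshell.json` fragment might look like this (all values illustrative, including the model name):

```json
{
  "aiProvider": "gemini",
  "aiApiKey": "YOUR_DEFAULT_KEY",
  "systemAiApiKey": "YOUR_SYSTEM_KEY",
  "systemAiModel": "gemini-2.0-flash",
  "enableHighCpuAlert": true,
  "enableHighRamAlert": true,
  "cpuHighLoadThreshold": 90,
  "ramHighLoadThreshold": 85,
  "tempHighThreshold": 85,
  "resourceAlertCooldownMs": 60000,
  "playCpuAlarmSound": true,
  "playRamAlarmSound": false
}
```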
## Supported AI providers
NibrasShell's `ai/main.py` gateway supports the following providers out of the box:
- Gemini — Google's Gemini models (use `ai/list-gemini.py` to fetch available model names)
- OpenRouter — Access to a wide range of hosted models via a single API key
- OpenAI — GPT models via the OpenAI API
- DeepSeek — DeepSeek models via the DeepSeek API
- Ollama — Locally hosted models with no API key required
- Local — A generic local provider for custom endpoints