The Smart Intelligence Engine (core/kernel/smart_intelligence.py) is the pattern memory and predictive analytics layer of QUIMERIA-HYPERION. It augments the structural and sensor layers with learned experience: every candle sequence the system has ever seen is stored as a neural embedding, and on each new bar the engine retrieves the most similar historical contexts to estimate what happens next. It also maintains a consolidation index and adaptive confidence gate that prevent the system from over-trading in choppy markets or after a losing streak.
Architecture overview
The engine is composed of three cooperating components.

Candle encoder
Before any vector operation, every candle is reduced to a single symbol from a seven-character alphabet:

| Symbol | Candle type | Condition |
|---|---|---|
| B | Bullish strong | body > 2 × avg_body, close > open |
| X | Bearish strong | body > 2 × avg_body, close < open |
| I | Bullish weak | body ≤ avg_body, close > open |
| W | Bearish weak | body ≤ avg_body, close < open |
| D | Doji / neutral | body ≤ 0.1 × avg_body |
| S | Spinning top | body ≤ avg_body and range > 2 × body |
| U | Uncertain / gap | Fallback for ambiguous structures |
avg_body_history is a rolling deque of length 20, so the encoding adapts to recent volatility conditions. A candle that would be B in a calm period may be only I during high-volatility sessions.
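A minimal sketch of the encoding rules in the table above, assuming the average body is taken over avg_body_history (including the current bar) and that the doji and spinning-top checks take precedence; the function name and rule ordering here are illustrative, not the module's actual code:

```python
from collections import deque

avg_body_history = deque(maxlen=20)  # rolling window of recent body sizes

def encode_candle(open_, high, low, close):
    """Map one OHLC candle to the seven-symbol alphabet (illustrative)."""
    body = abs(close - open_)
    candle_range = high - low
    avg_body_history.append(body)
    avg_body = sum(avg_body_history) / len(avg_body_history)

    if body <= 0.1 * avg_body:
        return "D"                                # doji / neutral
    if body <= avg_body and candle_range > 2 * body:
        return "S"                                # spinning top
    if body > 2 * avg_body:
        return "B" if close > open_ else "X"      # strong bullish / bearish
    if body <= avg_body:
        return "I" if close > open_ else "W"      # weak bullish / bearish
    return "U"                                    # ambiguous structure, fallback
```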
FAISS pattern memory
FAISSMemory is the long-term memory of the engine. It stores every observed n-gram (default: last 5 symbols) as a 16-dimensional neural embedding and uses FAISS inner-product search to retrieve the k most similar historical contexts.
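A minimal sketch of how such a memory could be structured, assuming FAISS inner-product search with the NumPy fallback described under FAISS index below; the class and method names are illustrative, not the actual FAISSMemory API:

```python
import numpy as np

try:
    import faiss  # optional dependency; see the FAISS index note below
except ImportError:
    faiss = None

class PatternMemorySketch:
    """Illustrative long-term memory: 16-d embeddings retrieved by inner product."""

    def __init__(self, dim=16):
        self.index = faiss.IndexFlatIP(dim) if faiss is not None else None
        self.vectors = []   # kept for the NumPy fallback
        self.outcomes = []  # outcome recorded alongside each stored n-gram

    def add(self, embedding, outcome):
        vec = np.asarray(embedding, dtype="float32").reshape(1, -1)
        if self.index is not None:
            self.index.add(vec)
        self.vectors.append(vec[0])
        self.outcomes.append(outcome)

    def query(self, embedding, k=50):
        if not self.outcomes:
            return np.array([]), np.array([], dtype=int)
        vec = np.asarray(embedding, dtype="float32").reshape(1, -1)
        k = min(k, len(self.outcomes))
        if self.index is not None:
            scores, idx = self.index.search(vec, k)
            return scores[0], idx[0]
        # Fallback: brute-force inner-product KNN (slower for large stores)
        sims = np.stack(self.vectors) @ vec[0]
        idx = np.argsort(-sims)[:k]
        return sims[idx], idx
```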
GRU embedding network
The embedding network converts a symbol sequence into a vector using a three-layer architecture. Inference runs under torch.no_grad() for inference-only speed.
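A plausible PyTorch sketch of such an encoder, assuming the three layers are a symbol embedding, a GRU, and a linear projection to the 16-dimensional memory vector; the layer sizes and names are assumptions, not the actual code in smart_intelligence.py:

```python
import torch
import torch.nn as nn

class SymbolEncoderSketch(nn.Module):
    """Illustrative three-layer encoder: symbol embedding -> GRU -> linear projection."""

    def __init__(self, vocab_size=7, embed_dim=8, hidden_dim=32, out_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    @torch.no_grad()  # inference-only, as described above
    def forward(self, symbol_ids):
        # symbol_ids: LongTensor of shape (batch, seq_len), e.g. the last 5 symbols
        x = self.embed(symbol_ids)
        _, h = self.gru(x)               # h: (1, batch, hidden_dim)
        vec = self.proj(h.squeeze(0))    # (batch, out_dim)
        return torch.nn.functional.normalize(vec, dim=-1)  # unit norm suits inner-product search
```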
FAISS index
FAISS is an optional dependency. Install with pip install faiss-cpu (or faiss-gpu for CUDA). Without it, the engine falls back to a NumPy dot-product KNN that is functionally equivalent but slower for large memory stores.

Memory query and output metrics
On each bar, the engine queries the k=50 nearest historical n-grams and computes three statistics:
| Output | Key | Description |
|---|---|---|
| Expected epsilon | expected_epsilon | Weighted average outcome of similar historical sequences |
| Memory confidence | memory_confidence | How consistently similar sequences produced the same outcome |
| Predictability MI | predictability_mi | Mutual information between pattern and next symbol |
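One way these statistics could be derived from a k-nearest query, assuming each stored n-gram carries its realized outcome and the symbol that followed it. The weighting scheme, the confidence proxy, and the MI estimator below are illustrative stand-ins (MI is approximated as the entropy reduction of the next-symbol distribution relative to a uniform prior), not the engine's actual implementation:

```python
import numpy as np
from collections import Counter

def summarize_neighbors(similarities, outcomes, next_symbols):
    """Aggregate the k retrieved neighbors into the three documented statistics (illustrative)."""
    sims = np.clip(np.asarray(similarities, dtype=float), 0.0, None)
    weights = sims / sims.sum() if sims.sum() > 0 else np.full(len(sims), 1.0 / len(sims))
    outcomes = np.asarray(outcomes, dtype=float)

    # expected_epsilon: similarity-weighted average of historical outcomes
    expected_epsilon = float(np.dot(weights, outcomes))

    # memory_confidence: agreement of neighbors, here 1 / (1 + weighted outcome dispersion)
    dispersion = float(np.dot(weights, (outcomes - expected_epsilon) ** 2)) ** 0.5
    memory_confidence = 1.0 / (1.0 + dispersion)

    # predictability_mi: entropy reduction of the next-symbol distribution vs. a uniform prior
    counts = Counter(next_symbols)
    probs = np.array([c / len(next_symbols) for c in counts.values()])
    entropy = -np.sum(probs * np.log2(probs))
    predictability_mi = float(np.log2(7) - entropy)  # 7-symbol alphabet

    return {
        "expected_epsilon": expected_epsilon,
        "memory_confidence": memory_confidence,
        "predictability_mi": predictability_mi,
    }
```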
ρ_c Consolidation Index
The SmartMarketStateEngine tracks the density of neutral candles in the recent sequence. ρ_c (rho-c) is the fraction of doji (D) and spinning-top (S) candles in the last window (default 120) bars.
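In symbols (reconstructed from the definition above; the source's exact expression is not reproduced here):

$$
\rho_c = \frac{N_D + N_S}{W}
$$

where $N_D$ and $N_S$ are the counts of D and S symbols in the window and $W$ is the window length (default 120).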
A high ρ_c indicates a choppy, trendless market where IPDA is in the Accumulation phase and displacement is not yet imminent. The engine suppresses expected_epsilon when ρ_c is elevated, preventing entries into consolidation noise.
The full state vector also includes:
| Field | Description |
|---|---|
| rho_c | Consolidation density (fraction of D and S symbols) |
| p_stop_hunt | Probability of a stop hunt based on B/X → D/S transitions |
| eta_trend | Trend efficiency: mean run length normalized to a target of 5 bars |
| entropy | Shannon entropy of the symbol distribution (high = choppy) |
Adaptive Performance Gate
The Adaptive Performance Gate (AdaptivePerformanceGate in core/kernel/market_shark_forensics.py) raises the confidence threshold required for execution after consecutive losses. It is a self-correcting circuit: the system becomes more selective when its recent performance degrades.
The gate reads the recent P&L from logs/trades.log and computes a rolling win rate. If the win rate falls below the gate’s threshold, adaptive_threshold is elevated. The SMK outputs this in the forensics dict:
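An illustrative excerpt of that output; the adaptive_threshold key is documented on this page, while the numeric value and any neighboring keys are hypothetical:

```python
forensics = {
    "adaptive_threshold": 1.25,  # > 1.0: gate active, higher confidence required (hypothetical value)
    # ... other forensics fields produced by market_shark_forensics.py
}
```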
adaptive_threshold > 1.0 means the gate is active and execution requires higher-than-normal fusion confidence. The gate resets to 1.0 after a winning trade.
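A behavioral sketch of the gate under stated assumptions: the trade log is parsed as one signed P&L figure per line (the real format of logs/trades.log is not documented here), and the class name, step size, and cap below are illustrative rather than the AdaptivePerformanceGate API:

```python
from pathlib import Path

def read_recent_pnl(log_path="logs/trades.log", n=20):
    """Assumption: each log line ends with a signed P&L figure."""
    lines = Path(log_path).read_text().splitlines()[-n:]
    return [float(line.split()[-1]) for line in lines if line.strip()]

class PerformanceGateSketch:
    def __init__(self, min_win_rate=0.5, step=0.25, max_threshold=2.0):
        self.adaptive_threshold = 1.0
        self.min_win_rate = min_win_rate
        self.step = step
        self.max_threshold = max_threshold

    def update(self, pnl_history):
        if not pnl_history:
            return self.adaptive_threshold
        win_rate = sum(1 for p in pnl_history if p > 0) / len(pnl_history)
        if pnl_history[-1] > 0:
            # A winning trade resets the gate to its neutral level
            self.adaptive_threshold = 1.0
        elif win_rate < self.min_win_rate:
            # Degrading performance: demand higher fusion confidence before executing
            self.adaptive_threshold = min(self.adaptive_threshold + self.step, self.max_threshold)
        return self.adaptive_threshold
```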
HSKM sparse kernel attention (v1.1)
Version 1.1 adds the Hierarchical Sparse Kernel Memory module (core/kernel/sparse_kernel_attention.py). HSKM replaces the dense FAISS flat index with a sparse attention mechanism that selectively weights historical memories by structural similarity rather than raw embedding distance. This improves retrieval quality in regimes where the symbol sequence is ambiguous.
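A minimal sketch of the sparse-attention idea: keep only the top-k most similar memories, softmax-weight them, and zero out the rest. The scoring function, cut-off, and hierarchy used by HSKM itself are not shown on this page; everything below is illustrative:

```python
import numpy as np

def sparse_attention_weights(similarities, top_k=8, temperature=0.5):
    """Weight only the top_k most similar memories; all others receive zero weight."""
    sims = np.asarray(similarities, dtype=float)
    weights = np.zeros_like(sims)
    keep = np.argsort(-sims)[:top_k]     # indices of the most similar memories
    scaled = sims[keep] / temperature
    scaled -= scaled.max()               # numerical stability
    exp = np.exp(scaled)
    weights[keep] = exp / exp.sum()
    return weights
```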
HSKM is a v1.1 addition from the 16-repo integration sprint. As of the current release, it lives alongside FAISSMemory and is activated by setting use_hskm=True in the SmartIntelligenceEngine constructor. GPU target (CUDA vs. MPS vs. CPU fallback) must be confirmed before enabling in production.

Integration via the AEGIS bridge
The SmartIntelligenceEngine.step() output appears in the step() result dict under smart_intelligence:
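An illustrative shape of that entry, using the field names documented on this page; the values are hypothetical and additional keys may be present:

```python
result["smart_intelligence"] = {
    "expected_epsilon": 0.12,    # weighted outcome of similar historical sequences
    "memory_confidence": 0.67,   # agreement among retrieved neighbors
    "predictability_mi": 0.31,   # pattern-to-next-symbol mutual information
    # state fields such as rho_c may also be surfaced here
}
```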
The AEGIS bridge (backend/aegis_bridge.py) reads memory_confidence as an additional weighting factor for the StopLossManager. When memory_confidence is low (< 0.5), the stop-loss is tightened relative to the ATR-based default. When confidence is high (> 0.8), the SchurRouter may widen the target to reach the DOL level identified by the expansion predictor.
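A sketch of that weighting logic using the thresholds quoted above; the multiplier and function names are illustrative, not the bridge's actual API:

```python
def adjust_risk_levels(atr_stop, base_target, dol_target, memory_confidence):
    """Illustrative use of memory_confidence by the bridge (thresholds from the text above)."""
    if memory_confidence < 0.5:
        # Low confidence: tighten the stop relative to the ATR-based default
        stop = atr_stop * 0.75          # multiplier is an assumption
        target = base_target
    elif memory_confidence > 0.8:
        # High confidence: allow the router to widen the target toward the DOL level
        stop = atr_stop
        target = dol_target
    else:
        stop, target = atr_stop, base_target
    return stop, target
```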
Related pages:
- Ring 0 veto: Hard-stop guards upstream of the AEGIS bridge
- IPDA Structural Compiler: Layer 1 context that feeds the AMD state machine