Lumina AI classifies student questions using a feedforward neural network built with Keras. The model takes a bag-of-words vector as input and outputs a probability distribution over all known intent tags.
Model architecture
The model is defined in training_chatbot.py as a Sequential network with four layers:
| Layer | Units | Activation | Dropout |
|---|---|---|---|
| Input | 256 | ReLU | 0.5 |
| Hidden 1 | 128 | ReLU | 0.4 |
| Hidden 2 | 64 | ReLU | 0.3 |
| Output | len(classes) | Softmax | — |
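The table above maps directly onto a Keras Sequential definition. A minimal sketch, assuming a placeholder vocabulary size and intent list (the real values come from the training data prepared in training_chatbot.py):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

vocab_size = 100  # assumed bag-of-words vocabulary size
classes = ['greeting', 'schedule', 'goodbye']  # assumed intent tags

# Four Dense layers as in the table; each Dropout applies to the
# output of the layer above it.
model = Sequential([
    Dense(256, input_shape=(vocab_size,), activation='relu'),
    Dropout(0.5),
    Dense(128, activation='relu'),
    Dropout(0.4),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(len(classes), activation='softmax'),
])
```

The softmax output layer is what gives the probability distribution over intent tags described above.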
Training configuration
- Optimizer: SGD with Nesterov momentum
- Loss function: Categorical cross-entropy
- Epochs: 300
- Batch size: 8
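These settings correspond to a compile-and-fit call along these lines. The learning-rate and momentum values are assumptions; the source only specifies SGD with Nesterov momentum, categorical cross-entropy, 300 epochs, and batch size 8:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Stand-in model; the real one is the four-layer network above.
model = Sequential([Dense(3, input_shape=(4,), activation='softmax')])

# Assumed hyperparameters -- only nesterov=True comes from the doc.
sgd = SGD(learning_rate=0.01, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])

# Training call shape (train_x / train_y are the bag-of-words matrix
# and one-hot intent labels built earlier in the script):
# model.fit(train_x, train_y, epochs=300, batch_size=8, verbose=1)
```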
Text preprocessing
Before the input reaches the model, chatbot.py runs a normalization pipeline.
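The pipeline itself is not reproduced here; a minimal stand-in, assuming lowercasing, punctuation stripping, and a binary bag-of-words encoding (the real script likely also lemmatizes, e.g. with NLTK):

```python
import string

def normalize(sentence):
    """Lowercase, strip punctuation, split on whitespace (simplified)."""
    table = str.maketrans('', '', string.punctuation)
    return sentence.lower().translate(table).split()

def bag_of_words(sentence, vocabulary):
    """Encode a sentence as a binary vector over the known vocabulary."""
    tokens = set(normalize(sentence))
    return [1 if word in tokens else 0 for word in vocabulary]

vocabulary = ['hello', 'class', 'schedule', 'bye']  # assumed training vocabulary
print(bag_of_words('Hello! What is the class schedule?', vocabulary))
# [1, 1, 1, 0]
```

The resulting vector has one position per vocabulary word, matching the input shape of the network's first Dense layer.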
Inference thresholds
chatbot.py applies three confidence bands to the model's top prediction:
- Above 0.60: the prediction is used directly
- Between 0.35 and 0.60: used as a last resort after the keyword fallback fails
- Below 0.35: treated as no match
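The three bands can be expressed as a small routing function. Only the 0.60 and 0.35 cutoffs come from the source; the function name, return values, and the keyword_match parameter are illustrative:

```python
def route_prediction(probs, classes, keyword_match=None):
    """Apply the 0.60 / 0.35 confidence bands to the model's output.

    probs is the softmax distribution, classes the intent tags in the
    same order; keyword_match is the (hypothetical) result of the
    keyword fallback, or None if it found nothing.
    """
    best = max(range(len(probs)), key=lambda i: probs[i])
    confidence = probs[best]
    if confidence > 0.60:
        return classes[best]      # prediction used directly
    if keyword_match is not None:
        return keyword_match      # keyword fallback tried first
    if confidence >= 0.35:
        return classes[best]      # last resort: mid-confidence guess
    return None                   # below 0.35: no match

classes = ['greeting', 'schedule', 'goodbye']
print(route_prediction([0.1, 0.8, 0.1], classes))     # schedule
print(route_prediction([0.2, 0.5, 0.3], classes))     # schedule (fallback failed)
print(route_prediction([0.34, 0.33, 0.33], classes))  # None
```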
The model file is saved in HDF5 format as chatbot_model.h5 and is loaded at startup with load_model('chatbot_model.h5', compile=False).