This guide walks you through everything you need to get Lumina AI running locally, from cloning the repository to opening your first conversation in the browser. Lumina AI is a Streamlit-based educational chatbot backed by a Keras neural network, and it ships with a pre-trained model so you can start chatting immediately after installation.
## Prerequisites
- Python 3.8 or higher
- pip (comes bundled with Python)
- A terminal or command prompt
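To confirm the Python prerequisite programmatically, a minimal sketch like the following works; the helper name is illustrative and not part of Lumina AI:

```python
import sys

def meets_python_prereq(min_version=(3, 8)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= min_version

if __name__ == "__main__":
    if meets_python_prereq():
        print("Python version OK")
    else:
        print("Lumina AI requires Python 3.8 or higher")
```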
## Install dependencies
Install all required Python packages listed in `requirements.txt`.
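A typical installation looks like this; the virtual-environment step is a general recommendation, not a requirement stated by the project:

```shell
# Optional: isolate dependencies in a virtual environment.
python -m venv .venv
source .venv/bin/activate   # On Windows: .venv\Scripts\activate

# Install everything listed in requirements.txt.
pip install -r requirements.txt
```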
## Verify model files
Lumina AI ships with a pre-trained model. Before launching, confirm the following files are present in the project root:
These files are included in the repository and do not need to be generated for a standard setup.
| File | Description |
|---|---|
| `chatbot_model.h5` | Trained Keras neural network |
| `words.pkl` | Vocabulary used during training |
| `classes.pkl` | Intent class labels |
| `respuestas.json` | Intent definitions and responses |
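To check for the files programmatically rather than by eye, a small sketch (the helper name is illustrative, not part of the project) could be:

```python
from pathlib import Path

# The four files Lumina AI expects in the project root (see the table above).
REQUIRED_FILES = ("chatbot_model.h5", "words.pkl", "classes.pkl", "respuestas.json")

def missing_model_files(root="."):
    """Return the required files that are not present under `root`."""
    root = Path(root)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_model_files()
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("All model files present.")
```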
If any of the model files (`chatbot_model.h5`, `words.pkl`, or `classes.pkl`) are missing, you must train the model before launching. Training reads `respuestas.json` and writes all three model files to the project root. It typically takes one to three minutes.
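The training command itself is not shown here. Assuming the training entry point is a script such as `train.py` at the project root (the filename is an assumption; check the repository for the actual name), training would be invoked as:

```shell
# Run the training script from the project root.
# NOTE: "train.py" is an assumed name; use the actual script in the repo.
python train.py
```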
## Launch the app

Start the Streamlit server from the project root. Streamlit prints a local URL (typically `http://localhost:8501`) in your terminal.
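Assuming the main Streamlit script is named `app.py` (the filename is an assumption; use the repository's actual entry point), launching looks like:

```shell
# Start the Streamlit server; it prints the local URL when ready.
streamlit run app.py
```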
## Open your browser and start chatting

Navigate to `http://localhost:8501` in your browser. Lumina AI greets you and is ready to help with subjects including mathematics, physics, chemistry, history, literature, and study techniques. Use the sidebar to explore quick actions, load a PDF for context-aware answers, or open the subject panel to dive into a specific topic.