

This page covers everything you need to install Lumina AI and its dependencies from scratch. It explains each required package, documents the model and asset files the app expects to find at runtime, and provides platform-specific instructions for optional features like voice input and audio playback.

Python version requirement

Lumina AI requires Python 3.8 or higher. You can check your current version with:
python --version
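If you prefer to check the requirement programmatically, a short sketch (assuming a standard CPython interpreter) fails fast with a clear message when the version is too old:

```python
import sys

# Lumina AI requires Python 3.8+; exit early with a clear message otherwise.
if sys.version_info < (3, 8):
    raise SystemExit(f"Python 3.8+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```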

Core installation

Install all Python dependencies declared in requirements.txt with a single command:
pip install -r requirements.txt
The file specifies the following packages:
| Package | Version | Purpose |
| --- | --- | --- |
| streamlit | >=1.28.0 | Web UI framework that serves the chat interface |
| PyPDF2 | >=3.0.0 | Extracts text from uploaded PDFs for context-aware answers |
| gTTS | >=2.3.0 | Converts Lumina AI's responses to spoken audio (Google Text-to-Speech) |
| SpeechRecognition | >=3.10.0 | Transcribes microphone input to text |
| pydub | >=0.25.0 | Converts audio formats (e.g., WebM to WAV) before transcription |
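To confirm the install succeeded, a quick sketch using the standard-library `importlib` can check that each package is importable. Note that some import names differ from the PyPI names (for example, gTTS imports as `gtts` and SpeechRecognition as `speech_recognition`):

```python
import importlib.util

# Map each PyPI package name to the module name it is imported as.
packages = {
    "streamlit": "streamlit",
    "PyPDF2": "PyPDF2",
    "gTTS": "gtts",
    "SpeechRecognition": "speech_recognition",
    "pydub": "pydub",
}

for pypi_name, module in packages.items():
    found = importlib.util.find_spec(module) is not None
    print(f"{pypi_name}: {'ok' if found else 'MISSING'}")
```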

ML engine dependencies

chatbot.py — the neural network inference engine — requires three additional packages that are not listed in requirements.txt. Install them separately if you need to run chatbot.py directly or retrain the model:
pip install tensorflow numpy nltk
| Package | Purpose |
| --- | --- |
| tensorflow / keras | Loads and runs the trained chatbot_model.h5 neural network |
| numpy | Builds the bag-of-words input vectors for the model |
| nltk | Tokenizes and lemmatizes user input before inference |
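As a rough illustration of the numpy role described above, here is a minimal bag-of-words sketch. It uses a simple whitespace tokenizer and a hypothetical vocabulary in place of nltk's lemmatizer and the real words.pkl:

```python
import numpy as np

def bag_of_words(sentence, vocabulary):
    """Return a binary vector: 1 where a vocabulary word occurs in the
    (lowercased, whitespace-tokenized) sentence, 0 otherwise."""
    tokens = sentence.lower().split()
    vector = np.zeros(len(vocabulary), dtype=np.float32)
    for i, word in enumerate(vocabulary):
        if word in tokens:
            vector[i] = 1.0
    return vector

vocab = ["are", "bye", "hello", "how", "you"]  # stands in for words.pkl
vec = bag_of_words("Hello how are you", vocab)
print(vec.tolist())  # [1.0, 0.0, 1.0, 1.0, 1.0]
```

The real engine feeds a vector like this into the loaded Keras model, which outputs a probability per intent class.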

Platform-specific notes

Voice input (SpeechRecognition + pydub) requires ffmpeg to convert audio formats. The core chat and PDF features work without it.
On Windows, install ffmpeg using Chocolatey:
choco install ffmpeg
Alternatively, download the ffmpeg binary from ffmpeg.org and add it to your PATH manually.
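Because pydub shells out to the ffmpeg binary, you can confirm it is discoverable from Python with a quick standard-library check:

```python
import shutil

# pydub invokes the ffmpeg executable, so it must be on PATH.
ffmpeg_path = shutil.which("ffmpeg")
if ffmpeg_path:
    print("ffmpeg found at:", ffmpeg_path)
else:
    print("ffmpeg not found; voice input will be unavailable")
```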

Required files

Lumina AI expects the following files to be present in the project root at runtime.

Model files

If chatbot_model.h5, words.pkl, or classes.pkl are missing, the ML engine will fail to load and the chatbot will not function. Run the training script to regenerate them:
python training_chatbot.py
This script reads respuestas.json and produces all three model files. Training runs for 300 epochs and typically completes in one to three minutes.
| File | Description |
| --- | --- |
| chatbot_model.h5 | The trained Keras Sequential model (three Dense layers with Dropout, softmax output) |
| words.pkl | Pickle of the sorted vocabulary list built from intent patterns during training |
| classes.pkl | Pickle of the sorted intent class labels built from intent tags during training |
| respuestas.json | JSON file defining all intents: each intent has a tag, a list of patterns, and a list of responses |
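For reference, an intent entry in respuestas.json follows this shape. The tag, patterns, and responses below are illustrative, not taken from the real file:

```json
{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hello", "Hi there", "Good morning"],
      "responses": ["Hello! How can I help you today?"]
    }
  ]
}
```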

UI assets

The following image files are loaded by Main.py at startup. The app encodes them as Base64 and embeds them directly in the HTML so they render without a web server.

assets/robot_girl.png

Avatar image displayed in the sidebar and alongside assistant messages in the chat window.

assets/galaxy.png

Background image for the galactic UI theme. If missing, the app falls back to a CSS radial-gradient background automatically.
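The Base64 embedding described above can be sketched as follows. The `embed_image` helper and the demo file name are hypothetical, not Main.py's actual code:

```python
import base64

def embed_image(path):
    """Read an image file and return an <img> tag carrying the bytes
    as a Base64 data URI, so no web server is needed to display it."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f'<img src="data:image/png;base64,{encoded}">'

# Demo with a throwaway 4-byte file standing in for assets/robot_girl.png
with open("demo.png", "wb") as f:
    f.write(b"\x89PNG")
tag = embed_image("demo.png")
print(tag)  # <img src="data:image/png;base64,iVBORw==">
```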
