Projects in this repository are deployed using one of two patterns depending on the intended audience. Backend integrations and web platform API calls use Flask, which exposes the model as a stateless REST endpoint. Interactive demonstrations intended for direct human use are built with Streamlit, which renders a browser-based UI without requiring any frontend code. Both patterns are self-contained: each project includes everything needed to run either deployment locally.
Deployment patterns
Flask REST API
Streamlit app
Flask is used when the model needs to be called programmatically — by a frontend application, another service, or an automated pipeline. The API accepts JSON, runs inference, and returns a JSON response. CORS is enabled so that web applications hosted on a different origin can call the endpoint directly.

Application structure
# src/app.py
import os
from flask import Flask, request, jsonify
from flask_cors import CORS
from processing.preprocessing import preprocess_prompt
from output.predictor import predict_price

app = Flask(__name__)
CORS(app)

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")

@app.route("/", methods=["GET"])
def home():
    return jsonify({"message": "House Price Prediction API is running"})

@app.route("/predict", methods=["POST"])
def predict():
    try:
        body = request.get_json()
        if not body or "prompt" not in body:
            return jsonify({"error": "Missing 'prompt' in request body"}), 400

        prompt = body["prompt"]
        if not GEMINI_API_KEY:
            return jsonify({"error": "GEMINI_API_KEY not set in environment"}), 500

        features_np = preprocess_prompt(prompt, GEMINI_API_KEY)
        predicted_price = int(predict_price(features_np))
        return jsonify({"predicted_sale_price": predicted_price})
    except Exception as e:
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    app.run(debug=True)
Running the Flask server
# From the project root
python src/app.py
Flask listens on http://localhost:5000 by default. The development server reloads automatically when source files change (debug=True).

Testing the API
# Health check
curl http://localhost:5000/
# Prediction request
curl -X POST http://localhost:5000/predict \
-H "Content-Type: application/json" \
-d '{"prompt": "house built in 1995, lot area 9000 sq ft, RL zoning"}'
Streamlit is used for interactive demonstrations where end users explore model behaviour through a browser UI. No HTML, CSS, or JavaScript is required — the entire interface is defined in Python. Streamlit is particularly well-suited to data science demos because it renders DataFrames, charts, and form inputs natively.

Typical Streamlit app structure
# app.py (Streamlit variant)
import streamlit as st
import pickle
import numpy as np

# Load model
with open("models/lr_model.pkl", "rb") as f:
    model = pickle.load(f)

st.title("House Price Predictor")
st.write("Enter property details to get a price estimate.")

lot_area = st.number_input("Lot Area (sq ft)", value=9500)
year_built = st.slider("Year Built", min_value=1900, max_value=2024, value=1975)
overall_cond = st.selectbox("Overall Condition (1–9)", list(range(1, 10)), index=4)

if st.button("Predict"):
    features = np.array([[lot_area, overall_cond, year_built]])
    prediction = model.predict(features)[0]
    st.success(f"Estimated Sale Price: ${prediction:,.0f}")
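Streamlit reruns the entire script on every widget interaction, so the pickle above is reloaded on each rerun. A small refinement sketch using st.cache_resource (available in Streamlit 1.18 and later) keeps the loaded model in memory across reruns:

import pickle
import streamlit as st

@st.cache_resource  # cache the loaded model across script reruns
def load_model(path="models/lr_model.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_model()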
Running the Streamlit app
# Install Streamlit if not already installed
pip install streamlit
# Launch the app
streamlit run app.py
Streamlit opens a browser tab automatically at http://localhost:8501. The app hot-reloads on file save during development.

Installing dependencies
pip install -r requirements.txt
Deploying a Flask ML API
Install dependencies
Install all Python dependencies listed in requirements.txt. It is recommended to use a virtual environment to keep project packages isolated.

python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -r requirements.txt
pip install flask flask-cors google-genai
Set environment variables
Export the Gemini API key before starting the server. The application reads this value at startup; it will not be refreshed if you set it after the process is running.

export GEMINI_API_KEY="your-api-key-here"
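To confirm the variable is visible to a newly spawned Python process before launching the server, a quick hypothetical check:

python -c "import os; assert os.getenv('GEMINI_API_KEY'), 'GEMINI_API_KEY not set'"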
Verify the model files are present
The predictor loads serialized models from the models/ directory at import time. Confirm the .pkl files exist before starting the server.

ls models/
# lasso_model.pkl lr_model.pkl ridge_model.pkl
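Beyond checking that the files exist, a quick sanity-check sketch (hypothetical script, not part of the repository) confirms that each pickle actually deserializes:

# check_models.py (hypothetical)
import pickle
from pathlib import Path

for path in sorted(Path("models").glob("*.pkl")):
    with open(path, "rb") as f:
        model = pickle.load(f)
    print(f"{path.name}: {type(model).__name__}")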
Start the server
Run the Flask entry point. The development server starts on port 5000.

python src/app.py

You should see output similar to:

 * Running on http://127.0.0.1:5000
 * Debug mode: on
Confirm the API is healthy
curl http://localhost:5000/
# {"message": "House Price Prediction API is running"}
Send a prediction request
curl -X POST http://localhost:5000/predict \
-H "Content-Type: application/json" \
-d '{"prompt": "Single family home, built 1988, 8200 sq ft lot, RL zoning, inside lot"}'
# {"predicted_sale_price": 178400}
Environment variables
| Variable | Required | Description |
|---|---|---|
| GEMINI_API_KEY | Yes | Google Gemini API key used to extract structured features from natural-language prompts. Obtain from Google AI Studio. |
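If you prefer keeping the key in a file over exporting it in each shell session, python-dotenv (an extra dependency, not shown in this project's requirements) can populate the environment before os.getenv runs. A sketch:

# At the top of src/app.py, before GEMINI_API_KEY is read (optional)
from dotenv import load_dotenv

load_dotenv()  # loads GEMINI_API_KEY from a .env file in the working directory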
The Flask development server (debug=True) is not suitable for production traffic. For production deployments, run the application behind a production WSGI server and consider containerizing with Docker.

Gunicorn (WSGI server):

pip install gunicorn
gunicorn --workers 2 --bind 0.0.0.0:5000 "src.app:app"
Docker:

FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt \
&& pip install flask flask-cors google-genai gunicorn
COPY . .
ENV GEMINI_API_KEY=""
EXPOSE 5000
CMD ["gunicorn", "--workers", "2", "--bind", "0.0.0.0:5000", "src.app:app"]
docker build -t ml-api .
docker run -p 5000:5000 -e GEMINI_API_KEY="your-key" ml-api