
Projects in this repository are deployed using one of two patterns depending on the intended audience. Backend integrations and web platform API calls use Flask, which exposes the model as a stateless REST endpoint. Interactive demonstrations intended for direct human use are built with Streamlit, which renders a browser-based UI without requiring any frontend code. Both patterns are self-contained: each project includes everything needed to run either deployment locally.

Deployment patterns

Flask is used when the model needs to be called programmatically — by a frontend application, another service, or an automated pipeline. The API accepts JSON, runs inference, and returns a JSON response. CORS is enabled so that web applications hosted on a different origin can call the endpoint directly.
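The same contract can be exercised from Python. A minimal client sketch using only the standard library — the endpoint URL and payload shape match the curl examples below, and a locally running server is assumed:

```python
import json
import urllib.request

API_URL = "http://localhost:5000/predict"

def build_prediction_request(prompt: str) -> urllib.request.Request:
    """Build a POST request carrying the prompt as a JSON body."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending the request requires the Flask server to be running:
# with urllib.request.urlopen(build_prediction_request("house built in 1995")) as resp:
#     print(json.load(resp))
```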

Application structure

# src/app.py
import os
from flask import Flask, request, jsonify
from flask_cors import CORS

from processing.preprocessing import preprocess_prompt
from output.predictor import predict_price

app = Flask(__name__)
CORS(app)  # allow cross-origin calls from web frontends

# Read once at startup; restart the server after changing the key.
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")

@app.route("/", methods=["GET"])
def home():
    """Health check."""
    return jsonify({"message": "House Price Prediction API is running"})

@app.route("/predict", methods=["POST"])
def predict():
    try:
        body = request.get_json()
        if not body or "prompt" not in body:
            return jsonify({"error": "Missing 'prompt' in request body"}), 400
        prompt = body["prompt"]
        if not GEMINI_API_KEY:
            return jsonify({"error": "GEMINI_API_KEY not set in environment"}), 500
        # Extract structured features from the prompt, then run inference.
        features_np = preprocess_prompt(prompt, GEMINI_API_KEY)
        predicted_price = int(predict_price(features_np))
        return jsonify({"predicted_sale_price": predicted_price})
    except Exception as e:
        # Surface unexpected failures as JSON rather than an HTML traceback.
        return jsonify({"error": str(e)}), 500

if __name__ == "__main__":
    app.run(debug=True)

Running the Flask server

# From the project root
python src/app.py
Flask listens on http://localhost:5000 by default. The development server reloads automatically when source files change (debug=True).
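If port 5000 is already taken, app.run accepts host and port arguments. A hedged sketch of one common pattern, reading the port from a PORT environment variable (an assumption — the project itself hardcodes the default):

```python
import os

def resolve_port(default: int = 5000) -> int:
    """Return the port from the PORT env var, falling back to Flask's default."""
    return int(os.getenv("PORT", default))

# app.run(debug=True, port=resolve_port())
```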

Testing the API

# Health check
curl http://localhost:5000/

# Prediction request
curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"prompt": "house built in 1995, lot area 9000 sq ft, RL zoning"}'
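Beyond curl, endpoints can be exercised in-process with Flask's built-in test client, which needs no running server. A minimal sketch using a stand-in app — the real routes live in src/app.py and could be imported instead if the models and API key are available:

```python
from flask import Flask, jsonify

# Stand-in app mirroring the health route from src/app.py.
app = Flask(__name__)

@app.route("/", methods=["GET"])
def home():
    return jsonify({"message": "House Price Prediction API is running"})

with app.test_client() as client:
    resp = client.get("/")
    assert resp.status_code == 200
    assert resp.get_json()["message"].endswith("running")
```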

Deploying a Flask ML API

1. Install dependencies

Install all Python dependencies listed in requirements.txt. It is recommended to use a virtual environment to keep project packages isolated.
python -m venv .venv
source .venv/bin/activate      # Windows: .venv\Scripts\activate
pip install -r requirements.txt
pip install flask flask-cors google-genai
2. Set environment variables

Export the Gemini API key before starting the server. The application reads this value at startup; it will not be refreshed if you set it after the process is running.
export GEMINI_API_KEY="your-api-key-here"
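Because the key is read once at startup, a quick fail-fast check saves a confusing 500 later. A small sketch — the variable name matches the application code, the helper itself is illustrative:

```python
import os

def require_gemini_key() -> str:
    """Return the Gemini API key, raising early if it is missing."""
    key = os.getenv("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY not set; export it before starting the server")
    return key
```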
3. Verify the model files are present

The predictor loads serialized models from the models/ directory at import time. Confirm the .pkl files exist before starting the server.
ls models/
# lasso_model.pkl  lr_model.pkl  ridge_model.pkl
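The same check can be scripted. A sketch using pathlib — the three .pkl filenames match the listing above, and models/ is assumed relative to the project root:

```python
from pathlib import Path

EXPECTED = {"lasso_model.pkl", "lr_model.pkl", "ridge_model.pkl"}

def missing_models(models_dir: str = "models") -> set:
    """Return the expected model files that are absent from models_dir."""
    present = {p.name for p in Path(models_dir).glob("*.pkl")}
    return EXPECTED - present
```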
4. Start the server

Run the Flask entry point. The development server starts on port 5000.
python src/app.py
You should see output similar to:
 * Running on http://127.0.0.1:5000
 * Debug mode: on
5. Confirm the API is healthy

curl http://localhost:5000/
# {"message": "House Price Prediction API is running"}
6. Send a prediction request

curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Single family home, built 1988, 8200 sq ft lot, RL zoning, inside lot"}'
# {"predicted_sale_price": 178400}

Environment variables

Variable         Required   Description
GEMINI_API_KEY   Yes        Google Gemini API key used to extract structured features from natural-language prompts. Obtain from Google AI Studio.
The Flask development server (debug=True) is not suitable for production traffic. For production deployments, run the application behind a production WSGI server and consider containerizing with Docker.

Gunicorn (WSGI server):
pip install gunicorn
gunicorn --workers 2 --bind 0.0.0.0:5000 "src.app:app"
Docker:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt \
    && pip install flask flask-cors google-genai gunicorn
COPY . .
ENV GEMINI_API_KEY=""
EXPOSE 5000
CMD ["gunicorn", "--workers", "2", "--bind", "0.0.0.0:5000", "src.app:app"]
docker build -t ml-api .
docker run -p 5000:5000 -e GEMINI_API_KEY="your-key" ml-api
