
Every project in this repository is self-contained: it ships with its own dataset, trained model artifacts, preprocessing code, Jupyter notebook, and a Flask API you can run locally. This guide walks you through the full workflow using Project 01 — House Price Prediction — as a concrete example. The same steps apply to every other project in the repo.
Python 3.10 or later is required. The ML_To_Train/Readme.md badge specifies Python 3.10 as the baseline. Earlier versions may work but are not tested.
1. Clone the repository

Clone the repository to your local machine:
git clone https://github.com/dronabopche/100-ML-AI-Project.git
cd 100-ML-AI-Project
2. Navigate to a project

Each project lives inside ML_To_Train/. Navigate to the House Price Prediction project:
cd ML_To_Train/01_House_Price_Predict
The directory contains everything you need: the dataset, trained model files, preprocessing pipeline, notebook, and the Flask app.
3. Install dependencies

Install the project’s Python dependencies using the bundled requirements.txt:
pip install -r requirements.txt
For Project 01, this installs:
numpy==2.3.5
pandas==2.3.3
scikit-learn==1.8.0
matplotlib==3.10.8
seaborn==0.13.2
ipykernel==6.31.0
Other projects may have different dependencies — always install from the project’s own requirements.txt.
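To confirm your environment matches a project's pins, you can compare installed versions against requirements.txt with the standard library. The pins below are Project 01's; adjust them per project:

```python
from importlib import metadata

# Pins copied from Project 01's requirements.txt; other projects differ.
PINS = {"numpy": "2.3.5", "pandas": "2.3.3", "scikit-learn": "1.8.0"}

def check_pins(pins):
    """Return {package: installed_version_or_None} for every mismatched pin."""
    mismatches = {}
    for pkg, wanted in pins.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches[pkg] = installed
    return mismatches
```

An empty result means every pinned package is installed at the expected version.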
4. Run the Jupyter notebook

Open the project notebook to explore the full ML workflow — EDA, preprocessing, model training, and evaluation:
jupyter notebook House_Price_Prediction.ipynb
The notebook walks through each stage of the pipeline and is the primary learning artifact for the project.
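The training stage of that pipeline can be sketched in a few lines. The data below is synthetic and the feature names are illustrative, not the project's actual schema:

```python
# Minimal sketch of the notebook's train/evaluate loop on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # e.g. lot area, bedrooms, year built (illustrative)
y = X @ np.array([50.0, 20.0, 5.0]) + rng.normal(scale=2.0, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))
```

The real notebook adds EDA and preprocessing before this step and evaluates more than one model.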
5. Start the Flask API

Run the Flask app (it lives in src/) from the project directory:
python src/app.py
The API starts on http://localhost:5000 by default. You should see:
* Running on http://127.0.0.1:5000
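For orientation, the app's shape is roughly the following. This is a hypothetical sketch, not the repo's actual src/app.py; the real handler loads the trained models and its response fields may differ:

```python
# Hypothetical sketch of a /predict Flask app; not the project's actual code.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    prompt = payload.get("prompt", "")
    # Real app: extract features from the prompt, run the trained models.
    return jsonify({"predicted_sale_price": 0, "echo": prompt})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```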

Call the prediction API

The /predict endpoint accepts a natural-language description of a house and returns a predicted sale price. Send a POST request with a prompt field in the JSON body:
curl -X POST http://localhost:5000/predict \
  -H "Content-Type: application/json" \
  -d '{"prompt": "3 bedroom house with 1500 sq ft lot area, 2 bathrooms, built in 1990, located in a residential zone"}'
Sample response:
{"predicted_sale_price": 185000}
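The same request can be made from Python with the standard library, if you prefer that over curl. Only the prompt field is assumed, as shown in the curl example above:

```python
import json
from urllib import request as urlrequest

API_URL = "http://localhost:5000/predict"

def build_request(prompt, url=API_URL):
    """Construct the POST request /predict expects: JSON body with a 'prompt' field."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urlrequest.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def predict_price(prompt):
    """Send the request and decode the JSON response (requires a running server)."""
    with urlrequest.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())
```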
The preprocessing pipeline sends your prompt to the Gemini API, which extracts structured feature values. Those values are passed to all three trained models (Linear Regression, Ridge, Lasso), and the final price is the average of their predictions.
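The averaging step can be sketched as follows. The data is synthetic and the model hyperparameters are placeholders; only the three model families (Linear Regression, Ridge, Lasso) and the mean-of-predictions rule come from the description above:

```python
# Sketch of averaging predictions from the three model families on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = X @ np.array([3.0, -1.0, 2.0, 0.5]) + rng.normal(scale=0.1, size=100)

models = [
    LinearRegression().fit(X, y),
    Ridge().fit(X, y),
    Lasso(alpha=0.01).fit(X, y),
]

def ensemble_predict(features):
    """Average the three models' predictions, as the pipeline description says."""
    return np.mean([m.predict(features) for m in models], axis=0)
```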
Projects that use natural-language input parsing (such as Project 01) require a Gemini API key. Set the GEMINI_API_KEY environment variable before starting the Flask server:
export GEMINI_API_KEY="your-api-key-here"
python src/app.py
Projects that take structured input directly do not require a Gemini API key.
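A startup guard makes the missing-key failure explicit instead of surfacing as an error mid-request. This helper is an optional sketch, not part of the repo:

```python
import os

def require_gemini_key(env=os.environ):
    """Fail fast if GEMINI_API_KEY is unset (needed by prompt-parsing projects)."""
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("Set GEMINI_API_KEY before starting the Flask server.")
    return key
```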
