

This guide walks you through everything you need to run the Agentic Sales & Marketing pipeline for the first time. By the end, the system will discover leads in your target industry, research and qualify them, generate outreach and proposals, and write all results to a Google Sheet — all from a single python main.py command.
Step 1: Clone the repository

Clone the project to your local machine and enter the directory.
git clone https://github.com/vrashmanyu605-eng/Agentic_Sales-Markerting.git
cd Agentic_Sales-Markerting
Step 2: Install dependencies

Install all required Python packages using pip.
pip install -r requirements.txt
Key packages the workflow depends on:
| Package | Purpose |
| --- | --- |
| langgraph | Multi-agent graph orchestration |
| langchain-openai | LLM client (connects to LM Studio) |
| ddgs | DuckDuckGo web search for lead discovery |
| requests | HTTP requests for website scraping |
| beautifulsoup4 | HTML parsing for company research |
| lxml | Fast XML/HTML parser used with BeautifulSoup |
| google-auth | Google API authentication |
| google-api-python-client | Google Sheets API client |
| pymupdf | PDF text extraction from your sales deck (provides the fitz module) |
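After installing, you can confirm the dependencies resolve before running anything else. A minimal sketch; note that several import names differ from the pip package names (the mapping below is the usual one, noted in comments):

```python
import importlib

# Import names for the packages in requirements.txt. Some differ from
# the pip package name (e.g. pymupdf is imported as fitz).
MODULES = [
    "langgraph",
    "langchain_openai",
    "ddgs",
    "requests",
    "bs4",              # beautifulsoup4
    "lxml",
    "google.auth",      # google-auth
    "googleapiclient",  # google-api-python-client
    "fitz",             # pymupdf
]

missing = []
for name in MODULES:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All workflow dependencies import cleanly.")
```

If anything is listed as missing, re-run pip install -r requirements.txt before continuing.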
Step 3: Configure LM Studio

The LLM client in llm.py connects to a locally running LM Studio instance. You must have LM Studio installed and serving the model before running the workflow.
  1. Download and install LM Studio.
  2. Download the gemma-3-4b-it model from within LM Studio.
  3. Start the local server. By default LM Studio listens on http://localhost:1234/v1.
  4. Confirm the server is running — the base URL in llm.py must match exactly:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gemma-3-4b-it",
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    temperature=0.3,
    streaming=True
)
If you use a different port or model, update llm.py accordingly before proceeding.
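Before moving on, you can verify the server is actually reachable. A minimal sketch using requests; it probes the /models endpoint of the OpenAI-compatible API that LM Studio serves:

```python
import requests

def lm_studio_is_up(base_url: str = "http://localhost:1234/v1") -> bool:
    """Return True if an OpenAI-compatible server answers at base_url."""
    try:
        # /models is a standard OpenAI-compatible endpoint; any 2xx means
        # the server is up and serving.
        resp = requests.get(f"{base_url}/models", timeout=5)
        return resp.ok
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if lm_studio_is_up():
        print("LM Studio server is reachable.")
    else:
        print("No server at http://localhost:1234/v1 -- start LM Studio first.")
```

Adjust base_url here the same way you would in llm.py if you run LM Studio on a different port.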
Step 4: Set up Google Sheets credentials

The CRM update agent writes lead data to a Google Sheet using a service account. You need a credentials.json file in the project root directory.
  1. Create a Google Cloud project and enable the Google Sheets API.
  2. Create a service account and download the JSON key file.
  3. Rename the file to credentials.json and place it in the project root (next to main.py).
  4. Share your target Google Sheet with the service account email address, granting Editor access.
See the Google Sheets setup guide for detailed instructions.
The workflow will fail at the CRM update step if credentials.json is missing from the project root. The tool in google_sheets_tool.py checks for this file explicitly and returns an error if it is not found.
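That guard is easy to replicate as your own pre-flight check. A minimal sketch; the exact wording of the error in google_sheets_tool.py may differ:

```python
import os

def check_credentials(path: str = "credentials.json") -> str:
    """Mimic the guard in google_sheets_tool.py: fail fast if the
    service-account key file is missing from the project root."""
    if not os.path.exists(path):
        return f"Error: {path} not found. Place your service account key in the project root."
    return f"Found {path}."

print(check_credentials())
```

Run it from the project root so the relative path resolves the same way it does for the workflow.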
Step 5: Prepare your input files

The workflow reads two files at startup:

jd.txt — Client requirements document. Write a plain-text description of what the target client is looking for. The discovery and research agents use this to find and evaluate matching leads.

Looking for an IT services partner with experience in AI automation,
cloud migration, and enterprise software development. Must have
demonstrated case studies in the financial services sector.

Sales deck PDF — Your company’s credentials deck. Place it in the project root and update the sales_deck_path variable in main.py to match the file name. The proposal generation agent extracts text from this PDF to build proposals.

sales_deck_path = "Creds_Deck_Webanix_Development_Services_2026 1.pdf"
Step 6: Configure main.py

Open main.py and update the initial_state dictionary with your company and campaign details. The fields you most commonly need to change are highlighted in the comments below.
initial_state = {

    "task": (
        "Find potential IT services leads, research them, "
        "generate outreach strategies and proposals, "
        "and save all details to Google Sheets."
    ),

    "company_name": None,               # Populated by the discovery agent

    "target_industry": "IT Services / AI Automation",  # Change to your target

    "sender_name": "Vrashmanyu",        # Your name for outreach emails

    "client_name": "Webanix Prospect",  # Generic placeholder for the lead

    "sales_deck_text": sales_deck_text, # Loaded from PDF automatically

    "client_requirements": client_requirements,  # Loaded from jd.txt automatically

    "competitors_data": "Standard AI/ML Services Competitors",

    "company_services": """
    AI Automation, Agentic AI Systems, Web Development,
    Cloud Solutions, DevOps, Data Engineering.
    """,

    "service_details": """
    We provide enterprise AI automation, custom software development,
    workflow orchestration, and AI-powered business solutions.
    """,

    "ideal_customer_profile": """
    Mid-size and enterprise companies investing in AI,
    digital transformation, and operational efficiency.
    """,

    "pricing_information": "Projects range $10k-$50k.",

    "pricing_data": "Custom pricing based on scope.",

    "case_studies": """
    1. Automated inventory management.
    2. Cloud migration.
    3. AI-driven customer support.
    """,

    "meeting_transcript": "Initial discovery phase - no transcript yet.",

    "spreadsheet_id": "1MsG4jkVacHwuw2cxTQ_Vt5cW5qKoBgGJH1IqfDCdRto",  # Your Sheet ID

    "spreadsheet_range": "Sheet1!A1",

    "workflow_stage": "start",

    "completed": False,

    "progress_percentage": 0
}
At minimum, update target_industry, sender_name, company_services, ideal_customer_profile, pricing_information, case_studies, and spreadsheet_id to match your business.
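A quick way to catch leftover placeholders before a run is a small pre-flight check over those fields. A hedged sketch; the field list simply mirrors the minimum-update list above:

```python
# Fields that must be customized before the first run.
REQUIRED_FIELDS = [
    "target_industry",
    "sender_name",
    "company_services",
    "ideal_customer_profile",
    "pricing_information",
    "case_studies",
    "spreadsheet_id",
]

def missing_fields(state: dict) -> list[str]:
    """Return the required fields that are empty or absent in the state dict."""
    return [
        name for name in REQUIRED_FIELDS
        if not str(state.get(name) or "").strip()
    ]

# Example: run this against initial_state before invoking the graph.
problems = missing_fields({"target_industry": "IT Services"})
print(problems)
```

Calling missing_fields(initial_state) before starting the pipeline gives you a list of anything still unset.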
Step 7: Run the workflow

Start the pipeline with:
python main.py
The terminal prints a live trace of the pipeline as it runs. Each agent node prints a separator block with its name in uppercase, followed by its output. You will see discovery results first (a numbered list of companies), then per-lead output from each agent in sequence: [LEAD RESEARCH], [ICP ANALYSIS], [COMPETITOR ANALYSIS], [OUTREACH CONTENT], [PROPOSAL DOCUMENT], and [CRM UPDATE]. The supervisor also prints its routing decision and reasoning between each lead. When all leads are processed, a WORKFLOW COMPLETED block prints the full workflow_history as JSON.

Next steps

Configuration reference

Full documentation of every field in the workflow state and LLM settings.

Agent reference

Understand what each agent does and how to extend or replace one.

Google Sheets setup

Detailed guide for creating service account credentials and sharing your sheet.

Architecture overview

Deep dive into the LangGraph graph structure, supervisor logic, and state flow.
