
Overview

The AI & ML Suite pack provides comprehensive coverage of artificial intelligence and machine learning workflows. From data science and model training to LLM architecture and prompt engineering — this pack equips you to build production AI systems. Perfect for ML engineers, AI researchers, data scientists, and teams building LLM-powered applications.

Installation

npx github:dmicheneau/opencode-template-agent install --pack ai

Included Agents

ai-engineer

AI Systems Engineer: End-to-end AI systems from model selection to production deployment, MLOps, and monitoring

data-scientist

Data Science Expert: Statistical analysis, predictive modeling, feature engineering, experimentation, and insights

ml-engineer

ML Engineering: Production ML pipelines, model serving, automated retraining, monitoring, and performance tuning

llm-architect

LLM System Design: LLM architecture, fine-tuning, RAG systems, inference optimization, and evaluation frameworks

prompt-engineer

Prompt Engineering: Prompt design, optimization, analysis, chain-of-thought reasoning, and structured outputs

search-specialist

Search & Research: Advanced search techniques, information retrieval, multi-source synthesis, and web research

Who Should Use This Pack?

  • Build production ML systems with automated pipelines, model serving, and monitoring
  • Experiment with models, fine-tuning, and novel architectures
  • Analyze data, build predictive models, and derive insights from complex datasets
  • Build RAG systems, chatbots, and LLM-powered applications

Example Workflow

Here’s how to build an LLM-powered application using the AI pack:
1. Research and plan

Use search-specialist to gather information and ai-engineer for architecture
@ai/search-specialist
Research the latest RAG techniques and vector database options

@ai/ai-engineer
Design an architecture for a customer support chatbot with RAG
2. Prepare and analyze data

Use data-scientist for exploratory data analysis
@ai/data-scientist
Analyze this customer support ticket dataset for patterns and insights
3. Design the LLM system

Use llm-architect for RAG architecture and model selection
@ai/llm-architect
Design a RAG system with embeddings, vector search, and context retrieval
4. Optimize prompts

Use prompt-engineer to create effective prompts
@ai/prompt-engineer
Optimize this prompt for customer support responses with structured JSON output
5. Build production pipeline

Use ml-engineer for serving infrastructure
@ai/ml-engineer
Create a production pipeline for embedding generation and model serving
6. Deploy and monitor

Use ai-engineer for deployment and monitoring
@ai/ai-engineer
Set up monitoring for latency, token usage, and response quality

Key Capabilities

LLM Systems

  • RAG (Retrieval-Augmented Generation) architecture
  • Fine-tuning and model adaptation
  • Prompt engineering and optimization
  • Context window management
  • Inference optimization and caching
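
The retrieval step at the heart of a RAG system can be sketched in a few lines. This is a toy illustration: `embed` is a bag-of-words stand-in for a real embedding model, and the document snippets are invented; a production system would use learned embeddings and a vector database.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented knowledge-base snippets standing in for a real corpus.
DOCS = [
    "Refunds are processed within 5 business days of approval",
    "Password resets require the account recovery email",
    "Shipping is free on orders over 50 dollars",
]

def retrieve(query: str, docs=DOCS, k=1):
    # Rank documents by similarity to the query and keep the top k.
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Stuff retrieved context into the prompt before generation.
    context = "\n".join(retrieve(query, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The same retrieve-then-generate shape holds whether the retriever is BM25, dense embeddings, or a hybrid of both.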

Machine Learning

  • Model training and hyperparameter tuning
  • Feature engineering and selection
  • Model evaluation and validation
  • Automated retraining pipelines
  • A/B testing frameworks
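
Hyperparameter tuning against a holdout split can be illustrated with a toy problem: picking the polynomial degree that best fits noisy sine data. The data and degree grid here are invented for illustration; a real pipeline would use cross-validation and a tuning library.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)

# Holdout split: fit on train, select the hyperparameter on validation.
idx = rng.permutation(x.size)
train, val = idx[:150], idx[150:]

def val_mse(degree: int) -> float:
    # Fit a polynomial of the given degree and score it on held-out points.
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[val])
    return float(np.mean((pred - y[val]) ** 2))

best = min(range(1, 10), key=val_mse)
```

A straight line (degree 1) cannot track a full sine period, so validation error forces the search toward higher degrees without overfitting to the training noise.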

Data Science

  • Exploratory data analysis
  • Statistical modeling
  • Predictive analytics
  • Time series forecasting
  • Causal inference
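
A first exploratory pass often reduces to grouped summary statistics. This sketch uses only the standard library and an invented support-ticket sample; in practice the same step is a one-liner with pandas `groupby`.

```python
from collections import defaultdict
from statistics import mean, median

# Invented ticket records standing in for a real dataset.
tickets = [
    {"category": "billing", "hours_to_resolve": 4.0},
    {"category": "billing", "hours_to_resolve": 6.0},
    {"category": "technical", "hours_to_resolve": 12.0},
    {"category": "technical", "hours_to_resolve": 20.0},
    {"category": "account", "hours_to_resolve": 2.0},
]

def summarize(rows, by, value):
    # Group rows by a key column and report count, mean, and median of a value column.
    groups = defaultdict(list)
    for r in rows:
        groups[r[by]].append(r[value])
    return {k: {"n": len(v), "mean": mean(v), "median": median(v)}
            for k, v in groups.items()}

summary = summarize(tickets, "category", "hours_to_resolve")
```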

Production ML

  • Model serving infrastructure
  • Batch and real-time inference
  • Model monitoring and drift detection
  • MLOps automation
  • Performance optimization
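
Drift detection is often implemented with the Population Stability Index (PSI), which compares the binned distribution of a live feature against a training-time reference. A minimal sketch, with invented thresholds in the test (PSI above roughly 0.25 is commonly treated as significant drift):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        # Histogram the sample into shared bins, flooring at a tiny epsilon
        # so the log below is always defined.
        counts = [0] * bins
        for v in sample:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In production the reference histogram is frozen at training time and PSI is recomputed on each monitoring window, alerting when it crosses the chosen threshold.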

Common Use Cases

Build retrieval-augmented generation systems for Q&A, chatbots, or documentation search.
Agents: llm-architect → prompt-engineer → ml-engineer → ai-engineer

Tech Stack Coverage

| Area | Technologies | Agents |
| --- | --- | --- |
| LLM | OpenAI, Anthropic, Llama, RAG, vector DBs | llm-architect, prompt-engineer |
| ML | PyTorch, TensorFlow, scikit-learn, XGBoost | ml-engineer, data-scientist |
| Data | Pandas, NumPy, SQL, Spark | data-scientist, search-specialist |
| MLOps | MLflow, Weights & Biases, Kubeflow | ml-engineer, ai-engineer |
| Serving | FastAPI, Ray Serve, TorchServe, Triton | ml-engineer, ai-engineer |
| Monitoring | Prometheus, Grafana, custom metrics | ai-engineer, ml-engineer |

LLM Application Patterns

  • RAG: Use llm-architect and prompt-engineer to build systems that retrieve relevant context before generating responses
  • Agentic workflows: Use llm-architect and ai-engineer to create autonomous agents that use tools and make decisions
  • Fine-tuning: Use llm-architect and ml-engineer to adapt foundation models to specific domains or tasks
  • Structured outputs: Use prompt-engineer to generate JSON, SQL, or other structured formats reliably

ML Model Types

| Model Type | Use Cases | Key Agents |
| --- | --- | --- |
| Classification | Spam detection, sentiment analysis, fraud detection | data-scientist, ml-engineer |
| Regression | Price prediction, forecasting, demand estimation | data-scientist, ml-engineer |
| Clustering | Customer segmentation, anomaly detection | data-scientist |
| Recommender | Product recommendations, content personalization | data-scientist, ml-engineer |
| NLP | Text classification, NER, summarization | llm-architect, prompt-engineer |
| Computer Vision | Image classification, object detection, segmentation | ai-engineer, ml-engineer |
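
As a concrete instance of the classification row, even one of the simplest models, a nearest-centroid classifier, fits in a few lines of plain Python. The training points in the test are invented; real work would reach for scikit-learn.

```python
from collections import defaultdict

def centroid(points):
    # Per-dimension mean of a list of equal-length tuples.
    return [sum(col) / len(points) for col in zip(*points)]

class NearestCentroid:
    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.centroids = {label: centroid(pts) for label, pts in groups.items()}
        return self

    def predict(self, x):
        # Assign the label whose class centroid is closest to x.
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        return min(self.centroids, key=lambda lb: dist2(x, self.centroids[lb]))
```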

Complementary Agents

Consider adding these agents for expanded capabilities:
  • data-engineer — Build ETL pipelines and data infrastructure
  • mlops-engineer — Deep MLOps expertise for production systems
  • python-pro — Advanced Python for ML development
  • api-architect — Design APIs for model serving
  • performance-engineer — Optimize inference latency

Next Steps

Install AI Pack

npx github:dmicheneau/opencode-template-agent install --pack ai

Explore Individual Agents

Browse detailed documentation for each agent

Data Stack Pack

Add data engineering for ETL and warehousing

ML to Production Pack

Alternative pack with MLOps and deployment focus
