Integration patterns for popular AI agent frameworks. All examples are available in examples/agents/ in the source repository.

Why These Patterns Work

warp-md’s agent-first design makes framework integration straightforward:
  • Contract-first - Pydantic schemas validate inputs/outputs
  • CLI-accessible - Any framework can invoke via subprocess
  • Machine-readable - JSON envelopes for reliable parsing
  • Streaming support - NDJSON events for real-time progress
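Taken together, these properties mean an agent only has to build a JSON request, invoke the CLI, and parse the JSON envelope it gets back. A minimal sketch of that round trip, using the request and envelope shapes shown in the examples below (the envelope here is simulated rather than produced by a live `warp-md` run):

```python
import json

def build_run_request(topology: str, trajectory: str, analyses: list) -> dict:
    """Assemble a v1 agent request (shape taken from the framework examples)."""
    return {
        "version": "warp-md.agent.v1",
        "system": topology,
        "trajectory": trajectory,
        "analyses": analyses,
    }

def summarize_envelope(raw: str) -> str:
    """Parse the JSON result envelope and return a one-line summary."""
    envelope = json.loads(raw)
    if envelope["status"] == "ok":
        names = ", ".join(r["analysis"] for r in envelope["results"])
        return f"ok: {names}"
    return f"error: {envelope['error']['message']}"

# Simulated envelope, standing in for the stdout of `warp-md run`
raw = json.dumps({
    "status": "ok",
    "results": [{"analysis": "rg", "out": "rg.npz"}],
})
print(summarize_envelope(raw))  # ok: rg
```

Every framework integration below is a thin wrapper around exactly this pattern.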

Framework Support

| Framework | Pattern | Status | Example Location |
| --- | --- | --- | --- |
| LangChain | StructuredTool | ✅ Ready | examples/agents/langchain/ |
| CrewAI | BaseTool | ✅ Ready | examples/agents/crewai/ |
| AutoGen | function_map | ✅ Ready | examples/agents/autogen/ |
| OpenAI Agents | Function calling | ✅ Ready | examples/agents/openai/ |
| MCP | FastMCP server | ✅ Ready | python/warp_md/mcp_server.py |

1. LangChain

Installation

pip install langchain langchain-core langchain-openai warp-md

Tool Implementation

LangChain integration uses a BaseTool subclass (the same mechanism behind StructuredTool) with Pydantic validation:
from typing import Type

from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field
import subprocess
import json

class WarpMDInput(BaseModel):
    """Input schema for warp-md analysis."""
    topology: str = Field(..., description="Path to topology file")
    trajectory: str = Field(..., description="Path to trajectory file")
    analyses: str = Field(
        ...,
        description='JSON array of analyses. Example: [{"name": "rg", "selection": "protein"}]'
    )
    output_dir: str = Field(default=".", description="Output directory")
    device: str = Field(default="auto", description="Compute device")

class WarpMDTool(BaseTool):
    name: str = "warp_md_analysis"
    description: str = """
    Perform MD trajectory analysis using warp-md.
    Supports: rg, rmsd, msd, rdf, conductivity, hbond, dssp, pca, rmsf, etc.
    """
    args_schema: Type[BaseModel] = WarpMDInput

    def _run(self, topology: str, trajectory: str, analyses: str,
             output_dir: str = ".", device: str = "auto") -> str:
        # Parse analyses
        analyses_list = json.loads(analyses)
        
        # Build request
        run_request = {
            "version": "warp-md.agent.v1",
            "system": topology,
            "trajectory": trajectory,
            "device": device,
            "output_dir": output_dir,
            "analyses": analyses_list,
        }
        
        # Write config and execute
        config_path = "_warp_md_request.json"
        with open(config_path, "w") as f:
            json.dump(run_request, f)
        
        result = subprocess.run(
            ["warp-md", "run", config_path],
            capture_output=True,
            text=True,
        )
        
        # Fall back to stderr if the CLI failed before emitting a JSON envelope
        if result.returncode != 0 and not result.stdout:
            return result.stderr
        
        return result.stdout

Example: Multi-Analysis Agent

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI
from langchain import hub
from examples.agents.langchain.warp_md_tool import WarpMDTool

tool = WarpMDTool()
llm = ChatOpenAI(model="gpt-4o", temperature=0)
prompt = hub.pull("hwchase17/openai-tools")
agent = create_tool_calling_agent(llm, [tool], prompt)
executor = AgentExecutor(agent=agent, tools=[tool], verbose=True)

result = executor.invoke({
    "input": """
    Analyze the protein dynamics in trajectory.xtc:
    1. Calculate radius of gyration for the protein
    2. Compute RMSD for the backbone
    3. Analyze secondary structure with DSSP
    """
})
LangChain’s tool calling agent automatically parses natural language into structured tool calls with validated parameters.

2. CrewAI

Installation

pip install crewai crewai-tools warp-md

Tool Implementation

CrewAI tools subclass BaseTool for multi-agent collaboration:
from typing import Type

from crewai_tools import BaseTool  # newer releases: from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import subprocess
import json

class WarpMDInput(BaseModel):
    topology: str = Field(..., description="Path to topology file")
    trajectory: str = Field(..., description="Path to trajectory file")
    analyses: str = Field(..., description="JSON array of analyses")
    output_dir: str = Field(default=".", description="Output directory")

class WarpMDAnalysisTool(BaseTool):
    name: str = "warp_md_analysis"
    description: str = """
    Perform molecular dynamics trajectory analysis.
    Supports: rg, rmsd, msd, rdf, conductivity, hbond, dssp, pca, etc.
    """
    args_schema: Type[BaseModel] = WarpMDInput

    def _run(self, topology: str, trajectory: str, analyses: str,
             output_dir: str = ".") -> str:
        analyses_list = json.loads(analyses)
        
        run_request = {
            "version": "warp-md.agent.v1",
            "system": topology,
            "trajectory": trajectory,
            "output_dir": output_dir,
            "analyses": analyses_list,
        }
        
        config_path = "_warp_md_request.json"
        with open(config_path, "w") as f:
            json.dump(run_request, f)
        
        result = subprocess.run(
            ["warp-md", "run", config_path],
            capture_output=True,
            text=True,
        )
        
        return result.stdout
CrewAI’s multi-agent architecture enables parallel analysis by specialist agents, each focusing on different aspects of the trajectory.

3. OpenAI Agents SDK

Installation

pip install openai warp-md

Tool Implementation

OpenAI Agents use function calling with JSON schemas:
import openai
import subprocess
import json
from typing import Dict, Any, List

def warp_md_analysis(
    topology: str,
    trajectory: str,
    analyses: List[Dict[str, Any]],
    output_dir: str = ".",
    device: str = "auto"
) -> Dict[str, Any]:
    """
    Perform molecular dynamics trajectory analysis.
    
    Args:
        topology: Path to topology file (PDB, GRO, PDBQT)
        trajectory: Path to trajectory file (DCD, XTC)
        analyses: List of analysis specifications
        output_dir: Output directory for results
        device: Compute device (auto, cpu, cuda)
    
    Returns:
        Analysis result envelope
    """
    run_request = {
        "version": "warp-md.agent.v1",
        "system": topology,
        "trajectory": trajectory,
        "device": device,
        "output_dir": output_dir,
        "analyses": analyses,
    }
    
    config_path = "_warp_md_request.json"
    with open(config_path, "w") as f:
        json.dump(run_request, f)
    
    result = subprocess.run(
        ["warp-md", "run", config_path],
        capture_output=True,
        text=True,
    )
    
    return json.loads(result.stdout)

# Function schema for OpenAI
tools = [
    {
        "type": "function",
        "function": {
            "name": "warp_md_analysis",
            "description": "Perform molecular dynamics trajectory analysis",
            "parameters": {
                "type": "object",
                "properties": {
                    "topology": {"type": "string", "description": "Topology file path"},
                    "trajectory": {"type": "string", "description": "Trajectory file path"},
                    "analyses": {
                        "type": "array",
                        "items": {"type": "object"},
                        "description": "Analysis specifications"
                    },
                    "output_dir": {"type": "string", "description": "Output directory"},
                    "device": {"type": "string", "description": "Compute device (auto, cpu, cuda)"},
                },
                "required": ["topology", "trajectory", "analyses"]
            }
        }
    }
]
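When the model responds with a tool call, its `function.arguments` field arrives as a JSON string that must be decoded before invoking the function. A hedged sketch of that dispatch step (the registry and the stand-in function are illustrative; in the real integration `warp_md_analysis` from above would be registered):

```python
import json

def _echo_analysis(topology, trajectory, analyses, **kwargs):
    # Stand-in for warp_md_analysis so the sketch runs without the CLI installed
    return {"status": "ok", "results": [{"analysis": a["name"]} for a in analyses]}

# Registry mapping tool names to callables
REGISTRY = {"warp_md_analysis": _echo_analysis}

def dispatch_tool_call(name: str, arguments_json: str) -> dict:
    """Decode the model's arguments string and invoke the registered function."""
    kwargs = json.loads(arguments_json)
    return REGISTRY[name](**kwargs)

# Simulated tool call, mirroring the shape of a chat-completion tool_call
args = json.dumps({
    "topology": "structure.pdb",
    "trajectory": "trajectory.xtc",
    "analyses": [{"name": "rg", "selection": "protein"}],
})
result = dispatch_tool_call("warp_md_analysis", args)
print(result["results"][0]["analysis"])  # rg
```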

4. AutoGen

Installation

pip install pyautogen warp-md

Tool Implementation

AutoGen uses function_map for conversational agents:
import autogen
import subprocess
import json
from typing import Dict, Any, List

def warp_md_analysis(
    topology: str,
    trajectory: str,
    analyses: List[Dict[str, Any]],
    output_dir: str = "."
) -> str:
    """
    Perform molecular dynamics trajectory analysis.
    
    Args:
        topology: Topology file path
        trajectory: Trajectory file path
        analyses: Analysis specifications
        output_dir: Output directory
    
    Returns:
        Analysis result summary
    """
    run_request = {
        "version": "warp-md.agent.v1",
        "system": topology,
        "trajectory": trajectory,
        "output_dir": output_dir,
        "analyses": analyses,
    }
    
    config_path = "_warp_md_request.json"
    with open(config_path, "w") as f:
        json.dump(run_request, f)
    
    result = subprocess.run(
        ["warp-md", "run", config_path],
        capture_output=True,
        text=True,
    )
    
    envelope = json.loads(result.stdout)
    if envelope["status"] == "ok":
        return f"✓ Completed {len(envelope['results'])} analyses in {envelope['elapsed_ms']}ms"
    else:
        return f"✗ Error: {envelope['error']['message']}"

# Register function
function_map = {
    "warp_md_analysis": warp_md_analysis,
}

5. MCP (Model Context Protocol)

warp-md includes a native MCP server for Claude Desktop and other MCP clients.

Installation

pip install warp-md mcp

Configuration (Claude Desktop)

Add to claude_desktop_config.json:
{
  "mcpServers": {
    "warp-md": {
      "command": "warp-md",
      "args": ["mcp"]
    }
  }
}

Available MCP Tools

The MCP server exposes these tools:
| Tool | Description |
| --- | --- |
| run_analysis | Run MD analyses on a trajectory |
| list_analyses | List all available analysis types |
| get_analysis_schema | Get parameter schema for an analysis |
| validate_config | Validate a config without running |
| pack_molecules | Pack molecules with warp-pack |
| build_peptide | Build peptides with warp-pep |
| mutate_peptide | Mutate peptide residues |

Example Usage (Claude)

You: Analyze the protein in trajectory.xtc using structure.pdb. 
     Calculate radius of gyration and RMSD.

Claude: I'll analyze the trajectory for you.
        [Calls run_analysis tool]
        
        ✓ Analysis complete! 
        - Rg: rg.npz (mean: 18.5 Å)
        - RMSD: rmsd.npz (mean: 2.3 Å)
The MCP server implementation is in python/warp_md/mcp_server.py and uses FastMCP for tool registration.

Common Utilities

All frameworks can use the shared utilities in examples/agents/warp_utils.py:

Progress Tracking

from examples.agents.warp_utils import run_with_progress

# Run with automatic progress display
result = run_with_progress([
    "warp-pack", "--config", "pack.yaml", "--stream"
])

# Output:
# 📦 Packing 150 molecules...
#   → Placing molecules...
#     Placed 50/150 (33.3%)
#   → Optimizing...
#     Iter 100: f=2.1e-03 (10.0%)
#   ✓ Complete: 4500 atoms in 52s

Event Streaming

from examples.agents.warp_utils import parse_stream_events

for event in parse_stream_events(process.stderr):
    if event["event"] == "analysis_completed":
        print(f"✓ {event['analysis']}: {event['out']}")
    elif event["event"] == "checkpoint":
        print(f"  Progress: {event['progress_pct']:.1f}%")
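The core of this utility is just NDJSON decoding: one JSON object per line. A minimal stand-in (my own sketch, not the shipped `warp_utils` implementation) that decodes events and skips interleaved log noise:

```python
import json
from typing import Iterable, Iterator

def parse_ndjson_events(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one event dict per NDJSON line, ignoring lines that aren't JSON."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # interleaved plain-text log output, not an event

# Simulated stderr stream from a run with "stream": "ndjson"
stream = [
    '{"event": "checkpoint", "progress_pct": 50.0}',
    'plain log line',
    '{"event": "analysis_completed", "analysis": "rg", "out": "rg.npz"}',
]
events = list(parse_ndjson_events(stream))
print([e["event"] for e in events])  # ['checkpoint', 'analysis_completed']
```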

Best Practices

1. Error Handling

Always check the status field and handle errors gracefully:
result = run_analysis(...)
envelope = json.loads(result)

if envelope["status"] == "error":
    error = envelope["error"]
    if error["code"] == "E_SELECTION_EMPTY":
        print(f"No atoms matched selection: {error['context']['selection']}")
    elif error["code"] == "E_ANALYSIS_SPEC":
        print(f"Missing parameters: {error['message']}")
    else:
        print(f"Error: {error['message']}")
else:
    for r in envelope["results"]:
        print(f"✓ {r['analysis']}: {r['out']}")

2. Streaming for Long Operations

Enable streaming for long analyses to provide user feedback:
run_request = {
    "version": "warp-md.agent.v1",
    "stream": "ndjson",  # Enable streaming
    "checkpoint": {      # Enable checkpoints
        "enabled": True,
        "interval_frames": 1000
    },
    ...
}

3. Validate Before Running

Use validation to catch errors early:
from warp_md.agent_schema import validate_run_request
from pydantic import ValidationError

try:
    cfg = validate_run_request(run_request)
    # Proceed with execution
except ValidationError as exc:
    # Handle validation errors before running
    print(f"Invalid request: {exc}")

4. Batch Multiple Analyses

Run multiple analyses in a single request to share trajectory loading:
run_request = {
    "analyses": [
        {"name": "rg", "selection": "protein"},
        {"name": "rmsd", "selection": "backbone"},
        {"name": "rmsf", "selection": "protein"},
        {"name": "dssp"},
    ]
}
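The saving comes from amortizing a single trajectory pass across every requested analysis, instead of reloading the trajectory once per analysis. A small illustrative helper that folds several analysis specs into one request:

```python
def batch_request(topology: str, trajectory: str, analyses: list) -> dict:
    """Build one run request so the trajectory is loaded a single time."""
    return {
        "version": "warp-md.agent.v1",
        "system": topology,
        "trajectory": trajectory,
        "analyses": list(analyses),
    }

req = batch_request("structure.pdb", "trajectory.xtc", [
    {"name": "rg", "selection": "protein"},
    {"name": "rmsd", "selection": "backbone"},
    {"name": "dssp"},
])
print(len(req["analyses"]))  # 3
```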

Example: Complete Workflow

Here’s a complete example combining multiple frameworks:
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI
from examples.agents.langchain.warp_md_tool import WarpMDTool
from examples.agents.warp_utils import run_with_progress

# Step 1: Pack the system (with progress)
pack_result = run_with_progress([
    "warp-pack",
    "--config", "solvate.yaml",
    "--output", "solvated.pdb",
    "--stream"
])

print(f"✓ Packed {pack_result['total_atoms']} atoms")

# Step 2: Run simulation (external)
# ... run MD simulation ...

# Step 3: Analyze with LangChain agent
tool = WarpMDTool()
llm = ChatOpenAI(model="gpt-4o")
prompt = hub.pull("hwchase17/openai-tools")
agent = create_tool_calling_agent(llm, [tool], prompt)
executor = AgentExecutor(agent=agent, tools=[tool])

analysis = executor.invoke({
    "input": """
    Analyze the solvated protein trajectory:
    1. Calculate Rg and RMSD
    2. Analyze secondary structure
    3. Calculate water RDF
    """
})

print(analysis["output"])

See Also

  • Overview - Why warp-md is agent-friendly
  • Schema Reference - Complete Pydantic schema reference
