Overview
The process function uses AI (Groq’s LLaMA model) to analyze Python error tracebacks and generate structured, actionable debugging advice.
Function Signature
```python
def process(
    traceback_message: str,
    original_error_information: str,
    context: str
) -> str
```
Parameters
- traceback_message — The full error traceback string showing the sequence of function calls that led to the error
- original_error_information — The original error message and type (e.g., "SyntaxError: '(' was never closed")
- context — Relevant source-code context, typically generated by relational_error_parsing_function
Returns
A JSON string containing structured error analysis with the following schema:

```json
{
  "where": {
    "repository_path": "<absolute path to repository>",
    "file_name": "<name of file containing error>",
    "line_number": "<line number where error occurred>"
  },
  "what": {
    "error_type": "<specific Python error type>",
    "description": "<concise explanation of error>"
  },
  "how": {
    "error_origination": "<line number where error originated>",
    "suggested_code_solution": "<code snippet to fix the error>"
  }
}
```
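Because the return value is a string, it must be parsed with `json.loads` before the fields can be accessed. A minimal sketch using an invented sample response (the file names and values below are illustrative only, not real model output):

```python
import json

# Hypothetical response matching the schema above; every value here is an
# invented example, not actual output from process().
sample = json.dumps({
    "where": {
        "repository_path": "/home/user/project",
        "file_name": "buggy_script.py",
        "line_number": "12",
    },
    "what": {
        "error_type": "SyntaxError",
        "description": "'(' was never closed",
    },
    "how": {
        "error_origination": "12",
        "suggested_code_solution": "print('done')",
    },
})

analysis = json.loads(sample)
# All three top-level schema sections should be present
assert set(analysis) == {"where", "what", "how"}
```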
Configuration
Environment Variables
API
Groq API key for authentication. Set it in a .env file or in the environment:

```shell
export API="your-groq-api-key"
```
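Before calling process, it is worth failing fast when the key is missing. A small sketch using only the standard library; `require_api_key` is a hypothetical helper, not part of this package:

```python
import os

def require_api_key() -> str:
    """Fetch the Groq key from the API environment variable, failing fast if absent."""
    key = os.environ.get("API")
    if not key:
        raise RuntimeError("Set API in your environment or .env file (see Configuration)")
    return key
```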
AI Model
- Model: llama3-70b-8192
- Provider: Groq
- Response Format: JSON object
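Groq exposes an OpenAI-compatible chat-completions interface, so a call with these settings plausibly looks like the sketch below. `build_request` is a hypothetical helper that only assembles the parameters; nothing is sent over the network, and the real prompt text lives inside process():

```python
def build_request(system_prompt: str, user_payload: str) -> dict:
    """Assemble chat-completion parameters matching the configuration above (sketch only)."""
    return {
        "model": "llama3-70b-8192",
        "response_format": {"type": "json_object"},  # forces the model to return valid JSON
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_payload},
        ],
    }
```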
Example Usage
Basic Error Analysis
```python
import json

from process.process import process
from relational import relational_error_parsing_function

# Get error and context
traceback, error_info, context = relational_error_parsing_function(
    ['python3', 'buggy_script.py']
)

# Analyze with AI
response = process(traceback, error_info, context)
analysis = json.loads(response)

print(f"Error in: {analysis['where']['file_name']}")
print(f"Line: {analysis['where']['line_number']}")
print(f"Fix: {analysis['how']['suggested_code_solution']}")
```
Complete Debugging Workflow
```python
import json

from relational import relational_error_parsing_function
from process.process import process

# Execute and capture error with full context
traceback, error_info, context = relational_error_parsing_function(
    ['python3', 'app.py'],
    flag='-r'  # Include all related files
)

# Get AI analysis
result = process(traceback, error_info, context)
analysis = json.loads(result)

# Display structured solution
print("\n=== Error Analysis ===")
print(f"Location: {analysis['where']['file_name']}:{analysis['where']['line_number']}")
print(f"\nType: {analysis['what']['error_type']}")
print(f"Description: {analysis['what']['description']}")
print(f"\n=== Suggested Fix ===")
print(analysis['how']['suggested_code_solution'])
```
AI System Prompts
The function uses a multi-stage system prompt that:
- Positions the AI as an expert Python debugging assistant
- Provides step-by-step analysis instructions
- Enforces strict JSON output format
- Constrains responses to be specific and actionable
- Focuses on the most critical error when multiple exist
- Provides code solutions without placeholders
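To make the constraints above concrete, a prompt embodying them could read like the sketch below. This is purely illustrative; the actual system prompt inside process() will differ in wording:

```python
# Illustrative system prompt reflecting the constraints listed above (not the
# real prompt used by process()).
SYSTEM_PROMPT = (
    "You are an expert Python debugging assistant. "
    "Analyze the traceback, error message, and source context step by step. "
    "If multiple errors exist, address only the most critical one. "
    "Respond with a single JSON object containing 'where', 'what', and 'how' keys. "
    "Provide a concrete code fix with no placeholders."
)
```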
Notes
- Requires a valid Groq API key in the environment
- Response is always valid JSON (enforced by response_format)
- Focuses on fixing the specific line causing the error
- Works best with comprehensive context from the -r flag
- Model analyzes error holistically but fixes the root cause