Use this guide when you are unsure whether a task was already initialized, do not know its task ID, or need to know its current state before deciding what to do next.

Check Existing Task prompt

Copy this prompt into your agent session, then replace ... with either Task file: <path/to/task-file.md> or the task text pasted on the following lines.
Use $repo-task-proof-loop to find the existing repo-local task that matches the task described below, inspect its artifacts, and report the matched task ID, current status, and next recommended step.
...

CLI status command

Run status directly from the repository root:
python3 "$SKILL_PATH/scripts/task_loop.py" status --task-id <TASK_ID>
The command exits with code 0 and prints a JSON report to stdout.

Interpreting status output

The status report is a JSON object with the following fields:
  • required_files_present: map of each required artifact path to a boolean indicating whether it exists on disk
  • evidence_overall_status: PASS, FAIL, UNKNOWN, or a PARSE_ERROR string if evidence.json could not be read
  • verdict_overall_status: PASS, FAIL, UNKNOWN, or a PARSE_ERROR string if verdict.json could not be read
  • non_pass_criteria: list of criteria objects from verdict.json whose status is FAIL or UNKNOWN
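These four fields are enough to decide what to do next. The decision rules below are an illustrative reading of the report, not behavior defined by task_loop.py itself; the helper name `next_step` and the wording of each suggestion are this sketch's own:

```python
def next_step(report: dict) -> str:
    """Suggest a next action from a status JSON report.

    Illustrative mapping only -- the tool itself does not
    prescribe these actions.
    """
    if not report.get("exists"):
        return "initialize the task"
    # Any artifact mapped to false needs to be recreated first.
    missing = [path for path, present in report["required_files_present"].items()
               if not present]
    if missing:
        return "recreate missing artifacts: " + ", ".join(missing)
    if report["verdict_overall_status"] == "PASS":
        return "task is complete"
    if report["non_pass_criteria"]:
        ids = ", ".join(c["id"] for c in report["non_pass_criteria"])
        return "fix failing criteria: " + ids
    # Both statuses UNKNOWN and no failing criteria: nothing has run yet.
    return "run the build and verification steps to produce evidence"

# An initialized-but-not-built report maps to the build step:
sample = {
    "exists": True,
    "required_files_present": {"spec.md": True},
    "evidence_overall_status": "UNKNOWN",
    "verdict_overall_status": "UNKNOWN",
    "non_pass_criteria": [],
}
print(next_step(sample))  # run the build and verification steps to produce evidence
```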

Example: task initialized but not yet built

{
  "repo_root": "/home/user/myrepo",
  "task_id": "feature-auth-hardening",
  "task_dir": "/home/user/myrepo/.agent/tasks/feature-auth-hardening",
  "exists": true,
  "required_files_present": {
    "spec.md": true,
    "evidence.md": true,
    "evidence.json": true,
    "verdict.json": true,
    "problems.md": true,
    "raw/build.txt": true,
    "raw/test-unit.txt": true,
    "raw/test-integration.txt": true,
    "raw/lint.txt": true,
    "raw/screenshot-1.png": true
  },
  "evidence_overall_status": "UNKNOWN",
  "verdict_overall_status": "UNKNOWN",
  "non_pass_criteria": []
}

Example: task with failing criteria

{
  "repo_root": "/home/user/myrepo",
  "task_id": "feature-auth-hardening",
  "task_dir": "/home/user/myrepo/.agent/tasks/feature-auth-hardening",
  "exists": true,
  "required_files_present": {
    "spec.md": true,
    "evidence.md": true,
    "evidence.json": true,
    "verdict.json": true,
    "problems.md": true,
    "raw/build.txt": true,
    "raw/test-unit.txt": true,
    "raw/test-integration.txt": true,
    "raw/lint.txt": true,
    "raw/screenshot-1.png": true
  },
  "evidence_overall_status": "FAIL",
  "verdict_overall_status": "FAIL",
  "non_pass_criteria": [
    {
      "id": "AC2",
      "status": "FAIL",
      "reason": "Session refresh endpoint does not invalidate the old token."
    }
  ]
}
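When non_pass_criteria is non-empty, each entry already carries everything needed to report the failure. A minimal formatting sketch, parsing a trimmed-down copy of the report above (only the two relevant fields are included here):

```python
import json

report_json = """{
  "verdict_overall_status": "FAIL",
  "non_pass_criteria": [
    {"id": "AC2", "status": "FAIL",
     "reason": "Session refresh endpoint does not invalidate the old token."}
  ]
}"""

report = json.loads(report_json)
# One line per non-passing criterion: "<id> [<status>]: <reason>"
for c in report["non_pass_criteria"]:
    print(f'{c["id"]} [{c["status"]}]: {c["reason"]}')
```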

The validate command

validate performs a deeper structural check than status. It verifies that all required files are present and that evidence.json and verdict.json conform to the expected schemas.
python3 "$SKILL_PATH/scripts/task_loop.py" validate --task-id <TASK_ID>
validate checks:
  • All required artifact paths exist under .agent/tasks/<TASK_ID>/
  • evidence.json contains task_id, overall_status, acceptance_criteria, changed_files, commands_for_fresh_verifier, and known_gaps
  • evidence.json task_id matches the requested TASK_ID
  • evidence.json overall_status is PASS, FAIL, or UNKNOWN
  • Each entry in acceptance_criteria has id, text, status, proof, and gaps
  • verdict.json contains task_id, overall_verdict, criteria, commands_run, and artifacts_used
  • verdict.json task_id matches the requested TASK_ID
  • verdict.json overall_verdict is PASS, FAIL, or UNKNOWN
  • Each entry in criteria has id, status, and reason
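The checks above amount to plain dictionary validation. The sketch below is a simplified re-implementation of the evidence.json checks for illustration, not the script's actual code; the function name and error wording are assumptions, though the mismatch message mirrors the example error shown later on this page:

```python
EVIDENCE_KEYS = {"task_id", "overall_status", "acceptance_criteria",
                 "changed_files", "commands_for_fresh_verifier", "known_gaps"}
CRITERION_KEYS = {"id", "text", "status", "proof", "gaps"}
STATUSES = {"PASS", "FAIL", "UNKNOWN"}

def validate_evidence(evidence: dict, task_id: str) -> list:
    """Collect schema errors for an evidence.json payload (illustrative)."""
    errors = []
    missing = EVIDENCE_KEYS - evidence.keys()
    if missing:
        errors.append("evidence.json is missing keys: " + ", ".join(sorted(missing)))
    if evidence.get("task_id") != task_id:
        errors.append("evidence.json task_id does not match the requested TASK_ID.")
    if evidence.get("overall_status") not in STATUSES:
        errors.append("evidence.json overall_status must be PASS, FAIL, or UNKNOWN.")
    for entry in evidence.get("acceptance_criteria", []):
        if not CRITERION_KEYS <= entry.keys():
            errors.append(f"acceptance criterion {entry.get('id')} is incomplete")
    return errors

# A task_id mismatch surfaces as a single error:
bad = {"task_id": "other-task", "overall_status": "PASS",
       "acceptance_criteria": [], "changed_files": [],
       "commands_for_fresh_verifier": [], "known_gaps": []}
print(validate_evidence(bad, "feature-auth-hardening"))
```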

Example validate output (valid)

{
  "repo_root": "/home/user/myrepo",
  "task_id": "feature-auth-hardening",
  "task_dir": "/home/user/myrepo/.agent/tasks/feature-auth-hardening",
  "valid": true,
  "missing_files": [],
  "errors": []
}

Example validate output (invalid)

{
  "repo_root": "/home/user/myrepo",
  "task_id": "feature-auth-hardening",
  "task_dir": "/home/user/myrepo/.agent/tasks/feature-auth-hardening",
  "valid": false,
  "missing_files": [],
  "errors": [
    "evidence.json task_id does not match the requested TASK_ID."
  ]
}
validate exits with code 0 when valid and 1 when there are errors or missing files.
