ralph/prd.json: 2 additions & 2 deletions
@@ -192,7 +192,7 @@
         "Test by running on existing official runs and verifying REPORT.md is generated with LoCoBench data"
       ],
       "priority": 11,
-      "passes": false,
-      "notes": "This is the deterministic-only report. LLM judge evaluation is a separate step that consumes context files generated by US-012. CLI invocation: python3 scripts/generate_eval_report.py --runs-dir ~/evals/custom_agents/agents/claudecode/runs/official/ --output-dir ./eval_reports/. No external dependencies — stdlib only (json, dataclasses, pathlib, statistics, csv, datetime, argparse). The script must be runnable from the CodeContextBench repo root with plain python3."
+      "passes": true,
+      "notes": "CLI invocation: python3 -m scripts.ccb_metrics.judge_context --runs-dir ~/evals/custom_agents/agents/claudecode/runs/official/ --benchmarks-dir ./benchmarks/ --output-dir ./judge_contexts/. Also importable: from scripts.ccb_metrics.judge_context import generate_judge_contexts. Key mapping: task_id in run dirs maps to benchmark dirs by prefix (e.g., 'pkg-doc-001' matches benchmarks/kubernetes_docs/pkg-doc-001/). For LoCoBench, task dirs under benchmarks/locobench_agent/tasks/ match by full task name. For SWE-bench, instruction comes from Harbor's dataset, not local files."