feat(bench): benchmark expansion #23643

Open

BrianCLong wants to merge 1 commit into main from feat/bench-expansion-6701132027924806688

Conversation

@BrianCLong
Owner

Expanded Summit Bench benchmark coverage by adding deterministic fixtures and updating cases for GraphRAG, tooluse, and multi-agent coordination.


PR created automatically by Jules for task 6701132027924806688 started by @BrianCLong

- Identify gaps in GraphRAG, tooluse, and multi-agent coordination benchmark cases.
- Add deterministic fixtures under GOLDEN/datasets/**.
- Update `cases.json` in the respective directories to include the new cases.
- Keep deterministic outputs stable by hardcoding generated data.
- Update `evaluation/scoring/evidence.ts` to export missing functions.

Co-authored-by: BrianCLong <6404035+BrianCLong@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@coderabbitai

coderabbitai bot commented Apr 9, 2026

Warning

Rate limit exceeded

@BrianCLong has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 48 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 6 minutes and 48 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b3a7ee3a-7186-43a5-9a7c-11baff85fb1e

📥 Commits

Reviewing files that changed from the base of the PR and between 56b074b and 3061a1a.

📒 Files selected for processing (16)
  • GOLDEN/datasets/graphrag/EVID_graphrag_attribute_prediction_2882bce6.json
  • GOLDEN/datasets/graphrag/EVID_graphrag_attribute_prediction_30a10bee.json
  • GOLDEN/datasets/graphrag/EVID_graphrag_community_detection_59c8dab1.json
  • GOLDEN/datasets/graphrag/EVID_graphrag_community_detection_b4678e68.json
  • GOLDEN/datasets/graphrag/cases.json
  • GOLDEN/datasets/multi-agent/EVID_multiagent_resource_allocation_8efecaad.json
  • GOLDEN/datasets/multi-agent/EVID_multiagent_resource_allocation_e0f2b6f1.json
  • GOLDEN/datasets/multi-agent/EVID_multiagent_role_discovery_7514ea55.json
  • GOLDEN/datasets/multi-agent/EVID_multiagent_role_discovery_a6e46d05.json
  • GOLDEN/datasets/multi-agent/cases.json
  • GOLDEN/datasets/tooluse/EVID_tooluse_code_execution_1dd8e73e.json
  • GOLDEN/datasets/tooluse/EVID_tooluse_code_execution_c7d5cea8.json
  • GOLDEN/datasets/tooluse/EVID_tooluse_file_system_2571feb3.json
  • GOLDEN/datasets/tooluse/EVID_tooluse_file_system_bab1d56c.json
  • GOLDEN/datasets/tooluse/cases.json
  • evaluation/scoring/evidence.ts

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request expands the evaluation suite by adding new datasets and test cases for GraphRAG, multi-agent resource allocation, and tool-use scenarios. It also introduces scoring functions for evidence retrieval and tool efficiency. Feedback focused on correcting the naming of the evidence scoring function—which currently calculates recall rather than precision—optimizing its performance using a Set, and handling edge cases in the tool efficiency calculation where zero steps are taken.

Comment on lines +59 to +63
export function scoreEvidencePrecision(required: string[], provided: string[]): number {
  if (required.length === 0) return 1.0;
  const hits = required.filter(r => provided.includes(r)).length;
  return hits / required.length;
}

Severity: medium

The function name scoreEvidencePrecision is misleading because the implementation calculates Recall (hits / required.length) rather than Precision (hits / provided.length). Additionally, using provided.includes(r) inside a filter results in O(N * M) complexity. Using a Set for the provided items improves performance to O(N + M).

Suggested change
export function scoreEvidencePrecision(required: string[], provided: string[]): number {
  if (required.length === 0) return 1.0;
  const hits = required.filter(r => provided.includes(r)).length;
  return hits / required.length;
}

export function scoreEvidenceRecall(required: string[], provided: string[]): number {
  if (required.length === 0) return 1.0;
  const providedSet = new Set(provided);
  const hits = required.filter(r => providedSet.has(r)).length;
  return hits / required.length;
}
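
As a quick illustration of why the rename matters, here is a minimal sketch with hypothetical evidence IDs (not taken from the PR fixtures):

const required = ["EVID_a", "EVID_b"];
const provided = ["EVID_a", "EVID_b", "EVID_c", "EVID_d"];

// Both required IDs are present, so recall is 2 / 2 = 1.0 despite the two extra IDs.
scoreEvidenceRecall(required, provided); // 1.0

// Precision over the same inputs would be 2 / 4 = 0.5; the two metrics diverge
// as soon as irrelevant evidence is returned, which is why the name should match
// what is actually computed.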

Comment on lines +65 to +68
export function scoreToolEfficiency(optimalSteps: number, actualSteps: number): number {
  if (actualSteps <= optimalSteps) return 1.0;
  return Math.max(0, optimalSteps / actualSteps);
}

Severity: medium

The current implementation returns 1.0 (perfect efficiency) when actualSteps is 0 and optimalSteps is greater than 0. In a benchmark context, taking zero steps when some are required usually indicates a failure to attempt the task. It is safer to return 0.0 in this case unless optimalSteps is also 0.

Suggested change
export function scoreToolEfficiency(optimalSteps: number, actualSteps: number): number {
  if (actualSteps <= optimalSteps) return 1.0;
  return Math.max(0, optimalSteps / actualSteps);
}

export function scoreToolEfficiency(optimalSteps: number, actualSteps: number): number {
  if (actualSteps === 0) return optimalSteps === 0 ? 1.0 : 0.0;
  if (actualSteps <= optimalSteps) return 1.0;
  return Math.max(0, optimalSteps / actualSteps);
}
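
A small illustration of the behavior change, using hypothetical step counts:

scoreToolEfficiency(3, 0); // was 1.0 before the change; 0.0 after, since no steps were attempted
scoreToolEfficiency(0, 0); // 1.0 in both versions: nothing was required and nothing was done
scoreToolEfficiency(3, 6); // 0.5 in both versions: twice the optimal number of steps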


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3061a1a144

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

export function scoreEvidencePrecision(required: string[], provided: string[]): number {
  if (required.length === 0) return 1.0;
  const hits = required.filter(r => provided.includes(r)).length;
  return hits / required.length;
}

P1: Compute precision using provided evidence count

This implementation divides by required.length, which computes recall/coverage, not precision. If a run returns all required evidence plus many irrelevant IDs, the score still becomes 1.0 (for example, 2 required hits out of 20 provided), so false positives are never penalized and benchmark precision is systematically inflated. Precision should use the provided evidence count (ideally deduplicated) as the denominator.
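A minimal sketch of a precision score along these lines, assuming the provided IDs are deduplicated and an empty provided list scores 0.0 unless nothing is required (not part of the PR; the empty-case handling is an assumption):

export function scoreEvidencePrecision(required: string[], provided: string[]): number {
  // Deduplicate so repeated evidence IDs inflate neither the hits nor the denominator.
  const providedSet = new Set(provided);
  if (providedSet.size === 0) return required.length === 0 ? 1.0 : 0.0;
  const requiredSet = new Set(required);
  // Precision = true positives / total distinct evidence provided.
  let hits = 0;
  for (const id of providedSet) {
    if (requiredSet.has(id)) hits++;
  }
  return hits / providedSet.size;
}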

Useful? React with 👍 / 👎.
