Conversation

@ployts (Collaborator) commented on Jan 13, 2026

OpenJudge Version

0.2.0

Description

This PR updates the consistency analyzer and agent grader tests to align with the project's parameter naming conventions and improve test robustness.

Background and purpose:

  • Refactored consistency_analyzer.py to follow the standard parameter naming convention (using query/response instead of domain-specific parameters); a minimal sketch of the convention is shown after this list
  • Updated all agent grader tests to ensure they properly handle the standardized parameter signatures
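
A minimal, self-contained sketch of the query/response convention described above, assuming a small mapper dict that renames dataset-specific fields to the standardized parameter names (mirroring the GraderConfig mapper mentioned in the review summary below). The apply_mapper helper, the grade function, and the field names are illustrative only and are not the actual OpenJudge API.

```python
# Hypothetical sketch: graders expose a standardized (query, response)
# signature, and dataset-specific field names are renamed through a mapper.

def apply_mapper(sample: dict, mapper: dict[str, str]) -> dict:
    """Rename dataset fields to the grader's standardized parameter names."""
    return {target: sample[source] for target, source in mapper.items()}


def grade(query: str, response: str) -> float:
    """Stand-in grader using the standardized (query, response) signature."""
    return 1.0 if response.strip() else 0.0


sample = {"question": "What is 2 + 2?", "answer": "4"}
mapper = {"query": "question", "response": "answer"}

print(grade(**apply_mapper(sample, mapper)))  # -> 1.0
```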

Checklist

Please check the following items before the code is ready to be reviewed.

  • Code has been formatted with pre-commit run --all-files command
  • All tests are passing
  • Docstrings are in Google style
  • Related documentation has been updated (e.g. links, examples, etc.)
  • Code is ready for review

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @ployts, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the robustness and consistency of the OpenJudge evaluation framework. It refactors the consistency_analyzer.py to better handle edge cases in score calculation and updates numerous agent grader tests to align with standardized parameter naming conventions. These changes ensure more reliable evaluation results and a more maintainable testing suite.

Highlights

  • Consistency Analyzer Robustness: The ConsistencyAnalyzer has been enhanced to gracefully handle cases where the Pearson correlation coefficient comes out as NaN. The analyzer now assigns a consistency score of 1.0 if both score arrays are constant and equal, and 0.0 if one is constant and the other varies, improving the reliability of consistency measurements; a minimal sketch of this handling follows this list.
  • Standardized Grader Parameter Mapping: Agent grader tests have been updated to explicitly define parameter mappings within GraderConfig using a mapper dictionary. This ensures consistent parameter signatures across different graders, aligning with project conventions.
  • Consistency Analyzer Integration in Tests: The ConsistencyAnalyzer.analyze method now requires a dataset argument, and all relevant agent grader consistency tests have been updated to pass this parameter, reflecting a change in the analyzer's API or usage.
  • LLMGrader Test Refinements: Unit tests for LLMGrader have been refined, including changes to mock response structures (score and reason are now part of the parsed attribute), removal of a dedicated error-handling test, and a more robust serialization test using actual model configurations; a sketch of the new mock shape also follows this list.
  • Model and Metadata Updates in Tests: Several tests have updated the default LLM model from qwen3-max or qwen-max to qwen3-32b. Additionally, TrajectoryComprehensiveGrader tests now access step_evaluations and is_resolved from the result.metadata attribute instead of result.parsed, indicating a change in how these results are stored.
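
The NaN handling described in the first highlight, sketched below as a standalone function. This is a minimal sketch under stated assumptions, not the actual ConsistencyAnalyzer implementation; in particular, returning 0.0 when both arrays are constant but unequal is an assumption, since the summary only covers the equal case.

```python
import numpy as np


def pearson_consistency(scores_a, scores_b) -> float:
    """Pearson-based consistency score with explicit handling of constant inputs.

    Hypothetical sketch of the NaN handling described above; the real
    ConsistencyAnalyzer may differ in naming and structure.
    """
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)

    a_constant = bool(np.all(a == a[0]))
    b_constant = bool(np.all(b == b[0]))

    if a_constant and b_constant:
        # Pearson r is undefined (NaN) for two constant arrays: score 1.0 when
        # the constants agree, 0.0 otherwise (the unequal case is an assumption).
        return 1.0 if a[0] == b[0] else 0.0
    if a_constant or b_constant:
        # One array is constant while the other varies: no measurable
        # correlation, so return 0.0 instead of propagating NaN.
        return 0.0

    # Both arrays vary, so the correlation is well defined.
    return float(np.corrcoef(a, b)[0, 1])
```

For example, pearson_consistency([3, 3, 3], [3, 3, 3]) returns 1.0 and pearson_consistency([3, 3, 3], [1, 2, 3]) returns 0.0, where a bare np.corrcoef call would produce NaN.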

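A minimal sketch of the mock response shape mentioned in the LLMGrader highlight, assuming parsed holds a plain dict; the container types and surrounding test code are assumptions, not the actual fixtures.

```python
from types import SimpleNamespace

# Hypothetical mock of an LLM grading response in which score and reason live
# under the parsed attribute rather than at the top level of the response.
mock_response = SimpleNamespace(
    parsed={"score": 0.8, "reason": "Response directly answers the query."}
)

assert mock_response.parsed["score"] == 0.8
assert "reason" in mock_response.parsed
```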

@gemini-code-assist (bot) left a comment


Code Review

This pull request refactors the consistency analyzer and updates several agent grader tests to improve robustness and align with new parameter naming conventions. The changes include adding NaN handling to the consistency analyzer and updating test signatures across multiple files. My review identified a couple of minor issues in the tests: a leftover debug print statement and a significantly lowered accuracy threshold that might mask test flakiness. Addressing these points will help ensure the tests are clean and effective.


```diff
  # Assert that quality metrics meet expected thresholds
- assert accuracy_result.accuracy >= 0.7, f"Accuracy below threshold: {accuracy_result.accuracy}"
+ assert accuracy_result.accuracy >= 0.5, f"Accuracy below threshold: {accuracy_result.accuracy}"
```

Severity: medium

The accuracy threshold has been lowered from 0.7 to 0.5. This is a significant reduction and might be masking instability or poor performance of the model on this test case. Could you provide some context for this change? It would be better to investigate the cause of the lower accuracy and improve the test or model prompt if possible, rather than lowering the quality bar.

Comment on lines +244 to +245
```python
# Print accuracy for debugging
print(f"Accuracy: {accuracy_result.accuracy}")
```

Severity: medium

This print statement appears to be for debugging purposes. It should be removed before merging to keep the test output clean.

