
Feature Request: Return both score and reason from metric.measure() #1350

Open
Mujae opened this issue Feb 8, 2025 · 2 comments

Comments


Mujae commented Feb 8, 2025

Description

Currently, metric.measure() only returns the score, while evaluate() returns both score and reason. Since measure() internally calls evaluate() and stores the reason in self.reason, it would be more convenient if measure() returned both values directly.

This would be particularly useful when evaluating in batches, as it avoids the need to access metric.reason separately for each instance. Instead of handling post-processing in multiple steps, it would be cleaner and more efficient to get both score and reason as return values from measure().
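To make the current pattern concrete, here is a minimal sketch using a toy stand-in class (not the real deepeval classes, whose scoring requires an LLM call): measure() returns only the score, so batch loops need an extra attribute access per case to collect the reason.

```python
class ToyMetric:
    """Toy stand-in for a metric whose evaluate() yields (score, reason)."""

    def evaluate(self, test_case):
        # Hypothetical scoring logic, for illustration only.
        score = 1.0 if test_case["actual"] == test_case["expected"] else 0.0
        reason = "exact match" if score == 1.0 else "outputs differ"
        return score, reason

    def measure(self, test_case):
        # Mirrors the behaviour described above: the reason is stored
        # on the instance, but only the score is returned.
        self.score, self.reason = self.evaluate(test_case)
        return self.score


metric = ToyMetric()
cases = [
    {"actual": "a", "expected": "a"},
    {"actual": "b", "expected": "c"},
]
results = []
for case in cases:
    score = metric.measure(case)
    # Extra step per case: fetch the reason from the instance attribute.
    results.append((score, metric.reason))
```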

Proposed Solution

Modify metric.measure() to return a tuple (score, reason), similar to what evaluate() provides, rather than requiring users to retrieve metric.reason separately.

This would make the API more intuitive and reduce extra steps in evaluation pipelines.
Would love to hear your thoughts on this.

@Mujae Mujae changed the title Feature Request: Return both score and reason from metric.measure() Feature Request: Return both score and reason from G-evalmetric.measure() Feb 8, 2025
@Mujae Mujae changed the title Feature Request: Return both score and reason from G-evalmetric.measure() Feature Request: Return both score and reason from metric.measure() Feb 8, 2025
penguine-ip (Contributor) commented

Hey @Mujae, thanks for the suggestion! The thing with reason is that it is not required when include_reason is False. Those who need both the score and the reason can simply access metric.score and metric.reason.


Mujae commented Feb 12, 2025

Thank you for your response!
Yes, that's correct. However, G-Eval always returns a reason, so it has no include_reason option. Given that, if measure() itself returned the reason, there would be no need to access metric.reason separately, which is why I made the suggestion.
