non boolean custom evals #82

Merged: merged 5 commits into main on Jul 5, 2024

Conversation

@akshat-g (Contributor) commented on Jul 3, 2024

Summary by CodeRabbit

  • New Features

    • Introduced a new evaluation capability in the CustomPrompt class to handle different output types ('boolean' and 'numeric').
  • Enhancements

    • Added new enum values (CUSTOM_PROMPT_SCORE and SCORE) to the MetricType class.
  • Documentation

    • Updated run_custom_eval.ipynb with new execution counts, response texts, and display scores for better clarity on evaluations.

coderabbitai bot (Contributor) commented on Jul 3, 2024

Warning

Review failed

The head commit changed during the review from 702f1c7 to 3ecdf25.

Walkthrough

Recent changes enhance the CustomPrompt class in evaluator.py by adding support for different output types (boolean and numeric). This update includes new methods (_system_message, _evaluate) and the introduction of the _output_type attribute. Additionally, new metric types (CUSTOM_PROMPT_SCORE and SCORE) are added to the MetricType class. Finally, the run_custom_eval.ipynb example notebook is updated to reflect these new functionalities.
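
For orientation, here is a minimal usage sketch of the evaluator with the new output types. The constructor and run() parameter names below (eval_prompt, output_type, model, response) are assumptions based on this walkthrough and the example notebook, not signatures confirmed by the PR.

# Hypothetical usage sketch; the names flagged below are assumptions, not confirmed by this PR.
from athina.evals.llm.custom_prompt.evaluator import CustomPrompt

numeric_eval = CustomPrompt(
    eval_prompt="Rate how helpful the response is on a scale of 1 to 5: {response}",  # assumed parameter name
    output_type="numeric",  # new in this PR; "boolean" keeps the original pass/fail behaviour
    model="gpt-4o",  # illustrative model choice
)

# Assumed entry point: athina evaluators generally expose a run(**kwargs) method.
result = numeric_eval.run(response="The capital of France is Paris.")
print(result)  # expected to include a custom_prompt_score metric for numeric evals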

Changes

Files and change summary:
  • athina/evals/llm/custom_prompt/evaluator.py: Added the _output_type attribute, new methods (_system_message, _evaluate), and related logic to handle different output types.
  • athina/metrics/metric_type.py: Added new enum values CUSTOM_PROMPT_SCORE and SCORE to the MetricType class.
  • examples/run_custom_eval.ipynb: Updated execution counts, response texts, and custom_prompt_score values to reflect the new functionality.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CustomPrompt
    participant LLM
    participant Logger
    participant Metrics

    User->>CustomPrompt: Initialize(output_type='boolean')
    CustomPrompt->>Logger: Log initialization
    User->>CustomPrompt: _system_message()
    CustomPrompt->>Logger: Log system message construction
    CustomPrompt-->>User: Return system message
    User->>CustomPrompt: _evaluate(kwargs)
    CustomPrompt->>Logger: Log evaluation start
    CustomPrompt->>LLM: Run completion with constructed prompt
    LLM-->>CustomPrompt: Return response
    CustomPrompt->>Metrics: Process response based on output type
    CustomPrompt-->>User: Return EvalResult
    CustomPrompt->>Logger: Log evaluation end

Poem

In the land of code where changes sprout,
New types and metrics dance about. 🌟
Custom Prompts now judge with flair,
Boolean or numeric, they handle with care. 🐇✨
A rabbit cheers this coding spree,
Here’s to new paths in LLM’s sea! 🚀


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 090d6d8 and bc02015.

Files selected for processing (2)
  • athina/evals/llm/custom_prompt/evaluator.py (5 hunks)
  • athina/metrics/metric_type.py (1 hunks)
Additional comments not posted (6)
athina/metrics/metric_type.py (1)

31-31: LGTM! Verify the usage of the new enum value.

The addition of the CUSTOM_PROMPT_SCORE enum value looks good. Ensure that it is correctly used in the codebase.
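
For reference, a minimal sketch of what the addition might look like; only the member names CUSTOM_PROMPT_SCORE and SCORE are confirmed here, and the string values below are assumed from the custom_prompt_score key used in the example notebook.

from enum import Enum

class MetricType(Enum):
    # ...existing members omitted...
    # Assumed string values; only the member names are confirmed by this PR.
    CUSTOM_PROMPT_SCORE = "custom_prompt_score"
    SCORE = "score"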

Verification successful

The new enum value CUSTOM_PROMPT_SCORE is correctly integrated and used in the codebase.

  • athina/metrics/metric_type.py: Definition of CUSTOM_PROMPT_SCORE.
  • athina/evals/llm/custom_prompt/evaluator.py: Usage in EvalResultMetric.
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the usage of the new enum value `CUSTOM_PROMPT_SCORE`.

# Test: Search for the usage of the new enum value. Expect: Correct usage in the codebase.
rg --type python $'CUSTOM_PROMPT_SCORE'

Length of output: 69



Script:

#!/bin/bash
# Description: Verify the usage of the new enum value `CUSTOM_PROMPT_SCORE`.

# Test: Search for the usage of the new enum value. Expect: Correct usage in the codebase.
rg 'CUSTOM_PROMPT_SCORE' --glob '*.py'

Length of output: 263

athina/evals/llm/custom_prompt/evaluator.py (5)

1-2: LGTM! Approved imports.

The new imports time, logger, EvalResult, EvalResultMetric, and MetricType are necessary for the new functionality and look good.

Also applies to: 11-12


20-20: LGTM! Approved attribute addition.

The addition of the _output_type attribute is necessary for handling different output types and looks good.


30-30: LGTM! Approved constructor changes.

The initialization of the _output_type attribute in the constructor is necessary and looks good.

Also applies to: 45-45


105-125: LGTM! Approved _system_message method.

The _system_message method correctly handles 'boolean' and 'numeric' output types by returning appropriate instructions. The default case is handled by calling the superclass method, which looks good.
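
A minimal sketch of that branching, as it might appear inside the CustomPrompt class; the instruction strings are placeholders and the actual prompt wording in the PR may differ.

def _system_message(self) -> str:
    # Sketch only: the instruction text is illustrative, not the PR's wording.
    if self._output_type == "boolean":
        return (
            "Evaluate the response against the grading criteria and answer "
            "with a Pass/Fail verdict and a brief explanation."
        )
    elif self._output_type == "numeric":
        return (
            "Evaluate the response against the grading criteria and return "
            "a numeric score and a brief explanation."
        )
    # Any other output type falls back to the superclass implementation.
    return super()._system_message()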

Verification successful

LGTM! Approved _system_message method.

The _system_message method correctly handles 'boolean' and 'numeric' output types by returning appropriate instructions. The default case is handled by calling the superclass method, which has been verified to exist in LlmEvaluator.

  • athina/evals/llm/llm_evaluator.py: LlmEvaluator class and _system_message method confirmed.
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the superclass method `_system_message`.

# Test: Search for the implementation of `_system_message` in the superclass. Expect: Correct implementation.
ast-grep --lang python --pattern $'class LlmEvaluator {
  $$$
  def _system_message($_) {
    $$$
  }
  $$$  
}'

Length of output: 458



Script:

#!/bin/bash
# Description: Verify the superclass method `_system_message`.

# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
ast-grep --lang python --pattern 'class LlmEvaluator { $$$ }'

# Search for the _system_message method within the LlmEvaluator class
ast-grep --lang python --pattern 'class LlmEvaluator { $$$ def _system_message($_) { $$$ } $$$ }'

Length of output: 957



Script:

#!/bin/bash
# Description: Verify the superclass method `_system_message`.

# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
ast-grep --lang python --pattern 'class LlmEvaluator' --json

# Search for the _system_message method within the LlmEvaluator class
ast-grep --lang python --pattern 'def _system_message' --json

Length of output: 822



Script:

#!/bin/bash
# Description: Verify the superclass method `_system_message`.

# Test: Search for the LlmEvaluator class and its methods to verify the existence of the _system_message method.

# Search for the LlmEvaluator class definition
rg 'class LlmEvaluator'

# Search for the _system_message method within the codebase
rg 'def _system_message'

Length of output: 338


127-176: LGTM! Approved _evaluate method.

The _evaluate method correctly handles different output types and generates evaluation metrics. The method also handles exceptions and logs errors, which looks good.
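
To make the branch concrete, here is a framework-agnostic sketch of how the parsed LLM response might be turned into metrics per output type. In the actual code these would presumably be EvalResultMetric instances keyed by MetricType values; the helper name, response keys, and metric ids below are hypothetical.

def build_metrics(output_type: str, parsed_response: dict) -> list:
    # Hypothetical helper: response keys ("result", "score") and metric ids are assumed.
    metrics = []
    if output_type == "boolean":
        passed = parsed_response.get("result") == "Pass"
        metrics.append({"id": "passed", "value": float(passed)})
    elif output_type == "numeric":
        score = float(parsed_response.get("score", 0))
        metrics.append({"id": "custom_prompt_score", "value": score})
    return metrics

# Example: build_metrics("numeric", {"score": 4, "explanation": "..."})
# -> [{"id": "custom_prompt_score", "value": 4.0}]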

Verification successful

LGTM! Approved _evaluate method.

The _evaluate method correctly handles different output types and generates evaluation metrics. The method also handles exceptions and logs errors, which looks good. The verification confirms that exceptions are appropriately managed across various usages.

  • athina/evals/base_evaluator.py
  • athina/evals/guardrails/profanity_free/evaluator.py
  • athina/evals/guardrails/unusual_prompt/evaluator.py
  • athina/evals/guardrails/sfw/evaluator.py
  • athina/evals/guardrails/sensitive_topics/evaluator.py
  • athina/evals/guardrails/reading_time/evaluator.py
  • athina/evals/guardrails/toxic_language/evaluator.py
  • athina/evals/guardrails/restrict_to_topic/evaluator.py
  • athina/evals/grounded/grounded_evaluator.py
  • athina/evals/llm/summary_accuracy/evaluator.py
  • athina/evals/guardrails/no_secrets_present/evaluator.py
  • athina/evals/function/function_evaluator.py
  • athina/evals/ragas/ragas_evaluator.py
  • athina/evals/guardrails/gibberish_text/evaluator.py
  • athina/evals/guardrails/detect_pii/evaluator.py
  • athina/evals/guardrails/correct_language/evaluator.py
  • athina/evals/guardrails/politeness_check/evaluator.py
Scripts executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the exception handling in the `_evaluate` method.

# Test: Search for the usage of the `_evaluate` method. Expect: Correct exception handling.
rg --type python $'_evaluate'

Length of output: 59



Script:

#!/bin/bash
# Description: Verify the exception handling in the `_evaluate` method.

# Test: Search for the usage of the `_evaluate` method. Expect: Correct exception handling.
rg '_evaluate' --glob '*.py'

Length of output: 4653



Script:

#!/bin/bash
# Description: Extract the context around the `_evaluate` calls to verify exception handling.

# Extract lines around `_evaluate` calls in Python files.
rg '_evaluate' --glob '*.py' -A 3 -B 3

Length of output: 24784



Script:

#!/bin/bash
# Description: Verify exception handling around `_evaluate` calls in identified files.

# Check for exception handling context around `_evaluate` calls.
rg '_evaluate' --glob '*.py' -A 10 -B 10 | grep -E 'try|except|logger.error|traceback'

Length of output: 2088

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between bc02015 and cc365a4.

Files selected for processing (3)
  • athina/evals/llm/custom_prompt/evaluator.py (5 hunks)
  • athina/metrics/metric_type.py (1 hunks)
  • examples/run_custom_eval.ipynb (11 hunks)
Files skipped from review due to trivial changes (1)
  • examples/run_custom_eval.ipynb
Files skipped from review as they are similar to previous changes (2)
  • athina/evals/llm/custom_prompt/evaluator.py
  • athina/metrics/metric_type.py

@vivek-athina (Collaborator) left a comment

LGTM

@akshat-g akshat-g merged commit 10780b1 into main Jul 5, 2024