Merge pull request #455 from parea-ai/PAI-670-relax-0-to-1-range-of-eval-metrics

feat: only require that eval score is convertible to float
joschkabraun authored Feb 13, 2024
2 parents daf927a + 67aca63 commit cfb44cd
Showing 3 changed files with 3 additions and 3 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -36,7 +36,7 @@ The scores associated with the traces will be logged to the Parea [dashboard](ht
 local CSV file if you don't have a Parea API key.
 
 Evaluation functions receive an argument `log` (of type [Log](parea/schemas/models.py)) and should return a
-float between 0 (bad) and 1 (good) inclusive. You don't need to start from scratch, there are pre-defined evaluation
+float. You don't need to start from scratch, there are pre-defined evaluation
 functions for [general purpose](parea/evals/general),
 [chat](parea/evals/chat), [RAG](parea/evals/rag), and [summarization](parea/evals/summary) apps :)

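For context on the relaxed contract described in the README hunk above, a minimal sketch of a custom evaluation function under the new rule (the function name and scoring logic are hypothetical, not part of this commit, and it assumes `Log` exposes an `output` field):

    from parea.schemas.models import Log

    def answer_word_count(log: Log) -> float:
        # Hypothetical eval: any score convertible to float is now accepted,
        # not only values between 0 (bad) and 1 (good).
        output = log.output or ""
        return float(len(output.split()))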
2 changes: 1 addition & 1 deletion parea/schemas/models.py
@@ -84,7 +84,7 @@ class FeedbackRequest:
 @define
 class NamedEvaluationScore:
     name: str
-    score: float = field(validator=[validators.ge(0), validators.le(1)])
+    score: float
 
 
 @define
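As an illustration of what removing the attrs validators changes (assuming `NamedEvaluationScore` remains importable from `parea/schemas/models.py`; the name and value below are made up):

    from parea.schemas.models import NamedEvaluationScore

    # Before this commit, the attrs validators ge(0) and le(1) rejected this value;
    # with the change, any float is accepted.
    score = NamedEvaluationScore(name="word_count", score=12.0)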
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -6,7 +6,7 @@ build-backend = "poetry.core.masonry.api"
 [tool.poetry]
 name = "parea-ai"
 packages = [{ include = "parea" }]
-version = "0.2.68"
+version = "0.2.69"
 description = "Parea python sdk"
 readme = "README.md"
 authors = ["joel-parea-ai <[email protected]>"]
