Mod: Update docstring.
Labbeti committed Jul 21, 2023
1 parent 72eda97 commit 2eae3fe
Showing 4 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion src/aac_metrics/functional/fense.py
@@ -66,7 +66,7 @@ def fense(
     :param penalty: The penalty coefficient applied. Higher value means to lower the cos-sim scores when an error is detected. defaults to 0.9.
     :param device: The PyTorch device used to run FENSE models. If "auto", it will use cuda if available. defaults to "auto".
     :param batch_size: The batch size of the sBERT and echecker models. defaults to 32.
-    :param reset_state: If True, reset the state of the PyTorch global generator after the pre-trained model are built. defaults to True.
+    :param reset_state: If True, reset the state of the PyTorch global generator after the initialization of the pre-trained models. defaults to True.
     :param return_probs: If True, return each individual error probability given by the fluency detector model. defaults to True.
     :param verbose: The verbose level. defaults to 0.
     :returns: A tuple of globals and locals scores or a scalar tensor with the main global score.
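All four hunks apply the same wording fix to the :param reset_state: docstring. Below is a minimal sketch of the behaviour that docstring describes: the state of the PyTorch global generator is saved before the pre-trained models are initialized and restored afterwards, so building the sBERT/echecker models does not perturb the caller's random draws. The helper names and the placeholder model are illustrative assumptions, not the library's actual implementation.

import torch

def _load_pretrained_models(device: str):
    # Placeholder for the sBERT and echecker loading performed by FENSE;
    # initializing real pre-trained models may consume the global RNG.
    return torch.nn.Linear(8, 8).to(device)

def build_models(device: str = "cpu", reset_state: bool = True):
    if reset_state:
        # Save the state of the PyTorch global generator.
        rng_state = torch.random.get_rng_state()
    models = _load_pretrained_models(device)
    if reset_state:
        # Restore it after the initialization of the pre-trained models.
        torch.random.set_rng_state(rng_state)
    return models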
2 changes: 1 addition & 1 deletion src/aac_metrics/functional/fluerr.py
@@ -123,7 +123,7 @@ def fluerr(
     :param error_threshold: The threshold used to detect fluency errors for echecker model. defaults to 0.9.
     :param device: The PyTorch device used to run FENSE models. If "auto", it will use cuda if available. defaults to "auto".
     :param batch_size: The batch size of the echecker models. defaults to 32.
-    :param reset_state: If True, reset the state of the PyTorch global generator after the pre-trained model are built. defaults to True.
+    :param reset_state: If True, reset the state of the PyTorch global generator after the initialization of the pre-trained models. defaults to True.
     :param return_probs: If True, return each individual error probability given by the fluency detector model. defaults to True.
     :param verbose: The verbose level. defaults to 0.
     :returns: A tuple of globals and locals scores or a scalar tensor with the main global score.
2 changes: 1 addition & 1 deletion src/aac_metrics/functional/sbert_sim.py
@@ -42,7 +42,7 @@ def sbert_sim(
     :param sbert_model: The sentence BERT model used to extract sentence embeddings for cosine-similarity. defaults to "paraphrase-TinyBERT-L6-v2".
     :param device: The PyTorch device used to run FENSE models. If "auto", it will use cuda if available. defaults to "auto".
     :param batch_size: The batch size of the sBERT models. defaults to 32.
-    :param reset_state: If True, reset the state of the PyTorch global generator after the pre-trained model are built. defaults to True.
+    :param reset_state: If True, reset the state of the PyTorch global generator after the initialization of the pre-trained models. defaults to True.
     :param verbose: The verbose level. defaults to 0.
     :returns: A tuple of globals and locals scores or a scalar tensor with the main global score.
     """
2 changes: 1 addition & 1 deletion src/aac_metrics/functional/spider_fl.py
@@ -83,7 +83,7 @@ def spider_fl(
     :param error_threshold: The threshold used to detect fluency errors for echecker model. defaults to 0.9.
     :param device: The PyTorch device used to run FENSE models. If "auto", it will use cuda if available. defaults to "auto".
     :param batch_size: The batch size of the sBERT and echecker models. defaults to 32.
-    :param reset_state: If True, reset the state of the PyTorch global generator after the pre-trained model are built. defaults to True.
+    :param reset_state: If True, reset the state of the PyTorch global generator after the initialization of the pre-trained models. defaults to True.
     :param return_probs: If True, return each individual error probability given by the fluency detector model. defaults to True.
     :param penalty: The penalty coefficient applied. Higher value means to lower the cos-sim scores when an error is detected. defaults to 0.9.
     :param verbose: The verbose level. defaults to 0.
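For context, here is a hedged usage sketch of the functional fense metric with the parameters documented in these hunks. The candidates/mult_references argument names and the unpacking of the returned scores follow the aac-metrics documentation but are assumptions, not something this commit guarantees.

from aac_metrics.functional.fense import fense

candidates = ["a man speaks while a dog barks in the background"]
mult_references = [[
    "a man is talking and a dog barks",
    "someone speaks near a barking dog",
]]

# device="auto" picks CUDA when available; reset_state=True restores the
# global generator state after the pre-trained models are initialized.
corpus_scores, sents_scores = fense(
    candidates,
    mult_references,
    device="auto",
    batch_size=32,
    reset_state=True,
    return_probs=True,
    verbose=0,
)
print(corpus_scores)  # global scores; sents_scores holds the per-sentence values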
