Releases · Tiiiger/bert_score
Version 0.3.13
Version 0.3.12
Version 0.3.11
- Updated to version 0.3.11
- Support 6 DeBERTa v3 models
- Support 3 ByT5 models
Version 0.3.10
- Updated to version 0.3.10
- Support 8 SimCSE models
- Fix scibert support to be compatible with transformers >= 4.0.0
- Add scripts for reproducing some results in our paper (see this folder)
- Support fast tokenizers in Hugging Face transformers via --use_fast_tokenizer (see the sketch after this list). Note that you will get different scores because of differences in the tokenizer implementations (#106).
- Fix the non-zero recall problem for empty candidate strings (#107).
- Add Turkish BERT support (#108).
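A minimal sketch of the new option through the Python API, assuming the `use_fast_tokenizer` keyword mirrors the CLI flag; the candidate and reference strings are made up for illustration:

```python
# Minimal sketch: scoring with the fast tokenizer enabled.
# The candidate/reference strings below are made up for illustration.
from bert_score import score

cands = ["The quick brown fox jumps over the lazy dog."]
refs = ["A quick brown fox leaps over a lazy dog."]

# use_fast_tokenizer=True selects the fast tokenizer implementation;
# expect slightly different scores than the default (see #106).
P, R, F1 = score(cands, refs, lang="en", use_fast_tokenizer=True)
print(F1.mean().item())
```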
Version 0.3.9
- Support 3 BigBird models
- Fix bugs for mBART and T5
- Support 4 mT5 models as requested (#93)
Version 0.3.8
- Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, mT5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, BlenderBot, etc. Among them, DeBERTa achieves higher correlation with human scores than RoBERTa (our default) on the WMT16 dataset. The correlations are presented in this Google sheet.
- Please consider using --model_type microsoft/deberta-xlarge-mnli or --model_type microsoft/deberta-large-mnli (faster) if you want the scores to correlate better with human scores (see the sketch after this list).
- Add baseline files for DeBERTa models.
- Add example code to generate baseline files (please see the details).
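A minimal sketch of selecting the recommended DeBERTa model through the Python API, assuming the `model_type` keyword mirrors the CLI flag; the example strings are made up:

```python
# Minimal sketch: use the recommended DeBERTa model instead of the
# default RoBERTa. Example strings are made up for illustration.
from bert_score import score

cands = ["The cat sat on the mat."]
refs = ["There is a cat on the mat."]

P, R, F1 = score(
    cands,
    refs,
    model_type="microsoft/deberta-xlarge-mnli",  # or deberta-large-mnli (faster)
    lang="en",
)
print(F1.mean().item())
```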
Version 0.3.7
Version 0.3.6
- Updated to version 0.3.6
- Support custom baseline files (#74)
- The option --rescale-with-baseline is changed to --rescale_with_baseline so that it is consistent with other options (see the sketch after this list).
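A minimal sketch combining the renamed option with a custom baseline file, assuming the Python API exposes these as the `rescale_with_baseline` and `baseline_path` keywords; the file path is hypothetical:

```python
# Minimal sketch: baseline rescaling with the renamed option and a
# custom baseline file. The path "my_baseline.tsv" is hypothetical.
from bert_score import score

cands = ["Hello world."]
refs = ["Hi, world."]

P, R, F1 = score(
    cands,
    refs,
    lang="en",
    rescale_with_baseline=True,       # CLI: --rescale_with_baseline
    baseline_path="my_baseline.tsv",  # custom baseline file (#74)
)
print(F1.mean().item())
```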