Fix leaderboard summary average score to micro-average
lsz05 committed Sep 11, 2024
1 parent d0a2218 commit 9df586a
Showing 2 changed files with 40 additions and 30 deletions.
56 changes: 28 additions & 28 deletions leaderboard.md
@@ -3,37 +3,37 @@ This leaderboard shows the results stored under `docs/results`. The scores are a

## Summary

The summary shows the average scores within each task.
The summary shows the average scores within each task. The average score is the average of scores by dataset.

| Model | Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
|:----------------------------------------------|:----------|:------------|:----------|:-----------------|:------------|:-------------|:---------------------|
| OpenAI/text-embedding-3-large | **73.97** | **74.48** | 82.52 | **77.58** | **93.58** | 53.32 | 62.35 |
| cl-nagoya/ruri-large | 73.45 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| cl-nagoya/ruri-base | 72.95 | 69.82 | 82.87 | 75.58 | 92.91 | **54.16** | 62.38 |
| pkshatech/GLuCoSE-base-ja-v2 | 72.63 | 71.88 | **83.25** | 74.70 | 92.90 | 50.68 | 62.40 |
| cl-nagoya/ruri-small | 72.45 | 69.41 | 82.79 | 76.22 | 93.00 | 51.19 | 62.11 |
| intfloat/multilingual-e5-large | 71.65 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| pkshatech/RoSEtta-base-ja | 71.23 | 71.87 | 81.32 | 72.39 | 92.68 | 46.97 | 62.19 |
| OpenAI/text-embedding-3-small | 70.86 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| pkshatech/GLuCoSE-base-ja | 70.44 | 59.02 | 78.71 | 76.82 | 91.90 | 49.78 | **66.39** |
| intfloat/multilingual-e5-base | 70.12 | 68.21 | 79.84 | 69.30 | 92.85 | 48.26 | 62.26 |
| intfloat/multilingual-e5-small | 69.52 | 67.27 | 80.07 | 67.62 | 93.03 | 46.91 | 62.19 |
| OpenAI/text-embedding-ada-002 | 69.48 | 64.38 | 79.02 | 69.75 | 93.04 | 48.30 | 62.40 |
| cl-nagoya/sup-simcse-ja-base | 68.56 | 49.64 | 82.05 | 73.47 | 91.83 | 51.79 | 62.57 |
| MU-Kindai/Japanese-SimCSE-BERT-large-unsup | 66.89 | 47.38 | 78.99 | 73.13 | 91.30 | 48.25 | 62.27 |
| oshizo/sbert-jsnli-luke-japanese-base-lite | 66.75 | 43.00 | 76.60 | 76.61 | 91.56 | 50.33 | 62.38 |
| cl-nagoya/sup-simcse-ja-large | 66.51 | 37.62 | 83.18 | 73.73 | 91.48 | 50.56 | 62.51 |
| cl-nagoya/unsup-simcse-ja-large | 66.27 | 40.53 | 80.56 | 74.66 | 90.95 | 48.41 | 62.49 |
| MU-Kindai/Japanese-SimCSE-BERT-base-unsup | 66.23 | 46.36 | 77.49 | 73.30 | 91.16 | 46.68 | 62.38 |
| MU-Kindai/Japanese-SimCSE-BERT-large-sup | 65.28 | 40.82 | 78.28 | 73.47 | 90.95 | 45.81 | 62.35 |
| MU-Kindai/Japanese-MixCSE-BERT-base | 65.14 | 42.59 | 77.05 | 72.90 | 91.01 | 44.95 | 62.33 |
| cl-nagoya/unsup-simcse-ja-base | 65.07 | 40.23 | 78.72 | 73.07 | 91.16 | 44.77 | 62.44 |
| MU-Kindai/Japanese-DiffCSE-BERT-base | 64.77 | 41.79 | 75.50 | 73.77 | 90.95 | 44.22 | 62.38 |
| sentence-transformers/LaBSE | 64.70 | 40.12 | 76.56 | 72.66 | 91.63 | 44.88 | 62.33 |
| pkshatech/simcse-ja-bert-base-clcmlp | 64.42 | 37.00 | 76.80 | 71.30 | 91.49 | 47.53 | 62.40 |
| MU-Kindai/Japanese-SimCSE-BERT-base-sup | 64.15 | 41.32 | 74.66 | 72.76 | 90.66 | 43.11 | 62.37 |
| colorfulscoop/sbert-base-ja | 58.85 | 16.52 | 70.42 | 69.07 | 89.97 | 44.81 | 62.31 |
| sentence-transformers/stsb-xlm-r-multilingual | 58.01 | 21.00 | 75.40 | 71.84 | 90.20 | 27.46 | 62.20 |
| OpenAI/text-embedding-3-large | **74.05** | **74.48** | 82.52 | **77.58** | **93.58** | 53.32 | 62.35 |
| cl-nagoya/ruri-large | 73.31 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| pkshatech/GLuCoSE-base-ja-v2 | 72.07 | 71.88 | **83.25** | 74.70 | 92.90 | 50.68 | 62.40 |
| cl-nagoya/ruri-base | 71.91 | 69.82 | 82.87 | 75.58 | 92.91 | **54.16** | 62.38 |
| cl-nagoya/ruri-small | 71.53 | 69.41 | 82.79 | 76.22 | 93.00 | 51.19 | 62.11 |
| intfloat/multilingual-e5-large | 70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| pkshatech/RoSEtta-base-ja | 70.76 | 71.87 | 81.32 | 72.39 | 92.68 | 46.97 | 62.19 |
| OpenAI/text-embedding-3-small | 69.18 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| intfloat/multilingual-e5-base | 68.61 | 68.21 | 79.84 | 69.30 | 92.85 | 48.26 | 62.26 |
| intfloat/multilingual-e5-small | 67.71 | 67.27 | 80.07 | 67.62 | 93.03 | 46.91 | 62.19 |
| pkshatech/GLuCoSE-base-ja | 67.29 | 59.02 | 78.71 | 76.82 | 91.90 | 49.78 | **66.39** |
| OpenAI/text-embedding-ada-002 | 67.21 | 64.38 | 79.02 | 69.75 | 93.04 | 48.30 | 62.40 |
| cl-nagoya/sup-simcse-ja-base | 63.36 | 49.64 | 82.05 | 73.47 | 91.83 | 51.79 | 62.57 |
| MU-Kindai/Japanese-SimCSE-BERT-large-unsup | 61.55 | 47.38 | 78.99 | 73.13 | 91.30 | 48.25 | 62.27 |
| MU-Kindai/Japanese-SimCSE-BERT-base-unsup | 60.83 | 46.36 | 77.49 | 73.30 | 91.16 | 46.68 | 62.38 |
| oshizo/sbert-jsnli-luke-japanese-base-lite | 60.77 | 43.00 | 76.60 | 76.61 | 91.56 | 50.33 | 62.38 |
| cl-nagoya/unsup-simcse-ja-large | 59.58 | 40.53 | 80.56 | 74.66 | 90.95 | 48.41 | 62.49 |
| MU-Kindai/Japanese-MixCSE-BERT-base | 59.03 | 42.59 | 77.05 | 72.90 | 91.01 | 44.95 | 62.33 |
| cl-nagoya/sup-simcse-ja-large | 58.88 | 37.62 | 83.18 | 73.73 | 91.48 | 50.56 | 62.51 |
| MU-Kindai/Japanese-SimCSE-BERT-large-sup | 58.77 | 40.82 | 78.28 | 73.47 | 90.95 | 45.81 | 62.35 |
| MU-Kindai/Japanese-DiffCSE-BERT-base | 58.66 | 41.79 | 75.50 | 73.77 | 90.95 | 44.22 | 62.38 |
| cl-nagoya/unsup-simcse-ja-base | 58.39 | 40.23 | 78.72 | 73.07 | 91.16 | 44.77 | 62.44 |
| sentence-transformers/LaBSE | 58.01 | 40.12 | 76.56 | 72.66 | 91.63 | 44.88 | 62.33 |
| MU-Kindai/Japanese-SimCSE-BERT-base-sup | 57.97 | 41.32 | 74.66 | 72.76 | 90.66 | 43.11 | 62.37 |
| pkshatech/simcse-ja-bert-base-clcmlp | 56.86 | 37.00 | 76.80 | 71.30 | 91.49 | 47.53 | 62.40 |
| sentence-transformers/stsb-xlm-r-multilingual | 48.21 | 21.00 | 75.40 | 71.84 | 90.20 | 27.46 | 62.20 |
| colorfulscoop/sbert-base-ja | 47.38 | 16.52 | 70.42 | 69.07 | 89.97 | 44.81 | 62.31 |

## Retrieval
| Model | Avg. | jagovfaqs_22k<br>(ndcg@10) | jaqket<br>(ndcg@10) | mrtydi<br>(ndcg@10) | nlp_journal_abs_intro<br>(ndcg@10) | nlp_journal_title_abs<br>(ndcg@10) | nlp_journal_title_intro<br>(ndcg@10) |
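Before the `make_leaderboard.py` diff below, a note on what the new `Avg.` column in the Summary table actually computes. Previously the summary averaged the per-task averages (a macro-average); after this commit it averages over all dataset scores directly (a micro-average). As a sketch, with $T$ the set of tasks, $D_t$ the datasets belonging to task $t$, and $s_d$ a model's score on dataset $d$ (notation introduced here for illustration, not taken from the repository):

```latex
\mathrm{Avg}_{\text{old}} = \frac{1}{|T|} \sum_{t \in T} \frac{1}{|D_t|} \sum_{d \in D_t} s_d
\qquad
\mathrm{Avg}_{\text{new}} = \frac{1}{\sum_{t \in T} |D_t|} \sum_{t \in T} \sum_{d \in D_t} s_d
```

The two coincide only when every task contributes the same number of datasets. Here they do not (Retrieval alone has six datasets, as the table header above shows), so tasks with more datasets now weigh more heavily and most summary scores shift downward.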
14 changes: 12 additions & 2 deletions make_leaderboard.py
@@ -62,7 +62,14 @@ def format_score(score: float) -> str:
     table_list: list[list[str | float]] = []
     for model_signature, dataset_scores in task_results.items():
         model_scores = [dataset_scores[k] for k in dataset_keys]
-        average_score = sum(model_scores) / len(model_scores)
+        if task_name == SUMMARY_KEY:
+            scores_by_dataset = []
+            for _task_name, _task_results in all_results.items():
+                if _task_name != SUMMARY_KEY:
+                    scores_by_dataset.extend(list(_task_results[model_signature].values()))
+            average_score = sum(scores_by_dataset) / len(scores_by_dataset)
+        else:
+            average_score = sum(model_scores) / len(model_scores)
         table_list.append([model_signature, average_score, *model_scores])
 
     # sort by the average score
@@ -97,7 +104,10 @@ def format_score(score: float) -> str:
         f.write(f"## {task_name}\n")
 
         if task_name == SUMMARY_KEY:
-            f.write("\nThe summary shows the average scores within each task.\n\n")
+            f.write(
+                "\nThe summary shows the average scores within each task. "
+                "The average score is the average of scores by dataset.\n\n"
+            )
 
         f.write(markdown_table)
         f.write("\n\n")
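For readers who want to see the difference numerically, here is a minimal standalone sketch of the macro- vs micro-average computed in the first hunk above. It is not part of the commit; the task names, dataset counts, and scores are invented for illustration only.

```python
# Illustration only: invented task names and scores, not actual JMTEB results.
task_scores = {
    "Retrieval": [0.45, 0.40, 0.50, 0.42, 0.48, 0.44],  # 6 datasets, low scores
    "STS": [0.80, 0.82],                                 # 2 datasets
    "Classification": [0.73, 0.74, 0.72],                # 3 datasets
}

# Old summary score: macro-average, i.e. the mean of the per-task means.
task_means = [sum(scores) / len(scores) for scores in task_scores.values()]
macro_average = sum(task_means) / len(task_means)

# New summary score (this commit): micro-average, i.e. the mean over every
# dataset score, so tasks with more datasets carry more weight.
all_scores = [s for scores in task_scores.values() for s in scores]
micro_average = sum(all_scores) / len(all_scores)

print(f"macro-average: {macro_average:.4f}")  # 0.6628
print(f"micro-average: {micro_average:.4f}")  # 0.5909
```

Because the weak Retrieval scores dominate the dataset count in this toy example, the micro-average lands well below the macro-average, mirroring the drops seen in the updated Summary table.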
