fixed typos
takumiohym committed Sep 27, 2024
1 parent 421040e commit df241a4
Showing 2 changed files with 4 additions and 4 deletions.
4 changes: 2 additions & 2 deletions notebooks/text_models/labs/rnn_encoder_decoder.ipynb
@@ -943,7 +943,7 @@
"\n",
"It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.\n",
"\n",
-"The Hugging Face evaluagte framework has an implementation that we will use.\n",
+"The Hugging Face evaluate framework has an implementation that we will use.\n",
"\n",
"We can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.\n",
"\n",
@@ -965,7 +965,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait unitl completes."
+"Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait until completes."
]
},
{
4 changes: 2 additions & 2 deletions notebooks/text_models/solutions/rnn_encoder_decoder.ipynb
@@ -951,7 +951,7 @@
"\n",
"It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.\n",
"\n",
-"The Hugging Face evaluagte framework has an implementation that we will use.\n",
+"The Hugging Face evaluate framework has an implementation that we will use.\n",
"\n",
"We can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.\n",
"\n",
@@ -975,7 +975,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait unitl completes."
+"Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait until completes."
]
},
{
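The notebook cells above describe BLEU as the metric being computed via the Hugging Face `evaluate` framework. The core idea — modified n-gram precision combined with a brevity penalty — can be sketched in plain Python. This is an illustrative sketch only, not the Hugging Face implementation the notebook actually loads:

```python
import math
from collections import Counter

def bleu(prediction, reference, max_order=1):
    """Sentence-level BLEU sketch: clipped n-gram precision with a
    brevity penalty. max_order=1 gives a BLEU-1-style score,
    max_order=4 a BLEU-4-style score."""
    pred, ref = prediction.split(), reference.split()
    precisions = []
    for n in range(1, max_order + 1):
        pred_ngrams = Counter(tuple(pred[i:i + n]) for i in range(len(pred) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Counter & Counter takes the minimum count per n-gram,
        # which is exactly the "clipping" in modified precision.
        overlap = sum((pred_ngrams & ref_ngrams).values())
        precisions.append(overlap / max(sum(pred_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_order)
    # Brevity penalty discourages predictions shorter than the reference.
    bp = 1.0 if len(pred) > len(ref) else math.exp(1 - len(ref) / max(len(pred), 1))
    return bp * geo_mean

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
```

As the notebook notes, a score like this gives no credit to synonyms: a prediction that paraphrases the reference perfectly can still score 0, which is why human evaluation remains preferable when feasible.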
