diff --git a/notebooks/text_models/labs/rnn_encoder_decoder.ipynb b/notebooks/text_models/labs/rnn_encoder_decoder.ipynb
index d64b22e2..6b9dbcd1 100644
--- a/notebooks/text_models/labs/rnn_encoder_decoder.ipynb
+++ b/notebooks/text_models/labs/rnn_encoder_decoder.ipynb
@@ -943,7 +943,7 @@
     "\n",
-    "It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.\n",
+    "It is still imperfect, since it gives no credit to synonyms, and so human evaluation is still best when feasible. However, BLEU is commonly considered the best among bad options for an automated metric.\n",
     "\n",
-    "The Hugging Face evaluagte framework has an implementation that we will use.\n",
+    "The Hugging Face evaluate framework has an implementation that we will use.\n",
     "\n",
-    "We can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.\n",
+    "We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead, we'll calculate it now.\n",
     "\n",
@@ -965,7 +965,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait unitl completes."
+    "Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for a full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait until it completes."
    ]
   },
   {
diff --git a/notebooks/text_models/solutions/rnn_encoder_decoder.ipynb b/notebooks/text_models/solutions/rnn_encoder_decoder.ipynb
index fd25fc65..2e01dbf2 100644
--- a/notebooks/text_models/solutions/rnn_encoder_decoder.ipynb
+++ b/notebooks/text_models/solutions/rnn_encoder_decoder.ipynb
@@ -951,7 +951,7 @@
     "\n",
-    "It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.\n",
+    "It is still imperfect, since it gives no credit to synonyms, and so human evaluation is still best when feasible. However, BLEU is commonly considered the best among bad options for an automated metric.\n",
     "\n",
-    "The Hugging Face evaluagte framework has an implementation that we will use.\n",
+    "The Hugging Face evaluate framework has an implementation that we will use.\n",
     "\n",
-    "We can't run calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.\n",
+    "We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead, we'll calculate it now.\n",
     "\n",
@@ -975,7 +975,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait unitl completes."
+    "Let's now average the `bleu_1` and `bleu_4` scores for all the sentence pairs in the eval set. The next cell takes around 1 minute (8 minutes for a full dataset eval) to run, the bulk of which is decoding the sentences in the validation set. Please wait until it completes."
    ]
   },
   {
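
For reviewers who want to sanity-check the metric these markdown cells describe, here is a minimal sketch of the Hugging Face evaluate BLEU call pattern. The sentence pair is invented, and the bleu_1/bleu_4 naming follows the notebooks' convention of max_order=1 versus the default max_order=4; this is an illustration of the API, not the notebooks' actual evaluation loop.

    # Minimal sketch: per-sentence BLEU with the Hugging Face `evaluate` framework.
    # The sentences below are made-up stand-ins for one decoded translation and its
    # reference; the notebooks' real loop first decodes the validation set.
    import evaluate

    bleu = evaluate.load("bleu")

    predictions = ["the cat sat on the mat"]          # decoded model output
    references = [["the cat is sitting on the mat"]]  # one or more references per prediction

    # max_order=1 counts only unigram matches ("bleu_1"); the default max_order=4
    # uses n-grams up to length 4 and is the usual BLEU score ("bleu_4").
    bleu_1 = bleu.compute(predictions=predictions, references=references, max_order=1)["bleu"]
    bleu_4 = bleu.compute(predictions=predictions, references=references, max_order=4)["bleu"]

    print(f"bleu_1={bleu_1:.3f}  bleu_4={bleu_4:.3f}")

Note that on a single short pair, bleu_4 can come out as 0 when no 4-gram matches (the metric also accepts smooth=True); averaging per-sentence scores over the whole eval set, as the cells above describe, makes the numbers more informative.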