
Commit d742984

grasskin authored and copybara-github committed
Fix small typo
PiperOrigin-RevId: 574223535
1 parent c6fe18c commit d742984

File tree

1 file changed: +1 −1 lines changed


site/en/tutorials/distribute/custom_training.ipynb

Lines changed: 1 addition & 1 deletion
@@ -364,7 +364,7 @@
     "\n",
     " * Input batches shorter than `GLOBAL_BATCH_SIZE` create unpleasant corner cases in several places. In practice, it often works best to avoid them by allowing batches to span epoch boundaries using `Dataset.repeat().batch()` and defining approximate epochs by step counts, not dataset ends. Alternatively, `Dataset.batch(drop_remainder=True)` maintains the notion of epoch but drops the last few examples.\n",
     "\n",
-    " For illustration, this example goes the harder route and allows short batches, so that each training epoch contains each trainig example exactly once.\n",
+    " For illustration, this example goes the harder route and allows short batches, so that each training epoch contains each training example exactly once.\n",
     " \n",
     " Which denominator should be used by `tf.nn.compute_average_loss()`?\n",
     "\n",
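The three batching options described in the changed cell can be illustrated with a short `tf.data` sketch. This is a minimal example assuming a toy dataset of 10 elements and a hypothetical `GLOBAL_BATCH_SIZE` of 4 (neither is from the tutorial itself):

```python
import tensorflow as tf

# Hypothetical toy setup: 10 examples, batch size 4, so 10 % 4 != 0
# and the last per-epoch batch is short.
dataset = tf.data.Dataset.range(10)
GLOBAL_BATCH_SIZE = 4

# Option 1 (the route the tutorial takes): allow short batches.
# Each epoch sees every example exactly once, but the final batch
# of the epoch holds only 2 examples.
sizes = [int(b.shape[0]) for b in dataset.batch(GLOBAL_BATCH_SIZE)]
print(sizes)  # [4, 4, 2]

# Option 2: drop the remainder. Every batch is full-size, keeping the
# notion of an epoch, but 2 examples per epoch are never seen.
sizes = [int(b.shape[0])
         for b in dataset.batch(GLOBAL_BATCH_SIZE, drop_remainder=True)]
print(sizes)  # [4, 4]

# Option 3: let batches span epoch boundaries with repeat().batch(),
# and define an approximate "epoch" by a step count instead of the
# end of the dataset. Every batch is full-size.
sizes = [int(b.shape[0])
         for b in dataset.repeat().batch(GLOBAL_BATCH_SIZE).take(5)]
print(sizes)  # [4, 4, 4, 4, 4]
```

Short batches are what make the denominator question for `tf.nn.compute_average_loss()` nontrivial: with option 1, the actual batch size can differ from `GLOBAL_BATCH_SIZE` on the last step of an epoch.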
