125 | 125 | "        followed by a hidden layer with 100 nodes, followed by an output layer with 10 nodes.\n",
126 | 126 | "    :param activation_fn: Callable\n",
127 | 127 | "        The function used for the output of each hidden layer. The network will use the same\n",
128 |     | - "        activation function on every hidden layer and no activate function on the output layer.\n",
    | 128 | + "        activation function on every hidden layer and no activation function on the output layer.\n",
129 | 129 | "        e.g. Pass tf.nn.relu to use ReLU activations on your hidden layers.\n",
130 | 130 | "    :param use_batch_norm: bool\n",
131 | 131 | "        Pass True to create a network that uses batch normalization; False otherwise\n",
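For context, a minimal sketch of a builder matching this docstring, in TensorFlow 1.x style. Only the layer sizes and the two parameters come from the docstring above; the function name `build_network`, the `tf.layers` calls, and the 784-unit input are assumptions for illustration, not the notebook's actual code.

```python
import tensorflow as tf

# Hypothetical sketch: hidden layer of 100 nodes, output layer of 10 nodes,
# the same activation on every hidden layer, no activation on the output.
def build_network(activation_fn, use_batch_norm):
    inputs = tf.placeholder(tf.float32, [None, 784])   # e.g. flattened MNIST images
    hidden = tf.layers.dense(inputs, 100)              # hidden layer, 100 nodes
    if use_batch_norm:
        # Simplified: a full version would also pass a `training` flag so batch
        # statistics are used in training and population statistics at inference.
        hidden = tf.layers.batch_normalization(hidden)
    hidden = activation_fn(hidden)                     # e.g. tf.nn.relu
    logits = tf.layers.dense(hidden, 10)               # output layer, no activation
    return inputs, logits
```

Calling `build_network(tf.nn.relu, use_batch_norm=True)` mirrors the ReLU example in the docstring.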
278 | 278 | "        if False, perform inference with batch normalization using estimated population mean and variance.\n",
279 | 279 | "        Note: in real life, *always* perform inference using the population mean and variance.\n",
280 | 280 | "              This parameter exists just to support demonstrating what happens if you don't.\n",
281 |     | - "    :param include_individual_predictions: bool (default True)\n",
    | 281 | + "    :param include_individual_predictions: bool (default False)\n",
282 | 282 | "        This function always performs an accuracy test against the entire test set. But if this parameter\n",
283 | 283 | "        is True, it performs an extra test, doing 200 predictions one at a time, and displays the results\n",
284 | 284 | "        and accuracy.\n",
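The distinction this docstring draws, population statistics versus batch statistics at inference, can be sketched as below. This is an illustrative fragment, not the notebook's implementation: `linear_output`, `pop_mean`, and `pop_variance` are assumed to be defined as in the training code shown later in the diff, `epsilon` is an assumed small constant, and the scale/offset parameters are omitted for brevity.

```python
import tensorflow as tf

epsilon = 1e-3  # assumed small constant for numerical stability

def batch_norm_inference(linear_output, pop_mean, pop_variance, use_population_stats):
    """Sketch of the two inference modes the docstring describes."""
    if use_population_stats:
        # Correct inference: normalize with the population statistics estimated
        # during training. This is what you should *always* do in real life.
        return tf.nn.batch_normalization(linear_output, pop_mean, pop_variance,
                                         offset=None, scale=None,
                                         variance_epsilon=epsilon)
    # Demonstration-only path: normalize with the statistics of the batch being
    # predicted. For one-at-a-time predictions the batch variance is zero,
    # which is why this mode produces poor results.
    batch_mean, batch_variance = tf.nn.moments(linear_output, [0])
    return tf.nn.batch_normalization(linear_output, batch_mean, batch_variance,
                                     offset=None, scale=None,
                                     variance_epsilon=epsilon)
```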
1806 | 1806 | "            train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))\n",
1807 | 1807 | "            train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))\n",
1808 | 1808 | "\n",
1809 |      | - "            # The 'tf.control_dependencies' context tells TensorFlow it must calculate 'train_mean' \n",
1810 |      | - "            # and 'train_variance' before it calculates the 'tf.nn.batch_normalization' layer.\n",
     | 1809 | + "            # The `tf.control_dependencies` context tells TensorFlow it must calculate `train_mean` \n",
     | 1810 | + "            # and `train_variance` before it calculates the `tf.nn.batch_normalization` layer.\n",
1811 | 1811 | "            # This is necessary because those two operations are not actually in the graph\n",
1812 | 1812 | "            # connecting the linear_output and batch_normalization layers, \n",
1813 | 1813 | "            # so TensorFlow would otherwise just skip them.\n",
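For completeness, this is roughly how the context those comments describe gets used. A sketch of the assumed continuation, not copied from the diff: `linear_output`, `batch_mean`, `batch_variance`, `beta`, `gamma`, and `epsilon` are taken to be defined earlier in the same function, as the surrounding code suggests.

```python
# Make the two assign ops dependencies of the normalization op, so running
# the layer's output also updates the population statistics.
with tf.control_dependencies([train_mean, train_variance]):
    batch_normalized_output = tf.nn.batch_normalization(
        linear_output, batch_mean, batch_variance, beta, gamma, epsilon)
```

Without the `tf.control_dependencies` context, nothing in the graph consumes `train_mean` or `train_variance`, so running the layer's output would never execute them.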