Commit 1f6922f

Links with relative paths
1 parent 0292f62 commit 1f6922f

5 files changed: +9 additions, -11 deletions

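The change this commit makes in every file, swapping absolute `https://github.com/.../blob/master/` URLs for paths relative to each readme, can be sketched as a small script. This is a hypothetical helper, not part of the repo; it handles only `blob/` links, while the commit also rewrote one `tree/` link.

```python
import os
import re

# Absolute-link prefix used throughout the repo's readmes.
PREFIX = "https://github.com/nfmcclure/tensorflow_cookbook/blob/master/"

def relativize(markdown, readme_repo_path):
    """Rewrite absolute repo links in `markdown` as paths relative
    to the directory containing the readme."""
    base_dir = os.path.dirname(readme_repo_path)

    def repl(match):
        # Strip the prefix to get the target's path within the repo,
        # then express it relative to the readme's directory.
        target = match.group(0)[len(PREFIX):]
        return os.path.relpath(target, start=base_dir)

    # Match the prefix plus everything up to a closing paren, quote,
    # or whitespace (the end of a markdown link target).
    return re.sub(re.escape(PREFIX) + r'[^)\s"]+', repl, markdown)

line = ("![Convolutional Filter](" + PREFIX
        + "08_Convolutional_Neural_Networks/images/01_intro_cnn.png)")
print(relativize(line, "08_Convolutional_Neural_Networks/01_Intro_to_CNN/readme.md"))
# -> ![Convolutional Filter](../images/01_intro_cnn.png)
```

Relative links like these keep working in forks, branches, and local checkouts, which is the usual motivation for a commit like this one.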

08_Convolutional_Neural_Networks/01_Intro_to_CNN/readme.md

Lines changed: 3 additions & 3 deletions
@@ -4,10 +4,10 @@ Convolutional Neural Networks (CNNs) are responsible for the latest major breakt
 
 In mathematics, a convolution is a function applied over the output of another function. In our case, we will consider applying a matrix multiplication (filter) across an image. See the diagram below for an example of how this may work.
 
-![Convolutional Filter](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/01_intro_cnn.png)
+![Convolutional Filter](../images/01_intro_cnn.png)
 
 CNNs generally follow a set structure. The main convolutional setup is (input array) -> (convolutional filter layer) -> (pooling) -> (activation layer). The above diagram depicts how a convolutional layer may create one feature. Generally, filters are multidimensional and end up creating many features. It is also common to have completely separate filter-feature creators of different sizes acting on the same layer. After the convolutional filter, it is common to apply a pooling layer. This pooling may be max-pooling, average pooling, or another aggregation. A key concept here is that the pooling layer has no parameters, while it still decreases the layer size. See the diagram below for an example of max-pooling.
 
-![Convolutional Filter](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/01_intro_cnn2.png)
+![Convolutional Filter](../images/01_intro_cnn2.png)
 
-After the max pooling, there is generally an activation layer. One of the more common activation layers is the ReLU (Rectified Linear Unit). See [Chapter 1, Section 6](https://github.com/nfmcclure/tensorflow_cookbook/tree/master/01_Introduction/06_Implementing_Activation_Functions) for examples.
+After the max pooling, there is generally an activation layer. One of the more common activation layers is the ReLU (Rectified Linear Unit). See [Chapter 1, Section 6](../../01_Introduction/06_Implementing_Activation_Functions) for examples.
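
The convolution -> pooling -> activation pipeline this readme describes can be sketched in plain NumPy. This is a minimal illustration, not the cookbook's TensorFlow code; the image size, filter values, and pooling window here are arbitrary choices.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a filter over an image (stride 1, no padding) and
    sum the element-wise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(feature, size=2):
    """Parameter-free max-pooling: keep the largest value in each
    non-overlapping size x size window, shrinking the layer."""
    h, w = feature.shape
    trimmed = feature[:h - h % size, :w - w % size]
    return trimmed.reshape(h // size, size, w // size, size).max(axis=(1, 3))

def relu(x):
    """Rectified Linear Unit: zero out negative activations."""
    return np.maximum(x, 0)

image = np.arange(25, dtype=float).reshape(5, 5)     # toy 5x5 "image"
kernel = np.array([[1., 0.], [0., -1.]])             # toy 2x2 filter
activated = relu(max_pool(conv2d_valid(image, kernel)))
print(activated.shape)  # 5x5 conv 2x2 -> 4x4, pooled -> (2, 2)
```

Note that `max_pool` and `relu` introduce no trainable parameters; only the filter values in `kernel` would be learned during training.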

08_Convolutional_Neural_Networks/02_Intro_to_CNN_MNIST/readme.md

Lines changed: 2 additions & 2 deletions
@@ -5,11 +5,11 @@ Here we illustrate how to use a simple CNN with three convolutional units to pre
 
 When the script is done training the model, you should see output similar to the following graphs.
 
-![Loss and Accuracy](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/02_cnn1_loss_acc.png "Loss and Accuracy")
+![Loss and Accuracy](../images/02_cnn1_loss_acc.png "Loss and Accuracy")
 
 Training and test loss (left) and test batch accuracy (right).
 
 
-![Sample Test Batch](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/02_cnn1_mnist_output.png "Sample of 6 Images")
+![Sample Test Batch](../images/02_cnn1_mnist_output.png "Sample of 6 Images")
 
 A random set of 6 digits with actual and predicted labels. You can see a prediction failure in the lower-right box.

08_Convolutional_Neural_Networks/03_CNN_CIFAR10/readme.md

Lines changed: 1 addition & 1 deletion
@@ -5,6 +5,6 @@ Here we will build a convolutional neural network to predict the CIFAR-10 data.
 
 The script provided will download and unzip the CIFAR-10 data. Then it will start training a CNN from scratch. At the end, you should see output similar to the following two graphs.
 
-![Loss and Accuracy](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/03_cnn2_loss_acc.png)
+![Loss and Accuracy](../images/03_cnn2_loss_acc.png)
 
 Here we see the training loss (left) and the test batch accuracy (right).

08_Convolutional_Neural_Networks/05_Stylenet_NeuralStyle/readme.md

Lines changed: 2 additions & 4 deletions
@@ -5,10 +5,8 @@ The purpose of this script is to illustrate how to do stylenet in TensorFlow. W
 
 ## Prerequisites
 * Download the VGG-verydeep-19.mat file [here](http://www.vlfeat.org/matconvnet/models/beta16/imagenet-vgg-verydeep-19.mat).
-* You must download two images, a [style image](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/starry_night.jpg) and a [content image](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/book_cover.jpg) for the algorithm to blend. (Image links are to the images used in the book.)
+* You must download two images, a [style image](../images/starry_night.jpg) and a [content image](../images/book_cover.jpg) for the algorithm to blend. (Image links are to the images used in the book.)
 
 The algorithm will output temporary images during training.
 
-![Stylenet Example](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/05_stylenet_ex.png)
-
-
+![Stylenet Example](../images/05_stylenet_ex.png)

08_Convolutional_Neural_Networks/06_Deepdream/readme.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Note: There is no new code in this script. It originates from the TensorFlow tu
 
 Here are some potential outputs.
 
-![deepdream outputs](https://github.com/nfmcclure/tensorflow_cookbook/blob/master/08_Convolutional_Neural_Networks/images/06_deepdream_ex.png)
+![deepdream outputs](../images/06_deepdream_ex.png)
 
 > Deepdream results on four features with the book's cover image.
 
 Our interpretation so far:
