diff --git a/Character_Level_RNN_Exercise.ipynb b/Character_Level_RNN_Exercise.ipynb
new file mode 100644
index 0000000..f4203bf
--- /dev/null
+++ b/Character_Level_RNN_Exercise.ipynb
@@ -0,0 +1,1573 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "name": "Character_Level_RNN_Exercise.ipynb",
+ "version": "0.3.2",
+ "provenance": [],
+ "include_colab_link": true
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.4"
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PPIbxTQIqYSR",
+ "colab_type": "text"
+ },
+ "source": [
+ "# Character-Level LSTM in PyTorch\n",
+ "\n",
+ "In this notebook, I'll construct a character-level LSTM with PyTorch. The network will train character by character on some text, then generate new text character by character. As an example, I will train on Anna Karenina. **This model will be able to generate new text based on the text from the book!**\n",
+ "\n",
+ "This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Below is the general architecture of the character-wise RNN.\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "AdQfxURDqYTB",
+ "colab_type": "text"
+ },
+ "source": [
+ "First let's load in our required resources for data loading and model creation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "VTMcNXeyqYTG",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "import numpy as np\n",
+ "import torch\n",
+ "from torch import nn\n",
+ "import torch.nn.functional as F"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "ZWJWJoAP37TH",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 122
+ },
+ "outputId": "85ccb8a5-6e2b-41cd-a4ec-09f22774d1c0"
+ },
+ "source": [
+ "from google.colab import drive\n",
+ "drive.mount('/content/gdrive')"
+ ],
+ "execution_count": 2,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n",
+ "\n",
+ "Enter your authorization code:\n",
+ "··········\n",
+ "Mounted at /content/gdrive\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HiSx0gThqYTO",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Load in Data\n",
+ "\n",
+ "Then, we'll load the Anna Karenina text file and convert it into integers for our network to use. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "ML7taE_9qYTP",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "# open text file and read in data as `text`\n",
+ "with open('gdrive/My Drive/Google Drive/Google Colab/Colab Notebooks/Deep Learning Nano Degree/3_RNN/char-rnn/data/anna.txt', 'r') as f:\n",
+ " text = f.read()"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OsQ8b1ufqYTZ",
+ "colab_type": "text"
+ },
+ "source": [
+ "Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "qEyb6m-JqYTd",
+ "colab_type": "code",
+ "outputId": "faad7099-2d5b-4129-f9f3-f901bbbb1f02",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 34
+ }
+ },
+ "source": [
+ "text[:100]"
+ ],
+ "execution_count": 4,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "'Chapter 1\\n\\n\\nHappy families are all alike; every unhappy family is unhappy in its own\\nway.\\n\\nEverythin'"
+ ]
+ },
+ "metadata": {
+ "tags": []
+ },
+ "execution_count": 4
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cvhrVzE9qYTi",
+ "colab_type": "text"
+ },
+ "source": [
+ "### Tokenization\n",
+ "\n",
+ "In the cells, below, I'm creating a couple **dictionaries** to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "qibPRF6BqYTq",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "# encode the text and map each character to an integer and vice versa\n",
+ "\n",
+ "# we create two dictionaries:\n",
+ "# 1. int2char, which maps integers to characters\n",
+ "# 2. char2int, which maps characters to unique integers\n",
+ "chars = tuple(set(text))\n",
+ "int2char = dict(enumerate(chars))\n",
+ "char2int = {ch: ii for ii, ch in int2char.items()}\n",
+ "\n",
+ "# encode the text\n",
+ "encoded = np.array([char2int[ch] for ch in text])"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "nr22e-uYqYT_",
+ "colab_type": "text"
+ },
+ "source": [
+ "And we can see those same characters from above, encoded as integers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "XxSV_Xu-qYUA",
+ "colab_type": "code",
+ "outputId": "fa0d9f96-60d8-4ac9-99f2-ca89071f91db",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 119
+ }
+ },
+ "source": [
+ "encoded[:100]"
+ ],
+ "execution_count": 6,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "array([42, 45, 37, 73, 2, 39, 31, 69, 6, 46, 46, 46, 68, 37, 73, 73, 18,\n",
+ " 69, 54, 37, 50, 1, 12, 1, 39, 25, 69, 37, 31, 39, 69, 37, 12, 12,\n",
+ " 69, 37, 12, 1, 70, 39, 13, 69, 39, 14, 39, 31, 18, 69, 80, 41, 45,\n",
+ " 37, 73, 73, 18, 69, 54, 37, 50, 1, 12, 18, 69, 1, 25, 69, 80, 41,\n",
+ " 45, 37, 73, 73, 18, 69, 1, 41, 69, 1, 2, 25, 69, 26, 34, 41, 46,\n",
+ " 34, 37, 18, 3, 46, 46, 43, 14, 39, 31, 18, 2, 45, 1, 41])"
+ ]
+ },
+ "metadata": {
+ "tags": []
+ },
+ "execution_count": 6
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dUqTr8YdqYUD",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Pre-processing the data\n",
+ "\n",
+ "As you can see in our char-RNN image above, our LSTM expects an input that is **one-hot encoded** meaning that each character is converted into an integer (via our created dictionary) and *then* converted into a column vector where only it's corresponding integer index will have the value of 1 and the rest of the vector will be filled with 0's. Since we're one-hot encoding the data, let's make a function to do that!\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "8vOnv1m5qYUE",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "# Interesting implementation of one_hot_coding\n",
+ "def one_hot_encode(arr, n_labels):\n",
+ " \n",
+ " # Initialize the the encoded array\n",
+ " one_hot = np.zeros((np.multiply(*arr.shape), n_labels), dtype=np.float32)\n",
+ " \n",
+ " # Fill the appropriate elements with ones\n",
+ " one_hot[np.arange(one_hot.shape[0]), arr.flatten()] = 1.\n",
+ " \n",
+ " # Finally reshape it to get back to the original array\n",
+ " one_hot = one_hot.reshape((*arr.shape, n_labels))\n",
+ " \n",
+ " return one_hot"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "3PWfVnoUqYUI",
+ "colab_type": "code",
+ "outputId": "b39b50bf-ae87-4758-9b8c-e339cd77c91e",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 68
+ }
+ },
+ "source": [
+ "# check that the function works as expected\n",
+ "test_seq = np.array([[3, 5, 1]])\n",
+ "one_hot = one_hot_encode(test_seq, 8)\n",
+ "print(one_hot)"
+ ],
+ "execution_count": 8,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "[[[0. 0. 0. 1. 0. 0. 0. 0.]\n",
+ " [0. 0. 0. 0. 0. 1. 0. 0.]\n",
+ " [0. 1. 0. 0. 0. 0. 0. 0.]]]\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cGjOk249qYUN",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Making training mini-batches\n",
+ "\n",
+ "\n",
+ "To train on this data, we also want to create mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:\n",
+ "\n",
+ "
\n",
+ "\n",
+ "\n",
+ "
\n",
+ "\n",
+ "In this example, we'll take the encoded characters (passed in as the `arr` parameter) and split them into multiple sequences, given by `batch_size`. Each of our sequences will be `seq_length` long.\n",
+ "\n",
+ "### Creating Batches\n",
+ "\n",
+ "**1. The first thing we need to do is discard some of the text so we only have completely full mini-batches. **\n",
+ "\n",
+ "Each batch contains $N \\times M$ characters, where $N$ is the batch size (the number of sequences in a batch) and $M$ is the seq_length or number of time steps in a sequence. Then, to get the total number of batches, $K$, that we can make from the array `arr`, you divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`, $N * M * K$.\n",
+ "\n",
+ "**2. After that, we need to split `arr` into $N$ batches. ** \n",
+ "\n",
+ "You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences in a batch, so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \\times (M * K)$.\n",
+ "\n",
+ "**3. Now that we have this array, we can iterate through it to get our mini-batches. **\n",
+ "\n",
+ "The idea is each batch is a $N \\times M$ window on the $N \\times (M * K)$ array. For each subsequent batch, the window moves over by `seq_length`. We also want to create both the input and target arrays. Remember that the targets are just the inputs shifted over by one character. The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of tokens in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `seq_length` wide.\n",
+ "\n",
+ "> **TODO:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**"
+ ]
+ },
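+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Here's a minimal sketch of the three steps on a toy array (toy sizes chosen just for illustration; the real function below works on the encoded text):\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "arr = np.arange(20)          # pretend these are 20 encoded characters\n",
+ "batch_size, seq_length = 2, 3\n",
+ "\n",
+ "# 1. keep only enough characters to make full batches\n",
+ "n_batches = len(arr) // (batch_size * seq_length)   # 20 // 6 = 3\n",
+ "arr = arr[:n_batches * batch_size * seq_length]     # keep 18 characters\n",
+ "\n",
+ "# 2. reshape into batch_size rows\n",
+ "arr = arr.reshape((batch_size, -1))                 # shape (2, 9)\n",
+ "\n",
+ "# 3. slide a window of seq_length columns across the rows\n",
+ "x = arr[:, 0:seq_length]                            # first input batch, shape (2, 3)\n",
+ "```"
+ ]
+ },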
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "nwSsBRhT7_7v",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 34
+ },
+ "outputId": "55401229-6640-4cc0-e846-5e103a219fb8"
+ },
+ "source": [
+ "arr = encoded[:50]\n",
+ "batch_size = 3\n",
+ "seq_length = 3\n",
+ "n_batches = int(np.floor(len(arr)/(batch_size * seq_length)))\n",
+ "n_batches"
+ ],
+ "execution_count": 9,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "5"
+ ]
+ },
+ "metadata": {
+ "tags": []
+ },
+ "execution_count": 9
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "SHmn0tDMqYUP",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "def get_batches(arr, batch_size, seq_length):\n",
+ " '''Create a generator that returns batches of size\n",
+ " batch_size x seq_length from arr.\n",
+ " \n",
+ " Arguments\n",
+ " ---------\n",
+ " arr: Array you want to make batches from\n",
+ " batch_size: Batch size, the number of sequences per batch\n",
+ " seq_length: Number of encoded chars in a sequence\n",
+ " '''\n",
+ " \n",
+ " ## TODO: Get the number of batches we can make\n",
+ " n_batches = int(np.floor(len(arr)/(batch_size * seq_length)))\n",
+ " \n",
+ " ## TODO: Keep only enough characters to make full batches\n",
+ " arr = arr[:n_batches * batch_size * seq_length]\n",
+ " \n",
+ " ## TODO: Reshape into batch_size rows\n",
+ " arr = arr.reshape((batch_size, -1))\n",
+ " \n",
+ " ## TODO: Iterate over the batches using a window of size seq_length\n",
+ " # The features\n",
+ " for n in range(0, arr.shape[1], seq_length):\n",
+ " x = arr[:, n:n+seq_length]\n",
+ " # The targets, shifted by one\n",
+ " y = np.zeros_like(x)\n",
+ " try:\n",
+ " y[:, :-1], y[:, -1] = x[:, 1:], arr[:, n+seq_length]\n",
+ " except IndexError:\n",
+ " y[:, :-1], y[:, -1] = x[:, 1:], arr[:, 0]\n",
+ " yield x, y"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HxGST7TxqYUU",
+ "colab_type": "text"
+ },
+ "source": [
+ "### Test Your Implementation\n",
+ "\n",
+ "Now I'll make some data sets and we can check out what's going on as we batch data. Here, as an example, I'm going to use a batch size of 8 and 50 sequence steps."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "BpwavEfnqYUV",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "batches = get_batches(encoded, 8, 50)\n",
+ "x, y = next(batches)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "gON_wEmbqYUY",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 340
+ },
+ "outputId": "8635ede7-4f2a-4738-b5b0-54d70c0ad1ac"
+ },
+ "source": [
+ "# printing out the first 10 items in a sequence\n",
+ "print('x\\n', x[:10, :10])\n",
+ "print('\\ny\\n', y[:10, :10])"
+ ],
+ "execution_count": 34,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "x\n",
+ " [[42 45 37 73 2 39 31 69 6 46]\n",
+ " [25 26 41 69 2 45 37 2 69 37]\n",
+ " [39 41 30 69 26 31 69 37 69 54]\n",
+ " [25 69 2 45 39 69 32 45 1 39]\n",
+ " [69 25 37 34 69 45 39 31 69 2]\n",
+ " [32 80 25 25 1 26 41 69 37 41]\n",
+ " [69 27 41 41 37 69 45 37 30 69]\n",
+ " [ 5 82 12 26 41 25 70 18 3 69]]\n",
+ "\n",
+ "y\n",
+ " [[45 37 73 2 39 31 69 6 46 46]\n",
+ " [26 41 69 2 45 37 2 69 37 2]\n",
+ " [41 30 69 26 31 69 37 69 54 26]\n",
+ " [69 2 45 39 69 32 45 1 39 54]\n",
+ " [25 37 34 69 45 39 31 69 2 39]\n",
+ " [80 25 25 1 26 41 69 37 41 30]\n",
+ " [27 41 41 37 69 45 37 30 69 25]\n",
+ " [82 12 26 41 25 70 18 3 69 51]]\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rimqCQXWqYUe",
+ "colab_type": "text"
+ },
+ "source": [
+ "If you implemented `get_batches` correctly, the above output should look something like \n",
+ "```\n",
+ "x\n",
+ " [[25 8 60 11 45 27 28 73 1 2]\n",
+ " [17 7 20 73 45 8 60 45 73 60]\n",
+ " [27 20 80 73 7 28 73 60 73 65]\n",
+ " [17 73 45 8 27 73 66 8 46 27]\n",
+ " [73 17 60 12 73 8 27 28 73 45]\n",
+ " [66 64 17 17 46 7 20 73 60 20]\n",
+ " [73 76 20 20 60 73 8 60 80 73]\n",
+ " [47 35 43 7 20 17 24 50 37 73]]\n",
+ "\n",
+ "y\n",
+ " [[ 8 60 11 45 27 28 73 1 2 2]\n",
+ " [ 7 20 73 45 8 60 45 73 60 45]\n",
+ " [20 80 73 7 28 73 60 73 65 7]\n",
+ " [73 45 8 27 73 66 8 46 27 65]\n",
+ " [17 60 12 73 8 27 28 73 45 27]\n",
+ " [64 17 17 46 7 20 73 60 20 80]\n",
+ " [76 20 20 60 73 8 60 80 73 17]\n",
+ " [35 43 7 20 17 24 50 37 73 36]]\n",
+ " ```\n",
+ " although the exact numbers may be different. Check to make sure the data is shifted over one step for `y`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OrQ-DmlzqYUh",
+ "colab_type": "text"
+ },
+ "source": [
+ "---\n",
+ "## Defining the network with PyTorch\n",
+ "\n",
+ "Below is where you'll define the network.\n",
+ "\n",
+ "
\n",
+ "\n",
+ "Next, you'll use PyTorch to define the architecture of the network. We start by defining the layers and operations we want. Then, define a method for the forward pass. You've also been given a method for predicting characters."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yT0hv9k1qYUj",
+ "colab_type": "text"
+ },
+ "source": [
+ "### Model Structure\n",
+ "\n",
+ "In `__init__` the suggested structure is as follows:\n",
+ "* Create and store the necessary dictionaries (this has been done for you)\n",
+ "* Define an LSTM layer that takes as params: an input size (the number of characters), a hidden layer size `n_hidden`, a number of layers `n_layers`, a dropout probability `drop_prob`, and a batch_first boolean (True, since we are batching)\n",
+ "* Define a dropout layer with `drop_prob`\n",
+ "* Define a fully-connected layer with params: input size `n_hidden` and output size (the number of characters)\n",
+ "* Finally, initialize the weights (again, this has been given)\n",
+ "\n",
+ "Note that some parameters have been named and given in the `__init__` function, and we use them and store them by doing something like `self.drop_prob = drop_prob`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NLqsgKW7qYUk",
+ "colab_type": "text"
+ },
+ "source": [
+ "---\n",
+ "### LSTM Inputs/Outputs\n",
+ "\n",
+ "You can create a basic [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) as follows\n",
+ "\n",
+ "```python\n",
+ "self.lstm = nn.LSTM(input_size, n_hidden, n_layers, \n",
+ " dropout=drop_prob, batch_first=True)\n",
+ "```\n",
+ "\n",
+ "where `input_size` is the number of characters this cell expects to see as sequential input, and `n_hidden` is the number of units in the hidden layers in the cell. And we can add dropout by adding a dropout parameter with a specified probability; this will automatically add dropout to the inputs or outputs. Finally, in the `forward` function, we can stack up the LSTM cells into layers using `.view`. With this, you pass in a list of cells and it will send the output of one cell into the next cell.\n",
+ "\n",
+ "We also need to create an initial hidden state of all zeros. This is done like so\n",
+ "\n",
+ "```python\n",
+ "self.init_hidden()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "33mYEGMNqYUl",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 34
+ },
+ "outputId": "ddfeaf0b-f4ce-4991-871c-d3b1d4495341"
+ },
+ "source": [
+ "# check if GPU is available\n",
+ "train_on_gpu = torch.cuda.is_available()\n",
+ "if(train_on_gpu):\n",
+ " print('Training on GPU!')\n",
+ "else: \n",
+ " print('No GPU available, training on CPU; consider making n_epochs very small.')"
+ ],
+ "execution_count": 35,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Training on GPU!\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "-w06OCBSqYUs",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "class CharRNN(nn.Module):\n",
+ " \n",
+ " def __init__(self, tokens, n_hidden=256, n_layers=2,\n",
+ " drop_prob=0.5, lr=0.001):\n",
+ " super().__init__()\n",
+ " self.drop_prob = drop_prob\n",
+ " self.n_layers = n_layers\n",
+ " self.n_hidden = n_hidden\n",
+ " self.lr = lr\n",
+ " \n",
+ " # creating character dictionaries\n",
+ " self.chars = tokens\n",
+ " self.int2char = dict(enumerate(self.chars))\n",
+ " self.char2int = {ch: ii for ii, ch in self.int2char.items()}\n",
+ " \n",
+ " ## TODO: define the layers of the model\n",
+ " self.lstm = nn.LSTM(len(self.chars), n_hidden, n_layers, \n",
+ " dropout=drop_prob, batch_first=True)\n",
+ " # define a drop out layer\n",
+ " self.dropout = nn.Dropout(drop_prob)\n",
+ " self.fc = nn.Linear(n_hidden, len(self.chars))\n",
+ " \n",
+ " \n",
+ " def forward(self, x, hidden):\n",
+ " ''' Forward pass through the network. \n",
+ " These inputs are x, and the hidden/cell state `hidden`. '''\n",
+ " \n",
+ " ## TODO: Get the outputs and the new hidden state from the lstm\n",
+ " r_output, hidden = self.lstm(x, hidden)\n",
+ " out = self.dropout(r_output)\n",
+ " \n",
+ " # Stack up LSTM outputs using view\n",
+ " out = out.contiguous().view(-1, self.n_hidden)\n",
+ " \n",
+ " out = self.fc(out)\n",
+ " \n",
+ " # return the final output and the hidden state\n",
+ " return out, hidden\n",
+ " \n",
+ " \n",
+ " def init_hidden(self, batch_size):\n",
+ " ''' Initializes hidden state '''\n",
+ " # Create two new tensors with sizes n_layers x batch_size x n_hidden,\n",
+ " # initialized to zero, for hidden state and cell state of LSTM\n",
+ " weight = next(self.parameters()).data\n",
+ " \n",
+ " if (train_on_gpu):\n",
+ " hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda(),\n",
+ " weight.new(self.n_layers, batch_size, self.n_hidden).zero_().cuda())\n",
+ " else:\n",
+ " hidden = (weight.new(self.n_layers, batch_size, self.n_hidden).zero_(),\n",
+ " weight.new(self.n_layers, batch_size, self.n_hidden).zero_())\n",
+ " \n",
+ " return hidden\n",
+ " "
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pIpTHcyAqYUy",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Time to train\n",
+ "\n",
+ "The train function gives us the ability to set the number of epochs, the learning rate, and other parameters.\n",
+ "\n",
+ "Below we're using an Adam optimizer and cross entropy loss since we are looking at character class scores as output. We calculate the loss and perform backpropagation, as usual!\n",
+ "\n",
+ "A couple of details about training: \n",
+ ">* Within the batch loop, we detach the hidden state from its history; this time setting it equal to a new *tuple* variable because an LSTM has a hidden state that is a tuple of the hidden and cell states.\n",
+ "* We use [`clip_grad_norm_`](https://pytorch.org/docs/stable/_modules/torch/nn/utils/clip_grad.html) to help prevent exploding gradients."
+ ]
+ },
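+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In isolation, those two details look like this (a self-contained sketch with toy sizes; the full training loop is in the next cell):\n",
+ "\n",
+ "```python\n",
+ "import torch\n",
+ "from torch import nn\n",
+ "\n",
+ "lstm = nn.LSTM(input_size=5, hidden_size=7, num_layers=2, batch_first=True)\n",
+ "x = torch.randn(4, 3, 5)                            # (batch, seq_len, input_size)\n",
+ "h = (torch.zeros(2, 4, 7), torch.zeros(2, 4, 7))    # (hidden state, cell state)\n",
+ "\n",
+ "out, h = lstm(x, h)\n",
+ "# detach the hidden state tuple so backprop won't run through earlier batches\n",
+ "h = tuple(each.data for each in h)\n",
+ "\n",
+ "out.sum().backward()\n",
+ "# clip gradients in place to help prevent them from exploding\n",
+ "nn.utils.clip_grad_norm_(lstm.parameters(), 5)\n",
+ "```"
+ ]
+ },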
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "1WQ-xNDXqYUz",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "def train(net, data, epochs=10, batch_size=10, seq_length=50, lr=0.001, clip=5, val_frac=0.1, print_every=10):\n",
+ " ''' Training a network \n",
+ " \n",
+ " Arguments\n",
+ " ---------\n",
+ " \n",
+ " net: CharRNN network\n",
+ " data: text data to train the network\n",
+ " epochs: Number of epochs to train\n",
+ " batch_size: Number of mini-sequences per mini-batch, aka batch size\n",
+ " seq_length: Number of character steps per mini-batch\n",
+ " lr: learning rate\n",
+ " clip: gradient clipping\n",
+ " val_frac: Fraction of data to hold out for validation\n",
+ " print_every: Number of steps for printing training and validation loss\n",
+ " \n",
+ " '''\n",
+ " net.train()\n",
+ " \n",
+ " opt = torch.optim.Adam(net.parameters(), lr=lr)\n",
+ " criterion = nn.CrossEntropyLoss()\n",
+ " \n",
+ " # create training and validation data\n",
+ " val_idx = int(len(data)*(1-val_frac))\n",
+ " data, val_data = data[:val_idx], data[val_idx:]\n",
+ " \n",
+ " if(train_on_gpu):\n",
+ " net.cuda()\n",
+ " \n",
+ " counter = 0\n",
+ " n_chars = len(net.chars)\n",
+ " for e in range(epochs):\n",
+ " # initialize hidden state\n",
+ " h = net.init_hidden(batch_size)\n",
+ " \n",
+ " for x, y in get_batches(data, batch_size, seq_length):\n",
+ " counter += 1\n",
+ " \n",
+ " # One-hot encode our data and make them Torch tensors\n",
+ " x = one_hot_encode(x, n_chars)\n",
+ " inputs, targets = torch.from_numpy(x), torch.from_numpy(y)\n",
+ " \n",
+ " if(train_on_gpu):\n",
+ " inputs, targets = inputs.cuda(), targets.cuda()\n",
+ "\n",
+ " # Creating new variables for the hidden state, otherwise\n",
+ " # we'd backprop through the entire training history\n",
+ " h = tuple([each.data for each in h])\n",
+ "\n",
+ " # zero accumulated gradients\n",
+ " net.zero_grad()\n",
+ " \n",
+ " # get the output from the model\n",
+ " output, h = net(inputs, h)\n",
+ " \n",
+ " # calculate the loss and perform backprop\n",
+ " loss = criterion(output, targets.view(batch_size*seq_length).long())\n",
+ " loss.backward()\n",
+ " # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n",
+ " nn.utils.clip_grad_norm_(net.parameters(), clip)\n",
+ " opt.step()\n",
+ " \n",
+ " # loss stats\n",
+ " if counter % print_every == 0:\n",
+ " # Get validation loss\n",
+ " val_h = net.init_hidden(batch_size)\n",
+ " val_losses = []\n",
+ " net.eval()\n",
+ " for x, y in get_batches(val_data, batch_size, seq_length):\n",
+ " # One-hot encode our data and make them Torch tensors\n",
+ " x = one_hot_encode(x, n_chars)\n",
+ " x, y = torch.from_numpy(x), torch.from_numpy(y)\n",
+ " \n",
+ " # Creating new variables for the hidden state, otherwise\n",
+ " # we'd backprop through the entire training history\n",
+ " val_h = tuple([each.data for each in val_h])\n",
+ " \n",
+ " inputs, targets = x, y\n",
+ " if(train_on_gpu):\n",
+ " inputs, targets = inputs.cuda(), targets.cuda()\n",
+ "\n",
+ " output, val_h = net(inputs, val_h)\n",
+ " val_loss = criterion(output, targets.view(batch_size*seq_length).long())\n",
+ " \n",
+ " val_losses.append(val_loss.item())\n",
+ " \n",
+ " net.train() # reset to train mode after iterationg through validation data\n",
+ " \n",
+ " print(\"Epoch: {}/{}...\".format(e+1, epochs),\n",
+ " \"Step: {}...\".format(counter),\n",
+ " \"Loss: {:.4f}...\".format(loss.item()),\n",
+ " \"Val Loss: {:.4f}\".format(np.mean(val_losses)))"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4otnZsxiqYU5",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Instantiating the model\n",
+ "\n",
+ "Now we can actually train the network. First we'll create the network itself, with some given hyperparameters. Then, define the mini-batches sizes, and start training!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "vjYl7WhTqYU8",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 102
+ },
+ "outputId": "aeba106c-e371-4dcf-c468-b9afa1ec50d1"
+ },
+ "source": [
+ "## TODO: set your model hyperparameters\n",
+ "# define and print the net\n",
+ "n_hidden = 512\n",
+ "n_layers = 2\n",
+ "\n",
+ "net = CharRNN(tokens=chars, n_hidden=512, n_layers=2)\n",
+ "print(net)"
+ ],
+ "execution_count": 38,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "CharRNN(\n",
+ " (lstm): LSTM(83, 512, num_layers=2, batch_first=True, dropout=0.5)\n",
+ " (dropout): Dropout(p=0.5)\n",
+ " (fc): Linear(in_features=512, out_features=83, bias=True)\n",
+ ")\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WQiOYJgTqYVB",
+ "colab_type": "text"
+ },
+ "source": [
+ "### Set your training hyperparameters!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "scrolled": true,
+ "id": "gVjWwtk6qYVD",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "outputId": "980c825d-2cad-46aa-f3ff-f8c25a41a0f6"
+ },
+ "source": [
+ "batch_size = 128\n",
+ "seq_length = 100\n",
+ "n_epochs = 20 # start small if you are just testing initial behavior\n",
+ "\n",
+ "# train the model\n",
+ "train(net, encoded, epochs=n_epochs, batch_size=batch_size, seq_length=seq_length, lr=0.001, print_every=10)"
+ ],
+ "execution_count": 39,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Epoch: 1/20... Step: 10... Loss: 3.2614... Val Loss: 3.1951\n",
+ "Epoch: 1/20... Step: 20... Loss: 3.1350... Val Loss: 3.1305\n",
+ "Epoch: 1/20... Step: 30... Loss: 3.1383... Val Loss: 3.1214\n",
+ "Epoch: 1/20... Step: 40... Loss: 3.1069... Val Loss: 3.1194\n",
+ "Epoch: 1/20... Step: 50... Loss: 3.1395... Val Loss: 3.1168\n",
+ "Epoch: 1/20... Step: 60... Loss: 3.1169... Val Loss: 3.1152\n",
+ "Epoch: 1/20... Step: 70... Loss: 3.1071... Val Loss: 3.1131\n",
+ "Epoch: 1/20... Step: 80... Loss: 3.1199... Val Loss: 3.1072\n",
+ "Epoch: 1/20... Step: 90... Loss: 3.1114... Val Loss: 3.0931\n",
+ "Epoch: 1/20... Step: 100... Loss: 3.0735... Val Loss: 3.0608\n",
+ "Epoch: 1/20... Step: 110... Loss: 3.0195... Val Loss: 2.9951\n",
+ "Epoch: 1/20... Step: 120... Loss: 2.9062... Val Loss: 2.8998\n",
+ "Epoch: 1/20... Step: 130... Loss: 2.8132... Val Loss: 2.7787\n",
+ "Epoch: 2/20... Step: 140... Loss: 2.6994... Val Loss: 2.6409\n",
+ "Epoch: 2/20... Step: 150... Loss: 2.6095... Val Loss: 2.5610\n",
+ "Epoch: 2/20... Step: 160... Loss: 2.5407... Val Loss: 2.5020\n",
+ "Epoch: 2/20... Step: 170... Loss: 2.4687... Val Loss: 2.4581\n",
+ "Epoch: 2/20... Step: 180... Loss: 2.4509... Val Loss: 2.4172\n",
+ "Epoch: 2/20... Step: 190... Loss: 2.3988... Val Loss: 2.3858\n",
+ "Epoch: 2/20... Step: 200... Loss: 2.4069... Val Loss: 2.3595\n",
+ "Epoch: 2/20... Step: 210... Loss: 2.3625... Val Loss: 2.3299\n",
+ "Epoch: 2/20... Step: 220... Loss: 2.3185... Val Loss: 2.2983\n",
+ "Epoch: 2/20... Step: 230... Loss: 2.3056... Val Loss: 2.2652\n",
+ "Epoch: 2/20... Step: 240... Loss: 2.2813... Val Loss: 2.2383\n",
+ "Epoch: 2/20... Step: 250... Loss: 2.2146... Val Loss: 2.2164\n",
+ "Epoch: 2/20... Step: 260... Loss: 2.1885... Val Loss: 2.1868\n",
+ "Epoch: 2/20... Step: 270... Loss: 2.2002... Val Loss: 2.1716\n",
+ "Epoch: 3/20... Step: 280... Loss: 2.1905... Val Loss: 2.1418\n",
+ "Epoch: 3/20... Step: 290... Loss: 2.1508... Val Loss: 2.1201\n",
+ "Epoch: 3/20... Step: 300... Loss: 2.1271... Val Loss: 2.0969\n",
+ "Epoch: 3/20... Step: 310... Loss: 2.1003... Val Loss: 2.0758\n",
+ "Epoch: 3/20... Step: 320... Loss: 2.0686... Val Loss: 2.0531\n",
+ "Epoch: 3/20... Step: 330... Loss: 2.0385... Val Loss: 2.0409\n",
+ "Epoch: 3/20... Step: 340... Loss: 2.0631... Val Loss: 2.0145\n",
+ "Epoch: 3/20... Step: 350... Loss: 2.0393... Val Loss: 2.0011\n",
+ "Epoch: 3/20... Step: 360... Loss: 1.9728... Val Loss: 1.9799\n",
+ "Epoch: 3/20... Step: 370... Loss: 2.0065... Val Loss: 1.9650\n",
+ "Epoch: 3/20... Step: 380... Loss: 1.9729... Val Loss: 1.9507\n",
+ "Epoch: 3/20... Step: 390... Loss: 1.9518... Val Loss: 1.9366\n",
+ "Epoch: 3/20... Step: 400... Loss: 1.9174... Val Loss: 1.9190\n",
+ "Epoch: 3/20... Step: 410... Loss: 1.9385... Val Loss: 1.9025\n",
+ "Epoch: 4/20... Step: 420... Loss: 1.9196... Val Loss: 1.8889\n",
+ "Epoch: 4/20... Step: 430... Loss: 1.9147... Val Loss: 1.8748\n",
+ "Epoch: 4/20... Step: 440... Loss: 1.8936... Val Loss: 1.8642\n",
+ "Epoch: 4/20... Step: 450... Loss: 1.8303... Val Loss: 1.8466\n",
+ "Epoch: 4/20... Step: 460... Loss: 1.8220... Val Loss: 1.8353\n",
+ "Epoch: 4/20... Step: 470... Loss: 1.8528... Val Loss: 1.8302\n",
+ "Epoch: 4/20... Step: 480... Loss: 1.8279... Val Loss: 1.8130\n",
+ "Epoch: 4/20... Step: 490... Loss: 1.8358... Val Loss: 1.8054\n",
+ "Epoch: 4/20... Step: 500... Loss: 1.8372... Val Loss: 1.7942\n",
+ "Epoch: 4/20... Step: 510... Loss: 1.8153... Val Loss: 1.7806\n",
+ "Epoch: 4/20... Step: 520... Loss: 1.8135... Val Loss: 1.7761\n",
+ "Epoch: 4/20... Step: 530... Loss: 1.7814... Val Loss: 1.7671\n",
+ "Epoch: 4/20... Step: 540... Loss: 1.7528... Val Loss: 1.7549\n",
+ "Epoch: 4/20... Step: 550... Loss: 1.7950... Val Loss: 1.7400\n",
+ "Epoch: 5/20... Step: 560... Loss: 1.7597... Val Loss: 1.7336\n",
+ "Epoch: 5/20... Step: 570... Loss: 1.7452... Val Loss: 1.7251\n",
+ "Epoch: 5/20... Step: 580... Loss: 1.7198... Val Loss: 1.7138\n",
+ "Epoch: 5/20... Step: 590... Loss: 1.7189... Val Loss: 1.7062\n",
+ "Epoch: 5/20... Step: 600... Loss: 1.7101... Val Loss: 1.6988\n",
+ "Epoch: 5/20... Step: 610... Loss: 1.6910... Val Loss: 1.6963\n",
+ "Epoch: 5/20... Step: 620... Loss: 1.7002... Val Loss: 1.6881\n",
+ "Epoch: 5/20... Step: 630... Loss: 1.7089... Val Loss: 1.6796\n",
+ "Epoch: 5/20... Step: 640... Loss: 1.6829... Val Loss: 1.6724\n",
+ "Epoch: 5/20... Step: 650... Loss: 1.6686... Val Loss: 1.6600\n",
+ "Epoch: 5/20... Step: 660... Loss: 1.6472... Val Loss: 1.6551\n",
+ "Epoch: 5/20... Step: 670... Loss: 1.6710... Val Loss: 1.6499\n",
+ "Epoch: 5/20... Step: 680... Loss: 1.6789... Val Loss: 1.6439\n",
+ "Epoch: 5/20... Step: 690... Loss: 1.6412... Val Loss: 1.6384\n",
+ "Epoch: 6/20... Step: 700... Loss: 1.6474... Val Loss: 1.6332\n",
+ "Epoch: 6/20... Step: 710... Loss: 1.6492... Val Loss: 1.6302\n",
+ "Epoch: 6/20... Step: 720... Loss: 1.6247... Val Loss: 1.6190\n",
+ "Epoch: 6/20... Step: 730... Loss: 1.6373... Val Loss: 1.6141\n",
+ "Epoch: 6/20... Step: 740... Loss: 1.6087... Val Loss: 1.6114\n",
+ "Epoch: 6/20... Step: 750... Loss: 1.5919... Val Loss: 1.6073\n",
+ "Epoch: 6/20... Step: 760... Loss: 1.6213... Val Loss: 1.6062\n",
+ "Epoch: 6/20... Step: 770... Loss: 1.6042... Val Loss: 1.6007\n",
+ "Epoch: 6/20... Step: 780... Loss: 1.5877... Val Loss: 1.5948\n",
+ "Epoch: 6/20... Step: 790... Loss: 1.5790... Val Loss: 1.5870\n",
+ "Epoch: 6/20... Step: 800... Loss: 1.5978... Val Loss: 1.5816\n",
+ "Epoch: 6/20... Step: 810... Loss: 1.5814... Val Loss: 1.5795\n",
+ "Epoch: 6/20... Step: 820... Loss: 1.5498... Val Loss: 1.5719\n",
+ "Epoch: 6/20... Step: 830... Loss: 1.5953... Val Loss: 1.5669\n",
+ "Epoch: 7/20... Step: 840... Loss: 1.5383... Val Loss: 1.5637\n",
+ "Epoch: 7/20... Step: 850... Loss: 1.5637... Val Loss: 1.5626\n",
+ "Epoch: 7/20... Step: 860... Loss: 1.5493... Val Loss: 1.5526\n",
+ "Epoch: 7/20... Step: 870... Loss: 1.5555... Val Loss: 1.5497\n",
+ "Epoch: 7/20... Step: 880... Loss: 1.5547... Val Loss: 1.5480\n",
+ "Epoch: 7/20... Step: 890... Loss: 1.5531... Val Loss: 1.5481\n",
+ "Epoch: 7/20... Step: 900... Loss: 1.5344... Val Loss: 1.5453\n",
+ "Epoch: 7/20... Step: 910... Loss: 1.5122... Val Loss: 1.5426\n",
+ "Epoch: 7/20... Step: 920... Loss: 1.5376... Val Loss: 1.5358\n",
+ "Epoch: 7/20... Step: 930... Loss: 1.5136... Val Loss: 1.5283\n",
+ "Epoch: 7/20... Step: 940... Loss: 1.5270... Val Loss: 1.5249\n",
+ "Epoch: 7/20... Step: 950... Loss: 1.5367... Val Loss: 1.5209\n",
+ "Epoch: 7/20... Step: 960... Loss: 1.5289... Val Loss: 1.5190\n",
+ "Epoch: 7/20... Step: 970... Loss: 1.5373... Val Loss: 1.5142\n",
+ "Epoch: 8/20... Step: 980... Loss: 1.5075... Val Loss: 1.5139\n",
+ "Epoch: 8/20... Step: 990... Loss: 1.5111... Val Loss: 1.5091\n",
+ "Epoch: 8/20... Step: 1000... Loss: 1.5009... Val Loss: 1.5056\n",
+ "Epoch: 8/20... Step: 1010... Loss: 1.5445... Val Loss: 1.5019\n",
+ "Epoch: 8/20... Step: 1020... Loss: 1.5083... Val Loss: 1.4984\n",
+ "Epoch: 8/20... Step: 1030... Loss: 1.4904... Val Loss: 1.4992\n",
+ "Epoch: 8/20... Step: 1040... Loss: 1.5020... Val Loss: 1.5012\n",
+ "Epoch: 8/20... Step: 1050... Loss: 1.4803... Val Loss: 1.4920\n",
+ "Epoch: 8/20... Step: 1060... Loss: 1.4825... Val Loss: 1.4893\n",
+ "Epoch: 8/20... Step: 1070... Loss: 1.4911... Val Loss: 1.4849\n",
+ "Epoch: 8/20... Step: 1080... Loss: 1.4944... Val Loss: 1.4826\n",
+ "Epoch: 8/20... Step: 1090... Loss: 1.4631... Val Loss: 1.4816\n",
+ "Epoch: 8/20... Step: 1100... Loss: 1.4667... Val Loss: 1.4769\n",
+ "Epoch: 8/20... Step: 1110... Loss: 1.4625... Val Loss: 1.4722\n",
+ "Epoch: 9/20... Step: 1120... Loss: 1.4934... Val Loss: 1.4749\n",
+ "Epoch: 9/20... Step: 1130... Loss: 1.4786... Val Loss: 1.4677\n",
+ "Epoch: 9/20... Step: 1140... Loss: 1.4707... Val Loss: 1.4655\n",
+ "Epoch: 9/20... Step: 1150... Loss: 1.4839... Val Loss: 1.4637\n",
+ "Epoch: 9/20... Step: 1160... Loss: 1.4431... Val Loss: 1.4619\n",
+ "Epoch: 9/20... Step: 1170... Loss: 1.4520... Val Loss: 1.4594\n",
+ "Epoch: 9/20... Step: 1180... Loss: 1.4410... Val Loss: 1.4591\n",
+ "Epoch: 9/20... Step: 1190... Loss: 1.4790... Val Loss: 1.4577\n",
+ "Epoch: 9/20... Step: 1200... Loss: 1.4321... Val Loss: 1.4533\n",
+ "Epoch: 9/20... Step: 1210... Loss: 1.4497... Val Loss: 1.4474\n",
+ "Epoch: 9/20... Step: 1220... Loss: 1.4376... Val Loss: 1.4525\n",
+ "Epoch: 9/20... Step: 1230... Loss: 1.4136... Val Loss: 1.4486\n",
+ "Epoch: 9/20... Step: 1240... Loss: 1.4219... Val Loss: 1.4456\n",
+ "Epoch: 9/20... Step: 1250... Loss: 1.4304... Val Loss: 1.4382\n",
+ "Epoch: 10/20... Step: 1260... Loss: 1.4482... Val Loss: 1.4448\n",
+ "Epoch: 10/20... Step: 1270... Loss: 1.4344... Val Loss: 1.4358\n",
+ "Epoch: 10/20... Step: 1280... Loss: 1.4439... Val Loss: 1.4337\n",
+ "Epoch: 10/20... Step: 1290... Loss: 1.4315... Val Loss: 1.4326\n",
+ "Epoch: 10/20... Step: 1300... Loss: 1.4279... Val Loss: 1.4315\n",
+ "Epoch: 10/20... Step: 1310... Loss: 1.4291... Val Loss: 1.4306\n",
+ "Epoch: 10/20... Step: 1320... Loss: 1.3924... Val Loss: 1.4298\n",
+ "Epoch: 10/20... Step: 1330... Loss: 1.4038... Val Loss: 1.4273\n",
+ "Epoch: 10/20... Step: 1340... Loss: 1.3886... Val Loss: 1.4219\n",
+ "Epoch: 10/20... Step: 1350... Loss: 1.3893... Val Loss: 1.4190\n",
+ "Epoch: 10/20... Step: 1360... Loss: 1.3965... Val Loss: 1.4220\n",
+ "Epoch: 10/20... Step: 1370... Loss: 1.3848... Val Loss: 1.4196\n",
+ "Epoch: 10/20... Step: 1380... Loss: 1.4216... Val Loss: 1.4157\n",
+ "Epoch: 10/20... Step: 1390... Loss: 1.4271... Val Loss: 1.4111\n",
+ "Epoch: 11/20... Step: 1400... Loss: 1.4192... Val Loss: 1.4187\n",
+ "Epoch: 11/20... Step: 1410... Loss: 1.4324... Val Loss: 1.4110\n",
+ "Epoch: 11/20... Step: 1420... Loss: 1.4315... Val Loss: 1.4058\n",
+ "Epoch: 11/20... Step: 1430... Loss: 1.3910... Val Loss: 1.4069\n",
+ "Epoch: 11/20... Step: 1440... Loss: 1.4270... Val Loss: 1.4044\n",
+ "Epoch: 11/20... Step: 1450... Loss: 1.3550... Val Loss: 1.4042\n",
+ "Epoch: 11/20... Step: 1460... Loss: 1.3786... Val Loss: 1.4058\n",
+ "Epoch: 11/20... Step: 1470... Loss: 1.3712... Val Loss: 1.4039\n",
+ "Epoch: 11/20... Step: 1480... Loss: 1.3842... Val Loss: 1.3984\n",
+ "Epoch: 11/20... Step: 1490... Loss: 1.3833... Val Loss: 1.3955\n",
+ "Epoch: 11/20... Step: 1500... Loss: 1.3584... Val Loss: 1.3996\n",
+ "Epoch: 11/20... Step: 1510... Loss: 1.3497... Val Loss: 1.3945\n",
+ "Epoch: 11/20... Step: 1520... Loss: 1.3820... Val Loss: 1.3904\n",
+ "Epoch: 12/20... Step: 1530... Loss: 1.4313... Val Loss: 1.3915\n",
+ "Epoch: 12/20... Step: 1540... Loss: 1.3890... Val Loss: 1.3945\n",
+ "Epoch: 12/20... Step: 1550... Loss: 1.3924... Val Loss: 1.3916\n",
+ "Epoch: 12/20... Step: 1560... Loss: 1.4008... Val Loss: 1.3847\n",
+ "Epoch: 12/20... Step: 1570... Loss: 1.3475... Val Loss: 1.3824\n",
+ "Epoch: 12/20... Step: 1580... Loss: 1.3290... Val Loss: 1.3824\n",
+ "Epoch: 12/20... Step: 1590... Loss: 1.3275... Val Loss: 1.3818\n",
+ "Epoch: 12/20... Step: 1600... Loss: 1.3566... Val Loss: 1.3817\n",
+ "Epoch: 12/20... Step: 1610... Loss: 1.3439... Val Loss: 1.3860\n",
+ "Epoch: 12/20... Step: 1620... Loss: 1.3406... Val Loss: 1.3783\n",
+ "Epoch: 12/20... Step: 1630... Loss: 1.3627... Val Loss: 1.3763\n",
+ "Epoch: 12/20... Step: 1640... Loss: 1.3488... Val Loss: 1.3822\n",
+ "Epoch: 12/20... Step: 1650... Loss: 1.3226... Val Loss: 1.3771\n",
+ "Epoch: 12/20... Step: 1660... Loss: 1.3741... Val Loss: 1.3706\n",
+ "Epoch: 13/20... Step: 1670... Loss: 1.3371... Val Loss: 1.3739\n",
+ "Epoch: 13/20... Step: 1680... Loss: 1.3607... Val Loss: 1.3734\n",
+ "Epoch: 13/20... Step: 1690... Loss: 1.3321... Val Loss: 1.3715\n",
+ "Epoch: 13/20... Step: 1700... Loss: 1.3322... Val Loss: 1.3670\n",
+ "Epoch: 13/20... Step: 1710... Loss: 1.3164... Val Loss: 1.3654\n",
+ "Epoch: 13/20... Step: 1720... Loss: 1.3327... Val Loss: 1.3672\n",
+ "Epoch: 13/20... Step: 1730... Loss: 1.3627... Val Loss: 1.3642\n",
+ "Epoch: 13/20... Step: 1740... Loss: 1.3239... Val Loss: 1.3672\n",
+ "Epoch: 13/20... Step: 1750... Loss: 1.3039... Val Loss: 1.3667\n",
+ "Epoch: 13/20... Step: 1760... Loss: 1.3292... Val Loss: 1.3594\n",
+ "Epoch: 13/20... Step: 1770... Loss: 1.3530... Val Loss: 1.3588\n",
+ "Epoch: 13/20... Step: 1780... Loss: 1.3258... Val Loss: 1.3612\n",
+ "Epoch: 13/20... Step: 1790... Loss: 1.3138... Val Loss: 1.3603\n",
+ "Epoch: 13/20... Step: 1800... Loss: 1.3330... Val Loss: 1.3541\n",
+ "Epoch: 14/20... Step: 1810... Loss: 1.3337... Val Loss: 1.3638\n",
+ "Epoch: 14/20... Step: 1820... Loss: 1.3276... Val Loss: 1.3536\n",
+ "Epoch: 14/20... Step: 1830... Loss: 1.3379... Val Loss: 1.3540\n",
+ "Epoch: 14/20... Step: 1840... Loss: 1.2849... Val Loss: 1.3518\n",
+ "Epoch: 14/20... Step: 1850... Loss: 1.2728... Val Loss: 1.3506\n",
+ "Epoch: 14/20... Step: 1860... Loss: 1.3305... Val Loss: 1.3501\n",
+ "Epoch: 14/20... Step: 1870... Loss: 1.3389... Val Loss: 1.3445\n",
+ "Epoch: 14/20... Step: 1880... Loss: 1.3359... Val Loss: 1.3504\n",
+ "Epoch: 14/20... Step: 1890... Loss: 1.3440... Val Loss: 1.3542\n",
+ "Epoch: 14/20... Step: 1900... Loss: 1.3175... Val Loss: 1.3478\n",
+ "Epoch: 14/20... Step: 1910... Loss: 1.3213... Val Loss: 1.3475\n",
+ "Epoch: 14/20... Step: 1920... Loss: 1.3163... Val Loss: 1.3485\n",
+ "Epoch: 14/20... Step: 1930... Loss: 1.2832... Val Loss: 1.3473\n",
+ "Epoch: 14/20... Step: 1940... Loss: 1.3349... Val Loss: 1.3422\n",
+ "Epoch: 15/20... Step: 1950... Loss: 1.3071... Val Loss: 1.3578\n",
+ "Epoch: 15/20... Step: 1960... Loss: 1.3192... Val Loss: 1.3455\n",
+ "Epoch: 15/20... Step: 1970... Loss: 1.3069... Val Loss: 1.3428\n",
+ "Epoch: 15/20... Step: 1980... Loss: 1.2938... Val Loss: 1.3426\n",
+ "Epoch: 15/20... Step: 1990... Loss: 1.3039... Val Loss: 1.3425\n",
+ "Epoch: 15/20... Step: 2000... Loss: 1.2843... Val Loss: 1.3395\n",
+ "Epoch: 15/20... Step: 2010... Loss: 1.2993... Val Loss: 1.3340\n",
+ "Epoch: 15/20... Step: 2020... Loss: 1.3153... Val Loss: 1.3397\n",
+ "Epoch: 15/20... Step: 2030... Loss: 1.2831... Val Loss: 1.3411\n",
+ "Epoch: 15/20... Step: 2040... Loss: 1.3088... Val Loss: 1.3348\n",
+ "Epoch: 15/20... Step: 2050... Loss: 1.2919... Val Loss: 1.3325\n",
+ "Epoch: 15/20... Step: 2060... Loss: 1.2980... Val Loss: 1.3330\n",
+ "Epoch: 15/20... Step: 2070... Loss: 1.3111... Val Loss: 1.3326\n",
+ "Epoch: 15/20... Step: 2080... Loss: 1.3023... Val Loss: 1.3300\n",
+ "Epoch: 16/20... Step: 2090... Loss: 1.3095... Val Loss: 1.3390\n",
+ "Epoch: 16/20... Step: 2100... Loss: 1.2820... Val Loss: 1.3334\n",
+ "Epoch: 16/20... Step: 2110... Loss: 1.2868... Val Loss: 1.3318\n",
+ "Epoch: 16/20... Step: 2120... Loss: 1.2945... Val Loss: 1.3323\n",
+ "Epoch: 16/20... Step: 2130... Loss: 1.2736... Val Loss: 1.3312\n",
+ "Epoch: 16/20... Step: 2140... Loss: 1.2816... Val Loss: 1.3245\n",
+ "Epoch: 16/20... Step: 2150... Loss: 1.3031... Val Loss: 1.3247\n",
+ "Epoch: 16/20... Step: 2160... Loss: 1.2808... Val Loss: 1.3282\n",
+ "Epoch: 16/20... Step: 2170... Loss: 1.2708... Val Loss: 1.3282\n",
+ "Epoch: 16/20... Step: 2180... Loss: 1.2687... Val Loss: 1.3235\n",
+ "Epoch: 16/20... Step: 2190... Loss: 1.2969... Val Loss: 1.3246\n",
+ "Epoch: 16/20... Step: 2200... Loss: 1.2702... Val Loss: 1.3264\n",
+ "Epoch: 16/20... Step: 2210... Loss: 1.2375... Val Loss: 1.3248\n",
+ "Epoch: 16/20... Step: 2220... Loss: 1.2878... Val Loss: 1.3228\n",
+ "Epoch: 17/20... Step: 2230... Loss: 1.2571... Val Loss: 1.3291\n",
+ "Epoch: 17/20... Step: 2240... Loss: 1.2746... Val Loss: 1.3252\n",
+ "Epoch: 17/20... Step: 2250... Loss: 1.2528... Val Loss: 1.3268\n",
+ "Epoch: 17/20... Step: 2260... Loss: 1.2637... Val Loss: 1.3224\n",
+ "Epoch: 17/20... Step: 2270... Loss: 1.2766... Val Loss: 1.3211\n",
+ "Epoch: 17/20... Step: 2280... Loss: 1.2760... Val Loss: 1.3154\n",
+ "Epoch: 17/20... Step: 2290... Loss: 1.2823... Val Loss: 1.3198\n",
+ "Epoch: 17/20... Step: 2300... Loss: 1.2396... Val Loss: 1.3249\n",
+ "Epoch: 17/20... Step: 2310... Loss: 1.2631... Val Loss: 1.3216\n",
+ "Epoch: 17/20... Step: 2320... Loss: 1.2544... Val Loss: 1.3178\n",
+ "Epoch: 17/20... Step: 2330... Loss: 1.2597... Val Loss: 1.3161\n",
+ "Epoch: 17/20... Step: 2340... Loss: 1.2751... Val Loss: 1.3196\n",
+ "Epoch: 17/20... Step: 2350... Loss: 1.2713... Val Loss: 1.3175\n",
+ "Epoch: 17/20... Step: 2360... Loss: 1.2748... Val Loss: 1.3167\n",
+ "Epoch: 18/20... Step: 2370... Loss: 1.2474... Val Loss: 1.3220\n",
+ "Epoch: 18/20... Step: 2380... Loss: 1.2581... Val Loss: 1.3147\n",
+ "Epoch: 18/20... Step: 2390... Loss: 1.2510... Val Loss: 1.3151\n",
+ "Epoch: 18/20... Step: 2400... Loss: 1.2767... Val Loss: 1.3123\n",
+ "Epoch: 18/20... Step: 2410... Loss: 1.2779... Val Loss: 1.3122\n",
+ "Epoch: 18/20... Step: 2420... Loss: 1.2595... Val Loss: 1.3075\n",
+ "Epoch: 18/20... Step: 2430... Loss: 1.2669... Val Loss: 1.3122\n",
+ "Epoch: 18/20... Step: 2440... Loss: 1.2541... Val Loss: 1.3153\n",
+ "Epoch: 18/20... Step: 2450... Loss: 1.2418... Val Loss: 1.3104\n",
+ "Epoch: 18/20... Step: 2460... Loss: 1.2626... Val Loss: 1.3088\n",
+ "Epoch: 18/20... Step: 2470... Loss: 1.2527... Val Loss: 1.3095\n",
+ "Epoch: 18/20... Step: 2480... Loss: 1.2386... Val Loss: 1.3130\n",
+ "Epoch: 18/20... Step: 2490... Loss: 1.2432... Val Loss: 1.3081\n",
+ "Epoch: 18/20... Step: 2500... Loss: 1.2465... Val Loss: 1.3080\n",
+ "Epoch: 19/20... Step: 2510... Loss: 1.2423... Val Loss: 1.3085\n",
+ "Epoch: 19/20... Step: 2520... Loss: 1.2584... Val Loss: 1.3049\n",
+ "Epoch: 19/20... Step: 2530... Loss: 1.2611... Val Loss: 1.3083\n",
+ "Epoch: 19/20... Step: 2540... Loss: 1.2713... Val Loss: 1.3032\n",
+ "Epoch: 19/20... Step: 2550... Loss: 1.2411... Val Loss: 1.3057\n",
+ "Epoch: 19/20... Step: 2560... Loss: 1.2554... Val Loss: 1.3034\n",
+ "Epoch: 19/20... Step: 2570... Loss: 1.2431... Val Loss: 1.3047\n",
+ "Epoch: 19/20... Step: 2580... Loss: 1.2692... Val Loss: 1.3066\n",
+ "Epoch: 19/20... Step: 2590... Loss: 1.2423... Val Loss: 1.3036\n",
+ "Epoch: 19/20... Step: 2600... Loss: 1.2379... Val Loss: 1.3034\n",
+ "Epoch: 19/20... Step: 2610... Loss: 1.2275... Val Loss: 1.3043\n",
+ "Epoch: 19/20... Step: 2620... Loss: 1.2203... Val Loss: 1.3062\n",
+ "Epoch: 19/20... Step: 2630... Loss: 1.2422... Val Loss: 1.3005\n",
+ "Epoch: 19/20... Step: 2640... Loss: 1.2471... Val Loss: 1.3048\n",
+ "Epoch: 20/20... Step: 2650... Loss: 1.2504... Val Loss: 1.3048\n",
+ "Epoch: 20/20... Step: 2660... Loss: 1.2502... Val Loss: 1.3008\n",
+ "Epoch: 20/20... Step: 2670... Loss: 1.2556... Val Loss: 1.2988\n",
+ "Epoch: 20/20... Step: 2680... Loss: 1.2495... Val Loss: 1.2970\n",
+ "Epoch: 20/20... Step: 2690... Loss: 1.2399... Val Loss: 1.2982\n",
+ "Epoch: 20/20... Step: 2700... Loss: 1.2541... Val Loss: 1.2988\n",
+ "Epoch: 20/20... Step: 2710... Loss: 1.2250... Val Loss: 1.3010\n",
+ "Epoch: 20/20... Step: 2720... Loss: 1.2255... Val Loss: 1.2993\n",
+ "Epoch: 20/20... Step: 2730... Loss: 1.2162... Val Loss: 1.2986\n",
+ "Epoch: 20/20... Step: 2740... Loss: 1.2122... Val Loss: 1.3004\n",
+ "Epoch: 20/20... Step: 2750... Loss: 1.2207... Val Loss: 1.3069\n",
+ "Epoch: 20/20... Step: 2760... Loss: 1.2089... Val Loss: 1.2998\n",
+ "Epoch: 20/20... Step: 2770... Loss: 1.2464... Val Loss: 1.2972\n",
+ "Epoch: 20/20... Step: 2780... Loss: 1.2706... Val Loss: 1.2973\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CaSbTkepqYVK",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Getting the best model\n",
+ "\n",
+ "To set your hyperparameters to get the best performance, you'll want to watch the training and validation losses. If your training loss is much lower than the validation loss, you're overfitting. Increase regularization (more dropout) or use a smaller network. If the training and validation losses are close, you're underfitting so you can increase the size of the network."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "L-5-1BVPqYVQ",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Hyperparameters\n",
+ "\n",
+ "Here are the hyperparameters for the network.\n",
+ "\n",
+ "In defining the model:\n",
+ "* `n_hidden` - The number of units in the hidden layers.\n",
+ "* `n_layers` - Number of hidden LSTM layers to use.\n",
+ "\n",
+ "We assume that dropout probability and learning rate will be kept at the default, in this example.\n",
+ "\n",
+ "And in training:\n",
+ "* `batch_size` - Number of sequences running through the network in one pass.\n",
+ "* `seq_length` - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.\n",
+ "* `lr` - Learning rate for training\n",
+ "\n",
+ "Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).\n",
+ "\n",
+ "> ## Tips and Tricks\n",
+ "\n",
+ ">### Monitoring Validation Loss vs. Training Loss\n",
+ ">If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:\n",
+ "\n",
+ "> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.\n",
+ "> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)\n",
+ "\n",
+ "> ### Approximate number of parameters\n",
+ "\n",
+ "> The two most important parameters that control the model are `n_hidden` and `n_layers`. I would advise that you always use `n_layers` of either 2/3. The `n_hidden` can be adjusted based on how much data you have. The two important quantities to keep track of here are:\n",
+ "\n",
+ "> - The number of parameters in your model. This is printed when you start training.\n",
+ "> - The size of your dataset. 1MB file is approximately 1 million characters.\n",
+ "\n",
+ ">These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:\n",
+ "\n",
+ "> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `n_hidden` larger.\n",
+ "> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.\n",
+ "\n",
+ "> ### Best models strategy\n",
+ "\n",
+ ">The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.\n",
+ "\n",
+ ">It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.\n",
+ "\n",
+ ">By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative."
+ ]
+ },
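+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The tips above mention keeping track of the number of parameters in your model. This PyTorch version doesn't print that automatically, but you can count them with a one-liner like this (a sketch, assuming `net` is the `CharRNN` instance defined above):\n",
+ "\n",
+ "```python\n",
+ "# total number of trainable parameters in the model\n",
+ "n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)\n",
+ "print(n_params)\n",
+ "```"
+ ]
+ },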
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BJthmIE2qYVS",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Checkpoint\n",
+ "\n",
+ "After training, we'll save the model so we can load it again later if we need too. Here I'm saving the parameters needed to create the same architecture, the hidden layer hyperparameters and the text characters."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "OzfiilvvqYVX",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "# change the name, for saving multiple files\n",
+ "model_name = 'rnn_x_epoch.net'\n",
+ "\n",
+ "checkpoint = {'n_hidden': net.n_hidden,\n",
+ " 'n_layers': net.n_layers,\n",
+ " 'state_dict': net.state_dict(),\n",
+ " 'tokens': net.chars}\n",
+ "\n",
+ "with open(model_name, 'wb') as f:\n",
+ " torch.save(checkpoint, f)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "H9jUH9dzqYVh",
+ "colab_type": "text"
+ },
+ "source": [
+ "---\n",
+ "## Making Predictions\n",
+ "\n",
+ "Now that the model is trained, we'll want to sample from it and make predictions about next characters! To sample, we pass in a character and have the network predict the next character. Then we take that character, pass it back in, and get another predicted character. Just keep doing this and you'll generate a bunch of text!\n",
+ "\n",
+ "### A note on the `predict` function\n",
+ "\n",
+ "The output of our RNN is from a fully-connected layer and it outputs a **distribution of next-character scores**.\n",
+ "\n",
+ "> To actually get the next character, we apply a softmax function, which gives us a *probability* distribution that we can then sample to predict the next character.\n",
+ "\n",
+ "### Top K sampling\n",
+ "\n",
+ "Our predictions come from a categorical probability distribution over all the possible characters. We can make the sample text and make it more reasonable to handle (with less variables) by only considering some $K$ most probable characters. This will prevent the network from giving us completely absurd characters while allowing it to introduce some noise and randomness into the sampled text. Read more about [topk, here](https://pytorch.org/docs/stable/torch.html#torch.topk).\n"
+ ]
+ },
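+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of what `topk` returns (toy probabilities, not from the model):\n",
+ "\n",
+ "```python\n",
+ "import torch\n",
+ "\n",
+ "p = torch.tensor([[0.05, 0.50, 0.10, 0.30, 0.05]])  # made-up next-character probabilities\n",
+ "top_p, top_ch = p.topk(3)   # the 3 largest probabilities and their (character) indices\n",
+ "# top_p  -> tensor([[0.5000, 0.3000, 0.1000]])\n",
+ "# top_ch -> tensor([[1, 3, 2]])\n",
+ "```"
+ ]
+ },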
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ahfctHwpOWL5",
+ "colab_type": "text"
+ },
+ "source": [
+ "#### Very Interesting"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "KOefeAv7qYVj",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "def predict(net, char, h=None, top_k=None):\n",
+ " ''' Given a character, predict the next character.\n",
+ " Returns the predicted character and the hidden state.\n",
+ " '''\n",
+ " \n",
+ " # tensor inputs\n",
+ " x = np.array([[net.char2int[char]]])\n",
+ " x = one_hot_encode(x, len(net.chars))\n",
+ " inputs = torch.from_numpy(x)\n",
+ " \n",
+ " if(train_on_gpu):\n",
+ " inputs = inputs.cuda()\n",
+ " \n",
+ " # detach hidden state from history\n",
+ " h = tuple([each.data for each in h])\n",
+ " # get the output of the model\n",
+ " out, h = net(inputs, h)\n",
+ "\n",
+ " # get the character probabilities\n",
+ " p = F.softmax(out, dim=1).data\n",
+ " if(train_on_gpu):\n",
+ " p = p.cpu() # move to cpu\n",
+ " \n",
+ " # get top characters\n",
+ " if top_k is None:\n",
+ " top_ch = np.arange(len(net.chars))\n",
+ " else:\n",
+ " p, top_ch = p.topk(top_k)\n",
+ " top_ch = top_ch.numpy().squeeze()\n",
+ " \n",
+ " # select the likely next character with some element of randomness\n",
+ " p = p.numpy().squeeze()\n",
+ " char = np.random.choice(top_ch, p=p/p.sum())\n",
+ " \n",
+ " # return the encoded value of the predicted char and the hidden state\n",
+ " return net.int2char[char], h"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "JitGmmElqYVm",
+ "colab_type": "text"
+ },
+ "source": [
+ "### Priming and generating text \n",
+ "\n",
+ "Typically you'll want to prime the network so you can build up a hidden state. Otherwise the network will start out generating characters at random. In general the first bunch of characters will be a little rough since it hasn't built up a long history of characters to predict from."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "OVdh-ubWqYVn",
+ "colab_type": "code",
+ "colab": {}
+ },
+ "source": [
+ "def sample(net, size, prime='The', top_k=None):\n",
+ " \n",
+ " if(train_on_gpu):\n",
+ " net.cuda()\n",
+ " else:\n",
+ " net.cpu()\n",
+ " \n",
+ " net.eval() # eval mode\n",
+ " \n",
+ " # First off, run through the prime characters\n",
+ " chars = [ch for ch in prime]\n",
+ " h = net.init_hidden(1)\n",
+ " for ch in prime:\n",
+ " char, h = predict(net, ch, h, top_k=top_k)\n",
+ "\n",
+ " chars.append(char)\n",
+ " \n",
+ " # Now pass in the previous character and get a new one\n",
+ " for ii in range(size):\n",
+ " char, h = predict(net, chars[-1], h, top_k=top_k)\n",
+ " chars.append(char)\n",
+ "\n",
+ " return ''.join(chars)"
+ ],
+ "execution_count": 0,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "BC_-zuolqYVt",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 360
+ },
+ "outputId": "b93be8fd-ae8c-47f3-aacf-28af343a9bed"
+ },
+ "source": [
+ "print(sample(net, 1000, prime='Anna', top_k=5))"
+ ],
+ "execution_count": 43,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "Anna to their. In the same way this mistaken, a sender and most thrown of the memory.\n",
+ "\n",
+ "\"I shouldn't a conversation in much myself a signe, I doubt anything before.... I don't know\n",
+ "that you\n",
+ "did not accurty to the same state, then though their coming a singer art of their field and harrings one. And I don't wonker and tire it off on. I'll gain. A some of\n",
+ "there's a\n",
+ "moment, as I shall go to\n",
+ "the point of impossible telling\n",
+ "one of all the considering and talking of it. He's\n",
+ "so impossible, to think in the stream and sure, that he was\n",
+ "not a good and that I was sitting on ten him.\"\n",
+ "\n",
+ "\"I don't say to\n",
+ "home! I don't this word, I don't knew if you know you,\" said Stepan Arkadyevitch, smoking tenderly, as\n",
+ "he would start\n",
+ "finish, and that would not be settred, and she came to say to him, and there was still tenderness to him to her subject, and\n",
+ "that he had seen something time, and he saw it was not for a sort\n",
+ "in his\n",
+ "held. But so to she talked to him to tell him and asking the children to the man of the mome\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1xSafX7pqYVw",
+ "colab_type": "text"
+ },
+ "source": [
+ "## Loading a checkpoint"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "5jhzw4sYqYVx",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 34
+ },
+ "outputId": "ab19e202-e95c-4b09-f09f-909ae8c6b440"
+ },
+ "source": [
+ "# Here we have loaded in a model that trained over 20 epochs `rnn_20_epoch.net`\n",
+ "with open('rnn_x_epoch.net', 'rb') as f:\n",
+ " checkpoint = torch.load(f)\n",
+ " \n",
+ "loaded = CharRNN(checkpoint['tokens'], n_hidden=checkpoint['n_hidden'], n_layers=checkpoint['n_layers'])\n",
+ "loaded.load_state_dict(checkpoint['state_dict'])"
+ ],
+ "execution_count": 44,
+ "outputs": [
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ "IncompatibleKeys(missing_keys=[], unexpected_keys=[])"
+ ]
+ },
+ "metadata": {
+ "tags": []
+ },
+ "execution_count": 44
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "metadata": {
+ "id": "TAuiSSwwqYV7",
+ "colab_type": "code",
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 785
+ },
+ "outputId": "948ece93-3cbb-4e82-c5a7-7558622d8ab3"
+ },
+ "source": [
+ "# Sample using a loaded model\n",
+ "print(sample(loaded, 2000, top_k=5, prime=\"And Levin said\"))"
+ ],
+ "execution_count": 45,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "text": [
+ "And Levin said to\n",
+ "him, he was drinking at her as though, he sat in sorts; \"his master were not been\n",
+ "to tell you at the picture,\" he said, smiling face and to be drawing round the\n",
+ "rapidly at his significance.\n",
+ "\n",
+ "\"To step in my contror in the contrary in the peasants and the same, but I should be in the province. I don't know, and we could not have seem no one\n",
+ "to\n",
+ "stay, that it seess home; there's\n",
+ "a subsear in the country. There's no minute.\n",
+ "\n",
+ "And how deciness that I have seen him. I can't\n",
+ "believe anything that it is to go, and was there, at hose of their minds that I don't care for, and say, after that things was at the set of\n",
+ "their some mentory of a\n",
+ "change and seeming...\"\n",
+ "\n",
+ "The policical smile of his brother was not talking of the significance. The country there all at once his faith and his call too, hele of his side, though she was not a mund and should hear in the sigh, but he could not help\n",
+ "the point.\n",
+ "\n",
+ "\"I haven't been telling the conversation\n",
+ "to say, too, that he was strunge in the country and attain the country,\" said Levin. \"The conception on the most\n",
+ "princess to ching the study too,\n",
+ "and I have got to have the political farting for that most marriage.\"\n",
+ "\n",
+ "\"You're true to the people!\" said Levin.\n",
+ "\n",
+ "The steam of a chair with all her stolling of the room with steps, a can looker face, and taken her anxalutions with a close friends and a conversation of their\n",
+ "servens and times and dargly. And her loss, the son had been dreaded to the princess, and she heard the strong.\n",
+ "\n",
+ "\"When it is so? I am not\n",
+ "the\n",
+ "peasant, then, you know that I don't, when you were solded, and see that it's all\n",
+ "mad that they meet to have, and I can't answer about, because\n",
+ "this was,\"\n",
+ "said Anna.\n",
+ "\n",
+ "\"Well, and that's the positively.\"\n",
+ "\n",
+ "\"Well, that's a men and seem in the sack face to him,\" he said. \"I am a second and string, I shall get to the\n",
+ "peatants...\"\n",
+ "\n",
+ "\"I have been from this time, some doctor answer to be the picture. There has\n",
+ "something not\n",
+ "been all too...\" he answered. \"Who, you must say to\n",
+ "this to see you to make you\n"
+ ],
+ "name": "stdout"
+ }
+ ]
+ }
+ ]
+}
\ No newline at end of file