Chapter 14 – Recurrent Neural Networks

This notebook contains all the sample code and solutions to the exercises in chapter 14.

Setup

First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure matplotlib plots figures inline, and prepare a function to save the figures:

In [1]:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
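# (note: tf is imported in the next cell; reset_graph() is only called after that import)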
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rnn"

def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)

Then of course we will need TensorFlow:

In [2]:
import tensorflow as tf

Basic RNNs

Manual RNN

In [3]:
reset_graph()

n_inputs = 3
n_neurons = 5

X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])

Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))

Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)

init = tf.global_variables_initializer()
In [4]:
import numpy as np

X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1

with tf.Session() as sess:
    init.run()
    Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
In [5]:
print(Y0_val)
[[-0.06640061  0.9625767   0.6810579   0.7091854  -0.89821595]
 [ 0.99777555 -0.7197888  -0.99657613  0.96739244 -0.99989706]
 [ 0.99999785 -0.9989881  -0.99999887  0.9967763  -0.9999999 ]
 [ 1.         -1.         -1.         -0.9981892   0.9995087 ]]
In [6]:
print(Y1_val)
[[ 1.         -1.         -1.          0.40200272 -0.99999994]
 [-0.12210429  0.62805295  0.96718436 -0.9937122  -0.2583933 ]
 [ 0.99999815 -0.9999994  -0.99999744 -0.8594331  -0.99998796]
 [ 0.99928296 -0.9999981  -0.9999059   0.98579615 -0.9220575 ]]

Using static_rnn()

In [7]:
n_inputs = 3
n_neurons = 5
In [8]:
reset_graph()

X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1],
                                                dtype=tf.float32)
Y0, Y1 = output_seqs
In [9]:
init = tf.global_variables_initializer()
In [10]:
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])

with tf.Session() as sess:
    init.run()
    Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
In [11]:
Y0_val
Out[11]:
array([[ 0.30741337, -0.32884312, -0.6542847 , -0.9385059 ,  0.52089024],
       [ 0.9912275 , -0.95425415, -0.7518078 , -0.9995208 ,  0.98202336],
       [ 0.99992675, -0.99783254, -0.82473516, -0.99999636,  0.99947786],
       [ 0.9967709 , -0.6875061 ,  0.8419969 ,  0.93039113,  0.81206834]],
      dtype=float32)
In [12]:
Y1_val
Out[12]:
array([[ 0.99998885, -0.9997606 , -0.06679297, -0.9999803 ,  0.99982214],
       [-0.65249425, -0.5152086 , -0.37968948, -0.5922594 , -0.08968376],
       [ 0.998624  , -0.99715203, -0.03308632, -0.9991566 ,  0.9932902 ],
       [ 0.99681675, -0.9598194 ,  0.39660627, -0.8307605 ,  0.7967197 ]],
      dtype=float32)
In [13]:
from tensorflow_graph_in_jupyter import show_graph
In [14]:
show_graph(tf.get_default_graph())

Packing sequences

In [15]:
n_steps = 2
n_inputs = 3
n_neurons = 5
In [16]:
reset_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, X_seqs,
                                                dtype=tf.float32)
outputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])
In [17]:
init = tf.global_variables_initializer()
In [18]:
X_batch = np.array([
        # t = 0      t = 1 
        [[0, 1, 2], [9, 8, 7]], # instance 1
        [[3, 4, 5], [0, 0, 0]], # instance 2
        [[6, 7, 8], [6, 5, 4]], # instance 3
        [[9, 0, 1], [3, 2, 1]], # instance 4
    ])

with tf.Session() as sess:
    init.run()
    outputs_val = outputs.eval(feed_dict={X: X_batch})
In [19]:
print(outputs_val)
[[[-0.4565232  -0.6806412   0.40938237  0.631045   -0.45732823]
  [-0.94288003 -0.9998869   0.9405581   0.99999845 -0.99999976]]

 [[-0.80015343 -0.99218273  0.78177965  0.9971032  -0.9964609 ]
  [-0.637116    0.11300934  0.5798437   0.43105593 -0.6371699 ]]

 [[-0.93605185 -0.99983793  0.9308867   0.9999814  -0.99998313]
  [-0.9165386  -0.99456036  0.89605415  0.9998719  -0.9999751 ]]

 [[ 0.99273676 -0.9981933  -0.5554365   0.99890316 -0.9953323 ]
  [-0.02746333 -0.7319198   0.7827872   0.9525682  -0.9781772 ]]]
In [20]:
print(np.transpose(outputs_val, axes=[1, 0, 2])[1])
[[-0.94288003 -0.9998869   0.9405581   0.99999845 -0.99999976]
 [-0.637116    0.11300934  0.5798437   0.43105593 -0.6371699 ]
 [-0.9165386  -0.99456036  0.89605415  0.9998719  -0.9999751 ]
 [-0.02746333 -0.7319198   0.7827872   0.9525682  -0.9781772 ]]

Using dynamic_rnn()

In [21]:
n_steps = 2
n_inputs = 3
n_neurons = 5
In [22]:
reset_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
In [23]:
init = tf.global_variables_initializer()
In [24]:
X_batch = np.array([
        [[0, 1, 2], [9, 8, 7]], # instance 1
        [[3, 4, 5], [0, 0, 0]], # instance 2
        [[6, 7, 8], [6, 5, 4]], # instance 3
        [[9, 0, 1], [3, 2, 1]], # instance 4
    ])

with tf.Session() as sess:
    init.run()
    outputs_val = outputs.eval(feed_dict={X: X_batch})
In [25]:
print(outputs_val)
[[[-0.85115266  0.8735834   0.5802911   0.8954789  -0.0557505 ]
  [-0.99999595  0.9999957   0.9981816   1.          0.37679613]]

 [[-0.99832934  0.9992038   0.98071444  0.99998486  0.25192654]
  [-0.7081804  -0.07723369 -0.8522789   0.5845347  -0.7878094 ]]

 [[-0.9999826   0.9999953   0.99928635  1.          0.51590705]
  [-0.9993956   0.9984095   0.83422637  0.9999998  -0.47325197]]

 [[ 0.87888587  0.07356028  0.9721692   0.9998546  -0.7351168 ]
  [-0.9134514   0.3600957   0.7624865   0.99817705  0.80142   ]]]
In [26]:
show_graph(tf.get_default_graph())

Setting the sequence lengths

In [27]:
n_steps = 2
n_inputs = 3
n_neurons = 5

reset_graph()

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
In [28]:
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)
In [29]:
init = tf.global_variables_initializer()
In [30]:
X_batch = np.array([
        # step 0     step 1
        [[0, 1, 2], [9, 8, 7]], # instance 1
        [[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
        [[6, 7, 8], [6, 5, 4]], # instance 3
        [[9, 0, 1], [3, 2, 1]], # instance 4
    ])
seq_length_batch = np.array([2, 1, 2, 2])
In [31]:
with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run(
        [outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
In [32]:
print(outputs_val)
[[[-0.91231877  0.16516447  0.5548655  -0.3915935   0.20846416]
  [-1.          0.95672596  0.9983169   0.9997017   0.96518576]]

 [[-0.99986124  0.67022896  0.9723652   0.66310453  0.74457586]
  [ 0.          0.          0.          0.          0.        ]]

 [[-0.99999976  0.89679974  0.9986295   0.96475154  0.93662006]
  [-0.99995255  0.96819544  0.9600286   0.9870626   0.8545923 ]]

 [[-0.9643544   0.9950159  -0.361507    0.99833775  0.99949706]
  [-0.9613586   0.9568762   0.71322876  0.9772921  -0.09582992]]]
In [33]:
print(states_val)
[[-1.          0.95672596  0.9983169   0.9997017   0.96518576]
 [-0.99986124  0.67022896  0.9723652   0.66310453  0.74457586]
 [-0.99995255  0.96819544  0.9600286   0.9870626   0.8545923 ]
 [-0.9613586   0.9568762   0.71322876  0.9772921  -0.09582992]]

Training a sequence classifier

Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function. The main differences relevant to this chapter are listed below (a short side-by-side sketch follows the list):

  • several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
  • the default activation is now None rather than tf.nn.relu.
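For example, the two calls below would be equivalent (an illustrative sketch, not a cell from the book; states and n_outputs are the names used in the classifier cell that follows):

from tensorflow.contrib.layers import fully_connected
logits = fully_connected(states, n_outputs, activation_fn=None, scope="logits")  # contrib version
logits = tf.layers.dense(states, n_outputs, activation=None, name="logits")      # tf.layers version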
In [34]:
reset_graph()

n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10

learning_rate = 0.001

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])

basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                          logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()
In [35]:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
In [36]:
n_epochs = 100
batch_size = 150

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            X_batch = X_batch.reshape((-1, n_steps, n_inputs))
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
0 Train accuracy: 0.946667 Test accuracy: 0.9366
1 Train accuracy: 0.966667 Test accuracy: 0.9488
2 Train accuracy: 0.96 Test accuracy: 0.9614
3 Train accuracy: 0.966667 Test accuracy: 0.9572
4 Train accuracy: 0.946667 Test accuracy: 0.9624
5 Train accuracy: 0.96 Test accuracy: 0.9634
6 Train accuracy: 0.973333 Test accuracy: 0.9714
7 Train accuracy: 0.98 Test accuracy: 0.9686
8 Train accuracy: 0.953333 Test accuracy: 0.9647
9 Train accuracy: 0.973333 Test accuracy: 0.9706
10 Train accuracy: 0.973333 Test accuracy: 0.9697
11 Train accuracy: 0.966667 Test accuracy: 0.9692
12 Train accuracy: 0.986667 Test accuracy: 0.9751
13 Train accuracy: 0.993333 Test accuracy: 0.9738
14 Train accuracy: 0.986667 Test accuracy: 0.9721
15 Train accuracy: 0.98 Test accuracy: 0.9752
16 Train accuracy: 0.993333 Test accuracy: 0.9779
17 Train accuracy: 0.98 Test accuracy: 0.976
18 Train accuracy: 0.986667 Test accuracy: 0.976
19 Train accuracy: 0.98 Test accuracy: 0.9709
20 Train accuracy: 0.986667 Test accuracy: 0.9758
21 Train accuracy: 0.98 Test accuracy: 0.9751
22 Train accuracy: 0.986667 Test accuracy: 0.9748
23 Train accuracy: 0.98 Test accuracy: 0.9767
24 Train accuracy: 0.98 Test accuracy: 0.9696
25 Train accuracy: 1.0 Test accuracy: 0.9779
26 Train accuracy: 0.98 Test accuracy: 0.979
27 Train accuracy: 0.98 Test accuracy: 0.9783
28 Train accuracy: 0.966667 Test accuracy: 0.9793
29 Train accuracy: 0.986667 Test accuracy: 0.9756
30 Train accuracy: 0.98 Test accuracy: 0.9765
31 Train accuracy: 0.986667 Test accuracy: 0.9784
32 Train accuracy: 0.986667 Test accuracy: 0.9753
33 Train accuracy: 0.98 Test accuracy: 0.9769
34 Train accuracy: 0.993333 Test accuracy: 0.9785
35 Train accuracy: 1.0 Test accuracy: 0.9787
36 Train accuracy: 1.0 Test accuracy: 0.971
37 Train accuracy: 0.993333 Test accuracy: 0.9782
38 Train accuracy: 0.993333 Test accuracy: 0.9755
39 Train accuracy: 0.986667 Test accuracy: 0.9732
40 Train accuracy: 0.986667 Test accuracy: 0.9695
41 Train accuracy: 0.986667 Test accuracy: 0.9812
42 Train accuracy: 0.993333 Test accuracy: 0.9785
43 Train accuracy: 0.993333 Test accuracy: 0.9768
44 Train accuracy: 0.98 Test accuracy: 0.979
45 Train accuracy: 0.986667 Test accuracy: 0.981
46 Train accuracy: 0.993333 Test accuracy: 0.9792
47 Train accuracy: 1.0 Test accuracy: 0.9812
48 Train accuracy: 0.993333 Test accuracy: 0.9775
49 Train accuracy: 1.0 Test accuracy: 0.9747
50 Train accuracy: 1.0 Test accuracy: 0.9815
51 Train accuracy: 0.986667 Test accuracy: 0.9805
52 Train accuracy: 0.986667 Test accuracy: 0.9798
53 Train accuracy: 0.986667 Test accuracy: 0.9791
54 Train accuracy: 1.0 Test accuracy: 0.9771
55 Train accuracy: 0.986667 Test accuracy: 0.98
56 Train accuracy: 0.993333 Test accuracy: 0.978
57 Train accuracy: 0.986667 Test accuracy: 0.9794
58 Train accuracy: 0.993333 Test accuracy: 0.9784
59 Train accuracy: 0.993333 Test accuracy: 0.9826
60 Train accuracy: 0.986667 Test accuracy: 0.9746
61 Train accuracy: 1.0 Test accuracy: 0.978
62 Train accuracy: 1.0 Test accuracy: 0.9757
63 Train accuracy: 1.0 Test accuracy: 0.9792
64 Train accuracy: 0.986667 Test accuracy: 0.9758
65 Train accuracy: 1.0 Test accuracy: 0.9823
66 Train accuracy: 0.986667 Test accuracy: 0.9752
67 Train accuracy: 0.986667 Test accuracy: 0.9794
68 Train accuracy: 0.993333 Test accuracy: 0.9798
69 Train accuracy: 0.993333 Test accuracy: 0.9762
70 Train accuracy: 0.993333 Test accuracy: 0.9799
71 Train accuracy: 1.0 Test accuracy: 0.9804
72 Train accuracy: 1.0 Test accuracy: 0.9777
73 Train accuracy: 0.993333 Test accuracy: 0.9771
74 Train accuracy: 0.986667 Test accuracy: 0.9764
75 Train accuracy: 0.993333 Test accuracy: 0.9749
76 Train accuracy: 0.993333 Test accuracy: 0.9804
77 Train accuracy: 0.993333 Test accuracy: 0.9776
78 Train accuracy: 0.986667 Test accuracy: 0.9749
79 Train accuracy: 0.993333 Test accuracy: 0.976
80 Train accuracy: 0.986667 Test accuracy: 0.9754
81 Train accuracy: 0.98 Test accuracy: 0.9783
82 Train accuracy: 1.0 Test accuracy: 0.9787
83 Train accuracy: 0.993333 Test accuracy: 0.9793
84 Train accuracy: 0.986667 Test accuracy: 0.9739
85 Train accuracy: 1.0 Test accuracy: 0.9776
86 Train accuracy: 0.986667 Test accuracy: 0.9761
87 Train accuracy: 0.98 Test accuracy: 0.9781
88 Train accuracy: 0.98 Test accuracy: 0.9549
89 Train accuracy: 0.98 Test accuracy: 0.9767
90 Train accuracy: 1.0 Test accuracy: 0.9809
91 Train accuracy: 0.98 Test accuracy: 0.9809
92 Train accuracy: 0.986667 Test accuracy: 0.9754
93 Train accuracy: 0.993333 Test accuracy: 0.979
94 Train accuracy: 1.0 Test accuracy: 0.9777
95 Train accuracy: 1.0 Test accuracy: 0.9771
96 Train accuracy: 1.0 Test accuracy: 0.9814
97 Train accuracy: 0.993333 Test accuracy: 0.9673
98 Train accuracy: 0.993333 Test accuracy: 0.9788
99 Train accuracy: 1.0 Test accuracy: 0.9792

Multi-layer RNN

In [37]:
reset_graph()

n_steps = 28
n_inputs = 28
n_outputs = 10

learning_rate = 0.001

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
In [38]:
n_neurons = 100
n_layers = 3

layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
                                      activation=tf.nn.relu)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
In [39]:
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

init = tf.global_variables_initializer()
In [40]:
n_epochs = 10
batch_size = 150

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            X_batch = X_batch.reshape((-1, n_steps, n_inputs))
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
        print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
0 Train accuracy: 0.946667 Test accuracy: 0.9495
1 Train accuracy: 0.973333 Test accuracy: 0.9625
2 Train accuracy: 0.953333 Test accuracy: 0.9648
3 Train accuracy: 0.986667 Test accuracy: 0.9761
4 Train accuracy: 0.993333 Test accuracy: 0.9719
5 Train accuracy: 0.993333 Test accuracy: 0.9739
6 Train accuracy: 0.98 Test accuracy: 0.9746
7 Train accuracy: 0.98 Test accuracy: 0.9782
8 Train accuracy: 0.986667 Test accuracy: 0.9768
9 Train accuracy: 0.986667 Test accuracy: 0.9769

Time series

In [41]:
t_min, t_max = 0, 30
resolution = 0.1

def time_series(t):
    return t * np.sin(t) / 3 + 2 * np.sin(t*5)

def next_batch(batch_size, n_steps):
    t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
    Ts = t0 + np.arange(0., n_steps + 1) * resolution
    ys = time_series(Ts)
    return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
In [42]:
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))

n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)

plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")

plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")


save_fig("time_series_plot")
plt.show()
Saving figure time_series_plot
In [43]:
X_batch, y_batch = next_batch(1, n_steps)
In [44]:
np.c_[X_batch[0], y_batch[0]]
Out[44]:
array([[ 1.38452097,  2.05081182],
       [ 2.05081182,  2.29742291],
       [ 2.29742291,  2.0465599 ],
       [ 2.0465599 ,  1.34009916],
       [ 1.34009916,  0.32948704],
       [ 0.32948704, -0.76115235],
       [-0.76115235, -1.68967022],
       [-1.68967022, -2.25492776],
       [-2.25492776, -2.34576159],
       [-2.34576159, -1.96789418],
       [-1.96789418, -1.24220428],
       [-1.24220428, -0.37478448],
       [-0.37478448,  0.39387907],
       [ 0.39387907,  0.84815766],
       [ 0.84815766,  0.85045064],
       [ 0.85045064,  0.3752526 ],
       [ 0.3752526 , -0.48422846],
       [-0.48422846, -1.53852738],
       [-1.53852738, -2.54795941],
       [-2.54795941, -3.28097239]])

Using an OutputProjectionWrapper

Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value:

In [45]:
reset_graph()

n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.

In [46]:
reset_graph()

n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
In [47]:
cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
    output_size=n_outputs)
In [48]:
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
In [49]:
learning_rate = 0.001

loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
In [50]:
saver = tf.train.Saver()
In [51]:
n_iterations = 1500
batch_size = 50

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)
    
    saver.save(sess, "./my_time_series_model") # not shown in the book
0 	MSE: 18.9177
100 	MSE: 0.762551
200 	MSE: 0.290907
300 	MSE: 0.149525
400 	MSE: 0.0772763
500 	MSE: 0.0665644
600 	MSE: 0.0636215
700 	MSE: 0.0559812
800 	MSE: 0.0554487
900 	MSE: 0.0529518
1000 	MSE: 0.0534466
1100 	MSE: 0.051972
1200 	MSE: 0.0442769
1300 	MSE: 0.0547834
1400 	MSE: 0.0466028
In [52]:
with tf.Session() as sess:                          # not shown in the book
    saver.restore(sess, "./my_time_series_model")   # not shown

    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
INFO:tensorflow:Restoring parameters from ./my_time_series_model
In [53]:
y_pred
Out[53]:
array([[[-3.42637658],
        [-2.46367216],
        [-1.15637207],
        [ 0.76617211],
        [ 2.29781222],
        [ 3.12525225],
        [ 3.4899745 ],
        [ 3.33646941],
        [ 2.82800984],
        [ 2.24254656],
        [ 1.71146703],
        [ 1.62021911],
        [ 2.00660276],
        [ 2.79940104],
        [ 3.91245008],
        [ 5.18602514],
        [ 6.16892099],
        [ 6.69519091],
        [ 6.66508675],
        [ 6.10952806]]], dtype=float32)
In [54]:
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")

save_fig("time_series_pred_plot")
plt.show()
Saving figure time_series_pred_plot

Without using an OutputProjectionWrapper

In [55]:
reset_graph()

n_steps = 20
n_inputs = 1
n_neurons = 100

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
In [56]:
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
In [57]:
n_outputs = 1
learning_rate = 0.001
In [58]:
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
In [59]:
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
saver = tf.train.Saver()
In [60]:
n_iterations = 1500
batch_size = 50

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)
    
    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
    
    saver.save(sess, "./my_time_series_model")
0 	MSE: 10.1622
100 	MSE: 0.406083
200 	MSE: 0.116547
300 	MSE: 0.0686967
400 	MSE: 0.0689586
500 	MSE: 0.0625274
600 	MSE: 0.056175
700 	MSE: 0.0507173
800 	MSE: 0.0516515
900 	MSE: 0.0502068
1000 	MSE: 0.0497242
1100 	MSE: 0.0506494
1200 	MSE: 0.0413749
1300 	MSE: 0.0501782
1400 	MSE: 0.0440728
In [61]:
y_pred
Out[61]:
array([[[-3.40363002],
        [-2.45480704],
        [-1.10760546],
        [ 0.85981268],
        [ 2.19396615],
        [ 3.03818107],
        [ 3.44206738],
        [ 3.40469313],
        [ 2.89831972],
        [ 2.2327323 ],
        [ 1.62960398],
        [ 1.48093259],
        [ 1.91334391],
        [ 2.81092381],
        [ 3.99130607],
        [ 5.17190695],
        [ 6.16106033],
        [ 6.65612125],
        [ 6.57139349],
        [ 5.97566175]]], dtype=float32)
In [62]:
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")

plt.show()

Generating a creative new sequence

In [63]:
with tf.Session() as sess:                        # not shown in the book
    saver.restore(sess, "./my_time_series_model") # not shown

    sequence = [0.] * n_steps
    for iteration in range(300):
        X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)
        y_pred = sess.run(outputs, feed_dict={X: X_batch})
        sequence.append(y_pred[0, -1, 0])
INFO:tensorflow:Restoring parameters from ./my_time_series_model
In [64]:
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
In [65]:
with tf.Session() as sess:
    saver.restore(sess, "./my_time_series_model")

    sequence1 = [0. for i in range(n_steps)]
    for iteration in range(len(t) - n_steps):
        X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)
        y_pred = sess.run(outputs, feed_dict={X: X_batch})
        sequence1.append(y_pred[0, -1, 0])

    sequence2 = [time_series(i * resolution + t_min + (t_max-t_min/3)) for i in range(n_steps)]
    for iteration in range(len(t) - n_steps):
        X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)
        y_pred = sess.run(outputs, feed_dict={X: X_batch})
        sequence2.append(y_pred[0, -1, 0])

plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(t, sequence1, "b-")
plt.plot(t[:n_steps], sequence1[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")

plt.subplot(122)
plt.plot(t, sequence2, "b-")
plt.plot(t[:n_steps], sequence2[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
save_fig("creative_sequence_plot")
plt.show()
INFO:tensorflow:Restoring parameters from ./my_time_series_model
Saving figure creative_sequence_plot

Deep RNN

MultiRNNCell

In [66]:
reset_graph()

n_inputs = 2
n_steps = 5

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
In [67]:
n_neurons = 100
n_layers = 3

layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
          for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
In [68]:
init = tf.global_variables_initializer()
In [69]:
X_batch = np.random.rand(2, n_steps, n_inputs)
In [70]:
with tf.Session() as sess:
    init.run()
    outputs_val, states_val = sess.run([outputs, states], feed_dict={X: X_batch})
In [71]:
outputs_val.shape
Out[71]:
(2, 5, 100)

Distributing a Deep RNN Across Multiple GPUs

Do NOT do this:

In [72]:
with tf.device("/gpu:0"):  # BAD! This is ignored.
    layer1 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)

with tf.device("/gpu:1"):  # BAD! Ignored again.
    layer2 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)

Instead, you need a DeviceCellWrapper:

In [73]:
import tensorflow as tf

class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
  def __init__(self, device, cell):
    self._cell = cell
    self._device = device

  @property
  def state_size(self):
    return self._cell.state_size

  @property
  def output_size(self):
    return self._cell.output_size

  def __call__(self, inputs, state, scope=None):
    with tf.device(self._device):
        return self._cell(inputs, state, scope)
In [74]:
reset_graph()

n_inputs = 5
n_steps = 20
n_neurons = 100

X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
In [75]:
devices = ["/cpu:0", "/cpu:0", "/cpu:0"] # replace with ["/gpu:0", "/gpu:1", "/gpu:2"] if you have 3 GPUs
cells = [DeviceCellWrapper(dev,tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
         for dev in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)

Alternatively, since TensorFlow 1.1, you can use the tf.contrib.rnn.DeviceWrapper class (alias tf.nn.rnn_cell.DeviceWrapper since TF 1.2).
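A minimal sketch with the built-in wrapper would look like this (assuming a fresh graph with X and devices defined as above; DeviceWrapper takes the cell first, then the device string):

cells = [tf.contrib.rnn.DeviceWrapper(tf.contrib.rnn.BasicRNNCell(num_units=n_neurons), dev)
         for dev in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)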

In [76]:
init = tf.global_variables_initializer()
In [77]:
with tf.Session() as sess:
    init.run()
    print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))
[[[ 0.06828325 -0.11375451  0.06424566 ..., -0.24244198 -0.04821674
   -0.12077257]
  [ 0.07453813 -0.2251049   0.20471548 ..., -0.14811224 -0.0922538
   -0.04429063]
  [ 0.13780868 -0.14680631 -0.0095655  ..., -0.08136044  0.07381541
   -0.03125775]
  ..., 
  [-0.25400278 -0.32078549  0.39923593 ..., -0.26669216  0.33505771
   -0.03757669]
  [ 0.22596692 -0.09880773 -0.27422303 ..., -0.13385999 -0.25443044
   -0.36498186]
  [ 0.1655937  -0.33435836  0.34313348 ..., -0.36904442  0.06908746
    0.46574104]]

 [[ 0.00489879 -0.03151967  0.02628033 ..., -0.19341362 -0.0730375
    0.00451888]
  [ 0.03073939 -0.0579551   0.17785911 ..., -0.20945786  0.05200011
   -0.07436937]
  [ 0.00192378 -0.25690764  0.12488247 ...,  0.02644884 -0.25046453
   -0.12239399]
  ..., 
  [-0.13501379 -0.06209698  0.1595035  ..., -0.20012119 -0.3338365
   -0.09281697]
  [-0.44347292  0.20323271  0.12526961 ..., -0.07962411  0.27046496
    0.31884009]
  [ 0.23965777 -0.22903362  0.07749593 ..., -0.02653922  0.084024
    0.02313657]]]

Dropout

In [78]:
reset_graph()

n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
In [79]:
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

Note: the input_keep_prob parameter can be a placeholder, making it possible to set it to any value you want during training, and to 1.0 during testing (effectively turning dropout off). This is a much more elegant solution than what was recommended in earlier versions of the book (i.e., writing your own wrapper class or having a separate model for training and testing). Thanks to Shen Cheng for bringing this to my attention.

In [80]:
keep_prob = tf.placeholder_with_default(1.0, shape=())
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
         for layer in range(n_layers)]
cells_drop = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
              for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells_drop)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
In [81]:
learning_rate = 0.01

stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])

loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()
saver = tf.train.Saver()
In [82]:
n_iterations = 1500
batch_size = 50
train_keep_prob = 0.5

with tf.Session() as sess:
    init.run()
    for iteration in range(n_iterations):
        X_batch, y_batch = next_batch(batch_size, n_steps)
        _, mse = sess.run([training_op, loss],
                          feed_dict={X: X_batch, y: y_batch,
                                     keep_prob: train_keep_prob})
        if iteration % 100 == 0:                   # not shown in the book
            print(iteration, "Training MSE:", mse) # not shown
    
    saver.save(sess, "./my_dropout_time_series_model")
0 Training MSE: 14.9647
100 Training MSE: 4.7444
200 Training MSE: 3.65731
300 Training MSE: 3.9575
400 Training MSE: 3.08749
500 Training MSE: 2.63093
600 Training MSE: 3.60809
700 Training MSE: 4.01957
800 Training MSE: 4.08699
900 Training MSE: 3.31099
1000 Training MSE: 2.95242
1100 Training MSE: 3.30796
1200 Training MSE: 3.28226
1300 Training MSE: 2.87491
1400 Training MSE: 3.31036
In [83]:
with tf.Session() as sess:
    saver.restore(sess, "./my_dropout_time_series_model")

    X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
    y_pred = sess.run(outputs, feed_dict={X: X_new})
INFO:tensorflow:Restoring parameters from ./my_dropout_time_series_model
In [84]:
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")

plt.show()

Oops, it seems that Dropout does not help at all in this particular case. :/

LSTM

In [85]:
reset_graph()

lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
In [86]:
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3

learning_rate = 0.001

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])

lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
              for layer in range(n_layers)]
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    
init = tf.global_variables_initializer()
In [87]:
states
Out[87]:
(LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_2:0' shape=(?, 150) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_3:0' shape=(?, 150) dtype=float32>),
 LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_4:0' shape=(?, 150) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_5:0' shape=(?, 150) dtype=float32>),
 LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_6:0' shape=(?, 150) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_7:0' shape=(?, 150) dtype=float32>))
In [88]:
top_layer_h_state
Out[88]:
<tf.Tensor 'rnn/while/Exit_7:0' shape=(?, 150) dtype=float32>
In [89]:
n_epochs = 10
batch_size = 150

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for iteration in range(mnist.train.num_examples // batch_size):
            X_batch, y_batch = mnist.train.next_batch(batch_size)
            X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
        acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
        print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
Epoch 0 Train accuracy = 0.946667 Test accuracy = 0.9506
Epoch 1 Train accuracy = 0.98 Test accuracy = 0.9691
Epoch 2 Train accuracy = 0.993333 Test accuracy = 0.9745
Epoch 3 Train accuracy = 0.993333 Test accuracy = 0.982
Epoch 4 Train accuracy = 0.973333 Test accuracy = 0.9819
Epoch 5 Train accuracy = 0.993333 Test accuracy = 0.9838
Epoch 6 Train accuracy = 0.993333 Test accuracy = 0.9799
Epoch 7 Train accuracy = 0.993333 Test accuracy = 0.9873
Epoch 8 Train accuracy = 1.0 Test accuracy = 0.9862
Epoch 9 Train accuracy = 1.0 Test accuracy = 0.9873
In [90]:
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=n_neurons, use_peepholes=True)
In [91]:
gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)

Embeddings

This section is based on TensorFlow's Word2Vec tutorial.

Fetch the data

In [92]:
from six.moves import urllib

import errno
import os
import zipfile

WORDS_PATH = "datasets/words"
WORDS_URL = 'http://mattmahoney.net/dc/text8.zip'

def mkdir_p(path):
    """Create directories, ok if they already exist.
    
    This is for python 2 support. In python >=3.2, simply use:
    >>> os.makedirs(path, exist_ok=True)
    """
    try:
        os.makedirs(path)
    except OSError as exc:
        if exc.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise

def fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):
    os.makedirs(words_path, exist_ok=True)
    zip_path = os.path.join(words_path, "words.zip")
    if not os.path.exists(zip_path):
        urllib.request.urlretrieve(words_url, zip_path)
    with zipfile.ZipFile(zip_path) as f:
        data = f.read(f.namelist()[0])
    return data.decode("ascii").split()
In [93]:
words = fetch_words_data()
In [94]:
words[:5]
Out[94]:
['anarchism', 'originated', 'as', 'a', 'term']

Build the dictionary

In [95]:
from collections import Counter

vocabulary_size = 50000

vocabulary = [("UNK", None)] + Counter(words).most_common(vocabulary_size - 1)
vocabulary = np.array([word for word, _ in vocabulary])
dictionary = {word: code for code, word in enumerate(vocabulary)}
data = np.array([dictionary.get(word, 0) for word in words])
In [96]:
" ".join(words[:9]), data[:9]
Out[96]:
('anarchism originated as a term of abuse first used',
 array([5242, 3081,   12,    6,  195,    2, 3136,   46,   59]))
In [97]:
" ".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])
Out[97]:
'default originated as a term of presidency first used'
In [98]:
words[24], data[24]
Out[98]:
('culottes', 0)

Generate batches

In [99]:
import random
from collections import deque

def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1 # [ skip_window target skip_window ]
    buffer = deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [ skip_window ]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels
In [100]:
data_index=0
batch, labels = generate_batch(8, 2, 1)
In [101]:
batch, [vocabulary[word] for word in batch]
Out[101]:
(array([3081, 3081,   12,   12,    6,    6,  195,  195], dtype=int32),
 ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term'])
In [102]:
labels, [vocabulary[word] for word in labels[:, 0]]
Out[102]:
(array([[  12],
        [5242],
        [   6],
        [3081],
        [  12],
        [ 195],
        [   2],
        [   6]], dtype=int32),
 ['as', 'anarchism', 'a', 'originated', 'as', 'term', 'of', 'a'])

Build the model

In [103]:
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64    # Number of negative examples to sample.

learning_rate = 0.01
In [104]:
reset_graph()

# Input data.
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
In [105]:
vocabulary_size = 50000
embedding_size = 150

# Look up embeddings for inputs.
init_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)
embeddings = tf.Variable(init_embeds)
In [106]:
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
In [107]:
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
    tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
                   num_sampled, vocabulary_size))

# Construct the Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)

# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

# Add variable initializer.
init = tf.global_variables_initializer()

Train the model

In [108]:
num_steps = 10001

with tf.Session() as session:
    init.run()

    average_loss = 0
    for step in range(num_steps):
        print("\rIteration: {}".format(step), end="\t")
        batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
        feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}

        # We perform one update step by evaluating the training op (including it
        # in the list of returned values for session.run()
        _, loss_val = session.run([training_op, loss], feed_dict=feed_dict)
        average_loss += loss_val

        if step % 2000 == 0:
            if step > 0:
                average_loss /= 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print("Average loss at step ", step, ": ", average_loss)
            average_loss = 0

        # Note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = vocabulary[valid_examples[i]]
                top_k = 8 # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k+1]
                log_str = "Nearest to %s:" % valid_word
                for k in range(top_k):
                    close_word = vocabulary[nearest[k]]
                    log_str = "%s %s," % (log_str, close_word)
                print(log_str)

    final_embeddings = normalized_embeddings.eval()
Iteration: 0	Average loss at step  0 :  285.895568848
Nearest to would: stems, secessionist, coronary, provers, surpasses, subscriber, losing, excommunicated,
Nearest to on: enumerative, camus, biochemical, skeptic, idealised, sooty, grossman, flashing,
Nearest to four: maj, glue, doubt, emil, civilis, generative, bested, necrosis,
Nearest to his: exerted, talmudic, switzerland, particles, somehow, standstill, value, hau,
Nearest to often: billed, shrine, lems, aalborg, prescriptions, winger, manichaeism, sox,
Nearest to in: charlestown, cobe, moody, appeals, dauphin, buenos, liar, inuit,
Nearest to an: rota, haihowak, melatonin, graham, guaranteeing, catalyze, pcd, gobind,
Nearest to eight: onstage, usn, brl, guevara, arbitrate, wild, radiocarbon, manning,
Nearest to these: thames, natufian, lasted, genghis, amitabh, dodo, bain, thorne,
Nearest to nine: mindstorms, blocks, proceed, tn, nihilist, elucidated, aspects, intermediate,
Nearest to called: humanoid, dealing, writ, gracefully, superfluous, wooing, confessed, diputados,
Nearest to about: smile, mirrored, affaires, ducktales, threatening, eureka, hiller, woke,
Nearest to up: subgenre, proposal, straps, orchestral, constants, equinox, gmc, livejournal,
Nearest to one: tubulin, hijack, alum, decentralisation, sombre, lords, ts, drumming,
Nearest to and: soft, flap, costumed, heldenplatz, paleo, kiel, monotheistic, fermion,
Nearest to been: navigable, mansi, wards, ud, hideous, yong, linear, josh,
Iteration: 2000	Average loss at step  2000 :  130.956728756
Iteration: 4000	Average loss at step  4000 :  62.7161855488
Iteration: 6000	Average loss at step  6000 :  42.0940016119
Iteration: 8000	Average loss at step  8000 :  31.4492927847
Iteration: 10000	Average loss at step  10000 :  25.6146161559
Nearest to would: to, washed, fit, investigators, can, was, repeated, alkanes,
Nearest to on: in, aegean, actinium, methylene, donohue, levitt, of, and,
Nearest to four: nine, six, five, two, one, seven, eight, zero,
Nearest to his: the, and, satisfies, rescue, molyneux, conquer, babylon, bambaataa,
Nearest to often: priests, progressed, sometimes, it, whitfield, participation, horrible, guise,
Nearest to in: and, on, ampere, of, with, conformations, omotic, altaic,
Nearest to an: a, this, induced, submachine, and, imaged, that, the,
Nearest to eight: nine, one, six, four, seven, two, five, three,
Nearest to these: nsu, altaic, neutral, signification, romanus, cosmonaut, and, staggered,
Nearest to nine: one, six, eight, four, seven, five, two, zero,
Nearest to called: motorola, is, absurd, matrices, lubricants, familial, outer, rigor,
Nearest to about: denunciation, fixtures, gide, endeavor, abortions, esp, methanol, unarable,
Nearest to up: spite, delos, lighthouses, possessions, arcade, unhappy, exposure, socialite,
Nearest to one: nine, two, six, eight, seven, four, five, three,
Nearest to and: in, of, a, the, asparagales, altaic, which, UNK,
Nearest to been: have, bani, annealing, are, alhazred, ppp, columbus, guise,

Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer):

In [109]:
np.save("./my_final_embeddings.npy", final_embeddings)
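If you prefer the Saver route, a minimal sketch would be to run the following inside the training session above, before it ends (the checkpoint path is just an example):

saver = tf.train.Saver()
saver.save(session, "./my_word2vec_model")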

Plot the embeddings

In [110]:
def plot_with_labels(low_dim_embs, labels):
    assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
    plt.figure(figsize=(18, 18))  #in inches
    for i, label in enumerate(labels):
        x, y = low_dim_embs[i,:]
        plt.scatter(x, y)
        plt.annotate(label,
                     xy=(x, y),
                     xytext=(5, 2),
                     textcoords='offset points',
                     ha='right',
                     va='bottom')
In [111]:
from sklearn.manifold import TSNE

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [vocabulary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)

Machine Translation

The basic_rnn_seq2seq() function creates a simple Encoder/Decoder model: it first runs an RNN to encode encoder_inputs into a state vector, then runs a decoder initialized with the last encoder state on decoder_inputs. Encoder and decoder use the same RNN cell type but they don't share parameters. The code below uses the embedding_rnn_seq2seq() variant, which additionally embeds the integer word IDs before feeding them to the encoder and decoder.

In [112]:
import tensorflow as tf
reset_graph()

n_steps = 50
n_neurons = 200
n_layers = 3
num_encoder_symbols = 20000
num_decoder_symbols = 20000
embedding_size = 150
learning_rate = 0.01

X = tf.placeholder(tf.int32, [None, n_steps]) # English sentences
Y = tf.placeholder(tf.int32, [None, n_steps]) # French translations
W = tf.placeholder(tf.float32, [None, n_steps - 1, 1])
Y_input = Y[:, :-1]
Y_target = Y[:, 1:]

encoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors
decoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors

lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
              for layer in range(n_layers)]
cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)

output_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
    encoder_inputs,
    decoder_inputs,
    cell,
    num_encoder_symbols,
    num_decoder_symbols,
    embedding_size)

logits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])
In [113]:
logits_flat = tf.reshape(logits, [-1, num_decoder_symbols])
Y_target_flat = tf.reshape(Y_target, [-1])
W_flat = tf.reshape(W, [-1])
xentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)

init = tf.global_variables_initializer()

Exercise solutions

1. to 6.

See Appendix A.

7. Embedded Reber Grammars

First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.

In [114]:
from random import choice, seed

# to make this notebook's output stable across runs
seed(42)
np.random.seed(42)

default_reber_grammar = [
    [("B", 1)],           # (state 0) =B=>(state 1)
    [("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
    [("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
    [("T", 3), ("V", 5)], # and so on...
    [("X", 3), ("S", 6)],
    [("P", 4), ("V", 6)],
    [("E", None)]]        # (state 6) =E=>(terminal state)

embedded_reber_grammar = [
    [("B", 1)],
    [("T", 2), ("P", 3)],
    [(default_reber_grammar, 4)],
    [(default_reber_grammar, 5)],
    [("T", 6)],
    [("P", 6)],
    [("E", None)]]

def generate_string(grammar):
    state = 0
    output = []
    while state is not None:
        production, state = choice(grammar[state])
        if isinstance(production, list):
            production = generate_string(grammar=production)
        output.append(production)
    return "".join(output)

Let's generate a few strings based on the default Reber grammar:

In [115]:
for _ in range(25):
    print(generate_string(default_reber_grammar), end=" ")
BTXXTTTTVPXTTTTTVPSE BTXSE BTXXTVPSE BTXXVPSE BTSSXXTTVVE BTXSE BTSSSXSE BPTTTVVE BTXXVVE BPTTVVE BTSXXTTTTVPSE BPTTVVE BPTVPSE BPTTVPXVVE BPVPXTTTVPXTVPSE BTXSE BPTTTTVPXTTTTTTTVPXVVE BPTVVE BTXSE BPTTTVVE BTSXXVPSE BTXXTTTTTVVE BPTTVPSE BPVVE BPTTTVPXVPXTTTTTVPXTTVVE 

Looks good. Now let's generate a few strings based on the embedded Reber grammar:

In [116]:
for _ in range(25):
    print(generate_string(embedded_reber_grammar), end=" ")
BPBPTVVEPE BTBPTVPXVVETE BPBPTTTVVEPE BPBTXSEPE BPBPTTTTTVPSEPE BTBTSXSETE BPBPVPSEPE BPBPVVEPE BPBTXSEPE BPBTSXSEPE BTBPTTVVETE BPBPVVEPE BTBTXSETE BPBPTTVVEPE BTBTSXXVVETE BTBTXXTVPXTVPSETE BTBPTVVETE BPBPVPXTTVPXTVVEPE BTBTXSETE BPBTXSEPE BPBTSXXTVPSEPE BPBPVVEPE BPBPTTTTTTTTTTVPXVVEPE BPBPVVEPE BPBPVVEPE 

Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:

In [117]:
def generate_corrupted_string(grammar, chars="BEPSTVX"):
    good_string = generate_string(grammar)
    index = np.random.randint(len(good_string))
    good_char = good_string[index]
    bad_char = choice(list(set(chars) - set(good_char)))
    return good_string[:index] + bad_char + good_string[index + 1:]

Let's look at a few corrupted strings:

In [118]:
for _ in range(25):
    print(generate_corrupted_string(embedded_reber_grammar), end=" ")
BPBPVPPEPE BPBPXSEPE BPBPTVVPPE BTBPTPSETE BTBPVVBTE BPBTSSXXTPTVVEPE BPPTSXXTVPSEPE BPBTXSSPE BTBPTTTVPSPTE BPBTSXXTTTSTTVVEPE BPBBXXVPXTVPXTTVVEPE BPBPTTVBEPE BPBPVVETE BPEPTTVVEPE BPBPVTSEPE BPBTXXXVVEPE BXBPTTTVPXVVETE BPBTSSXBEPE BPBPVBEPE BSBTXSEPE BTBPTVPXVPXVVETB TPBTSXXTVPXVPSEPE BPBTSXXTTTVXSEPE BPBPVPXVVSPE BTBTSBSETE 

It's not possible to feed a string directly to an RNN: we need to convert it to a sequence of vectors first. Each vector will represent a single letter, using a one-hot encoding. For example, the letter "B" will be represented as the vector [1, 0, 0, 0, 0, 0, 0], the letter "E" will be represented as [0, 1, 0, 0, 0, 0, 0], and so on. Let's write a function that converts a string to a sequence of such one-hot vectors. Note that if the string is shorter than n_steps, it will be padded with zero vectors (later, we will tell TensorFlow how long each string actually is using the sequence_length parameter).

In [119]:
def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
    char_to_index = {char: index for index, char in enumerate(chars)}
    output = np.zeros((n_steps, len(chars)), dtype=np.int32)
    for index, char in enumerate(string):
        output[index, char_to_index[char]] = 1.
    return output
In [120]:
string_to_one_hot_vectors("BTBTXSETE", 12)
Out[120]:
array([[1, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [1, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 1, 0, 0, 0],
       [0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=int32)

We can now generate the dataset, with 50% good strings, and 50% bad strings:

In [121]:
def generate_dataset(size):
    good_strings = [generate_string(embedded_reber_grammar)
                    for _ in range(size // 2)]
    bad_strings = [generate_corrupted_string(embedded_reber_grammar)
                   for _ in range(size - size // 2)]
    all_strings = good_strings + bad_strings
    n_steps = max([len(string) for string in all_strings])
    X = np.array([string_to_one_hot_vectors(string, n_steps)
                  for string in all_strings])
    seq_length = np.array([len(string) for string in all_strings])
    y = np.array([[1] for _ in range(len(good_strings))] +
                 [[0] for _ in range(len(bad_strings))])
    rnd_idx = np.random.permutation(size)
    return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]
In [122]:
X_train, l_train, y_train = generate_dataset(10000)

Let's take a look at the first training instance:

In [123]:
X_train[0]
Out[123]:
array([[1, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [1, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 1, 0],
       [0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 1, 0, 0],
       [0, 1, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]], dtype=int32)

It's padded with a lot of zero vectors, because every string in the dataset is padded to the length of the longest one. How long is this particular string?

In [124]:
l_train[0]
Out[124]:
23

What class is it?

In [125]:
y_train[0]
Out[125]:
array([0])
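
Since generate_dataset() builds exactly size // 2 good strings and the rest corrupted ones, we can also quickly verify the shapes and the class balance (a small check added here for clarity, not in the original notebook):

In [ ]:
# Shapes: (instances, n_steps, n_inputs), (instances,), (instances, 1)
print(X_train.shape, l_train.shape, y_train.shape)
# Fraction of good strings (label 1): should be exactly 0.5 by construction
print(y_train.mean())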

Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences:

  • First, the input strings have variable length, so we need to specify the sequence_length when calling the dynamic_rnn() function.
  • Second, this is a binary classifier, so we only need one output neuron, which will output, for each input string, the estimated logit (i.e., the log odds) that it is a good string. For multiclass classification we used sparse_softmax_cross_entropy_with_logits(), but for binary classification we use sigmoid_cross_entropy_with_logits() (see the short NumPy sketch right after this list).
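
To make the second point concrete, here is a tiny NumPy sketch (an illustration only, not part of the model below) of what binary cross-entropy computes for one logit z and one target y in {0, 1}; sigmoid_cross_entropy_with_logits() implements a numerically stable equivalent of this formula:

In [ ]:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def binary_xentropy(y, z):
    # -y*log(p) - (1-y)*log(1-p), with p = sigmoid(z)
    p = sigmoid(z)
    return -y * np.log(p) - (1 - y) * np.log(1 - p)

# A confident positive logit gives a small loss on a good string (y=1)
# and a large loss on a bad string (y=0):
print(binary_xentropy(1.0, 3.0), binary_xentropy(0.0, 3.0))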
In [126]:
reset_graph()

possible_chars = "BEPSTVX"
n_inputs = len(possible_chars)
n_neurons = 30
n_outputs = 1

learning_rate = 0.02
momentum = 0.95

X = tf.placeholder(tf.float32, [None, None, n_inputs], name="X")
seq_length = tf.placeholder(tf.int32, [None], name="seq_length")
y = tf.placeholder(tf.float32, [None, 1], name="y")

gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,
                                    sequence_length=seq_length)

# The GRU's final state (one vector per instance) feeds a single output
# neuron, whose output is the logit that the string respects the grammar
logits = tf.layers.dense(states, n_outputs, name="logits")
y_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name="y_pred")  # logit > 0 means proba > 0.5
y_proba = tf.nn.sigmoid(logits, name="y_proba")

xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
                                       momentum=momentum,
                                       use_nesterov=True)
training_op = optimizer.minimize(loss)

correct = tf.equal(y_pred, y, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")

init = tf.global_variables_initializer()
saver = tf.train.Saver()

Now let's generate a validation set so we can track progress during training:

In [127]:
X_val, l_val, y_val = generate_dataset(5000)
In [128]:
n_epochs = 50
batch_size = 50

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        X_batches = np.array_split(X_train, len(X_train) // batch_size)
        l_batches = np.array_split(l_train, len(l_train) // batch_size)
        y_batches = np.array_split(y_train, len(y_train) // batch_size)
        for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):
            loss_val, _ = sess.run(
                [loss, training_op],
                feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
        acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
        acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})
        print("{:4d}  Train loss: {:.4f}, accuracy: {:.2f}%  Validation accuracy: {:.2f}%".format(
            epoch, loss_val, 100 * acc_train, 100 * acc_val))
        saver.save(sess, "./my_reber_classifier")
   0  Train loss: 0.6869, accuracy: 54.00%  Validation accuracy: 58.86%
   1  Train loss: 0.6651, accuracy: 54.00%  Validation accuracy: 59.56%
   2  Train loss: 0.6379, accuracy: 72.00%  Validation accuracy: 71.92%
   3  Train loss: 0.5612, accuracy: 66.00%  Validation accuracy: 72.88%
   4  Train loss: 0.4574, accuracy: 82.00%  Validation accuracy: 78.56%
   5  Train loss: 0.3446, accuracy: 84.00%  Validation accuracy: 85.80%
   6  Train loss: 0.3048, accuracy: 88.00%  Validation accuracy: 87.96%
   7  Train loss: 0.3263, accuracy: 90.00%  Validation accuracy: 91.28%
   8  Train loss: 0.2017, accuracy: 94.00%  Validation accuracy: 94.92%
   9  Train loss: 0.1600, accuracy: 98.00%  Validation accuracy: 95.06%
  10  Train loss: 0.1414, accuracy: 98.00%  Validation accuracy: 96.20%
  11  Train loss: 0.0723, accuracy: 98.00%  Validation accuracy: 98.26%
  12  Train loss: 0.0181, accuracy: 100.00%  Validation accuracy: 98.70%
  13  Train loss: 0.0769, accuracy: 100.00%  Validation accuracy: 97.64%
  14  Train loss: 0.0111, accuracy: 100.00%  Validation accuracy: 99.56%
  15  Train loss: 0.0987, accuracy: 100.00%  Validation accuracy: 97.46%
  16  Train loss: 0.0080, accuracy: 100.00%  Validation accuracy: 99.66%
  17  Train loss: 0.0061, accuracy: 100.00%  Validation accuracy: 99.76%
  18  Train loss: 0.0025, accuracy: 100.00%  Validation accuracy: 99.84%
  19  Train loss: 0.0405, accuracy: 100.00%  Validation accuracy: 98.42%
  20  Train loss: 0.0058, accuracy: 100.00%  Validation accuracy: 99.78%
  21  Train loss: 0.0072, accuracy: 100.00%  Validation accuracy: 99.30%
  22  Train loss: 0.0015, accuracy: 100.00%  Validation accuracy: 99.96%
  23  Train loss: 0.0013, accuracy: 100.00%  Validation accuracy: 99.90%
  24  Train loss: 0.0013, accuracy: 100.00%  Validation accuracy: 99.94%
  25  Train loss: 0.0007, accuracy: 100.00%  Validation accuracy: 100.00%
  26  Train loss: 0.0006, accuracy: 100.00%  Validation accuracy: 100.00%
  27  Train loss: 0.0005, accuracy: 100.00%  Validation accuracy: 100.00%
  28  Train loss: 0.0004, accuracy: 100.00%  Validation accuracy: 100.00%
  29  Train loss: 0.0004, accuracy: 100.00%  Validation accuracy: 100.00%
  30  Train loss: 0.0003, accuracy: 100.00%  Validation accuracy: 100.00%
  31  Train loss: 0.0003, accuracy: 100.00%  Validation accuracy: 100.00%
  32  Train loss: 0.0003, accuracy: 100.00%  Validation accuracy: 100.00%
  33  Train loss: 0.0003, accuracy: 100.00%  Validation accuracy: 100.00%
  34  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  35  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  36  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  37  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  38  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  39  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  40  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  41  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  42  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  43  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  44  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  45  Train loss: 0.0002, accuracy: 100.00%  Validation accuracy: 100.00%
  46  Train loss: 0.0001, accuracy: 100.00%  Validation accuracy: 100.00%
  47  Train loss: 0.0001, accuracy: 100.00%  Validation accuracy: 100.00%
  48  Train loss: 0.0001, accuracy: 100.00%  Validation accuracy: 100.00%
  49  Train loss: 0.0001, accuracy: 100.00%  Validation accuracy: 100.00%

Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They differ only by the second-to-last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second-to-last letter. That requires a fairly long short-term memory (which is why we used a GRU cell).

In [129]:
test_strings = [
    "BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVETE",
    "BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVEPE"]
l_test = np.array([len(s) for s in test_strings])
max_length = l_test.max()
X_test = [string_to_one_hot_vectors(s, n_steps=max_length)
          for s in test_strings]

with tf.Session() as sess:
    saver.restore(sess, "./my_reber_classifier")
    y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})

print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
    print("{}: {:.2f}%".format(string, 100 * y_proba_val[index][0]))
INFO:tensorflow:Restoring parameters from ./my_reber_classifier

Estimated probability that these are Reber strings:
BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVETE: 0.00%
BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVEPE: 100.00%

Ta-da! It worked fine. The RNN found the correct answers with absolute confidence. :)
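
As an extra sanity check (a sketch, not in the original notebook), we could also restore the saved model and measure its accuracy on a fresh, independently generated test set:

In [ ]:
# Generate a brand-new test set and evaluate the restored classifier on it
X_test2, l_test2, y_test2 = generate_dataset(5000)

with tf.Session() as sess:
    saver.restore(sess, "./my_reber_classifier")
    acc_test = accuracy.eval(feed_dict={X: X_test2, seq_length: l_test2,
                                        y: y_test2})
print("Test accuracy: {:.2f}%".format(100 * acc_test))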

8. and 9.

Coming soon...

In [ ]: