I’ve spent the past few weeks playing around with TensorFlow. It’s a library for machine learning that Google developed in-house and recently made publicly available. You can use it via a simple-ish Python module.

The core concept behind TensorFlow is that computations are encapsulated into nodes, or ‘ops’ in TensorFlow jargon. One op often relies on the output of another, so together the ops form a directed graph.

What’s neat about this setup is that we only need to call the one op whose output we’re interested in; TensorFlow then runs all the other ops required to generate that output.
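
Here’s a minimal sketch of the idea, using the same old-style API as the code at the bottom of this post:

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b       #an op that depends on a and b
d = c + 1.0     #an op that depends on c

with tf.Session() as sess:
    print sess.run(d)   #asking for d runs c, a and b automatically -> 7.0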

Even better, encapsulating computation inside self-contained ops makes distributed computing much easier. Different ops can run on different machines, on the CPU or the GPU, and so on.
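
For instance, TensorFlow lets you pin ops to devices with tf.device. A quick sketch, assuming the machine actually has a GPU:

import tensorflow as tf

with tf.device('/cpu:0'):
    m1 = tf.random_normal([500, 500])
    m2 = tf.random_normal([500, 500])
    prod = tf.matmul(m1, m2)    #this op is pinned to the CPU

with tf.device('/gpu:0'):       #'/gpu:0' is the first GPU; assumes one exists
    act = tf.nn.relu(prod)

with tf.Session() as sess:
    sess.run(act)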

Curve Fitting

I thought I’d have a play at getting TensorFlow to do some curve fitting. The full code is at the bottom of the post; here’s the plan:

  1. Build a perceptron with a single hidden layer
  2. Train it using back propagation
  3. Then feed in some randomly chosen test inputs
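
In symbols, the network in step 1 computes y = W_3 · sigmoid(W_1 · x + b_1) + b_3: W_1 and b_1 are the hidden layer’s weights and biases, W_3 and b_3 belong to the output layer, and back propagation tunes all four (the names match the code below).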

Here are the results for a damped sine wave. The grey points are the training data, joined by the true curve (I got matplotlib to draw that), and the green dots are my network’s attempt at recreating it.

Here we go:

I’m really quite impressed with the first one: 3 nodes and only 100 training steps, and it’s not far off at all. Even with more training steps, though, you can’t seem to do much better. By trial and error I increased the nodes to 7 and found that 600 steps does very well indeed.

Here’s another: 3 nodes and 100 steps.

It looks like my network likes bell curves; this one is e^(-x^2).
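
That makes some sense: a difference of two shifted sigmoids is already a bump, so a few hidden nodes can stitch one together easily. A quick numpy sketch, separate from the network code:

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z): return 1.0 / (1.0 + np.exp(-z))

xs = np.linspace(-6, 6, 200)
bump = sigmoid(2.0*(xs + 1)) - sigmoid(2.0*(xs - 1))    #two shifted sigmoids make a bump
plt.plot(xs, bump)
plt.show()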

And how about a square wave?

3 nodes and 100 steps

Oh dear. More nodes!

7 nodes and 200 steps

Not bad. Looks like a sine wave. More of everything:

10 nodes and 1000 steps

Ok, not bad. Is it just me, or do those little wings on the edges of each square look like the overshoot you get from a truncated Fourier series?
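
They do look like that: a truncated Fourier series of a square wave overshoots at every jump (the Gibbs phenomenon), and the network seems to be doing something similar with its finite stock of smooth sigmoids. A quick sketch for comparison:

import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-6, 6, 1000)

#sum of the first five odd harmonics of a square wave
partial = 4.0/np.pi * sum(np.sin((2*k + 1)*xs) / (2*k + 1) for k in range(5))

plt.plot(xs, partial)   #note the overshoot at each jump
plt.show()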

I’ll definitely be playing with TensorFlow more. Do drop me a line if you’re interested in doing physics with it.


import os
import math
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

HIDDEN_NODES    = 8
MIN             = -6.0
MAX             = 6.0
TRAIN_POINTS    = 30
TRAIN_STEPS     = 400
TEST_POINTS     = 100
NOISE           = 0.0
FUNC            = 'sin wave'     #choose from below

#The functions to approximate
def bell_curve(x): return math.exp(-(x**2.0))
def sin(x): return math.sin(x)
def damped(x): return np.sin(x) * np.exp(-0.3*x) # needs 7 nodes
def square(x):
    q = 5.0     #period of the square wave
    if x % q >= q/2:
        return 1.0
    else:
        return -1.0

functions = {   'sin wave'          : sin,
                'damped sin wave'   : damped,
                'bell curve'        : bell_curve,
                'square wave'       : square
            }

func = functions[ FUNC ]

#helper function to dump (x, y) pairs to a csv file
def writeToFile(filename = "out.csv", x_data = (), y_data = ()):
    with open(filename, "w") as out_file:
        for i in range(len(x_data)):
            out_file.write("%10.4f, %10.4f\n" % (x_data[i], y_data[i]))

#build nn -------------------------------

#input: a row of x values, shape [1, number of points]
x = tf.placeholder("float", [1,None])

#1st hidden layer

#weights
W_1 = tf.Variable(tf.truncated_normal(shape = [HIDDEN_NODES,1], stddev = 0.1))

#biases
b_1 = tf.Variable(tf.constant(0.1, shape = [HIDDEN_NODES,1]))

#op to calculate the hidden layer outputs, squashed through a sigmoid
L1 = tf.nn.sigmoid( tf.matmul(W_1, x) + b_1)

# output layer
W_3 = tf.Variable(tf.truncated_normal(shape = [1, HIDDEN_NODES], stddev = 0.1))
b_3 = tf.Variable(tf.constant(0.1, shape = [1]))

#output
y = tf.matmul(W_3, L1) + b_3
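#altogether: y = W_3 . sigmoid(W_1 . x + b_1) + b_3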
#build nn -------------------------------

#build tensorboard summary ops ----------
hist1 = tf.histogram_summary("weights 1", W_1)
hist2 = tf.histogram_summary("weights 3", W_3)
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("logs")
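#view the summaries with: tensorboard --logdir=logs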
#----------------------------------------

#placeholder for the target y values fed in during training
y_input = tf.placeholder("float")

#define error = delta squared
error = tf.square(y - y_input)

loss = tf.reduce_mean(error)

#training op to minimise the error
#passing global_step into minimize increments it each time
global_step = tf.Variable(0, trainable = False)
rate = tf.train.exponential_decay(0.15, global_step, 1, 0.9999)    #rate = 0.15 * 0.9999^step
train = tf.train.AdamOptimizer(rate).minimize(loss, global_step = global_step)

#create training data
x_train = np.linspace(MIN, MAX, TRAIN_POINTS)
y_train = np.array(map(func, x_train))

#add noise
if NOISE > 0.0:
    y_train += np.random.normal(0.0, NOISE, TRAIN_POINTS)

#print "x_train", x_train
#print "y_train", y_train

#save training data to file
#("training_data.csv", x_train, y_train)

with tf.Session() as sess:

    #get ready to run
    init_op = tf.initialize_all_variables()
    sess.run(init_op)

    #training loop
    for step in range(TRAIN_STEPS):

        #run!
        _, current_loss = sess.run([train, loss], feed_dict={x : [x_train], y_input: [y_train]})

        #test
        if step % 100 == 0:
            print "Loss: ", current_loss

            #tensor board----
            result = sess.run(merged, feed_dict={x : [x_train]})

            writer.add_summary(result, step)
            #----------------

    #testing using randomly chosen x values
    x_test = np.random.uniform(MIN, MAX, TEST_POINTS)

    output = sess.run(y, feed_dict={x : [x_test]})

    #writeToFile("output.csv", test_vals, output[0])

    #plot data ------------------

    #create folder if it doesn't exist

    script_dir = os.path.dirname(__file__)
    results_dir = os.path.join(script_dir, FUNC + '/')

    if not os.path.isdir(results_dir):
        os.makedirs(results_dir)

    #graph title
    title = str(HIDDEN_NODES) + " nodes and " + str(TRAIN_STEPS) + " steps"
    plt.figure().suptitle(title)

    #true curve (renamed so we don't shadow the tensors x and y)
    x_true = np.arange(MIN, MAX, 0.1)
    y_true = map(func, x_true)
    plt.plot(x_true, y_true, color = 'grey')

    #training data
    plt.scatter(x_train, y_train, marker = 'x', color = 'grey')

    #nn output
    plt.scatter(x_test, output[0], color = 'green')

    #save
    plt.savefig(results_dir + "%s.png" % title)