TensorBoard quick start in 5 minutes.

Check out Diffgram.com for Deep Learning

Note: this was written in 2017 and has not been updated.

TensorBoard is a web app for viewing information about your TensorFlow program. Data is written by TensorFlow and read by TensorBoard.

It’s an amazing debugger. You can watch variables change over time and inspect the control flow / computational graph of your application.

Go:

  1. Type 5 lines of code in TensorFlow to write data (3 min)
  2. Start training (< 1 min)
  3. Start the TensorBoard server (< 1 min)

There are many great resources linked below that are more rigorous. This is a quick start that covers a few core elements to get it running.

A completed example is available as a Jupyter Notebook.

1. Add code to your TensorFlow program to collect data (3 min)

1.1 What do you want to track? (1+ lines)

Call tf.summary.histogram() to store info from a computed result, say softmax weights, predictions, loss, etc.

I suggest adding only 1 or 2 of these to get started.

The tf.summary lines below are the code to add to your application.

# .... your code ...
def your_sub_function():
    softmax_w = tf.Variable(tf.truncated_normal((in_size, ...)))
    tf.summary.histogram("softmax_w", softmax_w)

    # Another variable you want to store
    predictions = tf.nn.softmax(logits, name="predictions")
    tf.summary.histogram("predictions", predictions)
# .... your code ...

That’s it! Just add this where you wish to record information:

tf.summary.histogram("your_variable_name", your_variable)

List of summary operations available.
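To see the whole flow in miniature, here is a sketch of a histogram and a scalar summary being built and evaluated. It uses the tf.compat.v1 API so the TF 1.x style of this post runs on a modern TensorFlow install; the variable and summary names are illustrative, not from the post.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # run in graph mode, as in TF 1.x

# A toy variable and a derived loss (names here are made up for the example)
weights = tf.Variable(tf.truncated_normal((10, 5)))
loss = tf.reduce_mean(tf.square(weights))

tf.summary.histogram("weights", weights)  # track the full distribution
tf.summary.scalar("loss", loss)           # track a single number per step

merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Evaluating the merged op yields a serialized summary protobuf,
    # which is what you would hand to a FileWriter.
    summary = sess.run(merged)
```

The serialized `summary` bytes are exactly what `add_summary()` writes to the event file in the next section.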

1.2 Save stuff during training (4 lines)

In your tf.Session() define a writer.

In your training loop, define a merge with tf.summary.merge_all().

In sess.run() pass merge (a Tensor), get back a summary.

Add summary to your writer. Done.

with tf.Session() as sess:
    train_writer = tf.summary.FileWriter('./logs/1/train', sess.graph)
    counter = 0
    for e in range(epochs):
        for x, y in get_batches(....):
            counter += 1
            merge = tf.summary.merge_all()
            summary, batch_loss, new_state, _ = sess.run(
                [merge, model.loss, model.final_state, model.optimizer],
                feed_dict=feed)
            train_writer.add_summary(summary, counter)
            # .... your code ...

Note: In your own code, `counter` should be your iteration counter.

Later, you can add as many variables as you wish without changing this code (e.g. 3 histograms, 2 scalars, etc.).

2. Start training operations (< 1 min)

Start training your network :)

Go ahead and start training it now, and in a minute you will start seeing data come in!

3. Start the TensorBoard server (< 1 min)

Open a terminal window in your root project directory. Run:

tensorboard --logdir logs/1

Go to the URL it provides, or on Windows:

http://localhost:6006/
Example with logs stored in logs/2
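Since the example above stores a later run in logs/2, it's worth knowing you can point --logdir at the parent directory instead; TensorBoard treats each subdirectory as a separate run and plots them side by side for comparison.

```shell
# Compare all runs under logs/ (logs/1, logs/2, ...) in one TensorBoard instance
tensorboard --logdir logs
```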

TensorBoard! :)

I suggest clicking Graphs first to see the visuals!

Double click things to see more detail.

By default, TensorBoard updates every 30 seconds, or you can refresh the web app to see your training results coming in!

Try histograms:

Now that it’s working, we can clean up the graph by adding names:

Add tf.name_scope() to your functions

with tf.name_scope("RNN_init_state"):
    initial_state = rnn_cells.zero_state(batch_size, tf.float32)
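The effect of tf.name_scope() is that every op created inside the block gets the scope name as a prefix, which is what makes TensorBoard collapse them into one node in the graph view. A minimal, runnable sketch (again via tf.compat.v1; the op here is a stand-in for the RNN state above):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # graph mode, as in TF 1.x

with tf.name_scope("RNN_init_state"):
    # Any op created here is grouped under "RNN_init_state" in the graph view.
    # tf.zeros is just an illustrative stand-in for rnn_cells.zero_state().
    initial_state = tf.zeros((32, 128), name="zeros")

print(initial_state.name)  # the scope name becomes a prefix on the op name
```

In TensorBoard's Graphs tab, everything under the scope appears as a single collapsible "RNN_init_state" node you can double-click to expand.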

Thanks for reading! :)

BTW, if you are working on a deep learning project, check out Diffgram: Plug and play for computer vision!

Further resources

Official documentation

Official introduction video
