Introduction to TensorBoard using TensorFlow

Hey fellow learner! Today let’s learn to implement something unique and interesting.

Ever heard of TensorBoard? No? Yes? Either way, today let's learn what TensorBoard is and implement it using TensorFlow.

Introduction to TensorBoard

TensorBoard is TensorFlow's visualization interface: it lets you inspect the model graph, training metrics, and other aspects of your models. It offers multiple views, each representing inputs and outputs of a particular format.

The different views are as follows (a short logging sketch follows the list):

  1. Scalars – Visualize scalar values, such as the loss over time
  2. Graphs – Visualize the computation graphs of your models
  3. Distributions – Visualize how the distribution of a tensor (such as a layer's weights) changes over time
  4. Histograms – Visualize those distributions in a 3-dimensional perspective
  5. Projector – Visualize embeddings, such as word embeddings
  6. Images – Visualize image data
  7. Audio – Visualize audio data
  8. Text – Visualize text data
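
To make these views concrete, here is a minimal sketch of how data reaches them through the tf.summary API. The tag names and the logs/demo directory are made up for illustration; the scalar, histogram, and image calls feed the Scalars, Histograms/Distributions, and Images views respectively:

import numpy as np
import tensorflow as tf

# A hypothetical log directory, used only for this illustration
writer = tf.summary.create_file_writer("logs/demo")

with writer.as_default():
    for step in range(100):
        # Scalars view: one value per step
        tf.summary.scalar("demo/sine", np.sin(step / 10), step=step)
        # Histograms and Distributions views: a tensor of values per step
        tf.summary.histogram("demo/activations", tf.random.normal([1000]), step=step)
    # Images view: a batch shaped [k, height, width, channels] with values in [0, 1]
    tf.summary.image("demo/noise", tf.random.uniform([1, 28, 28, 1]), step=0)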

Implementing TensorBoard

Step 1: Loading the TensorBoard extension and importing the necessary modules

First, we load the TensorBoard extension and import the necessary modules using the following code:

# Load the TensorBoard notebook extension (Jupyter/Colab magic)
%load_ext tensorboard

from datetime import datetime   # for timestamped log directories
from packaging import version   # for an optional TensorFlow version check
import tensorflow as tf
from tensorflow import keras
import numpy as np
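
The packaging import is there so you can (optionally) confirm that a TensorFlow 2.x installation is available before moving on. A minimal check, assuming the imports above:

# Optional sanity check: this tutorial assumes TensorFlow 2.x
assert version.parse(tf.__version__).release[0] >= 2, \
    "This notebook requires TensorFlow 2.0 or above."
print("TensorFlow version:", tf.__version__)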

Step 2: Preparing the data for training

In today's code, we are going to make use of Linear Regression, one of the simplest algorithms in Machine Learning. The following code shows the data preparation for it:

size = 1000                  # total number of samples
training_percent = 0.8       # 80-20 train/test split
training_size = int(size * training_percent)

# Evenly spaced inputs in [-1, 1], shuffled so the split is random
x_input = np.linspace(-1, 1, size)
np.random.shuffle(x_input)

# Targets follow the line y = 0.5x + 2 plus Gaussian noise
noise = np.random.normal(0, 0.05, (size,))
y_input = 0.5 * x_input + 2 + noise

x_train, y_train = x_input[:training_size], y_input[:training_size]
x_test, y_test = x_input[training_size:], y_input[training_size:]

First, we define the size of the dataset and the size of the training split. We then define the x and y inputs, where y follows the line y = 0.5x + 2 with Gaussian noise added to make the data more realistic. Finally, we separate the data into training and testing sets using the 80-20 rule.
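
As a quick sanity check on the split, you can print the shapes of the resulting arrays:

# 80% of 1000 samples for training, the remaining 20% for testing
print(x_train.shape, x_test.shape)   # (800,) (200,)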

Step 3: Creating the log before model creation and compilation

Before we create, compile, and train the model, we set up a log to store the losses. This involves specifying a timestamped log directory and creating a Keras TensorBoard callback, which we will pass to the model.fit() function later on. The following code block shows this:

# One log directory per run, timestamped so runs don't overwrite each other
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
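
This callback logs the default per-epoch scalars (training and validation loss). If you also want the Histograms and Graphs views populated, the same callback accepts a few extra arguments; a sketch (not required for this tutorial):

# Optional extras on the same callback
tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir=logdir,
    histogram_freq=1,   # log weight histograms every epoch
    write_graph=True,   # log the model graph for the Graphs view
)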

Step 4: Model creation, compilation, training, and loss calculation

The following code shows how to create a sequential model and compile it, then fit it on the training data we prepared in Step 2, with the testing data passed in as validation data. The average training loss across epochs is then calculated.

model = keras.models.Sequential([keras.layers.Dense(16, input_dim=1),
                                 keras.layers.Dense(1)])
# Note: 'learning_rate' replaces the deprecated 'lr' argument
model.compile(loss='mse', optimizer=keras.optimizers.SGD(learning_rate=0.2))
print("Training in progress....")

training_history = model.fit(x_train, y_train, batch_size=training_size,
                             verbose=0, epochs=100, validation_data=(x_test, y_test),
                             callbacks=[tensorboard_callback])
print("Average training loss: ", np.average(training_history.history['loss']))

For this run, the average training loss printed is 0.04265307889552787 (your exact value will vary slightly between runs).
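
Since the data was generated from the line y = 0.5x + 2, you can also sanity-check what the model learned with a couple of predictions (exact values will differ from run to run):

# Predictions should land close to y = 0.5 * x + 2
print(model.predict(np.array([[0.0], [1.0]])))
# expect roughly [[2.0], [2.5]]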

Step 5: Visualizing the losses on TensorBoard

Everything becomes more fun when you have good visualization. The following magic command launches TensorBoard inside the notebook so you can explore the logged losses:

%tensorboard --logdir logs/scalars
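
If you are working outside a notebook, the same dashboard can be launched from a terminal (assuming TensorBoard is on your PATH) and opened in a browser at the printed URL:

tensorboard --logdir logs/scalars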

The output is as follows:

[Screenshot: TensorBoard Scalars dashboard showing the training and validation loss curves.]

Conclusion

In this post, you successfully implemented a basic TensorBoard scalars dashboard using TensorFlow. Keep reading more posts to learn more!

