Creating Neural Networks using TensorFlow and Keras
Hello mate!! In this tutorial, I will show you how to develop artificial neural networks using TensorFlow and Keras, the Python deep learning framework and its high-level API. Before you start with the development, make sure you have the dependencies installed on your system.
I will explain the development using code snippets. Let us start by importing the required libraries. I assume you are already familiar with the theory behind neural networks and have some knowledge of Python. Click here for the TensorFlow documentation.
# importing the libraries
import tensorflow as tf
from tensorflow import keras

# loading the dataset
(xtrain, ytrain), (xval, yval) = keras.datasets.fashion_mnist.load_data()
I have decided to use the Fashion-MNIST dataset because it is easy to gain access to: Keras ships it with the library. The data is split into training and validation sets at the time of loading itself.
The next snippet consists of pre-processing the data from the Fashion-MNIST dataset (normalization, etc.) and a function to create the dataset. Below is the Python code:
def preprocessdata(dat1, dat2):
    # scale pixel values from 0..255 to 0..1 and cast labels to integers
    dat1 = tf.cast(dat1, tf.float32) / 255.0
    dat2 = tf.cast(dat2, tf.int64)
    return dat1, dat2

def create_dataset(data1, data2, no_of_classes=10):
    # one-hot encode the labels, then build a shuffled, batched tf.data pipeline
    data2 = tf.one_hot(data2, depth=no_of_classes)
    return tf.data.Dataset.from_tensor_slices((data1, data2)) \
        .map(preprocessdata) \
        .shuffle(len(data2)) \
        .batch(128)
The above snippet pre-processes the data for this task. First, I normalized the pixel values to the range [0, 1], which makes it easier for the neural network to train. The second function creates a tf.data.Dataset from the pre-processed data, and "tf.one_hot" was used to one-hot encode the labels.
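To make the one-hot step concrete, here is a minimal plain-Python sketch of what "tf.one_hot" produces for a single label (the helper name one_hot_row is mine, not part of the tutorial's code):

```python
# Hypothetical helper mimicking tf.one_hot for a single label:
# a vector of zeros with a 1.0 at the label's index.
def one_hot_row(label, depth=10):
    return [1.0 if i == label else 0.0 for i in range(depth)]

# Label 3 out of 10 classes becomes a 10-dimensional indicator vector.
print(one_hot_row(3))
# [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

# The normalization step is similar in spirit: a raw pixel value like 255
# is scaled into [0, 1] by dividing by 255.0.
print(255 / 255.0)  # 1.0
```

This is only an illustration; tf.one_hot does the same thing over whole tensors at once.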
In the next snippet, I will create a neural network, which will give you an idea of how to develop one using TensorFlow.
model = keras.Sequential([
    keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=64, activation='relu'),
    keras.layers.Dense(units=10, activation='softmax')
])
I have decided on a fairly simple architecture, as Fashion-MNIST is a basic dataset. You can increase the depth depending on the dataset at hand, but make sure not to go too deep, which might lead to the vanishing gradient problem. We usually use 'relu' for the hidden layers; the mathematical justification is beyond the scope of this tutorial. As the output has 10 classes to classify, we use "softmax" as the activation of the final layer.
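To see what the softmax activation actually does, here is a small stand-alone sketch (my own implementation, not Keras code): it turns a vector of raw scores into probabilities that sum to 1, with the largest score getting the largest probability.

```python
import math

def softmax(scores):
    # subtract the max score for numerical stability,
    # then exponentiate and normalize so the outputs sum to 1
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)       # largest score -> largest probability
print(sum(probs))  # sums to 1 (up to floating-point error)
```

This is why softmax suits a 10-class output layer: each of the 10 units ends up holding the predicted probability of one class.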
train_data = create_dataset(xtrain, ytrain)
validation_data = create_dataset(xval, yval)

# from_logits=False because the model's last layer already applies softmax
model.compile(optimizer='adam',
              loss=tf.losses.CategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

history = model.fit(
    train_data.repeat(),
    steps_per_epoch=500,
    epochs=20,
    validation_data=validation_data.repeat(),
    validation_steps=4
)
In this snippet, I have compiled and trained the model. I used 20 epochs so that the model has enough time to train well. I chose 'adam' as the optimizer because it is a robust default that usually converges faster than plain SGD.
As the labels are one-hot encoded categorical values, I used "CategoricalCrossentropy" as the loss function and achieved around 80% validation accuracy, which is reasonable for such a simple architecture. I believe you have now understood how to create neural networks using TensorFlow and Keras.
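For reference, the 'accuracy' metric reported during training is simply the fraction of samples where the highest-probability class matches the true label. A plain-Python sketch (the helper names are mine, not part of Keras):

```python
def argmax(vec):
    # index of the largest value in a list
    return max(range(len(vec)), key=lambda i: vec[i])

def accuracy(pred_probs, true_labels):
    # fraction of samples where the predicted class equals the true label
    correct = sum(1 for p, y in zip(pred_probs, true_labels)
                  if argmax(p) == y)
    return correct / len(true_labels)

# toy example: three predictions over three classes, two of them correct
preds = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1], [0.3, 0.3, 0.4]]
labels = [1, 0, 1]
print(accuracy(preds, labels))  # 2 of 3 correct
```

Keras computes the same quantity over tensors internally; this sketch just shows what the number means.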