Save, Load and Export Models with Keras in Python
Hello everyone! In this article, you will learn the basic steps to save, load and export models with Keras in Python. Creating models in Keras is easy, so here you will see how to save them and how they can be reused later. As we move along, you will learn the steps that are essential while working on a project.
Importing Libraries
Below we import all the necessary Python libraries for the project. We also create the new folders tmp, models, model_name and weights, which we will use throughout the tutorial.
import tensorflow as tf
import numpy as np
import os

print('TensorFlow version:', tf.__version__)

folders = ['tmp', 'models', 'model_name', 'weights']
for folder in folders:
    if not os.path.isdir(folder):
        os.mkdir(folder)

print(os.listdir('.'))
Output:
TensorFlow version: 2.0.0
['.ipynb_checkpoints', 'models', 'model_name', 'Save, Load and Export Keras Models - Completed.ipynb', 'tmp', 'weights']
Creating The Model
We are going to work with the Fashion-MNIST dataset and build a model for it. The model has three dense layers: the last one is the output layer with the 'softmax' activation function and 10 nodes (one per class), while the other two layers have 128 nodes each and use the 'relu' activation function. Finally, while compiling the model we use 'categorical_crossentropy' as the loss, 'adam' as the optimizer and 'acc' (accuracy) as the metric.
def create_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
    return model

model = create_model()
model.summary()
Output:
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 128) 100480 _________________________________________________________________ dense_1 (Dense) (None, 128) 16512 _________________________________________________________________ dense_2 (Dense) (None, 10) 1290 ================================================================= Total params: 118,282 Trainable params: 118,282 Non-trainable params: 0 _________________________________________________________________
Data Preprocessing
Here we use the Fashion-MNIST dataset from Keras datasets. We flatten each 28x28 image into a 784-dimensional vector and scale the pixel values to the range 0-1 so that the network trains well. Finally, we one-hot encode the labels, converting them into categorical vectors, since we are using categorical cross-entropy as the loss.
Below is the Python code:
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

x_train = np.reshape(x_train, (x_train.shape[0], 784))/255.
x_test = np.reshape(x_test, (x_test.shape[0], 784))/255.

y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)
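As an optional sanity check (not part of the original notebook), we can print the array shapes to confirm that each image is now a flat 784-vector and each label a 10-way one-hot vector:

print(x_train.shape, y_train.shape)   # expected: (60000, 784) (60000, 10)
print(x_test.shape, y_test.shape)     # expected: (10000, 784) (10000, 10)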
Model Checkpointing While Training
Often you need to save data during training so that it can be used again. At certain epochs the model reaches its most accurate weights, and we want to be able to save those values so that we can restore them later. So let us now see how to save the weights.
We are going to use the weights folder that we created earlier. We call model.fit to train the model for 2 epochs with a batch size of 512. In the callbacks argument of fit we pass a ModelCheckpoint callback: with save_weights_only=True it saves only the weights (not the full model), and with save_best_only=True it writes a new checkpoint only when the monitored metric, val_acc, improves. The epoch number and validation accuracy are embedded in each file name.
checkpoint_dir = 'weights/'

_ = model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=2, batch_size=512,
    callbacks=[
        tf.keras.callbacks.ModelCheckpoint(
            os.path.join(checkpoint_dir, 'epoch_{epoch:02d}_acc_{val_acc:.4f}'),
            monitor='val_acc',
            save_weights_only=True,
            save_best_only=True
        )
    ]
)
Output:
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
60000/60000 [==============================] - 2s 28us/sample - loss: 0.6906 - acc: 0.7694 - val_loss: 0.5013 - val_acc: 0.8185
Epoch 2/2
60000/60000 [==============================] - 1s 18us/sample - loss: 0.4405 - acc: 0.8455 - val_loss: 0.4434 - val_acc: 0.8416
Now let's look at the folder where the files were saved. Both epochs produced a checkpoint because the validation accuracy improved each time; the second file has the higher accuracy, which is why it was saved after the first one.
os.listdir(checkpoint_dir)
Output:
['checkpoint', 'epoch_01_acc_0.8185.data-00000-of-00001', 'epoch_01_acc_0.8185.index', 'epoch_02_acc_0.8416.data-00000-of-00001', 'epoch_02_acc_0.8416.index']
Loading Weights
Now we will load the weights that we saved earlier and see what difference they make.
First, we create a fresh (untrained) model and evaluate it as a baseline.
model = create_model()
print(model.evaluate(x_test, y_test, verbose=False))
Output:
[2.3328444290161134, 0.1592]
As expected for an untrained model, the loss is very high and the accuracy is very low.
Now let's see the output when the saved weights are loaded.
model.load_weights('weights/epoch_02_acc_0.8416')
print(model.evaluate(x_test, y_test, verbose=False))
Output:
[0.44337423400878906, 0.8416]
Here the loss drops back down and the accuracy returns to about 84%, matching the values we saw during training.
Note: When loading the saved weights we pass only the checkpoint prefix (the epoch and accuracy part of the file name), without the '.index' or '.data' extensions. When you run this code, make sure you use the exact file name from your own run, because the accuracy in the name may vary.
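If you do not want to type the file name by hand, TensorFlow can look it up for you. Because save_weights_only=True writes checkpoints in the TensorFlow checkpoint format, the weights folder also contains the small 'checkpoint' index file we saw above, and tf.train.latest_checkpoint() can read it. A minimal sketch (the accuracy in the returned path will depend on your run):

latest = tf.train.latest_checkpoint('weights')   # e.g. 'weights/epoch_02_acc_0.8416'
model = create_model()
model.load_weights(latest)
print(model.evaluate(x_test, y_test, verbose=False))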
Saving Model While Training
This is similar to the model checkpointing we did earlier in this tutorial. We only have to change save_weights_only to False, because this time we want to save the whole model, and add the '.h5' extension to the file name so it is saved in HDF5 format. Whether you keep only the best checkpoint or every epoch is up to you; here we set save_best_only=False and save every epoch.
models_dir = 'models'

model = create_model()

_ = model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=2, batch_size=512,
    callbacks=[
        tf.keras.callbacks.ModelCheckpoint(
            os.path.join(models_dir, 'epoch_{epoch:02d}_acc_{val_acc:.4f}.h5'),
            monitor='val_acc',
            save_weights_only=False,
            save_best_only=False
        )
    ]
)
Output:
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
60000/60000 [==============================] - 2s 29us/sample - loss: 0.6860 - acc: 0.7665 - val_loss: 0.4879 - val_acc: 0.8318
Epoch 2/2
60000/60000 [==============================] - 1s 17us/sample - loss: 0.4319 - acc: 0.8496 - val_loss: 0.4275 - val_acc: 0.8494
Let us see the files in the models folder:
os.listdir(models_dir)
Output:
['epoch_01_acc_0.8318.h5', 'epoch_02_acc_0.8494.h5']
Loading Models
Similar to loading weights, we will now load the model from the saved file.
First, let's see the output when we don't load anything and just call the function to create a new, untrained model.
model = create_model()
print(model.evaluate(x_test, y_test, verbose=False))
Output:
[2.4127998825073242, 0.1113]
Again, the loss is very high and the accuracy is very low. So let's see what happens when we load the saved model.
model = tf.keras.models.load_model('models/epoch_02_acc_0.8494.h5')
model.summary()
print(model.evaluate(x_test, y_test, verbose=False))
Output:
Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense_6 (Dense) (None, 128) 100480 _________________________________________________________________ dense_7 (Dense) (None, 128) 16512 _________________________________________________________________ dense_8 (Dense) (None, 10) 1290 ================================================================= Total params: 118,282 Trainable params: 118,282 Non-trainable params: 0 _________________________________________________________________ [0.4274738217830658, 0.8494]
When we load the saved model, both the architecture and the trained weights (along with the compile settings) are restored, so we get the lower loss and the much better accuracy from training.
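Since load_model returns a fully compiled model, we can also use it for inference right away without calling compile again. As a quick sketch, here is how we might predict the classes of the first few test images (the exact class indices you get will depend on your trained weights):

predictions = model.predict(x_test[:5])            # softmax probabilities, shape (5, 10)
predicted_classes = np.argmax(predictions, axis=1)
true_classes = np.argmax(y_test[:5], axis=1)
print('Predicted:', predicted_classes)
print('Actual:   ', true_classes)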
Manually Saving Weights and Models
To save weights manually at any point, we call model.save_weights and pass the file path (prefix) where the weights should be stored.
model.save_weights('tmp/manually_saved')
print(os.listdir('tmp'))
Output:
['checkpoint', 'manually_saved.data-00000-of-00001', 'manually_saved.index']
These are the contents of the directory ‘tmp’.
Now, to save the full model (architecture, weights and training configuration) we use the save function with an '.h5' file name.
model.save('tmp/manually_saved_model.h5')
print(os.listdir('tmp'))
Output:
['checkpoint', 'manually_saved.data-00000-of-00001', 'manually_saved.index', 'manually_saved_model.h5']
These are the contents of the directory ‘tmp’ with ‘manually_saved_model.h5’ as the addition.
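To confirm that the manual saves work, we can restore them the same way as before: load_weights() with the checkpoint prefix, and load_model() with the '.h5' file. A minimal sketch:

# Restore the manually saved weights into a fresh model
restored = create_model()
restored.load_weights('tmp/manually_saved')
print(restored.evaluate(x_test, y_test, verbose=False))

# Restore the full model saved as an HDF5 file
restored_full = tf.keras.models.load_model('tmp/manually_saved_model.h5')
print(restored_full.evaluate(x_test, y_test, verbose=False))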
Exporting and Restoring SavedModel Format
Apart from saving and loading, we can also export and restore Keras models using the SavedModel format.
Here we again use the model.save() function, but pass a directory name instead of an '.h5' file. Keras recognises this and automatically exports the model in the SavedModel format.
model.save('model_name')
print(os.listdir('model_name'))
Output:
['assets', 'saved_model.pb', 'variables']
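If you are curious about what was exported, the SavedModel can also be inspected with TensorFlow's lower-level API: tf.saved_model.load() returns the raw object, whose serving signatures we can list. This is just an optional peek; the Keras workflow below is all we actually need.

raw = tf.saved_model.load('model_name')
print(list(raw.signatures.keys()))   # typically ['serving_default']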
Now we will see how to use the SavedModel as a Keras model. We use the load_model function from before, but this time we pass the directory path. Since the path is a directory, Keras assumes it is a SavedModel and loads it back as a Keras model. Finally, we evaluate the results.
model = tf.keras.models.load_model('model_name')
print(model.evaluate(x_test, y_test, verbose=False))
Output:
[0.42747382011413576, 0.8494]
Here the accuracy of the restored model is almost 85%, the same as before it was exported, just as we expected.
By following these steps, we have covered the various ways to save, load and export models in Keras.
Thank you. I hope you enjoyed reading the article and learned something new from it.