# Understanding Convolutional Neural Network (CNN) in TensorFlow

In this tutorial, let us find out what a Convolutional Neural Network (CNN) is and what it is used for, with a Python code example. We will be using TensorFlow, a popular deep learning framework.

## Convolutional Neural Network Introduction:

A Convolutional Neural Network is a specialized class of artificial neural networks for processing data with a grid-like input shape, such as an image or a 2D matrix. It is most commonly used to analyze visual imagery, for example for image detection and classification. A trained network can also be used to predict values for new inputs.
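To make the "convolution" part concrete, here is a minimal sketch in plain NumPy (not TensorFlow) of how a small filter slides over a 2D input; the 3x3 input and the 2x2 filter values are made up purely for illustration:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value is the sum of an element-wise product
            # between the kernel and the patch of the image under it.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
kernel = np.array([[1., 0.],
                   [0., -1.]])  # a toy difference ("edge") filter
print(conv2d_valid(image, kernel))
```

A convolutional layer in a CNN learns the kernel values during training instead of using fixed ones like this toy filter.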

## A simple example:

The classic use of a CNN is image classification, for example deciding whether an image shows a bird or a car. In the example below, however, we build a much simpler network that predicts a housing price from the number of bedrooms, using the popular deep learning library TensorFlow.

#### 1. Importing necessary libraries.

Here we are importing libraries required for this example.

```
import tensorflow as tf
import numpy as np
from tensorflow import keras
```

#### 2. Training the model.

Now we pass in some values to train the model. Here ‘x’ represents the number of bedrooms and ‘y’ represents the price of the corresponding house.
The model has only a single neuron. The loss function measures how good the current guess is and passes that information to the optimizer.
The optimizer makes sure that the next guess is better than the one before.
Here the loss is ‘mean squared error‘ and the optimizer is ‘stochastic gradient descent‘; the TensorFlow documentation can be checked for more details.
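The interplay of these two choices can be sketched without TensorFlow. The snippet below (plain NumPy, with illustrative values only) computes the mean squared error for a guess of the form price = w * bedrooms + b and takes one gradient-descent step to improve it:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([100.0, 150.0, 200.0, 250.0])  # true prices: 50 * x + 50

w, b = 0.0, 0.0   # initial guess
lr = 0.01         # learning rate for the gradient-descent step

pred = w * x + b
loss_before = np.mean((pred - y) ** 2)  # mean squared error

# Gradients of the MSE with respect to w and b.
grad_w = np.mean(2 * (pred - y) * x)
grad_b = np.mean(2 * (pred - y))

# One update: move the parameters against the gradient.
w -= lr * grad_w
b -= lr * grad_b

loss_after = np.mean((w * x + b - y) ** 2)
print(loss_before, loss_after)  # the loss goes down after the step
```

The optimizer in the TensorFlow example repeats exactly this kind of update once per batch, for every epoch.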

The training itself takes place in the fit command, which fits the model to the values of ‘y’ for the given ‘x’.

```
def house_model(y_new):
    # Number of bedrooms and the corresponding house prices.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0], dtype=float)
    y = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 450.0, 500.0, 550.0, 600.0, 650.0, 700.0], dtype=float)

    # A single dense neuron learns price = w * bedrooms + b.
    model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
    model.compile(optimizer='sgd', loss='mean_squared_error')
    model.fit(x, y, epochs=100)
    return model.predict(np.array(y_new, dtype=float))
```

#### 3. Predicting the value.

Here we are trying to predict the price of a 7-bedroom house. It is evident from the data that it should cost 400, so we expect the output to be close to 400.

```
prediction = house_model([7.0])
print(prediction)
```
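As a quick sanity check (plain Python, no model needed): every pair in the training data satisfies price = 50 * bedrooms + 50, which is the rule the single neuron should learn to approximate:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0]
y = [100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 450.0, 500.0, 550.0, 600.0, 650.0, 700.0]

# Every training pair satisfies price = 50 * bedrooms + 50.
assert all(50 * xi + 50 == yi for xi, yi in zip(x, y))

# So the exact answer for a 7-bedroom house is:
print(50 * 7 + 50)  # 400
```

The trained model only approximates this rule, which is why the prediction lands near 400 rather than exactly on it.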

#### Output:

Here we can see that the predicted result is close to 400. Training for more epochs generally brings the prediction even closer.

```12/12 [==============================] - 0s 11ms/sample - loss: 209838.7500
Epoch 2/100
12/12 [==============================] - 0s 117us/sample - loss: 19126.5352
Epoch 3/100
12/12 [==============================] - 0s 97us/sample - loss: 2111.4070
Epoch 4/100
12/12 [==============================] - 0s 94us/sample - loss: 589.9004
Epoch 5/100
12/12 [==============================] - 0s 96us/sample - loss: 450.4402
Epoch 6/100
12/12 [==============================] - 0s 95us/sample - loss: 434.2926
Epoch 7/100
12/12 [==============================] - 0s 86us/sample - loss: 429.1788
Epoch 8/100
12/12 [==============================] - 0s 87us/sample - loss: 425.0830
Epoch 9/100
12/12 [==============================] - 0s 89us/sample - loss: 421.1125
Epoch 10/100
12/12 [==============================] - 0s 108us/sample - loss: 417.1864
Epoch 11/100
12/12 [==============================] - 0s 118us/sample - loss: 413.2977
Epoch 12/100
12/12 [==============================] - 0s 96us/sample - loss: 409.4454
Epoch 13/100
12/12 [==============================] - 0s 95us/sample - loss: 405.6286
Epoch 14/100
12/12 [==============================] - 0s 95us/sample - loss: 401.8479
Epoch 15/100
12/12 [==============================] - 0s 96us/sample - loss: 398.1024
Epoch 16/100
12/12 [==============================] - 0s 96us/sample - loss: 394.3917
Epoch 17/100
12/12 [==============================] - 0s 95us/sample - loss: 390.7153
Epoch 18/100
12/12 [==============================] - 0s 97us/sample - loss: 387.0735
Epoch 19/100
12/12 [==============================] - 0s 123us/sample - loss: 383.4654
Epoch 20/100
12/12 [==============================] - 0s 103us/sample - loss: 379.8912
Epoch 21/100
12/12 [==============================] - 0s 96us/sample - loss: 376.3503
Epoch 22/100
12/12 [==============================] - 0s 96us/sample - loss: 372.8422
Epoch 23/100
12/12 [==============================] - 0s 83us/sample - loss: 369.3668
Epoch 24/100
12/12 [==============================] - 0s 90us/sample - loss: 365.9237
Epoch 25/100
12/12 [==============================] - 0s 86us/sample - loss: 362.5131
Epoch 26/100
12/12 [==============================] - 0s 83us/sample - loss: 359.1340
Epoch 27/100
12/12 [==============================] - 0s 78us/sample - loss: 355.7863
Epoch 28/100
12/12 [==============================] - 0s 91us/sample - loss: 352.4701
Epoch 29/100
12/12 [==============================] - 0s 83us/sample - loss: 349.1844
Epoch 30/100
12/12 [==============================] - 0s 93us/sample - loss: 345.9299
Epoch 31/100
12/12 [==============================] - 0s 94us/sample - loss: 342.7055
Epoch 32/100
12/12 [==============================] - 0s 94us/sample - loss: 339.5108
Epoch 33/100
12/12 [==============================] - 0s 86us/sample - loss: 336.3465
Epoch 34/100
12/12 [==============================] - 0s 91us/sample - loss: 333.2111
Epoch 35/100
12/12 [==============================] - 0s 84us/sample - loss: 330.1054
Epoch 36/100
12/12 [==============================] - 0s 90us/sample - loss: 327.0284
Epoch 37/100
12/12 [==============================] - 0s 87us/sample - loss: 323.9803
Epoch 38/100
12/12 [==============================] - 0s 82us/sample - loss: 320.9605
Epoch 39/100
12/12 [==============================] - 0s 92us/sample - loss: 317.9686
Epoch 40/100
12/12 [==============================] - 0s 91us/sample - loss: 315.0049
Epoch 41/100
12/12 [==============================] - 0s 96us/sample - loss: 312.0686
Epoch 42/100
12/12 [==============================] - 0s 115us/sample - loss: 309.1598
Epoch 43/100
12/12 [==============================] - 0s 91us/sample - loss: 306.2781
Epoch 44/100
12/12 [==============================] - 0s 94us/sample - loss: 303.4231
Epoch 45/100
12/12 [==============================] - 0s 83us/sample - loss: 300.5948
Epoch 46/100
12/12 [==============================] - 0s 83us/sample - loss: 297.7930
Epoch 47/100
12/12 [==============================] - 0s 80us/sample - loss: 295.0171
Epoch 48/100
12/12 [==============================] - 0s 78us/sample - loss: 292.2673
Epoch 49/100
12/12 [==============================] - 0s 83us/sample - loss: 289.5433
Epoch 50/100
12/12 [==============================] - 0s 81us/sample - loss: 286.8441
Epoch 51/100
12/12 [==============================] - 0s 111us/sample - loss: 284.1707
Epoch 52/100
12/12 [==============================] - 0s 91us/sample - loss: 281.5216
Epoch 53/100
12/12 [==============================] - 0s 91us/sample - loss: 278.8977
Epoch 54/100
12/12 [==============================] - 0s 94us/sample - loss: 276.2981
Epoch 55/100
12/12 [==============================] - 0s 99us/sample - loss: 273.7229
Epoch 56/100
12/12 [==============================] - 0s 78us/sample - loss: 271.1713
Epoch 57/100
12/12 [==============================] - 0s 84us/sample - loss: 268.6435
Epoch 58/100
12/12 [==============================] - 0s 82us/sample - loss: 266.1396
Epoch 59/100
12/12 [==============================] - 0s 80us/sample - loss: 263.6588
Epoch 60/100
12/12 [==============================] - 0s 81us/sample - loss: 261.2013
Epoch 61/100
12/12 [==============================] - 0s 77us/sample - loss: 258.7663
Epoch 62/100
12/12 [==============================] - 0s 84us/sample - loss: 256.3547
Epoch 63/100
12/12 [==============================] - 0s 83us/sample - loss: 253.9650
Epoch 64/100
12/12 [==============================] - 0s 82us/sample - loss: 251.5979
Epoch 65/100
12/12 [==============================] - 0s 84us/sample - loss: 249.2526
Epoch 66/100
12/12 [==============================] - 0s 81us/sample - loss: 246.9293
Epoch 67/100
12/12 [==============================] - 0s 106us/sample - loss: 244.6275
Epoch 68/100
12/12 [==============================] - 0s 79us/sample - loss: 242.3474
Epoch 69/100
12/12 [==============================] - 0s 91us/sample - loss: 240.0884
Epoch 70/100
12/12 [==============================] - 0s 91us/sample - loss: 237.8507
Epoch 71/100
12/12 [==============================] - 0s 80us/sample - loss: 235.6335
Epoch 72/100
12/12 [==============================] - 0s 81us/sample - loss: 233.4371
Epoch 73/100
12/12 [==============================] - 0s 84us/sample - loss: 231.2612
Epoch 74/100
12/12 [==============================] - 0s 84us/sample - loss: 229.1056
Epoch 75/100
12/12 [==============================] - 0s 80us/sample - loss: 226.9702
Epoch 76/100
12/12 [==============================] - 0s 82us/sample - loss: 224.8547
Epoch 77/100
12/12 [==============================] - 0s 83us/sample - loss: 222.7585
Epoch 78/100
12/12 [==============================] - 0s 84us/sample - loss: 220.6823
Epoch 79/100
12/12 [==============================] - 0s 85us/sample - loss: 218.6251
Epoch 80/100
12/12 [==============================] - 0s 78us/sample - loss: 216.5873
Epoch 81/100
12/12 [==============================] - 0s 86us/sample - loss: 214.5685
Epoch 82/100
12/12 [==============================] - 0s 87us/sample - loss: 212.5685
Epoch 83/100
12/12 [==============================] - 0s 90us/sample - loss: 210.5872
Epoch 84/100
12/12 [==============================] - 0s 88us/sample - loss: 208.6243
Epoch 85/100
12/12 [==============================] - 0s 90us/sample - loss: 206.6795
Epoch 86/100
12/12 [==============================] - 0s 95us/sample - loss: 204.7533
Epoch 87/100
12/12 [==============================] - 0s 94us/sample - loss: 202.8445
Epoch 88/100
12/12 [==============================] - 0s 89us/sample - loss: 200.9537
Epoch 89/100
12/12 [==============================] - 0s 89us/sample - loss: 199.0808
Epoch 90/100
12/12 [==============================] - 0s 109us/sample - loss: 197.2249
Epoch 91/100
12/12 [==============================] - 0s 134us/sample - loss: 195.3868
Epoch 92/100
12/12 [==============================] - 0s 134us/sample - loss: 193.5656
Epoch 93/100
12/12 [==============================] - 0s 95us/sample - loss: 191.7613
Epoch 94/100
12/12 [==============================] - 0s 88us/sample - loss: 189.9739
Epoch 95/100
12/12 [==============================] - 0s 86us/sample - loss: 188.2032
Epoch 96/100
12/12 [==============================] - 0s 91us/sample - loss: 186.4487
Epoch 97/100
12/12 [==============================] - 0s 85us/sample - loss: 184.7109
Epoch 98/100
12/12 [==============================] - 0s 83us/sample - loss: 182.9890
Epoch 99/100
12/12 [==============================] - 0s 88us/sample - loss: 181.2835
Epoch 100/100
12/12 [==============================] - 0s 88us/sample - loss: 179.5939
[[393.59177]]
```

In addition to the number of bedrooms, it is possible to add more features and use more neurons to predict the housing price.
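As a sketch of that idea, the model input simply gains one column per extra feature. The example below uses a plain NumPy least-squares fit rather than a trained network, and the second feature (floor area) and all its values are made up for illustration:

```python
import numpy as np

# Hypothetical training data: [bedrooms, floor area in 100 m^2] per house.
features = np.array([[1.0, 0.5],
                     [2.0, 0.8],
                     [3.0, 1.2],
                     [4.0, 1.5],
                     [5.0, 2.0]])
# Prices constructed as 50 * bedrooms + 100 * area + 10.
prices = np.array([110.0, 190.0, 280.0, 360.0, 460.0])

# Add a bias column and solve the least-squares fit directly.
X = np.hstack([features, np.ones((len(features), 1))])
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

# Predict the price of a 3-bedroom house with 1.0 (x100 m^2) floor area.
new_house = np.array([3.0, 1.0, 1.0])
print(new_house @ coef)  # close to 260
```

In Keras, the equivalent change would be setting `input_shape=[2]` on the Dense layer and passing two-column training data.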