# STOCK PRICE PREDICTION WITH TENSORFLOW/KERAS

Hey everyone!

Today we will explore one of the trickiest prediction problems out there, the stock market, using the TensorFlow deep learning library for Python with the Keras API.

The stock market is one of the most volatile fields: many factors, both physical and psychological, go into what happens next. With so many variables in play, it becomes difficult to predict the future price of a particular stock. When deciding whether to invest in a stock, confidence usually comes from looking into the history of that stock and company. That history says a lot about its current prices and future possibilities.

Time plays the most crucial role in this prediction, so the problem falls under the time-series domain of machine learning.

We will use the previous data of a particular company to predict the future price of the stock.

Several machine learning algorithms can be used for this, such as linear regression, KNN (k-nearest neighbors), and LSTM. Here we will use an LSTM (Long Short-Term Memory) network.

## Long Short Term Memory

Recurrent neural networks (RNNs) are built for sequence prediction: they can predict the current price by taking the previous days' stock prices into account, learning the trend of that particular stock.

However, plain RNNs have a limitation: they can remember previous data only for a short span. As more layers are stacked or more data is fed in, the network tends to lose its ability to retain early information. LSTMs are an improvement over this.

LSTMs are extremely effective for sequence prediction problems. The reason they work so well is that they can store the past information that is significant and discard the information that is not useful for the prediction. Coming back to stocks, past values carry a lot of signal about the next prices. An LSTM cell in general consists of three gates:

- The input gate: acts as the passage for new information into the cell state.
- The forget gate: discards the information that is not important.
- The output gate: chooses the information to be shown as output and also decides the next hidden state.
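To make the three gates concrete, here is a minimal NumPy sketch of a single LSTM time step. This is an illustration only; Keras implements all of this internally, and the helper name and weight layout here are made up for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b stack the parameters of all four gates."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # pre-activations for i, f, o, g at once
    i = sigmoid(z[0:n])              # input gate: lets new information in
    f = sigmoid(z[n:2 * n])          # forget gate: drops unimportant state
    o = sigmoid(z[2 * n:3 * n])      # output gate: decides what to expose
    g = np.tanh(z[3 * n:4 * n])      # candidate values for the cell state
    c = f * c_prev + i * g           # updated cell state (the "memory")
    h = o * np.tanh(c)               # next hidden state / output
    return h, c

# run two steps over a toy one-feature sequence with random weights
rng = np.random.default_rng(0)
n_in, n_hid = 1, 4
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for price in [0.2, 0.5]:
    h, c = lstm_step(np.array([price]), h, c, W, U, b)
print(h.shape)  # (4,)
```

Note how the forget gate multiplies the previous cell state: values near 0 erase old memory, values near 1 keep it, which is exactly the "store what matters, drop the rest" behavior described above.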

Now, let us dive into the code to understand LSTM’s better!

### We are going to work on PVR shares for this particular project.

Let us import the necessary packages: Pandas, NumPy, and Matplotlib.

```python
# import all the packages that are needed
import pandas as pd
import numpy as np
# import matplotlib to plot within the notebook
import matplotlib.pyplot as plt
%matplotlib inline
```

Next, normalize the data. **Normalization** scales each input variable individually to the range 0-1, which puts all values on a comparable scale and generally helps gradient-based training converge.

```python
# normalizing the data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
```
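As a quick sanity check of what the scaler does, here is a toy example on three made-up prices (illustrative values, not from the real dataset):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# three toy prices: the minimum maps to 0, the maximum to 1
prices = np.array([[1850.0], [1875.0], [1900.0]])
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)
print(scaled.ravel())                       # [0.  0.5 1. ]
# inverse_transform recovers the original prices
restored = scaler.inverse_transform(scaled)
print(restored.ravel())                     # [1850. 1875. 1900.]
```

The same `inverse_transform` call is what we will use at the end to map the model's scaled predictions back to rupee prices.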

Read the CSV file and print its shape and first rows to understand what the data holds. The file used here is PVR.NS.csv; you can download this and various other datasets from Yahoo Finance.

```python
# reading the file
df = pd.read_csv('PVR.NS.csv')
# print the number of rows and columns, and the head
print('Number of rows and columns:', df.shape)
df.head()
```

| Date | Open | High | Low | Close | Adj Close | Volume |
|------------|------------|------------|------------|------------|------------|--------|
| 06-01-2020 | 1875 | 1887.44995 | 1850.09998 | 1857 | 1852.61157 | 213477 |
| 07-01-2020 | 1859 | 1888.94995 | 1855.69995 | 1867.15002 | 1862.73755 | 312508 |
| 08-01-2020 | 1860 | 1868.65002 | 1833.80005 | 1863.59998 | 1859.19592 | 255105 |
| 09-01-2020 | 1875 | 1910.05005 | 1871.25 | 1901.55005 | 1897.05627 | 409561 |
| 10-01-2020 | 1913.40002 | 1916.75 | 1894 | 1905.69995 | 1901.19641 | 329620 |

## Visualization of Data

We can observe seven variables in the dataset: Date, Open, High, Low, Close, Adjusted Close, and the total Volume of that stock traded on that particular day. Our dataset has a total of 250 rows.

- ‘Date’ is the trading day the row describes.
- ‘Open’ is the starting price of the stock (here PVR) and ‘Close’ is the last price when the market closes.
- ‘High’ and ‘Low’ are the maximum and minimum prices of the share over that trading day.
- ‘Volume’ is the total number of shares bought and sold on that particular day, and hence a measure of how active trading was.

Let us plot the opening price history over the year:

```python
# setting the index as the date
# (match the format string to how dates appear in your CSV)
df['Date'] = pd.to_datetime(df.Date, format='%d-%m-%Y')
df.index = df['Date']
# plot
plt.figure(figsize=(20, 10))
plt.plot(df['Open'], label='Open Price history')
plt.legend()
```

Now that we have visualized our data we will move on to the training part and prediction part of the project. Let us Import the required libraries.

```python
# importing required libraries
from sklearn.preprocessing import MinMaxScaler
from keras.layers import Dense, Dropout, LSTM
from keras.models import Sequential
```

What we are predicting is the ‘Open’ price of the share, so that is our target variable.

Now we create a new data frame consisting of the dates and this target variable, which is what the model will operate on:

| Date | Open |
|------------|--------|
| 06-01-2020 | 1875 |
| 07-01-2020 | 1859 |
| 08-01-2020 | 1860 |
| 09-01-2020 | 1875 |
| 10-01-2020 | 1913.4 |
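The snippet that builds this frame is not shown above; a minimal reconstruction, assuming the `df` loaded earlier and the name `pvr_data` that the later snippets expect, could be as simple as keeping the ‘Open’ column (the toy `df` below stands in for the real one):

```python
import pandas as pd

# stand-in for the df read earlier from PVR.NS.csv (first five rows)
df = pd.DataFrame(
    {'Open': [1875.0, 1859.0, 1860.0, 1875.0, 1913.4]},
    index=pd.to_datetime(['2020-01-06', '2020-01-07',
                          '2020-01-08', '2020-01-09', '2020-01-10']),
)

# keep only the target column; the Date stays as the index
pvr_data = df[['Open']].copy()
print(pvr_data.shape)  # (5, 1)
```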

## Preparing Training and Test Data

Have a look at the new data frame formed: it consists of the Date and the Opening price of PVR over the year.

Let us split the data frame into training and validation sets. Head back to the block where we read the data: its shape was (250, 7), meaning 250 rows in total. We extract the values from `pvr_data`, store them in the `dataset` variable, then take the first 150 rows as training data and the remaining 100 rows as validation data.

```python
# create train and test sets
dataset = pvr_data.values
train = dataset[0:150, :]
valid = dataset[150:, :]
```

Let us convert the dataset to x_train and y_train

```python
# converting the dataset into x_train and y_train
scaler = MinMaxScaler(feature_range=(0, 1))
data_scaled = scaler.fit_transform(dataset)

x_train, y_train = [], []
for i in range(1, len(train)):
    x_train.append(data_scaled[i-1:i, 0])
    y_train.append(data_scaled[i, 0])
x_train, y_train = np.array(x_train), np.array(y_train)
x_train = np.reshape(x_train, (x_train.shape[0], x_train.shape[1], 1))
```
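The loop above uses a look-back window of just one previous value. A generalized version with a configurable window, a hypothetical helper rather than part of the original code, looks like this:

```python
import numpy as np

def make_windows(series, look_back=1):
    """Slice a 1-D series into (samples, look_back, 1) inputs and next-value targets."""
    X, y = [], []
    for i in range(look_back, len(series)):
        X.append(series[i - look_back:i])   # the previous look_back values
        y.append(series[i])                 # the value to predict
    return np.array(X).reshape(-1, look_back, 1), np.array(y)

X, y = make_windows(np.arange(10, dtype=float), look_back=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```

Larger windows let the LSTM condition on a longer stretch of history at the cost of fewer training samples; it is one of the easiest hyperparameters to experiment with.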

## Creating the Model

Now, let us create the LSTM network and fit it.

A Sequential model is used here because we have just one input tensor and one output tensor; it lets us build the model layer by layer, adding layers as we go.

We then compile the model with mean squared error as the loss function, which measures how far the predictions are from the true values, and use SGD (stochastic gradient descent) as the optimizer, with the learning rate set to 0.01 and momentum to 0.9.

```python
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(Dropout(0.4))
model.add(LSTM(units=25, activation="relu", recurrent_activation="sigmoid"))
model.add(Dense(10))
model.add(Dense(1))

from keras.optimizers import SGD
opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss='mean_squared_error', optimizer=opt)
model.fit(x_train, y_train, epochs=50, batch_size=2, verbose=2)
```

After training for 50 epochs, the loss comes down to about 0.0059 (on the scaled data), which is a reasonably good fit:

Epoch 50/50 – 0s – loss: 0.0059

Now, let us build the inputs for the validation period. We take the tail of `pvr_data` (everything beyond the training portion, plus one extra value for the look-back window), reshape it into a column vector, and apply the same scaler transformation used during training.

Take note, if you missed it earlier: len(pvr_data) = 250 and len(valid) = 100.

```python
inputs = pvr_data[len(pvr_data) - len(valid) - 1:].values
inputs = inputs.reshape(-1, 1)
inputs = scaler.transform(inputs)
```

## Running inference

Now let us define our test data and run the model's prediction on it, then apply inverse_transform to the predicted opening_price to map it back to the original price scale:

```python
test_data = []
for i in range(1, inputs.shape[0]):
    test_data.append(inputs[i-1:i, 0])
test_data = np.array(test_data)
test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], 1))
opening_price = model.predict(test_data)
opening_price = scaler.inverse_transform(opening_price)
```
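Beyond eyeballing the plot below, it helps to quantify the error. RMSE (root mean squared error) is a common choice for regression; here is a small helper (not in the original post) applied to toy numbers:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean squared error between two equal-length sequences."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# toy check; on the real data you would call
# rmse(valid['Open'], opening_price.ravel())
print(rmse([100.0, 102.0, 104.0], [101.0, 101.0, 104.0]))  # ≈ 0.8165
```

Because RMSE is in the same units as the prices, it tells you directly how many rupees off the predictions are on average.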

Let us visualize how appropriate our predictions are.

```python
# for plotting
train = pvr_data[:150]
valid = pvr_data[150:].copy()
valid['Predictions'] = opening_price
plt.figure(figsize=(20, 10))
plt.plot(train['Open'])
plt.plot(valid[['Open', 'Predictions']])
```

It is impressive how closely the LSTM tracks the actual values. The flexibility of LSTMs comes from their tunable hyperparameters: you can change the number of layers, add new types of layers, add dropout, or increase the number of epochs.

Time series is a complex field of study for prediction, with many parameters to track. If this project intrigues you, do try to research more on how LSTMs work.

Thank you for reading!
