Predicting how the stock market will perform is one of the most difficult tasks. There are many factors involved in the prediction: physical and psychological factors, rational and irrational behaviour, and so on. All these aspects combine to make share prices volatile and very difficult to predict with a high degree of accuracy.
Can we use machine learning as a game-changer in this domain? Using features like the latest announcements about an organization, its quarterly revenue results, etc., machine learning techniques can unearth patterns and insights we didn’t see before, and these can be used to make more informed predictions.
In this article, we will work with historical data about the stock prices of a publicly listed company. We will implement a mix of machine learning algorithms to predict the future stock price of this company, starting with simple techniques like averaging and linear regression, and then moving on to advanced techniques like Auto ARIMA and LSTM.
The core idea behind this article is to showcase how these algorithms are implemented. I will briefly describe the technique and provide relevant links to brush up on the concepts as and when necessary. In case you’re a newcomer to the world of time series, I suggest going through the following articles first:
We’ll dive into the implementation part of this article soon, but first, it’s important to establish what we aim to solve. Broadly, stock market analysis is divided into Fundamental Analysis and Technical Analysis.
As you might have guessed, our focus will be on the technical analysis part. We’ll be using a dataset from Quandl (you can find historical data for various stocks here) and for this particular project, I have used the data for ‘Tata Global Beverages’. Time to dive in!
Note: Here is the dataset I used for the code: Download
We will first load the dataset and define the target variable for the problem:
Python Code:
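A minimal loading sketch is shown below. The file name NSE-TATAGLOBAL.csv is an assumption, so point read_csv at wherever you saved the dataset:

#import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

#read the file (file name is an assumption; use your local copy of the downloaded dataset)
df = pd.read_csv('NSE-TATAGLOBAL.csv')

#look at the first few rows; the closing price will be our target variable
print(df.head())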
The dataset contains multiple variables: date, open, high, low, last, close, total_trade_quantity, and turnover.
Another important thing to note is that the market is closed on weekends and public holidays. Notice the above table again; some date values are missing—2/10/2018, 6/10/2018, and 7/10/2018. Of these dates, the 2nd is a national holiday, while the 6th and 7th fall on a weekend.
The profit or loss calculation is usually determined by the closing price of a stock for the day, hence we will consider the closing price as the target variable. Let’s plot the target variable to understand how it’s shaping up in our data:
#setting index as date
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
df.index = df['Date']

#plot
plt.figure(figsize=(16,8))
plt.plot(df['Close'], label='Close Price history')
In the upcoming sections, we will explore these variables and use different techniques to predict the stock’s daily closing price.
‘Average’ is easily one of the most common things we use in our daily lives. Calculating the average marks to determine overall performance or finding the average temperature of the past few days to get an idea about today’s temperature are all routine tasks we do on a regular basis. So, this is a good starting point to use on our dataset for making predictions.
The predicted closing price for each day will be the average of a set of previously observed values. Instead of using the simple average, we will use the moving average technique, which uses the latest set of values for each prediction. In other words, for each subsequent step, the predicted values are taken into consideration while removing the oldest observed value from the set. Here is a simple figure that will help you understand this more clearly.
We will implement this technique on our dataset. The first step is to create a dataframe that contains only the Date and Close price columns, then split it into train and validation sets to verify our predictions.
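Here is a minimal sketch of that step together with the moving-average predictions. The 987-row train/validation split and the 248-value window mirror the numbers used later in the article, but treat them as assumptions:

#a sketch of the moving-average baseline, assuming df holds the raw data loaded above
import numpy as np
import pandas as pd

data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0, len(df)), columns=['Date', 'Close'])
for i in range(0, len(data)):
    new_data['Date'][i] = data['Date'][i]
    new_data['Close'][i] = data['Close'][i]

#split into train and validation
train = new_data[:987]
valid = new_data[987:]

#predict each validation value as the average of the latest 248 values,
#feeding earlier predictions back in as the window rolls forward
preds = []
for i in range(0, valid.shape[0]):
    a = train['Close'][len(train) - 248 + i:].sum() + sum(preds)
    b = a / 248
    preds.append(b)

#checking the RMSE
rms = np.sqrt(np.mean(np.power((np.array(valid['Close']) - preds), 2)))
print(rms)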
Just checking the RMSE does not help us understand how the model performed. Let’s visualize this to get a more intuitive understanding. Here is a plot of the predicted values along with the actual values.
#plot
valid['Predictions'] = 0
valid['Predictions'] = preds
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
The RMSE value is close to 105, but the results are not very promising (as shown in the plot). The predicted values are of the same range as the observed values in the train set (initially, there is an increasing trend and then a slight decrease).
In the next section, we will examine two commonly used machine learning techniques, linear regression and kNN, and see how they perform on our stock market data.
The most basic machine learning algorithm that can be implemented on this data is linear regression. The linear regression model returns an equation determining the relationship between the independent and dependent variables.
The equation for linear regression can be written as:
Y = θ1x1 + θ2x2 + … + θnxn
Here, x1, x2, …, xn represent the independent variables, while the coefficients θ1, θ2, …, θn represent the weights. You can refer to the following article to study linear regression in more detail:
For our problem statement, we do not have a set of independent variables. Instead, we have only the dates. Let us use the date column to extract features like day, month, year, Monday/Friday, etc., and then fit a linear regression model.
We will first sort the dataset in ascending order and then create a separate dataset so that any new feature created does not affect the original data.
#setting index as date values
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
df.index = df['Date']

#sorting
data = df.sort_index(ascending=True, axis=0)

#creating a separate dataset
new_data = pd.DataFrame(index=range(0,len(df)), columns=['Date', 'Close'])
for i in range(0,len(data)):
    new_data['Date'][i] = data['Date'][i]
    new_data['Close'][i] = data['Close'][i]
#create features
from fastai.structured import add_datepart
add_datepart(new_data, 'Date')
new_data.drop('Elapsed', axis=1, inplace=True)  #elapsed will be the time stamp
This creates features such as:
‘Year’, ‘Month’, ‘Week’, ‘Day’, ‘Dayofweek’, ‘Dayofyear’, ‘Is_month_end’, ‘Is_month_start’, ‘Is_quarter_end’, ‘Is_quarter_start’, ‘Is_year_end’, and ‘Is_year_start’.
Note: I have used add_datepart from the fastai library. If you do not have it installed, you can simply use the command pip install fastai. Otherwise, you can create these features using simple for loops in Python. I have shown an example below.
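If fastai is unavailable, a rough equivalent with plain pandas might look like the sketch below (run it instead of add_datepart; add_datepart produces a richer set of columns, and the ‘Week’ feature is omitted here):

#a sketch of building similar date features without fastai
new_data['Date'] = pd.to_datetime(new_data['Date'])
new_data['Year'] = new_data['Date'].dt.year
new_data['Month'] = new_data['Date'].dt.month
new_data['Day'] = new_data['Date'].dt.day
new_data['Dayofweek'] = new_data['Date'].dt.dayofweek
new_data['Dayofyear'] = new_data['Date'].dt.dayofyear
new_data['Is_month_end'] = new_data['Date'].dt.is_month_end.astype(int)
new_data['Is_month_start'] = new_data['Date'].dt.is_month_start.astype(int)
new_data['Is_quarter_end'] = new_data['Date'].dt.is_quarter_end.astype(int)
new_data['Is_quarter_start'] = new_data['Date'].dt.is_quarter_start.astype(int)
new_data['Is_year_end'] = new_data['Date'].dt.is_year_end.astype(int)
new_data['Is_year_start'] = new_data['Date'].dt.is_year_start.astype(int)
new_data = new_data.drop('Date', axis=1)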
Apart from this, we can add features that we believe would be relevant to the predictions. For instance, I hypothesize that the first and last days of the week could affect the stock’s closing price far more than the other days. So, I have created a feature that identifies whether a given day is Monday/Friday or Tuesday/Wednesday/Thursday. This can be done using the following lines of code:
new_data['mon_fri'] = 0
for i in range(0,len(new_data)):
    if (new_data['Dayofweek'][i] == 0 or new_data['Dayofweek'][i] == 4):
        new_data['mon_fri'][i] = 1
    else:
        new_data['mon_fri'][i] = 0
If the day of the week is equal to 0 or 4, the column value will be 1; otherwise, it will be 0. Similarly, you can create multiple features. If you have ideas for features to help predict stock prices, please share them in the comment section.
We will now split the data into train and validation sets to check the model’s performance.
#split into train and validation
train = new_data[:987]
valid = new_data[987:]

x_train = train.drop('Close', axis=1)
y_train = train['Close']
x_valid = valid.drop('Close', axis=1)
y_valid = valid['Close']

#implement linear regression
from sklearn.linear_model import LinearRegression
model = LinearRegression()
model.fit(x_train, y_train)
#make predictions and find the rmse
preds = model.predict(x_valid)
rms = np.sqrt(np.mean(np.power((np.array(y_valid) - np.array(preds)), 2)))
rms
121.16291596523156
The RMSE value is higher than the previous technique, showing that linear regression has performed poorly. Let’s look at the plot and understand why linear regression has not done well:
#plot
valid['Predictions'] = 0
valid['Predictions'] = preds

valid.index = new_data[987:].index
train.index = new_data[:987].index

plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
Linear regression is a simple technique and quite easy to interpret, but there are a few obvious disadvantages. One problem with using regression algorithms here is that the model overfits to the date and month columns. Instead of taking into account the previous values from the point of prediction, the model considers the value from the same date a month ago or the same date/month a year ago.
As seen from the plot above, the stock price dropped in January 2016 and January 2017. The model predicted the same for January 2018. A linear regression technique can perform well for problems such as Big Mart sales, where the independent features are useful for determining the target value.
Another interesting ML algorithm that one can use here for stock market prediction is kNN (k-nearest neighbors). Based on the independent variables, kNN finds the similarity between new data points and old data points. Let me explain this with a simple example.
Consider the height and age of 11 people. Based on given features (‘Age’ and ‘Height’), the table can be represented in a graphical format as shown below:
To determine the weight of ID #11, kNN considers the weights of the nearest neighbors of this ID. The weight of ID #11 is predicted to be the average of its neighbors. If we consider three neighbors (k=3) for now, the weight for ID #11 would be (77 + 72 + 60)/3 = 69.66 kg.
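A toy version of this idea with scikit-learn is sketched below; the height/age/weight values are illustrative made-up numbers, not the full table from the figure:

#illustrative kNN regression on made-up height/age data
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[5.0, 45], [5.1, 26], [5.6, 30], [5.9, 34], [4.8, 40],
              [5.8, 36], [5.3, 19], [5.8, 28], [5.5, 23], [5.6, 32]])  # [height, age]
y = np.array([77, 47, 55, 59, 72, 60, 40, 60, 45, 58])                 # weight

knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(X, y)
print(knn.predict([[5.5, 38]]))  # predicted as the average of the 3 nearest neighbours' weights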
For a detailed understanding of kNN, you can refer to the following articles:
#importing libraries
from sklearn import neighbors
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
Using the same train and validation set from the last section:
#scaling data
x_train_scaled = scaler.fit_transform(x_train)
x_train = pd.DataFrame(x_train_scaled)
x_valid_scaled = scaler.fit_transform(x_valid)  #note: refitting on the validation set is a shortcut; ideally use scaler.transform here
x_valid = pd.DataFrame(x_valid_scaled)

#using gridsearch to find the best parameter
params = {'n_neighbors': [2,3,4,5,6,7,8,9]}
knn = neighbors.KNeighborsRegressor()
model = GridSearchCV(knn, params, cv=5)

#fit the model and make predictions
model.fit(x_train, y_train)
preds = model.predict(x_valid)
#rmse
rms = np.sqrt(np.mean(np.power((np.array(y_valid) - np.array(preds)), 2)))
rms
115.17086550026721
The RMSE value does not differ greatly, but a plot of the predicted and actual values should provide a clearer picture.
#plot
valid['Predictions'] = 0
valid['Predictions'] = preds

plt.plot(valid[['Close', 'Predictions']])
plt.plot(train['Close'])
The RMSE value is close to that of the linear regression model, and the plot shows the same pattern. Like linear regression, kNN also identified a drop in January 2018, since that has been the pattern in previous years. We can safely say that regression algorithms have not performed well on this dataset.
Let’s examine some time series forecasting techniques to see how they perform when faced with this stock price prediction challenge.
ARIMA is a very popular statistical method for time series forecasting. ARIMA models take past values into account to predict future values. There are three important parameters in ARIMA: p (the number of past values used for forecasting the next value), d (the order of differencing applied to make the series stationary), and q (the number of past forecast errors used).
Parameter tuning for ARIMA consumes a lot of time. So we will use auto ARIMA, which automatically selects the combination of (p, d, q) that gives the least error. To read more about how auto ARIMA works, refer to this article:
from pyramid.arima import auto_arima  #the pyramid package has since been renamed; on newer installs use: from pmdarima import auto_arima

data = df.sort_index(ascending=True, axis=0)
train = data[:987]
valid = data[987:]

training = train['Close']
validation = valid['Close']

model = auto_arima(training, start_p=1, start_q=1, max_p=3, max_q=3, m=12, start_P=0,
                   seasonal=True, d=1, D=1, trace=True, error_action='ignore', suppress_warnings=True)
model.fit(training)

forecast = model.predict(n_periods=248)
forecast = pd.DataFrame(forecast, index=valid.index, columns=['Prediction'])
rms = np.sqrt(np.mean(np.power((np.array(valid['Close']) - np.array(forecast['Prediction'])), 2)))
rms
44.954584993246954
#plot
plt.plot(train['Close'])
plt.plot(valid['Close'])
plt.plot(forecast['Prediction'])
As we saw earlier, an auto ARIMA model uses past data to understand the pattern in the time series. Using these values, the model captured an increasing trend in the series. Although these predictions are far better than those of the previously implemented machine learning models, they are still not close to the real values.
As the plot shows, the model has captured a trend in the series but does not focus on the seasonality. In the next section, we will implement a time series model that takes both trend and seasonality into account.
Several time series techniques can be implemented on this dataset, but most of them require extensive data preprocessing before fitting the model. Prophet, a time series forecasting library developed by Facebook, requires very little data preprocessing and is extremely simple to implement. The input for Prophet is a dataframe with two columns: date and target (ds and y).
Prophet tries to capture the seasonality in the past data and works well when the dataset is large. Here is an interesting article that explains Prophet simply and intuitively:
#importing prophet
from fbprophet import Prophet

#creating dataframe
new_data = pd.DataFrame(index=range(0,len(df)), columns=['Date', 'Close'])
for i in range(0,len(data)):
    new_data['Date'][i] = data['Date'][i]
    new_data['Close'][i] = data['Close'][i]

new_data['Date'] = pd.to_datetime(new_data.Date, format='%Y-%m-%d')
new_data.index = new_data['Date']

#preparing data
new_data.rename(columns={'Close': 'y', 'Date': 'ds'}, inplace=True)

#train and validation
train = new_data[:987]
valid = new_data[987:]

#fit the model
model = Prophet()
model.fit(train)

#predictions
close_prices = model.make_future_dataframe(periods=len(valid))
forecast = model.predict(close_prices)
#rmse
forecast_valid = forecast['yhat'][987:]
rms = np.sqrt(np.mean(np.power((np.array(valid['y']) - np.array(forecast_valid)), 2)))
rms
57.494461930575149
#plot
valid['Predictions'] = 0
valid['Predictions'] = forecast_valid.values

plt.plot(train['y'])
plt.plot(valid[['y', 'Predictions']])
Prophet (like most time series forecasting techniques) tries to capture the trend and seasonality from past data. This model usually performs well on time series datasets but fails to live up to its reputation in this case.
As it turns out, stock prices do not have a particular trend or seasonality. They depend highly on what is currently going on in the market, and thus, the prices rise and fall. Hence, forecasting techniques like ARIMA, SARIMA, and Prophet would not show good results for this particular problem.
Let us go ahead and try another advanced technique – Long Short Term Memory (LSTM).
LSTMs are widely used for sequence prediction problems and have proven extremely effective. They work so well because an LSTM can store the past information that is important and forget the information that is not. An LSTM has three gates: the input gate, the forget gate, and the output gate.
For a more detailed understanding of LSTM and its architecture, you can go through the below article:
For now, let us implement LSTM as a black box and check its performance on our particular data.
#importing required libraries
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM

#creating dataframe
data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0,len(df)), columns=['Date', 'Close'])
for i in range(0,len(data)):
    new_data['Date'][i] = data['Date'][i]
    new_data['Close'][i] = data['Close'][i]

#setting index
new_data.index = new_data.Date
new_data.drop('Date', axis=1, inplace=True)

#creating train and test sets
dataset = new_data.values

train = dataset[0:987,:]
valid = dataset[987:,:]

#converting dataset into x_train and y_train
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)

x_train, y_train = [], []
for i in range(60,len(train)):
    x_train.append(scaled_data[i-60:i,0])
    y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)

x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))

# create and fit the LSTM network
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1],1)))
model.add(LSTM(units=50))
model.add(Dense(1))

model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=1, batch_size=1, verbose=2)

#predicting 246 values, using past 60 from the train data
inputs = new_data[len(new_data) - len(valid) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = scaler.transform(inputs)

X_test = []
for i in range(60,inputs.shape[0]):
    X_test.append(inputs[i-60:i,0])
X_test = np.array(X_test)

X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
closing_price = model.predict(X_test)
closing_price = scaler.inverse_transform(closing_price)
rms = np.sqrt(np.mean(np.power((valid - closing_price), 2)))
rms
11.772259608962642
#for plotting
train = new_data[:987]
valid = new_data[987:]
valid['Predictions'] = closing_price

plt.plot(train['Close'])
plt.plot(valid[['Close','Predictions']])
Wow! The LSTM model can be tuned for various parameters, such as changing the number of LSTM layers, adding a dropout value, or increasing the number of epochs. But are the predictions from LSTM enough to identify whether the stock price will increase or decrease? Certainly not!
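For illustration, a slightly tuned variant of the network above might look like the sketch below. The added dropout layers, epoch count, and batch size here are arbitrary choices, not tuned values, and the code assumes the variables and imports from the LSTM section:

# sketch of a tuned LSTM variant: extra dropout and more epochs
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=2)

Even with such tuning, though, the point below still stands.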
As I mentioned at the start of the article, the stock price is affected by news about the company and other factors like demonetization or mergers/demergers. There are also certain intangible factors that are often impossible to predict beforehand.
Time series forecasting is a very intriguing field, as I have realized while writing these articles. The community perceives it as a complex field, and while there is a grain of truth in that, it’s not so difficult once you get the hang of the basic techniques.
A. Yes, it is possible to predict stock prices to some extent with machine learning and deep learning algorithms such as moving average, linear regression, Auto ARIMA, LSTM, and more.
A. Moving average, linear regression, kNN (k-nearest neighbors), Auto ARIMA, and LSTM (Long Short-Term Memory) are some of the most common algorithms used to predict stock prices.
A. Fundamental Analysis and Technical Analysis are the two ways of analyzing and predicting stock prices.
Isn't the LSTM model using your "validation" data as part of its modeling to generate its predictions since it only goes back 60 days. Your other techniques are only using the "training" data and don't have the benefit of looking back 60 days from the target prediction day. Is this a fair comparison?
Hi James, The idea isn't to compare the techniques but to see what works best for stock market predictions. Certainly, for this problem LSTM works well, while for other problems, other techniques might perform better. The fact that we can add a lookback component with LSTM is an added advantage.
Getting index error - --------------------------------------------------------------------------- IndexError Traceback (most recent call last) in 1 #Results ----> 2 rms=np.sqrt(np.mean(np.power((np.array(valid['Close']) - np.array(valid['Predictions'])),2))) 3 rms IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
Hi Jay, Please use the following command before calculating rmse
valid['Predictions'] = 0
valid['Predictions'] = closing_price
. I have updated the same in the article.
Thanks for sharing. I guess if you plot P(t) against P(t-1) you can pretty much get a chart similar to the LSTM results. If that is the case, then a simple benchmark for any of the models would be using yesterday's price as today's prediction. A model has to beat that, at least.
After running the following codes- train['Date'].min(), train['Date'].max(), valid['Date'].min(), valid['Date'].max() (Timestamp('2013-10-08 00:00:00'), Timestamp('2017-10-06 00:00:00'), Timestamp('2017-10-09 00:00:00'), Timestamp('2018-10-08 00:00:00')) I am getting the following error : name 'Timestamp' is not defined Please help.
Hi Pankaj, The command is only
train[‘Date’].min(), train[‘Date’].max(), valid[‘Date’].min(), valid[‘Date’].max()
, the Timestamp is the result I got by running the above command.
Pankaj, use the below code: from pandas import Timestamp
Hi Aishwarya, Just curious. LSTM works just TOO well !! Is splitting dataset to train & valid step carry out after the normalizing step ?! i.e. #converting dataset into x_train and y_train scaler = MinMaxScaler(feature_range=(0, 1)) scaled_data = scaler.fit_transform(dataset) then dataset = new_data train = dataset[:987] valid = dataset[987:] Then this x_train, y_train = [], [] for i in range(60,len(train )): # <- replace dataset with train ?! x_train.append(scaled_data[i-60:i,0]) y_train.append(scaled_data[i,0]) x_train, y_train = np.array(x_train), np.array(y_train) Guide me on this. Thanks
Hi Zarief, Yes, the train and test set are created after scaling the data using the for loop :
x_train, y_train = [], []
for i in range(60, len(train)):
    x_train.append(scaled_data[i-60:i,0])  #we have used the scaled data here
    y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)
Secondly, the command dataset = new_data.values will come before scaling the data, as shown in the article, since dataset is used for scaling and hence must be defined before that.
Hi, I'm getting the below error:
>>> #import packages
>>> import pandas as pd
Traceback (most recent call last):
File "", line 1, in
import pandas as pd
ImportError: No module named pandas
Hi rohit, Is the issue resolved? Have you worked with pandas previously?
Try running pip install pandas in a command line
Hi.. Thanks for nicely elaborating LSTM implementation in the article. However, in LSTM rms part if you can guide, as I am getting the following error : valid['Predictions'] = 0.0 valid['Predictions'] = closing_price rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-np.array(valid['Predictions'])),2))) rms ############################################################################## IndexError Traceback (most recent call last) in () ----> 1 valid['Predictions'] = 0.0 2 valid['Predictions'] = closing_price 3 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-np.array(valid['Predictions'])),2))) 4 rms IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
Still same error - --------------------------------------------------------------------------- IndexError Traceback (most recent call last) in ----> 1 valid['Predictions'] = closing_price IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices And also why are we doing valid['Predictions'] = 0 and valid['Predictions'] = closing_price instead of valid['Predictions'] = closing_price
Yes, you can skip the line, but it will still show an error because the index hasn't been defined. Have you followed the code from the start? Please add the following code lines and check if it works:
new_data.index = data['Date']  #considering the date column has been set to datetime format
valid.index = new_data[987:].index
train.index = new_data[:987].index
Let me know if this works. Otherwise, share the notebook you are working on and I will look into it.
Hi, Nice article. I have installed fastai but I am getting the following error: ModuleNotFoundError: No module named 'fastai.structured' Any idea?
Hi Roberto, Directly clone it from here : https://github.com/fastai/fastai . Let me know if you still face an issue.
Hello AISHWARYA, I dont know what's your motivation to spend such a long time to write this blog But thank you soooooo much!!! Appreciate your time for both the words and codes (easy to follow) !!! Such a great work!!!
Really glad you liked the article. Thank you!
new_data.index=data['Date'] valid.index=new_data[987:].index train.index=new_data[:987].index gives AttributeError Traceback (most recent call last) in () 1 new_data.index=data['Date'] ----> 2 valid.index=new_data[987:].index 3 train.index=new_data[:987].index AttributeError: 'numpy.ndarray' object has no attribute 'index' valid['Predictions'] = 0 valid['Predictions'] = closing_price gives IndexError Traceback (most recent call last) in () ----> 1 valid['Predictions'] = 0 2 valid['Predictions'] = closing_price IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices V good article. I am getting above errors. Kindly help solving it
Hi, Please print the validation head and see if the index values are actually the dates or just numbers. If they are numbers, change the index to dates. Please share the screenshot here or via mail (dropped a mail)
Hi, Thanks for putting the efforts in writing the article. If you believe LSTM model works this well, try buying few shares of Tata Global beverages and let us know the returns on the same. I guess, you would understand the concept of over-fit. Thanks, Ravi
Hi Ravi, I actually did finally train my model on the complete data and predicted for next 10 days (and checked against the results for the week). The first 2 predictions weren't exactly good but next 3 were (didn't check the remaining). Secondly, I agree that machine learning models aren't the only thing one can trust, years of experience & awareness about what's happening in the market can beat any ml/dl model when it comes to stock predictions. I wanted to explore this domain and I have learnt more while working on this dataset than I did while writing my previous articles.
Its nice tutorial, thanks. I need to know how can i predict just tomorrow’s price?
Hi, To make predictions only for the next day, the validation set should have only 1 row (with past 60 values).
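A minimal sketch of that idea, assuming the trained model, scaler, and new_data from the LSTM section of the article are still in scope:

# sketch: predict only the next day's close from the last 60 observed values
last_60 = new_data[-60:].values
last_60 = scaler.transform(last_60.reshape(-1, 1))
X_next = np.reshape(last_60, (1, 60, 1))
next_close = scaler.inverse_transform(model.predict(X_next))
print(next_close)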
What is the difference between last and closing price?
The difference is not significant. I plotted the two variables and they overlapped each other.
Hi, thanks for the article. I got this error: rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) ValueError: operands could not be broadcast together with shapes (1076,) (248,) Can you help me on this?
Hi, Looks like the validation set and predictions have different lengths. Can you please share your notebook with me? I have sent you a mail on the email ID you provided.
I ran into the same error as ValueError: operands could not be broadcast together with shapes (1088,) (248,). Guidance towards resolution would be appreciated.
Thank you so much for the code, you inspire me a lot. I applied your algorithm for my example and it it works fine. Now i have to make it predict the price for the next 5 years, do you know how to achieve that? I should use all the data without splitting into test and train for training and somehow generate new dates in that array, then predict the value for them. I tried to modify your code but i couldn't figure it out. I'm new to ML and it's really hard to understand those functions and classes. Thank you very much in advance
Hi Alex, Currently the model is trained to look at the recent past data and make predictions for the next day. To make predictions for 5 years into the future, we'll first have to change the way we train the model here, and we'll need a much bigger dataset as well.
Hi, I am getting this error x_train_scaled = scaler.fit_transform(x_train) Traceback (most recent call last): File "", line 1, in x_train_scaled = scaler.fit_transform(x_train) File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\base.py", line 517, in fit_transform return self.fit(X, **fit_params).transform(X) File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 308, in fit return self.partial_fit(X, y) File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 334, in partial_fit estimator=self, dtype=FLOAT_DTYPES) File "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 433, in check_array array = np.array(array, dtype=dtype, order=order, copy=copy) TypeError: float() argument must be a string or a number, not 'Timestamp'
Hi Shabbir, The data you are trying to scale has the "Date" column as well. Please follow the code from the start. If you have extracted features from the date column, you can drop this column and then go ahead with the implementation.
Hi AISHWARYA, It's a great initiative. Can you please explain the process of getting the future price of a stock ? You told that validation set should have only one row. Can you please explain this ?
Hello AISHWARYA , It's a great initiative . Can you please explain how to approach if I want to predict the next day price ? You explained it in the previous comments but I didn't get the full meaning "the validation set should have only 1 row (with past 60 values)" . It will be helpful if you give an example. Thanks in advance .
Hello Aishwarya Great article. Thank you so much. I read the above comments but it's still unclear for me what or where I have to change to predict only tomorrow's price for example. Your help is much appreciated. Thanks and regards, Sascha #predicting 246 values, using past 60 from the train data inputs = new_data[len(new_data) - len(valid) - 60:].values inputs = inputs.reshape(-1,1) inputs = scaler.transform(inputs) X_test = [] for i in range(60,inputs.shape[0]): X_test.append(inputs[i-60:i,0]) X_test = np.array(X_test) X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1)) closing_price = model.predict(X_test) closing_price = scaler.inverse_transform(closing_price)
Hi Aishwarya, Can you please eloborate on where to change the code for 1 day or 3 days predictions? Thanks, Sumanth
Hello AISHWARYA SINGH, Nice article...I had been working FOREX data to use seasonality to predict the next days direction for many weeks and your code under the FastAi part gave me an idea on how to go about it. Thanks for a great article. Have you tried predicting the stock data based bulls and bears only, using classification? I used np.log(df['close']/df.['close'].shift(1)) to find the returns and if its negative I use "np.where" to assign -1 to it and if its positive I assign 1 to it. All I want to predict is if tomorrow would be 1 or -1. I have not been able to get a 54% accuracy from that model. a 60% would be very profitable when automated. would you want to see my attempts?
Hi Ricky, That's a very interesting idea. Which algorithm did you use ?
Aishwarya: I am neither a quant nor an expert programmer. But I see a problem in how you constructed your models. Instead of LSTM if you were to simply predict tomorrow's price as today's closing price you will end up with a nice graph and what appears to be highly accurate prediction. What is more appropriate is to try to predict whether next day's return is positive or negative and check the correct answer against total predictions. In general, the data needs to be "stationary" and you need to take either returns per day or price differences to come up with the correct model. Let me know if you agree with me.
Thanks for sharing valuable information. a worth reading blog. I try this algorithm for my example and it works excellent.
Really glad you liked it. Thanks Shubham!
Hi Asihwarya, As far as i understand, the model takes in 60 days of real data to predict the next day's value in LSTM. I wonder how the results will be like if you take a predicted value to predict the next value instead. This will allow us to predict say 2 years of data for long term trading.
If you do have the real time data, it'd be preferable to use that instead since you'll get more accurate results. Otherwise definitely using the predicted values would be beneficial.
Dear Aishwarya, Thank you for this informative article. I am a trader, but I am looking for ways to automate my decisions, not necessarily to the point of the machine placing the order, but as a guide to reduce the burden of decision-making and pattern recognition. I am experimenting with dynamic time warping to recognise some of the price action behaviour which I have learned after thousands of hours of observation. I am using kmeans and some combination of euclidean distance to achieve that. What I am hoping is that, one day, you produce an article showing how the dtw algorithm could be used to identify identical behaviour in stock price action. You know how dtw works in speech recognition; it could be used to identify one behaviour that could happen in hundreds of ways. There is a lot of academic research on the topic but few practical examples. I hope, as a data science professional, you could write something on this. However, if you know any article that Analytics Vidhya has published about dtw or speech recognition, please do point it out to me, as there are many articles to go through.
Hi Aishwarya, I am not able to download the dataset getting empty CSV file with header. Could you please help me.
Hi, I sent you the dataset via mail. Please check.
I am also not able to download the test data (NSE-TATAGLOBAL(1).csv), could you send me? thanks!
Could you share your email id please?
Could you send me the full working code, i cant seem to get it to work
I have sent you a mail.
Even I am not getting Could u please send me full working code
Hi Aishwarya, Thanks for sharing this great Information. I am new to LSTM so can you help me 1- I confused of how I predict next day price while I do not have (open-close-high - low) for the day I want to predict the price in it. 2- if you can give me some tutorials or books to learn in details how can I use lstm to predict stock price in short term and long term. Thanks in advance
could you kindly share me the full code, I have strong interest in time series analysis
Hi AISHWARYA SINGH, thanks a lot for your great article .
Glad you liked it Doaa
please how can I predict sock price for next day or for 200 days ahead while I do not have test data (high - low - open - volume ) for days I want to predict the stock price of It. sorry because I am new in DL. If you can tell me about tutorials explain how can I use LSTM in details for stock price prediction Thanks in advance
could you share the full code, I have strong interest in time series analysis
Hi chao, The code is shared within the article itself.
Great article Aishwarya. Kudos to you for taking out to the time to explain it in detail. Can you send me a full working code.
Hi Aishwarya, when I downloaded the .csv file, the dates are shown as a bunch of '######'. Could you please send me the code and the dataset through email?
Good article to understand LSTM. Thankyou for this tutorial, really appreciable. As I understood from this, if I want to predict the stock price for tomorrow I can use len(valid) = 1 before the following line: inputs = new_data[len(new_data) - len(valid) - 60:].values If I use len(valid) = 2, will it give me price for tomorrow and day after or tomorrow and today ? I am little bit confuse. Also, when I run this model for Reliance Industries stock, it gives me mse ~41. I am using 4000 as training set and 1184 as valid set. Does it make any difference ?
Hi ! Thank you for the tutorial ! I wonder is it possible to get Confidence Percentage for each predictions ?
Hi, Thanks for the very interesting article. I have a comment with the way you scale the dataset in this line of code: scaled_data = scaler.fit_transform(dataset) Here you are taking the max and min values of the *entire* dataset to do scaling. Later you use the same max and min values to scale the test set in this line of code: inputs = scaler.transform(inputs) However in practice we will not have any prior information about the test set and so we will not know their max and min values. I think if you do fit_transform on the train set is more correct ie. scaled_data = scaler.fit_transform(train).
Hi Aishwarya, The link provided in this article seems to be a lot smaller (min date is 3-2-2017) where in your article is spoken about year 2013... Could you please help me providing te Original CSV file? Thanx!
Hi Robert, Shared the dataset via mail.
Could you send me the full working code? Have a nice day!
Hi ! Nice tutorial ! I wonder if you know how to get probability for each prediction ? Thank you
Hello....very good article I am new to this field please guide on 1. how to save the trained model 2. how to predict next day value using this model. what input will be required and how to input same thank ou very much in advance
Hi, was wondering with your lstm code how you would do a prediction of a price or prices in the future. I have tried modifying the dataset with future dates and setting the values to 0 to see it predict it however seems like this reduces the accuracy a lot. Also was wondering with this line ***scaled_data = scaler.fit_transform(dataset)*** does that mean we are training our model on the whole dataset because i don't see the variable ***train = dataset[0:987,:]*** be really ever used.
Hi...very nice tutorial. I am new to data science, though had some idea on Statistics but pretty new to LSTM. Could you please guide me in how can I predict future values? 1. The data base I have downloaded had 410 rows, so I have changed the training data set to 307 rows (75%). The last value is for date 28/09/18 2. I have followed all your codes and LSTM seems pretty good fit. 3. Getting error while calculating rms (rms=np.sqrt(np.mean(np.power((valid-closing_price),2))) 4. Error received: ValueError: Unable to coerce to DataFrame, shape must be (103, 2): given (103, 1) 5. The code required to predict future values. Say first I want to predict for 29/09/18. After that, If I want to use it for predicting value for 30/09/18 and so on. Thank you in advance. Cheers!
As a training material this article is very good. Thank you Aishwarya! As a practical tool this method is useless. It will not even predict where the price will close tomorrow - above or below of today's market close.
Hi AISHWARYA SINGH, Your article is very interesting, specialy the LSTM section. I'am trying to run your code, but I have problems with the libraries in Python 2.7 and in Python 3.6. Could you, please, specify the Python version and the libraries needed with their dependancies and their version number ? Thank you very much in advance.
Why 987 in: train = new_data[:987] valid = new_data[987:]
Hi Emerson, I have used 4 years data for training and 1 year for testing. Splitting at 987 distributes the data in required format.
How can I solve this problem: ModuleNotFoundError: No module named 'fastai.structured' Thanks
Hey, Go to this link: https://github.com/fastai/fastai and clone/download. The error should be resolved.
Amazing Article. Very helpful . Thanks.
for predicting values, why are you taking values from validation set at each iteration. it should be like this, 987 + 60 --> predict -->987+60+1 th value take 988+59+(1(predicted)) --> predict 988+60+1 th value let me know if i am wrong. if you are taking values from validation set everytime, your RMS error going to be less as expected.so we need to take first 60 values from validation set and predict remaining validation set points.
Hello Aishwarya By any chance you changed the length of the dataset?? When I download, it gives me the dataset name as 'NSE-BSE' and it contains 411 rows. Splitting at 987 won't leave anything for validation. OR am I not downloading the right dataset ??
Hello, I have questions about your model LSTM. I don't understand the building of "X_test". I have understood the aim and the bulding of your variable "inputs", but I believe that your variable "X_test" use the observed values. Imagine that the dataset has 500 values and we use the first 400 for the train and you want to predict the others. For the first day, you will use the values 340 until 400 and you can predict the value 401. Ok no problem ! For the second day, you will use the values 341 until 401. If you want to predict on several months, you have to use the predicted value 401. But your value 401 is the value contained in the list "inputs". So you don't use the predicted variable but you use the observed value. Actually if your model can predict on 2 months, I can replace values in valid and the curve of predictions can't change (because I don't modify the train). I tried to replace all the observed values between 401 and 500 by 0... The predicted values change completely ! I feel that you do predictions using the real value of the day before to compute the actual day but you don't predict the stock market on several months. My question is : what part didn't I understand ? I thank you and have a good day :)
Hey Aishwarya, your article is super helpful. Thanks a ton. Keep going!!!
Thanks Arun!
Thanks a lot for sharing this. I keep wondering if we just use the previous close as a prediction for next day, would the chart essentially the same as the LTMS? Can LTMS beat that simple prediction?
Hey Aishwarya, a related question. So say if you need to build a system based on LSTM to predict 50 stocks on any day. Would you create 50 LSTM models and select the right stock model at runtime OR train one LSTM model only for all 50 stocks ?? What would be your approach ?
Hello, I do not understand the construction of forecasts in the LSTM method. I note N the first value to predict. The objective is to compare the predicted values N to N + 10 ​​with the observed values. To predict the value N, you take the 60 previous observed values ​​ok. To predict the value N + 1, what values ​​do you take? I don't understand if you use : - 59 previous observed values ​​without taking the value in N - 59 previous values ​​+ the predictive value in N - 59 previous values ​​+ the observed value in N I try to understand the methodology but this little point is blocking me. All the rest of the article is very clear. I thank you :)
Hi Aishwarya, please can you tell me about books explain how to use LSTM in stock market in details thanks in advance
Aishwaryai, I found your article very interesting. I'm not familiar with programing but I was able to follow the reason for each step in the process. I do have a question. You noted that there are many other factors that will ultimately affect the market. As someone who works in marketing and has to be plugged into social media I use many tools that tell me how a company is being perceived. Twitter and Facebook both offer lot of opportunity for social listening and the tools we use in advertising/marketing to measure take the temperature of the market are quite powerful. Would it be possible to incorporate some machine learning to find patterns in the positive/negative sentiment measurements that are constantly active to help predict when stock values are going to change and in which direction? I feel like your article is using current and past market data but doesn't incorporate social perception. http://www.aclweb.org/anthology/W18-3102 The study linked above only focuses on stock related posts made from verified accounts but didn't take public perception measurements into account in their data. With how connected to social media the world has become I wonder if combining the data you used in your article with data from social listening tools used by marketers could create predictive tools that would essentially game the system.
u can use sentimental analysis.....rest u can search yourself... ....u can watch sirajvideo....for more info
Hi AISHWARYA SINGH, You blog is very impressive. Could you please help me for building a system with following requirements: The system will do trade training using huge data (historical/past trades and stocks prices) and incremental data from new daily trades and prices. The system will include a user-friendly interface which means that everyone even people with low knowledge with computers will be able to use this system on any computer/laptop. The user will have the option to run a prediction any time he wants, the user will be able to make the system to do trade training any time he wants. As mentioned above the system will be able to do training based on historical/past data and incremental new daily data, which means data will be updated consistently so the system will be always updated. Thanks in advance
# Complete code with yahoo data reader
# Calculated dataset length, training set length and prediction length (plen)
# coding: utf-8

# In[1]:
#import packages
from pandas_datareader import data
import fix_yahoo_finance as yf
yf.pdr_override() # <== that's all it takes :-)
import pandas as pd
import numpy as np

#to plot within notebook
import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')

#setting figure size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20,10

#for normalizing data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))

#read the file
#df = pd.read_csv('NSE-BSE.csv')
#df['Date'] = df.Date.apply(
#    lambda x: pd.to_datetime(x).strftime('%Y-%m-%d'))

# Define the instruments to download. We would like to see Apple, Microsoft and the S&P500 index.
#tickers = 'PYPL'
tickers = 'AMD'

# We would like all available data from 01/01/2000 until 12/31/2016.
start_date = '2015-01-01'
end_date = '2019-02-01'

# download dataframe
df = data.get_data_yahoo(tickers, start=start_date, end=end_date)
df.columns = [c.replace(' ', '_') for c in df.columns]

#print the tail
df.tail()
#df.dtypes

# In[2]:
#plot
plt.figure(figsize=(16,8))
plt.plot(df['Close'], label='Close Price history')

# In[3]:
#importing required libraries
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM

#creating dataframe
data = df.sort_index(ascending=True, axis=0)
new_data = pd.DataFrame(index=range(0,len(df)),columns=['Date', 'Close'])
for i in range(0,len(data)):
    new_data['Date'][i] = pd.to_datetime(df.index[i], unit='D').date()
    new_data['Close'][i] = data['Close'][i]

#setting index
new_data.index = new_data.Date
new_data.drop('Date', axis=1, inplace=True)

#Predict number of days
plen = 80

#creating train and test sets
dataset = new_data.values
train = dataset[0:len(data)-plen,:]
valid = dataset[len(data)-plen:,:]

# Display data set len
new_data.shape, train.shape, valid.shape

# In[4]:
#converting dataset into x_train and y_train
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(dataset)

x_train, y_train = [], []
for i in range(plen,len(train)):
    x_train.append(scaled_data[i-plen:i,0])
    y_train.append(scaled_data[i,0])
x_train, y_train = np.array(x_train), np.array(y_train)

x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1))

# In[5]:
# create and fit the LSTM network
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1],1)))
model.add(LSTM(units=50))
model.add(Dense(1))

model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=1, batch_size=1, verbose=2)

# In[6]:
#predicting values, using past plen from the train data
inputs = new_data[len(new_data) - len(valid) - plen:].values
inputs = inputs.reshape(-1,1)
inputs = scaler.transform(inputs)

X_test = []
for i in range(plen,inputs.shape[0]):
    X_test.append(inputs[i-plen:i,0])
X_test = np.array(X_test)

X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))
closing_price = model.predict(X_test)
closing_price = scaler.inverse_transform(closing_price)

# In[7]:
rms=np.sqrt(np.mean(np.power((valid-closing_price),2)))
rms

# In[9]:
#for plotting
train = new_data[:len(data)-plen]
valid = new_data[len(data)-plen:]
valid['Predictions'] = 0
valid['Predictions'] = closing_price

dts=train['Close']
dtv=valid[['Close','Predictions']]

plt.figure(figsize=(16,8))
plt.grid()
plt.plot(dts[plen:])
plt.plot(dtv[:plen])
dtv.tail()
Hi there, thanks for the detailed explanation. Just a simple question.. The data set had many features but we chose 'Date' feature to work with. Why didn't we consider other features? What is the approach that we take while choosing the feature set? And, what the need to separate Date feature to do linear regression? Thanks
Hi Aishwarya, my name is Pratik and I am a research engineer. I have done similar work, not only with stock data but also designing a sentiment analyzer to get the sentiment of news as well as Twitter data, producing a cumulative score for the final prediction. I have used CNN and random forest. I think your normalization needs to be modified; use the pandas EWMA technique there to smooth the results and it will enhance your prediction. I also think we have to use a reinforcement technique, with a reward for dips and highs in price due to news or other factors; this should improve the results. I haven't tried it yet, but I feel reinforcement learning is the solution for the stock price problem.
Hi, Aishwarya I am working on research on the same topic and I have a strong interest in this. SO, could you please send me the full working of this article along with the code. My email id is [email protected]
help me to solve this error rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) ValueError: operands could not be broadcast together with shapes (0,) (248,)
Hi Aishwarya, could you share the full code please. I am getting the below error: ValueError Traceback (most recent call last) in () 1 #calculate rmse ----> 2 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) ValueError: operands could not be broadcast together with shapes (0,) (248,) 1 ​
Hi How I can solve this error? AttributeError: 'DataFrame' object has no attribute 'Date' Thanks
Hi After dropping the date column(Linear Regression method), I am getting an error like this. AttributeError: 'DataFrame' object has no attribute 'Date' How do I resolve this error?
Hi Could please share the entire working code. Thanks
Thank you for this great blog! I have a question of how to code the following issues in the given dataset of data_stock.csv 1. Which stocks are apparently similar in performance 2. Identify which all stocks are moving together and which all stocks are diff from each other? 3. How many unique patterns that exist in the historical stock data set, based onn fluctuations in price.
Make no mistake, at the expense of LSTM. The model simply takes the data from the previous days and follows the trend. This model can't even predict whether the stock will go up or down. It's easy to check with the following code; the prediction accuracy is only 47%.
valid['R_Close'] = valid['Close'].diff()/valid['Close'].shift(1)
valid['R_Pred'] = valid['Predictions'].diff()/valid['Predictions'].shift(1)
valid['event'] = valid['R_Close']/valid['R_Pred']
print(len(valid[valid['event'] > 0])/len(valid))
plt.plot(valid[['R_Close','R_Pred']])
--------------------------------------------------------------------------- i am getting this error please help me to debug the error. ValueError Traceback (most recent call last) in () 3 4 #calculate rmse ----> 5 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) 6 rms ValueError: operands could not be broadcast together with shapes (0,) (248,)
Hi, thanks for the article. I got this error: rms=np.sqrt(np.mean(np.power((np.array(valid[‘Close’])-preds),2))) ValueError: operands could not be broadcast together with shapes (1076,) (248,) Can you help me on this?
Hi, i got this error can you please help me through this #calculate rmse rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) print (rms) ValueError Traceback (most recent call last) in () 1 #calculate rmse ----> 2 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) 3 print (rms) 4 ValueError: operands could not be broadcast together with shapes (410,) (248,)
How can I predict the future 30 values? thank you so much
You should consider implementing the simplest model you can think of. Since your LSTM uses all data up to a given date to predict just the next date, a good benchmark would be a model that simply repeats the value from the last day to predict the value for the following day. In pandas this is simply implemented as:
prediction = new_data.shift(1)
To get a plot of that model:
new_data.join(prediction, lsuffix='_train', rsuffix='_predict').plot()
With that simple model I get an rms of 11.85 (not much worse compared to the 11.77 from LSTM).
Hi, I think you should only use train data to fit MinMaxScaler and then transform train and valid data separately. Thanks
Hey I tried working on this code and i am getting an error, "float() argument must be a string or a number, not 'Timestamp' " Can you please help me solve it
Hi Aishwarya, Thanks very much for taking the time to post this article! I have been looking for an intro into time series analysis of stock prices. Your information was a great start and I was able to duplicate your LSTM code output. I apologize for the naive question on Python coding but, regarding your LSTM method to predict the next day closing price based on prior 60 day close prices, how would you output the next day close price prediction? For example, if I print the last day LSTM results from your dataset, I see: Date Actual Close Predicted Close 2019-01-04 213.8 228.571320 If it were real time (I mean if the current day was actually Jan 4th), how would you output the predicted next day close for Jan 5th? Thanks again...nice article!
Hi, Aishwarya could you please tell me that for measurement of the performance of the algorithms what metric are u taking?
The error shows up while calculating rms #calculate rmse rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) rms --------------------------------------------------------------------------- ValueError Traceback (most recent call last) in 1 #calculate rmse ----> 2 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) 3 rms ValueError: operands could not be broadcast together with shapes (0,) (248,) When edited according to AISHWARYA SINGH who says, Please use the following command before calculating rmse: valid['Predictions'] = 0 valid['Predictions'] = closing_price After editing the code like this, #calculate rmse valid['Predictions'] = 0 valid['Predictions'] = closing_price rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) rms --------------------------------------------------------------------------- NameError Traceback (most recent call last) in 1 #calculate rmse 2 valid['Predictions'] = 0 ----> 3 valid['Predictions'] = closing_price 4 rms=np.sqrt(np.mean(np.power((np.array(valid['Close'])-preds),2))) 5 rms NameError: name 'closing_price' is not defined Basically, there's error in both ways. Is it because of that I'm using Python 3? Help would be appreciated!
Hi Asihwarya, Thanks for sharing this very interesting tutorial. Can you please share the full working code? Regards, Marcelo
mam can you help me in how to find the accuracy of the model
Hi Aishwarya, could you please share the data with me, Thank you.
lets assume we have 1000 instances of data separated into training and validation. Can we predict and get the value of the 1005th instance that is not present in the data?
Is there any way to do this with tensor-flow or anything other than keras? If so, can you point me in the right direction? I'm running into problems downloading keras and I haven't found a solution yet. Thanks Aishwarya!
In section LSTM I get this error, could u please tell me why and how to fix this when you are free? thank u very much. TypeError Traceback (most recent call last) in () 55 X_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1)) 56 print(X_test) ---> 57 closing_price = model.predict(X_test) 58 closing_price = scaler.inverse_transform(closing_price) 59 TypeError: data type not understood
Hi, Thanks for the article. Regarding the following line: - #predicting 246 values, using past 60 from the train data inputs = new_data[len(new_data) - len(valid) - 60:].values May I know what value obtained for : - len(new_data) ? len(valid) ? inputs ?
Hi Justin, valid and closing_price have length 248, inputs has length 308, and new_data has length 1235, the same as the length of df.
Hi Aishwarya, I was suspicious of your program, since it worked a little too well, so I played around with it. After I tried to make it predict some future prices (by extending the validation set into the future and filling the rows with zeroes), you can imagine my surprise when the prediction said the stock would drop like a stone. I investigated further and found that your data leaks in the LSTM implementation, and that's why it works so well.
Hi, Don't fill the rows with zeros! Fill them with the predicted values. Once you have made a prediction for the next day, use that prediction to predict the third day. Even if you do not use the validation set as done here, feed the model its own predictions. If you give it zeros as input for the last 2-3 days, the model will understand that yesterday's closing price was zero and will show a drastic drop.
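A minimal sketch of that rolling idea, assuming the trained Keras `model`, the fitted MinMax `scaler`, and the scaled `inputs` array from the article (the 5-day horizon is just an example):

import numpy as np

# start from the last 60 scaled closing prices and roll forward,
# feeding each prediction back in as the newest "observed" value
window = inputs[-60:].reshape(1, 60, 1)
future_scaled = []

for _ in range(5):  # predict 5 days ahead, one step at a time
    next_scaled = model.predict(window)[0, 0]
    future_scaled.append(next_scaled)
    # drop the oldest value and append the new prediction
    window = np.append(window[:, 1:, :], [[[next_scaled]]], axis=1)

future_prices = scaler.inverse_transform(np.array(future_scaled).reshape(-1, 1))
print(future_prices)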
Hi Aishwarya, Regarding the lines below:

#predicting 246 values, using past 60 from the train data
inputs = new_data[len(new_data) - len(valid) - 60:].values

May I know what values are obtained for len(new_data), len(valid), and inputs? Thanks.
Hi! Great post! If I want to predict 10 or 15 days instead of 1, which part of the code do I have to change? I can't figure it out. Thanks!
Hi! Great post! I can't figure out how to predict more than one day. Could you help me? Which line do I have to change? Greetings!
I'm using Jupyter Notebook in Anaconda3 Navigator and I'm getting this error: ModuleNotFoundError: No module named 'fastai'. Can you help me resolve this?
Hi Aishwarya, I am working on product development for the financial domain. If you could share the code project with me, along with some material that can help me learn ML basics, that would be great. My email id is gcchavda[a]gmail[dt]com.
I want the data on which you performed the above exercises. I am taking a data science course and need to work on projects with this data.
Hello, It was a great experience learning from your article; it would have been even better if you had explained each line of the code in all the approaches. My doubt is: why does the RMSE value vary on each execution over the same data, even when I re-run it within a short period of time? Secondly, correct me if I am wrong, but does the orange line in the predicted graph for each model have to coincide with the original blue line before we can conclude it is the right forecast? Please reply ASAP.
Hi Aishwarya, Help me print the predicted values along with plotting them. Secondly, how can we implement the same code for intraday values, where the date-time values look like 4/2/2019 2:07:00 AM? Thirdly, what changes are required to see and print the next 5 days' values? Thanks for the code, it has helped me get started.
I fully agree with Tobias's analysis. By the way, I like the article but not the way the LSTM is implemented: it uses data it is not supposed to know, which matters if you want to compare different types of models. For your information, the goal here is obviously just to compare models; machine learning cannot predict financial markets based on historical prices alone, that would be too easy. But, as I wrote, the article presents different models, and that is valuable for many people.
Hi Aishwarya, I would like you to explain all of your LSTM code, but for now I want to know why you use units=50 in the line below: model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1],1))). Thank you.
Hi Aishwarya, Really an interesting article. Please check whether the following is correct. To predict the future values of more than 5 years, I've used the following code to predict the next 100 days:

pred_price = []
num_pred = 100
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
print(X_test.shape)
print(X_test[0,:,0])
X_test = np.array(X_test)
for i in range(num_pred):
    closing_price = model.predict(X_test)
    print(closing_price)
    X_test = np.delete(X_test[0,:,0], 0)
    #print(X_test.shape)
    print(X_test)
    X_test = np.append(X_test, closing_price)
    #print(X_test.shape)
    print(X_test)
    X_test = np.reshape(X_test, (1, X_test.shape[0], 1))
    print(X_test.shape)
    closing_price = scaler.inverse_transform(closing_price)
    pred_price.append(closing_price)
pred_price = np.array(pred_price)
print(type(pred_price))
print(pred_price.shape)
pred_price = np.reshape(pred_price, (num_pred, 1))
print(pred_price.shape)
print(pred_price)
Hi, Firstly, this is a really amazing article you've written on Analytics Vidhya. Thanks a lot for the post. I tried to predict beyond the test set. The code goes as follows:

pred_price = []
num_pred = 10
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
print(X_test.shape)
print(X_test[0,:,0])
X_test = np.array(X_test)
for i in range(num_pred):
    closing_price = model.predict(X_test)
    print(closing_price)
    X_test = np.delete(X_test[0,:,0], 0)
    #print(X_test.shape)
    print(X_test)
    X_test = np.append(X_test, closing_price)
    #print(X_test.shape)
    print(X_test)
    X_test = np.reshape(X_test, (1, X_test.shape[0], 1))
    print(X_test.shape)
    closing_price = scaler.inverse_transform(closing_price)
    pred_price.append(closing_price)
pred_price = np.array(pred_price)
print(type(pred_price))
print(pred_price.shape)
pred_price = np.reshape(pred_price, (num_pred, 1))
print(pred_price.shape)
print(pred_price)

This shows that the values are always increasing. I've attached the Python workbook file and the data CSV file. Is there anything wrong with the code? Waiting for your response.
I am getting an error: cannot import name 'blosc_version'. Please help me.
Thank you for this great work. The prediction results were only 63 rows, not 248 as stated, even though I used the same scripts you posted. Could you please help? Thanks so much. Also, I am using plain Python, not Anaconda. I could not install pystan, which fbprophet needs in order to verify Prophet. Any comments? Do I have to use Anaconda? When I installed Anaconda, which is supposed to include pandas and numpy, I got error messages saying it could not find them, nor could I find a way to install them. Thanks again.
Hi, My name is Arjun, and I would like to do a project using ML. I want to design a prediction model to predict the price of spices. Can I use this same code to develop a model?
How do I write code to get the 'Prediction' value for a single date? I tried newdata.index but couldn't get the date for the prediction.
I'm not sure why this happened: the second-to-last line of code, closing_price = model.predict(X_test), apparently goes into an infinite loop; it has been running for hours with no output. I have a fairly new MacBook Pro, so processing power is not an issue. Is this normal?
Hi Aishwarya, I want to use both the closing price and the volume for training the model. I also want to predict prices for the next 3 months. Can you please help me with this, as I am new to Python?
Please send me the working Python code. Also, please send the workaround, as I do not have fastai installed.
I trained the data and looked at the predictions on the test data, and it also worked on the Bitcoin data I used. But how can I predict by giving new values? For example, if I want a prediction for 17 Apr 2019, which line of code does this?
In KNN it is giving me an error: invalid literal for float(). Can you please help me with that?
How are you going to predict the future stock price? Here you are using test_X and test_y from the original dataset; that is why the LSTM prediction looks so accurate. When you actually predict future stock prices, you need to create test_X and test_y yourself and feed those values in, and then the result will not be nearly as good.
Hello Aishwarya, Thanks for sharing! If I may ask: 1) Why do you make the cut-off at 987 for training? Why not 988 or any other number? Any rule of thumb to go by here? 2) Why look at 60 days to predict the 61st? Would looking further back yield a "better" prediction? Any rule of thumb for this? 3) Why set units to 50? As above, is there a rule of thumb? 4) Could you please show me a quick example of what you mean by "You need to create a validation set with only 10 rows as input to the LSTM model" when trying to predict more than 1 day in advance? I can't really see where to make the change(s) to make that work. Many thanks! Peter
Hi Aish, Thanks for this great article. I tried it, but I'm getting errors in fastai and other snippets. Can you please send me the code?
Aishwarya, I appreciate your work. I am an engineering student working on machine learning, and I was trying to implement this code on my own, but I ran into an error with the linear regression part (the fastai library). It does not work: when I install fastai, it shows the error [no module named 'torch'], and when I search for torch on the internet I only find PyTorch, which does not fix this for me. Please help me.
Hi, What are the other sequence prediction and time series problems where LSTM can be applied?
Hi, The average % error from the LSTM model's results is around 4-5% per day. If I use the previous day's close as the prediction for today's close, I get a better result, with an average % error of 2%. The graph would look even prettier than the LSTM's if we used the previous close as the prediction. But we all know that using the previous day's close as a prediction won't make you money.
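For anyone who wants to reproduce that comparison, here is a rough sketch of the naive previous-close baseline, assuming the `train`/`valid` split from the article (the metric choices are mine, not the article's):

import numpy as np

actual = valid['Close'].values
# predict each day's close as the previous day's close;
# the first prediction is the last close in the training set
naive_preds = np.concatenate(([train['Close'].values[-1]], actual[:-1]))

mape = np.mean(np.abs((actual - naive_preds) / actual)) * 100   # mean absolute % error
rmse = np.sqrt(np.mean((actual - naive_preds) ** 2))
print(mape, rmse)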
Hi, I tried out this algorithm with real data from Binance. It works well, but I'm afraid it does not help predict anything. Let's see whether I am missing something (and I hope I am!). I don't want to have any test set, because I want to guess the results in the real world, so I trained on everything. When predicting, we have two axes, x: dates and y: price. We should be able to give the trained model an x coordinate (a time unit) and have it return the y (price) it estimates will be the real one. But after looking into it, I realised prices are being used when predicting instead of dates. Why? In the real world I don't have these prices, I only have time units! I would like to use these time units to predict what the price will be at each of them, not use prices I won't have. Am I missing anything? Please tell me I am; that's what I want to hear!
Hello, I'm having trouble getting past the fastai section. I can't download the package; each time I try, I get a memory error (I don't think my server can handle the entire package download, it's almost a GB). I was trying to recreate it with simple loops as you mentioned, and I was able to get something. Here is the code:

for i in range(0, len(new_data)):
    new_data['Year'][i] = new_data['Date'][i].year
    new_data['Month'][i] = new_data['Date'][i].month
    new_data['Week'][i] = new_data['Date'][i].strftime("%U")
    new_data['Day'][i] = new_data['Date'][i].day
    new_data['Dayofweek'][i] = new_data['Date'][i].strftime("%w")
    new_data['Dayofyear'][i] = new_data['Date'][i].strftime("%j")
    if new_data['Date'][i].day == 1:
        new_data['Is_month_start'][i] = 1
    tomorrow = new_data['Date'][i] + timedelta(days=1)
    if tomorrow.day == 1:
        new_data['Is_month_end'][i] = 1
    if new_data['Date'][i].strftime("%d %m") in q_end:
        new_data['Is_quarter_end'][i] = 1
    if new_data['Date'][i].strftime("%d %m") in q_start:
        new_data['Is_quarter_start'][i] = 1
    if new_data['Date'][i].strftime("%d %m") == '01 01':
        new_data['Is_year_start'][i] = 1
    if new_data['Date'][i].strftime("%d %m") == '31 12':
        new_data['Is_year_end'][i] = 1

But when I get to the next section I'm having trouble with the linear regression model. Can you confirm that this adds the proper columns? Many thanks, and thanks for a great guide.
Hey, Thanks, this is a really great article. I'm having some issues in the linear regression part. I was not able to download the fastai package (memory error), so I tried to do it with simple loops instead. Here is my code:

for i in range(0, len(new_data)):
    new_data['Year'][i] = new_data['Date'][i].year
    new_data['Month'][i] = new_data['Date'][i].month
    new_data['Week'][i] = new_data['Date'][i].strftime("%U")
    new_data['Day'][i] = new_data['Date'][i].day
    new_data['Dayofweek'][i] = new_data['Date'][i].strftime("%w")
    new_data['Dayofyear'][i] = new_data['Date'][i].strftime("%j")
    if new_data['Date'][i].day == 1:
        new_data['Is_month_start'][i] = 1
    tomorrow = new_data['Date'][i] + timedelta(days=1)
    if tomorrow.day == 1:
        new_data['Is_month_end'][i] = 1
    if new_data['Date'][i].strftime("%d %m") in q_end:
        new_data['Is_quarter_end'][i] = 1
    if new_data['Date'][i].strftime("%d %m") in q_start:
        new_data['Is_quarter_start'][i] = 1
    if new_data['Date'][i].strftime("%d %m") == '01 01':
        new_data['Is_year_start'][i] = 1
    if new_data['Date'][i].strftime("%d %m") == '31 12':
        new_data['Is_year_end'][i] = 1

new_data.head(35)

I'm not sure whether I made an error here, but everything seems to come out OK. I don't know exactly what the output should look like, though. Maybe you could include a new_data.head() in your code, just to give an idea of what the data looks like? Anyway, when I try to run the regression, I get this error: TypeError: float() argument must be a string or a number, not 'Timestamp'. Thanks again!
Great article, a couple of questions (I'm a beginner with DL; I've dabbled with scikit-learn before): 1) How do I reproduce the LSTM model on my local Windows machine (or in a Google Cloud AI account, if that's easier)? Do I install Conda on Windows or some other stack? [The GitHub fastai clone is supported only on Linux machines.] 2) Could you post the daily predicted and actual prices through the validation period? I'm trying to assess the win rate (i.e., accuracy of the predicted direction) and the net profits and net losses of a daily strategy that goes long/short based on the prediction. It's hard to tell from the chart; typically, unless there's a 60% win rate and a profit factor (i.e., net profit / net loss) of 2x, it's not tradable.
Hi Aishwarya, My name is Jonathan Lim, and I have some experience in data science and ML. First of all, thank you so much for this post! I have been trying to figure out how to predict the stock market for a while now, and your post ties ML and stock market prediction together really well! I would love to learn more about predicting the stock market using ML from you, and to hear more about your experiences working in the data science and ML field! :) I have attached my LinkedIn profile along with this comment, and I hope to connect with you. Thank you, and I hope to hear from you soon! Sincerely, Jonathan Lim
Please, would anyone send me the complete code without errors? I am facing a lot of issues.
Thanks Aishwarya, I am getting the error below:

from fastai.structured import add_datepart
ModuleNotFoundError: No module named 'fastai'

I tried pip install fastai but still no luck. Can you please help me with this?
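If fastai will not install, one workaround is to build the date features directly with pandas. Here is a rough sketch that mirrors the column names add_datepart produces; this is not the fastai implementation, and it assumes `new_data` with a 'Date' column as in the article (the isocalendar call needs a reasonably recent pandas):

import pandas as pd

new_data['Date'] = pd.to_datetime(new_data['Date'])
dates = new_data['Date']

new_data['Year'] = dates.dt.year
new_data['Month'] = dates.dt.month
new_data['Week'] = dates.dt.isocalendar().week.astype(int)
new_data['Day'] = dates.dt.day
new_data['Dayofweek'] = dates.dt.dayofweek
new_data['Dayofyear'] = dates.dt.dayofyear
new_data['Is_month_end'] = dates.dt.is_month_end.astype(int)
new_data['Is_month_start'] = dates.dt.is_month_start.astype(int)
new_data['Is_quarter_end'] = dates.dt.is_quarter_end.astype(int)
new_data['Is_quarter_start'] = dates.dt.is_quarter_start.astype(int)
new_data['Is_year_end'] = dates.dt.is_year_end.astype(int)
new_data['Is_year_start'] = dates.dt.is_year_start.astype(int)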
Hi, While predicting, you use the last 60 actual values from train to predict the first value, Pred[0]. Then you use the last 59 actual values from train plus 1 actual value from valid to get Pred[1], and so on. Thus you are not using the predicted values to predict further values, but rather using all actual values and making separate 1-day-ahead predictions. Is my understanding right?
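That matches my reading as well; here is a small sketch of how those one-step windows are built, assuming `inputs` holds the scaled closing prices for the last len(valid) + 60 days (reshaped and scaled as in the article) and `model`/`scaler` come from the article:

import numpy as np

# each window ends the day before the date being predicted and contains
# 60 *actual* scaled prices, so every forecast is only one step ahead
X_test = []
for i in range(60, inputs.shape[0]):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test).reshape(-1, 60, 1)

closing_price = model.predict(X_test)              # one prediction per validation day
closing_price = scaler.inverse_transform(closing_price)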
Hi Aishwarya Singh, It's a really nice article. Can you please share the latest dataset with me? Thanks.
Hi Aish, Can you share in a little more depth how to predict future values for two more days? It would be helpful.
How does one add another feature, for example volume, to the LSTM to be used for predictions? All help appreciated.
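Not the author's code, but here is a rough sketch of one way to do it: scale the close and volume together and give the LSTM an input shape of (60, 2). The column names ('Close', 'Total Trade Quantity') are assumptions based on the Quandl dataset, and the tiny number of epochs is only to keep the example short:

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense

features = df[['Close', 'Total Trade Quantity']].values
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(features)

# each sample is 60 days x 2 features; the target is the next day's scaled close
x_train, y_train = [], []
for i in range(60, len(scaled)):
    x_train.append(scaled[i-60:i, :])
    y_train.append(scaled[i, 0])
x_train, y_train = np.array(x_train), np.array(y_train)

model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(x_train.shape[1], 2)))
model.add(LSTM(units=50))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=1)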
Hi Aishwarya, I am writing to you on a thread that has had no comments for the last 20 months. I read the article and found it interesting. I have a few questions, as well as a strategy that I want to automate. Given your research and interest in the topic, maybe we can collaborate. Please reply if you would like to discuss it further.
Hi! Thanks a lot for this great article. I'm getting a KeyError at the lines below:

#predicting 246 values, using past 60 from the train data
inputs = new_data[len(new_data) - len(valid) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = scaler.transform(inputs)

The traceback ends inside pandas' DataFrame.__getitem__ (convert_to_index_sliceable) on the slicing line.
Can someone suggest code to predict values for the next 30 days, whose values are not present in the dataset?
Hi Aishwarya, could you please share the full working code with me? I would really appreciate it!
Hi, please help: how do I provide input data (e.g., investing 10,000 over 10 days) to the LSTM model and get the predicted value? I can't find any place to provide those inputs.
Hey, my accuracy is negative for all the models. Can you please guide me?