Reputation: 811
I am following the article https://www.analyticsvidhya.com/blog/2018/10/predicting-stock-price-machine-learningnd-deep-learning-techniques-python/ .
First, I used the dataset from the article, i.e., https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2019/03/NSE-TATAGLOBAL11.csv, and the script produced the same results as the article.
Then I downloaded another dataset from Yahoo Finance. This dataset is larger (2805 rows rather than the 1000+ rows in the article's sample dataset). However, when I train with the LSTM method, the loss becomes nan. Why does this happen, and how can I solve the problem?
Upvotes: 0
Views: 520
Reputation: 11927
Most probably the data you are using contains nan values. Removing those rows, or filling them with appropriate values, will fix the issue. You can check for nan using np.isnan(X).
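For example, with pandas you could inspect and clean the downloaded CSV like this (a minimal sketch; the file name and the Close column are assumptions about your Yahoo Finance export):

```python
import numpy as np
import pandas as pd

# "yahoo_data.csv" is a placeholder -- use the file you downloaded from Yahoo Finance.
df = pd.read_csv("yahoo_data.csv")

# See how many nan values each column contains.
print(df.isna().sum())

# Option 1: drop the rows that contain nan.
df = df.dropna()

# Option 2: forward-fill gaps instead, which is often reasonable for daily prices.
# df = df.ffill()

# Double-check the array you actually feed to the model.
X = df["Close"].values
assert not np.isnan(X).any()
```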
Upvotes: 1
Reputation: 321
Follow the steps below one by one (a code sketch combining them comes after the list):
Normalize your data with quantile normalization. To be rigorous, fit this transformation on the training data only, not on the entire dataset.
Add regularization, either by increasing the dropout rate or adding L1 and L2 penalties to the weights. L1 regularization is analogous to feature selection.
If these still don't help, reduce the size of your network to cut down the number of parameters being trained. This is not always the best idea, since it can harm performance.
Finally, increase the batch size; it can improve the stability of the optimization.
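Not the author's exact setup, but a minimal Keras sketch of the first two steps plus the batch-size suggestion; the window size, layer width, penalty strengths, and the pre-split train/test arrays are all assumptions:

```python
import numpy as np
from sklearn.preprocessing import QuantileTransformer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.regularizers import l1_l2

# Assume `train` and `test` are 1-D numpy arrays of closing prices,
# already split in time order (e.g. first 80% / last 20%).
qt = QuantileTransformer(output_distribution="normal")
train_scaled = qt.fit_transform(train.reshape(-1, 1))  # fit on training data only
test_scaled = qt.transform(test.reshape(-1, 1))        # reuse the fitted transform

def make_windows(series, window=60):
    """Turn a scaled series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i, 0])
        y.append(series[i, 0])
    X, y = np.array(X), np.array(y)
    return X.reshape(X.shape[0], X.shape[1], 1), y

X_train, y_train = make_windows(train_scaled)

model = Sequential([
    LSTM(50, input_shape=(X_train.shape[1], 1),
         kernel_regularizer=l1_l2(l1=1e-5, l2=1e-4)),  # L1/L2 penalties on the weights
    Dropout(0.2),                                      # dropout regularization
    Dense(1),
])
model.compile(loss="mean_squared_error", optimizer="adam")
model.fit(X_train, y_train, epochs=10, batch_size=64)  # larger batch for stability
```

Fitting the QuantileTransformer on the training split only keeps test-set statistics out of the scaling, which is what "fit this transformation on the training data only" means in practice.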
Upvotes: 0