alancc

Reputation: 811

Why do I get nan for the Keras loss?

I am following the article https://www.analyticsvidhya.com/blog/2018/10/predicting-stock-price-machine-learningnd-deep-learning-techniques-python/ .

First, I try to use the data in the article, i.e., https://s3-ap-south-1.amazonaws.com/av-blog-media/wp-content/uploads/2019/03/NSE-TATAGLOBAL11.csv . The script produces the same result as the article.

Then, I try another data set downloaded from Yahoo Finance. This data set is larger (2805 rows rather than the 1000+ rows in the article's sample data set). However, after applying the LSTM method, the loss comes out as nan. Why? How can I solve this problem?

Upvotes: 0

Views: 520

Answers (2)

Sreeram TP

Reputation: 11927

Most probably the data you are using contains nan values. Removing those rows, or filling them with appropriate values, will fix the issue.

You can check for nan values using np.isnan(X).
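For example, with a DataFrame loaded from a downloaded CSV (the column name and values here are made up for illustration), a quick sketch of the check and the two possible fixes might look like:

```python
import numpy as np
import pandas as pd

# Hypothetical price data with one missing value, as downloaded data often has
df = pd.DataFrame({"Date": pd.date_range("2020-01-01", periods=4),
                   "Close": [231.1, np.nan, 233.75, 233.3]})

# Check whether any nan is present before training
print(np.isnan(df["Close"].values).any())  # prints True here

# Either drop the offending rows...
df_dropped = df.dropna()

# ...or fill them, e.g. forward-fill each gap with the previous close
df_filled = df.ffill()
```

Any nan that survives into the training set will propagate through the network's weights, which is why the loss itself turns into nan.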

Upvotes: 1

Ali Ghofrani

Reputation: 321

Follow these steps:

  1. Normalize your data, e.g., by quantile normalization. To be rigorous, compute this transformation on the training data only, not on the entire data set.

  2. Add regularization, either by increasing the dropout rate or by adding L1 and L2 penalties to the weights. L1 regularization is analogous to feature selection.

  3. If these still don't help, reduce the size of your network to cut the number of parameters, so it can train on less data. This is not always the best idea, since it can hurt performance.

  4. Finally, increase the batch size; it can improve the stability of the optimization.
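As a minimal sketch of step 1 (plain NumPy; the synthetic prices, the 80/20 split, and the 1%/99% quantile bounds are all assumptions for illustration), scaling with bounds computed on the training data only avoids leaking test information:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = rng.normal(100.0, 10.0, size=2805)  # hypothetical close prices

# Split chronologically, as is usual for stock-price series
split = int(len(prices) * 0.8)
train, test = prices[:split], prices[split:]

# Compute quantile bounds on the TRAINING data only
lo, hi = np.quantile(train, [0.01, 0.99])

def scale(x, lo, hi):
    """Map values into [0, 1] using the training-set quantile bounds."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

train_s = scale(train, lo, hi)
test_s = scale(test, lo, hi)  # same bounds: no information from test leaks in
```

Keeping the inputs in a bounded range like this is one of the simplest ways to stop the gradients, and hence the loss, from blowing up to nan in the first place.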

Upvotes: 0

Related Questions