John

Reputation: 587

How to skip remainder of dataframe after blank line?

The following routine retrieves a data file.

import wget
import pandas as pd
wget.download("https://www.aaii.com/files/surveys/sentiment.xls", "C:/temp/sentiment.xls")
df = pd.read_excel("C:/temp/sentiment.xls", sheet_name="SENTIMENT", skiprows=3, parse_dates=['Date'], date_format='%m-%d-%y', index_col='Date')

The first three data lines are incomplete, so I can slice them off like this: df[3:]

At about line 1640 there is a blank line, and I want to skip the rest of the file after it. I tried to find that line and get its index so I could do another slice, but I get nan for the index value:

df[df.isnull().all(1)].index.values[0]

How can I find that line and skip the rest of the file?

Upvotes: 0

Views: 92

Answers (1)

SpghttCd

Reputation: 10880

I think you have two NaN-row problems in this file:

  1. The first row after the header is already an empty row, which leads to a NaN index.
  2. The reason for your post here: the empty row that marks the end of the data you're interested in.

First, import the data as you did:

df = pd.read_excel("sentiment.xls", sheet_name = "SENTIMENT", skiprows=3, parse_dates=['Date'], date_format='%m-%d-%y', index_col ='Date')

df.head()
                     Bullish  Neutral  Bearish   ...      High     Low   Close
Date                                             ...                          
NaN                      NaN      NaN      NaN   ...       NaN     NaN     NaN
1987-06-26 00:00:00      NaN      NaN      NaN   ...       NaN     NaN     NaN
1987-07-17 00:00:00      NaN      NaN      NaN   ...    314.59  307.63  314.59
1987-07-24 00:00:00     0.36     0.50     0.14   ...    311.39  307.81  309.27
1987-07-31 00:00:00     0.26     0.48     0.26   ...    318.66  310.65  318.66

Then remove the first empty row (the NaN index), problem No. 1:

df = df[1:]

df.head()
                     Bullish  Neutral  Bearish   ...      High     Low   Close
Date                                             ...                          
1987-06-26 00:00:00      NaN      NaN      NaN   ...       NaN     NaN     NaN
1987-07-17 00:00:00      NaN      NaN      NaN   ...    314.59  307.63  314.59
1987-07-24 00:00:00     0.36     0.50     0.14   ...    311.39  307.81  309.27
1987-07-31 00:00:00     0.26     0.48     0.26   ...    318.66  310.65  318.66
1987-08-07 00:00:00     0.56     0.15     0.29   ...    323.00  316.23  323.00
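
If you want to double-check how many blank rows with a NaN date are still in the frame before making the cut, a small optional check (not part of the original answer) is:

df.index.isna().sum()   # number of rows whose Date index is NaN/NaT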

Now you want to index all rows before the next NaN index, problem No. 2.
The idea: build a boolean array with True for every NaN index entry, cast it to integer, and take the cumulative sum. This gives an array that is 0 for all the rows you want and > 0 from the first unwanted row until the end.
Testing it against 0 returns a boolean index for your data:

data_idx = df.index.isna().astype(int).cumsum() == 0

Applied to your dataframe:

df[data_idx]
                      Bullish   Neutral   ...         Low    Close
Date                                      ...                     
1987-06-26 00:00:00       NaN       NaN   ...         NaN      NaN
1987-07-17 00:00:00       NaN       NaN   ...      307.63   314.59
1987-07-24 00:00:00  0.360000  0.500000   ...      307.81   309.27
1987-07-31 00:00:00  0.260000  0.480000   ...      310.65   318.66
1987-08-07 00:00:00  0.560000  0.150000   ...      316.23   323.00
                      ...       ...   ...         ...      ...
2018-10-11 00:00:00  0.306061  0.339394   ...     2784.86  2785.68
2018-10-18 00:00:00  0.339350  0.310469   ...     2710.51  2809.21
2018-10-25 00:00:00  0.279693  0.310345   ...     2651.89  2656.10
2018-11-01 00:00:00  0.379310  0.275862   ...     2603.54  2711.74
2018-11-08 00:00:00  0.412844  0.275229   ...     2700.44  2813.89

[1635 rows x 12 columns]
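
An equivalent way to make the same cut with an explicit positional slice, in case you prefer that style (just a sketch, not from the original answer; it assumes numpy is imported as np):

import numpy as np

nan_pos = np.flatnonzero(df.index.isna())   # positions of all NaN index entries
if nan_pos.size:                            # cut only if such a row actually exists
    df = df.iloc[:nan_pos[0]]               # keep everything before the first one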

Upvotes: 1
