Reputation: 65
My CSV file looks like this:
,timestamp,side,size,price,tickDirection,grossValue,homeNotional,foreignNotional
0,1569974396.557895,1,11668,8319.5,1,140248813.0,11668,1.40248813
1,1569974394.78865,0,5000,8319.0,0,60103377.0,5000,0.60103377
2,1569974392.355395,0,564,8319.0,0,6779660.999999999,564,0.06779661
3,1569974383.797042,0,100,8319.0,0,1202067.0,100,0.01202067
4,1569974382.944569,0,3,8319.0,0,36062.0,3,0.00036062
5,1569974382.944569,0,7412,8319.0,-1,89097247.0,7412,0.89097247
There's a nameless index column. I want to remove this column.
When I read this in pandas, it just interprets it as an index and moves on.
The problem is that when you then use df[::-1], it flips the indexes as well. So df[::-1]['timestamp'][0] is the same as df['timestamp'][0] if the file was read with indexes, but not if it was read without.
How do I make it actually ignore the index column, so that df[::-1] doesn't flip my indexes?
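A minimal sketch of the behaviour described, assuming the file above is saved as data.csv:
import pandas as pd
df = pd.read_csv('data.csv')
# Reversing the rows keeps each row's original index label attached to it,
# so the label lookup [0] returns the first row's timestamp in both cases.
print(df['timestamp'][0])        # 1569974396.557895
print(df[::-1]['timestamp'][0])  # same value: label 0 travels with its row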
I tried usecols in read_csv, but it makes no difference; it reads the index column as well as the columns I specified. I tried del df[''], but that doesn't work either, because it doesn't interpret the index column as the column '', even though that's what it is.
Upvotes: 1
Views: 91
Reputation: 120391
Just use index_col=0 so the nameless first column is read as the row index:
import pandas as pd
df = pd.read_csv('data.csv', index_col=0)
print(df)
# Output
timestamp side size price tickDirection grossValue homeNotional foreignNotional
0 1.569974e+09 1 11668 8319.5 1 140248813.0 11668 1.402488
1 1.569974e+09 0 5000 8319.0 0 60103377.0 5000 0.601034
2 1.569974e+09 0 564 8319.0 0 6779661.0 564 0.067797
3 1.569974e+09 0 100 8319.0 0 1202067.0 100 0.012021
4 1.569974e+09 0 3 8319.0 0 36062.0 3 0.000361
5 1.569974e+09 0 7412 8319.0 -1 89097247.0 7412 0.890972
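As a quick check (assuming the same data.csv), the unnamed column now serves as the row index and no longer appears among the data columns:
print(df.index.tolist())   # [0, 1, 2, 3, 4, 5] -- taken from the file
print(df.columns.tolist()) # ['timestamp', 'side', 'size', 'price', 'tickDirection', 'grossValue', 'homeNotional', 'foreignNotional']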
Upvotes: 1
Reputation: 606
If I understand your issue correctly, you can just set timestamp as your index:
df.set_index('timestamp', drop=True)
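Note that set_index returns a new DataFrame by default, so assign the result back (or pass inplace=True); a minimal usage sketch:
df = df.set_index('timestamp')          # drop=True is already the default
# or, to modify df in place:
# df.set_index('timestamp', inplace=True)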
Upvotes: 0