Reputation: 823
I have a CSV with two delimiters (; and ,). It looks like this:
vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375
I want to import it into a pandas DataFrame, with ; acting as the column separator and , as the separator for a list (or array) of floats. So far I am using the method below, but I am sure there is something easier out there.
aa = 0
csv_import = pd.read_csv(folder + FileName, sep=';')
for col in csv_import.columns:
    aa = aa + 1
    if type(csv_import[col][0]) == str and aa > 3:
        # string to list of strings
        csv_import[col] = csv_import[col].apply(lambda x: x.split(','))
        # make the list of strings into a list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x])
Upvotes: 3
Views: 6177
Reputation: 210832
First read the CSV using ; as a delimiter:
df = pd.read_csv(filename, sep=';')
UPDATE:
In [67]: num_cols = df.columns.difference(['vin','vorgangid','eventkm'])
In [68]: num_cols
Out[68]: Index(['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value'], dtype='object')
In [69]: df[num_cols] = (df[num_cols].apply(lambda x: x.str.split(',', expand=True)
   ....:                                              .stack()
   ....:                                              .astype(float)
   ....:                                              .unstack()
   ....:                                              .values.tolist())
   ....: )
In [70]: df
Out[70]:
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
In [71]: type(df.loc[0, 'D_8_lamsoni_w_value'][0])
Out[71]: float
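The stack()/unstack() round trip is only there so that a single astype(float) can be applied to every split piece at once. On the freshly read df (i.e. instead of the step in In [69]), a per-column equivalent would be the following sketch, which is my addition rather than part of the original answer and assumes there are no NaN in those columns:
df[num_cols] = df[num_cols].apply(
    lambda s: s.str.split(',').map(lambda lst: [float(v) for v in lst]))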
OLD answer:
Now we can split the number strings into lists in the "number" columns:
In [20]: df[['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value']] = \
   ....:     df[['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value']].apply(lambda x: x.str.split(','))
In [21]: df
Out[21]:
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
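Note that the OLD answer stops at lists of strings; to finish the conversion to floats, a minimal follow-up sketch (my addition, assuming the same two column names) would be:
cols = ['D_8_lamsoni_w_time', 'D_8_lamsoni_w_value']
df[cols] = df[cols].apply(lambda s: s.map(lambda lst: [float(v) for v in lst]))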
Upvotes: 2
Reputation: 862511
You can use the converters parameter of read_csv and define a custom function for splitting:
def f(x):
    return [float(i) for i in x.split(',')]

# after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp),
                 sep=";",
                 converters={'D_8_lamsoni_w_time': f, 'D_8_lamsoni_w_value': f})
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
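A quick check (my addition, not part of the original answer) that the converter really produced lists of floats:
print (type(df.loc[0, 'D_8_lamsoni_w_time'][0]))
<class 'float'>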
Another solution, which also works with NaN in the 4th and 5th columns:
You can use read_csv with separator ;, then apply str.split to the 4th and 5th columns selected by iloc and convert each value in the lists to float:
import pandas as pd
import numpy as np
import io
temp=u"""vin;vorgangid;eventkm;D_8_lamsoni_w_time;D_8_lamsoni_w_value
V345578;295234545;13;-1000.0,-980.0;7.9921875,11.984375
V346670;329781064;13;-960.0,-940.0;7.9921875,11.984375"""
# after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), sep=";")
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 -1000.0,-980.0 7.9921875,11.984375
1 V346670 329781064 13 -960.0,-940.0 7.9921875,11.984375
# split 4th and 5th columns and convert to lists of floats
df.iloc[:,3] = df.iloc[:,3].str.split(',').apply(lambda x: [float(i) for i in x])
df.iloc[:,4] = df.iloc[:,4].str.split(',').apply(lambda x: [float(i) for i in x])
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
If you need numpy arrays instead of lists:
# split 4th and 5th columns and convert to numpy arrays
df.iloc[:,3] = df.iloc[:,3].str.split(',').apply(lambda x: np.array([float(i) for i in x]))
df.iloc[:,4] = df.iloc[:,4].str.split(',').apply(lambda x: np.array([float(i) for i in x]))
print (df)
vin vorgangid eventkm D_8_lamsoni_w_time D_8_lamsoni_w_value
0 V345578 295234545 13 [-1000.0, -980.0] [7.9921875, 11.984375]
1 V346670 329781064 13 [-960.0, -940.0] [7.9921875, 11.984375]
print (type(df.iloc[0,3]))
<class 'numpy.ndarray'>
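One advantage of numpy arrays over lists here is that element-wise arithmetic then works on each cell directly; a small usage sketch (my addition):
print (df.iloc[0, 3] * 2)
[-2000. -1960.]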
I tried to improve your solution:
a = 0
csv_import = pd.read_csv(folder + FileName, sep=';')
for col in csv_import.columns:
    a += 1
    if type(csv_import.loc[0, col]) == str and a > 3:
        # string straight to list of floats
        csv_import[col] = csv_import[col].apply(lambda x: [float(y) for y in x.split(',')])
Upvotes: 2
Reputation: 76297
Aside from the other fine answers here, which are more pandas-specific, it should be noted that Python itself is pretty powerful when it comes to string processing. You can just place the result of replacing ';' with ',' in a StringIO object and work normally from there:
In [8]: import pandas as pd
In [9]: from cStringIO import StringIO
In [10]: pd.read_csv(StringIO(''.join(l.replace(';', ',') for l in open('stuff.csv'))))
Out[10]:
                    vin  vorgangid  eventkm  D_8_lamsoni_w_time  \
V345578 295234545    13    -1000.0   -980.0            7.992188
V346670 329781064    13     -960.0   -940.0            7.992188

                    D_8_lamsoni_w_value
V345578 295234545             11.984375
V346670 329781064             11.984375
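Note that cStringIO exists only on Python 2; on Python 3 the same idea works with io.StringIO (a sketch of my own, assuming the same stuff.csv file as above):
import io
import pandas as pd

# replace ';' with ',' line by line and feed the result to read_csv
with open('stuff.csv') as fh:
    df = pd.read_csv(io.StringIO(''.join(line.replace(';', ',') for line in fh)))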
Upvotes: 4