pythonUser

Reputation: 213

How to remove decimal point from string using pandas

I'm reading an xls file and converting it to a csv file in Databricks using PySpark. My input data is a string such as 101101114501700 in the xls file, but after converting it to CSV with pandas and writing it to the datalake folder, the value shows up as 101101114501700.0. My code is given below. Why am I getting the decimal part in the data?

import os
import time
import pandas as pd

for file in os.listdir("/path/to/file"):
    if file.endswith(".xls"):
        filepath = os.path.join("/path/to/file", file)
        filepath_pd = pd.ExcelFile(filepath)
        names = filepath_pd.sheet_names
        df = pd.concat([filepath_pd.parse(name) for name in names])
        df1 = df.to_csv("/path/to/file"+file.split('.')[0]+".csv", sep=',', encoding='utf-8', index=False)
        print(time.strftime("%Y%m%d-%H%M%S") + ": XLS files converted to CSV and moved to folder")

Upvotes: 0

Views: 4060

Answers (2)

Yuan JI

Reputation: 2995

Your question has nothing to do with Spark or PySpark. It's related to Pandas.

This is because Pandas interprets and infers column data types automatically. Since all the values in your column are numeric, Pandas treats the column as a float data type.
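For illustration, here is a minimal sketch (a column built in memory rather than read from Excel, with a hypothetical 'id' header) showing why a float column produces the trailing .0 while a string column does not:

import pandas as pd

# once the column is float64, writing it out as text
# produces the trailing ".0" described in the question
df_float = pd.DataFrame({'id': [101101114501700.0]})
print(df_float.to_csv(index=False))   # id ... 101101114501700.0

# the same value kept as a string round-trips unchanged
df_str = pd.DataFrame({'id': ['101101114501700']})
print(df_str.to_csv(index=False))     # id ... 101101114501700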

To avoid this, the pandas.ExcelFile.parse method accepts an argument called converters; you can use it to tell Pandas the data type of specific columns:

# if you want one specific column as string
df = pd.concat([filepath_pd.parse(name, converters={'column_name': str}) for name in names])

OR

# if you want all columns as string
# and you have multiple sheets that do not have the same columns
# this merges all sheets into one dataframe
def get_converters(excel_file, sheet_name, dt_cols):
    cols = excel_file.parse(sheet_name).columns
    converters = {col: str for col in cols if col not in dt_cols}
    for col in dt_cols:
        converters[col] = pd.to_datetime
    return converters

df = pd.concat([filepath_pd.parse(name, converters=get_converters(filepath_pd, name, ['date_column'])) for name in names]).reset_index(drop=True)

OR

# if you want all columns as string
# and all your sheets have the same columns
cols = filepath_pd.parse().columns
dt_cols = ['date_column']
converters = {col: str for col in cols if col not in dt_cols}
for col in dt_cols:
    converters[col] = pd.to_datetime
df = pd.concat([filepath_pd.parse(name, converters=converters) for name in names]).reset_index(drop=True)
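As a usage example, the simplest variant (forcing one ID column to str) could be plugged into your original loop roughly like this; 'column_name' is a placeholder for the real header of the ID column:

import os
import pandas as pd

for file in os.listdir("/path/to/file"):
    if file.endswith(".xls"):
        filepath = os.path.join("/path/to/file", file)
        filepath_pd = pd.ExcelFile(filepath)
        names = filepath_pd.sheet_names
        # keep the ID column as text so it never picks up a ".0"
        df = pd.concat([filepath_pd.parse(name, converters={'column_name': str})
                        for name in names])
        df.to_csv(os.path.join("/path/to/file", file.split('.')[0] + ".csv"),
                  sep=',', encoding='utf-8', index=False)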

Upvotes: 0

gilgorio

Reputation: 558

I think the field is automatically parsed as a float when reading the Excel file. I would correct it afterwards:

df['column_name'] = df['column_name'].astype(int)

If your column contains nulls, you can't convert it to integer, so you will need to fill the nulls first:

df['column_name'] = df['column_name'].fillna(0).astype(int)

Then you can concatenate and store the files the way you were doing it.
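Put together, a minimal sketch of that last step ('column_name' and the output path are placeholders; 'int64' is used instead of int so the 15-digit value fits on platforms where the default int is 32-bit):

# fill missing IDs, cast back to a 64-bit integer, then write out as before;
# the CSV then contains 101101114501700 instead of 101101114501700.0
df['column_name'] = df['column_name'].fillna(0).astype('int64')
df.to_csv("/path/to/file/output.csv", sep=',', encoding='utf-8', index=False)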

Upvotes: 2
