Reputation: 70
I am reading a CSV file into a Spark DataFrame. The CSV has empty spaces " " in many columns, and I want to remove these empty spaces. There are 500 columns in the CSV, so I cannot specify the columns manually in my code.
Sample data:
ADVANCE_TYPE  CHNG_DT   BU_IN
A             20190718  1
              20190728  2
              20190714
B             20190705
              20190724  4
Code:
from pyspark.sql.functions import col, when, regexp_replace, trim

df_csv = spark.read.options(header='true', delimiter=',', inferSchema='true', nullValue="None").csv("test41.csv")

for col_name in df_csv.columns:
    df_csv = df_csv.select(trim(col(col_name)))
But this code is not removing the empty spaces. Please help!
Upvotes: 0
Views: 296
Reputation: 31480
Each pass of your loop replaces df_csv with a DataFrame that contains only the single trimmed column (select keeps nothing else), so the loop cannot work as written. Instead, apply trim to all the required columns at once using a list comprehension.
Example:
from pyspark.sql.functions import col, length, trim

df = spark.createDataFrame([("   ", "12343", "   ", "9  ", "   0")])
#finding the length of each column
expr = [length(col(col_name)).name('length' + col_name) for col_name in df.columns]
df.select(expr).show()
#+--------+--------+--------+--------+--------+
#|length_1|length_2|length_3|length_4|length_5|
#+--------+--------+--------+--------+--------+
#| 3| 5| 3| 3| 4|
#+--------+--------+--------+--------+--------+
#trim on all the df columns
expr = [trim(col(col_name)).name(col_name) for col_name in df.columns]
df1=df.select(expr)
df1.show()
#+---+-----+---+---+---+
#| _1| _2| _3| _4| _5|
#+---+-----+---+---+---+
#|   |12343|   |  9|  0|
#+---+-----+---+---+---+
#length on df1 columns
expr = [length(col(col_name)).name('length' + col_name) for col_name in df1.columns]
df1.select(expr).show()
#+--------+--------+--------+--------+--------+
#|length_1|length_2|length_3|length_4|length_5|
#+--------+--------+--------+--------+--------+
#| 0| 5| 0| 1| 1|
#+--------+--------+--------+--------+--------+
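Applying the same pattern to the question's df_csv trims all 500 columns in one pass. A minimal sketch (alias() keeps the original column names, just as .name() does above; note that trim returns string columns, so types inferred by inferSchema become strings):

from pyspark.sql.functions import col, trim

#trim every column in a single select, keeping the original column names
df_csv = df_csv.select([trim(col(c)).alias(c) for c in df_csv.columns])

Alternatively, the loop from the question works if each iteration keeps the full DataFrame, e.g. df_csv = df_csv.withColumn(col_name, trim(col(col_name))) instead of select.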
Upvotes: 2