Reputation: 311
My main goal is to cast all columns of any DataFrame to string so that comparisons are easy.
I have already tried the multiple ways suggested below, but couldn't succeed:
target_df = target_df.select([col(c).cast("string") for c in target_df.columns])
This gave the error:
pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#155: need struct type but got string;"
The next one I tried is:
target_df = target_df.select([col(c).cast(StringType()).alias(c) for c in columns_list])
Error:
pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#27: need struct type but got string;"
The next method is:
for column in target_df.columns:
    target_df = target_df.withColumn(column, target_df[column].cast('string'))
Error:
pyspark.sql.utils.AnalysisException: "Can't extract value from SDV#27: need struct type but got string;"
A few lines of code that run before the cast:
columns_list = source_df.columns.copy()
target_df = target_df.toDF(*columns_list)
Schema of the sample DataFrame I'm trying this on:
root
|-- A: string (nullable = true)
|-- S: string (nullable = true)
|-- D: string (nullable = true)
|-- F: string (nullable = true)
|-- G: double (nullable = true)
|-- H: double (nullable = true)
|-- J: string (nullable = true)
|-- K: string (nullable = true)
|-- L: string (nullable = true)
|-- M: string (nullable = true)
|-- N: string (nullable = true)
|-- B: string (nullable = true)
|-- V: string (nullable = true)
|-- C: string (nullable = true)
|-- X: string (nullable = true)
|-- Y: string (nullable = true)
|-- U: double (nullable = true)
|-- I: string (nullable = true)
|-- R: string (nullable = true)
|-- T: string (nullable = true)
|-- Q: string (nullable = true)
|-- E: double (nullable = true)
|-- W: string (nullable = true)
|-- AS: string (nullable = true)
|-- DSC: string (nullable = true)
|-- DCV: string (nullable = true)
|-- WV: string (nullable = true)
|-- SDV: string (nullable = true)
|-- SDV.1: string (nullable = true)
|-- WDV: string (nullable = true)
|-- FWFV: string (nullable = true)
|-- ERBVSER: string (nullable = true)
Upvotes: 2
Views: 6153
Reputation: 13998
As suggested, the error comes from the dot (.) in the column named SDV.1, which has to be enclosed in backticks when selecting the column:
for column in target_df.columns:
    target_df = target_df.withColumn(column, target_df['`{}`'.format(column)].cast('string'))
or
target_df = target_df.select([col('`{}`'.format(c)).cast(StringType()).alias(c) for c in columns_list])
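For reference, here is a minimal, self-contained sketch of the same fix. The SparkSession setup and the toy two-column DataFrame are assumptions added for illustration; only the SDV / SDV.1 naming mirrors the question:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# Toy frame whose second column name contains a dot, like "SDV.1" in the question.
df = spark.createDataFrame([(1, 2.5), (2, 3.5)], ['SDV', 'SDV.1'])

# Without backticks, col('SDV.1') is parsed as field "1" of a struct column "SDV",
# which raises the "need struct type but got ..." AnalysisException seen above.
casted = df.select([col('`{}`'.format(c)).cast(StringType()).alias(c) for c in df.columns])
casted.printSchema()
# root
#  |-- SDV: string (nullable = true)
#  |-- SDV.1: string (nullable = true)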
Upvotes: 3
Reputation: 11274
I don't see anything wrong with your approach:
>>> df = spark.createDataFrame([(1,25),(1,20),(1,20),(2,26)],['id','age'])
>>> df.show()
+---+---+
| id|age|
+---+---+
| 1| 25|
| 1| 20|
| 1| 20|
| 2| 26|
+---+---+
>>> df.printSchema()
root
|-- id: long (nullable = true)
|-- age: long (nullable = true)
>>> df.select([col(i).cast('string') for i in df.columns]).printSchema()
root
|-- id: string (nullable = true)
|-- age: string (nullable = true)
>>> df.select([col(i).cast('string') for i in df.columns]).show()
+---+---+
| id|age|
+---+---+
| 1| 25|
| 1| 20|
| 1| 20|
| 2| 26|
+---+---+
Upvotes: 0