Georg Heiler

Reputation: 17724

pyspark type error on reading a pandas dataframe

I read some CSV file into pandas, nicely preprocessed it and set dtypes to desired values of float, int, category. However, when trying to import it into spark I get the following error:

Can not merge type <class 'pyspark.sql.types.DoubleType'> and <class 'pyspark.sql.types.StringType'>

After tracing it for a while, I found the source of my troubles. See the CSV file:

"myColumns"
""
"A"

Read into pandas like: small = pd.read_csv(os.path.expanduser('myCsv.csv'))

And failing to import it to spark with:

sparkDF = spark.createDataFrame(small)

Currently I use Spark 2.0.0.

Possibly multiple columns are affected. How can I deal with this problem?
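For context, the mix-up can be reproduced with pandas alone (a hypothetical reconstruction of the CSV from the question, not part of the original post): read_csv parses the empty quoted field as NaN, which is a Python float, so the object-dtype column contains both a float and a str. Spark's per-row type inference then sees DoubleType for one row and StringType for the other, which is exactly the merge error above.

```python
import io

import pandas as pd

# Hypothetical stand-in for myCsv.csv: one empty field, one string field.
csv_text = '"myColumns"\n""\n"A"\n'
small = pd.read_csv(io.StringIO(csv_text))

# The empty field becomes NaN (a float), the other row is a str,
# so the same column mixes two Python types.
print(small["myColumns"].map(type).tolist())
# [<class 'float'>, <class 'str'>]
```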


Upvotes: 3

Views: 10220

Answers (1)

eliasah

Reputation: 40380

You'll need to define the Spark DataFrame schema explicitly and pass it to the createDataFrame function:

from pyspark.sql.types import *
import pandas as pd

small = pd.read_csv("data.csv")
small.head()
#  myColumns
# 0       NaN
# 1         A
sch = StructType([StructField("myColumns", StringType(), True)])

df = spark.createDataFrame(small, sch)
df.show()
# +---------+
# |myColumns|
# +---------+
# |      NaN|
# |        A|
# +---------+

df.printSchema()
# root
# |-- myColumns: string (nullable = true)
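If many columns are affected and writing out a full schema is impractical, one alternative sketch (an assumption on my part, not required by the approach above) is to replace NaN with None before the conversion, so the missing values become proper nulls and every remaining value in each column has a single Python type that Spark can infer on its own:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the data read from the CSV.
small = pd.DataFrame({"myColumns": [np.nan, "A"]})

# Replace NaN (a float) with None so the column holds only strings
# and nulls; Spark can then infer StringType without an explicit schema.
clean = small.replace({np.nan: None})

# sparkDF = spark.createDataFrame(clean)
```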

Upvotes: 5
