Reputation: 2117
I have a pandas DataFrame that I am writing to a table in HDFS. I can write the data to the table when Srum_Entry_Creation is StringType(), but I need it to be TimestampType(). This is where I run into TypeError: TimestampType can not accept object '2019-05-20 12:03:00' in type <class 'str'> or TypeError: TimestampType can not accept object 1558353780000000000 in type <class 'int'>. I have tried converting the column to different date formats in Python before defining the schema, but I can't seem to get the import to work.
df
   Srum_Entry_ID  Connected_Time    Machine  Srum_Entry_Creation
0         5769.0        0.018218  Computer1  2019-05-20 12:03:00
1         5770.0        0.000359  Computer1  2019-05-20 12:03:00
2         5771.0        0.042674  Computer2  2019-05-20 13:03:00
3         5772.0        0.043229  Computer2  2019-05-20 14:04:00
4         5773.0        0.032222  Computer3  2019-05-20 14:04:00
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, FloatType, StringType, TimestampType

spark = SparkSession.builder.appName('application').getOrCreate()

schema = StructType([StructField('Srum_Entry_ID', FloatType(), False),
                     StructField('Connected_Time', FloatType(), True),
                     StructField('Machine', StringType(), True),
                     StructField('Srum_Entry_Creation', TimestampType(), True)])

dataframe = spark.createDataFrame(df, schema)

dataframe.write. \
    mode("append"). \
    option("path", "/user/hive/warehouse/analytics.db/srum_network_connections"). \
    saveAsTable("analytics.srum_network_connections")
I have tried:
df['Srum_Entry_Creation'] = df['Srum_Entry_Creation'].astype('datetime64[ns]')
error:
TypeError: TimestampType can not accept object 1558353780000000000 in type <class 'int'>
and
df['Srum_Entry_Creation'] = pd.to_datetime(df['Srum_Entry_Creation'])
error:
TypeError: TimestampType can not accept object 1558353780000000000 in type <class 'int'>
and if I just leave it as a string in the pandas DataFrame, I get:
error:
TypeError: TimestampType can not accept object '2019-05-20 12:03:00' in type <class 'str'>
Upvotes: 5
Views: 7257
Reputation: 2117
In short, I converted the datetime to epoch time:
import datetime as dt

df['epoch'] = (df['New_Srum_Entry_Creation'] - dt.datetime(1970, 1, 1)).dt.total_seconds()
df['epoch'] = df['epoch'].astype('Int64')
Then I used IntegerType() for the schema:
StructField('epoch', IntegerType(), True)
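For reference, a minimal end-to-end sketch of this workaround might look like the following. It assumes the datetimes still sit in the question's Srum_Entry_Creation column (rather than the answer's New_Srum_Entry_Creation), uses a plain int64 cast instead of the answer's nullable 'Int64' just to keep the example simple, and relies on epoch seconds for these 2019 dates fitting within IntegerType()'s 32-bit range.
import datetime as dt
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, FloatType, StringType, IntegerType

spark = SparkSession.builder.appName('application').getOrCreate()

# df is the pandas DataFrame from the question: parse the string column,
# then express it as whole seconds since the Unix epoch
df['Srum_Entry_Creation'] = pd.to_datetime(df['Srum_Entry_Creation'])
df['epoch'] = (df['Srum_Entry_Creation'] - dt.datetime(1970, 1, 1)).dt.total_seconds()
df['epoch'] = df['epoch'].astype('int64')

# declare the epoch column as a plain integer instead of a timestamp
schema = StructType([StructField('Srum_Entry_ID', FloatType(), False),
                     StructField('Connected_Time', FloatType(), True),
                     StructField('Machine', StringType(), True),
                     StructField('epoch', IntegerType(), True)])

dataframe = spark.createDataFrame(
    df[['Srum_Entry_ID', 'Connected_Time', 'Machine', 'epoch']], schema)

dataframe.write. \
    mode("append"). \
    option("path", "/user/hive/warehouse/analytics.db/srum_network_connections"). \
    saveAsTable("analytics.srum_network_connections")
The trade-off is that the column lands in the table as an integer of epoch seconds, so it has to be converted back (for example with from_unixtime) whenever a real timestamp is needed at query time.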
Upvotes: 1