Reputation: 768
Spark SQL 2.3 and 2.2, PySpark.
One date is 2019-11-19 and the other is 2019-11-19T17:19:39.214841000000.
I need to convert both to yyyy-MM-ddThh:mm:ss.SSSSSSSS and use the result in spark.sql(select ......).
So far I have tried about 20 options, but all of them give null.
Tried:
from_utc_timestamp(A.SE_TS, 'UTC')
from_unixtime(A.SE_TS, 'yyyy-MM-dd HH:mm:ss')
from_unixtime(A.SE_TS)
to_date(A.SE_TS, 'yyyy-MM-dd HH:mm:ss')
to_date(A.SE_TS, 'yyyy-MM-dd hh:mm:ss.SSSS') (in many combinations of upper and lower case)
from_unixtime(unix_timestamp(), "y-MM-dd'T'hh:mm:ssZ") - gives syntax issues on the double quotes
All of them return null.
Edit: Data:
+--------------------------------+-------------+
|A.SE_TS |B.SE_TS |
+--------------------------------+-------------+
|2019-11-19T17:19:39.214841000000|2019-11-19 |
+--------------------------------+-------------+
Upvotes: 1
Views: 2979
Reputation: 7409
So here it is:
Java's SimpleDateFormat, which Spark 2.x relies on when you supply a format string, cannot handle the microsecond precision in your string, so to_timestamp only keeps second precision here.
However, you can still parse the strings to a (second-precision) timestamp in this way:
df.withColumn("date", F.to_timestamp(F.lit("2019-11-19T17:19:39.214841000000"), "yyyy-MM-dd'T'HH:mm:ss")).select("date").show(5)
+-------------------+
| date|
+-------------------+
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
+-------------------+
You can write a custom function, as described in the linked answer below, that keeps the microseconds from the timestamp so you can order by them; a sketch follows after the link.
Please refer to: pault's answer on Convert date string to timestamp in pySpark
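For illustration, here is a minimal sketch of such a function (this is not pault's exact code; the helper name parse_micro_ts and the column name date_string are assumptions for this example). It parses the string with Python's datetime, truncating the 12-digit fraction to the 6 microsecond digits that Spark's TimestampType can actually store:

from datetime import datetime
import pyspark.sql.functions as F
from pyspark.sql.types import TimestampType

def parse_micro_ts(s):
    # Hypothetical helper: handles both the date-only and the long timestamp form.
    if s is None:
        return None
    if "." in s:
        head, frac = s.split(".")
        # Python's %f accepts at most 6 fractional digits, so keep only the microseconds.
        return datetime.strptime(head + "." + frac[:6], "%Y-%m-%dT%H:%M:%S.%f")
    if "T" in s:
        return datetime.strptime(s, "%Y-%m-%dT%H:%M:%S")
    return datetime.strptime(s, "%Y-%m-%d")

parse_micro_ts_udf = F.udf(parse_micro_ts, TimestampType())
df.withColumn("ts", parse_micro_ts_udf("date_string")).show(truncate=False)

Since TimestampType stores microseconds, ordering on the resulting column takes the microsecond part into account.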
EDIT:
I tried it with spark.sql(query) as well:
df = df.withColumn("date_string", F.lit("2019-11-19T17:19:39.214841000000"))
df.registerTempTable("df")
query = """SELECT to_timestamp(date_string, "yyyy-MM-dd'T'HH:mm:ss") as time from df limit 3"""
spark.sql(query).show()
+-------------------+
| time|
+-------------------+
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
|2019-11-19 17:19:39|
+-------------------+
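If you need the microsecond precision directly inside spark.sql(...), as the question asks, the same hypothetical helper can be registered as a SQL function (again a sketch, reusing the parse_micro_ts Python function from the snippet above; it is not a built-in):

from pyspark.sql.types import TimestampType

# Register the Python function defined earlier so it can be called from SQL.
spark.udf.register("parse_micro_ts", parse_micro_ts, TimestampType())

query = """SELECT parse_micro_ts(date_string) AS time FROM df LIMIT 3"""
spark.sql(query).show(truncate=False)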
Upvotes: 1