Robin

Reputation: 128

Converting timestamp to UTC in Spark Scala

My environment is Spark 2.1 with Scala.

This is probably simple, but I've been racking my brain over it.

My DataFrame, myDF, looks like this:

+--------------------+----------------+
|     orign_timestamp| origin_timezone|
+--------------------+----------------+
|2018-05-03T14:56:...|America/St_Johns|
|2018-05-03T14:56:...| America/Toronto|
|2018-05-03T14:56:...| America/Toronto|
|2018-05-03T14:56:...| America/Toronto|
|2018-05-03T14:56:...| America/Halifax|
|2018-05-03T14:56:...| America/Toronto|
|2018-05-03T14:56:...| America/Toronto|
+--------------------+----------------+

I need to convert orign_timestamp to UTC and add it as a new column to the DataFrame. The code below works fine:

myDF.withColumn("time_utc", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")),("America/Montreal"))).show

The problem is that the timezone is hard-coded to "America/Montreal". I need to pass the timezone from the orign_timezone column instead. I tried:

myDF.withColumn("time_utc", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")), col("orign_timezone".toString.trim))).show

but got this error:
<console>:34: error: type mismatch;
 found   : org.apache.spark.sql.Column
 required: String

I also tried the code below; it did not throw an exception, but the new column had the same time as orign_timestamp.

myDF.withColumn("origin_timestamp", to_utc_timestamp(from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")), col("rign_timezone").toString)).show

Upvotes: 8

Views: 15885

Answers (2)

Kunda

Reputation: 463

If you upgrade to Spark 2.4, you can use the overload that accepts a Column for the timezone.
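
For example, something along these lines should work against the question's myDF (a sketch, untested; it assumes the same column names as in the question):

import org.apache.spark.sql.functions._

// Spark 2.4+: the timezone argument of to_utc_timestamp can be a Column
myDF.withColumn(
  "time_utc",
  to_utc_timestamp(
    to_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss"),
    col("orign_timezone")
  )
).show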

Alternatively, for type-safe access on earlier versions, you can use the underlying Catalyst expression class directly:

import org.apache.spark.sql.Column
import org.apache.spark.sql.catalyst.expressions.ToUTCTimestamp
import org.apache.spark.sql.functions._

new Column(
  ToUTCTimestamp(
    from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")).expr,
    col("orign_timezone").expr
  )
)
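
You can then attach this column with withColumn, for example (a sketch based on the question's myDF):

myDF.withColumn("time_utc",
  new Column(
    ToUTCTimestamp(
      from_unixtime(unix_timestamp(col("orign_timestamp"), "yyyy-MM-dd'T'HH:mm:ss")).expr,
      col("orign_timezone").expr
    )
  )
).show

Keep in mind that ToUTCTimestamp lives in org.apache.spark.sql.catalyst.expressions, which is an internal API and may change between Spark releases.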

Upvotes: 2

Alper t. Turker

Reputation: 35249

Whenever you run into a problem like this, you can use expr, which goes through the SQL parser and lets you pass the timezone as a column (the Spark 2.x Scala API only accepts it as a String):

import org.apache.spark.sql.functions._
import spark.implicits._  // toDF on a local Seq needs the SparkSession implicits

val df = Seq(
  ("2018-05-03T14:56:00", "America/St_Johns"),
  ("2018-05-03T14:56:00", "America/Toronto"),
  ("2018-05-03T14:56:00", "America/Halifax")
).toDF("origin_timestamp", "origin_timezone")

df.withColumn("time_utc",
  expr("to_utc_timestamp(origin_timestamp, origin_timezone)")
).show

// +-------------------+----------------+-------------------+
// |   origin_timestamp| origin_timezone|           time_utc|
// +-------------------+----------------+-------------------+
// |2018-05-03T14:56:00|America/St_Johns|2018-05-03 17:26:00|
// |2018-05-03T14:56:00| America/Toronto|2018-05-03 18:56:00|
// |2018-05-03T14:56:00| America/Halifax|2018-05-03 17:56:00|
// +-------------------+----------------+-------------------+

or selectExpr:

df.selectExpr(
  "*", "to_utc_timestamp(origin_timestamp, origin_timezone) as time_utc"
).show

// +-------------------+----------------+-------------------+
// |   origin_timestamp| origin_timezone|           time_utc|
// +-------------------+----------------+-------------------+
// |2018-05-03T14:56:00|America/St_Johns|2018-05-03 17:26:00|
// |2018-05-03T14:56:00| America/Toronto|2018-05-03 18:56:00|
// |2018-05-03T14:56:00| America/Halifax|2018-05-03 17:56:00|
// +-------------------+----------------+-------------------+

Upvotes: 10
