ErhWen Kuo

Reputation: 1507

How do I convert a column of Unix epoch to Date in an Apache Spark DataFrame using Java?

I have a JSON data file which contains one property [creationDate] that is a Unix epoch in "long" number type. The Apache Spark DataFrame schema looks like below:

root 
 |-- creationDate: long (nullable = true) 
 |-- id: long (nullable = true) 
 |-- postTypeId: long (nullable = true)
 |-- tags: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- title: string (nullable = true)
 |-- viewCount: long (nullable = true)

I would like to do some groupBy on "creationData_Year", which needs to be derived from "creationDate".

What's the easiest way to do this kind of conversion in a DataFrame using Java?

Upvotes: 10

Views: 29180

Answers (3)

ErhWen Kuo

Reputation: 1507

After checking the Spark DataFrame API and SQL functions, I came up with the snippet below:

import static org.apache.spark.sql.functions.from_unixtime;

DataFrame df = sqlContext.read().json("MY_JSON_DATA_FILE");

// from_unixtime expects seconds, while creationDate is in milliseconds
DataFrame df_DateConverted = df.withColumn("creationDt", from_unixtime(df.col("creationDate").divide(1000)));

The reason the "creationDate" column is divided by "1000" is that the TimeUnit is different. The original "creationDate" is a Unix epoch in milliseconds, whereas Spark SQL's "from_unixtime" is designed to handle a Unix epoch in seconds.
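From there, the original groupBy-by-year goal could look like the minimal sketch below, which uses the built-in year function on the converted column. It assumes the "creationDt" column from the snippet above; the "creationYear" column name and the count aggregation are just illustrative choices.

import static org.apache.spark.sql.functions.year;

// Extract the year from the converted timestamp and group on it
DataFrame df_byYear = df_DateConverted
    .withColumn("creationYear", year(df_DateConverted.col("creationDt")))
    .groupBy("creationYear")
    .count();

df_byYear.show();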

Upvotes: 16

Ganesh

Reputation: 757

In Spark Scala:

spark.sql("select from_unixtime(1593543333062/1000) as ts").show(false)

Upvotes: 5

Ray Metz

Reputation: 81

PySpark: convert from Unix epoch milliseconds to a DataFrame timestamp:

df.select(from_unixtime((df.my_date_column.cast('bigint')/1000)).cast('timestamp').alias('my_date_column'))

Upvotes: 8
