keiv.fly

Reputation: 4035

How to convert a string column with milliseconds to a timestamp with milliseconds in Spark 2.1 using Scala?

I am using Spark 2.1 with Scala.

How to convert a string column with milliseconds to a timestamp with milliseconds?

I tried the following code from the question "Better way to convert a string field into timestamp in Spark":

import org.apache.spark.sql.functions.unix_timestamp
val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
val tts = unix_timestamp($"dts", "MM/dd/yyyy HH:mm:ss.SSS").cast("timestamp")
tdf.withColumn("ts", tts).show(2, false)

But I get the result without milliseconds:

+---+-----------------------+---------------------+
|id |dts                    |ts                   |
+---+-----------------------+---------------------+
|1  |05/26/2016 01:01:01.601|2016-05-26 01:01:01.0|
|2  |#$@#@#                 |null                 |
+---+-----------------------+---------------------+

Upvotes: 10

Views: 16177

Answers (3)

gokulnath s

Reputation: 1

import org.apache.spark.sql.types.DataTypes;

// Casting a numeric to TimestampType treats the value as *seconds*
// since the epoch, so epoch milliseconds must be divided by 1000 first.
dataFrame.withColumn(
    "time_stamp",
    dataFrame.col("milliseconds_in_string")
        .cast(DataTypes.LongType)
        .divide(1000)
        .cast(DataTypes.TimestampType)
)

The code is in Java and is easy to convert to Scala; a sketch of the equivalent is below.
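A minimal Scala sketch of the same cast chain (assuming the same hypothetical milliseconds_in_string column holding epoch milliseconds as text):

import org.apache.spark.sql.functions.col

// Divide by 1000 first, because Spark casts numerics to timestamp as epoch seconds.
val withTs = dataFrame.withColumn(
  "time_stamp",
  (col("milliseconds_in_string").cast("long") / 1000).cast("timestamp")
)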

Upvotes: -1

Paul Bendevis

Reputation: 2621

There is an easier way than making a UDF: just parse the millisecond part and add it to the unix timestamp (the following code works with PySpark and should be very close to the Scala equivalent):

from pyspark.sql.functions import substring, unix_timestamp

timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
# unix_timestamp truncates to whole seconds; re-attach the last three digits as milliseconds
df = df.withColumn('ux_t', unix_timestamp(df.t, format=timeFmt) + substring(df.t, -3, 3).cast('float') / 1000)

Result: '2017/03/05 14:02:41.865' is converted to 1488722561.865
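A minimal Scala sketch of the same approach (assuming a string column named t in the same format):

import org.apache.spark.sql.functions.{col, substring, unix_timestamp}

val timeFmt = "yyyy/MM/dd HH:mm:ss.SSS"
// unix_timestamp yields whole seconds; the last three characters of the
// string are the milliseconds, re-attached as a fraction of a second.
val withUx = df.withColumn(
  "ux_t",
  unix_timestamp(col("t"), timeFmt) + substring(col("t"), -3, 3).cast("float") / 1000
)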

Upvotes: 6

keiv.fly

Reputation: 4035

A UDF with SimpleDateFormat works. The idea is taken from Ram Ghadiyaram's link to a UDF implementation.

import java.text.SimpleDateFormat
import java.sql.Timestamp
import org.apache.spark.sql.functions.udf
import scala.util.{Try, Success, Failure}

val getTimestamp: (String => Option[Timestamp]) = s => s match {
  case "" => None  // empty input yields no timestamp (null in the column)
  case _ =>
    // SimpleDateFormat is not thread-safe, so build one per invocation
    val format = new SimpleDateFormat("MM/dd/yyyy' 'HH:mm:ss.SSS")
    Try(new Timestamp(format.parse(s).getTime)) match {
      case Success(t) => Some(t)  // millisecond precision is preserved
      case Failure(_) => None     // unparseable strings become null
    }
}

val getTimestampUDF = udf(getTimestamp)
val tdf = Seq((1L, "05/26/2016 01:01:01.601"), (2L, "#$@#@#")).toDF("id", "dts")
val tts = getTimestampUDF($"dts")
tdf.withColumn("ts", tts).show(2, false)

with output:

+---+-----------------------+-----------------------+
|id |dts                    |ts                     |
+---+-----------------------+-----------------------+
|1  |05/26/2016 01:01:01.601|2016-05-26 01:01:01.601|
|2  |#$@#@#                 |null                   |
+---+-----------------------+-----------------------+

Upvotes: 9
