Vikas J

Reputation: 358

Spark SQL filter multiple similar fields

Is there a better way to write a filter with multiple similar conditions on a Spark DataFrame?

Assume df is a Spark DataFrame with timestamp columns t1, t2, t3, t4.

val filteredDF = df.filter(
  col("t1").lt(current_date() - expr("INTERVAL 30 DAYS")) ||
  col("t2").lt(current_date() - expr("INTERVAL 30 DAYS")) ||
  col("t3").lt(current_date() - expr("INTERVAL 30 DAYS")) ||
  col("t4").lt(current_date() - expr("INTERVAL 30 DAYS")))

Is there a better way to write this? Since I'm new to Scala, I don't know the best practices yet. I'd appreciate any help.

Upvotes: 0

Views: 99

Answers (2)

stack0114106

Reputation: 8711

Check this out:

scala> import java.sql.Timestamp
import java.sql.Timestamp

scala> val df = Seq((Timestamp.valueOf("2019-01-01 01:02:03"), Timestamp.valueOf("2019-01-10 01:02:03"), Timestamp.valueOf("2019-01-15 01:02:03"), Timestamp.valueOf("2019-02-22 01:02:03"))).toDF("t1","t2","t3","t4")
df: org.apache.spark.sql.DataFrame = [t1: timestamp, t2: timestamp ... 2 more fields]

scala> df.show(false)
+-------------------+-------------------+-------------------+-------------------+
|t1                 |t2                 |t3                 |t4                 |
+-------------------+-------------------+-------------------+-------------------+
|2019-01-01 01:02:03|2019-01-10 01:02:03|2019-01-15 01:02:03|2019-02-22 01:02:03|
+-------------------+-------------------+-------------------+-------------------+


scala> val ts_cols = df.dtypes.filter( _._2 == "TimestampType" ).map( _._1)
ts_cols: Array[String] = Array(t1, t2, t3, t4)

scala> val exp1 = ts_cols.map ( x=> col(x).lt(current_date()-expr("INTERVAL 30 DAYS")) ).reduce( _||_ )
exp1: org.apache.spark.sql.Column = ((((t1 < (current_date() - interval 4 weeks 2 days)) OR (t2 < (current_date() - interval 4 weeks 2 days))) OR (t3 < (current_date() - interval 4 weeks 2 days))) OR (t4 < (current_date() - interval 4 weeks 2 days)))

scala> df.select(col("*"),exp1.as("ts_comp") ).show(false)
+-------------------+-------------------+-------------------+-------------------+-------+
|t1                 |t2                 |t3                 |t4                 |ts_comp|
+-------------------+-------------------+-------------------+-------------------+-------+
|2019-01-01 01:02:03|2019-01-10 01:02:03|2019-01-15 01:02:03|2019-02-22 01:02:03|false  |
+-------------------+-------------------+-------------------+-------------------+-------+

And a test case where the condition evaluates to true:

scala> val df2 = Seq((Timestamp.valueOf("2018-01-01 01:02:03"), Timestamp.valueOf("2018-01-10 01:02:03"), Timestamp.valueOf("2018-01-15 01:02:03"), Timestamp.valueOf("2018-02-22 01:02:03"))).toDF("t1","t2","t3","t4")
df2: org.apache.spark.sql.DataFrame = [t1: timestamp, t2: timestamp ... 2 more fields]

scala> df2.select(col("*"),exp1.as("ts_comp") ).show(false)
+-------------------+-------------------+-------------------+-------------------+-------+
|t1                 |t2                 |t3                 |t4                 |ts_comp|
+-------------------+-------------------+-------------------+-------------------+-------+
|2018-01-01 01:02:03|2018-01-10 01:02:03|2018-01-15 01:02:03|2018-02-22 01:02:03|true   |
+-------------------+-------------------+-------------------+-------------------+-------+
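To actually filter the rows rather than just flag them, the same expression goes straight into filter (not run above, but it follows directly from the demo):

val filteredDF = df2.filter(exp1)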



Upvotes: 0

shay__

Reputation: 3990

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import df.sparkSession.implicits._

// Build one OR-ed condition from any number of timestamp columns
def filterDates(dates: Column*): Column =
  dates
    .map(_.lt(current_date() - expr("INTERVAL 30 DAYS")))
    .reduce(_ or _)

val filteredDF = df.filter(filterDates($"t1", $"t2", $"t3", $"t4"))

I didn't even check if it compiles, but give or take a few typos it should do the job.
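
If you'd rather not list the columns by hand, the same helper can be driven off the schema, reusing the dtypes trick from the other answer (a sketch, equally untested):

// Collect every timestamp column from the schema, then pass them as varargs
val tsCols = df.dtypes.collect { case (name, "TimestampType") => col(name) }
val filteredDF = df.filter(filterDates(tsCols: _*))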

Upvotes: 0
