Ha Jar

Reputation: 193

Join dataframe with order by desc limit in Spark/Java

I'm using the following code:

    Dataset<Row> dataframee = df1.as("a").join(df2.as("b"),
            df2.col("id_device").equalTo(df1.col("ID_device_previous"))
                .and(df2.col("id_vehicule").equalTo(df1.col("ID_vehicule_previous")))
                .and(df2.col("tracking_time").lt(df1.col("date_track_previous"))),
            "left")
        .selectExpr("a.*",
            "b.ID_tracking as ID_pprevious",
            "b.km as KM_pprevious",
            "b.tracking_time as tracking_time_pprevious",
            "b.speed as speed_pprevious");

This gives me df1 joined with multiple matching rows from df2.

But what I want is to join df1 with df2 on the same condition while keeping, for each df1 row, only the single most recent matching df2 row — i.e. order by df2.col("tracking_time") desc limit 1.
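One common way to get "only the latest matching row" is to rank the joined rows with a window function and keep rank 1. A sketch against the `dataframee` result above — the partition columns are an assumption (they should uniquely identify a df1 row), and `desc_nulls_last` keeps left-join rows that had no match at all:

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.row_number;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.expressions.Window;
import org.apache.spark.sql.expressions.WindowSpec;

// One partition per df1 row (assumes these three columns identify a row
// uniquely), newest df2 match first.
WindowSpec w = Window
        .partitionBy(col("ID_device_previous"), col("ID_vehicule_previous"), col("date_track_previous"))
        .orderBy(col("tracking_time_pprevious").desc_nulls_last());

// rn = 1 marks the most recent match; drop the helper column afterwards.
Dataset<Row> latest = dataframee
        .withColumn("rn", row_number().over(w))
        .filter(col("rn").equalTo(1))
        .drop("rn");
```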

EDIT

I tried the following code, but it doesn't work.

df1.registerTempTable("data");
df2.createOrReplaceTempView("tdays");
Dataset<Row> d_f = sparkSession.sql("select a.*  from data as a  LEFT JOIN (select  b.tracking_time from tdays as b where  b.id_device = a.ID_device_previous and  b.id_vehicule = a.ID_vehicule_previous  and b.tracking_time < a.date_track_previous order by b.tracking_time desc limit 1 )");
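This query fails because the derived table is correlated (it references `a`'s columns inside the subquery) and has no alias or join condition, which Spark SQL rejects. The same per-row `order by ... desc limit 1` can be written with a window function instead — a sketch, assuming the column names from the snippets above:

```sql
SELECT *
FROM (
  SELECT a.*,
         b.tracking_time AS tracking_time_pprevious,
         ROW_NUMBER() OVER (
           PARTITION BY a.ID_device_previous, a.ID_vehicule_previous, a.date_track_previous
           ORDER BY b.tracking_time DESC
         ) AS rn
  FROM data a
  LEFT JOIN tdays b
    ON  b.id_device   = a.ID_device_previous
    AND b.id_vehicule = a.ID_vehicule_previous
    AND b.tracking_time < a.date_track_previous
) ranked
WHERE rn = 1
```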


I need your help

Upvotes: 1

Views: 420

Answers (1)

kavetiraviteja

Reputation: 2208

You can do this in a couple of ways that I'm aware of:

  1. You can call dropDuplicates on your joined dataframee DF:

    val finalDF = dataframee.dropDuplicates("colA", "colB") // replace with the columns that should be distinct/unique in the final output

(OR)

  2. spark-sql

    import spark.implicits._
    df1.createOrReplaceTempView("table1")
    df2.createOrReplaceTempView("table2")
    spark.sql("join query with groupBy distinct columns").select("*")
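The groupBy idea can be sketched against the `table1`/`table2` views above, assuming they hold df1 and df2 with the column names from the question. Note that aggregating this way only recovers the max `tracking_time`; when other df2 columns are also needed, the window-function form is usually more practical:

```sql
SELECT a.ID_device_previous,
       a.ID_vehicule_previous,
       a.date_track_previous,
       MAX(b.tracking_time) AS tracking_time_pprevious
FROM table1 a
LEFT JOIN table2 b
  ON  b.id_device   = a.ID_device_previous
  AND b.id_vehicule = a.ID_vehicule_previous
  AND b.tracking_time < a.date_track_previous
GROUP BY a.ID_device_previous, a.ID_vehicule_previous, a.date_track_previous
```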

Upvotes: 1
