user811602

Reputation: 1354

Spark: Row filter based on Column value

I have millions of rows in a DataFrame like this:

val df = Seq(("id1", "ACTIVE"), ("id1", "INACTIVE"), ("id1", "INACTIVE"), ("id2", "ACTIVE"), ("id3", "INACTIVE"), ("id3", "INACTIVE")).toDF("id", "status")

scala> df.show(false)
+---+--------+
|id |status  |
+---+--------+
|id1|ACTIVE  |
|id1|INACTIVE|
|id1|INACTIVE|
|id2|ACTIVE  |
|id3|INACTIVE|
|id3|INACTIVE|
+---+--------+

Now I want to divide this data into three separate DataFrames, like this:

  1. Only ACTIVE ids (like id2), say activeDF
  2. Only INACTIVE ids (like id3), say inactiveDF
  3. Having both ACTIVE and INACTIVE as status, say bothDF

How can I calculate activeDF and inactiveDF?

I know that bothDF can be calculated like

df.select("id").distinct.except(activeDF).except(inactiveDF)

but this will involve shuffling (since the distinct operation requires one). Is there a better way to calculate bothDF?
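For context, a naive sketch of activeDF and inactiveDF with the same set operations (the activeIds and inactiveIds names are just for illustration) would be:

// in spark-shell; otherwise import spark.implicits._ for $ and toDF

val activeIds   = df.filter($"status" === "ACTIVE").select("id").distinct
val inactiveIds = df.filter($"status" === "INACTIVE").select("id").distinct

val activeDF   = activeIds.except(inactiveIds)   // ids that are only ACTIVE
val inactiveDF = inactiveIds.except(activeIds)   // ids that are only INACTIVE

but each distinct and except adds another shuffle over millions of rows, which is what I would like to avoid.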

Versions:

Spark : 2.2.1
Scala : 2.11

Upvotes: 1

Views: 569

Answers (2)

C.S.Reddy Gadipally

Reputation: 1758

Just another way: groupBy the id, collect the statuses as a set, and then check the size of the set. If it is 1, the id is either active-only or inactive-only; otherwise it has both.

scala> val df = Seq(("id1", "ACTIVE"), ("id1", "INACTIVE"), ("id1", "INACTIVE"), ("id2", "ACTIVE"), ("id3", "INACTIVE"), ("id3", "INACTIVE"), ("id4", "ACTIVE"), ("id5", "ACTIVE"), ("id6", "INACTIVE"), ("id7", "ACTIVE"), ("id7", "INACTIVE")).toDF("id", "status")
df: org.apache.spark.sql.DataFrame = [id: string, status: string]

scala> df.show(false)
+---+--------+
|id |status  |
+---+--------+
|id1|ACTIVE  |
|id1|INACTIVE|
|id1|INACTIVE|
|id2|ACTIVE  |
|id3|INACTIVE|
|id3|INACTIVE|
|id4|ACTIVE  |
|id5|ACTIVE  |
|id6|INACTIVE|
|id7|ACTIVE  |
|id7|INACTIVE|
+---+--------+


scala> val allstatusDF = df.groupBy("id").agg(collect_set("status") as "allstatus")
allstatusDF: org.apache.spark.sql.DataFrame = [id: string, allstatus: array<string>]

scala> allstatusDF.show(false)
+---+------------------+
|id |allstatus         |
+---+------------------+
|id7|[ACTIVE, INACTIVE]|
|id3|[INACTIVE]        |
|id5|[ACTIVE]          |
|id6|[INACTIVE]        |
|id1|[ACTIVE, INACTIVE]|
|id2|[ACTIVE]          |
|id4|[ACTIVE]          |
+---+------------------+


scala> allstatusDF.withColumn("status", when(size($"allstatus") === 1, $"allstatus".getItem(0)).otherwise("BOTH")).show(false)
+---+------------------+--------+
|id |allstatus         |status  |
+---+------------------+--------+
|id7|[ACTIVE, INACTIVE]|BOTH    |
|id3|[INACTIVE]        |INACTIVE|
|id5|[ACTIVE]          |ACTIVE  |
|id6|[INACTIVE]        |INACTIVE|
|id1|[ACTIVE, INACTIVE]|BOTH    |
|id2|[ACTIVE]          |ACTIVE  |
|id4|[ACTIVE]          |ACTIVE  |
+---+------------------+--------+
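From there, a sketch of how the three DataFrames could be split off this labelled result (withStatus is just an illustrative name). Only the single groupBy shuffles; the three filters reuse the cached aggregate:

val withStatus = allstatusDF
  .withColumn("status", when(size($"allstatus") === 1, $"allstatus".getItem(0)).otherwise("BOTH"))
  .cache() // avoid recomputing the aggregation for each of the three filters

val activeDF   = withStatus.filter($"status" === "ACTIVE").select("id")
val inactiveDF = withStatus.filter($"status" === "INACTIVE").select("id")
val bothDF     = withStatus.filter($"status" === "BOTH").select("id")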

Upvotes: 1

user10938362

Reputation: 4161

The most elegant solution is to pivot on status:

val counts = df
  .groupBy("id")
  .pivot("status", Seq("ACTIVE", "INACTIVE"))
  .count

or an equivalent direct agg:

val counts = df
  .groupBy("id")
  .agg(
    count(when($"status" === "ACTIVE", true)) as "ACTIVE",
    count(when($"status" === "INACTIVE", true)) as "INACTIVE"
  )

followed by a simple CASE ... WHEN:

val result = counts.withColumn(
  "status",
  when($"ACTIVE" === 0, "INACTIVE")
    .when($"inactive" === 0, "ACTIVE")
    .otherwise("BOTH")
)

result.show
+---+------+--------+--------+                                                  
| id|ACTIVE|INACTIVE|  status|
+---+------+--------+--------+
|id3|     0|       2|INACTIVE|
|id1|     1|       2|    BOTH|
|id2|     1|       0|  ACTIVE|
+---+------+--------+--------+

Later you can separate the result with filters, or dump it to disk with a source that supports partitionBy (How to split a dataframe into dataframes with same column values?).
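A sketch of both options, reusing the result DataFrame from above (the output path is a placeholder):

// Option 1: three filtered views over the same labelled result
val activeDF   = result.filter($"status" === "ACTIVE").select("id")
val inactiveDF = result.filter($"status" === "INACTIVE").select("id")
val bothDF     = result.filter($"status" === "BOTH").select("id")

// Option 2: one write, partitioned by status on disk
result
  .select("id", "status")
  .write
  .partitionBy("status")
  .parquet("/path/to/ids_by_status") // placeholder path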

Upvotes: 2
