Reputation: 55
I have a DataFrame DF and I want to count the number of occurrences of each txn under the two categories (Cat1 and Cat2).
DF
+------------+-------+
| Category | txn |
+------------+-------+
| Cat1 | A |
| Cat2 | A |
| Cat1 | B |
| Cat1 | C |
| Cat2 | D |
| Cat1 | D |
| Cat2 | C |
| Cat1 | D |
| Cat1 | A |
| Cat2 | C |
| Cat1 | D |
| Cat1 | A |
| Cat2 | B |
| Cat1 | C |
| Cat2 | D |
+------------+-------+
Code:
DF.groupBy("category_name").agg(count("txn").as("txn_count")).show(false)
But this only gives me the total count for each category.
Desired output: (the format doesn't matter, just need the count)
+------------+---------------------+
| Category | txn_count |
+------------+---------------------+
| Cat1 | A(3),B(1),C(2),D(3) |
| Cat2 | A(1),B(1),C(2),D(2) |
+------------+---------------------+
Thank you in advance.
Upvotes: 1
Views: 1692
Reputation: 37852
You can first group by both columns (using count) and then group by Category only (using collect_list):
import org.apache.spark.sql.functions._
import spark.implicits._

val result = DF
  .groupBy("Category", "txn").count()                                            // count per (Category, txn) pair
  .groupBy("Category").agg(collect_list(struct("txn", "count")) as "txn_count")  // collect the (txn, count) pairs into one array per Category
result.show(false)
// prints:
// +--------+--------------------------------+
// |Category|txn_count |
// +--------+--------------------------------+
// |Cat2 |[[D, 2], [C, 2], [B, 1], [A, 1]]|
// |Cat1 |[[D, 3], [C, 2], [B, 1], [A, 3]]|
// +--------+--------------------------------+
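If you also want the exact "A(3),B(1),..." string shown in the question, one possible follow-up (a sketch, assuming Spark 3.0+ for the Scala transform/array_sort DSL; txn_count is the array column produced above) is to sort the structs and format them before joining:
val formatted = result.select(
  col("Category"),
  concat_ws(",",
    transform(
      array_sort(col("txn_count")),   // sort structs by txn (then count)
      s => concat(s.getField("txn"), lit("("), s.getField("count").cast("string"), lit(")"))
    )
  ) as "txn_count"
)
formatted.show(false)
// prints something like:
// +--------+-------------------+
// |Category|txn_count          |
// +--------+-------------------+
// |Cat2    |A(1),B(1),C(2),D(2)|
// |Cat1    |A(3),B(1),C(2),D(3)|
// +--------+-------------------+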
Upvotes: 5