Reputation: 45
I am using Spark/Scala to process a Hive table that contains transaction data for each member, and I need to get the record with the latest activation date for each member. I did this with the code below and it works, but the performance is not good.
Is there another way to improve the performance of this code? I found some ways to do it with Spark SQL, but I would prefer to stay with the DataFrame or Dataset API.
The example below reproduces my code and my data.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions.max

val mamberData = Seq(
  Row("1234", "CX", java.sql.Timestamp.valueOf("2018-09-09 00:00:00")),
  Row("1234", "CX", java.sql.Timestamp.valueOf("2018-03-02 00:00:00")),
  Row("5678", "NY", java.sql.Timestamp.valueOf("2019-01-01 00:00:00")),
  Row("5678", "NY", java.sql.Timestamp.valueOf("2018-01-01 00:00:00")),
  Row("7088", "SF", java.sql.Timestamp.valueOf("2018-09-01 00:00:00"))
)

val MemberDataSchema = List(
  StructField("member_id", StringType, nullable = true),
  StructField("member_state", StringType, nullable = true),
  StructField("activation_date", TimestampType, nullable = true)
)

import spark.implicits._

val memberDF = spark.createDataFrame(
  spark.sparkContext.parallelize(mamberData),
  StructType(MemberDataSchema)
)

val memberDfMaxDate = memberDF
  .groupBy('member_id)
  .agg(max('activation_date).as("activation_date"))

val memberDFMaxOnly = memberDF.join(memberDfMaxDate, Seq("member_id", "activation_date"))
The output is below (memberDF first, then memberDFMaxOnly):
+---------+------------+-------------------+
|member_id|member_state|activation_date |
+---------+------------+-------------------+
|1234 |CX |2018-09-09 00:00:00|
|1234 |CX |2018-03-02 00:00:00|
|5678 |NY |2019-01-01 00:00:00|
|5678 |NY |2018-01-01 00:00:00|
|7088 |SF |2018-09-01 00:00:00|
+---------+------------+-------------------+
+---------+-------------------+------------+
|member_id| activation_date|member_state|
+---------+-------------------+------------+
| 7088|2018-09-01 00:00:00| SF|
| 1234|2018-09-09 00:00:00| CX|
| 5678|2019-01-01 00:00:00| NY|
+---------+-------------------+------------+
Upvotes: 3
Views: 4863
Reputation: 27373
DataFrame's groupBy is as efficient as it gets (more efficient than window functions, thanks to partial aggregation).
But you can avoid the join by using a struct inside the aggregation clause:
import org.apache.spark.sql.functions.{max, struct}

val memberDfMaxOnly = memberDF
  .groupBy('member_id)
  .agg(max(struct('activation_date, 'member_state)).as("row_selection"))
  .select(
    $"member_id",
    $"row_selection.activation_date",
    $"row_selection.member_state"
  )
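If you are on Spark 3.x, the same single-pass aggregation can also be written with the SQL aggregate function max_by, which picks the member_state from the row with the latest activation_date without packing a struct. This is a minimal sketch of my own, not part of the original answer, and assumes Spark 3.0+:

import org.apache.spark.sql.functions.{expr, max}

// Sketch assuming Spark 3.0+ (max_by is available as a SQL aggregate there):
// keep the latest activation_date plus the member_state from that same row.
val memberDfMaxOnlyAlt = memberDF
  .groupBy('member_id)
  .agg(
    max('activation_date).as("activation_date"),
    expr("max_by(member_state, activation_date)").as("member_state")
  )

// Either variant can be checked with explain(); the physical plan should show
// a partial aggregate before the shuffle exchange, i.e. the partial aggregation
// mentioned above.
memberDfMaxOnlyAlt.explain()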
Upvotes: 1
Reputation: 3354
Use a window function to assign a rank within each group and keep only the first row of each group.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.rank

// Partition by member_id, order by activation_date descending
val byMemberId = Window.partitionBy($"member_id").orderBy($"activation_date".desc)

// Apply the window function and keep only the top-ranked row per member
val memberDFMaxOnly = memberDF
  .withColumn("rank", rank().over(byMemberId))
  .where($"rank" === 1)
  .drop("rank")

// View the results
memberDFMaxOnly.show()
+---------+------------+-------------------+
|member_id|member_state| activation_date|
+---------+------------+-------------------+
| 1234| CX|2018-09-09 00:00:00|
| 5678| NY|2019-01-01 00:00:00|
| 7088| SF|2018-09-01 00:00:00|
+---------+------------+-------------------+
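Note that rank() keeps every row that ties on the maximum activation_date, so a member can still produce more than one row. If you want exactly one row per member, row_number() breaks ties arbitrarily; here is a minimal sketch of that variation (my addition, not part of the original answer):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// row_number() numbers the rows within each partition uniquely, so exactly
// one row per member survives the filter even when activation_date ties.
val byMemberIdDesc = Window.partitionBy($"member_id").orderBy($"activation_date".desc)

val memberDFSingleMax = memberDF
  .withColumn("rn", row_number().over(byMemberIdDesc))
  .where($"rn" === 1)
  .drop("rn")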
Upvotes: 1
Reputation: 1590
You could use several techniques, for example ranking or the Dataset API. I prefer reduceGroups because it is functional in style and easy to interpret.
import org.apache.spark.sql.Dataset

case class MemberDetails(member_id: String, member_state: String, activation_date: java.sql.Timestamp)

val dataDS: Dataset[MemberDetails] = spark.createDataFrame(
    spark.sparkContext.parallelize(mamberData),
    StructType(MemberDataSchema)
  )
  .as[MemberDetails]
  .groupByKey(_.member_id)
  .reduceGroups((r1, r2) => if (r1.activation_date.after(r2.activation_date)) r1 else r2)
  .map { case (_, row) => row }

dataDS.show(truncate = false)
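Since the real data lives in a Hive table, the same typed pipeline can be pointed at the table directly, provided its column names and types match the case class. The table name below is hypothetical, and this is only a sketch of the idea:

// "my_db.member_transactions" is a placeholder; replace it with the actual Hive table.
// Requires import spark.implicits._ for the MemberDetails encoder.
val hiveMaxOnly = spark.table("my_db.member_transactions")
  .as[MemberDetails]
  .groupByKey(_.member_id)
  .reduceGroups((r1, r2) => if (r1.activation_date.after(r2.activation_date)) r1 else r2)
  .map { case (_, row) => row }

hiveMaxOnly.show(truncate = false)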
Upvotes: 3