Reputation: 79
I have a use case where I want to use a value from another dataset. For example:
Table 1: Items
Name  | Price
------|------
Apple | 10
Mango | 20
Grape | 30
Table 2: Item_Quantity
Name  | Quantity
------|---------
Apple | 5
Mango | 2
Grape | 2
I want to calculate total cost and prepare a final dataset.
Cost
Name  | Cost
------|----------
Apple | 50 (10*5)
Mango | 40 (20*2)
Grape | 60 (30*2)
How can I achieve this in Spark? I appreciate your help.
===================
I need help with this one too.
Table 1: Items
Name    | Code | Quantity
--------|------|---------
Apple-1 | APP  | 10
Mango-1 | MAN  | 20
Grape-1 | GRA  | 30
Apple-2 | APP  | 20
Mango-2 | MAN  | 30
Grape-2 | GRA  | 50
Table 2: Item_CODE_Price
Code | Price
-----|------
APP  | 5
MAN  | 2
GRA  | 2
I want to calculate the total cost, using Code to look up the price, and prepare a final dataset.
Cost
Name    | Cost
--------|-----------
Apple-1 | 50 (10*5)
Mango-1 | 40 (20*2)
Grape-1 | 60 (30*2)
Apple-2 | 100 (20*5)
Mango-2 | 60 (30*2)
Grape-2 | 100 (50*2)
Upvotes: 0
Views: 812
Reputation: 23109
You can join the two tables on Name and create the new column with withColumn, as below:
import spark.implicits._  // needed for toDF and the $ column syntax

val df1 = spark.sparkContext.parallelize(Seq(
  ("Apple", 10),
  ("Mango", 20),
  ("Grape", 30)
)).toDF("Name", "Price")

val df2 = spark.sparkContext.parallelize(Seq(
  ("Apple", 5),
  ("Mango", 2),
  ("Grape", 2)
)).toDF("Name", "Quantity")

// join on Name and create the new Cost column
val newDF = df1.join(df2, Seq("Name"))
  .withColumn("Cost", $"Price" * $"Quantity")

newDF.show(false)
Output:
+-----+-----+--------+----+
|Name |Price|Quantity|Cost|
+-----+-----+--------+----+
|Grape|30 |2 |60 |
|Mango|20 |2 |40 |
|Apple|10 |5 |50 |
+-----+-----+--------+----+
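If you want the final dataset to contain only Name and Cost, as in the question, you can select just those columns, for example:

// keep only the columns asked for in the question
newDF.select("Name", "Cost").show(false)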
For the second case, you just need to join on Code instead (with df1 and df2 now built from your second pair of tables) and drop the columns you don't want in the final result:
val newDF = df2.join(df1, Seq("Code"))
  .withColumn("Cost", $"Price" * $"Quantity")
  .drop("Code", "Price", "Quantity")
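For completeness, here is a minimal end-to-end sketch of the second case, assuming the column names from your second pair of tables (Name, Code, Quantity and Code, Price); the DataFrame names items, prices and costDF are just illustrative:

import spark.implicits._  // needed for toDF and the $ column syntax

val items = spark.sparkContext.parallelize(Seq(
  ("Apple-1", "APP", 10),
  ("Mango-1", "MAN", 20),
  ("Grape-1", "GRA", 30),
  ("Apple-2", "APP", 20),
  ("Mango-2", "MAN", 30),
  ("Grape-2", "GRA", 50)
)).toDF("Name", "Code", "Quantity")

val prices = spark.sparkContext.parallelize(Seq(
  ("APP", 5),
  ("MAN", 2),
  ("GRA", 2)
)).toDF("Code", "Price")

// join on Code, compute Cost, and keep only Name and Cost
val costDF = items.join(prices, Seq("Code"))
  .withColumn("Cost", $"Quantity" * $"Price")
  .select("Name", "Cost")

costDF.show(false)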
This example is in Scala; there won't be much difference if you need it in Java.
Hope this helps!
Upvotes: 1