doomdaam

Reputation: 783

How to correctly join two dataframes in Spark

Given these datasets:

productsMetadataDF

{'asin': '0006428320', 'title': 'Six Sonatas For Two Flutes Or Violins, Volume 2 (#4-6)', 'price': 17.95, 'imUrl': 'http://ecx.images-amazon.com/images/I/41EpRmh8MEL._SY300_.jpg', 'salesRank': {'Musical Instruments': 207315}, 'categories': [['Musical Instruments', 'Instrument Accessories', 'General Accessories', 'Sheet Music Folders']]}

productsRatingsDF

{"reviewerID": "AORCXT2CLTQFR", "asin": "0006428320", "reviewerName": "Justo Roteta", "helpful": [0, 0], "overall": 4.0, "summary": "Not a classic but still a good album from Yellowman.", "unixReviewTime": 1383436800, "reviewTime": "11 3, 2013"}

and this function:

def findProductFeatures(productsRatingsDF : DataFrame, productsMetadataDF : DataFrame) : DataFrame = {
    productsRatingsDF
      .withColumn("averageRating", avg("overall"))
      .join(productsMetadataDF,"asin")
      .select($"asin", $"categories", $"price", $"averageRating")
  }

Would this be the correct way to join these two datasets, based on the asin?

Here's the error I get:

Exception in thread "main" org.apache.spark.sql.AnalysisException: grouping expressions sequence is empty, and '`asin`' is not an aggregate function. Wrap '(avg(`overall`) AS `averageRating`)' in windowing function(s) or wrap '`asin`' in first() (or first_value) if you don't care which value you get.;;
Aggregate [asin#6, helpful#7, overall#8, reviewText#9, reviewTime#10, reviewerID#11, reviewerName#12, summary#13, unixReviewTime#14L, avg(overall#8) AS averageRating#99]
+- Relation[asin#6,helpful#7,overall#8,reviewText#9,reviewTime#10,reviewerID#11,reviewerName#12,summary#13,unixReviewTime#14L] json

Do I understand the error correctly? Is there an error in the way I join? I tried changing the order of the .withColumn and .join calls, but it didn't work. There also seems to be an error when I try to put avg("overall") into a column, based on the asin number.

The end result should be a dataframe with four columns: "asin", "categories", "price", and "averageRating".

Upvotes: 1

Views: 70

Answers (1)

Raphael Roth

Reputation: 27373

The problem seems to be:

.withColumn("averageRating", avg("overall"))

Do a proper aggregation before joining:

df
  .groupBy("asin") // your grouping columns
  .agg(avg("overall").as("averageRating"))
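avg is an aggregate function, so without a groupBy (or a window) Spark has nothing to group over, which is exactly what the AnalysisException is complaining about. For reference, a minimal sketch of how the full function could look with the aggregation pulled into a groupBy before the join; column names are taken from the question, and the intermediate avgRatingsDF name is just for illustration:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.avg

def findProductFeatures(productsRatingsDF: DataFrame,
                        productsMetadataDF: DataFrame): DataFrame = {
  // first compute the average rating per product ...
  val avgRatingsDF = productsRatingsDF
    .groupBy("asin")
    .agg(avg("overall").as("averageRating"))

  // ... then join the per-product averages with the metadata on asin
  avgRatingsDF
    .join(productsMetadataDF, "asin")
    .select("asin", "categories", "price", "averageRating")
}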

Upvotes: 1
