Salm

Reputation: 69

How to use group by for multiple columns with count?

I have a file called tags (UserId, MovieId, Tag) as input for the algorithm, and I convert it into a table with registerTempTable. The query val orderedId = sqlContext.sql("SELECT MovieId AS Id,Tag FROM tag ORDER BY MovieId") gives me a file consisting of Id, Tag as input for the second step, but val eachTagCount = orderedId.groupBy(" Id,Tag").count() throws an error:

case class DataClass( MovieId:Int,UserId: Int, Tag: String)
// Create an RDD of DataClass objects and register it as a table.
val Data = sc.textFile("file:///usr/local/spark/dataset/tagupdate").map(_.split(",")).map(p => DataClass(p(0).trim.toInt, p(1).trim.toInt, p(2).trim)).toDF()
Data.registerTempTable("tag")
val orderedId = sqlContext.sql("SELECT MovieId AS Id,Tag FROM tag ORDER BY MovieId")
orderedId.rdd
  .map(_.toSeq.map(_+"").reduce(_+","+_))
  .saveAsTextFile("/usr/local/spark/dataset/algorithm3/output")
val eachTagCount = orderedId.groupBy(" Id,Tag").count()
eachTagCount.rdd
 .map(_.toSeq.map(_+"").reduce(_+","+_))
 .saveAsTextFile("/usr/local/spark/dataset/algorithm3/output2")

Exception:

Caused by: org.apache.spark.sql.AnalysisException: Cannot resolve column name " Id,Tag" among (Id, Tag);
    at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:152)
    at org.apache.spark.sql.DataFrame$$anonfun$resolve$1.apply(DataFrame.scala:152)
    at scala.Option.getOrElse(Option.scala:121)

how to solve this error?

Upvotes: 1

Views: 747

Answers (1)

Rahul Sharma

Reputation: 36

Try this: val eachTagCount = orderedId.groupBy("Id", "Tag").count(). You are passing a single string for multiple columns; groupBy takes each column name as a separate argument, so "Id,Tag" (with or without the leading space) is looked up as one column name and cannot be resolved.
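For reference, a minimal sketch of both forms, assuming the orderedId DataFrame and the tag temp table registered in the question (column names Id and Tag come from the SELECT ... AS alias there):

```scala
// DataFrame API: each grouping column is a separate String argument.
val eachTagCount = orderedId.groupBy("Id", "Tag").count()

// Equivalent SQL against the registered temp table "tag":
val eachTagCountSql = sqlContext.sql(
  "SELECT MovieId AS Id, Tag, COUNT(*) AS count FROM tag GROUP BY MovieId, Tag")
```

Note also that the leading space in " Id,Tag" would by itself cause the same AnalysisException, since column names are matched exactly.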

Upvotes: 1

Related Questions