DanMatlin

Reputation: 1252

Scala dataset map fails with exception No applicable constructor/method found for zero actual parameters

I have the following case classes

case class FeedbackData (prefix : String, position : Int, click : Boolean,
                         suggestion: Suggestion,
                         history : List[RequestHistory],
                         eventTimestamp: Long)

case class Suggestion (clicks : Long, sources : List[String], ctr : Float)

case class RequestHistory (timestamp: Long, url: String)

I use it to perform a map operation on my dataset

sqlContext = ss.sqlContext
import sqlContext.implicits._


val input: Dataset[FeedbackData] = ss.read.json("filename").as(Encoders.bean(classOf[FeedbackData]))

input.map(row => transformRow(row))

At runtime I see the exception

java.util.concurrent.ExecutionException: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 24, Column 81: failed to compile: 

No applicable constructor/method found for zero actual parameters; candidates are: "package.FeedbackData(java.lang.String, int, boolean, package.Suggestion, scala.collection.immutable.List, long)"

What am I doing wrong?

Upvotes: 0

Views: 645

Answers (2)

Felix Feng

Reputation: 321

Inspired by @pasha701, a working use case could be:

case class Student(id: Int, name: String)

import spark.implicits._

val df = Seq((1, "james"), (2, "tony")).toDF("id", "name")
df.printSchema()
df.as[Student].rdd
  .map(stu => stu.id + "\t" + stu.name)
  .collect()
  .foreach(println)

output:

root
  |-- id: integer (nullable = false)
  |-- name: string (nullable = true)

1   james
2   tony

Reference: https://spark.apache.org/docs/2.4.0/sql-getting-started.html

Upvotes: 0

pasha701

Reputation: 7207

The context is fine here; the issue is with the case class: the Scala Long has to be used instead of the Java long:

case class A(num1 : Long, num2 : Long, num3 : Long)
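For what it's worth, the "zero actual parameters" part of the message comes from Encoders.bean, which instantiates the target class through a zero-argument constructor and setters — something a case class does not have, since its only constructor takes all of its fields. A minimal plain-Scala check (reusing the Suggestion class from the question) that illustrates this, without needing a Spark session:

```scala
// Case classes generate only a constructor taking all fields; there is
// no zero-argument constructor for a bean-style encoder to call.
case class Suggestion(clicks: Long, sources: List[String], ctr: Float)

object ConstructorCheck {
  def main(args: Array[String]): Unit = {
    val ctors = classOf[Suggestion].getConstructors
    // The single public constructor takes 3 parameters.
    ctors.foreach(c => println(c.getParameterCount)) // prints 3
    // No zero-argument constructor exists, which mirrors the
    // "No applicable constructor/method found for zero actual parameters" error.
    println(ctors.exists(_.getParameterCount == 0)) // prints false
  }
}
```

Switching from Encoders.bean(classOf[FeedbackData]) to the implicit product encoder for case classes (import sqlContext.implicits._ followed by .as[FeedbackData]), as the other answer does, avoids the bean path entirely.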

Upvotes: 2
