Reputation: 2230
I am reading messages from a Kafka topic that has multiple partitions. Reading the messages works fine, but when I commit the offset range back to Kafka I get an error. I have tried my best and am not able to resolve this issue.
Code
object ParallelStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkHelper.getOrCreateSparkSession()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(10))
    spark.sparkContext.setLogLevel("WARN")

    val kafkaStream = {
      val kafkaParams = Map[String, Object](
        "bootstrap.servers" -> "localhost:9092",
        "key.deserializer" -> classOf[StringDeserializer],
        "value.deserializer" -> classOf[StringDeserializer],
        "group.id" -> "welcome3",
        "auto.offset.reset" -> "latest",
        "enable.auto.commit" -> (false: java.lang.Boolean)
      )
      val topics = Array("test2")
      val numPartitionsOfInputTopic = 2
      val streams = (1 to numPartitionsOfInputTopic) map { _ =>
        KafkaUtils.createDirectStream[String, String](
          ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams))
      }
      streams
    }

    // var offsetRanges = Array[OffsetRange]()
    kafkaStream.foreach(rdd => {
      rdd.foreachRDD(conRec => {
        val offsetRanges = conRec.asInstanceOf[HasOffsetRanges].offsetRanges
        conRec.foreach(str => {
          println(str.value())
          for (o <- offsetRanges) {
            println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
          }
        })
        kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
      })
    })

    println(" Spark parallel reader is ready !!!")
    ssc.start()
    ssc.awaitTermination()
  }
}
Error
18/03/19 21:21:30 ERROR JobScheduler: Error running job streaming job 1521512490000 ms.0
java.lang.ClassCastException: scala.collection.immutable.Vector cannot be cast to org.apache.spark.streaming.kafka010.CanCommitOffsets
at com.cts.ignite.inventory.core.ParallelStreamJob$$anonfun$main$1$$anonfun$apply$1.apply(ParallelStreamJob.scala:48)
at com.cts.ignite.inventory.core.ParallelStreamJob$$anonfun$main$1$$anonfun$apply$1.apply(ParallelStreamJob.scala:39)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
at org.a
Upvotes: 0
Views: 2278
Reputation: 2230
Change the line kafkaStream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges) to rdd.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges). The cast fails because kafkaStream is a Seq of streams, not a single stream.
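Applied to the code in the question, a minimal sketch looks like this (keeping the question's variable names, where the outer rdd is actually one of the direct streams in the Seq):

kafkaStream.foreach(rdd => {      // rdd is one DirectKafkaInputDStream from the Seq
  rdd.foreachRDD(conRec => {      // conRec is the RDD for the current batch
    val offsetRanges = conRec.asInstanceOf[HasOffsetRanges].offsetRanges
    conRec.foreach(str => println(str.value()))
    // commit against the stream that produced this batch, not against the Seq
    rdd.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  })
})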
Upvotes: 0
Reputation: 443
You can commit the offsets like this:
stream.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // some time later, after outputs have completed
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
In your case kafkaStream is a Seq of streams, so change your commit line accordingly, as in the sketch below.
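A minimal sketch, assuming the Seq of streams built in the question, where each batch's offsets are committed back to the stream it came from:

kafkaStream.foreach { stream =>   // each element of the Seq is its own direct stream
  stream.foreachRDD { rdd =>
    val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    // process the batch first, then commit its offsets
    rdd.foreach(record => println(record.value()))
    stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
  }
}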
Reference: https://spark.apache.org/docs/latest/streaming-kafka-0-10-integration.html
Upvotes: 0