Reputation: 797
I'm writing a Spark (v1.6.0) batch job which reads from a Kafka topic.
For this I can use org.apache.spark.streaming.kafka.KafkaUtils#createRDD, but then I need to set the offsets for all the partitions and also store them somewhere (ZooKeeper? HDFS?) so that the next batch job knows where to start.
What is the right approach to read from Kafka in a batch job?
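Something like this is what I have in mind (broker list, topic name, and offset values are just placeholders):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

val sc = new SparkContext(new SparkConf().setAppName("kafka-batch-read"))

// The direct (simple consumer) API needs the broker list, not ZK
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")

// One OffsetRange per partition; these from/until offsets are exactly
// what I would have to compute and persist between runs
val offsetRanges = Array(
  OffsetRange("my-topic", 0, 0L, 100L),
  OffsetRange("my-topic", 1, 0L, 100L)
)

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, kafkaParams, offsetRanges)
```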
I'm also thinking about writing a streaming job instead, which reads with auto.offset.reset=smallest, saves its checkpoint to HDFS, and then starts from that checkpoint on the next run.
But in that case, how can I fetch just once and stop the streaming job after the first batch?
Upvotes: 7
Views: 8364
Reputation: 611
createRDD is the right approach for reading a batch from Kafka.
To query for info about the latest / earliest available offsets, look at the KafkaCluster.scala methods getLatestLeaderOffsets and getEarliestLeaderOffsets. That class was private, but should be public in the latest versions of Spark.
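A rough sketch of how these pieces fit together (broker list and topic name are placeholders, and error handling on the Either results is left out):

```scala
import kafka.common.TopicAndPartition
import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaCluster, KafkaUtils, OffsetRange}

val sc = new SparkContext(new SparkConf().setAppName("kafka-batch"))

val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
val kc = new KafkaCluster(kafkaParams)

// Find the topic's partitions, then ask the leaders for their offset bounds
val partitions: Set[TopicAndPartition] = kc.getPartitions(Set("my-topic")).right.get
val earliest = kc.getEarliestLeaderOffsets(partitions).right.get
val latest   = kc.getLatestLeaderOffsets(partitions).right.get

// Read everything currently available; to resume later, replace `earliest`
// with the offsets you persisted (e.g. in ZK or HDFS) at the end of the last run
val offsetRanges: Array[OffsetRange] = partitions.toArray.map { tp =>
  OffsetRange(tp.topic, tp.partition, earliest(tp).offset, latest(tp).offset)
}

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, kafkaParams, offsetRanges)

// Persist the `latest` offsets somewhere durable so the next batch starts from them
```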
Upvotes: 4