Pilar

Reputation: 77

Impossible to ingest data into Solr with Kafka

I am trying to automatically ingest data from Kafka into Solr and Banana, but it fails with the following error.

The error occurs in the "convert to SolrDocuments" step:

java.lang.NumberFormatException: For input string: "2007 "
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Integer.valueOf(Integer.java:766)
    at com.example.streaming.EventParseUtil.convertData(EventParseUtil.java:24)
    at com.example.streaming.CarEventsProcessor.lambda$main$91ca40fe$1(CarEventsProcessor.java:76)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1015)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:30)
    at com.lucidworks.spark.SolrSupport$5.call(SolrSupport.java:216)
    at com.lucidworks.spark.SolrSupport$5.call(SolrSupport.java:210)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1.apply(JavaRDDLike.scala:225)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$35.apply(RDD.scala:927)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$35.apply(RDD.scala:927)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1857)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1857)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:247)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
18/10/06 01:10:08 ERROR executor.Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.NumberFormatException: For input string: "2007 "
    (same stack trace as above)
18/10/06 01:10:08 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NumberFormatException: For input string: "2007 "
    (same stack trace as above)
18/10/06 01:10:08 ERROR spark.SolrSupport: Send batch to collection connectedCarData failed due to: org.apache.solr.common.SolrException: Collection not found: connectedCarData

I attach the complete code below.

Does anyone have any idea what the failure might be?

import com.fasterxml.jackson.core.JsonParseException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.scala.DefaultScalaModule;
import com.lucidworks.spark.SolrSupport;
import kafka.serializer.StringDecoder;
import org.apache.solr.common.SolrInputDocument;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;
import scala.Tuple2;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;

public class CarEventsProcessor {

    private CarEventsProcessor() {}

    public static void main(String[] args) throws JsonParseException, JsonMappingException, IOException {
        if (args.length < 4) {
            System.err
                .println("Usage: CarEventsProcessor <brokers> <topics> <zk_url> <index_name>\n" +
                    "  <brokers> is a list of one or more Kafka brokers\n" +
                    "  <topics> is a list of one or more Kafka topics to consume from\n" +
                    "  <zk_url> ZooKeeper URL\n" +
                    "  <index_name> name of the Solr index\n\n");
            System.exit(1);
        }

        String brokers = args[0];
        String topics = args[1];
        String zk_url = args[2];
        String index_name = args[3];

        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.registerModule(new DefaultScalaModule());

        // Create context with a 60-second batch interval
        SparkConf sparkConf = new SparkConf()
            .setAppName("CarEventsProcessor");
        sparkConf.setMaster("local[4]");

        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(60));
        jssc.sparkContext().setLogLevel("ERROR");

        HashSet<String> topicsSet = new HashSet<>(Arrays.asList(topics.split(",")));
        HashMap<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("metadata.broker.list", brokers);

        // Create direct Kafka stream with brokers and topics
        JavaPairInputDStream<String, String> messages = KafkaUtils.createDirectStream(jssc, String.class, String.class,
            StringDecoder.class, StringDecoder.class, kafkaParams, topicsSet);

        // Get the messages and extract the payload
        JavaDStream<String> events = messages
            .map(new Function<Tuple2<String, String>, String>() {
                @Override
                public String call(Tuple2<String, String> tuple2) {
                    return tuple2._2();
                }
            });

        // Convert to SolrDocuments
        JavaDStream<SolrInputDocument> parsedSolrEvents = events.map(incomingRecord -> EventParseUtil.convertData(incomingRecord));

        // Send to Solr
        SolrSupport.indexDStreamOfDocs(zk_url, index_name, 10, parsedSolrEvents);

        parsedSolrEvents.print();
        jssc.start();
        jssc.awaitTermination();
    }
}

Upvotes: 0

Views: 471

Answers (1)

OneCricketeer

Reputation: 191738

NumberFormatException: For input string: "2007 "... at com.example.streaming.EventParseUtil.convertData(EventParseUtil.java:24)

You've called Integer.parseInt on a string that contains trailing whitespace ("2007 ").

You need to trim the string before passing it to that method.
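
For example, a minimal sketch of what the parsing step inside EventParseUtil.convertData might look like with the fix applied. The record's field layout and the Solr field name here are assumptions, since that class is not shown in the question:

    import org.apache.solr.common.SolrInputDocument;

    public class EventParseUtil {

        // Hypothetical sketch: the real layout of the incoming record is an
        // assumption, since EventParseUtil is not shown in the question.
        public static SolrInputDocument convertData(String incomingRecord) {
            String[] fields = incomingRecord.split(",");

            SolrInputDocument doc = new SolrInputDocument();
            // Integer.valueOf("2007 ") throws NumberFormatException;
            // trimming first ("2007 ".trim() -> "2007") avoids it.
            doc.setField("year_i", Integer.valueOf(fields[0].trim()));
            return doc;
        }
    }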

And at the bottom of the error, you have a Collection not found exception: the connectedCarData collection does not exist in Solr, so the batch cannot be indexed until you create it.
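
Assuming you are running SolrCloud (the code passes a ZooKeeper URL to SolrSupport), one way to create the missing collection before starting the job is the control script that ships with Solr:

    bin/solr create -c connectedCarData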


In general, the HDP-recommended way to move this data between Kafka and Solr would be to use Apache NiFi.

Upvotes: 0
