Reputation: 4849
I am using pyspark.
I read a libsvm file, transpose it, and then save it again.
I save every data row as a LabeledPoint object with sparse features.
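Each row ends up as something like this (a simplified sketch; the vector size and values are made up):

from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint

# one transposed row: a label plus a sparse feature vector
# (10 features, non-zeros at indices 2 and 5 -- illustrative values only)
point = LabeledPoint(1.0, SparseVector(10, {2: 3.0, 5: 7.0}))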
I tried saving with MLUtils.saveAsLibSVMFile and then reading the files back with MLUtils.loadLibSVMFile, and I get the following error:
ValueError: could not convert string to float: [
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1055)
at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1029)
at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:969)
at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1029)
at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:760)
at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
I read on the MLUtils page that if you want to use loadLabeledPoints, you need to save the data using RDD.saveAsTextFile, but when I do this, I get:
17/08/10 16:55:51 WARN TaskSetManager: Lost task 1.0 in stage 1.0 (TID 3, 192.168.1.205, executor 0): org.apache.spark.SparkException: Cannot parse a double from: [
at org.apache.spark.mllib.util.NumericParser$.parseDouble(NumericParser.scala:120)
at org.apache.spark.mllib.util.NumericParser$.parseArray(NumericParser.scala:70)
at org.apache.spark.mllib.util.NumericParser$.parseTuple(NumericParser.scala:91)
at org.apache.spark.mllib.util.NumericParser$.parse(NumericParser.scala:41)
at org.apache.spark.mllib.regression.LabeledPoint$.parse(LabeledPoint.scala:62)
at org.apache.spark.mllib.util.MLUtils$$anonfun$loadLabeledPoints$1.apply(MLUtils.scala:195)
at org.apache.spark.mllib.util.MLUtils$$anonfun$loadLabeledPoints$1.apply(MLUtils.scala:195)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:121)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.to(SerDeUtil.scala:112)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toBuffer(SerDeUtil.scala:112)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.toArray(SerDeUtil.scala:112)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NumberFormatException: For input string: "["
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
at java.lang.Double.parseDouble(Double.java:538)
at org.apache.spark.mllib.util.NumericParser$.parseDouble(NumericParser.scala:117)
... 30 more
How can I save an RDD of LabeledPoints in libsvm format and then load it back from disk using pyspark?
Thanks
Upvotes: 0
Views: 261
Reputation: 4849
The issue was that writing the LabeledPoints to a file did not produce libsvm format, which made it hard to read back.
I solved it by creating the LabeledPoint in memory and, before writing it to a file, converting it to a libsvm-format string and writing that as text. Afterwards, I was able to read it back in libsvm format:
def pointToLibsvmRow(point):
    # the flat features array holds all indices followed by all values;
    # reshape to 2 rows and transpose to get (index, value) pairs
    s = point.features.reshape(2, -1, order="C").transpose().astype("str")
    # libsvm line: "<label> <index>:<value> <index>:<value> ..."
    pairs = [str(int(float(point.label)))] + ["%s:%s" % (str(int(float(a))), b) for a, b in s.tolist()]
    st = " ".join(pairs)
    return st
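For completeness, this is roughly how I use it; the points RDD, the output directory, and sc are placeholders for your own RDD of LabeledPoints, path, and SparkContext:

from pyspark.mllib.util import MLUtils

# points is an RDD[LabeledPoint] built in memory as described above
lines = points.map(pointToLibsvmRow)        # RDD of libsvm-formatted strings
lines.saveAsTextFile("/some/output/dir")    # plain text, one point per line

# later, read the directory back as an RDD[LabeledPoint]
data = MLUtils.loadLibSVMFile(sc, "/some/output/dir")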
Upvotes: 0