Anshul Kalra

Reputation: 198

Convert a JavaRDD String to JavaRDD Vector

I'm trying to load a CSV file as a JavaRDD<String> and then want to get the data into a JavaRDD<Vector>.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.mllib.feature.HashingTF;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.stat.MultivariateStatisticalSummary;
import org.apache.spark.mllib.stat.Statistics;

import breeze.collection.mutable.SparseArray;
import scala.collection.immutable.Seq;
import java.util.List;




public class Trial {
    public void start() throws InstantiationException, IllegalAccessException,
    ClassNotFoundException {

        run();
    }


    private void run() {
        SparkConf conf = new SparkConf().setAppName("csvparser");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        JavaRDD<String> data = jsc.textFile("C:/Users/kalraa2/Documents/trial.csv");
        JavaRDD<Vector> datamain = data.flatMap(null);
        MultivariateStatisticalSummary mat = Statistics.colStats(datamain.rdd());

        System.out.println(mat.mean());


    }

    private List<Vector> Seq(Vector dv) {
        // TODO Auto-generated method stub
        return null;
    }


    public static void main(String[] args) throws Exception {

        Trial trial = new Trial();
        trial.start();
    }
}

The program runs without any error, but I don't get any output when I run it on the Spark machine. Can anyone tell me whether the conversion of the String RDD to a Vector RDD is correct?

My CSV file consists of only one column, which contains floating-point numbers.

Upvotes: 1

Views: 2813

Answers (3)

Brad

Reputation: 15879

Assuming your trial.csv file looks like this

1.0
2.0
3.0

Taking the original code from your question, only a one-line change is required with Java 8:

SparkConf conf = new SparkConf().setAppName("csvparser").setMaster("local");
JavaSparkContext jsc = new JavaSparkContext(conf);
JavaRDD<String> data = jsc.textFile("C:/Users/kalraa2/Documents/trial.csv");
JavaRDD<Vector> datamain = data.map(s -> Vectors.dense(Double.parseDouble(s)));
MultivariateStatisticalSummary mat = Statistics.colStats(datamain.rdd());

System.out.println(mat.mean());

Prints 2.0

Upvotes: 0

Anshul Kalra

Reputation: 198

I solved it by changing the code to this:

JavaRDD<Vector> datamain = data.map(new Function<String, Vector>() {
    public Vector call(String s) {
        String[] sarray = s.trim().split("\\r?\\n");
        double[] values = new double[sarray.length];
        for (int i = 0; i < sarray.length; i++) {
            values[i] = Double.parseDouble(sarray[i]);
            System.out.println(values[i]);
        }
        return Vectors.dense(values);
    }
});
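Note that `textFile` already delivers one record per line, so the split on `"\\r?\\n"` always yields a single element here; it works only because the file has one value per line. For a CSV with several comma-separated columns, the same loop would split on commas instead. A minimal non-Spark sketch of that per-row parsing step (the class and method names are hypothetical, not from the question):

```java
import java.util.Arrays;

// Hypothetical sketch: parse one comma-separated CSV row into a double[],
// which could then be wrapped with Vectors.dense(values) inside the map call.
public class CsvRowParser {
    static double[] parseRow(String line) {
        String[] sarray = line.trim().split(",");
        double[] values = new double[sarray.length];
        for (int i = 0; i < sarray.length; i++) {
            values[i] = Double.parseDouble(sarray[i]);
        }
        return values;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseRow("1.0,2.5,3.0")));
    }
}
```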

Upvotes: 0

Marek Dudek

Reputation: 171

The null in this flatMap invocation might be a problem:

JavaRDD<Vector> datamain = data.flatMap(null);

Upvotes: 1
