Thamali Wijewardhana

Reputation: 512

Apache Spark Decision Tree Predictions

I have the following code for classification using decision trees. I need to get the predictions for the test dataset into a Java array and print them. Can someone help me extend this code to do that? I need a 2D array of the predicted and actual labels, and to print the predicted labels.

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.tree.DecisionTree;
import org.apache.spark.mllib.tree.model.DecisionTreeModel;
import org.apache.spark.mllib.util.MLUtils;

import scala.Tuple2;

public class DecisionTreeClass {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("DecisionTreeClass").setMaster("local[2]");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);

        // Load and parse the data file.
        // A training example used in supervised learning is called a "labeled point" in MLlib.
        String datapath = "/home/thamali/Desktop/tlib.txt";
        JavaRDD<LabeledPoint> data = MLUtils.loadLibSVMFile(jsc.sc(), datapath).toJavaRDD();

        // Split the data into training and test sets (30% held out for testing).
        JavaRDD<LabeledPoint>[] splits = data.randomSplit(new double[]{0.7, 0.3});
        JavaRDD<LabeledPoint> trainingData = splits[0];
        JavaRDD<LabeledPoint> testData = splits[1];

        // Set parameters.
        // An empty categoricalFeaturesInfo indicates all features are continuous.
        Integer numClasses = 12;
        Map<Integer, Integer> categoricalFeaturesInfo = new HashMap<>();
        String impurity = "gini";
        Integer maxDepth = 5;
        Integer maxBins = 32;

        // Train a DecisionTree model for classification.
        final DecisionTreeModel model = DecisionTree.trainClassifier(trainingData, numClasses,
                categoricalFeaturesInfo, impurity, maxDepth, maxBins);

        // Evaluate the model on test instances and compute the test error.
        JavaPairRDD<Double, Double> predictionAndLabel =
                testData.mapToPair(new PairFunction<LabeledPoint, Double, Double>() {
                    @Override
                    public Tuple2<Double, Double> call(LabeledPoint p) {
                        return new Tuple2<>(model.predict(p.features()), p.label());
                    }
                });

        Double testErr =
                1.0 * predictionAndLabel.filter(new Function<Tuple2<Double, Double>, Boolean>() {
                    @Override
                    public Boolean call(Tuple2<Double, Double> pl) {
                        return !pl._1().equals(pl._2());
                    }
                }).count() / testData.count();

        System.out.println("Test Error: " + testErr);
        System.out.println("Learned classification tree model:\n" + model.toDebugString());
    }
}

Upvotes: 0

Views: 449

Answers (1)

Derek_M

Reputation: 1048

You basically have exactly that already in the predictionAndLabel variable. If you really need a list of 2D double arrays, you could change the method you use to:

JavaRDD<double[]> valuesAndPreds = testData.map(point -> new double[]{model.predict(point.features()), point.label()});

and then run collect on that RDD to get a list of double arrays, each holding a prediction and its label:

List<double[]> values = valuesAndPreds.collect();

I would take a look at the docs here: https://spark.apache.org/docs/latest/mllib-evaluation-metrics.html. You can also reshape the data to get additional statistical performance measurements of your model with classes like MulticlassMetrics. This requires changing the mapToPair call to a map call and changing the generics to Object. So something like:

JavaRDD<Tuple2<Object, Object>> valuesAndPreds = testData.map(point -> new Tuple2<>(model.predict(point.features()), point.label()));

Then running:

MulticlassMetrics multiclassMetrics = new MulticlassMetrics(JavaRDD.toRDD(valuesAndPreds));
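From there you can pull out individual measurements. As a minimal sketch (assuming the valuesAndPreds RDD from above, and with org.apache.spark.mllib.evaluation.MulticlassMetrics imported), something like:

MulticlassMetrics metrics = new MulticlassMetrics(JavaRDD.toRDD(valuesAndPreds));
// Standard MulticlassMetrics accessors: a per-class confusion matrix
// plus precision/recall averaged across classes, weighted by class frequency.
System.out.println("Confusion matrix:\n" + metrics.confusionMatrix());
System.out.println("Weighted precision: " + metrics.weightedPrecision());
System.out.println("Weighted recall: " + metrics.weightedRecall());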

All of this is very well documented in Spark's MLlib documentation. Also, you mentioned needing to print the results. If this is homework, I will let you figure that part out, since it would be a good exercise to learn how to do it from a list.

Edit:

Also, I noticed that you are using Java 7, while what I wrote above uses Java 8. To answer your main question of how to turn the results into a 2D double array in Java 7, you would do:

JavaRDD<double[]> valuesAndPreds = testData.map(
        new org.apache.spark.api.java.function.Function<LabeledPoint, double[]>() {
            @Override
            public double[] call(LabeledPoint point) {
                // First element: predicted label; second element: actual label.
                return new double[]{model.predict(point.features()), point.label()};
            }
        });

Then run collect to get a list of double arrays, each holding two doubles. Also, to give a hint on the printing part, take a look at java.util.Arrays.toString.
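To make that hint concrete, a minimal sketch of the printing step (assuming the Java 7 valuesAndPreds RDD above) might look like:

List<double[]> values = valuesAndPreds.collect();
for (double[] pair : values) {
    // pair[0] is the predicted label, pair[1] is the actual label.
    System.out.println(java.util.Arrays.toString(pair));
}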

Upvotes: 1
