jowwel93

Reputation: 203

Create labeledPoints from a Spark DataFrame using Pyspark

I have a Spark DataFrame with two columns, "label" and "sparse vector", obtained after applying CountVectorizer to a corpus of tweets.

When trying to train a Random Forest regressor model, I found that it accepts only the type LabeledPoint.

Does anyone know how to convert my Spark DataFrame to LabeledPoint?

Upvotes: 5

Views: 5511

Answers (1)

hamza tuna

Reputation: 1497

Which Spark version are you using? Newer Spark versions use spark.ml instead of mllib.

from pyspark.ml.feature import CountVectorizer
from pyspark.ml.classification import RandomForestClassifier
from pyspark.sql import functions as F

# Input data: each row is a bag of words with an ID.
df = sqlContext.createDataFrame([
    (0, "a b c".split(" ")),
    (1, "a b b c a".split(" "))
], ["id", "words"])

# Fit a CountVectorizerModel on the corpus.
cv = CountVectorizer(inputCol="words", outputCol="features", vocabSize=3, minDF=2.0)

model = cv.fit(df)

# Add a label column (a constant 0 here, just for the demo) and train.
result = model.transform(df).withColumn('label', F.lit(0))
rf = RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=10)
rf.fit(result)

If you insist on mllib:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import RandomForest

rdd = result \
          .rdd \
          .map(lambda row: LabeledPoint(row['label'], row['features'].toArray()))
# trainClassifier(data, numClasses, categoricalFeaturesInfo, numTrees)
RandomForest.trainClassifier(rdd, 2, {}, 3)

Upvotes: 6
