muktadiur

Reputation: 439

Convert Apache Spark Scala code to Python

Can anyone convert this very simple Scala code to Python?

val words = Array("one", "two", "two", "three", "three", "three")
val wordPairsRDD = sc.parallelize(words).map(word => (word, 1))

val wordCountsWithGroup = wordPairsRDD
    .groupByKey()
    .map(t => (t._1, t._2.sum))
    .collect()

Upvotes: 3

Views: 10843

Answers (3)

Ahmad Ragab

Reputation: 1117

Assuming you already have a Spark context defined and ready to go:

from operator import add

words = ["one", "two", "two", "three", "three", "three"]
# Wrap the chain in parentheses so it can span multiple lines
wordCounts = (sc.parallelize(words)
              .map(lambda word: (word, 1))
              .reduceByKey(add)
              .collect())

Check out the GitHub examples repo: Python Examples

Upvotes: 2

Junayy

Reputation: 1150

Two ways to translate it into Python:

from operator import add

wordsList = ["one", "two", "two", "three", "three", "three"]

# Approach 1: reduceByKey combines counts per key before shuffling
words = sc.parallelize(wordsList).map(lambda l: (l, 1)).reduceByKey(add).collect()
print(words)

# Approach 2: groupByKey gathers all values per key, then sums each group
words = sc.parallelize(wordsList).map(lambda l: (l, 1)).groupByKey().map(lambda t: (t[0], sum(t[1]))).collect()
print(words)
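As a sanity check, the two approaches can be mirrored in plain Python without a Spark cluster, assuming only the standard library. This sketch imitates reduceByKey with a Counter (per-key combining) and groupByKey with itertools.groupby (gather, then sum), and both yield the same counts:

```python
from collections import Counter
from itertools import groupby

words = ["one", "two", "two", "three", "three", "three"]

# reduceByKey-style: combine counts per key as we go
reduce_style = sorted(Counter(words).items())

# groupByKey-style: gather all (word, 1) pairs per key, then sum each group
pairs = sorted((w, 1) for w in words)
group_style = [(k, sum(v for _, v in g)) for k, g in groupby(pairs, key=lambda p: p[0])]

print(reduce_style)  # [('one', 1), ('three', 3), ('two', 2)]
print(group_style)   # [('one', 1), ('three', 3), ('two', 2)]
```

In Spark the distinction matters for performance: reduceByKey combines values on each partition before the shuffle, while groupByKey ships every (word, 1) pair across the network first.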

Upvotes: 2

Soroosh Sarabadani

Reputation: 439

Try this:

words = ["one", "two", "two", "three", "three", "three"]
wordPairsRDD = sc.parallelize(words).map(lambda word: (word, 1))

# Parentheses are needed for the multi-line chain in Python
wordCountsWithGroup = (wordPairsRDD
    .groupByKey()
    .map(lambda t: (t[0], sum(t[1])))
    .collect())

Upvotes: 5
