Reputation: 439
Can anyone convert this very simple Scala code to Python?
val words = Array("one", "two", "two", "three", "three", "three")
val wordPairsRDD = sc.parallelize(words).map(word => (word, 1))
val wordCountsWithGroup = wordPairsRDD
.groupByKey()
.map(t => (t._1, t._2.sum))
.collect()
Upvotes: 3
Views: 10843
Reputation: 1117
Assuming you already have a Spark context defined and ready to go:
from operator import add
words = ["one", "two", "two", "three", "three", "three"]
wordCounts = sc.parallelize(words) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(add) \
    .collect()
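If you don't have a context yet, a minimal local setup looks like this (a sketch, assuming a standard pyspark install; the app name is arbitrary):
from pyspark import SparkContext
# Run Spark in-process on the local machine.
sc = SparkContext("local", "wordCount")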
Check out the GitHub examples repo: Python Examples
Upvotes: 2
Reputation: 1150
Two ways to translate it into Python:
from operator import add
wordsList = ["one", "two", "two", "three", "three", "three"]
# Variant 1: reduceByKey combines the counts per key
words = sc.parallelize(wordsList).map(lambda l: (l, 1)).reduceByKey(add).collect()
print(words)
# Variant 2: groupByKey, then sum each group of ones
words = sc.parallelize(wordsList).map(lambda l: (l, 1)).groupByKey().map(lambda t: (t[0], sum(t[1]))).collect()
print(words)
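For what it's worth, the same counts can be had without building (word, 1) pairs at all; countByValue returns the counts to the driver as a dict (a sketch reusing the same wordsList):
counts = sc.parallelize(wordsList).countByValue()
print(counts)  # e.g. defaultdict(<class 'int'>, {'one': 1, 'two': 2, 'three': 3})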
Upvotes: 2
Reputation: 439
Try this:
words = ["one", "two", "two", "three", "three", "three"]
wordPairsRDD = sc.parallelize(words).map(lambda word: (word, 1))
wordCountsWithGroup = wordPairsRDD \
    .groupByKey() \
    .map(lambda t: (t[0], sum(t[1]))) \
    .collect()
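collect() returns an ordinary Python list, so the result can be inspected directly (pair order may vary between runs):
print(wordCountsWithGroup)
# e.g. [('two', 2), ('one', 1), ('three', 3)]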
Upvotes: 5