tfirinci

Reputation: 159

Create a DataFrame from a dictionary using an RDD in PySpark

I have a dictionary named `Word_Counts` whose keys are words and whose values are the number of occurrences of each word in a text. My aim is to convert it to a DataFrame with two columns, `word` and `count`.

items = list(Word_Counts.items())[:5]
items

output:

[('Akdeniz’in', 14), ('en', 13287), ('büyük', 3168), ('deniz', 1276), ('festivali:', 6)]

When I used `sc.parallelize` to create an RDD, I realized that it dropped all the values and kept only the keys, so the table I create from it contains only keys. Please let me know how I can build a DataFrame from a dictionary using an RDD.

rdd1 = sc.parallelize(Word_Counts)
Df_Hur = spark.read.json(rdd1)
rdd1.take(5)

output:

['Akdeniz’in', 'en', 'büyük', 'deniz', 'festivali:']
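
(Iterating over a Python dict yields only its keys, which is why `sc.parallelize(Word_Counts)` produces an RDD of words with no counts; and since those bare strings are not valid JSON, `spark.read.json` flags each one as a `_corrupt_record`. A quick plain-Python check of the iteration behavior:)

```python
# Iterating a dict yields keys only; .items() yields (key, value) pairs.
word_counts = {'Akdeniz’in': 14, 'en': 13287}

print(list(word_counts))          # ['Akdeniz’in', 'en']
print(list(word_counts.items()))  # [('Akdeniz’in', 14), ('en', 13287)]
```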

Df_Hur.show(5)

output:

+---------------+
|_corrupt_record|
+---------------+
|     Akdeniz’in|
|             en|
|          büyük|
|          deniz|
|     festivali:|
+---------------+

My aim is:

word        count
Akdeniz’in  14
en          13287
büyük       3168
deniz       1276
festivali:  6

Upvotes: 2

Views: 1610

Answers (1)

RobinFrcd

Reputation: 5426

You can feed `word_count.items()` directly to `parallelize`, so the RDD holds (word, count) pairs instead of bare keys:

df_hur = sc.parallelize(word_count.items()).toDF(['word', 'count'])

df_hur.show()

>>>
+----------+-----+
|      word|count|
+----------+-----+
|Akdeniz’in|   14|
|        en|13287|
|     büyük| 3168|
|     deniz| 1276|
|festivali:|    6|
+----------+-----+

Upvotes: 2
