user2366149

Flatten an RDD in PySpark

I am trying to process data using PySpark. Following is my sample code:

rdd = sc.parallelize([[u'9', u'9', u'HF', u'63300001', u'IN HF', u'03/09/2004', u'9', u'HF'], [u'10', u'10', u'HF', u'63300001', u'IN HF', u'03/09/2004', u'9', u'HF']]) 

out = rdd.map(lambda l : (l[0:3],str(l[3]).zfill(8)[:4],l[4:]))

out.take(2)

[([u'9', u'9', u'HF'], '6330', [u'IN HF', u'03/09/2004', u'9', u'HF']), ([u'10', u'10', u'HF'], '6330', [u'IN HF', u'03/09/2004', u'9', u'HF'])]

Expected output:
[[u'9', u'9', u'HF', '6330', u'IN HF', u'03/09/2004', u'9', u'HF'], [u'10', u'10', u'HF', '6330', u'IN HF', u'03/09/2004', u'9', u'HF']]

Is there any method to flatten the RDD in Spark?

Upvotes: 2

Views: 1383

Answers (1)

zero323

Reputation: 330073

You don't need anything Spark-specific here. Something like this should be more than enough:

out = rdd.map(lambda l: l[0:3] + [str(l[3]).zfill(8)[:4]] + l[4:])
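As a quick sanity check (assuming the same sample rdd from your question), this should now give the flat lists you expect:

out.take(2)

[[u'9', u'9', u'HF', '6330', u'IN HF', u'03/09/2004', u'9', u'HF'], [u'10', u'10', u'HF', '6330', u'IN HF', u'03/09/2004', u'9', u'HF']]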

Destructuring inside a lambda could be more readable, though. I mean something like this:

rdd = sc.parallelize([(1, 2, 3), (4, 5, 6)])
rdd.map(lambda (x, y, z): (x, str(y).zfill(8), z))
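Note that tuple parameter unpacking in a lambda signature only works on Python 2; it was removed in Python 3 (PEP 3113). If you are on Python 3, a rough equivalent is a small named function that unpacks the row explicitly, for example:

def reformat(row):
    # unpack the tuple explicitly, since Python 3 lambdas can no longer do it
    x, y, z = row
    return (x, str(y).zfill(8), z)

rdd = sc.parallelize([(1, 2, 3), (4, 5, 6)])
rdd.map(reformat)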

Upvotes: 2
