I have a Hive query that returns data in this format:
ip, category, score
1.2.3.4, X, 5
10.10.10.10, A, 2
1.2.3.4, Y, 2
12.12.12.12, G, 10
1.2.3.4, Z, 9
10.10.10.10, X, 3
In PySpark, I get this via hive_context.sql(my_query).rdd
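For context, the setup looks roughly like this (the actual table name behind my_query is just a placeholder here):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext()
hive_context = HiveContext(sc)
my_query = "SELECT ip, category, score FROM scores"  # hypothetical table name
rdd = hive_context.sql(my_query).rdd  # RDD of Row(ip=..., category=..., score=...)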
Each IP address can have multiple scores (hence multiple rows). I would like to get this data as a JSON array in the following format:
[
  {
    "ip": "1.2.3.4",
    "scores": [
      { "category": "X", "score": 5 },
      { "category": "Y", "score": 2 },
      { "category": "Z", "score": 9 }
    ]
  },
  {
    "ip": "10.10.10.10",
    "scores": [
      { "category": "A", "score": 2 },
      { "category": "X", "score": 3 }
    ]
  },
  {
    "ip": "12.12.12.12",
    "scores": [
      { "category": "G", "score": 10 }
    ]
  }
]
Note that the RDD isn't necessarily sorted and can easily contain a couple hundred million rows. I'm new to PySpark, so any pointers on how to go about this efficiently would help.
You can groupBy on ip and then transform the grouped RDD into what you need:
# group rows by ip, then build one dict per ip with its list of scores
rdd.groupBy(lambda r: r.ip).map(
    lambda g: {
        'ip': g[0],
        'scores': [{'category': x['category'], 'score': x['score']} for x in g[1]]
    }
).collect()
# [{'ip': '1.2.3.4', 'scores': [{'category': 'X', 'score': 5}, {'category': 'Y', 'score': 2}, {'category': 'Z', 'score': 9}]}, {'ip': '12.12.12.12', 'scores': [{'category': 'G', 'score': 10}]}, {'ip': '10.10.10.10', 'scores': [{'category': 'A', 'score': 2}, {'category': 'X', 'score': 3}]}]
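Note that collect() pulls everything back to the driver, which won't work for hundreds of millions of rows. A sketch of the more likely variant: serialize each group to a JSON string and write the result out in parallel instead (the output path below is just a placeholder):

import json

rdd.groupBy(lambda r: r.ip).map(
    lambda g: json.dumps({
        'ip': g[0],
        'scores': [{'category': x['category'], 'score': x['score']} for x in g[1]]
    })
).saveAsTextFile('/output/ip_scores')  # hypothetical path; writes one JSON object per line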
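Alternatively, a sketch using the DataFrame API with collect_list, which keeps the grouping on the JVM side instead of a Python-level groupBy (assumes Spark 1.6+ with Hive support; the output path is again a placeholder):

from pyspark.sql import functions as F

df = hive_context.sql(my_query)
df.groupBy('ip') \
  .agg(F.collect_list(F.struct('category', 'score')).alias('scores')) \
  .write.json('/output/ip_scores_df')  # hypothetical path; one {"ip": ..., "scores": [...]} per line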