AnmolDave

Reputation: 445

Combine two rows in Spark based on a condition in PySpark

I have input records in the following format (screenshot: "Input data format").

I want the data to be transformed into the following format (screenshot: "Output data format").

I want to combine the two rows into one based on the value of the Type field.

As far as I understand, I need to build a composite key from the three data fields and then combine the two rows whose keys are equal by comparing their Type fields.

Can someone please help me with the implementation in Spark using Python?

EDIT: The following is my attempt using RDDs in PySpark:

# load the CSV into an RDD of rows
record = spark.read.csv("wasb:///records.csv", header=True).rdd
print("Total records: %d" % record.count())
# count distinct values of the fields that make up the composite key
private_ip = record.map(lambda fields: fields[2]).distinct().count()
private_port = record.map(lambda fields: fields[3]).distinct().count()
destination_ip = record.map(lambda fields: fields[6]).distinct().count()
destination_port = record.map(lambda fields: fields[7]).distinct().count()
print("private_ip:%d, private_port:%d, destination_ip:%d, destination_port:%d" % (private_ip, private_port, destination_ip, destination_port))
# group by the composite key and join the timestamps of matching rows
types = record.map(lambda fields: ((fields[2], fields[3], fields[6], fields[7]), fields[0])).reduceByKey(lambda a, b: a + ',' + b)
print(types.first())

And the following is my output so far:

((u'100.79.195.101', u'54835', u'58.96.162.33', u'80'), u'22-02-2016 13:11:03,22-02-2016 13:13:53')
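From here I think I could split the joined timestamps into separate start/end fields like this (assuming each composite key has exactly two timestamps in chronological order), but I'm not sure this is the right approach:

# split the comma-joined timestamps of each composite key into start/end fields
sessions = types.map(lambda kv: (
    kv[0][0],                 # private_ip
    kv[0][1],                 # private_port
    kv[0][2],                 # destination_ip
    kv[0][3],                 # destination_port
    kv[1].split(',')[0],      # first (start) timestamp
    kv[1].split(',')[-1],     # last (end) timestamp
))
print(sessions.first())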

Upvotes: 4

Views: 14347

Answers (2)

Prem

Reputation: 11975

Hope this helps!
(Edit note: tweaked code after getting the updated requirement)

import pyspark.sql.functions as func
#create RDD
rdd = sc.parallelize([(22,'C','xxx','yyy','zzz'),(23,'D','xxx','yyy','zzz'),(24,'C','xxx1','yyy1','zzz1'),(25,'D','xxx1','yyy1','zzz1')])

#convert RDD to dataframe
df = rdd.toDF(['Date','Type','Data1','Data2','Data3'])
df.show()

#group by 3 data columns to create list of date & type
df1 = df.sort("Data1", "Data2", "Data3", "Type") \
    .groupBy("Data1", "Data2", "Data3") \
    .agg(func.collect_list("Type"), func.collect_list("Date")) \
    .withColumnRenamed("collect_list(Type)", "Type_list") \
    .withColumnRenamed("collect_list(Date)", "Date_list")
#add 2 new columns by splitting above date list based on type list's value
df2 = df1.where((func.col("Type_list")[0] == 'C') & (func.col("Type_list")[1] == 'D')) \
    .withColumn("Start Date", df1.Date_list[0]) \
    .withColumn("End Date", df1.Date_list[1])
#select only relevant columns as an output
df2.select("Data1", "Data2", "Data3", "Start Date", "End Date").show()
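If ordering matters, a possible variation (just a sketch, not part of the answer above): since collect_list does not strictly guarantee element order after a shuffle, the (Type, Date) pairs can be collected as structs and sorted with sort_array, so the 'C'/'D' split no longer depends on the pre-sort:

#collect (Type, Date) structs and sort them, so 'C' sorts before 'D' within each key
df1b = df.groupBy("Data1", "Data2", "Data3") \
    .agg(func.sort_array(func.collect_list(func.struct("Type", "Date"))).alias("pairs"))
#keep keys that have a 'C' followed by a 'D' and split the pairs into start/end columns
df2b = df1b.where((func.col("pairs")[0]["Type"] == 'C') & (func.col("pairs")[1]["Type"] == 'D')) \
    .withColumn("Start Date", func.col("pairs")[0]["Date"]) \
    .withColumn("End Date", func.col("pairs")[1]["Date"])
df2b.select("Data1", "Data2", "Data3", "Start Date", "End Date").show()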



Alternative solution using RDDs:
(Edit note: added below snippet as @AnmolDave is interested in RDD solution as well)

import pyspark.sql.types as typ
rdd = sc.parallelize([('xxx','yyy','zzz','C',22),('xxx','yyy','zzz','D',23),('xxx1','yyy1','zzz1','C', 24),('xxx1','yyy1','zzz1','D', 25)])
# build ((Data1, Data2, Data3), [(Type, Date)]) pairs, merge the lists per key,
# sort each list by Type so 'C' comes before 'D', then flatten into output rows
# (assumes every key has at least a 'C' and a 'D' record)
reduced = rdd.map(lambda row: ((row[0], row[1], row[2]), [(row[3], row[4])]))\
    .reduceByKey(lambda x, y: x + y)\
    .map(lambda row: (row[0], sorted(row[1], key=lambda text: text[0])))\
    .map(lambda row: (
            row[0][0],
            row[0][1],
            row[0][2],
            ','.join([str(e[0]) for e in row[1]]),
            str(row[1][0][1]),   # start date, cast to string to match the schema below
            str(row[1][1][1])    # end date, cast to string to match the schema below
        )
    )\
    .filter(lambda row: row[3] == "C,D")   # keep only keys with both 'C' and 'D'

schema_red = typ.StructType([
        typ.StructField('Data1', typ.StringType(), False),
        typ.StructField('Data2', typ.StringType(), False),
        typ.StructField('Data3', typ.StringType(), False),
        typ.StructField('Type', typ.StringType(), False),
        typ.StructField('Start Date', typ.StringType(), False),
        typ.StructField('End Date', typ.StringType(), False)
    ])

df_red = sqlContext.createDataFrame(reduced, schema_red)
df_red.show()
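With the sample data above, df_red.show() should print something along these lines (row order may vary):

+-----+-----+-----+----+----------+--------+
|Data1|Data2|Data3|Type|Start Date|End Date|
+-----+-----+-----+----+----------+--------+
|  xxx|  yyy|  zzz| C,D|        22|      23|
| xxx1| yyy1| zzz1| C,D|        24|      25|
+-----+-----+-----+----+----------+--------+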

Upvotes: 7

koiralo

Reputation: 23119

Here is a simple example of what you want. The code is in Scala; I hope you can translate it to Python.

//create dummy data
val df = Seq((22, "C", "xxx","yyy","zzz"), (23, "C", "xxx","yyy","zzz")).toDF("Date", "Type", "Data1", "Data2", "Data3")

+----+----+-----+-----+-----+
|Date|Type|Data1|Data2|Data3|
+----+----+-----+-----+-----+
|  22|   C|  xxx|  yyy|  zzz|
|  23|   C|  xxx|  yyy|  zzz|
+----+----+-----+-----+-----+

//group by three fields and collect as list for column Date
val df1 = df.groupBy("Data1", "Data2", "Data3").agg(collect_list($"Date").as("Date"))
+-----+-----+-----+--------+
|Data1|Data2|Data3|    Date|
+-----+-----+-----+--------+
|  xxx|  yyy|  zzz|[22, 23]|
+-----+-----+-----+--------+


//create new column with the given array of date
df1.withColumn("Start Date", $"Date"(0)).withColumn("End Date", $"Date"(1)).show
+-----+-----+-----+--------+----------+--------+
|Data1|Data2|Data3|    Date|Start Date|End Date|
+-----+-----+-----+--------+----------+--------+
|  xxx|  yyy|  zzz|[22, 23]|        22|      23|
+-----+-----+-----+--------+----------+--------+
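A rough PySpark equivalent of the same steps (just a sketch, assuming a spark session is available):

import pyspark.sql.functions as F

#dummy data matching the Scala example
df = spark.createDataFrame(
    [(22, "C", "xxx", "yyy", "zzz"), (23, "C", "xxx", "yyy", "zzz")],
    ["Date", "Type", "Data1", "Data2", "Data3"])

#group by the three data fields and collect the dates into a list
df1 = df.groupBy("Data1", "Data2", "Data3").agg(F.collect_list("Date").alias("Date"))

#create new columns from the collected array of dates
df1.withColumn("Start Date", F.col("Date")[0]) \
   .withColumn("End Date", F.col("Date")[1]) \
   .show()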

Hope this helps!

Upvotes: 3
