nmr

Reputation: 753

Creating multiple data frames from an existing data frame in PySpark

I have a data frame in PySpark like below:

data = [{"B_ID": 'TEST', "Category": 'Category A', "ID": 1, "Value": 1},
        {"B_ID": 'TEST', "Category": 'Category B', "ID": 2, "Value": 2},
        {"B_ID": 'TEST', "Category": 'Category C', "ID": 3, "Value": None},
        {"B_ID": 'TEST', "Category": 'Category D', "ID": 4, "Value": 3},
        ]

df = spark.createDataFrame(data)
df.show()

+----+----------+---+-----+
|B_ID|  Category| ID|Value|
+----+----------+---+-----+
|TEST|Category A|  1|    1|
|TEST|Category B|  2|    2|
|TEST|Category C|  3| null|
|TEST|Category D|  4|    3|
+----+----------+---+-----+

Now, from the above data frame, I want to create several new data frames by changing the values in some columns.

I have done it like below:

import pyspark.sql.functions as f
from functools import reduce
from pyspark.sql import DataFrame

value_1 = 'TEST_1'

# changing B_ID column values and ID column values
df1 = df.withColumn("B_ID", f.lit(value_1)).withColumn("id", f.lit(5))
df1.show()
+------+----------+---+-----+
|  B_ID|  Category| id|Value|
+------+----------+---+-----+
|TEST_1|Category A|  5|    1|
|TEST_1|Category B|  5|    2|
|TEST_1|Category C|  5| null|
|TEST_1|Category D|  5|    3|
+------+----------+---+-----+


value_2 = 'TESTING'
df2 = df.withColumn("B_ID", f.lit(value_2)).withColumn("id", f.col("id"))
df2.show()
+-------+----------+---+-----+
|   B_ID|  Category| id|Value|
+-------+----------+---+-----+
|TESTING|Category A|  1|    1|
|TESTING|Category B|  2|    2|
|TESTING|Category C|  3| null|
|TESTING|Category D|  4|    3|
+-------+----------+---+-----+

df3 = df.withColumn("B_ID", f.col("B_ID")).withColumn("id", f.lit(6))
df3.show()

+----+----------+---+-----+
|B_ID|  Category| id|Value|
+----+----------+---+-----+
|TEST|Category A|  6|    1|
|TEST|Category B|  6|    2|
|TEST|Category C|  6| null|
|TEST|Category D|  6|    3|
+----+----------+---+-----+

Now, after creating the data frames, I want to union all of the newly created data frames.

I have done it like below:

# list of data frames to be unioned
list_df = [df1, df2, df3]

# union all the data frames
final_df = reduce(DataFrame.union, list_df)

final_df.show()
+-------+----------+---+-----+
|   B_ID|  Category| id|Value|
+-------+----------+---+-----+
| TEST_1|Category A|  5|    1|
| TEST_1|Category B|  5|    2|
| TEST_1|Category C|  5| null|
| TEST_1|Category D|  5|    3|
|TESTING|Category A|  1|    1|
|TESTING|Category B|  2|    2|
|TESTING|Category C|  3| null|
|TESTING|Category D|  4|    3|
|   TEST|Category A|  6|    1|
|   TEST|Category B|  6|    2|
|   TEST|Category C|  6| null|
|   TEST|Category D|  6|    3|
+-------+----------+---+-----+

This achieves what I want, but I would like to know whether there are better approaches to get the same result.

Upvotes: 0

Views: 1106

Answers (1)

mck

Reputation: 42342

Here's another way using inline explode:

df2 = df.selectExpr(
    'Category',
    'Value',
    "inline(array(('TEST_1' as B_ID, 5 as id), ('TESTING' as B_ID, id), (B_ID, 6 as id)))"
).select(df.columns)

df2.show()
+-------+----------+---+-----+
|   B_ID|  Category| ID|Value|
+-------+----------+---+-----+
| TEST_1|Category A|  5|    1|
|TESTING|Category A|  1|    1|
|   TEST|Category A|  6|    1|
| TEST_1|Category B|  5|    2|
|TESTING|Category B|  2|    2|
|   TEST|Category B|  6|    2|
| TEST_1|Category C|  5| null|
|TESTING|Category C|  3| null|
|   TEST|Category C|  6| null|
| TEST_1|Category D|  5|    3|
|TESTING|Category D|  4|    3|
|   TEST|Category D|  6|    3|
+-------+----------+---+-----+
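
If you prefer to stay in the DataFrame API rather than selectExpr, something like the following should be roughly equivalent (an untested sketch; the intermediate column name "combo" is just a placeholder):

import pyspark.sql.functions as f

# build an array of (B_ID, id) structs, explode it to get one row per variant,
# then pull the struct fields back out in the original column order
df2 = (
    df.withColumn(
        "combo",
        f.explode(f.array(
            f.struct(f.lit("TEST_1").alias("B_ID"), f.lit(5).alias("id")),
            f.struct(f.lit("TESTING").alias("B_ID"), f.col("ID").alias("id")),
            f.struct(f.col("B_ID").alias("B_ID"), f.lit(6).alias("id")),
        )),
    )
    .select(
        f.col("combo.B_ID").alias("B_ID"),
        "Category",
        f.col("combo.id").alias("ID"),
        "Value",
    )
)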

Upvotes: 1
