subash poudel

Reputation: 438

How do I create a new column with random values in PySpark?

In pandas I initialized a new column with random values like this:

import numpy as np

df['business_vertical'] = np.random.choice(['Retail', 'SME', 'Cor'], df.shape[0])

How do I do the same in PySpark?

Upvotes: 16

Views: 33281

Answers (4)

Powers

Reputation: 19308

Here's how you can solve this with the array_choice function in quinn:

import quinn
from pyspark.sql import functions as F

df = spark.createDataFrame([('a',), ('b',), ('c',)], ['letter'])
cols = [F.lit(c) for c in ['Retail', 'SME', 'Cor']]
df.withColumn('business_vertical', quinn.array_choice(F.array(cols))).show()
+------+-----------------+
|letter|business_vertical|
+------+-----------------+
|     a|              SME|
|     b|           Retail|
|     c|              SME|
+------+-----------------+

array_choice is generic and can easily be used to select a random value from an existing ArrayType column. Suppose you have the following DataFrame.

+------------+
|     letters|
+------------+
|   [a, b, c]|
|[a, b, c, d]|
|         [x]|
|          []|
+------------+
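For reference, here's one way to build this example DataFrame (a sketch, assuming an active SparkSession named spark):

from pyspark.sql import functions as F

df = spark.createDataFrame(
    [(['a', 'b', 'c'],), (['a', 'b', 'c', 'd'],), (['x'],), ([],)],
    'letters array<string>',
)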

Here's how you can grab a random letter.

actual_df = df.withColumn(
    "random_letter",
    quinn.array_choice(F.col("letters"))
)
actual_df.show()
+------------+-------------+
|     letters|random_letter|
+------------+-------------+
|   [a, b, c]|            a|
|[a, b, c, d]|            d|
|         [x]|            x|
|          []|         null|
+------------+-------------+

Here's the array_choice function definition:

def array_choice(col):
    # Pick a uniform random index in [0, size(col)); on an empty array the
    # lookup is out of range, so the result is null (see the output above).
    index = (F.rand() * F.size(col)).cast("int")
    return col[index]

This post explains fetching random values from PySpark arrays in more detail.

Upvotes: 1

Steven

Reputation: 15258

Just build an array of the values and then pick one at random:

from pyspark.sql import functions as F

df.withColumn(
  "business_vertical",
  F.array(
    F.lit("Retail"),
    F.lit("SME"),
    F.lit("Cor"),
  ).getItem(
    (F.rand()*3).cast("int")
  )
)
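If you'd rather not hard-code the array length, a small generalization (a sketch, assuming the same import and an existing DataFrame df) builds the array from a Python list:

from pyspark.sql import functions as F

choices = ['Retail', 'SME', 'Cor']  # any list of labels

df = df.withColumn(
    'business_vertical',
    F.array(*[F.lit(c) for c in choices]).getItem(
        (F.rand() * len(choices)).cast('int')  # uniform random index
    ),
)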

Upvotes: 34

Mahsa Hassankashi

Reputation: 2139

For a single random number:

import random

randomnum = random.randint(1000, 9999)

or numpy.random.choice. To add a constant (non-random) column in Spark, in Scala:

import org.apache.spark.sql.functions.lit
val newdf = df.withColumn("newcol", lit("your-random"))

or pandas.Series.combine_first:

import numpy as np
import pandas as pd

s1 = pd.Series([1, np.nan])
s2 = pd.Series([3, 4])
s1.combine_first(s2)
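Note that random.randint and numpy.random.choice run on the driver and return a single value, so they won't vary per row on their own. One way to draw a value per row in PySpark (a sketch, assuming a DataFrame df; the column-expression answers above avoid UDF overhead) is a Python UDF marked non-deterministic:

import random

from pyspark.sql import functions as F
from pyspark.sql.types import StringType

# Returns one random label per row; asNondeterministic keeps Spark from
# caching or collapsing the call during query optimization.
random_vertical = F.udf(
    lambda: random.choice(['Retail', 'SME', 'Cor']), StringType()
).asNondeterministic()

df = df.withColumn('business_vertical', random_vertical())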

Upvotes: 0

Pintu

Reputation: 308

You can use pyspark.sql.functions.rand(), which adds a column of uniform random doubles in [0, 1):

from pyspark.sql import functions as F

df.withColumn('rand_col', F.rand()).show()
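rand() also accepts an optional seed if you need reproducible output across runs (a minimal sketch, assuming the import above):

df.withColumn('rand_col', F.rand(seed=42)).show()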

Upvotes: -1
