Wassadamo

Reputation: 1376

Extract First Non-Null Positive Element From Array in PySpark

I have data like:

from pyspark.sql import SparkSession, Row
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

dd = spark.createDataFrame([
    ('0', [Row(f1=0), Row(f1=1), Row(f1=None)]),
    ('1', [Row(f1=None), Row(f1=2)]),
    ('2', [])
], ['id', 'arr'])

And want a new column containing the first positive, non-null element of the 'arr' array, or null if there is none. In this case:

id | target_elt
0  | 1
1  | 2
2  | Null

Note that the array elements are of type Struct with a single integer-valued field "f1".
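
For reference, dd.printSchema() should show something like the following; note that Spark infers f1 as long, since Python ints map to LongType (which is why the UDF below returns ArrayType(LongType())):

dd.printSchema()
# root
#  |-- id: string (nullable = true)
#  |-- arr: array (nullable = true)
#  |    |-- element: struct (containsNull = true)
#  |    |    |-- f1: long (nullable = true)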

My attempt:

from pyspark.sql.types import ArrayType, LongType

positiveNonNull = F.udf(
    lambda array: [
        x.f1 for x in array
        if (x.f1 is not None) & (x.f1 > 0)
    ], ArrayType(LongType())
)
dd.withColumn('newcol', positiveNonNull(F.col('arr')).getItem(0)).show()

I get: TypeError: '>' not supported between instances of 'NoneType' and 'int'

Upvotes: 0

Views: 895

Answers (1)

Wassadamo

Reputation: 1376

Figured it out by wrapping the positivity check in a helper and moving the None test into the comprehension's filter clause. The original attempt failed because & does not short-circuit the way "and" does: both operands are evaluated eagerly, so x.f1 > 0 still runs when f1 is None. (Swapping & for "and" in the original UDF would also fix it.)

from pyspark.sql.types import ArrayType, LongType

def val_if_pos(f1_value):
    if f1_value > 0:
        return f1_value  # implicitly returns None otherwise

# None f1 values are filtered out before val_if_pos compares,
# so the > comparison never sees a None
posNonNull = F.udf(
    lambda array: [val_if_pos(x.f1) for x in array if x.f1 is not None],
    ArrayType(LongType())
)

(dd.withColumn('_temp', posNonNull(F.col('arr')))
   .withColumn('firstPosNonNull', F.expr("FILTER(_temp, x -> x is not null)").getItem(0))
   .drop('_temp')
).show()
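
For completeness, on Spark 2.4+ the same result can be had without a UDF by pushing the whole condition into the filter higher-order function. A minimal sketch, assuming the column names above (target_elt is just the name from the expected output):

# UDF-free sketch; assumes Spark >= 2.4 for higher-order functions.
# Keep elements whose f1 is non-null and positive, take the first match;
# with default (non-ANSI) settings, indexing an empty array with [0]
# yields null, which covers the empty-array case.
dd.withColumn(
    'target_elt',
    F.expr("filter(arr, x -> x.f1 is not null and x.f1 > 0)[0].f1")
).show()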

Upvotes: 1
