MarkNS

Reputation: 4021

PySpark DataFrame filter using logical AND over list of conditions -- Numpy All Equivalent

I'm trying to filter the rows of a PySpark DataFrame where the values of all columns are zero.

I was hoping to use something like this (using the NumPy function np.all()):

from pyspark.sql.functions import col
df.filter(all([(col(c) != 0) for c in df.columns]))

But I get this ValueError:

ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.

Is there any way to perform a logical AND over a list of conditions? What is the PySpark equivalent of np.all?

Upvotes: 17

Views: 6561

Answers (1)

zero323

Reputation: 330343

Just reduce the list of predicates:

from pyspark.sql.functions import col, lit
from operator import and_
from functools import reduce

df.where(reduce(and_, (col(c) != 0 for c in df.columns)))

or

df.where(reduce(and_, (col(c) != 0 for c in df.columns), lit(True)))

if you expect that the list of predicates might be empty.
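For intuition, reduce with operator.and_ simply chains the column predicates together with &, producing a single boolean Column. A minimal sketch of that equivalence (the names preds, combined, and manual are illustrative; the columns x, y, z match the example DataFrame below):

from functools import reduce
from operator import and_
from pyspark.sql.functions import col

preds = [col(c) != 0 for c in ["x", "y", "z"]]

# reduce(and_, preds) builds ((preds[0] & preds[1]) & preds[2]) ...
combined = reduce(and_, preds)

# ... which is equivalent to writing the & chain by hand:
manual = (col("x") != 0) & (col("y") != 0) & (col("z") != 0)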

For example, if the data looks like this:

df = sc.parallelize([
    (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)
]).toDF(["x", "y", "z"])

the result will be:

+---+---+---+
|  x|  y|  z|
+---+---+---+
|  1|  1|  1|
+---+---+---+
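Putting the pieces together, a runnable sketch of the full example (assuming a SparkContext sc is available, as in the snippet above; .show() just prints the filtered DataFrame):

from functools import reduce
from operator import and_
from pyspark.sql.functions import col

df = sc.parallelize([
    (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)
]).toDF(["x", "y", "z"])

# keep only the rows where every column is non-zero
df.where(reduce(and_, (col(c) != 0 for c in df.columns))).show()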

Upvotes: 23
