Bruno Canal

Reputation: 465

ValueError: Cannot convert column into bool

I'm trying to build a new column on a dataframe as below:

l = [(2, 1), (1,1)]
df = spark.createDataFrame(l)

def calc_dif(x,y):
    if (x>y) and (x==1):
        return x-y

dfNew = df.withColumn("calc", calc_dif(df["_1"], df["_2"]))
dfNew.show()

But, I get:

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2807412651452069487.py", line 346, in <module>
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2807412651452069487.py", line 334, in <module>
  File "<stdin>", line 38, in <module>
  File "<stdin>", line 36, in calc_dif
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/column.py", line 426, in __nonzero__
    raise ValueError("Cannot convert column into bool: please use '&' for 'and', '|' for 'or', "
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.

Why does it happen? How can I fix it?

Upvotes: 19

Views: 71782

Answers (4)

Ephesus

Reputation: 963

For anyone who faces the same error message: check the brackets. Sometimes a boolean expression needs more explicit grouping, like this:

from pyspark.sql import functions as F

DF_New = df1.withColumn(
    'EventStatus',
    F.when(
        (F.col("Adjusted_Timestamp") < F.col("Event_Finish")) &
        (F.col("Adjusted_Timestamp") > F.col("Event_Start")),
        1
    ).otherwise(0)
)

Upvotes: 5

Anne

Reputation: 593

For anyone who has a similar error: I was trying to pass an RDD when I needed a pandas object and got the same error. I could simply solve it with ".toPandas()".
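
For example, a minimal sketch (assuming df is a Spark DataFrame like the one in the question):

# toPandas() collects the full dataset to the driver as a pandas.DataFrame,
# so it only makes sense for data that fits in driver memory
pdf = df.toPandas()
print(type(pdf))  # <class 'pandas.core.frame.DataFrame'>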

Upvotes: 4

mkaran

Reputation: 2718

Spark complains because you give your calc_dif function the whole Column objects, not the actual data of the respective rows. You need to wrap your calc_dif function in a udf:

from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf

l = [(2, 1), (1,1)]
df = spark.createDataFrame(l)

def calc_dif(x,y):
    # using the udf the calc_dif is called for every row in the dataframe
    # x and y are the values of the two columns 
    if (x>y) and (x==1):
        return x-y

udf_calc = udf(calc_dif, IntegerType())

dfNew = df.withColumn("calc", udf_calc("_1", "_2"))
dfNew.show()

# the condition (x > y) and (x == 1) is never true for these rows,
# so calc_dif returns None (shown as null)
+---+---+----+
| _1| _2|calc|
+---+---+----+
|  2|  1|null|
|  1|  1|null|
+---+---+----+

Upvotes: 7

Alper t. Turker

Reputation: 35249

Either use udf:

from pyspark.sql.functions import udf

@udf("integer")
def calc_dif(x,y):
    if (x>y) and (x==1):
        return x-y

or a case when expression (recommended):

from pyspark.sql.functions import when

def calc_dif(x, y):
    return when((x > y) & (x == 1), x - y)

The first one computes on Python objects, the second one on Spark Columns.
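
For example, applying the when-based version to the question's DataFrame (a minimal sketch; no udf wrapper is needed because calc_dif now returns a Column expression, and rows that fail the condition come back as null):

dfNew = df.withColumn("calc", calc_dif(df["_1"], df["_2"]))
dfNew.show()
# +---+---+----+
# | _1| _2|calc|
# +---+---+----+
# |  2|  1|null|
# |  1|  1|null|
# +---+---+----+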

Upvotes: 15
