Reputation: 113
I want to filter on col_2, which is a list of columns, using a certain condition. The code was originally written in pandas and I'm trying to convert it to PySpark. My attempt (the last line below) doesn't work.
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

schema = StructType([
    StructField('vin', StringType(), True), StructField('age', IntegerType(), True),
    StructField('var', IntegerType(), True), StructField('rim', IntegerType(), True),
    StructField('cap', IntegerType(), True), StructField('cur', IntegerType(), True),
])
data = [['tom', 10, 54, 87, 23, 90], ['nick', 15, 63, 23, 11, 65], ['juli', 14, 87, 9, 43, 21]]
df = spark.createDataFrame(data, schema)
df.show()
>>>
+----+---+---+---+---+---+
| vin|age|var|rim|cap|cur|
+----+---+---+---+---+---+
| tom| 10| 54| 87| 23| 90|
|nick| 15| 63| 23| 11| 65|
|juli| 14| 87| 9| 43| 21|
+----+---+---+---+---+---+
col_2 = ['age', 'var', 'rim']
df = df.select(*col_2)
df.show()
>>>
+---+---+---+
|age|var|rim|
+---+---+---+
| 10| 54| 87|
| 15| 63| 23|
| 14| 87| 9|
+---+---+---+
df = df.filter(F.col(*col_2) >= 10)  # fails: F.col() expects a single column name
Upvotes: 0
Views: 1007
Reputation: 42422
You can't filter on the condition that a list of columns is greater than 10, but you can chain a list of conditions where each column is greater than 10, combined with & (and) or | (or), depending on your needs.
from functools import reduce
from pyspark.sql import functions as F

col_2 = ['age', 'var', 'rim']

df2 = df.filter(
    reduce(
        lambda x, y: x | y,  # `|` means `or`; use `&` if you want `and`
        [(F.col(c) >= 10) for c in col_2]
    )
)
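If you'd rather avoid reduce, the same filters can be expressed with the built-in greatest and least functions: "at least one column >= 10" is equivalent to "the greatest of the columns >= 10", and "all columns >= 10" is equivalent to "the least of the columns >= 10" (assuming numeric, non-null columns). A minimal sketch, assuming the same df and col_2 as above:
from pyspark.sql import functions as F

col_2 = ['age', 'var', 'rim']

# "or" version: keep rows where at least one of the columns is >= 10
df_any = df.filter(F.greatest(*[F.col(c) for c in col_2]) >= 10)

# "and" version: keep rows where every one of the columns is >= 10
df_all = df.filter(F.least(*[F.col(c) for c in col_2]) >= 10)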
Upvotes: 3