Reputation: 3
I have a table with ~5k columns and ~1M rows that looks like this:
ID | Col1 | Col2 | Col3 | Col4 | Col5 | Col6 | Col7 | Col8 | Col9 | Col10 | Col11 |
---|---|---|---|---|---|---|---|---|---|---|---|
ID1 | 0 | 1 | 0 | 1 | 0 | 2 | 1 | 1 | 2 | 2 | 0 |
ID2 | 1 | 0 | 1 | 0 | 1 | 2 | 0 | 0 | 0 | 2 | 1 |
ID3 | 1 | 0 | 0 | 0 | 0 | 2 | 1 | 0 | 1 | 1 | 1 |
ID4 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
ID5 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 |
I want to select columns whose names come from different lists and subset the rows according to different criteria. For example, if list1 has col1, col3, col4, col11 and list2 has col2, col6, col9, col10, I want to keep rows where every list1 column == 0 AND every list2 column == 1, e.g.:
df1 = df.filter((df.col1 == 0) & (df.col3 == 0) & (df.col4 == 0) & (df.col6 == 1) & (df.col9 == 1) & (df.col10 == 1))
Instead of adding each column name by hand, I want the columns to be taken from the two lists. How can I achieve this using PySpark?
Upvotes: 0
Views: 1911
Reputation: 1
Try this; it worked for me:
from functools import reduce
from pyspark.sql.functions import col

tmp_cols = ['col1', 'col2']
filter_conditions = [col(c) == 1 for c in tmp_cols]
nestor = lambda u, v: u | v  # OR-combine two boolean Column expressions
df = df.filter(reduce(nestor, filter_conditions))
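Note that nestor combines the conditions with | (OR), so a row passes if any one column matches. The question asks for AND across both lists; a minimal sketch of that variant, assuming list1 and list2 hold the column names from the question:

from functools import reduce
from pyspark.sql.functions import col

# every list1 column must be 0 AND every list2 column must be 1
conditions = [col(c) == 0 for c in list1] + [col(c) == 1 for c in list2]
df1 = df.filter(reduce(lambda u, v: u & v, conditions))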
Upvotes: 0
Reputation: 3
I found the solution I was looking for. I was able to filter a large number of columns using the following steps:
1. Extract the column names into two Python lists:
list1 = ped.filter(ped.pheno == 2).select("IID")
list1 = list1.rdd.map(lambda row: row[0]).collect()
list2 = ped.filter(ped.pheno == 1).select("IID")
list2 = list2.rdd.map(lambda row: row[0]).collect()
2. Using these lists, I filtered the rows as follows:
df1 = df.where(" AND ".join(["(%s == 1)" % c for c in list1]))
df1 = df1.where(" AND ".join(["(%s == 0)" % c for c in list2]))
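For illustration, here is what the join builds for a hypothetical two-column list (the names are placeholders, not from the real data):

list1 = ["col1", "col3"]
expr = " AND ".join(["(%s == 1)" % c for c in list1])
# expr is now the SQL string: (col1 == 1) AND (col3 == 1)
df1 = df.where(expr)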
Thanks for the other helpful solutions as well!
Upvotes: 0
Reputation: 3047
If you need to compare a lot of columns like this, consider a tuple-wise comparison:
from pyspark.sql.functions import col, lit, struct

source_tuple = struct(col("col1"), col("col3"), col("col4"), col("col6"), col("col9"), col("col10"))
target_tuple1 = struct(lit(0), lit(0), lit(0), lit(1), lit(1), lit(1))
df1 = df.where(source_tuple.isin([target_tuple1]))
You can build the left- and right-hand sides dynamically and add multiple tuples to the right-hand side. If you need multiple left-hand sides, combine the resulting DataFrames with UNION ALL.
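A minimal sketch of building both sides dynamically, assuming list1 and list2 hold the question's column names and the target values are 0 and 1 respectively; additional target tuples can be appended to the isin list:

from pyspark.sql.functions import col, lit, struct

list1 = ["col1", "col3", "col4"]   # columns that must equal 0
list2 = ["col6", "col9", "col10"]  # columns that must equal 1

source_tuple = struct(*[col(c) for c in list1 + list2])
target_tuple1 = struct(*([lit(0)] * len(list1) + [lit(1)] * len(list2)))

df1 = df.where(source_tuple.isin([target_tuple1]))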
Upvotes: 1