mjat

Reputation: 41

Collect only non-null columns of each row into an array

The difficulty is that I'm trying to avoid UDFs as much as possible.

I have a dataset "wordsDS", which contains many null values:

+------+------+------+------+
|word_0|word_1|word_2|word_3|
+------+------+------+------+
|     a|     b|  null|     d|
|  null|     f|     m|  null|
|  null|  null|     d|  null|
+------+------+------+------+

I need to collect all of the columns for each row into an array. I don't know the number of columns in advance, so I'm using the columns() method.

Dataset<Row> groupedQueries = wordsDS.withColumn("collected",
        functions.array(Arrays.stream(wordsDS.columns())
                .map(functions::col)
                .toArray(Column[]::new)));

But this approach keeps the nulls, which show() renders as empty elements:

+--------------------+
|           collected|
+--------------------+
|           [a, b,,d]|
|          [, f, m,,]|
|            [,, d,,]|
+--------------------+

Instead, I need the following result:

+--------------------+
|           collected|
+--------------------+
|           [a, b, d]|
|              [f, m]|
|                 [d]|
+--------------------+

So basically, I need to collect all of the columns for each row into an array with the following requirements:

  1. The resulting array doesn't contain empty elements.
  2. The number of columns isn't known upfront.

I've also thought of the approach of filtering the dataset's "collected" column for empty values, but can't come up with anything other than a UDF. I'm trying to avoid UDFs so as not to kill performance. If anyone could suggest a way to filter the dataset's "collected" column for empty values with as little overhead as possible, that would be really helpful.

Upvotes: 4

Views: 2187

Answers (3)

NNM

Reputation: 398

display(df_part_groups.withColumn("combined", F.array_except(F.array("*"), F.array(F.lit("null")))))

This statement doesn't remove the null. It keeps the distinct occurrences of null.

Use this instead:

display(df_part_groups.withColumn("combined", F.array_except(F.array("*"), F.array(F.lit("")))))
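For reference, the same call in the Java API that the question uses would look roughly like this (a sketch only; df is a hypothetical dataset whose nulls have already been replaced by empty strings, and array_except needs Spark 2.4+):

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;

// Collect every column into one array, then drop the empty-string elements.
// "df" is a placeholder dataset; array_except requires Spark 2.4 or later.
Column all = functions.array(functions.col("*"));
Dataset<Row> combined = df.withColumn("combined",
        functions.array_except(all, functions.array(functions.lit(""))));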

Upvotes: 0

Mahesh Gupta

Reputation: 1892

For Spark < 2.0 you can define a UDF to remove the nulls:

scala> var df = Seq(("a", "b", "null", "d"), ("null", "f", "m", "null"), ("null", "null", "d", "null")).toDF("word_0", "word_1", "word_2", "word_3")


scala> def arrayNullFilter = udf((arr: Seq[String]) => arr.filter(x=>x != "null"))

scala> df.select(array('*).as('all)).withColumn("test",arrayNullFilter(col("all"))).show
+--------------------+---------+
|                 all|     test|
+--------------------+---------+
|     [a, b, null, d]|[a, b, d]|
|  [null, f, m, null]|   [f, m]|
|[null, null, d, n...|      [d]|
+--------------------+---------+

hope this helps you.

Upvotes: 0

Raphael Roth

Reputation: 27373

You can use array("*") to get all the elements into one array, then use array_except (needs Spark 2.4+) to filter out the nulls:

df
  .select(array_except(array("*"),array(lit(null))).as("collected"))
  .show()

gives

+---------+
|collected|
+---------+
|[a, b, d]|
|   [f, m]|
|      [d]|
+---------+
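Since the question uses the Java API, the same approach translates roughly as follows (a sketch reusing the question's wordsDS; array_except needs Spark 2.4+):

import java.util.Arrays;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.functions;

// Build one array from all columns (column list taken at runtime, as in the
// question's own attempt), then subtract an array holding a single null literal.
Column allCols = functions.array(Arrays.stream(wordsDS.columns())
        .map(functions::col)
        .toArray(Column[]::new));

Dataset<Row> collected = wordsDS.select(
        functions.array_except(allCols, functions.array(functions.lit(null)))
                 .as("collected"));

collected.show();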

Upvotes: 4
