newleaf

Reputation: 2457

Count non-null values in a column in PySpark

I have a DataFrame that contains null values:

from pyspark.sql import functions as F
df = spark.createDataFrame(
    [(125, '2012-10-10', 'tv'),
     (20, '2012-10-10', 'phone'),
     (40, '2012-10-10', 'tv'),
     (None, '2012-10-10', 'tv')],
    ["Sales", "date", "product"]
)

I need to count the non-null values in the "Sales" column.

I tried three methods.

The first one gives the correct result:

df.where(F.col("Sales").isNotNull()).groupBy('product')\
  .agg(F.count(F.col("Sales")).alias("sales_count")).show()

# product   | sales_count
# phone     |  1
# tv        |  2

The second one is not correct:

df.groupBy('product')\
  .agg(F.count(F.col("Sales").isNotNull()).alias("sales_count")).show()

# product   | sales_count
# phone     |  1
# tv        |  3

The third one raises an error:

df.groupBy('product')\
  .agg(F.col("Sales").isNotNull().count().alias("sales_count")).show()

TypeError: 'Column' object is not callable

What causes the wrong result in the second method and the error in the third?

Upvotes: 7

Views: 32923

Answers (4)

AzSurya Teja

Reputation: 187

Check out the count_if() function and place your condition inside it:

from pyspark.sql.functions import count_if, col

df.select(count_if(col("Sales").isNotNull())).show()

Supported from Spark 3.x (the Python wrapper pyspark.sql.functions.count_if arrived in PySpark 3.5).
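
On earlier 3.x releases that lack the Python wrapper, the same SQL aggregate can be reached through F.expr. A minimal sketch, assuming the functions-as-F import from the question:

from pyspark.sql import functions as F

# count_if is a Spark SQL aggregate (Spark 3.0+); expr() reaches it
# even where the Python wrapper is missing.
df.groupBy("product").agg(
    F.expr("count_if(Sales IS NOT NULL)").alias("sales_count")
).show()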

Upvotes: 1

ZygD

Reputation: 24478

Count non-null values

  • for string and numeric columns only:

    df.summary("count").show()
    # +-------+-----+----+-------+
    # |summary|Sales|date|product|
    # +-------+-----+----+-------+
    # |  count|    3|   4|      4|
    # +-------+-----+----+-------+
    
  • for every column of any type:

    df.agg(*[F.count(c).alias(c) for c in df.columns]).show()
    # +-----+----+-------+
    # |Sales|date|product|
    # +-----+----+-------+
    # |    3|   4|      4|
    # +-----+----+-------+
    

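Both of the above count across the whole DataFrame. A per-group variant of the second form, as a sketch (assuming you want the counts split by product, and reusing the question's F alias):

df.groupBy("product").agg(
    # count non-nulls of every non-grouping column within each group
    *[F.count(c).alias(c) for c in df.columns if c != "product"]
).show()
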
Upvotes: 0

Ramesh Maharjan

Reputation: 41987

Your first attempt filters out the rows with null in the Sales column before the aggregation, which is why it gives the correct result.

But with the second code

df.groupBy('product') \
    .agg(F.count(F.col("Sales").isNotNull()).alias("sales_count")).show()

You haven't filtered anything out, so the aggregation runs over the whole dataset. If you look closely, F.col("Sales").isNotNull() produces a boolean column of true and false values, and since a boolean expression is never null, F.count(F.col("Sales").isNotNull()) simply counts every row in each group. This becomes evident if you materialize the expression as a new column:

df.withColumn("isNotNull", F.col("Sales").isNotNull()).show()

which would give you

+-----+----------+-------+---------+
|Sales|      date|product|isNotNull|
+-----+----------+-------+---------+
|  125|2012-10-10|     tv|     true|
|   20|2012-10-10|  phone|     true|
|   40|2012-10-10|     tv|     true|
| null|2012-10-10|     tv|    false|
+-----+----------+-------+---------+

So the counts from your second attempt are simply the row counts per group, true and false alike, which is why tv shows 3.
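
If you want to keep that shape and still skip the nulls, one option (a sketch, not part of the original attempts) is to cast the boolean to an integer and sum it, so only the true rows contribute:

df.groupBy('product') \
    .agg(F.sum(F.col("Sales").isNotNull().cast("int")).alias("sales_count")).show()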

For your third attempt, .count() is a DataFrame action, not a Column method, so it cannot be used inside the aggregation; hence the TypeError. Only expressions that return a Column can be passed to .agg(), and those can be built-in functions, UDFs, or your own functions.
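
To express the condition-aware count entirely inside .agg() without pre-filtering, here is a minimal sketch using F.when: F.count skips the nulls that F.when yields for the false branch.

df.groupBy('product') \
    .agg(F.count(F.when(F.col("Sales").isNotNull(), 1)).alias("sales_count")).show()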

Upvotes: 6

MaxU - stand with Ukraine

Reputation: 210972

There is an easier way:

>>> df.groupBy("product").agg({"Sales":"count"}).show()
+-------+------------+
|product|count(Sales)|
+-------+------------+
|  phone|           1|
|     tv|           2|
+-------+------------+

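The dict form leaves the result column named count(Sales). Since the count aggregate ignores nulls, this answers the question directly; for control over the output name, an equivalent explicit form (assuming the usual functions import) is:

from pyspark.sql import functions as F

df.groupBy("product").agg(F.count("Sales").alias("sales_count")).show()
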
Upvotes: 6
