Crubal Chenxi Li

Reputation: 313

PySpark calculate percentage that every column is 'missing'

I am using PySpark and trying to calculate, for every column, the percentage of records that have missing ('null') values.

The dataframe we are going to work with is df (it has many more columns than shown):

id  fb      linkedin  snapchat  ...
1   aa      (null)    (null)    ...
2   (null)  aaa       (null)    ...
3   (null)  (null)    a         ...
4   (null)  (null)    (null)    ...

With the following script I am able to get the 'null' rate for every column:

df.select([round((count(when(isnan(c) | col(c).isNull(), c))/count(lit(1))), 6).alias(c) for c in df.columns])  
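For reference, a percentage variant of the same snippet (a minimal sketch, assuming df is already defined):

import pyspark.sql.functions as F

# percentage of null (or NaN) values per column, rounded to 4 decimals
# (mirrors the isnan/isNull check from the snippet above)
null_pct_df = df.select([
    F.round(F.count(F.when(F.isnan(c) | F.col(c).isNull(), c)) * 100 / F.count(F.lit(1)), 4).alias(c)
    for c in df.columns
])
null_pct_df.show()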

Just wondering: how can we calculate the percentage of records in which every column has a 'null' value? (Assume there are many columns and we don't want to specify every column name.)

Thanks!

Upvotes: 0

Views: 4104

Answers (2)

Vaebhav

Reputation: 5052

Another way would be to create a custom function - calc_null_percent - utilising the best of both worlds from Spark and Pandas.

The custom function computes the total_count and null_count for each column.

Data Preparation

from pyspark import SparkContext
from pyspark.sql import SQLContext
import pyspark.sql.functions as F
import pandas as pd

sc = SparkContext.getOrCreate()
sql = SQLContext(sc)

input_str = """
1,0,null,
1,null,0,
null,1,0,
1,0,0,
1,0,0,
null,0,1,
1,1,0,
1,1,null,
null,1,0
""".split(',')

# convert the 'null' strings to Python None so Spark treats them as nulls
input_values = list(map(lambda x: x.strip() if x.strip() != 'null' else None, input_str))

cols = "col1,col2,col3".split(',')

n = len(input_values)

n_col = 3

# group the flat list of values into rows of n_col elements
input_list = [tuple(input_values[i:i+n_col]) for i in range(0, n, n_col)]

sparkDF = sql.createDataFrame(input_list, cols)

sparkDF.show()

+----+----+----+
|col1|col2|col3|
+----+----+----+
|   1|   0|null|
|   1|null|   0|
|null|   1|   0|
|   1|   0|   0|
|   1|   0|   0|
|null|   0|   1|
|   1|   1|   0|
|   1|   1|null|
|null|   1|   0|
+----+----+----+
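Note that SQLContext is deprecated in recent Spark versions; an equivalent setup with SparkSession would look like this (a minimal sketch, reusing input_list and cols from above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sparkDF = spark.createDataFrame(input_list, cols)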

Custom Func

def calc_null_percent(spark_df, sort=True):

    # non-null count of each column (F.count ignores nulls)
    pd_col_count = spark_df.select([F.count(F.col(c)).alias(c)
                                    for (c, c_type) in spark_df.dtypes]
                                   ).toPandas().T.reset_index().rename(columns={0: 'total_count',
                                                                                'index': 'column'})

    # null (or NaN) count of each column
    pd_col_null_count = spark_df.select([F.sum(F.when(F.isnan(c) | F.isnull(c), 1).otherwise(0)).alias(c)
                                         for (c, c_type) in spark_df.dtypes]
                                        ).toPandas().T.reset_index().rename(columns={0: 'null_count',
                                                                                     'index': 'column'})

    # combine both counts into a single Pandas DataFrame
    final_df = pd.merge(pd_col_count, pd_col_null_count, on=['column'])

    final_df['null_percentage'] = final_df['null_count'] * 100 / final_df['total_count']

    # nothing to report if no column contains any nulls
    if final_df['null_count'].sum() == 0:
        print("There are no missing values!")
        return None

    return final_df

nullStatsDF = sql.createDataFrame(calc_null_percent(sparkDF))

nullStatsDF.show()

+------+-----------+----------+------------------+
|column|total_count|null_count|   null_percentage|
+------+-----------+----------+------------------+
|  col1|          6|         3|              50.0|
|  col2|          8|         1|              12.5|
|  col3|          7|         2|28.571428571428573|
+------+-----------+----------+------------------+
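Note that F.count(c) counts only the non-null values, so null_percentage above is relative to the non-null count (e.g. 3 nulls out of 6 non-null values = 50% for col1, rather than 3 out of 9 rows). If you want the percentage over all records instead, a small tweak would be (a sketch, reusing sparkDF from above):

total_rows = sparkDF.count()

# null count per column, transposed into one row per column
null_stats = sparkDF.select([
    F.sum(F.when(F.isnull(c), 1).otherwise(0)).alias(c) for c in sparkDF.columns
]).toPandas().T.reset_index().rename(columns={0: 'null_count', 'index': 'column'})

null_stats['null_percentage'] = null_stats['null_count'] * 100 / total_rows
print(null_stats)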

Upvotes: 1

Ric S

Reputation: 9267

Assuming you want to exclude some columns from the missing-value check (here I assumed that your id column should not contain missing values), you can use the following code:

import pyspark.sql.functions as F

# select columns in which you want to check for missing values
relevant_columns = [c for c in df.columns if c != 'id']

# number of total records
n_records = df.count()

# percentage of rows with all missings in relevant_columns
my_perc = df \
  .select((F.lit(len(relevant_columns)) - (sum(df[c].isNull().cast('int') for c in relevant_columns))).alias('n')) \
  .filter(F.col('n') == 0) \
  .count() / n_records * 100

print(my_perc)
# 25.0
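An equivalent way to get the same figure is to build a single filter condition instead of summing the per-column indicators (a sketch, with the same df, relevant_columns and n_records as above):

from functools import reduce

# keep only the rows in which every relevant column is null
all_null = reduce(lambda a, b: a & b, [F.col(c).isNull() for c in relevant_columns])

my_perc = df.filter(all_null).count() / n_records * 100
print(my_perc)
# 25.0 for the example data above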

Upvotes: 1
