Reputation: 223
I have a Spark dataframe with a column (assigned_products) of type string that contains values such as the following:
"POWER BI PRO+Power BI (free)+AUDIO CONFERENCING+OFFICE 365 ENTERPRISE E5 WITHOUT AUDIO CONFERENCING"
I would like to count the occurrences of + in the string and return that value in a new column.
I tried the following, but I keep getting errors:
from pyspark.sql.functions import col
DF.withColumn('Number_Products_Assigned', col("assigned_products").count("+"))
I'm running my code in Azure Databricks on a cluster running Apache Spark 2.3.1.
Upvotes: 22
Views: 26774
Reputation: 24386
Spark 3.4+ has regexp_count:
F.expr(r"regexp_count(col_name, '\\+')")
Full example:
from pyspark.sql import functions as F
df = spark.createDataFrame(
[("POWER BI PRO+Power BI (free)+AUDIO CONFERENCING+OFFICE 365 ENTERPRISE E5 WITHOUT AUDIO CONFERENCING",)],
["assigned_products"])
df = df.withColumn('Number_Products_Assigned', F.expr(r"regexp_count(assigned_products, '\\+')"))
df.show()
# +--------------------+------------------------+
# | assigned_products|Number_Products_Assigned|
# +--------------------+------------------------+
# |POWER BI PRO+Powe...| 3|
# +--------------------+------------------------+
Upvotes: 1
Reputation: 2596
replace substitutes each occurrence of the substring with an empty string when no replacement is given. So we can count the occurrences by comparing the string's length before and after the replacement, as follows:
SELECT length(x) - length(replace(x,'+')) as substring_count
FROM (select 'abc+def+ghi++aaa' as x) -- Sample data
Output:
substring_count
---------------
4
import pyspark.sql.functions as F

df1 = spark.sql("select 'abc+def+ghi++aaa' as x")  # Sample data
df1.withColumn('substring_count',
               F.length(F.col('x'))
               - F.length(F.regexp_replace(F.col('x'), r'\+', ''))
               ).show()
Output:
+----------------+---------------+
| x|substring_count|
+----------------+---------------+
|abc+def+ghi++aaa| 4|
+----------------+---------------+
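The length-difference trick is easy to sanity-check in plain Python, outside of Spark (a minimal sketch using the sample string above):

```python
# Count '+' by comparing lengths before and after removing it;
# mirrors length(x) - length(replace(x, '+')) in Spark SQL.
s = "abc+def+ghi++aaa"
count = len(s) - len(s.replace("+", ""))
print(count)  # 4
```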
Upvotes: 2
Reputation: 43504
Here's a non-udf solution. Split your string on the character you are trying to count; the value you want is the length of the resulting array minus 1:
from pyspark.sql.functions import col, size, split
DF.withColumn('Number_Products_Assigned', size(split(col("assigned_products"), r"\+")) - 1)
You have to escape the +
because it's a special regex character.
+--------------------+------------------------+
| assigned_products|Number_Products_Assigned|
+--------------------+------------------------+
|POWER BI PRO+Powe...| 3|
+--------------------+------------------------+
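The split-and-count logic can be mirrored with Python's own re module (a plain-Python sketch, independent of Spark). It also shows why the escape matters: an unescaped + is a quantifier and is rejected by the regex engine, much as it is by Java's regex engine inside Spark:

```python
import re

s = ("POWER BI PRO+Power BI (free)+AUDIO CONFERENCING"
     "+OFFICE 365 ENTERPRISE E5 WITHOUT AUDIO CONFERENCING")

# Escaped: splits on literal '+', giving one more piece than separators.
parts = re.split(r"\+", s)
print(len(parts) - 1)  # 3

# Unescaped: '+' means "one or more", so the pattern fails to compile.
try:
    re.split("+", s)
except re.error as e:
    print("invalid pattern:", e)
```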
Upvotes: 25
Reputation: 25909
from pyspark.sql.functions import col, udf

@udf(returnType='int')
def cnt(s):
    # str.count returns the number of non-overlapping occurrences;
    # guard against null values, which Spark passes in as None
    return s.count('+') if s is not None else None

DF.withColumn('Number_Products_Assigned', cnt(col("assigned_products")))
Upvotes: 6