Guilherme Lana

Reputation: 21

How do I count consecutive values with PySpark?

I am trying to count consecutive values that appear in a column with PySpark. I have the column "a" in my DataFrame and expect to create the column "b" shown below.

+---+---+
|  a|  b|
+---+---+
|  0|  1|
|  0|  2|
|  0|  3|
|  0|  4|
|  0|  5|
|  1|  1|
|  1|  2|
|  1|  3|
|  1|  4|
|  1|  5|
|  1|  6|
|  2|  1|
|  2|  2|
|  2|  3|
|  2|  4|
|  2|  5|
|  2|  6|
|  3|  1|
|  3|  2|
|  3|  3|
+---+---+

I have tried to create column "b" with the lag function over a window, but without success.

from pyspark.sql import functions as f
from pyspark.sql.window import Window

w = Window\
  .partitionBy(df.some_id)\
  .orderBy(df.timestamp_column)

df.withColumn(
  "b",
  f.when(df.a == f.lag(df.a).over(w),
         f.sum(f.lit(1)).over(w)).otherwise(f.lit(0))
)

Upvotes: 2

Views: 806

Answers (1)

Guilherme Lana

Reputation: 21

I was able to resolve this issue with the following code:

from pyspark.sql import functions as f
from pyspark.sql.window import Window

df = df.withColumn(
  "b",
  f.row_number().over(Window.partitionBy("a").orderBy("timestamp_column"))
)
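
Note that partitionBy("a") gives the expected result here because each value of "a" occupies a single contiguous block of rows. If the same value can reappear in a later run (say 0, 0, 1, 1, 0, 0), a run identifier is needed first. Below is a minimal sketch of the standard difference-of-row-numbers (gaps-and-islands) technique, assuming the same column names as above and hypothetical sample data:

from pyspark.sql import SparkSession, functions as f
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample in which the value 0 appears in two separate runs.
df = spark.createDataFrame(
    [(1, 0), (2, 0), (3, 1), (4, 1), (5, 0), (6, 0)],
    ["timestamp_column", "a"],
)

# A global row number minus a per-value row number is constant within each
# consecutive run, so the difference identifies the run ("grp").
# Note: a window with no partitionBy pulls all rows into one partition,
# which is fine for a sketch but worth avoiding on large data.
w_all = Window.orderBy("timestamp_column")
w_val = Window.partitionBy("a").orderBy("timestamp_column")

df = (
    df.withColumn("grp", f.row_number().over(w_all) - f.row_number().over(w_val))
      .withColumn("b", f.row_number().over(
          Window.partitionBy("a", "grp").orderBy("timestamp_column")))
      .drop("grp")
)

With this sample, "b" restarts at 1 for the second run of zeros, which row_number over partitionBy("a") alone would not do.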

Upvotes: 0
