Reputation: 3811
df
Date        Col1  Col2
2010-01-01    23    28
2012-09-01    50    70
2010-03-04    80    10
2012-04-01    19    20
2012-03-05    67     9
from pyspark.sql.functions import year

df_new = df.withColumn('year', year(df['Date']))
Date        Col1  Col2  year
2010-01-01    23    28  2010
2012-09-01    50    70  2012
and so on
Now, I am trying to find the maximum of Col1 and Col2 for each year. So I use groupby:
df_new.groupby('year').max().show()
The result I get is not what I expected. Result obtained:
year  max(year)
2010       2010
2012       2012
and so on
Expected result:
year  max(Col1)  max(Col2)
2010         80         28
2012         67         70
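For reference, the sample frame above can be built like this (a sketch; Col1 and Col2 are assumed to be numeric here and Date a 'yyyy-MM-dd' string):

df = spark.createDataFrame(
    [("2010-01-01", 23, 28), ("2012-09-01", 50, 70), ("2010-03-04", 80, 10),
     ("2012-04-01", 19, 20), ("2012-03-05", 67, 9)],
    ["Date", "Col1", "Col2"],
)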
Upvotes: 0
Views: 1360
Reputation: 2003
In case you have a huge dataset, it is better to use a window function in such cases, as below; this performs much better than groupBy.
from pyspark.sql import functions as F
from pyspark.sql.window import Window as W

df = spark.table("test_poc")
df = df.withColumn("Year", F.year(F.col("date")))

# Window partitioned by year: max/min are computed per year and attached to every row
_w = W.partitionBy(F.col("Year"))
df = df.withColumn("max_col", F.max("id").over(_w)).withColumn("min_col", F.min("id").over(_w))
df.show()
---------OUTPUT------------
+---+-------------------+----+-------+-------+
| id| date|Year|max_col|min_col|
+---+-------------------+----+-------+-------+
| 5|2019-12-31 23:26:59|2019| 5| 2|
| 2|2019-12-31 23:26:59|2019| 5| 2|
| 1|1969-12-31 23:26:59|1969| 3| 1|
| 2|1969-12-31 23:26:30|1969| 3| 1|
| 3|1969-12-31 23:26:26|1969| 3| 1|
| 4|2020-12-31 23:26:59|2020| 4| 1|
| 1|2020-12-31 23:26:59|2020| 4| 1|
+---+-------------------+----+-------+-------+
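Adapted to the question's df_new, the same pattern would look roughly like this (a sketch, assuming the year, Col1 and Col2 columns from the question). Since a window keeps every input row, a final distinct() collapses the result to one row per year:

from pyspark.sql import functions as F
from pyspark.sql.window import Window as W

_w = W.partitionBy("year")
(df_new
 .withColumn("max_col1", F.max("Col1").over(_w))   # per-year max of Col1 on every row
 .withColumn("max_col2", F.max("Col2").over(_w))   # per-year max of Col2 on every row
 .select("year", "max_col1", "max_col2")
 .distinct()                                        # keep one row per year
 .show())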
Upvotes: 1
Reputation: 4045
You should perform multiple max aggregations with agg on Col1 and Col2:
from pyspark.sql import functions as F

df_new.groupBy(F.year("Date")).agg(F.max("Col1"), F.max("Col2")).show()
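As a small variation (a sketch; the alias names are just illustrative), the grouping expression and the aggregates can be aliased so the result columns are not called year(Date) and max(Col1):

from pyspark.sql import functions as F

(df_new
 .groupBy(F.year("Date").alias("year"))                         # alias the grouping expression
 .agg(F.max("Col1").alias("max_Col1"),
      F.max("Col2").alias("max_Col2"))                          # alias the aggregates
 .show())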
Upvotes: 1
Reputation: 10362
Check the code below.
from pyspark.sql import functions as F

(df.withColumn("year", F.year(df["date"]))
   .groupBy("year")
   .agg(F.max("col1").alias("max_col1"), F.max("col2").alias("max_col2"))
   .show())
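With the sample data from the question this should return 2010 with max_col1 = 80 and max_col2 = 28, and 2012 with max_col1 = 67 and max_col2 = 70, matching the expected result above.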
Upvotes: 1