rclakmal

Reputation: 1982

Sort in descending order in PySpark

I'm using PySpark (Python 2.7.9/Spark 1.3.1) and have a DataFrame GroupObject which I need to filter and sort in descending order. I'm trying to achieve that with this piece of code.

group_by_dataframe.count().filter("`count` >= 10").sort('count', ascending=False)

But it throws the following error.

sort() got an unexpected keyword argument 'ascending'

Upvotes: 149

Views: 448109

Answers (8)

Narendra Maru

Reputation: 827

You can also use groupBy and sort, as follows (this works in PySpark 3.0+ as well). Note that desc needs to be imported, and after withColumnRenamed the column is called distinct_name, so that is the name to sort on:

from pyspark.sql.functions import desc

dataFrameWay = df.groupBy("firstName").count().withColumnRenamed("count", "distinct_name").sort(desc("distinct_name"))

Upvotes: 8

Wria Mohammed

Reputation: 1611

You can use pyspark.sql.functions.desc instead.

from pyspark.sql.functions import desc

# g is an existing DataFrame with a 'dst' column
g.groupBy('dst').count().sort(desc('count')).show()

Upvotes: -2

Mr RK

Reputation: 19

PySpark added a Pandas-style sort with the ascending keyword argument in version 1.4.0. You can now use

df.sort('<col_name>', ascending=False)

Or you can use the orderBy function with a descending column:

df.orderBy(df['<col_name>'].desc())

Upvotes: 1

Aramis NSR

Reputation: 1847

RDD.sortBy(keyfunc, ascending=True, numPartitions=None)

An example:

# rdd2 is assumed to be an existing RDD of text lines
words = rdd2.flatMap(lambda line: line.split(" "))
counter = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)

print(counter.sortBy(lambda a: a[1], ascending=False).take(10))
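
For reference, a minimal self-contained sketch of the same idea (the local SparkContext and the in-memory sample lines are illustrative assumptions, not part of the original answer):

from pyspark import SparkContext

sc = SparkContext("local", "sortBy example")

# stand-in for rdd2; in practice it might come from sc.textFile(...)
rdd2 = sc.parallelize(["spark sorts things", "spark counts things", "spark sorts words"])

words = rdd2.flatMap(lambda line: line.split(" "))
counter = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)

# word counts, highest first
print(counter.sortBy(lambda a: a[1], ascending=False).take(10))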

Upvotes: 1

Henrique Florencio

Reputation: 3751

Use orderBy:

df.orderBy('column_name', ascending=False)

Complete answer:

group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)

http://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html

Upvotes: 158

Prabhath Kota

Reputation: 113

In PySpark 2.4.4:

1) group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)

2) from pyspark.sql.functions import desc
   group_by_dataframe.count().filter("`count` >= 10").sort(desc('count'))

Option 1) needs no import and is short and easy to read, so I prefer 1) over 2).

Upvotes: 7

gdoron

Reputation: 150303

By far the most convenient way is using this:

df.orderBy(df.column_name.desc())

It doesn't require any special imports.
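
As a sketch applied to the question's pipeline (assuming the same group_by_dataframe as above): a column literally named count collides with the DataFrame.count method under attribute access, so bracket syntax is the safe spelling here:

counts = group_by_dataframe.count().filter("`count` >= 10")

# counts.count would resolve to the DataFrame.count method,
# so use bracket syntax for a column named "count"
counts.orderBy(counts['count'].desc())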

Upvotes: 37

zero323

Reputation: 330373

In PySpark 1.3 the sort method doesn't take an ascending parameter. You can use the desc method instead:

from pyspark.sql.functions import col

(group_by_dataframe
    .count()
    .filter("`count` >= 10")
    .sort(col("count").desc()))

or the desc function:

from pyspark.sql.functions import desc

(group_by_dataframe
    .count()
    .filter("`count` >= 10")
    .sort(desc("count")))

Both methods can be used with Spark >= 1.3 (including Spark 2.x).

Upvotes: 230
