khaled alomar

Reputation: 683

Haskell quicksort complexity?

I ran the quicksort Haskell function below:

qsort :: Ord a => [a] -> [a]
qsort [] = []
qsort (x:xs) =
  qsort smaller ++ [x] ++ qsort larger
  where
    smaller = [a | a <- xs, a < x]
    larger  = [b | b <- xs, b >= x]

To test its runtime on a reasonably large input, I ran:

qsort [100000,99999..0]

It has been running for 40 minutes and still has not finished! Even a list of 100,000 elements is not that big, so:

  1. how could I handle a billion elements?
  2. does this sorting algorithm take less time in other languages such as C# or Python, or is this a problem with Haskell itself?
  3. how could I predict the runtime of an algorithm from its complexity?

Upvotes: 4

Views: 909

Answers (1)

Chris Martin

Reputation: 30756

The worst-case time of quicksort is O(n²). Your implementation uses the first element as the pivot, which hits that worst case when the input is already sorted (or reverse-sorted): one of `smaller` and `larger` is empty at every step, so the recursion goes n levels deep and performs roughly n²/2 comparisons.

To get quicksort to perform at its expected O(n log n) complexity, you need either randomized pivot selection or a random shuffle of the input before sorting.
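The shuffle-first approach can be sketched like this. It is a minimal example, not the only way to do it: `shuffle` is a helper name introduced here (it tags each element with a random `Double` key and sorts on the keys, itself O(n log n) via `Data.List`'s merge sort), and `System.Random` comes from the `random` package, which is not part of `base`:

```haskell
import Data.List (sortOn)
import System.Random (randomRIO)  -- from the "random" package

-- The same quicksort as in the question, pivoting on the head element.
qsort :: Ord a => [a] -> [a]
qsort [] = []
qsort (x:xs) = qsort smaller ++ [x] ++ qsort larger
  where
    smaller = [a | a <- xs, a < x]
    larger  = [b | b <- xs, b >= x]

-- Hypothetical helper: pair each element with a random key, sort on the keys,
-- then discard the keys. This breaks up any pre-existing order in the input.
shuffle :: [a] -> IO [a]
shuffle xs = do
  keys <- mapM (const (randomRIO (0, 1) :: IO Double)) xs
  pure (map snd (sortOn fst (zip keys xs)))

main :: IO ()
main = do
  ys <- shuffle [100000, 99999 .. 0]
  print (take 5 (qsort ys))  -- prints [0,1,2,3,4]
```

After the shuffle, a sorted (or reverse-sorted) input no longer forces the pathological all-empty-partition recursion, so the expected running time is O(n log n) regardless of the input's initial order.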

Upvotes: 7
