Swapnil Pandey

Reputation: 599

How to find number of elements greater/smaller than an element in an array?

I want to find the fastest way to find the number of elements which are smaller/greater than each element in an array.

For example: if the array is [5, 6, 8, 2, 4], then for the first element the number of elements smaller than it is 2.

The best I could come up with was comparing each element with the rest of the array, but that takes a long time for large arrays with around 10^5 entries.

My code:

/* O(n^2): compare each element with every other element */
for(i=0;i<n;i++)
{
    long long count=0;   /* long long to match the %lld format specifier */
    for(j=0;j<n;j++)
    {
        if( i!=j && (ar[i]>ar[j]) )
        {
            count++;
        }
    }
    printf("%lld ",count);
}

Edit: I want to display the number of elements smaller than each array element. That is, for the above example, I want the answer to be: 2 3 4 0 1. And the array can contain repeated values.

Upvotes: 3

Views: 44594

Answers (7)

VINIT MUNDRA

Reputation: 11

The best way would be to use Fenwick trees (Binary Indexed Trees). These data structures are designed for exactly this kind of problem; have a look at them in a tutorial or video, since they are hard to explain fully in a short answer.

Upvotes: 0

Neel Alex

Reputation: 733

The following code snippet should help with your query.
It uses binary search, so each query takes O(log n) time,
where n is the number of elements in the array.

Note that you need to sort the array first,
hence the overall time complexity is O(n log n).

#include <iostream>
using namespace std;

// Count elements strictly greater than val in the sorted array arr[0..size).
// Note: sizeof(arr)/sizeof(arr[0]) does not work inside the function, because
// an array parameter decays to a pointer, so the size is passed in explicitly.
int countLargerElements(const int arr[], int size, int val){
    int l = 0;
    int r = size - 1;
    int resultantIndex = size;      // index of the first element > val
    while(l <= r){
        int mid = l + (r - l) / 2;
        if(arr[mid] <= val){
            l = mid + 1;
        }
        else{
            r = mid - 1;
            resultantIndex = mid;
        }
    }
    return size - resultantIndex;
}

// Count elements strictly smaller than val in the sorted array arr[0..size).
int countSmallerElements(const int arr[], int size, int val){
    int l = 0;
    int r = size - 1;
    int resultantIndex = size;      // index of the first element >= val
    while(l <= r){
        int mid = l + (r - l) / 2;
        if(arr[mid] < val){         // strict comparison: elements equal to val are not smaller
            l = mid + 1;
        }
        else{
            r = mid - 1;
            resultantIndex = mid;
        }
    }
    return resultantIndex;
}

int main(){
    int arr[] = {1, 2, 3, 4, 5, 5, 7};   // must already be sorted
    int size = sizeof(arr) / sizeof(arr[0]);
    int x;
    cout << "Input x: ";
    cin >> x;
    int countOfSmallerElements = countSmallerElements(arr, size, x);
    int countOfLargerElements = countLargerElements(arr, size, x);

    cout << " smaller than " << x << " : " << countOfSmallerElements << endl;
    cout << " larger than " << x << " : " << countOfLargerElements << endl;
}

Upvotes: 3

Adnan Azmat

Reputation: 33

Sort the array. Now you can reduce each query from O(log n) to O(1). Create a HashMap, loop through the sorted array, and store each element as a key with its sorted position as the value (for duplicates, keep the position of the first occurrence, which equals the count of strictly smaller elements). HashMap lookups are O(1) on average, so this is more time-efficient than a binary search per query.

Upvotes: 0

suren

Reputation: 385

The simplest trick is to sort the array first. In a sorted array of, say, 10 elements, the first element is smaller than the remaining 9, the second is smaller than the remaining 8, and so on; equivalently, the element at index i has i elements before it that are smaller.

The one exception is duplicates: equal elements must all take the position of their first occurrence, so you only need to compare each element with the one just before it. I think this is the most feasible solution.

Upvotes: 0

amit

Reputation: 178491

As others have said, you can solve this in O(n log n) by sorting the array and then, for each element x, finding the index of the first number lower/higher than x.

I want to prove a lower bound for the problem:
it cannot be done better than Omega(n log n) under the algebraic decision tree model (which basically rules out hashing).

Proof: We prove it by reducing the Element Distinctness Problem to it.
Assume we had an algorithm A that solves this problem in O(f(n)).
Given an array arr, invoke A on it. The result is a new array res, where res[i] is the number of elements lower than arr[i].
Note that for any two indices i, j: res[i] == res[j] iff arr[i] == arr[j].
Thus, there is a duplicate in arr iff there is a duplicate in res.
However, all elements in res are in the range [0, n-1]. This means we have an element distinctness problem where all elements are bounded by n.
This variant can be solved in O(n) using a modification of bucket sort.

So, we have basically shown an algorithm that solves Element Distinctness in O(n + f(n)); but since Element Distinctness is an Omega(n log n) problem under this model, f(n) itself must be in Omega(n log n).

QED
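The O(n) bounded-range distinctness check used in the last step can be sketched as follows (`allDistinct` is an illustrative helper, not part of the original proof): since every value lies in [0, n-1], a boolean table of size n suffices.

```cpp
#include <vector>

// O(n) element-distinctness check when all values lie in [0, n-1]:
// mark each value in a boolean table; a repeated mark means a duplicate.
bool allDistinct(const std::vector<int>& res) {
    std::vector<bool> seen(res.size(), false);
    for (int v : res) {
        if (seen[v]) return false;  // v already appeared
        seen[v] = true;
    }
    return true;
}
```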

Upvotes: 2

Ivaylo Strandjev

Reputation: 70989

If you plan to perform only a single query of this kind, you cannot improve on the linear approach you already have. However, if you plan to perform many such queries, you can sort the array and then run a binary search for each query. This gives O(n*log(n)) precomputation complexity and O(log(n)) complexity per query. Note that this approach is only an improvement if you plan to perform more than O(log(n)) queries.

Upvotes: 1

Natasha Dutta

Reputation: 3272

A simple approach would be:

  1. Sort the array first, possibly using qsort().
  2. Perform a binary search on the sorted array to find the element's position.
  3. Derive the other count from the array size: the position gives the number of smaller elements, and the size minus that position (adjusted for duplicates) gives the number of larger ones.

Upvotes: 0
