Reputation: 137
I am looking at this challenge:
You are provided an array A[ ] of N elements.
Also, you have to answer M queries.
Each query is of the following type: given a value X, find A[1]%X + A[2]%X + ... + A[N]%X.
- 1<=N<=100000
- 1<=M<=100000
- 1<=X<=100000
- 1<=elements of array<=100000
I am having trouble computing this value efficiently.
How can we compute this sum for different values of X?
Upvotes: 1
Views: 2424
Reputation: 1
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> arr(n);
        long long sum = 0;
        for (int i = 0; i < n; i++) {
            cin >> arr[i];
        }
        cout << accumulate(arr.begin(), arr.end(), sum) - n << '\n';
    }
}
In case you don't know about std::accumulate, refer to this.
Upvotes: 0
Reputation: 15035
Here is a way that you could at least reduce the multiplicative factor in the time complexity.
In the C standard, the modulo (or remainder) operator is defined by a % b = a - (a / b) * b (where / is integer division).
A naive, iterative way (possibly useful on embedded systems with no division unit) to compute the modulo is therefore (pseudo-code):
function remainder (A, B):
    rem = A
    while rem >= B:
        rem -= B
    return rem
But how does this help us at all? Suppose we:

1. Sort the values of A[i] in ascending order.
2. Compute the sum of all elements of A[] -> S.
3. Find the lowest index I at which the element value A[I] is greater than or equal to X. X must be subtracted from all elements in the array from index I onwards, so we must subtract (N - I + 1) * X from the sum S (using 1-based indexing).
4. Instead of subtracting immediately, keep a counter K (initialized to zero) which is equal to the total multiple of X we must subtract from S to find the sum of all remainders. Thus at this stage we simply add N - I + 1 to K.
5. Repeat the search for L = 2X, 3X, ... and so on, until we have passed the end of the array.
6. The required result is then S - K * X.

Pseudocode:
function findSumOfRemainder (A[N], X):
    sort A in ascending order
    S = sum of A
    K = 0
    L = X
    I = 1
    while I <= N:
        I = lowest index such that A[I] >= L (if no such index, break)
        K += N - I + 1
        L += X
    return S - K * X
What is the best way to find I at each stage, and how does it relate to the time complexity?

Binary search: Since the entire array is sorted, to find the first index I at which A[I] >= L, we can just do a binary search on the array (or the succeeding sub-array at each stage of the iteration, bounded by [I, N]). This has complexity O(log[N - I + 1]).
Linear search: Self-explanatory - increment I until A[I] >= L, taking O(N - I + 1) in the worst case.
You may dismiss the linear search method as being "stupid" - but let's look at the two different extreme cases. For simplicity we can assume that the values of A are "uniformly" distributed.

(max(A) / X) << N: We will have to compute very few values of I; binary search is the preferred method here because the total complexity would be bounded by O([max(A) / X] * log[N]), which is much better than that of linear search O(N).

(max(A) / X) ~ N: We will have to compute many values of I, each separated by only a few indices. In this case the total binary search complexity would be bounded by O(log N) + O(log[N-1]) + O(log[N-2]) + ... ~ O(N log N), which is significantly worse than that of linear search O(N).
So which one do we choose? Well this is where I must get off, because I don't know what the optimal answer would be (if there even is one). But the best I can say is to set some threshold value for the ratio max(A) / X - if smaller, choose binary search, else linear.
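As a rough sketch of that threshold idea (the cutoff constant 16 below is an arbitrary illustrative value, not a tuned one):

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Hybrid search: binary search when there are few thresholds (max(A)/X small),
// a forward linear scan otherwise. Precondition: a is sorted ascending.
long long sumOfRemaindersHybrid(const std::vector<int>& a, long long x) {
    if (a.empty()) return 0;
    long long s = std::accumulate(a.begin(), a.end(), 0LL);
    long long maxA = a.back();
    // Assumed heuristic: use binary search only if (stage count) * 16 < N.
    bool useBinary = (maxA / x) * 16 < (long long)a.size();
    long long k = 0;
    std::size_t i = 0;
    for (long long l = x; l <= maxA; l += x) {
        if (useBinary) {
            i = std::lower_bound(a.begin() + i, a.end(), l) - a.begin();
        } else {
            while (i < a.size() && a[i] < l) ++i;  // resumes where it left off
        }
        k += (long long)(a.size() - i);  // elements >= l need one more x subtracted
    }
    return s - k * x;
}
```

Note the linear scan never restarts from the front, so its total cost per query is O(N + max(A)/X) across all stages combined.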
I welcome any comments on the above + possible improvements; the range constraint of the values may allow better methods for finding values of I
(e.g. radix sort?).
Upvotes: 4