Reputation: 431
A is an array containing at most 10^5 integers.
We have to perform 2 kinds of operations on this array in O(log N) complexity (where N = number of elements in A).
Operation 1: given v, i and j, add v to A[k] for every k with i <= k <= j.
Operation 2: given i and j, calculate (A[i] * A[i+1] * A[i+2] * ... * A[j]) % M. (M is prime and is the same for all operations.)
There will be at most 10^5 operations.
If it's not possible in O(log N), then what is the best possible complexity for these operations?
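For reference, here is a naive baseline of what the two operations mean (array A, modulus M, 0-based inclusive indices and the function names are just for illustration); each call is O(j - i + 1), which is what I'd like to beat:

```cpp
#include <cstdint>
#include <vector>

// Naive baseline for both operations, 0-based inclusive range [i, j].
// Names (rangeAdd, rangeProductMod) are illustrative only.
void rangeAdd(std::vector<int64_t>& A, int i, int j, int64_t v) {
    for (int k = i; k <= j; ++k)
        A[k] += v;                              // Operation 1: O(j - i + 1)
}

int64_t rangeProductMod(const std::vector<int64_t>& A, int i, int j, int64_t M) {
    int64_t p = 1 % M;
    for (int k = i; k <= j; ++k) {
        int64_t x = ((A[k] % M) + M) % M;       // normalize into [0, M)
        p = (p * x) % M;                        // Operation 2: O(j - i + 1)
    }
    return p;
}
```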
Upvotes: 4
Views: 147
Reputation: 5939
It's possible that j - i is on the order of N, and you have to change every element in that range. That makes any algorithm faster than O(N) impossible, as Paul said. K is not a parameter of the problem, it's just a loop variable, so the log(K) in Bidhan's answer makes no sense.
Now, if the question were not about time complexity proper, but about the depth of a tree of massively parallel operations (such as you have on CUDA, for example), then, given enough threads, Operation 1 could trivially be performed in O(1), since all the additions are independent, and Operation 2 in O(log(N)) by multiplying mod M pairs of adjacent elements (requires 100000/2 threads), then pairs of adjacent results, and so on, until a single answer remains.
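A sketch of that pairwise reduction, written sequentially (the function name is illustrative, not from the question); every multiplication within one pass is independent of the others, so with enough threads each pass could run in parallel and only the O(log N) passes would remain:

```cpp
#include <cstdint>
#include <vector>

// Tree-style reduction for Operation 2: multiply adjacent pairs mod M, then
// pairs of the results, and so on. There are ceil(log2(N)) passes; every
// multiplication within a pass is independent, which is what would let a
// massively parallel machine finish in O(log N) depth. This version runs the
// passes sequentially, so the total work is still O(N).
int64_t treeProductMod(std::vector<int64_t> vals, int64_t M) {
    if (vals.empty()) return 1 % M;
    for (auto& x : vals) x = ((x % M) + M) % M;        // normalize into [0, M)
    while (vals.size() > 1) {
        std::size_t half = (vals.size() + 1) / 2;
        for (std::size_t t = 0; t < vals.size() / 2; ++t)   // one parallel pass
            vals[t] = (vals[2 * t] * vals[2 * t + 1]) % M;
        if (vals.size() % 2 == 1)                      // odd element carries over
            vals[half - 1] = vals[vals.size() - 1];
        vals.resize(half);
    }
    return vals[0];
}
```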
This was, however, not the question. Barring massively parallel computing, the complexity of each operation is O(N), no matter how you perform it.
Upvotes: 0
Reputation: 27577
Since it looks like you have to access all the elements in the range [i, j], the complexity depends on the linear size of that range, which is O(N) in the worst case.
Upvotes: 1