apacay

Reputation: 1712

Parallelism and arrays: a runtime-defined number of calculations on a runtime-defined subgroup of elements

I have a number of elements that is not known until runtime (it fits within the range of a long), and I have to perform a runtime-determined number of calculations on some of them (which operations to perform, and on which element positions, is received as an argument).

The arguments, then, are the number of positions and a list of instructions (add this value to the positions from X to Y, subtract this value from the positions from P to Q).

The solution I had was:

  1. Fill a List<long> Output with 0s according to the length received in the args;

  2. long output[] = Output.ToArray();

  3. Instructions.ForEach(op => DoOperations(op)); (where op holds from which position to which position I have to perform which calculation with which numbers)

  4. A for loop doing the calculations on each of the filtered positions of output[].

    Now the thing is, the time constraint makes this approach not viable, so I need to bring async/WhenAll into action, but I also need all the tasks to share the output so they can all operate on it; it's not a problem which operation gets completed first. And there are no ref or out parameters in async methods.

What is the most efficient and clean way to do this?
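For reference, the sequential approach from steps 1–4 might be sketched like this (the instruction shape — a start position, an end position, and a value, all inclusive and 0-based — is an assumption, since the post does not show the real format):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SequentialSketch
{
    // One instruction: apply Value to positions Start..End (0-based, inclusive).
    // The record and its field names are assumptions, not from the original post.
    record Instruction(int Start, int End, long Value);

    static long Run(int length, List<Instruction> instructions)
    {
        long[] output = new long[length];            // steps 1-2: zero-filled array
        instructions.ForEach(op =>                   // step 3: walk the instructions
        {
            for (int i = op.Start; i <= op.End; i++) // step 4: apply to each position
                output[i] += op.Value;
        });
        return output.Max();
    }

    static void Main()
    {
        var ops = new List<Instruction> { new(0, 2, 100), new(1, 4, 100) };
        Console.WriteLine(Run(5, ops)); // positions 1-2 receive both adds -> prints 200
    }
}
```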

Edit, to show how the parallel version is working:

static long arrayManipulation(int n, int[][] queries)
    {
        // zero-initialized output array (replaces filling a List<long> with 0s)
        long[] operate = new long[n];

        Parallel.ForEach(queries, op =>
        {
            // queries are 1-based: apply op[2] to positions op[0]..op[1]
            Parallel.For(op[0] - 1, op[1], i => operate[i] += (long)op[2]);
        });
        return operate.Max();
    }
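One caveat with the snippet above: `operate[i] += op[2]` on a shared array is not atomic, so two queries whose ranges overlap can race and lose updates. A sketch of a thread-safe variant using Interlocked.Add (everything else kept as in the question):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class InterlockedSketch
{
    static long arrayManipulation(int n, int[][] queries)
    {
        long[] operate = new long[n]; // zero-initialized by the runtime
        Parallel.ForEach(queries, op =>
        {
            // Interlocked.Add makes each per-element update atomic,
            // so overlapping query ranges no longer race.
            for (int i = op[0] - 1; i < op[1]; i++)
                Interlocked.Add(ref operate[i], op[2]);
        });
        return operate.Max();
    }

    static void Main()
    {
        var queries = new[] { new[] { 1, 3, 100 }, new[] { 2, 5, 100 } };
        Console.WriteLine(arrayManipulation(5, queries)); // prints 200
    }
}
```

Atomic adds have their own per-operation cost, so whether this beats a lock or a purely sequential loop depends on the workload.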

Upvotes: 0

Views: 51

Answers (1)

NetMage
NetMage

Reputation: 26917

Without any information on the scale of the issue, it is hard to attempt to optimize, but some quick tests show that running the add operations themselves in parallel is not worth it. There is some overhead to setting up the Parallel.For and calling the lambda method, and that swamps the time to just use a basic for loop and addition.

OTOH, using an outer Parallel.For to step through the operations is worthwhile (on a multi-core CPU at least), running four times faster on my computer. Unrolling the inner loops to batch the operations does not seem to provide an advantage at 2x or 4x. The standard LINQ Max operator takes only 0.2% of the total time, so attempting to track the max during the operations does not seem worthwhile.

So my suggestion is:

Parallel.For(0, operations.Length, j1 => {
    var op = operations[j1];
    for (int j2 = op[0]; j2 < op[1]; ++j2)
        operate[j2] += op[2];
});
var ans = operate.Max();
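Dropped into the asker's method, that suggestion might look like the following sketch (the bounds are shifted by one on the assumption that the queries use 1-based inclusive ranges, as in the question's edit):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class AnswerSketch
{
    static long arrayManipulation(int n, int[][] queries)
    {
        long[] operate = new long[n];
        // Parallelize over the operations; each operation's inner loop is a
        // plain sequential for, which the quick tests above found faster
        // than a nested Parallel.For.
        Parallel.For(0, queries.Length, j1 =>
        {
            var op = queries[j1];
            for (int j2 = op[0] - 1; j2 < op[1]; ++j2)
                operate[j2] += op[2];
        });
        return operate.Max();
    }

    static void Main()
    {
        // non-overlapping ranges, so the result is deterministic
        var queries = new[] { new[] { 1, 2, 100 }, new[] { 3, 5, 50 } };
        Console.WriteLine(arrayManipulation(5, queries)); // prints 100
    }
}
```

As with the snippet in the question, concurrent += on overlapping ranges can race; if the query ranges overlap, the updates need Interlocked.Add or similar synchronization to be correct.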

Upvotes: 1
