Hair of Slytherin

Reputation: 406

How to optimize this CUDA kernel

I've profiled my model and it seems that this kernel accounts for about 2/3 of my total runtime. I was looking for suggestions to optimize it. The code is as follows.

__global__ void calcFlux(double* concs, double* fluxes, double* dt)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= maxlength) return; // guard in case the grid is padded beyond the array
    fluxes[idx] = knowles_flux(idx, concs);
    //fluxes[idx] = flux(idx, concs);
}

__device__ double knowles_flux(int r, double *conc)
{
    double frag_term = 0;
    double flux = 0;
    if (r == ((maxlength)-1))
    {
        //Calculation type : "Max"
        flux = -km*(r)*conc[r]+2*(ka)*conc[r-1]*conc[0];
    }
    else if (r > ((nc)-1))
    {
        //Calculation type : "F"
        //arrSum3(conc, &frag_term, r+1, maxlength-1);
        for (int s = r+1; s < (maxlength); s++)
        {
            frag_term += conc[s];
        }
        flux = -(km)*(r)*conc[r] + 2*(km)*frag_term - 2*(ka)*conc[r]*conc[0] + 2*(ka)*conc[r-1]*conc[0];
    }
    else if (r == ((nc)-1))
    {
        //Calculation type : "N"
        //arrSum3(conc, &frag_term, r+1, maxlength-1);
        for (int s = r+1; s < (maxlength); s++)
        {
            frag_term += conc[s];
        }
        flux = (kn)*pow(conc[0],(nc)) + 2*(km)*frag_term - 2*(ka)*conc[r]*conc[0];
    }
    else if (r < ((nc)-1))
    {
        //Calculation type : "O"
        flux = 0;
    }
    return flux;
}

Just to give you an idea of why the for loop is an issue, this kernel is launched on an array of about maxlength = 9000 elements. For our purposes now, nc is in the range of 2-6. Here's an illustration of how this kernel processes the incoming array (conc). For this array, five different types of calculations need to be applied to different groups of elements.

Array element : 0 1 2 3 4 5 6 7 8 9 ... 8955 8956 8957 8958 8959 8960
Type of calc  : M O O O O O N F F F ...   F   F    F    F    F   Max

The potential problems I've been trying to deal with right now are branch divergence from the quadruple if-else and the for loop.

My idea for dealing with the branch divergence is to break this kernel down into four separate device functions or kernels that treat each region separately and all launch at the same time. I'm not sure this is significantly better than just letting the branch divergence take place, which, if I'm not mistaken, would cause the four calculation types to be run in serial.

To deal with the for loop, you'll notice there's a commented-out arrSum3 function, which I wrote based on my previously (and probably poorly) written parallel reduction kernel. Using it in place of the for loop drastically increased my runtime. I feel like there's a clever way to accomplish what the for loop does, but I'm just not that smart, and my advisor is tired of me "wasting time" thinking about it.

Appreciate any help.

EDIT

Full code is located here : https://stackoverflow.com/q/21170233/1218689

Upvotes: 3

Views: 519

Answers (1)

huseyin tugrul buyukisik

Reputation: 11910

Assuming sgn() and abs() are not derived from "if"s and "else"s

__device__ double knowles_flux(int r, double *conc)
{
    double frag_term = 0;
    double fluxA = 0, fluxB = 0, fluxC = 0, flux = 0;

        //Calculation type : "Max"
        //no divergence
        //should prefer 20-30 extra cycles instead of a branching.
        //may not be good for CPU
        fluxA = (1-abs(sgn(r-(maxlength-1)))) * (-km*(r)*conc[r]+2*(ka)*conc[r-1]*conc[0]);
        //is zero if r and maxlength-1 are not equal

        //always compute this in shared memory so work will be equal for all cores, no divergence

        // you should divide kernel into several pieces to do a reduction
        // but if you dont want that, then you can try :
        for (int s = 0; s < someLimit; s++) // e.g. someLimit = maxlength; all threads run the same number of iterations, so no divergence
        {
            frag_term += conc[s] * (   abs(sgn( s-maxlength ))*sgn(1- sgn( s-maxlength ))  )* (      sgn(1+sgn(s-(r+1)))  );
        }
         //but you can make this easier using an "add and assign" operation
         // in shared memory (__shared__ in CUDA)
         //  global conc[] to local concL[] memory(using all cores)(100 cycles)
         // for(others from zero to upper_limit)
         // if(localID==0)
         // {
         //    frag_termL[0]+=concL[s]             // local to local (10 cycles/assign.)
         //    frag_termL[0+others]=frag_termL[0]; // local to local (10 cycles/assign.)
         // }  -----> uses nearly same number of cycles but uses much less energy
         //using single core (2000 instr. with single core vs 1000 instr. with 2k cores)
         // in local memory, then copy it to private registers accordingly using all cores



        //Calculation type : "F"

        fluxB = abs(sgn(r-(maxlength-1))) * (  abs(sgn(r-(nc-1)))*sgn(1+sgn(r-(nc-1)))   )*(-(km)*(r)*conc[r] + 2*(km)*frag_term - 2*(ka)*conc[r]*conc[0] + 2*(ka)*conc[r-1]*conc[0]);
        // is zero unless r is greater than (nc-1); the extra abs(sgn(r-(maxlength-1)))
        // factor keeps this from overlapping the "Max" case at r == maxlength-1



        //Calculation type : "N"


        fluxC = (   1-abs(sgn(r-(nc-1)))   )*((kn)*pow(conc[0],(nc)) + 2*(km)*frag_term - 2*(ka)*conc[r]*conc[0]);
        //zero if r and nc-1 are not equal



    flux = fluxA + fluxB + fluxC; //only one of these can be different than zero

    //no final mask is needed for the "O" region (r < nc-1):
    //all three terms are already zero there

    return flux;
}

Okay, let me unpack this a bit:

if(a>b) x+=y;

can be taken as

if a-b is negative, sgn(a-b) is -1;
then adding 1 to that -1 gives zero ==> satisfies the lower part of the comparison (a<b)
x += y*(sgn(a-b) + 1) adds y*0 = 0 if a<b (not a>b), x unchanged

if a-b is zero, sgn(a-b) is zero,
then we should multiply the upper solution by sgn(a-b) too!
x += y*(sgn(a-b) + 1)*sgn(a-b)
means
x += y*(0 + 1)*0 = 0, so a==b is satisfied too!

lets check what happens if a>b
x+= y*(sgn(a-b) +1)*sgn(a-b)
x+= y*(1 +1)*1  ==> y*2 is not acceptable, needs another sgn on the outside

x+= y* sgn((sgn(a-b)+1)*sgn(a-b))

x+= y* sgn((1+1)*1)

x+= y* sgn(2)   

x+= y only when a is greater than b

When the same subexpression, such as

abs(sgn(r-(nc-1)))

appears many times, you can compute it once and re-use it:

tmp = abs(sgn(r-(nc-1)));

.....  *tmp*(tmp-1) ....
...... +tmp*zxc[s] .....
......  ......

to decrease total cycles even more! Register access runs at the level of terabytes/s, so it shouldn't be a problem. The same goes for caching a global access in a register:

tmpGlobal = conc[r];

...... tmpGlobal * tmp .....
.... tmpGlobal + x - y ....

all in private registers, doing the work at terabytes per second.

Warning: reading from conc[-1] usually won't fault as long as the value is multiplied by zero (unless conc[0] happens to sit at the very start of an allocation). But writing out of bounds is hazardous.

If you need to avoid conc[-1] anyway, you can absolute-value the index too. See:

 tmp = conc[i-1] becomes tmp = conc[abs(i-1)], which always reads from a non-negative index; the value will be multiplied by zero later anyway. This is lower-bound protection.
 You can apply an upper-bound protection the same way; it just adds a few more cycles.

Think about using vector-shuffle operations if working on pure scalar values is not fast enough when accessing conc[r-1] and conc[r+1]. A shuffle between a vector's (or warp's) lanes is faster than copying through local memory to another core/thread.

Upvotes: 4
