user1111929

Reputation: 6099

OpenCL: how to use local memory in JOCL

Assume that I want to perform parallel computations on a large fixed object, e.g. a fixed large sparse (directed) graph, or any similar kind of object.

To do any reasonable computations on this graph or object, such as random walks in the graph, putting the graph in global memory is presumably out of the question for speed reasons.

That leaves local/private memory. If I have understood the GPU architecture correctly, there is virtually no speed difference between (read-only) access to local and private memory; is that correct? I'm reluctant to copy the graph to private memory, since that would mean every single work unit has to store the entire graph, which could eat up the GPU's memory very quickly (and for very large graphs even reduce the number of cores that can be used and/or make the OS unstable).

So, assuming I'm correct above about the read speed of local vs private memory, how do I do this in practice? If, for simplicity, I reduce the graph to an int[] from and an int[] to (storing the start and end of each directed edge), I can of course make the kernel look like this:

__kernel void computeMe(__local const int *to, __local const int *from, __global int *result) {
     //...
}

but I don't see how I should call this from JOCL, since no private/local/global modifier is given there.

Will the local variables be written automatically to the memory of each local workgroup? Or how does this work? It's not clear to me at all how I should be doing this memory assignment correctly.

Upvotes: 1

Views: 1508

Answers (2)

isti_spl

Reputation: 716

You wrote "putting the graph in global memory is presumably out of the question for speed reasons." - well you don't have much other choices. I mean the data is in general in the global memory.

(As a side note: in particular cases you might recast it to textures, if the element format is suitable. Also, the so-called 'constant' memory on NVIDIA is optimized for 'broadcast'-type operations, where all threads read from the same location, which I guess is not your case. I would suggest staying away from these types at the beginning.)

OK, as advice, first simply try using global memory. Local memory's lifetime is only the execution of the kernel, so it is justified only if you re-use the same data element more than once (think of it as a cache memory into which you explicitly preload data).

Also, local memory is limited to about 16 to 48 KB, so it can store only a portion of your data. Try to decompose your graph into subgraphs that fit into these blocks.

In your representation you could partition the edges (from[], to[]) into fixed-size groups.
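As an illustration, a host-side sketch of such a partitioning in plain Java (the chunk size is hypothetical; in practice it would typically match the work-group size):

```java
import java.util.Arrays;

public class EdgePartition {

    // Number of chunks needed to cover n edges with the given chunk size.
    static int numChunks(int n, int chunkSize) {
        return (n + chunkSize - 1) / chunkSize;
    }

    // Splits one edge array (e.g. from[] or to[]) into fixed-size chunks;
    // the last chunk may be shorter.
    static int[][] partition(int[] edges, int chunkSize) {
        int chunks = numChunks(edges.length, chunkSize);
        int[][] out = new int[chunks][];
        for (int c = 0; c < chunks; c++) {
            int start = c * chunkSize;
            int end = Math.min(start + chunkSize, edges.length);
            out[c] = Arrays.copyOfRange(edges, start, end);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] from = {0, 1, 2, 3, 4, 5, 6};
        int[][] parts = partition(from, 3);
        System.out.println(parts.length);    // 3 chunks
        System.out.println(parts[2].length); // last chunk holds 1 edge
    }
}
```

Each chunk would then be processed by one work-group, which stages it into local memory before operating on it.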

The generic pattern is:

Step 1. Copy from global to local:

your_local_array[ get_local_id(0) ] = input_global_mem[ get_global_id(0) ];

Step 2. Make sure every work-item in the group has finished the copy: barrier(CLK_LOCAL_MEM_FENCE).

Now the work-items (threads) can work on the subgraph loaded into local memory.

Remember, local memory will contain only a limited portion of the entire graph. If any thread needs to access arbitrary nodes, the above pattern will not be usable.
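Putting the two steps together, a minimal kernel sketch of this pattern, assuming one edge per work-item and local buffers sized to the work-group (names are illustrative):

```
__kernel void computeMe(__global const int *from,
                        __global const int *to,
                        __global int *result,
                        __local int *l_from,   // sized to the work-group
                        __local int *l_to)
{
    int gid = get_global_id(0);
    int lid = get_local_id(0);

    // Step 1: each work-item copies one edge into local memory.
    l_from[lid] = from[gid];
    l_to[lid]   = to[gid];

    // Step 2: wait until the whole work-group has finished loading.
    barrier(CLK_LOCAL_MEM_FENCE);

    // ... now operate on the chunk in l_from/l_to ...
}
```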

For a start, I suggest experimenting with the algorithm without local memory (read directly from global) and making sure it works correctly (there are usually some surprises along the road). Later you can identify which portions of the data are worth staging in local memory to speed things up.

Upvotes: 1

prunge

Reputation: 23268

You can't pass values for local memory arguments from the host. The host cannot read/write local memory. To use local memory, you still need to pass the data in as global, then copy from global to local before you use it. This is only beneficial if you are reading the data many times.
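Concretely, in JOCL a local-memory argument is specified only by its size in bytes, with a null pointer instead of data; a sketch, assuming a kernel whose argument 0 is __local and argument 1 is a __global buffer (names are illustrative):

```java
// Argument 0: a __local buffer, sized here for one int per work-item.
// Only the size is passed; the host cannot supply data for local memory.
clSetKernelArg(kernel, 0, Sizeof.cl_int * localWorkSize, null);

// Argument 1: an ordinary __global buffer, passed as usual.
clSetKernelArg(kernel, 1, Sizeof.cl_mem, Pointer.to(resultMem));
```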

How about constant memory? If your input data does not change and is not too large, putting it into constant memory might give you a considerable speedup. The available constant memory is typically around 16 KB to 64 KB.

__kernel void computeMe(__constant int *to, __constant int *from, __global int *result) {
 //...
}

Edit (add references):

For an example use of __local memory in OpenCL, see here.

For NVIDIA hardware, more performance details can be found in the NVIDIA OpenCL Best Practices Guide (PDF), which includes information on the performance differences between the memory types.

Upvotes: 3
