user664303

Reputation: 2063

How to bind a 2D layered texture to pitched linear memory in CUDA

I have a CUDA compute capability 2.0 graphics card and the CUDA Toolkit 4.0, and I want to make use of the new tex2DLayered texture lookup function. However, my array (1280 x 960 x 200 layers, unsigned short) is too large to allocate as a CUDA 3D array, so I want to bind pitched linear memory to the texture instead. I cannot find any description of how to do this in the CUDA documentation or SDK examples, including the Simple Layered Texture example, which uses a CUDA 3D array rather than linear memory. I've also searched online, without success.

Can anyone either provide the code necessary to bind the texture, or a link to some instructions on how to do this? Thanks.

Upvotes: 2

Views: 1321

Answers (1)

user664303

Reputation: 2063

Section 3.2.10.1.5 of the CUDA C Programming Guide v4.0 states that:

"A layered texture can only be bound to a CUDA array created by calling cudaMalloc3DArray() with the cudaArrayLayered flag (and a height of zero for one-dimensional layered texture)."

Upvotes: 1
