Reputation: 123
I have a D3D11 Texture2D with the format DXGI_FORMAT_R10G10B10A2_UNORM and want to convert it into a D3D11 Texture2D with a DXGI_FORMAT_R32G32B32A32_FLOAT or DXGI_FORMAT_R8G8B8A8_UINT format, since only textures in those formats can be imported into CUDA.
For performance reasons I want this to run entirely on the GPU. I read some threads suggesting that I should set the second texture as a render target and render the first texture onto it, or that I should convert the texture via a pixel shader.
But as I don't know a lot about D3D, I wasn't able to get that working. In an ideal world I would be able to do this without setting up a whole rendering pipeline (IA, VS, etc.).
Does anyone have an example of this or any hints? Thanks in advance!
Upvotes: 3
Views: 881
Reputation: 41057
On the GPU, the way you do this conversion is a render-to-texture pass, which requires at least a minimal 'offscreen' rendering setup.
Create the target texture and a render target view for it (DXGI_FORMAT_R32G32B32A32_FLOAT, DXGI_FORMAT_R8G8B8A8_UINT, etc.). The restriction here is that it needs to be a format supported as a render target view on your Direct3D Hardware Feature Level. See Microsoft Docs.
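For example, a minimal sketch of that step in C++ (assuming device is your ID3D11Device* and srcTex is your existing ID3D11Texture2D* in DXGI_FORMAT_R10G10B10A2_UNORM; both names are placeholders, and error checking is omitted):

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    ComPtr<ID3D11Texture2D> dstTex;
    ComPtr<ID3D11RenderTargetView> rtv;

    // Describe the destination texture based on the source dimensions.
    D3D11_TEXTURE2D_DESC srcDesc = {};
    srcTex->GetDesc(&srcDesc);

    D3D11_TEXTURE2D_DESC dstDesc = srcDesc;
    dstDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT; // or DXGI_FORMAT_R8G8B8A8_UINT
    dstDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    dstDesc.Usage = D3D11_USAGE_DEFAULT;
    dstDesc.MipLevels = 1;
    dstDesc.CPUAccessFlags = 0;
    dstDesc.MiscFlags = 0;

    device->CreateTexture2D(&dstDesc, nullptr, dstTex.GetAddressOf());
    device->CreateRenderTargetView(dstTex.Get(), nullptr, rtv.GetAddressOf());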
Create an SRV for your source texture. Again, it needs to be supported as a texture by your Direct3D Hardware Feature Level.
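Continuing the sketch above, that can be as simple as:

    ComPtr<ID3D11ShaderResourceView> srv;
    device->CreateShaderResourceView(srcTex, nullptr, srv.GetAddressOf());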
Render the source texture to the RTV as a 'full-screen quad'. With Direct3D Hardware Feature Level 10.0 or greater, you can have the quad self-generated in the Vertex Shader, so you don't really need a Vertex Buffer for this. See this code.
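Here is a rough sketch of a pair of shaders for that, embedded as an HLSL string and compiled at runtime with D3DCompile (the entry point names VSMain/PSMain are just placeholders). Note that for a DXGI_FORMAT_R8G8B8A8_UINT target the pixel shader would have to return uint4 instead of float4:

    #include <d3dcompiler.h> // link with d3dcompiler.lib
    #include <cstring>

    static const char* kShaderSrc = R"(
    Texture2D<float4> Source : register(t0);

    // Full-screen triangle generated from SV_VertexID -- no vertex buffer
    // or input layout is needed.
    float4 VSMain(uint id : SV_VertexID) : SV_Position
    {
        float2 uv = float2((id << 1) & 2, id & 2);
        return float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
    }

    // Source and target have the same dimensions, so just read the matching
    // texel; the UNORM value arrives as a float in [0,1].
    float4 PSMain(float4 pos : SV_Position) : SV_Target
    {
        return Source.Load(int3(pos.xy, 0));
    }
    )";

    ComPtr<ID3DBlob> vsBlob, psBlob;
    D3DCompile(kShaderSrc, strlen(kShaderSrc), nullptr, nullptr, nullptr,
               "VSMain", "vs_4_0", 0, 0, vsBlob.GetAddressOf(), nullptr);
    D3DCompile(kShaderSrc, strlen(kShaderSrc), nullptr, nullptr, nullptr,
               "PSMain", "ps_4_0", 0, 0, psBlob.GetAddressOf(), nullptr);

    ComPtr<ID3D11VertexShader> vs;
    ComPtr<ID3D11PixelShader> ps;
    device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
                               nullptr, vs.GetAddressOf());
    device->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(),
                              nullptr, ps.GetAddressOf());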
Since you are starting with DXGI_FORMAT_R10G10B10A2_UNORM, you pretty much require Direct3D Hardware Feature Level 10.0 or better anyway. That actually makes it pretty easy. You still need to get a full rendering pipeline going, although you don't need a 'swapchain'.
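Putting the pieces above together, the actual pass is just a handful of state settings and a three-vertex draw (again only a sketch; context is assumed to be your immediate ID3D11DeviceContext*):

    D3D11_VIEWPORT vp = {};
    vp.Width = static_cast<float>(dstDesc.Width);
    vp.Height = static_cast<float>(dstDesc.Height);
    vp.MaxDepth = 1.0f;

    context->OMSetRenderTargets(1, rtv.GetAddressOf(), nullptr);
    context->RSSetViewports(1, &vp);
    context->IASetInputLayout(nullptr);      // no vertex buffer, so no input layout
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->VSSetShader(vs.Get(), nullptr, 0);
    context->PSSetShader(ps.Get(), nullptr, 0);
    context->PSSetShaderResources(0, 1, srv.GetAddressOf());
    context->Draw(3, 0);                     // one full-screen triangle

    // Unbind the SRV so the pipeline doesn't keep a reference to the source.
    ID3D11ShaderResourceView* nullSrv = nullptr;
    context->PSSetShaderResources(0, 1, &nullSrv);

    // dstTex now holds the converted image and can be registered with CUDA
    // (e.g. via cudaGraphicsD3D11RegisterResource).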
You may find this tutorial helpful.
Upvotes: 5