Reputation: 9681
I have created a swap chain using the following piece of code:
DXGI_SWAP_CHAIN_DESC swapchainDesc;
// Clear out the struct for use
ZeroMemory(&swapchainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));
// Fill the swap chain description struct
swapchainDesc.BufferCount = 1; // one back buffer
swapchainDesc.BufferDesc.Width = width;
swapchainDesc.BufferDesc.Height = height;
swapchainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // use 32-bit color DXGI_FORMAT_R8G8B8A8_UNORM DXGI_FORMAT_R32G32B32_FLOAT
swapchainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapchainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapchainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how swap chain is to be used
swapchainDesc.OutputWindow = hWnd; // the window to be used
swapchainDesc.SampleDesc.Count = 1; // how many multisamples (1-4, above is not guaranteed for DX11 GPUs)
swapchainDesc.SampleDesc.Quality = 0; // multisample quality level
swapchainDesc.Windowed = TRUE; // windowed/full-screen mode
// Configure DX feature level
D3D_FEATURE_LEVEL featureLevels[] =
{
D3D_FEATURE_LEVEL_11_1,
D3D_FEATURE_LEVEL_11_0,
D3D_FEATURE_LEVEL_10_1,
D3D_FEATURE_LEVEL_10_0
};
UINT numFeatureLevels = ARRAYSIZE(featureLevels);
char devname_dx[128];
bool device_ok = getAdapter(&adapter, devname_dx);
DXGI_ADAPTER_DESC adapterDesc;
adapter->GetDesc(&adapterDesc); //devid = 9436, subsysid = 177803304
std::cout << "DirectX GPU " << devname_dx << " : [DevID " << adapterDesc.DeviceId << " | SubSysID " << adapterDesc.SubSysId << "]" << std::endl;
// Create a device, device context and swap chain using the information in the scd struct
hr = D3D11CreateDeviceAndSwapChain(
adapter, // graphics adapter, which was already checked for CUDA support
D3D_DRIVER_TYPE_UNKNOWN, // If adapter is not NULL, we need to use UNKNOWN, otherwise use D3D_DRIVER_TYPE_HARDWARE
NULL, // software
NULL, // flags
featureLevels, // D3D features to enable
numFeatureLevels, // D3D feature count
D3D11_SDK_VERSION, // D3D SDK version (here 11)
&swapchainDesc, // swap chain description
&swapchain, // swap chain
&device, // D3D device
NULL, // supported feature level (pass a D3D_FEATURE_LEVEL* here, e.g. &featureLevel, to retrieve it)
&device_context); // D3D device context
if (FAILED(hr))
{
std::string msg = std::string("Failed to create DX device and swap chain: ") + std::to_string(hr);
throw std::runtime_error(msg); // std::exception(const char*) is a non-standard MSVC extension
}
After releasing my adapter
adapter->Release();
and retrieving my device context
device->GetImmediateContext(&device_context);
I retrieve a 2D texture texture_rgb
as the only back buffer
hr = swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&texture_rgb);
that is used by my render target view
hr = device->CreateRenderTargetView(texture_rgb, nullptr, &rtv);
I would like to create a staging 2D texture. Currently it will be used only for looking up some pixel data, but later I will integrate it with CUDA. It reflects the contents of my texture_rgb: dimensions, color format, pixel data, etc.
In order to do that I retrieve the description of texture_rgb
D3D11_TEXTURE2D_DESC texture_rgb_debugDesc;
texture_rgb->GetDesc(&texture_rgb_debugDesc);
and modify just the D3D11_TEXTURE2D_DESC::Usage
component
texture_rgb_debugDesc.Usage = D3D11_USAGE::D3D11_USAGE_STAGING;
I then attempt to create the other texture by using the modified description
ID3D11Texture2D* texture_rgb_debug = nullptr;
hr = device->CreateTexture2D(&texture_rgb_debugDesc, nullptr, &texture_rgb_debug);
However, DirectX fails with the following error:
E_INVALIDARG One or more arguments are invalid.
If I change the D3D11_TEXTURE2D_DESC::Usage
to the initial one (here D3D11_USAGE_DEFAULT
), the texture is created without a problem.
Upvotes: 1
Views: 343
Reputation: 8953
Staging textures cannot be bound at any stage of the pipeline, so BindFlags must be 0.
You also need to set the usage, as you did. The last part, since you want to read the data back on the CPU, is to override CPUAccessFlags as well:
ID3D11Texture2D* texture_rgb; // the back buffer retrieved from your swap chain
D3D11_TEXTURE2D_DESC texture_rgb_debugDesc;
texture_rgb->GetDesc(&texture_rgb_debugDesc);
texture_rgb_debugDesc.BindFlags = 0; //needed as no bind flags are allowed
texture_rgb_debugDesc.CPUAccessFlags = D3D11_CPU_ACCESS_FLAG::D3D11_CPU_ACCESS_READ;
texture_rgb_debugDesc.Usage = D3D11_USAGE_STAGING;
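With the staging texture created, reading pixels back goes through CopyResource and Map. A minimal sketch (assuming device_context is your immediate context and texture_rgb_debug was created from the description above):

```cpp
// Copy the GPU back buffer into the CPU-readable staging texture.
device_context->CopyResource(texture_rgb_debug, texture_rgb);

// Map the staging texture to inspect pixel data on the CPU.
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = device_context->Map(texture_rgb_debug, 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr))
{
    // Rows are mapped.RowPitch bytes apart (the pitch may be larger than
    // width * 4); with DXGI_FORMAT_R8G8B8A8_UNORM each pixel is 4 bytes.
    const uint8_t* row0 = static_cast<const uint8_t*>(mapped.pData);
    uint8_t r = row0[0], g = row0[1], b = row0[2], a = row0[3]; // pixel (0, 0)
    device_context->Unmap(texture_rgb_debug, 0);
}
```

Keep in mind that Map with D3D11_MAP_READ stalls until the GPU has finished writing the resource, so this pattern is fine for debugging but not for a per-frame hot path.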
Also note that multisampled textures cannot be created as staging, so if you want to support multisampling you will first need to resolve into a non-multisampled resource and then do your copy operation.
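That resolve step might look like this (names are illustrative; resolved must match the MSAA texture's dimensions and format, with SampleDesc.Count = 1):

```cpp
// Resolve the multisampled render target into a single-sample texture,
// which can then be copied into a staging resource for CPU readback.
device_context->ResolveSubresource(
    resolved, 0,                 // destination texture, subresource 0
    msaa_rt, 0,                  // multisampled source texture, subresource 0
    DXGI_FORMAT_R8G8B8A8_UNORM); // format used for the resolve
```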
To integrate with CUDA you do not need staging resources at all, you can interop using render targets directly.
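For the CUDA interop route, a hedged sketch using the CUDA runtime's D3D11 interop API (cuda_d3d11_interop.h); texture_rgb here is the render-target texture from the question, and error checking is omitted for brevity:

```cpp
#include <cuda_d3d11_interop.h>

// Register the D3D11 render target with CUDA once, up front.
cudaGraphicsResource* cuda_res = nullptr;
cudaGraphicsD3D11RegisterResource(&cuda_res, texture_rgb,
                                  cudaGraphicsRegisterFlagsNone);

// Each frame: map the resource and get a cudaArray backing the texture.
cudaGraphicsMapResources(1, &cuda_res);
cudaArray_t array = nullptr;
cudaGraphicsSubResourceGetMappedArray(&array, cuda_res, 0, 0);
// ... access `array` from kernels, e.g. through a surface/texture object ...
cudaGraphicsUnmapResources(1, &cuda_res);

// On shutdown:
cudaGraphicsUnregisterResource(cuda_res);
```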
Also to capture those errors, you should use D3D11_CREATE_DEVICE_DEBUG flag when you create your device.
hr = D3D11CreateDeviceAndSwapChain(
adapter, // graphics adapter, which was already checked for CUDA support
D3D_DRIVER_TYPE_UNKNOWN, // If adapter is not NULL, we need to use UNKNOWN, otherwise use D3D_DRIVER_TYPE_HARDWARE
NULL,
D3D11_CREATE_DEVICE_DEBUG, // flags (make this conditional)
featureLevels,
numFeatureLevels,
D3D11_SDK_VERSION,
&swapchainDesc, // swap chain description
&swapchain, // swap chain
&device, // D3D device
&featureLevel, // receives the feature level that was actually created
&device_context); // D3D device context
This will give a more meaningful error message in your Visual Studio debug output window when something fails (versus E_INVALIDARG, which is not of much help).
Upvotes: 1
Reputation: 37468
A texture obtained from a swap chain ought to have the D3D11_BIND_RENDER_TARGET
bind flag, while a texture with D3D11_USAGE_STAGING
usage cannot be used as an input or output of any rendering stage and is therefore not compatible with the D3D11_BIND_RENDER_TARGET
flag. You should probably override all the fields in the texture desc, except for the dimensions and format.
Upvotes: 1