Emil

Reputation: 187

In D3D12, can the render target view be any buffer?

In the samples I have looked at so far, the sequence of commands is roughly the following (sketched in code after the list):

  1. D3D12_DESCRIPTOR_HEAP_DESC with D3D12_DESCRIPTOR_HEAP_TYPE::D3D12_DESCRIPTOR_HEAP_TYPE_RTV
  2. ID3D12Device::CreateDescriptorHeap
  3. D3D12_CPU_DESCRIPTOR_HANDLE with ID3D12DescriptorHeap::GetCPUDescriptorHandleForHeapStart (and maybe ID3D12Device::GetDescriptorHandleIncrementSize)
  4. (Optional?) ID3D12Device::CreateRenderTargetView
    1. (Optional?) IDXGISwapChain3::GetBuffer
  5. Record rendering commands to a command list, notably OMSetRenderTargets and DrawInstanced, and then close the command list.
  6. ID3D12CommandQueue::ExecuteCommandLists
    1. (Optional?) IDXGISwapChain3::Present
  7. Schedule a signal with ID3D12CommandQueue::Signal on a fence
  8. Wait for the GPU to finish with ID3D12Fence::SetEventOnCompletion and WaitForSingleObjectEx
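
For reference, here is my rough reading of steps 1-4 in code (a sketch only; device and swapChain are assumed to be an already-created ID3D12Device and IDXGISwapChain3, with error handling omitted):

    // #include <d3d12.h>, <dxgi1_4.h>, <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Steps 1-2: a descriptor heap for render target views.
    D3D12_DESCRIPTOR_HEAP_DESC rtvHeapDesc = {};
    rtvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
    rtvHeapDesc.NumDescriptors = 2; // one per back buffer
    ComPtr<ID3D12DescriptorHeap> rtvHeap;
    device->CreateDescriptorHeap(&rtvHeapDesc, IID_PPV_ARGS(&rtvHeap));

    // Step 3: walk the heap using the RTV increment size.
    UINT rtvSize = device->GetDescriptorHandleIncrementSize(
        D3D12_DESCRIPTOR_HEAP_TYPE_RTV);
    D3D12_CPU_DESCRIPTOR_HANDLE rtv =
        rtvHeap->GetCPUDescriptorHandleForHeapStart();

    // Steps 4 and 4.1: one RTV per swap-chain buffer.
    ComPtr<ID3D12Resource> backBuffers[2];
    for (UINT i = 0; i < 2; ++i)
    {
        swapChain->GetBuffer(i, IID_PPV_ARGS(&backBuffers[i]));
        device->CreateRenderTargetView(backBuffers[i].Get(), nullptr, rtv);
        rtv.ptr += rtvSize;
    }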

If possible, how can step 4.1 be replaced with a buffer of choice? I.e., how can one create an ID3D12Resource*, render to it, and then read from it into, say, a std::vector? (I assume that if this is possible, step 6.1 can be skipped, since there is no swap-chain buffer to present. Perhaps step 4 is unnecessary as well in this case? Maybe only OMSetRenderTargets matters?)

Upvotes: 3

Views: 3708

Answers (1)

Chuck Walbourn

Reputation: 41077

Where exactly the render target can be located depends on the video memory architecture. On some systems, it's in dedicated video memory that only the video card can access. On others, it's in video memory shared across the bus that both the CPU and GPU can access. On unified memory architecture (UMA) systems, everything is in system memory.

Therefore, you have restrictions on where exactly the render target can be located. This is why you have to use D3D12_HEAP_TYPE_DEFAULT and specify D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET when creating the ID3D12Resource you plan to bind as a render target (this is implicitly done by DXGI when you create the swap chain render target as well).
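
As a minimal sketch, creating such an offscreen render target might look like this (device and an RTV descriptor heap rtvHeap are assumed to already exist; the 1280x720 R8G8B8A8 size and format are just examples):

    // (d3d12.h and wrl/client.h assumed included)
    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_DEFAULT; // GPU-accessible memory

    D3D12_RESOURCE_DESC rtDesc = {};
    rtDesc.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    rtDesc.Width = 1280;
    rtDesc.Height = 720;
    rtDesc.DepthOrArraySize = 1;
    rtDesc.MipLevels = 1;
    rtDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    rtDesc.SampleDesc.Count = 1;
    rtDesc.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN; // let the driver tile it
    rtDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_RENDER_TARGET;

    D3D12_CLEAR_VALUE clearValue = {};
    clearValue.Format = rtDesc.Format; // black clear color

    Microsoft::WRL::ComPtr<ID3D12Resource> offscreen;
    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &rtDesc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, &clearValue,
        IID_PPV_ARGS(&offscreen));

    // Step 4 from the question still applies: create an RTV for it and
    // bind it with OMSetRenderTargets as with any swap-chain buffer.
    device->CreateRenderTargetView(offscreen.Get(), nullptr,
        rtvHeap->GetCPUDescriptorHandleForHeapStart());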

Generally speaking, you can't and shouldn't use the low-level DXGI surface creation APIs to create Direct3D resources. They mostly exist for system use rather than for use by applications.

Unless you happen to be on a UMA system, you should minimize CPU access to the render target, as it will otherwise require expensive copies. Even on UMA systems, de-tiling is still required to get the results into a linear form.
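
Continuing the sketch above, the readback the question asks about is exactly such a copy: the render target is copied into a D3D12_HEAP_TYPE_READBACK buffer, and after the fence wait (steps 7-8) that buffer is mapped into CPU memory (cmdList is assumed to be an open graphics command list; note that rows in the readback buffer are padded to footprint.Footprint.RowPitch, which is glossed over here):

    // (also needs <vector> and <cstring>)
    // Query the layout the GPU expects for the copy destination.
    D3D12_PLACED_SUBRESOURCE_FOOTPRINT footprint = {};
    UINT64 totalBytes = 0;
    device->GetCopyableFootprints(&rtDesc, 0, 1, 0,
        &footprint, nullptr, nullptr, &totalBytes);

    // A CPU-readable buffer in the readback heap.
    D3D12_HEAP_PROPERTIES readbackProps = {};
    readbackProps.Type = D3D12_HEAP_TYPE_READBACK;

    D3D12_RESOURCE_DESC bufDesc = {};
    bufDesc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    bufDesc.Width = totalBytes;
    bufDesc.Height = 1;
    bufDesc.DepthOrArraySize = 1;
    bufDesc.MipLevels = 1;
    bufDesc.SampleDesc.Count = 1;
    bufDesc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    Microsoft::WRL::ComPtr<ID3D12Resource> readback;
    device->CreateCommittedResource(
        &readbackProps, D3D12_HEAP_FLAG_NONE, &bufDesc,
        D3D12_RESOURCE_STATE_COPY_DEST, nullptr, IID_PPV_ARGS(&readback));

    // After drawing: transition the target and record the copy.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = offscreen.Get();
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_COPY_SOURCE;
    cmdList->ResourceBarrier(1, &barrier);

    D3D12_TEXTURE_COPY_LOCATION src = {};
    src.pResource = offscreen.Get();
    src.Type = D3D12_TEXTURE_COPY_TYPE_SUBRESOURCE_INDEX;
    src.SubresourceIndex = 0;

    D3D12_TEXTURE_COPY_LOCATION dst = {};
    dst.pResource = readback.Get();
    dst.Type = D3D12_TEXTURE_COPY_TYPE_PLACED_FOOTPRINT;
    dst.PlacedFootprint = footprint;

    cmdList->CopyTextureRegion(&dst, 0, 0, 0, &src, nullptr);

    // ...close and execute the command list, signal and wait on the
    // fence (steps 6-8), then read the pixels on the CPU:
    std::vector<uint8_t> pixels(totalBytes);
    void* mapped = nullptr;
    readback->Map(0, nullptr, &mapped);
    memcpy(pixels.data(), mapped, totalBytes);
    readback->Unmap(0, nullptr);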

Direct3D 12 also offers the "placed resource" methods, which provide more control over where exactly the memory is allocated (or more specifically, where the virtual memory addresses are allocated), but you still have to abide by the underlying architecture's limitations. Depending on the memory architecture, you can "alias" multiple different ID3D12Resource instances all using the same memory (such as a render target being aliased as an unordered-access resource), but you are responsible for inserting the required resource barriers into the command list (and testing it) to make sure it works reliably on all DX12 hardware. See MSDN.
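
A rough sketch of that aliasing pattern, reusing rtDesc and clearValue from above (note that D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES requires resource heap tier 2 hardware; on tier 1 you must use the DENY_* heap flags instead):

    // One explicit heap, large enough for either resource.
    D3D12_HEAP_DESC heapDesc = {};
    heapDesc.SizeInBytes = 64 * 1024 * 1024;
    heapDesc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
    heapDesc.Flags = D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES;

    Microsoft::WRL::ComPtr<ID3D12Heap> heap;
    device->CreateHeap(&heapDesc, IID_PPV_ARGS(&heap));

    // A UAV-capable twin of the render-target description.
    D3D12_RESOURCE_DESC uavDesc = rtDesc;
    uavDesc.Flags = D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS;

    // Both resources are placed at offset 0, so they alias the same
    // memory; only one may be in use at any given time.
    Microsoft::WRL::ComPtr<ID3D12Resource> asRenderTarget, asUav;
    device->CreatePlacedResource(heap.Get(), 0, &rtDesc,
        D3D12_RESOURCE_STATE_RENDER_TARGET, &clearValue,
        IID_PPV_ARGS(&asRenderTarget));
    device->CreatePlacedResource(heap.Get(), 0, &uavDesc,
        D3D12_RESOURCE_STATE_UNORDERED_ACCESS, nullptr,
        IID_PPV_ARGS(&asUav));

    // Before switching which alias the GPU uses, record an aliasing
    // barrier in the command list:
    D3D12_RESOURCE_BARRIER alias = {};
    alias.Type = D3D12_RESOURCE_BARRIER_TYPE_ALIASING;
    alias.Aliasing.pResourceBefore = asRenderTarget.Get();
    alias.Aliasing.pResourceAfter = asUav.Get();
    cmdList->ResourceBarrier(1, &alias);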

You do not have to Present your render target if you don't need the user to see the result.

Memory Management Strategies

UMA Optimizations: CPU Accessible Textures and Standard Swizzle

Getting Started with Direct3D 12

If you are new to DirectX 12, you should see the DirectX Tool Kit for DirectX 12 tutorials. If you aren't already familiar with DirectX 11, you should wait on DirectX 12 and start with DirectX Tool Kit for DirectX 11.

Upvotes: 5
