Dimitry Rakhlei

Reputation: 191

DirectX 11 changing the pixel bytes

Followed this guide here

I am tasked with "using map and unmap methods to draw a line across the screen by setting pixel byte data to rgb red values".

I have the sprite and background displaying but have no idea how to get the data.

I also tried doing this:

//Create device
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_TEXTURE2D_DESC));
desc.Width = 500;
desc.Height = 300;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.Usage = D3D11_USAGE_DYNAMIC;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.MiscFlags = 0;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;


m_d3dDevice->CreateTexture2D(&desc, nullptr, &texture);
m_d3dDevice->CreateShaderResourceView(texture, 0, &textureView);

// Render
D3D11_MAPPED_SUBRESOURCE mapped;
m_d3dContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);

data = (BYTE*)mapped.pData;
rows = (BYTE)sizeof(data);

std::cout << "hi" << std::endl;
m_d3dContext->Unmap(texture, 0);

The problem is that in this case the data array has size 0 but still holds a pointer. Does this mean I am pointing to a texture that doesn't have any data, or am I not getting this?

Edit: so far I have found this:

D3D11_SHADER_RESOURCE_VIEW_DESC desc;
m_background->GetDesc(&desc);
desc.Buffer; // buffer

Upvotes: 0

Views: 1560

Answers (2)

Jeremy Trifilo

Reputation: 470

I felt the need to write an answer for this because, when I searched for how to do this, this question popped up first, and the supplied answer didn't really solve the problem for me and wasn't quite the way I wanted to do it anyway.

In my program I have a method as below.

void ContentLoader::WritePixelsToShaderIndex(uint32_t *data, int width, int height, int index)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.CPUAccessFlags = 0;
    desc.MiscFlags = 0;

    D3D11_SUBRESOURCE_DATA initData;
    initData.pSysMem = data;
    initData.SysMemPitch = width * 4;
    initData.SysMemSlicePitch = width * height * 4;

    Microsoft::WRL::ComPtr<ID3D11Texture2D> tex;

    Engine::device->CreateTexture2D(&desc, &initData, tex.GetAddressOf());

    Engine::device->CreateShaderResourceView(tex.Get(), NULL, ContentLoader::GetTextureAddress(index));
}

Then, using the code below, I tested drawing a blue square with a white line, and it works perfectly fine. The issue I was having was setting the system memory slice and pitch; after looking in the WICTextureLoader class I was able to figure out how the data is stored. So it appears that:

SysMemPitch = the row's size in bytes. SysMemSlicePitch = the total image size in bytes.

const int WIDTH = 200;
const int HEIGHT = 200;
const uint32_t RED = 255 | (0 << 8) | (0 << 16) | (255 << 24);
const uint32_t WHITE = 255 | (255 << 8) | (255 << 16) | (255 << 24);
const uint32_t BLUE = 0 | (0 << 8) | (255 << 16) | (255 << 24);
uint32_t *buffer = new uint32_t[WIDTH * HEIGHT];
bool flip = false;
for (int X = 0; X < WIDTH; ++X)
{
    for (int Y = 0; Y < HEIGHT; ++Y)
    {
        int pixel = X + Y * WIDTH;
        buffer[pixel] = flip ? BLUE : WHITE;
    }
    flip = true;
}

WritePixelsToShaderIndex(buffer, WIDTH, HEIGHT, 3);
delete [] buffer;

Upvotes: 1

Chuck Walbourn

Reputation: 41057

First of all, most of those functions return HRESULT values that you are ignoring. That's not safe as you will miss important errors that invalidate the remaining code. You can use if(FAILED(...)) if you want, or you can use ThrowIfFailed, but you can't just ignore the return value in a functioning app.

HRESULT hr = m_d3dDevice->CreateTexture2D(&desc, nullptr, &texture);
if (FAILED(hr))
{
    // error!
}

hr = m_d3dDevice->CreateShaderResourceView(texture, 0, &textureView);
if (FAILED(hr))
{
    // error!
}

// Render
D3D11_MAPPED_SUBRESOURCE mapped;
hr = m_d3dContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
if (FAILED(hr))
{
    // error!
}

Second, you should enable the Debug Device and look for diagnostic output which will likely point you to the reason for the failure.
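As a rough sketch of what enabling the debug device looks like (not part of the original answer; assumes your device creation goes through D3D11CreateDevice), you pass D3D11_CREATE_DEVICE_DEBUG in the creation flags, which requires the SDK layers to be installed:

```cpp
#include <d3d11.h>
#include <wrl/client.h>

// Create the device with the debug layer enabled in debug builds, so the
// runtime prints diagnostic messages explaining why a call failed.
UINT flags = 0;
#ifdef _DEBUG
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

Microsoft::WRL::ComPtr<ID3D11Device> device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> context;
HRESULT hr = D3D11CreateDevice(
    nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
    nullptr, 0, D3D11_SDK_VERSION,
    device.GetAddressOf(), nullptr, context.GetAddressOf());
```

With the debug layer active, the failure reason shows up in the debugger's output window.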

sizeof(data) is always going to be 4 or 8 since data is a BYTE* i.e. the size of a pointer. It has nothing to do with the size of your data array. The locked buffer pointed to by mapped.pData is going to be mapped.RowPitch * desc.Height bytes in size.

You have to copy your pixel data into it row by row. Depending on the format and other factors, mapped.RowPitch is not necessarily going to be 4 * desc.Width (4 bytes per pixel because you are using DXGI_FORMAT_B8G8R8A8_UNORM). It will be at least that big, but it can be bigger because each row may be padded for alignment.

This is pseudo-code and not necessarily an efficient way to do it, but:

BYTE* data = reinterpret_cast<BYTE*>(mapped.pData);
for (UINT y = 0; y < desc.Height; ++y)
{
    for (UINT x = 0; x < desc.Width; ++x)
    {
        // Find the memory location of the pixel at (x,y)
        size_t pixel = y * mapped.RowPitch + x * 4;
        BYTE* blue  = &data[pixel];
        BYTE* green = &data[pixel + 1];
        BYTE* red   = &data[pixel + 2];
        BYTE* alpha = &data[pixel + 3];

        *blue  = /* value between 0 and 255 */;
        *green = /* value between 0 and 255 */;
        *red   = /* value between 0 and 255 */;
        *alpha = /* value between 0 and 255 */;
    }
}

You should take a look at DirectXTex which does a lot of this kind of row-by-row processing.

Upvotes: 0
