Nikolai

Reputation: 1173

Binding size '...' is larger than the maximum binding size (134217728)

I want to handle a large amount of data on the GPU. I wrote a simple test shader that just negates the array elements. Here it is:

@group(0) @binding(0) var<storage, read> sourceArray : array<f32>;
@group(0) @binding(1) var<storage, read_write> resultArray : array<f32>;

@compute @workgroup_size(256, 1)
fn main(@builtin(global_invocation_id) global_id : vec3<u32>) {
    // The dispatch is rounded up to a whole number of workgroups,
    // so skip invocations past the end of the array.
    if (global_id.x >= arrayLength(&resultArray)) {
        return;
    }
    resultArray[global_id.x] = -sourceArray[global_id.x];
}

And I want to handle 100,000,000 elements. Here is the JS code I wrote to do it:

const ELEMENTS_COUNT = 100000000

// Source array
const sourceArray = new Float32Array(ELEMENTS_COUNT);
for (let i = 0; i < sourceArray.length; i++) {
    sourceArray[i] = i;
}

const gpuSourceArrayBuffer = device.createBuffer({
    mappedAtCreation: true,
    size: sourceArray.byteLength,
    usage: GPUBufferUsage.STORAGE
});
const sourceArrayBuffer = gpuSourceArrayBuffer.getMappedRange();

new Float32Array(sourceArrayBuffer).set(sourceArray);
gpuSourceArrayBuffer.unmap();


// Result array
const resultArrayBufferSize = Float32Array.BYTES_PER_ELEMENT * (sourceArray.length);
const gpuResultArrayBuffer = device.createBuffer({
    size: resultArrayBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC
});


// Compute shader code
const shaderModule = device.createShaderModule({
    code: shaderText
});

// Pipeline setup
const computePipeline = device.createComputePipeline({
    layout: "auto",
    compute: {
        module: shaderModule,
        entryPoint: "main"
    }
});

// Bind group
const bindGroup = device.createBindGroup({
    layout: computePipeline.getBindGroupLayout(0),
    entries: [
        {
            binding: 0,
            resource: {
                buffer: gpuSourceArrayBuffer
            }
        },
        {
            binding: 1,
            resource: {
                buffer: gpuResultArrayBuffer
            }
        }
    ]
});


// Commands submission
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginComputePass();
passEncoder.setPipeline(computePipeline);
passEncoder.setBindGroup(0, bindGroup);
passEncoder.dispatchWorkgroups(Math.ceil(ELEMENTS_COUNT / 256.0));
passEncoder.end();

// Get a GPU buffer for reading in an unmapped state.
const gpuReadBuffer = device.createBuffer({
    size: resultArrayBufferSize,
    usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ
});

// Encode commands for copying buffer to buffer.
commandEncoder.copyBufferToBuffer(
    gpuResultArrayBuffer /* source buffer */,
    0 /* source offset */,
    gpuReadBuffer /* destination buffer */,
    0 /* destination offset */,
    resultArrayBufferSize /* size */
);

// Submit GPU commands.
const gpuCommands = commandEncoder.finish();
device.queue.submit([gpuCommands]);

// Read buffer.
await gpuReadBuffer.mapAsync(GPUMapMode.READ);
const arrayBuffer = gpuReadBuffer.getMappedRange();
console.log(new Float32Array(arrayBuffer));

And I receive the error Binding size (400000000) is larger than the maximum binding size (134217728). How can I fix this error? Maybe there is a way to create some continuous data feed (stream) to the GPU, so that I don't have to provide all the data as a single piece?

Upvotes: 1

Views: 841

Answers (1)

Jinlei Li

Reputation: 360

This error occurs because the size of the buffer binding exceeds the device's maxStorageBufferBindingSize limit, whose default is 134217728 bytes (128 MiB). A related limit, maxBufferSize, caps the total size of any buffer you can create.

Use device.limits.maxStorageBufferBindingSize and device.limits.maxBufferSize to query the maximum sizes the current device can support.
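Note that a device comes up with the default limits unless you ask for more when creating it. A minimal sketch (to run in a WebGPU-capable browser; the helper name is mine) of requesting the relevant limits raised to whatever the adapter supports:

```javascript
// Request a device whose buffer-related limits are raised to the
// adapter's maximums. Without requiredLimits, the spec defaults apply
// (e.g. 128 MiB for maxStorageBufferBindingSize) even if the hardware
// could do more.
async function requestDeviceWithLargeBuffers(adapter) {
    return adapter.requestDevice({
        requiredLimits: {
            maxStorageBufferBindingSize: adapter.limits.maxStorageBufferBindingSize,
            maxBufferSize: adapter.limits.maxBufferSize,
        },
    });
}

// Usage in a browser:
// const adapter = await navigator.gpu.requestAdapter();
// const device = await requestDeviceWithLargeBuffers(adapter);
```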

maxBufferSize is usually related to the amount of memory on the device. Here are some values I measured with wgpu:

Device                                    maxBufferSize
M1 Max MacBook Pro (64 GB memory)         36864 MB = 36 GB
Intel MacBook Pro (2018, 32 GB memory)    3072 MB = 3 GB
iPhone 6 Plus                             256 MB
iPad Mini 4                               497 MB
iPhone 8 Plus                             747 MB
iPad Pro 2018                             947 MB
iPhone 12 Pro                             1433 MB
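As for the question's "stream" idea: if the data is larger than a single binding allows, you can keep it in one big buffer and bind it one sub-range at a time, since bind group entries accept an offset and size. A hypothetical helper sketching the chunk arithmetic (offsets must be multiples of the device's minStorageBufferOffsetAlignment, 256 bytes by default):

```javascript
// Split totalBytes into chunks no larger than maxChunkBytes, each
// starting at an offset aligned to `alignment`. Returns { offset, size }
// pairs suitable for bind group entries like:
//   { binding: 0, resource: { buffer, offset, size } }
function chunkRanges(totalBytes, maxChunkBytes, alignment = 256) {
    // Round the chunk size down to a multiple of the alignment so
    // every subsequent offset stays aligned.
    const step = Math.floor(maxChunkBytes / alignment) * alignment;
    const ranges = [];
    for (let offset = 0; offset < totalBytes; offset += step) {
        ranges.push({ offset, size: Math.min(step, totalBytes - offset) });
    }
    return ranges;
}
```

Each chunk then gets its own bind group and its own dispatchWorkgroups call sized for that chunk's element count; the chunks can all be encoded into one command buffer.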

Upvotes: 2
