Reputation: 53077
I'm trying to use textures in WebGL to perform parallel array folding (reduction) operations. The problem is that I don't know how to keep the result of a computation performed by the shader on the GPU itself, and I don't know how to read it back from JavaScript when the computation is done. See below for an illustration:
precision mediump float;

varying vec2 pix_pos;      // the 2D coord of the texture
uniform sampler2D texture; // this is the input array

vec4 sum_step(sampler2D tex) {
    // this should perform a reduction step
    // for example, if texture = [1,2,3,4,5,6,7,8]
    // then, in the next step, it would be:
    // [3,7,11,15]
    // and so on, until we get the sum in O(log(n))
}

void main(void) {
    if (texture.length > 1)          // if not fully processed, compute a step
        texture = sum_step(texture); // doesn't work, texture is read-only!
}
Upvotes: 2
Views: 1118
Reputation:
The GPU writes pixels. Those pixels ARE THE SAVED DATA.
For example, this article shows how to read 9 pixels of data, multiply each by a constant, divide them by a number, and then write a pixel, with a shader that looks like this:
precision mediump float;

// our texture
uniform sampler2D u_image;
uniform vec2 u_textureSize;
uniform float u_kernel[9];

// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;

void main() {
   vec2 onePixel = vec2(1.0, 1.0) / u_textureSize;
   vec4 colorSum =
     texture2D(u_image, v_texCoord + onePixel * vec2(-1, -1)) * u_kernel[0] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0, -1)) * u_kernel[1] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1, -1)) * u_kernel[2] +
     texture2D(u_image, v_texCoord + onePixel * vec2(-1,  0)) * u_kernel[3] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0,  0)) * u_kernel[4] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1,  0)) * u_kernel[5] +
     texture2D(u_image, v_texCoord + onePixel * vec2(-1,  1)) * u_kernel[6] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 0,  1)) * u_kernel[7] +
     texture2D(u_image, v_texCoord + onePixel * vec2( 1,  1)) * u_kernel[8] ;
   float kernelWeight =
     u_kernel[0] +
     u_kernel[1] +
     u_kernel[2] +
     u_kernel[3] +
     u_kernel[4] +
     u_kernel[5] +
     u_kernel[6] +
     u_kernel[7] +
     u_kernel[8] ;
   if (kernelWeight <= 0.0) {
     kernelWeight = 1.0;
   }

   // Divide the sum by the weight but just use rgb
   // we'll set alpha to 1.0
   gl_FragColor = vec4((colorSum / kernelWeight).rgb, 1.0);
}
To read the data back you call gl.readPixels.
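For example, a minimal readback sketch (assuming gl is your WebGLRenderingContext and the result has already been drawn) might look like this:

// Read back an 8x8 RGBA/UNSIGNED_BYTE result from the
// currently bound framebuffer (or the canvas if none is bound).
var pixels = new Uint8Array(8 * 8 * 4);  // 4 bytes per RGBA pixel
gl.readPixels(0, 0, 8, 8, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels[0..3] now hold the RGBA bytes of the bottom-left pixel.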
When writing your pixels you can either write them to the canvas (the default), or you can make a texture, attach it to a framebuffer, and then write to the framebuffer's texture (by drawing) and read from it (by calling gl.readPixels).
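A rough sketch of that render-to-texture setup (tex, fb, width and height are placeholder names, not anything from the sample):

// Create a texture to hold the computation's output.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

// Attach it to a framebuffer so drawing renders into the texture.
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

// Draw calls now write into tex; while fb is bound,
// gl.readPixels reads from tex as well.

For a reduction like yours you'd typically make two of these texture+framebuffer pairs and ping-pong between them, halving the size each pass.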
That sample linked above only uses RGBA/UNSIGNED_BYTE textures, which are 8 bits per channel, 4 channels. You can also use RGBA/FLOAT textures if the user's hardware supports it by enabling the OES_texture_float extension.
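Enabling it is a single getExtension call; a sketch:

var ext = gl.getExtension("OES_texture_float");
if (ext) {
  // FLOAT textures can now be created.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.FLOAT, null);
} else {
  // Fall back to packing values into RGBA/UNSIGNED_BYTE textures.
}

Note that being able to create a FLOAT texture doesn't guarantee you can render to one; after attaching it to a framebuffer you still need to check gl.checkFramebufferStatus.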
The only complication is, in WebGL (1.0), you can't read floats using gl.readPixels. Only bytes are allowed. But, once you have your data in a FLOAT texture you could then draw that texture into an RGBA/UNSIGNED_BYTE texture, splitting the float data into 4 bytes, and then read it back out as bytes (using gl.readPixels) and assemble it back into floats in JavaScript.
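As a sketch of that last step, assuming your packing shader wrote each float's raw IEEE-754 bytes into the RGBA channels of one pixel (the exact unpacking depends on how your shader encoded them):

// One RGBA byte pixel per packed float.
var bytes = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, bytes);
// Reinterpret the same buffer as 32-bit floats in JavaScript.
var floats = new Float32Array(bytes.buffer);  // width * height floats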
PS: Yes, I know that linking to code is bad, but the question itself is answered (you save data by drawing pixels and read data by calling gl.readPixels). The link is only there as an example.
Upvotes: 3