Reputation: 8990
This is a beginner's question, but I am a little confused about how this works in OpenGL. Also, I am aware that in practice, given the figures I'm using, this won't make a real difference in terms of framerate, but I would like to understand how this works.
My problem
Let's say I want to display a starry night sky, i.e. a set of white points on a black background. Assuming the stars do not move, I can think of two options to do that in OpenGL:
Define each star as an OpenGL vertex and call glDrawElements to render them.
Use a full-screen texture, rendered on an OpenGL quad.
Let's say my screen is 1920x1080 pixels and I want to draw 1000 stars. If we quickly compare the workload associated with each option: the first one has to draw 1000 vertices, whereas the second one uses only 4 vertices but must needlessly shade 1920x1080 ≈ 2×10⁶ pixels.
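As a back-of-the-envelope check (a rough sketch only; real costs depend on fill rate, shader complexity, and the GPU, and a vertex is not the same cost as a fragment), the raw work-unit counts compare like this:

```python
# Rough workload comparison for the two options (unit counts only;
# per-unit costs for vertices and fragments differ in practice).

width, height = 1920, 1080
num_stars = 1000

# Option 1: one vertex per star.
vertices_option1 = num_stars

# Option 2: a full-screen quad; every covered pixel is shaded.
vertices_option2 = 4
fragments_option2 = width * height

print(vertices_option1)   # 1000
print(fragments_option2)  # 2073600
print(fragments_option2 // vertices_option1)  # 2073: ~2000x more work units
```

So even counting the quad's 4 vertices as free, the full-screen option touches roughly three orders of magnitude more work units than the point-based one.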
My questions
Should I conclude that the first option is the more efficient one? If not, why?
I'm particularly interested in OpenGL ES 2.0: is the answer the same for desktop OpenGL and OpenGL ES 2.0?
Upvotes: 1
Views: 518
Reputation: 553
It totally depends on the resolution. You're right that you'd limit the number of vertices, but you have to consider the rest of the graphics pipeline. Even though the texture is only black and white, OpenGL has to work with each texel of the texture, and it gets even more expensive if you don't use mipmapping (auto-generated lower-resolution variants of the texture, sampled at a distance). Let's say you use a 640x480 texture for the stars, applied to the quad in the sky. Then OpenGL has to process 4 vertices and 307200 texels for your sky, each texel having four components (r, g, b, a).
Indeed, you'd only have to compute 4 vertices, but instead a huge amount of texels. So if you really have this black sky with ~1000 stars, it should be more efficient to draw a vertex array with glDrawElements. And yes, the answer is the same for OpenGL and GLES.
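A minimal sketch of the data you'd upload for the glDrawElements approach (a hypothetical buffer layout for illustration; in real GLES 2.0 code these arrays would be uploaded with glBufferData and drawn as GL_POINTS):

```python
import random

random.seed(42)  # reproducible star field

NUM_STARS = 1000

# One (x, y) position per star in normalized device coordinates [-1, 1].
positions = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
             for _ in range(NUM_STARS)]

# Index buffer for glDrawElements: one index per point, 0..999.
# GLES 2.0 only accepts GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT indices;
# 1000 stars fit comfortably in unsigned shorts.
indices = list(range(NUM_STARS))

# Flatten positions into the tightly packed float array for glBufferData.
vertex_data = [coord for pos in positions for coord in pos]

assert len(vertex_data) == 2 * NUM_STARS
assert max(indices) < 65536  # fits GL_UNSIGNED_SHORT
```

With this layout the draw is a single glDrawElements(GL_POINTS, 1000, GL_UNSIGNED_SHORT, ...) call, so the GPU processes 1000 vertices and rasterizes only the few pixels the points actually cover, instead of shading the whole screen.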
Upvotes: 2