Reputation: 3124
I have an MFC application whose main draw loop has to draw a huge set of points. Right now, this is done as follows:
void CmodguiView::OnDraw(CDC* /*pDC*/) {
    wglMakeCurrent(m_hDC, m_hRC);
    // Clear color and depth buffer bits
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    const std::vector<DensePoint> *pts;
    pts = getPts();
    if (pts) {
        glBegin(GL_POINTS);
        for (auto &&pt : *pts) {
            glColor3ub(pt.r, pt.g, pt.b);
            glVertex3f(pt.x, pt.y, pt.z);
        }
        glEnd();
        SwapBuffers(m_hDC);
    }
}
How do I optimize this? Can I avoid the for loop?
Would it be possible, for instance, to rotate the points directly?
Upvotes: 0
Views: 690
Reputation: 162164
The bottleneck is the countless calls of glVertex. This is called immediate mode, denoted by a glBegin…glEnd block, and it has been deprecated for well over 10 years. Immediate mode also became obsolete with the introduction of vertex arrays over 15 years ago. So don't use it.
Instead you should use vertex arrays. You may combine them with buffer objects to further improve performance.
Anyway, assuming DensePoint is defined as
struct DensePoint {
    GLubyte r, g, b;
    GLfloat x, y, z;
};
You can replace the glBegin…glEnd block with
std::vector<DensePoint> const * const pts = getPts();
if (pts) {
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);

    // Point the fixed-function pipeline at the interleaved data;
    // the stride is the size of one DensePoint record.
    glColorPointer (3, GL_UNSIGNED_BYTE, sizeof(DensePoint), &((*pts)[0].r));
    glVertexPointer(3, GL_FLOAT,         sizeof(DensePoint), &((*pts)[0].x));

    // A single draw call submits all points at once.
    glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(pts->size()));
}
Note that this still uses the old OpenGL-1.1 fixed-function pipeline and the point data stays in CPU/system memory, so it will not get as much throughput as modern VBO-based drawing. Also, with modern OpenGL you get freely defined vertex attributes through shaders. But it's easy enough to go there starting from the code above.
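For reference, a rough sketch of the buffer-object variant could look like the following. The names and setup here are assumptions, not taken from your code: a GLuint member called m_pointVBO, an OpenGL 1.5+ context, and (on Windows) an extension loader such as GLEW or manual wglGetProcAddress lookups, since opengl32.dll only exports OpenGL 1.1 directly.

// One-time setup (e.g. right after creating the GL context):
// upload the points into GPU-side buffer storage.
glGenBuffers(1, &m_pointVBO);
glBindBuffer(GL_ARRAY_BUFFER, m_pointVBO);
glBufferData(GL_ARRAY_BUFFER,
             pts->size() * sizeof(DensePoint),
             pts->data(),
             GL_STATIC_DRAW);

// Per-frame drawing: with a buffer bound, the "pointer" arguments are
// byte offsets into the buffer instead of client memory addresses.
// offsetof requires <cstddef>.
glBindBuffer(GL_ARRAY_BUFFER, m_pointVBO);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer (3, GL_UNSIGNED_BYTE, sizeof(DensePoint),
                reinterpret_cast<const void*>(offsetof(DensePoint, r)));
glVertexPointer(3, GL_FLOAT, sizeof(DensePoint),
                reinterpret_cast<const void*>(offsetof(DensePoint, x)));
glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(pts->size()));
glBindBuffer(GL_ARRAY_BUFFER, 0);

Either way, the point data only has to cross the bus once at upload time instead of being resubmitted every frame.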
Upvotes: 3