smilingbuddha

Reputation: 14670

Visualization on GPU in a CUDA environment

I am interested in implementing some particle techniques for fluid simulation on GPUs using CUDA.

From my experience, the usual way to do visualizations in computational physics on the CPU is to make the program/application output data files (like .vtk files) at every needed time-step and visualize them using a data visualization tool like ParaView or VisIt.

Is there a way to do the visualization directly on the GPU? Or do I have to keep copying the data back to the CPU at every time step and writing it out as files, so that I can run ParaView/VisIt on them after the program exits? Wouldn't that incur a performance penalty?

If some popular ways of doing GPU based visualization for high-performance computing can be listed here that would be really helpful.

Note: I will be using the graphics cards GTX 570 and Tesla C2050 for most of my work. I am using CUDA 4.0.

Upvotes: 2

Views: 4038

Answers (2)

Michael Haidl

Reputation: 5482

CUDA <-> OpenGL interop is quite simple.

You create an OpenGL vertex buffer like this:

GLuint _vbo;
cudaGraphicsResource *_vb_resource;

// create the buffer object and allocate its storage (no initial data needed)
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW);

register it with CUDA:

// FLAGS is e.g. cudaGraphicsMapFlagsWriteDiscard if CUDA only writes the buffer
cudaGraphicsGLRegisterBuffer(&_vb_resource, _vbo, FLAGS);

map it before you call your visualisation kernel:

vertex_t *ptr = NULL;
size_t size;
cudaGraphicsMapResources(1, &_vb_resource, 0);
cudaGraphicsResourceGetMappedPointer((void **)&ptr, &size, _vb_resource);

call your kernel with the vertex buffer as argument:

visData<<<BLOCKS, THREADS>>>(data, ptr);

unmap your vertex buffer

cudaGraphicsUnmapResources(1, &_vb_resource, 0);

and you're done. OpenGL can now render your data.
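
For reference, the kernel itself can be very simple. A minimal sketch, assuming the simulation stores positions as float4, vertex_t is a plain xyzw struct, and the launch covers exactly one thread per particle (all of these are assumptions, not part of the snippets above):

// Hypothetical visualisation kernel: each thread copies one particle
// position from the simulation buffer into the mapped vertex buffer.
struct vertex_t { float x, y, z, w; };

__global__ void visData(const float4 *data, vertex_t *vertices)
{
    unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    vertices[i].x = data[i].x;
    vertices[i].y = data[i].y;
    vertices[i].z = data[i].z;
    vertices[i].w = 1.0f; // homogeneous coordinate, so the points can be drawn directly
}

If the grid does not exactly match the particle count, pass the count as a third argument and add a bounds check.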

CUDA can also interop with DirectX 10 and 11 (even 9, but that is deprecated and depressing).

Have a look at the samples posted by Martin Beckett. It is not that tricky.

EDIT: A little hint: disabling vsync may give better computation performance. Vsync is enabled by default. If you have only one thread for both rendering and calculations, vsync will cap your calculation rate at the refresh rate of the display (typically 60 Hz).

To disable vsync on Windows and Linux use:

    // turn off vsync
#ifndef _WIN32
    // Linux/X11: try the MESA swap-interval extension first, then the SGI one
    if (glXGetProcAddress((const GLubyte*) "glXSwapIntervalMESA") != NULL)
        ((void (*)(int))glXGetProcAddress((const GLubyte*) "glXSwapIntervalMESA"))(0);
    else if (glXGetProcAddress((const GLubyte*) "glXSwapIntervalSGI") != NULL)
        ((void (*)(int))glXGetProcAddress((const GLubyte*) "glXSwapIntervalSGI"))(0);
#else
    // Windows: wglSwapIntervalEXT comes from WGL_EXT_swap_control (e.g. via GLEW)
    wglSwapIntervalEXT(0);
#endif

Upvotes: 6

Martin Beckett

Reputation: 96147

CUDA, not surprisingly, plays very nicely with OpenGL.

So you can have CUDA write your results to a VBO and render the scene with OpenGL, or, if the result is map-like, write it directly to a texture.
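
As a rough sketch of the VBO route (assuming the buffer holds tightly packed float4 positions and old-style fixed-function rendering; _vbo follows the other answer and num_particles is a placeholder), the draw side needs no copy back to the host:

// the VBO is already filled by CUDA and unmapped; just bind it and draw
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, 0);          // 4 floats per vertex, tightly packed
glDrawArrays(GL_POINTS, 0, num_particles);   // num_particles = however many vertices CUDA wrote
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);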

Upvotes: 3
