ingham

Reputation: 1645

Large 3D scene streaming

I'm working on a 3D engine suitable for very large scene display. Apart from the rendering itself (frustum culling, occlusion culling, etc.), I'm wondering what the best solution is for scene management.

Data is given as a huge list of 3D meshes, with no relation between them, so I can't generate portals, I think...

The main goal is to be able to run this engine on systems with low RAM (500MB-1GB), and the scenes loaded into it are very large and can contain millions of triangles, which leads to very intensive memory usage. I'm currently working with a loose octree constructed on loading; it works well on small and medium scenes, but many scenes are just too huge to fit entirely in memory, so here comes my question:

How would you handle loading and unloading chunks of the scene dynamically (and ideally seamlessly), and what would you base the decision on to determine whether a chunk should be loaded/unloaded? If needed, I can create a custom file format, as scenes are exported using a custom exporter on known 3D authoring tools.
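
To make this concrete, the kind of per-chunk decision I have in mind looks roughly like the sketch below (all names are placeholders, nothing here is existing engine code): chunks, e.g. octree leaves, are loaded when they come within range of the camera and unloaded again further out, under a fixed memory budget.

    // Hypothetical sketch: per-frame streaming decision over scene chunks,
    // based on distance to the camera and a fixed memory budget.
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Chunk {
        Vec3   center;             // center of the chunk's bounding volume
        float  radius;             // bounding-sphere radius
        size_t memoryCost;         // bytes needed while resident
        bool   resident = false;
    };

    static float Distance(const Vec3& a, const Vec3& b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Load chunks whose bounding sphere comes within 'loadRange' of the camera,
    // unload those beyond 'unloadRange'; the gap between the two ranges acts as
    // hysteresis and avoids load/unload thrashing at the border.
    void UpdateStreaming(std::vector<Chunk>& chunks, const Vec3& camera,
                         float loadRange, float unloadRange, size_t budgetBytes) {
        size_t residentBytes = 0;
        for (const Chunk& c : chunks)
            if (c.resident) residentBytes += c.memoryCost;

        for (Chunk& c : chunks) {
            float d = Distance(c.center, camera) - c.radius;
            if (c.resident && d > unloadRange) {
                c.resident = false;              // schedule an asynchronous unload here
                residentBytes -= c.memoryCost;
            } else if (!c.resident && d < loadRange &&
                       residentBytes + c.memoryCost <= budgetBytes) {
                c.resident = true;               // schedule an asynchronous load here
                residentBytes += c.memoryCost;
            }
        }
    }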

Important information: Many scenes can't be effectively occluded because of their construction. Example: a very large pipe network, so there isn't much occlusion, but there is a very high number of elements.

Upvotes: 5

Views: 2045

Answers (2)

StarShine

Reputation: 2050

If the vast majority of the RAM is going to be used by textures, there are commercial packages available such as the GraniteSDK that offer seamless LOD-based texture streaming using a virtual texture cache. See http://graphinesoftware.com/granite . Alternatively you can look at http://ir-ltd.net/

In fact you can use the same technique to construct polygons on the fly from texture data in the shader, but it's going to be a bit more complicated.

For voxels there are techniques to construct octrees entirely in GPU memory and page in/out the parts you really need. The rendering can then be done using raycasting. See this post: Use octree to organize 3D volume data in GPU , http://www.icare3d.org/research/GTC2012_Voxelization_public.pdf and http://www.cse.chalmers.se/~kampe/highResolutionSparseVoxelDAGs.pdf

It comes down to how static the scene is going to be and, following from that, how well you can pre-bake the data according to your visualization needs. It would already help if you can determine visibility constraints up front (e.g. google Potential Visibility Sets) and organize the data so that you can stream it on request. Since the visualizer will have limits, you always end up with a strategy to fit a section of the data into GPU memory as quickly and accurately as possible.
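
A minimal sketch of the kind of pre-baked lookup meant here (the structure and names are assumptions, not a specific library): the scene is partitioned into view cells, each cell stores the set of chunks that can potentially be seen from it, and the streamer only requests those chunks.

    // Hypothetical sketch of a precomputed Potentially Visible Set (PVS) lookup:
    // the offline bake stores, per view cell, the IDs of the chunks that can be
    // seen from anywhere inside that cell; at runtime the streamer only requests
    // the chunks listed for the cell the camera is currently in.
    #include <cstdint>
    #include <unordered_map>
    #include <unordered_set>
    #include <vector>

    using ChunkId = std::uint32_t;
    using CellId  = std::uint32_t;

    struct PvsTable {
        // Baked offline by the exporter: view cell -> chunks potentially visible from it.
        std::unordered_map<CellId, std::vector<ChunkId>> visibleFromCell;
    };

    // Returns the set of chunks that should be resident for the given camera cell.
    std::unordered_set<ChunkId> ChunksToStream(const PvsTable& pvs, CellId cameraCell) {
        std::unordered_set<ChunkId> wanted;
        auto it = pvs.visibleFromCell.find(cameraCell);
        if (it != pvs.visibleFromCell.end())
            wanted.insert(it->second.begin(), it->second.end());
        return wanted;
    }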

Upvotes: 0

dv1729

Reputation: 1057

I think that the best solution will be a "solution pack": a combination of different techniques.

  • Level of detail (LOD) can reduce the memory footprint if unused levels are not loaded. Levels can be switched more or less seamlessly by using an alpha mix between the old and the new detail. The simplest controller uses the distance from the mesh to the camera (see the first sketch after this list).
  • Freeing the host memory (RAM) once the object has been uploaded to the GPU (device), and obviously freeing all unused memory (OpenGL resources too). Valgrind can help you with this one (see the second sketch after this list).
  • Use low-quality meshes and tessellation to increase visual quality.
  • Use VBO indexing; this should reduce VRAM usage and increase performance.
  • Don't use meshes if you don't need to: terrain can be rendered using heightmaps, and some things can be procedurally generated.
  • Use bump and/or normal maps. This improves visual quality, so you can reduce the vertex count.
  • Divide those "pipes" into different meshes.
  • Fake 3D meshes with 2D images: impostors, skydomes...
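
A minimal sketch of the distance-driven LOD controller with a cross-fade mentioned in the first bullet (thresholds, names and the C++17 std::clamp call are assumptions, not a specific engine API):

    // Hypothetical sketch: pick an LOD level from the camera distance and compute
    // a 0..1 blend factor, so the previous and the new level can be alpha-mixed
    // during the transition instead of popping.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct LodSelection {
        std::size_t level;   // index into the mesh's LOD array
        float       blend;   // 0..1, how far we are into the fade towards this level
    };

    // 'switchDistances[i]' is the distance at which LOD i+1 takes over from LOD i.
    LodSelection SelectLod(float distanceToCamera,
                           const std::vector<float>& switchDistances,
                           float fadeRange) {
        std::size_t level = 0;
        while (level < switchDistances.size() &&
               distanceToCamera > switchDistances[level])
            ++level;

        float blend = 1.0f;
        if (level > 0) {
            // Fade in over 'fadeRange' world units past the switch distance.
            float past = distanceToCamera - switchDistances[level - 1];
            blend = std::clamp(past / fadeRange, 0.0f, 1.0f);
        }
        return { level, blend };
    }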

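And a sketch for the second bullet: upload the vertex data into a VBO, then release the host-side copy so the geometry only stays resident in device memory (the OpenGL calls are standard; the surrounding structure is an assumption):

    // Hypothetical sketch: upload vertex data to a GPU buffer (VBO), then free
    // the CPU-side copy so the geometry is no longer kept in host RAM.
    #include <GL/glew.h>
    #include <vector>

    struct MeshData {
        std::vector<float> vertices;  // host copy, only needed until the upload
        GLuint vbo = 0;
    };

    void UploadAndReleaseHostCopy(MeshData& mesh) {
        glGenBuffers(1, &mesh.vbo);
        glBindBuffer(GL_ARRAY_BUFFER, mesh.vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     mesh.vertices.size() * sizeof(float),
                     mesh.vertices.data(),
                     GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // Actually release the host allocation; clear() alone keeps the capacity.
        mesh.vertices.clear();
        mesh.vertices.shrink_to_fit();
    }
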
Upvotes: 2
