I would really appreciate your thoughts on compute-generated terrain, LOD, etc. and what the 'right way' to do it is.
Here's my current plan:
I'm procedurally generating a large finite world, where at any point most or all of the map is visible. Texturing/colouring is done in the vert/frag shader.
I'm about 50% through implementing the following:

- Near: generate the closest chunks (e.g. a 5x5 grid of heightmaps around the player) on the CPU using a noise function.
- Middle distance: use instances of a compute shader to generate the vertices of each heightmap chunk and pass the buffer to the vert/frag shader (see the sketch after this list).
- Far distance: generate one huge combined chunk with much sparser vertices and pass it to the vert/frag shader.
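For context, this is roughly what I mean by the middle-distance pass on the C# side. It's a minimal sketch, assuming a compute shader asset with a kernel named `GenerateChunk` that writes one float3 position per heightmap sample into a structured buffer; the kernel name, the `_Vertices` buffer name, and the 8x8 thread-group size are my own placeholders, not settled parts of the project:

    using UnityEngine;

    public class MiddleChunk : MonoBehaviour
    {
        public ComputeShader heightmapCompute;   // assumed asset with a "GenerateChunk" kernel
        public Material terrainMaterial;         // vert/frag shader that reads _Vertices
        public int resolution = 65;              // vertices per side of the chunk
        public float chunkSize = 256f;

        ComputeBuffer vertexBuffer;
        int kernel;

        void OnEnable()
        {
            kernel = heightmapCompute.FindKernel("GenerateChunk");

            // One float3 position per heightmap sample.
            vertexBuffer = new ComputeBuffer(resolution * resolution, sizeof(float) * 3);

            heightmapCompute.SetBuffer(kernel, "_Vertices", vertexBuffer);
            heightmapCompute.SetInt("_Resolution", resolution);
            heightmapCompute.SetFloat("_ChunkSize", chunkSize);
            heightmapCompute.SetVector("_ChunkOrigin", transform.position);

            // Assuming [numthreads(8,8,1)] in the kernel.
            int groups = Mathf.CeilToInt(resolution / 8f);
            heightmapCompute.Dispatch(kernel, groups, groups, 1);

            // The vert/frag shader indexes this buffer by SV_VertexID.
            terrainMaterial.SetBuffer("_Vertices", vertexBuffer);
        }

        void OnDisable()
        {
            vertexBuffer?.Release();
        }
    }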
My questions are: Is this the (or a) right way to handle LOD, chunks, and distant terrain?
Should I instead generate everything on the GPU and read the mesh back to the CPU for collision, rather than using the CPU for the near chunks? (A rough sketch of the readback follows below.)
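If everything were generated on the GPU, the collision data wouldn't have to come back with a blocking read; something like AsyncGPUReadback can copy the buffer to the CPU a few frames later and a collision mesh can be built from it. This is a sketch under the assumption that the vertex buffer has the same float3-per-sample layout as above and that a grid index array already exists (recent Unity versions accept a NativeArray in Mesh.SetVertices):

    using UnityEngine;
    using UnityEngine.Rendering;
    using Unity.Collections;

    public class ChunkCollision : MonoBehaviour
    {
        public MeshCollider meshCollider;
        int[] triangles;   // assumed: precomputed grid indices, identical for every chunk

        public void RequestCollisionMesh(ComputeBuffer vertexBuffer)
        {
            // Asynchronously copies the GPU buffer back; the callback fires a few frames later.
            AsyncGPUReadback.Request(vertexBuffer, OnReadback);
        }

        void OnReadback(AsyncGPUReadbackRequest request)
        {
            if (request.hasError)
            {
                Debug.LogWarning("GPU readback failed");
                return;
            }

            NativeArray<Vector3> vertices = request.GetData<Vector3>();

            var mesh = new Mesh();
            mesh.SetVertices(vertices);
            mesh.SetIndices(triangles, MeshTopology.Triangles, 0);
            meshCollider.sharedMesh = mesh;
        }
    }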
What function should I be using to generate and draw the map? I'm currently using DrawProceduralNow in OnRenderObject(). I'm just starting to experiment with MaterialPropertyBlocks and DrawProcedural in Update() (sketched below).
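For reference on the difference: DrawProceduralNow draws immediately with the currently active material pass and does no culling, while DrawProcedural submitted from Update() is culled against the bounds you provide and accepts a per-chunk MaterialPropertyBlock, so all chunks can share one material. A sketch of the per-chunk submission, reusing the assumed `_Vertices` buffer name from above:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class ChunkRenderer : MonoBehaviour
    {
        public Material terrainMaterial;     // shared by all chunks
        public int resolution = 65;
        public float chunkSize = 256f;

        ComputeBuffer vertexBuffer;          // filled by the compute pass elsewhere
        MaterialPropertyBlock props;
        Bounds bounds;

        public void Init(ComputeBuffer buffer)
        {
            vertexBuffer = buffer;
            props = new MaterialPropertyBlock();
            props.SetBuffer("_Vertices", vertexBuffer);

            // Generous vertical bounds so tall terrain isn't culled away.
            bounds = new Bounds(transform.position, new Vector3(chunkSize, 10000f, chunkSize));
        }

        void Update()
        {
            if (vertexBuffer == null) return;

            // Two triangles per quad, three vertices each; the vertex shader
            // reconstructs positions from _Vertices via SV_VertexID.
            int vertexCount = (resolution - 1) * (resolution - 1) * 6;

            Graphics.DrawProcedural(
                terrainMaterial, bounds, MeshTopology.Triangles, vertexCount,
                1, null, props, ShadowCastingMode.On, true, gameObject.layer);
        }
    }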
One idea is to have semi-autonomous chunks that change their settings depending on player location. Another is relative chunks that are defined by the space around the player, so a middle-distance chunk always stays at a middle LOD (vertex spacing), but its heightmap is updated as the player walks towards it (a rough sketch of this follows below).
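To make the "relative chunks" idea concrete, this is roughly how the LOD assignment might look: the ring of chunk cells around the player picks the vertex spacing, and a chunk slot keeps its LOD but re-samples its heightmap when the player crosses into a new cell. The names and thresholds here are hypothetical:

    using UnityEngine;

    public static class ChunkLod
    {
        // Hypothetical thresholds: distance from the player, in chunk cells, to vertex spacing.
        public static float VertexSpacing(Vector2Int chunkCoord, Vector2Int playerChunk)
        {
            int ring = Mathf.Max(Mathf.Abs(chunkCoord.x - playerChunk.x),
                                 Mathf.Abs(chunkCoord.y - playerChunk.y));

            if (ring <= 2) return 1f;    // near: dense, CPU-generated, has collision
            if (ring <= 8) return 4f;    // middle: compute-shader heightmap
            return 16f;                  // far: sparse combined chunk
        }

        // A chunk slot keeps its LOD, but re-samples the heightmap when the player
        // moves to a new cell, so the slot then represents a different world region.
        public static bool NeedsRegenerate(Vector2Int lastPlayerChunk, Vector2Int playerChunk)
        {
            return lastPlayerChunk != playerChunk;
        }
    }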
I'm trying to avoid spending too much time going down the wrong rabbit hole. If I can establish the right concepts first, that will save me a lot of time.
Edit: I'm also considering pre-generating heightmap textures to save on procedural calculation time. But that's a moot point; my questions are about what to do after I already have the vertices.