Brandon Slaght

Reputation: 1057

Mapping extremely high resolution to sphere in SceneKit

In my iOS app, I have a set of planet maps as images, some as large as 14K resolution. I can apply downsized versions to spheres to create models of planets in SceneKit. However, I want users to be able to zoom in and see the detail in the full-resolution images, and I want to do this without my app running out of memory. Is there a way to automatically tile textures onto a sphere, the way Google Maps does, loading only the tiles and resolution levels that are currently needed?

Upvotes: 5

Views: 730

Answers (2)

Jürgen

Reputation: 11

What you need to do is split the texture into smaller pieces and provide different sizes depending on the level of detail. When the camera is zoomed in far enough, you can use the highest-resolution tiles, but you will also need to restrict how many of them are shown: zoomed in, only a small piece of the planet's surface is on screen, while zoomed out, the entire front hemisphere is visible.

So split your texture into small tiles, and also generate lower-resolution versions for the other zoom levels. You will also need to create custom geometry and assign the small high-resolution tiles to it. Finally, you need to decide which textured geometry to show for a given camera view, based on distance or view angle, and use the view frustum to determine which parts are visible in the current scene. You will also need surface normals to test whether the front of a patch is pointing toward the camera.

I'm currently facing the same issue. I have already created all the sub-meshes and all the smaller textures as SCNNodes (don't load the textures at this point — they must be loaded on demand only!), but I don't yet have a working solution for testing which sub-nodes are visible. The isNode(_:insideFrustumOf:) method of SCNSceneRenderer doesn't help here, because it only does a bounding-box test, and the bounding boxes are too large: most of them will always be partly inside the view frustum. So I'm currently trying to implement my own tests.

Unfortunately, since I still have no working solution, I cannot post finished code here; I can only describe my "coding plan", which should work (at least, I implemented something similar in OpenGL years ago). Maybe the basic idea behind the solution is already helpful for you? Otherwise, perhaps we can find out the rest together... :-)
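As a rough sketch of what that plan could look like in Swift (not a working implementation — all names here, SphereTile, tileImageURL, updateTexture, are made up, and the planet node is assumed to sit at the world origin with no rotation): one lat/long patch of the sphere is built as custom SCNGeometry, its texture coordinates span that tile's own image rather than the full map, and the high-resolution texture is loaded only while the patch faces the camera and passes the frustum test.

    import SceneKit
    import UIKit
    import simd

    // One sphere "tile": a lat/long patch with an on-demand high-res texture.
    final class SphereTile: SCNNode {
        private let tileImageURL: URL             // pre-cut high-resolution tile on disk
        private let centerDirection: simd_float3  // outward normal at the tile's center
        private var textureLoaded = false

        init(radius: Float,
             latRange: ClosedRange<Float>,  // radians, -.pi/2 ... .pi/2
             lonRange: ClosedRange<Float>,  // radians, 0 ... 2 * .pi
             divisions: Int,                // grid resolution of the patch
             tileImageURL: URL) {
            self.tileImageURL = tileImageURL
            let midLat = (latRange.lowerBound + latRange.upperBound) / 2
            let midLon = (lonRange.lowerBound + lonRange.upperBound) / 2
            self.centerDirection = simd_float3(cos(midLat) * sin(midLon),
                                               sin(midLat),
                                               cos(midLat) * cos(midLon))
            super.init()

            var vertices: [SCNVector3] = []
            var normals: [SCNVector3] = []
            var uvs: [CGPoint] = []
            var indices: [Int32] = []

            for row in 0...divisions {
                let v = Float(row) / Float(divisions)
                let lat = latRange.lowerBound + v * (latRange.upperBound - latRange.lowerBound)
                for col in 0...divisions {
                    let u = Float(col) / Float(divisions)
                    let lon = lonRange.lowerBound + u * (lonRange.upperBound - lonRange.lowerBound)
                    let dir = simd_float3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon))
                    vertices.append(SCNVector3(dir.x * radius, dir.y * radius, dir.z * radius))
                    normals.append(SCNVector3(dir.x, dir.y, dir.z))
                    // UVs run 0...1 across this tile's own image, not the full planet map.
                    uvs.append(CGPoint(x: CGFloat(u), y: CGFloat(1 - v)))
                }
            }
            let stride = Int32(divisions + 1)
            for row in 0..<Int32(divisions) {
                for col in 0..<Int32(divisions) {
                    let a = row * stride + col
                    indices += [a, a + stride, a + 1, a + 1, a + stride, a + stride + 1]
                }
            }

            let geometry = SCNGeometry(
                sources: [SCNGeometrySource(vertices: vertices),
                          SCNGeometrySource(normals: normals),
                          SCNGeometrySource(textureCoordinates: uvs)],
                elements: [SCNGeometryElement(indices: indices, primitiveType: .triangles)])
            let material = SCNMaterial()
            material.diffuse.contents = UIColor.gray // cheap placeholder until the tile is needed
            material.isDoubleSided = true            // sidestep winding-order concerns in this sketch
            geometry.materials = [material]
            self.geometry = geometry
        }

        required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

        // Call from the render loop. A coarse visibility test: texture the tile when
        // its outward normal points toward the camera AND it passes the (admittedly
        // generous) bounding-box frustum test.
        func updateTexture(pointOfView: SCNNode, renderer: SCNSceneRenderer) {
            let toCamera = simd_normalize(pointOfView.simdWorldPosition - simdWorldPosition)
            let facing = simd_dot(centerDirection, toCamera) > 0
            let shouldLoad = facing && renderer.isNode(self, insideFrustumOf: pointOfView)
            if shouldLoad && !textureLoaded {
                geometry?.firstMaterial?.diffuse.contents = tileImageURL // SceneKit loads the image from the URL
                textureLoaded = true
            } else if !shouldLoad && textureLoaded {
                geometry?.firstMaterial?.diffuse.contents = UIColor.gray // let the big texture be released
                textureLoaded = false
            }
        }
    }

A planet node would then create one SphereTile per cell at each zoom level and call updateTexture(pointOfView:renderer:) from the renderer(_:updateAtTime:) delegate method. As noted above, the bounding-box frustum test is coarse, so the facing test does most of the culling in this sketch.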

Upvotes: 1

Hal Mueller

Reputation: 7646

Two optimization techniques that you can apply using very little programming effort are mipmaps and levels of detail.

If you set the mipFilter property of the SCNMaterialProperty holding your planet map (for example, the material's diffuse), SceneKit will generate and use a mipmap automatically.
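For example (the asset name is hypothetical):

    import SceneKit
    import UIKit

    let sphere = SCNSphere(radius: 1.0)
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "mars_14k") // hypothetical texture asset
    material.diffuse.mipFilter = .linear // turns on mipmapping, with smooth blending between levels
    sphere.materials = [material]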

If you supply SCNLevelOfDetail instances for your planet's SCNGeometry, SceneKit will swap in the reduced-polygon versions you provide whenever the planet covers little of the screen, saving memory and rendering time.
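A minimal sketch (the segment counts and thresholds are illustrative, not tuned values):

    import SceneKit

    let planet = SCNSphere(radius: 1.0)
    planet.segmentCount = 96 // full-detail mesh

    let medium = SCNSphere(radius: 1.0)
    medium.segmentCount = 48
    let low = SCNSphere(radius: 1.0)
    low.segmentCount = 24

    // SceneKit switches to a coarser geometry once the planet's projected
    // radius on screen drops below the given threshold.
    planet.levelsOfDetail = [
        SCNLevelOfDetail(geometry: medium, screenSpaceRadius: 300),
        SCNLevelOfDetail(geometry: low, screenSpaceRadius: 100)
    ]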

Both techniques are covered in the 2013 WWDC SceneKit talk, and SCNLevelOfDetail comes up again in 2014. The 2014 sample code includes examples of both: mipmap generation in AAPLPresentationViewController, and LOD in slide 58.

Upvotes: 1
