Reputation: 34503
Our app lets users upload custom images to serve as materials for SCNNodes, as you can see from the screenshots and code below.
Screenshot 1 shows SCNNodes when the materials use a scale of 1.
Screenshot 2 shows the same nodes with a scale of 2.
While using a scale of 2 sharpens the texture/materials noticeably, it also repeats the image because of the wrapS and wrapT properties. Using Mirror or Clamp for these properties instead of Repeat did not help.
In UIKit, you improve sharpness across devices by supplying an image of higher resolution and scaling it down; for instance, for a 50x50 button, you supply a 100x100 image. You can see the contrast between UIKit sharpness and SceneKit sharpness by comparing the same images rendered in the UIKit components at the bottom.
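For reference, here is roughly what the UIKit side of that comparison does, as a minimal sketch. buttonImage stands in for one of the uploaded 100x100 images, and wrapping it with scale 2 is just one way to get the @2x treatment:

// A 50x50 point button backed by a 100x100 pixel image.
// Giving the bitmap a scale of 2 tells UIKit to draw it at
// 50x50 points, i.e. @2x sharpness on Retina screens.
let button = UIButton(frame: CGRect(x: 0, y: 0, width: 50, height: 50))
if let cgImage = buttonImage.CGImage {
    let sharpImage = UIImage(CGImage: cgImage, scale: 2.0, orientation: .Up)
    button.setImage(sharpImage, forState: .Normal)
}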
1) How do you apply the same principle to SceneKit?
2) More importantly, how can you achieve the texture/material sharpness of screenshot 2 while avoiding the repeating behavior?
Code:
// Create box geometry
let box = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
box.firstMaterial!.diffuse.contents = style.getContents() // This returns a UIImage
box.firstMaterial!.specular.contents = UIColor.whiteColor()

// Increase resolution for image styles
let scale = Float(2)
if style.type == .Image {
    // Scaling the texture coordinates shrinks each copy of the image,
    // so with .Repeat wrapping it tiles to fill the face
    box.firstMaterial!.diffuse.contentsTransform = SCNMatrix4MakeScale(scale, scale, scale)
    //box.firstMaterial!.locksAmbientWithDiffuse = true
    box.firstMaterial!.diffuse.wrapS = .Repeat
    box.firstMaterial!.diffuse.wrapT = .Repeat
    box.firstMaterial!.diffuse.mipFilter = .Linear
}
Textures:
Upvotes: 2
Views: 1683
Reputation: 6278
You'll need to think in terms of the actual number of pixels your images are going to take up on the screen, at the closest position to the camera and with the greatest degree of perspective distortion.
So, by way of example, a cube close to the camera on the left of the scene might have an edge very near the "lens" of the camera and take up, say, much of the y axis (height) of the screen. If this is a common scenario in your game, then estimating that size in real pixels will give you an idea of how big your texture needs to be for sharpness.
This is one of the reasons LOD (Level of Detail) functionality exists in 3D engines: not every object in a scene needs its biggest, best-quality texture, or the greatest number of polygons to express its shape, all the time.
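SceneKit's version of this is SCNLevelOfDetail, attached to a geometry. A minimal sketch, reusing the box from your code above; lowResImage and the 10-unit threshold are placeholder assumptions:

// Swap in a cheaper box once the original is at least 10 world-space
// units from the camera. lowResImage is a hypothetical smaller
// version of the uploaded image.
let lowResBox = SCNBox(width: 1.0, height: 1.0, length: 1.0, chamferRadius: 0.0)
lowResBox.firstMaterial!.diffuse.contents = lowResImage
box.levelsOfDetail = [SCNLevelOfDetail(geometry: lowResBox, worldSpaceDistance: 10.0)]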
There are also different types of texture-filtering algorithms in most 3D engines, used for smoothing. Turning these off, where SceneKit exposes them, will be a big plus for getting sharp(er) textures, too.
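In SceneKit, those knobs live on SCNMaterialProperty. A minimal sketch of switching the smoothing filters off for the material above; whether the result reads as sharper or just aliased depends on your images:

// Nearest-neighbour sampling instead of linear smoothing.
// Sharper up close, but can look pixelated or shimmery in motion.
box.firstMaterial!.diffuse.minificationFilter = .Nearest
box.firstMaterial!.diffuse.magnificationFilter = .Nearest
box.firstMaterial!.diffuse.mipFilter = .None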
(Apple's SCNLevelOfDetail reference below isn't directly applicable to your request, but it shows how the LOD side of this works:)
https://developer.apple.com/reference/scenekit/scnlevelofdetail
This is more applicable. This "trick" is really old; I remember it from the first 3D cards. It stores pre-scaled, smaller copies of your texture and samples from those as the object shrinks on screen:
https://developer.apple.com/reference/scenekit/scnmaterialproperty/1395398-mipfilter
On second look, I read this as being wonderfully automatic. From the above page:
"SceneKit automatically creates several mipmap levels for the material property’s image contents, each at a fraction of the original image’s size. When rendering, SceneKit automatically samples texels from the mipmap level closest to the size being rendered."
Upvotes: 2