Reputation: 1237
I implemented a G-buffer normal storage optimization called best fit normals, following this presentation from Crytek: http://www.crytek.com/cryengine/presentations/CryENGINE3-reaching-the-speed-of-light. I implemented it in D3D11, but targeting Shader Model 4.0.
This algorithm requires sampling a best fit normal look-up texture. It is a screen-space operation, and the texture has only one mip level, so I should be able to sample just level 0 using TextureObject.SampleLevel( SamplerObject, texcoord, 0.0f ). But when I sample it with SampleLevel at level 0, the final image shows scattered black and white noise pixels. If I use TextureObject.Sample( SamplerObject, texcoord ) instead, the noise disappears. As far as I know, these two functions should behave identically in this case, so why do the results differ? Is there some magic in how the GPU executes these instructions?
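For reference, the two paths I am comparing look roughly like this in HLSL (the texture and sampler names here are placeholders, not my actual resource names):

    Texture2D    BFNTexture   : register(t0); // best fit normal look-up texture
    SamplerState TexSampler   : register(s0); // sampler used for the look-up

    float3 SampleBothWays(float2 texcoord)
    {
        // Explicit LOD: always fetches mip level 0, no derivatives involved.
        float3 viaSampleLevel = BFNTexture.SampleLevel(TexSampler, texcoord, 0.0f).xyz;

        // Implicit LOD: the GPU selects the mip level from the screen-space
        // derivatives of texcoord; with a single-mip texture this should
        // also resolve to level 0.
        float3 viaSample = BFNTexture.Sample(TexSampler, texcoord).xyz;

        // In my case viaSampleLevel produces the noise pixels, viaSample does not.
        return viaSampleLevel;
    }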
By the way, when I use SampleLevel to sample the texture, the final quality actually looks better than with Sample. Any help would be appreciated. Thank you!
Upvotes: 2
Views: 435
Reputation: 100
Have you tried analyzing the issue with PIX (or the VS2012 graphics debugger) to confirm that your source texture is the same in both cases? It would also help if you could post comparison screenshots of the two methods.
Upvotes: 1