Saturday, August 9, 2008

Making some use out of hardware multisampling

OK, everybody can do "proper" deferred rendering + MSAA on DX10 hardware or the current/next-gen consoles:
  1. render multi-sampled G-buffers
  2. perform the lighting passes into an ordinary (non-MSAA) buffer, calculating the lighting for several sub-samples and averaging them (sketched below)
  3. continue as usual
That's pretty easy, but it has one obvious downside: although we save a few milliseconds by employing MSAA in stage #1 (instead of super-sampling), we are still super-sampling the lighting function...
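Here's a rough sketch of what stage #2 could look like, assuming SM4.x Load()-style fetches of a 4x G-buffer and a placeholder EvaluateLighting() that stands in for the real BRDF/shadow work (illustrative only, not actual engine code):

    Texture2DMS<float4, 4> gAlbedo : register(t0);
    Texture2DMS<float4, 4> gNormal : register(t1);

    // Placeholder: the real function does the full BRDF + shadowing.
    float3 EvaluateLighting(float4 albedo, float4 normal)
    {
        return albedo.rgb * saturate(normal.y);
    }

    float4 PSLightMSAA(float4 pos : SV_Position) : SV_Target
    {
        int2 coord = int2(pos.xy);
        float3 sum = 0;
        // Run the full lighting function once per stored sub-sample and
        // average - effectively super-sampling the lighting, even where
        // all four sub-samples hold identical data.
        [unroll]
        for (int s = 0; s < 4; ++s)
            sum += EvaluateLighting(gAlbedo.Load(coord, s),
                                    gNormal.Load(coord, s));
        return float4(sum * 0.25, 1);
    }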

From a quality perspective... that seems to be really good, because the lighting itself is a major source of aliasing. In reality, because of the nature of HW MSAA, we are just wasting performance here: we often run identical calculations on fully identical sub-samples. DX11 will help with that, but until then... you can only reduce the (unneeded) performance impact by sharing (or distributing) some computations between sub-samples. For example, a simple trick: when you are doing some kind of PCF, instead of sampling the shadow map 16 times for each sub-sample, you can sample only 4 times per sub-sample, with each sub-sample using its own subset of the original 16 taps. With careful selection of the sampling kernel, it looks almost identical to fully super-sampled lighting.
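A sketch of that shadow-map trick, assuming 4x sub-samples and a simple 4x4 kernel (the tap table and the interleaving pattern are illustrative assumptions - pick them carefully, as noted above); sampleIndex is the loop index from the per-sub-sample lighting loop:

    Texture2D<float>       shadowMap : register(t2);
    SamplerComparisonState shadowCmp : register(s2);

    // 16-tap kernel in shadow-map texels; each sub-sample takes one tap
    // out of every row, so the four sub-samples together cover all 16.
    static const float2 kTaps[16] =
    {
        float2(-1.5,-1.5), float2(-0.5,-1.5), float2( 0.5,-1.5), float2( 1.5,-1.5),
        float2(-1.5,-0.5), float2(-0.5,-0.5), float2( 0.5,-0.5), float2( 1.5,-0.5),
        float2(-1.5, 0.5), float2(-0.5, 0.5), float2( 0.5, 0.5), float2( 1.5, 0.5),
        float2(-1.5, 1.5), float2(-0.5, 1.5), float2( 0.5, 1.5), float2( 1.5, 1.5),
    };

    float ShadowPCF4(float3 shadowPos, int sampleIndex, float2 shadowTexelSize)
    {
        float sum = 0;
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            float2 offset = kTaps[i * 4 + sampleIndex] * shadowTexelSize;
            sum += shadowMap.SampleCmpLevelZero(shadowCmp,
                                                shadowPos.xy + offset,
                                                shadowPos.z);
        }
        return sum * 0.25; // 4 taps per sub-sample instead of 16
    }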

How to utilize MSAA on DX9 hardware?

OK, everybody understands that in MSAA mode the hardware still executes the pixel shader only once per final pixel, while it computes the specified number of Z/coverage samples for that "colour" fragment and sends all of that down to the ROPs/render back-end. So, in most cases the final picture consists of exactly the same data as the non-AA picture, except that some pixels are "blended" with their neighbours (actually, that's not quite what happens, but you get the idea).
The trick is to find which fragments have contributed to the current pixel, and by what amount. I found two ways that look "convincing"; here is the pipeline:

  1. render everything (including lighting) as usual, including the final shading/combine pass, but not the post-processing stages.
  2. re-render all the on-screen geometry into an MSAA RT, projecting and sampling the texture from stage #1 using either a) the cheap way: the direction obtained from the difference between the interpolated texture coordinate and the same texture coordinate with the "_centroid" modifier applied, or b) interpolating the fragment depth as well and searching for the closest Z among the nearby samples (variant (a) is sketched right after this list).
  3. resolve your MSAA RT and continue as usual.
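Here's a minimal ps_3_0 sketch of variant (a); the vertex shader writes the same screen-projected coordinate to both interpolators, and the push-along-the-difference fetch is the part you tune (everything here is illustrative, not actual engine code):

    sampler2D sceneTex : register(s0); // the final image from stage #1

    float4 PSCentroidAA(float2 uvCenter   : TEXCOORD0,          // plain interpolation
                        float2 uvCentroid : TEXCOORD1_centroid) // centroid interpolation
                        : COLOR0
    {
        // Inside a triangle centroid == center, so the difference is zero
        // and this degenerates to a plain fetch. On a partially covered
        // edge pixel the difference points into the covered side, so we
        // shift the fetch towards "our" surface; the HW MSAA resolve then
        // blends the per-fragment results by coverage.
        float2 dir = uvCentroid - uvCenter; // already sub-pixel sized in UV units
        return tex2D(sceneTex, uvCenter + dir);
    }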

Now the downsides:

2.a. suffers from a hardware trick/cheat/bug that all the IHVs I know of have implemented. When your pixel/fragment actually covers just two sub-samples, the centroid modifier will correctly give you the location between those two. Obviously, a single covered sub-sample is handled correctly as well. But when your fragment covers three sub-samples out of four, they give us not the actual centroid position but the pixel center instead! Shame on them!

Another downside is polygon intersections: in those places the algorithm behaves exactly like nVidia's CSAA, and you'll get little to no AA there.

2.b. suffers from precision issues: for it to work correctly, the depth values from the nearby samples have to be really precise. FP16 doesn't work. FP32 / D24X8 works, but you still have to slightly bias/scale the difference towards the current/center sample to avoid lots of artifacts from incorrectly selected samples. A little side note: all IHVs provide us with some way to directly read the depth buffer under DX9, although that's "cheat" territory of graphics programming.
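For reference, a sketch of variant (b)'s sample selection with that center bias, assuming one of the DX9 depth-readback paths mentioned above; the bias constant is an illustrative tuning value:

    sampler2D sceneTex : register(s0);
    sampler2D depthTex : register(s1); // "cheat"-territory depth readback
    float2 invScreenSize;

    float4 PSDepthAA(float2 uv : TEXCOORD0, float depth : TEXCOORD1) : COLOR0
    {
        // Center + 4 neighbours: pick the screen sample whose stored depth
        // is closest to this fragment's interpolated depth.
        const float2 offsets[4] =
        {
            float2(-1, 0), float2(1, 0), float2(0, -1), float2(0, 1)
        };
        const float centerBias = 0.95; // favour the center sample slightly

        float2 bestUV   = uv;
        float  bestDiff = abs(tex2D(depthTex, uv).r - depth) * centerBias;
        [unroll]
        for (int i = 0; i < 4; ++i)
        {
            float2 uvN  = uv + offsets[i] * invScreenSize;
            float  diff = abs(tex2D(depthTex, uvN).r - depth);
            if (diff < bestDiff) { bestDiff = diff; bestUV = uvN; }
        }
        return tex2D(sceneTex, bestUV); // 5 depth samples + 1 colour sample
    }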

And another downside of 2.b.: performance. For 4x AA you'll have to take 5 depth samples plus 1 final colour sample for each fragment, plus some math - and that is not free either.

Conclusion:
I don't really know why everybody wants to use hardware MSAA for anti-aliasing, aside from performance. The only good-looking anti-aliasing filter is super-sampling with jittered/rotated sub-samples.

But on DX9 HW (or the X360), method 2.a. is relatively cheap and looks good. 2.b. looks better, but is too slow on some hardware (the X360, for example). On the PS3 it's probably faster to use the DX10-style 2x AA, because the cost of re-rendering all the visible geometry would be too high for it.

Anyway, hardware MSAA is just one way of approximating an anti-aliased image out of thousands of others.
Stay tuned :)

Friday, January 18, 2008

Deferred anti-aliasing (AA)

OK, I'm going to post a series of really small articles/comments on the difficulties usually referred to as the problems of deferred shading. But don't expect me to post very often :)

The most difficult thing is anti-aliasing. First of all, I'd like to mention that I don't really like MSAA :) I am a big fan of super-sampling. That's because in CG we have a lot of aliasing we can't really control, like specular "reflections" on bumpy surfaces... heck! Even some diffuse maps can alias pretty badly under certain conditions! Unfortunately, that's still too slow for most hardware/engines/content/etc...

So, here is a summary of the methods I've implemented in the past in the 4A-Engine (the "Metro 2033" deferred renderer):
  1. HW MSAA (yeah, I know everybody will say this is not possible :))
  2. 2D super-sampling (both oversized RTs and a multi-pass approach where you can control the sampling locations)
  3. Temporal super-sampling (via temporal re-projection and caching)
  4. Edge detection and weighting/blurring (not really AA, but can produce results similar to MSAA with a somewhat "heavy" shader - see the sketch after this list)
  5. DX10-style mixed multi-sampling for G-buffers and super-sampling/under-sampling of the lighting
  6. (not implemented) I am still waiting for DX11 multi-frequency shading to expand the list of techniques :)
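To give a flavour of method #4, here's a minimal sketch: detect edges from the G-buffer normals and fade in a cheap cross blur there (the thresholds and weights are illustrative assumptions; the real shader is considerably heavier):

    sampler2D sceneTex  : register(s0);
    sampler2D normalTex : register(s1); // G-buffer normals
    float2 invScreenSize;

    float4 PSEdgeAA(float2 uv : TEXCOORD0) : COLOR0
    {
        float3 n  = tex2D(normalTex, uv).xyz;
        float3 nx = tex2D(normalTex, uv + float2(invScreenSize.x, 0)).xyz;
        float3 ny = tex2D(normalTex, uv + float2(0, invScreenSize.y)).xyz;

        // Edge weight from how fast the normal changes across the pixel.
        float edge = saturate((2.0 - dot(n, nx) - dot(n, ny)) * 4.0);

        // Cheap cross blur, applied only on detected edges.
        float4 c = tex2D(sceneTex, uv);
        float4 blur = 0.25 * (
            tex2D(sceneTex, uv + float2(invScreenSize.x, 0)) +
            tex2D(sceneTex, uv - float2(invScreenSize.x, 0)) +
            tex2D(sceneTex, uv + float2(0, invScreenSize.y)) +
            tex2D(sceneTex, uv - float2(0, invScreenSize.y)));
        return lerp(c, blur, edge);
    }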
I'll go into detail in the next posts :)