//-----------------------------------------------------------------------------
// Name: Volume Fog Direct3D Sample
//
// Copyright (c) 1998-2001 Microsoft Corporation. All rights reserved.
//-----------------------------------------------------------------------------
Description
===========
The Volume Fog sample demonstrates a per-pixel density volumetric rendering
technique. The fog volume is modeled as a polygonal mesh, and the density
of the fog at every pixel is computed by subtracting the depth of the
front side of the fog volume from the depth of the back side. The fog is
mixed with the scene by accumulating an in/out test at every pixel:
back-facing fog polygons add, while front-facing ones subtract. If the
resulting value is nonzero, the scene intersects the fog and the scene's
depth value is used. To get better results, this demo achieves 12 bits of
precision by encoding high and low bits in different color channels.
Path
====
Source: DXSDK\Samples\Multimedia\D3D\volumefog
Executable: DXSDK\Samples\Multimedia\D3D\Bin
User's Guide
============
The following keys are implemented.
<J> Move object backward on the Z axis
<M> Move object forward on the Z axis
<H> Move object forward on the X axis
<K> Move object backward on the X axis
<N> Move object forward on the Y axis
<Y> Move object backward on the Y axis
Camera Controls:
<LEFT> Slide Left
<RIGHT> Slide Right
<DOWN> Slide Down
<UP> Slide Up
<W> Move forward
<S> Move backward
<NUMPAD8> Pitch Down
<NUMPAD2> Pitch Up
<NUMPAD4> Turn Right
<NUMPAD6> Turn Left
<NUMPAD9> Roll CW
<NUMPAD7> Roll CCW
Mouse Controls:
Rotates Fog Volume.
Programming Notes
=================
Introduction
The article "Volumetric Rendering in Real-Time," printed in the 2001 GDC
Proceedings, covered the basis of volumetric depth rendering, but at the
time of the writing, no pixel-shader-compliant hardware was available.
This supplement describes a process designed to achieve two goals: to get
more precision out of an 8 bit part, and to allow the creation of concave
fog volumes.
Handling Concavity
Computing the fog distance in the convex case was relatively simple.
Recall that the depth of the front side of the fog volume was subtracted
from the depth of the back side (where depth is measured in units from
the camera). Unfortunately, this does not work for concave fog volumes,
because at a given pixel the volume may have two or more back sides and
front sides. The solution is intuitive and has sound mathematical
backing: sum all of the front sides and subtract them from the summed
back sides.
So now, computing concavity is as simple as adding the multiple front
sides and subtracting them from the multiple back sides. Clearly, a
meager 8 bits won't be enough for this. Every bit added allows another
summation and subtraction, and thus more complex fog scenes.

There is an important assumption being made about the fog volume: it must
be a continuous, orientable hull. That is, it cannot have any holes in
it. Every ray cast through the volume must enter the hull the same number
of times it exits. Under that assumption, the signed sum over a ray's
intersections with the hull measures exactly the distance the ray travels
inside the fog.
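Below is a minimal CPU-side sketch of that signed-sum rule for a single
pixel's ray. The Hit structure and FogThickness function are illustrative
names, not part of the sample's source; the intersection list is assumed
to come from casting the ray against the fog hull.

    #include <vector>

    struct Hit
    {
        float depth;       // distance from the camera along the ray
        bool  frontFacing; // true where the ray enters the hull
    };

    // Sum the back-side depths and subtract the front-side depths.
    // For a closed hull, entries and exits pair off, so the result is
    // the total distance traveled inside the fog, even when the volume
    // is concave and the ray enters it more than once.
    float FogThickness( const std::vector<Hit>& hits )
    {
        float thickness = 0.0f;
        for( size_t i = 0; i < hits.size(); i++ )
            thickness += hits[i].frontFacing ? -hits[i].depth
                                             :  hits[i].depth;
        return thickness;
    }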
Getting Higher Precision
Although most 3D hardware handles 32 bits of color, that is really four
8-bit channels. The way most hardware works today, there is only one
place where the fog depths can be summed: the alpha blender. The alpha
blender is typically used to blend alpha textures by configuring the
source to be multiplied by the source alpha and the destination by the
inverse source alpha. However, it can also be used to add (or subtract)
the source and destination color channels. Unfortunately, there is no way
to perform a carry operation here: if a channel's color value would
exceed 255, it simply saturates to 255.
In order to perform higher-precision additions in the alpha blending
unit, the incoming data has to be formatted in a way that is compatible
with how the alpha blender adds. To do this, the color channels can hold
different bits of the actual result and, most importantly, be allowed
some overlap in their bits.
This sample uses the following scheme: the red channel contains the upper
8 bits, and the blue channel contains the lower 4 bits plus 3 carry
slots. The top bit of the blue channel should not be used, for reasons
discussed later. So the actual value encoded is Red*16 + Blue. The alpha
blender will now add values in this format correctly up to 8 times before
there is any possibility of a carry bit failing to propagate. This limits
fog hulls to those whose concavity never lets a ray pass into and out of
the volume more than 8 times.
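As an illustration only (the helper names below are hypothetical, not
taken from the sample), a 12-bit depth d in [0, 4095] splits into the two
channels like this:

    // Red holds the upper 8 bits; Blue holds the lower 4 bits, with
    // bits 4-6 left clear as carry room and the top bit unused.
    unsigned char EncodeRed( unsigned short d )  { return (unsigned char)( d >> 4 );   }
    unsigned char EncodeBlue( unsigned short d ) { return (unsigned char)( d & 0x0F ); }

    // After blending, the accumulated value is still Red*16 + Blue,
    // because overflow out of Blue's low 4 bits lands in the 3 carry
    // bits instead of saturating.
    unsigned short DecodeDepth( unsigned char red, unsigned char blue )
    {
        return (unsigned short)( red * 16 + blue );
    }

With 3 carry bits, 8 values of up to 15 each (8 * 15 = 120) still fit
below the blue channel's unused top bit, which is where the limit of 8
summations comes from.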
Encoding the bits to be added cannot be done with a pixel shader. There
are two primary limitations. First, the color interpolators are 8-bit as
well. Since the depth is computed per vertex, this won't get
higher-precision values into the independent color channels. Second, even
if the color channels had higher precision, the pixel shader has no
instruction to capture the lower bits of a higher-precision value.
The alternative is to use a texture to hold the encoded depths. The
advantage of this is twofold. First, texture interpolators have much
higher precision than color interpolators; second, no pixel shader is
needed for the initial step of summing the front and back sides of the
fog volume. On parts with at least 12 bits of precision in the pixel
shader, it should be possible to embed the depth in a texture register
instead.
Unfortunately, most hardware limits the dimensions of textures; 4096 is a
typical maximum. This amounts to 12 bits of precision encoded in the
texture. 12 bits, however, is vastly superior to 8 bits and can make all
the difference in making fog volumes practical. More precision could be
obtained by making the texture a sliding window and breaking the object
into sizable chunks that index into that depth range, but this sample
does not do so.
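As a sketch of filling such a lookup texture, assuming a 4096x1 X8R8G8B8
surface (the creation and locking calls are elided, and the function name
is made up for illustration): texel i simply stores the encoded form of
depth i, so addressing the texture with a depth-derived coordinate yields
the red/blue split at every pixel.

    #include <windows.h> // for DWORD

    void FillDepthLookup( DWORD* pTexels ) // 4096 texels, X8R8G8B8
    {
        for( DWORD i = 0; i < 4096; i++ )
        {
            DWORD red  = i >> 4;    // upper 8 bits
            DWORD blue = i & 0x0F;  // lower 4 bits, carry room above
            pTexels[i] = ( red << 16 ) | blue;
        }
    }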
Setting It All Up
Three important details remain: the actual summing of the fog sides,
compensating for objects inside the fog, and the final subtraction.
The summing is done in three steps.

First, the scene is rendered to set the Z buffer. This prevents fog
pixels that lie behind totally occluding objects from being drawn. In a
real application, this Z buffer could be shared with the pass that draws
the geometry. Z writes are then disabled so that fog rendering will not
update the Z buffer.
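A minimal sketch of this first step, written against the Direct3D 8
interface this sample targets (pd3dDevice is assumed to be a valid
LPDIRECT3DDEVICE8 from d3d8.h, and the scene draw itself is elided):

    pd3dDevice->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );
    pd3dDevice->SetRenderState( D3DRS_ZWRITEENABLE, TRUE );
    // ... render the occluding scene geometry here ...
    pd3dDevice->SetRenderState( D3DRS_ZWRITEENABLE, FALSE ); // Z test stays on; writes stop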
After this, the summing is exactly as expected. The app simply draws all
the front-facing polygons into one buffer, adding up their results, and
then draws all the back-facing polygons into another buffer. In order to
sum the depths of the fog volume, the alpha blend factors are set to one
for both the source and the destination, so each incoming pixel is added
to the one already in the buffer.
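The render states below are a hedged sketch of that setup, again assuming
a valid Direct3D 8 device; the actual sample may order things
differently. Culling selects which side of the hull each pass sums: with
Direct3D's default clockwise-front winding, D3DCULL_CCW keeps front faces
and D3DCULL_CW keeps back faces.

    pd3dDevice->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
    pd3dDevice->SetRenderState( D3DRS_SRCBLEND,  D3DBLEND_ONE ); // add incoming pixel...
    pd3dDevice->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_ONE ); // ...to buffer contents

    pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_CCW );   // front faces: one buffer
    // ... draw the fog hull, then switch render targets ...
    pd3dDevice->SetRenderState( D3DRS_CULLMODE, D3DCULL_CW );    // back faces: other buffer
    // ... draw the fog hull again ...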
There is one potential problem, however: this does not take into account
objects inside the fog, which act as a surrogate fog cover. In that case,
the scene's own depth must be added to the sum, since the far end of the
fog would have been rejected by the Z test.
At first, this looks like it has an easy solution. In the previous
article, the buffers were set up so that they were initialized to the
scene's depth value. This way, fog depth values would replace any depth
value.