- Illumination Pass
- Ambient Occlusion (AO) Pass
- Reflection Occlusion Pass
- Lights Pass
- Simple Beauty Pass
- Radiance Occlusion (RO) Pass
- Irradiance Pass
- RGB Depth Pass
- Focus Pass
- Normal Pass in Normalized Tangent Space
- SSS Pass
- Object Matte Pass
- Material Matte Pass
- UV Texture Pass
- Motion Vectors for RSMB
- Saving Buffers
- Additional Resources
In this series of articles, we are exploring several of the possibilities that Filter Node Editors (FNE) provide within LightWave for render management. Some of these possibilities haven't been seen before, and though they are possible through custom-made tools, the nice thing about FNEs is that any CG artist can achieve the same thing in a very artist-friendly way. Moreover, some other possibilities can speed up some tasks with more efficiency and more flexibility than any other conventional solution provided by the package.
But, what's an FNE? We won't find any reference to them in the manual, nor is a plugin with that name shipped with the package. In fact, there's no native solution in this matter yet, but there are some interesting experimental versions proposed by brilliant developers that can be used in production for some specific purposes.
We can think about a Filter Node Editor as a place within LightWave where we can process any image at will in a nodal environment. The nodal system adds a completely new dimension to this processing by adding more capabilities and flexibility to the way we work. This processing can be performed on input images in the Image Editor before rendering (known as pre-processing), on output images after rendering (post-processing), or even while they are rendering.
These Node Editors affect our render at two different levels: at the pixel level and at the image level.
We can process an image at the pixel level in post-processing or while the render takes place. Depending on our camera settings, this Node Editor can affect the render after or before sub-frame operations, that is to say, after or before antialiasing, motion blur, depth of field, etc. have been added to the final image. (For example, we could greatly speed up and enhance native LW DOF with a nodal setup in an FNE.) This kind of Node Editor is called a Node Pixel Filter (NPF) or Pixel Filter Node Editor (PFNE). We can find a PFNE in the Image Filter list in the Image Editor; however, we can't apply it to input images as a pre-process. For post-processing operations, we find a PFNE in Effects > Processing > Add Pixel Filter.
We can also use an FNE at the image level, that is to say, after the final image and image buffers have been created. This kind of Node Editor is called a Node Image Filter (NIF) or Image Filter Node Editor (IFNE). We can apply an IFNE to the whole image as post-processing after it has been generated, but not while it's being rendered. An IFNE can also be applied to input images as a pre-process in the Image Editor (e.g., we could compose two landscapes within a third blank image to map it onto a card for a matte-painting background). For post-processing operations, we find an IFNE in Effects > Processing > Add Image Filter.
As we can see at this point, applying an FNE at the pixel level or at the image level has different implications for the processing of a rendered image. This will affect our results. But we'll discover later on how these differences can be useful depending on the specific task and the FNE we use.
In this article we are going to explore the possibilities of:
DP_Filter Node Editors, composed of two main modules:
- Node Image Filter (DP_NIF), which works as an IFNE
- Node Pixel Filter (DP_NPF), which works as a PFNE
These Filter Node Editors were developed by the most prolific LW nodes developer and excellent CG artist Denis Pontonnier. If you use LightWave, you must have used quite a few of his useful tools.
These plugins are available for x32 and x64 versions for LW 9.5 and LW 9.6. (There's also a previous version for LW 9.0 - 9.3.1 that we'll not cover in this article.)
Note: DP_Filter Node Editors include several other tool nodes as well. Working together, they really become a complete render management system within LightWave 3D.
Tibe3_PixelNode, which works as a PFNE
This was developed by the excellent CG artist and developer Chris Huf. No matter what 3D package you use, if you like IBL, you must have used his very useful sIBLedit.
These plugins are available for x32 versions for LW 9.6. (There's also a previous version for LW 9.0 - 9.2 and LW 9.3.1 that we'll not cover in this article.)
Note: While I'm writing this article, this FNE is in development, and some features and options might be improved and work somewhat differently by the time this article is published.
Every FNE has its own particularities in the way it works, and this series will explore specific features useful in the following aspects of render management:
1. By extracting and exporting several unconventional passes.
2. By composing and manipulating the previous passes together with other buffers and image components to get several interesting effects and speed up our work efficiently.
3. By color-correcting our input images and renders, either for a consistent color flow or for final creative adjustments.

In this article, we are going to explore the first group.
FNEs are able to extend render management capabilities within LightWave by providing the possibility to extract and export several unconventional passes at the same time in a single render.
Note: These multipass-rendering experiments with FNEs imply obtaining passes through buffers.
FNEs allow us to post-process these passes/buffers within the package and get results straight from the render engine in a faster way than with traditional techniques. If you are unfamiliar with multipass rendering within LightWave, please take a look at the interesting article about this topic written by William Vaughan called No Thanks... I'll Pass! (HDRI 3D Issue 26) for a better understanding of the importance of these techniques.
To get a clear idea about how these modules work, we'll begin our experiments with what is probably one of the most requested passes.

Illumination Pass
An illumination pass contains the color and luminosity data of the diffuse lighting and shadowing provided by direct and indirect illumination. More sophisticated workflows may require separating the ambient lighting from the diffuse lighting. Few buffer exporters can export an illumination pass, and none exports the ambient lighting separately (Irradiance pass). Here we are going to learn how we might export both.
LightWave has a buffer called Shaded Diffuse, which exports all the mentioned data, but all mixed with the surface color data. By using the FNEs, we can get a clean Illumination pass from this LW buffer.
Add a DP_NIF instance in the Add Image Filter panel and double-click on it. A Nodal Editor is displayed, and we can see the Color output and an Alpha output of the Image Filter. To edit an existing buffer, add the Render Buffer Node: Add Node > DP_Kit > Processing > Render Buffer.
We see a list of available conventional buffers that we can export or modify (figure 1). The first Color buffer is the equivalent of Final Render in LW buffer exporters, and Diffuse Color in the Render Buffer Node is the equivalent of Shaded Diffuse in LW buffers. We can also see two Integer inputs and outputs for moving pixels in X- and Y-coordinates (very useful for simulating chromatic aberrations refracted through poor-quality lenses and other effects), as well as a Normalized Depth buffer that works only in DP_NIF. If you are not familiar with LW's conventional buffers, please look at the LW manual (Image Processing: Image Filters > Photoshop PSD Export).
To make a buffer active, double-click the Image Filter output and check the buffers you want to activate. In this case, we will use Raw Color and Diff Color. Activating a buffer lets us visualize and export it. To visualize a buffer, we first need to make a test render to gather the buffer's data (press F9); be aware that if you don't activate a buffer you want to use later, you will have to activate it then and render again. Since we are experimenting, and since we are going to use other buffers later, we'd better activate all of them now. Then we can connect a given buffer from the Render Buffer Node to the Image Filter output and see a preview in the small preview window. But here's a real nicety added to DP_FNEs not long ago: we can preview any buffer and any node configuration with Viper! Just double-click the Image Filter Color output and click on Viper. A Viper window will open; choose your desired preview size and enjoy the view. This is quite useful because we can preview, almost in real time, all the processing we perform in DP_FNEs.
To subtract the surface color data from our Diffuse Color buffer, we add a DP_Divide Node (DP_Kit > Processing). Though we could use the regular Divide node in an IFNE, Denis Pontonnier devised this new node to avoid the unpredictable results the regular Divide node gives in a PFNE when the background value is 0.0 (black). His node setup is shown in Figure 2.
Note: For this node setup to work properly, we need to avoid colors with zero values in any of the RGB channels; e.g., instead of using pure red 255/0/0, we can use 255/1/1.
We multiply by the alpha buffer to preserve AA in the foreground edges around an empty background (no geometry). We could also connect the node output into the FG input of a Mixer Node and the alpha buffer into the Mixer's alpha input, or we could use a Gradient Node. In any case, we may want to add a white background, since this is a pass commonly used with multiplicative blending modes (figure 3).
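The per-pixel math behind this setup can be sketched in plain Python (a hedged illustration of the node arithmetic, not LightWave code; `EPS` stands in for DP_Divide's guard against zero-value channels):

```python
EPS = 1e-6  # stand-in for DP_Divide's protection against zero-value channels

def illumination(shaded_diffuse, raw_color, alpha):
    """Divide Shaded Diffuse by Raw Color per channel, then mix the result
    over a white background using the alpha, as in figure 3."""
    illum = tuple(s / max(r, EPS) for s, r in zip(shaded_diffuse, raw_color))
    # white background, since this pass is used with multiplicative blending
    return tuple(i * alpha + (1.0 - alpha) for i in illum)

# a mid-grey surface lit at half intensity yields 0.5 illumination
pixel = illumination((0.25, 0.25, 0.25), (0.5, 0.5, 0.5), 1.0)
```

The same arithmetic is what the Divide and Mixer nodes perform on every pixel of the buffers.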
Another very useful feature added to DP_FNEs not long ago is the possibility to save presets. We can save this setup as a preset by double-clicking the Image Filter output and choosing the Add Preset option. Name it "illumination pass" or something like that. After a preset has been added to the LW Presets workspace, we can name our custom-made buffers/passes or any other node setup and classify them by category. This can speed up our workflow for multipass rendering, compositing, and color grading in future projects.
Tip: If we think we may need a node setup recurrently, we can use Pom's Group Node to store nodal networks in a single node. This will help us to keep our workspace clean and organized as well.
Note: You'll find all the nodal networks tried in this article as presets here, together with some sample scenes.
Now, disable DP_NIF and add a DP_NPF instance in the Add Pixel Filter panel and double-click on it. The other Nodal Editor is displayed, but this time we can see a Buffer Root with all the output buffers available. To transform LW buffers, we have to add a Render Buffer Node here, too.
As with DP_NIF, we need to activate the buffers we are going to edit, preview, or export. To do this, double-click DP_NPF's Buffer Root and enable them. Since DP_NPF works at the pixel level, before the entire image buffers have been generated, some things work differently here: the background buffer is not fully compatible with DP_NPF, we can't preview in Viper what happens in DP_NPF, and buffers can't be transformed with nodes as in DP_NIF. At first this looks disadvantageous, but in fact, working in DP_NPF is more advantageous for some tasks because we can perform image transformations before sub-frame operations and even preview them in Viper with a little trick. The capability that makes this possible was added by Denis Pontonnier recently:
Denis Pontonnier's Tip: Simply add a second instance of the Pixel Filter Node Editor for building your node tree. It inherits the "Multithread" and "RayTrace" options of the first instance, but these options are still independent and can be modified to differ. The state of these options is now readable directly in the Processing tab.
Note: In this way, we can preview in Viper our node setups, add presets, and perform the same processings on conventional buffers that we are able to complete in DP_NIF, but with the PFNE advantages. Please take note of this because it will be important for later articles.
Since DP_NPF is applied per pixel, it’s a must to use the DP_Divide Node here. Later, we can group this node setup and save it as a .Nodes file or as an Illumination pass preset (figure 4).
Ambient Occlusion (AO) Pass
Some people call it simply the Occlusion (Occ) pass. Though the two could seem to be the same thing, they're not.
An occlusion pass is basically a shading method that represents the light obstructed by geometry on every area of a surface. Commonly, the calculated light source is uniform and omnidirectional (a dome or a hemisphere), but it could come from a point source too. Occluded areas will be black, and open areas will be white. This pass can be used for several purposes: in lighting, to provide a GI "look" or to add contact shadows (I also use it to fake soft shadows quickly); in surfacing, to occlude spherical reflection mapping; in texturing, to place dirt or oxidation in occluded areas for aged surfaces; etc.
On the other hand, in an ambient occlusion pass, light is calculated omnidirectionally but based on an environment texture (commonly a light probe or a spherical panorama, though it could be a gradient or a skylight shader). This means that the light source is not necessarily constant in all areas, and the same applies to its rays' intensity and colors. Its usage is confined to lighting and shading, either instead of GI or together with it.
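The difference between the two passes comes down to how each sampled ray contributes. A minimal sketch (the sampling scheme here is illustrative, not the shaders' actual implementation):

```python
def occlusion_value(ray_hits):
    """Uniform occlusion: every ray counts equally.
    ray_hits: booleans, True where a sampled ray is blocked by geometry.
    Returns the open fraction: 1.0 = fully open (white), 0.0 = occluded (black)."""
    return 1.0 - sum(ray_hits) / len(ray_hits)

def ambient_occlusion(samples):
    """Environment-weighted occlusion: unblocked rays contribute the
    environment color they see, so light direction and color are non-uniform.
    samples: list of (blocked, env_rgb) tuples, one per ray."""
    acc = [0.0, 0.0, 0.0]
    for blocked, rgb in samples:
        if not blocked:
            acc = [a + c for a, c in zip(acc, rgb)]
    return tuple(a / len(samples) for a in acc)
```

In the uniform case only the blocked/open ratio matters; in the ambient case the colors and intensities of the environment samples shape the result.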
We can generate occlusion passes from the GI or by setting an AO shader on surfaces. Later we will learn how we can extract and export an AO pass from GI (also known as Radiance Occlusion). But for now, we'll see how we might get an AO pass obtained through a shader.
There are basically two ways to set up any shader-based pass with these tools: locally or globally. If, say, we need the occlusion pass only on some objects or surfaces, or if we are already using an occlusion shader in the Surface Node Editor, or if we need different occlusion settings per surface, we'll want to set up this pass locally in the Surface Node Editor, because each surface will require different shading settings. But if we need the same occlusion effect on all (or almost all) objects or surfaces, then we'll want to set up the pass globally in a PFNE, because the settings will be the same for all of them.
For setting up a pass locally, Denis Pontonnier came up with a very ingenious solution by developing the DP_Extra Buffers Nodes. They are a set of two nodes: Store Extra Buffer Node, which works in the Surface Node Editor by storing any surface property, and Get Extra Buffer Node (for DP_NPF or DP_NIF) for gathering and exporting these data. Let's see how it works:
For setting up our occlusion pass, we just need to connect an occlusion node into Store Extra Buffer Node in the Surface Node Editor (figure 5).
Notice the occlusion node doesn't need to be connected to the Surface Root in order to be stored, but it's a must to connect the Access output of the Store Extra Buffer Node to an unused channel. We have up to 24 Extra Buffers. If you are planning to export several unconventional buffers, it's recommended to use about 12 buffers for storing surface properties and the other 12 Extra Buffers for other purposes that we'll see later in this article. Twelve Alpha inputs are for scalar nodes, and 12 Color inputs are for color and vector nodes.
If, for example, we are using an Ambient Occlusion node (Color output), we connect it to a color input (figure 6).
Later in DP_NIF or DP_NPF, we can use the Get Extra Buffer Node to gather the data from all surfaces in which we have used the Store Extra Buffer node and mix these additional buffers with the final rendered image or other buffers (figure 7).
Note: If we double-click in Store Extra Buffer node, we'll find 2 options for each buffer. Check Force Antialiasing Buffer when using Get Extra Buffer Node in DP_NIF, and activate Enable Antialiasing Buffer and Force Antialiasing Buffer when using Get Extra Buffer Node in DP_NPF. Most of the time, you'll want to leave both options checked.
The strategy in this case is to establish what buffer slot will store which surface property. We could store more than one type of property in the same buffer slot according to each surface (e.g., Occ in one surface, SSS in another surface for the same slot), but consider that these properties will be exported together in the same buffer. So it's better to export each type of property in different buffer slots.
Now, for setting up a pass globally, we can use PFNEs. The pioneer of this innovative approach was Chris Huf with his Tibe3_NodePF. This PFNE has the ability to apply procedurals, shaders, and Spot Info values where an actual object is located by evaluating these values three-dimensionally. For this reason, the Pixel Filter can only be applied to objects, not to background.
For applying an AO node globally, we just plug the node into the PFNode output (figure 8).
At the moment that I'm writing this article, the experimental version of Tibe3_NodePF for LW 9.6 works with Classic Camera (Motion Blur) and is applied globally per each pass. It also includes a module called Tibe3_PFOut to get LW buffers and some additional special buffers still in development.
Very recently, Denis Pontonnier improved his DP_NPF by adding the Global 3D Shading evaluation mode. By enabling the Raytracing/Global 3D Shad. option, we switch from 2D to 3D evaluation and access these new powerful capabilities. The first thing we notice is that 24 new Global Buffers are displayed in the Buffers Root for storing our new global buffers without the need to overwrite any LW buffer. There are 12 Color inputs for storing color and vector outputs and 12 Alpha inputs for storing scalar outputs.
Since DP_NPF works per pixel, the antialiasing (AA) of the 3D evaluation is affected in a different way depending on the type of camera we are using. For the Classic Camera, we don't need any special consideration. Just input an AO node into the Color output to visualize our global AO pass.
But within this new evaluation mode, when we are using or processing conventional buffers with the new advanced cameras (Perspective Camera, Surface Baking Camera, Real Lens Camera, Distortion Camera, etc.), we need to activate the MultiThread/Persp. AA option in the Buffer Root panel in order to get AA from these cameras and, very important, to enable working with more than one core (figure 9).
Denis Pontonnier's tip: We can use Save Mask Node to disable Buffer Root preview for all previous connections to prevent issues/artifacts/crashes in DP_NPF connection (e.g., 3rd party AO nodes or LW reflection Blur). It also disables the Pixel render process (node evaluation) on background to prevent other kinds of issues.
On LW 32-bits version, we may need to connect Save Mask node with DP_Ambient Occlusion node if we are using more than one Thread and we have an empty (black) background. SG_AmbOcc, Occlusion, and Occlusion II don’t need it. Pom’s Ambient Occlusion doesn’t work in Pixel Filter context.
Denis Pontonnier's Note: A few third-party Shaders could crash Layout as soon as they are connected to the Buffers Root node. This is fixable by the developers. They need to know that in PFNE context, their shader can be evaluated with an empty Nodal Access structure (e.g., testing Raytrace functions).
Tip: In some rare cases, another way to prevent issues could be to use a single Thread in Render Options.
Global 3D Shading is not previewable with Viper in DP_NPF, but it's previewable in DP_NIF. Global Buffers can be mixed also with any other conventional buffer or final color output in DP_NPF or DP_NIF through Get Global Buffer node (figure 10).
Denis Pontonnier's tip: To avoid noisy shading like Ambient Occlusion or SSS with Adaptive Sampling (AS), use more samples in Shader node or better AS settings in the Advanced Cameras panel.
Note: The above tip is recommended when the noisy shading is not taken into account in the sampling process of Adaptive Sampling for the final render. It's not necessary if the noisy shading is sampled with the final image or if we are getting antialiasing through AA passes.
Though this is not a must for compositors, Denis implemented DOF and Motion Blur for Global Buffers for Classic and Advanced Cameras, which means we can set up these Global Buffers and get DOF, bokeh effects, and Motion Blur quite close to those of LW.
Reflection Occlusion Pass
A reflection occlusion pass is used when we are solving our reflections without raytracing but want a more realistic "look" for non-raytraced reflections. Since Backdrop reflection or Spherical Map reflection doesn't take the ray path into account, it doesn't know whether geometry is obstructing or reflecting a ray. To fake a reflected ray obstructed by geometry, we can use an occlusion pass to specify how strongly each area of the surface will reflect a given spherical image or environment map.
If we are adjusting every surface with its specific reflection settings (Backdrop, Spherical Map, fresnel effects, blur, etc.), we'll want to convert LW's reflection buffer into a Reflection Occlusion pass. The setup could be as in Figure 11.
In DP_NPF, activate the Refl. Color buffer in Buffer Root, enable the Global 3D Shading mode (and the Multithreading option if you are using an Advanced Camera), and multiply an Occlusion shader by the Refl. Color buffer. Figure 12 shows Denis's node setup with a Scale node, which is a bit faster.
If we want to adjust the strength of the occluded areas, we can use a Mixer node (Multiply mode), or a Gradient node, adjusting the alpha percentage of the first Key to control the strength of the occlusion effect. For a slightly faster setup, connect an Add node to a Clamp node to keep reflection values within range (0 to 1) (figure 13).
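Numerically, all of these variants scale the reflection buffer by the occlusion value. A hedged sketch, with `strength` playing the role of the Mixer alpha or the first gradient Key (an illustrative parameter, not a node setting):

```python
def clamp01(x):
    """Clamp node: keep values in the 0-1 range."""
    return max(0.0, min(1.0, x))

def reflection_occlusion(refl, occ, strength=1.0):
    """Scale the Refl. Color value by the occlusion value; strength = 0
    leaves the reflection untouched, strength = 1 applies full occlusion."""
    occ = occ * strength + (1.0 - strength)
    return clamp01(refl * occ)
```

The final clamp mirrors the Clamp node's job of keeping reflection values from exceeding 1.0 after any additive adjustment.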
We could also set up an Occlusion Reflection pass globally by connecting a reflection shader. If additionally we want to get fresnel effect, we could use the Geometry buffer as input parameter in a Gradient node. We use white color for both Keys (alpha goes from 100% to 0%) (figure 14).
Note: In this case we can use the Save Mask Node to prevent any issues with the Reflection Blur option or Ani-Reflections.
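The fresnel falloff that the Gradient node fakes from the Geometry buffer can be approximated with a Schlick-style curve (a hedged sketch; the `f0` base reflectance is an illustrative assumption, not a LightWave default):

```python
def fresnel_weight(cos_incidence, f0=0.04):
    """Schlick approximation: reflection strength rises from f0 (surface
    facing the camera, cos = 1) toward 1.0 at grazing angles (cos -> 0),
    matching the white-to-white Keys with alpha falling from 100% to 0%."""
    return f0 + (1.0 - f0) * (1.0 - cos_incidence) ** 5
```

Multiplying this weight into the reflection-occlusion result gives stronger reflections at grazing angles, as the figure-14 gradient does.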
Later we can input any of these node setups into a Global Buffer input (color) to save it with a Get Global Buffers Node, or continue transforming our Reflection Occlusion buffer in DP_NIF. Do you see now why it's a good idea to leave 12 buffers free for other uses?
Lights Pass
A light pass isolates the diffuse lighting and shadowing component per light or group of lights by storing their RGB data. Light passes, just as in real photography, are useful for readjusting or even relighting our scene in post using the structure of our original lighting rig. Since lighting setups are often solved with multiple lights, this type of pass requires multiple versions, one for each light or group of lights. In LightWave, this is commonly done by setting up separate passes, but with FNEs we can find ways to set up and export these passes globally in a single render.
To do so, we can use the new DP_Light Group node. This node is fantastic for grouping lights by using prefixes in their names: lights that share the same prefix belong to the same group. If we want to isolate individual lights, we can use Michael Wolf's very useful Single Light Lambert node, which can also group individual lights by using an Add node (vector); or we can still use DP_Light Group by giving a light a unique prefix (a light group formed by only one light). Let's see how we might set up these passes.
In DP_NPF, add a DP_Light Group node and activate Global 3D Shading. This node is a Lambert diffuse shader that can output the Diffuse Color (with shadowed diffusion color), Light Color (not-shadowed diffusion color), and Shadows. In this case we'd use the first color output (Diffuse Color), pick pure white color, and enter the proper prefix in the Light Prefix Name option. In Figure 15 we have grouped a light rig of 16 lights into 3 groups. Later, we can input each Light Group node to Global Buffers (color inputs) and save these buffers with Get Global buffers node.
Tip: Let's say we don't like Lambertian shading, or we aren't using it, or we want to set up another diffuse shading globally. In that case, we can still use the DP_Light Group node or db-w's Single Light Lambert node. Since we know these nodes use the Lambertian shading model, we can divide our favorite diffuse shading model (pure white color) by a Lambert node (pure white color); this output is the difference factor between our chosen diffuse shading and Lambert. Then we can use this factor to re-model the Lambert shading by feeding the result into the Color input of the DP_Light Group node (figure 16). The resultant shading model will be exactly like our chosen diffuse model!
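The figure-16 trick boils down to simple per-sample arithmetic. A hedged sketch (the sample values are illustrative, and `EPS` stands in for a DP_Divide-style zero guard):

```python
EPS = 1e-6  # guard against a zero Lambert value, as DP_Divide does

def remodel_lambert(custom_diffuse, lambert_diffuse, light_group_lambert):
    """factor = chosen shading / Lambert (both evaluated with pure white);
    feeding the factor into the Light Group's Color input rescales its
    Lambert output into the chosen shading model."""
    factor = custom_diffuse / max(lambert_diffuse, EPS)
    return factor * light_group_lambert
```

For example, where a non-Lambertian shader returns 0.4 and Lambert returns 0.5, a grouped-light Lambert value of 0.3 is rescaled to 0.24, exactly what the chosen model would have produced for that light group.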
Note: This trick works in a PFNE (3D evaluation) or in the Surface Node Editor with the DP_Shadows node as well. Now, let's say we are using different shading models on several surfaces. In that case, we would have to set up our light passes locally. We could apply the previous trick on each surface, depending on the diffuse shading model used, and store the output with the Store Extra Buffers node; however, that wouldn't be very smart, since with many light groups we'd have to set up each group in the Surface Node Editor manually. The best strategy, then, is to input into the Store Extra Buffer node only the diffuse shading models of those surfaces that aren't Lambertian (figure 17). A lot easier!
Later in DP_NPF, we get those diffuse shadings through the Get Extra Buffer node and divide them (with the DP_Divide node) by a Lambert node, with Global 3D Shading mode enabled. To get the matte of our non-Lambertian surfaces, we can use several methods.
Denis Pontonnier's tip: A logic node based on a dot product (taking the X, Y, and Z) should be fine here. Or you could also use DPKit Color Key node—default settings. Just plug in the Get Extra Buffer Color, and it will output the (scalar) matte.
In Figure 18, a Mixer Node is used to compose non-Lambertian with Lambert surfaces by using our matte as alpha. In this way, we don't need to subtract our non-Lambertian surfaces from the general light pass. The result is a bit faster, too.
We can group this nodal network with Pom's Group Node and, by changing the prefix name of the three Light Group nodes, we can set up several other light passes in a single render.
Remember to input these nodes in Global Buffers color inputs to save these custom-made buffers.
Simple Beauty Pass
A beauty pass is composed of the Raw Color component and the diffuse lighting component without shadows. This last detail (shadows) is what prevents us from combining the Raw Color buffer with the Diffuse Shading buffer to build our beauty pass from native LW buffers. To get the diffuse shading component without shadows, we can try the DP_Shadows node, setting pure white color and reducing Regular Shadows to 0% (figure 19).
Note: A checkbox for shadows in diffuse shading nodes (similar to Radiosity and Caustics) would allow us to get beauty passes for different diffuse shading models.
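The pass itself is just a per-channel multiply. A minimal sketch of the math (an illustration, not LightWave code):

```python
def simple_beauty(raw_color, unshadowed_diffuse):
    """Beauty = Raw Color * diffuse lighting with shadows removed
    (DP_Shadows with pure white color and Regular Shadows at 0%)."""
    return tuple(c * unshadowed_diffuse for c in raw_color)
```

Because the diffuse term carries no shadowing, the result keeps the surface's full color even in areas that would be shadowed in the final render.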
Radiance Occlusion (RO) Pass
A radiance occlusion pass is formed by the shadowing provided by the ambient lighting (commonly GI). It can also contain the color contribution from the distant scene (direct ambient lighting) and/or the color contribution from the local scene (indirect ambient lighting). Since this pass is commonly used in multiplicative blending modes, everything else in the image should be white. We might want to isolate this pass for mixing it with an occlusion pass, or for enhancing it in post-process if it has been rendered with just a few rays, or for adjusting its strength globally or locally in post.
If we want a Radiance Occlusion pass with the color contribution from the distant scene only (no indirect bounces), the easiest way to get this buffer in LightWave is through an Ambient Occlusion shader stored locally (through Extra Buffers) or globally (through a PFNE). I've noticed we can get smoother and faster renders by setting up grainy shaders like this one locally through Extra Buffers, but the most suitable way will depend on the number of surfaces.
If we are using an AO shader globally (Backdrop Color Mapping of Occlusion II) with Raytrace/Global 3D Shad. in DP_NPF, or locally through the Extra Buffers nodes, we just need to connect a Length node to get the alpha of the occluded areas and leave only the colors for those areas (figure 20).
Since this is a multiplicative pass, if we have an empty background (black color), we can use a Color Key Node or a Dot and Logic node to get the matte for a white background, or as in the above example, we can just use the Alpha buffer of Render Buffer node.
Finally, we can group this node tree with Pom's Group Node and plug this in Global Buffers to save this buffer through Get Global Buffer Node.
Irradiance Pass
An irradiance pass is formed by the colorations provided by indirect bounces. This pass doesn't take into account the shading provided by indirect (or direct) illumination. It's also known as the Color Bleed pass. We'd want an irradiance pass to control the mood and style of our CG photography. Though the strength of this effect can be adjusted with a nodal setup in the Surface Node Editor, we have better control of it in post. I've found this pass very useful, for example, for food photography, where many bounces provide more color richness in the shades and a more appetizing appearance. With this pass we are able to strengthen or diminish the effect by saturating or desaturating it without compromising render times.
We can get this pass globally or locally by using the DP_Color Bleed node; however, this node only provides the color component from local bounces (from adjacent surfaces), not the bounces provided by direct ambient lighting. We can get both types of indirect bounces by modifying our Illumination pass: we just subtract the direct lighting component (DP_Shadows node) and the ambient occlusion component, and then multiply the result by the surface raw colors (figure 21).
Tip: Ambient Occlusion intensity should be proportional to GI intensity.
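The figure-21 subtraction can be sketched per pixel as follows (a hedged illustration; the clamp at zero is my assumption, added to keep the subtraction from going negative):

```python
def irradiance_pass(illumination, direct_diffuse, ambient_occ, raw_color):
    """Strip the direct lighting (DP_Shadows) and ambient occlusion
    components out of the Illumination pass, then re-multiply by the
    surface raw colors to recover the color-bleed contribution."""
    indirect = max(illumination - direct_diffuse - ambient_occ, 0.0)
    return tuple(indirect * c for c in raw_color)
```

Whatever remains after the subtraction is, by elimination, the indirect-bounce contribution, which is why the AO intensity should track the GI intensity as the tip says.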
The same as with our Light passes, if we are using several diffuse shading models, we need to store those shading models locally through Store Extra Buffers node. Later in DP_NPF, we get those diffuse models through Get Extra Buffer node and mix the diffuse lighting component from non-Lambertian surfaces with the diffuse lighting of Lambert surfaces (from DP_Shadows Node) with a Mixer tool (figure 22).
Since LightWave 10, a much easier way of getting an Irradiance pass is with the new Radiosity and Ambient Occlusion buffers (figure 22a).
The idea is simply to subtract the AO buffer from the Radiosity buffer to obtain the irradiance component. Since high illumination values may produce negative values when subtracting, we limit the lowest value to 0.0 in the Low parameter to avoid black areas or splotches when adding this pass in floating-point space later in compositing. To avoid clipping the dynamic range of your scene, use a very high value in the High parameter (e.g., 65536% will be enough for real-world contrast ratios, since we won't commonly go beyond 24 EVs).
Note: With this new method, the pass should be obtained in the DP IFNE, since it's not possible to obtain it at the pixel level.
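The Low/High limiting described above amounts to a clamped subtraction (a sketch of the buffer math; 65536% is expressed here as 655.36 in normalized units):

```python
def irradiance_lw10(radiosity, ambient_occ, low=0.0, high=655.36):
    """Radiosity buffer minus AO buffer. Low = 0.0 avoids negative values
    (black splotches in float compositing); High stays far above any real
    scene value so the dynamic range isn't clipped."""
    return min(max(radiosity - ambient_occ, low), high)
```

Where the AO term exceeds the radiosity value, the result pins to 0.0 instead of going negative, which is exactly what the Low parameter protects against.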
RGB Depth Pass
As we know, LightWave can output the Z-buffer with AA or without it, normalized or not. We are able to get the Depth buffer with DP_Render Buffer Node and a Normalized Depth Buffer in DP_NIF. But some people like to set up this pass separately to have more control of it (and because this pass is often rendered at double or quadruple resolution without antialiasing to avoid artifacts in sub-pixel details). We can quickly set up a Depth pass in PFNEs with a Distance to Camera gradient (figure 23).
But we can go further with this pass and set up an RGB Depth pass. This pass gives us more control for post-processing based on depth by separating foreground, midground, and background elements. I've found this kind of pass very useful for mimicking the desaturation effect according to depth in underwater photography (like in the dolphin scene in my article in HDRI 3D Issue 19). We can isolate its RGB channels to mask color corrections, adjust the strength of backlit particles, rain, clouds, or fog, or adjust foreground bokeh effects or background reflections, etc.
If we want to get an RGB Depth pass from LW buffers, we can use Depth or Norm. Depth from DP_Render Buffer Node by connecting this buffer into the Input parameter of a Gradient Node and coloring it. Or by using a colored Distance to Camera gradient from a Color Layer Node (figure 24).
Tip: We can store Color buffers in Global Alpha Buffers inputs by decomposing our color output in RGB scalar outputs. We need 3 Global Alpha Buffers instead of one. Later in post we can re-compose as a color buffer or use its RGB channels separately.
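The idea of an RGB Depth pass can be sketched per pixel as three overlapping depth bands, one per channel. The band centers and widths below are arbitrary assumptions for illustration; in practice they would be tuned with the gradient keys:

```python
def rgb_depth(d):
    """Map a normalized depth d in [0, 1] to an (R, G, B) triple:
    red peaks in the foreground, green in the midground, blue in
    the background, so each channel can be used as a depth mask.
    Band centers (0.0, 0.5, 1.0) are arbitrary for this sketch."""
    tri = lambda x, c, w=0.5: max(0.0, 1.0 - abs(x - c) / w)
    return (tri(d, 0.0), tri(d, 0.5), tri(d, 1.0))
```

In compositing, each channel can then mask a depth-dependent correction (e.g., desaturation driven by the blue channel).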
Some applications make use of focus passes/maps to set up the Depth Of Field effect. This pass is also very useful in LightWave to achieve the same effect through FNEs. Unlike a Depth pass, which is based on the distance to camera, a Focus pass is based on the distance to an object (the focus object). Let's add a Null object, name it focus or something like that, and place it in the area we want to be in focus. Within DP_NPF, activate Global 3D Shading mode, add a Distance to Object gradient (Layers > Scalar Layer), and pick the focus object as the reference object. The start Key is white, and the end Key is black. If we need to adjust the gradient parameter dynamically, we can connect this node to the input parameter of a Gradient node. Another way is to solve the distance-to-object gradient with a Distance node (vector) by picking the World Position of the focus object and the World Position of a given spot (figure 25).
In the next article of this series, we'll learn how to use this pass to make our own DOF filter.
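The Distance to Object gradient above amounts to a simple per-pixel falloff around the focus null. This sketch assumes a linear white-to-black ramp; the `falloff` distance is a hypothetical scene-scale parameter, not a LightWave setting:

```python
import math

def focus_value(spot_pos, focus_pos, falloff=2.0):
    """White (1.0) at the focus null, fading linearly to black
    at 'falloff' meters away -- the white-to-black Distance to
    Object gradient described in the article. Positions are
    (x, y, z) world-space tuples."""
    d = math.dist(spot_pos, focus_pos)
    return max(0.0, 1.0 - d / falloff)
```

The resulting grayscale map is what a nodal DOF filter can use to drive blur strength per pixel.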
Normal Pass/Maps in Tangent Space
Surface normals are vectors perpendicular to the surface. Normal maps are images that describe the X, Y, and Z coordinates of a surface normal (in its RGB channels) according to a reference point. Depending on the reference point, we have different "formats" for normal mapping (World Space, Object Space, Texture Space, Tangent Space). Each one has its particular advantages and disadvantages; however, Tangent Space is probably the most suitable format since it can be tileable and applied to world geometry, rigid objects, deformable characters, etc. Some of these things are not possible with the other spaces/formats.
We can use normal maps in Tangent Space for enhancing the appearance and details of a low-poly model by generating a normal map from a high-poly object without increasing the number of polygons. Normal passes can be used as well for relighting our renders in a compositing package with applications like the Normality plugin (http://www.minning.de/software/normality).
LightWave has shaders like Bumprgb or Normal Color to output normal passes or normal maps (by baking them in UV texture maps). But none of them are in Tangent Space coordinates. For normal passes in Tangent Space, TrueArt has a nice free shader in their TrueArt's Node Library called Normal Pass. We can use this node to apply a Normal pass globally to our scene in DP_NPF by activating the Global 3D Shading evaluation mode. By making some experiments, we can get a native normal pass with a simple nodal setup as well (figure 26).
We just take the Normal data from Spot Info Node, invert the vectors by subtracting them from 1/1/1 values, and normalize this output. Voila! a Normal pass! The good thing about Normal output is that it takes into account not only the surface normal vector of the geometry of a given spot including surface smoothing, but also bump mapping. But let's go further. Suppose we want to add a bump texture globally for our normal pass. We can add this data with a Subtract Scaled node (figure 27).
This kind of result is usable as a normal pass in compositing packages for relighting, but since the background is gray, we might have some problems at the borders when generating normal maps. In that case, we could mask the background with a Mixer node by using the alpha output from the Render Buffer Node and picking a bluish color (127/127/255) for the BG (figure 28).
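The invert-normalize-mask chain described above can be sketched per pixel like this. It is a rough illustration of the nodal steps (subtract from 1/1/1, normalize, mix with the flat-normal background using alpha), not the actual node evaluation:

```python
def normal_pass(n, alpha):
    """Sketch of the article's setup: invert the spot normal by
    subtracting it from (1, 1, 1), normalize the result, then
    blend it over the flat-normal background color
    (127/127/255 -> 0.5, 0.5, 1.0) using the render alpha as
    the mask, so map borders stay clean."""
    ix, iy, iz = (1.0 - c for c in n)
    length = (ix * ix + iy * iy + iz * iz) ** 0.5 or 1.0
    fg = (ix / length, iy / length, iz / length)
    bg = (0.5, 0.5, 1.0)  # "bluish" background color
    return tuple(a * alpha + b * (1.0 - alpha) for a, b in zip(fg, bg))
```

With alpha at 0.0 the pixel collapses to the background color, which is what the Mixer node accomplishes.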
Something worth remembering in these experiments is that DP_NPF in 3D evaluation mode is able to work with Advanced Cameras as well. Which means we can convert any of our custom-made buffers into maps through Surface Baking Camera. So we might set up this buffer globally (in DP_NPF) or locally (in Surface NE and DP_NIF through Extra Buffers), and render with Surface Baking Camera by choosing the appropriate UV map for a given object (figure 29).
SSS shading represents the light scattered inside and along a surface. This effect is often used for translucent materials like milk, skin, wax, rubber, marble, cheese, plants, fruits, frozen glass, etc. There are several ways to achieve it depending on the complexity of the material, but multilayered sub-surface scattered materials are probably the surfaces we'll want to split into passes to fine-tune in post with more control and flexibility. Multilayered SSS can be achieved through Materials or Shaders. We cannot export a Material directly through Store Extra Buffer Node, but we can split the material into its shading components with the useful Split Material Node from TrueArt's Node Library or the very useful Material Blender Node from db&w (figure 30).
Note: We can export all SSS nodes through Store Extra Buffers. SSS, SSS2, and ChanLum, even the legacy nodes like Kappa, Kappa II, and Omega are exportable through Extra Buffers Nodes. So are SSS shadings from Fast Skin Material. The only Material not exportable is Simple Skin.
Tip: By adjusting the Epidermis Visibility and SubDermis Visibility percentages—and diffuse color—with different nodes, we are able to isolate these properties and store them separately through Store Extra Buffers Nodes.
We can set up SSS passes globally with the ChanLum shader as well (figure 31).
Note: For some reason, SSS and SSS2 don't work globally in any PFNE.
Tip: Do not use legacy SSS shaders globally with the Multithread option enabled. It seems these old shaders are not compatible with the way PFNEs work in recent LW versions. If you want to use these SSS nodes globally and get AA, use Classic Camera instead.
Object Matte Pass / Object ID Pass
This pass stores a different matte color for each object in the scene. In post, matte colors allow us to isolate processings for each object in the scene. This can be done with color keying tools by extracting an alpha matte for a specific object. Each matte includes antialiasing, but it also may include Motion Blur or DOF if these effects have been rendered in the final image.
We can get an Object Matte pass globally within PFNEs by using an Object ID node or DP_Random node by driving the Hue of a Color Tool node (figure 32).
Tip: It’s very advisable for this pass to turn OFF Dither Intensity in the Processing tab of the Effects panel.
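The ID-to-hue trick can be sketched per pixel. The golden-ratio hue step below is my own assumption, used to spread hues far apart for easier keying; the actual nodes simply drive the Hue input of the Color Tool from the ID value:

```python
import colorsys

def matte_color(object_id):
    """Derive a distinct flat matte color per object by mapping
    the object ID to a hue at full saturation and value, as with
    the Object ID -> Color Tool Hue connection. The golden-ratio
    multiplier spreads consecutive IDs across the hue wheel."""
    hue = (object_id * 0.618033988749895) % 1.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

In post, a color keyer tuned to one of these flat colors extracts that object's antialiased matte.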
We can use Michael Wolf's Extended Spot Info or ObjectID Node by Symmetrix (which provides the most usable results that I’ve tried).
DP_Item Info may work as well. It could happen that the color of the first object may be too similar to the background color (remember DP_NPF interprets background as any other object). In such a case, we might use Save Mask Node.
Denis Pontonnier's Tip: To get a result similar to the Object ID output of the Extended Spot Info Node, we can add 10000001.8 to the Item ID output of the DP_Item Info Node (figure 33).
We can also try to find ways to set up an Object ID pass that can be recognized by the ProEXR plugin or Nuke (and maybe Fusion). In that case, we'd need integer values in a single-channel (grayscale) buffer to define each object (16 bpc or 8 bpc). We could get it by inputting a DP_Item Info Node (curiously, this is the only way it works) into the Brightness of the Color Tool node (with a 001/001/001 value as the base color) and rendering half-float EXR files without AA, DOF, or Motion Blur (figure 34).
Later, we could label it as an Object ID channel (tagging properly its type and data format). However, an Object ID buffer is useless for practical purposes without a Coverage buffer, which provides antialiasing to the integer buffer. This might be achieved through an erode/dilate filter perhaps, but seeing that there's no filter like this for LW, I haven't pursued this approach—though it's worth more extensive research.
Material Matte Pass
This pass stores a different matte color per each surface in the scene. The same as with the Object ID pass, a Material Matte includes antialiasing and also may include Motion Blur or DOF if these effects have been rendered in the final image.
At first, we may think that since it's just a matte per surface, we might choose a different plain color for each surface and store them with the Store Extra Buffer Node. But picking different colors can be a tedious process if we have many surfaces. A clever idea from Denis Pontonnier was to set this up automatically for each surface, with a random value per surface, through the Store Extra Buffers Node with a DP_Random node set to Randomized by Surface (figure 35).
Later in DP_NPF, we can input this random value to the Hue parameter of a Color Tool node by activating Global 3D Shading mode.
Tip: It’s very advisable for this pass to turn OFF Dither Intensity in the Processing tab of the Effects panel.
Very recently, Denis devised a node called Surface ID to get this data inside PFNE. This is quite useful because with this node, we can easily get a surface matte pass globally (figure 36).
Denis Pontonnier’s note: This is a pseudo ID number, rather than a position of "unique" Surface ID in the scene surface list. So removing or adding a surface will change the output value. The usage of this pass in a compositing package is the same as that with an Object Matte pass.
UV Texture Pass
A UV map basically represents a 3D object surface (X, Y, Z coordinates) in two dimensions (U and V coordinates). The UV Texture buffer encodes the values of these coordinates into the red and green channels of an image. We can use these coordinates later in a compositing package to retexture our rendered objects in 3D. Most of these applications use multichannel formats like RPF and RLA to read this data natively, but it's also possible to use UV data exported in 16-bpc EXR images.
LightWave exports the UV Texture buffer through Extended RPF Export filter, but we can export this buffer also as EXR or any other format through FNEs.
Note: RE:Vision Effects (http://www.revisionfx.com/) offers a plugin suite called RE:Map that includes a module called RE:Map UV for re-mapping UV textures from 16-bpc/32-bpc images for After Effects, Fusion, Final Cut Pro, Quantel generationQ, and other post-production applications.
For automatically generating a UV Texture pass for all our UV maps, we can export—through the Get Extra Buffers node— the UV coordinates data from each surface to DP_NPF by selecting the proper UV map with the UV Map node. Later in DP_NPF or DP_NIF, we filter these coordinates (scalar output) with a red and green gradient. The first Key is black, and the second Key is red or green (figure 37).
Tip: In DP_NIF we can multiply the Alpha buffer instead of using the Save Mask Node. Later, we mix these color outputs in additive mode and connect to a Global Color Input to be saved through a Get Global Buffer node.
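Per pixel, the UV Texture pass boils down to writing the U coordinate into red and the V coordinate into green, exactly what the two black-to-red and black-to-green gradients mixed additively produce. A minimal sketch (the clamping is an assumption for non-float output):

```python
def uv_to_color(u, v):
    """Encode UV coordinates into the red and green channels,
    as the additive mix of the black->red and black->green
    gradients does; blue stays 0. Values are clamped to [0, 1]
    on the assumption of a non-floating-point target format."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return (clamp(u), clamp(v), 0.0)
```

A UV remapping tool such as RE:Map UV then reads these two channels to place a new texture on the rendered object.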
Motion Vectors for RSMB (Update)
ReelSmart Motion Blur (RSMB) is a post-processing tool from RE:Vision Effects (http://www.revisionfx.com/) for adding, removing or improving motion blur effects in compositing packages. It's available for After Effects, Avid systems, Quantel generationQ, Nuke, Fusion, Autodesk Advanced systems and several other post-production tools.
Even though the automatic tracking in RSMB Pro is good enough most of the time without external motion vectors, if elements are too close to the camera or move too fast for pixels to be tracked from one frame to the next, we might need to export our LightWave motion vectors in a format compatible with RSMB.
In LightWave, each motion vector buffer (X, Y) contains scalar floating-point values, where each pixel value of the motion vector image represents the translation distance of a pixel within one frame's time, in image space. LightWave exports the X and Y motion vectors as separate scalar buffers. RSMB expects motion vectors in image space, but encoded differently: it assumes both motion vectors are contained in a single RGB image (floating point, 16-bpc, or 8-bpc), with the vector buffers encoded in the red and green channels, where positive and negative values are coded with different colors for each motion vector. When working in 8-bpc/16-bpc space, RSMB assumes the vector information has been normalized so that both X and Y range from -1 to 1, commonly using a constant value of 1/8 to scale the X and Y component values. The setup in figure 37a can then be used when saving this buffer in 8-bpc/16-bpc image formats:
If we are going to save our motion vector pass for RSMB in floating-point values, it's better not to apply any scaling in our node setup; otherwise, the motion blur could be exaggerated later in the compositing package. In this case, it's better to just invert the X motion vector by multiplying it by -1.0 and leave the Y motion vector unscaled. Later, in the RSMB panel within the compositing package, scale both vector values by 0.125 (figure 37b).
In our node setup, the negative multiplication is applied to the X-axis but not the Y-axis because in LightWave's coordinate system, the positive Y-axis points up and the negative Y-axis points down, as RSMB expects, but the positive X-axis goes from right to left and the negative X-axis from left to right, the opposite of what RSMB expects.
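Both encodings described above can be sketched per pixel. This is a rough illustration based on the article's description of the RSMB convention (flip X, scale by 1/8, remap [-1, 1] to [0, 1] for integer formats; flip X only for float), not RE:Vision's reference code:

```python
def rsmb_encode_int(vx, vy, scale=0.125):
    """Encode LightWave motion vectors for RSMB in 8/16-bpc
    space: flip X (LightWave's X axis runs opposite to what
    RSMB expects), scale by 1/8, then remap [-1, 1] to [0, 1]
    for the red/green channels. Blue is unused."""
    remap = lambda x: max(0.0, min(1.0, (x + 1.0) * 0.5))
    return (remap(-vx * scale), remap(vy * scale), 0.0)

def rsmb_encode_float(vx, vy):
    """Floating-point variant: only flip X and leave magnitudes
    unscaled; RSMB itself is then set to scale both by 0.125."""
    return (-vx, vy, 0.0)
```

A zero vector lands exactly at mid-gray (0.5, 0.5) in the integer encoding, which is how RSMB distinguishes positive from negative motion.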
Tip: We might use the new DP Mask Objects node for exporting the objects alpha (Mask's output) for areas where motion vectors are known to be valid, as is specified in RSMB documentation.
To simplify this node setup, Michael Wolf suggested using the very useful Remap node from the db&w Tools (figure 37c).
In this case the Y Motion Vector is inverted according to what RSMB expects before conforming the input vector for the db&w Remap node.
To get the same results as before for non-floating-point formats, we could set the Minimum and Maximum inputs to about -8.0 / 8.0, remapping them from 0.0 to 1.0. For floating-point formats, we just remap values from -1.0 to 1.0 to unclamped values from 0.0 to 1.0. The final subtraction is for coding the vector colors to be compatible with RSMB.
Not long ago, the process for exporting these buffers was very different depending on whether we were using DP_NPF or DP_NIF. In the old way, we overwrote existing LW buffers in DP_NPF, and the process for DP_NIF wasn't as fast as it is now. Nowadays, the process is very easy and efficient, and it's the same procedure for both FNEs, so we're going to learn the new way in this article.
For DP_NPF, we double-click on the Buffer Root and activate RayTrace/Global3DShad. This enables temporal Global Buffers.
For DP_NIF, we double-click the Image Filter output and activate the Global Buffers option. 24 new additional buffers will be displayed. If we connect the output of our transformed buffers to the input of these Global Buffers, we can save up to 24 new buffers. We just need to be careful to choose the appropriate buffer type (Color or Scalar). We can connect transformed buffers, Spot Info data, or shaders and procedural textures applied globally and evaluated in 3D.
To save a node or node tree connected to a Global Buffer, we add the new Get Global Buffer Node (DP_Kit > Processing). We can use this node in DP_NPF or in DP_NIF. To save a buffer, double-click on Get Global Buffer Node, and a pop-up window will open with the option to save these temporal buffers. We can choose the format, root, and name for our new buffers (figure 38).
Note: The alpha of a custom-made buffer can also be mixed with LW alpha (check Mix with Buffer option), or it can overwrite LW alpha (check Replace Buffer option).
In some cases, you may find that it's more convenient to overwrite an existing buffer. To do so, we use DP_NPF and activate the given buffer in Buffer Root, and connect the output of our custom-made buffer to the buffer we want to overwrite. Then we can add our favorite buffer exporter (Effects > Processing > Add Image Filter) and export the buffer we have chosen. We can export multiple buffers, too. If you are using something like exrTrader or Buffer_Saver for exporting, you can export multiple buffers with a single instance of these plugins. But if you are using something like Render Buffer Export, you'll need multiple instances of this plugin to export several or all buffers (figure 39).
The procedure for saving Extra Buffers stored locally in the Surface Node Editor or globally in DP_NPF is the same (let’s notice DP_NIF now has an Extra Buffer input!). We just need to add the Get Extra Buffer Node into DP_NPF or DP_NIF, double-click on it, and choose the format, root, and name for our buffers (figure 40).
Since LW 10.x, Denis Pontonnier added color space options in DP Get ExtraBuffers and DP Get GlobalBuffers for each image file format. Native LW color space presets can be set up individually (per image buffer) or globally per color or scalar images (figure 40).
Also in a more recent update, the image file format is now saved as string in the scene file, so if computers have different configuration (I/O plugins), the correct format will be loaded in these nodes.
In a production environment, we could use these custom-made passes directly through DP_FNE presets, or by grouping these nodal networks with the very useful Pom's Group Node and exporting this single node as a .Node file in folders preorganized per pass. Notice that x32/x64 nodal networks are not interchangeable, and it is necessary to build nodal trees for each version accordingly. Consider that when we load a preset, it overwrites whatever we have in our nodal working space; though you may find the Group Node idea more suitable for loading multiple passes into your working space, presets can be useful for loading groups of passes already preconfigured. They can also be useful for customizing a single pass at a time more easily.
Some nodes (like ones related with ID numbers and Extra Buffers) in the .Nodes files shared here might need to be re-added or re-assigned so that they can work properly for your particular scene. Some nodes related with distances, areas, and ranges might also need to be re-adjusted so that they can work for a given scene scale. Note: Remember that all the operations performed to build the passes shown in this article are possible within a compositing package.
LightWave has several options for multipass rendering, and though these experimental tools are useful not only for this, I’ve found they can actually be the best option for getting some unconventional passes in the fastest way. Another interesting tool is DP_Shader Node Editor, and although it’s not applied globally, it’s proven that this kind of approach may also work for multipass rendering and other interesting uses.
I think these experimental Node Editors show us that a built-in multipass nodal system is already possible in LightWave 3D. A native solution might have a new GUI especially designed for multipass rendering, and as Denis Pontonnier commented, it could be done in a better way, with a one-button solution and better integration.
These experiments are only some hints at what is possible through the Filter Node Editors for multipass-rendering techniques. I'm sure you'll improve these ideas and find more interesting possibilities for such stunning tools. Thank you very much to these brilliant developers for such useful tools. Special thanks to Denis Pontonnier for his great improvements to DP_FNE, and for his extensive suggestions and patience.
Denis Pontonnier’s website:
Chris Huf’s website:
Tumbler model by Russell Tawn:
Set by Chris Huf, courtesy of HDR Labs:
Lighting rigs made with Smart IBL and LIGHTBITCH, courtesy of Christian Bloch from HDR Labs:
Single Light Lambert, Material Blender, Remap, and Extended Spot Info nodes are part of db&w Tools, courtesy of Michael Wolf from db&w:
Group Node is part of Pom’s Node Pack, courtesy of Pomfried:
Normal Pass and Split Material nodes are part of TrueArt Node Library, courtesy of TrueArt:
SG_AmbOcc node, courtesy of Sebastian Goetsch:
Chanlum node, courtesy of Marlon Regien:
ObjectID node, courtesy of James Willmott from Symmetrix 3D:
Optimized low-poly Jotero model, courtesy of Denis Pontonnier