A beastly good choice
An in-depth look at a full-CGI production with Houdini KineFX, grooming and V-Ray in Solaris.

Who has no hands and yet cares about our health? Our pets, of course. Yes, really: our intestinal health is the biggest concern of our furry flatmates. Following this idea from Serviceplan Health and Life, the TVC for the Burda Foundation’s current bowel cancer prevention campaign was created. The task “in a nutshell”: how can a dog and a cat engage in a loving dialogue, form words without the appropriate biological prerequisites, express human emotions and still remain animals?

After initially examining possible implementation routes and styles and collecting references (from strange finds in the depths of the internet to Disney remakes), the approach of realistic-looking animals, made likeable by character-driven acting, crystallised: a somewhat grumpy old gentleman of a dog lives together with a charming cat, masterfully voiced by Jürgen Prochnow and Katja Burkard, rounded off by Sky Du Mont as the voice-over artist.

Before moving into 3D, the scripted storyboard was assembled as shots in Resolve, with the rough mix previously created by the sound studio serving as a timing reference. From this, an animation layout was built in Solaris to define the camera angles and the staging of the animals; it was gradually replaced by renderings later in the process. But first a few steps back – and on to the fur.

Before fur and light comes the layout, with poses but no animation.
In comparison: the final frame.

From hedgehog to dog or: the grooming process

The grooming tools in Houdini – like everything else – offer different approaches to realising complex hair and fur structures, and so the fluffy dog was created in a different way to the cat. This may be because the rather short-haired dog was much easier to comb than the cat (take a look at a cat without fur – the animal has a completely different head shape), but it was also about maximising the learning effect and comparing different strategies. Right at the beginning the path forks: all grooming SOPs in a single node network, or the “official” route that separates guides, sim and hairgen into object-level nodes?

The hair pipeline is basically the same either way: manually drawn or procedurally generated guides are placed on a base mesh, deformed on an animated mesh, simulated if necessary (whiskers in the wind…) and finally serve as a source for the detailed (procedural) hair generation. This result can be cached wonderfully as USD/Alembic onto a truckload of external hard drives. Alternatively, the last step can be skipped, and the guides then feed the Hair Procedural LOP in Solaris, which generates the hair at render time.

Thanks to a few strokes that define the flow of the hair growth, a basic groom framework is quickly created.
With the Guidegroom node, basically a sculpting tool, this quickly becomes a precise definition of hair lengths and specific growth directions.

So the dog follows the classic route: Guidegroom – Guidedeform/Sim – Hairgen. Select the mesh, click on Guidegroom, and a hedgehog is ready. To get from hedgehog to dog, it is advisable to plan ahead: What hair types does the animal have? What is the flow/growth like? What larger regions can the coat roughly be divided into?

Accordingly, various masks or attributes are painted onto the base mesh (or generated by the usual means). These are relevant first for guide generation and later for all kinds of hairstyling manoeuvres – especially the density attribute. The Guidegroom and Hairgen nodes not only use it to know where hair should grow; the attribute can also be reused in multiple ways, for example as the basis for hair thickness, so that the hair thins out gently at the edges of the density map. The Guideadvect node is a very quick way of defining the basic flow of the hair with the help of curves drawn on the mesh.

The still-static grooming system of the dog. Clearly visible: the hair thickness is not only randomised within a certain range but also multiplied by the hair-growth density, so the hairs taper out towards the edges.
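The width trick from the caption can be sketched in a few lines of VEX – a minimal, hedged example of the idea, assuming a point wrangle on the generated hairs, a float density attribute transferred from the base mesh and invented width values:

    // Random base width per hair, then multiplied by the painted density
    // so strands taper out where the mask fades (values are illustrative).
    float base = fit01(rand(@primnum), 0.002, 0.005);
    f@width = base * f@density;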

In the next step, the powerful but no longer procedural Guidegroom SOP takes care of the fine adjustment. Hair lengths, special swirls and exceptions to the growth direction are sculpted interactively in the viewport. A more intuitive alternative is the commercial Groombear plug-in, which allows much more fluid and complex work – be it faster creation and editing of hair masks or sculpting of hair clumps.

The “animation view”, which makes it possible to paint guides in different poses/states of the base mesh, is also a dream. Once the guides have been moulded, they are fed into the Hairgen object. The main task of this node system is to give the individual zones their final look using various nodes such as Clump and Bend and operations such as Length and Fuzz. Layering the effects is important here – the body fur, for example, is first divided into large but discrete clumps, which are then given several sub-clumps depending on the region. Later in the node tree, lively details such as flyaways are added – hairs that go against the general direction of growth and stick out a little chaotically.

A great design aid here is the Guidemask Node, which generates a random selection of hairs or creates a gradient mask on the guides, for example, allowing clumping effects to be realised only in the direction of the hair tips. The masks can be created directly, painted or adapted and combined on the basis of existing masks. The process roughly follows the pattern of creating masks for specific body regions one after the other and then using these to control the operational nodes such as Clump, Fuzz or Bend.
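What such a gradient mask boils down to can be illustrated with a hand-rolled VEX sketch – not the Guidemask node’s actual implementation, just the same idea, run as a point wrangle over the guides with assumed attribute and ramp names:

    // Root-to-tip gradient: 0 at the root, rising towards the tip.
    int vtx = pointvertex(0, @ptnum);        // curves: one vertex per point
    int prim = vertexprim(0, vtx);
    float u = vertexprimindex(0, vtx) / float(max(primvertexcount(0, prim) - 1, 1));
    f@mask = chramp("tip_falloff", u);       // mask read by Clump, Fuzz and co.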

The guide deform node comes into play to transfer the grooming created on the static mesh to the animation later on. At the right time, it is switched between the Guidegroom and Hairgen objects and sets the dog’s 1.2 million hairs and the cat’s 850,000 fur balls in motion. An accompanying node is the GuideSim, which performs a vellum (soft body) simulation at its core – gravity and wind effects send their regards.

The finished fur splendour, in the case of the dog, divided into three separate grooms for the body, head and whiskers, was imported directly into Solaris via SOP import for test purposes before caching. “Why Solaris”, some people will ask themselves – sometimes including me. Because Solaris can be an incredible toolset – until you run into bugs or strange processes.

The procedural hairgen tree of the cat’s fur with the base clump node selected – perhaps complex at first glance, but controllable in detail.
Masks can be painted either on the base geometry or quickly directly on the nodes to be masked – here to define the basic hair clumping on the face.

The Solaris workflow

In Solaris you work with LOP nodes. Decoding this abbreviation – Lighting Operators – quickly reveals the system’s core benefit: built around USD, its true strength lies in iterative work with light, material and camera. Designed as a separate context or workspace, the first step is to import the assets – either from the OBJ context, conveniently via Scene Import or SOP Import, or, assuming asset planning and a pipeline, via sublayers or asset references as USD files. The latter requires more work initially but rewards you with much better-performing scenes. Conveniently, you can export geometry as USD from almost anywhere in Houdini, or create more complex USD files with materials and, if necessary, variants in a separate Solaris node tree using the Component Builder preset.

Shaderless and evenly lit rendering to test the hair flow.
The IK setup of the front legs in a RigAttributeVop.

In-depth knowledge of USD is not necessary if you “only” want to put your scene in a nice light – it is sufficient to understand the hierarchical structure and the assignment of attributes and shaders to primitives (USD slang for all kinds of object types, not to be confused with faces in regular SOP geometry; they can be defined in the OBJ context via group nodes, among other things).

USD and layers

You can do this – it is really practical to load a central, finished asset of a dog, with materials but sitting statically in its rest pose, into each shot, and only import the respective animation from the OBJ context and layer it on top… But you don’t necessarily have to: if you don’t want to deal with USD, simply import everything from the OBJ context, add lights and shaders and use Solaris only as an advanced render area. For more complex set-ups, however, the question arises – if you have to cache in the OBJ area anyway (hair!), why not do it in USD instead of Alembic?

After loading in the assets, the real fun begins – the continuous branching off of the nodetree in order to experiment non-destructively, incredibly quickly and flexibly with new light set-ups, camera settings or shaders. The approach is always camera-centric, i.e. the focal plane can be precisely set from the active camera view by clicking on the geometry.

The various components of the asset, in this case the geometry and the dog’s hair, are imported into Solaris via SOP Import. Various renderer-specific settings, which can be found on the respective objects in the OBJ context, are adjusted in Solaris via the Render Geometry Settings node – here the width multiplier of the whiskers. The shaders are then created and assigned. Asset creation is thus complete, and the result can be exported to a USD file.
Houdini comes with the Component Builder for easy USD creation. Simply type it into the tab menu in Solaris and the required nodes are generated automatically.

Lights no longer have to be moved back and forth in the traditional way using gizmos, but can be placed interactively using three modes: a click in “Diffuse” mode sets the light so that the desired area is illuminated by the diffuse reflection, “Specular” sets the specular highlight at the desired location, and with “Shadow” the pivot is selected first and then the location where the shadow should fall. In addition, the light size, intensity and distance can be adjusted using shortcuts. The system is quick, intuitive and fun to use. Dragging over an object while holding down the mouse button and studying the changing lighting effect live is an experience you won’t want to miss. At this point the system is still render-delegate-independent, and many of the extended lighting functions (spotlight, focus) are largely supported by most renderers – or, like V-Ray, they bring their own parameters with them.

Another highlight is the Light Mixer node. From intensity and colour temperature to solo mode, all set lights can be edited conveniently and centrally. Here, too, the node-tree system invites you to experiment effortlessly with parallel light mixers – simply switch off one node, restore the original, or duplicate and change it. With the Light Linker LOP, the effect of lights can be limited to individual objects – useful for an extra eye light, for example. So before we set the scene for the dog and cat with Solaris and V-Ray, we quickly bring them to life with KineFX. And then cache the animated hair.

Rigging and animation with KineFX

First of all: although the new APEX system was already available at the start of the project and offers promising concepts, it is still in a noticeable beta phase and was therefore not used. Instead, the rigging and animation system KineFX, introduced in Houdini 18.5, was used – and its procedural workflow even took the terror out of my old enemy, rigging. The special thing about KineFX is that the system handles the geometry, the skeleton and the animation separately. The ingenious part is that it treats the rig as (connected) points that can be manipulated with Houdini’s entire arsenal of tools, from simple soft transforms to noises to all kinds of fun with VEX.

Setup of the tail. The gentle wagging is optionally driven by several sine functions.
The skeleton pipeline: drawn on one side, the joints are simply mirrored.

The workflow “in a nutshell”: a Skeleton node is used to draw half a skeleton over the beagle in its rest pose – i.e. to place the joints – with the help of anatomical drawings. In addition to the bones and joints, the joints for the facial muscles and the eye target are already taken into account here. Orient Joints rotates the joints to the correct angles. Skeleton Mirror mirrors the skeleton and renames the joints accordingly. Bonecapturelines and Tetembed provide a finely subdivided mesh with the attributes necessary to transfer the joint weightings to the original mesh using Bonecapturebiharmonic. This works very well right from the start; creative or technical weighting adjustments can then be made interactively using Capture Layer Paint.

Animation is done with control shapes and linked sliders outside the rig network.

This part of the geometry is now ready for animation. On the other side, the skeleton in the node tree is split in two and fed into the Rig Pose node, in which the actual animation takes place. The three streams – geometry, skeleton and rig-pose animation – are then merged into a Bone Deform node, which analyses the movements and deforms the mesh accordingly. This very simple rig serves as the starting point for advanced functionality such as inverse kinematics (IK), i.e. the backward-orientated movement of the joints: put simply, instead of moving the paw from the shoulder joint by joint (FK), the paw is positioned directly and the remaining joints follow automatically. The tail can be switched back and forth between IK and FK and is kept in gentle motion procedurally with a simple sine function at the various joints (which are really just points with certain attributes!). This effect can in turn be varied in strength or completely deactivated using Skeleton Blend.
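Because the joints really are just points carrying a 3×3 transform attribute, the gentle wag can be sketched along these lines – a hedged illustration, with joint naming, rotation axis and amplitude all assumed rather than taken from the production setup:

    // Procedural tail wag: phase-offset sine rotation per tail joint.
    // Assumed to run in a rig wrangle so the rotation propagates down the chain.
    if (startswith(s@name, "tail")) {
        float wag = sin(@Time * 4.0 + @ptnum * 0.5) * radians(10.0);
        matrix3 r = ident();
        rotate(r, wag, {0, 1, 0});   // swing around the up axis
        3@transform *= r;
    }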

A wonderful and very time-saving feature is the secondary motion nodes, which take a basic movement and apply bounce or overshoot to the joint in question. If the head moves, the ear automatically wobbles with a slight time delay. The secondary motion nodes can be chained, so that the tip of the ear wiggles a little later and more strongly than the rest of the ear.

Chained secondary motion nodes save time and provide credible details in the movement.
Faster blendshapes thanks to VEX.

Blendshapes are added on the geometry side. These can be created in all kinds of ways, the simplest being the Edit node in conjunction with the sculpting tool of the Modeler plug-in. Instead of the classic blendshape nodes or the newer character blendshapes, an attribute wrangle with two mini lines of VEX (see image) is recommended for performance reasons – though it is not mandatory; as is typical for Houdini, many roads lead to Rome. Animation then takes place one network level higher with previously created control shapes and sliders for the blendshapes. After caching the animation and the hair into USD files, these are loaded into Solaris in the last step, set up as described at the beginning and placed in the scene with simple transform nodes.
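The image is not reproduced here, but the common wrangle idiom behind it looks roughly like this – assuming input 1 carries the sculpted target shape with a matching point count, and with the slider name invented:

    // Two-line blendshape: blend towards the target position per point.
    vector target = point(1, "P", @ptnum);
    @P = lerp(@P, target, chf("blend"));   // promoted slider, 0..1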

One node to rule them all: the Light Mixer as the central control station for all lights.

As there is nothing complicated about this setup apart from the hair and the rig itself, the shot setup also follows the credo “keep it simple”: after the geometry, only the lights, a camera and the render settings follow, without more in-depth USD configuration. And since all assets are already available as USD, nothing needs to be cached (Solaris caches non-USD assets with every Render-to-Disk process, which can become very taxing on the nerves at some point). The actual file output then takes place via the USD Render ROP. The most important setting here: Render All Frames With a Single Process!

This eliminates the need to restart the render process after each frame – one of the biggest pitfalls when rendering from Solaris. Set the render delegate to V-Ray and optionally, but highly recommended, activate the MPlay monitor; otherwise there is only a progress bar and no visual check of the current frame.

Having finally arrived in Lo-Lo-LOPS land, you are spoilt for choice of renderer – almost all engines now provide a Hydra delegate that can be used to render in Solaris. The following criteria were defined for the casting for this project (what works well in classic OBJ rendering does not necessarily work just as well in Solaris; the delegates are sometimes drastically modified for it).

Hairshader

How well can the hair shader be adjusted, and how does the hair look? General visual quality: how much user tweaking does the engine need to produce a good image? Feature set and efficiency: does the engine provide everything the production requires, or do you need to build your own tools or resort to hacks? Are relevant nodes missing? How complete is the feature set within Solaris? Can relevant AOVs easily be output from Solaris? Is the engine easy to set up in Solaris? How well are the lighting features (e.g. light filters) supported? Does OCIO work without problems? How quickly can the support team help?

Speed and stability: when it comes to hair, rendering speed is of course a major issue, but not the most important one. Does the engine render the project within the set timeframe of 5–6 days? How easy is it to integrate the existing small render farm? How often does the renderer crash (when will we finally be able to ask whether it crashes at all…)? A fast first time to pixel would also be nice… The question of GPU rendering, a favourite for small teams or, in this case, CGI lone warriors, was clearly subordinate to the above parameters for me. After a few days of intensive testing, the choice fell on V-Ray version 6.1 (a few of the new 6.2 features were already being made available via nightlies prior to release).

V-Ray, Houdini and Solaris

The results briefly summarised, starting with what is probably the most important topic for cats and dogs – hair. And here V-Ray can offer an excellent hair shader that not only provides a beautiful look out of the box, but can also be easily customised in a variety of ways.

V-Ray’s hair shader offers a wide range of settings as well as supplementary nodes that save time.

Basically, the shader is modelled on the behaviour of real hair – the more melanin pigment is set via the slider, the darker the hair. Pheomelanin makes the hair reddish/orange – the tiger says thanks. The dye colour serves as a port for texture maps, which together with the two melanin sliders define the basic coat colours. As a hair does not have a uniform colour, the practical Hair Sampler node can be used to define gradient masks along the individual hair curves, which can be used to combine texture maps and colour values or even to blend the opacity.

The entire hair shader of the cat.

Custom attributes can of course also be used, although the display of attribute values is somewhat hidden in Solaris. Diffuse is best reserved for fabrics, as hair does not normally have a diffuse component. The remaining sliders deal with the shine and reflection behaviour of the hair, with Softness having a major influence on the contrast of the look. Finally, the random settings deserve a mention as extremely practical, as they do exactly that: create credibility through random imperfection – and in our case, convey the advanced age of the protagonists via Gray Hair Density (which can also be controlled by attribute or hair sampler).

Unusually, the attributes can be found and read off in Solaris in the Scene Graph Details panel.

The general visual quality is of course subjective, as all engines (well, most of them) ultimately achieve a decent look. What is relevant for me is how believable an image looks out of the box, what cinematographic and colour-design possibilities the engine offers and how easily and quickly all this can be achieved. Of course, a lot can be added in Nuke, but for reasons of efficiency I try to take the image as far as possible in the rendering and use compositing more for the final touches or for elements requiring maximum control (DoF).

V-Ray comes with a physical camera that extends the standard Houdini camera with parameters familiar from the world of photography and filmmaking. This means that shutter speed, ISO and aperture can be adjusted with real values to match filmed elements, or generally more plausibly than with a generic slider; motion blur in particular is easier to dial in. The Sun & Sky system, which works with global light intensities, also benefits from real exposure. In addition, the camera offers other settings borrowed from the real world, such as lens distortion (freely adjustable via slider or distortion map, e.g. from Nuke), OSL support for all kinds of individual effects and a highly customisable depth of field with optional anamorphic bokeh, which can also be controlled via an aperture texture map. Another nice feature for all fans of photographic image composition: with optical vignetting, the bokeh shape is automatically squeezed towards a cat’s-eye shape as the distance from the centre of the image increases.

Lookdev ultra-close-up of the nose and whiskers. Region rendering in Solaris is supported by V-Ray without problems.
The Render Gallery not only saves snapshots but also remembers the associated node setup, which can be restored with a click.

Fortunately, the physical camera works 1:1 in the Solaris viewport and of course in V-Ray’s own render viewer, the VFB (V-Ray Frame Buffer), which – great cinema – is the only third-party render viewer to date that works in Solaris at all. This brings access to (almost) all the functions the VFB offers: its own snapshot history including a before/after view, image-composition overlays such as the golden ratio (otherwise only available in vanilla Solaris by loading an existing image as a camera foreground image), extensive colour-management options including simple OCIO setup, and complex lens (post) FX.

V-Ray’s physical camera in Solaris.

Although V-Ray comes with all the necessary OCIO configuration to work out of the box, for convenience and consistency I recommend setting the parameters permanently via system environment variables, also to be on par with Houdini/Solaris’ own OCIO settings. Detailed instructions can be found here: is.gd/aces_setup. The only difference: do not download a separate OCIO config, but use the one Houdini ships with in the installation directory under packages.

AOV setup and display in the V-Ray Frame Buffer or the Solaris viewport.
Grag as top model shows the colour and exposure corrections masked directly in the VFB via Cryptomatte – to save simple compositing steps or as a quick client preview.

The lens FX go beyond the usual bloom/glare and allow not only dedicated adjustment of star flares but also all kinds of real-world phenomena such as lens scratches and dust, which can have a major influence on the look of the image. Chromatic aberration is also possible as a post-effect since 6.2. Many of these options can save time in the comp or at least serve as a quick preview for the client. Thankfully, the post FX are not baked into the image but can be output as an AOV – Husk takes the settings of the VFB into account when rendering. AOVs can be set in Solaris with V-Ray’s own VRayStandardRenderVars node and then added to the render product.

Lens FX with Obstacle Image – the image comes from the Chaos documentation.

This is stored as the last step in the Render Settings node – a little awkward, as Solaris sometimes is, but it works independently of the third-party render engine. The denoisers (OptiX, Intel Open Image Denoise, V-Ray’s own) are also created in this way and automatically activate all the required AOVs. The result can either be rendered directly or used for later denoising with the standalone denoiser (temporal blending!). Cryptomattes can likewise be found in the VRayStandardRenderVars and can even be used directly in the VFB for masking colour corrections.

Light demo – an area light with a custom light shader meets a DIY fog box with an environment fog shader…
… and the corresponding shader network.

The best (virtual) camera is useless without the right light (or, as photographers say: the beginner worries about the technology, the professional about the light). V-Ray makes full use of the great lighting tools in Solaris and adds the option of creating your own light filters. To do this, V-Ray’s Softbox node, for example, is combined with whatever your heart desires from a light-filter library and fed into a V-Ray TextureLightFilter LOP, which is then assigned to the respective lights. In this way, gobos, light blockers or soft softboxes can be realised quickly.

In addition to all kinds of compositing and utility nodes such as texture layers, noises, round edges and easy-to-control ramps, the complete toolset also includes a dedicated SSS shader node that delivers beautiful results, especially for the cat’s ears and nose. MaterialX has been supported since 6.2. And even though the Sky/Sun system and V-Ray’s newer procedural clouds were not used in this project, it is pleasing that these tools work in Solaris and can be controlled via distant or dome light.

A rejected idea from the beginning of the project envisaged more tangible spatiality instead of the undefined endless cave. Thanks to the integration of Chaos Cosmos into Houdini, various seating options for the animals were quickly tested. Chaos Cosmos is V-Ray’s own extensive asset and material library; objects including V-Ray shaders can be imported directly from a browser into Houdini with a single click. Only the translation to Solaris does not yet work smoothly, but it can be managed with a little customisation: Solaris imports the .vrmesh as an instance, which breaks the material assignment. A current workaround is to convert to real geometry via the Sopmodify LOP with Unpack USD to Polygons activated.

Chaos Cosmos assets in testing, with customised or exchanged materials.

Over the course of the year, Cosmos will also receive a machine-learning-based prompt-to-material function directly integrated into the DCCs. The decent render times of 20 minutes per frame on average were cushioned well by the easy-to-set-up network rendering via Distributed Rendering.

With DR, all computers on the farm work on the same image at the same time (it is always a pleasure to watch over 100 colourful buckets…) – simply activate it in the render settings and add the IPs of the workers (these require their own V-Ray Render Node licence). Of course, the occasional crash cannot be avoided, although the cause was more often Solaris than V-Ray itself. V-Ray support reacts quickly to bug reports and publishes fixes almost every night in the nightlies (activation by e-mail required), along with occasional new functions or UI adjustments.

All in all, V-Ray not only offers a lot of features and a nice look, but also really good integration into Solaris. The few functions not yet supported include decals, Aerial Perspective and Enmesh (instancing geometry patterns onto a mesh for fine details, a bit like ZBrush’s Micropoly, only at render time). Otherwise the integration is very complete – even exotics such as the V-Ray Clipper (cutting geometry at render time) are supported. My personal highlight: Chaos has managed to get an external render view running in Solaris.

Minimal compositing in Nuke & finishing in DaVinci Resolve

The philosophy of achieving as much of the final look as possible in the engine meant that Nuke was used for adjusting the depth of field, fine-tuning, and painting out distracting elements that would have been more time-consuming to correct in 3D. Thanks to the OCIO setup via system environment variables, colours are handled consistently.

The subtle and beautiful halation effect around high-contrast, bright image elements such as the highlights in the eye – made possible by the Virtual Lens node in Nuke.
Pushed to the hair tip for demonstration purposes: minimal DoF via the Bokeh node and deep-pixel rendering plus a custom kernel map from Greyscale Gorilla, clearly visible in the highlights of the eyes. Thanks to the deep pixel data, individual structures such as the whiskers are cleanly separated.

Although V-Ray renders outstanding camera blur, this task was handled (for maximum control) by the wonderful Bokeh node in Nuke. Fed with deep pixels, this tool creates the best and cleanest post-FX bokeh, which is especially important for fur structures. In addition to real-world values, non-physical multipliers for the blur strength in front of and behind the focal plane can be set freely. For maximum consistency, a 3D camera exported from Houdini – preferably via USD, of course – can serve as the source of all values. Fine details are primarily provided by the Virtual Lens node (Nukepedia), which can be used to realise wonderful optical phenomena. A favourite is the subtle halation effect, a red-orange halo around high-contrast, bright image elements. Chromatic aberration as well as haze and glare can also be simulated. Last but not least, the layout renderings are replaced by the comps from Nuke in the Resolve edit, refined with film grain and finally played out with beautiful sound from the sound studio.

Deep to Points in Nuke. In contrast to a Z-depth pass, Bokeh thus knows how the pixels are actually distributed in depth.

Conclusion

Houdini with V-Ray & Solaris: a beastly good choice. This article was written with the help and presence of a cat. Many thanks to the team!

Houdini 20
The long-awaited update has finally been released and brings with it a whole host of new features – so many, in fact, that a special issue would be worthwhile. For now, here’s an overview; an in-depth article on rendering by Olaf Finkbeiner follows in the next issue.

With so much stuff – where to start? The fine new “little things” that would justify an entire release for some other software manufacturers, or the big chunks? A suggestion: in order to devote ourselves undisturbed to those chunks – aka the new animation system, Feathers and the Solaris/Karma updates – let’s take a look at the “sideshows” first and free up some mental working memory.

Texture Mask Paint SOP aka Houdini 3D Painter

Houdini has long allowed masks and attributes to be painted directly onto the mesh, but until now only on the basis of topology: if you want to paint detailed masks, you need a correspondingly high-resolution mesh. The new Texture Mask Paint SOP, on the other hand, works on a UV basis and is therefore completely independent of the topology.

Great art on a small mesh resolution

The interactive tool not only offers the expected paint function, but also brush features more familiar from specialists such as Photoshop or Substance Painter. Applied brushstrokes can be softened with Smooth or smeared with Smudge; Texture and Stamp allow you to paint with textures, while Dirt and Cloud loosen up the brushstroke with noise. The whole thing also works symmetrically.

The painted mask as the basis for the new Scatter in Texture Mask node

The painting can be output as a UV-based volume mask, which can be processed further with the usual volume tools or simply used as a mask for the new Feather tools, for example. Alternatively, the volume mask can be exported via Heightfield Output as .exr and co. and used as a texture mask for hairgen/grooming or shading. The matching node for scattering points into the freshly painted mask is called Scatter in Texture Mask and understands the opacity or luminance of the paint as a density multiplier. If you want to go deeper with the brush, take a look at the free HPaint HDA (is.gd/hpaint) by Aaron Smith, a Houdiniesque version of the Grease Pencil.

Only a few seconds for 100,000 polys with automatic guides

Quad Remesh

Long wished for and finally here – a native quad remesher. Although it is still in beta, it already delivers very good and, above all, very fast results. As soon as the node has produced the first remesh after a few seconds, the number of polygons can be changed almost in real time. Guides can currently only be created indirectly, but this will surely change in the final version. Until then, you can draw a curve on the mesh, calculate the tangents using Polyframe or Orient Along Curves and then transfer them back to the mesh as a guide attribute, as sketched below.
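A hedged sketch of that workaround as a point wrangle on the mesh, with the drawn curve (after a Polyframe writing tangentu) as input 1; the radius parameter and the guide attribute name are assumptions for illustration:

    // Pull the nearest curve tangent onto mesh points within a radius.
    int near = nearpoint(1, @P);
    vector pos = point(1, "P", near);
    vector tangent = point(1, "tangentu", near);
    if (distance(@P, pos) < chf("radius"))
        v@guide = normalize(tangent);   // assumed guide attribute name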

Scan with a very large number of polys (approx. 2,349,000)

Clouds

If the current weather isn’t cloudy enough for you, you can use the completely new cloud tools to create hyper-detailed hero clouds or even an entire skybox full of clouds. A Cloud Shape Noise serves as the basis for the former; it can either be fed with any geometry or create a cloud-like structure itself. This basic shape is ripped apart, processed and transformed into a realistic cloud using the new cloud noises (Cloud Billowy and Cloud Wispy Noise), which were derived from detailed analysis of real cloud structures. As these are classic volumes, any Houdini tool can be used to customise the shape further.

4 steps to the first cloud
3D Skybox
Skyfield Pattern presets, from top left to bottom right: Undulatus, Fibratus, Fractus, Floccus – just what the real cloud types are called.
Karma Render Multiscatter Skybox
Billowy Cloud, Karma Multiscatter

The clouds generated this way can be rendered physically accurately using the new cloud shader with multiscattering. If things need to go a little faster and don’t have to be completely precise, the Cloud Ambient Occlusion shader delivers very usable results for background clouds with much shorter render times.

The skybox is similarly direct, with a choice between 2D skyfields and 3D clouds. The former are configured with the new Skyfield plus Skyfieldpattern node, which comes with various cloud type presets, and then designed using the Skybox node. The Skybox alone is sufficient for 3D clouds. The cloud nodes mentioned above offer the option of applying further deformations.

Minimalist setup based on the cloud tools as a generator of the point cloud.

Bubble Solver

A by-product of the new cloud tools is the Bubble Solver, which takes care of the correct interaction of bubbles: overlaps are avoided, and the individual bubbles are pressed against each other and deformed accordingly. As input, the node requires nothing more than a point cloud (particles, a cloud generator or simply a grid) with the pscale attribute set – see the sketch after the images.

A grid as a basis shows how the deformation of the spheres takes place.
…and the matching rendering with the new thin wall and dispersion features.
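As a hedged illustration of how little setup the solver needs, a point wrangle like the following on any point cloud would do; the radius range is invented:

    // Give every input point a random bubble radius via pscale,
    // the only attribute the Bubble Solver expects.
    f@pscale = fit01(rand(@ptnum), 0.05, 0.2);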

Ripple Solver

Another very simple tool (almost too simple for Houdini) for simulating wave movements on geometry – for example the effect of raindrops on water, but also the impact of shock waves on a character. The waves can be output as a float attribute for further use in shading or repurposed in other ways, for example as a density attribute for scattering.

Simple setup with particles as collider/source of wave movement

Solaris/Karma

The most important innovation here is that Karma XPU has reached gold status. The beta label has been dropped and a whole host of new abilities added: the complete light filter shader set, round edges, thin wall (see the Bubble Solver section), production boosters such as Cryptomatte and deep images, absorption and nested dielectrics, faster first time to pixel and generally increased speed, especially for fur and volumes. To be able to marvel at the latter, the render gallery now comes with an extensive statistics panel.

Karma XPU renders complex plumage particularly quickly thanks to the Feather Procedural.
Comprehensive statistics are saved for each snapshot in the Rendergallery.

To go with the new cloud set, there is finally a physical sky (Hosek et al.), which under the bonnet combines the new Sky Dome Light LOP with a distant light and thus offers the usual Houdini flexibility – if desired. A lot has also happened in the area of shaders and material management: the new material library integrated into the material linker makes it easier to build a material collection and assign it via drag’n’drop. Some presets and access to the AMD MaterialX library (online access) are included out of the box, although this could be a little more user-friendly. If you are looking for an intuitive alternative, take a look at ODTools with its extensive asset library: is.gd/shelf_tools.

Physical Sky meets Cloud Shader.

The Karma Material Builder has been tidied up and now includes the Geometry Properties node to save time. A new fur shader offers settings for manipulating the medulla – the inner core of the hair, so to speak – which in turn promises realistic light scattering.

The new material library with direct access to the AMD Open Material Library.

As MaterialX is subject to constant growth and change, and some essential (uber) nodes are missing or have to be assembled yourself, SideFX has thankfully supplied a whole host of new custom nodes, e.g. a ramp node. Fog Box (a uniform volume for fog and other atmospheric effects), Background Plate, Ocean Procedural and tone mapping (Reinhard and co.) also make working with XPU faster.

The new cloning feature offers the option of rendering and comparing the scene in parallel at different times or with different settings – for example, to adjust the effect of a light at the beginning and end of a shot simultaneously. The Solaris toolset has also been optimised in many areas and offers, for example, the latest ACES support (this applies to all areas of Houdini, not just Solaris), new lighting views and the RBD Destruction LOP, a procedural that transforms RBD pieces at render time. MPlay can now export video formats natively without an FFmpeg installation – and not just from Solaris.

With its seamless integration, Karma can therefore serve very well as a production renderer, saving not only third-party engines but also a lot of setup time. From next year, Karma can also be used as a standalone Hydra delegate. Further in-depth information that would go beyond the scope of this article, as well as wonderful examples, can be found in Olaf’s article – already in progress!

The new Fur Shader with Medulla support

Feather

In addition to the clouds, the new Feather system is another highlight of Houdini 20. To provide an efficient solution to the complex production challenge of feathers, Houdini offers over 20 new feather nodes, many of which are GPU-accelerated and interactive, and all of which integrate perfectly into the existing grooming tools. Individual feathers can be created with just a few nodes from curves – a shaft and an outline – and brought into the desired (dis)order with the existing grooming nodes such as Frizz and Clump (of which there is also a special feather version).

In addition to the classic Guidemask node, the new Texture Mask Paint node can also be used to mask certain areas. To create as many different feathers as efficiently as possible, shapes can be interpolated to form new shapes. Once the individual feather (sets) have been defined, they can be quickly drawn or scattered using the familiar grooming tools such as Guidegroom.

New options ensure clean interpolation between areas of different feather types. Otherwise, the feather pipeline follows the work steps of the hair tools exactly, with a few special new nodes such as the GPU-supported Feather Deintersect. This node interactively prevents unwanted intersections and also ensures a fluffier look, for example. The finished feathers can then easily be converted into polygons and resampled for simulation and animation. The feather fun is best rendered with the new Hydra-delegate-independent Feather Procedural – i.e., memory-savingly generated at high resolution only at render time.

Order thanks to interactive Feather deintersect

Animation

The big star of H20 is undoubtedly the new APEX animation ecosystem, a direct challenge to the workflows of long-established empires. The short version: beta status with many good ideas, most of which run well in initial tests and now have to prove themselves in practice. Before we delve deeper into APEX, let’s take a brief tour of the general animation improvements. In the animation editor, curve segments can now be adjusted directly without having to use handles – a bit like the newer path tools in Illustrator, and incredibly practical. Keyframes can now be moved via MMB without the mouse pointer having to hover directly over them.

The timeline has gained a useful companion in the form of the Animation Toolbar. Here, breakdowns, tweens, easing, time offsets, noise or blending to certain frames or neighbouring keys can be applied with sliders to speed up the animation process. Finally, bookmarks can also be created and controlled with new shortcuts.

Finally bookmarks including a clear editor

What is APEX and what has happened to KineFX?

In short – and since explaining it fully would have taken up most of the time for writing this article, I’ll quote SideFX at this point: “APEX, a brand new context designed to deliver an animator-friendly environment that is built using a robust procedural rigging toolset.” For animators, this really does mean a completely new, viewport-centric and interactive experience. The entire animation now happens in the new APEX Scene Animate node, which has no parameters other than Reset – only interactive viewport tools. And what tools!

Apex Graph principles

The selection sets make it easier to select and save relevant controls, but it gets really exciting with new ideas such as locators and dynamic motion. The former allow us to set and save pivots on the fly, so that we can move our mesh or character over these pivots, away from the controls defined in the rig. The movement is not limited to just one mesh: everything selected via the selection sets can be moved along without any constraints.

Animation curves can now be adjusted directly by dragging on the path segments.
With the new animation toolbar, keys can be adjusted quickly and intuitively without the animation editor.

And if you do need constraints, the new constraint tool allows you to create them directly in the viewport with just a few clicks (driven, driver, done) – and remove them again or key the constraint state (on/off). In addition, reference videos can finally be displayed in the viewport. Everything is controlled via the Animate node’s tool menu. The ingenious dynamic motion toolset can also be started here. Considering Houdini’s original strengths, it is only logical to integrate physical simulations directly into the animation tools.

The new radial menu of the Scene Animate Node

Thankfully for all pure animators, this works without DOP networks and co. The workflow is as follows: the animator sets rough keyframes and contact points for his motion paths, such as the trajectory of a ball or a jumping character, and then lets Houdini adjust the animation in a physically correct way, fine-tunes it and bakes it back as keys. Everything simulated can be adjusted at any time. The animator also has direct access to the ragdoll system, allowing him to fling his character around without keyframes.

Various characters and meshes can be moved together without constraints using locators – pivots that the animator can set as required.

KineFX and rigging

APEX does not replace KineFX, but is part of it. Skeletons, skinning and the like are created as before and then transferred to an APEX rig (if you want, you can also animate using a rig pose as before and combine the whole thing with APEX). The idea behind APEX is basically a further development of the KineFX principle of treating joints and co. as geometry. With complex rigs, however, this leads to performance losses, which is why the APEX graph system, which treats the rig as pure data, allows considerably better performance. For more in-depth information on procedural rigging, be sure to watch the Houdini Hive videos, especially the one by Esther Trilsch: is.gd/h20_rigging.

Ragdoll at its best: above, the last state with keyframes; below, the simulated result.

Conclusion

Lots of new features and tools and still no end in sight – except for this article. If you are interested in the updates in the areas of RBD (sticky collisions!), Vellum, FLIP, muscles, Pyro, ethically correct machine learning, PDG and virtually every other aspect of the program (including “hidden”, i.e. undocumented, changes such as the new “Create Digital Asset” dialogue), we recommend the extensive, albeit often very technical, docs.

Complex setup with creature vs. shockwave (available as an example in the Content Library)

In the Content Library (www.sidefx.com/contentlibrary/) you can download sample scenes for many of the new features. And they are quite something – they show setups that go beyond the new tool in isolation, and because the caches are not included, the node networks take a moment or two to cook when opened. Once you have rummaged through the nodes, the examples are a real source of knowledge. As always, H20 is also available as a free non-commercial version.
Take a look and happy rendering!
