USD, Solaris and Karma in Houdini 20 (and elsewhere…)
Solaris saw the light of day four years ago in Houdini 18. When I tried it for the first time back then, it was still very rough and initially put me off: one crash followed another. If that happens today, it is probably down to the graphics card driver, which should be kept up to date.

So that’s over and Solaris has grown up together with Karma XPU, the new hybrid renderer. Time to take a look at it!

Although I had experimented with Karma and Solaris (LOPs) time and again, I didn’t want to make the switch until it had fewer teething problems – and with Houdini 20, that was the case. I also wanted to build my scenes in Houdini and then open them in Omniverse and Marmoset to see whether that works. But more on that later…

Karma

There are a few new features in the Karma renderer. The most important one is that XPU, the GPU-accelerated engine, is now out of beta. Many other additions – dispersion, absorption, nested transparent materials (dielectrics), material blending, geometry lights (not for volumes), rounded corners, hextiling and so on – don’t need to be explained in detail, as they are not really innovative, but they are very useful and round Karma out. An important function in Karma CPU is the ability to set samples per geometry and thus shorten render times easily and sometimes massively. Unfortunately, this is not yet available in XPU, so you have to make do “old skool style” with compositing and tricks.

AOVs, i.e. “Arbitrary Output Variables” (also often called render passes), no longer need “accessory” nodes …
…and can be created very easily. They are then also displayed immediately in MPlay.
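In USD terms, each of these AOVs is just a render var prim from the UsdRender schema. Here is a minimal sketch using the open-source pxr Python bindings – the prim paths, the light path expression and the output filename are illustrative assumptions, not Karma defaults:

```python
# Minimal sketch: how an AOV ("render var") is represented in USD,
# authored with the pxr Python bindings. Paths, the LPE and the
# filename are illustrative assumptions.
from pxr import Usd, UsdRender

stage = Usd.Stage.CreateNew("aov_example.usda")

# A render var describes one AOV: what to sample and its data type.
var = UsdRender.Var.Define(stage, "/Render/Vars/diffuse")
var.CreateSourceNameAttr("C<RD>.*")            # a light path expression
var.CreateSourceTypeAttr(UsdRender.Tokens.lpe)
var.CreateDataTypeAttr("color3f")

# A render product bundles render vars into one output image.
product = UsdRender.Product.Define(stage, "/Render/Products/beauty")
product.CreateProductNameAttr("render/beauty.exr")
product.CreateOrderedVarsRel().AddTarget(var.GetPath())

stage.GetRootLayer().Save()
```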
Solaris (LOPs)

Anyone who has ever worked with a client, art director or director on the lighting of a scene knows what this means for the 3D artist: stress and a lot of organisation. The change requests have to be documented somehow, e.g. with screenshots of the parameters, saved scenes, or copies of lights and their positions, and a great deal of discipline is required to avoid losing track. Solaris now has a remedy for this. The significantly improved Render Gallery is worth its weight in gold: the state of the scene, e.g. the settings in the Light Mixer, is saved with every snapshot, and the images in the gallery are saved together with the Houdini scene. Snapshots can of course be named and tagged with keywords and then filtered by those tags. If a photo agency or the customer works with image numbers, these can be assigned to the snapshots as names and/or tags. A dream come true. This is a real workflow enhancement of a kind normally only found in the pipeline of a larger studio.

Solaris – UI

Solaris exists to provide all the functionality needed when a scene is assembled from geometry, materials and lighting – and to keep this separate from geometry creation. It is therefore primarily a user interface to USD. It is only logical, then, that a lot of improvements have been made to the user interface and that a dedicated Solaris LookDev desktop has been added, which displays the Render Gallery, for example.

Viewport

In addition, there are new lighting functions in the viewport:
“Disable Lighting” – switches off all lighting, e.g. to inspect self-luminous materials.
“Headlight Only” – switches off all lights and only displays a direct light.
“Dome Light Only” – switches off all lights in the viewport except the dome light.
“Normal Lighting” – displays the current lighting situation.
The interaction with the Light Mixer and the very convenient options for positioning lights make illuminating a scene in Solaris very comfortable.
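These modes can also be flipped from a Python shell, which is handy for tool shelves. A rough sketch against the hou module – it assumes a Scene Viewer pane is open, and the enum for the new dome-light-only mode is left out, as its exact name may vary per build:

```python
# Rough sketch: switching viewport lighting modes via Python.
# Assumes a Scene Viewer pane is open in the current desktop; the
# new dome-light-only mode is omitted, as its enum name may vary.
import hou

viewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
settings = viewer.curViewport().settings()

settings.setLighting(hou.viewportLighting.Off)        # "Disable Lighting"
settings.setLighting(hou.viewportLighting.Headlight)  # "Headlight Only"
settings.setLighting(hou.viewportLighting.Normal)     # "Normal Lighting"
```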

Render statistics

If you render, you want to know why it takes so long and where there is potential for optimisation. The Render Stats, which provide information on memory consumption, render time and much more, help with this. Pretty cool! Unfortunately, some of the information can only be viewed with an HTML viewer, which is cumbersome and annoying. There is also a heat map that shows where in the image most of the render time went. The metadata is saved in the EXR as JSON and can therefore also be used elsewhere. The UI around the Render Stats still needs improvement, but as this is their first incarnation, that can be forgiven, and we look forward to the next versions.
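Because the stats travel in the EXR header as JSON, they can be read back with any EXR-aware library – no HTML viewer required. A small sketch with OpenImageIO’s Python bindings; the attribute key and the JSON field name are assumptions, so list the header of your own renders first:

```python
# Sketch: digging Karma's render-stats JSON out of an EXR header with
# OpenImageIO. The attribute key and JSON field names are assumptions;
# print spec.extra_attribs to see what your renders actually carry.
import json
import OpenImageIO as oiio

img = oiio.ImageInput.open("render/beauty.exr")
spec = img.spec()
img.close()

for attrib in spec.extra_attribs:
    if "stats" in attrib.name.lower():   # hypothetical key match
        stats = json.loads(attrib.value)
        print(stats.get("renderTime"))   # assumed field name
```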

I have also placed a palm tree and an aloe plant in Omniverse. There was something about everyone now being allowed to have several plants.
From SOPs to LOPs

Working in Solaris has generally become much easier and more accessible for freelancers over the years. However, there are also areas where a lot has to be learnt anew in order to build typical Houdini scenes with many objects or many instances in Solaris. There are two new nodes in H20. First, the new Merge Point Instancer LOP: in a point instancer prim, each point of a geometry is replaced at view or render time by an instance of the geometry of one of the prims that has a “prototype” relationship to the instancer. This LOP lets you merge multiple point instancers efficiently, so that only the rest positions of the mesh and the points representing the animated transformations of the parts need to be saved – which means less space on disk. Second, a new geometry clip sequence node simplifies and speeds up the saving of value clips. With value clips, you can split large amounts of data across multiple files; this node should be your first choice when working with them.
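To make the terminology concrete, here is a minimal point instancer authored with the open-source pxr Python bindings, together with the ClipsAPI calls that value clips build on. Paths and file names are purely illustrative:

```python
# Minimal sketch of a USD point instancer plus value clips, authored
# with the pxr bindings. All paths and file names are illustrative.
from pxr import Usd, UsdGeom, Gf, Sdf

stage = Usd.Stage.CreateNew("instancer_example.usda")
instancer = UsdGeom.PointInstancer.Define(stage, "/World/Instancer")

# One prototype; any prim works, here a unit cube.
proto = UsdGeom.Cube.Define(stage, "/World/Instancer/Prototypes/Cube")
instancer.CreatePrototypesRel().AddTarget(proto.GetPath())

# Each point picks a prototype by index. Only these lightweight arrays
# animate, which is why the on-disk footprint stays small.
instancer.CreateProtoIndicesAttr([0, 0, 0])
instancer.CreatePositionsAttr([Gf.Vec3f(0, 0, 0),
                               Gf.Vec3f(2, 0, 0),
                               Gf.Vec3f(4, 0, 0)])

# Value clips: point the prim at per-frame files instead of one big one.
clips = Usd.ClipsAPI(stage.GetPrimAtPath("/World/Instancer"))
clips.SetClipAssetPaths([Sdf.AssetPath("anim.0001.usd"),
                         Sdf.AssetPath("anim.0002.usd")])
clips.SetClipPrimPath("/World/Instancer")
clips.SetClipActive([(1.0, 0), (2.0, 1)])  # (stage time, clip index)

stage.GetRootLayer().Save()
```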

The USD Scene Graph and Component Builder

For the scene graph to look this tidy, the geometry must carry attributes that are then used to sort the geo in the graph. The “@name” attribute is the basic version and the minimum, so to speak. It is better to work with “@path” attributes, which, where present, also override “@name” – so no drag and drop or the like, as in other 3D programs. Better still, and recommended, is the new Component Builder, which takes care of all the USD attributes as well as the linking with materials. If you do this properly for all assets, you can export the scene as a USD scene and open it in Nvidia Omniverse, for example.
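For the manual route, this is roughly what authoring “@path” looks like inside a Python SOP – the “/chairs/chair_N” hierarchy and the “piece” attribute are illustrative assumptions:

```python
# Sketch for a Python SOP: author a "path" primitive attribute so the
# USD import sorts primitives into a clean scene graph hierarchy.
# The "/chairs/chair_N" layout and "piece" attribute are assumptions.
import hou

node = hou.pwd()
geo = node.geometry()

path_attrib = geo.addAttrib(hou.attribType.Prim, "path", "")
has_piece = geo.findPrimAttrib("piece") is not None

for prim in geo.prims():
    piece = prim.attribValue("piece") if has_piece else 0
    prim.setAttribValue(path_attrib, "/chairs/chair_%d" % piece)
```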

Rendering

The Clone Control Panel looks almost like the Render Gallery described above. However, it is “only” the first step towards a multi-shot and multi-asset management system. Nevertheless, it is already impressive and shows where the journey is heading. Anyone who has ever had to create a lighting set-up that looks good from several camera positions at once, or throughout a tracking shot, will really appreciate this feature, as it allows several render jobs to run at the same time. The number of cores to be used by each must be specified beforehand; the machine on the network and the XPU device – i.e. the graphics card – can also be selected. And it is not limited to different cameras or frame numbers: different render settings, object visibility, lights or whatever else is required can be compared and assessed interactively. This is still a little cumbersome, as the LOP network has to be built accordingly – BUT so cool and really innovative.

Talking with SideFX

Besides playing with the new tools, DP hopped on a video call with the Karma (and Mantra) developers at SideFX to get an idea of where it is going…

Mark Elendt is a senior mathematician at SideFX. He has been with SideFX for over 32 years and has worked on many different parts of Houdini. He actually started on Prisms, the product that preceded Houdini. Mark’s passion is rendering – he wrote the Mantra render engine for Prisms and rewrote Mantra for Houdini. Now he is working on Karma, the new flagship renderer from SideFX.

Brian Sharpe is a senior rendering developer at SideFX. He has been in the graphics industry for 25 years and spent a long time in computer games. He has been at SideFX for six years now, working on the Karma XPU renderer.

DP: Karma XPU is marketed as a hybrid renderer. What does that actually mean? What is hybrid about Karma?

Mark Elendt: There are a lot of different purposes for rendering. You’ve got film production, which has to deal with hundreds of millions of polygons and very complicated shading networks. But you have to cover everything down to motion graphics, which needs a really, really fast turnaround on renderings, and you’ve got various other things – interior design, architectural design, even scientific visualization. Every renderer has to deal with different types of scenes. We came from Mantra, which was a real workhorse of a renderer – not really fast, but very flexible. So we worked on a new renderer called Karma. Karma actually has two different engines in it: a CPU engine, which deals with very large scenes, and the XPU engine, which can also deal with large scenes but is geared for faster turnaround and more flexible use of hardware. The more you want to harness the GPU for speed and power, the more restrictive your renderer has to become. Mantra was very flexible. People could write shaders that could reach over to other pieces of geometry and do heaps of wacky stuff – reach into all the nooks and crannies of Houdini and do really powerful things.

Brian Sharpe: But then, as you move to a more efficient renderer such as Karma CPU, and then Karma XPU, which harnesses the GPU so much more, the rendering architecture becomes a lot more rigid, and people find they don’t have as much flexibility, but so much more speed. There’s a trade-off. But we knew we needed to harness the GPU cores and get this really fast performance from them. Karma XPU can do that, but it’s a little bit more rigid and a little bit less flexible. Then we have Karma CPU, which can still reach into all the nooks and crannies of Houdini to do powerful stuff and access the VEX language, but it’s CPU only. XPU is a hybrid renderer that views any sort of hardware on the machine as potentially executable. It looks at any kind of GPU and says: right, I can use that for rendering. And it looks at the CPU and says: can I use that for rendering as well? Then it uses both – all the power in the machine – to do the rendering. It means you’re really maximizing performance, and it comes with other benefits as well. For example, if your scene is too big to fit into GPU memory, you can still keep rendering using only your CPU. That’s a very powerful thing.

DP: For me, a hybrid renderer would be a renderer that uses GPU and CPU at the same time, rather than using the CPU as a fallback – maybe sharing the memory and doing some clever stuff there – and it would not produce different results even if the GPU runs out of VRAM. Is this the case with Karma?

Mark Elendt: Karma’s code is written for the CPU and for the GPU. We find that when you work collaboratively, when you’ve got both devices working on the same image, the CPU may only do 20 percent of the work and the GPU does 80 percent. In some cases the CPU only does 10 percent while the GPU does 90 percent, but in some cases the CPU does more. Right? It’s a balance.

DP: And it balances itself and it takes care of that?

Mark Elendt: Yeah. You know, one device will do more work than the other.

DP: Depending on the shaders and what it does?

Brian Sharpe: Certain features, like subsurface scattering, don’t run as efficiently on the GPUs…

Mark Elendt: Say the CPU is only doing 20 percent of the image, your image takes a minute to render, and all of a sudden the GPU can’t work anymore. Well, now the CPU seems really slow, because the GPU was doing 80 percent of the work and the CPU only 20 – the render is going to take five times as long. But we don’t actually decide what’s better. We just make every device work at full speed. Both devices can do subsurface scattering, but the GPU is not as efficient at it, so the CPU will take up more of the rendering time – it’ll contribute maybe 30 or 40 percent of the render instead of only 20. Both devices are working flat out to generate the image, but some devices are better at some things.

DP: What about distributed rendering, as in many machines working on one image. Is that something that Karma CPU and XPU can do?

Mark Elendt: XPU is built to have multiple devices. If you’ve got four graphics cards in your box and one CPU, you can actually have five XPU devices working collaboratively on the image. It’s not just one GPU and the CPU; it’s as many GPUs as you have, plus the CPU. And we can extend that in the future: we can then have other devices that are not necessarily just GPU or CPU devices. Currently it’s not able to do distributed rendering, but the engine is prepared for it. XPU uses USD underneath for the scene description, so XPU will take the USD buffers and send the required data to each device. If you’ve got a teapot geometry, it will send that geometry to the GPU and to the CPU device. It might send it over the network to a device, and then all the devices have copies of the data and work together collaboratively to build the final image.

DP: Currently it is a CUDA implementation?

Mark Elendt: CUDA has a lot of the production features that we’re looking for: high-quality programmable shading with C shaders. They’ve got a very good development toolkit. But that doesn’t hinder us from developing other devices. Whether we develop a Vulkan device or a Metal device, we have the architecture to build these new devices using different technologies, and it will all work together on different platforms.

DP: But none of those exist yet?

Brian Sharpe: Currently it’s CUDA and OptiX, whereby OptiX is the library toolkit and CUDA is the language you plug into.

DP: Why not OpenCL?

Mark Elendt: We’re leveraging a lot of the technology that comes with OptiX and the Nvidia drivers. That gives us some stepping stones which make it a lot easier to implement.

Brian Sharpe: We can gain access to the RTX hardware – the ray-tracing hardware on Nvidia GPUs – via CUDA and OptiX. But they haven’t exposed that to OpenCL yet. So if we wrote an OpenCL device, all the ray tracing would be done in software and it wouldn’t be as performant. For now, on Nvidia cards it looks like we’ll be sticking with OptiX and CUDA, but for other GPUs we will see.

Mark Elendt: People are already using Karma XPU for production, but we’d like to get to the stage where everyone feels comfortable and safe using it for real production. Once we get there, we can start expanding into other devices – whether that’s distributed rendering, or Metal, or Vulkan, or OpenCL. There are a lot of ways we can go in the future.

DP: What about USD?

Mark Elendt: There is a lot going on in USD development. At SideFX, we started on a look-dev project called Solaris. When we started work on Solaris, we evaluated the possibility of porting Mantra to import USD, and we realized that Mantra was getting a little long in the tooth – the architecture was not as flexible as we wanted. At that point we decided it was time for a new renderer, let’s call it Karma, that would work natively with USD. Karma has a lot of the rendering heritage that Mantra has, but it’s also streamlined and architected for more modern technologies.

Brian Sharpe: The architecture of XPU lives by the same design principles as Karma CPU; it just chooses to execute things on the GPU. But there are things that Karma XPU can’t do, such as running VEX, so we decided to leave that on the shelf for now and concentrate on getting the MaterialX version going.

Mark Elendt: A lot of the technologies that Karma is building on are open source and part of the Academy Software Foundation, the ASWF. USD is not part of the ASWF yet, though they do have a working group. But again, it’s a relatively new project – it’s been used at ILM for four years and has been getting wider exposure recently. Autodesk is doing a lot of work on MaterialX, AMD has a big MaterialX gallery that you can download directly in Houdini 20, and USD has built-in MaterialX support. All of these software libraries work well together, and leveraging that means users will be more familiar with the concepts. What we’re finding is that MaterialX has a lot of the features we need and is very general and flexible, but there are certain things it doesn’t support yet. We’re working with the MaterialX team: we’re pushing nodes up to them and working with them on building out MaterialX, making it more flexible and more accessible to everybody.

Brian Sharpe: I’ll add one thing to that. One really good benefit of having Karma XPU and our CPU engine under the one umbrella, which is Karma, is this: if someone is working in Karma XPU, doing all their work with MaterialX, and they find there’s a certain feature that just doesn’t exist – such as casting an arbitrary ray from a shader – there is always the fallback to our Karma CPU renderer, where they can do all that stuff in VEX. They could be in the middle of production, and at that moment they’re not going to look around for another renderer; they keep going, because there’s a big safety net in Karma CPU.

Mark Elendt: And that’s always been a philosophy at SideFX: don’t let users hit brick walls. You want to be flexible enough that you can do anything with Karma. We’ve pulled the reins in a bit, but we still have those back doors so that, if you’re really savvy and really stuck, you can get out of the problem and find your solution.

DP: Any plans for a sort of real time incarnation of Karma?

Mark Elendt: No. Karma is always going to be a path-tracing renderer. There are path tracers out there that are real time, but they are specifically written for real time, so they send very few rays and rely on denoising. Karma is really intended as a more general-purpose offline renderer. You get really fast feedback, and sometimes images can take just seconds to render, but that’s not real time – it’s interactive. I think you might look for something coming in the Houdini viewport: real-time rendering in the viewport using OpenGL or Vulkan with ray-tracing support. The two types of rendering are getting closer together. We might end up with ray-traced reflections in the viewport, or some ray-traced soft shadows or something like that. But you may not get the subsurface scattering and the high-fidelity lighting that you would get with Karma.

DP: What about denoising?

Mark Elendt: I don’t think we’re going to spend a lot of research time on denoising at SideFX, but there are a lot of public and even proprietary denoisers which we can leverage inside the architecture. We have integrated the OptiX denoiser as well as Intel’s Open Image Denoise into Karma, and we are also working with Nvidia: we feed data back to them and discuss denoising issues, but we let them do the lion’s share of the heavy lifting.

Brian Sharpe: What we already have in Karma CPU is the automatic convergence mode, where Karma is smart about where it sends the rays: where there is an area of noise, Karma hammers it with more samples.

DP: Is there anything that you think you want to tell the readers about the future of XPU?

Brian Sharpe: One thing needs a bit more explanation: the GPU running out of memory. In extreme cases it can. But we’re currently working on out-of-core rendering. That doesn’t mean it’s going to magically work on a one-gigabyte scene or something, but it’s going to get a lot better with memory going forward!

Color Grab – Everything is so colourful here
The Color Grab app from Loomatix clearly stands out from all the other colour palette apps I have tested. For example, it has an accessibility feature and can read out colours for people who are colour-blind. What's more, Color Grab is free of charge and has no adverts.

With Color Grab, colours can be collected without having to resort to photos: you point your mobile phone camera at the coloured area, and with two clicks it is added to the palette. Of course, colours can also be collected from photos, or a photo can be analysed automatically. It is a pity that the analysis always covers the entire photo rather than a previously selected area.

What’s the color of money – collecting colours in real time. Collecting colours from photos – here an agate stone.

If you want that, you must first crop the photo in another app. The analysed colour map then shows, in percentages, which colours are present in the photo. You can then select which of the colours should be transferred to the palette.
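Such a percentage breakdown is easy to approximate yourself. A small sketch using Pillow’s palette quantisation – the file name and the choice of eight colour buckets are arbitrary:

```python
# Sketch: a percentage colour breakdown of a photo, in the spirit of
# Color Grab's analysis, using Pillow. Eight buckets is arbitrary.
from PIL import Image

img = Image.open("agate.jpg").convert("RGB")
quantized = img.quantize(colors=8)       # reduce to an 8-entry palette
palette = quantized.getpalette()         # flat [r, g, b, r, g, b, ...]
total = quantized.width * quantized.height

for count, index in sorted(quantized.getcolors(), reverse=True):
    r, g, b = palette[index * 3: index * 3 + 3]
    print("#%02x%02x%02x  %5.1f%%" % (r, g, b, 100.0 * count / total))
```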

Automatically generated palette of the agate stone photo

A mobile phone certainly cannot replace an X-Rite colour scanner, but Color Grab at least offers a white balance function, colours can be mixed, and palettes can be generated using a number of colour harmony rules. In addition, a number of colour models such as RGB, HEX, HSV, LAB and many more are supported. Unfortunately, only RAL is available as a colour reference, since Pantone (X-Rite) is certainly not freely available the way this app is.
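Under the hood, that colour-model support is a set of standard conversions. A minimal sketch with nothing but Python’s standard library for RGB, HEX and HSV (LAB needs a dedicated colour-science package and is left out):

```python
# Sketch: the conversions behind RGB/HEX/HSV readouts, standard
# library only. LAB is omitted; it needs a colour-science package.
import colorsys

r, g, b = 178, 34, 34                    # a grabbed colour, 0-255 per channel
hex_code = "#%02x%02x%02x" % (r, g, b)
h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

print(hex_code)                                        # "#b22222"
print(round(h * 360), round(s * 100), round(v * 100))  # HSV as deg/percent
```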

Colour image analysis – here a blue sky. Mixing colours. Generating harmonious palettes – here with the Gaudi rule. The colour palette can be sorted, and the colours can also be edited and renamed.

The highlight of Color Grab, however, is the export function. The following are supported: Photoshop and Illustrator, CorelDraw and Paint, Gimp, Inkscape, AutoCAD, Krita and CinePaint. In addition, CSV, plain text and a PNG preview image are supported.
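Several of those targets are plain-text formats, which is what makes such an export function cheap to support. As an illustration, writing a palette in GIMP’s .gpl format takes only a few lines – the swatch values below are made up:

```python
# Sketch: exporting swatches in GIMP's plain-text .gpl palette format,
# one of the targets Color Grab supports. The swatches are made up.
swatches = [("Firebrick", 178, 34, 34),
            ("Sky", 96, 164, 244),
            ("Agate Grey", 120, 113, 108)]

with open("grabbed.gpl", "w") as f:
    f.write("GIMP Palette\n")
    f.write("Name: Grabbed Colours\n")
    f.write("Columns: 3\n")
    f.write("#\n")
    for name, r, g, b in swatches:
        f.write("%3d %3d %3d\t%s\n" % (r, g, b, name))
```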
