Search Results for “DP2403” – DIGITAL PRODUCTION

We’re building an AI box!
https://digitalproduction.com/2024/05/01/wir-bauen-eine-ki-kiste/ – Wed, 01 May 2024
Okay, we admit it: AI is slowly becoming interesting – even for everyday production. And the more tools we see that are relevant for artists, and not just for "tech bros", the hotter our workstations run. So it's time for us to find out what it takes to work with these tools day to day, and which adjustments let us reconcile budget and computing power.

We asked an old friend of the editorial team about this – Peter Beck, Field Product Manager Workstations and Rugged at Dell Technologies in Germany. He has many years of experience, access to the latest toys and contact with all kinds of users and developers, so we naively asked him what it takes.

Peter has been with Dell Technologies for over 15 years and has dealt with workstation users in every role, from Support Specialist to Workstation Consultancy to System Engineering and now Field Product Manager – so he should be able to distil our questions down to the essentials.

DP: What hardware do you actually need to run Firefly, GPT4 or Stable Diffusion, for example?
Peter Beck: We have to make a distinction here between diffusion and transformer modelling, but we mustn’t forget GANs either. Depending on the number of parameters contained in an LLM, you can run it locally and offline on a notebook. I have already successfully tested this myself with a Llama 7B model on a two-year-old notebook.
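A minimal sketch of what such a local test can look like, using the open-source llama-cpp-python bindings – the model path, quantisation and prompt are placeholders, not a recommendation from Dell or the DP editorial team:

# Local LLM sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes a quantised GGUF weights file has already been downloaded locally;
# the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-7b.Q4_K_M.gguf",  # placeholder path to local weights
    n_ctx=2048,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

response = llm(
    "Explain the difference between diffusion and transformer models in two sentences.",
    max_tokens=128,
)
print(response["choices"][0]["text"])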
Diffusion modelling can be a different story. There, local components, primarily the graphics card, already decide on the possible resolution of the image that is to be generated. The technologies required for this have been around for quite some time, which is why smaller jobs could also be run on older hardware. These technologies include the Tensor Cores from NVIDIA, the Matrix Cores from AMD or XMS, XMX, AVX-512, DL Boost and GNA from Intel.
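To show how directly the graphics card limits diffusion output, here is a minimal local Stable Diffusion sketch using Hugging Face's diffusers library; the height and width arguments are exactly where VRAM becomes the limiting factor (checkpoint and resolution are only examples):

# Local Stable Diffusion sketch (pip install diffusers transformers accelerate torch).
# Checkpoint and resolution are examples; the VRAM of the local GPU decides how far
# height/width can be pushed before generation fails.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint
    torch_dtype=torch.float16,          # half precision to save VRAM
)
pipe = pipe.to("cuda")                  # requires an NVIDIA GPU with CUDA

image = pipe(
    "a production still of a sci-fi cockpit, volumetric light",
    height=768, width=768,              # raise these until the GPU runs out of memory
    num_inference_steps=30,
).images[0]
image.save("test_render.png")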

DP: And from what computer size is this executable?
Peter Beck: Use case, requirements and budget determine where you should start. Discrete graphics cards are almost mandatory for diffusion modelling, but are also available in smaller systems. By small, however, I mean systems such as GenAI workstations. Classic workstation workloads demand less performance than Stable Diffusion or StyleGAN require.
If we take an NVIDIA RTX 3080 in a mobile workstation as an example, it is theoretically possible to calculate resolutions of up to 8192×8192 pixels, but in practice it is more likely to be half that. With StyleGAN or BigGAN, such resolutions are no longer even theoretically achievable with this graphics card. With BigGAN, I wouldn’t attempt resolutions higher than 1024×1024 on this hardware.

DP: And where does it start to be fun?
Peter Beck: Well, if we look at the upper end of the scale, then we’re in the data centre and talking about GPU clusters. But I suspect that’s not what the question is aimed at. A good example of a PC, or rather a workstation, is our mainstream class of tower workstation. It comes with Intel XEON W-2400, more than enough RAM and SSD and one or two RTX 6000-class graphics cards from Nvidia. The user will definitely enjoy it, the buyer perhaps less so. That’s why I deliberately raised the issue of budget earlier.
Ultimately, however, it always depends on the actual task that needs to be completed. When it comes to LLMs, our mainstream tower workstations are rather oversized. With GANs, they tend to be in the middle of the performance curve.


DP: What else do you need in the box to be able to work with it?
Peter Beck: Let’s stick with tools such as FireFly or Stable Diffusion, which are the most commonly used. With these tools, a well-equipped mobile workstation, ideally with HX processors, can achieve very useful and timely results. The same also applies to tower systems with normal CPUs, i.e. Core i7, i9 or Ryzen 7/9 and a mid-range graphics card from the professional segment.


DP: If we now go into the studio environment: What is a sensible entry-level class beyond “IT has something to play with if everything happens to run”?
Peter Beck: As I said, what is a sensible entry level depends on the use case. In the studio environment, however, we are clearly in the mainstream range of a tower workstation with W-2400 XEON CPUs from Intel or smaller Threadripper Pro CPUs from AMD. The choice of graphics card then depends on the budget, because you can work with almost all GPUs that provide computing cores for AI, such as the Tensor Cores from NVIDIA. I have spoken to companies that simply provide a six-figure sum of money just to test artificial intelligence for their own purposes. This is certainly not possible in all companies, but it also shows the importance of AI for their future direction.


DP: Where is the “sensible” class, where do you think it is the most fun?
Peter Beck: Sensible means efficient – and efficient means that I use the right tool and the right hardware for the planned project. If you want it to be fun, you need less waiting time and quick results, and in the best case you can also do other tasks at the same time. You could take a look at the data centre to see if there is any rack space available.
A good alternative to the classic server and all the necessary accessories is a workstation system that can be installed in the data centre. This is possible with almost all of our tower systems, for example. They can be configured with up to four large graphics cards such as the NVIDIA RTX 6000 of the Ada generation, but special accelerator cards such as the NVIDIA A800 are also available. Or you can actually opt for a workstation in a 2U rack format. Then you have no heat and noise emissions at the workstation and can easily utilise very powerful hardware.


DP: And if money is no object?
Peter Beck: That’s a very good question, especially when you consider that the training of a very well-known and now commercially freely available AI was carried out on 2,048 Nvidia A100s at a total cost of around 7.5 million US dollars. What I’m saying is that even with a sufficient budget, the purpose of such an investment should not be lost sight of. Of course, you can very quickly configure a single tower workstation with the value of a mid-range car and use it to do everything that still makes sense locally – but then again, that is exactly what you can do with it.

DP: So in theory, everything is ideal for cloud computing?
Peter Beck: Absolutely, and we already see this very often, for example with the well-known internet services that create images and texts or simply provide ideas and suggestions. Outsourcing computing power makes sense for many companies – be it in a public cloud or in a private cloud, which is much more popular here in Germany. In the creative sector, a public cloud is certainly more likely to be considered than for training a company chatbot with company data.

DP: Will the hardware requirements decrease and the training data become more available?
Peter Beck: As far as hardware requirements are concerned, we are already seeing technologies that outsource workloads from the CPU to special chips when using already trained artificial intelligence. These technologies will already be an integral part of processors this year. They offer great advantages in terms of computing power and battery life, particularly in the mobile sector. Such technologies have already existed for years in graphics cards, whether in tensor cores or matrix cores. The trend in this direction has been evident for some time, and the technologies will continue to improve with each new generation.

DP: And what will a generative workstation look like in 5 years’ time?
Peter Beck: Probably not that much will change in such a short time. We will still be working on mobile or stationary PCs in five years’ time. The software will certainly be developed further and will be able to utilise the upcoming hardware better. The systems will consume more power in order to deliver even more computing power, but this will also increase the requirements and demands of users. Letters and emails are perhaps a good analogy here.
In the past, letters were sent by post and you waited a week for a reply. Today, we receive a reply to an e-mail in a much shorter time, which means that we write even more e-mails.
What we will certainly see, however, is that artificial intelligence – for whatever purpose – is finding its way into almost everything. Be it in smartphones, consumer electronics or leisure activities: AI will play a role in all areas.

DP: Does “local” even make sense?
Peter Beck: Local definitely still makes sense at the moment. I often speak to users who want to use their own workstation under the desk to get to grips with artificial intelligence and develop solutions. This may be because there is no more space in the data centre or the computing power there is distributed among several users and therefore queuing is the order of the day. But even after development, users continue to rely on local systems, whether for fine-tuning or for testing the functions created.

DP: Which tools have you already played with and do you enjoy using them?
Peter Beck: I’m a child of games and always have to test everything there is to test. Like many others, I probably started with Llama from Meta. Even the 13B model runs very well on my two-year-old mobile workstation. One of my most recent attempts was actually Stable Diffusion, which was relatively easy to set up with the sdGUI, but it demanded a lot of performance – nothing else could be done on the notebook while it was running.

DP: And where do you think the point has been reached where the technology will be “finished”?
Peter Beck: The prerequisite for this would be that there is also finished software. However, if you look at the current forecasts for the further development of AI, they already go far beyond 2035 and speak of various directions. These include Narrow AI as an AI that is only trained for a very specific task, but works extremely precisely. And then we have Widening AI, an AI that can create new data relationships and logics. That may sound scary at first, but we’re talking about 2035 and beyond. This has actually already been around for a few years, but as far as I know, not yet in the commercial sector. Such technologies are currently mainly used in the research and development of AI.

Blender 4.1 goes into detail
https://digitalproduction.com/2024/05/13/blender-4-1-goes-into-detail/ – Mon, 13 May 2024
The Blender release cycle consists of three new versions of the software per year. The first release is usually characterised by new features. The reason for this is that the third and final release is a Long Term Support (LTS) version, which is supplied with bug fixes for another two years.

The developers are naturally more hesitant when it comes to adding new features and are happy to postpone them until the next cycle.

With a medium release, which includes the current version 4.1, clean-up work and improvements usually take place. This is the case again this time. Many of the new features that were introduced in Blender 4.0 have now been polished again.

The cover graphic of the last issue, stylised with the Kuwahara node. The filter size is smaller the closer the elements are to the viewer. This makes it easy to recognise the hoses inside, while the sculpture becomes increasingly blurred towards the back.
Kuwahara filter controllable

One example of this is the Kuwahara filter in the Compositor, which was introduced in Blender 4.0 and can be used to give images an oil-painting look. It can now optionally be executed with higher precision, which should produce better results for HDR images and at particularly high resolutions, at the cost of a slightly longer execution time. The size of the filter area is no longer static, but can be influenced via a socket. This makes it possible, for example, for image elements to look more painterly or blurred the further away they are from the camera.

An example of the inpaint node. The centre of the nose was masked and refilled using Inpaint. On the left the result in Blender 4.0, on the right in Blender 4.1. The clearly recognisable line in the middle of the inpaint area in the left image is due to the fact that the edge pixels converge in the middle. Thanks to a second pass, the area in Blender 4.1 is smooth and continuous.
Viewport compositor finally complete

The depth pass required for this now also works in Eevee and the Workbench engine and is supported by the Live Compositor, which displays the compositing result in the viewport. In Blender 4.1, all nodes are supported there for the first time. Only the Render Layers node is limited to the image, alpha and depth passes. The depth pass is also not yet available with Cycles in the viewport and outputs the depth in normalised coordinates rather than the absolute distance of the pixels to the camera sensor, as it does in rendering. The developers therefore recommend attaching a Normalize node directly to the depth pass if you want to use it in the viewport. This ensures that the result does not suddenly change during rendering.
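That recommendation can also be wired up in a few lines of Python; a minimal sketch that hooks a Normalize node between the Render Layers Depth output and the Composite output:

# Sketch: attach a Normalize node to the Render Layers Depth output,
# as the developers recommend for previewing depth in the Live Compositor.
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

render_layers = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(render_layers.outputs["Depth"], normalize.inputs[0])
tree.links.new(normalize.outputs[0], composite.inputs["Image"])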

The Split Viewer node has been replaced by a new node called Split. Like its predecessor, it divides the image into two halves, either along the X or Y axis. This allows two effects to be compared directly in one image. Unlike the Split Viewer node, it has an image output, so it is no longer purely for viewing, but can also be used to post-process or save the result.

The Pixelate node has a new Size property. Previously, you had to place one node in front of the pixel effect to scale the image down and a second node behind it to scale it up again. This can now be dispensed with and the size can be set directly in the node.

The Inpaint node can be used to remove ropes, markers and other small details from images and videos by extending the edge pixels of an area defined via a mask or the alpha channel inwards. In Blender 4.1, it now uses the Euclidean distance instead of the Manhattan distance, which should ensure a more even fill. In addition, the node now works in two passes, which means that there should no longer be any artefacts at the point where the fills converge. Previously, a clear line was usually visible there; now the area looks soft and continuous.
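The switch of distance metric is easy to picture: Manhattan distance grows along the axes and therefore spreads fills in a diamond shape, while Euclidean distance is direction-independent and spreads in circles. A tiny illustration of the two metrics:

# Why a Euclidean fill looks more even than a Manhattan fill: for the same pixel
# offset, the Manhattan metric overestimates diagonal distances, so a fill based on
# it spreads in a diamond rather than a circle around the masked area.
import math

def manhattan(dx, dy):
    return abs(dx) + abs(dy)

def euclidean(dx, dy):
    return math.hypot(dx, dy)

for dx, dy in [(4, 0), (3, 3), (0, 5)]:
    print(f"offset ({dx},{dy}): manhattan={manhattan(dx, dy)}, euclidean={euclidean(dx, dy):.2f}")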

Detail improvements have also been made to a number of other nodes. The Defocus node now calculates the bokeh radius more accurately, which means that the results match the output of render engines better.

The Sun Beams node now produces softer beams, and the anti-aliasing of Z Combine and Dilate has been improved. The Double Edge Mask node works between 50 and 250 times faster and now also uses the edge pixels, whereas previously the mask was shifted one pixel inwards. The Crop node now makes an image disappear completely if the upper border is below the lower border, whereas up to Blender 4.0 it would have inverted the crop in this case. The Flip node now works in local coordinates, so the image no longer moves away when its source is moved. The UV Map node now offers a choice between anisotropic and nearest-neighbour filtering. This simplifies some NPR workflows such as palette-based remapping of colour tones. More interpolation options may be implemented in the future.

The various interpolation modes for strips in the Video Sequence Editor (VSE). The two cubic algorithms are new additions, whereby Mitchell is generally better suited for images than Cubic B-Spline, which is also used in other places in Blender.

With the Keying Screen node, two-dimensional colour gradients are generated by sampling points on a source image. The idea behind this is to fill the colour input of a Keying node with a gradient in order to compensate for uneven illumination of a green screen. The gradients created this way were previously characterised by hard edges and linear transitions. The new version in Blender 4.1 uses Gaussian interpolation, which ensures a buttery-smooth result.

The compositor is now only executed if its result is actually displayed somewhere, for example in a Viewer node or in the Image Editor. For the entire node tree, you can now select whether it should be calculated with full or automatic numerical precision. The latter uses half the bit depth for previews, which means the calculations run faster and need less memory, although this can lead to increased artefacts.

Eevee Next only in the next release

In the last issue, we reported that we were looking forward to Eevee Next, a modernised version of the Eevee real-time render engine supplied with Blender. This was actually supposed to be integrated into Blender 4.0, but was then postponed to Blender 4.1. And then came the news that it still does not meet the developers’ quality requirements and will only be released in Blender 4.2. In Blender 4.1, the light probes in Eevee were renamed from Reflection Cubemap to Sphere, Reflection Plane to Plane and Irradiance Grid to Volume. The changes are not purely cosmetic, but also affect the Python API.

Denoising with OpenImageDenoise on different hardware. The Junkshop splash screen from Blender 2.81 was used as an example file. A Geforce RTX 3090 GPU denoises the scene approx. 15 times faster than an Intel i9-13900k CPU.
OpenImageDenoise on the GPU

After rendering with path-tracing-based render engines such as Cycles, which is included in Blender, there is usually a post-processing step in which the image noise typical of path tracing is removed. Blender comes with two solutions for this: OpenImageDenoise from Intel and the OptiX denoiser from Nvidia. Previously, only the latter could run on the graphics card, which excluded users of non-Nvidia hardware from the acceleration. With OpenImageDenoise, noise removal therefore often took longer than the actual rendering. In Blender 4.1, OpenImageDenoise now also works on the graphics card. Specifically, Nvidia GPUs from the GTX 16xx series, the TITAN V and all RTX models are supported, as well as Intel graphics chips with Xe-HPG architecture or newer and Apple Silicon with macOS 13.0 or newer. AMD GPUs are not yet supported due to stability issues. If you are using a graphics card with an AMD RDNA2 or RDNA3 chip, you can switch to the alpha version of Blender 4.2, where support is already enabled. The developers have used the splash screen from Blender 2.81 as the basis for a benchmark. There, an Apple M2 Ultra GPU with 76 cores is more than three times as fast as the M2 Ultra CPU. An Intel i9-13900K CPU even takes around 15 times as long as an Nvidia RTX 3090.
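To make sure OpenImageDenoise actually runs on the GPU for final renders, the relevant Cycles settings can also be set from Python; a small sketch (the denoising_use_gpu property follows the 4.1 release notes and does not exist in older versions):

# Sketch: enable OpenImageDenoise for final renders and ask it to run on the GPU.
# The denoising_use_gpu property was added in Blender 4.1; guarded here so the
# script does not fail on older versions.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
cycles = scene.cycles

cycles.use_denoising = True
cycles.denoiser = 'OPENIMAGEDENOISE'
if hasattr(cycles, "denoising_use_gpu"):
    cycles.denoising_use_gpu = True  # falls back to the CPU on unsupported hardware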

Hardware support further expanded

If you are using an integrated AMD graphics unit with an RDNA3 chip, it can now also be used for rendering. Rendering performance on the CPU under Linux has been improved by around 5 per cent across all benchmarks, which is particularly relevant for render farms and cloud rendering.

Improvements in video editing

Blender comes with its own video editing editor. The Video Sequence Editor (VSE) has received performance improvements in various areas. The timeline should now update three to four times faster for more complex projects. Colour management, audio resampling, reading and writing of frames and parts of the code for image transformation have also been optimised. The glow effect now works between six and ten times faster, wipe can even be calculated up to 20 times faster. Gamma Cross is now four times faster, Gaussian Blur one and a half times faster and Solid Colour twice as fast.

The vector scopes can now be coloured and retain their aspect ratio. A line shows the average Caucasian skin tone.
New scopes

The Luma Waveform is calculated eight to 15 times faster and has also received a visual update that shows more information. The RGB Parade variant, in which the individual channels are displayed separately, now uses less saturated colours and slightly additive blending to make it more pleasant on the eyes. The histogram also displays more information, is less saturated and is drawn faster thanks to GPU acceleration. The Vector Scope now retains its aspect ratio and has been given a line that corresponds to the average Caucasian skin tone. It can also be coloured, making it less abstract.

Left Blender 4.0, right Blender 4.1. From top to bottom the normal histogram, the waveform display of brightness and the waveform display divided by RGB channels, the so-called parade view.
Audio waveforms as standard

In the Video Sequence Editor, the waveforms are now displayed by default for audio strips. As these are usually symmetrical, you can restrict the display to the upper half.

Automatically the best filtering

Cubic interpolation is now also offered when rotating and scaling strips. This was previously only available in the Transform effect strip. Performance has been improved at the same time. Cubic interpolation comes in the B-Spline variant, which is also used elsewhere in Blender, and the Mitchell variant, which is usually better suited for images. The bilinear filter no longer produces a transparent border at the edge of the image when it is scaled up, and a whole series of errors has been eliminated where images were shifted by one pixel, resulting in annoying gaps. The Subsampled3x3 filter has been replaced by a generalised Box filter, which also performs well when images are scaled down by more than a factor of three. By default, the filter that is expected to produce the best result in a given situation is now applied to a strip: if a strip is not scaled or rotated and its position is only changed in integer steps, Nearest is selected; if an image is enlarged by more than double, Cubic Mitchell is used; if it is reduced to less than half, Blender 4.1 uses the Box filter; in all other cases the interpolation remains Bilinear.
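Expressed as code, the automatic selection described above looks roughly like this – an illustration of the stated rules, not Blender's actual implementation:

# Illustration of the automatic filter choice for VSE strips described above.
# This mirrors the rules from the release notes; it is not Blender's source code.
def choose_filter(scale, rotated, integer_translation_only):
    if scale == 1.0 and not rotated and integer_translation_only:
        return "Nearest"          # unmodified strip, pixel-exact copy
    if scale > 2.0:
        return "Cubic Mitchell"   # strong enlargement
    if scale < 0.5:
        return "Box"              # strong reduction
    return "Bilinear"             # everything in between

print(choose_filter(scale=1.0, rotated=False, integer_translation_only=True))   # Nearest
print(choose_filter(scale=3.0, rotated=True, integer_translation_only=False))   # Cubic Mitchell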

Outliner

In the Outliner, you can now double-click on a collection icon to select all its children. An Expand/Collapse All entry has been added to the context menu. This expands or collapses the entire hierarchy. Previously, this option was only available via the Shift A shortcut. Previously, it was not possible to apply modifiers to objects in the outliner; an entry has also been added to the context menu for this.

When you look through the camera, a new gizmo with a padlock icon appears in Blender 4.1. This allows you to switch the Lock Camera to View option on and off, which was previously only possible in the View tab in the sidebar.
Lock Camera to View is now a gizmo

Companies normally collect usage data from the users of their software in order to improve the user interface. This could be heatmaps showing where users click particularly frequently, or simply statistics on which functions are accessed and which menus are visited how often. Anyone who has concerns about data protection here is on the right track. This is also the reason why the Blender developers do not collect any such data. Instead, the development of the interface, like the rest of Blender, follows the open-source approach – mock-ups, demo implementations and constant discussions between programmers and users. The process can feel slow, but it is the price of privacy.

One example is the Lock Camera to View feature, which makes the camera follow the user's movements in the 3D viewport. This allows a camera to be positioned in the same way as you would otherwise navigate the 3D viewport, which is why this function was particularly popular among beginners – if they knew about it at all, because it was located in the View tab of the sidebar, which is hidden by default. Quite deep in the interface for such a frequently used function. And so the idea of introducing another viewport gizmo came up years ago. It appears when you look through the camera and has a padlock as its icon. This small but useful change has now finally found its way into Blender 4.1.

Blender comes with its own file browser. In Blender 4.1, meta information such as the Blender version used to save a project or the resolution of images and frame rate of videos are now displayed in the tooltip.
UI detail improvements

Tooltips in the file browser now show the Blender version in which a file was saved and metadata such as resolution for images or frame rate for video files. The tooltips are also displayed in the Open Recent menu, where the preview image can also be found. While you are working on a project, Blender automatically saves to the temporary directory every two minutes by default. If Blender crashes, you can then restore your project via File -> Recover -> Auto Save and continue working. However, it could happen that you save your project manually and then save it again immediately afterwards using Autosave. In Blender 4.1 the counter is now reset every time you save manually.

On the left the colour picker from Blender 4.0, on the right in Blender 4.1. The selected colour and brightness are displayed directly in the cursor, which makes them easier to read.

With the colour picker, the selected colour and brightness are now displayed directly in the respective cursor, making it easier to read. In addition, many other details have been added to the interface, from optimising the rounding of the corners of pop-up and conventional menus, to higher quality shadows for these menus, to the animation markers, whose line is no longer drawn by the marker itself. The text that is used as default when adding a text object is now translated into the language in which the interface is used. So if you have set your interface to Spanish, you will now be greeted by “Texto” when you add a text object.

Import and export via drag and drop

External files in the formats Alembic, Collada, OBJ, OpenUSD, PLY, and STL can now be imported into Blender using drag-and-drop. The reader will have noticed that these are formats whose exporters and importers are not realised in Python, but in C. STL has been added in Blender 4.1 and should now work three to ten times as fast as the previous implementation in Python, which will still be supplied for a few versions but will be removed from Blender in the long term. In future versions of Blender, support for drag and drop will also be added for formats whose import and export are implemented in Python. This will be made possible by a new callback, which also gives developers of external add-ons the opportunity to implement drag-and-drop.
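A hedged sketch of what such a drop handler for an existing import operator can look like with the FileHandler type that appears in current Blender Python APIs – operator ID, extension and poll logic are examples, not a finished add-on:

# Sketch: register a drag-and-drop handler for an existing import operator via the
# FileHandler type introduced with Blender 4.1. Operator ID and extension are examples.
import bpy

class DropObjInViewport(bpy.types.FileHandler):
    bl_idname = "EXAMPLE_FH_obj_drop"
    bl_label = "Drop OBJ files into the 3D Viewport"
    bl_import_operator = "wm.obj_import"   # the C++-based OBJ importer
    bl_file_extensions = ".obj"

    @classmethod
    def poll_drop(cls, context):
        # Only accept drops that land in a 3D Viewport.
        return context.area is not None and context.area.type == 'VIEW_3D'

bpy.utils.register_class(DropObjInViewport)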

USD & Co

The exporter for Universal Scene Description (USD) now supports armatures and shape keys, while the importer supports the instantiation of objects, collections and USD primitives on a point basis. These are loaded as a point cloud object with a Geometry Nodes setup containing an Instance on Points node. The import can also be extended using Python hooks, making it easier to integrate Blender into in-house pipelines. The import and export of Stanford PLY files now supports custom vertex attributes, and when exporting to OBJ format, objects whose shading is set entirely to Flat or Smooth are exported between 20 and 40 per cent faster.

News on the glTF front

The glTF exporter can now optionally optimise the generated files for display with OpenGL using gltfpack by reordering the mesh data in such a way that memory consumption and draw calls are minimised. UDIMs are not supported by glTF. They are therefore now split during export, with each tile receiving its own material. Unused images and textures can now still be exported, e.g. because they are still needed later in an interactive application, and anisotropy is now supported for materials.

Bake bake Geonodes

Geometry nodes now allow intermediate results from node groups to be saved via baking. Previously, baking support was only available for the Simulation Zone. Data is now better deduplicated in the cache, which means that the file size should be significantly smaller in some cases. The caches should no longer be lost after an undo and volumes can now also be baked. The auto-smooth option for meshes has been replaced by a modifier node group asset. At the same time, you now have full control over the custom normals of a mesh in the geometry nodes.

Growth in the Geometry Nodes

The new Active Camera node returns the currently active camera, the Index Switch node allows you to select any input via an index and the Sort Elements node can be used to redefine the vertex order of a mesh. Split to Instances can be used to split a mesh into individual parts based on an ID and the Blackbody node known from the Shader Editor is now also available in Geometry Nodes.

New rotations step by step

There is a new Rotate Rotation node for rotations, which replaces the Rotate Euler node and is easier to use. This is part of the gradual introduction of the new Rotation Socket, which has been introduced in Blender 4.1 for the following nodes: Distribute Points on Faces, Instance on Points, Rotate Instances, Transform Geometry, Object Info and Instance Rotation.

With the Menu Switch node, it is now possible to create drop-down menus for custom-built geometry node assets.
Home-made Geometry Nodes

One of the design goals of geometry nodes in Blender is that users should be able to recreate high-level nodes completely with on-board tools. Until now, however, this was only possible to a limited extent, as some nodes work with drop-downs, a control element that you could not yet recreate yourself. In Blender 4.1, it is now possible to define your own drop-down menus via the menu switch node, which finally closes this gap.

Conclusion

Blender 4.1 offers detail improvements across the board – a successful intermediate release. For the grand finale in the form of Blender 4.2 LTS, we are still waiting for Eevee Next.

Prism Pipeline, the second
https://digitalproduction.com/2024/04/19/prism-pipeline-the-second/ – Thu, 18 Apr 2024
The technical requirements of animation and VFX productions are increasing every year - I don't have to tell you that. In most projects, a variety of tools are used to achieve the desired quality. In order to complete these increasingly complex projects within the specified deadlines, more powerful pipeline tools are also required.

While it used to be common for studios to develop their own tools over a period of years, Prism Pipeline offers an alternative. This is accessible to everyone and in version two, Prism can be used to plan and implement workflows for a wide range of CG projects in no time at all.

by Richard Frangenberg

The latest version of Prism Pipeline 2 has now been released and in the following we will introduce you to the latest features and give you a preview of what to expect in the future so that you know whether it might be time to rebuild your pipeline. But let’s start at the beginning…

A brief overview

For those unfamiliar with Prism, here’s a quick overview of its basic features. Prism is a pipeline software with the aim of simplifying and automating the work steps of CG productions. The focus is on simple set-up and intuitive use by artists – even without prior technical knowledge. The main functions include the creation of folder structures, organisation of assets, shots and tasks as well as the versioning of scene files, geometry caches, rendered images and, of course, automated import and export between different DCCs. There are plug-ins for many other functions that extend the functional scope of Prism as required. Prism can be used both as a standalone tool and within all common DCCs. The basic functions are available in all DCCs and are extended with DCC-specific functions. For example, in Maya Prism can load the assets as Maya references and in Houdini simulations can be easily exported with a Prism Filecache Node.

Since Prism handles the project's entire file management, artists no longer have to struggle with searching through folders and naming files. This saves time and avoids unnecessary errors.

Where is Prism used?

The majority of projects realised with Prism are 3D animations and VFX projects, but Prism has also been successfully used in 2D animations, games and other real-time projects. The complexity ranges from the first student films to major Netflix series and feature films. Some of the more well-known projects in which Prism has been used include the HBO series “House of the Dragon” and the Netflix film “The Kitchen”. The ease of installation and operation makes it easy for freelancers working alone to increase productivity. For smaller studios without their own pipeline department, Prism offers the opportunity to work efficiently without having to invest a lot of money and time in developing their own pipeline. On the other hand, Prism is also used by studios with dozens of employees. These studios usually have a pipeline department or TDs who can expand Prism with plug-ins.

Another option is the open source MaterialX Editor QuiltiX. With this free editor, MaterialX files can be opened, edited and saved as new versions directly from Prism. This also makes it easy to create a Material Library – find out more here: prism-pipeline.com/quiltix. If you want to know more, let the DP editorial team know and we’ll do a story about it!

New features in Prism 2

But enough of the preamble: we wanted to talk about version 2! The latest version comes with a host of new plug-ins and features in the core application. Highlights include OpenUSD support, integrations for ZBrush, Substance Painter and Unreal Engine, a completely redesigned link with Shotgrid/Flow and new links with Kitsu and Ftrack. Speaking of which: a complete list of supported tools can be found at prism-pipeline.com/plugins. At the same time, UI and performance have been improved. In addition, users now have much more flexibility in their projects: the folder structure and file names can be customised using templates, and import, export and render settings can now be preset for the entire project. Other new functions include a user permission system with which users can be assigned to specific roles across projects, and a new launcher with which specific DCC versions can be defined per project. So much for the overview, let’s get into the details.

Scenefiles are saved under a task of the asset. This makes it easy to find and open different versions.

OpenUSD

Pixar’s OpenUSD is increasingly becoming the long-awaited standard in the CG world. It is much more than just another file format to send 3D geometry back and forth between DCCs – being able to edit complete scenes (including lights, materials, render settings etc.) independently of a specific DCC or renderer opens up completely new workflows that would not be possible without OpenUSD. These USD scenes can reference other USD scenes in different ways, allowing multiple artists to work simultaneously on a single scene.

This new type of collaboration has huge potential, but also brings with it a certain amount of complexity. There are dozens of terms in the world of USD, the meaning of which can only be recognised after painstaking familiarisation. The good news is that not everyone who uses OpenUSD needs to know how OpenUSD works. USD support in Houdini, Maya, Blender, Unreal Engine and other DCCs is getting better and easier to understand with every update.

Unfortunately, to utilise the full potential of USD, it is not enough to just export an asset as USD. Instead, pipeline tools are needed to automate the referencing of USD files with each other. Only then can workflows be automated and, for example, multi-shot workflows be implemented efficiently. Large studios invest a lot of resources in their USD pipelines, but it has been difficult for smaller studios to really utilise USD.

Developing a USD workflow for smaller studios was one of the main goals of Prism 2, and in the first announcement of Prism 2 (yes, that was still in 2021) a first USD workflow was already presented. In the following months, this workflow was further developed and feedback from beta testers was regularly implemented. With the release of Prism 2, small studios now have a comprehensive USD workflow tool at their disposal. However, USD support is implemented in Prism as a plug-in, so that everyone can decide for themselves if and when the time is right for them to switch to a USD workflow.

In the 3D viewport of the USD Editor, the complete scene can be viewed with different display settings. In addition to the standard GL renderer (Storm), the scene can also be rendered in the viewport with a renderer such as Arnold. The Prism USD Editor can also be used in Prism Standalone so that an asset or shot can be viewed and edited without having to open Houdini or Maya.

How does the USD workflow work?

Prism automatically creates USD files for each asset and shot. An additional USD file is created for each department, which is added as a layer to the asset/shot USD file. This allows multiple artists to work simultaneously on different tasks of the asset/shot and see the combined result of all layers at any time. Combining the individual layers can be imagined like Photoshop layers that are stacked to form the final image. If the material of an asset is changed, only the surfacing layer in the USD asset is replaced. As the USD asset is referenced in the shots, these shots can be rendered directly with the new material without any department having to re-import the asset. This “working in department layers” is just one of many USD features. Other USD features such as variants or instances are mostly optional and can be used as required.

To create or edit USD files, Prism offers an extensive USD Editor. Layers can be created, deleted or sorted within a USD file. All objects (USD prims) in the USD file can be viewed in the USD Scenegraph tree. New prims can be created here, and existing prims can be renamed or deleted. USD variants can also be created in the scenegraph. This allows different variations of the geometry or material to be created in an asset USD file, for example. If the asset is later referenced in a shot, the geometry or material can be switched to a different variation with a single click. This can be very useful if an asset is duplicated very often in a shot and not all instances should look the same.

Within the DCCs, Prism uses the USD tools available there, among other things. Houdini currently has by far the best USD toolset, but Maya also has good support in the latest versions. Prism can also work with USD files in other DCCs such as Blender and Unreal Engine. However, it should be noted that the official USD support in Blender and Unreal Engine is still quite experimental. In newer versions of Nuke, ZBrush and Substance Painter, Prism can import and export USD files. More DCCs will be added in the future – we are working hard on it!

To make the USD workflow as easy as possible for artists, Prism automatically recognises which department the artist is currently working in. These default settings make it easy to get used to the USD workflow, but can also be adjusted at any time if necessary. In the following, we will go through the individual departments and explain how the basic USD workflow works with Prism. Of course, other departments can also be added or omitted, depending on the project.
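The layer mechanics described here can be reproduced with a few lines of the standard OpenUSD Python API. A simplified sketch of an asset built from department sublayers and a shot that references that asset – all file paths and prim names are placeholders; in a Prism project these files are created and wired up automatically:

# Simplified sketch of the layered USD structure described above, using the
# standard pxr (OpenUSD) Python API. Paths and prim names are placeholders.
from pxr import Usd, UsdGeom

# Asset stage: one root layer with per-department sublayers stacked on top.
asset_stage = Usd.Stage.CreateNew("asset_chair.usda")
UsdGeom.Xform.Define(asset_stage, "/chair")
root_layer = asset_stage.GetRootLayer()
root_layer.subLayerPaths.append("chair_surfacing.usda")  # stronger opinion
root_layer.subLayerPaths.append("chair_modeling.usda")   # weaker opinion
root_layer.Save()

# Shot stage: references the asset, so a new surfacing layer on the asset
# shows up in every shot without anyone having to re-import anything.
shot_stage = Usd.Stage.CreateNew("shot_010_layout.usda")
chair_in_shot = shot_stage.OverridePrim("/shot/chair")
chair_in_shot.GetReferences().AddReference("asset_chair.usda", "/chair")
shot_stage.GetRootLayer().Save()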

Modelling

USD changes very little in the modelling department. As usual, the geometry is created in a DCC such as Maya. The name and hierarchy of the objects are of great importance for the following departments. The model is then exported as a USD by clicking on a shelf tool. In the background, Prism recognises that you are currently working in the modelling department and automatically uses the necessary settings for the export. In addition to Maya, the model can of course also be created and exported in ZBrush, Houdini or other DCCs.

Surfacing

A great strength of USD is that complex materials can be saved in USD files and thus rendered by different renderers in different DCCs – with almost identical results. In practice, there are still differences with some renderers and not all materials can be rendered by all renderers, but in many cases this already works very well and support is constantly improving. One term that comes up frequently in this context is MaterialX, an open-source material standard that is supported by more and more renderers. Materials in a USD file can be MaterialX materials, but do not have to be. Both Houdini and, since version 2024, Maya (with LookDevX) can create and edit USD materials in a node editor. These materials can be exported using Prism and automatically linked to other USD files. Many studios rely on Adobe Substance Painter to create textures. Prism makes it very easy to import USD assets into Substance Painter. When exporting textures, Prism also has the option of creating a MaterialX file. This material is automatically assigned to the asset, so the materials can be viewed directly on the 3D asset in Prism Standalone. To edit the material further, the asset can then be imported into Houdini or Maya.
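What a renderer-agnostic material stored in USD looks like at file level can be sketched with the UsdShade schema; a minimal UsdPreviewSurface bound to a mesh – paths and parameter values are examples only:

# Minimal sketch of a renderer-agnostic USD material (UsdPreviewSurface) bound
# to a mesh. Paths and parameter values are examples only.
from pxr import Usd, UsdGeom, UsdShade, Sdf, Gf

stage = Usd.Stage.CreateNew("asset_material.usda")
mesh = UsdGeom.Mesh.Define(stage, "/asset/geo/body")

material = UsdShade.Material.Define(stage, "/asset/mtl/painted_metal")
shader = UsdShade.Shader.Define(stage, "/asset/mtl/painted_metal/preview")
shader.CreateIdAttr("UsdPreviewSurface")
shader.CreateInput("diffuseColor", Sdf.ValueTypeNames.Color3f).Set(Gf.Vec3f(0.18, 0.2, 0.6))
shader.CreateInput("roughness", Sdf.ValueTypeNames.Float).Set(0.35)

material.CreateSurfaceOutput().ConnectToSource(shader.ConnectableAPI(), "surface")
UsdShade.MaterialBindingAPI.Apply(mesh.GetPrim()).Bind(material)

stage.GetRootLayer().Save()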

Off to Kitsu – or any other tool

Rigging

Rigging is a department that is neglected in most USD workflows. Bones can currently be saved in USD files, but constraints and expressions, which would be necessary for more complex rigs, are not (yet) supported by USD. This will be added in future versions of USD, but for now rigs have to be brought into the scene in other ways. One option is to save the rig not as USD but as a Maya scene (.mb), which is then referenced from a USD file. However, this process requires several steps and is very tedious to set up manually. Prism automates it so that the rig can be exported with virtually one click and automatically added to the USD asset. Animators can then import a single USD file containing environments and props as well as character assets with the Maya rigs.

Turntable

Turntables

In order to evaluate the appearance of an asset, turntables are usually created in which the asset and/or the lights rotate. Prism has a USD Turntable Editor that can be used to configure turntables, e.g. the HDRI, the duration or the asset position. This configuration can then be saved as a preset and applied to all assets in the project. The turntables can either be rendered locally or sent to a Deadline render farm. This fast way of generating turntables is independent of DCCs, and problems with the assets can be detected at an early stage.

Layout

In larger studios, there is often a layout department that sets up the shots and takes care of set dressing and camera animations. In smaller studios, this step is often carried out by the animators. In a USD workflow, the layout step is necessary to reference the assets in the shots. Prism can automatically create the layout USD file for each shot.

Animation

The animator can now import a single shot USD file that contains the complete layout of the shot including all assets. Assets with a rig can be animated and exported with a click on the export shelf tool. The animation USD layer only contains the point positions at the individual frames of the animation; materials and UVs are not saved in it. This is not an oversight but intentional: the materials of the asset can be adjusted later without having to re-export the animation.

Lighting

When it comes to lighting USD scenes, Houdini currently offers the most options. In other news: water is still wet. Although USD scenes can also be rendered in other DCCs, for the sake of simplicity we will focus here on the lighting workflow in Houdini. And yes, I’ll leave “Houdini for the sake of simplicity” as it is. Node-based work in Houdini allows the lighting of several shots or complete sequences to be created in one scene. Prism can import the desired shots into Houdini Solaris and creates ready-made node graphs for each shot, which the lighting artist can then build on. The USD scenes can either be rendered directly in Houdini or submitted to a Deadline render farm. Prism also offers the option of exporting the complete scene to a USD file before rendering. This can improve performance and, of course, save Houdini licences.

USD Conclusion

The use of USD makes it easier to automate work steps and to realise them more independently of specific DCCs and renderers. With the right pipeline tools, USD offers both large and small projects the opportunity to organise the workflow efficiently and flexibly – much more so than would be possible with conventional pipelines. Currently, the complexity of USD and the experimental implementation in some DCCs are obstacles for some studios to change their workflow, but these problems are getting smaller every day. Almost all DCC vendors are currently working hard on USD support: SideFX, Autodesk, Adobe, the Blender Foundation, Epic Games and many other companies have made USD a key focus and are committed to making it the standard in the animation/VFX industry. Prism offers the most comprehensive and user-friendly USD pipeline currently available on the market. We will continue to implement new USD features in Prism and make the workflow as easy to use and understand as possible, so that the USD workflow becomes the first choice for small studios, students and freelancers. But that’s not all – Prism 2 offers even more new features.

Rendered images and playblasts can be viewed directly in Prism. Notes can be created here and a status can be set for each version. The image sequences and videos can also be opened in external media players from here.

ZBrush

Anyone who has ever tried to write a plug-in for ZBrush knows that it is much more difficult than for most other DCCs. As a result, many studios have not integrated ZBrush into their pipeline at all, and models and textures are often still exported manually. With Prism 2 there is now a Prism–ZBrush integration. This makes it possible to version ZBrush projects in an organised manner within the project structure. You can jump back to older versions if required, and ZBrush projects from team members can easily be found and opened. Asset import is significantly simplified by Prism, as the desired asset can simply be selected in a library window and folders no longer need to be searched manually. In addition to geometry, diffuse and displacement maps can also be exported. Prism takes care of folder creation and file naming, so this step only requires a few clicks.

Substance Painter

Another new plug-in is the Substance Painter integration. Similar to other DCC plug-ins, Prism takes care of the versioning of the scene files and the asset import. Various file formats such as Alembic, FBX or USD can be used. For the export of textures, Prism offers a separate export window with settings for the resolution, whether an export preset should be used and which maps should be exported. The export of multiple UDIMs is also supported. The exported textures are automatically versioned and can then be viewed in the Prism Library. From there, they can be brought into other DCCs to create materials for the assets.

Unreal Engine

The enthusiasm for real-time engines in recent years has probably not gone unnoticed by anyone. Well-known studios such as WetaFX create impressive short films in Unreal Engine, and Epic Games invests a lot of money in the development of animation tools in UE. The potential time savings are motivating many smaller studios to test Unreal Engine, and many are already using it in their productions. The new Prism plug-in for Unreal Engine makes it possible to link the workflow of DCCs such as Houdini and Maya with Unreal Engine. The exchange of assets plays a central role in this.

The UE plug-in works slightly differently to most DCC plug-ins in Prism. While Prism saves and versions Houdini scenes within the Prism project folder, Unreal projects can be stored outside the Prism project. Versioning of UE files is done using a version control system such as Perforce, which takes a different approach to Prism but works very well with UE. The Prism project is linked to the UE project, and Prism can be used within UE to import assets from the Prism project into the UE Content Browser.

For film projects, complete shots can also be imported from Prism into the UE Sequencer. Prism creates UE LevelSequences with the appropriate frame ranges and imports cameras and assets for each shot. Cameras and other objects can then be exported from UE back into the Prism project using Prism. A typical use case would be that the camera layout is done in UE and this camera is then exported to Maya so that an animator can animate a character to match the camera perspective. In the usual case that UE is used for rendering, Prism can render the content of the UE Sequencer so that each shot ends up in the correct shot folder. Prism also takes care of versioning the rendered images and can optionally submit the UE project to a Deadline render farm for rendering. Prism is then used to import the rendered shots into Nuke or Resolve for post-processing.

Shotgrid, Ftrack, Kitsu

Prism focuses on file management, but a successful project also requires good time and task planning. The most widely used project management tools in the animation industry include Shotgrid, Ftrack and Kitsu. For the best file and project management experience, there are new plug-ins in Prism 2 to link any of these three tools to Prism. For the sake of simplicity, only Shotgrid (more recently Flow, formerly Shotgun) is mentioned in the following, but all features apply equally to the Ftrack and Kitsu integrations. As soon as Prism is linked to Shotgrid, all assets and shots are synchronised between Prism and Shotgrid in real time. When a new shot is created in Shotgrid, it is immediately visible in Prism. Prism also reads metadata such as frame ranges, descriptions and thumbnails directly from Shotgrid. Departments and tasks are synchronised as well – artists can therefore open Prism within their DCCs and change the task status or add notes; the changes are then automatically transferred to Shotgrid. Conversely, artists can see in Prism which Shotgrid/Flow tasks are assigned to them and no longer have to jump back and forth between DCC and web browser. It is simply clearer and more convenient. Another function of the integration is the publishing of renders, playblasts and caches from Prism to Shotgrid. Prism also handles the conversion of image sequences to videos (if required), which can then be played in Shotgrid/Flow.

User permissions

The new “Studio” plug-in offers cross-project settings with which an admin can manage entire teams. Individual users can be assigned roles that have different authorisations. For example, you can specify which users can create shots and who is authorised to edit the project settings. Admins can assign users to specific projects so that artists only see the projects they are involved in. Very practical, especially for studios with several projects! Environment variables and numerous user settings can also be set centrally for all users using the Studio plug-in.

Launcher

With the Prism Launcher, you can access all the tools and resources that the artist needs for a project from one place – centrally for the entire project or studio. Specific versions of DCCs can be configured per project. Environment variables can also be defined for DCCs so that, for example, you can configure which project should load which version of a particular Houdini plug-in. Website links, e.g. for the studio’s internal documentation and tools such as Media Player, can also be added to the launcher. Everything you always want to have at hand – and that varies from team to team.

Easier setup

Installation and project creation have been greatly simplified: Prism can be installed locally or on a central server. Silent installation for automated setup without user interaction is also easy to do. Plug-ins can now be installed and updated via the Prism Hub. From over 20 plug-ins, you can select the ones that offer the desired range of functions. There are numerous minor improvements when setting up projects. For example, you can now create dozens of shots with hundreds of tasks with just a few clicks. Project presets can be created to create future projects with the desired presets. Even artists without any technical know-how can install and use Prism in just a few minutes.

Take Polyhaven, for example: the library is directly integrated. With the new “Libraries” plug-in, libraries of assets or textures can be organised effortlessly. The free online asset library “Poly Haven” is now directly connected to Prism and HDRIs, textures and models can be downloaded quickly and easily in various formats and resolutions and used in all DCCs. All these new features make Prism 2 the biggest Prism update to date. However, this is just the beginning, as we already have countless ideas for future updates.

Extensibility

One of the most important features in Prism is the ability to create plug-ins to add new features or modify existing ones. This gives studios the ability to customise their workflow: instead of developing an entire pipeline from scratch, they can build on the basic functions of Prism. In Prism 2 it has become easier to write plug-ins – a plug-in can now be created with a single Python file containing fewer than 10 lines of code. There are numerous sample plug-ins in the documentation, and some studios have released their plug-ins to the entire community. Studios have already written their own plug-ins for Gaffer, Fusion, Cinema4D, After Effects, Katana and other DCCs. Smaller plug-ins, such as those for copying files to certain folders or tracking working time on certain tasks, are also popular extensions.
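To give an idea of the scale, a studio extension really can fit in one small Python file. The sketch below is purely illustrative: the class name, the attributes and the commented-out callback hook are assumptions, and the actual interface is documented in Prism's sample plug-ins:

# Hypothetical sketch of a single-file Prism plug-in. The class name,
# the 'core' argument and the callback hook are assumptions for
# illustration only; see the sample plug-ins in the Prism documentation
# for the real interface.
class Prism_MyStudioTools:
    name = "MyStudioTools"
    version = "1.0.0"

    def __init__(self, core):
        self.core = core
        # Assumed hook: run a studio-specific step whenever a scene is saved.
        # self.core.registerCallback("onSceneSave", self.onSceneSave, plugin=self)

    def onSceneSave(self, *args, **kwargs):
        print("Scene saved - run a studio-specific post-save step here")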

Further innovations

There are numerous other new features that we have not covered in detail here, but are nevertheless worth mentioning. These include new integrations for DaVinci Resolve, PureRef and OpenRV. A new media converter enables the conversion of media between different formats and OCIO colour spaces.

Dreams of the future!

There is no doubt that USD and MaterialX will play a bigger role for most studios in the future. New versions of both are released every few months, and Prism will adapt and integrate the new features. Among the long-awaited features announced for upcoming USD versions are the representation of animation curves and keyframes in USD files, which would allow animations to be exchanged between DCCs without baking, and “OpenExec”, which makes it easier to represent rigs in USD. For Prism, the long-term task is to simplify the complicated USD concepts so that artists can benefit from USD and MaterialX without a long familiarisation period.

New DCC integrations

There is a wide range of DCCs used by studios, and our goal is to develop Prism plug-ins for more and more tools. Deciding which DCCs to support next will depend heavily on demand. A Cinema4D plug-in is currently at the top of the list and is expected to be released later this year.

Editorial

Another focus in the coming months will be the exchange of editorial data between different tools. Information about shot lengths, sequence order and so on will be easy to send back and forth between DaVinci Resolve, Nuke Studio, the Unreal Engine Sequencer, OpenRV and others. Multi-shot workflows in Houdini will also benefit from this. This development will be based on the open-source standard OpenTimelineIO to guarantee future-proof and modern functionality.
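OpenTimelineIO already ships with a Python API for exactly this kind of interchange. As a minimal sketch (the file name is a placeholder), reading an edit and listing its clips looks roughly like this:

# Minimal OpenTimelineIO sketch: read an edit exported from an NLE and
# print each clip with its position in the track. The file name is a
# placeholder.
import opentimelineio as otio

timeline = otio.adapters.read_from_file("my_edit.otio")

for clip in timeline.each_clip():
    tr = clip.range_in_parent()          # start/duration within its track
    print(
        clip.name,
        otio.opentime.to_timecode(tr.start_time),
        otio.opentime.to_timecode(tr.duration),
    )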

Linux

Prism 2 is currently only available for Windows. For small studios and freelancers, Windows is the preferred OS due to its ease of setup and user-friendliness. For larger studios, however, Linux is the first choice due to its better performance, stability and configurability. Now that the Windows version of Prism has been released, we are also working on a Linux version – and will have updates for you in the coming months.

Long-term plans

Our goal is and remains to make the work of 3D artists easier with user-friendly tools. Of course we have ideas for the distant future, but these are not set in stone. New DCCs such as EmberGen are under discussion, but in a year’s time there may already be new DCCs that nobody is talking about today. Open Source Software (OSS) will continue to play a central role for Prism in the future. OSS such as OpenEXR, OpenVDB and Alembic are already indispensable and newer OSS such as OpenUSD, MaterialX, OpenTimelineIO and OpenAssetIO will play an increasingly important role in the industry and also in Prism in the future.

Open Source Core

As with Prism 1, the core application of Prism 2 is open source and free for everyone to use, as are many of the plug-ins, such as the integrations for Houdini, Maya, 3ds Max and Blender. The free version can easily be downloaded from the website or from GitHub. Some of the newer plug-ins, such as the OpenUSD integration, require a paid Prism Plus licence, which can be tested free of charge for 30 days. For larger teams, the Prism Pro licence is available, which provides extended support and access to beta functions, among other things. In the documentation you will find guides for the first steps with Prism in general as well as specifically for the USD workflow. The free download and more information can be found at prism-pipeline.com.

In addition to e-mail, we also offer support via our Discord server, where many users share their experiences and workflows with Prism. New functions can be suggested there, and the more people support an idea, the higher we prioritise it on our roadmap. We also offer live demos and training sessions on various functions, as well as the development of customised plug-ins.

Conclusion

We are very proud of Prism 2 – with USD, the links to the tools from the various sectors and the general “usefulness”, we believe that it is a real relief for many artists. We can’t yet say what Prism 3 will look like – but come to our Discord (prism-pipeline.com/discord) or the forum (prism-pipeline.com/forum) and tell us what you’d like to see!

WIND UP – Family drama on a desert island https://digitalproduction.com/2024/08/28/wind-up-familiendrama-auf-einsamer-insel/ Wed, 28 Aug 2024 17:49:00 +0000 https://digitalproduction.com/?p=144221
As part of their second year of study, students from the VFX specialisation joined forces with all other departments at the Munich University of Television and Film to create a short film. This ambitious project combines various disciplines of the HFF and tells a dramatic story using visual effects.

The challenge for the visual effects design was to achieve a photorealistic visual aesthetic that blends seamlessly into a real film with real actors. A Portuguese courtyard, built in the HFF studio in the summer of 2022, served as the central set for the dystopian narrative of a family drama.

by Franziska Bayer, Valentin Dittlmann, Alexander Hupp, Ines Timmich and Hannes Werner

The plot unfolds on an isolated island in the middle of an endless ocean and explores the conflict between the youthful spirit of discovery and the entrenched convictions of the older generation. Although officially listed as “VFX film 02” by the visual effects students, “Wind Up” is a joint project. Screenplay student Tamaki Richter wrote the script, the production was handled by WennDann Film GmbH, which was founded at the HFF, and students from all departments took on additional roles – wenndann-film.de.

The professional expertise of the film industry complemented the work of the HFF students, with Matthias Zentner (velvet.de) acting as director and Moritz Rautenberg (moritzrautenberg.com) as director of photography. The synergy between various educational institutions in Munich should also be noted, with contributions from the Academy of Music, the Academy of Fine Arts for the set design and the August Everding Drama School, some of whose students took on the make-up design. The studio building that provided the basic setting was created as part of the “Entwerfen und Gestalten – Architectural Design and Conception” programme at the Technical University of Munich. The cooperation between the HFF departments and the other art academies not only produced an impressive short film, it also demonstrates the power of interdisciplinary exchange. “Wind Up” stands as a testament to the creative fusion of talents from different disciplines at the HFF and its partner institutions in Munich.

See for yourself:

Brainstorming and script development

What is hidden behind the boundaries of a studio courtyard? A lonely family on an island, in the sea, surrounded by monsters. That was the basic idea submitted by a screenwriting student that would eventually become “Wind Up”. The process of developing the idea was a creative collaboration between several departments at the HFF, which included intensive discussions about the relationship dynamics of the characters, the special significance of a lamp and how a hot air balloon works.

Previz and final shot in comparison.

The core, the family drama, remained an important guideline throughout the script versions. The result is a story of a family that grows beyond its limits in the fight against its shadows. Lino (16) spends his everyday life building up the crumbling walls of his home island to protect himself and his family from the darkness that lurks outside the island and swallowed up his mother 10 years ago. Until he learns that his sister Benedita (18) and his uncle Afonso (52) want to flee the island to find his mother: She’s alive? And the darkness is just a fairy tale told to him by his grandmother Madalena (83) to keep the family on the island. Lino now has to find his way between mysterious lamps, home-made hot air balloons and family lies and make a decision: Will he stay on a crumbling island with his family? Or will he set off in search of his mother – into a world of shadows?

Previz and final shot in comparison.

Animatic & Shot Breakdown

When planning the VFX shots, it was important to break the entire film down into individual shots in advance. The VFX team therefore spent several weeks developing shot plans for the film together with director Matthias Zentner and cinematographer Moritz Rautenberg. The set plans were used to create animatics that could be quickly adapted. In contrast to traditional planning with storyboards, working directly in previs had the advantage that the space of the set could be taken into account. The characters were generated in readyplayer.me, placed in a scanned 3D model of the set and animated in Blender. Different focal lengths and camera movements were tried out this way, and many of the non-VFX shots were later made more dynamic or combined into a single shot during the shoot. As the dialogue was also voiced in the animatics, screenwriter Tamaki Richter was able to quickly determine whether her story was working the way she wanted and make changes accordingly.

Previz and final shot in comparison.

Concept Art

The realisation of the short film “Wind Up” required a well thought-out, consistent visual look that went beyond the set of the Portuguese courtyard to include the entire island. The layout of the courtyard was the starting point for the design of the exterior buildings and the island. An extensive search for references of remote places, coasts and islands laid the foundation for the development of concept art for the location. The aim was to create a doomed island, with buildings slowly decaying and being maintained by the last four inhabitants in an endless battle against decay. The result is a gloomy look of an island that seems to be drowning in fog, illuminated only by a single light source. The VFX students were supported in designing the concepts by concept artist Luis Guggenberger (luisguggenberger.de).

Another crucial aspect was the design of the “shadows”, which embody Lino’s fears in the film. Dark, abstract illusions that gnaw at the walls of the island and drive its decay were designed as their visual representation. To realise this concept, ink washes on paper were filmed on the first day of shooting in various combinations with water, alcohol and glycerine. In post-production, these black-and-white shots were used as masks in Nuke to make the shadows move across the walls of the island. In addition to the design of the island and the shadows, the so-called “barkonaut” was also designed. It is a cross between a small rowing boat and a hot air balloon, as escaping across the ocean is not possible with a boat alone due to the stormy waves.

The composite appearance of the vehicle, made from washed-up components, old planks, nails, fabric patches for the balloon and improvised assemblies in the workshop, gave the “Barkonaut” an authentic character. Artificial intelligence was used to generate construction plans for the barkonaut and maps of the remote island. The images were plotted onto semi-transparent paper and patinated with tea to give the maps an aged look. The door to a secret workshop is a painting of the island, which was made especially for the film by painting student Elisaveta Bogushevskaya.

Sculpting the island

As the final island model served as a reference for the above-mentioned painting in the film and some other parts of the set design, the island had to be ready several weeks before shooting began. The sculpting of the island was done entirely in ZBrush. Following the concepts, a rough silhouette was defined first, then smaller details were worked out. With the exception of the details in the rocks, the entire island was sculpted completely by hand and without procedural aids.

As the island model in ZBrush ultimately had a very high resolution of over 4 million active points, a lower-resolution duplicate of the base mesh was created, which contained fewer details and sped up the subsequent texturing process. This mesh required less computational work in programmes such as Houdini or Substance Painter and thus made the entire workflow easier. The low-resolution mesh was UV-unwrapped in Blender, and the high-resolution mesh was then projected onto it in Substance Painter so that all the details of the original model were retained in the generated maps.

Modelling & Texturing

The story takes place on an island threatened by decay. Wind and water have reduced the piece of land, which was once richly populated, to a minimum. A handful of dilapidated houses is all that remains on the island, and even these will not last much longer. The house assets were divided into five complexes before filming and arranged based on the set scan. An attempt was made to create a recognisable silhouette by varying the heights of the buildings and adding a tower with a broken-off top. After the shoot, the models were refined in Blender and customised with kitbash assets.

As the houses almost only appear in extreme long shots, it was possible to keep the models low-resolution with only around 100,000 polygons. The roof tiles, for example, were not modelled individually; the roofs were simply flat surfaces onto which roof tile textures were projected. However, for two shots in which one of the house walls was used as a set extension, a detailed asset with 338,394 polygons had to be modelled. When texturing in Adobe Substance 3D Painter, the wall colours found on the set were adopted and digitally patinated using various layers. Stains and elements that contributed to the worn, dirty look were painted by hand.

The digital colour and structure of the barkonaut’s balloon also had to match that of the real equipment. The basket could be designed more freely, as the real basket is hardly ever seen in the film, but it also had to look worn and dirty. To achieve this, the layers were first projected using smart masks and then adjusted by hand. For the most part, the software’s own PBR textures were used, but textures from Textures.com were used for the roof tiles.

Shooting preparations

Careful planning is extremely important for a production with around 60 people involved. Especially for the smooth realisation of the VFX shots, it should be as clear as possible in advance what will be seen and what to pay particular attention to during the shoot. Essentially, the shooting preparation can be divided into two parts: firstly, the internal VFX coordination of the various tasks, and secondly, the communication of VFX-relevant information to the other departments.

The artist Elisaveta Bogushevskaya creating the island painting.

The internal preparation ran in parallel and in close collaboration with the director and the DOP. Once the script had been developed far enough that no major changes were foreseeable, an initial breakdown of the VFX shots could be made. Thanks to the existing previz, this breakdown was quite accurate and there were few surprises, as it was easy to see in the virtual set whether the shots could be realised as planned. Based on the breakdown, all assets and shots were created in Shotgrid, divided into individual tasks and distributed among the team. This resulted in an extremely precise schedule, which enabled a largely smooth realisation. Working backwards from the deadline, it was easy to see when which tasks had to be completed in order to leave enough time for subsequent tasks and still meet the deadline. The breakdown, assets, shots and tasks were regularly revised whenever the shot list or the script changed.

Production designers Sophie Horn and Afra Bruckner together with Franziska Bayer and Ines Timmich during the set design of Afonso’s workshop.

Communication with the other departments involved was essential to ensure that the shots could actually be realised as planned. The above-mentioned painting of the island was also created as part of the set on the basis of an early rendering of the 3D island, which ensured a consistent and coherent depiction of the island.

Plan B’s production designers were able to make targeted additions to the existing set construction using the virtual model, which meant that set extensions could be largely avoided and the real and virtual sets could be made to match. The special effects department, which was responsible for the destruction of the lamp as well as the inflation of the balloon, also had a virtual simulation to fall back on. Based on the simulation, which clearly visualised the inflation process, the set could be built in such a way that the balloon could actually be inflated with the help of a wind machine and the actors could enter the basket of the balloon.

Set supervision and DIT

The combination of CGI and live-action film presents a few more difficulties than the purely animated film that the VFX students produced in their first year. In addition to thorough planning, the realisation is one of the most important steps. The VFX supervisor is responsible for the interface between the director, the camera department and the final post-production.

All the actors were photographed in T-pose from each side to create digi-doubles for the full CG shots.

The aim is to plan and prepare the VFX shots as accurately as possible so that the post-production schedule can be adhered to. The focus is also on recognising and preventing potential problems that would cost a lot of time and money later on. Another important task of the supervisor is data acquisition on set.

There are many tools that provide helpful information during the shoot and make realisation easier later on. Camera and lens data are extremely important. This information must be known so that a seamless transition between VFX and the original material shot is possible. For Wind Up, a LiDAR scan was also used to improve the tracking of the planned set extension.

The entire set was scanned with a Lidar scanner so that the digital 3D version of the set could be used for matchmoving, for example.

The scan was created directly after the shot was filmed. The key is to capture as much important data as possible at the right moment without holding up the entire shoot. The supervisor is also responsible for finding the best compromise between the valuable time on set and avoidable additional work in post-production. The students were supported by Prof Jürgen Schopper, 3D mentor Berter Orpak and Pipeline TD Jonas Kluger throughout the filming period.

The different “roles” were rotated daily among the five VFX students so that everyone could gain an insight into the different activities. They also took on the tasks of the DIT (Digital Imaging Technician). Thanks to a mobile workstation, not only could the backups be made, but the dailies could also be rendered directly. Another advantage was the ability to process 3D scans on site and create slap comps to identify potential problems.

Simulation

An island in the water and a balloon that inflates and flies away were the two simulation tasks. Several full CG shots were planned, in which the island was to be seen surrounded by water, partly with an expanding and flying balloon. The real balloon was measured, photographed and recreated from these references in Blender.

Only the side of the balloon’s basket facing the interior was built from the real basket; the rest was modelled based on our own concepts. The idea was a flying lifeboat, the so-called “Barkonaut”. This model was revised based on the advice of simulation specialist Felix Hörlein in order to optimise the resolution and distribution of the topology for the subsequent simulation in Houdini. The opening through which the balloon was to expand was measured on set and recreated virtually as collision geometry to ensure that the CG shots matched the real filmed shots well. After several versions and adjustments of various parameters in Houdini, the fabric finally behaved as desired and the simulation looked convincing.

The loneliness is mainly told through the huge, empty ocean in the background. The interaction between the water and the island was one of the biggest challenges in achieving a realistic end product. Like the balloon, the water was also simulated and shaded in Houdini. The latest FLIP solver was used to calculate the base water.

For high efficiency over the entire process, the simulation area was limited to a small region around the island. This allows a fast workflow even when scaling up to several million particles. Within this region, the input parameters were extracted from the original ocean and used to drive the simulation. The level of detail of the water is largely carried by the whitewater, which is calculated as a function of the water simulation. Not only the water but also the wind influences the dynamics of the spray. To make the behaviour even more realistic, an air velocity field was simulated around the island so that turbulence in the air is also taken into account.

Lighting, shading & rendering

The aim was to use lighting to create an atmosphere on set that emphasised a cramped and oppressive feeling. To ensure that this mood is not broken by the full CG shots, their lighting had to be matched to it. While the lighting contributes a lot to the overall mood, it also plays a crucial role in the realism we were aiming for. To give the impression that computer-generated elements are part of the real world, light and shadow must fall on them correctly. Careful lighting ensures that the island is seamlessly integrated into the scenes and matches the real lighting conditions, some of which are provided by the stock footage.

The professional help of CG supervisor Frank Dürschinger, who supported the students from the basic principles of lighting through to the individual shots, was also important here. However, light alone is not enough to create a photorealistic image. The interaction with the light is influenced by the shaders. As the entire process of lighting, shading and rendering took place in Houdini, the students used MaterialX shaders to guarantee a high degree of flexibility between the programmes. To increase the level of realism, the texture of the island was combined with several PBR materials.

The resulting improved level of detail creates a realistic look even in closer shots. A modified version of the Houdini shader was used for the water and spray. To maximise the scope for compositing, the image was split into several layers and rendered individually. This makes it possible to adjust areas such as the whitewater, the island or the balloon afterwards. Individual AOVs were also calculated in the respective layers, for example to make the light in the balloon flicker or to re-insert the reflection in the water. The use of the Karma render engine enables a very efficient and fast workflow overall, and the quick feedback was also important for the lighting work.

Compositing

Lino’s anxiety, which is symbolised by the shadows, was visually represented using ink washes that were filmed practically. In Nuke, the recorded elements were used as masks to darken specific areas. To break up the fluid look and give the “shadows” a creepy, organic quality, the ink shots were also distorted with noise.

To keep objects and actors in the foreground, they were rotoscoped in the scenes where the shadows spread. For moving-camera shadow shots, the camera movement was tracked so that the shadows could be integrated; for static shots, minimal camera movement was added in post-production to increase authenticity.

In the long shots of the island, the students combined the rendered island model with real ocean footage to create a more fitting atmosphere. The overly friendly sky of the original footage was replaced with more dramatic matte paintings to emphasise the sombre mood.

The black and white levels of the digital image were then adjusted to achieve a seamless integration with the original shot. The digital island blended more realistically with the real ocean through simulated white water effects and supporting VFX elements such as fog. Additional effects such as lens distortion and chromatic aberration contributed to the fusion of CGI and real footage.

For scenes showing a visible exterior wall of a building on the island, the students extended the physical set digitally. With the help of 3DEqualizer expert Ando Avila, the camera movements of the crane shots were tracked and reconstructed in digital space. This enabled a correct representation of the digital set in conjunction with the original footage, supported by additional VFX elements such as fog and particles for seamless integration.

In the film sequence where Lino’s fear reaches its climax and he enters a panicked state in which he perceives his grandmother as a demon-like being, her altered appearance was also supported with visual effects. The eyes were tracked in Nuke, rotoscoped and coloured black, while the real highlights were retained to preserve the three-dimensional appearance of the eye.

In addition to the main tasks, oversights that happened during the shoot were also addressed. This included removing the special effects operator and his leaf blower from the background or adding a forgotten oil lamp to the barkonaut’s burner. The finished VFX shots were delivered by the HFF students to the post-production company Pharos. There, senior colourist Andreas Lautil not only gave the entire film the finishing touches with his cinematic colour grading, but also took care of integrating the VFX shots. For the compositing tasks, the students received support from Nuke expert Martin Tallosy.

Rodolfo Anes Silveira during the sound mix.

Soundtrack & sound mixing

The musical and tonal layer of the film was extremely important, as it not only emphasised the moods of the characters in all the scenes, but also helped to make the location and supernatural events such as the shadows more believable and real. The film’s music was composed by film composer Victor Ardelean. As part of his final thesis at the Munich University of Music and Theatre, he was even able to record parts of the final composition with the Munich Symphony Orchestra.

Rehearsing and recording with such a large and renowned orchestra was a unique and unforgettable opportunity, not only for the composer but also for the rest of the team. The final piece “Ballonflucht” in particular has an epic orchestral sound that emphasises the final scene and its hopeful mood.

The tonal layer had two main tasks: the acoustic unification of the scenes shot entirely in the studio with the narrative location on an island in the sea, and the sonic realisation of the living shadows. Since the entire film had been shot in a studio building, the soundscape of an island surrounded by the roaring ocean had to be added later. Artistic collaborator Dr Rodolfo Anes Silveira took over the sound mixing here and added subconscious sound elements in addition to the obvious sounds. In addition to the sound of waves and the occasional screech of a seagull, you can practically feel the breaking of the waves on the rocks of the island as a deep bass rhythm in your own chest. The acoustic design of a short film is often a creative challenge, especially when the question arises: how can shadows sound at all?

The answer to this proved to be subtle and yet effective. Whispering noises, crackles and pops were mainly used to shape the acoustic identity of the shadows. The audience should not only see the shadows, but also literally feel their presence. The quiet but haunting sounds meant that the shadows were no longer just an embodiment of fears and evil, but also a reminder of actual dangers such as real cracks in the walls.

Blender: An upgrade for our particle system – We lay pipes! https://digitalproduction.com/2024/01/02/blender-an-upgrade-for-our-particle-system-we-lay-hoses/ Tue, 02 Jan 2024 18:49:00 +0000 https://digitalproduction.com/?p=144232
In issue 23:04|05 we learnt how to create a particle system with the new Simulation Nodes in Blender 3.6. In Blender 4.0, an interesting new function has been added that allows us to connect a series of points via curves. That would be a nice feature upgrade for our custom build. In addition, Cycles can now do light linking, which allows us to set the scene perfectly.

There are features in Blender that users have been waiting decades for. Light linking is one of them, even if users of other programmes find it hard to believe. For Blender users, however, this really is a new feature that Cycles has been given. We want to try it out together with the particle system from Simulation Nodes, which we built in issue 23:04|05. But first we’ll use another new feature of Blender 4.0, namely the ability to connect a series of points with curves. The result adorns the cover of this issue.

No longer quite (so) tight

First download the result of the Simulation Nodes workshop and open the file – is.gd/simstrings. If you start the animation with the space bar, you will notice how close together the particles are. A tube will later be laid through each of these particles, which at this density would only result in a lump. Therefore, first reduce the density in the modifier panel to 100.

String of pearls: By no longer changing the seed from the “Distribute Points on Faces” node in each frame, our particle system takes on the appearance of strings of pearls.

The seed has to go

The particles are now much less dense. So that we can make threads out of them later, they should not appear randomly on the surface of the object, but always in the same place. This allows us to create a thread-like look even without the conversion to curves. Go to the Geometry Nodes workspace and make sure that the “Fire Particle System” node tree is open in the node editor. Look for the “Distribute Points on Faces” node at the bottom left of the node tree and remove the connection in the Seed socket. If you now play the animation, the particles look like strings of pearls that slowly disintegrate. Instead of the changing value, you can expose the seed as a parameter in the modifier panel by dragging it into the empty socket of the Group Input node.

Points to curves

At the other end of the node tree, the generated points flow out of the “Simulation Output”. At this point, we can convert them to curves. Add a new node Points -> Points to Curves and connect it to the Geometry output of the Simulation Output node and the Geometry input of the Group Output node. Threads now appear in the viewport instead of points. We leave the Set Material node in the tree; we can use it later to display the particles as well.

Curves: The Points to Curves node can be used to connect the particles to curves. This turns the particles that were emitted in a frame into a curve.

Curves to meshes

The curves now appear in the viewport, but not yet in the render, as they do not yet have a surface. This is handled by the node Curve -> Operations -> Curve to Mesh. Place it between Points to Curves and Group Output. The curve has now become a mesh, but it consists only of individual edges. For a proper surface, we need another curve as the profile. Click on the Profile Curve socket and drag out a new connection. A search field appears when you release the mouse pointer. Search for a circle here; thanks to type-ahead find, it is enough to enter “ci” and Curve Circle -> Curve appears as the second entry, which you select. A new node now appears that creates a circle, which then acts as the profile for the curves created from the particles.
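If you prefer to script such setups, the same chain can also be wired up with a few lines of Python. The following is a minimal sketch that assumes the node tree name from the workshop file and the node identifiers as we know them in Blender 4.0 (GeometryNodePointsToCurves, GeometryNodeCurveToMesh, GeometryNodeCurvePrimitiveCircle); double-check them via the Python tooltips if your build differs:

# Sketch: build the Points to Curves -> Curve to Mesh (+ profile circle)
# chain via Python instead of the node editor. The tree name comes from
# the workshop file; node type identifiers and default node names are
# the Blender 4.0 ones as we understand them.
import bpy

tree = bpy.data.node_groups["Fire Particle System"]
nodes, links = tree.nodes, tree.links

pts_to_curves = nodes.new("GeometryNodePointsToCurves")
curve_to_mesh = nodes.new("GeometryNodeCurveToMesh")
profile = nodes.new("GeometryNodeCurvePrimitiveCircle")
profile.inputs["Resolution"].default_value = 8

sim_out = nodes["Simulation Output"]     # default node names assumed
group_out = nodes["Group Output"]

links.new(sim_out.outputs["Geometry"], pts_to_curves.inputs["Points"])
links.new(pts_to_curves.outputs["Curves"], curve_to_mesh.inputs["Curve"])
links.new(profile.outputs["Curve"], curve_to_mesh.inputs["Profile Curve"])
links.new(curve_to_mesh.outputs["Mesh"], group_out.inputs["Geometry"])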

Resolution: We use the Curve Circle node to create an envelope for our curves. We expose its resolution in the modifier UI, shown on the right of the image.

Resolution

There is now a lot going on in the viewport, as a huge amount of geometry suddenly wants to be displayed. You can put a stop to this by reducing the resolution in the Curve Circle node to eight. But perhaps you want to reduce the resolution a little further while experimenting and then increase it again for the final rendering? Switching to the Geometry Nodes workspace each time and searching for the right node in the node editor may not be the most convenient way to do this. It is therefore a good idea to expose this parameter in the modifier UI. Drag out a new node connection as you have just done and search for Group Input. A Group Input node appears in which all sockets are hidden except for the newly created resolution. The Blender interface is full of little surprises that make everyday work easier.

Node Group Assets

The curves currently have hard corners at each particle, which has a negative effect on the shading and may not be the style everyone wants. We need something to round them off, like the Subsurf modifier for meshes. Such a tool is now supplied in Blender as a node group asset with the new hair assets. We take advantage of the fact that hair and curves are almost the same thing in Blender; the corresponding assets actually all work on curves, so we can also use them with our setup. Add a node Hair -> Deformation -> Smooth Hair Curves and place it between Points to Curves and Curve to Mesh.

The wild curves

The result probably looks pretty wild: none of the curves are in place anymore. This is due to the Preserve Length setting. Switch it off and our threads are all back in the right place, albeit slightly rounded. The Iterations value determines how strong the effect is. One was enough for our cover; the more you use, the more rounded the curves become.

Particles: We can use Smooth Hair Curves to round off the threads slightly and achieve a much softer shading. We can use Join Geometry to blend in the particles again.

Bringing back the particles

As we are already simulating the movements of a particle system for the threads anyway, we can use them at the same time by displaying them as points again. This is why we did not delete the Set Material node earlier, but merely disconnected it. Add a node Geometry -> Join Geometry between Curve to Mesh and Group Output. Also connect the output of the Set Material node to the Join Geometry node. The particles now appear as point objects in the viewport and should already have the appropriate material in the render preview.

Steel pipes: The material for the threads must be set in the Geometry Nodes; for clarity we give it its own material slot. With a metallic value of 1.0, the curves are no longer lit by the environment, because the world is switched off for glossy shaders in our start file.

Steel tubes

To give the threads a material as well, add another material slot to the Particle Nodes Container object and create a new material there with Metallic at 1.0. However, it must first be assigned in the Geometry Nodes so that it also appears on the tubes. Duplicate the Set Material node and place it between Curve to Mesh and Join Geometry. The threads should now appear very dark again. This is due to a special feature of the source file: in this file, the world is invisible to glossy shaders, which produces an interesting effect, as the points’ shader has a diffuse component and they therefore appear as if they themselves were glowing, although not as evenly as would be the case with a plain emission shader.

Light Linking

The fact that the world is not visible in shiny reflective surfaces is a simple form of light linking and has been present in Cycles from the very beginning. However, this is shader-based and therefore very generalised. In Blender 4.0, it is now possible for the first time to limit the influence of light sources to the objects in a collection. We would now like to use this so that two area lights on the left and right illuminate only the threads, particles and logo, but not the floor. To do this, create a new collection in the Outliner and drag the DP logo and the Particle Nodes Container into it.

Light Linking is somewhat hidden in the Shading Panel in the Object Properties.

Orange and Teal

Then create another new collection with two area lights in it, set their shape to Rectangle in the Object Data Properties and set Size X to 3.0. Align the two area lights so that they shine on the scene from the left and right, and give them two contrasting colours, e.g. the famous combination of orange and aquamarine. You should make the cold light source much stronger than the warm one, e.g. 200 watts versus 50 watts.

Still well hidden

To restrict the illumination of the two area lights to the logo, particles and threads, select, for both lights, the collection containing these objects under Light Linking in the Shading panel of the Object Properties. The lights now no longer illuminate the floor, which draws the viewer’s attention to the particle action.
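If you script your scenes, the same assignment can be made in Python. A minimal sketch, assuming the collection and light names below (the light_linking.receiver_collection property is the Blender 4.0 Python API as we understand it):

# Sketch: assign the light-linking receiver collection to both area lights.
# The collection and light names are placeholders for whatever you named
# them in your scene; verify the 'light_linking.receiver_collection'
# property in the Python console if your build differs.
import bpy

receivers = bpy.data.collections["LightLinked"]            # placeholder name

for light_name in ("AreaLeft", "AreaRight"):                # placeholder names
    bpy.data.objects[light_name].light_linking.receiver_collection = receivers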

Fade-in also for threads

However, there is one last detail worth mentioning. The particles fade in our example file because we built our particle system in the last workshop so that it saves the age of each point as a value between 0.0 and 1.0. We can also access these values for the threads. In other words, the curves can also be faded in and out.

Mastered with flying colours

Go to the Shading workspace and select the Particle Nodes Container object. Select the Particles Fading Out material in the material slots and select the three connected nodes Attribute, Invert Colour and Colour Ramp in the Shader Editor. Copy them using Ctrl+C, then select the material that you have given to the threads and press Ctrl+V in the Shader Editor. Now that you have copied the nodes, connect the output of the Colour Ramp node to the Alpha socket of the Principled BSDF. The curves will now fade in and out, and you have mastered the technical part of the workshop with flying colours.
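The final connection can also be made in Python. A small sketch, assuming Blender's default node names and that the thread material sits in the object's second material slot (check the names in the N-panel if they differ):

# Sketch: connect the copied Colour Ramp to the Alpha input of the
# Principled BSDF. The slot index and node names are assumptions based
# on the default names Blender gives these nodes.
import bpy

mat = bpy.data.objects["Particle Nodes Container"].material_slots[1].material
nt = mat.node_tree
ramp = nt.nodes["Color Ramp"]              # the copied Colour Ramp node
bsdf = nt.nodes["Principled BSDF"]
nt.links.new(ramp.outputs["Color"], bsdf.inputs["Alpha"])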

Let off steam

Now it’s time to let off steam. For the cover image, I changed the particles’ direction of movement so that they rise upwards. You can also change the direction in the Vector Add node in the Simulation Zone. And at this point, think about how you could make the setup even more user-friendly – for example, by also exposing the direction of movement, or by adding an auxiliary object that specifies the direction and “wind force”.

RADiCAL Motion Capture: The finished effect is particularly suitable for visualising movements; this example uses the RADiCAL service for motion capture with a smartphone.

Conclusion and outlook

Blender 4.0 brings some new features, including the eagerly awaited light linking and new Geometry Nodes. Both were combined in this workshop. But there are ways to go further. For example, the threads are currently generated at each frame like tangles; with a few more nodes, they could be displayed like strings of particles or like growing hair, roughly perpendicular to the current direction of movement. This method is particularly suitable for motion capture recordings, as it allows a motion path to be created for any point on a character. I used this method for the Udon asset for the RADiCAL Blender add-on. RADiCAL is a service for extracting motion data from simple video recordings or livestreams.

For those who want to familiarise themselves with geometry and simulation nodes on site, there will be a series of workshops at this year’s Blender Summer School, which takes place from 26 to 28 July in Mannheim. Impressions from last year and the registration for next year can be found here: blender3dschool.de
