Cloudy to fiery – Volumetric effects in Cinema 4D (Part 2)
In this two-part series of articles, we provide the experienced Cinema 4D user with in-depth insights into the creation of volumetric effects with the onboard tools of C4D – without any plug-ins, particles or PyroCluster. In this second part of the series, we first deepen our knowledge of volumetric texturing of visible lights and then turn to a Mograph-based technique.
Sun from the ZDF "Terra X" episode "Eine Frage der Zeit" ("A Question of Time", https://vimeo.com/renderbaron/txeinefragederzeit): volumetric lights with corresponding volumetric textures create complex-looking effects – with predictable results and without sleepless nights.

In the first part of this article series, we learned that a visible volumetric light can be used as an “aquarium” for volumetric shaders such as 3D noises and 3D colour gradients. With such a light container, we are able to create thin clouds or even complex self-luminous structures such as stellar nebulae. With the same technique, however, we are also able to create complex animated structures such as an animated solar corona with its characteristic eruptions. In the following, we will take “the magic of visible lights” one step further and create a realistic, fiery and detailed solar corona.

Case study: ZDF "Terra X", "A Question of Time" (sun)

Looking at contemporary science fiction films or VFX-heavy TV documentaries, one might assume that depicting a realistically animated sun is a classic scenario for particles and simulations – the kind of task that causes many sleepless nights in a VFX studio.
The intro sequence of this project for the TV documentary series ZDF "Terra X" (see cover picture) stages the sun as a place of stellar nuclear fusion, but uses only the techniques we have learnt so far – and at this point we can sleep soundly again.
If you take a look at NASA footage of the sun, you will realise that the surface of the sun contains a lot of visual details that are comparable to C4D’s noise shaders. For example, the so-called granulation of the sun’s photosphere can easily be reproduced by a noise shader called “Voronoi 3”, which is distorted in a distorter shader with a VL noise (image 01).

Image 01: Solar phenomena and their equivalents in C4D: Granules (left) can be easily created in C4D by distorting a Voronoi 3 noise with a VL noise in a distorter shader (photo granule: NASA, used under Creative Commons licence)

If you distort the result again in a layer shader using the layer effect "Distorter" with a turbulence noise, you get a structure that comes quite close to the real model (image 02).

Image 02: Granule finalised: Put the finished Distorter shader into a layer shader, apply the Distorter layer effect for another slight distortion and colour the whole thing with the Colorizer layer effect

The same applies to phenomena that are obviously volumetric, such as corona, solar flares, etc.: visible volumetric light can be used as an “aquarium” for noises and 3D colour gradients, with the mathematical functions behind the noises always resembling their natural models.

Simple components

As is so often the case with visual effects, the complexity of the final sun results from superimposing and combining various simple elements: for each visual phenomenon – light rays of different sizes, eruptions, etc. – we use just one point light, whose visible light extends more or less beyond the solar body itself. The astonishing thing is that the complexity of the final result is achieved with just eight volumetric point lights (Fig. 03).

Image 03: Only eight volumetric point lights with corresponding volumetric textures mimic complex eruptions. It couldn’t be simpler

Concentric masking

Corona, light rays and eruptions are thus applied as textures to volumetric point lights using the technique we are familiar with. As the “Visibility” tab of the respective point light contains the parameters “Inner distance” and “Outer distance”, volumetric effects can be faded out concentrically from the surface of the sun into space (Figure 04).

Figure 04: The inner radius of the point light source corresponds approximately to the radius of the sun. The outer radius determines the height of the visible effect beyond this, i.e. up to where rays and eruptions should be visible above the surface of the sun, or decrease to zero.
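If you prefer setting scenes up via scripting, the following is a minimal sketch of such a volumetric point light using C4D's Python API, run from the Script Manager (where `doc` is predefined). The radii are placeholder values for a sun of radius 1,000 cm; the parameter IDs are standard constants from the light object description, but verify them against your C4D release.

```python
import c4d

def make_corona_light(inner=1000.0, outer=1300.0):
    """Create a point light whose volumetric visibility acts as an 'aquarium'."""
    light = c4d.BaseObject(c4d.Olight)
    light[c4d.LIGHT_TYPE] = c4d.LIGHT_TYPE_OMNI            # point light
    light[c4d.LIGHT_VLTYPE] = c4d.LIGHT_VLTYPE_VOLUMETRIC  # visible volumetric light
    # Concentric masking: the effect fades between inner and outer distance.
    light[c4d.LIGHT_VISIBILITY_INNERDISTANCE] = inner       # roughly the solar radius
    light[c4d.LIGHT_VISIBILITY_OUTERDISTANCE] = outer       # height of the visible effect
    return light

doc.InsertObject(make_corona_light())
c4d.EventAdd()  # refresh the scene and UI
```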

In order to have more control over this behaviour, we also apply a colour gradient in 3D spherical mode as a mask (image 05). This colour gradient has the same radius as the point light's "Outer distance" parameter, but has the advantage of fine-tuning the spatial fading of the effects while also offering the option of adding turbulence (image 06).

Image 05: A spherical 3D gradient with the radius of the outer radius has a quasi-masking effect using the Multiply layer mode and refines the reduction with a slight turbulence.
Image 06: For the spherical 3D gradient, it is important to set the reference system to Object

Solar flares

This effect may look very complex, but it is actually quite simple: it consists of a visible volumetric point light that contains a large Naki noise in Object space as a texture. The Naki is given a slight internal movement via the "Animation speed" parameter. It sits in a layer shader, where it is masked from the sun's polar caps by a 2D colour gradient and coloured by a Colorizer effect (Figure 07).

Image 07: Shader for larger sun eruptions: At the bottom, a layer folder with a 2D gradient masks the overlying 3D noise from the polar caps of the sun. At the top, a spherical gradient fades the whole thing turbulently into space

The scaling of this Naki noise is then animated over time, meaning that the noise grows concentrically from the centre of the sun. This alone creates the illusion of large plasma-like structures.
In order to have more control over how the growing Naki is faded out in space, I masked it with a static but highly turbulent colour gradient in 3D spherical mode, which likewise uses Object space (image 08). The result: solar flares emerge from the sun and are faded out turbulently in space.

Image 08: Comparison of the effect of the spherical 3D gradient: on the left, the boundary of the visible light simply cuts off the noise texture; on the right, turbulent fading takes place

The surface of the sun

Although this article is about volumetric effects, let’s take a quick look at the shader setup for the actual surface of the sun. Everything takes place in the glow channel of the sun material or in a layer shader placed there: noises, colour gradients and layer shader effects are stacked on top of each other with different layer modes and opacity levels. In addition, several layers are used to mask the layer above (image 09).

Image 09: The surface of the sun: A well-filled layer shader with clearly named noises, gradients, layer effects and layer folders

In the displacement channel, three noises in a layer shader deform the high-res geometry of the sun to match the details in the glow channel (image 10).

Image 10: In the displacement channel, two noises and a gradient mask provide a slight deformation of the sun to match the details in the glow channel.

Large plasma arcs were modelled procedurally: spheres were distributed as Mograph clones over the surface of the sun and arranged appropriately with a shader effector. The plug-in pCONNECTOR (tcastudios.com) was then used to connect these clones, and the resulting spline object was placed in a sweep object. The resulting tubular connections between the clones were then bent into arcs using a shader-based displace deformer (Figure 11).

Figure 11: Procedurally modelled plasma arcs: the pConnector plug-in creates connections between clones, a displace deformer generates the arc structures

Diving into the sun

At 00:11 in the animation, the moving camera enters the volume of the sun. This transition takes place in compositing with a cross-fade to a sequence that is technically similar to the stellar nebula from the last article. It is important that, for this transition, the outside camera's final movement matches the inside camera's initial movement in direction and linear speed.

Fast rendering with standard renderer

Sample-based volumetric light has been part of C4D for a small eternity, namely since Release 5 (1998). This old sample type is of course handled differently by C4D than the modern samples for area shadows, blurred reflections, ambient occlusion etc. Consequently, this scene renders much faster in the older standard renderer than in the more recent physical renderer.
In addition, the standard renderer has been accelerated by Intel Embree technology since release 19 (2017). Embree is a ray tracing library that is optimised for the latest Intel processors and significantly speeds up rendering – in some cases by up to 100% compared to Release 18. Embree breathes new life into the good old standard renderer, so to speak.
But that's not all: noises and procedural shaders come with high-quality SAT texture interpolation as standard, so no additional anti-aliasing is needed when rendering with the standard renderer. To put it in a nutshell: the standard renderer renders this scene at the speed of light – especially considering the volumetric complexity involved here.

Fluffy clouds with Mograph

The approach described for stellar nebulae and solar flares is based on the volume of a visible volumetric light source as an “aquarium” for 3D noises. While the use of such a light container is quite simple and straightforward, it has one major drawback: all structures and effects created are actually visible light without true opacity and without the possibility of casting a shadow. A cloud created in this way will therefore never cast a shadow on the ground or on itself.
So to create a shadow-casting, fluffy cloud for a sunny summer sky, we have to dive into a technique that is likewise based on volumetric shaders but uses a different container: a Mograph cloner.

Mograph – procedural cloning and animation

Mograph, C4D's proprietary toolset, provides motion designers with an extensive range of powerful tools for procedural, non-destructive animation. A core function of Mograph is the cloning of objects according to certain rules, e.g. along a spline, in certain arrangements or on the surface of an object. Effectors – random, shader or time effectors, among others – bring variation and movement into the cloning system, and with the Fields of Cinema 4D R20, there are virtually no limits to imagination and complexity.
Whilst Mograph offers a huge range of exciting functions and thus invites you to play and experiment, we will concentrate on what is probably its most boring function: cloning simple planes (i.e. single-polygon plates) into a linear stack. This will be the container for our 3D noises: a stack of cloned planes, each with just one polygon. Before we get into the cloud-creation technique based on this, let's see what you can do with it.

Case study: ZDF “Terra X”, Scotland

This project for the TV documentary series ZDF "Terra X", episode "Scotland – the Myth of the Highlands" (images 12, 13, 14), shows the geographical origin of Scotland and England. The volumetric clouds in the close-up shots are created using the Mograph-based technique, which we will now look at in the form of a short tutorial.

Image 12: ZDF “Terra X” episode “Myth of the Highlands” (https://vimeo.com/renderbaron/txschottland): fluffy volumetric clouds using Mograph cloner as a container for volumetric shaders
Image 13: ZDF “Terra X” episode “Myth of the Highlands” (https://vimeo.com/renderbaron/txschottland): fluffy volumetric clouds using mograph cloners as containers for volumetric shaders
Image 14: ZDF “Terra X” episode “Myth of the Highlands” (https://vimeo.com/renderbaron/txschottland): fluffy volumetric clouds using mograph cloners as containers for volumetric shaders

Tutorial – fluffy disc clouds

To not only understand the technique described below but also get your hands on it, here is a short tutorial.

Creating an environment

Create a new scene and add a simple sky object under Main menu > Create > Environment. Create a new material, deactivate all material channels, activate the glow channel and create a nice 2D-V colour gradient in typical sky colours, perhaps as shown in image 15.

Image 15: A colour gradient in the glow channel creates a simple texture for the sky object

Place a plane object with 1 x 1 segments and dimensions of 1,200 x 1,200 cm at a world Y-position of -100 cm and apply a new material with a bluish colour to it. This slab will serve as our ground plane. Create a camera and view the scene from a flat top view. The empty scene should look something like image 16.

Image 16: Starting point for cloud building: our scene with sky object and ground plane

Create an infinite light as the sun and apply raytraced shadows (hard) for the time being (just to achieve a quick shadow effect, no matter how unrealistic it is).

Stacked layers

Create a Mograph cloner under Main menu > Mograph > Cloner. Assign a copy of the ground plane to the cloner: the plane is immediately cloned three times along the cloner's Y-axis. Click on the cloner in the Object Manager and view its settings in the Attribute Manager: you will notice that the cloning mode is set to Linear by default, just as we want it. First set the number of clones to 10 and switch the mode from Per Step to Endpoint. Set the distance under P.Y to 50 cm. This distributes all cloned planes along the world Y-axis within this range (Fig. 17).

Image 17: Basic principle of the clouds: Plates as mograph clones serve as volume samples
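For completeness, here is a scripted sketch of the same plane stack, assuming the usual MoGraph parameter IDs (1018544 is the Cloner's object ID; the numeric value for the Endpoint mode is an assumption – verify it by dragging the parameter into the Script Log):

```python
import c4d

# A minimal scripted version of the plane stack (Script Manager; `doc` is predefined).
plane = c4d.BaseObject(c4d.Oplane)
plane[c4d.PRIM_PLANE_SUBW] = 1
plane[c4d.PRIM_PLANE_SUBH] = 1
plane[c4d.PRIM_PLANE_WIDTH] = 1200.0
plane[c4d.PRIM_PLANE_HEIGHT] = 1200.0

cloner = c4d.BaseObject(1018544)  # MoGraph Cloner object ID
cloner[c4d.ID_MG_MOTIONGENERATOR_MODE] = c4d.ID_MG_MOTIONGENERATOR_MODE_LINEAR
cloner[c4d.MG_LINEAR_COUNT] = 10
cloner[c4d.MG_LINEAR_MODE] = 1  # 1 = Endpoint, 0 = Per Step (value assumed; verify)
cloner[c4d.MG_LINEAR_OBJECT_POSITION] = c4d.Vector(0, 50.0, 0)  # clones span 50 cm in Y

plane.InsertUnder(cloner)
doc.InsertObject(cloner)
c4d.EventAdd()
```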

Creating a cloud material

Create a new material. Activate the alpha channel in the material and create a noise shader in it. In the noise shader, select the noise type Naki, set the Space to Object and adjust the global scale to 2,500%. Set the clipping at the bottom to 50% and the clipping at the top to 100%. This narrows the tonal range of the noise so that you get nicely distinguishable white and black areas (Fig. 18), creating defined transparent areas (black) and opaque areas (white) in the alpha channel.

Image 18: The cloud shader – a simple Naki noise, initially in Object space
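The same material can be sketched in Python. The `SLA_NOISE_*` parameter IDs below come from the noise shader's description and percentages are stored as fractions; the Naki constant and the scale ID in particular are assumptions to double-check against the SDK docs. The Space dropdown (Object, for now) is easiest to set in the UI:

```python
import c4d

mat = c4d.BaseMaterial(c4d.Mmaterial)
mat.SetName("Clouds")
mat[c4d.MATERIAL_USE_ALPHA] = True

noise = c4d.BaseShader(c4d.Xnoise)
noise[c4d.SLA_NOISE_NOISE] = c4d.NOISE_NAKI        # noise type Naki (constant assumed)
noise[c4d.SLA_NOISE_GLOBAL_SCALE] = 25.0           # 2,500 % global scale, stored as 25.0
noise[c4d.SLA_NOISE_LOW_CLIP] = 0.5                # clipping at the bottom: 50 %
noise[c4d.SLA_NOISE_HIGH_CLIP] = 1.0               # clipping at the top: 100 %

mat[c4d.MATERIAL_ALPHA_SHADER] = noise
mat.InsertShader(noise)                            # the shader must be owned by the material
doc.InsertMaterial(mat)
c4d.EventAdd()
```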

Optimising the C4D viewport

Before we continue with the cloud creation, we will briefly optimise our viewport so that we can view the first versions of our volumetric clouds in real time (!) in the viewport. To do this, select “Enhanced OpenGL” in the viewport menu under Options (if not already activated). Then activate Noise and Transparency. We’ll see the result in a moment.

Now it’s getting fluffy

Apply the material you created earlier to your cloner. You will then get 10 planes with exactly the same noise texture – even though we set the Naki noise to Object space! This is because the cloner is not a "real" object: in Object space, the noise references the axis system of each plane and so repeats the Naki noise with every clone. To change this, we go back to the Naki noise and switch the Space to World. Now we get a representation of the Naki noise that changes from plane to plane – just the way we want it.
Now, step by step, increase your clone count to 50 and see what happens… Bingo! The slices of the Mograph cloner – the planes – serve as a kind of spatial sample for the 3D noise, creating volumetric clouds. The more clones you use, the more homogeneous the result. The "Count" parameter in the cloner now functions as a kind of sample count.
With the previously optimised settings of the C4D viewport, you can view this first version of your volumetric clouds in real time (!).

Increasing the ray depth

When rendering, you will now get some nice cloud-like structures, which, however, show black, sharp artefacts here and there (Fig. 19). This is because the ray depth of the raytracer has been used up: the ray stops after passing through a limited number of transparencies (our cloned planes with alpha channel) and renders a black pixel.

Image 19: The Naki in World space penetrates the Mograph stack. It becomes cloudy – but still with artefacts

To avoid this, open the render settings (Ctrl+B), select the "Options" entry and increase the "Ray depth" parameter; a good value is the number of your clones plus one. The result already looks better (image 20). As each additional transparency the ray has to penetrate increases the render time, we will initially stick with the raytraced shadow for the sun light source created earlier. For the final rendering, you can still switch to realistic area shadows.

Image 20: Increasing the ray depth in the render preset options removes artefacts
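The clones-plus-one rule is easy to script. A minimal sketch (Script Manager; assumes the cloner from above still carries its default name "Cloner"):

```python
import c4d

cloner = doc.SearchObject("Cloner")   # the plane stack created earlier
rd = doc.GetActiveRenderData()
# Ray depth = number of transparent planes the ray must pass through, plus one.
rd[c4d.RDATA_RAYDEPTH] = cloner[c4d.MG_LINEAR_COUNT] + 1
c4d.EventAdd()
```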

Vertical masking

What we can clearly see is that all the clouds above and below are abruptly cut off by the vertical boundaries of our cloner. To get around this, we mask our 3D noise with a vertical 3D colour gradient.
In the alpha channel of your cloud material, click on the triangle button next to Texture and select “Layer”. You have now moved your noise shader to a layer shader.
Go back briefly to your alpha channel and deactivate the “Alpha image” checkbox to ensure that the greyscale information of our layer shader is also interpreted as alpha information. Double-click on the noise in the layer shader and name it Cloud Noise. Then click on the “Shader” button, create a colour gradient shader and drag it above the cloud noise. Select 3D Linear as the type of gradient.
We now need to think about how to set up the colour gradient correctly in order to spatially mask our cloud noise from bottom to top. As the bottom plane of our cloner is at Y=0 and the cloner with all planes has a thickness/height of 50 cm, we set the 3D colour gradient so that it runs along the Y-axis from a start point of 0 cm to an end point of 50 cm. We select World as the reference space. The colour gradient then runs along the world Y-axis from black at 0 cm to white and back to black at 50 cm (Fig. 21).

Image 21: A linear 3D gradient in the height of the mograph stack provides a soft vertical boundary.
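Scripted, the gradient shader looks like the sketch below. Editing the black-white-black knots and moving the shader into the material's layer shader are easier in the UI, so the sketch only covers type and extent; the `SLA_GRADIENT_*` IDs are standard gradient-shader constants, but treat the exact names as assumptions:

```python
import c4d

grad = c4d.BaseShader(c4d.Xgradient)
grad[c4d.SLA_GRADIENT_TYPE] = c4d.SLA_GRADIENT_TYPE_3D_LINEAR
# Black -> white -> black across the 50 cm height of the cloner stack,
# measured along the world Y-axis (set the Space to World in the UI).
grad[c4d.SLA_GRADIENT_START] = c4d.Vector(0, 0, 0)    # Y = 0 cm (bottom plane)
grad[c4d.SLA_GRADIENT_END] = c4d.Vector(0, 50.0, 0)   # Y = 50 cm (top plane)
# Drop this shader above the cloud noise in the material's layer shader (UI step).
```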

Back in the layer shader, we set the layer mode to Multiply. Now we can render.
The result is now much better, as we can now mask the cloud noise spatially from bottom to top and thus have control over its vertical shape (image 22). However, the mask is far too soft and too homogeneous. Instead of adding turbulence to our colour gradient, we will now do something more elegant with it.

Image 22: The vertical boundary of the clouds at the top and bottom is still too soft

Copy the cloud noise and paste it as a copy into your layer shader (right mouse button, Copy Shader / Paste Shader). Double-click on the pasted noise and name it Mask-Noise. Then drag the noise above the colour gradient.
Change the global scaling of the mask noise to 500% and invert the noise by setting the clipping at the bottom to 85% and the clipping at the top to 0%. Go back to the layer shader and set the layer mode of the gradient below to Normal. The setup should now look like Figure 23.

Image 23: A smaller, inverted version of the cloud noise is applied as mask noise in the layer shader above

Now set the layer mode of the mask noise to Levr and see what happens: roughly speaking, a kind of high-contrast version of the mask noise is cut away from the gradient. Reduce the opacity of the mask noise to 70% to get a softer edge. Set the layer mode of the gradient back to Multiply (image 24). Before rendering, we adjust the "Clipping bottom" parameter in the cloud noise to 20%. Then we render again. The result should look something like image 25.

Image 24: The mask noise in Levr layer mode gives the gradient below more structure
Image 25: The vertical boundary of the clouds now has a nicely defined structure

Shadow Luminance

The clouds are now increasingly realistic, but still look grey and dark. Reason: Each Mograph clone casts a shadow on the underlying clone. In order to lighten only shadow areas, we use a shader setup that I created to simulate a kind of self-illumination on the shadow sides of objects: Shadow Luminance.
Although I originally developed this shader setup to simulate very diffuse light on shadow sides of objects, it can also be used to simulate a kind of subsurface scattering on our clouds.

Shadow Luminance consists of three important components:

  • The first component is a plane shader that serves as a container.
  • The second component is a Lumas shader that recognises where there is light and where there is shadow. Lumas does this via its Shader tab, which behaves like the colour channel and effectively says: "Show me where I am in the light."
  • The third component is a colourizer shader into which Lumas is dropped. In the colourizer, the colour gradient is then set to white-black, which reverses the contained Lumas to “Show me where I am in the shadow”.

The colourizer fed with the Lumas is then used to mask any colour on the shadow side of an object and thus create a slight brightening of the shadow areas. Let’s put this into practice.
Activate the glow channel of your cloud material. Create a layer shader. Create a Lumas shader within the layer shader. Deactivate all glow aspects of the Lumas shader. Under the Shader tab, select 100% lighting and set the colour to a bright white. Go back to the layer shader and place the Lumas in a Colorizer shader by right-clicking on the Lumas and selecting Colorizer. Inside the Colorizer shader, set its colour gradient to white-black.
Go back to the layer shader, click on the “Shader” button and create a colour shader. Drag it above the colourizer. Set the colour of the colour shader to a light blue. Back in the layer shader, set the opacity of the colour layer to 8%. Select the layer mode Layer mask for the colourizer below.
The setup should now look like image 26 – voilà! You are now masking a light blue exclusively onto the shadow sides of your clouds.

Image 26: Lightening the shadow sides of the clouds with Shadow Luminance

Increasing the light sensitivity

If you look at the colour channel of our cloud material, you will see that the drop-down menu is set to Lambert. Lambert is a so-called BSDF, a Bidirectional Scattering Distribution Function – or simply put: a function that describes how light is distributed over the surface of an object, from its brightest point to the so-called terminator, the day-night boundary. Lambert simulates a perfectly diffuse surface, while the other available BSDF, Oren-Nayar, calculates additional micro-facets for a satin, roughened look. While Lambert is a good choice for our bright clouds, Oren-Nayar has an important advantage: the diffuse strength and roughness parameters. With roughness, you can seamlessly mix between Lambert (0% roughness) and Oren-Nayar (100% roughness) behaviour, and with diffuse strength you can effectively adjust the light sensitivity. With a combination of 0% roughness and 200% diffuse strength, we have a Lambertian BSDF with increased light sensitivity and thus increased albedo (Fig. 27).

Image 27: Increased light sensitivity of the clouds through 150% diffuse strength
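Conceptually, what the two parameters do can be sketched as a blend. This is an illustration of the idea only, not C4D's actual internal shading code, and the Oren-Nayar term is left abstract because it also depends on the view direction:

```python
def diffuse_intensity(n_dot_l: float, roughness: float, diffuse_strength: float,
                      oren_nayar_term: float) -> float:
    """Blend Lambert and Oren-Nayar, then scale by diffuse strength.

    Conceptual sketch: `oren_nayar_term` stands in for the full Oren-Nayar
    micro-facet evaluation, which also depends on the view direction.
    """
    lambert = max(0.0, n_dot_l)
    mixed = (1.0 - roughness) * lambert + roughness * oren_nayar_term
    return diffuse_strength * mixed  # strength > 1 raises the apparent albedo

# 0 % roughness, 200 % diffuse strength: a brighter, purely Lambertian response.
print(diffuse_intensity(n_dot_l=0.5, roughness=0.0, diffuse_strength=2.0,
                        oren_nayar_term=0.0))  # -> 1.0 instead of 0.5
```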

With a further increased clone count (80), adjusted ray depth, shadow luminance for the ground material and area shadows for the sun light source, your final result should look like Figure 28.

Image 28: The final result: dense, fluffy fleecy clouds

Conclusion

Fluffy disc clouds are a question of clone count and ray depth. The more homogeneous the desired result, the longer the render takes. Render times climb again when realistic area shadows are used. The happy medium is therefore a balance between a low clone count/ray depth and homogeneous results – handle both with care!
The physical renderer is the best choice in this case, as it offers a high-quality anti-aliasing function, faster calculation of area shadow samples and Intel Embree acceleration as well.
By the way: You can find a live version of this short series of articles as a recording of the Maxon Supermeet 2018 at www.renderbaron.de/publikationen.

Dell U3219Q tested
As we realised a while ago, we haven't had a Dell Ultrasharp in our test for a long time, which sounds strange because these devices have always done quite well. Reason enough to take a look at the latest model with the catchy name U3219Q.

What is the U3219Q? A 4K (3,840 x 2,160 pixels) IPS monitor with a 31-inch screen diagonal that advertises VESA Certified DisplayHDR 400 on the packaging. The street price (as of May 2019) is an affordable €830 on Amazon – the RRP is €990.

Delivery and feel

So we have requested a test device that has survived the usually rough treatment by the logistics company (don’t laugh, this happens more often than you might think …), and set about unpacking it: The box is Styrofoam-free and instead sturdily filled with solid cardboard trays. The screen is a vivid stone grey, surprisingly light and comparable in volume to much inferior devices. A factory calibration report and the necessary cables are included.
As usual, the VESA mount is present and stable. The housing did not make a sound during practical use, and no fans or similar could be heard. The Dell screen communicates with the various signal sources via HDMI (2.0) and DisplayPort (1.4), and also offers an audio output, USB upstream & downstream and a USB-C port. But more on that later.

The colour homogeneity is not particularly noticeable in everyday use – but the fluctuation here is in the double-digit range.

Panel & colour

The panel is extremely stable in terms of viewing angle and is more than sufficient for 3D and engineering tasks. The integrated HDR 400 support works with the various test signals, but it is only HDR 400 – to what extent this is real HDR is up to you to decide. However, it is certainly sufficient for everyday use. What is flawless is the anti-reflective coating – it can easily keep up with its Eizo colleagues.
The colour measurements (4 weeks apart, the drift was negligible) were okay – not excellent, but okay. The factory calibration to sRGB was spot-on – set it up and get to work.

That’s what we call a good gamma curve: Being so close to the ideal in this price range is rare

Other features

The menu is easy to understand, and neither the placement of the connections nor anything else struck us as negative. Picture in Picture also works, but what about the advertised feature, the USB-C function? According to the promotional material, you can use the screen as a USB hub and not only transfer the image to the laptop, but also work comfortably while the laptop’s battery is charging. Sounds strange at first, but it worked – our test device was connected for a full working day, and while its battery normally dies after about 3 hours, we were on 100% battery until the end of the working day. That’s nice! But that also gives us a clue to the idea behind the screen.

In our measurement, we had an average DeltaE of 2 – if you look at the colour measurement, you will see that it fluctuates between 0 and 4 – all in all, a very good average value, but a certain flutteriness is still evident.

Conclusion

So, what do we have? A monitor with excellent colour gamut, contrast and colour fidelity, with slight weaknesses in luminance and colour homogeneity – but no dealbreaker. Anyone who occasionally does colour-accurate work will be well served by it, and the Ultrasharp is a welcome addition to any multi-monitor setup thanks to its narrow bezel. It is still a long way from being a Class A broadcast monitor, but that is hardly to be expected at this very civilised price. However, if you are looking for an extension for modern laptops that is easy on both the battery and the wallet, the Dell U3219Q – despite its awkward name – is a good choice.

The brightness value stops at 215 and a white point of 7100 – that’s okay, but not outstanding. We were unable to observe the stated 400 nits
InstaLOD: From CAD to VR?
InstaLOD automates the entire 3D workflow, including the tedious steps that no 3D artist likes to perform, such as UV unwrapping, manual retopo or baking. You can see an overview of InstaLOD's complete feature set in DP 03:19. In this issue, we look in particular at how InstaLOD can automate the workflow from CAD data from a PLM system to a real-time-ready VR model.

Computer-aided design (CAD) data from engineering consists of NURBS curves and surfaces and cannot be rendered by most renderers. Until now, there were two options for preparing it for real-time rendering: either a manual retopo, i.e. rebuilding the asset from scratch as a polygon mesh, or tessellation. Manual retopo can take days or weeks per scene, depending on the complexity of the models. The other option is automatic tessellation. However, as CAD data is parametric (i.e. has infinite accuracy), automatic tessellation produces an extremely high polycount that is practically useless for real-time rendering; and if a low tessellation is aimed for, details are often missing, curvatures are poorly modelled or end up with broken normals. If the CAD object is then tessellated with sufficient accuracy, the polycount must be reduced afterwards – again by automatic optimisation or manual retopo.

The tessellation can be set in the import window. The default, which is calculated automatically for the scene, is often a good starting point, which is why we leave the default settings unchanged for the first import

After all, UVs have to be created and textures baked so that the model can be viewed in the VR scene at sufficient fps. And even then, both manual preparation methods cause long-term problems if a different renderer is to be used or if the initial CAD data changes. Huge material libraries are currently being created, but these are customised for a single renderer. Whether VRED, Arnold, V-Ray etc., the problem is the same: if you want to switch to a different engine in the future, every material from the library has to be completely rebuilt as a shader. Another big problem with a product lifecycle management (PLM) system is that the CAD data in it is constantly being modified by engineers, which means that all of the above steps often have to be performed from scratch on a weekly basis.

This workflow can be completely automated with InstaLOD's CAD Live Link and Scene Import Rules. By the way: each feature can be executed as a batch process operator. All settings that are set up in a profile, including Scene Import Rules, can be saved externally as a JSON file. These can be used as presets, or, if the CAD object changes again next week, the user loads the already-configured profile, applies it and achieves the same result again: an object that works for VR without manual intervention.

After a few seconds, the scene, in our case a wheel rim, is visible tessellated in the viewport. In the material editor, the materials can be changed so that the metal value, surface roughness and colour are already set as they should be in the final product

But what is the CAD Live Link and how do you use it? When loading a CAD file, you are first greeted by an import window in which you can select which parts or sub-assemblies are to be imported. InstaLOD supports a variety of CAD formats including Catia, Solidworks, Rhino, JT, NX, STEP and many more. Since the CAD Live Link maintains the connection to the original CAD parts, we can still selectively retessellate them. The tessellation is influenced by three settings:

  • Maximum Deviation – uses a user-specified tolerance of how much InstaLOD is allowed to deviate from the original CAD surface. This feature originally comes from InstaLOD’s optimiser, but was so popular that it was also integrated into CAD Live Link.
  • Maximum Edge Length – Specifies the maximum length of the edges.
  • Maximum Angle – Specifies the maximum angle between two polygons. If the angle is exceeded, the area is subdivided again until the specified angle is achieved.

The maximum angle is the criterion mainly used in other tessellation programmes, but it is often not the ideal solution. The reason is that small parts – such as screws – can very quickly become very strongly tessellated within large assemblies; normally, you don't want a single screw with 50,000 polygons. Maximum deviation is therefore the ideal solution: it tessellates the objects to the specified quality tolerance without subdividing them unnecessarily.

Here you can see that subdivisions have been added even on the flat surface of the rim. These would not be modelled with the Maximum Angle as the surface is too flat. To achieve the same quality with the maximum angle, you would need an angle of approx. 2 to 3 degrees, which would make the rest of the scene extremely tessellated
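The advantage of the deviation criterion can be quantified. For a circular cross-section, a segment spanning the angle θ deviates from the true arc by the sagitta s = r(1 − cos(θ/2)); solving for θ gives the segment count each criterion demands. The short calculation below (plain Python, independent of InstaLOD) shows why a fixed angle over-tessellates small parts such as screws while a fixed deviation adapts to the radius:

```python
import math

def segments_for_deviation(radius_mm: float, deviation_mm: float) -> int:
    # Sagitta of a chord spanning angle theta: s = r * (1 - cos(theta / 2)).
    # Solve for the largest theta whose sagitta stays within the deviation budget.
    theta = 2.0 * math.acos(max(-1.0, 1.0 - deviation_mm / radius_mm))
    return max(3, math.ceil(2.0 * math.pi / theta))

def segments_for_angle(max_angle_deg: float) -> int:
    # A fixed angle criterion ignores the radius entirely.
    return max(3, math.ceil(360.0 / max_angle_deg))

# 0.1 mm deviation budget: the radius decides how fine the mesh must be.
print(segments_for_deviation(1000.0, 0.1))  # rim-sized circle   -> ~223 segments
print(segments_for_deviation(2.0, 0.1))     # screw-sized circle -> ~10 segments
print(segments_for_angle(5.0))              # 5-degree criterion -> 72, regardless of size
```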

Getting the splits …

Another problem that often occurs with CAD data is surface inaccuracies, which can lead to shading problems. One such problem is shading splits – as can be seen in the images on the right. By default, a 3D artist has to repair these splits manually, piece by piece, which is a lengthy process. With InstaLOD's CAD Live Link, they can be repaired quickly and easily using the shading settings. In many cases, recalculating the normals is sufficient. In our case, however, a lot of shading information is stored within the CAD metadata, which would be lost by recalculation. We therefore use InstaLOD's Shading Magic, which automatically localises and repairs the problematic areas.

After the tessellation has been performed and any shading problems have been repaired, the rim is not really usable yet: all materials would have to be set up again for a render, there are no UVs yet, so textures could not be added, and we still have 80,000 polygons in the current state. Scaled up to a complete car, that would quickly be several million polygons, which is not compatible with a VR application.

State before applying the shading settings – state after applying the shading settings

So the next step is to make the rim VR-ready in a few steps. Firstly, we go to the mesh operation settings and start with a UV unwrap. Here we use the Hard Surface Axial algorithm, which creates a clean unwrap for the surfaces facing the axes.

Material Merge

After the UV unwrap, we perform a material merge operation. The reason for this is that we have three objects with three materials in this scene. If we scale this scene to a complete car, we quickly have thousands of objects with thousands of materials. Firstly, this is a lot of work for manually setting up the materials for a render, and secondly, this is also a huge amount of draw calls for a real-time application. We therefore use the material merge, which combines all materials and textures into a single material with a texture atlas. Here we also merge the material parameters of metalness and roughness into a texture. This saves time later on when setting up the individual materials, but also saves a lot of draw calls and texture memory. To make it even easier to set up the materials, we combine the objects by using “Combine Meshes” in the Mesh Tool Kit (MTK). We now have an object with a material and a draw call.

UV unwrap with the Hard Surface Axial algorithm for data aligned to the axes

This object can now be exported and imported into any renderer as we have the UVs and textures to render the materials exactly as initially set up – completely independent of the renderer.

Away with the polygons

Now we need to reduce the number of polygons. To do this, we use InstaLOD's Remesher, which performs a complete reconstruction of the rim within a very short time, simultaneously building UVs and baking textures. The result is an incredible reduction of over 90%. We had already done all these steps manually, which raises the question: why didn't we use remeshing directly on the original object? The reason is the workflow that we want to set up. What we can do now is take our timeline with all the entries – UVs, material merge, combine meshes and remesh – and convert all of that into a new profile.

Source mesh at approx. 73k polys; remesh result at 7k polys – over 90% reduction in polygon count and a reduction from three draw calls to one, with only minimal loss of visual quality.

Outlook and application

Now you really realise how powerful InstaLOD's workflows are: we can test multiple mesh operations, and if we'd rather try something else, we can simply jump back in the timeline and test other operations until we're happy. Then we can turn the workflow into a profile and scale it to hundreds or thousands of objects by exporting the profile as a JSON file and running it through the command line as a batch process. Or we can run the profile directly within InstaLOD Studio XL with InstaLOD Pipeline. Profiles (which contain all settings) can be saved and reused at a later time – this is how presets are set up if they make for a better workflow. Experience has shown that you should first test the profile on one or two objects to check that it has been set up correctly. Then nothing stands in the way of making your work easier – or of complete automation.

We have now set up a complex chain reaction with which we can not only create the low-poly realtime-ready model with a click of the mouse, but also save the high-poly model separately at the same time – with UVs and textures for an offline render
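To give an idea of what such an automation step can look like from the outside, here is a sketch of a batch wrapper. The executable name and command-line flags are hypothetical placeholders, not documented InstaLOD Pipeline options – consult the InstaLOD documentation for the real invocation:

```python
import pathlib
import subprocess

# Hypothetical batch wrapper: feed every CAD file in a folder through one
# saved JSON profile. Executable name and flags are placeholders, NOT the
# documented InstaLOD Pipeline interface.
PIPELINE = "InstaLODPipeline"          # path to the CLI (assumption)
PROFILE = "rim_vr_ready.json"          # profile exported from Studio XL

for cad_file in sorted(pathlib.Path("plm_export").glob("*.stp")):
    out_file = cad_file.with_suffix(".fbx")
    subprocess.run(
        [PIPELINE, str(cad_file), str(out_file), "--profile", PROFILE],  # flags assumed
        check=True,  # stop the batch if one conversion fails
    )
```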

Scene Import Rules

When batch processing the CAD file, you logically have no manual control over the first steps that we applied to the rim. This includes adjusting the tessellation and materials, and often also the object hierarchies that you want to modify or delete because they are not needed for the visualisation (see image above left, "Organise outside" and "Organise inside"). With the Scene Import Rules, all this is possible with little effort: simply specify which objects are affected and what should happen to them.

[caption id="attachment_77238" align="alignnone" width="1181"]In der neuesten Version von InstaLOD Studio XL kann man die Regeln für die Aufbereitung des Objekts nun auch automatisch mit dem Record-Knopf aufnehmen. In the latest version of InstaLOD Studio XL, the rules for preparing the object can now also be recorded automatically using the Record button

To set up a rule, you need to add a new entry in the "Scene Import Rules" window and give it a name. You must then specify an attribute. Everything from the object name to the path can be specified here. The nice thing is that basically any attribute available within the metadata can be used; you can see exactly which ones are available in the "Selected Object Information" window. If you use "Name" as an attribute, for example, you must enter the name of the object in the "Match RegEx" field below so that it is processed by the rule. You can also set a priority that determines which rule is processed earlier or later in the list, which is helpful when there are many rules. The predicate determines what the rule performs on the objects; available options are operations such as "Material Assignment", "Tessellate", "Delete" and many more (top left image, "Scene Import Rules"). Customised predicates can also be added using a C plug-in. These plug-ins are automatically compatible with the complete InstaLOD system and let you customise InstaLOD to the needs of your own pipeline.

[caption id="attachment_77228" align="alignnone" width="2560"]Nun haben wir eine komplexe Kettenreaktion aufgebaut, mit der wir mit einem Mausklick nicht nur das low-poly Realtime-ready Modell kreieren, sondern gleichzeitig separat das High-Poly-Modell herausspeichern – mit UVs und Texturen für einen Offline-Render. Now we have built a complex chain reaction with which we not only create the low-poly realtime-ready model with a mouse click, but also save the high-poly model separately at the same time – with UVs and textures for an offline render.

Conclusion: Automation rocks!

Now we come back to the PLM system that was briefly mentioned earlier. Once you have set up a set of rules, including the subsequent mesh operations, it is no longer a problem to update the CAD objects on a weekly basis. Simply load the finished profile and run a batch process in which the objects are automatically prepared using the rules (including materials, tessellation, organisation, etc.) and finally reduced by the mesh operations. At the end, you get the finished, updated scene at the touch of a button, without having to manually intervene in the process. This means that large assemblies can be continuously extracted from a PLM system and made VR-ready within a very short time. By the way: If you want to try this out for yourself, you can get a trial version by filling in the form on our website InstaLOD.com and try out the described workflow and all the other features and processes that InstaLOD makes possible on your own assets.

Here you can see InstaLOD Pipeline – the tool in which we can load meshes or folders full of meshes that are prepared by a profile in a batch process even without manual interaction. You can set up mesh operations as chain reactions (Previous Output as Input) or simply load multiple profiles and specify which folders and files are to be processed by which profile – the artist retains control over everything that is to be processed
Xi-Machines Animate X2 Advanced tested
If an "Advanced" or "Ultra" appears in the type designation of a computer from Xi-Machines, this is almost certainly not a boast. You are actually getting an advanced workstation. If only the price wasn't so ultra ...

As is usual for Xi-Machines, the Animate X2 Advanced was delivered extremely well protected against transport damage. The actual packaging of the workstation is very securely padded in another large cardboard box with a huge number of small polystyrene elements. However, unpacking and collecting the numerous polystyrene elements, especially with a statically charged fleece jumper, elicited a curse or two from the author. Alternatively, you could also use bubble wrap, which is easier to unpack and repack.

Even when unpacking, the numerous labels on each side of the packaging were noticeable, indicating further transport protection for the CPU and graphics card inside the workstation case.

Case

The case is the midi tower version of the standard workstation case from Xi-Machines, made from elegant-looking, black brushed aluminium with chrome-plated feet. Two USB 3.0 ports as well as a headphone and microphone socket are hidden under a small hinged lid on the top, with the power switch and reset button right next to it. There are two empty 5¼-inch slots on the front of the housing in case an optical drive or multi-card reader needs to be retrofitted.

After removing the transport lock for the CPU and graphics card, the inside of the case looks very tidy: one usable PCIe x4 and PCIe x16 slot each are still free, and three hot-swap bays are still available in the neighbouring HDD cage. As you would expect from Xi-Machines, the cable management is flawless, with only one really visible cable running from the powerful and quiet power supply unit to the graphics card.

Features

Xi-Machines has equipped the Animate X2 with an Intel Xeon W-2155 CPU with ten cores and 64 Gbytes of registered ECC system memory, divided into four 16 Gbyte modules. This leaves four RAM slots free; in total, the memory can be expanded to a maximum of 512 Gbytes.

Not a peak value, but everything is in the green

When it comes to mass storage, Xi-Machines has opted for so-called Enterprise Edition drives. Enterprise Edition is not a special edition for Star Trek fans, but means that these mass storage devices have significantly higher operational reliability and service life than the normally available consumer/desktop models. The mean time between failures (MTBF) is significantly higher for the Enterprise Edition models – and so is the price.

Three different mass storage devices were installed: a 480 Gbyte SSD for the operating system, a 480 Gbyte M.2 SSD as a scratch disc and a 3.2 Tbyte (!) PCIe SSD module with crazy transfer rates for project data. The graphics card is the Ti version of the current Geforce RTX 2080 with 11 Gbytes of RAM. In addition, the Animate X2 Advanced offers almost all currently relevant interfaces at the rear of the housing.
The scope of delivery also includes a mouse and a keyboard of acceptable quality, extra cables for the power supply unit, a small box with screws and an anti-static wrist strap with a clamp for earthing.

Typical 2080 Ti: with 260 watts TDP, the graphics card is a thermal piglet

Performance

In the Cinebench 20 CPU test, the Animate X2 Advanced came out on top of the test field with 5,258 points, as expected. In the older Cinebench 15 test, it was also at the top with 2,200 points and the Geforce RTX 2080Ti achieved 159 points in the OpenGL test. In the V-Ray render test for CPU and GPU, the good results of the Cinebench tests were confirmed with a computing time of just 1 minute and 1 second for the CPU test and just 46 seconds for the GPU test.

RTX delivers an impressive boost in the Octane Bench beta

When rendering the classroom scene in Blender, the Geforce RTX was able to show what it can do. The ten CPU cores of the Xeon W-2155 calculated a brisk 8 minutes and 49 seconds, while the Geforce RTX 2080 Ti took just a quarter of the time at 2 minutes and 12 seconds. The unofficial Octane-Bench beta test also showed what the RTX cards have over their predecessors: 302 points without RTX and a whopping 895 with.

3.2 Tbytes of capacity at dreamlike transfer rates. The SSD intended as a scratch disk is also nimble.

Xi-Machines has gone all out when it comes to mass storage: the 3.2 Tbyte PCIe SSD Enterprise Edition module achieved write rates of 2,035 Mbytes and read rates of 4,407 Mbytes per second in the Aja system test and delivered a sustained transfer rate of 4,400 Mbytes per second. There are caveats, however – not with the mass storage, but with the benchmarks: according to Xi-Machines, the SSD is capable of reading data at up to 6,170 Mbytes per second, which we were unfortunately unable to measure. The 500 Gbyte SSD intended as a scratch disc achieved similarly high write rates of 2,012 Mbytes per second as the large PCIe SSD, but its read rates of 2,729 Mbytes per second did not come close to the latter's values. The smaller system SSD wrote data at 362 and read it back at 513 Mbytes per second – completely sufficient for the operating system.

With a maximum DPC latency of 442 microseconds, the Animate X2 Advanced was in the midfield of the test candidates. This is fine for the traditional application areas of 3D, rendering and HD video. In addition, Windows 10 Pro is configured in such a way that users can get started straight away without having to activate Windows, update drivers or remove questionable optimisation tools.

The Xeon CPU's 10 cores clearly beat the 8 cores of the i9-9900K. The Animate X2 Advanced also impresses in the V-Ray render test.

The cooling concept of the Animate X2 Advanced works well, because even under simultaneous synthetic full utilisation of all components with the Aida 64 stress test, the temperature values of the CPU, the mainboard and the mass storage remained within the normal range.

Only the Nvidia Geforce RTX 2080 Ti reached a slightly higher value of 81°C, although this is unlikely to be reached under realistic load scenarios in practice. And under all the load and stress, hardly a murmur could be heard from the Animate X2 Advanced, only the graphics card stood out a little when the fans started up, but everything was absolutely bearable.

Conclusion

At 9,875 euros, the Xi-Machines Animate X2 Advanced is certainly no bargain, but calling it expensive is not justified either. Of course, in comparison with the other test candidates, none of which offer selected and tested Enterprise Edition devices of 3.2 Tbyte size and a 5-year warranty, the price seems quite high.

However, if you consider the costs incurred and the loss of image for the customer if a workstation fails unexpectedly in the middle of a large project, then the price is put into perspective and the attribute expensive can very quickly turn into inexpensive.

The Xi-Machines Animate X2 Advanced is unquestionably fast in all areas, offers excellent hardware components and build quality, and provides extensive options for expanding mass storage and RAM. As for operational safety and reliability: if I had to buy a computer for a nuclear power plant, I would probably order it from Xi-Machines.

Notebook Alienware m15 tested
https://digitalproduction.com/2019/07/05/notebook-alienware-m15-im-test/
In the mobile gaming sector, where it has always been about cramming as much CPU and graphics performance as possible into a notebook, Dell's Alienware notebooks are a firm favourite. Such an Alienware gaming notebook should also be suitable as a mobile workstation, right?

The idea of using a gaming notebook as a workstation is actually obvious given the similar hardware requirements: the demands on CPU, GPU, RAM and mass storage are high in gaming as well as in 3D, HD video and media content creation. Looking at the hardware equipment of the Alienware m15, you could be forgiven for believing it is a workstation. The notebook was delivered in a sturdy shipping box that keeps the device well padded inside.

Case

The housing of the m15 appears solid and accurately finished. All plastic parts are clean and flush, and the display hinges are firm and move smoothly and evenly. The 15.6-inch display with 4K UHD resolution delivers good, high-contrast images even at less-than-ideal viewing angles and has enough brightness reserves for working outdoors on a summer's day.
We were positively impressed by the keyboard, which, in addition to good key feel and a generous layout, even offers a full numeric pad, making it much easier to operate 3D, audio and video programmes. The trackpad with its two mouse buttons can also be used reliably and precisely.

Dell Alienware m15 (R1) non-touch gaming notebook computer, codenamed Orion 15.

On the left side of the Alienware m15 are a USB-A 3.1 port and the Gigabit LAN socket, with two more USB-A 3.1 ports on the right. On the rear, Dell has accommodated HDMI 2.0, a Mini DisplayPort, Thunderbolt 3 and the Alienware Graphics Amplifier port.


Equipment

When selecting the CPU and graphics card for the Alienware m15, the maxim "a lot helps a lot" obviously applied. Dell has packed an Intel i9-8950HK CPU with six cores and the Nvidia Geforce RTX 2080 Max-Q with 8 Gbytes of video memory, one of the currently most powerful mobile graphics cards, into the m15 and garnished it with 32 Gbytes of RAM. This should be sufficient for most workstation tasks such as HD video editing or 3D modelling. However, the capacity of the internal NVMe SSD could be a little tight: just 256 Gbytes for programmes, one or two libraries and project data quickly becomes cramped. Plenty of fast mass storage can, however, be attached via one of the available high-speed interfaces.

The Alienware m15 does get warm, but it also offers decent power. At 256 Gbytes, the capacity of the SSD is a little tight.

Performance

With the Intel Core i9-8950HK processor, the Alienware m15 achieved an impressive 2,546 points in the Cinebench 20 CPU benchmark, 1,221 points in the older Cinebench 15 and, with the Nvidia Geforce RTX 2080 Max-Q, 117 frames per second in its OpenGL test. The Alienware m15 also performed well in the V-Ray benchmark for CPU and GPU: 1 minute and 52 seconds of computing time for the CPU test and 1 minute and 43 seconds for the GPU test.
In the Blender 2.7 render test of the classroom scene, the i9-8950HK needed 23 minutes and 44 seconds. With version 2.8, the same scene took only 18 minutes and 6 seconds on the CPU and just 4 minutes and 37 seconds on the GPU of the Geforce RTX. The unofficial Octane-Bench beta render test yielded 151 points without and a whopping 445 points with RTX support.
With the default preset "4K Full" in the Aja system test, the 256 Gbyte NVMe SSD achieved 650 Mbytes per second when writing and 2,409 Mbytes per second when reading, with a sustained transfer rate of at least 1,182 Mbytes per second. The write rate is a little low, but the Alienware m15 will probably not be used for 8K capturing anyway.
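A quick way to see why 650 Mbytes per second rules out serious 8K capture is the uncompressed data-rate formula width x height x fps x bits per pixel / 8. The Python sketch below assumes 10-bit 4:2:2 sampling (20 bits per pixel) – an assumption for illustration, not a figure from the test:

def rate_mb_s(width: int, height: int, fps: float, bits_per_pixel: int = 20) -> float:
    """Uncompressed video data rate in Mbytes per second (decimal units)."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

print(f"UHD 25p: {rate_mb_s(3840, 2160, 25):.0f} MB/s")   # ~518 MB/s - just fits into 650 MB/s
print(f"8K  25p: {rate_mb_s(7680, 4320, 25):.0f} MB/s")   # ~2074 MB/s - far beyond 650 MB/s
print(f"8K  60p: {rate_mb_s(7680, 4320, 60):.0f} MB/s")   # ~4977 MB/s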

The high latency value is caused by an old acquaintance: Dell's dddriver.sys.

The Dell Alienware m15 recorded the second-highest DPC latency value in the test at 734 microseconds, traditionally caused by a Dell driver. This is a pity, because apart from this single peak the latencies stayed in significantly lower ranges, making the Alienware m15 look worse in this area than it actually is.
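For those wondering why a single 734-microsecond spike matters at all: in real-time audio, a DPC stall eats directly into the driver's buffer budget. A rough Python sketch, with buffer sizes that are illustrative assumptions rather than values from the test:

DPC_SPIKE_US = 734   # measured peak DPC latency in microseconds

for samples in (64, 128, 256):
    buffer_us = samples / 48_000 * 1e6          # buffer length at 48 kHz, in microseconds
    share = DPC_SPIKE_US / buffer_us * 100
    print(f"{samples:>3} samples @ 48 kHz = {buffer_us / 1000:.2f} ms buffer "
          f"-> spike eats {share:.0f}% of it")   # 55%, 28% and 14% respectively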
An Intel i9-8950HK CPU and an RTX 2080 are very fast, but they also have to be cooled somehow, which is always a bit difficult in notebooks given the limited space. Considering how flat the case is for the hardware power it contains, a thermal disaster seems inevitable. Even at idle, the Alienware's fans repeatedly spun up audibly, and at around 20 to 30% load they ran continuously in the lower speed range.

During the benchmark runs, the fans ramped up from audible to loud and the underside of the case became noticeably warm after a few minutes. Under synthetic load with the Aida 64 stress test, in which the CPU, GPU, memory and all mass storage devices ran at full capacity simultaneously, the CPU temperature initially rose to 100°C and the GPU to 92°C. As a result, the CPU clock frequency was reduced by 10 to 20% to protect the chip from overheating. Normally a notebook does not recover from such a heavy thermal load and keeps running at a reduced pace under load. Not so the Alienware m15, which, after about two to three minutes of thermal throttling, heroically picked itself up with audible fan noise and from then on computed unchecked at CPU temperatures of 92°C and GPU temperatures of around 80°C – permanently and without fluctuations. That is close to the limit, but acceptable considering the form factor and performance.

A warning for all male readers who have the idea of using the Alienware m15 under full load as a classic laptop on their lap: in the long run, this could lead to thermal sterilisation.
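If you want to observe this throttling behaviour yourself, a minimal monitoring sketch can log clock speed and temperature while a stress test runs. This assumes a Linux test host with psutil installed; psutil's sensors_temperatures() is not available on Windows, where a tool such as HWiNFO would take its place, and the "coretemp" sensor name is likewise an assumption:

import time
import psutil

def log_throttling(duration_s: int = 180, interval_s: float = 2.0) -> None:
    """Print CPU clock and the hottest core temperature at regular intervals."""
    end = time.time() + duration_s
    while time.time() < end:
        freq = psutil.cpu_freq()                               # current/min/max clock in MHz
        temps = psutil.sensors_temperatures().get("coretemp", [])   # Linux only
        hottest = max((t.current for t in temps), default=float("nan"))
        print(f"{time.strftime('%H:%M:%S')}  {freq.current:7.0f} MHz  {hottest:5.1f} C")
        time.sleep(interval_s)

if __name__ == "__main__":
    log_throttling()   # run alongside the stress test and watch for clock drops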

Conclusion

The idea of misusing the Alienware m15, which was actually designed for gaming, as a workstation is not so far-fetched. A few points will irritate professional workstation users: the many pre-installed software helpers and assistants that interrupt work at the most inopportune moments and have to be deactivated manually, the relatively high fan noise under partial and full load, the limited SSD storage space and the brief thermal throttling. In return, the user gets a computer with a huge variety of configuration options, a good display and casing, and more than enough CPU and GPU power to work decently on the move without having to spend a fortune. And you can also use it for gaming.

Images: © Dell Inc.