In the first part of this article series, we learned that a visible volumetric light can be used as an “aquarium” for volumetric shaders such as 3D noises and 3D colour gradients. With such a light container, we can create thin clouds or even complex self-luminous structures such as stellar nebulae. The same technique, however, also lets us create complex animated structures. In the following, we will take “the magic of visible lights” one step further and build a realistic, fiery and detailed solar corona with its characteristic eruptions.
Looking at contemporary science fiction films or VFX-heavy TV documentaries, one might assume that depicting a realistically animated sun is a classic case for particles and simulations – the kind of task that causes many sleepless nights in a VFX studio.
The intro sequence of this project for the ZDF TV documentary series “Terra X” (see cover picture) stages the sun as a place of stellar nuclear fusion, yet uses only the techniques we have learnt so far – so at this point we can sleep soundly again.
If you take a look at NASA footage of the sun, you will notice that its surface contains a lot of visual detail that is comparable to C4D’s noise shaders. For example, the so-called granulation of the sun’s photosphere can easily be reproduced with the noise type “Voronoi 3”, distorted by a VL Noise inside a Distorter shader (image 01).
If you distort the result again in a layer shader using the “Distort” layer effect with a Turbulence noise, you get a structure that comes quite close to the real model (image 02).
The same applies to phenomena that are obviously volumetric, such as corona, solar flares, etc.: visible volumetric light can be used as an “aquarium” for noises and 3D colour gradients, with the mathematical functions behind the noises always resembling their natural models.
As is so often the case with visual effects, the complexity of the final sun results from layering and combining various simple elements: for each visual phenomenon – light rays of different sizes, eruptions, etc. – we use just one point light at a time, whose visible light extends more or less beyond the solar body itself. The astonishing thing is that the complexity of the final result is achieved with only 8 volumetric point lights (image 03).
Corona, light rays and eruptions are thus applied as textures to volumetric point lights using the technique we are familiar with. As the “Visibility” tab of the respective point light contains the parameters “Inner Distance” and “Outer Distance”, volumetric effects can be faded out concentrically from the surface of the sun into space (image 04).
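For readers who prefer scripting such setups, here is a minimal Python sketch of one of these light containers, using the documented parameter IDs of C4D’s light object; the two radii are placeholder values, not those of the production scene:

```python
import c4d

def make_corona_light(inner=600.0, outer=1500.0):
    """Create a volumetric point light whose visible volume
    fades concentrically between inner and outer distance."""
    light = c4d.BaseObject(c4d.Olight)
    light[c4d.LIGHT_TYPE] = c4d.LIGHT_TYPE_OMNI            # point light
    light[c4d.LIGHT_VLTYPE] = c4d.LIGHT_VLTYPE_VOLUMETRIC  # visible volumetric
    # Visibility tab: fade the volume from the sun's surface out into space
    light[c4d.LIGHT_VISIBILITY_INNERDISTANCE] = inner
    light[c4d.LIGHT_VISIBILITY_OUTERDISTANCE] = outer
    return light

doc = c4d.documents.GetActiveDocument()
doc.InsertObject(make_corona_light())
c4d.EventAdd()
```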
In order to have more control over this behaviour, we also apply a colour gradient in 3D spherical mode as a mask (image 05). This colour gradient has the same radius as the “Outer Distance” parameter of the point light, but has the advantage that it fine-tunes the spatial fading of the effects while also offering the option of adding turbulence (image 06).
This effect may look very complex, but it is actually quite simple: it consists of a visible volumetric point light that contains a large Naki noise – set to Object space – as a texture. The Naki has a slight internal motion thanks to its “Animation Speed” parameter. It sits in a layer shader, where it is masked out at the polar caps by a 2D colour gradient and coloured by a Colorizer effect (image 07).
The scaling of this Naki noise is then animated over time, meaning that the noise grows concentrically from the centre of the sun. This alone creates the illusion of large plasma-like structures.
In order to have more control over how the growing Naki is faded out in space, I masked it with a static but highly turbulent colour gradient in 3D spherical mode, likewise set to Object space (image 08). The result is that solar flares emerge from the sun and are faded out turbulently in space.
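As a sketch of how that growth could be keyframed via Python – assuming the noise shader’s global-scale parameter ID from the SDK’s slanoise.h; the frame numbers and scale values are placeholders:

```python
import c4d

def animate_noise_growth(shader, fps=25):
    """Keyframe the Naki noise's Global Scale so the noise
    appears to grow concentrically out of the sun."""
    desc = c4d.DescID(c4d.DescLevel(c4d.SLA_NOISE_GLOBAL_SCALE,
                                    c4d.DTYPE_REAL, 0))
    track = c4d.CTrack(shader, desc)
    shader.InsertTrackSorted(track)
    curve = track.GetCurve()
    for frame, scale in ((0, 1.0), (100, 6.0)):   # placeholder keys
        key = curve.AddKey(c4d.BaseTime(frame, fps))["key"]
        key.SetValue(curve, scale)
```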
Although this article is about volumetric effects, let’s take a quick look at the shader setup for the actual surface of the sun. Everything takes place in the luminance channel of the sun material, or rather in a layer shader placed there: noises, colour gradients and layer shader effects are stacked on top of each other with different layer modes and opacity levels. In addition, several layers are used to mask the layer above (image 09).
In the displacement channel, three noises in a layer shader deform the high-res geometry of the sun to match the details in the luminance channel (image 10).
Large plasma arcs were modelled procedurally: spheres were distributed as MoGraph clones over the surface of the sun and arranged appropriately with a Shader effector. The plug-in pCONNECTOR (tcastudios.com) was then used to connect these clones, and the resulting spline was fed into a sweep object. The resulting tubular connections between the clones were then bent into arcs using a shader-driven Displacer deformer (image 11).
At 00:11 in the animation, the moving camera enters the volume of the sun. This transition takes place in compositing, with a cross-fade to a sequence that is technically similar to the stellar nebula in the last article. It is important that, for this transition, the outside camera’s final velocity matches the inside camera’s starting velocity in both direction and speed.
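The matching itself is simple arithmetic: the outside camera’s last per-frame displacement vector must equal the inside camera’s first one. A purely illustrative sanity check in plain Python, with made-up positions:

```python
def frame_velocity(p_prev, p_curr, fps=25):
    """Per-second velocity vector from two consecutive frame positions."""
    return tuple((c - p) * fps for p, c in zip(p_prev, p_curr))

# Last two frames of the outside shot vs. first two of the inside shot
v_out = frame_velocity((0, 0, 980), (0, 0, 1000))
v_in = frame_velocity((0, 0, 1000), (0, 0, 1020))
assert v_out == v_in  # same direction and speed -> seamless cross-fade
```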
Sample-based volumetric light has been part of C4D for a small eternity, namely since Release 5 (1998). This old type of sampling is of course handled differently by C4D than the modern sampling of area shadows, blurry reflections, ambient occlusion etc. Consequently, this scene renders much faster in the older standard renderer than in the more recent physical renderer.
In addition, the standard renderer has been accelerated by Intel Embree technology since release 19 (2017). Embree is a ray tracing library that is optimised for the latest Intel processors and significantly speeds up rendering – in some cases by up to 100% compared to Release 18. Embree breathes new life into the good old standard renderer, so to speak.
But that’s not all: noises and procedural shaders have high-quality SAT texture interpolation as standard, so there is no need to use anti-aliasing for rendering with the standard renderer. To put it in a nutshell: The standard renderer renders this scene at the speed of light – especially considering the volumetric complexity involved here.
The approach described for stellar nebulae and solar flares is based on the volume of a visible volumetric light source as an “aquarium” for 3D noises. While the use of such a light container is quite simple and straightforward, it has one major drawback: all structures and effects created are actually visible light without true opacity and without the possibility of casting a shadow. A cloud created in this way will therefore never cast a shadow on the ground or on itself.
So to create a shadow-casting, fluffy cloud for a sunny summer sky, we have to dive into a technique that is also based on volumetric shaders but uses a different container: a MoGraph Cloner.
MoGraph, C4D’s own toolset, provides motion designers with an extensive range of powerful tools for procedural, non-destructive animation. A core function of MoGraph is the cloning of objects according to certain rules, e.g. along a spline, in certain arrangements or on the surface of an object. Effectors such as Random, Shader or Time bring variation and movement to the cloning system, and with the Fields of Cinema 4D R20 there are virtually no limits to imagination and complexity.
While MoGraph offers a huge range of exciting functions and thus invites you to play and experiment, we will concentrate on what is probably its most boring function: cloning simple planes (i.e. polygon plates) into a linear stack. This will be the container for our 3D noises: a stack of cloned planes with only one polygon each. Before we get into the cloud creation technique based on this, let’s see what you can do with it.
This project for the ZDF TV documentary series “Terra X”, episode “Scotland – the Myth of the Highlands” (images 13, 14, 15), shows the geographical origin of Scotland and England. The volumetric clouds in the close-up shots are created using the MoGraph-based technique, which we will now look at in the form of a short tutorial.
Create a new scene and add a simple sky object under Main menu > Create > Environment. Create a new material, deactivate all material channels, activate the luminance channel and create a nice 2D-V colour gradient in typical sky colours, perhaps as shown in image 15.
Place a plane object with 1 x 1 segments and dimensions of 1,200 x 1,200 cm at a world Y height of -100 cm and apply a new material with a bluish colour to it. This slab will serve as our ground plane. Create a camera and view the scene from a shallow angle above. The empty scene should look something like image 16.
Create an infinite light as the sun and give it raytraced (hard) shadows for the time being – just to achieve a quick shadow effect, no matter how unrealistic it is.
Create a MoGraph Cloner under Main menu > MoGraph > Cloner. Assign a copy of the ground plane to the cloner as a child object. The plane is immediately cloned three times along the cloner’s Y axis. Click on the cloner in the Object Manager and view its settings in the Attribute Manager: you will notice that the cloning mode is set to Linear by default, just as we want it. First set the clone count to 10 and switch the distance mode from Per Step to Endpoint. Set P.Y to 50 cm. This distributes all cloned planes along the world Y axis within this range (image 17).
Create a new material. Activate the alpha channel in the material and create a noise shader in it. In the noise shader, select the noise type Naki, set Space to Object and adjust the Global Scale to 2,500%. Set Low Clip to 50% and High Clip to 100%. This narrows the tonal range of the noise so that you get nicely distinguishable white and black areas (image 18), creating defined transparent areas (black) and opaque areas (white) in the alpha channel.
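The two steps above, scripted as a sketch. The MoGraph and noise parameter symbols (MG_LINEAR_*, SLA_NOISE_*) are assumed from the C4D SDK headers, and percentages are expressed as fractions in Python (2,500% = 25.0); the Per Step/Endpoint switch is left to the UI, as we have not verified its parameter ID:

```python
import c4d

doc = c4d.documents.GetActiveDocument()

# Cloner: a linear stack of 10 single-polygon planes, 50 cm high.
# Remember to switch the cloner from "Per Step" to "Endpoint" in the UI.
plane = c4d.BaseObject(c4d.Oplane)
plane[c4d.PRIM_PLANE_SUBW] = plane[c4d.PRIM_PLANE_SUBH] = 1
plane[c4d.PRIM_PLANE_WIDTH] = plane[c4d.PRIM_PLANE_HEIGHT] = 1200

cloner = c4d.BaseObject(c4d.Omgcloner)          # Linear mode is the default
cloner[c4d.MG_LINEAR_COUNT] = 10
cloner[c4d.MG_LINEAR_OBJECT_POSITION] = c4d.Vector(0, 50, 0)
plane.InsertUnder(cloner)
doc.InsertObject(cloner)

# Cloud material: Naki noise in the alpha channel
mat = c4d.BaseMaterial(c4d.Mmaterial)
noise = c4d.BaseShader(c4d.Xnoise)
noise[c4d.SLA_NOISE_NOISE] = c4d.SLA_NOISE_NOISE_NAKI  # symbol assumed
noise[c4d.SLA_NOISE_GLOBAL_SCALE] = 25.0   # 2,500 %
noise[c4d.SLA_NOISE_LOW_CLIP] = 0.5        # 50 %
noise[c4d.SLA_NOISE_HIGH_CLIP] = 1.0       # 100 %
mat[c4d.MATERIAL_USE_ALPHA] = True
mat[c4d.MATERIAL_ALPHA_SHADER] = noise
mat.InsertShader(noise)
doc.InsertMaterial(mat)
c4d.EventAdd()
```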
Before we continue with the cloud creation, we will briefly optimise our viewport so that we can view the first versions of our volumetric clouds in real time (!) in the viewport. To do this, select “Enhanced OpenGL” in the viewport menu under Options (if not already activated). Then activate Noise and Transparency. We’ll see the result in a moment.
Apply the material you created earlier to your cloner. You will then get 10 planes with exactly the same noise texture – even though we set the Naki noise to Object space! This is because the cloner is not a “real” object – in Object space, the noise references the axis system of each plane and thus repeats with every clone. To change this, we go back to the Naki noise and set Space to World. Now the Naki noise changes from plane to plane – just the way we want it.
Now, step by step, increase your clone count to 50 and see what happens… Bingo! The slices of the MoGraph Cloner – the planes – serve as a kind of spatial sampling for the 3D noise, creating volumetric clouds. The more clones you use, the more homogeneous the result will be. The “Count” parameter in the cloner now functions as a kind of sample count.
With the previously optimised settings of the C4D viewport, you can view this first version of your volumetric clouds in real time (!).
When rendering, you will now get some nice cloud-like structures, which, however, show black, sharp artefacts here and there (image 19). This is because the ray depth of the raytracer has been used up: the ray stops after passing through a limited number of transparencies (our cloned planes with alpha channel) and renders a black pixel.
To avoid this, open the Render Settings (Ctrl+B), select the “Options” entry and adjust the “Ray Depth” parameter. You should set it to the number of your clones plus one. The result already looks better (image 20). As each additional transparency the ray has to penetrate increases the rendering time, we will stick with the raytraced shadow for the sun light source created in step 01 for now. For the final rendering, you can switch to realistic area shadows.
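Scripted, this is a one-liner using the documented RDATA_RAYDEPTH render setting:

```python
import c4d

doc = c4d.documents.GetActiveDocument()
rd = doc.GetActiveRenderData()
clone_count = 50
rd[c4d.RDATA_RAYDEPTH] = clone_count + 1  # one transparency per cloned plane, plus one
c4d.EventAdd()
```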
What we can clearly see is that all the clouds above and below are abruptly cut off by the vertical boundaries of our cloner. To get around this, we mask our 3D noise with a vertical 3D colour gradient.
In the alpha channel of your cloud material, click on the triangle button next to Texture and select “Layer”. You have now moved your noise shader to a layer shader.
Go back briefly to your alpha channel and deactivate the “Image Alpha” checkbox to ensure that the greyscale information of our layer shader is interpreted as alpha information. Double-click on the noise in the layer shader and name it Cloud Noise. Then click on the “Shader” button, create a colour gradient shader and drag it above the Cloud Noise. Select 3D Linear as the gradient type.
We now need to think about how to set up the colour gradient correctly in order to mask our Cloud Noise spatially from bottom to top. As the bottom plane of our cloner is at Y=0 and the cloner with all planes has a height of 50 cm, we set the 3D colour gradient so that it runs along the Y axis from a start point of 0 cm to an end point of 50 cm, and select World as the Space. The colour gradient runs along the world Y axis from black at 0 cm to white and back to black at 50 cm (image 21).
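Scripted, the mask could look like the sketch below; the SLA_GRADIENT_* parameter symbols are assumed from the SDK’s gradient shader header, while the knot setup uses the documented Gradient class:

```python
import c4d

# 3D linear gradient: black at Y = 0 cm, white in the middle, black at Y = 50 cm
grad = c4d.Gradient()
grad.InsertKnot(col=c4d.Vector(0), pos=0.0)
grad.InsertKnot(col=c4d.Vector(1), pos=0.5)
grad.InsertKnot(col=c4d.Vector(0), pos=1.0)

mask = c4d.BaseShader(c4d.Xgradient)
mask[c4d.SLA_GRADIENT_TYPE] = c4d.SLA_GRADIENT_TYPE_3D_LINEAR   # symbols assumed
mask[c4d.SLA_GRADIENT_SPACE] = c4d.SLA_GRADIENT_SPACE_WORLD
mask[c4d.SLA_GRADIENT_START] = c4d.Vector(0, 0, 0)
mask[c4d.SLA_GRADIENT_END] = c4d.Vector(0, 50, 0)
mask[c4d.SLA_GRADIENT_GRADIENT] = grad
```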
Back in the layer shader, we set the layer mode to Multiply. Now we can render.
The result is now much better: we can mask the Cloud Noise spatially from bottom to top and thus control its vertical shape (image 22). However, the mask is far too soft and too homogeneous. Instead of adding turbulence to our colour gradient, we will now do something more elegant with it.
Copy the cloud noise and paste it as a copy into your layer shader (right mouse button, Copy Shader / Paste Shader). Double-click on the pasted noise and name it Mask-Noise. Then drag the noise above the colour gradient.
Change the Global Scale of the Mask Noise to 500% and invert the noise by setting Low Clip to 85% and High Clip to 0%. Go back to the layer shader and set the layer mode of the gradient below to Normal. The setup should now look like image 23.
Now set the layer mode of the Mask Noise to Levr and see what happens: roughly speaking, a kind of high-contrast version of the Mask Noise is cut away from the gradient. Reduce the opacity of the Mask Noise to 70% to get a softer edge. Set the layer mode of the gradient back to Multiply (image 24). Before rendering, we set Low Clip in the Cloud Noise to 20%. Then we render again. The result should look something like image 25.
The clouds are now increasingly realistic, but still look grey and dark. The reason: each MoGraph clone casts a shadow on the clone below it. In order to lighten only the shadow areas, we use a shader setup that I created to simulate a kind of self-illumination on the shadow sides of objects: Shadow Luminance.
Although I originally developed this shader setup to simulate very diffuse light on shadow sides of objects, it can also be used to simulate a kind of subsurface scattering on our clouds.
Shadow Luminance consists of three important components: a Lumas shader, a Colorizer and a colour layer.
The colourizer fed with the Lumas is then used to mask any colour on the shadow side of an object and thus create a slight brightening of the shadow areas. Let’s put this into practice.
Activate the luminance channel of your cloud material. Create a layer shader. Create a Lumas shader within the layer shader and deactivate all of its specular components. Under the Shader tab, set the illumination to 100% and the colour to a bright white. Go back to the layer shader and place the Lumas in a Colorizer shader by right-clicking on the Lumas and selecting Colorizer. Inside the Colorizer shader, set its colour gradient to white-black.
Go back to the layer shader, click on the “Shader” button and create a colour shader. Drag it above the Colorizer. Set the colour of the colour shader to a light blue. Back in the layer shader, set the opacity of the colour layer to 8% and select the layer mode Layer Mask for the Colorizer below.
The setup should now look like image 26 – voilà! You are now masking a light blue exclusively onto the shadow sides of your clouds.
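Conceptually, the whole setup boils down to one multiplication. A plain-Python illustration of the masking logic – not C4D code, and the tint and opacity values are just the ones from our setup:

```python
def shadow_luminance(lighting, tint=(0.75, 0.85, 1.0), opacity=0.08):
    """Shadow Luminance in a nutshell: the Lumas shader samples the
    received lighting (0 = full shadow, 1 = fully lit); the white-to-black
    colorizer inverts that into a shadow mask, which then gates a faint
    blue tint added via the luminance channel."""
    shadow_mask = 1.0 - lighting
    return tuple(c * opacity * shadow_mask for c in tint)

print(shadow_luminance(0.0))  # full shadow -> maximum blue fill
print(shadow_luminance(1.0))  # fully lit   -> no added glow
```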
If you look at the colour channel of our cloud material, you will see that the drop-down menu is set to Lambert. Lambert is a so-called BSDF, a Bidirectional Scattering Distribution Function – or simply put: a function that describes how light is distributed over the surface of an object, from its brightest point to the so-called terminator, the day-night boundary. Lambert simulates a perfectly diffuse surface, while the other available BSDF, Oren-Nayar, additionally calculates micro-facets for a satin, roughened look. While Lambert is a good choice for our bright clouds, Oren-Nayar has an important advantage: the Diffuse Strength and Roughness parameters. With Roughness, you can seamlessly blend between Lambert (0% roughness) and Oren-Nayar (100% roughness) behaviour, and with Diffuse Strength you can virtually adjust the light sensitivity. With a combination of 0% roughness and 200% diffuse strength, we get a Lambertian BSDF with increased light sensitivity and thus increased albedo (image 27).
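For the curious, here is what that blend amounts to mathematically – a sketch of the textbook Lambert term and the standard Oren-Nayar approximation, not Cinema 4D’s internal implementation; the mapping of the roughness slider to the sigma parameter is our own assumption:

```python
import math

def oren_nayar(cos_i, cos_r, cos_phi_diff, sigma):
    """Standard Oren-Nayar approximation; sigma is roughness in radians."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(math.acos(cos_i), math.acos(cos_r))
    beta = min(math.acos(cos_i), math.acos(cos_r))
    return max(0.0, cos_i) * (A + B * max(0.0, cos_phi_diff)
                              * math.sin(alpha) * math.tan(beta))

def diffuse(cos_i, cos_r, cos_phi_diff, roughness, strength):
    """0 % roughness -> pure Lambert, 100 % -> full Oren-Nayar;
    diffuse strength scales the response (e.g. 2.0 for 200 %)."""
    lambert = max(0.0, cos_i)
    rough = oren_nayar(cos_i, cos_r, cos_phi_diff,
                       sigma=roughness * math.pi / 2)  # mapping assumed
    return strength * ((1 - roughness) * lambert + roughness * rough)
```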
With a further increased clone count (80), adjusted ray depth, Shadow Luminance for the ground material and area shadows for the sun light source, your final result should look like image 28.
Fluffy slice-based clouds are a question of clone count and ray depth. The more homogeneous the desired result, the longer it takes to render. Render times increase again when realistic area shadows are used. The happy medium is therefore a balance between a low clone count/ray depth and homogeneous results – handle both with care!
The physical renderer is the best choice in this case, as it offers a high-quality anti-aliasing function, faster calculation of area shadow samples and likewise Intel Embree acceleration.
By the way: You can find a live version of this short series of articles as a recording of the Maxon Supermeet 2018 at www.renderbaron.de/publikationen.
What is the U3219Q? A 4K (3,840 x 2,160 pixels) IPS monitor with a 31-inch screen diagonal that advertises VESA Certified DisplayHDR 400 on the packaging. The street price (as of May 2019) is an affordable €830 on Amazon – the RRP is €990.
So we requested a test device, which survived the usually rough treatment by the logistics company (don’t laugh, this happens more often than you might think…), and set about unpacking it: the box is Styrofoam-free and instead sturdily filled with solid cardboard trays. The screen is a vivid stone grey, surprisingly light and comparable in volume to much inferior devices. A factory calibration report and the necessary cables are included.
As usual, the VESA mount is present and stable. The housing did not make a sound during practical use, and no fans or the like could be heard. The Dell screen communicates with the various signal sources via HDMI (HDCP 2.2) and DisplayPort (1.4), and also offers an audio output, USB upstream & downstream ports and a USB-C port. But more on that later.
The panel is extremely stable in terms of viewing angle and is more than sufficient for 3D and engineering tasks. The integrated DisplayHDR 400 support works with the various test signals, but it is only HDR 400 – to what extent this is real HDR is up to you to decide. It is certainly sufficient for everyday use, however. What is flawless is the anti-reflective coating – it can easily keep up with its Eizo colleagues.
The colour measurements (4 weeks apart, the drift was negligible) were okay – not excellent, but okay. The factory calibration to sRGB was spot-on – set it up and get to work.
The menu is easy to understand, and neither the placement of the connections nor anything else struck us as negative. Picture-in-picture also works, but what about the advertised feature, the USB-C function? According to the promotional material, you can use the screen as a USB hub and not only transfer the image from the laptop, but also work comfortably while the laptop’s battery is charging. Sounds strange at first, but it worked: our test laptop was connected for a full working day, and while its battery normally dies after about three hours, it was still at 100% at the end of the working day. That’s nice! And it also gives us a clue to the idea behind the screen.
So, what do we have? A monitor with excellent colour gamut, contrast and colour fidelity, with slight weaknesses in luminance and colour homogeneity – but no dealbreaker. Anyone who occasionally does colour-critical work will be well served by it, and thanks to its narrow bezel the Ultrasharp is a welcome addition to any multi-monitor setup. It is still a long way from being a Class A broadcast monitor, but that is all you can expect at this very civilised price. If you are looking for an extension for modern laptops that is easy not only on the battery but also on the wallet, the Dell U3219Q – despite its awkward name – is a good choice.
Computer-aided design (CAD) data from engineering consists of NURBS curves and surfaces and cannot be rendered by most renderers. Until now, there were two options for preparing it for real-time rendering: either a manual retopo, i.e. rebuilding the asset from scratch as a polygon mesh, or tessellation. Manual retopo is a process that can take days or weeks per scene, depending on the complexity of the models. The other option is automatic tessellation. However, as CAD data is parametric (i.e. has infinite accuracy), automatic tessellation produces an extremely high polycount that is practically useless for real-time rendering; and if a low tessellation is aimed for, details are often missing, curvatures are poorly approximated, or normals get broken. If the CAD object is instead tessellated with sufficient accuracy, the polycount must then be reduced – again by automatic optimisation or manual retopo.
On top of that, UVs have to be created and textures baked so that the model can be viewed in the VR scene at sufficient fps. And even then, there are long-term problems with both manual preparation methods if a different renderer is to be used or if the initial CAD data changes. Huge material libraries are currently being created, but these are customised for a single renderer only. Whether VRED, Arnold, V-Ray etc., the problem is the same: if you want to switch to a different engine in the future, every material from the library has to be completely rebuilt as a shader. Another big problem arises with a product lifecycle management (PLM) system: the CAD data in it is constantly being modified by engineers, which means that all of the above steps often have to be performed from scratch on a weekly basis.

This workflow can be completely automated with InstaLOD’s CAD Live Link and Scene Import Rules. By the way: each feature can be executed as a batch process operator. All settings that are set up in a profile, including Scene Import Rules, can be saved externally as a JSON file. These can be used as presets – or, if the CAD object changes again next week, the user loads the profile that has already been set up, applies it and achieves the same result again: an object that works for VR without manual intervention.
But what is the CAD Live Link and how do you use it? When loading a CAD file, you are first greeted with an import window in which you can select which parts or sub-assemblies are to be imported. InstaLOD supports a variety of CAD formats including Catia, Solidworks, Rhino, JT, NX, STEP and many more. Since the CAD Live Link maintains the connection to the original CAD parts, we can still selectively re-tessellate individual parts. The tessellation is influenced by three settings.
The maximum angle is what most other tessellation programmes mainly use, but it is often not the ideal solution: within large assemblies, small parts – such as screws – can very quickly become very heavily tessellated. Normally, you don’t want a single screw with 50,000 polygons. Maximum deviation is therefore the better criterion: it tessellates objects to the specified quality tolerance without subdividing them unnecessarily.
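The difference is easy to quantify. For a circular arc of radius r, a chord subtending the angle θ deviates from the surface by the sagitta s = r(1 − cos(θ/2)). A max-angle criterion fixes θ regardless of size, so a tiny screw gets as many segments per revolution as a wheel arch; a max-deviation criterion fixes s, so small radii get away with far fewer segments. A quick illustration with made-up dimensions:

```python
import math

def segments_for_max_angle(max_angle_deg):
    """Segments per full circle when only the facet angle is limited."""
    return math.ceil(360.0 / max_angle_deg)

def segments_for_max_deviation(radius, max_dev):
    """Segments per full circle so the sagitta stays below max_dev."""
    theta = 2.0 * math.acos(1.0 - max_dev / radius)  # max chord angle
    return math.ceil(2.0 * math.pi / theta)

# A 3 mm screw thread vs. a 300 mm rim, tolerance 0.05 mm
print(segments_for_max_angle(5))                # 72 segments for both sizes
print(segments_for_max_deviation(3.0, 0.05))    # 18 segments suffice
print(segments_for_max_deviation(300.0, 0.05))  # 173 segments
```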
Another problem that often occurs with CAD data is surface inaccuracies, which can lead to shading problems. One such problem is shading splits – as can be seen in the images on the right. Normally, a 3D artist has to repair these splits manually, piece by piece, which is a lengthy process. With InstaLOD’s CAD Live Link, they can be repaired quickly and easily using the shading settings. In many cases, recalculating the normals is sufficient. In our case, however, a lot of shading information is stored within the CAD metadata, which would be lost by recalculation. We therefore use InstaLOD’s Shading Magic, which automatically localises and repairs the problematic areas. After the tessellation has been performed and any shading problems have been repaired, the rim – our example object – is still not really usable: all materials would have to be set up again for a render, there are no UVs yet, so textures could not be applied, and finally we have 80,000 polygons in the current state. Scaled up to a complete car, that would quickly mean several million polygons, which is not compatible with a VR application.
So the next step is to make the rim VR-ready in a few steps. Firstly, we go to the mesh operation settings and start with a UV unwrap. Here we use the Hard Surface Axial algorithm, which creates a clean unwrap for the surfaces facing the axes.
After the UV unwrap, we perform a material merge operation. The reason for this is that we have three objects with three materials in this scene. If we scale this up to a complete car, we quickly have thousands of objects with thousands of materials. Firstly, that is a lot of work when manually setting up the materials for a render, and secondly, it also means a huge number of draw calls for a real-time application. We therefore use the material merge, which combines all materials and textures into a single material with a texture atlas. Here we also bake the material parameters metalness and roughness into a texture. This saves time later when setting up the individual materials, and also saves a lot of draw calls and texture memory. To make setting up the materials even easier, we combine the objects using “Combine Meshes” in the Mesh Tool Kit (MTK). We now have one object with one material and a single draw call.
This object can now be exported and imported into any renderer as we have the UVs and textures to render the materials exactly as initially set up – completely independent of the renderer.
Now we need to reduce the number of polygons. To do this, we use InstaLOD’s Remesher, which performs a complete reconstruction of the rim within a very short time, simultaneously building UVs and baking textures. The result is an incredible reduction of over 90%. We had already done all these steps manually, which raises the question: why didn’t we use remeshing directly on the original object? The reason is the workflow that we want to set up. What we can do now is take our timeline with all the entries – UVs, Material Merge, Combine Meshes and Remesh – and convert all of that into a new profile.
Now you really realise how powerful InstaLOD’s workflows are: we can test multiple mesh operations, and if we would rather try something else, we can simply jump back in the timeline and test other operations until we are happy. Then we can turn the workflow into a profile and scale it to hundreds or thousands of objects by exporting the profile as a JSON file and running it through the command line as a batch process. Or we can run the profile directly within InstaLOD Studio XL with InstaLOD Pipeline. Profiles (which contain all settings) can be saved and reused at a later time – this is how presets are set up when they make for a better workflow. Experience has shown that you should first test the profile on one or two objects to see whether it has been set up correctly. Then nothing stands in the way of making your work easier – or of complete automation.
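As an illustration of what that automation layer could look like from the outside – the profile schema, the file names and the pipeline executable below are hypothetical placeholders, not InstaLOD’s documented format:

```python
import json
import subprocess

# Hypothetical sketch: persist the operations from our timeline as a profile...
profile = {
    "name": "VR-ready rim",
    "operations": [
        {"type": "UnwrapUV", "algorithm": "HardSurfaceAxial"},
        {"type": "MaterialMerge", "bakeMetalnessRoughness": True},
        {"type": "CombineMeshes"},
        {"type": "Remesh", "targetReduction": 0.9},
    ],
}
with open("rim_profile.json", "w") as fh:
    json.dump(profile, fh, indent=2)

# ...then batch-apply it to every updated CAD export (paths are placeholders)
for part in ["rim.stp", "brake_disc.stp"]:
    subprocess.run(["InstaLODPipeline", part, "--profile", "rim_profile.json"],
                   check=True)
```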
When batch processing a CAD file, you logically have no manual control over the first steps that we applied to the rim. This includes adjusting the tessellation and materials, and often also modifying or deleting object hierarchies that are not needed for the visualisation (see image above left, “Organise outside” and “Organise inside”). With the Scene Import Rules, all this is possible with little effort: simply specify which objects are affected and what should happen to them.
[caption id="attachment_77238" align="alignnone" width="1181"]To set up a rule, you need to add a new entry in the “Scene Import Rules” window and give it a name. You must then specify an attribute. Everything from the object name to the path can be specified here. The nice thing here is that basically any available attribute can be used within the metadata. You can see exactly which ones are available in the “Selected Object Information” window. If you use “Name” as an attribute, for example, you must enter the name of the object in the “Match RegEx” field below so that it is processed by the rule. In the Predicate, you can specify how important this rule is, which is helpful if there are many rules, so that you can specify which rule is processed earlier or later in the list. The predicate determines what the rules should perform on the objects. Available options are operations such as “Material Assignment”, “Tessellate”, “Delete” and many more. (top left image “Scene Import Rules”). Customised predicates can also be added using a C plug-in. These plug-ins are automatically compatible with the complete InstaLOD system. With these plug-ins you can customise InstaLOD to what you need for your own pipeline.
[caption id="attachment_77228" align="alignnone" width="2560"]Now we come back to the PLM system that was briefly mentioned earlier. Once you have set up a set of rules, including the subsequent mesh operations, it is no longer a problem to update the CAD objects on a weekly basis. Simply load the finished profile and run a batch process in which the objects are automatically prepared using the rules (including materials, tessellation, organisation, etc.) and finally reduced by the mesh operations. At the end, you get the finished, updated scene at the touch of a button, without having to manually intervene in the process. This means that large assemblies can be continuously extracted from a PLM system and made VR-ready within a very short time. By the way: If you want to try this out for yourself, you can get a trial version by filling in the form on our website InstaLOD.com and try out the described workflow and all the other features and processes that InstaLOD makes possible on your own assets.
As is usual for Xi-Machines, the Animate X2 Advanced was delivered extremely well protected against transport damage: the actual workstation packaging sits, very securely padded, inside another large cardboard box with a huge number of small polystyrene elements. Unpacking and collecting the numerous polystyrene bits, especially while wearing a statically charged fleece jumper, elicited a curse or two from the author, however. Bubble wrap, which is easier to unpack and repack, would be a welcome alternative.
Even when unpacking, the numerous labels on each side of the packaging were noticeable, indicating further transport protection for the CPU and graphics card inside the workstation case.
The case is the midi-tower version of Xi-Machines’ standard workstation case, made from elegant-looking black brushed aluminium with chrome-plated feet. Two USB 3.0 ports as well as headphone and microphone sockets are hidden under a small hinged lid on the top, with the power switch and reset button right next to them. There are two empty 5¼-inch slots on the front of the case in case an optical drive or multi-card reader needs to be retrofitted.
After removing the transport lock for the CPU and graphics card, the inside of the case looks very tidy: one usable PCIe x4 and PCIe x16 slot each are still free, and three hot-swap bays are still available in the neighbouring HDD cage. As you would expect from Xi-Machines, the cable management is flawless, with only one really visible cable running from the powerful and quiet power supply unit to the graphics card.
Xi-Machines has equipped the Animate X2 with an Intel Xeon W-2155 CPU with ten cores and 64 Gbytes of registered ECC system memory, divided into four 16 Gbyte modules. This leaves four RAM slots free; in total, the memory can be expanded to a maximum of 512 Gbytes.
When it comes to mass storage, Xi-Machines has opted for so-called Enterprise Edition drives. Enterprise Edition is not a special edition for Star Trek fans, but means that these mass storage devices have significantly higher operational reliability and a longer service life than the commonly available consumer/desktop models. The mean time between failures (MTBF) is significantly higher for the Enterprise Edition models – and so is the price.
Three different mass storage devices were installed: a 480 Gbyte SSD for the operating system, a 480 Gbyte M.2 SSD as a scratch disc and a 3.2 Tbyte (!) PCIe SSD module with crazy transfer rates for project data. The graphics card is the Ti version of the current Geforce RTX 2080 with 11 Gbytes of RAM. In addition, the Animate X2 Advanced offers almost all currently relevant interfaces at the rear of the housing.
The scope of delivery also includes a mouse and a keyboard of acceptable quality, extra cables for the power supply unit, a small box with screws and an anti-static wrist strap with a clamp for earthing.
In the Cinebench 20 CPU test, the Animate X2 Advanced came out on top of the test field with 5,258 points, as expected. In the older Cinebench 15 test, it was also at the top with 2,200 points, and the Geforce RTX 2080 Ti achieved 159 fps in the OpenGL test. In the V-Ray render test for CPU and GPU, the good results of the Cinebench tests were confirmed with a computing time of just 1 minute and 1 second for the CPU test and just 46 seconds for the GPU test.
When rendering the classroom scene in Blender, the Geforce RTX was able to show what it can do: the ten CPU cores of the Xeon W-2155 took a brisk 8 minutes and 49 seconds, while the Geforce RTX 2080 Ti needed just a quarter of that time at 2 minutes and 12 seconds. The unofficial Octane-Bench beta test also showed what the RTX cards have over their predecessors: 302 points without RTX and a whopping 895 with it.
Xi-Machines has gone all out when it comes to mass storage: the 3.2 Tbyte PCIe SSD Enterprise Edition module achieved write rates of 2,035 Mbytes and read rates of 4,407 Mbytes per second in the Aja system test and delivered a sustained transfer rate of 4,400 Mbytes per second. One caveat, though – not with the mass storage itself, but with the benchmarks: according to Xi-Machines, the SSD is capable of reading data at up to 6,170 Mbytes per second, which we were unfortunately unable to measure. The 500 Gbyte SSD intended as a scratch disc achieved write rates of 2,012 Mbytes per second, similar to the large PCIe SSD, but with read rates of 2,729 Mbytes per second it did not come close to the latter’s values. The smaller system SSD wrote data at 362 and read it back at 513 Mbytes per second – completely sufficient for the operating system.

With a maximum DPC latency of 442 microseconds, the Animate X2 Advanced was in the midfield of the test candidates – fine for the traditional application areas of 3D, rendering and HD video. In addition, Windows 10 Pro is configured in such a way that users can get started straight away without having to activate Windows, update drivers or rein in questionable optimisation tools.
The cooling concept of the Animate X2 Advanced works well, because even under simultaneous synthetic full utilisation of all components with the Aida 64 stress test, the temperature values of the CPU, the mainboard and the mass storage remained within the normal range.
Only the Nvidia Geforce RTX 2080 Ti reached a slightly higher value of 81°C, although this is unlikely to be reached under realistic load scenarios in practice. And under all the load and stress, hardly a murmur could be heard from the Animate X2 Advanced; only the graphics card stood out a little when its fans spun up, but everything was absolutely bearable.
At 9,875 euros, the Xi-Machines Animate X2 Advanced is certainly no bargain, but calling it expensive is not justified either. Of course, in comparison with the other test candidates, none of which offer selected and tested Enterprise Edition devices of 3.2 Tbyte size and a 5-year warranty, the price seems quite high.
However, if you consider the costs incurred and the loss of face with the customer if a workstation fails unexpectedly in the middle of a large project, the price is put into perspective – and the attribute “expensive” can very quickly turn into “inexpensive”.
The Xi-Machines Animate X2 Advanced is unquestionably fast in all areas, offers excellent hardware components and processing and extensive options for increasing the capacities of mass storage and RAM memory. In terms of operational safety and reliability: If I had to buy a computer for a nuclear power plant, I would probably order it from Xi-Machines.
The idea of using a gaming notebook as a workstation is actually obvious due to the similar hardware requirements. This is because the demands on the CPU, GPU, RAM and mass storage are high in gaming as well as in the areas of 3D, HD video and media content creation. If you take a look at the hardware equipment of the Alienware m15, you would be forgiven for believing it could be used as a workstation. The notebook was delivered in a sturdy shipping box, which keeps the device well padded on the inside.
The housing of the m15 appears stable and accurately finished. All plastic parts are clean and flush, the display hinges are secure and run smoothly and evenly. The 15.6-inch UHD display with 4K resolution delivers good, high-contrast images even at less than ideal viewing angles and has enough brightness reserves to work outdoors on a summer’s day.
We were positively impressed by the keyboard, which, in addition to good keys and a generous surface area, even offers a full numeric keypad – something that makes operating 3D, audio and video programmes much easier. The trackpad with its two mouse buttons can also be controlled reliably and precisely.
On the left side of the Alienware m15 are a USB-A 3.1 port and the Gigabit LAN socket, with two more USB-A 3.1 ports on the right. On the rear, Dell has accommodated HDMI 2.0, a Mini DisplayPort, Thunderbolt 3 and the Alienware Graphics Amplifier port.
When selecting the CPU and graphics card for the Alienware m15, the maxim “a lot helps a lot” obviously applied. Dell has packed an Intel i9-8950HK CPU with six cores and the Nvidia Geforce RTX 2080 Max-Q – one of the currently most powerful mobile graphics cards, with 8 Gbytes of RAM – into the m15, and additionally garnished it with 32 Gbytes of system memory. This should be sufficient for most workstation tasks such as HD video editing or 3D modelling. However, the capacity of the internal NVMe SSD could be a little tight, as a mere 256 Gbytes for programmes, one or two libraries and project data will quickly become cramped. It is, however, possible to connect plenty of fast mass storage via one of the available high-speed interfaces.
With the Intel Core i9-8950HK processor, the Alienware m15 achieved an impressive 2,546 points in the Cinebench 20 CPU benchmark, 1,221 points in the older Cinebench 15 and 117 frames per second in its OpenGL test with the Nvidia Geforce RTX 2080 Max-Q. The Alienware m15 also performed well in the V-Ray benchmark for CPU and GPU:
1 minute and 52 seconds of computing time for the CPU test and 1 minute and 43 seconds for the GPU test.
In the Blender 2.7 render test with the classroom scene, the i9-8950HK needed 23 minutes and 44 seconds. With version 2.8, the same scene took only 18 minutes and 6 seconds on the CPU, and just 4 minutes and 37 seconds on the GPU of the Geforce RTX. The unofficial Octane-Bench beta render test resulted in 151 points without and a whopping 445 points with RTX support.
In the Aja system test with the default “4K Full” preset, the 256 Gbyte NVMe SSD achieved 650 Mbytes per second when writing and 2,409 Mbytes per second when reading, with a sustained transfer rate of at least 1,182 Mbytes per second. The write rate is a little low, but the Alienware m15 will probably not be used for 8K capturing.
The Dell Alienware m15 achieved the second-highest DPC latency value of 734 microseconds, traditionally caused by a Dell driver. This is a pity, because apart from this singular peak value, the latencies were in significantly lower ranges, which makes the Alienware m15 appear worse in this area than it actually is.
An Intel i9-8950HK CPU and an RTX 2080 are very fast, but they also have to be cooled somehow, which is always a bit difficult in notebooks due to the limited space available. If you then consider that the case is quite flat for the hardware power it contains, a thermal disaster seems inevitable. Even when idling, the Alienware’s fans repeatedly started up audibly. At around 20 to 30% load, the fans then ran continuously in the lower speed range.
During the test runs with the benchmarks, the fans ramped up from audible to loud, and the underside of the case became noticeably warm after a few minutes. Under synthetic load with the Aida 64 stress test, in which the CPU, GPU, memory and all mass storage devices ran simultaneously at full capacity, the temperature of the CPU initially rose to 100°C and that of the GPU to 92°C. This led to the CPU clock frequency being reduced by 10 to 20% to protect the CPU from overheating. Normally, a notebook does not recover from such a heavy thermal load and continues to run at a reduced pace. Not so the Alienware m15, which heroically picked itself up after about two to three minutes of thermal throttling, with audible fan noise, and from then on continued to calculate unchecked at CPU temperatures of 92°C and GPU temperatures of around 80°C – permanently and without fluctuations. That is close to the limit, but okay considering the form factor and performance. A warning for all male readers who have the idea of using the Alienware m15 under full load as a classic laptop on their lap: in the long run, this could lead to thermal sterilisation.
The idea of repurposing the Alienware m15, which was actually designed for gaming, as a workstation is not so far-fetched. There are a few points that will irritate professional workstation users, such as the many pre-installed software helpers and assistants that interrupt your work at the most inopportune moments and have to be deactivated manually; or the relatively high fan noise under partial and full load, the low SSD storage space and the brief thermal throttling. In return, the user gets a computer with a huge variety of configuration options, a good display and case, and more than enough CPU and GPU performance to work decently on the move without having to spend a fortune. And you can use it for gaming, too.