Search Results for “DP1904” – DIGITAL PRODUCTION
https://digitalproduction.com – Magazine for Digital Media Production

Silentmaxx workstation based on Kenko S-595i in test
https://digitalproduction.com/2019/07/10/silentmaxx-workstation-basierend-auf-kenko-s-595i-im-test/ – Wed, 10 Jul 2019
The company Silentmaxx specialises in silent PCs and workstations. The development of the components used, such as a passive CPU cooler that only works with heat pipes without an additional fan, takes place in-house.

Silentmaxx in Rheinbach has been developing and producing PC components for over 15 years and – as the company name suggests – has specialised in particularly low-noise, passively cooled power supply units and CPU coolers. Silentmaxx also offers a wide range of silent, passively cooled workstations, including Xeon workstations with 44 cores and Nvidia Quadro cards. The workstation we tested is based on the Kenko S-595i, which is completely passively cooled in its basic version and does not require a fan.

The Silentmaxx was delivered for testing in true rock ‘n’ roll style: in a flight case with butterfly fasteners and ball corners, extremely well padded and packaged. This is not the normal packaging, but is only used for test devices. For transport, the graphics card was attached to a housing plate with a cable tie; the large heatpipe heatsink was not specially secured.

Case

The Silentmaxx big tower case, like the CPU and graphics card coolers, was developed and manufactured in-house. Unlike most other tower cases, the Silentmaxx case is made of plastic, which is lighter and probably cheaper, but unfortunately does not convey the same feeling of quality and stability as aluminium, for example. On the top front there are two USB 3.0 and two USB 2.0 ports along with power and reset buttons – practical if you need to quickly import data from a mobile hard drive.

The question of whether a midi tower wouldn’t have done just as well is answered as soon as the case is opened. No, because the entire space between the first PCIe slot, where the graphics card sits, and the top of the case is filled by a huge passive CPU cooler with ten heatpipes. The passively cooled graphics card, also equipped with heatpipes and a large heat sink, sits directly underneath. Passively with one caveat: one slot directly below the graphics card sits a plastic tray with two large, slowly rotating fans that gently fan air across the graphics card’s heat sinks – but unfortunately cover a PCIe x16 slot in the process. A narrow fan is also hidden between the two blocks of the CPU heatsink, attached to the upper cooler with a metal clip.

Behind the case door there are three 5¼″ bays, one of which is already occupied by a 22x DVD multi drive, with eight completely free HDD cages/slots below in the case.

Equipment

Silentmaxx has equipped the workstation with an Intel i9-9900K CPU with 8 cores and 32 Gbytes of RAM. Unlike the i7-9700K, the i9-9900K installed in the Silentmaxx offers hyperthreading on top of its 8 cores, which is certainly not a disadvantage for HD video, 3D and audio applications. The GeForce RTX 2070 with 8 Gbytes is not the most powerful graphics card, but it offers a good price-performance ratio. In addition, the 175 watts of power consumption of the 2070 are probably easier to cool passively than the 260 watts of the 2080 Ti.

Silentmaxx has installed a 1-Tbyte M.2 SSD module for mass storage and, as already mentioned, there is even a DVD-R multi drive in the 5¼″ bay. In terms of interfaces, the Silentmaxx offers everything you could possibly need for everyday use: two Gigabit LAN ports, four USB 3.0 ports, two USB 3.1 ports and two USB 2.0 ports, two Thunderbolt 2.1 ports and one USB-C port, plus three DisplayPorts, one Mini DisplayPort and HDMI on the RTX 2070.

How quiet is “Silent”?

Is the workstation really silent when computing? Not quite, because on the one hand you could hear the fan working quietly between the two CPU heat sinks. On the other hand, the fan began to rattle audibly once the heat sinks had warmed up a little. Although the problem could be quickly resolved by moving the fan, it revealed a problem with the fastening and mechanical decoupling of the fan. The graphics card, on the other hand, cooled effectively, quietly and smoothly at around 72°C under full load.

Performance

With a Cinebench 20 CPU score of 4,913 and 2,042 points in Cinebench 15, the Silentmaxx workstation took second place directly behind the significantly more expensive Xi-Machines Animate X2. The CPU was also only 3 seconds behind the Animate X2 in the V-Ray render test with 1 minute and 3 seconds. The Nvidia RTX 2070 achieved the top value of 168 frames per second in the OpenGL benchmark of Cinebench 15 and needed 1 minute and 7 seconds for the V-Ray scene.

The RTX 2070 graphics card achieved the Cinebench 15 OpenGL top value of 167 frames per second. In the Cinebench 20 CPU test, the i9-9900K also achieved the expected scores. V-Ray: only two seconds’ difference from the 10-core Xeon CPU in the V-Ray CPU test.

The results of the Blender Classroom tests were interesting. In the CPU test with version 2.7, the i9-9900K of the Silentmaxx workstation was only 14 seconds slower than the Xeon CPU of the Xi-Machine at 12 minutes and 2 seconds. If the same scene was rendered with Blender 2.8, the i9-9900K was even 22 seconds faster than the Xeon W 2155 at just 8 minutes and 27 seconds. At 3 minutes and 14 seconds, the Nvidia RTX 2070 required a good 1 minute and 2 seconds more computing time than the 2080 Ti.

The one Tbyte SSD delivered the expected transfer speeds of 3,001 Mbytes per second read rate and 2,970 Mbytes per second write rate in the Aja system test. The test with HD-Tune determined a continuous transfer rate of 2,248 Mbytes per second. With these transfer speeds and capacities, many tasks from HD video and 3D to large audio projects can be realised. We measured a maximum DPC latency of 304 microseconds, which, together with the good benchmark results, indicates that Windows 10 Pro is running smoothly with the latest drivers and without disruptive utilities.
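To put the measured rates into perspective, here is a back-of-the-envelope sketch in Python – purely illustrative, and the 120 GB project size is an invented example, not from the test:

```python
# Rough transfer-time estimates from the measured Aja rates
# (3,001 MB/s read, 2,970 MB/s write) -- illustrative only.

READ_MBPS = 3001
WRITE_MBPS = 2970

def transfer_seconds(size_gb: float, rate_mbps: float) -> float:
    """Seconds needed to move size_gb gigabytes at rate_mbps megabytes/second."""
    return size_gb * 1000 / rate_mbps

# A hypothetical 120 GB HD video project:
print(round(transfer_seconds(120, READ_MBPS), 1))   # ~40.0 s to read
print(round(transfer_seconds(120, WRITE_MBPS), 1))  # ~40.4 s to write
```

At these rates, even large video projects move between disk and RAM in well under a minute, which matches the "many tasks can be realised" verdict above.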

Conclusion

Admittedly, the case of the Silentmaxx workstation is not as stylish as that of the Xi-Machines Animate X2, and it is not much quieter under full load either. On the other hand, it costs only about a third of the price yet plays in the same league in terms of CPU performance. After all, the Silentmaxx workstation was the second-fastest computer in the test field, but at 3,110 euros it was also the second-cheapest. If you can do without the stylish case and the additional security provided by Enterprise Edition components, the Silentmaxx is a powerful, well-configured workstation that offers excellent value for money.

Lightwave 3D 2019 – the continuation of a new beginning?
https://digitalproduction.com/2019/05/10/lightwave-3d-2019-die-fortsetzung-eines-neuanfangs/ – Fri, 10 May 2019
Lightwave 2019 is out – now already in its third bugfix release, Lightwave 2019.0.3, with a decent list of fixes. What not even the most loyal fans may have expected has materialised: Newtek has delivered the fastest upgrade in its history for the proven and production-tested 3D modelling, animation and rendering software. After a period of uncertainty, everything now seems set to change for the better. The list of current features is extremely long, and the response from the remaining die-hard Lightwave users is correspondingly enthusiastic.

As one of the first 3D programmes on the market – originally the 3D component of Newtek’s Amiga-based Video Toaster – the software (now for Windows and Mac) has a long and chequered history behind it. In its heyday in particular, Lightwave scored points through its use in numerous productions such as “300”, “Iron Man” and “Avatar”. The times when graphics and especially 3D software were booming (suddenly everyone seemed able to produce cinematic 3D film effects themselves) are probably over. Some websites that list such software are now categorised as “dead”.

Lightwave has also suffered. The disaster with the “Core” project certainly caused the first major setback. For many users, Core was seen as a real new beginning with a new UI, new concepts and innovative functions – and a stylish T-shirt that some users referred to as a shroud. After the project was quietly cancelled in 2011, things became increasingly quiet around Lightwave. Nevertheless, the software has remained available to its users and has been further developed. Even today, Lightwave is still sometimes the software of choice for smaller studios and freelancers.

Fans of Lightwave 3D are particularly hopeful about the latest upgrades. There was also a change in nomenclature: after numbering up to version 11 – the last being 11.6 – came version 2015 (we reported on this in issue 02:15). The next release did not appear until 2018, but now Newtek seems to be getting serious. Lightwave 2019 was announced at the beginning of the year, and the first bugfix releases followed within a few weeks. The software’s range of functions has increased significantly.

Luminous mesh objects

Regular Lightwave users, as well as users of other 3D software, amateurs and professionals, are encouraged to see Newtek’s new start as an opportunity. Newtek offers an unconventional but complete and production-proven tool for a manageable price without any subscription obligation, with which a variety of tasks for motion FX, animation and game design can be completed.

Some say this, others say that …

The current version of Lightwave remains true to itself and retains some of its famous, much-discussed peculiarities. For example, while the division of the software into two programmes – the Modeler (where 3D objects are created) and Layout (where scenes are arranged, lit, animated and rendered) – is an annoying relic for some users, others see it as an advantage because it keeps the focus on the essentials and improves clarity.

Lightwave’s UI manages without colourful icons or tooltips, and some new users complain about hidden functions or functions that are complicated to use. For example, a number of presets can be selected for the viewport layout – albeit only in the preferences; in the view layout menu under the “View” tab, there are only options to call up the previous or next layout from the list. And this still applies: if nothing is selected in the Modeler, Lightwave considers everything to be selected. Without studying the manual, the learning curve seems hard to overcome.

Experienced users, on the other hand, praise the customisability of the UI (all menus can be freely designed), the effective control via shortcuts, the speed when working, and the variety of functions, which can be extended with numerous plug-ins from third-party providers.

Lightwave’s material for hair uses different colour values and specular shaders.

The many third-party extensions, some of which have been developed over the years and are used by professionals as a matter of course, can be a challenge for programme newcomers when, for example, video tutorials are studied and commands appear that Lightwave does not even have by default. The names of plug-ins or commands do not always say anything about their function. Some different extensions fulfil very similar tasks in different ways, and some simply no longer exist at some point.

15, 18, 19 …?

As Lightwave has hardly changed externally, the new features only become apparent at second glance. While the new features of the 2015 version included bullet constraints, the Genoma 2 character rigging system, a match perspective tool, rendering techniques such as importance sampling and edge rendering, and workflow improvements, the list of new features in the 2018 version is much more extensive. Lightwave’s scene format was changed to support new functions; scenes that are to be opened in older versions of the programme must be exported in an earlier format.

Just a null object with a shape sphere – the turbulence shader ensures proper displacement

A new physical-based rendering system was introduced in the 2018 version (PBR Rendering); new shading, lighting and rendering options are intended to enable a higher degree of realism when creating 3D scenes and images.

With extended render & light buffers, Newtek aims to improve performance when working with compositing. Using the Node system, any render buffer can be created and displayed in any viewport as a VPR preview in (near) real time.

For the 2019 version, Newtek is advertising the bridge to the Unreal Engine. Several Lightwave applications can be connected to the Unreal Engine at the same time. Updates to scenes or objects are made in real time. According to Newtek, the bridge uses a network detection algorithm from Newtek for simple automatic configuration, which can be installed on a single project in Unreal or as a general plug-in for use in all Unreal projects. In an initial test, the connection to the Unreal Engine worked straight away.

Creating OpenVDB content, Metamorphic (animated sculpting in layout), Empty Volumes (conversion of meshes to volumes, Boolean functions in real time), new shader nodes and new compositing buffers as well as physical sky and render extensions (Denoiser, Despike) are just some of the keywords for the current 2019 version.

Shape primitives and empty volumes

Null objects can be assigned primitive shapes (sphere, cube, cylinder, torus, cone and plane) as object properties in Layout. The Shape > Empty Volume option creates an empty volume. Such empty volumes can be used to create real-time Booleans in Layout. The CSG node takes over the Boolean functions (Union, Intersection, Subtraction). A new Volumetric Engine generates volumetric clouds and fog from primitives.
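Lightwave’s CSG node is not scriptable from outside the application, but the three Boolean modes it exposes (Union, Intersection, Subtraction) correspond to well-known operations on signed distance fields – the representation that real-time volume Booleans are typically built on. A minimal Python sketch of that general idea, not Lightwave’s implementation:

```python
# Illustrative only: the three CSG Booleans expressed on signed distance
# fields (SDFs), where a point's value is negative inside a shape and
# positive outside it.

def sphere_sdf(x, y, z, cx=0.0, cy=0.0, cz=0.0, r=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

def csg_union(a, b):        return min(a, b)
def csg_intersection(a, b): return max(a, b)
def csg_subtraction(a, b):  return max(a, -b)   # a minus b

# Evaluate both spheres at the origin: inside sphere A (r=1) and inside
# sphere B centred at (0.5, 0, 0).
a = sphere_sdf(0, 0, 0)            # -1.0
b = sphere_sdf(0, 0, 0, cx=0.5)    # -0.5
print(csg_union(a, b))             # -1.0 (inside the union)
print(csg_subtraction(a, b))       #  0.5 (carved away by B -> outside)
```

Because each operation is just a per-point min/max, the result can be re-evaluated every frame, which is what makes Booleans of this kind feasible in real time.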

Newtek introduced OpenVDB support in the 2018 version. This award-winning open source C library is now used by most 3D programmes. In the 2018 version, Newtek advertises volumetric effects for smoke and fire, among others.

Lightwave’s rigging systems Genoma 1 and Genoma 2 are designed to help speed up the rigging and animation process (both systems exist side by side, Genoma 2 is even more complex and offers more options than Genoma 1)

The lighting system and the surface editor have also been completely revised. In the 2018 version, Lightwave offers a virtual reality camera for both cylindrical and spherical modes. This VR camera is also suitable for stereo 360-degree renderings and animations for VR applications.

The FiberFX features have been further developed and are compatible with the new shading system.

If you take the order in which the new features of the current version are presented on the Newtek website as a criterion for their importance, then support for the Unreal Engine seems to be at the top of the list. The improvement of FBX data exchange is also listed at the top. The FBX support of Lightwave was already fine-tuned in the 2018 version, e.g. to better master the exchange with Unity.


The innovations to the bone system to better accommodate the needs of game development also fit the theme. Limited Bones is an option to customise the number of bones that can affect a specific point to match the game engine in use. The new mode also offers real-time optimisations.

Lightwave 2019 includes a new Genoma 2 rig created for Japanese TV animations by Koutarou Shishido. It is designed to be particularly easy to use and allows you to quickly switch between FK and IK.

Smoothing Groups (SG), now a standard for smoothing polygons in game engines, allows up to 32 selected polygon groups to be smoothed in the Modeler without inserting extra geometry.
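The 32-group limit reflects how smoothing groups are commonly stored in interchange formats (e.g. OBJ/3ds Max-style): each polygon carries a 32-bit mask with one bit per group, and two adjacent polygons share smoothed normals if their masks overlap. Whether Lightwave uses the same internal representation is an assumption; the Python sketch below only illustrates the convention:

```python
# Illustrative sketch (not Lightwave's internal format): 32 smoothing
# groups map naturally onto one bit each of a 32-bit mask per polygon.

def sg_mask(*groups: int) -> int:
    """Build a bitmask from 1-based smoothing-group numbers (1..32)."""
    mask = 0
    for g in groups:
        if not 1 <= g <= 32:
            raise ValueError("smoothing groups are numbered 1..32")
        mask |= 1 << (g - 1)
    return mask

def smoothed_together(mask_a: int, mask_b: int) -> bool:
    """Two polygons smooth across their shared edge if any group bit overlaps."""
    return (mask_a & mask_b) != 0

print(smoothed_together(sg_mask(1, 3), sg_mask(3)))  # True  (share group 3)
print(smoothed_together(sg_mask(1), sg_mask(2)))     # False (hard edge)
```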

Volume Shapes in Layout – the three volumetric primitives Cube, Sphere and Cone in the VPR view: on the left the object properties, on the right the node editor for Volume_Sphere, bottom right the settings for the Hyper Voxel Primitive Turbulence

New content – OpenVDB Evaluator

The OpenVDB options have been significantly expanded in the current version. You can now create your own OpenVDB content using a range of node tools and manipulate it in various ways. Meshes, particles or primitives can be turned into gaseous, liquid or even solid volumes and, for example, enter into Boolean real-time interactions. The new VDB workflow – e.g. converting meshes into volumes and vice versa – opens up amazing new modelling and animation possibilities (objects become animated clouds, for example). Initial tests and, above all, Newtek’s video tutorials impressively demonstrate how powerful this technology is.

Volumetric fog in Lightwave – test render of an example scene; a test rendering of volumetric light

Materialistic – new nodes for new material components

A number of new functions have been added to the material system in the 2019 version. Not only the Surface Editor, but above all the node system has been given numerous new material properties and functions, especially for creating PBR materials.

Material Components in Lightwave 2019 is a new, complex set of nodes and tools that can be used to customise shading effects. Lightwave professionals should be able to achieve the exact look they want for materials. There is access to Fresnel functions, lighting models, volumetrics, material integrators etc. New cel shaders promise improved NPR rendering.

An Edge Shader creates the illusion of additional geometry needed to round edges. A new Patina Node generates shading effects such as on chipped edges or dirty surfaces.

A light goes on

The lighting architecture was completely revised in 2018. For example, the Dome Light has been removed. Instead, the Distant Light now has the option of angle and distance input, which can also be used to control the shadows. There are now physical lights. The “Primitive” light type allows light sources to be assigned the shape of any scene objects, with the “Sample Surface” option taking the surface of the objects into account when emitting light.

A simple example scene for volumetric lighting – here with area lights for room illumination and volumetric light

Spotlights no longer have shadow maps, but they can now create soft shadows using “Size” settings.

Area lights now only shine in one direction. This means that the lighting effects can be controlled more precisely. The “Portal” option for area lights or polygon light sources supports the illumination of interior spaces.

In addition, most light sources now have a switch called “Normalise”. It determines that the amount of light emitted only depends on the set light intensity. If this option is switched off, the object size also influences how much light is emitted. The loading of IES web files has been improved to better simulate the light intensity of real manufactured luminaires. With the “Visible to Camera” option, light sources can be displayed visibly in the rendering. Volumetric primitives and OpenVDB objects provide new options for luminaires that affect the volumetrics, e.g. a volumetric scattering effect (for so-called God Rays) and new fog settings.
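A plausible reading of the “Normalise” switch in code form – the exact formula below is an assumption for illustration, not taken from the Lightwave manual:

```python
# Assumed behaviour of the "Normalise" switch (illustrative, not Lightwave's
# documented formula): with Normalise on, the total emitted light is simply
# the set intensity; with it off, a larger light also emits more light.

def emitted_light(intensity: float, area: float, normalise: bool) -> float:
    """Total light output of an area light under the assumed model."""
    return intensity if normalise else intensity * area

print(emitted_light(100.0, 4.0, normalise=True))   # 100.0 -- size-independent
print(emitted_light(100.0, 4.0, normalise=False))  # 400.0 -- scales with size
```

Under this reading, resizing a normalised light changes the softness of its shadows but not the scene brightness, while a non-normalised light gets brighter as it grows.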

New in the old UI

Menus can now be navigated using the buttons (arrow, mouse wheel, start, end, etc.). Large menus can be searched using filters. The key combination Ctrl-Spacebar calls up a list of all available commands in Modeler or Layout, including the associated key combinations.

Since 2018, a layout view can be displayed in Modeler with “Layout View” (if several viewports are open in Layout, the selection is made by clicking on the camera icon, which is also responsible for the animation preview). Newtek received a lot of praise for the long-awaited revision of the undo system in Layout.

Lightwave’s Node Editor has also been revised. Not only have a number of new nodes been added. A very pleasant new feature is the “Tidy Nodes” function in the Node Editor. Even complex node networks can now be easily untangled. The comma key also highlights all connections upstream of a node, while the dot key highlights all connections downstream. Furthermore, the display of the node connections can be switched between straight, square or as a curve and a background grid can be set.

In Modeler and Layout, all available commands can be displayed with the shortcut Ctrl-Spacebar. The list is very long. If you know the names, this can help. The menu can be searched and you can navigate using the arrow keys and the mouse wheel

Some options for UV mapping have been added. The sizes or edge lengths and angles of the geometry and the UV map can now be compared and differences highlighted in colour. The programme highlights overlapping polygons of the UV map in red. A new feature is the ability to create UDIM tiles for UV maps.
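UDIM numbering itself is an industry convention rather than anything Lightwave-specific: tile 1001 covers the unit UV square, and the tile number increases by 1 per step in U (ten tiles per row) and by 10 per step in V. A quick Python sketch:

```python
# UDIM tile numbering (a general UV-industry convention, not specific to
# Lightwave): tile 1001 covers UV space [0,1) x [0,1); numbers increase by
# 1 per unit step in U (10 tiles per row) and by 10 per unit step in V.

def udim_tile(u: float, v: float) -> int:
    """UDIM number of the tile containing UV coordinate (u, v), with u in [0, 10)."""
    return 1001 + int(u) + 10 * int(v)

print(udim_tile(0.5, 0.5))  # 1001 (the first tile)
print(udim_tile(1.2, 0.0))  # 1002 (one step in U)
print(udim_tile(0.3, 2.7))  # 1021 (two steps in V)
```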

A small thing, but perhaps worth mentioning for Lightwave users: the online documentation now adapts to the screen format of each device.

Metamorphic – Sculpting in the layout

Newtek has integrated Metamorphic, Jamil Halabis’s mesh sculpting and vertex map editing system previously available as a plug-in, into the 2019 version. The toolset is multithreaded and supports tablet pen pressure to control the size, thickness and hardness of the modelling brushes, at least under Windows. It is now possible to perform animated sculpting, weight mapping and vertex painting in Layout. However, Metamorphic is not a competitor to applications such as ZBrush (the bridge to ZBrush, GoZ, has been part of Lightwave since version 11).

To start sculpting, a “Metamorphic” modifier must be assigned to the object to be edited in its properties. Double-clicking the modifier opens the plug-in’s work window. The features of Metamorphic include:

  • Free-form animated sculpting
  • Integrated undo / redo system
  • Multithread with all available CPU cores
  • Pen pressure support for brush size, thickness and hardness (Windows only)
  • Nodal Brush Texture Support
  • Non-linear interpolation of keyframes for sculpt animations
  • Convert shape animation keyframes to endomorphs
  • Full motion blur support

There are various brush modes such as Select (vertex selection), Transform (moving vertices), Sculpt (the actual sculpting), Weight (creating weight maps in Layout), Normal (editing normal maps) and Colour (creating vertex colour maps). The brush size can be adjusted with Shift plus the right mouse button.

Die Rendersettings sind übersichtlich angeordnet.
The render settings are clearly arranged

To sculpt over time, simply navigate to the desired position in the timeline; Lightwave automatically creates keyframes for the changes made to the object during sculpting.

Metamorphic works, but should not be seen as a competitor to dedicated sculpting software. Rather, the tool is designed to create corrective morphs for animations, and for this purpose it is surprisingly well suited. Sculpt animation keyframes can be converted into endomorphs – Lightwave’s built-in morph targets – and interpolated non-linearly.

The sky above Lightwave – Physical Sky

Lightwave offers various options for designing a scene environment, backgrounds or skies. In the current version, the Hosek-Wilkie sky simulator, which is also used by Blender, joins the backdrop options Gradient, Image World, Sky Tracer 2 and Textured Environment.

In Lightwave, it is possible to select any point on a world map and specify the date and time. A sky background is generated accordingly. The values for azimuth, elevation and (colour) temperature as well as the intensity and size of a sun can also be specified manually. A light source in the scene can act as a sun (SunLight Hosek-Wilkie light type). In this case, the option of selecting a location is omitted.
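For readers who want to reason about such sun parameters themselves: converting azimuth and elevation into a direction vector is a standard spherical-to-Cartesian step. The sketch below assumes a particular axis convention and azimuth zero-point – Lightwave’s own conventions may differ:

```python
import math

# Generic spherical-to-Cartesian conversion of sun angles -- the axis
# orientation (azimuth measured from +X toward +Y, Z up) is an assumption
# for illustration, not Lightwave's documented convention.

def sun_direction(azimuth_deg: float, elevation_deg: float):
    """Unit vector pointing toward the sun for the assumed axis convention."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

x, y, z = sun_direction(90.0, 45.0)
print(round(z, 4))  # 0.7071 -- sun 45 degrees above the horizon
```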

This and that – making life easier for Lightwave users

Newtek has tweaked many aspects of the new Lightwave. The render options have become much more extensive. The two new options “Enable Despike” and the noise filter should be emphasised. When “Enable Despike” is active, an attempt is made to smooth out abnormal brightness peaks in the rendering. Newtek recommends only activating this function if bright peaks actually occur in the rendering.

The noise filter can be set to be CPU- or GPU-based. However, according to the manual, the GPU option only works for still images (not on the network) and requires an Nvidia Geforce, Quadro or Tesla graphics card with at least a Kepler chipset. There are no user-configurable options.

Nvidia’s CUDA system is no longer supported on macOS, which means that Nvidia OptiX denoising is not available on that platform.

Lightwave offers quite a long list of render buffers in the current version. There are standard buffers that are provided by the render engine (alpha, depth, lens flare etc.). Additional buffers, which depend on the materials used among other things, are intended to support extensive compositing work (e.g. refraction, reflection).

In the render settings under “Buffers”, light groups can be created for which special buffers can be rendered. It is also possible to create user-defined buffers that are saved together with the respective object in the interface.

One little thing: after the last update, it was no longer possible to delete anything in the Modeler – at least not as usual with the Del key. Why the shortcut had disappeared from the default settings remains a mystery – but such minor problems can be quickly resolved with Edit > Keyboard Shortcuts.

Model-like

A tool for the parametric creation of bolts was added in 2019: QuickBolt generates bolts according to various, quite complex specifications with one click in an empty layer. It works in a similar way to the “Gemstone Tool”, which is already part of the arsenal but has far fewer setting options and is also interactive. Unfortunately, LWCAD is not (yet?) part of the software.

Simple but effective is Spline Bridge, a tool for connecting polygons with a bridging effect. There are only three options – Divisions (number of subdivisions) plus First Twist and Second Twist (twists at the beginning and end of the connection) – along with the ability to bend the bridge along a spline.

Conclusion

After many years of a chequered history, Newtek finally seems to be recognising the potential of Lightwave and devoting the necessary attention to the software. The latest updates give us hope. It’s not just the regular users who will be grateful.

The software has at least retained essential characteristics: the separation of Modeler and Layout, the rule that everything counts as selected when nothing is selected, and the fact that Newtek still does not use icons in the UI.

Despite this, or perhaps because of it, Lightwave die-hards claim that the software is beginner-friendly, leads to quick results and, given the current price-performance ratio, represents a real alternative to the (current) top programmes.

As far as the programme’s performance is concerned, this is true. The manageable price for the range of functions also speaks in favour of the program (approx. 1000 US dollars for the full version, 495 US dollars for an upgrade – 95 US dollars for an educational licence) and it is still possible to purchase the software without the need for a subscription.

From our point of view, the tool offers a large number of professional functions and, for those in the know, an almost unmanageable wealth of possibilities that enable enormous time savings for trained users. The functions added in the latest versions make us curious about future developments. If Newtek continues to develop the programme as it has, Lightwave could perhaps soon return to its former heights. We hope so.

CityEngine 2019!
https://digitalproduction.com/2019/05/10/cityengine-2019/ – Fri, 10 May 2019
With CityEngine 2019.0, one of the biggest CityEngine versions in recent years has been released. Our CityEngine team, which now consists of 17 people at the ESRI R&D Centre in Zurich, has been hard at work programming every nook and cranny, so that with this version we can really offer you many new features as well as a solid revision of old functionality.

We have been inundated with enquiries from potential CityEngine users wanting to know the best place to start learning about CityEngine and its functionality. We realised that it wasn’t quite as useful to send users lots of links, and so our CityEngine Resources page has (finally) been created. Through this page we hope to provide you with a good place to go that will not only help you get started with CityEngine, but also answer most of your questions.

Among other things, beginners and aficionados can find the latest features in explanatory videos – from which we have taken the images for this article – as well as a detailed setup guide, tutorials for getting started and videos from the developers on special functions, as well as a large library of practically all CGA references with explanations, examples and code snippets. Plus all the videos from our Tips & Tricks collection.

Drawing and transformation tools

Let’s continue with the drawing and transformation tools, which have been our focus since CityEngine 2018.1. The design concept for this was developed by our product designer Christian Iten (who was also involved in the design of the CityEngine resources).

As a designer, it is particularly important not only to present the functionality in a beautiful and plausible way, but also to make users (more or less) perfectly happy. We hope you notice this in the revision of the tools, and we are pleased that Christian will continue to work on the design concept so that the drawing and transformation tools become even stronger and better.

Thanks to numerous performance improvements, the drawing and transformation tools are now more precise and responsive. In addition to the existing tool enhancements, CityEngine has – by popular demand – also received a Boolean subtraction tool for 2D shapes. The offset tool is now called “Offset Shapes” and is located behind “Subtract Shapes” in the main menu. The ability to draw arcs with the polygon creation tool has also been improved in several places.

CGA (Computer Generated Architecture)

Did you know that without CGA there might not even be a CityEngine? CityEngine originated from the PhD dissertation of Pascal Müller, who incidentally still works as Director at the ESRI R&D Centre in Zurich. As part of this dissertation, Pascal developed numerous new procedural modelling techniques, including CGA.

The CGA operations together with the procedural runtime core are pretty much the heart of CityEngine. This is exactly why the procedural modelling language has received so much love from us this time.

The component split is a fundamental operation in procedural modelling. It splits a geometry into individual topological components, for example a building into its facades. In CityEngine 2019.0, we have also made this operation available as a function. For the first time, it is now possible to analyse a geometry and evaluate the information obtained without changing the procedurally generated geometry itself.
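The idea of a component split used “as a function” – inspecting components without mutating the generated geometry – can be illustrated with a small sketch. This is plain Python, not CGA syntax; the mesh layout and function names are invented for illustration only.

```python
# Illustrative sketch (plain Python, not actual CGA): a component split
# decomposes a geometry into topological components -- here, faces.
# Used "as a function", it lets us analyse those components without
# modifying the input geometry.

def comp_faces(mesh):
    """Return the faces of a mesh as independent, read-only components."""
    return [tuple(face) for face in mesh["faces"]]

def face_vertex_counts(mesh):
    """Evaluate information about the components; the mesh stays untouched."""
    return [len(face) for face in comp_faces(mesh)]

# A tiny "building": one quad facade and one triangular gable.
building = {
    "vertices": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 1.5, 0)],
    "faces": [[0, 1, 2, 3], [3, 2, 4]],
}

print(face_vertex_counts(building))  # -> [4, 3]
print(building["faces"])             # unchanged: [[0, 1, 2, 3], [3, 2, 4]]
```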

“The development of such new paradigms is a challenge, as they have to be in line with the existing CGA language, be as easy to use as possible and still open up many possibilities. In this case, we are particularly excited to see how CityEngine users utilise the new possibilities,” says Martin Manzer, one of our Procedural Runtime/CGA developers.

By the way: One of Martin’s favourite CGA operations is insertAlongUV, which was introduced in CityEngine 2018.0. Simply put, it makes it possible to apply a 3D geometry to a surface via its texture coordinates instead of a 2D texture. It has turned out that this makes it very easy to create curved geometries that fit perfectly – for example, crash barriers along roads.

VFX synergies

One of the big themes for CityEngine 2019.0 was improving workflows for VFX as well. That’s why we have now officially released the Palladio plug-in.

Palladio is a CityEngine plug-in for SideFX Houdini and was for a while a private project of Simon Hägler (also one of our software developers) and Matthias Bühler (former CityEngine team member and now owner of his own company Vrbn Studios). The plug-in has been reworked, but is still open source and allows the CityEngine CGA rules to be executed within Houdini.
The rules are exported from CityEngine as so-called Rule Packages (RPKs) and contain not only the actual CGA rules but also all referenced assets (usually manually created building parts). The creation of the CGA scripts remains a work step in CityEngine.

The basic way of working with Palladio consists of three steps: (1) creating the procedural rules in CityEngine, (2) exporting the rules as a rule package, (3) applying the rule package to Houdini geometry using the Palladio-Houdini operators. These operators generate the procedural city model as output.
Palladio adds two further operators to the Houdini tool list: Firstly, the pldAssign operator (short for “Palladio assignment”), which ensures that each Houdini primitive is added to a start shape group (essentially the perimeter polygon of the building to be modelled). The start rule and required attributes are also added so that the CGA engine works with the correct values.
In a second step, the pldGenerate operator (short for “Palladio generation”) comes into play. Here, the Houdini primitives and the assigned attributes are fed to the CGA engine, which applies the rules and generates the corresponding geometry. Embedding the CGA engine in the Houdini operator network enables new ways of working that are not possible with CityEngine itself. For example, it is now possible to connect several rule packages in series. This means that further rules can be applied to the result of a first rule, which is useful for the modularisation of extensive rules. Reports, contact options and all other information can be found on GitHub.
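The assign/generate chain described above can be mocked up schematically. The following is plain Python, not Houdini/HOM code – the operator names pldAssign and pldGenerate come from the article, but the data structures and toy rule functions are invented here to show how chaining two rule packages in series enables modularisation.

```python
# Schematic mock of the Palladio workflow (not real Houdini code):
# pld_assign attaches a start rule and attributes to each input primitive;
# pld_generate feeds the assigned shapes to the rules and collects the
# generated geometry. The output of one "rule package" can feed the next.

def pld_assign(primitives, start_rule, **attrs):
    """Mock of pldAssign: tag each primitive with a start rule and attributes."""
    return [{"shape": p, "rule": start_rule, "attrs": dict(attrs)} for p in primitives]

def pld_generate(assigned, rules):
    """Mock of pldGenerate: apply each shape's rule and collect the results."""
    generated = []
    for item in assigned:
        rule = rules[item["rule"]]
        generated.extend(rule(item["shape"], item["attrs"]))
    return generated

# Two toy "rule packages": one extrudes lots into masses, one splits floors.
rpk_extrude = {"Lot": lambda shape, a: [f"{shape}:mass(h={a['height']})"]}
rpk_floors = {"Mass": lambda shape, a: [f"{shape}/floor{i}" for i in range(a["floors"])]}

lots = ["lot1", "lot2"]
masses = pld_generate(pld_assign(lots, "Lot", height=12), rpk_extrude)
floors = pld_generate(pld_assign(masses, "Mass", floors=2), rpk_floors)
print(floors)
```

Chaining the second package after the first – as in the last two lines – is the plain-Python analogue of connecting two Palladio operators in series in the Houdini network.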

And this is only possible with Houdini?

Very soon we’ll go one better and release something cool for a little piece of software called Autodesk Maya (you may know it). As they say in English: “Stay tuned!”

Palladio licence?

Oh yes, and a CityEngine licence is no longer required for the non-commercial use of Palladio. In general, we have rethought our licensing model so that you can now also benefit from the “Named User Licence”. This means that you can simply log in with your ArcGIS online account and no longer have to deal with Flexnet licence activation and the ArcGIS administrator. In addition, CityEngine is now also part of “ArcGIS for Personal Use” and “ArcGIS for Student Use”. These software bundles give students, pupils and anyone who does not want to use the software commercially access not only to CityEngine, but also to many other software products produced by ESRI (cost: 100 US dollars per year).

glTF

With glTF, CityEngine is keeping pace with the latest developments in the world of 3D exchange formats. The format is used by leading companies and has become the de facto standard for exchanging geometries for WebGL. I asked Stefan Götschi, one of our glTF experts, what exactly makes glTF so exciting.

For him, the answer was relatively obvious: “The modern material scheme, which not only allows physically based rendering, but encourages it, has helped. This is further supported by the reference implementation of a viewer with open source code. This also guarantees correct display across different programmes. As the format is easy to extend, many companies have already introduced extension specifications, such as Google with Draco Mesh compression.”
glTF is a comparatively young format intended to significantly simplify the exchange of 3D scenes between different 3D applications: the description of the underlying data is standardised and – unlike other exchange formats – leaves no room for interpretation. Bottlenecks should no longer occur with glTF 2.0, and developers are already calling it the “JPEG of 3D”. Naturally, we didn’t want CityEngine to miss out on this.

The structure of the format is straightforward – there is no extraneous or redundant data, and there is only one way to interpret it. glTF uses a right-handed coordinate system (Z follows from the cross product of X and Y), with Y as the up axis. All linear distances are measured in metres, all angles in radians, and positive rotation is counterclockwise.

This makes it easy for the import modules to work and allows the data to be displayed correctly on the first import – without any surprises. Furthermore, there is no support for multiple coordinate systems or units. You can find out more about this from the Khronos Group, which takes care of standards across companies.
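As a hedged illustration of what these fixed conventions mean for an exporter, the following sketch converts a position from a hypothetical Z-up source application, authored in centimetres and degrees, into glTF’s Y-up, metre and radian conventions. The axis remap (x, y, z) → (x, z, −y) is one common Z-up-to-Y-up choice, picked here for illustration, not prescribed by the article.

```python
import math

# Hedged example: bringing data from a hypothetical Z-up application
# (centimetres, degrees) into glTF conventions (right-handed, Y up,
# metres, radians).

def to_gltf_position(p_cm):
    x, y, z = (v / 100.0 for v in p_cm)  # centimetres -> metres
    return (x, z, -y)                    # Z-up -> Y-up remap

def to_gltf_angle(degrees):
    return math.radians(degrees)         # degrees -> radians

print(to_gltf_position((100.0, 200.0, 50.0)))  # -> (1.0, 0.5, -2.0)
print(round(to_gltf_angle(90.0), 6))           # -> 1.570796
```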

The latest version of glTF supports skeletons and morph targets. Users can store several animations in one glTF file, which is ideal for the movement of 3D characters. Common interpolation methods such as Catmull-Rom and cubic spline are also available. For material definitions, PBR materials are supported, based on Disney’s principled shader with albedo, metallic, roughness, normal, emission and ambient occlusion.
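A minimal glTF 2.0 material in the metallic-roughness scheme might look like the following. The values and the material name are purely illustrative; a production asset would typically reference texture maps (normal, emissive, occlusion) rather than flat factors alone.

```python
import json

# Illustrative values only: a minimal glTF 2.0 material using the
# pbrMetallicRoughness scheme mentioned in the text. The name "rusty_axle"
# is a hypothetical nod to the article, not a real asset.

material = {
    "name": "rusty_axle",
    "pbrMetallicRoughness": {
        "baseColorFactor": [0.55, 0.35, 0.25, 1.0],  # albedo as RGBA
        "metallicFactor": 1.0,
        "roughnessFactor": 0.8,
    },
    "emissiveFactor": [0.0, 0.0, 0.0],
}

print(json.dumps({"materials": [material]}, indent=2))
```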

Conclusion

What else has been added? We’re in the process of widening the bridge to Unreal Studio – the CityEngine VR Experience now works with the Oculus Rift, integration with ArcGIS Urban makes it easy to use with ESRI’s urban planning platform, and several other features are in development – check out the CityEngine Resources to find out what else is new.

Challenge Yourself: A project analysis https://digitalproduction.com/2019/05/10/challenge-yourself-eine-projektanalyse/ Fri, 10 May 2019 10:00:42 +0000 https://www.digitalproduction.com/?p=76797
After several years of film editing and constant discussions about dramaturgy and visual language – what is needed and what is not, sometimes intensely, sometimes from a completely different angle – I decided to invest this knowledge in my first directorial work.

I wanted to let my imagination run wild – a bold decision, but not the most sensible one, as this was a concept spot produced as a spec film. We were not blessed with a generous budget from the start, and every newcomer or low-budget production is well advised to make creative compromises for the sake of a good result. The story involved the fantasy-laden workout of a fitness athlete on a deserted airfield, with tractor tyres moving on their own and a falling mega dumbbell, to give the commercial’s claim “Challenge Yourself” a meaning all of its own. As this called for a lot of elaborate visual effects, it became clear that it would be better to do most of it as in-camera effects and concentrate on just a few visual effects – otherwise post would have become a never-ending story. After sufficient preparation, it was time to start shooting.

As editor on this project, I came into contact with many other departments that I don’t usually work with directly. This resulted in dialogue that was both instructive and interesting. I was able to bring my editing experience to the set and provide important feedback: which shot lengths, timing and reactions the edit absolutely needs to keep it flowing and exciting. It was also instructive for me to learn what is easier and what is harder to realise within the available shooting time, especially when working with sunlight on outdoor shoots. And so together we found a way to get the best out of the project.

Finally getting it right: The filming

I was able to use the beautiful airfield near Schleißheim as the location for the spot. It has a special landing field with a spectacular hangar in front of it. The film crew spent two challenging days shooting at this location in August 2017, on a tight schedule to get through all the planned coverage.

We shot with a very nice combination: an Arri Alexa Mini with Vantage Hawk V-Lite lenses and plenty of available light. To make the most of it, a lighting crew under the direction of experienced head lighting technician Kai Giegerich worked together with our DoP Holger Jungnickel. We were very conscientious with the VFX shots and tried to deliver the best possible plates for them, always keeping the imagined result in mind. We also collected a lot of additional footage, true to the motto: better too much than too little, because what you have, you have! A few sunburns later, filming was completed with joyful relief.

Pro tip: If you have a very high forehead, don’t forget a hat or sunscreen when filming outside.

We’ll fix it in post

After the shoot was successfully completed, we were left with a few hard drives holding five hours of footage for a one-minute film – so it was time for data management. Although there were enough backups, the filming data was also stored on a NAS server running RAID 6. It is advisable to keep the backup hard drives within easy reach in the cupboard, but also in a second trusted location – Murphy’s Law can take you by surprise, because any device can fail. So for the time being, sufficient security was ensured.

The editing took place in Avid Media Composer. Several Avid projects were created: three projects for the respective footage, separated according to the different resolutions and a main project for the editing. There was also another project for the final mastering.

Because the main editing project became a bit too sluggish for me, I set up a new project for finishing, in which I handled all the final imports – quite simply to avoid overloading the projects and thus remain flexible. Editing was done at an editing suite equipped with a Thunderbolt RAID in order to work effectively with ultra-high-resolution footage. Despite linking the footage accordingly, we also worked with transcoded UHD Avid media files and then used the Relink function to bring the footage back online. A generated look-up table that could be applied to all images proved very helpful – simple, convenient and practical, avoiding the need to cut without colour or apply complex colour corrections in the edit. This meant you could concentrate fully on the cutting. This created the pipeline for post-production, and editing could begin.

Cutting first

I would like to start by disabusing anyone who thinks that footage shot exactly according to the script can simply be edited down. That’s not the case, because nothing comes out of the camera ready-cut! Precisely because people put a lot of effort into it and worked well according to a clear concept, there is now a very good chance of making a good film out of it. So there was a lot of good material waiting for me, and it was overwhelming at first. Despite good slate numbering, I decided to divide the film roughly into four parts: Hangar, Tyre Contact, Tyre Action, Axle. Because the simpler the workflow, the more effective it is.

I dispensed with a classic view with timecodes. Instead, I compiled the set-ups of each part with the corresponding takes into sequences. If I came across individual takes that I found particularly interesting but couldn’t yet categorise, I simply put them to one side and named them accordingly. When compiling, I always included generous handles at the head and tail of the clips. This created well-organised feeds for the edit. Nevertheless, to be on the safe side, all the material was also stored in separate, appropriately labelled sequences, because nothing is more tiring than clicking through a lot of clips again. I now had a good overview of the material and had made a relatively effective selection, which made the main creative work much easier.

Rough cut

Now the aim was to get a first rough cut to see whether all the experiential elements would come to fruition and whether the film would work emotionally at all. The first version was actually still too long, but it was very well received. I screened this rough cut in front of selected people to get an insightful external perspective. Together with this feedback, it was back to editing. The idea was born to make both a film and a commercial version: a longer film that takes more time and achieves a more intensive level of experience, and a shorter commercial with a more dynamic, promotional effect. After further dialogue, the cuts became increasingly finer.

It’s always fascinating to see what you can get out of editing for your film when you don’t think you’ve seen any optimisations for a long time and yet, after a while, you understand, reinterpret and implement these things. It’s a great creative experience to have lived through this. Editing is also about pushing yourself to the limit again and again, just like our fitness heroine in the film.

3D: Think!

3D artist Patrick Wagner took on this challenge. Patrick was responsible for the design and realisation of the mega axle as well as its animation, and for the creation of the “Challenge Yourself” typo shot in the commercial. These were certainly the biggest tasks of the project. During the editing process, the exact timing was determined, and it was decided which raw material would be used and what would serve as the plates for the 3D animation.

However, when working with Patrick, it quickly became clear that we still needed to make a few small changes to the edit. So I decided to do two full-CGI shots (the landing of the object, the object rolling out) and keep just two plate shots (the object whirling through the air towards the protagonist, and the protagonist placing the object on her shoulder) in order to have more freedom in the design, get closer to the object and find more exciting camera angles – angles that were simply not possible during the shoot. I then focussed with Patrick on the look of the axle-barbell hybrid, and we discussed a possible look for the machine. The appearance of an ordinary barbell was not an option, as it would have been too unspectacular and would not have done justice to the scenery in the film. Studying technical drawings of various vehicle axles and photographs of train axles helped me a lot; the latter motivated me to create a concept drawing. With this and a lot of feedback, Patrick’s first results were not long in coming. As a 3D generalist, Patrick also offered a complete animation of all the shots – a nice first result that already gave a good impression.

After careful scrutiny on the reference screen, we refined the result more and more:

  • Details in motion: We let the object fall into the picture in the first 3D shot.
  • Details on the texture: With a lot of effort, the axle was given a few traces of rust and patina.
  • Details in the light: Based on the information from the filming report about the position of the sun, we positioned the light, especially in the full CGI shots, according to the surrounding settings in the edit.

The version history came in a simple and easy-to-use email with a download link. Particularly noteworthy is the last shot in the film and thus also the fifth and last 3D shot (the protagonist places the object on her shoulder), which already contained a complete rod during filming. This was then replaced 1:1 by an animated mask and then by the new 3D rod and its characteristics.

The “Challenge Yourself” claim shot was a very special treat, as this was the shot that conveyed the statement. A plate was originally shot for this too: a flying shot that moves upwards from the ground. It quickly became apparent that this was too inflexible for us. We therefore decided to create another full-CGI shot with various backgrounds (concrete floors, sky, etc.). The result was a dynamic animation across the concrete square, past cracks in the ground and up into the air, before pausing in the super-total on the “Challenge Yourself” typography applied to the ground. A truly three-dimensional way of thinking that brought this big task to a close.

As it turned out in post, we could have spared ourselves the plate shot with a copter for our typo shot. A full-CG shot is often the better option for more flexibility and creative freedom. In the end we only used still images of the concrete floor, edited everything digitally and didn’t use a single take of the copter shot. This would have given us more shooting time for other important shots.

A bit of magic

But what would the effort in 3D be worth if it weren’t for the compositing? The shots would only look half as good. Compositing brings the magic into play. I was able to win over the VFX studio BigHugFX for this, who took on the comp in particular. You would think that it was just a few shots with a small amount of work, but you’d be amazed at what an extensive endeavour it is.

Just tackling the subject goes far beyond “let’s make it a little more real”. I wrote almost two pages in advance to clearly describe what the CG shots still need to look more realistic. It is often characteristics such as lighting or shadows that are not yet ideal, but textures and backgrounds can also make the shots look too artificial, i.e. still too generated. This feedback is therefore very helpful for the artists and saves a lot of time and effort. The rule here is: the more precise and detailed, the better. Communication was very simple and effective via collective email via the in-house VFX coordination team. This meant that everyone involved was always aware of the current status.

Anything else that was needed could be delivered promptly, such as stills for floors and backgrounds or simply additional information such as camera focal lengths and apertures. The version uploads always included the entire final sequence along with the corresponding VFX shots. They were sent by e-mail as MP4 files in acceptable quality and could even be conveniently checked on a mobile phone while on the move. It was advisable to load them back into the edit and check them carefully against the current soundtrack. In this way, we achieved a nice result after just a few versions.

The renderings were delivered in passes and returned as DPX and TIFF single-frame sequences as well as in the MOV film format. Masks were also added in single-frame format in order to have even better options for secondary colour correction during grading. This lifted a huge weight off our shoulders, as we were now able to close out the long-running visual effects work. It also became clear that vague ideas only lead to blank plates and question marks – this work requires a lot of planning and a clear vision in advance. All in all, it was a very nice collaboration with the BigHugFX crew.

Into the paint pot

After the long visual effects process had finally been successfully completed, we moved on to the last step on the image: colour grading. This was carried out in Baselight by colourist Andreas Brückl, who is currently based in Mumbai, India. Because of the distance – post-production took place in Germany – he was given an AAF upload instead of a hard drive with the required footage.

For the sake of simplicity, communication took place via remote grading with Baselight for Avid. This is a so-called non-live remote, in which you only have to update the Baselight project with BLG files sent via upload in order to receive the other party’s work in progress. This workflow makes things much easier, as we were able to exchange information effectively and promptly even over a long distance (Germany – India). Of course, this avoids many long render and upload/download times on long production routes, only to find out afterwards that you are still on the wrong track – which can prove very tedious, especially in colour grading.

Baselight for Avid works with the NLE and finishing systems Media Composer and Symphony. Extensive tools for colour correction and look design are available within the plug-in. In addition to the actual colour correction tools, some effects and filters are also on board. Simply drag the Baselight effect icon from the Effects palette in Media Composer onto the desired clips in the timeline and open the application via the Effect Editor. For the sake of simplicity, however, we decided to let the colourist work on the large Baselight, while Baselight for Avid was only used in Media Composer to view work progress and for rendering.

You simply open the BLG files via the Baselight for Avid application and update your Avid timeline, and you are already up to date with the grading from the large Baselight. Now you are ready to export your sequence to Media Composer. This can take a little longer than usual without first rendering the Baselight effects, but is also very dependent on the resolution and codec of the export formats.

Communication from a distance was definitely a drawback, as we were not directly on site with the grader. This made the workflow more difficult and slower overall. However, this was the only acceptable solution for us due to the long distance. It is also worth mentioning that everything technical, such as colour spaces, had to be checked in advance so that both sides could see the same thing. However, everything went smoothly with the non-live remote grading. Andreas sent the final Baselight gradings at the end, and I rendered them at home. In my experience, the best way is still to work with the colourist on location, if it’s possible to do so live during grading. Ideally, you should also work as a director with a DoP, because the three of you simply see more and can exchange ideas better. The interpersonal exchange on a subject as complex as colour should not be underestimated. Our perception of colour is also surprisingly different.

When it came to the look itself, we opted for something rather simple in order to lend a certain authenticity to the rather fantastical events. We generally kept the colours in the cool range and increased the contrast and modified saturation and brightness values so that the scenery doesn’t come across as too sweet. We were also able to apply some skin retouching in the grading (sunburn on the actress). All in all, a subtle look, but one that impressed us with its simplicity.

Toneless is only half as good

The sound atmosphere on set was nothing but noise – you simply wouldn’t have been able to get anything usable out of this aerodrome. We therefore decided to shoot completely without sound: a so-called MOS shoot (commonly glossed as “mit out sound” or “motor only sync”). We even dispensed with sound-only recordings. So everything sound-related was solved in sound post-production via sound design and film music. But how do you approach such a project?

Editing without any sound at all is like doing a puzzle in the pitch dark. That’s why I decided to always include a complete soundtrack of library sounds and temp music in the editing process. A very tedious endeavour, but one that was very beneficial to finding the final cut. It was also very helpful for everyone involved during further editing, as this soundtrack made it much easier to imagine the film in its respective cut versions, with all its unfinished areas.

Nevertheless, I wasn’t satisfied with that. As planned, the temp track was discarded and sound engineer Alexander Rubin took over both the final sound design and the sound mix. Composer Julian M. Michel was responsible for the film music. This decision enhanced the product enormously. Working with a composer who is able to respond individually to the needs of the project is worth its weight in gold. In principle, we exchanged ideas about the mood and atmosphere of the film through descriptions as well as musical examples. This is fundamentally important in order to recognise at the briefing how the director and composer feel musically about the theme and whether they are on the same wavelength. Insights that are highly recommended to every young director, especially when music is very much in charge of the sound. You need THE right composer for a very nice result. In terms of content, we combined classical music elements with electronic sounds to give the film event the necessary drama, but also to do justice to the coolness of our fitness heroine. In terms of workflow, we proceeded in a very simple way, in that I was able to put the respective update of the music into my edit as an MP3 via the server. The first layout already took a very good direction, which was thanks to our good communication and the result of an effective briefing. We worked our way from version to version.

During the music work, the sound design was created at the same time, so that we could already determine where we needed more of what, what should dominate at what point. A preliminary layout mix that Alexander Rubin had already created for us during the sound design phase was helpful. A final mix suitable for cinema in 5.1 and a stereo downmix rounded off the result perfectly. Our mix was created in Pro Tools. All in all, it was a creative exchange that I wouldn’t want to have missed.

In order not to be faced with too many question marks in post-production, especially with visual effects, it is advisable to plan these very well before shooting. Storyboards are certainly highly recommended, but extensive location scouting is also worthwhile. You should take a lot of time here and also take meaningful stills. If the budget is there, then previs are the icing on the cake. This is the only way to create the ideal conditions for producing breathtaking visual effects. Another point would be the early use of layout music directly from the composer in the rough cut phase. This way you don’t get bogged down with temp music and use the composer’s music from the start. The composer has not invested too much work and you already have enough material to work with in the right direction during editing. In the meantime, and especially after the picture lock, the actual work on the film music can continue.

Finished

The circle is complete. All the design elements are now together and form a new whole – a relieving feeling when you consider how long you had to work on the individual parts. It’s easy to lose track of the overall product, but our patience paid off, and the result speaks for itself. All the components came together very well, both in terms of sound and image. Towards the end, I carried out a few final screenings in front of a selected test audience (both a specialist and a target audience). This resulted in only minor cosmetic changes and gave me the courage to finalise the project. The individual parts were then finally brought together again in Avid Media Composer: sequences with link clips in Avid, a MOV file for the image master and the individual WAV files for the sound master. All the desired exports could then be created from these. There was also a DCP mastering of the film.

The DCP-o-matic software was used for this. This also provided us with another important lesson: before creating a DCP, it is essential to obtain precise information about the delivery files from the cinema itself and then use this DCP master on a suitable hard drive to put the entire film through its paces in advance in the cinema. All unpleasant surprises were thus eliminated and nothing stood in the way of our team premiere at the Cinemaxx am Isartor in Munich.

And now?

Initially, my only goal was to make a nice project of my own on a topic that appeals to me personally, to produce a cool sports clip packed with interesting ideas. Something home-made! But over time, this sports clip turned into a major project with a high production value. This production value came about entirely thanks to an excellent and talented team. An exciting development, as you could slowly see for yourself how an initial idea turned into an impressive film project. Furthermore, it was pure luxury not to have a deadline breathing down our necks this time, with the chance of sufficient production time included. A fact that everyone was able to capitalise on. The result was so much fun that it almost cried out for a continuation of our work. Another sports film is already being planned, which will offer more story and even more levels of experience in a very special way: The Elements Walker. If you want to find out more about this young work, you can get an insight here. It remains exciting!

A haven of peace in the zombie storm – Making-of “Grant” cinematic trailer https://digitalproduction.com/2019/05/01/ruhepol-im-zombiesturm-making-of-grant-cinematic-trailer/ Wed, 01 May 2019 14:00:52 +0000 https://www.digitalproduction.com/?p=72347
Goodbye Kansas Studios created a series of cinematic trailers for the game "Overkill's The Walking Dead" in which the four main game characters are introduced in various scenes. With "Grant", the team won the animago AWARD in the "Best Game Cinematic" category last year.

Originally launched as a comic series in 2003, the zombie world of “The Walking Dead” can now be consumed in numerous forms: for example as a series, game, animated film, book or board game. At the end of 2018, a new “The Walking Dead” game was released, developed by the studio Overkill Software and published by Starbreeze and 505 Games.
“Overkill’s The Walking Dead” is a first-person shooter with co-operative gameplay. Each of the four characters – Maya, Aidan, Grant and Heather – has special abilities that they must use together to achieve their goals. The cinematics created by Goodbye Kansas raised great expectations among fans of “The Walking Dead” games with their extremely high photorealistic quality – expectations that the game itself unfortunately did not fulfil. The version released for Microsoft Windows received mixed reviews and caused publisher Starbreeze, which had taken over Swedish developer Overkill Software in 2012, a financial crash landing. The game community criticised the weak gameplay and technical problems in particular. There is currently still confusion surrounding the console version: while Sony announced in January that the game would be cancelled for consoles, publisher 505 Games announced that the launch had merely been postponed – albeit indefinitely.

The rain and the corresponding effects in the environment were created by Goodbye Kansas with Houdini.

Goodbye Kansas Studios’ outstanding achievement for the release trailers is not affected by this. “Grant” won various awards: In addition to the animago, it received silver at the Epica Awards and bronze at Eurobest. “This was our first animago, but hopefully not our last,” says Director Fredrik Löfberg. “We are extremely proud of the award, because the competition was strong and the bar was set very high.”

Gradual increase in efficiency

Publisher Starbreeze had already realised other full CG trailer projects with Goodbye Kansas, so discussions within the team for the “Walking Dead” cinematics also began at an early stage. Right at the beginning of the process, the decision was made to realise four trailers that would provide an insight into the playable characters. Starbreeze did not want to reveal too much about the game world in these trailers, instead focussing on specific moments that would help define the characters. “The client initially provided us with scripts from which we took the stories and then refined them. Starbreeze had confidence in our storytelling skills, creativity and technical solutions and gave us a lot of creative freedom,” says Director Fredrik about the initial phase. As is usual with other projects, the team first felt their way through a storyboard and produced board-o-matics. This process runs in parallel with a very rough temporary sound design and a music score that supports everyone involved in the realisation of the final film.

William Hope, known as Lieutenant Scott Gorman from the film “Aliens”, plays grandfather Grant.

The team re-visualised these 2D images in the subsequent pre-visualisation process. The in-house motion capture studio was a great advantage here, as it meant Goodbye Kansas could simply bring in its stunt team and play through the film once in advance. “Each film had its own day of shooting, so we were able to find the beat and timing of the overall performance and action. We filmed scenes with different body language and action in order to work as thoroughly as possible in advance and eliminate questions,” explains Fredrik. The camera layout from the storyboards and the pre-vis editing were great aids for the camera set-up on set to find the best camera angles.

In-house performance capture

After Starbreeze had approved the previs of the four trailers, Goodbye Kansas started shooting in the in-house performance capture studio. A full-time technical team on set for all projects ensures that the team is working with the latest technology, pipeline and workflows at all times. The MoCap stage has been in existence for ten years, so the team is well-rehearsed. Anton Söderhäll, Executive Producer in the Capture Division, knows how the stage is equipped: “It has a passive optical system with retroreflective body markers. 65 high-end motion capture cameras track the markers and translate them into digital 3D space.” Using a replica skeleton, the team transfers the movements to the target rig. The facial expressions are simultaneously captured by head cameras. The videos are then tracked and translated to the movement of the facial rigs, which were created by scanning numerous Facial Action Coding System (FACS) expressions of the actors. “Our rigs are designed to be very user-friendly and optimised for our workflow. Everything we record in the studio – reference videos, body data, audio – is synchronised with the master timecode clock. We are very interested in capturing the best possible data quality. This ensures that no nuances are lost when digitising the movement,” explains Anton.
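The article does not describe Goodbye Kansas’ actual tooling, but the benefit of stamping every stream with the master timecode is easy to illustrate: aligning body data, face video and audio reduces to computing a per-stream frame offset. A minimal Python sketch, with all function names hypothetical:

```python
# Minimal sketch: aligning capture streams via a shared SMPTE timecode.
# Function names are illustrative, not the studio's actual pipeline code.

def timecode_to_frame(tc: str, fps: int = 24) -> int:
    """Convert an 'HH:MM:SS:FF' SMPTE timecode to an absolute frame number."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offset(stream_start_tc: str, master_start_tc: str, fps: int = 24) -> int:
    """Frame offset needed to line a stream up with the master clock."""
    return timecode_to_frame(stream_start_tc, fps) - timecode_to_frame(master_start_tc, fps)

# A face-camera take starting 5 seconds and 12 frames after the master clock:
print(align_offset("10:00:05:12", "10:00:00:00"))  # 132 frames at 24 fps
```

Because every recorded stream carries the same master timecode, no manual sync points (claps, slates) are needed in post; each stream is simply shifted by its offset.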

The animation of the zombie was keyframed.

A lot of data comes together during filming, but space can be saved on the reference videos, which can be highly compressed for the post-production process. “The animation data is also not as large as you might think, and there have never been any problems with hard drives filling up during a day of filming,” says Anton. After filming, the team saved everything, including a careful backup, to the main server so that all departments could access the data.

A balanced character

The actors for the four game characters were cast by the team in London. There was one day of rehearsals and one day of filming for each trailer. “Because we knew what worked and what didn’t thanks to the previs shooting days, we were able to focus entirely on the actors’ performances and immerse ourselves in the characters,” recalls Fredrik. William Hope, known as Lieutenant Scott Gorman from the film “Aliens”, plays grandfather Grant. He seems to have experienced so much in his long life that even a zombie apocalypse can only shock him to a limited extent. With a determined face, he trudges through the gloomy and foggy landscape and finds shelter in an abandoned car. His only company is a zombie in the passenger seat who can’t bite him because he has no lower jaw. So Grant tells him about his past while helping himself to leftover alcohol he has scavenged. Meanwhile, the zombie hordes gather around the car.
As it’s a voice role, the director wanted to cast Grant with an actor that Starbreeze would also use for the game voice:
“During rehearsal, we asked William about his vision for the character. We talked about the person Grant was before the apocalypse and how it changed him. The aim was to find the right balance of tension, humour and kindness for the grandfather. We created lighter, contrasting moments for the dark situation in which he sits next to a zombie.” A balancing act in the design was to make Grant seem tough enough to cope with the approaching zombie horde while at the same time making it believable to the audience that Grant might commit suicide. “We deliberately made the ending ambiguous. But Walking Dead fans know that zombies are blind and react to sound, so Grant can fall asleep there during the storm without it being too dangerous for him,” Fredrik reveals.

Game models as a basis

Grant’s asset was provided to Goodbye Kansas by the client. The team used the high-resolution ZBrush model and retopologised his clothing and equipment for an accurate cloth simulation. “It was important to the client that the look of Grant’s face was preserved. Therefore, we transferred the basic face shape of the client’s model into our standard face topology and created an additional pass with skin-surface details. With the help of our in-house tools, we were able to adapt the acting performance to Grant’s CG face,” says Daniel Bystedt, Lead Character Artist.

The artists found the right amount of torn flesh via a previs simulation in Blender.

Grant’s head and facial hair were created using the Yeti plug-in. The artists created the shading of his skin using V-Ray alSurface skin shaders; the higher resolution and detailing of the textures were created in Substance Painter, based on the game character’s textures. The zombies were also based on high-resolution ZBrush files from Starbreeze. The artists retopologised the geometry to give it better definition and cleaner topology for the simulation. “We used Substance Painter to add some extra detail to the zombies. The hair grooming was done in Blender, then we created a procedural hair system based on the groom in Yeti,” says Daniel. The team also built some “killed zombies” with holes in their heads from which brains spill out, an effect that required additional sculpting and texturing work. Each zombie body, including clothing, was posed, with the artists grooming the hair to suit the individual pose. “We created the zombie with the missing jaw from scratch; it’s a one-off. We used Blender for the previs simulation of the torn flesh. This gave us a feel for how much flesh had to be attached to the jaw without it being too distracting.”

Case-dependent animation work

For the animation of the CG characters, the animation artists tried to preserve as much of the performance capture as possible, because the acting performance of the talented actors is the starting point for the photorealistic facial expressions in the final trailer. “If this foundation is missing, it doesn’t matter how great the facial rig is. What matters is a believable source performance. Then we always built the rigs with the performer in mind. To preserve as much of the acting as possible, we scanned the performers and tried to integrate as many of their features as possible into the digital character. So the characters still look like Maya or Grant from the game, but when they smile or cry, we see the actors’ emotions – modelled as closely as possible to the source material,” explains animation director Jonas Ekman.

Determined to find his granddaughter and ready for anything: grandfather Grant.

But even if the actors’ performance is the most important source for digitally recreating the many components of such a trailer, the decision in favour of keyframe animation remains case-dependent. The team looked at the storyboards as well as the performance and considered in which cases it made sense to invest time in keyframing versus motion editing, or a combination of both. In the Grant trailer, performance capture was used for the grandfather’s face. The zombie next to him barely has a face, so keyframing with a simple face rig made more sense. Another example of key animation is the teddy bear that Grant picks up in one scene: “It would have been too much effort to build a real prop with limbs for the shoot, which is why we used keyframe animation instead,” says Jonas.

Environment for the right mood

“During the previs phase, we went through many steps until we found out where the trailer should be set. We then decided how much rain and fog was the right amount and what the lighting should look like. To do this, we designed some key concepts that came close to the look we wanted,” says Fredrik. Grant’s backstory, a long journey in search of his granddaughter until he reaches Washington DC, was familiar to the team when choosing the location. Google Maps’ 3D view helped the artists a lot in finding a suitable motorway location around DC. “I wanted to place Grant under an overpass, surrounded by rain. Our environment was based on a real environment north of DC, which we adapted according to our needs and vision,” says the director.

One of the biggest challenges in terms of the environment was to create a seemingly empty motorway that was still interesting and engaging. In the final scene of the first episode of “The Walking Dead” series, Rick Grimes rides towards Atlanta while the motorway is empty on his side and a motorcade is jammed on the other side. The Atlanta skyline is visible in the distance. The director did not want to copy this well-known shot, but instead wanted to tell a subtle story behind the two crashed cars. “In the car Grant is sitting in, the previous owner’s family left a lot of clues. Most people don’t consciously pay attention to these details, but they add much-needed life to a world that would otherwise feel empty and boring.”
For the look and lighting, the team looked at numerous scenes from films and TV series, analysing how the camera work differed between them. The team then discussed what time of day it should be, how overcast the sky should be and how much fog there should be. In the end, it came down to a natural look with beautiful lighting that emphasised certain important elements. It was important to keep the film toned down in a cinematic way. “As we move around the environment, we show it from many different angles, which was a challenge. Each of them had to look great, but also tie the trailer together at the same time,” says Fredrik. Nothing was scanned for the environment; the pre-production only included concept art.
The team created the rain using a variety of techniques, which VFX supervisor Henrik Eklundh explains: “Houdini was used to create the drops in the air, which then hit the ground. We created a kind of rain mist and real fog for the feeling of high humidity. For the raindrop movement in the puddles, we used an animated displacement map, which we merged with the shading and texturing of the tarmac and puddles. We also created moving wet maps on some surfaces. For the rain interaction with the car surface, we used special set-ups consisting of droplets sliding on the surface and some that stay in place. We put all these elements together, added some more for variety and blended them seamlessly.”
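The animated displacement map for the puddles can be pictured as a procedural height field of expanding, damped rings. The following NumPy sketch is purely illustrative (the production maps were authored in the studio’s own Houdini/shading pipeline, not with this code); all parameter values are invented:

```python
# Illustrative sketch of an animated raindrop-ripple displacement map:
# random drop impacts, each spawning an expanding ring that fades with
# time and radius. Not production code; all constants are made up.
import numpy as np

def ripple_map(frame, size=256, n_drops=40, fps=24, seed=1):
    """Grayscale height map for one animation frame."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:size, 0:size] / size          # normalised pixel coords
    height = np.zeros((size, size))
    t = frame / fps                                   # current time in seconds
    drop_pos = rng.random((n_drops, 2))               # impact positions
    drop_t = rng.random(n_drops) * 2.0                # impact times (seconds)
    for (cx, cy), t0 in zip(drop_pos, drop_t):
        age = t - t0
        if age <= 0:                                  # drop hasn't landed yet
            continue
        r = np.hypot(xs - cx, ys - cy)
        ring = np.sin((r - 0.3 * age) * 60.0)         # expanding wavefront
        falloff = np.exp(-8.0 * age) * np.exp(-((r - 0.3 * age) ** 2) * 400.0)
        height += ring * falloff                      # damped over time and radius
    return height
```

Rendering one such map per frame and plugging it into the displacement slot of the tarmac and puddle shaders would approximate the technique described above; the “wet maps” Eklundh mentions would be separate, slowly growing masks driving the surface’s wetness look.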

The motorway environment is based on a real location north of DC, which the team found via Google Maps.

The team achieved the foggy look of the scenery with a very cloudy base in which the direction of the sun is only hinted at. In this way, most of the elements were illuminated. “Within the car, we added additional light sources to capture the characters and align the look with the direction given by our in-house art direction. Together with the muted colours in the correction, this resulted in the final look,” says Henrik.

The perfect engine for this project

According to Henrik, the V-Ray render engine was perfect for this project. “The hair in particular was a challenge, but with the new hair shader for V-Ray Next, I think we could have done even better. For the water in the air, it was also difficult to get correctly shaded samples, so we had to play around with the shading to make it work. Additionally, we rendered some passes in Houdini Mantra, and simple tricks like doubling the size worked wonders.”


For an optimised render process, Goodbye Kansas kept the number of reflections and the reflection depth as low as possible without compromising the look. The team created the render layers in such a way that not much ray tracing was necessary. In the comp, they then layered back everything that had a big impact on the look. The team also solved many other issues in the compositing process: “We always look at what can be moved from 3D to Nuke. We try to reproduce details without having to go back to the asset and therefore the entire pipeline,” says Henrik. CG elements in the distance were created with matte paintings, which were seamlessly connected to the 3D renderings.
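The payoff of rendering separate layers and “layering back” in comp is that expensive passes can be regraded without re-rendering. A toy NumPy sketch of this additive recombination, with invented pass names and values (the production work was done with Nuke merge operations, not this code):

```python
# Toy sketch of rebuilding a beauty image from separately rendered light
# passes in comp, so a costly pass can be tweaked without re-rendering.
# Pass names and values are illustrative only.
import numpy as np

h, w = 2, 2
diffuse    = np.full((h, w, 3), 0.30)
specular   = np.full((h, w, 3), 0.05)
reflection = np.full((h, w, 3), 0.10)

# Standard additive recombination of light passes into the beauty:
beauty = diffuse + specular + reflection

# A pass with a big impact on the look can then be regraded in isolation,
# e.g. boosting the reflections by 50 percent, entirely in comp:
graded = diffuse + specular + reflection * 1.5
```

Keeping the reflection pass separate is exactly what makes the “low trace depth in 3D, detail restored in Nuke” workflow viable: the cheap render stays untouched while the look is dialled in downstream.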

Tough, but controlled

“For us, this was the first trailer that relied so heavily on performance capturing. Our character and facial departments did a fantastic job in creating this believable and endearing character in a very specific trailer setting. There is usually a lot of action in cinematics, but here the pace was slow, which made it a super interesting project. Actor William Hope’s performance finally perfected Grant,” summarises Henrik.

“Grant” was the second trailer to be produced, after “Aidan”, because the facial rigs and animation workflow had to be in place first. After that, “Maya” and “Heather” (the latter, with 46 shots, had the most individual shots of the four trailers) could be produced all the faster and with even more convincing results. “It paid off that our pipeline and workflow were already working very efficiently at this point. The Heather project was a tough one, but a controlled one. Our outstanding character modellers and look developers pushed their craft on these films and delivered very high-quality results,” sums up producer Thomas Oger.

]]>
DIGITAL PRODUCTION 72347