Search results for “DP2201” – DIGITAL PRODUCTION (digitalproduction.com)

17 Souls
https://digitalproduction.com/2021/12/29/17-souls/ – 29 December 2021
Since October 2020, the Munich University of Television and Film has been offering a new specialisation in VFX as part of the Image Design course. In their first year, the students have already realised animated films in teams of three, ranging from a reinterpretation of the werewolf myth (from page 102) to mysterious radio messages from a lost plane.

Six of us started our studies with Professor Michael Coldewey in the 2020 winter semester. Thanks to his extraordinary commitment, we were the first VFX class at HFF to start giving free rein to our ideas early on. When Professor Jürgen Schopper took over as head of the specialisation in spring 2021, we were confronted with realistic project planning, which meant that the projects could be completed in July 2021. For the most part, we succeeded and it was an extremely important experience for us.

by Chris Kühn, Nicolas Schwarz and Christian Geßner

Jonas Kluger, Pipeline TD at HFF Munich, set up the ShotGrid software for us to manage the projects. With the time tracking that is common in the industry, we quickly realised how much time some “small” changes actually need. We received cinematographic input from Rodolfo Anes Silveira, who also supported us in all audiovisual matters.
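Time tracking in ShotGrid is typically done with TimeLog entries booked against tasks. A small sketch of how that can look via the shotgun_api3 Python API – the site URL, credentials and IDs below are placeholders, not the HFF setup:

import shotgun_api3

sg = shotgun_api3.Shotgun('https://example.shotgunstudio.com',
                          script_name='student_tools', api_key='xxxx')

# book 90 minutes of texturing work against an existing task
sg.create('TimeLog', {
    'entity': {'type': 'Task', 'id': 1234},    # hypothetical task, e.g. texturing the aircraft nose
    'project': {'type': 'Project', 'id': 42},  # hypothetical project id
    'duration': 90,                            # ShotGrid stores TimeLog durations in minutes
    'description': 'Reworked the roughness map after review notes',
})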


The two Film 01 projects of the new VFX class at the HFF Munich

Jürgen Schopper took over the professorship of the VFX specialisation in April 2021. The first students have now completed their Film 01 projects. The results are the animated short film "Sleep Tight" and the fully animated fictional teaser "17 Souls".

Jürgen Schopper started as head professor of the VFX specialisation at the Munich University of Television and Film (HFF) at the beginning of April. The specialisation is anchored in the HFF Department of Visual Design (headed by Prof. Tom Fährmann). The first VFX students started there in the winter semester 20/21. 

Prof. Jürgen Schopper: "I really enjoyed planning the VFX specialisation and supervising the very first two films in this new degree programme. I would like to thank our President Prof Bettina Reitz, without whose support this new direction at HFF Munich would not exist, Prof Tom Fährmann, who has always supported me in the planning, and above all Prof Michael Coldewey, who was such a great mentor during the first semester before I was appointed to HFF Munich. I would also like to thank Prof Dr Peter C. Slansky, who was a great help with all the technical aspects. With Petra Hereth (Team Assistant), Rodolfo Silveira (Artistic Associate) and Jonas Kluger (Pipeline TD), we have now set up a great VFX team in our department, which is supplemented by external lecturers. This ensures the most up-to-date references in the curriculum and the exchange of students with the VFX industry from the very first day of study. We have opted for an end-to-end VFX workflow - from brainstorming to the finished film - which is also state of the art in the industry. But it's best to let the students themselves have their say and describe their impressions and experiences in the process of creating their first two films."

The first film and an immediate crash

What do “Der Schimmelreiter” by Theodor Storm and a missing passenger plane from the 1940s have in common? The teaser “17 Souls” tells the beginnings of a mystery story about a ghost ship based on true events.


At the end of the 1940s, several “Avro Tudor” aeroplanes disappeared over the Atlantic Ocean in the infamous Bermuda Triangle. To this day, it is still not clear how these crashes could have happened despite the best weather conditions. Based on these events, the 3D animated teaser “17 Souls” was created at the HFF Munich, supervised by professors Jürgen Schopper and Michael Coldewey. We were very fortunate to be supported by people with professional experience from the VFX and animation industries.

Brainstorming and conception – the visual claim

A long-standing tradition at the HFF Munich actually stipulates that the students’ first film must be shot in black and white. The many new areas that VFX brings with it, however, have finally allowed HFF Munich to create a link between analogue and digital. As a result, we were lucky enough to introduce a bit of colour into our film.


We initially pursued a visual ambition rather than a profound narrative. As pioneers of the new VFX programme, we knew at the beginning of the semester that we wanted to create images that would impress our viewers. We already had enough material for initial concepts from our research. With the first storyboard, we were able to cut together a pre-visualised film and use an initial sound design layout to better capture the mood and direction we wanted to take. We used the weekly meetings with Prof Michael Coldewey, Prof Jürgen Schopper and Rodolfo Anes Silveira to get suggestions and feedback. From this point on, we moved up a dimension from the flat 2D drawings to the 3D software.

Modelling – design first, history second

Due to the coronavirus pandemic, we were unfortunately unable to visit the nearby aircraft museum in Schleißheim for research. Nevertheless, we stuck almost exactly to the original “Avro Tudor IV” model for the external shape. Thanks to blueprints and discussions in aircraft forums about the Avro Tudor series, we were able to get a good idea of the aircraft and block out the first shots with a rough model.


As the pre-visualisation gave us an accurate picture of the angles from which the aircraft would be seen, the focus of the model was placed more on the front and side views, which allowed us to avoid unnecessary work on details that would never be visible.


For close-ups of the engines, the outer engines were moved a little further towards the bow so that they remain easily recognisable in the blur. The interior of the aircraft differs considerably from the original and is largely our own design, as the original cabin looks more like a train compartment by today’s standards. Nevertheless, we always took our cues from the designs of the period. The cockpit is a combination of modern spaceship seats, control elements from various aeroplanes and several duplicated elements.

Texturing – You snooze, you lose!

The assets were then all given a used look through texturing. It was important to create a dilapidated surface without telling the story of a completely destroyed aircraft. Different camera angles meant that different levels of detail were required on the aircraft. For example, a single wing and the foremost metre of the aircraft nose each have their own 8K texture for base colour, bump, metallic and roughness.

The textures in Houdini
A few assets from the aircraft: 3D artist Dirk Mauche helped us to find the optimum degree of soiling for both modelling and texturing.

Rigging & animation – a turbulent affair

Due to the stormy environment the aircraft flies through, we knew it would have to be shaken up properly. This meant that the aircraft rig needed several controllers so that the shaking could be controlled independently of the aircraft’s direction of movement. We then implemented the interior in a similar way: assets such as the seats in the cockpit and passenger area were each given their own controller, which could be animated classically or procedurally.
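One common way to layer such a shake on top of the flight animation in Maya is an extra transform above the main control, driven by a noise expression. The following Maya Python sketch only illustrates that idea under assumed node names; it is not the team’s actual rig:

import maya.cmds as cmds

plane_ctrl = 'aircraft_main_CTL'                         # assumed existing animation control
shake_grp = cmds.group(plane_ctrl, name='aircraft_shake_GRP')

# expose one attribute so the shake can be scaled or muted per shot
cmds.addAttr(shake_grp, longName='shakeAmount', attributeType='double',
             minValue=0.0, defaultValue=1.0, keyable=True)

# drive the extra transform with noise, independent of the flight animation beneath it
expr = '''
{g}.rotateX    = noise(time * 9.0)        * 1.5 * {g}.shakeAmount;
{g}.rotateZ    = noise(time * 7.0 + 50)   * 2.0 * {g}.shakeAmount;
{g}.translateY = noise(time * 11.0 + 100) * 0.2 * {g}.shakeAmount;
'''.format(g=shake_grp)
cmds.expression(string=expr, name='aircraft_shake_EXP')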


Problems only arose when changes were made to the geometry of the aircraft, which usually led to less than pleasant surprises in the rig. We were very lucky to work with Prof Melanie Beisswenger from the Ostfalia University of Applied Sciences via Zoom on specific problems in the animation.

Light(n)ing – Let there be light!

Most of the film takes place in a wild thunderstorm. For planning purposes, we created a light sheet under the guidance of creative director & CGI artist Kathrin Hawelka in order to pre-define the right lighting mood. The indirect lighting of the lightning between the different layers of clouds created a special atmosphere. As most of the clouds were actual volumetrics, we were also able to place the lightning, which was generated in Houdini using L-systems, freely in the 3D scene.
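The exact rules the team used are not published, but the L-system principle is easy to show: a premise and a branching rule are expanded over a few generations, and the resulting string is interpreted as turtle graphics to draw a forked, lightning-like curve. A tiny Python sketch with an illustrative rule of the kind Houdini’s L-system SOP accepts:

rules = {'A': 'F[+FA][-FA]FA'}   # illustrative branching rule, not the production setup
premise = 'FA'

def expand(s, rules, generations):
    for _ in range(generations):
        s = ''.join(rules.get(c, c) for c in s)
    return s

print(expand(premise, rules, 3))
# F = step forward, [ and ] = push/pop a branch, + and - = turn;
# read as turtle graphics, the string describes a branching, lightning-like shape.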


In addition, a very transparent volume over the whole scene created an impressive weather glow when lightning struck. To create a gloomy mood, the entire film is lit low-key and often limited to a single edge light. In the aisle of the aeroplane, we used flickering neon lights and occasional bright flashes to maintain the underlying nervousness.

FX – Smoking heads and burning engines

To keep our finger on the pulse, Houdini was our first choice for all simulations. The industry standard software demanded a lot from us and has become a real asset, especially with the help of our lecturer Felix Hörlein.


The animations of all shots were exported as Alembic from Maya. To keep the simulations as stable as possible, the aircraft was animated at the origin of the coordinate system and not moved forwards at cruising speed.
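A minimal sketch of such an export with Maya’s Alembic plug-in – the group name and path are hypothetical, and the team’s actual export tooling is not described in the article:

import maya.cmds as cmds

cmds.loadPlugin('AbcExport', quiet=True)
job = ('-frameRange 1001 1240 -uvWrite -worldSpace '
       '-root |aircraft_GRP '                             # hypothetical top group, animated at the origin
       '-file /proj/17souls/cache/sh010_aircraft.abc')    # hypothetical cache path
cmds.AbcExport(j=job)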


The climax of the tension in the film is the burning engine. Fire and smoke were created using the sparse solver “Axiom” and controlled with a particle simulation. The time advantage provided by the GPU acceleration of the plug-in was the decisive factor in our decision.


The rain that hits the windscreen consists of particles that are meshed into a water surface. To do this, we created clusters that generate several particles simultaneously and shoot them at the cockpit’s collider geometry. The windscreen wipers do not interact directly with the individual particles; instead they generate a force field that is responsible for the streak-like distribution of the droplets.
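The force-field idea can be illustrated outside Houdini in a few lines of NumPy: droplets near the sweeping wiper arm receive a push along the sweep direction instead of a hard collision. All names and values below are purely illustrative:

import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(500, 2))    # droplet positions on the glass plane
vel = np.zeros_like(pos)

dt = 1.0 / 24.0                                # one frame at 24 fps
pivot = np.array([0.0, -1.0])                  # wiper pivot at the bottom edge

def wiper_force(p, angle, strength=4.0, width=0.15):
    """Push droplets lying close to the wiper arm along its sweep direction."""
    # the arm points along (cos a, sin a); its sweep direction is the perpendicular
    sweep = np.array([-np.sin(angle), np.cos(angle)])
    dist = np.abs((p - pivot) @ sweep)                  # distance from the wiper line
    force = np.zeros_like(p)
    force[dist < width] = strength * sweep              # uniform push inside the band
    return force

for frame in range(48):                                 # two seconds of wiping
    angle = np.pi / 2 + 0.8 * np.sin(frame * 0.3)       # oscillating wiper angle
    vel += wiper_force(pos, angle) * dt
    vel *= 0.9                                          # crude drag so the streaks settle
    pos += vel * dt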

We created proxy geometries with a significantly reduced polygon count for all rigid-body objects. The collider geometries likewise consist only of rudimentary shapes such as cuboids. We used Vellum for all the bags in the interior and for the other soft bodies. As it was not possible at the time of production to have Vellum objects and RBD objects collide with each other, we also opted for Vellum for the suitcases in the luggage rack net.

Rendering – If you don’t render, you freeze!

The rendering was carried out using the in-house render farm – a network of 7 workstations. For reasons of time and cost, a CPU render with Arnold or Mantra was out of the question. The original plan was to render in Maya with Redshift. In the end, however, Houdini (also with Redshift) became our lighting and rendering software.
In addition to the classic back-to-beauty layers, we also rendered a depth channel and Cryptomattes for all shots for compositing. The distribution of the individual render jobs was monitored with the Deadline Monitor software, which woke us up a few times during the night when a rendering was faulty. The result was wonderful 32-bit multilayer EXR files.
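For anyone who wants to check which layers actually ended up in such a multilayer EXR, the OpenEXR Python bindings are one way to list the channels – the file path below is hypothetical:

import OpenEXR

exr = OpenEXR.InputFile('/proj/17souls/render/sh010_beauty.1001.exr')   # hypothetical render path
for name, channel in sorted(exr.header()['channels'].items()):
    print(name, channel.type)    # e.g. R, G, B, A, Z, CryptoObject00.r, ...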

Compositing – Cloudy with a view of rain

For the compositing we were able to use additional layers of rain as well as self-created matte paintings to give our images more depth and realism. For the clouds in the storm, we opted for a mixture of rendered volumes interacting with the aircraft and 2D plates of procedurally generated noise. Compositing artist Heike Kluger showed us the endless possibilities of noise algorithms, which we used for rain, haze and clouds, among other things.

Colour grading – showing your colours

We also noticed how new the programme is during colour grading, when the in-house colourist, Claudia Fuchs, was delighted to receive images with a colour depth of 32 bits – you don’t even get that from RED’s raw codec. Together with the colourist, we created an aesthetic black and white look with a few colour accents.

Sound design – boom, crack, hiss

When we marched into the HFF sound studio with the finished film, we didn’t realise how important the sound design would be for our film. Together with sound designers Gerhard Auer and Rodolfo Anes Silveira, we filled the images with sound. When the first thunderclap was set, a broad grin spread and suddenly the film felt like great cinema. Pattering rain, a burning engine and the distant rumble of the storm resounded through the cinema. The trembling of the low tones and clinking of the flying glass provided the final immersion of the teaser.

Happy accidents & virtual meetings

Virtual feedback loops, digital beer drinking and lectures in all kinds of transport: This is what life was like during the project. Review rounds initially took place exclusively online. We were all the more surprised when we got to see an interim result on the HFF cinema screen for the first time. The initial anticipation quickly evaporated in view of the amount of work that still needed to be done. That evening we also realised that the films were designed for the cinema and not for the monitors we were working on. The joy was all the greater a few weeks later when – back on the screen – we realised what had happened in the final stages. The biggest hurdles had now been overcome.

One example of such a hurdle was moving the 3D models from Blender to Maya. This tended to work smoothly via an FBX export. Only the naming made the Maya file almost unusable: dots in object names were replaced by a long string after the import, and all constraints were included in the object names. The solution was a self-programmed Blender add-on that replaced dots with underscores, plus a checkbox in Maya to hide the namespaces.
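The students’ add-on itself is not published; a minimal Blender Python operator doing that kind of renaming might look like this (identifiers are illustrative):

import bpy

class OBJECT_OT_sanitize_names(bpy.types.Operator):
    """Replace dots in object and data names so the FBX export to Maya stays clean."""
    bl_idname = "object.sanitize_names"
    bl_label = "Sanitize Names for FBX"

    def execute(self, context):
        for obj in bpy.data.objects:
            obj.name = obj.name.replace('.', '_')
            if obj.data is not None:
                obj.data.name = obj.data.name.replace('.', '_')
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_sanitize_names)

def unregister():
    bpy.utils.unregister_class(OBJECT_OT_sanitize_names)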

Outlook

The first film of our studies is finished. But what are we going to do with it? The teaser and the collected material are perfect for a pitch-vis to push the development of a feature film that tells the whole story around the mysterious disappearance of the plane. A series of festival submissions is also planned to network even more closely with other filmmakers and gain new experiences. And if you want to see it all in motion, you can find the making-of at is.gd/hff_17souls.

The team 
Cast: Maximilian Klampfl, pilot voice 

Crew:
Producer: Michaela Mederer 
Director / Script / Modelling / Texturing / Rigging / Animation / Compositing: Chris Kühn 
Director / Concept / Matte Painting / FX: Nico Schwarz 
Director / Concept / Modelling / Texturing / Lighting: Christian Geßner
Project Supervision: Prof. Michael Coldewey & Prof. Jürgen Schopper
 
Project Consultant: Rodolfo Anes Silveira 
VFX Pipeline TD: Jonas Kluger 
Line Producer: Ina Mikkat 
Team Assistant to Line Producer: Jenny Freyburger 
Team Assistant: Petra Hereth 
Colour Grading: Claudia Fuchs 
Sound Design: Gerhard Auer & Rodolfo Anes Silveira 
Postproduction Supervisor: Christoffer Kempel 
Technical Support: Benedikt Geß & Florian Schneeweiß 
Conforming: Martin Foerster 

Department Mentors: 
Melanie Beisswenger (Animation)
Kathrin Hawelka (Lighting & Shading)
Felix Hörlein (FX)
Heike Kluger (Compositing)
"Super" Dirk Mauche (Modelling & Texturing)
Moritz Rautenberg (Camera)

Sleep Tight – A modern horror tale
https://digitalproduction.com/2020/12/19/sleep-tight-a-modern-horror-tale/ – 19 December 2020
"From the edge of a forest, the werewolf steps into the moonlight; in front of him lies a small settlement by a forest lake." This sentence comes from an early version of our script. A lot happened from there to the final film. The old-fashioned settlement became a big city, the forest lake became a hill with modern flats and the werewolf became a werewolf. "Sleep Tight" is a homage to the horror films of days gone by, but at the same time it also breaks with convention. Our aim was to capture this dichotomy visually.

After a few brainstorming sessions and mood research, the setting was found: The world was to be dark and foggy, the images diffuse and rich in contrast. Our professor Michael Coldewey gave us a lot of freedom to come up with ideas, and so the first very ambitious script was soon created. That’s how we came to want a creature as the main character: a werewolf.

by Malte Pell, Tobias Sodeikat and Jonas Potthoff


With this basic idea, further meetings took place in which the story began to take shape: a short, menacing sequence – a werewolf breaking into a house – told full of suspense and with a twist at the end that bypasses the usual tropes of the genre. Here, for once, no one is eaten; the werewolf creature is itself an inhabitant of the house it appears to be breaking into.
But above all, the werewolf became a Werwölfin, a female werewolf, in the course of this process. This variation is not common in film history, and it was precisely this break with tradition that took the conception a huge step forward. Based on this premise, the rest of the setting was also modernised under the supervision of Prof. Jürgen Schopper; the originally planned, run-down forest hut was replaced by a modern flat on the outskirts of the big city. The first storyboard and character drafts were created in November 2020 and later incorporated into a simple animatic. This rough preliminary version gave us an impression of the most important story beats and, above all, an idea of the timing and pacing, so that we could further develop the rhythm and camera angles under the supervision of our artistic collaborator Rodolfo Anes Silveira.
The conception phase lasted a total of 3 months, after which the entire project was set up on the project management platform ShotGrid (formerly Shotgun). A particular challenge in times of coronavirus was that we were often only able to meet online. Nevertheless, we were supported by weekly Zoom meetings with professors and artistic staff from the HFF, as well as experts from the industry – from topics such as concept art and storyboarding to rigging, modelling, texturing, animation, simulation and compositing.

Horror on velvet paws – the creature design

The creature design in particular posed various challenges that had to be taken into account right from the start: In terms of content, we had the requirement that the werewolf could not be too big and massive, after all she still had to fit through the doors of the house. At the same time, she had to move on velvet paws so that her husband wouldn’t wake up in bed when she crept through the building. Nevertheless, she still had to look scary and command respect. Various references from nature were therefore incorporated into the design process: The teeth are arranged in several rows, similar to a shark. The crest of fur on the back was modelled on the hump and coat of a hyena.


An additional challenge was that the creature had to walk on both four and two legs. It only reaches its maximum size and dominance at the very end, when it stands up and steps into the moonlight. Before that, it only appears in reflections, in details or as a shadow, so it is always veiled. Only in this shot do we see it clearly and distinctly in front of us.


To ensure that this rearing up onto two legs works, we carried out a number of tests with a motion capture suit from Xsens. We realised early on that the three of us would never be able to animate the entire character by hand. So we took inspiration from films like “Planet of the Apes” and built our own forearm extensions from sawn-off crutches. These were adapted precisely to the proportions of the werewolf model so that the sensors of the suit were located where the werewolf’s wrists had to be. This enabled more precise retargeting of the movements of the (real) motion capture performer to the rig of the (digital) werewolf. For her skeleton, the lower body of a dog was fused with the upper body of a human; the creature’s outer skin was modelled around it.
In addition, the extensions of our actress’s arms helped her to feel her way into the role of a 2 metre tall, four-legged monster. We were able to recruit the actress and dancer Kathrin Knöpfle for this physically demanding task.

Digital film set – motion capturing

After this intensive preparation phase, the time had finally come. The model of the werewolf was optimised for capturing, the environments were prepared so that we could create and measure a floor plan of each digital set. Then the four-day shoot began in the HFF film studio. The sets were recreated using movable walls. If there were to be interactions between the werewolf and the environment, they were supplemented with props such as moving doors, branches or a mattress, representing the bed in the bedroom. Various versions were recorded for each shot, which were analysed on set and fed into the existing scenes in Maya via retargeting. This allowed us to view all takes in the form of grayshade playblasts after each day of shooting and decide whether shots needed to be repeated if necessary. Almost like a classic film set! To recreate the mood of the scene as well as possible on set and thus support Kathrin’s performance, we also used a bright light source from the direction of the (digital) moon. At the same time, all motion capture takes were also recorded with a real camera in order to obtain as many movement references as possible.

Bringing the dead to life – the animation

A more detailed, three-dimensional animatic, the first real rough cut so to speak, was cut together from the playblasts of the motion capture shoot. This allowed us to reassess the narrative necessity of each shot, after which we significantly shortened the entire film. Now we could start with the detailed work. In almost every shot, errors that had occurred during motion capturing or retargeting had to be corrected by hand animation. The hands, ears and the blendshapes on the werewolf’s face also had to be animated by hand and any connection errors between the shots had to be corrected.
The two words that give the film its title, “Sleep Tight”, also posed a challenge, as they are spoken at the end of the film by the woman who has transformed back. She turns directly to the camera and speaks to us. Her eyes glow discreetly in the darkness. In order to avoid the Uncanny Valley as much as possible, we had initially thought of a more distant shot. However, for the emotional impact of the final scene, we realised that we couldn’t avoid including a close-up of the woman. This meant that the lip synchronisation and facial animation had to be as detailed and believable as possible. To do this, we not only recorded the audio of the speaker in the recording studio, but also filmed her face at the same time to obtain reference material. This helped immensely with the facial animation. It added nuances to her performance that we would probably never have achieved when animating by hand without a reference. We tried to recreate her acting as well as possible digitally. Our lecturer, Prof Melanie Beisswenger, was a great support in this, regularly assisting us online with all questions relating to the animation.
Furthermore, the animation of the camera was fine-tuned afterwards to ensure that the camera movements meshed as well as possible. At the same time, our concept from the outset was to limit ourselves to camera movements that could actually be realised and not to incorporate any illogical or exaggerated movements. We also wanted to do justice to our role models, the early horror films.

Cosy blankets and lumpy fur – the simulation

Once the animation was complete, we moved on from Maya to Houdini, because another major challenge followed: the simulation. Our aim was to make the look as realistic as possible. So with a creature full of fur, we had no choice but to delve into the topic of grooming. Various attempts and several hours and Gbytes of simulation cache finally led to a result. In the final version, the setup consists of three different grooms for different parts of the creature’s body, each of which is simulated with its own parameters in order to get as close as possible to real fur behaviour.
In addition, the woman lies down in bed with her husband at the end of the film, so an elaborate cloth simulation was necessary for the interaction of the two characters with the blanket. Finding the right stiffness with a natural drape took time. But the specialist Felix Hörlein helped us enormously with all the simulations.

Leathery to shiny – the texturing / shading

Underneath the fur, our werewolf naturally also needed a detailed texture for her skin, nose, teeth, eyes, etc. All the texturing was done in Substance Painter. Here we made sure to give the skin a leathery look and to include lots of details such as abrasions and scars, especially on the face. The shader itself was then created in Houdini, with various masks for individual areas such as the skin, nose, teeth and eyes. As with the modelling, we received a lot of support from asset specialist Dirk Mauche. The fur was deliberately not simulated in scarred areas so that the skin comes through in some places.
The remaining shaders for the environments and objects were also built in Houdini. Here, we paid particular attention to matching the colour scheme to the black and white final image from the outset, so that the greyscale supports the focus on the respective image section wherever possible. There was also a further element to help focus on certain areas of the image: a special feature of the creature is that its eyes constantly emit a menacing, subtle glow.
The glowing eyes in the darkness of the house were an element that we wanted to have in the film from the outset. It can be found in early mood boards, but also in the very first storyboard. And it bookends the film, because the eyes of the transformed-back woman in bed also glow in the dark. For this, we created the shaders with an emission map in Houdini. Here, the texturing went hand in hand with the lighting.

Telling darkness with light – the lighting

Just like the visual design of the camera and set pieces, our lighting concept was consistent throughout the project: Light that is as hard as possible with clear edges that emphasise certain parts of the image and thus support the suspense of the film at the end. At the same time, we hide the exact appearance of the werewolf until the last moment – and thus (hopefully) also increase the suspense.
To achieve this look, we tried to limit ourselves to as few light sources as possible and to base them on real-life references. The moonlight in the outdoor shots was mainly created using HDRIs supplemented by hard panel lights. In the interior shots, mainly strongly focussed panel lights were used, which were supplemented with moving gobos in some places. These moving, very natural shadows also helped massively to give the images a more realistic look. The lighting design process was supervised by creative director and CGI artist Kathrin Hawelka and DoP Moritz Rautenberg.
Rendering was done with Redshift from Houdini. Pipeline TD Jonas Kluger set up his own render farm at the HFF, which is operated via Deadline. Despite the computing power, up to 5 days were needed for some of the renderings. The fine-tuning of the images only came afterwards, in compositing.

Digital to analogue – compositing

As each shot from the lighting was rendered as a multilayer EXR with 32 bits, we still had a lot of leeway in compositing to adjust the light across all shots and to further harmonise the atmospheric elements with each other.
For the establishing shot in particular, in which we see the modern house and the werewolf’s leg for the first time, some layers were combined as matte painting in Photoshop and Nuke.
In compositing, we also further developed the aesthetics of the film under the guidance of senior compositor Heike Kluger. In order to simulate older lenses, we created a sharpness drop-off towards the edges of the image and a separate lens distortion for each focal length based on old Angenieux zoom lenses. The corresponding lens grids were shot by fellow students from the HFF camera department when they had a seminar on handling green screen and VFX on set at the same time as our motion capture shoot.
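A rough Nuke Python sketch of the edge-softness part of that idea: blur the plate and key the soft version back in towards the frame edges via a radial mask. Node choices and values are illustrative, not the show setup:

import nuke

plate = nuke.nodes.Read(file='/proj/sleeptight/comp/sh010_plate.####.exr')   # hypothetical plate

soft = nuke.nodes.Blur(size=6)               # softened copy of the whole image
soft.setInput(0, plate)

mask = nuke.nodes.Radial()                   # white in the centre, falling off outwards
mask['area'].setValue([100, 50, 1948, 808])  # roughly frame-sized, so the falloff sits near the edges

edge_soft = nuke.nodes.Keymix()              # keeps the sharp plate (A) where the mask is white
edge_soft.setInput(0, soft)                  # and the soft copy (B) towards the dark corners
edge_soft.setInput(1, plate)
edge_soft.setInput(2, mask)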
In Nuke, we also added a subtle glow to the highlights and particles, similar to the properties of a Black Pro-Mist (BPM) filter. This stylistic patination of the otherwise clean, digital images contributes to the impression that the footage was actually shot, and it brought us even closer to the aesthetic of old horror films. In colour grading (colourist: Claudia Fuchs), we then reinforced this impression even further by adding film grain and high contrasts. Here, too, the large headroom of the EXR files paid off and we were able to give the images the perfect finishing touches.

Lost in the sound? – The sound design

The final step was a sound mix in 5.1 under the direction of Gerhard Auer and Rodolfo Anes Silveira. For the sound design, we focussed on atmospheric sounds and music beds that didn’t sound too melodic and intrusive in order to draw the suspense from the sounds of the creature and the house. Only at the big climax, when the werewolf stands up, do weird violins enter and the music comes to the fore, getting louder and louder to make the atmosphere as uncomfortable as possible and the twist afterwards all the more effective. We even recorded the squeaking for this ourselves at home.

On velvet paws into the future – an outlook

Once the sound mix and the edited material had come together in the form of a DCP, an internal team premiere took place in the HFF’s own cinema. Guests included the department heads and supervisors from the industry, with whom 9 months of work on this film came to a provisional conclusion. We are very proud of the result and would like to thank everyone involved for their support. The film will be released in 2022, and perhaps our Werwölfin will then sneak into one or two festivals. Until then, you can watch a making-of at is.gd/hff_sleep_tight.

Team

Cast

Motion Capture Actress: Kathrin Knöpfle
Voices: Lisa Hagleitner, Hendrik Ehlers

Crew

Directors, Script, Editing, Camera, Animation, Simulation, Compositing: Tobias Sodeikat, Malte Pell, Jonas Potthoff
Producer: Luisa Eichler
Project Supervision: Prof. Jürgen Schopper, Prof. Michael Coldewey
Project Consultant: Rodolfo Anes Silveira
VFX Pipeline TD: Jonas Kluger
Line Producer: Ina Mikkat
Assistant to Line Producer: Jenny Freyburger
Team Assistant: Petra Hereth
Colour Grading: Claudia Fuchs
Re-Recording Mixer / Sound Design: Gerhard Auer, Rodolfo Anes Silveira, Stefan Möhl
Postproduction Supervisor: Christoffer Kempel
Scheduling: Beate Bialas, Sabina Kannewischer
Editing Support: Christine Schorr, Yuval Tzafrir
Technical Support: Benedikt Geß, Florian Schneeweiß
Rental HFF Munich: Rainer Christoph, Boris Levine
Studio Management: Peter Gottschall, Andreas Beckert
Conforming: Martin Foerster
Consultants: Dirk Mauche, Kathrin Hawelka, Moritz Rautenberg, Felix Hörlein, Melanie Beisswenger, Heike Kluger

Production

University of Television and Film Munich
Supervising Professor: Prof. Jürgen Schopper, Prof. Michael Coldewey
Technical Details
Frames: 4,608; Resolution: 2,048 x 858; Aspect Ratio: 2.39:1; Renderer: Redshift; Compositing: Nuke, After Effects, Photoshop; 
3D software: Houdini, Maya, Blender, Substance; Sound: 5.1
Website hff-muenchen.de/

Dune
https://digitalproduction.com/2020/10/29/dune/ – 29 October 2020
Good things come to those who wait – and fans of the “Dune” series had to wait for a very long time. But Denis Villeneuve took on the gargantuan source material and made one of the most highly anticipated films of the last few years.


A confession first: I have been a fan of the books for my whole adult life and I even slogged through all of the extended universe. So when I heard that the dude who made “Arrival” – Denis Villeneuve – was giving the first book a movie treatment, I was quite excited. But with the extended waiting time (the C that shall not be named) and an unhealthy amount of discussion on certain platforms (without any information – welcome to Social Media), the expectations were exceptionally high. So after we saw it in cinema (twice), we jumped on a call with VFX Supervisor Paul Lambert, and Tristan Myles and Brian Connor from DNEG.

Paul Lambert, VFX Supervisor, had been involved both in Denis Villeneuve’s previous project, “Blade Runner 2049”, and in a few of the biggest VFX movies of the last two decades, including “Tron”, “I, Robot”, “Harry Potter”, “Benjamin Button”, “Tomb Raider” and 30 more. For “Blade Runner 2049” and “First Man” he received Academy Awards.

Brian Connor, Visual Effects Supervisor at DNEG, is no newbie either. His filmography includes everything from “Star Wars” and “Star Trek” to “Transformers”, as well as Marvel and DC movies, “Jurassic Park” and the Godzilla MonsterVerse (plus the 1998 version by Roland Emmerich).

The third supervisor on the call was DNEG’s Tristan Myles, who (along with Paul Lambert) won the Oscar for Best Achievement in Visual Effects for “First Man”. Besides that, he was a supervisor on “Fantastic Beasts”, “Interstellar” and many more, including favourites like “Hellboy 2”, “Kingdom of Heaven”, “Children of Men” and “The Dark Knight Rises”.

Another note: Between the interview and going to print, it was announced that the second part has been greenlit and should hopefully be released in 2023. But we didn’t know that at the time of the interview, when the film had not yet been released.

DP: When we first got to see “Dune”, I was amazed by the set extensions. How did you bring Arrakis, Caladan and the sets to life?

Paul Lambert: We built a fair amount of them (laughs). Denis Villeneuve and Patrice Vermette, the Production Designer, spent a year prior designing the worlds of “Dune”, the spaceships and the sets.
Usually, what concept art does for VFX is serve as a springboard into different ideas. But on this, Denis was so happy with the concept art that it became a solid reference. We built the sets in Budapest, and the 3D-assets extended that (“Dune” was partially shot in the Origo Studios in Budapest; origostudios.com).

In VFX, there is often some deviation from the concept art with new ideas or things that don’t match exactly. But Denis felt that in previous movies some things had gotten away, and when it goes down the wrong path, it takes a lot of money and energy to drag it back. This time, he was adamant, and – Brian can attest to that – the assets were as close as possible to the concepts. Any changes were marked and got approved. With the ships, we would A/B-test against the designs.

In a way, that helped a lot with the look – we knew what everything would look like. Instead of putting everything against a blue- or greenscreen and then figuring it out, we never had the “We’ll fix it in post” attitude. The phrase wasn’t even uttered, as far as I can tell.

DP: Could you give us an example for that?

Paul Lambert: The interior of the ornithopters, for one. Traditionally, you’d shoot that against a greenscreen in a studio, on a gimbal. And after shooting, you’d replace everything. But together with Greig Fraser, the Director of Photography, we decided that we would not try to replicate daylight in a studio. Arrakis is this hot desert environment, so everything that would happen outside, we would shoot outside. You can’t replicate the strength of the sun.

DP: But a virtual production environment with LED screens?

Paul Lambert: I had experience with LED screens, and Greig Fraser worked on the first season of “The Mandalorian” (DOP for episodes 1, 3 and 7). So, with that extensive experience with virtual production, we agreed that you can’t get enough light from LEDs to get the desert feel. If the movie had been set during sunset, it would not have been a problem – then it would have been perfect. But the noon sun on Arrakis needs the actual sun. Because of that, we didn’t try to light it; instead we found the highest hill in Budapest, put our gimbal on top so we could get a nice horizon, and surrounded it with an eight-meter screen, colored like sand. On a hot day, the sun would bounce off the screen and enter the cockpit. Even when we looked at the dailies, with Greig’s camerawork, it already felt like you were in the desert.

And then the compositors from DNEG added their magic. We shot hours of footage flying through deserts in the United Arab Emirates, with a six-camera rig under a helicopter flying through the dunes. With that, the compositors could blend the helicopter material with the footage. Rather than a full extraction – the classic “foreground and completely different background” – our foreground already had very similar tones, so we could mix. And honestly, it felt immediately real, and we didn’t have the usual problems with edges and a lack of believability.

For example, the glass dome of the ornithopter had reflections, and reflections of reflections on the inside. Shooting that on a greenscreen would have been problematic. Obviously, there are times when you have to rebuild that, and I think, the more you hit a plate, the lesser the credibility. And with our approach, we could be seamless. Also, we had a lot of time in preproduction to find ideas and to think about what we want in the end. Visual effects goes hand in hand with the on-set experience and demands, if you do it properly. And when you come up with a good basis, the VFX artists have something that can succeed.

We all know: If you have a foreground that isn’t corresponding to the background in terms of lighting, there is not much you can do about it. The more you pull and push and grade it, the less believable it becomes. Yes, you can get the perfect seams, but it still doesn’t look natural. So, with the time we were given in preproduction, we could avoid bluescreen.

An example: When the story moves to Arrakis, whether it is in the desert or the city of Arrakeen, we shot a lot of it against a sand screen, a sand colored background. Which is funny – because if you invert the colors, you get a bluescreen. Obviously, there are issues with skin tones and the like, but it gives you a very good basis. And let’s be honest: At this point, we can come up with a process for extracting parts of the image for any color, as long as it isn’t a complex background.

I was having this conversation with the DOP: We can remove any background or foreground, but if the lighting between the two doesn’t correspond, there isn’t much good we can do. LEDs and virtual production help with this particular challenge, but we already decided not to go down this path at this point.

DP: When did you get involved in the project?

Paul Lambert: The concept work, especially for the interiors, was pretty much finalized, except for a few changes. We came in basically when the storyboard phase of the preparations started. We also had to previs a couple of scenes, which is not a thing Denis likes. But for some scenes we needed every department to know what we were about to do. For example, the sandworm attack on the crawler needed extensive preparation.
Also, from day one we knew that one of the most computationally expensive things for “Dune” was going to be the sand. And the sand around the sandworm in particular would be extremely important. When Tristan joined about a month later, one of the questions was: “How the heck are we going to displace all of that sand?”
The key to a good effect is having a visual reference which an artist can use to make informed creative decisions and even copy from. But we naturally couldn’t find a sandworm or anything that works this way anywhere. I asked production if we could get at least some explosions in the desert for reference while we were filming deserts in Jordan and the United Arab Emirates, but I was told that would not go down well in the Middle East.

DP: So how did you do it?

Paul Lambert: Tristan and the guys at DNEG Vancouver went through iteration after iteration. In an ideal world, you would just simulate every grain of sand by itself, but who has the processing power for that? So, you clump things together and hope the render goes through. But with that, sometimes things appear to not have the right scale or speed. So, during preproduction we did the R&D, so in the edit we could deliver the shots quickly. At the same time, Brian had a similar problem with scale: the ships coming out of the water on Caladan, which is a massive structure. Nobody has planes like that.

Brian Connor: Well, thankfully Paul found footage of icebergs tipping over which were about the same size. When they melt, they roll over, and that helped us to understand how such a massive structure behaves and displaces water on that scale. It was one of those shots that you work on pretty much until the end (laughs). You have to give it the love and the time and the disk space it needs for the simulations.

DP: You mean the scene where the Atreides flagship surfaces?

Brian Connor: Yeah, we had a couple of iterations on that. One of the first ones was with a person in a boat next to it, for scale. But we ran the simulations over and over, even though some looked odd. Remember: If it looks odd, it does so because you don’t know what it would actually do, for example, the huge amount of water piling up on the top of the ship.
If you get in there and change too much of what the dynamics of the simulation are telling you, you run into the same problem of over-processing that Paul talked about earlier. We just put all the distributed rendering power of the DNEG farm to use on this. Strategically, of course – when things slowed down, we took all the resources. It takes a lot of time to figure out the iterations. Same thing with the sandstorm. That was also a computationally heavy piece, but we were lucky that we had massive sandstorms from Africa as reference. So that was a bit easier.

DP: Can you talk a bit about your simulation pipeline?

Tristan Myles: We used Houdini and pushed it beyond its limits, I think. We came aboard early and tried to figure out how to show sand behaving at this massive scale – same as the water. In the beginning we put Fremen in the sand for scale, but we had to make scaled down versions for the edit, so it wouldn’t distract from the story. It couldn’t look distracting, like a visual effects scene – it is the environment for the story.

DP: So you managed to keep the scene files reasonable?

Tristan Myles: I can’t remember the exact file sizes, but Vancouver reserved three servers – we were in the petabyte range. And 60 percent of that was the caches and the geometry. But with the types of things we were doing, that was acceptable. The destruction, the heavy explosions, the sand simulation and the worm itself were all beasts to wrangle through the farm.

DP: So, the worm – how did you bring him to the screen?

Tristan Myles: We did model the whole worm, including all the plates along the sides of its body. Those are all moveable, and the skin in between had a little bit of ‘give’, so it’s not fully rigid and it reads as organic.

Paul Lambert: We wanted something alive but prehistoric. One reference we had for that was elephant skin. Rigid plates over spots and areas with soft membranes in between, folding like an accordion. Obviously not super agile on this scale – the turning circle of a being like that would not be small. And a beast like that affects the whole environment it moves through. Robyn Luckham, the Animation Director, spent a long time figuring out how it moves.

It’s more about the sand displacement when the dunes ripple and rise like water, almost. And when it goes faster, it becomes almost like an explosion as it is traveling towards whatever source. And in keeping that idea of water: The actual worm’s mouth has baleen as you would see in a large whale. Because like a whale sifting water and catching krill, the worm would sift through sand.

The thing is: On that scale, it is a force of nature, affecting the whole environment. When it appears, we show the scale, adding things like camera shaking and rumbling and little explosions when it approaches. But you don’t get a lot of screen time with the worm. This is not “Jaws”.

DP: The bubbling of the sand in the sandcrawler scene was like a whale coming up from below?

Paul Lambert: Yes, everything around them is influenced by the worm. Funnily enough, when they sink into the sand, that was done in camera. Gerd Nefzer, the Special Effects Supervisor, built a vibrating plate, which we buried under the sand. And when you dialled in the vibration just right, the sand looked like bubbling water and you would sink into it, just like you see in the movie. Tristan was able to replicate that on the larger environment.

DP: For the next part of “Dune”, the worm is ready and roaring to go? (At the time of the interview, no information about Part Two was available.)

Paul Lambert: Well, I assume for the bidding procedure for “Dune: Part Two”, having a sandworm on hand will be relevant (laughs). But so far, there hasn’t been any prep for “Dune: Part Two”. If so, I’d love to know!

DP: You mentioned that you had a lot of concept art. How detailed was it?

Paul Lambert: Extremely! But there are always some things which you need to actually see. For example, Denis wasn’t really sure about the shape and texture of the Guild Heighliners (the massive ships that transport other ships, for example from Caladan to Arrakis). And Brian had a lot of variations and iterations of the main docking port and its shape. When that was final, the texture was also important. These ships are old and have been around for a long time. Still working, but with bumps and scratches accumulated.

Brian Connor: I would love to use those in “Dune: Part Two”. The ships are so detailed, and with the structured insides and their scale, you could do wonderful things with that, interesting camera angles, composition and showing all that in relation to each other. I hope we can show it off!

DP: Another story beat that needed a lot of CGI was the shields. How was that done?

Paul Lambert: That was surprisingly straightforward. And it came from having artists involved in preproduction and on set, so you can do tests and inform the shoot. We had a list of things that we would have to figure out – among those, the shields. For example, would the shields add additional light? If yes, that would mean additional lights on set.
But we came up with a “past and future frames” approach which works really well when there is a lot of movement. Which there usually is in fight scenes. When there wasn’t a lot of it, we had to fake a bit of it.
What was very important was that we didn’t just procedurally grab frames – two from the back, two from the front and be done with it. We needed an artist’s trained eye and actual people who painted the frames out or in to get a look approved very early on. It shouldn’t feel digital, and DNEG has a few artists who are good at that kind of work.
We tested it with fight scenes from other movies, and it worked for everybody. It was just in DI and the edit that we saw that it became confusing – especially the fight between Paul and Gurney with its quick angles and cuts. There the idea of color came in. Blue for the normal state, and red for penetration.
Also, it was a bit of an homage to the first film, where people scratched the shields into the frame. And when we had it down for the fights, we had to recreate it digitally for the ships – you’ll see it prominently in the attack on Arrakeen.
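Purely as an illustration of the “past and future frames” idea (on the show, the frame choice and blending were painted by artists rather than applied procedurally like this), a Nuke Python sketch could look as follows; all paths and values are hypothetical:

import nuke

plate = nuke.nodes.Read(file='/shots/fight/sh020_plate.####.exr')   # hypothetical plate

past = nuke.nodes.TimeOffset(time_offset=-2)     # the image from two frames earlier
future = nuke.nodes.TimeOffset(time_offset=2)    # the image from two frames later
past.setInput(0, plate)
future.setInput(0, plate)

echo = nuke.nodes.Merge2(operation='average')    # average the two offset frames
echo.setInput(0, past)
echo.setInput(1, future)

shield = nuke.nodes.Merge2(operation='over', mix=0.4)   # lay the "echo" over the plate;
shield.setInput(0, plate)                               # in practice a roto of the shielded
shield.setInput(1, echo)                                # body would restrict the effect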

DP: Which techniques from “Dune” will you be carrying over to the next show?

Tristan Myles: Well, some of the tools we have written to manage the large amounts of data and bring them together at rendertime will be useful in the future. It’s part of visual effects that you always learn without trying to reinvent the wheel, although you generally end up reinventing the wheel anyway (laughs).
On “Dune”, we learned about large-scale effects simulation and what impact that has on the renderfarm, how to mitigate that and different setups to display what the final image is going to look like.
The real trick there is to work in lower resolution, but not making it look like low resolution. We had a setup for that which was called Ultra Res – once the simulation was signed off by Paul and Denis, then it went through the farm and we could wrangle every grain of sand. The backend of it all – it’s boring to write about, but is an essential part of heavier VFX.

DP: When we say “computationally heavy”, how did you plan for that during production?

Brian Connor: Usually, you can break it up a bit – for example only the front portion of the flagship. But the way it was shot – and you see all of it going into the distance – we couldn’t do only the front part of it. We ran the simulation for the whole ship, which added complexity to the background in addition to the stuff in the foreground. We had to strategize. Everything interacts with everything else. It took our supervisors a lot of work just to distribute it everywhere and to give us a way to iterate in a reasonable amount of time. We had that running on the side the whole time, but we knew that going in and planned for it.
What was also quite challenging was that we had different formats. For example, in Imax you see the whole frame, in 2.39:1 you miss some of the top and the bottom.

Paul Lambert: We framed for Imax on the set. We discussed 2.39, because there will always be something missing from the frame. But some shots couldn’t be done that way. So if you see it in Imax, there are a fair amount of shots which are different. One in particular: when Paul is standing in front of the worm, and it fills the frame with worm texture. We had to redesign that shot, because it fills the whole frame, even in Imax, from Paul at the bottom to the towering top of the worm’s mouth.
And on about 30 shots, we couldn’t go from Imax to 2.39. Usually you animate that visible area, and that’s that. But it didn’t work with the narrative, so we extended the Imax frame to the ratio of 2.39.
Funnily enough, when I saw the finished movie for the first time in Imax, I did not remember it like that. “Did we really shoot it like that?” So, I encourage everybody to see the film twice – once in Imax and once in the usual theatrical aspect ratio.

Brian Connor: We called it the “megaframe” – the resolution is just massive. It’s around 7K, because you’re widening the Imax frame, which already is large. You could just cheat and buffer on the side and not have it in Imax height, but then the quality and the fidelity would have suffered. We got the 7K frames to DI, so they could shrink them into the format.

DP: Couldn’t you have scaled it up?

Paul Lambert: Not yet. AI enhancements are getting scary good, but not quite good enough for this vital scene. I have been keeping an eye on these technologies, and they could influence every aspect of VFX. It feels almost like we are pre-Newtonian. One example is that, while you are doing on-set capture of textures and so on, one could do AI passes to train the AI on actual footage and help with production, for example capturing actors doing certain things as reference to train a machine to do extractions.
In some ways we are already doing that. On “Dune” I always had a couple of witness cameras on set. You might not always use them, but it is so beneficial to have the data. And in the near future, we can do all kinds of things with that.
Another thing I would highly recommend to everybody: Attach a GoPro to your main camera. When your DOP uses a shallow depth of field, you basically cannot do background extractions. But with the GoPro’s sensor and lens, you can get the camera movement and reference for the backgrounds.

Brian Connor: Another thing that is coming is the saving of props. On a recent production we scanned a few period cars with a smartphone. The prop was just rented for the day, so we got as much of it as we could, and it worked surprisingly well, even with the reflections. And if it is not going to be up close, but seen from an aerial perspective or to populate the background, that guerrilla style of capturing data and assets can really help.
At DNEG, we have a pipeline for that, and you can get many things so much faster than building it from scratch and with lower hurdles in preparation. There can always be somebody with a phone taking pictures, you don’t need scheduling for that.
Paul Lambert: I did the spinner in “Blade Runner 2049” like that. Since the sensors are so small, everything is always in focus and you get a decent solution for photogrammetry. You have the full range of depth.

DP: If we switch the direction of scanning: Did you use Lidar scanners on “Dune”?

Paul Lambert: Yes, we had a small one running whenever we were shooting, and sometimes during scenes we captured particular setups. Obviously, scenes move around and props are all over the place. Of course, we had scheduled a proper capture of every room before it got taken down, but we also had a dedicated person doing scans and Lidar on the go for anything that we requested and whatever came up. Yes, it produces a massive amount of data, but that is easier to handle than wasting a lot of time in post trying to figure stuff out.

DP: With a movie made from a book: Did you read the novel (or novels) in preparation?

Paul Lambert: I had read the book when I was about 14 and had seen the David Lynch movie first. At that point I was fascinated, but in preparation for “Dune” I was torn whether I should read it again or stick with the script. I knew the story, but I stuck with the script and Denis’ vision of it. I was afraid it might create tension. In hindsight: It wouldn’t have.

Tristan Myles: I read the book at a similar age. My dad got me into it, and I read it again in my twenties. And when the script came, it was closer to the book than the movie Paul mentioned, so I stuck with that.

DP: So, with “Dune: Part One” finished, what sticks with you? Which scenes will you put in your showreel?

Paul Lambert: It’s been a while since I updated my showreel (laughs). But what sticks in my memory is that I am really happy with what we achieved and the experience of making the movie. It will be hard to replicate the collaboration, having the guys come out to the set and experiencing this whole world. Sometimes it doesn’t happen this way. You want things to be shot for VFX in a certain way, and you don’t get it. This time, having this collaboration was fundamental to getting the movie onto the screen. And having had this experience, I know what I want for future movies (laughs).


And particular scenes? There are some which were a challenging shoot for me, like Salusa Secundus, where we see the Sardaukar Legions. That was challenging because suddenly we had rain and sunlight at the same time. The team did a fantastic job – we were worried that it wouldn’t be believable, because we had to adapt to the weather on the morning of the shoot. It was a challenge, but it came out really well.
The one thing I liked about “Dune” was that we had time. Originally, the movie was planned to come out October 2020, and that got extended to December, and then Covid hit. And then we finished January after that and went into DI.
Also, we did additional shooting. Denis felt we needed more connection between Paul and his parents, so we did some additional scenes. And we had a really quick turnaround to put that together. But it worked out well. For example, at the spaceport in Arrakeen, where Duncan lands and comes out and hugs Paul, that was a backlot in Budapest, and we had a sand screen going all the way around the backlot. Brian put in the spaceport.

DP: Doesn’t that make it harder?

Paul Lambert: I’m a huge believer in having a harder composite (laughs) – rather than breaking things down in layers and shooting those to be put together, I prefer to get everything in one go. That way you have a harder composite. Which meant that a lot of our scenes on Arrakis had us blowing sand and throwing dust from the ornithopters. You see that in the historical scene with the Fremen fighting the Harkonnen soldiers. We were throwing sand like crazy, and it was just texture and swirls. The guys did an amazing job at replacing the background – again, the idea of not doing full extractions, but blending. The compositors might say it’s really hard, but the result looks more believable. I’m super proud of that approach and the way the artists brought it to life. The same with the city of Arrakeen: We had a helicopter and flew around Jordan, and basically Denis was like: “I want Arrakeen to be there”, and we did Lidar scans of that whole area and imported them. So even when there are full CG scenes, the environment is real, and that adds a lot of believability.

Brian Connor: The scale of that movie looks really good – the massiveness. When I came to the set, Paul took me on a tour, and standing in all of these massive sets – we pretty much took over the whole studio, and the backlot itself is just gigantic. The sets – especially for the interiors – were amazingly detailed. Walking around in them felt like being completely surrounded by the world. That was a luxury to have – a lot of it is really there. And that was something that I’ll take away from this: We never settled and didn’t go for good enough. We didn’t cut corners, but went straight through – even if that meant a lot of work, even if the servers went down. The Tech department is probably not our friend anymore, but we came through with a result that we could be proud of.
