Trash, Dance & Robots
28 Feb 2023 – https://digitalproduction.com/2023/02/28/trash-dance-robots/
In winter 2021, as a group of three under the guidance of our professor Jürgen Schopper, we decided to make a film about a robot whose purpose in life is to pick up rubbish. A lot has happened since then: the robot is now controlled by an old lady and can – surprise! – dance very well. We spent many enjoyable, stressful and also funny days at the HFF with our robot and the old lady. The original idea grew into a four-minute animated film, which we – three visual effects students (Valentin Dittlmann, Hannes Werner, Felix Zachau) at the University of Television and Film in Munich – developed, realised and completed in the course of our first year of study.

The year is 2029 and we live in an ageing society. What will the world look like when younger people can no longer pay for the pensions of older people? Will the old still have to work? Who will look after them? These are the kinds of questions we asked ourselves when creating the story. In essence, it’s about an old, frail lady who earns her living by controlling a robot that collects rubbish – a bit like modern-day bottle collecting.

This woman – or rather her robot – then finds a tape recorder, which rekindles her love of music and dance. Instead of doing the boring, repetitive work, the robot she controls dances. She uses her working hours, somewhat subversively, to experience once more what her situation now denies her. To arrive at this idea, we first collected a large pool of ideas and small stories, roughly sketched them out and expanded them.

Realisability was not yet a decisive factor at this stage; it was more about generating lots of ideas quickly. Thanks to this pool of ideas, we had a decisive advantage in the next phase: we could painlessly part with stories that didn’t work so well, while details from discarded ideas could still flow into further development. Over several rounds, the remaining stories were enriched with character sketches and smaller scenes. In the end, the basis for the film was a human-controlled robot that collects rubbish at night. The original spark came from the carelessly discarded masks lying around on the streets during the coronavirus winter of 2021. The dystopian tale of old-age poverty and loneliness is given a positive twist, celebrating the power of memories, human resilience, music, dance and fun.

Concept

This development should also be reflected in the look. The aim was to create visually exciting images from the story: what should the film look like, what does the robot look like, what kind of city is it set in, what time of day is it, and how does the positive twist manifest itself in the environment? During this phase, we mainly used the drawing programme Procreate to create sketches and images for inspiration. There was also a Miro board on which we could save and organise our ideas. Basically, we had a Pixar-like look in mind – with high standards.


The robot went through many iterations. It had to be able to collect rubbish as believably as possible as well as dance freely in a human way – after all, a cold robot was supposed to develop human traits. We also decided on a design that was as humanoid as possible so that we could use motion capture for the animation later on. The robots from Boston Dynamics served as a great inspiration. New York, with its tall buildings and small neighbourhood parks, was our template for the urban design. The environment contrasts gloomy streets at night with a park at dawn, and an interior space also had to be created for the limited world of the old working woman.

Storyboard

Once the story and look for the film had been found, the next step was to combine them into a convincing storyboard. Together with Prof. Michael Coldewey, we looked for key frames such as the encounter between the rubbish collector and the tape recorder, the old woman in the wheelchair or the dancing robot in front of the sunrise. Ideas were discarded, developed further and sometimes reintroduced.


We tried to break the story down to the most important plot steps without losing any of the substance. The keyframes helped us to find effective shots. We then sketched these roughly before drawing them in more detail in Procreate.

Animatic & pre-visualisation

Now it was time to convert the storyboard into a moving film. To begin with, we took the previously created shots and added slight movements and a rough sound concept using After Effects and Premiere Pro. The addition of the sound in particular helped us to get a better feel for the timing.
It also allowed us to decide what should be shown visually and what would be better told on the audio level. With this knowledge, we moved into three dimensions in the next step. For the previs, the shots were roughly blocked out in Blender, cameras were set, animations were added by hand or taken from Mixamo, and the timings were carried over from the animatic.

Eevee was then used for rendering and the result was cut in Premiere Pro. This allowed us to recognise which shots worked well in three dimensions and which needed to be framed differently or changed fundamentally. From there, we gradually cleaned up the previs, adjusted the camera movements, increased the level of detail in the environments and smoothed the animations until we could be sure that the film would come across to the viewer the way we wanted to tell it.

Pipeline & Workflow

Our great pipeline TD Jonas Kluger set up a ShotGrid-based pipeline for Blender for us, which made it possible not to get bogged down in chaos even with a project of this size. With almost 250 assets in total, a clearly organised structure was essential. From the assets we created three environments: grandma’s home, the city and the park. We could then load these into the respective shots again and again and be sure that continuity was maintained. Nevertheless, it was still possible to adjust the size and position of individual assets in certain shots.

We were also able to use the pipeline to change and publish assets and then update them in the environment or in the shot using the ShotGrid add-on in Blender. This made the division of labour within the team much easier and eliminated misunderstandings about versions or file names from the outset.
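For readers curious about the mechanics: the heavy lifting was done by the ShotGrid add-on, but the underlying Blender mechanism is ordinary library linking. A minimal sketch, with a hypothetical publish path and collection name:

```python
import bpy

# Link (not append) a published environment collection, so that a new
# publish of the source file automatically shows up in every shot file.
env_path = "/project/publish/env/city/city_v012.blend"  # hypothetical path

with bpy.data.libraries.load(env_path, link=True) as (data_from, data_to):
    data_to.collections = [c for c in data_from.collections if c == "ENV_city"]

# Instance the linked collection into the shot scene via an empty.
for coll in data_to.collections:
    inst = bpy.data.objects.new(coll.name, None)
    inst.instance_type = 'COLLECTION'
    inst.instance_collection = coll
    bpy.context.scene.collection.objects.link(inst)
```

A library override on top of such an instance is what then allows the per-shot tweaks to size and position mentioned above.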

Modelling & Shopping

It was clear to us that the amount of work involved in modelling all the assets or an entire city would far exceed our capacities. That’s why we decided early on to only create the really important models ourselves and to buy in all the others.

Wireframes of the models in Blender


As we didn’t want our style to be photorealistic, we had to search a little longer to find the right models. We finally decided on a package that already included an entire city. Our hero models – the robot, the “router” and the station from which the robot emerges – we built ourselves, with valuable tips from Helmut Stark, and we created the grandma with the help of Reallusion’s Character Creator. It was important to us that the robot should look as if it had really been built to collect rubbish, but it also had to have enough freedom of movement to be able to dance. For the grandma, we decided to change the proportions of the head and body to avoid ending up in the uncanny valley.

Texturing

Once the models were finished, it was time for some colour: we used Adobe Substance Painter to give the robot, the station and the router the right finish. By using textures with less detail, we got closer to our goal of a cartoon-like style.

A mix of procedurally generated and hand-drawn masks for wear and dirt emphasises the history of the models: it becomes very clear, for example, that the robot has been keeping the city streets clean for a long time and has picked up a few scratches along the way. We then imported all the textures created in Substance Painter back into Blender, and the models were ready for the next step.

Rigging

Our two characters, the grandma and the robot, finally wanted to move. For the grandma we used the Blender add-on Auto-Rig Pro, and with the help of our lecturer Benc Orpak we generated a functioning rig quite quickly. Fortunately, this saved us the manual rigging process and therefore a lot of time. Things were a little different with our robot: although it has human-like proportions, some of the joints work a little differently, so we had to rig it manually. The aperture and the cylinders on the legs were particularly interesting, as they nicely visualise the robot’s mechanical character. For remapping the recorded motion-capture data onto the respective rig, we were able to use Auto-Rig Pro for both models.

Once the mapping – i.e. the definition of which bone receives which keyframes – had been set up, it could be reused again and again. In the end, a single click was enough to transfer all the mocap data to the target rig. It was a truly magical moment to see the robot move almost by itself for the first time.
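Auto-Rig Pro’s remap tool handled this for us and its internals are its own; as a rough sketch of the general idea, a comparable one-click transfer can be approximated in plain Blender with per-bone constraints that are then baked to keyframes. All armature and bone names here are hypothetical:

```python
import bpy

# Which mocap bone drives which bone of the target rig.
bone_map = {
    "Hips": "root",
    "Spine": "spine_01",
    "LeftUpLeg": "thigh.L",
    # ... one entry per mapped bone
}

src = bpy.data.objects["MocapArmature"]
dst = bpy.data.objects["RobotRig"]

# Constrain each target bone to its mocap counterpart.
for src_bone, dst_bone in bone_map.items():
    con = dst.pose.bones[dst_bone].constraints.new('COPY_ROTATION')
    con.target = src
    con.subtarget = src_bone

# Bake the constrained motion into real keyframes and drop the constraints.
bpy.context.view_layer.objects.active = dst
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action='SELECT')
bpy.ops.nla.bake(frame_start=1, frame_end=250, only_selected=True,
                 visual_keying=True, clear_constraints=True,
                 bake_types={'POSE'})
```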

Motion capture shoot

When we were developing our story, it was already clear that we didn’t want to animate the dancing movements of the robot by hand, but to record them using motion capture. Katharina Hein, our producer, discovered Nicole, a professional rollerblader who showcases her roller dance skills on Instagram (@rollin_me_softly) and has even founded a Munich roller dance group (@munich_rollerdance_squad). Nicole was quickly enthusiastic about our film idea and was keen to lend her movements to our robot as a motion-capture actress.

A few days before the actual shoot, we made some preparations so that the motion capturing could run smoothly. For example, we measured our digital sets so that we could recreate them in a very rudimentary form in the studio. We also developed a detailed shooting schedule for the three days of filming. For the actual motion capturing, we used a suit from Xsens and the associated software MVN Animate. On the first day of filming, we mainly recorded Nicole’s dance movements on the roller skates. As Xsens is not an optical motion-capture system but one that works with inertial sensors, we ran into the following difficulty: Xsens calculates the position of the suit in space from ground contact and the distance travelled by the actress while walking. Because the rather tall roller skates broke that ground contact and the movement “floats” above the ground, we received no usable position data from the performer, only her body’s own movement data. This wasn’t a problem, however, as we were planning to manipulate the recorded paths to our liking during the animation phase anyway. In addition, we always filmed the entire set from two different perspectives with witness cams so that we could better reconstruct the movement in space later.

Animation

With the motion-capture data recorded, we were able to continue with the animation. First, all the selected takes were exported from MVN Animate as .fbx files. Retargeting in Blender was relatively simple using the Auto-Rig Pro add-on. Of course, the raw motion-capture animation was not precise enough, and we often wanted to change certain movements afterwards. For this we used the Blender add-on Animation Layers, which enabled us to do just that: the original mocap data sat on one layer, and we had the option of manipulating the movements on a second layer. This was particularly important in moments when the characters interacted with objects. We were accompanied by Prof. Melanie Beisswenger, who not only provided practical support but also explained the basic theoretical principles of animation.
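The Animation Layers add-on wraps this up neatly; conceptually it sits on Blender’s NLA system, which can stack a correction layer over the untouched mocap roughly like this (object and action names are hypothetical):

```python
import bpy

rig = bpy.data.objects["RobotRig"]
ad = rig.animation_data

# Layer 1: the raw mocap action, parked as a base NLA strip.
base = ad.nla_tracks.new()
base.name = "mocap_base"
base.strips.new("mocap", 1, bpy.data.actions["mocap_take_03"])

# Layer 2: a fresh action for corrections, evaluated on top of the base.
ad.action = bpy.data.actions.new("tweaks")
ad.action_blend_type = 'COMBINE'  # offset the mocap instead of replacing it
```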

Lighting

Before we started lighting ourselves, we first looked for mood images whose colours and lighting matched our ideas for Clean Aid. For example, we analysed still frames from Pixar films for their lighting design. From these images, we developed a lookbook and colour palettes for each of the three environments.

With this inspiration, we set about lighting our shots. We used various HDRIs, on the one hand as a “natural” light source for the scenes, on the other to tell the story of the transition from night sky to morning mood.

In one shot showing a time-lapse, we even used an HDRI sequence so that the different stages of the sky blend smoothly into one another. Of course, we didn’t leave it at the HDRIs, but set the lights individually for each shot to achieve our desired look. With backlighting, for example, we were able to separate the characters from the background. These lights were rendered on separate layers in Blender so that we could adjust them again in compositing. Volumes in the shots made the robot’s light cone visible. Creative director and CGI artist Kathrin Hawelka actively supported us in all of this.
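A hedged sketch of the time-lapse trick, assuming the sky HDRIs were exported as a numbered image sequence (path and frame count are made up): the world’s Environment Texture node simply plays the sequence, so every rendered frame picks up the matching stage of the sky.

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

env = nodes.new('ShaderNodeTexEnvironment')
env.image = bpy.data.images.load("/project/hdri/sky_0001.hdr")  # first frame
env.image.source = 'SEQUENCE'
env.image_user.frame_start = 1
env.image_user.frame_duration = 120     # length of the HDRI sequence
env.image_user.use_auto_refresh = True  # advance with the scene frame

links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```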

Rendering

As Blender gave us two built-in render engines, we used both. During the entire previs phase we rendered with Eevee, which allowed us to work almost in real time and accelerated our creative process enormously. However, as a realistic interaction with light was important to us – especially because of the robot’s light beam – we chose Cycles as the final render engine. To stay as flexible as possible for compositing in Nuke, we used Blender’s own compositor to export a multilayer EXR sequence containing all render passes and masks. For hero objects such as the robot, the granny and the boombox, we created one mask per shot. We realised that the denoiser works much better if it is applied not to the whole image but to each render pass individually. This allowed us to denoise the volume pass, for example, and reduce artefacts to a minimum.

At the same time, this procedure enabled us to save all passes in the EXR without noise. However, the denoiser also has its limits: asphalt, for example, was difficult to denoise. As the fine reflections produced by its structure look very similar to digital noise, this desired detail was often smeared into “mud”. Although we had to do without the denoiser in such cases, we were still able to reduce the render time significantly, ending up with an average of approx. 45 minutes per frame.
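In Blender’s compositor, the per-pass approach looks roughly like this sketch, assuming the relevant passes are enabled on the view layer and the default view layer name; only the direct volume pass is shown here:

```python
import bpy

scene = bpy.context.scene
view_layer = scene.view_layers["ViewLayer"]      # default layer name
view_layer.cycles.use_pass_volume_direct = True  # expose the volume pass

scene.use_nodes = True
tree = scene.node_tree
rl = tree.nodes.new('CompositorNodeRLayers')

# One Denoise node per noisy pass instead of one for the combined image.
# Passes where denoising smears detail (our asphalt problem) are simply
# wired through untouched.
dn = tree.nodes.new('CompositorNodeDenoise')
tree.links.new(rl.outputs["VolumeDir"], dn.inputs["Image"])
```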

Compositing

With the exported EXR sequences including all layers, it was time for compositing. In Nuke, we used a “back-to-beauty” workflow to merge the passes back into the overall image. This gave us the opportunity to adjust each pass individually and manipulate it to our liking.
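In Nuke this is a chain of Shuffle and Merge (plus/multiply) nodes, but the arithmetic behind “back to beauty” is compact enough to sketch. A NumPy stand-in, with dictionary keys following Cycles’ pass names (the exact pass set is an assumption based on the passes mentioned above):

```python
import numpy as np

def back_to_beauty(p: dict) -> np.ndarray:
    """Rebuild the beauty image from Cycles light passes (H x W x 3 arrays)."""
    diffuse  = (p["DiffDir"]  + p["DiffInd"])  * p["DiffCol"]
    glossy   = (p["GlossDir"] + p["GlossInd"]) * p["GlossCol"]
    transmit = (p["TransDir"] + p["TransInd"]) * p["TransCol"]
    volume   = p["VolumeDir"] + p["VolumeInd"]
    # Grading any single term before this sum is exactly the per-pass
    # freedom the workflow buys you.
    return diffuse + glossy + transmit + volume + p["Emit"] + p["Env"]
```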


Depending on the environment, we built a comp setup that allowed us to push the image even further towards our lookbook. To achieve the most authentic camera look possible, we used chromatic aberration effects in compositing, for example, and to further emphasise the dreamy overall mood during the dance sequence at sunrise, we worked with various types of glow. We also took the opportunity to retouch render errors and other minor details in compositing. Senior compositing artist Shayan Sharegh was at our side with lots of helpful tips and tricks.

Grading

Then it was off to grading with the composited shots. Together with colourist Claudia Fuchs, we fine-tuned a few small details, completed our look and checked the shots for colour accuracy. A carefully chosen film grain rounded off our sequences.

Sound design & music

No film without sound – and this is all the more true for an animated film. Very early in the development process, we were looking for the right piece of music for our robot to dance its way through the streets to. As the film is set in 2029, the grandma in the wheelchair is 80 years old and her roller-dance career peaked in her young adulthood, it was clear to us that the song should belong to the 1970s and the disco genre.
After extensive research, we came across “Dance Like Crazy” by the artist “Ikoliks”. Mykola Odnoroh (alias Ikoliks) was enthusiastic about the idea of his piece of music making our robot dance, and that’s how the collaboration came about.
Not only the music became an elemental component of our animated film, but also the sound design. Sound engineer Andreas Goldbrunner and artistic collaborator Rodolfo Anes Silveira took the rough sound design we had created for the previs as a basis and used it to conjure up an auditory backdrop that brings our environments and the movements of our rusty robot to life.

Retrospective

And voilà – about a year later it was finished, our first animated short film. Who would have thought it could take so long to develop a coherent story and go through all the production steps of the animation pipeline before ending up with a finished film of three minutes and fifty-five seconds? We certainly didn’t – but it was worth it. It has been a very instructive year with many hurdles and challenges overcome, and we can now look back on Clean Aid with pride and look forward to sharing it with as many people as possible.

Producers’ Comment by Katharina Hein and Felix Mann

After the exciting and inspiring lecture on VFX by Michael Coldewey in the first semester, we were gripped by enthusiasm and curiosity for VFX producing. The offer to support our year’s animated films gave us the opportunity to get to know a completely new way of producing. As students in our first semester, we had no previous experience in the production of animated films, so we began a period of constant learning about the workflow, the software required and how long render times can really take. One of the highlights of the production for both of us was the motion capture shoot, where we could make the best use of our experience from previous shoots – after all, a motion capture shoot is essentially only slightly different from a live-action shoot. As producers, we tried to take on all organisational tasks and to be available at all times to answer questions. After the shoot is before the shoot, and next year we are very much looking forward to producing the first live-action film of the 2021 VFX class together with WennDann Film. We would like to thank the entire VFX department and especially Ines, Franzi, Alex, Valentin, Hannes and Felix for this trust.

Team

  • Directors: Valentin Dittlmann, Hannes Werner, Felix Zachau
  • Producer: Katharina Hein
  • Motion Capture Actors: Nicole Adamczyk, Katharina Hein, Hannes Werner
  • Music: Mykola Odnoroh (Ikoliks)
  • Project Supervision: Prof. Jürgen Schopper
  • Project Consultant: Berter Orpak, Rodolfo Anes Silveira
  • VFX Pipeline TD: Jonas Kluger
  • Line Producer: Ina Mikkat
  • Team Assistant to Line Producer: Jenny Freiburger
  • Team Assistant: Petra Hereth
  • Scheduling: Beate Bialas, Sabina Kannewischer
  • Technical Support: Benedikt Geß, Florian Schneeweiß
  • Studio Management: Peter Gottschall, Andreas Beckert
  • Conforming: Martin Foerster
  • Colour grading: Claudia Fuchs
  • Sound Design & Re-Recording: Andreas Goldbrunner
  • Lecturers: Prof. Melanie Beisswenger (Animation), Prof. Michael Coldewey (Storyboarding), Kathrin Hawelka (Lighting), Benc Orpak (Rigging), Moritz Rautenberger (Camera work), Shayan Sharegh (Compositing), Helmut Stark (Modelling & Texturing)
  • Thanks to: Jonas Bartels, Franziska Bayer, Christian Gessner, Alexander Hupp, Christoph Kühn, Malte Pell, Jonas Potthoff, Nicolas Schwarz, Tobias Sodeikat, Ines Timmich

Planet B
28 Feb 2023 – https://digitalproduction.com/2023/02/28/planet-b/
"There is no Planet B". A slogan from various climate protection campaigns, which refers to our current situation of not having a second planet on which to live on which the inhabitants of Earth could live. The film Planet B shows a scenario in which exactly that is attempted. On a planet afflicted by drought and toxic gases, a young woman sits in her underground bunker and controls the construction process of a new planet in space.

In autumn 2021, six of us – the second cohort of VFX students at the University of Television and Film Munich – started looking for ideas for our first film exercise, meeting weekly under the direction of Prof. Jürgen Schopper. Planet B began with the story of an old man who breeds biotopes and discovers a small human in one of them. Once the teams for realising the films had been formed, however, this idea soon developed into a futuristic sci-fi story with the aim of commenting on the man-made decay of the environment and presenting the urgency of a solution.

by Alexander Hupp, Franziska Bayer, Ines Timmich

After further development, the old man who created a biotope with a human became a young woman who, together with other scientists, wants to create an entire planet in order to save humanity.


The rough story was finalised at the beginning of 2022, after which implementation began. The next step was to work out how our scenes would resolve into shots in the various environments, for which storyboards, animatics and blocking were created in a fluid process over several weeks. In parallel, our pipeline TD Jonas Kluger worked with us to create an efficient working environment and pipeline. In addition, a lot of concept art was created, as finding our style – explained in more detail below – was also a challenge.

Storyboard

In the storyboard phase, we worked on the visualisation of our script with great support from Prof. Michael Coldewey. During this week, we worked intensively on how we wanted to tell our story visually. It was clear to us early on that our camera language should be static and calm, which also benefited our 2D look.

Final concepts of the “Creator”

The storyboard was revised several times, so that in the end we only moved on to the pre-visualisation phase with the fourth version. There, together with Dr Rodolfo Silveira, we converted the finished storyboard into a rough pre-vis. This consisted of 2D animations that helped us to define the mood and timing of the shots and their sequence more precisely.

Key concept of the “Creator” at the interface and rough draft of the lighting mood

Concept/design

We were certain from the outset that our film should combine 2D drawn elements with 3D animation. Our style references included the games Valorant and Borderlands, the League of Legends series “Arcane” and various episodes of the series “Love, Death & Robots” – “The Witness” and “Jibaro”, both directed by Alberto Mielgo. The aesthetic of our film went through many phases: initially we wanted to make everything in a steampunk style, for a while we focused on a retro-futuristic look, but in the end it became a mixture of grunge and pre-apocalyptic aesthetics.

Development sketches of the hairstyles and masks

To “nail” this look stylistically, we designed many different concepts for the two planets, the environments and our protagonist – who still bears the mysterious title “Creator”. Not only the look but also the practicality of the various assets and environments was important: the Creator’s robotic arm, for example, had to be anatomically similar to a real human hand so that it could be rigged realistically. For reasons of effort and time, we opted for a slicked-back short haircut that moves little or not at all, and a breathing mask that covers most of the lower half of her face. For the two planets, we concentrated primarily on colour concepts: the old Earth, destroyed by climate change, was to be bathed in dry, desert-like ochre and toxic sulphur yellow, while the new Planet B shines in rich turquoise, teal and blue tones.

First attempt to define the look of the satellite dish and its surroundings

We based the design of the giant satellite dish on a real one, the Arecibo Observatory in Puerto Rico, which was the world’s largest single-aperture telescope until 2016 and collapsed in 2020 (much like our dish). To get a feel for the framing and colour mood of our film, we created key concepts for the individual key scenes.

Modelling, texturing & rigging

For the modelling, we worked on the 3D models with great help from our external lecturer Helmut Stark, who showed us how to model the biotope or the bunker corridor, for example, in a detailed and topologically sensible way. Initially we had a complete model of the bunker including a detailed interior, but in many scenes this was later replaced by matte paintings, which better matched the 2D look. To give our shots the drawn look, we often took a frame from the camera’s point of view, painted over the existing texture of the model in Krita and projected this drawing back onto the models. By focussing on this texturing method, we ended up needing only simple models with the right silhouette and dimensions. But not everything was textured this way.
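A hedged sketch of that projection step in Blender, assuming a 3D viewport looking through the shot camera (the unwrap operator works from the current view) and a simple Principled material; object and file names are hypothetical:

```python
import bpy

obj = bpy.data.objects["bunker_corridor"]
bpy.context.view_layer.objects.active = obj

# Unwrap the mesh from the camera's point of view, matching the painted frame.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.project_from_view(camera_bound=True, correct_aspect=True)
bpy.ops.object.mode_set(mode='OBJECT')

# Wire the Krita overpaint in as the surface colour.
img = bpy.data.images.load("/project/paintover/corridor_sh010.png")
mat = obj.active_material
tex = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex.image = img
mat.node_tree.links.new(
    tex.outputs["Color"],
    mat.node_tree.nodes["Principled BSDF"].inputs["Base Color"])
```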

Texturing in Adobe Substance Painter

Some models, such as the Creator’s work interface and the space capsule in which our protagonist later flies into space, were painted by hand in Adobe Substance Painter. This allowed us to show the models from several perspectives without having to project something new onto them each time. The Creator herself was also painted by hand in Substance Painter; we added several lines and hard shadows, especially around her eyes and ears, to enhance the comic/2D look. The rigging was done under the very helpful supervision of our external lecturer Benc Orpak, who showed us how to rig our protagonist realistically. For her facial expressions, we used shape keys to animate her eyes and eyebrows in particular.
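The shape-key setup itself is pleasantly small; a minimal sketch with hypothetical names, one key per expression component and keyframes on its value:

```python
import bpy

head = bpy.data.objects["creator_head"]
head.shape_key_add(name="Basis")           # rest shape
brow = head.shape_key_add(name="brow_up")  # sculpted offset: raised eyebrows

# Neutral on frame 1, eyebrows fully raised on frame 12.
brow.value = 0.0
brow.keyframe_insert("value", frame=1)
brow.value = 1.0
brow.keyframe_insert("value", frame=12)
```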

Animation, motion capture & simulation

Overall, it would have been far too time-consuming to animate the entire character “by hand”, so we decided to record the body movements using motion capture, for which we used the Xsens system available at the university. We also had to capture the interactions that would later be seen in the film, so we built a similar environment on set for our motion-capture actress to interact with. We made sure the scaling was correct so that we could use the data without too many changes.

Exploded view of the space capsule


However, there were two components of the character that we could not record with the motion-capture system: the hand movements, including the fingers, and the facial expressions. With the support of Prof. Melanie Beisswenger, these were created in Blender using keyframe animation. We animated the facial features using shape keys and the eyeballs with a controller created via the Auto-Rig Pro add-on. As the character’s face below the eyes is covered by a static mask, the facial expressions could only be conveyed via the eyes and eyebrows, which placed high demands on the animation.
To make the collapse and the behaviour in the storm look as real as possible, the hanging elements of the satellite dish were implemented as a simulation. We took the dimensions from the Arecibo Observatory and used these parameters to calculate the mass of the respective fragments. Using simple wind force fields, we were able to simulate the interaction between the pillars and the storm. The pillars were fractured in advance, i.e. broken up into small individual pieces, which allowed us to control their collapse more precisely.
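As a rough sketch of that setup in Blender, assuming the pillar was already fractured (e.g. with the Cell Fracture add-on) into a collection of fragments; names and values are hypothetical, with the mass standing in for the figure derived from the Arecibo dimensions:

```python
import bpy

scene = bpy.context.scene
if scene.rigidbody_world is None:
    bpy.ops.rigidbody.world_add()

# Every pre-fractured piece becomes an active rigid body.
for frag in bpy.data.collections["pillar_fragments"].objects:
    bpy.context.view_layer.objects.active = frag
    bpy.ops.rigidbody.object_add(type='ACTIVE')
    frag.rigid_body.mass = 40.0  # per-fragment mass from the dish dimensions

# The storm: a simple wind force field pushing across the set.
bpy.ops.object.effector_add(type='WIND', location=(0.0, -30.0, 5.0))
bpy.context.object.field.strength = 400.0
```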

Defining the drawing style in space


To save time, we also simulated the land masses flying away from Planet B, whose surface consists of hexagons. To do this, we detached certain hexagons from the planet. By deactivating gravity, we could make the parts fly off using a single force field at the centre of the planet. We then converted the simulation into keyframes in order to make individual changes.
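Under the same assumptions, the fly-away setup reduces to three steps: gravity off, one repelling force field at the planet’s centre, and a bake so the result can be edited by hand. A hedged sketch with a hypothetical collection name and frame range:

```python
import bpy

scene = bpy.context.scene
scene.use_gravity = False  # detached hexagons should drift, not fall

# A single force field at the planet's centre; positive strength repels.
bpy.ops.object.effector_add(type='FORCE', location=(0.0, 0.0, 0.0))
bpy.context.object.field.strength = 500.0

# Bake the simulation into keyframes so individual pieces can be adjusted.
bpy.ops.object.select_all(action='DESELECT')
hexes = bpy.data.collections["loose_hexagons"].objects
for h in hexes:
    h.select_set(True)
bpy.context.view_layer.objects.active = hexes[0]
bpy.ops.rigidbody.bake_to_keyframes(frame_start=1, frame_end=200)
```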

Shading and rendering

Thanks to the projected textures, the shader was easy to set up, but only for objects whose perspective never changes. Our character was therefore the most difficult part of the shading: she had to be able to move, and the light had to behave accordingly. Instead of painting the light directly onto the textures, we set it in three-dimensional space to achieve realistic behaviour. CGI artist Kathrin Hawelka and cinematographer Moritz Rautenberg were particularly helpful in this process.

Overall, our desired look depends heavily on the shading: it has to combine the 2D (drawn) with the 3D (animation/shading) well. We decided in favour of the LightningBoyShader, a layer-based shader. Compared with conventional cel shaders, it lets us control the influence of the light sources more easily. The look is also defined by broken edges: the objects themselves we could break up in 2D, whereas the light edges had to be broken up in the shader. The shader also allows us to use light sources selectively, i.e. we can separate the surroundings from the character and light them individually. We received great support from our 3D mentor Berter Orpak with the complex shader setups. It was also important that our 3D objects matched the background, as the lighting moods had already been defined in the 2D paintings.

Another advantage of the LightningBoyShader is that it works with Eevee, the real-time render engine integrated in Blender. Being able to see the “finished” image in real time while working offers a number of advantages; above all, it greatly accelerated the lighting and shading. Thanks to Eevee, we were also able to render intermediate states in full quality without any problems. We appreciated the fast render times more and more towards the end, as the complexity of the shader meant that “real time” sometimes turned into a good 15 seconds per frame. We rendered with a colour depth of 16 bits, which we had to settle on early, as all drawn textures had to be created with the same colour depth.
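As a minimal sketch of those render settings (the output format is our assumption; 16-bit half-float EXR is one way to keep the colour depth of the painted textures end to end):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.image_settings.color_depth = '16'  # half float, matches textures
```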

Hand-painted UV texture of the biotope

Compositing

The compositing part of our film was relatively small, as the shader imposed some restrictions. Due to time constraints, we decided to do most of the compositing in Blender. The challenge lay mainly in the capsule scenes, where the beam can be seen in the reflection of the capsule’s window. The problem is that our real-time renderer cannot produce these reflections, so we had to add them afterwards. The final touches are often added to a film during compositing; in our case, however, the look had already been defined through our drawings, so we did not have to make many adjustments afterwards.

Sound design and voice

In our first animatics, we put together a provisional mix of sound effects and atmos, but it wasn’t until May 2022 that we really got to grips with the sound design and, above all, the film music. It was clear that we needed something epic, preferably in the “Hans Zimmer style”. Our producer, Felix Mann, contacted the film composer Victor Ardelean. After a meeting where we explained our ideas for the mood of the music, Victor created the perfect composition for our film within just a few weeks, based on our three-minute blocking.

Foley sounds were created in a recording studio at the HFF: footsteps on a concrete floor, for example, or the sound of the robot hand touching the glass biotope, which we made with jewellery rings and a beer mug. The voice recordings were also made in the studio, spoken by Moritz Segura Kanngießer and Ines Timmich. From the film music, Foley and voices, sound mixer Stefan Möhl created the 5.1 mix for the DCP, which was then produced by Martin Förster.

The Planet B team: (from left) Ines Timmich (VFX), Alexander Hupp (VFX), Franziska Bayer (VFX), Felix Mann (production)

Conclusion

We worked on the films for two semesters and half a summer with a lot of support, and were able to show our finished results at the VFX Reel 2022 at the HFF on 24 November 2022. We owe the planning of this and other events within the VFX department to our team assistant Petra Hereth. After this very successful evening, we are looking forward to the next opportunities and festivals where we can present our film. We hope that all viewers will enjoy it, but also that the film will inspire them to rethink their own actions and how they treat the environment. Ultimately, we are still faced with the fact that there is no Planet B.
