VR gallery at the Ostfalia – https://digitalproduction.com/2024/11/07/vr-gallery-at-the-ostfalia/ (Thu, 07 Nov 2024)
Wouldn't it be great if you could see something in VR besides hectic hustle and bustle, demos and microtransaction prompts? An art exhibition, for example? Yes, there is – and we found one such project with Noah Thiele, a student at Ostfalia.

Even before I started my studies at Ostfalia, I had the idea of placing my painted pictures in an exhibition space that I could freely design myself. My fascination with 3D enabled me to realise exactly this vision: Blender became my playground for such ideas at no cost and with limitless possibilities. I developed a museum, which became one of the projects for my application portfolio for my current degree programme. However, at that time I still lacked the time and expertise to turn my Blender scene into an interactive museum.



DP: What was the aim of the project?
Noah Thiele: As part of the fourth and fifth semesters, I had the opportunity to give free rein to my own project idea in the so-called compulsory elective module. I was free to choose my project for this module and so I came back to the idea of combining my two passions, painting and computer graphics. However, this time I wanted to learn something new and set myself a challenge by choosing a medium that I had absolutely no prior knowledge of.
Virtual reality has always fascinated me, but I had never actually had the opportunity to put on a VR headset before my studies. What’s more, I wanted my virtual exhibition to be interactive.
I had either Unity or Unreal Engine at my disposal, neither of which I had ever even opened before. But true to the motto "There is nothing in this world that you can't learn", I threw myself in at the deep end and pursued my vision of a virtual reality exhibition.

DP: Did you have any recommended sources?
Noah Thiele: I thought the tutorials I found on VR for beginners in Unreal Engine were rather poor. A lot of things didn't work for me, so I got most of the functions working through trial and error and my own experimentation.

DP: What data was there?
Noah Thiele: I take photos of all the pictures I have painted and document everything on my website www.noahthiele.de/art. These images are the foundation of the project: they not only serve as keys to the different worlds depicted on the canvases, they were also my reference for designing and modelling those worlds.

DP: How did you pull the data into Unreal?
Noah Thiele: The photos I took of my works were edited a bit in Photoshop to place them in Blender as textures on 3D modelled canvases. These canvases stand on easels in the VR gallery and can be touched and held in the hand.

DP: The first alpha version: what worked “right away”?
Noah Thiele: Given my non-existent knowledge of VR and Unreal Engine, the alpha version was a very important step for me to see whether my ambitious project was even feasible. My goals for the alpha were extremely simple: connect a VR headset to Unreal Engine and load a custom asset to test the right dimensions. I also placed an image as an image plane in the scene and equipped it with a collision box: if one of the controllers approaches the image plane, a new level is loaded.
This was the most rudimentary basis of the project. In addition, in the unlikely event that I failed to implement future features in Unreal Engine, I still had a simple but functioning application.

DP: How much time did the implementation cost?
Noah Thiele: To bring more interactivity to the gallery, I decided to redesign how you enter the worlds. Instead of loading new worlds by touching the images, I created a kind of “lock and key” system. You take a picture of your choice from its place and place it on the gallery’s central computer. Once the canvas is placed on the desk, it appears in full size on the screen above it.
If you now reach for the lever next to it and pull it, you are teleported to a new environment and find yourself in the centre of the painted picture. Adding such interactive elements in Unreal Engine took me a long time, as the programme was very new to me and I couldn't find any really good tutorials on VR in Unreal Engine that covered the specific functions I had in mind. I had to cut a few corners and came up with new ideas and features that fit the project well.

DP: The “animated” elements within the virtual gallery: How did you realise them?
Noah Thiele: To make the worlds appear even more alive, I incorporated some animations. I exported the VR-modelled meshes from Gravity Sketch, added a rig in Blender, animated them and exported them as Alembic for Unreal Engine.
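As a rough illustration of that last step, here is a minimal Blender Python sketch of such an Alembic export; the object name, frame range and output path are placeholders, not details from the project.

```python
# Export one selected, animated mesh as Alembic (.abc) for Unreal Engine.
import bpy

# Select only the mesh we want to export
bpy.ops.object.select_all(action='DESELECT')
obj = bpy.data.objects["AnimatedMesh"]  # hypothetical object name
obj.select_set(True)
bpy.context.view_layer.objects.active = obj

# Write the selection with its baked animation to an .abc file
bpy.ops.wm.alembic_export(
    filepath="//export/animated_mesh.abc",  # '//' = relative to the .blend
    selected=True,  # export only the selection
    start=1,        # first frame of the animation
    end=120,        # last frame of the animation
)
```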


For the "amazonia" image, I modelled the plants and vegetation individually, created my own foliage set in Unreal Engine, placed it in the environment and let it blow in the wind.
In another picture, the "flower field", I had simply painted and modelled without planning ahead. Exporting individual plants and creating a foliage set was virtually impossible because the entire field was connected: the whole field of flowers consists of coarse, large strokes further away and detailed meshes in the foreground, all heavily intermingled.


That's why I created a material for the entire mesh in Unreal Engine that transfers noise-driven, wave-like movement to the geometry (is.gd/youtube_unreal_materials). All the plants move slightly differently, yet together with their neighbours – like a real field when the wind blows through it.

DP: If you could start all over again: What would you do differently?
Noah Thiele: If I could start all over again, I would change the modelling software. Unfortunately, I only found out about the VR software Quill far too late; at that point, 90% of the modelling tasks had already been completed. Quill makes it possible to animate in VR. I used it to create test animations for effects that were to appear as soon as you enter a world: individual colour particles and brushstrokes were meant to fly out of the selected image and blur in front of your eyes until you find yourself in the newly loaded environment. Unfortunately, I had to drop this feature due to time constraints.
There are also a large number of images that I would have liked to integrate into the gallery. However, when I presented my project during the annual media design exhibition at my university, it turned out that the number of images in the gallery was sufficient.
Some visitors to the exhibition spent less time with the VR headset than would have been necessary to explore all the worlds. This is because wearing a VR headset is still uncomfortable and unpleasant for many people, in rare cases even causing slight balance problems and dizziness.

DP: What “lab equipment” was provided by the university?
Noah Thiele: Throughout the project work, I was provided with various VR headsets by my university.
I worked a lot with the Meta Quest 2, which I was able to borrow from the university – and I did a lot of the modelling tasks with it. Later on in the project, my university ordered two Meta Quest 3 models, one of which I was allowed to borrow and use for the rest of my work. During this phase, I almost exclusively created and finalised the worlds in Unreal Engine and modelled the last assets.
Switching from Meta Quest 2 to 3 made my work much easier and much more enjoyable thanks to the improved comfort and higher frame rate and resolution.
With the Quest 2, it became very uncomfortable to keep the headset on after one or two hours, so my work was often interrupted by headaches and continued the next day. The higher resolution and frame rate I gained by upgrading to the Quest 3 gave me a little more freedom.

DP: Which courses/seminars were particularly helpful for the project?
Noah Thiele: I didn’t take any courses or seminars on VR, but had very helpful discussions and assistance from my professor Melanie Beisswenger, other professors and staff at the university.

DP: Where can we see the finished product?
Noah Thiele: At the annual end-of-semester presentation, my project was exhibited in the university’s media design exhibition. You can also see videos and more about the project on my website at noahthiele.de/v-art/.

DP: What are your next steps?
Noah Thiele: I'm currently doing my internship at Woodblock in Berlin and starting to prepare for my Bachelor film. I'm planning a 3D animated music video and am considering integrating one or two VR workflows or styles into my bachelor's thesis. I've also discovered Gravity Sketch as a helpful pre-visualisation tool, as it allows me to quickly capture interesting perspectives in 3D scenes and gives me new ideas in the conception phase.

The Ostfalia

With around 12,500 students, Ostfalia University of Applied Sciences is one of the largest universities of applied sciences in Lower Saxony. It offers more than 90 degree programmes in the fields of law, business, social and health care as well as engineering and computer science at four locations. Practical relevance and interdisciplinarity take centre stage here. The Faculty of Transport, Sport, Tourism and Media in Salzgitter has around 2,000 students. Ostfalia is a state university, so there are no tuition fees (apart from long-term fees once a student's study credit is used up). A re-registration fee of currently approx. 330 euros is payable per semester, which also includes the semester ticket for local public transport.

The Media Design degree programme in Salzgitter is offered with a Bachelor’s (6 semesters) or Master’s degree (4 semesters). Students can specialise in animation, VFX, games, interactive media, film or communication design.

In the field of animation, students are taught to work with a variety of techniques, starting with stop motion and 2D animation through to motion design, 3D animation and character animation. In addition to technical skills, the focus is on conception and visual development, and in particular working on your own ideas, projects and films. Excursions to conferences and festivals such as the ITFS, the Festival of Animation Berlin, Gamescom and DOK Leipzig bring our students into contact with the industry.

Ostfalia is equipped with all kinds of state-of-the-art technology: Motion capture system from Rokoko, large film studio with blue/green screen, professional camera and lighting equipment, 3D printer, VR & AR headsets (Quest 3, Apple Vision Pro, Hololens, HTC Vive Pro), several computer pools that can also be used for rendering, photo studio and a games club with corresponding community and equipment.

What are the requirements for the programme? The application with the artistic portfolio and passing the aptitude test are the prerequisites for admission to the Media Design (BA) programme. Prospective Master’s students with BA degrees in design do not need a portfolio to apply.

Magic Nodes 2: New functions – https://digitalproduction.com/2024/08/27/magic-nodes-2-new-functions/ (Tue, 27 Aug 2024)
The "Magic Nodes" extension for "Adobe After Effects" has has been around for a while. This enables compositing in a Nodes network with the Adobe programme. A look at the changes and improvements in version 2.

For many artists, the layer-based system of "After Effects" is the ideal solution for working with moving images. It is easy to understand and learn, and its proximity to Photoshop allows many graphic artists to enter the world of motion graphics quickly. The Photoshop integration is an additional advantage when transferring designs for subsequent animation. In compositing, however, node-based software shows its advantages through greater flexibility. The "Magic Nodes" plug-in extends the Adobe solution with exactly this option. The ambitious project by "Hollywood Illusion" works with After Effects from version 2017 on Windows and Mac.

Version 2 of Magic Nodes allows the use of multiple graph networks that can be nested within one another.


Installation and licensing work without any problems in version 2. A licence can also be used in a beta version of After Effects installed in parallel. The main new features in Magic Nodes 2 concern working with nodes. A selection can now be grouped and collected in a box: the user simply drags a selection over the relevant elements with the mouse while holding the Ctrl/Cmd key. A container appears and holds the nodes. Additional nodes can later simply be dragged into the group, which also works the other way round. The name and colour of a group can be changed via the settings.
Multiple networks (graphs) are now also possible in one project. They can be managed via the menu on the right-hand side of the panel, where the settings for the individual graphs in a project can be found. Name, size, duration and frame rate are changed individually via a pop-up. An existing graph can become part of another graph: frequently used operations can be collected like libraries and reused in another network if required. In practice, you simply drag the graph into the current view and treat it like footage.
Other improvements include the ability to deactivate a node in the network at the touch of a button and the quick connection of elements via "Smart Connect", where the line automatically snaps to the nearest operator. Compatibility with 3D layers and cameras has also been implemented.

"Magic Nodes" is now equipped for the use of 3D layers and cameras. The extension displays the data from these elements correctly in the preview.


Existing 3D footage is now displayed correctly with all animations in the Magic Nodes viewport – and to ensure that it runs smoothly, the plug-in supports GPU acceleration as of version 2.
Unfortunately, it is not possible to simply copy/paste networks from one project to another. Teams in a studio would particularly benefit from an export/import function for graphs or node groups. Stored in a central library on a server, existing solutions could easily be reused at several workstations.
The undo/redo system was already criticised in the first version of Magic Nodes. Unfortunately, this criticism still applies to the current release. Undoing changes with a keyboard shortcut quickly becomes an adventure: deleted nodes, groups or connection lines cannot be brought back with an undo command. The developer is aware of the situation and has put a solution on the feature list for an upcoming version. We are curious.

For a better overview, the artist can group many nodes in a network.

Conclusion

The new features in "Magic Nodes 2" are welcome workflow improvements. They speed up work and provide a better overview of the network, making the plug-in an interesting solution for compositing. Additional changes in the timeline are still possible alongside the nodes. A little caution is required here – problems can quickly arise if the user accidentally deletes elements or fails to adjust a necessary layer.

Ghosts at the HFF – https://digitalproduction.com/2024/09/15/ghosts-at-the-hff/ (Sun, 15 Sep 2024)
A humanoid robot leans against a car, smoking, with a dog sitting opposite. They enjoy the sunset together.

Our first ideas emerged from this sketch: What characterises the robot, what is the relationship between it and the dog, what kind of world do they live in? We are in the near future. People and seemingly all life have disappeared from the city and countryside. Posters, camps, cars and facilities left behind bear witness to the vanished culture.

By Edgar Bauer, Franz Stöcker and Felix Zachau. They are studying image design at the HFF Munich, specialising in VFX. All three found it exciting to tell the story of such different characters coming together, which is why they joined forces as a team.

The rusty old household robot has also been left behind. It still works and seems to mechanically fulfil its old routines. The dog is playful; he eats and seeks closeness.
In doing so, he gradually brings the old, cumbersome, stuck machine to life. He takes it out of its routine and humanises it. The robot becomes a humanoid that can care, play and feel. You watch as the years pass, the dog grows older and the robot becomes more human. It all ends with one last special evening when they watch fireflies together.
This sets the cornerstones of the story, focussing on their relationship and the development of the robot.
In weekly meetings with Prof Jürgen Schopper and Dr Rodolfo Anes Silveira, we presented our respective progress and discussed how best to proceed. Team assistant Petra Hereth coordinated the project organisation for us, including the lectures and all associated seminars and workshops.

Storyboard

With the script ready and a precise idea of the aesthetics, it was time for the storyboard. Professor Michael Coldewey helped us break the story down into a few images, which we then drew in the "Procreate" drawing programme. The focus was on the comprehensibility of the images rather than on details, but at the same time we were already able to think about shot framing.
The aim was to tell and structure the script visually in such a way that an outsider could look at the storyboard and understand what was happening.

First storyboard drawings and concepts of the robot

Animatic and pre-visualisation

The animatic can be divided into two phases. First comes the drawn animatic and then the PreVis, a rough 3D animated version of the film. The drawn animatic is an animated version of the storyboard. We used the images from the storyboard in the “Premiere Pro” editing programme. We also tested how long we should leave the shots for and created a rudimentary version of the sound. This allowed us to consider what should be shown visually and what could be told via the sound. That was very good for getting a rough feel for the story. The next step was the PreVis.
We modelled and rigged rough versions of the dog and robot in "Blender" and created very simplified sets. The scenes were then quickly animated and rendered with Blender's Workbench engine to see where the characters were, how well the camera was placed and whether a viewer could understand the film at all. Our focus on speed meant we could quickly see which shots worked and which we needed to rework. Telling a long period of time in a short film was a particular challenge. Over 28 versions, we considered shot sizes, moved characters to different locations and added, replaced, shifted in the timeline, changed or discarded entire shots. In the end, we had a plan of how long each shot had to be, how the camera should be placed and what should happen in front of it.

Music

With a rough idea of what kind of tonality we wanted to create in our film music, we were very lucky that the composers Arezou Rezaei and Jiro Yoshioka from the Munich University of Music agreed to compose the score for us. It was ideal that we started collaborating even before the PreVis was finalised.
This allowed us to give each other feedback and they were able to advise us on which scenes we should keep longer or shorter so that the composition could unfold its full effect. The music also provided plenty of inspiration and the insights the musicians were able to give us based on their expertise had a significant influence on the development of the story.

The shot breakdown was finalised with the animatic.

Pipeline & Workflow

Without the help of our pipeline TD Jonas Kluger, this film would probably still not be finished. He familiarised us with the project management software "ShotGrid" and made sure that possible errors in the communication between "Blender" and "ShotGrid" affected us as little as possible. We initially created tasks for all environments, characters, assets and shots. We then divided the tasks among ourselves and uploaded the results to "ShotGrid". This pipeline makes it possible to work on sets simultaneously, so that the updated assets are also loaded when a scene is opened.
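The team's actual configuration isn't documented; as an illustration of that kind of task setup, here is a hedged sketch using ShotGrid's public Python API (shotgun_api3), with placeholder site URL, credentials, project id and task names.

```python
# Create an asset and one task per pipeline step in ShotGrid.
import shotgun_api3

sg = shotgun_api3.Shotgun(
    "https://yourstudio.shotgunstudio.com",  # placeholder site
    script_name="pipeline_script",           # placeholder script user
    api_key="xxxx",                          # placeholder key
)

project = {"type": "Project", "id": 123}  # placeholder project id

# Create the asset, then attach a task for each step
asset = sg.create("Asset", {"project": project, "code": "robot"})
for step in ["Modelling", "Texturing", "Rigging", "Animation"]:
    sg.create("Task", {
        "project": project,
        "content": step,
        "entity": asset,  # link the task to the asset
    })
```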

Modelling, Texturing, Rigging

It was important to us that the last dog and the last robot should be archetypes. The robot’s shapes should convey a fascination for its technical abilities and yet still look old and capable of development.


We took a lot of inspiration for the design from older tractors. The approach behind this was that it is obviously a simple machine – its functionality and components are revealed by the barely clad design. It is an industrial machine whose original purpose should be as far removed as possible from anything social or caring, so that its transformation towards the human becomes clearer and its actions contrast with its appearance. Small details, such as a colourful child's handprint, subtly tell of its past as a helping hand to a family. In "Procreate", we first roughly sketched the robot in order to find a design language we could agree on as a team. The sketches were then turned into technical drawings, which also worked out the individual body parts in detail.
The robot consists of over 200 individual parts that move mechanically depending on each other. Each part was drawn in great detail and then modelled in Blender with the help of our lecturer Benc Orpak. It was a very complicated but rewarding job to create the mechanisms that would eventually make the robot move. The individual parts had to fit together exactly and yet not clip into each other when moving. All parts were individually drawn, modelled, textured and rigged. Berter Orpak was on hand as a 3D mentor to answer any questions we had.
Unlike the robot, the dog is very organic, it should look playful, lively and cute. We researched the anatomy and bone structure of dogs and tried to model it as closely as possible to reality. We showed the ageing of the dog mainly through different textures for the young and old dog.

Settings

The film shows a contrasting world. It starts in a destroyed, abandoned city and then switches to a rural, old-fashioned landscape where nature is slowly returning.
Both landscapes were modelled in 3D in Blender. The textures were then drawn in “Procreate” from a camera perspective and then projected onto the model in “Blender”. In some cases, we also used the drawings directly as backgrounds. The backgrounds had to be coherent and picturesque without distracting from the actual action.

Look

The look development is more than just the selection of colours and designs. It defines the visual language of the film, gives it character and lends the story a unique aesthetic. This process is crucial to how viewers perceive the world of the film and how they connect emotionally with the characters and the plot.
The process is iterative and required many feedback loops. In the beginning, we aimed for a photorealistic, Pixar-like look. Partly to save render time, but also because we wanted to focus on the relationship between the robot and the dog, we then decided on a mix of drawn textures and 3D animation.


This change was one of the most difficult decisions we made for this film. We had already completed several sets in photorealistic style and had fallen in love with the look of the test renders.
But when we saw the dog in motion with a drawn texture, the decision was easy. It looked much more lively in the new style. We were also able to emphasise the character of the robot better in the new style.

Reference shoot

To recreate a movement realistically, it helps to collect as many references as possible. We planned a day of shooting for this purpose. Studio masters Andreas Beckert and Peter Gottschall let us use the HFF’s internal television studio for this.
We filmed with a younger and an older dog so that we had references for both the younger and the older scenes.
For the robot, our fellow student Julius von Diest slipped into the school's own Xsens motion capture suit. David Emmenlauer explained the correct operation of the suit to us in advance. We only used the data collected with it as a reference, so as not to make the robot appear more lifelike than the dog.
We took each shot with three different cameras. Each was responsible for a selected perspective. The first one was placed as close as possible to the camera position defined in the PreVis. With the other two, we filmed from the front and from the side so that we could then jump 90 degrees from axis to axis when animating.

Animation

The animation was a big part of our work process. We wanted to show the slow humanisation of the robot and the ageing of the dog in the animation. The recorded references from the studio shoot came in very handy.
We also got help from Professor Melanie Beisswenger. As an experienced animator, she had a trained eye for our animations and was able to give us very good suggestions for improvement, especially for the dog, which made it appear even more natural.


One thing that made animating technically much easier was that both the dog and the robot had a very detailed model for rendering and a less detailed one for animating. When animating, we could deactivate the body responsible for rendering the characters at the touch of a button. Thanks to the saved computing power, we were able to animate in real time in the viewport.
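The scene structure itself isn't documented; in Blender, such a one-click toggle can be sketched roughly via collection visibility. Collection names below are made up.

```python
# Toggle between the light animation proxy and the heavy render mesh.
import bpy

def use_animation_proxy(enable: bool) -> None:
    render_col = bpy.data.collections["robot_render_high"]  # detailed mesh
    proxy_col = bpy.data.collections["robot_anim_proxy"]    # light proxy

    # Hide the heavy geometry in the viewport only -- it still renders.
    render_col.hide_viewport = enable
    proxy_col.hide_viewport = not enable
    # Keep the proxy out of the final frames.
    proxy_col.hide_render = True

use_animation_proxy(True)  # real-time playback while animating
```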

Rendering

The film is rendered in Blender via Cycles. We used the HFF render farm to save rendering time and stay on schedule. We also rendered out masks for the dog and robot so that we could edit them better during grading.
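How exactly those masks were set up isn't specified; one common Cycles route is object index passes, sketched here with placeholder object and view layer names.

```python
# Render with Cycles and write an object index pass; an "ID Mask" node
# in the compositor then extracts a matte per character.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Enable the object index pass on the view layer
scene.view_layers["ViewLayer"].use_pass_object_index = True

# Give each character its own index so it gets its own matte
bpy.data.objects["dog"].pass_index = 1
bpy.data.objects["robot"].pass_index = 2
```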

Grading

We were very lucky to have Claudia Fuchs, a professional grader, at our side. Together we made some colour corrections and matched the colours of the shots to each other.

The colour grading gives the film its character and rounds off the image-design process.

Sound design

For animated films, in which every visual component is created from scratch, the creation of realistic soundscapes is a major challenge. Stefan Möhl took on the sound design for us. We discussed our ideas with him about how the robot and the world should sound. We felt it was a great privilege to be able to look over his shoulder as he worked. It was fascinating to see how he creates a soundscape from different sounds that seem to have nothing to do with each other and breathes life into the film. He surpassed all the expectations we had beforehand.

Retrospective

A humanoid robot leans against a car, smoking. A year has now passed since this sketch was made. A lot has happened in that time. We have spent many days and nights at the university and have grown together as a group. The collaboration led to creative solutions and a film that we are proud of. Only now do we realise how many steps and how much work actually go into the development of an animated film. It was a long process with many ups and downs. We have learnt a lot from it, we would like to thank everyone involved, and we look forward to the next project.

Animating images with Resolve – https://digitalproduction.com/2024/09/08/animating-images-with-resolve/ (Sun, 08 Sep 2024)
"Breathing life" into static images with effects from DaVinci Resolve - so-called Cinemagrams - are easy to do - and in our new series "Resolve Tricks for Beginners" we'll start with them!

Resolve, currently in version 18.6.6, is available in a slightly slimmed-down free version, as a Studio version for currently 329 euros online, or as a free add-on when purchasing high-quality hardware such as film cameras from Blackmagic Design.
Blackmagic – and many users – claim that the free version of DaVinci already completely fulfils most users' requirements for video editing, effects, grading and sound, and "puts many paid-for software applications in the shade".
A public beta of version 19 is also currently available (users must register before downloading). It replaces the current version during installation unless this is explicitly prevented (back up the current database and rename the DaVinci programme folder beforehand). Instructions on how to upgrade to version 19 while keeping several versions of DaVinci in parallel can be found on YouTube, e.g. here: is.gd/resolve18_and_19

The DaVinci Project Manager – which appears when you start the programme – is a database for managing DaVinci projects. Here, existing projects can be listed, managed, sorted and renamed, and project settings can be transferred from one project to another, for example.
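Incidentally, the Project Manager can also be driven by script: Resolve ships with a Python API (usable from the built-in Console, and externally with the Studio version). A minimal sketch, assuming a running Resolve instance and a placeholder project name:

```python
# Attach to a running Resolve and query the Project Manager. Inside
# Resolve's own Console the "resolve" object already exists and the
# import is unnecessary; "My Project" is a placeholder name.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
pm = resolve.GetProjectManager()

print(pm.GetProjectListInFolder())       # project names in the current folder
project = pm.LoadProject("My Project")   # open one of them
print(project.GetName())
```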

What are we talking about today?

For this look at the current version of DaVinci, we have set aside video editing, sophisticated grading and the complex sound editing options (Fairlight) and instead focus on a few (small) "delicacies" of the programme.

This turquoise background image was animated in Fusion with the Inspector (only moved a little horizontally). The movement curves were set to “smooth”. “Set Loop Ping Pong” reverses the movement at the end. If the image quality is sufficient, an animated GIF image with an endless loop can be exported if the clip length is suitable.

Install beta

At the launch of beta version 19, which will also be available in German at the request of numerous users, an overview of the upcoming innovations is provided – including post-production with live replays, AI tracking with IntelliTrack, ColorSlice grading with six vectors, improved noise reduction thanks to the DaVinci Neural AI Engine, text-based timeline editing, the Film Look Creator, refined volumetric rendering of smoke and flame effects, and audio-to-video panning (tracks objects in the viewer for spatial sound distribution).

DaVinci Resolve – simple exercise: move objects (images or video clips) in the Inspector with the Transform Node – and animate using keyframes.


And let’s go into parallax!

In "mini-projects", in which we added some (rather subtle) effects and animations to static images – cinemagraphs and parallax animations, some motion graphics for beginners, so to speak – we tried out a few effects, keyframe animation options and masking options.

Static images – moving effects

Applying animation effects to static images (photo animations/cinemagraphs) is a popular method of increasing the visual impact of digital media such as videos, animations or interactive presentations. The possibilities are almost limitless and only determined by taste.

Dynamic Zoom (can be called up in the Inspector) allows simple camera movements and dynamic zoom movements. The green and red rectangles in the viewer indicate the position and size of the camera section at the start and end of the movement. The direction of movement can be reversed with “Swap”.

Ready for Burns!

In addition to fast, attention-grabbing movements, such as those often used in PowerPoint presentations or on advertising websites, subtle movements and effects are likely to attract more attention when it comes to dynamically conveying information, concepts or stories. Slight camera pans or zoom movements on still images (also known as the “Ken Burns effect” after the American documentary filmmaker) can be created in DaVinci Resolve both as keyframe animations and automatically using the “Dynamic Zoom effect”.
Animations in which cropped objects in the image move at different speeds can convey a spatial impression (parallax effect). Object movements that take place close to the viewer’s eye appear faster than those further away. If a gentle zoom or a small camera movement causes foreground objects to move in front of more distant objects, the three-dimensional impression is further emphasised.

Cinemagraphs

Cinemagraphs, i.e. photo animations in which static images are supplemented with mostly subtle, repetitive animations of individual, smaller parts of the image, are neither pure photos nor videos; they are intended to attract attention, create atmosphere and evoke emotion.
DaVinci Resolve Studio promises all the necessary tools and effects in one application – and should therefore be suitable. In addition to the wide range of effects, the current Studio version also advertises simple AI-assisted object isolation – ideal for our experiments with the aforementioned "mini projects". Even if this is a bit like using a sledgehammer to crack a nut, let's try out some of the functions. As mentioned, DaVinci is not only powerful, but also extensive and complex. Nevertheless, the software promises an easy introduction, at least to the basic functions, even for newcomers to the programme.

PingPong – animation in a loop

Probably the simplest way to animate images is to move an entire image that is larger than the video frame in the Inspector and set keyframes. It is advisable to "round" the animation curves, i.e. make them "smooth". With a "ping pong" loop, the image moves smoothly in one direction, then gradually stops and returns to the starting point. Scaling and rotation can of course also be incorporated. Dynamic Zoom, however, is much faster and more effective.

Dynamic Zoom

All you have to do is select the relevant clip (an image) in the edit page and then switch on Dynamic Zoom in the Inspector. A drop-down menu can be opened below the viewer window, which uses two coloured rectangles to set the start and end values for zooming. The direction can be reversed using the “Swap” option. To change the duration of the zoom effect (normally applied to the duration of an entire clip), it is advisable to use an Adjustment Clip.

Dynamic Zoom (here in the end position of the zoom) on Adjustment Clip – The Adjustment Clip is necessary to control the length of the camera movement (Dynamic Zoom always affects an entire clip). It can also be used several times (in this example it was set twice in succession, in the second clip the direction of movement was reversed with “Swap”).

Split screens without masks – the Video Collage effect is simple, quick and quite flexible to use. Here is the layout preview: 2 rows, 3 columns, all fairly round with drop shadows – the blur effect on the background image must be “under” the Video Collage filter.

Automated collages with images or videos

The “Video Collage” effect offers an interesting way of arranging and animating several objects (moving images) in a scene. A grid, a kind of table with adjustable rows and columns, is superimposed on a clip with this effect. As with comic panels in graphics software, images or videos can be placed under the effect clip, allowing several objects to be arranged quickly and correctly. The parameters can also be animated here.

The Video Collage effect can be used to quickly create collages with several image and video elements. The clips under the grid are positioned individually and cropped using “Cropping” so that the respective sections “fit”.

Masquerade

If only parts of the image are to be animated, you need those individual parts. DaVinci offers various options for masking, editing and cropping objects. The mask tools include basic shapes such as rectangles, circles, polygons and B-spline curves, colour selections – and, on the Colour Page, the Magic Mask. The shape masks can be combined and inverted, and their deformations animated. There is even an Onion Skin function to better control the animation of the mask curves.

Onion Skin can be switched on for Polygon masks to make it easier to judge animations. This is of course more practical for vector animations, which we will discuss in one of the next issues of this series.



There are several tutorials on YouTube in which users demonstrate how to create dynamic masks and even cartoons with the shape tools (to fill the shapes with colour or image content, a suitable background must be created each time, which then becomes visible within the overlying mask).
You can also use these masks to make objects "disappear" – like with Photoshop's clone stamp. This is also possible with the Paint tool, e.g. in clone mode. If masks are applied to moving videos, you can try DaVinci's intelligent tracking. But since we are only working with still images this time and not with moving footage, that does not apply. Next time!
Ideally, masks no longer have to be adapted to movements frame by frame, but “only” tracked – DaVinci tries to track the selected object and adjust the mask so that the masking is retained even when there is movement. One of the sensations of the new version is the “Magic Mask”, which now has a person and an object mode. With just a few strokes and a few adjustable parameters, DaVinci generates amazingly good masks of objects or backgrounds surprisingly quickly.

Object Removal – DaVinci attempts to remove distracting image content. A mask was drawn around the object, tracked, and the Object Removal effect was applied to the copy of the node. If the object was not too large and the background structure "matched", this worked quite well.


With the Magic Mask in particular, it is possible to quickly select objects in order to apply effects to them or, in conjunction with “Add Alpha Output” in the Node window of the Colour Page, to cut them out completely. This allows you to quickly place text in front of or behind (cropped) parts of the image, for example.
Convincing effects can be achieved with a little effort using DaVinci’s 3D tools. Real parallax effects can be created here due to the spatial depth. The depth of field of a camera can also be simulated.

The magic mask – here the image on the right, the cute cat by Marko Blazevic, was made available on Pexels and masked with two strokes (in “better” mode). In the node window (right) you can see the alpha output (right-click and “Add Alpha Output”), below the mask and tracking options (click on the play buttons to track a clip).
Lens flare and text with drop shadow against background
Depth of field in DaVinci's renderer – it is possible to work with the depth of field of a camera; a blur filter is faster, though. DaVinci offers many filters for blur effects, including: Gaussian Blur – the "normal" blur; Box Blur – blur based on the average colour of adjacent pixels (high radius values create distinct rectangles in the image; looks less artificial than Gaussian Blur); Defocus – depth blur with bokeh effects; Lens Blur – even more realistic effects, but longer render times.
Baby's first pipeline – https://digitalproduction.com/2024/08/24/babys-first-pipeline/ (Sat, 24 Aug 2024)
The first big project with multiple stakeholders can feel like a rollercoaster ride! A clear workflow is the secret recipe for smooth collaboration and communication between departments. You often hear the term "pipeline" - sounds technical, doesn't it? Don't panic! In this article, I'll show you how to set up a pipeline for your project - without any magic or programming!

The term “pipeline” is made up of “pipe” and “line” and describes a process in which something goes in at one end and comes out at the other. It is often used in the fields of visual effects, animation and games, but “workflow” is also a good synonym. Pipelines can be very different. They can range from a 3D software rendering pipeline to a production pipeline. A production pipeline is there to manage the workflow and data output at each stage.
Pipelines are the secret to collaboration. They connect different departments together. Often artists don't understand how these magical tubes work or how to connect them. But don't worry, we're here to shed some light and unlock the secrets of the pipeline!

Teamwork made easy

In every new project, artists face the challenge of teamwork. Even if we are masters in our art, that doesn’t mean we are automatically team players. This is where the pipeline comes into play! It creates standards and workflows so that every artist can work freely within clear expectations. A well-functioning pipeline reduces friction and allows artists to focus on their creative work, while the transition between departments is (ideally) buttery smooth. By minimising redundant tasks, we can focus on the really important things. This means fewer mistakes because we’re not constantly repeating the same work clicks and less stress because everything runs like clockwork. A well-planned pipeline ensures that everyone knows their role and everything goes hand in hand. This leaves more time for creative explosions and less time for organisational headaches.

Everyday life in the pipeline

There are many moments in everyday production when we have to focus on uncreative tasks. Imagine doing this over and over again: opening certain menus, navigating to folders, executing commands or waiting for processes to be completed. It may sound boring, but these routines are our safety net. They give us security and stability so that we can realise our creative ideas without crashing.
With a pipeline behind us, we can focus on what really matters – our art. So, let’s celebrate the magic of the pipeline and make collaboration a smooth and stress-free dance! To create our own pipeline, we need to take a closer look at the following four elements:

Folder Structure

Every pipeline starts with the folder structure: the files are the essential glue between departments and the representation of each department's work. The first question we ask ourselves when working with our project files is: "Where do I store my files?"
The folder structure defines the folder names, their hierarchy and the content of each folder. Depending on the production and studio, this can be customised as required.
Task-based: The task-based approach is best suited to specialised teams where tasks are strictly divided into departments, so that artists have a simple "everything in one place" structure. Assets and shots are organised under tasks (ASSETS/RIG/Shots).
Asset-based (most common): The asset-based approach allows easier switching when each artist works on multiple tasks at once – a great generalist approach for smaller projects (ASSETS/Mike/RIG).
Additional folders round this out: Planning holds production data such as asset/shot spreadsheets and budgets, Preproduction stores artwork and mood boards, and Footage contains general project-independent resources that help with asset and shot production.

Ordnerstruktur für ein einzelnes Projekt mit einer nummerierten Reihenfolge.
Folder structure for a single project with a numbered sequence.

A project folder structure can thus be divided into categories such as planning, preproduction, footage, assets and shots.
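A minimal sketch of bootstrapping such a layout with Python's standard library – the root path, task list and asset names below are assumptions following the asset-based example above:

```python
# Create ASSETS/<asset>/<TASK> folders for a new project (asset-based layout).
from pathlib import Path

PROJECT_ROOT = Path("P:/projects/ragingBull")   # placeholder project root
TASKS = ["GEO", "TEX", "SHD", "RIG", "ANIM"]    # subset of the task list

def create_asset_folders(asset_name: str) -> None:
    """Create all task folders for one asset; safe to run repeatedly."""
    for task in TASKS:
        (PROJECT_ROOT / "ASSETS" / asset_name / task).mkdir(
            parents=True, exist_ok=True
        )

for asset in ["mike", "mother"]:  # placeholder asset names
    create_asset_folders(asset)
```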

Naming Convention

Now that we have organised our files into different project folders, we need to answer another question: how do we name the working files, texture files, assets, shots and nodes in the scene? These questions are answered by the naming convention, which defines general and specific rules for certain types and phases of the project. These rules must answer all naming questions to avoid confusion and to help in the later stages of script development. General naming conventions could be:

  • English
  • camelCase (ragingBull)
  • no spaces " " in file names – use underscores "_" instead
  • Tasks are in CAPITAL LETTERS (RIG)
  • Versions are a “v” combined with 3 digits (v001)
  • no names or comments in the file name (these are saved as metadata)


Tasks can be shortened for use in asset and shot files:

Short   Long
GEO     Modelling
SHD     Shading
TEX     Texturing
RIG     Rigging
ANIM    Animating
FX      Effects
LGT     Lighting
RENDER  Rendering
PRE     Previz
SLAP    SlapComp
COMP    Compositing
EDIT    Editing
GRAD    Grading

Here is the final result for our project files using the naming conventions:

Working file: name_TASK_version.extension
mother_RIG_v001.mb
Render file: shotNr_AOVs_version.####.extension
s010_diffuse_v001.1001.exr
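Such rules can be encoded directly. Below is a small, generic Python sketch (not from any particular studio toolset) that builds and validates working-file names against the conventions above:

```python
# Build and validate working-file names: camelCase name, TASK in capitals,
# "v" plus three digits, no spaces.
import re

def build_filename(name: str, task: str, version: int, ext: str) -> str:
    """E.g. ("mother", "rig", 1, "mb") -> "mother_RIG_v001.mb"."""
    return f"{name}_{task.upper()}_v{version:03d}.{ext}"

PATTERN = re.compile(
    r"^(?P<name>[a-z][a-zA-Z0-9]*)_(?P<task>[A-Z]+)_v(?P<version>\d{3})\.(?P<ext>\w+)$"
)

def parse_filename(filename: str) -> dict:
    """Split a valid working-file name into its parts, or raise."""
    match = PATTERN.match(filename)
    if not match:
        raise ValueError(f"'{filename}' violates the naming convention")
    return match.groupdict()

print(build_filename("mother", "rig", 1, "mb"))  # mother_RIG_v001.mb
print(parse_filename("mother_RIG_v001.mb"))
```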

The naming convention manages the connections between different states and tasks of a production.

Software pipeline

Now that we have structured our project and naming conventions inside and outside the scene, it's time to look at the software and plug-ins. The software pipeline defines the software, its versions, the files and how they relate to each department. Its purpose can be summarised quite simply: define what goes in, what comes out and what software is used in between. The result is that every artist uses the same tools and plug-ins, while the data flows clearly to and from each department.

The software pipeline specifies:

  • Software
  • Software version (Maya 2025, Houdini 20)
  • File format per department (input & output)

TASK     INPUT    OUTPUT
GEO      .abc     .mb
TEX      .abc     .tiff (8 bit) / .exr (16 bit)
SHD      .mb      .mb
RIG      .mb      .mb
ANIM     .mb      .mb / .abc
LGT      .mb      .mb
RENDER   .mb      .exr (16 bit)
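The table translates naturally into data that a script can check before publishing. A small sketch using the formats above – the helper name is hypothetical:

```python
# The I/O table expressed as data; a publish check against it
# catches wrong exports early.
from pathlib import Path

PIPELINE_IO = {
    "GEO":    {"input": [".abc"], "output": [".mb"]},
    "TEX":    {"input": [".abc"], "output": [".tiff", ".exr"]},
    "SHD":    {"input": [".mb"],  "output": [".mb"]},
    "RIG":    {"input": [".mb"],  "output": [".mb"]},
    "ANIM":   {"input": [".mb"],  "output": [".mb", ".abc"]},
    "LGT":    {"input": [".mb"],  "output": [".mb"]},
    "RENDER": {"input": [".mb"],  "output": [".exr"]},
}

def check_publish(task: str, filename: str) -> bool:
    """Return True if the file extension is a valid output for this task."""
    return Path(filename).suffix in PIPELINE_IO[task]["output"]

assert check_publish("ANIM", "mother_ANIM_v002.abc")
assert not check_publish("RENDER", "s010_diffuse_v001.mb")
```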

Extras

Start Frame: 1001 (pre-run time for simulations)

Frame handles: 5 (pre- and post-run time for the edit)

The naming is crucial for clarity and the script pipeline.

Script Pipeline

We have now reached the final stage of our pipeline: the script pipeline. This is what most people think of as "the pipeline". In reality, that is only partially true: the previous sections are far more important and essential to creating a pipeline than the scripts that support it. The purpose of the script pipeline is really just to automate what has been defined by the folder structure, naming convention and software pipeline. The script pipeline helps our "studio pipeline" with the following tasks:
Folder Structure: Creating folders and organising the working files.
Naming Convention: Save, load, import and export files using a standardised naming convention within and outside the scene.
Software Pipeline: Opens the appropriate project software and loads the required plug-ins, scripts and files via import and publishing applications.

The script pipeline automates the previous steps and implements them smoothly. Even the best folder structure, naming convention and software pipeline can be ignored by a single artist. Automating these processes with scripts eliminates manual errors and extra training while saving time, allowing artists to focus on their craft and not on data processing.

Successful pipelines

The key to a successful script pipeline is automating these three elements:

  • Software setup at startup
  • Save & Load
  • Export & Import

Launching the right software with the required plug-ins, settings and scripts ensures that all team members are working with the same applications. Saving, loading, exporting and importing enforce the pipeline structure, simplifying work and publishing to other departments.
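As an example of such an enforced save, here is a sketch of a versioned "save" helper that determines the next free version number instead of letting the artist pick one. The folder layout and naming follow the earlier examples; the actual DCC save call (bpy, maya.cmds, ...) is left out.

```python
# Find the next free version for a working file instead of trusting
# the artist to count.
from pathlib import Path

def next_version_path(folder: Path, name: str, task: str, ext: str) -> Path:
    """Scan existing versions and return the path for the next one."""
    existing = sorted(folder.glob(f"{name}_{task}_v*.{ext}"))
    if existing:
        last_stem = existing[-1].stem               # e.g. "mother_RIG_v007"
        version = int(last_stem.rsplit("_v", 1)[1]) + 1
    else:
        version = 1
    return folder / f"{name}_{task}_v{version:03d}.{ext}"

save_path = next_version_path(Path("ASSETS/mother/RIG"), "mother", "RIG", "mb")
print(f"Saving to {save_path}")  # e.g. ASSETS/mother/RIG/mother_RIG_v008.mb
```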

arDesktop to start the pipeline from the desktop.

“One of the biggest mistakes we made was trying to automate things that are super easy for a person to do, but super hard for a robot to do.”
– Elon Musk

arLoad to open the project assets and shots.

On the other hand, we tend to see certain automations as the only way to get things done, without realising that we can achieve the same results with a little preparation and a simple manual workflow. Studios tend to be plagued by the same issues, but they remedy this with pipelines that automate as much "idle time" as possible to increase artist productivity.
The best time to create a script pipeline is after a thorough review of the previous three phases to determine which part will benefit from automation, as automation is time-consuming and comes with its own challenges.

(is.gd/xkcd_efficency)

Extras

Finally, a few points that may be extremely important when looking at the pipeline of a project and that you should definitely keep in mind.
Onboarding: The introduction of new employees to a project should always be a priority at the beginning. In addition to the familiarisation period, it is important to show them how the pipeline works specifically for their work and at the same time give them the opportunity to read about this in the pipeline documentation. (For example in the Open Source Pipeline Plex Wiki – is.gd/plex_wiki)

Backup: It is always advisable to make several backup copies of our project in case we lose it or need to revert to a certain state. Every operating system offers backups with timestamps, or you can simply use Google Drive or Dropbox to save a copy in a remote location. For Script Pipeline, git (www.git-scm.com/) is the best backup system as it allows a tremendous amount of control over backups, changes and rollbacks.

Simple automation: A variety of free and paid external tools can take over certain tasks and create a simple script pipeline. For example, tools such as Total Commander, Directory Opus, the Element or Adobe Bridge can be used to rename or move a large number of files according to a specific logic.
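For simple logic like this, a few lines of Python do the same job – here, as an assumed example, enforcing the "no spaces" rule from the naming convention (the folder path is a placeholder):

```python
# Replace spaces with underscores in all .tiff file names in one pass.
from pathlib import Path

folder = Path("footage/textures")  # placeholder folder
for old in folder.glob("*.tiff"):
    if " " in old.name:
        new = old.with_name(old.name.replace(" ", "_"))
        print(f"{old.name} -> {new.name}")
        old.rename(new)
```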

Open Source Pipeline Plex: Script Pipeline with the most important modules.

Conclusion

Pipelines are a key element in any production of visual effects, animations and games. A good pipeline guides the artist through the complexity of a multi-stage project and can mean the difference between meeting or missing a deadline. A bad pipeline adds complexity, confusion and unnecessary steps that slow down creativity, teamwork and progress.
To create a good pipeline, it’s important to be clear about the project and have enough experience from previous projects to plan a workflow in advance that guides everyone involved from script to final cut. It’s not just about automation and scripts, because a smart and clear workflow can achieve similar results without relying too much on programming. Ultimately, it’s about achieving the result as smoothly as possible.
