Text-to-Animation-Generator?
Just describe how an animation should run and it gets generated? That's possible, at least according to the makers of SayMotion. How well? You'll have to see for yourself.

DeepMotion has launched version 2.0 of its SayMotion software, which aims to simplify and automate the creation of animation sequences. The software allows users to turn stories and ideas directly into animation sequences, with the update bringing some significant new features. These are designed to improve the production pipeline and optimise collaboration between different teams.

Extended file format support and new features

One of the most notable new features in SayMotion 2.0 is support for a wider range of file formats. Artists can now import and export files in FBX, BVH, USD and GLTF formats. This expanded compatibility makes the software more versatile, especially when working with other popular tools in the industry.

In addition to improved file format support, SayMotion 2.0 also offers deeper integration of AI-powered features. Users can automatically analyse and adjust motion data through the use of deep learning algorithms, which significantly speeds up the workflow. New tools for motion optimisation have also been introduced, which make it possible to automatically correct unwanted motion errors before they become visible in the final animation.
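
How such automatic motion cleanup can work is easiest to see on a single joint curve. The following minimal Python sketch is our own generic illustration – not DeepMotion's algorithm – of one common correction step: low-pass filtering a jittery animation curve.

```python
# Generic illustration (not DeepMotion's algorithm): one common "motion
# cleanup" step is low-pass filtering per-joint curves to suppress jitter.
def smooth_curve(samples, alpha=0.2):
    """Exponential moving average over a per-joint value curve."""
    out, acc = [], samples[0]
    for s in samples:
        acc = alpha * s + (1 - alpha) * acc  # blend new sample into the average
        out.append(acc)
    return out

noisy = [0.0, 0.9, 0.1, 1.1, 0.2, 1.0]   # jittery joint rotation samples
print([round(v, 2) for v in smooth_curve(noisy)])
```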

Cloud integration and improved collaboration

Another highlight of SayMotion 2.0 is the extended cloud integration, which allows multiple users to work on projects simultaneously. This function is primarily aimed at larger teams and international projects where it is important that everyone involved always has access to the latest versions of the animation data. Cloud integration supports seamless synchronisation of project data and allows changes to be tracked in real time. This promotes collaboration and reduces the risk of version conflicts.

New export options and integration into existing pipelines

SayMotion 2.0 also gives users the option of exporting their animation sequences directly in various video formats such as MP4 and MOV. Direct uploading to platforms such as YouTube or Vimeo is also supported. This function is particularly useful for artists who want to present their work quickly and easily.

Another feature is the support of Python scripts. This makes it possible to create customised automations and integrate the software seamlessly into existing production pipelines. These enhancements allow SayMotion 2.0 to be used in a wide range of projects, from small indie productions to large VFX productions.
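
What such pipeline integration could look like in practice is sketched below. DeepMotion's actual scripting interface is not documented here, so every endpoint, parameter and response field in this example is a hypothetical placeholder – treat it as a pattern for driving a text-to-animation service, not as SayMotion's real API.

```python
# Hypothetical sketch: batch-driving a text-to-animation service from Python.
# The endpoint paths, parameter names and response fields below are assumptions
# for illustration; DeepMotion's real API may differ.
import requests

API_BASE = "https://api.example.com/saymotion/v2"  # placeholder URL
API_KEY = "YOUR_API_KEY"

def generate_clip(prompt: str, out_format: str = "fbx") -> bytes:
    """Request an animation for a text prompt and return the exported file."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    job = requests.post(
        f"{API_BASE}/jobs",
        json={"prompt": prompt, "format": out_format},  # assumed parameters
        headers=headers,
        timeout=30,
    )
    job.raise_for_status()
    job_id = job.json()["id"]  # assumed response field
    result = requests.get(f"{API_BASE}/jobs/{job_id}/result",
                          headers=headers, timeout=300)
    result.raise_for_status()
    return result.content

if __name__ == "__main__":
    for i, prompt in enumerate(["a zombie shuffles forward", "a guard waves"]):
        with open(f"clip_{i}.fbx", "wb") as f:
            f.write(generate_clip(prompt))
```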

Licence models and prices

SayMotion 2.0 licences are offered in several models, depending on the needs of the user. For individual users there is a monthly subscription fee, while teams and studios can choose an extended licence with additional functions and support options. Prices vary depending on the scope of the licence and start at around USD 15 per month. A trial version is also available to evaluate the new features in advance.

Conclusion

SayMotion 2.0 brings a host of new features and improvements that should be of particular interest to users in the animation and VFX industry. The extended file format support, cloud integration and new export options make the software a versatile tool in the modern digital production pipeline. Nevertheless, all new functions should be thoroughly tested before they are used in ongoing projects – and of course the animation is by no means suitable for hero characters yet. But for quick variants in a crowd, background characters or a bit of movement and bustle in the background? Why not.

SayMotion documentation
Detailed technical documentation on the software.

SayMotion 2.0 Release Notes
Official release notes from DeepMotion.

SayMotion official website
Manufacturer’s website with further information and documentation.

New functions in Godot 4.3 for CG Artists
Godot 4.3 brings new features that could also make it "interesting" for virtual production: volumetric fog, meshlet rendering and highly optimised animation tools.

With the new version 4.3 of the Godot Engine, the developers have integrated numerous improvements and new functions that are specifically tailored to the needs of CG artists. The focus is on optimising and expanding existing tools to make complex digital projects more efficient and flexible.

Volumetric fog: more realism in the visualisation

The introduction of “Volumetric Fog” in Godot 4.3 offers you an improved representation of fog and smoke in real time. The engine now supports the simulation of light scattering within fog volumes, which makes for more realistic and atmospheric environments. Especially for scenes that require a dynamic atmosphere, such as in film production or real-time visualisations, this is a welcome enhancement.

The new fog function is fully integrated into the pipeline and works efficiently with the existing lighting and shading engine. However, the complex calculation of the light interactions within the fog can require high computing power, depending on the scene and project. Users should therefore test the new functions thoroughly before integrating them into production projects.
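
To see where that computing power goes, consider the core of any volumetric fog technique: marching along the view ray and accumulating transmittance and in-scattered light per step. The Python sketch below is a deliberately simplified single-scattering model with constant density and one light – it illustrates the cost structure, not Godot's actual shader code.

```python
# Minimal single-scattering raymarch sketch (not Godot's implementation):
# integrates fog density and in-scattered light along a view ray. Cost scales
# with step count (and, in a real renderer, with the number of lights).
import math

def raymarch_fog(ray_len, steps, density, light_intensity,
                 phase=1.0 / (4 * math.pi)):   # isotropic phase function
    dt = ray_len / steps
    transmittance = 1.0   # fraction of background light still getting through
    inscattered = 0.0     # light scattered toward the camera by the fog
    for _ in range(steps):
        sigma_t = density                    # extinction at this sample
        step_t = math.exp(-sigma_t * dt)     # Beer-Lambert over one step
        # light arriving at the sample, scattered toward the eye
        inscattered += transmittance * light_intensity * phase * sigma_t * dt
        transmittance *= step_t
    return inscattered, transmittance

scatter, trans = raymarch_fog(ray_len=50.0, steps=64,
                              density=0.05, light_intensity=10.0)
print(f"in-scattered: {scatter:.3f}, background transmittance: {trans:.3f}")
```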

Meshlet rendering: Efficient rendering of complex geometries

Meshlet rendering is another important new feature in Godot 4.3. This technology makes it possible to divide large and complex models into smaller parts, so-called meshlets. This leads to more efficient processing and visualisation of geometries, as only the visible parts of the model need to be rendered. For VFX artists working with highly detailed scenes, this is a significant relief as it can considerably reduce render times. Especially in real-time applications, such as games or interactive visualisations, meshlet rendering can help to optimise performance on different hardware configurations. Here too, however, it is advisable to evaluate the technology in detail before using it in production.
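
The basic idea behind meshlets can be shown in a few lines. The following toy Python sketch – our own illustration, not Godot's implementation – greedily packs triangles into clusters with a bounded vertex count, which is what lets a renderer cull whole clusters at once instead of testing individual triangles.

```python
# Toy sketch of meshlet building (illustrative only, not Godot's code):
# greedily pack triangles into clusters with a bounded vertex count.
MAX_VERTS = 64  # typical meshlet vertex budget on current GPUs

def build_meshlets(triangles):
    """triangles: list of (i0, i1, i2) vertex-index tuples."""
    meshlets, current, verts = [], [], set()
    for tri in triangles:
        new_verts = verts | set(tri)
        if len(new_verts) > MAX_VERTS and current:
            # budget exceeded: close the current meshlet and start a new one
            meshlets.append({"triangles": current, "vertices": sorted(verts)})
            current, verts = [], set()
            new_verts = set(tri)
        current.append(tri)
        verts = new_verts
    if current:
        meshlets.append({"triangles": current, "vertices": sorted(verts)})
    return meshlets

strip = [(i, i + 1, i + 2) for i in range(200)]  # a long triangle strip
print(len(build_meshlets(strip)), "meshlets")
```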

Improved animation tools: Precision and flexibility

The animation toolset has also been revised with Godot 4.3. The new functions offer you as a user more precision when creating and customising animations. The “Animation Retargeting” feature allows you to easily transfer animations to different characters, which greatly simplifies the workflow for projects with multiple characters.

The integration of Animation Trees has also been improved, allowing you to manage and customise complex animations more efficiently. These new features are particularly useful for artists working on projects with many animated characters or objects. Improved control over timing and transitions allows for more realistic motion sequences, which is important in VFX and game production.
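
Conceptually, name-based retargeting boils down to mapping per-bone animation data from one skeleton onto another. The hedged Python sketch below shows that core idea with an assumed bone-name map; Godot's actual retargeting additionally corrects for rest-pose and bone-roll differences, which we omit here for brevity.

```python
# Hedged sketch of name-based animation retargeting (not Godot's internals):
# copy per-bone rotations from a source skeleton to a target skeleton via a
# bone-name map, leaving unmapped target bones in their rest pose.
BONE_MAP = {  # source bone -> target bone (assumed naming schemes)
    "mixamorig:Hips": "Hips",
    "mixamorig:Spine": "Spine",
    "mixamorig:LeftArm": "UpperArm.L",
}

def retarget(source_pose, bone_map=BONE_MAP):
    """source_pose: {bone_name: rotation quaternion (x, y, z, w)}."""
    target_pose = {}
    for src_bone, rotation in source_pose.items():
        dst_bone = bone_map.get(src_bone)
        if dst_bone is not None:
            # A production retargeter would also correct for differing rest
            # poses and bone roll; we copy rotations verbatim for brevity.
            target_pose[dst_bone] = rotation
    return target_pose

pose = {"mixamorig:Hips": (0.0, 0.0, 0.0, 1.0),
        "mixamorig:Spine": (0.1, 0.0, 0.0, 0.995)}
print(retarget(pose))
```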

Extended file format support: More flexibility at work

Godot 4.3 extends support for different file formats, giving you greater flexibility when importing and exporting your projects. The engine now supports a wider range of 3D file formats, including FBX and OBJ, and improves integration with external tools such as Blender. This expanded format support facilitates the exchange of assets between different software solutions and enables smoother collaboration in mixed production environments. Although the new formats have been extensively tested, artists should ensure that all imports are correct and lossless before using them in current projects.

Further optimisations: Improved performance and stability

In addition to the major features, Godot 4.3 offers a number of smaller optimisations that have a positive effect on the general performance and stability of the engine. These include improvements to the rendering backend, which increase the general frame rate and responsiveness of the engine.

The developers have also worked on the user interface to further improve the workflow. This particularly affects the scripting and debugging areas, where new tools and options have been added. These changes are aimed at providing you as a user with an even more pleasant and productive working environment.

Licence model and availability

Godot 4.3, like the previous versions, remains available under the MIT licence, which means that the engine can be used free of charge. This openness makes Godot particularly attractive for indie developers and small studios who want to benefit from the professional features without having to pay high licence costs.

For more information and detailed instructions on how to use the new features, you can consult the official documentation. Information on downloading and installing the latest version can be found directly on the manufacturer’s website.

CST-USD Viewer in beta
The Higher Technical Commission for Sound and Images (CST), a French association of film and audiovisual technicians, has presented the new 3D-Info tool. This tool aims to simplify the visualisation of 3D scenes and make it more efficient. The focus is on the USD format.

3D-Info is a free, open-source and cross-platform tool that enables the visualisation of 3D scenes in USD format. It is aimed in particular at department heads in filming (camera, set decoration, staging etc.) and, in the animation sector, at roles such as production management and direction. Other users who want to view USD 3D assets also benefit from the user-friendly interface, which does not require any additional dependencies.

With 3D-Info, users can adjust and export cameras and images, take measurements on set, share notes or play animations. Props, animated characters or sets can also be opened. Technically, the tool allows you to view the hierarchy of “stages” and “layers”, which are specific concepts of the USD format.
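
For readers unfamiliar with these USD concepts, the snippet below uses Pixar's official Python API (pxr) to print exactly the two hierarchies 3D-Info visualises: the composed prim hierarchy of a stage, and the sublayer tree of its root layer. The file name is a placeholder.

```python
# What "stages" and "layers" mean in USD, shown with Pixar's Python API (pxr).
# 3D-Info visualises this hierarchy; this snippet just prints it.
from pxr import Usd, Sdf

stage = Usd.Stage.Open("shot_010.usda")  # placeholder file name

# A stage is the composed scene: traverse its prim hierarchy.
for prim in stage.Traverse():
    print(f"{prim.GetPath()}  <{prim.GetTypeName()}>")

# Layers are the files the stage is composed from (sublayers, references...).
def print_sublayers(layer, indent=0):
    print("  " * indent + layer.identifier)
    for path in layer.subLayerPaths:
        sub = Sdf.Layer.FindOrOpen(layer.ComputeAbsolutePath(path))
        if sub:
            print_sublayers(sub, indent + 1)

print_sublayers(stage.GetRootLayer())
```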

Current limitations

The first beta version of 3D-Info currently offers limited functions. The focus was on ease of navigation and consistency of visualisation. The visual rendering in 3D-Info is identical to that in usdview and Nvidia’s Omniverse Composer, as the same Storm rendering engine from Pixar is used. The development process required considerable effort to implement the USD framework correctly. The developers prioritised the features requested by partner studios Loops Creative Studio and Plateau Virtuel, which already work with the USD format.

CST’s goal is to support the film industry. With the increasing use of virtual production, the need to validate 3D assets in pre-production is growing. 3D-Info is intended to remedy this situation and facilitate the use of 3D for everyone involved. The idea for 3D-Info was born when Frédéric Fermon, one of the main developers, was looking for a tool that could visualise 3D assets in an understandable way without having to open a DCC. After discussions with artists, TDs and virtual set designers, it became clear that an easily accessible viewer with annotation capabilities was needed.

Development challenges

The USD format documentation is extensive but scattered in places. The file format lacks a formal description, which makes development difficult. Different parts of the Pixar library were created by different people at different times, resulting in slightly different architectures. The choice fell on the Qt framework for the graphical interface, which facilitates the implementation of certain widgets. However, this also brings challenges, for example in the handling of keyboard events. Another issue is the evolution of the library versions. Currently the version used is frozen, but in the future it will be necessary to adapt to new versions.

Public roadmap and support

The developers are calling for the beta version to be tested and feedback to be provided. 3D-Info should at least be useful as a viewer for models or the hierarchy of USD stages and layers. Here you can find the Git, information, images and more: trello.com3d-info

Here comes the new CityEngine
CityEngine 2024.0 offers new tools such as the Visual CGA Editor, ArcGIS Urban Integration and the ability to share web scenes.

The latest version of CityEngine, 2024.0, is now available and offers a host of new features and improvements for VFX artists. This version brings significant updates that simplify workflows and improve integration with other tools and platforms.

Visual CGA Editor

The Visual CGA Editor has left the beta phase and is now available to all users. This tool significantly simplifies workflows and allows users – especially those without programming skills – to edit models more easily. The integrated content library also offers new components for terrain division and mass modelling.

Integration with ArcGIS Urban

Integration with ArcGIS Urban has been improved so that CityEngine users can now edit the elevation or terrain of a plan directly in ArcGIS Urban. When importing an ArcGIS Urban plan into CityEngine, these layers are automatically imported as separate terrain layers. The intuitive terrain editing makes it possible to perfectly adapt the terrain to the scenario and upload the changes back to ArcGIS Urban.

Share web scenes

With the new “Share as Web Scenes” feature, users can export their 3D scenes directly from CityEngine to ArcGIS Online and view them in the Scene Viewer. This feature makes sharing, exploring and organising multiple scenes much easier.

Material collection and browser

CityEngine 2024.0 offers an extensive collection of high-quality materials that can be used to texture buildings and open spaces. Users can browse and apply these materials via a dedicated browser.

Supported file formats

CityEngine 2024.0 supports a variety of file formats, including .obj, .dae, .fbx, .gdb and .shp, allowing for flexible and versatile use. It is recommended to carefully review all new features before using them in current projects.
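
For batch work with these formats, CityEngine also ships with a Python scripting interface. The sketch below follows our reading of Esri's scripting examples for exporting all shapes in a scene to FBX; class and method names should be verified against the documentation of your CityEngine version.

```python
# Hedged sketch of a CityEngine Python export script (run inside CityEngine's
# scripting console; verify API names against your version's documentation).
from scripting import *

ce = CE()  # handle to the running CityEngine instance

# Collect all shapes in the current scene and export them as FBX models.
shapes = ce.getObjectsFrom(ce.scene, ce.isShape)
settings = FBXExportModelSettings()
settings.setOutputPath(ce.toFSPath("models"))  # project-relative output folder
ce.export(shapes, settings)
```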

CityEngine documentation

ArcGIS Blog about the new features

VR beyond the hype
Although the hype surrounding the Apple Vision Pro is still going strong, VR has yet to really gain a foothold at home. Where it is really picking up speed, however, is in the area of "rides" and free roaming. Amusement parks are where the technology can be fully utilised, regardless of home PC compatibility. So we asked ourselves: what does that actually look like?

It turns out that there are studios and companies that specialise in exactly this kind of thing. The MACK Group is unique on this planet: it is not only a manufacturer of rollercoasters [MACK Rides], but also operates Europa-Park (europapark.de) and is home to several leading companies in the fields of VR, specialised hardware and creative content production. We are talking about the company VR Coaster (www.vrcoaster.com) based in Kaiserslautern, the sub-unit MACK Interactive and the group company MACK One (mackone.eu), which not only develop prototypes and VR experiences for all rides, but also have an Animago judge in their ranks: Alexander Bouquet. He is Managing Director of VR Coaster and Executive Director within MACK One.

For this interview, he is joined by Robin Herrmann, COO and Head of Production at VR Coaster, and Marcus Ernst, Senior Product Manager at MACK Interactive.

DP: Before we talk about the “how”: What exactly makes your VR rides stand out?

Alexander Bouquet: VR Coaster GmbH & Co. KG has been adding virtual reality to real rides such as rollercoasters and freefall towers for 10 years now. This combination allows you to feel real “airtime” during the VR experience, i.e. weightlessness, G-forces and generally the classic “tingling in the stomach” of a real rollercoaster ride. The first experiments took place in 2014, when Michael Mack let Thomas Wagner and his virtual design students onto the Blue Fire roller coaster with VR goggles and a laptop for the first time.

At the time, it was quickly realised that this had created a completely new type of ride that combined a real ride experience with media-based storytelling for the first time. For example, you can experience a flight on a dragon, a breakneck chase on a jet ski or a space battle more immersively than ever before. Shortly afterwards, the two founded VR Coaster GmbH & Co KG together with Mack Rides, the Mack Group’s rollercoaster manufacturer. The VR Coaster portfolio now also includes underwater VR experiences, and the award-winning free-roaming format “YULLBE” was also developed together with MACK One, where guests can walk through fantastic VR worlds on their own.

DP: And where can you find them?

Alexander Bouquet: This unique experience is not only available at Europa-Park on two rides, but also at many theme parks around the globe. With over 80 equipped rides worldwide, VR Coaster is the industry leader in this field. We also offer underwater VR in our own water park Rulantica and, of course, the YULLBE experience. The latter can be found not only in theme parks and family entertainment centres, but recently also on many cruise ships.

DP: How is a ride like this actually organised?

Robin Herrmann: The actual VR application runs entirely on the VR goggles themselves. It would simply be impossible to install laptops or computers for each seat in a rollercoaster train – if only because of vibrations, difficult power supply, cables to the goggles or the inconvenience of cleaning the goggles. That is why we have developed a system that runs entirely on mobile hardware and still enables very high visual quality. Only objects in the foreground are shown in real-time graphics (e.g. a cockpit or your own body, etc.), and everything further away is shown in a stereoscopically pre-calculated, high-resolution panoramic image sequence, which also runs at an extremely high frame rate.

However, in order to experience VR in a moving car, its movement must also be precisely synchronised. For this purpose, a so-called “black box” is mounted on the rollercoaster train, which uses a wheel sensor to track the position of the train on the track and regularly recalibrates itself after each lap. This position is then sent via Bluetooth to the goggles, which can then display a precisely synchronised VR ride. For larger installations, we also use camera tracking in the station to automatically determine the seating position and head orientation of the guests.

Marcus Ernst: In the case of Coastiality, i.e. VR on rollercoasters, it is advisable to install a conveyor belt system on larger rides to transport the used goggles from the exit side to the entrance side for re-issue. There is also the “Roam & Ride” concept, where guests already wear the VR headset in the queue and board the rollercoaster train with their goggles on so that the VR ride starts seamlessly. We are currently building a new award-winning experience that will also introduce a new generation of VR goggles and tracking.

DP: What is the “experience”?

Alexander Bouquet: It varies a lot. We have something for every age and thrill level, from the wild chase on jet skis with agent Amber Blake, to riding in a lorry through mines with our own mascots Ed & Edda, to racing with cartoonish dinosaurs and Madame Freudenreich. Incidentally, my 7-year-old daughter looked at me with proud eyes after her first ride, on which she rode through a corkscrew that doesn’t even exist in reality – that’s immersion. There is also the huge advantage that we design the experiences in-house and realise them with the help of VR Coaster, MACK Interactive or MACK Animation.

Marcus Ernst: Let’s continue to take the procedure for VR on rollercoasters as an example. Of course, it’s slightly different for free-roaming VR and swim VR. Our aim is always to “upgrade” an existing rollercoaster ride, to make it even more immersive – in other words, to be able to operate the existing ride for longer. This is economical and sustainable. We refer to equipping existing installations that are to be made more profitable as a retrofit. With different VR content, the same rollercoaster looks completely different every time and therefore invites you to take another ride. The aim is always to have little or no impact on the maximum hourly capacity of a ride – the most important criterion for theme parks.

Robin Herrmann: In addition to upgrading an existing ride, another USP is to offer an experience that is only possible on a roller coaster or a freefall tower. Large rides with longer periods of weightlessness in particular enable a VR experience that would not be possible on a simulator chair. Technically, our system allows an unlimited number of VR headsets to be synchronised with the ride at the same time, but most installations allow guests to choose whether they want to ride with or without VR. Only some of our customers, such as Universal Studios Japan or Phantasialand, operate their VR rollercoasters exclusively with VR.

“Attack on Titan XR Ride” for Universal Studios Japan

DP: And what happens if one of the goggles stops working?

Robin Herrmann: Goggle failures were particularly common in the early days, when we were still forced to use the mobile phone-based Gear VR. The mobile phones tended to overheat quickly in VR mode and the software was also difficult to control – after all, we were dealing with consumer hardware. However, with integrated glasses optimised for location-based VR, we now have an extremely robust hardware and software basis that has proven to be very reliable and resilient in daily operation and also gives us complete control over the entire software and operating system. The battery level and the status of the VR application are also constantly monitored so that failures can be avoided at the station.

DP: How do you synchronise between visitors?

Marcus Ernst: That depends on the type of experience: “Roam and Ride” with a free roaming component and pure “VR Ride” experiences. In Roam and Ride, the players have to be able to see each other at all times, as do the operators, so that they don’t run into each other (in the free roaming part) and can also board the train safely. This takes place in the latest generation of goggles by means of inside-out tracking, whereas previously complex camera systems were required to track people.

DP: You would think that relative tracking would be a horror – coloured light, individual movement and someone always has an ancient mobile phone transmitting in exactly the wrong band?

Robin Herrmann: Fortunately, synchronisation on the rollercoaster, on freefall towers or other rides doesn’t work optically, but with sensors on the vehicle and Bluetooth broadcast packets that communicate the vehicle’s position to the goggles. In fact, at the very beginning in 2014, we had real problems with the newly emerging smartwatches, which also sent massive amounts of broadcast packets for the first time. However, we were quickly able to get this under control.

We only use camera tracking in the station for particularly large systems in order to determine the seating position and head orientation of guests using infrared markers. But this also works surprisingly well and robustly, even outdoors. Even with inside-out tracking, such as in our YULLBE GO attractions, colourful light and a lot of movement are no longer a problem, as the current generation of glasses can cope surprisingly well with changing or unsettled lighting situations.

DP: And when we talk about timing: How precise does it have to be when you have VR goggles on the rollercoaster?

Robin Herrmann: The synchronisation does indeed have to be pretty precise, as otherwise curves would very quickly appear in the wrong place and people would be disorientated. It’s not about latency, but that the basic direction of movement is correct. A rollercoaster train travels at quite different speeds depending on the load, temperature and weather, and it also makes a big difference whether you are sitting at the very front or the very back. In order to synchronise this precisely, the train is equipped with a so-called “black box”, which uses a wheel sensor to determine where the vehicle is currently located on the track. This information is constantly sent to the goggles, which can then generate a precisely synchronised VR ride.
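
A stripped-down model of that synchronisation loop makes the idea concrete. The Python sketch below is our own illustration, not VR Coaster's code: broadcast packets update the last known track position, and between packets the headset dead-reckons from the last measured speed before looking up the camera pose for that distance along the track.

```python
# Illustrative sketch of the synchronisation idea (not VR Coaster's code):
# the on-train "black box" broadcasts the train's track position; the headset
# extrapolates between packets from the last measured speed.
import time

class RideSync:
    def __init__(self):
        self.last_pos = 0.0      # metres along the track
        self.last_speed = 0.0    # metres per second
        self.last_stamp = None   # time of the last received packet

    def on_packet(self, track_pos: float):
        """Called for every Bluetooth broadcast from the black box."""
        now = time.monotonic()
        if self.last_stamp is not None:
            dt = now - self.last_stamp
            if dt > 0:
                self.last_speed = (track_pos - self.last_pos) / dt
        self.last_pos, self.last_stamp = track_pos, now

    def current_pos(self) -> float:
        """Position estimate for the current frame (dead reckoning)."""
        if self.last_stamp is None:
            return self.last_pos
        return self.last_pos + self.last_speed * (time.monotonic() - self.last_stamp)

sync = RideSync()
sync.on_packet(12.5)           # packets arrive a few times per second
time.sleep(0.05)
sync.on_packet(13.0)
print(f"estimated track position: {sync.current_pos():.2f} m")
```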

This is still the biggest problem with VR in home applications: When you move through virtual environments with quick turns, seasickness immediately sets in. However, as soon as the movements and rotations also take place in the real world and the VR journey is synchronised with them, any feeling of dizziness disappears. In fact, by augmenting real rollercoasters and rides with VR, we have created the only setup that enables dynamic, fast flights through VR worlds without motion sickness.

Alexander Bouquet: This is where our patented technology and precise content creation come into play. The interplay between hardware and content is our guarantee for success. I keep seeing new rides or simulators with VR headsets at trade fairs and this is where the wheat is separated from the chaff – 80 per cent of simulators are asynchronous, which causes motion sickness.

DP: When it comes to VR goggles: Which ones are good?

Robin Herrmann: On rollercoasters or rides in general, where 3DOF tracking is sufficient, the G2 or G3 devices from Pico are the best choice. The goggles are very robust and we were also able to modify them perfectly for location-based entertainment operations (e.g. with sun protection against display damage and a hard-wearing cover with a head strap from our own production). We also use these goggles in underwater applications, for which we have developed a completely sealed housing. In the free roaming area, we currently mainly use the HTC Vive Focus 3, as these goggles offer robust inside-out tracking and can recognise learned spaces very quickly and orientate themselves immediately. You can also swap the battery here, which makes operation more efficient.

DP: And how much effort does it take for your developers to switch between the different devices and SDKs?

Alexander Bouquet: As we work with Android-based VR headsets in practically all set-ups, it’s never really that difficult. Both Unity and Unreal make the work relatively easy. Only in the more complex YULLBE PRO installation did we initially use Windows-based backpack PCs, but even here we were able to successfully switch to mobile glasses from HTC. What’s more, a lot of real-time graphics only occur in the free roaming setups, while on the rollercoaster or underwater, mainly pre-calculated image sequences are streamed, which is independent of the SDK or VR platform.

DP: Let’s talk about the content: How were the assets created?

Robin Herrmann: Basically, it’s the same approach as with CG productions in the animation sector. Assets are usually created “from scratch” by our team, or we receive existing 3D models from our clients, for example if it’s an IP for which films or computer games already exist. With the DC Comics IPs in particular, there were already many existing models that we could build on. There are different workflows depending on where the assets are used: Objects that appear as real-time graphics in the foreground need to be more optimised and work with fewer polygons. What is used in the pre-rendered layer, on the other hand, can of course be as complex as you like, even if the panorama rendering is realised in Unity or Unreal.

DP: People say that Unity is much better for this? Why Unreal?

Robin Herrmann: We use Unity almost exclusively for the actual VR apps. The pre-rendered part, which is streamed as a high-resolution image sequence during the experience, is usually also rendered with Unity, but we are using Unreal more and more often for this, as you can achieve great results even faster here. In rare cases, we still render in the classic way, for example with V-Ray or Arnold – but due to the extremely high resolution and frame rate, this takes weeks of rendering time, whereas the real-time engines can render even our largest projects in a single day.

DP: What are the restrictions that you have imposed on yourselves to ensure that it always works?

Marcus Ernst: The pre-rendered stereoscopic panoramas have a resolution of 6K×6K and run at an average frame rate of 60 fps. However, the current VR glasses could not display a higher resolution anyway. With real-time geometry in the foreground, you should always stay below 300,000 vertices with current mobile VR hardware and not use overly complex shaders so that the frame rate remains high.
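
Those numbers also explain why the panoramas are streamed as compressed video rather than raw frames. A quick back-of-envelope calculation, under stated assumptions ("6K" taken as 6,144 pixels, 8-bit RGB, stereo pair):

```python
# Back-of-envelope, under stated assumptions (6K = 6144 px, 8-bit RGB,
# stereo pair, 60 fps): the raw pixel rate of the pre-rendered panorama
# stream, which shows why it must be stored as compressed video.
width = height = 6144          # "6K x 6K" panorama per eye (assumed 6144 px)
bytes_per_pixel = 3            # 8-bit RGB, no alpha (assumption)
eyes, fps = 2, 60

raw_rate = width * height * bytes_per_pixel * eyes * fps
print(f"uncompressed: {raw_rate / 1e9:.1f} GB/s")   # ~13.6 GB/s
```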

Alexander Bouquet: For me, the story is the essential part – emotions are our currency and our restriction here is as follows: If it doesn’t kick, it’s out. Our technology is “military grade”, we produce patented parts to refine the standard headsets and everything is customised to the respective track with its unique layout. These are standards that we orientate ourselves by.

DP: When we look at a development like this: How many iterations were necessary for the different rides until it was “Technically Clean” for you?

Robin Herrmann: As far as the visual quality is concerned, a lot can be tested and checked directly on the computer or in the office, so we don’t have to test so many iterations on the actual ride. However, it is important that certain effects are frame-accurate, such as a bump in a curve that translates into a virtual collision with a monster – and this coordination sometimes requires one or two test rides, because you simply can’t feel it on the office chair. In the case of our “Diving Theatre”, our underwater ride with counter-current system and effect jets, it’s the other way round – here the fine-tuning takes place more in the control of the system, which has to be precisely synchronised with the VR dramaturgy.

Alexander Bouquet: Of course, it took many rounds to get from POC to an industry-standard product suitable for mass production. However, we have created a pipeline that enables us to measure all rollercoasters in the world and supply them with hardware and software relatively quickly. This is where our in-house AMS – Attraction Management System – helps us, which we also use to ping every pair of glasses in the world and generate statistics that are relevant for our controlling and accounting.

DP: There are few activities that are more interactive than bumper cars – do you have anything on offer?

Robin Herrmann: In principle, we do the same thing here as with a free roaming installation: we use camera-based tracking to follow the movement of passengers and vehicles. This makes it possible to replace the real driving area in the VR world with a much larger area with remotely controlled vehicles. This opens up very exciting possibilities: On the one hand, we greatly enlarge the travelling area and the vehicles, which increases the perceived speed enormously. On the other hand, the vehicles also appear huge and no longer as tiny as in the real world. One of the highlights of the experience is when the driving surface is folded up halfway and you can virtually drive up the wall – it works really well and is totally amazing!

DP: Are there interactions beyond the users themselves? For example, can I drive over the robot spider?

Robin Herrmann: We stage a big boss opponent like that, which of course doesn’t exist in the real world, in such a way that it can’t be touched, so it pulls its leg away just before the collision. But of course you can touch all the other vehicles and feel it quite clearly – just like in a real bumper car.

DP: Let’s talk about what happens inside the goggles: What works and why?

Robin Herrmann: A key feature is always the enlargement of the virtual track layout to increase the perceived speed and also to simulate much greater heights or steeper drops. In addition, you can virtually “bend up” curves to create more space, similar to bending a paper clip apart: A 90° right turn becomes a 45° right turn and you have a little more room to shape the virtual world. It even works so well that at some points we let people travel backwards briefly in VR, even though the train is travelling forwards as normal – that always amazes the guests the most.
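
The "bending" trick can be modelled as a simple remapping of the track's heading changes. The toy Python sketch below is our own illustration of the idea, not their tooling: scaling every heading delta by 0.5 turns a 90° turn into a 45° one, freeing up space in the virtual world.

```python
# Toy illustration of "bending up" a curve (our own sketch, not their tooling):
# scale the track's heading change so a 90-degree real turn becomes a
# 45-degree virtual one.
def bend_track(headings_deg, factor=0.5):
    """headings_deg: cumulative heading along the track; factor<1 opens curves."""
    bent, prev = [headings_deg[0]], headings_deg[0]
    for h in headings_deg[1:]:
        delta = h - prev                 # real heading change this segment
        bent.append(bent[-1] + delta * factor)
        prev = h
    return bent

real = [0, 30, 60, 90]                   # a 90-degree right turn
print(bend_track(real))                  # [0, 15, 30, 45] -> a 45-degree turn
```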

“The Great Lego Race”, Legoland Florida
Thanks to special sun protection, the VR coaster goggles can also be used outside.

With freefall towers, we also swivel our gaze vertically downwards into the depths as soon as the fall begins. As the guests are weightless from this moment anyway and no “up” or “down” can be felt, this trick also works perfectly. Some real freefall towers create this effect with very elaborate swivelling seats in the real world – we offer this virtually free of charge with a gentle virtual camera pan.

Free-roaming in “Yullbe”

Marcus Ernst: Anything that feels natural is possible in free roaming. Free roaming is the logical development of 360° videos: It feels natural to be able to look around freely, and it feels just as natural to then be able to move freely around the room. It’s always an experience that changes everything when you try it for the first time. It really feels like the holodeck in Star Trek, because you can move completely freely in a virtual world, which you can’t do at home.

What you can do very well is play with the actual dimensions of the room. Example: step-length manipulation. Here, 1 metre walked in real life sometimes becomes only 90 cm in VR, or sometimes 1.20 m. This allows me to make rooms appear larger or smaller than they actually are, as it remains plausible for our brain as long as I don’t overdo it. We have defined a factor of around 1.5 as the maximum; above that, everything feels like the conveyor belts at the airport (every movement becomes extremely fast). But we can easily make a real 80 square metre room feel like a 200 square metre room, and not a single guest has ever come out of the 80 square metre attraction and said “Oh, that room was small”.
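
Step-length manipulation is equally simple at its core: a gain factor on the tracked horizontal head movement. The minimal Python sketch below is our own illustration of that idea, with the maximum of around 1.5 quoted in the interview used as the cap.

```python
# Minimal sketch of step-length manipulation (a redirected-walking gain):
# scale the tracked head translation so 1 m walked in reality becomes
# e.g. 0.9 m or 1.2 m in VR; the interview names ~1.5 as the usable maximum.
MAX_GAIN = 1.5

def apply_gain(real_delta, gain):
    """real_delta: (dx, dy, dz) head movement from tracking, in metres."""
    g = min(gain, MAX_GAIN)  # beyond ~1.5 movement starts to feel unnatural
    dx, dy, dz = real_delta
    return (dx * g, dy, dz * g)  # scale only horizontal motion, keep height

step = (1.0, 0.0, 0.0)               # guest walks 1 m along x
print(apply_gain(step, 1.2))         # appears as 1.2 m in the virtual room
```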

DP: What doesn’t work at all?

Marcus Ernst: True photorealism is always an issue, at least with free-roaming real-time graphics – the glasses simply can’t do that yet. However, good storytelling is much more important. Because of the uncanny valley, we can only fail at recreating reality. Our brain always and immediately recognises whether something is real or fake. This is also the reason why, even today, CG figures still appear somewhat cold and rigid. You have to incorporate imperfections at every point. I’m much more likely to form an emotional attachment to an exaggerated character than to one who tries to look realistic but inevitably always fails.

For comparison: the almost realistic animatronics of, for example, Abraham Lincoln at Disney look extremely creepy, whereas the totally exaggerated characters from the films look cute. Horror and jump scares work extremely well, as VR and MR are immersive media. You can’t escape the impressions unless you close your eyes. For an appealing horror experience, however, the scene has to seem somewhat realistic so that I can relate to it and recognise the danger as such. Of course, suspense horror can and should be used here too, but a monster has to look really intimidating when it jumps towards you, just like in computer games. Carelessly animated assets that are immediately recognisable as such do not create a feeling of fear – rather humour.

DP: And how much “reality” is needed?

Alexander Bouquet: We always ask the question when creating the content. We often include a path or a trail as a directional indicator for our guests. It’s more pleasant – especially with agile tracks – when you can see where the journey is about to take you. Then we create a real flow for the passenger.

DP: There are “IPs” in the rides – how did you choose what appears in the ride?

Alexander Bouquet: Basically, the IP and the ride have to fit together and be harmonised. We naturally have rides with our internal IPs such as Ed & Edda or Snorri in our portfolio, but strong external IPs are also represented. The new Phantom of the Opera experience by Andrew Lloyd Webber is a strong addition to the park and fits in perfectly with the layout of Eurosat.

Robin Herrmann: In fact, our customers, such as Universal or Six Flags, often already owned the rights to well-known IPs and asked us to create the relevant content for them. We then sat at the table with three parties who all had to be satisfied: Warner Bros, DC Comics and Six Flags, all with different priorities. While DC was mainly concerned with visual style and graphic quality, Six Flags was mainly concerned with being ready in time for the start of the season. The content productions for Universal Studios were always particularly complex projects, for example with the “Attack on Titan” IP, which the VR Coaster team realised completely in-house.

DP: What technology are you currently testing?

Alexander Bouquet: We have several streams running at the moment, in which we are once again merging new engineering skills from MACK Rides with a new generation of goggles. The XR Maze, including a shooting device and recoil module, is also on the roadmap, right through to – psssst – something we use in the park for queue entertainment. It’s going to be a very exciting year. We are also putting the Apple Vision Pro through its paces and are of course looking to see what we can tease out of it.

DP: Who comes up with the idea of doing something like this?

Robin Herrmann: As part of his professorship at Kaiserslautern University of Applied Sciences in 2014, Thomas Wagner was looking for a way to combine VR simulations with real movement and approached rollercoaster manufacturer Mack Rides. Fortunately, Michael Mack, Managing Partner of Europa-Park, immediately allowed him to carry out his first test rides with VR on the Blue Fire and Pegasus roller coasters. The rest is history: This was followed by a successful patent application and the joint founding of VR Coaster GmbH & Co KG.

Alexander Bouquet: I am always fascinated by the people I get to work with every day. On the one hand, the crazy Professor Thomas Wagner, who brought VR to rides, the innovation-driven and extremely creative visionary Michael Mack and the person who brings all the rides here – Christian von Elverfeldt as Managing Director of MACK Rides. In addition, there are brilliant engineers, inventors, creatives, master planners, technology geeks and other crazy people with whom it is great fun to develop the thrills of tomorrow. We call ourselves the Emotioneers of Tomorrow.

DP: What kind of background do your developers have?

Robin Herrmann: In the creative team, there are classic artists in the fields of 3D modelling, animation, concept art and game development. We also have pure software developers and a hardware department that deals with the development of electronics and engineering, such as the modification of VR goggles.

Alexander Bouquet: In terms of developers, we differentiate between digital experience artists and hardware developers or engineers who develop all the haptic parts. This ranges from specially designed weapons including tracking LEDs to a helicopter vibrating plate with 15 tonnes and zero-G forces.

“Alpha Mods P.D.” is a fast-paced VR experience and an IP of MACK One

DP: And how often does each creator have to ride the ride before the audience arrives?

Robin Herrmann: Fortunately, not too often now, as we can record the ride and play it back on the computer during the development process. However, a few dozen rides are usually necessary for the final fine-tuning before the Mack family is the first to test the finished ride.

DP: If we look back over the last few years, what were the dead ends?

Marcus Ernst: If we had known four years ago that backpack PCs would be phased out, we would have developed for mobile right away or focussed on streaming. The decision in favour of a full-body tracking system was a very conscious one, but it is constantly being re-evaluated because it is extremely expensive and time-consuming. What we have learnt is how incredibly important it is to know your B2B/B2C target group and where they spend their time – and that before you start developing anything. You need to know exactly who your target group is, where they are and where they are not. Then you have to take your product exactly where they are, right into the middle of it, and relieve them of all their technology worries – only very few people are interested in the technology itself.

DP: If we look to the near future, what will be the trends?

Marcus Ernst: Mixed reality in particular will be extremely exciting, as a location-based experience where guests once again don’t have to worry about the technology themselves. We offer what they would like to have but can’t or don’t want to afford.

DP: Dreams of the future: What will the VR Coaster Showreel look like in 2043?

Alexander Bouquet: 2043 – everyone builds their own individualised experience at home or on the way to the theme park – fully customisable content. I ride and feel it with all my senses, including full body tracking on the rollercoaster. Content is customised according to my taste or my body stats. Do I need cheerful music with flowers and forests or heavy metal in the Underworld – the glasses already know, because everything communicates and is networked.
