Blackmagic Design: DaVinci Resolve and Fusion turn 16

For young people, 16 is considered a rather difficult age, but the roots of Resolve and Fusion reach well back into the last century, so a certain maturity can be expected. Does that also apply to a public beta?

At least the new version no longer focusses on integrating further acquisitions, but on consolidating them and on competitiveness in editing and VFX.

Caught up

Among the most important points of criticism from users of Adobe’s video programmes who were willing to switch were the fixed frame rate per project, the lack of adjustment layers and the limited keyframing for filters and effects. In version 16, some of these long-cherished wishes have finally come true. Although a timeline is still fixed to a frame rate as soon as it contains video material, everything else can now be changed retrospectively. Above all, you can mix timelines with different frame rates within a project and then combine them in a higher-level timeline. The individual timelines can also differ in resolution, scaling and the settings for external monitors. A warning is displayed on output if a higher resolution has been set there than in the timelines. The frame rates are converted using the method set for the project, from frame repetition to optical flow.

Potential switchers have probably been waiting for adjustment layers.

Adjustment layers can now be found as Adjustment Clips among the in-house effects; they let you apply the Inspector functions, filters and effects on the Edit page and control them with keyframes over any range of the underlying clips. It is now also possible to define and play loops across several clips. Keyframes for effects and filters – including OFX from third-party manufacturers – are now displayed in the keyframe editor on the Colour page. Finally, there are also Bezier curves for position animations, but they do not yet work perfectly in the beta; occasionally there were even crashes.

And finally, the function usually referred to as smart rendering has also been introduced. It can be found under Advanced Settings in the output module as “Bypass re-encode when possible”. If this is activated, the programme checks whether the source codec matches the output format in every respect and whether no changes have been made to the clip. In that case, no recompression takes place. This is certainly the least important of the new features, because a programme with such good colour grading will rarely leave a clip completely untouched. Compression and decompression of H.264 and H.265 in the most common formats are now also hardware-accelerated in the Studio version on PCs with AMD graphics cards. The speed for these formats has been improved on the Mac.
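
As a rough illustration of the decision described above, here is a minimal sketch in C; the structures and field names are hypothetical and not taken from Resolve, they merely show the kind of check behind “Bypass re-encode when possible”.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch: a clip can be passed through without re-encoding
   only if its codec and parameters match the delivery settings exactly
   and nothing has been changed (no grade, resize, retime, effect ...). */
typedef struct {
    const char *codec;   /* e.g. "H.264" */
    int width, height;
    double frame_rate;
    bool untouched;      /* true if no changes were made to the clip */
} Clip;

typedef struct {
    const char *codec;
    int width, height;
    double frame_rate;
} DeliverySettings;

static bool can_bypass_reencode(const Clip *c, const DeliverySettings *out)
{
    return c->untouched
        && strcmp(c->codec, out->codec) == 0
        && c->width == out->width
        && c->height == out->height
        && c->frame_rate == out->frame_rate;
}
```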

Finally, there are also function curves for position – still somewhat unfinished in the beta.

Quick cut

A completely new feature is the Cut page, which dispenses with Edit’s wealth of functions so that simple editing work can be done quickly under time pressure, e.g. for news, urgent commercials or shows that are not broadcast live. Automations recognise what the user was probably trying to do. A typical example: who has never inserted a clip in Edit while snapping was switched off and the cursor was not quite in the right position? Later, you have to laboriously hunt down the few leftover frames that flash by. In such cases, Smart Insert on the Cut page simply assumes the nearest edit point as the intended insertion target. Other automatic functions assume that Append is meant to attach a clip to the end of the timeline, or that an adjustment of the duration is usually desired with Overwrite. Match Overwrite places a clip in parallel on the track above, but this requires matching timecode.

There is only one viewer left, which switches automatically between sources and timelines. In a third mode, it shows all the raw material as a virtual videotape that you can fast-forward through for viewing instead of having to select each clip in the bin individually. The order corresponds to the sorting in the bin, and a clip activated in the viewer is also selected in the bin. If you simply play this virtual tape, it automatically runs more slowly through short shots and faster through longer ones. This may seem trivial at first glance, but it makes it much easier to get a quick overview of all recordings. If In and Out have already been set in the raw material, only this selection is transferred to the virtual tape. Missing raw material can be imported directly without having to go through Media. For effects, the search term can be entered here immediately – why does this still need an additional mouse click in Edit and Colour?
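
The exact mapping between shot length and review speed is not documented; the following sketch only illustrates the idea with assumed values – short shots stay close to real time, long shots are skimmed faster, up to an assumed cap.

```c
/* Hypothetical illustration of the variable review speed on the virtual tape.
   The reference length and the upper cap are assumptions, not Resolve's values. */
static double review_speed(double shot_seconds)
{
    const double reference = 10.0;  /* assumed: a ~10 s shot plays at 1x */
    const double max_speed = 8.0;   /* assumed upper skim speed */

    double speed = shot_seconds / reference;
    if (speed < 1.0) speed = 1.0;   /* short shots stay close to real time */
    if (speed > max_speed) speed = max_speed;
    return speed;
}
```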

The new Cut page is completely geared towards quick editing.

The two constantly displayed timelines also serve the overview: the upper one always shows the entire cut, the lower one a zoomed-in view for precise work. In the lower one, the video moves under a fixed cursor. Scrolling with a trackpad or Magic Mouse responds to how fast you scroll, just like on a phone or tablet. Of course, the two positions are linked, and scrubbing or jumping is possible in both timelines. Clips can be grabbed in either timeline and dragged to the other. Anyone who has worked with this for a while will be amazed to realise that it is no longer necessary to constantly zoom in and out while editing, e.g. to reposition a clip that is further away, and how much time this actually saves. A simple idea that could have been thought of earlier. When trimming, a third view opens under the viewer, in which you can work frame by frame. This means you have three zoom levels without having to do anything. In addition, a very precise audio controller appears below the source window if an audio track is available. Speech intelligibility during scrubbing has been improved, making it easier to find cut points, especially in dialogue.

A virtual tape machine allows quick viewing of the raw material.

There is no Inspector here; instead, you have direct access to the most important tools via icons, a direct selection of common transitions at cuts, and a quick selection tool via right-click in the lower timeline. The directly accessible tools also include stabilisation and lens distortion compensation, which in version 16 also appear in the Inspector in Edit, where they offer more setting options. Cut’s user interface is also very compact and is obviously aimed at laptops with limited screen resolution. Even the icons for changing pages at the bottom can be hidden under “Show Page Navigation” in the Workspace menu. After a little familiarisation, working with Cut on a modern laptop with a good trackpad is very pleasant.

With “Quick Export”, the result of such a speedy edit (or any part of it) can be output in standard formats and uploaded to Vimeo, YouTube and now also to the fully integrated professional platform Frame.io, without the detour via Deliver. Markers and comments are transferred there as well, and live messages can be exchanged without leaving Resolve. Unfortunately, DNxHR is not offered as an alternative to ProRes in Quick Export on the Mac. In addition, the export does not take place in the background; Resolve is blocked for the duration. That is understandable for rendering, as long as the computer is not very well equipped. But it seems nonsensical that you also have to wait for the upload to one of the video platforms before you can continue working – especially if you can’t get a really fast line in a German city centre.

Face recognition is based on the new Neural Engine and offers automatic creation of suitable bins.

Neural filters?

Neural networks are actually computer systems that are capable of learning. In Resolve 16, the term appears several times as “DaVinci Neural Engine” in the context of new filters and functions, without it being clear how what has been learnt is retained when switching between projects or even workstations. I would therefore rather chalk the term up to marketing jargon amid the artificial intelligence hype, without wishing to devalue the corresponding functions. One function that uses this engine is People, for recognising people in clips. This already works quite well as long as they are facing the camera and not holding a hand in front of their face. Once you have given them names, Resolve 16 can automatically sort them into smart bins. Keywords and other metadata can now also be used to create bins automatically if desired.

The neural filters also include Stylize, which generates cartoon looks.

The already familiar SuperScale is now also said to utilise this engine, and it has indeed become much faster: Upscaling from HD to UHD used to run at around 6 fps on the test computer; in the new version, SuperScale manages this at just under 25 fps, almost in real time. This is very helpful because, in addition to scaling individual clips, SuperScale can now also be activated across the board during output and rendering if material with insufficient resolution was used in the final timeline. In this way, a clip can be used beyond its native resolution when zooming or panning. Auto Colour and Shot Match now also rely on this engine. In fact, the results are usually better than with the predecessor, without being able to replace experienced colourists or even 3D LUT creators (see DP 03:19) when matching.

Less useful is the new Speed Warp, which is offered as an additional motion estimation option for Optical Flow and is also said to be neural. To change the speed of a 20-second clip, our computer needed 40 seconds with the best previous setting (Enhanced Better), but over 15 minutes with Speed Warp. The results were not exactly worlds better, especially with artificial slow motion. Although there are recognisable advantages depending on the subject, especially when separating moving elements from the background, these can also be seen with Twixtor Pro – which only needs around one and a half minutes for the same task. Even if Speed Warp becomes significantly faster in future versions, it will probably be most useful for adapting different frame rates. For slow motion, the competition comes from within the company: real slow motion from a camera like the new generation of the Ursa Mini Pro is still far superior to any algorithm.

Less-than-perfect lenses can be corrected here, but also simulated.

Fusion 16

Tracking in Fusion, on the other hand, has become faster and better, both in 3D camera tracking and in the planar tracker. The planar tracker needed one minute and 36 seconds for our test sequence in the old version; in version 16 it takes only 21 seconds, with even better precision. The 3D tracking of a corresponding test scene without manual intervention did not become faster to the same extent, but here too the value for the pixel deviations was reduced. Naturally, 3D tracking depends heavily on the image content and on manual adjustment of the settings, so the results can vary greatly. In any case, it is helpful that lens distortions can now be taken into account.

While in many cases it was previously possible to improve the still rather poor stability of the integrated version of Fusion by switching off GPU acceleration, this is the area where the most has been done. Many things, such as practically all 3D functions, temporal effects, vector-based motion blur, splitters/combiners, polygon and bitmap masks as well as the often criticised title tools in Fusion now use the GPU for better speed and more stability. In 3D, the visual quality is improved by multiple sampling.

Intensive work has been done on caching and memory management: even though Fusion continues to benefit from ample RAM, the integrated version is no longer as unstable with less memory. However, this does not mean that Fusion 16 utilises a particularly fast GPU as efficiently as Resolve. A computer with many powerful CPU cores can still keep up well or even be faster, depending on the functions used in a composition, and Fusion cannot yet utilise more than one GPU.

The overhauled scopes are faster, more precise and offer more information.

The appearance of the separate version has been adapted to the integrated version, but not all experienced users will be happy about the iconisation, the space wasted by the bars and the much less flexible window layout. Fusion jumps straight from version 9 to 16 to bring the numbering into line. The previous Resolve dongle also activates the Studio version of Fusion in the beta, while a free Fusion 16 does not appear at all, at least not in the beta programme. One can assume that the separate version will be discontinued at some point, once Blackmagic considers the integrated version sufficiently mature.

Incidentally, although the PDF manuals are already labelled version 16, there is currently no new information to be found in them. Only a separate document called “DaVinci Resolve 16 New Features Guide” helps with experimentation, but it by no means answers all questions, especially not about Fusion 16. And for the do-it-yourselfers: OpenCL fuses must be rewritten in the new DCTL language, which allows a uniform code base for Metal, CUDA and OpenCL (pure Lua scripts continue to run). A new SDK is not yet available, which is why native plug-ins such as Krokodove or LitSphere do not run in Fusion 16.
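
To give an idea of what DCTL code looks like, here is a minimal sketch of a standalone colour transform (a simple gain); it is not a complete fuse, and the gain value is purely illustrative.

```c
// Minimal DCTL sketch: a colour transform that applies a plain gain per pixel.
// DCTL uses C-like syntax with the __DEVICE__ qualifier, so the same source
// can be compiled for Metal, CUDA or OpenCL.
__DEVICE__ float3 transform(int p_Width, int p_Height, int p_X, int p_Y,
                            float p_R, float p_G, float p_B)
{
    const float gain = 1.2f;  // illustrative value only
    return make_float3(p_R * gain, p_G * gain, p_B * gain);
}
```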

The switchable histograms for the correction curves are also helpful.

Scopes, filters and effects

All technical displays have been massively revised. The scopes are now GPU-accelerated and, despite improved rendering, interfere far less with smooth playback in the viewer; they can still be set to favour either display quality or speed. They can also be coloured, and noise can be removed from the display with a low-pass filter. The new vectorscope can display highlights, midtones or shadows separately, and the respective range can even be adjusted. It would also be desirable to be able to display the scopes in parallel on multiple displays. The scopes can now show the minima and maxima as an outline via “extents”. A completely new feature is the display of the colour space, which finally makes visible information that camera manufacturers are often asked for in vain. In the current beta, the scopes in the floating window still disappear when you switch to Cut and have to be reactivated in the other pages. Another new and useful feature is the display of underlying histograms for curve manipulation, which show the signal distribution either of the input signal or of the result.

With Analog Damage, nostalgics don’t need any additional plug-ins.

Beauty retouching and face recognition have been improved, but the mask still cannot be corrected manually using the reference points displayed there. The new Colorize filter can’t do anything that couldn’t be achieved with curves or channel operations, but it is faster. Stylize and Pencil Sketch are also said to be based on the Neural Engine, which should put a number of cartoon effects from third-party manufacturers out of work. Thanks to Analog Damage, you no longer have to turn to third parties to degrade perfect digital material with artefacts from the analogue era. The Dead Pixel Fixer and the Warper now allow point keyframes to adjust positions over time. New is the Chromatic Aberration filter for the correction (or simulation) of such lens imaging errors, while Chromatic Adaptation handles the adaptation of light spectra and colour spaces. Vignettes and shadows are now offered directly as filters and no longer have to be built by hand. Several filters from the Colour page are now also available in Edit, some also in Fusion.

We would have liked to test the new Object Removal directly against Mocha Pro (see DP 06:18), but in the beta we could not yet find out how to import an external clean plate. However, the top dog Mocha Pro will probably not become superfluous with version 16. On the contrary: Blackmagic Design finally gives developers of OFX plug-ins full access to all individual frames of the respective clip. This means that Mocha Pro as a plug-in no longer causes the programme to crash, even if the developers still have to make a few adjustments. Accordingly, the temporal effects from Sapphire (see DP 02:19), which were previously severely restricted by Resolve, should soon be up and running as well – the people at Boris FX are already working on it.

The tools for editing voice recordings can now be found in the Dialog Processor.

Fairlight

The sound department shows improvements above all in automation and measurement methods, including a loudness history curve; there are no more excuses for incorrect levels (just as there are none for gamut errors in the image thanks to the new scopes). The display can be set to all the important broadcast standards, which can also be selected for Normalise. Support for exchange with Pro Tools has been revised, both for export and for AAF import. Elastic Time facilitates length adjustment, e.g. for post-synchronisation without pitch change, and the Dialog Processor combines the most important tools for refining voice recordings. Other new features include real-time frequency analysis and phase measurement for stereo, while the limiter has been significantly improved. However, let someone with golden ears judge the acoustic quality of all the new features – that’s not really my speciality.

Immersive formats also need a new interface for positioning.

The processing of immersive audio formats is offered for Dolby Atmos, Auro-3D, MPEG-H and SMPTE ST 2098, supplemented with 3D panning and the Spaceview scope, but it requires corresponding I/O hardware. Blackmagic currently provides surprisingly little information on this: apart from a brief description of the Fairlight Audio Accelerator, there is only the statement that it is supported in a Thunderbolt expander under macOS. The Foley sound library announced at NAB is also not yet online. It is regrettable that audio is still exported at a maximum of 48 kHz (albeit at up to 32 bits) – at least nothing higher is listed under Deliver, although higher sampling rates are accepted on input. Even if hardly anyone can hear the difference to 96 kHz, higher values are simply demanded by some customers and are also useful for possible further processing. At least, when outputting timelines with several audio tracks, each of them can be rendered to an individual file.

Formats and miscellaneous

In addition to a few more formats and better support for the necessary HDR flags, support for RAW from Canon and DJI cameras and for Arri HDE deserves particular mention. Resolve now also understands DNxHD/HR in uncompressed form. Projects can be supplemented with notes, which are also saved and passed on to project participants. In collaborative work, projects can now also be defined as read-only. For conform or colour tracing, the relevant name parts for reel (or tape) can be limited to a shorter number of letters, and a desired number of characters at the beginning can be ignored – this helps with exchange. The intelligence when exporting still images still leaves something to be desired: Resolve now remembers the last folder, but still not the desired format. In Colour, the key signal is no longer inverted when transferring between nodes, and the new LUT browser offers a preview like the existing one for Looks. Zoom and offset in the viewer can be mirrored identically to an external monitor if desired, and there are more combinations in the split view.

Mocha Pro as a plug-in no longer causes Resolve to crash.

Comment

The Cut mode, which I personally would have preferred to call Assemble or Quick Cut, is a very useful new feature. It abandons common conventions without irritating users the way Apple did with Final Cut Pro X back in the day. Otherwise, even in the early beta, the integrated Fusion has become more stable and Fairlight more complete. Overall, there are numerous improvements in detail without introducing an infinite number of new features. This version clearly focusses on consolidation and catches up with the competition in important details.

However, if you want to test the public beta, you should definitely make a backup beforehand or use a separate machine – it currently has more bugs than any street dog.
