Capturing Reality in Houdini
Review: In DP 02:2019, we took a closer look at photogrammetry in Houdini. Our Houdini expert examines whether it delivers a production boost for game developers.

SideFX Houdini users get help in processing image series for point cloud generation and reverse engineering to take Houdini’s pipeline for game developers to a new level. In this case, we are talking about photogrammetry integrated into Houdini on the basis of Capturing Reality. We are also pleased to announce that the Capturing Reality plug-in for Houdini is in open beta. The plug-in maps a handful of stages from the main Capturing Reality software as Houdini nodes, so users don’t have to perform constant import and export functions between the two packages. Combined with the optimised tools in Houdini, this results in a significant production boost for game content developers.

In media productions, the use of photogrammetry is growing as software solutions become more affordable and professional equipment more accessible. Time is always a driving factor, and studios are turning to photogrammetry to create digital sets faster. The field already offers a range of solutions for processing image series; well-known examples include Agisoft’s Photoscan, 3DF Zephyr, Autodesk ReCap and Capturing Reality. The latter enjoys a special status in the community, as its accuracy and, above all, its processing speed are said to leave the competition standing in the rain, as the saying goes. Reason enough to take a closer look at the symbiosis between the VFX giant and the photogrammetry giant.

Fast to the target

The number of nodes in the Capturing Reality plug-in was manageable during the test period – they can be counted on one hand. The starting point is an empty Geometry node. Using the node called RC Register Images, users can import an image series into Houdini, optimised for Capturing Reality’s photogrammetry workflow. Once the image series has been added, the RC Align Images node initiates the analysis of the image pairs and the placement of the cameras in 3D space. As a result, the object in the image series is displayed as a point cloud with colour values. Even with an image series taken under strict conditions, far more points may be generated than is sensible, and the scene can resemble a fine spray mist. The area to be processed must therefore be narrowed down: it is sufficient to delete the points that are not required and draw a bounding box around the rest. How this is done is up to the user and usually does not take long. There is no RC node for this step, but none is necessary, as Houdini provides numerous ready-made tools for it.

To obtain a 3D object from the point cloud, another Capturing Reality node called RC Create Model is used. Two inputs must be connected for a correct result – ideally starting with the second input, because with Auto Update enabled, wiring the first input alone triggers computation that can crash Houdini if cancelled. The first input takes the direct output of the RC Align Images node, i.e. all points resulting from the calculation of the image series. The second input requires the bounding-box information that marks out the selected points for reconstruction.
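The chain up to this point can also be built from Houdini’s Python shell. The following is a minimal sketch: the hou calls are standard API, but the RC node type and parameter names are assumptions inferred from the UI labels – check the real ones in your plug-in build (e.g. via node.type().name()) before relying on them.

```python
# Minimal sketch of the node chain described above, via Houdini's Python API.
# The "rc_*" node type names and their parm names are assumptions inferred
# from the UI labels; verify them in your plug-in build first.
import hou

geo = hou.node("/obj").createNode("geo", "photogrammetry")

register = geo.createNode("rc_register_images")    # hypothetical type name
register.parm("imagedir").set("$HIP/images")       # hypothetical parm name

align = geo.createNode("rc_align_images")          # hypothetical type name
align.setInput(0, register)

# Narrow down the point cloud with stock Houdini SOPs: a Blast to delete
# stray points and a Bound SOP to produce the bounding box.
blast = geo.createNode("blast")
blast.setInput(0, align)
blast.parm("group").set("@P.y<0")                  # example: drop points below ground

bbox = geo.createNode("bound")
bbox.setInput(0, blast)

# RC Create Model: wire the second input (bounding box) first, as advised
# above, so Auto Update does not kick off a premature reconstruction.
model = geo.createNode("rc_create_model")          # hypothetical type name
model.setInput(1, bbox)
model.setInput(0, align)
model.setDisplayFlag(True)
geo.layoutChildren()
```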

After a processing time that varies with the resolution and length of the image series, the 3D object appears in the scene view, but must still be placed correctly at the origin by the user. The colouring of the object at this point looks plausible, but it is based on the vertex colours taken from the point cloud and interpolated – relatively easy to spot when looking closely at the individual faces.

There are now two basic ways to proceed. The first leads back to the RC Create Model node: an RC Texture Model node must be attached to it – the fifth RC node in the bundle. Its settings make it possible to derive a lower-resolution model from the scan with just a few parameters. In addition to an output path, the user can specify the resolution of the textures to be created as a power of two, or pick from preset options. There is also a slider for the percentage by which the algorithm should reduce the object.
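Continuing the sketch above, the texturing and decimation settings described here would be set on the RC Texture Model node; again, every parameter name below is a hypothetical placeholder (list the real ones with texture.parms()).

```python
# Continuation of the sketch above. All parm names are hypothetical
# placeholders for whatever the plug-in actually exposes.
texture = geo.createNode("rc_texture_model")        # hypothetical type name
texture.setInput(0, model)
texture.parm("outputpath").set("$HIP/export/scan")  # hypothetical parm
texture.parm("textureres").set(4096)                # power-of-two texture size
texture.parm("reduction").set(75)                   # reduce the mesh by 75 percent
```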

Once the settings have been made, the object can be created at the touch of a button. As this is an automatic process, the topology of the mesh may not meet the desired standard for some objects. It is therefore also a good idea to prepare the object in Houdini – SideFX offers special tools for game content developers.

The sample data sets from SideFX are relatively simple, so a Capturing Reality test was carried out in Houdini using an image set from 3D Scan Store. In addition to 3D models, image sets can now also be purchased there for your own mesh generation: 3dscanstore.com

Fine tuning and function

When you get right down to it, users receive two nodes with which 3D objects can be created from image series. As the old saying goes, the devil is in the detail, and the settings may need to be adjusted for accuracy when processing different image pairs. The accuracy of the reconstruction can be checked with the node called RC Extract Cameras: if an image series was created using a cage or a fixed camera rig, the positions of the individual cameras in space can be displayed with it, and any deviations can be counteracted in a targeted manner.

Up to this point, we have talked about the workflow rather than the plug-in itself. First, the plug-in must be downloaded and installed; SideFX has added a separate subpage to the official homepage with download links for Houdini versions 16.5 and 17 as well as sample files. When the RC Align Images node is inserted and the display flag is set, the plug-in asks for an active Capturing Reality licence. A Capturing Reality account is required in any case, because the software asks for a licence the first time it is used; users who do not yet have one can select a licence model in the query window at this point.

Potential users should be aware that the Houdini plug-in is free of charge during the beta, but a Capturing Reality licence must be purchased via the user’s own account – the demo and Steam versions are not compatible. However, the costs for Capturing Reality are manageable and reasonable given the functionality and user-friendliness. In addition, the Capturing Reality community is active and constantly growing; when a problem is identified, the community works out a solution, so new users can count on help.

Costs: For individual users and indie studios, we recommend the promo programme at 99 euros for three months of use. Updates are included, a maximum of 2,500 images per object can be imported, and an internet connection is required to export the data. For a fair price, users in the entertainment industry get a photogrammetry turbo.

Links

Houdini Capturing Reality Plug-in, documentation and sample files

Official Capturing Reality Homepage

More pixels in your pocket – the Blackmagic Pocket 4K

This article originally appeared in Digital Production 02:2019.

Many loved the inconspicuous Blackmagic Pocket HD despite all its weaknesses – from short battery life to fragile sockets and a very flat screen. The film images were simply too beautiful for a camera of this price range and size. As soon as 4K became an issue, many wanted a new successor to the classic 16 mm camera with better resolution. When the hapless Digital Bolex disappeared from the market, these voices grew even more numerous. Finally, Blackmagic Design (BMD for short) announced the new Pocket at NAB 2018, and obviously many fingers twitched at “buy” without thinking twice. Unlike in the past, BMD was almost able to keep to the announced delivery date this time. But that doesn’t mean you can get the camera right away: it’s selling like hotcakes. Does the new camera have that much magic?

The predecessor could actually be slipped into almost any pocket with a small pancake or 16 mm lens. That’s a thing of the past: despite all the progress, processing four times as many pixels for RAW images, and dissipating the corresponding heat, cannot be accommodated in such a compact housing. The new camera looks very similar to a DSLR and is even a little chunkier. When you first pick it up, however, the low weight (720 g) is surprising, which also gives it a slightly plasticky feel. In fact it is a carbon-fibre-reinforced composite that should withstand quite a lot – the same class of material is used in cars. In any case, there have been no reports of housing damage from those who have been working with the camera for some time. A few Ursa Minis, on the other hand, have not survived tipping over sideways with the handle attached without the metal housing breaking.

If anything, the torsional stiffness of the Pocket 4K could be somewhat lower than that of metal. With heavy lenses or motorised focus pullers, you should probably support the lens on rods rather than relying on the camera mount alone. The housing is not rainproof. A common problem with the predecessor was the fragile sockets, especially the tiny HDMI connector, which usually failed with frequent use. Things have improved here: HDMI in full size, a much more solid, latching 12-volt socket and mini-XLR for sound (mono) in addition to the usual 3.5 mm stereo jack. The batteries have also grown slightly; they now correspond to Canon LP-E6. BMD has retained the Micro Four Thirds (MFT) lens mount. The active cooling is audible, but only in the immediate vicinity. The outlet for the noticeably heated air is not optimally positioned on the underside, as a larger tripod plate could jeopardise the cooling – especially if the camera is carelessly placed on textiles.

Despite the touchscreen, there are plenty of sensibly placed and clearly labelled controls, including three freely assignable function buttons, a photo button for stills and a quick switch to slow motion. The latter can cause problems if it happens to be assigned the same frame rate as the one you are currently shooting with: touch it accidentally and picture/sound synchronisation is no longer guaranteed – so it is better to set a clearly different frame rate, which makes the mishap noticeable. It is also to be hoped that the labelling will be more durable than on the old Pocket: anyone who used that one intensively has to operate it blind by now. There is no longer a socket for LANC (Control-L); remote control is only offered via a Bluetooth app, as with the Ursa Mini Pro. If desired, GPS data from the controlling device can be written into the metadata of the recording – e.g. when tracking rare animals.

The screen has become a little brighter, but be careful with the HFR button

Together with the well-thought-out menu structure, from which manufacturers like Sony could still learn a lot, the Pocket 4K quickly grows into your (not too dainty) hand. The manual is available in several languages, including German. Apart from a few amusing translation slips – such as rendering “AC” (alternating current) as “camera assistant” – it is well organised and easy to understand; there is even an introduction to DaVinci Resolve. The camera menus are currently limited to English, but translations are planned.

Monitor

The screen makes good use of the available space, has a full HD resolution of 1920 x 1080 and around 500 nits. Although it cannot compete with a Ninja V that is twice as bright, it is significantly brighter and sharper than its predecessor (which can hardly be operated without a viewfinder magnifier). Unfortunately, it cannot be tilted, making it difficult to work with it from very low or high positions. Nevertheless, this decision is understandable, as the camera was obviously designed to be robust. The fragile tilting mechanism of some other cameras and the continuous strain on the corresponding cabling is always a potential weak point.
The monitor offers clear displays of all important parameters, which lead directly to the corresponding setting when touched without a menu; of course, they can also be hidden. Loadable LUTs for the display allow image assessment – these can optionally be transferred to the recording if you need to deliver immediately presentable material. Contour sharpening, zebra and false colours can be switched on for control purposes, but there is no waveform, histogram or vectorscope here. When the menu is activated or in dark scenes, a slight light scattering can be seen in the bottom left-hand corner, but this is insignificant in practice.


LUTs can be applied to the monitor, but can also be baked into the recording

Sensor and lenses

The sensor has also grown, even slightly beyond the usual size in MFT photo cameras such as Panasonic’s GH5. While the latter has a sensor 17.3 mm wide with an aspect ratio of 1.33:1, the chip in the Pocket 4K is almost 19 mm wide, but only 10 mm high. It is clearly aimed at film and actually offers true 4K pixels in the cinema standard of 4096 x 2160 – other camera manufacturers are happy if you don’t know the difference to UHD at 3840 x 2160. With a Bayer pattern, neither means a true resolution of 2,000 lines, but at least both formats are available natively without scaling. The sharpness of a Sony A7 III with oversampling is not quite reached, but subjectively the Pocket looks very sharp. With one small catch: BMD still does not use an OLPF (anti-aliasing filter), so part of the sharpness impression is likely to consist of false detail. However, 4K on a small chip is far less critical than HD, because the resolution limit of many lenses already comes into play. We were only rarely able to detect moiré in natural subjects.

The larger sensor has consequences for the choice of lens: if you still have S-16 lenses from an old Pocket, they are unfortunately no longer as suitable despite the same mount. They vignette massively at 4K and can only be used for 2K or HDTV via windowing or crop. MFT lenses, now available in an enormous selection with both autofocus and purely manual focus, cause no problems with the image field. With a crop factor of 1.9 instead of 2 (relative to 35 mm full-frame), they even have a slightly wider angle of view on the Pocket 4K. With appropriate lenses, autofocus is possible on an area tapped on the screen, but it is relatively slow and cannot track continuously as on modern photo cameras.
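As a quick worked check of the 1.9 figure, assuming the sensor width quoted above (“almost 19 mm”): because the chip is much wider than 3:2, comparing widths rather than diagonals is the meaningful measure.

```python
# Crop factor from sensor widths; 18.96 mm is an assumed width consistent
# with "almost 19 mm" (4096 photosites at ~4.63 um pitch).
full_frame_width = 36.0   # mm, 35 mm still frame
pocket4k_width = 18.96    # mm, assumed

print(f"crop factor ~ {full_frame_width / pocket4k_width:.2f}")  # ~1.90
```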

The Panasonic 12-35mm/f2.8 or the Olympus 12-100mm/f4 can serve as universal lenses. Both are excellent and have image stabilisation – for Olympus, stabilisation in the lens is rather the exception. For a film camera, the lack of internal image stabilisation matters less, as you usually use aids such as a gimbal or tripod, where stabilisation can even interfere. With appropriate lenses, you can activate their stabilisation in the camera for hand-held shots, but this cannot replace a gimbal. All classic manual lenses can be adapted if you love their look; however, you can hardly save money with them now that the E-mount Sonys are around and there is a corresponding run on good vintage glass. The lower speed of the zooms is generally unproblematic on the Pocket 4K, as the new sensor is significantly more light-sensitive than all previous BMD sensors.

For the first time at BMD, the sensor has two native ISO values, namely 400 and 3,200. Our tests showed that it is less noisy at 1,250 ISO than at 640 or 800, which are obviously generated purely by amplification. The noise even seemed a touch lower than at 400, although this comes somewhat at the expense of the latitude in the highlights. Even at 1,600 ISO the image is still quite usable, while 3,200 requires some noise filtering. The limit value of 25,600, on the other hand, is exactly that: borderline. The dynamic range is around 13 f-stops, slightly below that of the Ursa Mini Pro, but still a very decent result, as our test subject shows. Dual ISO is explained excellently at FilmmakerIQ: bit.ly/hess_dual_iso. The rolling shutter is acceptable: it is on a par with other cinema cameras and not as pronounced as with photo cameras.

The structure of the noise differs from earlier BMD cameras: it appears very homogeneous, and the infamous fixed-pattern noise is barely discernible. Noticeably, the waveform display in Resolve shows clipping below the black level within the noise, which we have not seen in this form with other cameras; after slight noise filtering, a normal noise floor is recognisable again. Not only is the light sensitivity impressive for a still quite small 4K sensor, the colours are also fully convincing. BMD once again shows that it understands colour science: it shouldn’t be too difficult to match correctly exposed skin tones with an Arri, even if the Pocket can’t quite keep up at the limits of its dynamic range.

C-mount lenses for S-16 cannot illuminate the entire sensor, but HDTV (blue frame) can. Green corresponds to 3K, i.e. oversampling for HD

The noise at 6,400 ISO goes below the black limit in the original; after noise filtering, the effect is gone

Recording media

Only the Ursa Mini is similarly flexible when it comes to storage media: in addition to CFast and fast SD cards, you can also connect an external SSD via the USB-C port (note: not identical to Thunderbolt 3) and record directly to it. However, there still seem to be minor firmware problems: at least with the popular Samsung T5, you should first start the camera (approx. 5 seconds) and only then connect the SSD. Sometimes a card must also be inserted in the SD slot first for the SSD to be recognised (both use the same bus). BMD is aware of the problem, and a fix should be in the works. SanDisk should urgently solve another problem: some current batches of proven SD cards no longer work in BMD’s cameras, and not only in the Pocket 4K. Be careful with repeat purchases!

USB-C recording is even more elegant and cheaper than the SSD recorder sold separately for the larger camera, or the DIY solutions presented in DP 03:18. This means everything is available, from fast but more expensive media through inexpensive, widely used cards to media with long runtimes. A place for the lightweight SSD can usually be found with some Velcro; unfortunately, the USB-C plug is not secured against slipping out. On sufficiently fast media, the Pocket manages 4K DCI or UHD at 60 frames, and HDTV in the crop window goes up to 120 fps. Unfortunately, no intermediate 3K format (as with RED) is offered, which would let the Bayer sensor deliver full HDTV resolution. Currently, recording is only possible in DNG or ProRes 422, but BMD has also announced BRaw (see DP 01:19) for this camera. As the camera does not allow parallel recording to multiple media, not all formats can be recorded uncompressed at higher frame rates.
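How fast is “sufficiently fast”? A rough estimate can be had by scaling Apple’s published ProRes 422 target rate for 1080p29.97 (about 147 Mbit/s) linearly with pixel rate. Real encoder rates run somewhat below this linear extrapolation, so treat the numbers as conservative upper bounds when choosing cards.

```python
# Upper-bound ProRes 422 data-rate estimate by linear pixel-rate scaling.
base_mbps = 147.0                       # Apple's figure for 1920x1080 @ 29.97
base_pixel_rate = 1920 * 1080 * 29.97

for label, w, h, fps in [("4K DCI 60p", 4096, 2160, 60),
                         ("UHD 60p", 3840, 2160, 60),
                         ("HD 120p crop", 1920, 1080, 120)]:
    mbps = base_mbps * (w * h * fps) / base_pixel_rate
    print(f"{label}: ~{mbps:.0f} Mbit/s, i.e. ~{mbps / 8:.0f} MB/s sustained write")
```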

The brightened patterns show the low noise at 1,250 ISO and the absence of fixed pattern noise.
Extreme contrasts push the sensor to the limit at 1,250 ISO in the highlights, but noise remains low

Power supply

Even though the Pocket 4K achieves slightly longer runtimes than its predecessor thanks to the larger batteries, the power requirement is critical for the small batteries typically used in DSLRs. The battery supplied by BMD achieves 49 minutes of continuous recording to an internal card; the remaining-runtime indicator decreases continuously, a warning appears shortly before switch-off, and the camera shuts down properly. In contrast, a new battery from Patona, which was also used for testing, lasted 33 minutes and switched off at an indicated 80% without warning. We have not had bad experiences with these third-party batteries on other devices, but the Pocket simply draws more power than a standard DSLR.

Unlimited reliability only seems to be possible with the rather expensive original batteries from Canon and those from BMD. However, BMD is currently experiencing supply bottlenecks not only with the cameras, but also with these batteries. Particularly when using SSDs, which also draw their power from the camera via USB-C, it is important to warn against using third-party batteries. One possible weak point appears to be the battery cover – its closure does not look very trustworthy, but the battery itself is secured against falling out by an additional lever. On the other hand, the cover is also easy to remove if you want to connect a more powerful battery (such as the Sony L series) externally using a dummy – the accessories industry has reacted quickly.

The Pocket also has a socket for an external 12-volt supply. Strictly speaking, it can be 12-20 volts, so that standard video batteries with D-Tap can be used without fear, although they can have up to 16.8 volts when freshly charged (nominal 14.4). The appropriate cable must be purchased separately. A power supply unit for this connection is included in the scope of delivery and also charges the internal battery; however, a separate charger is not included. When the camera is switched off, the battery can even be charged via USB. As long as a charged battery is in the camera, the power supply remains very reliable. If you have to change the external battery or someone unplugs the mains cable, recording continues without interruption.

The battery cover does not always latch reliably

Sound and timecode

The four internal microphones sound very good and have an amazing feature when a zoom lens with electronic coupling is used: the sound zooms too! In the wide-angle position, the sound is more open and spacious; in the tele position, it sounds closer and more intimate. Noise from the inputs is barely audible, but the fan becomes discreetly audible in a quiet environment. The mono XLR input offers switchable 48-volt phantom power – which places additional demands on the battery and costs around 10 minutes of runtime. The built-in loudspeaker is only good for checking that sound is present. The headphone output is usable this time: it has neither too much noise nor too much latency, as has been the case with some other BMD models.

Most of the connections are now more robust. Unfortunately, the 3.5 mm audio input is too insensitive for microphones

Unfortunately, this is where the plus points end: the 3.5 mm stereo input is far too insensitive and, without an external preamplifier, can only be used in front of the stage at a heavy metal concert, even with powerful microphones. A wireless receiver with line output is sufficient, however. Whether this is a software error or a fundamental weakness could not be determined during our test. In addition, the inputs cannot be switched to line or mic level separately, only together – a restriction that is not obvious in the menu and can be irritating.
The use of external timecode generators, on the other hand, is well solved: you only have to briefly feed an LTC timecode into one of the inputs, without assigning it to a track during recording. As soon as it has been recognised, a jam sync is performed, indicated by a small symbol next to the TC in the monitor. Synchronisation then remains stable for several hours as long as the power supply is maintained. Consequently, it is wisest to work with separate sound recording if you do not want to tether the camera to an external mixer. The Zoom F4 recorder, among others, demonstrates very good TC stability. The internal microphones can at least provide a good ambience or a guide track if you want to sync the sound with PluralEyes or similar programmes.

Which gimbal?

The Half Cage from SmallRig makes mounting on a gimbal easier without taking up too much space

The choice should not be a problem with such a light camera, but the wide housing causes trouble: on most gimbals, the grip hits the tilt motor unless an additional plate lets you shift the Pocket far enough forward while still balancing it. The DJI Ronin-S, currently available at a favourable price, can be used with such a plate. An alternative is the Moza Air 2 together with the Half Cage from SmallRig. We could not test either due to a lack of availability, but reports from experienced forum colleagues are positive.

Comment

The Pocket 4K should be used for exactly what it is: a compact, very light and, for this image quality, unbeatably cheap camera. If you factor in the value of the included DaVinci Resolve Studio licence, it effectively costs just 1,000 euros. It works just as well in a YouTube home studio as a B-camera or crash cam alongside an Arri, and is also excellent backpack equipment for landscape and wildlife filming. However, with its lack of ND filters, missing viewfinder, weak sound section and critical power supply, it will not replace an Ursa Mini Pro, which also offers more resolution and dynamic range. To get around those limits, you would have to assemble a monster rig that is neither more user-friendly nor much cheaper.

Datacolor SpyderX in test
Colour correction is playing an increasingly important role in media production, not least because more and more material is shot in flat LOG formats to give grading more leeway. As a result, the need for calibrated monitoring is increasing – partly already on set, but at the latest in post-production. If you want more control over the delivered quality, reference displays are indispensable in post-production; otherwise, discussions flare up quickly and uncertainty spreads among everyone involved.

Calibration can, or rather should, help with this – why else would such products be sold? Take, for example, the SpyderX from Datacolor, which arrived at the editorial office highly praised.

What is the SpyderX?

The SpyderX Pro is a colour measurement device which – according to manufacturer Datacolor – was designed for “dedicated photographers and designers” and consists of a colourimeter – now with a stylish, futuristic look – and software. The software can match different screens and measure various targets; according to the manufacturer it is wonderfully accurate, with all kinds of additional features such as measuring display performance, assigning specific colour profiles and so on. If you want one yourself, you will pay around 170 euros – as of May 2019 – or 280 euros for the extended Elite functions.

In a very first test of handling and application, we noticed – regardless of the results – the enormous speed compared to its predecessor. The latter took a good hour per screen for the full programme (measuring included); with the SpyderX Pro, you are back in your favourite 3D software after a quarter of an hour (same tests). The price is identical, and physically only the underside of the device has changed: it is now adorned with a large lens and no longer has the facets of previous generations. The software has not changed visually or in its description – anyone familiar with the Datacolor tools can click through with their eyes closed.

This raises the question: how far can you currently get with it? In the past, we have analysed various measuring probes several times and did not achieve really good results, especially with the cheaper entry-level products and the displays tested at the time; the Datacolor products in particular did not generally perform well. We have also recently encountered several Spyder customers who were unable to match their various monitors – a typical problem resulting from the technical limitations of inexpensive probes.

The SpyderX Pro ships with a number of generic display profiles for counter-calibration; Wide LED is used, among others, by Mac displays such as the MacBook’s. Full-calibration settings in Datacolor’s SpyderX Pro software

However, a new product is always a reason to investigate whether something has improved. I will reveal this much: We did indeed find a significant improvement, although that alone is not enough.

Built-in problems

What are the typical problems of inexpensive probes?

  • Inexpensive probes are so-called colourimeters. Like cameras, they use normal image sensors behind colour filters. Anyone who has ever photographed two different monitors that were actually very well calibrated and visually identical usually finds that they can look completely different in the photo. The spectral properties may differ, yet be perceived identically by the human visual system; the colour filters on the camera sensor see it differently. Colourimeters sometimes measure different values in just the same way. That is bad, because it leads to incorrectly calibrated monitors – or, as in the customer case we experienced, to the inability to match two different monitors so that the images displayed on them look the same. In practice: we grade on an iMac monitor with DaVinci Resolve. This should resemble the customer’s monitor (usually a TV set) and, as the colour-accurate reference, a class 1 reference monitor such as the Sony BVM-X300. So we match an LCD TV with the Sony OLED. That is difficult because they are two completely different display technologies – and, as RGB OLEDs, Sony OLEDs in particular exhibit a white point error, deviating from the usual colour models due to their extreme spectral properties (although this is more of an exception).
  • Inexpensive probes are not temperature-stabilised – a major reason why such colourimeters are unsuitable for measurements with several hundred or thousand colour patches, as needed to generate a reference-grade calibration from the 3D LUTs calculated from them. The colourimeter – like any electrical device – produces heat during the measurement and, above all, absorbs heat from the display, usually the larger component. This heat changes the electrical resistance of the individual photosites on the sensor, causing the measured colours to drift. So if you want to compare colours, do so at the same temperature – and even more importantly, let the displays warm up for about 60 minutes.
  • Inexpensive probes are not calibrated against the different or specific spectral properties of display technologies. A crucial point, as described in point 1, is that probes based on colour filters are usually unique (enough) in their measurement behaviour. This problem can only be solved with very expensive spectroradiometers, by measuring the colourimeter’s deviations from the specific display and correcting the measurement results with the correction values calculated from them. Spectroradiometers work in a similar way to pinhole cameras and are regularly calibrated or checked by the manufacturer. However, they are not suitable for complex, lengthy measurements either, as they are very slow and usually do not cover a very wide luminance range. In addition, they cannot measure total black, as on OLEDs or extremely dark LCDs; with them, 5% grey can take two minutes for a single colour value.

In a perfect world …

… to achieve reference quality, you need temperature-stabilised colourimeters (e.g. Klein K10, approx. 8,000 euros) and a spectroradiometer (e.g. Jeti 1511, approx. 8,000 euros) to calibrate the colourimeter against the respective display. An ageing display, which changes its physical properties, can also produce a deviation, which should also be measured. In high-quality measurement programmes, such counter-calibration functions can be saved as profiles and can therefore be reused. However, my experience shows that I achieve better results if I check the profiling for each measurement and generate a new one if necessary.

And in reality?

So what has Datacolor supplied with the SpyderX Pro and its measurement software to address the latter problem at least to some extent? First of all, a selection of the display technology in use.

Datacolor has provided the probe with calibration profiles for some of the most widely used display technologies, which is an essential prerequisite for improved or even just usable measurement results. This also leads to better calibration results, which we have verified with our measurement equipment worth over 25,000 euros.

Now a direct comparison …

In the default settings, however, we find no information about the colour space to be calibrated, i.e. the target colour space to be achieved as a result. This will prove a problem, as we will see later. Nor were any further options available in the settings. Room light and light sources: the explanations in Datacolor’s help centre are quite meagre, beyond noting that lighting conditions can vary and should be measured. The help texts make it noticeable that the measurement software is primarily intended for print use, as a colour temperature of 5,700 Kelvin is described – which does not correspond to film or TV work, where 6,500 Kelvin is required. There are also clear standards for ambient light, including room brightness, which should be approx. 10% of the display’s maximum brightness.

The display of the MacBook Pro 2016 sits fairly neatly within the DCI-P3 cinema colour space, but at white point D65 (6,500 Kelvin). Anyone who wants to grade at least for P3/D65 now has a display that is not bad at all. Deviation values of 1.98 for gamma/greyscale are slightly visible; colour deviations of 1.03 usually are not, although in the worst case they do add up when two different displays are compared side by side. Calibrated to Rec. 709 using Calman and our 25,000-euro measurement equipment: excellent values within reference territory – a colour deviation of deltaE2000 = 0.37 and greyscale/gamma at 0.99, both below the threshold of human visibility. Only in gamma might something stand out in a display match, and only in the blacks – but those are not at reference level on the Mac display anyway. To take things to the extreme, we used the SpyderX Pro to re-measure Rec. 709 on an LG OLED TV that we had calibrated perfectly.
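The deviation figures above are CIEDE2000 values. As a minimal illustration of what such a number means, here is the older CIE76 colour difference – a plain Euclidean distance in CIELAB, which CIEDE2000 refines with perceptual weightings. The patch values are invented; deviations below roughly 1 are generally considered invisible.

```python
# CIE76 colour difference between two CIELAB readings (illustrative values).
import math

def delta_e_76(lab1, lab2):
    """Euclidean distance in CIELAB, the original deltaE definition."""
    return math.dist(lab1, lab2)

reference = (50.0, 10.0, 10.0)   # hypothetical target patch (L*, a*, b*)
measured  = (50.3, 10.2,  9.9)   # hypothetical probe reading

print(f"deltaE76 = {delta_e_76(reference, measured):.2f}")  # ~0.37, below visibility
```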

Before carrying out the calibration with the SpyderX, we first measured with our own equipment. The pre-calibration measurement shows that the Mac display far exceeds the Rec. 709 colour space (equivalent to sRGB) selected for the verification measurement, before any optimisation by the SpyderX software. So we carried out a second pre-calibration measurement with DCI-P3 as the target, and lo and behold, it was a much better match – but red and skin tones in particular were still off.

After the Spyder calibration, the results are in fact very good – if only the colour space were of any use. The internet and most computer displays and TVs in the world are still Rec. 709 or sRGB, so a P3-calibrated display shows the wrong colours – usually too colourful and oversaturated. DCI-P3, the cinema colour space, strictly requires a display with a DCI white point, which is significantly greener than D65 (6,500 Kelvin). At best, HDR would be P3/D65-compatible, but at 100 nits the MacBook Pro display is far too dark for that. Ergo: beautifully calibrated to P3, but ultimately unusable, at least from the point of view of VFX, film or video production. If all you need is a wide gamut for viewing photos that will only be delivered in sRGB anyway, and the colour management in your photo viewer works, it can be good enough.

Countercheck

We took the liberty of calibrating the Macbook Pro display to Rec. 709 with our Calman measurement software, and it works and looks like this – see the third screenshot on the left.

Conclusion

Datacolor has made some initial improvements, but unfortunately has completely neglected some other points. The last measurement of the LG OLED in particular illustrates how far off the mark you can be if you misjudge the display type.
Unfortunately, my general conclusion remains the same: only those who have a cheap or often drifting older display that does not provide usable settings, e.g. laptops in general or cheap monitors that come with 9,000 Kelvin (Asian standard), will get some relief from the calibration – provided that the display is still in the sRGB/Rec. 709 gamut.

SpyderX is far superior to its predecessor, but it is not suitable for the colour-accurate high-end sector. If you want better colours than the manual controls on the back of the monitor allow, you are well served, as are those who want to replace the worst bundled colour profiles of various software packages. Those who regularly work with true colours will not get around buying a suitable monitor – even a SpyderX can only work with the colours the monitor outputs consistently. And next to a suitable monitor at around 2,000 euros, a professional calibration service (usually a low three-figure sum) is hardly a deal-breaker.

Portrait of a genius – 3D portrait still
Portraits of very well-known personalities are always a great challenge because everyone knows the face of the person portrayed and mistakes quickly become apparent. Alexander Beim impressed with his 3D portrait still of Albert Einstein and won the public vote in the "Best Still" category and thus an animago AWARD. Here, the 3D artist explains how the image was created.

The development of the “Albert Einstein” portrait began with a commission for Swarovski Crystal Worlds. A new cabinet of curiosities called “Heroes of Peace” was created there, in which visitors can view hologram projections of people who have received a Nobel Prize. In addition to Einstein, Mahatma Gandhi and Martin Luther King can also be seen.

Einstein as a leisure project

Alexander Beim created the hologram version of Einstein, and once the commission was complete, the artist wanted to share the work he had done. “The resolution of the final images and the level of detail of the model were very suitable for the hologram animation, but they were not sufficient for a presentation in the most important internet galleries and at the animago AWARD. This inspired me to work on refining the model, for which I invested my free time,” recalls Alexander.

Only fine skin structure was created procedurally in places. All the individual wrinkles were moulded and drawn manually

As the artist was later planning an animation scene with the 3D model, he decided not to perfect it straight away – the face would look completely different once lighting, material, textures and other factors came into play in the animation. “Only when all aspects are in play do you see the errors that you don’t notice in the grey ZBrush model. That’s why, as soon as I had more or less got the shape of the head right, I started texturing and later corrected the original ZBrush model after lighting.” It took Alexander two months from the first variant to the first final 3D model. He then invested time in rigging, blend shapes and the hair setup for the animated version of Einstein. After around four months, the final animation was ready, including the final corrections for compositing. The artist then continued to experiment, and it took another two months until the version seen in the still was finished.

Einstein’s skin material was a lot of work: Alexander experimented extensively with the material parameters and mixed different colour textures to achieve a photorealistic look

A different look than expected

The biggest challenge in the creation process was achieving a resemblance to the original and using digital means to show the soul and character of Einstein the man. To understand Einstein’s emotional world, Alexander studied many pictures and videos of the scientist. Collecting suitable references turned out to be complicated, as there are not many photos of Albert Einstein. And the ones Alexander did find were of very poor quality and in black and white. It was particularly difficult to find views from all angles. Due to a lack of photographic material, the artist resorted to video stills, which were of course of even poorer quality. “I kept looking at pictures and videos until Einstein’s face filled my subconscious and was able to help me with the sculpting,” says Alexander. “It was interesting to see how my previous idea of his face differed from reality, because I discovered completely different lines and shapes than I had imagined. Nevertheless, his moustache and tousled hair naturally make up 50 per cent of his recognisability. Another important feature is the outer corners of his eyes, which are very deep.”
Using the reference material, Alexander created a front and side view for the basic proportions of the face, the eye lines, the mouth and the nose. The surface transparency effect in ZBrush helped the artist to compare the original Einstein from the photographs with his model. For intuitive modelling, Alexander divided the base model head into eight subdivisions with the correct topology. “I didn’t use high-definition geometry because the polygon density was sufficient for a video,” explains the artist.

To colour the main tones of the facial area, the artist added layers using the Color blending mode

Customised wrinkle look

The specific wrinkles on his face were particularly important for Einstein’s correct appearance. The artist created them manually by first drawing the wrinkles onto the smooth surface of the model’s face using ZBrush Polypaint. The DamStandard Brushes were used for the grooves, while the ClayBuildup and Inflate Brushes helped to quickly build up volume. There were separate layers for the large and small wrinkles as well as the small details such as the pores. As the animation of the eyes also made the back of the eyelids visible, the artist also created an extra layer for this.
Alexander created some of the alphas for the skin structure himself, while some were taken from the ZBrush library. The artist transferred the skin structure from ZBrush by extracting the displacement map and the normal map from the high-poly model. He then exported the low-poly with the UV coordinates and the high-poly version of the model to Substance Painter. Using the Bake Maps function, it was possible to transfer the details from the high-poly model to the low-poly model as textures. “Wrinkles from other people wouldn’t have matched Einstein’s individual facial expressions, so I only created some of the textures procedurally and most of them by hand,” says Alexander.

Einstein’s eyes took the most work. They consist of two objects: the outer sphere of transparent material provides the gloss, while the material of the inner model has textures without gloss

After filling the base layer with a skin colour, the artist created a few procedural noise variations. “The human forehead is yellowish, the cheeks and nose are reddish and the beard area is bluish. To colour these main tones of the facial area, I added a layer with the Color blending mode. I also drew rings under his eyes and moles and age spots on his forehead.”
Einstein’s eyes consist of two objects: The outer sphere made of transparent material serves to add lustre, while the material of the inner model has textures without lustre. Alexander painted the capillaries onto the white part of the eye; the volume of the iris was created using displacement textures. A lot of fine-tuning of the materials was then required for the final look of the eyes, which ultimately took up most of the time in the process.
Once the artist was satisfied with the look of the model, he exported it to Maya. The UV coordinates were created in the Autodesk software. Alexander used the UDIM method for the largest and most detailed textures possible and divided the UVs from the entire head into three large parts. “The face is the most important part of the head, so I created it in full size. The less important parts such as the neck and ears I placed in a second part of the UDIM and the rest I moved to a third. As the scalp is covered by the hair, I was able to reduce the UV size there, giving me extra space for other parts,” explains Alexander.

For the largest and most detailed textures possible, Alexander used the UDIM method and divided the UVs of the whole head into three large parts. He moved the less important parts into the scalp area

In the light of science

For the lighting setup of the scene, the artist used two basic lights in Maya: one from behind and one from the top left. A dome light with an HDRI texture provided the global illumination (GI). “To emphasise Einstein’s role as a scientist, I coloured the light bluish for a kind of laboratory atmosphere. Once I was happy with the lighting situation, I created the skin material. As I had no deadline, I was able to test numerous tools and techniques during the project, most notably creating the CG skin in Arnold. The recently released Arnold update promised that the skin shader would now appear even more realistic, which the images from other 3D artists seemed to confirm. The Einstein project offered me an ideal opportunity to learn the shader and use it in practice,” says Alexander. “At first I was very happy with the shader, because on a model without textures it showed a believable skin effect. But as soon as I added textures, it created a plastic effect.” The artist therefore experimented a lot with the material parameters and mixed the textures. “I even went so far as to split up the colour texture. There was a layer with a bright colour, a pale one, a light one and an extra layer for small veins and birthmarks. I painted all the textures in Substance Painter and blended them in Maya.” It took the artist a lot of time to create a realistic look for Einstein’s skin.
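A minimal sketch of this kind of layered setup in Maya’s Python API – not Alexander’s actual network: a few colour maps blended in a layeredTexture and fed into an aiStandardSurface’s base and subsurface colour. Node names and texture paths are placeholders, and aiStandardSurface requires the Arnold (MtoA) plug-in to be loaded.

```python
# Illustrative skin-shader wiring; names and file paths are placeholders.
import maya.cmds as cmds

skin = cmds.shadingNode("aiStandardSurface", asShader=True, name="skinMtl")
blend = cmds.shadingNode("layeredTexture", asTexture=True, name="skinColorMix")

# Blend several colour maps, as described in the text (bright, pale, veins).
for i, tex in enumerate(["skin_bright.tif", "skin_pale.tif", "skin_veins.tif"]):
    f = cmds.shadingNode("file", asTexture=True, name=f"skinTex{i}")
    cmds.setAttr(f + ".fileTextureName", tex, type="string")
    cmds.connectAttr(f + ".outColor", f"{blend}.inputs[{i}].color")

cmds.connectAttr(blend + ".outColor", skin + ".baseColor")
cmds.connectAttr(blend + ".outColor", skin + ".subsurfaceColor")
cmds.setAttr(skin + ".subsurface", 0.4)  # SSS weight, tuned by eye as in the text
```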

Einstein’s characteristic head of hair was created using Maya Interactive Grooming.

Grooming for hair and jumpers

Once the head model with textures and material was finalised, the artist started on the hair, for which Maya Interactive Grooming was used. Alexander was very pleased with the feature’s performance: “I copied Einstein’s head model and applied the hair to it. I created one grooming description for the tousled, sparse hair on the head and an extra description each for the moustache, eyebrows and eyelashes.”


“I created the outer shape of the head of hair using guides, then switched to brushes, which I used to do almost all the rest of the work. I broke the uniformity of the hair with some modifiers such as noise and some textures.”
The base of the jumper was created with a simple nCloth simulation. To emphasise some of the folds, Alexander shaped the jumper model in ZBrush. “In the next step, it was important to create the UV coordinates so that the fabric tile texture covered the jumper evenly. A black-and-white tileable fabric swatch served as the base texture for the material. I used it both as displacement and as blending information between two brown shades, which served as the colour texture. I then added the noise and finally used Maya Interactive Grooming – the fine hairs create the fuzzy texture of the jumper.”
Thanks to Arnold’s solid work, Alexander hardly had to retouch anything. In post-production, he only created the background – a gradient with lots of blurred formulas – adjusted the contrast slightly, made a small colour correction and added his signature.

The Arnold shader showed a believable skin look until the textures were added

After winning last year, Alexander doesn’t yet know whether he will be back at this year’s animago AWARD: “The level of 3D visualisations is increasing exponentially, and it takes more and more time to create something really impressive. But as soon as I create something good, I’ll definitely be back!”

Data everywhere. But where is that file?
After asking around in preparation for this focus, we stumbled on a few studios mentioning something called Caringo – an Austin-based provider of storage tools, hardware, software and other things. Since it came recommended, we asked what that “storage thing they do” is …

With over a decade of experience in media, medical, high-performance computing and adjacent fields, the people at Caringo have provided storage from very large to very small, in most reasonable configurations. Their tools work with all the big cloud storage providers, such as Microsoft Azure, Amazon Web Services and Google Cloud.

DP: Hello Adrian, my drives are full and I can’t find anything anymore …

Adrian J Herrera: Sorry to hear that – but storage that degrades as it reaches capacity, and a lack of searchability and accessibility, were among the issues the Caringo founders set out to solve. When Caringo was founded in 2005, the storage landscape was different. Enterprise storage devices were large monolithic hunks of metal made of proprietary software and hardware; they needed a forklift to be moved into a data center, and “cloud” storage as we know it today wasn’t publicly available. Enterprise storage systems were expensive and difficult to manage, and the data they stored was laborious to find and deliver. Many organizations and businesses were using tape for their data archive. Tape was also difficult to manage and, of course, data stored on tapes wasn’t online and accessible.

The founders of Caringo knew there was a better way and set out to change the economics of storage by creating a software-­defined storage solution (now known as object storage) that installed on commodity hardware. The storage solution they created is easy to scale and employs automated management, with each file or object stored having a unique ID. All you need is that ID to find a specific item and to access it within your network or over the internet, regardless of where it is stored.

With this new object storage technology, you no longer needed to know the server name, directory path and file name. It sounds pretty normal today, but when we released our first version in 2006, it was a revolutionary approach to storing and accessing data. Since then, we have continued to innovate and enhance our product suite – giving us the most flexible, stable and efficient object storage platform in the market.

DP: Have you worked with VFX / movie companies, and what special requirements did you see that differ from other industries?

Adrian J Herrera: We have production companies, film studios, video-on-demand providers, sports teams and broadcasters as customers, and all rely on VFX in some way. It’s important to note that object storage is a tier 2 or tier 3 storage technology, which means it’s used after something is produced – often as an archive or backup. That said, since object storage is basically a mash-up of storage and a web server, it enables on-demand delivery of content within a network or over the web. Instant access to archives, the ability to stream content from the archive layer, and plugging into specific workflows and asset management solutions are the mission-critical requirements we see most often from M&E customers.

DP: Looking towards classic broadcasting, what are the problems of the people that the VFX studios are delivering to?

Adrian J Herrera: From an asset-archive perspective, classic broadcasters are struggling with reusing content and recalling project files, driven primarily by new on-demand workflows. What they store on tape is taking too long to find and restore. More forward-thinking broadcasters are now deploying object storage as a layer in front of tape, since the files on object storage are instantly available. From a VFX perspective, this means archived project files can now be found and delivered instantly. No need to wait for a tape (or many tapes) to load.

DP: If you see media productions, especially VFX with large-scale image and video files, as well as a load of smaller, KB-sized sidecars: What are your tips for keeping transfer rates reasonable?

Adrian J Herrera: File sizes, number of files and available bandwidth are all reasons why content-driven organizations need on-premises storage – a storage box in the studio network. It’s true that object storage is the enabling technology for all cloud storage services. But, when you are looking at file-access fees in the cloud, every API call (regardless of size) to a cloud service incurs a cost. To keep transfer rates reasonable, depending on your size, you need to keep the assets you will reuse instantly accessible in a location (like your own data center) where access costs are minimal.

For the world of movie production to become more organized, a metadata standard needs to be agreed upon so editing platforms and asset management solutions can leverage the metadata capabilities of the underlying storage layer. Alternatively, the industry could start adopting open NoSQL platforms like Elasticsearch. Of course, it’s easier said than done, but things are moving in the right direction. Artificial Intelligence and Machine Learning will likely play an important role here, automatically populating metadata.
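A minimal sketch of that Elasticsearch idea: index each asset’s sidecar metadata so files become findable by content rather than path. The index name, fields and local endpoint are invented for illustration; the keyword arguments follow the official Python client’s 8.x API.

```python
# Index asset metadata, then find files by shot rather than by path.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local dev instance

es.index(index="assets", document={
    "object_id": "a1b2c3d4",                  # the storage layer's unique ID
    "show": "dragon_reel",
    "shot": "sq010_sh0040",
    "kind": "exr_sequence",
    "colorspace": "ACEScg",
})

hits = es.search(index="assets", query={"match": {"shot": "sq010_sh0040"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["object_id"])
```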

DP: You are offering “S3” storage as well as object storage and Swarm services and software. What does that have to do with my files?

Adrian J Herrera: As with any storage solution (or really any technology), you need to be able to actually use it for it to have value to you. Historically, to use object storage, the application you were using needed a direct integration, because every solution had a proprietary interface. File-system-based storage doesn’t have this issue because it relies on the file system to manage application access via standard storage protocols (like CIFS/SMB and NFS).

This is one of the reasons it has taken so long for object storage to become mainstream. But that’s all changing because of the Amazon S3 protocol. With Amazon’s dominance in the cloud storage space, their S3 API has become a de facto standard. The M&E application ecosystem is now finally catching up and almost all editing and digital asset management solutions either already support the S3 API or are planning to within the year. And, all major object storage solutions also support the S3 API so you can actually use object storage, specifically Caringo Swarm, with your existing applications and files.
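Because the store speaks S3, standard tooling such as boto3 can simply be pointed at it. A minimal sketch; the endpoint URL, bucket and credentials are placeholders, not Caringo specifics.

```python
# Store and fetch an object via the S3 API against an S3-compatible endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://swarm.example.studio",  # placeholder gateway URL
    aws_access_key_id="ACCESS_KEY",               # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

with open("plate_0010.exr", "rb") as fh:
    s3.put_object(Bucket="archive", Key="shows/ep101/plate_0010.exr", Body=fh)

obj = s3.get_object(Bucket="archive", Key="shows/ep101/plate_0010.exr")
data = obj["Body"].read()
```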

DP: So, for a medium-sized studio, how would the transfer work to build a “bulletproof” Swarm environment?

Adrian J Herrera: One of our most recent products, Swarm Single Server, was developed specifically for small studios with limited IT staff. Swarm Single Server is an on-prem, S3-accessible, object-based storage device with built-in content management. The appliance contains all the hardware and software you need to keep archived content online, searchable and web-accessible – secure within your network. It includes 120 TB of capacity and 3 years of support and maintenance, and retails for US$50,000. That comes out to a little over US$0.01 per GB per month over 3 years. If you need more capacity, simply plug in another Single Server. For medium-sized studios storing 500 TB to multiple PBs, it will be more cost-effective for us to design a solution for you on the hardware of your choice.
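A quick sanity check of the quoted per-gigabyte figure, using the numbers from the answer above:

```python
# US$50,000 for 120 TB over 36 months, expressed per GB per month.
price_usd = 50_000
capacity_gb = 120 * 1000    # 120 TB
months = 36

print(f"${price_usd / (capacity_gb * months):.4f} per GB per month")  # ~$0.0116
```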

DP: Does that tie into the different access points like pipeline management ­tools (for example ftrack or Autodesk Shotgun), the user clicking in some OS, the different playblast handlers and review tools and the backup process?

Adrian J Herrera: As I mentioned earlier, we can plug into any application that supports the S3 protocol. You can mount Swarm via Mac OS, Windows, NFS or SMB. We also have the ability to tier data from Windows Storage Servers or NetApp to Swarm (or even Amazon AWS, Microsoft Azure or Google Cloud) via FileFly. We haven’t specifically tested with Shotgun or ftrack.

DP: With long-term storage and years-long shows: What would be your recommended way of keeping retrievable files without breaking the bank?

Adrian J Herrera: As with any software-defined storage solution, it depends on your performance requirements. If you don’t need high performance, you can go with dense servers and optimize for cost. If you need to serve or stream content directly from Swarm or are frequently accessing assets, you probably want to optimize for throughput and use smaller hard drives in a high-capacity chassis.

DP: With that long-term storage: If money wouldn’t play a part, what would the perfect system be in your personal opinion?

Adrian J Herrera: If money didn’t play a part, then any setup that I can manage from my mansion on the Amalfi Coast or remotely monitor from my McLaren Senna would be ideal. All jokes aside, it depends. Object storage is about economically storing massive amounts of content and enabling efficient throughput. It can only go as fast as the underlying infrastructure, so compute, HDD (or SSD), network speed and available bandwidth all play a big part. For a specific example, we have a performance benchmark overview for one of our customers who optimized their cluster for throughput vs. storage capacity. They used 12 Supermicro chassis, each with 45 × 12 TB drives, 2 × 25 GbE NIC ports, 256 GB RAM and 24 cores. They also employed a 100 GbE leaf/spine, super-low-latency network configuration.

DP: Let’s keep looking at that team: What would you recommend in terms of fast storage for smaller teams, let’s say 10 people with about 20 TBs of active data?

Adrian J Herrera: We offer a free 20 TB license with our full-featured Developer Edition. So, for smaller teams with 20 TB or less, our software is completely free. You can run everything in your VM farm if you wish, or you can deploy on dedicated hardware. Running our software on dedicated hardware will always perform better than a VM-based solution. If you are interested, go to http://bit.ly/caringo_register and select “Swarm Developer Edition – complete VM environment” or “Swarm Evaluation software – bare metal deployment” in the “I am interested in” field.

DP: If Swarm and Caringo is too large a feature set, what would be the next step down the ladder in tools you would recommend (thinking about freelancers, one-man bands and specialists)?

Adrian J Herrera: One-man shops probably can’t afford to spend a lot of time managing infrastructure beyond their own workstation, so a wise move would be to use cloud-based services. We recommend using BT’s cloud storage service.

DP: If there is something you could tell people on how not to suffer from data-­overload and delivery anymore, what would that be?

Adrian J Herrera: The first step is accepting the facts: data is no longer deleted, file sizes are increasing, file count is increasing, and access from any device in any location is now a requirement. You will need to take a tiered approach and understand what type of storage you need for your specific requirements. We have an educational blog and webinar on this specific topic that you might find helpful: What are the 5 Tiers of Storage for New Video Production Workflows? And, of course, the Caringo team is also available to help. If you have any questions, just send us an email or give us a call!
