Here are two VFX and virtual production tech reveals from SIGGRAPH.

ILM showcased Oscar, a suite of tablet-based user interfaces for StageCraft, while Sony Pictures Imageworks unveiled its approach to building light rigs, including with the help of machine learning tools.

Sometimes at SIGGRAPH, visual effects studios get to showcase their wares with some extra fun details not previously revealed elsewhere. I feel like that’s the case with two great Talks presented by Industrial Light & Magic and Sony Pictures Imageworks at the conference.

ILM’s Oscar tablet for StageCraft

David Hirschfield and Mike Jutan from ILM discussed the studio’s Oscar, a suite of tablet-based user interfaces for StageCraft, the virtual production platform built by ILM and made famous by The Mandalorian.

From their associated paper, here’s a note about how Oscar was made:

“Oscar was built using Pythonista, an iPad-based Python IDE, which provided access to all the native iPadOS APIs, unlocking complete creative freedom to design advanced user interfaces. With this platform, we were able to build custom controls which were key to meeting the interaction needs of our users. Our architecture allows us to separate unique stage roles into modular interface “panels”, allowing the user to focus on one task at a time, yet enabling quick context-switching as needed. Using Python for business-logic ensured a straightforward integration into ILM’s proprietary DCC, Zeno. The panels are built using our custom WYSIWYG interface design application, Oscar Editor, for rapid prototyping and tweaking of designs.”
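
As a rough illustration of that panel idea, here is a minimal Python sketch of a host that swaps between single-role panels. Every class, method, and role name below is hypothetical and purely for illustration; this is not ILM’s actual Oscar API, just the general pattern the paper describes.

```python
# Illustrative sketch only: names here are hypothetical, not ILM's Oscar API.
# Each stage role is a self-contained "panel"; a host activates one panel at
# a time so the user focuses on one task, with quick context-switching.

class Panel:
    """Base class for a single-role interface panel."""
    name = "base"

    def activate(self):
        # Build/refresh this panel's UI when it becomes the focused task.
        print(f"[{self.name}] activated")

    def deactivate(self):
        # Tear down or pause the panel on a context switch.
        print(f"[{self.name}] deactivated")


class WallControlPanel(Panel):
    name = "wall-control"   # hypothetical stage role


class ColorPanel(Panel):
    name = "color"          # hypothetical stage role


class PanelHost:
    """Registers panels and switches between them."""

    def __init__(self):
        self._panels = {}
        self._active = None

    def register(self, panel: Panel):
        self._panels[panel.name] = panel

    def switch_to(self, name: str):
        if self._active:
            self._active.deactivate()
        self._active = self._panels[name]
        self._active.activate()


host = PanelHost()
host.register(WallControlPanel())
host.register(ColorPanel())
host.switch_to("wall-control")  # focus on one task...
host.switch_to("color")         # ...then context-switch as needed
```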

You can read the paper and see what Oscar looks like here.

Imageworks’ use of depth estimation to build light rigs

Here, Imageworks’ Sergey Shlyaev described the techniques the studio uses to build light rigs, starting with a panoramic HDRI captured on set. The usual workflow is to extract area lights from the HDRI, place them in a 3D scene, and align the lights with lidar scans from the set. Machine learning now automates parts of this process: for example, the extracted lights can be positioned automatically using PatchFusion, an off-the-shelf machine learning model for high-resolution monocular depth estimation.
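
To make the placement step concrete, here is a hedged Python sketch: a bright source found at normalized coordinates (u, v) in a lat-long panorama is converted to a direction vector and pushed out along that ray by an estimated depth. The function names, the Y-up convention, and the use of NumPy are all assumptions for illustration, not Imageworks’ actual code.

```python
# Hedged sketch, not Imageworks' code: the core geometry of placing a light
# extracted from a lat-long HDRI at a 3D position, given a per-pixel depth
# estimate (e.g. from a monocular model such as PatchFusion).
import numpy as np

def latlong_pixel_to_direction(u: float, v: float) -> np.ndarray:
    """Map normalized lat-long coords (u, v in [0, 1]) to a unit direction.

    u spans longitude (-pi..pi), v spans latitude (pi/2..-pi/2), assuming a
    Y-up convention; the exact convention varies per pipeline.
    """
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    return np.array([
        np.cos(lat) * np.sin(lon),  # x
        np.sin(lat),                # y (up)
        np.cos(lat) * np.cos(lon),  # z
    ])

def place_light(u: float, v: float, depth_m: float) -> np.ndarray:
    """Position = camera origin + direction * estimated depth (meters)."""
    return latlong_pixel_to_direction(u, v) * depth_m

# e.g. a bright source near the top-center of the panorama, ~4 m away:
print(place_light(0.5, 0.25, 4.0))
```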

Shlyaev also discussed Spherelight, an Imageworks tool used to make light rigs from a panoramic lat-long map. Spherelight is a standalone, GPU-accelerated 2D engine that runs outside of Katana.
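
For context on the “extract area lights from the HDRI” step that such a tool starts from, here is a small hedged sketch: threshold the panorama’s luminance and group bright pixels into connected regions, each a candidate area light whose centroid can feed the depth-based placement shown above. This is not Spherelight itself; the helper names and the use of SciPy’s connected-component labeling are assumptions.

```python
# Hedged sketch, not Spherelight: threshold a lat-long luminance map and
# group bright pixels into regions, each a candidate area light.
import numpy as np
from scipy import ndimage  # assumption: SciPy available for labeling

def extract_light_regions(hdri_luminance: np.ndarray, threshold: float):
    """Return (centroid_v, centroid_u, total_energy) per bright region."""
    mask = hdri_luminance > threshold
    labels, count = ndimage.label(mask)
    h, w = hdri_luminance.shape
    regions = []
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labels == i)
        energy = float(hdri_luminance[ys, xs].sum())
        # Normalized centroid in the lat-long map (naive near the seam;
        # a production tool would handle longitude wrap-around).
        regions.append((ys.mean() / h, xs.mean() / w, energy))
    return regions

# Toy example: a tiny "HDRI" with one hot spot.
lum = np.zeros((8, 16))
lum[2:4, 6:9] = 50.0
print(extract_light_regions(lum, threshold=10.0))
```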

You can read the paper and download images and even video here.
