In 2012, Ubisoft Toronto opened an 8,000-square-foot performance-capture studio to support the development of the studio’s first game, Splinter Cell: Blacklist. Seven years – and many games – later, the studio has completed construction on its new 12,000-square-foot, state-of-the-art performance-capture studio. To learn more about the new space, and the ways it will help developers make more cinematic and polished games, we spoke with Performance Capture Director Tony Lomonaco.
Credit: Lauren Green
What does a performance capture director do?
Tony Lomonaco: As the performance capture director, I oversee all the areas our department encompasses. We provide services for Ubisoft productions, including casting, photogrammetry (scanning characters’ faces, clothing, and props), voice-over, R&D and tools development, budgeting, and, of course, performance capture and the processing of all the facial and body data from our shoots. We also work closely with the cinematic team to help bring our stories to life. I’ve been at Ubisoft since we opened our original stage eight years ago, when our team was quite small. Back then, we only handled the shooting of motion capture (mocap) and performance capture (pcap). No casting, no data, no scanning, no voice-over. Now we manage all of those services for our production teams, and we’ve grown to a team of over 35 people!
Why did Ubisoft Toronto feel the need to build a new performance-capture studio?
TL: We are really excited to enter the next phase in the evolution of our performance-capture work at Ubisoft Toronto. We’ve come a long way since we opened our original stage in 2012, where we shot our studio’s first game, Splinter Cell: Blacklist. There have been a lot of advancements in technology, and increasing demand for shooting pcap performances.
Our investment in a bigger performance-capture stage means we can take the quality of our work to a new level, allowing us to be more flexible and innovative to create even better, more immersive games for our players.
We overhauled our setup so that we can capture at any standard framerate to accommodate any production need, whether for film, TV, or games; our previous setup limited us to 29.97 frames per second. Our team built custom 4K reference cameras using modular, industry-standard camera rails to adapt to various support equipment, such as cranes and dollies. Everything is now completely wireless, giving our camera operators and boom operators more freedom.
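To illustrate why the capture rate matters when syncing with film and TV sources, here is a minimal sketch – not Ubisoft’s actual tooling, and the function name `frames_to_timecode` is invented for this example – of converting a frame count into a non-drop-frame SMPTE-style timecode at an integer frame rate. Fractional rates like 29.97 fps need drop-frame compensation to stay aligned with wall-clock time, which is part of what makes being locked to a single rate limiting.

```python
# Illustrative sketch only (not Ubisoft's actual tooling): convert a frame
# count into a non-drop-frame SMPTE-style timecode at an integer capture rate.
# Fractional rates like 29.97 fps require drop-frame compensation to stay
# aligned with wall-clock time.

def frames_to_timecode(frame: int, fps: int) -> str:
    """Return hh:mm:ss:ff for an integer frame rate such as 24, 25, 30, or 60."""
    ff = frame % fps                      # frames within the current second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# The same frame count labels different wall-clock moments at different rates,
# so every device on set has to agree on the rate for the sync to hold.
print(frames_to_timecode(1800, 24))   # 00:01:15:00
print(frames_to_timecode(1800, 30))   # 00:01:00:00
```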
Credit: Lauren Green
Ubisoft Toronto already had a smaller pcap studio; what is the advantage of having a bigger space? What sorts of things will a bigger studio allow you to do that the smaller one didn’t?
TL: Our ceiling is twice as high as the original stage, giving us the ability to shoot more complex stunts. As soon as our setup was complete, we didn't waste any time getting back at it – on day one of shooting in the new space, we did a rappel from 30 feet high!
With a bigger stage, we have more options and opportunities to capture more elaborate scenes and dream bigger. The larger volume allows us to fit full virtual sets within our space – instead of dividing the set into two and doing two different takes, we have the flexibility to capture it in one shot. We also use technologies like VR and AR to create more immersive experiences and better visualization for our performers and directors.
The bigger stage also allows us to bring in even larger equipment, like a 25-foot technocrane, so we can capture our creative vision on a grander scale, like sweeping shots through London’s iconic Piccadilly Circus in Watch Dogs: Legion, using our virtual camera.
To the untrained eye, a pcap studio just looks like a large open space. What goes into building a performance-capture studio? What makes a good pcap studio versus a bad pcap studio?
TL: Besides the building itself, there’s so much that goes on behind the scenes that nobody sees. Every aspect of what we capture is timecode-synced, which is crucial in performance capture for many reasons – particularly because of our editing process. Our post-production pipeline relies on accurately synced data – facial, audio, mocap, video – which gets sent to our video editors, who cut together an edit of our cinematic. Once it’s validated by our cinematic director, we use the timecode from the edit to order the data. We generate a lot of data every week, so we have custom scripts and software that automate a lot of the post-processing.
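As a rough illustration of what ordering data by timecode can look like, here is a hypothetical Python sketch that takes timecode ranges pulled from a validated edit and returns the captured takes that overlap them, in edit order. All names and data shapes (`Take`, `takes_used_in_edit`) are invented for this example; Ubisoft’s actual scripts and pipeline are custom and not described in detail here.

```python
# Hypothetical sketch of the general idea: use timecode ranges from a validated
# edit to decide which captured takes (facial, audio, mocap, and video all share
# the same timecode) get pulled for processing, in the order they appear in the cut.
# Names and data shapes are invented for illustration.

from dataclasses import dataclass

@dataclass
class Take:
    name: str
    tc_start: int   # start timecode, expressed in frames
    tc_end: int     # end timecode, expressed in frames

def takes_used_in_edit(takes: list[Take], edit_ranges: list[tuple[int, int]]) -> list[Take]:
    """Return the takes whose timecode ranges overlap the edit, in edit order."""
    ordered = []
    for cut_in, cut_out in edit_ranges:          # cuts as they appear in the edit
        for take in takes:
            if take.tc_end >= cut_in and take.tc_start <= cut_out and take not in ordered:
                ordered.append(take)
    return ordered

takes = [Take("scene12_takeA", 1000, 2500), Take("scene12_takeB", 3000, 4200)]
edit = [(3100, 3500), (1200, 1400)]              # timecode ranges pulled from the cut
print([t.name for t in takes_used_in_edit(takes, edit)])  # ['scene12_takeB', 'scene12_takeA']
```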
But the most important thing you absolutely need if you want to build the best performance-capture studio is a stellar team. There is no possible way anything like this can be put together without the most talented, passionate, and innovative team in the world. A huge shout-out to our capture manager, Brad Oleksy, and the team. It’s because of their hard work that we’re able to do what we do, and do it so well.
So how about the building itself?
TL: Most performance-capture stages are created from retrofitted spaces, which was the case with our original stage. But this new studio is completely custom-built from the ground up, and purposely designed for performance capture. It’s actually a building-within-a-building, where the inner walls and floor are isolated from the outside walls and foundation, so it’s completely separated from the outside world.
The inner building rests on 2-inch cubes made of a specialized rubber compound, spaced 4 feet apart, creating an air gap between the foundation and the floor of the inner space; that isolation also makes it a true soundstage with world-class acoustics.
Credit: Lauren Green
Is there a difference between performance capture and motion capture? What kinds of things are performance-captured that players might not expect?
TL: Some people use mocap and pcap interchangeably, but there is definitely a difference between the two. Generally, mocap refers only to capturing the performer's body movement, whereas performance capture is that and more – it covers body motion, facial expression, and voice performance.
“Mocap” is the term that's always been used, and the craft has evolved as an art form. At Ubisoft, we capture more than just motion – we capture a full performance. Performance capture, or pcap, is now a common industry term. We still use both terms, but they have very distinct meanings.
How has performance capture evolved in recent years with advancements in technology?
TL: Technology has always been at the heart of mocap and pcap, and it’s now more important than ever. As our performance-capture studio generates increasing amounts of data – over a terabyte per week of mocap, reference-camera, head-mounted-camera, and audio data – we need to find ways to better curate that data and be more efficient in getting it to our artists. As an example, in certain cases we now take advantage of the capabilities of machine learning to process and deliver data to our animators even faster, so they can fine-tune the animations seen in our games.
On set, we’re using our game engines to visualize the performances within our virtual environments, an approach that helps ensure accurate contact and interaction between the performance and the elements of that environment.
What do you see as the next steps in the evolution of performance-capture technology?
TL: We already capture motion really well, but I’m excited to see how technology will keep evolving to capture more subtleties in facial performances to create even more believable digital humans. We currently use head-mounted cameras (HMCs) to capture facial performances, but in the future, it would be really cool to see a technology that eventually removes the need for HMCs altogether. That would free our actors from the HMC hardware and let them take their performances to the next level.