“In a way, the performer and the scene operator were dancing,” wrote Cloudhead Games. The studio was referring to a skunkworks-style experiment in which the Cloudhead team developed a VR-based motion capture system for actors.
Yes, motion capture (MoCap): the same technology used to build virtual characters in games and movies, the one that usually requires plenty of studio space, copious green screens, actors decked out in node-covered green suits, and tennis balls on sticks as stand-ins for everything from an item to grab to a monstrous dragon to fight. And a lot of money.
The Cloudhead team has scrapped all of that. Instead, they created their own “performance rig,” built a virtual environment, and put an actor wearing an HTC Vive HMD inside of it. As you can see in the video they created, they did all of this in what looks like someone’s attic.
“It was a no-brainer that we were going to put the actor in VR to allow them to reference the locations where they would be performing,” wrote Cloudhead Games, “but then the question became: why take them out?”
The power of such a setup is obvious. Instead of interacting with a tennis ball on a stick, actors are immersed in the animated world their character will inhabit. They can interact with that world, react to the elements within it, and even control it, in (almost) real time.
Even better, the Cloudhead guys captured the audio during the performance, so Adrian Hough, the actor playing the part of a dark character named “The Watcher,” could give his entire performance all at once. “The Watcher” is a character in Cloudhead’s new series, The Gallery, Episode One: Call of the Starseed.
Cloudhead worked with Noitom, making use of the latter’s Perception Neuron MoCap suit. “After a short ramp-up time we were able to capture motion data comparable to what we would expect to get from a MoCap system that lives far outside of our budget,” said Cloudhead. The team said that the flexibility afforded by the suit and the system allowed them to iterate and experiment quickly, and “stub-in animations to ensure that they work perfectly in engine.”
They built the performance rigs using Unity, and the action happened within a 15x15-foot space. Dan, the cinematic designer, “pulled out some sort of dark magic to fit those scenes in a 15x15 capture space, without causing the actors to puke with aggressive vigor.”
One brilliant idea they came up with was inserting a cue-card system into the game so that the actor could read lines if necessary, improvise, and receive cues from the director. The team was able to externally trigger events within the experience for Hough so the environment would react to him. This apparently took a “few moments,” so there was a bit of natural lag, but that, they wrote, turned into the “dance” mentioned at the top of this article.
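The external-trigger loop described above (director fires an event, the environment reacts for the actor after a short natural lag) can be sketched as a simple timed event queue. This is an illustrative Python sketch only, not Cloudhead's implementation; the names `DirectorConsole`, `trigger`, and `drain_ready` are hypothetical.

```python
import time
from collections import deque

class DirectorConsole:
    """Toy sketch of an externally triggered event queue: the director
    enqueues cues and environment events, and the in-engine update loop
    drains whichever ones have become 'ready' each frame."""

    def __init__(self, latency_s=0.0):
        # latency_s models the "few moments" of natural lag between the
        # director's trigger and the environment reacting in-headset.
        self.queue = deque()
        self.latency_s = latency_s

    def trigger(self, event_name):
        # Record the earliest time this event may fire for the performer.
        self.queue.append((event_name, time.monotonic() + self.latency_s))

    def drain_ready(self, now=None):
        """Return every queued event whose latency window has elapsed."""
        now = time.monotonic() if now is None else now
        ready = []
        while self.queue and self.queue[0][1] <= now:
            ready.append(self.queue.popleft()[0])
        return ready

# Usage: the director shows a cue card and fires an environment event;
# the engine's next update picks them up once the lag has passed.
console = DirectorConsole(latency_s=0.0)
console.trigger("show_cue_card:line_12")
console.trigger("environment:lights_flicker")
print(console.drain_ready())
```

With a nonzero `latency_s`, events stay queued until their window elapses, which is one way to reproduce the slight delay the team described as a "dance."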
Another side effect of this technology is that because the actor is immersed in a virtual world, not only is it easier to act within the environment rather than against green-everything, but the real world is also almost entirely blocked out. Even traditional film actors have to be careful to pretend that there are no cameras and crew just outside their periphery. Inside a VR experience, actors have virtually no such distractions, yet they can still hear the director and get cues if need be.
“In the future we will: build in more systems to allow the actor greater control over their performance rig, tightening the process loop to allow us to quickly review performances in (near) real time – in engine, and creating robust environments that will interact with the actor's performance in more meaningful ways,” wrote the studio.