
‘The Irishman’ VFX Supervisor Pablo Helman On The “Great NASA Project” Of Developing Gangster Epic’s De-Aging Technology


On Martin Scorsese’s The Irishman, VFX Supervisor Pablo Helman faced the “astronomical” challenge of de-aging Robert De Niro, Al Pacino and Joe Pesci for a gangster epic that jumps back and forth in time. To pull the job off, his team at ILM developed groundbreaking software and a three-camera rig.

Describing the endeavor as “a great NASA project,” Helman embraced a particular weight of responsibility with this film, knowing that if his team couldn’t get their system working within a period of two years, the movie could not be made.


For Scorsese, what was key with effects on The Irishman was that technology would not interfere with his work on set, or that of his A-list stars. In this scenario, conventional facial markers went out the window. Their replacement was the “Three-Headed Beast,” a rig comprising one conventional digital camera and two infrared cameras, which would capture facial performance without the visual interference of shadows. Helman and his team then fed the massive amount of data acquired from this camera system through new software called FLUX, using artificial intelligence to further elevate the de-aging work.

Heading into his latest film with Scorsese, Helman “knew that Marty was so exacting when it comes to performances and the narrative content of the film that there was absolutely no room for error. That’s when you start thinking about the trust that you have in the team that works with you. You remember all the war stories that we had together at ILM, and how we all managed not to drown in problems,” he says. “With that in mind, I convinced Marty that we could do this.”

A three-time Oscar nominee known for such films as Star Wars: Episode II – Attack of the Clones and War of the Worlds, Helman found great satisfaction in The Irishman, given that his creative focus on the film was markedly different from what it had been on past films, as well as on those he’s reading scripts for now. “It’s very difficult to go from a project like The Irishman, where I spent four years of my life thinking about performance and communication, to the majority of visual effects opportunities,” he says, “which are great, but not based on performance.”

Below, Helman details the process of developing technology unlike any used on set before, which allowed him to pull off de-aging across the film’s 1,700 VFX shots. Additionally, the effects pioneer reflects on what he sees as the cinematic future of the system he developed.

DEADLINE: It’s been widely reported that your first VFX test with Robert De Niro was key in getting The Irishman green lit. But what was it that convinced you early on, in your first read of the script, that you could execute the de-aging work the film required?

PABLO HELMAN: Scientifically, the first test gave us the certainty that we could pull geometry from light and textures. That was kind of a proof of concept, so I had to convince a lot of people around me to come up with a production model that was fiscally responsible, that we could actually complete a movie with.

Having said that, I trust the natural movement of trends. Visual effects was moving towards a markerless technology, and it was a matter of time. It’s either somebody else does it, or we do it. I also checked around the visual effects community, because we are a lot of people, but we’re very close. We all have worked with each other, and nobody was following the markerless technology because it was a huge gamble and a huge challenge. But we all knew that naturally, it’s going to move towards no markers, because we need to get the technology away from the actors. We need to let them do what they do.

DEADLINE: How did you come to the idea of using infrared witness cameras to capture facial performances?

HELMAN: The choice of infrared technology came because of the handicap of the technology itself. When we did the first test, we didn’t have infrared; we had three regular cameras, but we realized that the software had trouble with very sharp changes of contrast. If you have very sharp shadow lines, the software doesn’t do as well.

So then, all of us basically understood why we work with a controlled environment—because when we work with a controlled environment, we control the lights, and those shadows are not there because they’re very soft. So the idea was, “How can we bring the controlled environment into the set without changing the light of the DP?” And the only way to do that was to go into a different spectrum. Actually, we talked about ultraviolet first, but ultraviolet, you can see. It alters the lighting, so we had to go to a spectrum that wasn’t seen by the human eye. But it took us months of sitting down at a table, thinking about how we could alter the light without changing what the DP is doing.

We did tests. We started getting our own cameras, [adding] filters and taking filters [out]. It was a real NASA moment, a moment in which we all played with stuff.
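
That shadow problem is easy to make concrete. Here is a toy sketch in Python (not ILM code) of why a solver that reads shape from light trips on hard shadow lines: a crisp shadow edge produces the same kind of intensity spike as a real geometric edge, while a diffused one barely registers.

```python
import numpy as np

# Toy illustration (not ILM code): compare the intensity gradient across
# a hard shadow line and a soft one covering the same brightness falloff.
x = np.linspace(-1, 1, 201)
hard_shadow = np.where(x < 0, 1.0, 0.3)            # crisp shadow line
soft_shadow = 1.0 - 0.7 / (1 + np.exp(-3 * x))     # same falloff, diffused

print(np.abs(np.gradient(hard_shadow)).max())  # ~0.35 per sample: reads like a real edge
print(np.abs(np.gradient(soft_shadow)).max())  # ~0.005 per sample: barely registers
```

Soft light spreads the same brightness change across the whole frame, which is exactly what a “controlled environment” buys you.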

DEADLINE: To my understanding, there was no precedent for using infrared cameras on set in the way you did.

HELMAN: No, there isn’t. It’s not used because that spectrum is not seen by the human eye, so there’s absolutely no research there. When we started looking into infrared, we realized the depth of field is different than the RGB camera, so if we put the same lenses, the depth of field is different. That means that we’re going to need two operators—one to pull focus for the infrared cameras, and one to pull the director’s focus for the RGB. The other thing that infrared has a problem with is flares; we started to get flares, from materials like metal or wood, that we weren’t familiar with.
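
The focus mismatch Helman describes has a textbook optical cause: glass refracts infrared slightly less than visible light, so a lens focuses IR at a slightly different plane (older lenses carried a separate red IR focus mark for exactly this reason). A rough thin-lens sketch, with illustrative numbers rather than the production’s actual optics, shows how little room for error the focus pullers had:

```python
# Illustrative thin-lens arithmetic (not the production's actual optics).
def depth_of_field(f_mm, N, s_mm, c_mm=0.025):
    """Near/far limits of acceptable focus.
    f_mm: focal length, N: f-number, s_mm: focus distance,
    c_mm: circle of confusion (sensor-dependent)."""
    H = f_mm ** 2 / (N * c_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# A 50mm lens at f/2.8 focused 2m away holds only ~22cm of sharp depth,
# so an IR focal plane shifted even a few centimeters needs its own puller:
print(depth_of_field(50, 2.8, 2000))   # ~(1896mm, 2116mm)
```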

[But] the moment we realized, if we flood the set with infrared, then we’re going to solve the problem with the shadows, we started working with ARRI in Los Angeles and Rodrigo [Prieto, cinematographer]. Because now, we started talking about changing some of the ways we were going to shoot this movie. Rodrigo had a lot to say, and I didn’t want to intrude into that methodology, but then ARRI was great. We started with the ALEXA Minis, and they told us immediately, “We can modify those cameras to be completely infrared, which nobody uses, in the hardware and software.” It was a big commitment from ARRI to say, “Okay, if you want to do this, we can do it. But we need to modify a set of six cameras for you.”

So, it took two years for us to come around and realize that infrared technology was going to help us. Then, the last part of it was, “How do you like the accuracy of the infrared?” We thought at the beginning that if we just flooded the set with infrared—those big, square, infrared lights that you can buy—then you were going to be okay. But then we realized that once you do that, you still get shadows, and you will see them in the infrared cameras, unless you throw the light from the same angle as the lens, and you focus your infrared rays. That was also a scientific development—that once we aligned the infrared ring lights to the lens, the shadows went away.
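
The trick Helman lands on at the end—throwing the infrared from the same angle as the lens—is simple similar-triangles geometry: a shadow only reads on camera when the light and the lens see the occluder from different positions, and a ring light on the lens axis hides the shadow exactly behind the object. A toy calculation (not ILM code):

```python
# Toy similar-triangles check: how far a shadow lands from the camera's
# line of sight through a small on-axis occluder, given a point light
# offset laterally from the lens.
def visible_shadow_offset(light_offset_mm, occluder_mm, wall_mm):
    return light_offset_mm * (wall_mm - occluder_mm) / occluder_mm

print(visible_shadow_offset(600, 2000, 3000))  # off-axis key light: 300mm of visible shadow
print(visible_shadow_offset(0, 2000, 3000))    # light on the lens axis: 0mm -- shadow hides behind the object
```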

Then, we needed to get the software to compute—to take a look at the two infrared cameras and the RGB, and then [have them] communicate with each other—and then come up with a composite that gives us the geometry.
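
FLUX is proprietary, and the interview offers no detail on its internals, but the core of what Helman describes—three calibrated views agreeing on where a facial feature sits in space—is textbook multi-view triangulation. A minimal stand-in sketch, which assumes the genuinely hard part (finding and matching features on a bare, markerless face) is already solved:

```python
import numpy as np

# Textbook linear (DLT) triangulation -- a stand-in for the geometry step,
# not ILM's FLUX. Each camera contributes two linear constraints on the
# homogeneous 3D point X; SVD gives the least-squares solution.
def triangulate(projections, pixels):
    """projections: 3x4 camera matrices (here: RGB plus two IR witnesses).
    pixels: matching (x, y) image coordinates of one facial feature."""
    rows = []
    for P, (x, y) in zip(projections, pixels):
        rows.append(x * P[2] - P[0])    # x * (row3 . X) = row1 . X
        rows.append(y * P[2] - P[1])    # y * (row3 . X) = row2 . X
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                 # de-homogenize

# Toy check with two ideal cameras one unit apart:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])
X_true = np.array([0.2, 0.1, 5.0, 1.0])
pix = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], pix))       # -> [0.2 0.1 5.0]
```

Repeat that over thousands of tracked features per frame and you have a moving facial mesh—the “composite that gives us the geometry.”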

DEADLINE: What exactly does the process look like, early on, when you’re setting out to develop a new technology or visual effects technique?

HELMAN: It’s a combination of a lot of playtime, and a very organized way of managing that playtime. The moment we realized that we had the infrared technology to play with, we brought in the R&D department. That’s 30 people, and what they do is, they organize the research into a calendar that tells us, “Okay, you’ve got three weeks to play with this. You need to solve this specific problem. Once you solve this, you have three other weeks to do this, and then we can get people to write pieces of software to support this.”

It’s a big family of science that is based on the beautiful images that you’re creating. But at the same time that we’re playing, we’re also being responsible from a production point of view and scheduling everything, because we knew that by March of 2017, we needed to shoot, and we needed to have a solution for that.

DEADLINE: You’ve said that ILM is continuing to refine the technology you developed for The Irishman, making the software more efficient while bringing the size and weight of the camera system down. How long do you think it will take for this set of tools to become more widely accessible?

HELMAN: Well, there’s a project that was shot right after ours that already used a rig that was half the size, with different cameras that were a lot less heavy, so that has already happened, and we’ll keep refining that rig because the smaller the rig is, the better it is. At the end of the day, that rig actually could be used for lots of different things. It’s not just for de-aging; it’s mainly used for acquiring the [largest] amount of information. That’s why you have three cameras instead of one.

There is also some other technology that we’re trying to develop, which has to do with invisible markers that we will now use. The software is getting faster, and also, we have a lot more artists that are very familiar with the software. When we started the project, there were no artists that could work in the software because it was brand new, so we had to train a lot of people. But we are definitely making a statement about taking the technology out of the actor’s face, as opposed to putting a lot more technology in the middle of the creative process.

One more thing I’d like to mention is the artificial intelligence part of it, which is also part of the science that is naturally going to move into production eventually. It’s not quite ready yet, because the artificial intelligence is [only] as good as your database, how big your images are. But the way we used artificial intelligence in this show is, after we rendered a face of one of the actors, we would feed it into this program. The program would race through our database of all the movies that we had—hundreds of movies of the actors—and come up with faces that were very similar to what we were rendering. In that way, we would use it as a kind of checker, a place for us to check our work and make sure that it’s in the universe of the actors that we were working with.
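
The interview doesn’t name the tooling behind that checker, but what Helman describes behaves like a nearest-neighbor search over face embeddings: map every archival frame and every rendered frame to a feature vector, then ask how close the render sits to the real actor. A minimal sketch under that assumption, with the embedding model itself left abstract:

```python
import numpy as np

# Hypothetical sketch of an AI "checker": assumes faces have already been
# mapped to unit-length embedding vectors by some face-recognition network
# (the interview does not describe ILM's actual model or pipeline).
def nearest_reference_faces(render_vec, library_vecs, k=5):
    """render_vec: embedding of a rendered, de-aged frame.
    library_vecs: (n, d) embeddings of archival frames of the actor.
    Returns indices and scores of the k most similar archival frames."""
    sims = library_vecs @ render_vec            # cosine similarity for unit vectors
    top = np.argsort(sims)[::-1][:k]
    return top, sims[top]

# Toy usage: a render close to archival frame 42 should retrieve it first.
rng = np.random.default_rng(0)
lib = rng.normal(size=(100, 128))
lib /= np.linalg.norm(lib, axis=1, keepdims=True)
query = lib[42] + 0.05 * rng.normal(size=128)
query /= np.linalg.norm(query)
idx, scores = nearest_reference_faces(query, lib)
print(idx[0])   # 42 -- the render lands "in the universe" of the actor
```

A render whose nearest archival neighbors all score low is a flag that the face has drifted out of that universe.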

DEADLINE: How do you foresee this technology being utilized in production?

HELMAN: Well, I think there is a place for the computer to take a look at an image and come up with a correlation of that image in a different world, and that is basically what the computer does best. It’s basically comparing and coming up with a result that we humans are not very good at, because we get really tired and bored really quickly, and make mistakes the computer doesn’t.

So for instance, let’s just say that we had a library of 100 movies that Robert De Niro did from 1975 to 1990, and the images were high resolution, like 4K or 8K. We could come up with a system in which the performances that he gave, as a contemporary actor—as a 76-year-old—would correlate to those movies. You could come up with something that looks exactly like those older movies. The technology’s not there yet, but artificial intelligence will come into play to do all kinds of matching.
