Chuck McIlvain, Character Pipeline Supervisor at Sony Pictures Imageworks, talks to CHIP about how they wove the magic of Spider-Man 3.
The biggest superhero movie of 2007, Spider-Man 3 marks the arrival of highly complex hybrid CG and photography-based characters like Sandman and Venom. The Spider-Man series has been massive, not just budget-wise but also in the technologies used to make Tobey Maguire leap out and go swinging between the skyscrapers of New York. In its third installment, the movie featured two of the most visually challenging villains in the Spider-Man universe. Chuck McIlvain tells us how the team at Sony Pictures Imageworks tackled these problems and delivered a movie of great visual appeal, and talks about its various effects and fight sequences.
What were the new visual effects used in Spider-Man 3 compared to the previous films?
This time the key advances were in the effects arena. We have done well in setting up a consistent rig; animators are getting used to our rigs and our controls. We had to work a lot on the effects side: for the Sandman character, we had to entirely rewrite our instancing and rendering code. It took ten man-years of combined effort to write all that code.
One of the bigger things we did on the film is editing within the frame. This was done for the fight sequence between Peter and the Goblin. We mixed all the elements together: multiple 2D and 3D pieces interacting with an all-CG environment, layering those together properly, and letting the animator move them around the frame.
We were constantly editing the motion of these blue-screen elements all over the place and then adding effects; the Goblin had a lot of such effects. So you have match-move rotomation attached to something that you need to fly around, and then you do iteration after iteration on that. That was something we simplified in Maya with a handle.

Then, on the effects and character side, there was the tie between effects and character animation, and that line is getting blurred more and more. Being able to quickly kick a character performance over to effects, have them run flow simulations or dynamics on it, and then kick it back to animation, at least to visualize it in Maya (or whatever you use for animation), is a necessary thing. Venom and the goo were an example: the back and forth between effects and character animation is getting closer, and you need some visual representation of the work in whichever package you are working in. That crossover between multiple packages is something every facility has to figure out.

And then there was the modeling of the cities. Most people wouldn't notice the CG cities, because there's a huge character in the scene, and I think we really mastered that on the texture side.
We ended up doing a lot more modeling this time, and we perfected all the lighting passes to get the photorealism.
What technique did you use for Venom?
We ended up using a muscle system to rig Venom this time. It helps build the muscular shapes for such a character and makes the sliding of the skin over those shapes a little easier: it gives you a sense of the biceps and allows the skin to move over a static shape that represents the muscle. So that kind of thing helps a character like Venom. There were also lots of transformation shots, so we ended up applying the goo rig on top of his face. Those animations together drove the transforming rigs, like the mouth, concealing and revealing the face with the goo rig we set up in Maya, and then we kicked all of that to rendering.
Did you use motion capture for Spider-Man 3?
We did some facial mo-cap, but we also did keyframe animation for all the CG characters. Then there was stunt work, motion-control cameras, and some miniature shoots. We started with facial mo-cap shoots of the characters to render the faces and capture the performances; the plan was to use lines of dialogue to build up a library of motion capture for the face. In the end, though, we did more shots with face replacement, which is photography of the face applied to the CG body.
Because of the level of close-ups we got and the intricacy of the performance that was needed, the director wanted to see much more of his actors' expressions. So we ended up doing a lot more face replacement.
How much of the technology used in Superman was also used in the Spider-Man films?
We used very similar technologies. All of the body scans and modeling techniques, the light stage, and the capturing of the characters were similar, and we got terabytes of textures. The Light Stage 2 technique spins a light around the entire character through 360 degrees. Those captures were then used as reference: when you do a CG light, you mimic that same lighting scenario in CG and figure out what textures to blend on the face.
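The idea behind blending those light-stage captures can be sketched briefly: because light transport is additive, a subject under any new lighting can be approximated as a weighted sum of the photographs taken one light direction at a time. Here is a minimal illustration in Python; the array shapes and the helper name are assumptions for the sketch, not the production code:

```python
import numpy as np

def relight(basis_images, weights):
    """Image-based relighting sketch: combine per-light-direction
    photographs (as captured by a light stage) into one image.

    basis_images: array-like of shape (n_lights, H, W, 3),
                  one photo per light direction
    weights:      array-like of shape (n_lights,), the intensity
                  of each light in the new lighting environment
    Returns an (H, W, 3) image under the new lighting.
    """
    basis = np.asarray(basis_images, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    # Contract the light-direction axis: sum_i w[i] * basis[i]
    return np.tensordot(w, basis, axes=1)
```

Linearity is what makes this work: doubling a light's weight doubles its contribution to every pixel, so any lighting environment reduces to one weighted sum over the captured basis.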
What work on Spider-Man 3 was done in India?
We outsourced a lot of matchmoves. But people are reluctant to outsource too much because they want the ability to change things really quickly and have a sit-down with the client. So turnaround time is an issue, even though satellite links and the like are making things easier and faster. Nothing beats face-to-face interaction, and clients want to see an hourly or daily turnaround. That's why we chose to outsource the front-end pipeline work and not as much animation.
Tata Elxsi's Visual Computing Labs, from India, got credit on Spider-Man 3.
What potential do you see in India for outsourcing VFX and animation?
India is poised to take on more of the effects side of the work: full sequences, not just the partial bits that are outsourced today, with assets being passed across facilities. There's potential for doing more work in animation, all the way through to compositing and lighting.
We did a lot of matchmove outsourcing because it is easier to do; you can pass the data back and forth. But when it comes to character animation and lots of deformers, it is proprietary work, so it is difficult to pass characters outside the building; you just can't hand over all that intellectual property. So facilities will have to come up with their own, equally innovative technologies and keep building them up. For instance, there are so many plug-ins that can be added to Maya.
What are the challenges you face when working with global teams? What’s the strategy for solving workflow problems?
The simplest things become difficult when you work with global teams. One of them is naming conventions. If you start with a naming structure and stick to the conventions, you get things going. What that does for the entire facility is standardize the tool sets, so that you can write tools that expect consistent input instead of letting people all over the board do as they please. A lot of these mundane data-management tasks can then be handled with libraries and automated processes.
If your images, files, and textures follow standardized naming conventions and you stick to your database, viewing material is quick. You can say things like "bring up the last 30 versions" and it works fast. When we outsourced, we received files with different names, and that's understandable.
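As a minimal sketch of why such conventions pay off, here is what one might look like in Python. The field layout, regular expression, and helper names below are hypothetical, not Imageworks' actual scheme:

```python
import re

# Hypothetical convention: show_seqNNN_shNNNN_asset_pass_vNNNN.ext
PATTERN = re.compile(
    r"(?P<show>[A-Za-z0-9]+)_(?P<seq>seq\d+)_(?P<shot>sh\d+)_"
    r"(?P<asset>[A-Za-z0-9]+)_(?P<rpass>[A-Za-z0-9]+)_"
    r"v(?P<version>\d+)\.(?P<ext>[A-Za-z0-9]+)$"
)

def parse_name(filename):
    """Split a conforming filename into its standardized fields."""
    m = PATTERN.match(filename)
    if not m:
        raise ValueError("non-conforming name: " + filename)
    fields = m.groupdict()
    fields["version"] = int(fields["version"])
    return fields

def last_versions(filenames, n=30):
    """The 'bring up the last 30 versions' query: parse every
    filename, sort by version number, and return the newest n."""
    parsed = [parse_name(f) for f in filenames]
    parsed.sort(key=lambda f: f["version"], reverse=True)
    return parsed[:n]
```

Because every tool can rely on the same fields being in the same place, queries like the version lookup above become one-liners instead of per-show special cases, which is exactly the standardization benefit described.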
Quality of work is the next important thing: making sure that you are providing the exact solution. Rotomation of a character, for example, is a difficult process, so we kept that in-house, because the high-resolution skins could not go out and you need to rotomate with the high-resolution geometry.
The iterative process also takes time. When exchanging notes with global teams, we need to make sure that the notes are properly interpreted by each team.
What are the global trends in the areas of VFX, DI, and animation? How is the film industry leveraging these trends?
One trend that is something of a sore spot for animators is motion capture. It is a huge area that is going to stick around; it is an economically viable way to get lots of animation done fast. But it doesn't remove the artistry, as some people think it might, because there are always things to do to that data, layering, for instance. You can also apply artistry in other areas, such as manual keyframe work. Then there is character development as well. Facial animation and facial setup have become simpler to use; in fact, Sony Pictures Animation has simplified all the controls. We've created stylized, cartoon-like looks that can be used for facial animation, and we are trickling that down to the more photorealistic face setups, trying to get those rigs into the face setup of a human (like Superman). Along with that goes the rendering side of the faces.
VFX is omnipresent in Los Angeles and across the States; it is everywhere, from commercials to films. Production values are getting better as people learn from new techniques.