But while GPUs are used for quick-and-dirty previews, the actual rendering work is done primarily on CPUs: Monsters University required a whopping 24,000 cores, and Avatar as many as 40,000. The VFX firms don't use GPUs because they lack the required memory capacity; the production of Avatar alone required a massive 104TB of RAM.
Each frame in a final render requires 64GB or even as much as 128GB of system memory, whereas NVIDIA's Quadro M6000 tops out at 24GB. And that's for rendering at around 2K resolution; a switch to 4K further exacerbates the issue.
"Nvidia has this yearly question that they pose to their customers, and 'more memory' is what they hear across the board," says Pixar software engineer Jeremy Cowles. "The way we're using GPUs right now is we use a lot for textures, for frame buffers. Everything is MSAA [multisample antialiasing], so that eats up more memory too. If we could really show the whole scene, like every object previewed with the true limit surface, that would chew up an enormous amount of memory.
"The other thing that we do is that we do this kind of image caching in Presto [Pixar's proprietary animation system], so every frame of animation is saved as a temporary image. Animators can then scrub through them quickly without having to re-render. Right now that's all stored on the CPU [i.e. main system memory], but we would love to be able to store that on the GPU [i.e. video RAM]."
H/T to TweakTown for the link