Pixar's Render Farm (twitter.com)

382 points by brundolf | 5 comments
mmcconnell1618 ◴[] No.25616372[source]
Can anyone comment on why Pixar uses standard CPUs for processing instead of custom hardware or GPUs? I'm wondering why they haven't invested in FPGAs or completely custom silicon that speeds up common operations by an order of magnitude. Is each show so different that no common operations are targets for hardware optimization?
replies(12): >>25616493 #>>25616494 #>>25616509 #>>25616527 #>>25616546 #>>25616623 #>>25616626 #>>25616670 #>>25616851 #>>25616986 #>>25617019 #>>25636451 #
berkut ◴[] No.25616527[source]
Because the expense is not really worth it: even GPU rendering (while around 3-4x faster than CPU rendering) is memory-constrained compared to CPU rendering, and as soon as you try to go out-of-core on the GPU, you're back at CPU speeds. So there's usually no point doing GPU rendering for entire scenes, given the often large memory requirements (a scene can take > 48 GB of RAM for all the geometry, accel structures, textures, etc.).

High-end VFX/CG usually tessellates geometry down to micropolygons, so you roughly have 1 quad (or two triangles) per pixel in terms of geometry density. You can often have > 150,000,000 polys in a scene, along with per-vertex primvars to control shading, and many textures (which can be paged fairly well with shade-on-hit).
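
To make that density concrete, here's a rough back-of-envelope sketch in Python; the frame resolution and the scene-vs-visible multiplier are made-up illustrative numbers, not production figures:

    # Rough micropolygon arithmetic; all numbers are illustrative.
    width, height = 2048, 1080       # a 2K-ish frame
    pixels = width * height          # ~2.2M pixels

    # ~1 quad (two triangles) per pixel of visible surface:
    visible_quads = pixels

    # Scenes hold far more than what's visible: occluded surfaces,
    # geometry outside the frame, heavily tessellated assets. Assume
    # a 70x multiplier to see how ">150M polys" arises:
    scene_quads = visible_quads * 70
    print(f"~{scene_quads / 1e6:.0f}M quads")   # ~155M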

Using ray tracing pretty much means having all of that in memory at once, so that intersection / traversal is fast (paging of geo and accel structures generally sucks; it's been tried in the past).
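
For a rough feel for that footprint, here's a minimal sketch with assumed per-element sizes (not real renderer numbers):

    # Crude memory estimate for holding a whole scene for ray tracing.
    # All per-element sizes are assumptions for illustration.
    polys = 150_000_000              # quads, per the figure above

    vertex_bytes = 3 * 4             # float32 position (x, y, z)
    primvar_bytes = 32               # assumed per-vertex shading primvars
    verts_per_quad = 4               # ignoring vertex sharing, for simplicity

    geo = polys * verts_per_quad * (vertex_bytes + primvar_bytes)
    bvh = polys * 2 * 32             # ~2 BVH nodes/prim at ~32 bytes (assumed)

    print(f"~{(geo + bvh) / 2**30:.0f} GB before textures")   # ~34 GB

Even with generous vertex sharing, that lands in the tens of gigabytes before a single texture is loaded, which is why it has to sit in CPU RAM.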

Doing lookdev on individual assets (i.e. turntables) is one place where GPU rendering can be used, as the memory requirements are much smaller, but only if the look you get is identical to the one you get using CPU rendering, which isn't always the case (some of the algorithms, e.g. volumetrics, are hard to get working correctly on GPUs).

RenderMan (the renderer Pixar uses, and develops in Seattle) isn't really GPU-ready yet (they're attempting to release XPU this year, I think).

replies(4): >>25616832 #>>25617017 #>>25617606 #>>25620652 #
ArtWomb ◴[] No.25616832[source]
Nice to have an industry insider perspective on here ;)

Can you speak to any competitive advantages a vfx-centric gpu cloud provider may have over commodity AWS? Even the RenderMan XPU looks to be OSL / Intel AVX-512 SIMD based. Thanks!

Supercharging Pixar's RenderMan XPU™ with Intel® AVX-512

https://www.youtube.com/watch?v=-WqrP50nvN4

replies(1): >>25616923 #
1. lattalayta ◴[] No.25616923[source]
One potential difference is that the input data required to render a single frame of a high-end animated or VFX movie might be several hundred gigabytes (even terabytes for heavy water simulations or hair): caches, textures, geometry, animation & simulation data, scene description. Oftentimes a VFX-centric cloud provider will have a robust system in place for uploading and caching data across the many nodes that need it. (https://www.microsoft.com/en-us/avere)
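
Some rough transfer math shows why that caching layer matters (the input size and link speeds below are assumptions for illustration):

    # Naive time to ship render inputs to one cloud node, ignoring
    # compression, dedup and caching; numbers are illustrative.
    size_gb = 500                    # "several hundred gigabytes" of inputs
    for gbps in (1, 10, 100):
        minutes = size_gb * 8 / gbps / 60   # GB -> gigabits, over the link
        print(f"{gbps:>3} Gbit/s link: ~{minutes:.0f} min")   # ~67 / ~7 / ~1

Multiply that by every node in the farm that needs the data, and re-uploading per node is clearly a non-starter; hence caching layers like Avere.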

And GPU rendering has been gaining momentum over the past few years, but the biggest bottleneck until recently was available VRAM. Big-budget VFX scenes can often take 40-120 GB of memory to keep everything accessible during the raytrace process, and unless a renderer supports out-of-core memory access, the speedup you may have gained from the GPU gets thrown out the window by swapping data.
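
A toy model of that cliff (the 4x GPU speedup and the 48 GB VRAM cutoff are assumptions, purely for illustration):

    # Toy model: the GPU pays off only while the scene fits in VRAM;
    # once you're out-of-core, paging drags you back to CPU speed.
    def effective_speedup(scene_gb, vram_gb=48.0, gpu_speedup=4.0):
        return gpu_speedup if scene_gb <= vram_gb else 1.0

    for gb in (16, 40, 120):
        print(f"{gb:>4} GB scene -> {effective_speedup(gb):.0f}x vs CPU")

Real out-of-core renderers degrade more gracefully than this step function, but the shape of the tradeoff is the point.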

replies(4): >>25617113 #>>25617144 #>>25617198 #>>25618613 #
2. pja ◴[] No.25617113[source]
As a specific example, Disney released the data for rendering a single shot from Moana a couple of years ago. You can download it here: https://www.disneyanimation.com/data-sets/?drawer=/resources...

Uncompressed, it’s 93 GB of render data, plus 130 GB of animation data if you want to render the entire shot instead of a single frame.

From what I’ve seen elsewhere, that’s not unusual at all for a modern high end animated scene.

3. berkut ◴[] No.25617144[source]
To reinforce this, here is some discussion of average machine memory sizes at Disney and Weta from two years ago:

https://twitter.com/yiningkarlli/status/1014418038567796738

4. lattalayta ◴[] No.25617198[source]
Oh, and also: security. After the Sony hack several years ago, many film studios have severe restrictions on what they'll allow off-site. For upcoming unreleased movies, many studios are overly protective of their IP and want to mitigate the chance of a leak as much as possible. Oftentimes, complying with those restrictions and auditing the entire process is enough to make on-site rendering more attractive.
5. cubano ◴[] No.25618613[source]
Did you really just say that one frame can be in the TB range??

Didn't you guys get the memo from B. Gates that no one will ever need more than 640k?