
Pixar's Render Farm

(twitter.com)
382 points | brundolf
mmcconnell1618 ◴[] No.25616372[source]
Can anyone comment on why Pixar uses standard CPUs for processing instead of custom hardware or GPUs? I'm wondering why they haven't invested in FPGAs or completely custom silicon that speeds up common operations by an order of magnitude. Is each show so different that no common operations are targets for hardware optimization?
replies(12): >>25616493 #>>25616494 #>>25616509 #>>25616527 #>>25616546 #>>25616623 #>>25616626 #>>25616670 #>>25616851 #>>25616986 #>>25617019 #>>25636451 #
aprdm ◴[] No.25616623[source]
FPGAs are really expensive at the scale of a modern studio render farm; we're talking around 40-100k cores per datacenter. And since 40-100k cores isn't Google scale either, it doesn't seem to make sense to invest in custom silicon.

There's a huge I/O bottleneck as well: you're reading huge textures (I've seen textures as big as 1 TB) and constantly writing the renderer's output to disk.

Other than that, most of the tooling modern studios use is off the shelf: for example, Autodesk Maya for modelling or SideFX Houdini for simulations. If you had a custom architecture, you'd have to ensure that every piece of software you use is optimized for and works with it.

There are studios using GPUs for some workflows, but most rendering is still done on CPUs.

replies(2): >>25616693 #>>25616904 #
nightfly ◴[] No.25616693[source]
I'm assuming these 1 TiB textures are procedurally generated or composites? Where do textures this large come from?
replies(3): >>25616722 #>>25616850 #>>25617045 #
_3r2w ◴[] No.25616850[source]
1 terabyte sounds like an outlier, but texture maps are typically used as inputs to shading calculations, so it's not uncommon for hero assets in large-scale VFX movies to have more than 10 different sets of texture files representing different portions of the shading model. For a large asset, it may take more than fifty 4K-16K images to adequately cover the entire model so that you could render it from any angle without seeing pixelation. And these textures are often stored as mipmapped 16-bit images so the renderer can choose the optimal resolution at render time.
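A back-of-the-envelope sketch of that arithmetic, assuming uncompressed mipmapped 16-bit RGBA tiles (the set/tile counts and resolution mix here are illustrative, not figures from any actual production):

```python
def texture_bytes(res, channels=4, bytes_per_channel=2, mipmapped=True):
    """Approximate size of one square texture tile of side `res` pixels.

    A full mip chain adds roughly 1/3 on top of the base level,
    since each level is 1/4 the size of the one above it.
    """
    base = res * res * channels * bytes_per_channel
    return base * 4 // 3 if mipmapped else base

# Hypothetical hero asset: 10 texture sets of 50 tiles each,
# mostly 4K with a handful of 16K tiles for close-up coverage.
sets = 10
per_set = 40 * texture_bytes(4096) + 10 * texture_bytes(16384)
total = sets * per_set
print(f"{total / 1e9:.0f} GB")  # prints "358 GB"
```

Even with a conservative mix like this, the source data lands in the "several hundred gigabytes" range the comment describes.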

So that can easily add up to several hundred gigabytes of source image data. At render time, only the textures needed to render what's visible to the camera are loaded into memory, which typically ends up being a fraction of the source data.
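That on-demand behaviour can be sketched as a tiny lazy cache, where a tile is loaded only the first time shading actually samples it (the class, file name, and stub loader below are made up for illustration; real renderers use far more sophisticated tiled texture caches):

```python
class TextureCache:
    """Loads texture tiles lazily and tracks what is resident in memory."""

    def __init__(self, loader):
        self._loader = loader    # function: path -> tile data
        self._resident = {}      # path -> already-loaded tile

    def fetch(self, path):
        # Only tiles actually sampled by the renderer ever get loaded,
        # so resident memory stays a fraction of the data on disk.
        if path not in self._resident:
            self._resident[path] = self._loader(path)
        return self._resident[path]

    def resident_count(self):
        return len(self._resident)

# Usage with a stub loader standing in for real image I/O:
cache = TextureCache(loader=lambda p: f"pixels-of-{p}")
cache.fetch("hero_diffuse.0001.tex")
cache.fetch("hero_diffuse.0001.tex")  # second hit: no reload
print(cache.resident_count())  # prints "1"
```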

Large-scale terrains and environments typically make more use of procedural textures, which may be cached temporarily in memory during rendering to speed up calculations.
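A procedural texture is just a function of position, so it costs no disk storage no matter how large the terrain; a toy example (not any production shader, the pattern and parameters are invented for illustration):

```python
import math

def stripes(u, v, frequency=8.0):
    """Toy procedural pattern: a 0..1 grey value for UV coordinates.

    Computed on demand at shading time; results can be cached in
    memory during the render rather than read from texture files.
    """
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * frequency * u)

print(round(stripes(0.0, 0.0), 3))  # prints "0.5"
```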