Pixar's Render Farm

(twitter.com)
382 points | brundolf
mmcconnell1618 (No.25616372)
Can anyone comment on why Pixar uses standard CPUs for processing instead of custom hardware or GPUs? I'm wondering why they haven't invested in FPGAs or completely custom silicon that speeds up common operations by an order of magnitude. Is each show so different that no common operations are targets for hardware optimization?
aprdm (No.25616623)
FPGAs are really expensive at the scale of a modern studio render farm; we're talking around 40k to 100k cores per datacenter. And because 40k to 100k cores isn't Google scale either, it doesn't make sense to invest in custom silicon.

There's a huge I/O bottleneck as well, since you're reading huge textures (I've seen textures as big as 1 TB) and constantly writing the renderer's output to disk.
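A back-of-envelope sketch of how a texture set can approach the 1 TB mentioned above. The tile resolution, channel count, and bit depth here are illustrative assumptions (a hero asset textured with many 8K UDIM-style tiles stored as 4-channel half floats), not figures from the thread:

```python
# Hypothetical texture budget: 8K tiles, RGBA, 16-bit half float per channel.
tile_px = 8192 * 8192            # texels per tile
bytes_per_texel = 4 * 2          # 4 channels x 2 bytes (half float)
tile_bytes = tile_px * bytes_per_texel   # ~0.5 GiB per tile

# How many such tiles would it take to reach ~1 TB on disk (uncompressed)?
tiles_needed = 10**12 // tile_bytes
print(tile_bytes, tiles_needed)
```

Under these assumptions, a single uncompressed tile is about half a gibibyte, so a complex asset with a couple of thousand tiles plausibly lands in terabyte territory; real productions would compress and mip-map, so treat this only as an order-of-magnitude sanity check.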

Other than that, most of the tooling that modern studios use is off the shelf, for example Autodesk Maya for modelling or SideFX Houdini for simulations. If you had a custom architecture, you would have to ensure that every piece of software you use is optimized for, and works with, that architecture.

There are studios using GPUs for some workflows but most of it is CPUs.

nightfly (No.25616693)
I'm assuming these 1 TB textures are procedurally generated or composites? Where do textures this large come from?
CyberDildonics (No.25617045)
I would take that with a huge grain of salt. Typically the only thing that would be a full terabyte is a full resolution water simulation for an entire shot. I'm unconvinced that is actually necessary, but it does happen.

An entire movie at 2K, uncompressed floating-point RGB, would be about 4 terabytes.
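The 4 TB figure checks out with simple arithmetic. This sketch assumes a 2048x1080 "2K" frame, three 32-bit float channels, 24 fps, and a 100-minute runtime; the exact resolution and runtime are my assumptions, not the commenter's:

```python
# Back-of-envelope: size of an uncompressed float RGB movie at 2K.
width, height = 2048, 1080        # assumed 2K container resolution
channels, bytes_per_channel = 3, 4  # RGB, 32-bit float
frame_bytes = width * height * channels * bytes_per_channel  # ~26.5 MB/frame

fps, minutes = 24, 100            # assumed runtime
frames = fps * 60 * minutes       # 144,000 frames

total_tb = frame_bytes * frames / 1e12
print(f"{total_tb:.1f} TB")       # roughly 3.8 TB
```

That lands just under 4 TB, consistent with the comment; half-float storage or any compression would shrink it considerably.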