
Pixar's Render Farm

(twitter.com)
382 points by brundolf | 1 comment
mmcconnell1618 | No.25616372
Can anyone comment on why Pixar uses standard CPUs for processing instead of custom hardware or GPUs? I'm wondering why they haven't invested in FPGAs or fully custom silicon that speeds up common operations by an order of magnitude. Is each show so different that no common operations are worthwhile targets for hardware optimization?
boulos | No.25616494
Amusingly, Pixar did build the "Pixar Image Computer" [1] in the 1980s, and they keep one in their render farm room in Emeryville as a reminder.

Basically, though, Pixar doesn't have the scale to make custom chips worthwhile: all of Pixar, and even all of Disney, is pretty small compared to, say, a single Google or Amazon cluster.

Until recently, GPUs also didn't have enough memory to handle production film rendering, particularly the volume of textures used per frame (which even on CPUs is handled out of core with a texture cache, rather than read in up front all at once). I think the recent HBM-based GPUs make this a more likely scenario, especially when/if OptiX/RTX gains a serious texture cache for this kind of usage. Even so, those GPUs are extremely expensive. For folks who can squeeze into the 16 GiB per card of an NVIDIA T4, it's just about right.
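
To make the out-of-core texture point concrete, here is a rough sketch of the kind of least-recently-used tile cache a CPU renderer relies on, so that only the tiles actually being shaded are resident at once. This is not RenderMan's actual code; TileKey, load_tile_from_disk, and the cache sizing are made up for illustration.

    // Sketch of an out-of-core texture cache: tiles are read from disk on
    // demand and evicted least-recently-used, so the resident working set
    // stays bounded no matter how much texture the frame references.
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct TileKey {
        uint32_t texture_id, mip_level, tile_x, tile_y;
        bool operator==(const TileKey& o) const {
            return texture_id == o.texture_id && mip_level == o.mip_level &&
                   tile_x == o.tile_x && tile_y == o.tile_y;
        }
    };

    struct TileKeyHash {
        size_t operator()(const TileKey& k) const {
            size_t h = k.texture_id;
            h = h * 131 + k.mip_level;
            h = h * 131 + k.tile_x;
            return h * 131 + k.tile_y;
        }
    };

    using Tile = std::vector<uint8_t>;  // raw texel data for one tile

    // Stand-in for real file I/O (e.g. reading one tile of a mipmapped texture file).
    Tile load_tile_from_disk(const TileKey&) { return Tile(64 * 64 * 4, 0); }

    class TileCache {
    public:
        explicit TileCache(size_t max_tiles) : max_tiles_(max_tiles) {}

        const Tile& fetch(const TileKey& key) {
            auto it = index_.find(key);
            if (it != index_.end()) {
                // Hit: move the tile to the front of the LRU list and return it.
                lru_.splice(lru_.begin(), lru_, it->second);
                return it->second->second;
            }
            // Miss: evict the least recently used tile if the cache is full,
            // then pull the requested tile in from disk.
            if (lru_.size() >= max_tiles_) {
                index_.erase(lru_.back().first);
                lru_.pop_back();
            }
            lru_.emplace_front(key, load_tile_from_disk(key));
            index_[key] = lru_.begin();
            return lru_.front().second;
        }

    private:
        using LruList = std::list<std::pair<TileKey, Tile>>;
        size_t max_tiles_;
        LruList lru_;
        std::unordered_map<TileKey, LruList::iterator, TileKeyHash> index_;
    };

A shading thread would call fetch({texture, mip, x, y}) for every lookup; with a cache budget of a few GiB this can serve frames whose total texture footprint is far larger, which is the trick that GPUs historically haven't had the memory (or a comparable cache) to pull off.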

tl;dr: The economics don't work out. You'll probably see more and more studios using GPUs (particularly with RTX) for shot work, especially in VFX, shorts, and simpler films, but until both memory per card (here now!) and $/GPU (not yet) are competitive, it will remain a tradeoff.

[1] https://en.wikipedia.org/wiki/Pixar_Image_Computer

brundolf | No.25616674
That Wikipedia article could be its own story!