
Pixar's Render Farm (twitter.com)
382 points by brundolf | 2 comments
mmcconnell1618 No.25616372
Can anyone comment on why Pixar uses standard CPUs for processing instead of custom hardware or GPUs? I'm wondering why they haven't invested in FPGAs or completely custom silicon that speeds up common operations by an order of magnitude. Is each show so different that there are no common operations to target for hardware optimization?
replies(12): >>25616493 #>>25616494 #>>25616509 #>>25616527 #>>25616546 #>>25616623 #>>25616626 #>>25616670 #>>25616851 #>>25616986 #>>25617019 #>>25636451 #
corysama No.25617019
Not an ILMer, but I was at LucasArts over a decade ago. Back then, we silly gamedevs would argue with ILM that they needed to transition from CPU to GPU based rendering. They always pushed back that their bottleneck was I/O for the massive texture sets their scenes throw around. At the time RenderMan was still mostly rasterization based. Transitioning that multi-decade code and hardware tradition over to GPU would be a huge project that I think they just wanted to put off as long as possible.
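
A rough back-of-the-envelope sketch of that I/O argument. Every number below is an illustrative assumption, not an actual studio figure; the point is just that moving the data around can dwarf the shading math.

    # Illustrative assumptions only -- not actual ILM/Pixar figures.
    texture_set_gb = 500    # assumed texture data touched while rendering one frame
    gpu_dram_gb    = 24     # assumed GPU memory
    pcie_gb_per_s  = 12     # assumed effective PCIe x16 transfer rate
    refetch_factor = 3      # assumed re-reads once the working set exceeds GPU DRAM

    # If the textures don't fit on the GPU, they get streamed (and re-streamed).
    streamed_gb = texture_set_gb * (refetch_factor if texture_set_gb > gpu_dram_gb else 1)

    print(f"data streamed per frame: {streamed_gb} GB")
    print(f"time spent purely on PCIe transfers: {streamed_gb / pcie_gb_per_s:.0f} s")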

But, very soon after I left Lucas, ILM started pushing ray tracing a lot harder. Getting good quality results per ray is very difficult. Much easier to throw hardware at the problem and just cast a whole lot more rays. So, they moved over to being heavily GPU-based around that time. I do not know the specifics.
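
A minimal sketch of the "cast a whole lot more rays" point: Monte Carlo noise falls off roughly as 1/sqrt(N), so quadrupling the ray count only halves the error. The integrand here is a toy stand-in, not a real renderer.

    import math
    import random

    def estimate_pixel(n_rays):
        # Toy stand-in for integrating incoming light over a hemisphere.
        return sum(random.random() ** 2 for _ in range(n_rays)) / n_rays

    truth = 1 / 3    # exact value of the toy integral
    trials = 200
    for n in (16, 64, 256, 1024):
        rms = math.sqrt(sum((estimate_pixel(n) - truth) ** 2 for _ in range(trials)) / trials)
        print(f"{n:5d} rays/pixel -> RMS error ~ {rms:.4f}")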

replies(1): >>25620346 #
1. greggman3 No.25620346
AFAIU the issue with GPU rendering is that you generally have to design the assets to be GPU-friendly. So while you get a huge speed-up at render time, you get a huge slowdown creating the assets in the first place, because you have new issues: using normal maps and displacement maps instead of millions of polygons, keeping textures to the minimal size that will get the job done, etc...

Is any of that true?
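
For what it's worth, a rough sketch of the geometry-vs-maps trade-off described above, with purely illustrative sizes:

    # Illustrative sizes only.
    def mesh_mb(n_vertices, bytes_per_vertex=32):   # position + normal + UV, roughly
        return n_vertices * bytes_per_vertex / 1e6

    def normal_map_mb(res, bytes_per_texel=3):      # 8-bit RGB tangent-space normals
        return res * res * bytes_per_texel / 1e6

    print(f"50M-vertex film asset:        ~{mesh_mb(50_000_000):.0f} MB of geometry")
    print(f"200k-vertex game-style proxy: ~{mesh_mb(200_000):.1f} MB of geometry")
    print(f"  plus a 4k normal map:       ~{normal_map_mb(4096):.0f} MB of texture")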

replies(1): >>25626079 #
2. corysama No.25626079
Not for the ILM use-case. I expect they would stick to finely-tessellated geometry. The challenge would be moving all of that data across the PCIe bus in and out of the relatively limited DRAM on the GPU. It would require a very intelligent streaming solution, similar to the one they already have for streaming resources from storage to the CPU RAM of various systems.
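
A minimal sketch of what that kind of streaming could look like, assuming a simple LRU tile cache sized to the GPU's memory budget. The names here are hypothetical, and real systems (virtual texturing, renderer texture caches) are far more elaborate.

    from collections import OrderedDict

    class TileCache:
        """Keep at most `capacity_tiles` texture tiles resident; evict the least recently used."""
        def __init__(self, capacity_tiles):
            self.capacity = capacity_tiles
            self.resident = OrderedDict()               # tile_id -> tile data

        def fetch(self, tile_id, load_fn):
            if tile_id in self.resident:
                self.resident.move_to_end(tile_id)      # mark as most recently used
                return self.resident[tile_id]
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)       # evict the LRU tile
            data = load_fn(tile_id)                     # e.g. read from storage, upload over PCIe
            self.resident[tile_id] = data
            return data

    # Usage: cache sized to an (assumed) GPU DRAM budget of 4 tiles.
    cache = TileCache(capacity_tiles=4)
    for tid in [1, 2, 3, 1, 4, 5, 2]:
        cache.fetch(tid, load_fn=lambda t: f"tile-{t}-texels")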