
Pixar's Render Farm

(twitter.com)
382 points by brundolf | 1 comment
nom (No.25616292):
Oh man, I wanted this to contain many more details :(

What's the hardware? How much electric energy goes into rendering a frame, or a whole movie? How do they provision it (since they keep the core count fixed)? They only talk about cores; do they even use GPUs? What's running on the machines? What have they optimized lately?

So many questions! Maybe someone from Pixar's systems department is reading this :)?

mroche (No.25616962):
Former Pixar Systems Intern (2019) here. I wasn't part of the team involved in this area, but I have some rough knowledge of some of the parts.

> Whats the hardware?

It varies. They have several generations of equipment, but I can say it was all Intel-based with high core counts. I don't know how different the render infra was from the workstation infra. I think the total core count (aggregate of render, workstation, and leased) was ~60K cores, and they effectively need to double that over the coming years (trying to remember one of the last meetings I was in) for the productions they have planned.

> How much electric energy goes into rendering a frame or a whole movie?

A lot. The render farm is pretty consistently running at high load, since they produce multiple shows (movies, shorts, episodics) simultaneously, so there's really no idle time. I don't have numbers, though.

> How do they provision it

Not really sure how to answer this question. But in terms of rendering, to my knowledge shots are profiled by the TDs and optimized for their core counts, so different sequences will have different rendering requirements (memory, cores, hyperthreading, etc.). This is all handled by the render farm scheduler.
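To make the profile-then-schedule flow concrete, here's a purely illustrative sketch. The comment doesn't name the scheduler or its data model, so every name, field, and number below is made up; the idea is just that each shot carries the resource profile the TDs measured, and the scheduler matches profiles to hosts:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    name: str
    cores: int          # core count from profiling runs
    memory_gb: int      # peak memory from profiling runs
    hyperthreading: bool  # whether the shot benefits from SMT

@dataclass
class Host:
    name: str
    free_cores: int
    free_memory_gb: int
    smt_enabled: bool

def schedule(shots, hosts):
    """Greedy first-fit: place each shot on the first host that
    satisfies its profiled resource requirements."""
    placement = {}
    for shot in shots:
        for host in hosts:
            if (host.free_cores >= shot.cores
                    and host.free_memory_gb >= shot.memory_gb
                    and (not shot.hyperthreading or host.smt_enabled)):
                host.free_cores -= shot.cores
                host.free_memory_gb -= shot.memory_gb
                placement[shot.name] = host.name
                break
    return placement
```

A real farm scheduler does far more (priorities, preemption, dependency graphs between frames), but the core matching step looks like this.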

> What's running on the machines?

RHEL. And a lot of Pixar proprietary code (along with the commercial applications).

> They only talk about cores, do they even use GPUs?

For rendering, not particularly. The RenderMan denoiser is capable of running on GPUs, but I can't remember if the render-specific nodes have any in them. The workstation systems (which are also used for rendering) are all on-prem VDI.

RenderMan 24, due out in Q1 2021, will include RenderMan XPU, a GPU (CUDA) based engine. Initially it'll be more of a workstation-facing product to let artists iterate more quickly (it'll also replace their internal CUDA engine used in their proprietary look-dev tool Flow, which was XPU's predecessor), but it will eventually be ready for final-frame rendering. There is still some catch-up that needs to happen in the hardware space, though NVLink'ed RTX 8000s do a reasonable job.

A small quote on the hardware/engine:

>> In Pixar’s demo scenes, XPU renders were up to 10x faster than RIS on one of the studio’s standard artist workstations with a 24-core Intel Xeon Platinum 8268 CPU and Nvidia Quadro RTX 6000 GPU.

If I remember correctly that was the latest generation (codenamed Pegasus), initially given to the FX department. Hyperthreading is usually disabled, and the workstation itself would be 23 cores, as they reserve one for the hypervisor. Each workstation server is actually two-plus-one: one workstation VM per CPU socket (with NUMA configs and GPU passthrough), plus a background render VM that takes over at night. The next-gen workstations they were negotiating with OEMs for before COVID happened put my jaw on the floor.
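For a sense of what "one VM per socket with NUMA configs and GPU passthrough" looks like in practice, here's a hypothetical libvirt domain fragment. The comment never says which hypervisor Pixar uses, so KVM/libvirt is an assumption, and the VM name, CPU ranges, and PCI address are all illustrative:

```xml
<!-- Hypothetical sketch only: pin a 23-vCPU workstation VM to the
     cores of one socket (NUMA node 1), keep its memory local to that
     node, and pass through the GPU attached to that socket via VFIO. -->
<domain type="kvm">
  <name>artist-ws-socket1</name>
  <!-- 23 vCPUs pinned to 23 host cores; one core left for the hypervisor -->
  <vcpu placement="static" cpuset="24-46">23</vcpu>
  <numatune>
    <!-- allocate guest memory strictly from the local NUMA node -->
    <memory mode="strict" nodeset="1"/>
  </numatune>
  <devices>
    <!-- PCI passthrough of the workstation GPU (address is made up) -->
    <hostdev mode="subsystem" type="pci" managed="yes">
      <source>
        <address domain="0x0000" bus="0x3b" slot="0x00" function="0x0"/>
      </source>
    </hostdev>
  </devices>
</domain>
```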