
Pixar's Render Farm

(twitter.com)
382 points | brundolf
nom No.25616292
Oh man, I wanted this to contain much more detail :(

What's the hardware? How much electrical energy goes into rendering a frame or a whole movie? How do they provision it (as they keep the #cores fixed)? They only talk about cores; do they even use GPUs? What's running on the machines? What did they optimize lately?

So many questions! Maybe someone from Pixar's systems department is reading this :)?

replies(7): >>25616619 #>>25616668 #>>25616803 #>>25616962 #>>25617126 #>>25617551 #>>25622359 #
KaiserPro No.25617551
> How do they provision it

Ex VFX sysadmin here. I'm not sure if they use their own scheduler or not. If they do, they use Tractor (might be Tractor 2 now), which looks after putting the processes in the right places. Think K8s, but actually easy to use, well documented and reliable. (Just not distributed, but then it scales way higher and is nowhere near as chatty.)
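
To give a feel for the shape of it, here's a rough sketch of what a job description for that kind of scheduler can look like. Purely illustrative Python, not Pixar's actual Tractor API; the class and field names are made up:

    # Illustrative only -- not the real Tractor API.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        title: str
        argv: list            # command to run, e.g. a renderer invocation
        slots: int = 1        # how many fixed-size core bundles ("slots") it needs
        mem_gb: int = 8       # memory is guarded per slot; CPU generally is not

    @dataclass
    class Job:
        show: str             # which production the job bills against
        title: str
        priority: int = 100   # higher wins when the farm is oversubscribed
        tasks: list = field(default_factory=list)

    job = Job(show="filmA", title="seq010_shot040_beauty", priority=500)
    job.tasks.append(Task(title="frame 1001",
                          argv=["render", "shot040.1001.scene"],
                          slots=1, mem_gb=16))
    # submit(job)  # hand the task graph to the scheduler, which places it on slots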

They would have a whole bunch of machines, some old, some new, some with extra memory for particle sims, some with extra cores for just plain rendering. Each machine will be separated into slots, which are made up of a fixed number of cores. Normally memory is guarded but CPU is not (i.e. you only get 8 gigs of RAM, but as much CPU as you can consume; context-switching the CPU is fast, memory not so much). I'm not sure how Pixar does it, but at a large facility like ILM/Framestore/DNEG the farm will be split into shows, with a guaranteed minimum allocation of cores. This is controlled by the scheduler. Crucially, it'll be oversubscribed, so jobs are ordered by priority.
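
To make the priority/quota part concrete, here's a toy version of that dispatch policy (the function, show names, and numbers are invented for illustration; a real scheduler does considerably more):

    # Toy sketch of the policy described above: every show gets a guaranteed
    # minimum share of slots, the farm is oversubscribed, and the backlog is
    # worked through in priority order.
    def pick_next_job(pending, running_slots_by_show, minimums):
        """pending: list of dicts like {"show": "filmA", "priority": 500, "slots": 1}"""
        # Serve shows that are still below their guaranteed minimum first...
        starved = [j for j in pending
                   if running_slots_by_show.get(j["show"], 0) < minimums.get(j["show"], 0)]
        pool = starved or pending      # ...otherwise fall back to the whole backlog.
        # Highest priority wins; ties broken by submission order (list order here).
        return max(pool, key=lambda j: j["priority"]) if pool else None

    minimums = {"filmA": 2000, "filmB": 1000}        # guaranteed slot floors per show
    running  = {"filmA": 2500, "filmB": 400}         # slots currently in use
    backlog  = [{"show": "filmA", "priority": 900, "slots": 1},
                {"show": "filmB", "priority": 300, "slots": 1}]
    print(pick_next_job(backlog, running, minimums)) # the filmB job: it is under its floor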

As for actual hardware provisioning, that's quite cool. In my experience there will be a bringup script that talks to the iLO/iDRAC/other management system. When a machine is plugged in, it'll be seen by the bringup script, download the XML/config/other goop that tells the BIOS how to configure itself and boot from the network, connect to the imaging system, and install whatever version of Linux they have.
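
Roughly, a bringup flow like that might look like the sketch below. This is an assumption-heavy illustration written against the generic Redfish BMC API rather than whatever iLO/iDRAC-specific tooling a given studio actually uses; the hostname and credentials are placeholders:

    import requests

    BMC  = "https://bmc-new-node.example.internal"   # placeholder management address
    AUTH = ("admin", "changeme")                     # placeholder credentials

    def netboot_once(bmc, auth):
        # Tell the BMC to PXE-boot on the next power cycle, so the node hits
        # the imaging system and installs the studio's Linux build.
        requests.patch(f"{bmc}/redfish/v1/Systems/1",
                       json={"Boot": {"BootSourceOverrideEnabled": "Once",
                                      "BootSourceOverrideTarget": "Pxe"}},
                       auth=auth, verify=False)
        # Force a restart so the override takes effect immediately.
        requests.post(f"{bmc}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
                      json={"ResetType": "ForceRestart"}, auth=auth, verify=False)

    netboot_once(BMC, AUTH)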

As for power per frame: each frame will be made up of different plates, so if you have a water sim, that'll be rendered separately, along with other assets. These can then be combined afterwards in Nuke to tweak and make pretty without having to render everything again.

That being said, a crowd shot with lots of characters with hair, or a water/smoke/ice effect, can take 25+ hours per frame to render. So think a 100-core/thread machine redlining for 25 hours, plus a few hundred TB of spinning disk. (Then it'll be tweaked 20-ish times.)
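
To put a very rough number on the energy question from upthread (the 25 hours is from this comment; the ~1 kW node draw, frame rate, and runtime are my assumptions, not Pixar figures):

    hours_per_frame = 25          # heavy crowd/sim frame, per the comment above
    node_watts      = 1000        # assumed draw of one redlining render node
    print(hours_per_frame * node_watts / 1000)        # ~25 kWh for that one frame

    # A ~90 minute film at 24 fps is ~130k frames; even at a far tamer
    # average of 2 hours/frame that's on the order of hundreds of MWh,
    # before any of the 20-ish re-renders.
    frames = 90 * 60 * 24
    print(frames * 2 * node_watts / 1000 / 1000, "MWh")   # ~259 MWh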

Optimisation-wise, I suspect it's mostly getting software to play nice on the same machine, or beating TDs into making better use of assets, or adjusting the storage to make sure it's not being pillaged too much.

replies(2): >>25617782 #>>25617873 #
aprdm No.25617782
Out of curiosity, did you move outside VFX, and if so, to what industry? Have you been enjoying it, and what motivated the move?

Cheers

replies(2): >>25617891 #>>25618544 #
KaiserPro No.25617891
I spent ten-ish years in VFX. I moved away in 2014 because the hours and pay were abysmal. I still love the industry.

I moved to a large, profitable financial newspaper, which had cute scaling issues (i.e. they were all solved, so engineers tried to find new and interesting ways to un-solve them).

I then moved to a startup that made self-building, machine-readable maps, which allowed me to play with scale again, but on AWS (alas, no real hardware). We were then bought out by a FAAMG company, so now I'm getting bored but being paid loads to do so.

Once the golden handcuffs have been broken, I'd like to go back, but only if I can go home at 5 every day...

replies(2): >>25618192 #>>25618560 #
aprdm No.25618192
Interesting, thanks for sharing! I've been in VFX for ~6 years and was in the software industry before that (and the hardware industry before it).

I find VFX really fun as far as jobs go! Sometimes I do think about leaving, mostly for pay reasons, but the pay has been decent enough recently (basically FAANG base pay without the RSUs/bonus...).

It is interesting how we have a lot of big-scale problems that go unrecognized; I find the problems really challenging. When I worked in the software industry, by comparison, we had a team 10x as big for a problem 100x simpler.

Outside of some big tech companies, biology, the oil industry, and finance, I cannot imagine many companies operating at such a scale in terms of cores/memory/disk.

Working in Pipeline, I haven't found crazy hours yet; it has mostly been an 8h/day job that I can disconnect from after I am done. Also, with Covid some people even switched to 4-day weeks, which is quite interesting.

Anyhow, thanks for sharing your perspective!

replies(1): >>25620804 #
KaiserPro No.25620804
Maybe we will work together soon!