Pixar's Render Farm (twitter.com)
382 points by brundolf | 8 comments
klodolph | No.25615970
My understanding (I am not an authority) is that for a long time, it has taken Pixar a roughly constant amount of time to render one frame of film: something on the order of 24 hours. I don’t know what the real units are, though (core-hours? machine-hours? simple wall clock?).

I am not surprised that they “make the film fit the box”, because managing compute expenditures is such a big deal!

(Edit: When I say "simple wall clock", I'm talking about the elapsed time from start to finish for rendering one frame, disregarding how many other frames might be rendering at the same time. Throughput != 1/latency, and all that.)
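
A toy illustration of that distinction, with made-up numbers (nothing here is from Pixar):

    # Made-up numbers, purely to show that throughput != 1/latency.
    latency_hours = 24       # wall-clock time for one frame, start to finish
    frames_in_flight = 1000  # frames rendering concurrently across the farm

    throughput = frames_in_flight / latency_hours  # frames finished per hour
    print(throughput)  # ~41.7 frames/hour, even at 24h of latency per frame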

replies(6): >>25615994, >>25616015, >>25616474, >>25617115, >>25617883, >>25618498
brundolf | No.25615994
Well, it can't just be one frame total every 24 hours, because an hour-long film would take 200+ years to render ;)
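
The arithmetic behind that, as a quick sketch (assuming 24 fps):

    # One hour of film at 24 fps, rendered strictly one frame at a time.
    frames = 60 * 60 * 24     # 86,400 frames in an hour of footage
    hours = frames * 24       # 24 wall-clock hours per frame, back to back
    years = hours / 24 / 365  # hours -> days -> years
    print(round(years))       # -> 237, i.e. 200+ years
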
replies(5): >>25616010, >>25616035, >>25616054, >>25616125, >>25616154
chrisseaton | No.25616010
I’m going to guess they have more than one computer rendering frames at the same time.
replies(1): >>25616073
brundolf | No.25616073
Yeah, I was just (semi-facetiously) pointing out the obvious: it can't be simple wall-clock time.
replies(2): >>25616150, >>25616184
chrisseaton | No.25616150
Why can’t it be simple wall-clock time? Each frame takes 24 hours of real wall-clock time to render start to finish. But they render multiple frames at the same time. Doing so does not change the wall-clock time of each frame.
replies(1): >>25616231
masklinn | No.25616184
It could still be wall-clock time per frame; each frame can just be rendered independently, in parallel.
brundolf | No.25616231
In my (hobbyist) experience, path-tracing and rendering in general are enormously parallelizable. So if you can render X frames in parallel such that they all finish in 24 hours, that's roughly equivalent to saying you can render one of those frames in 24h/X.

Of course, I'm sure things like I/O and art-team workflow hugely complicate the story at this scale, but I still doubt there's a meaningful concept of "wall-clock time for one frame" that doesn't change with the number of available cores.
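
A minimal sketch of that point, where render_frame is a hypothetical stand-in for a real path tracer:

    from multiprocessing import Pool

    def render_frame(i):
        # Stand-in for a real renderer; imagine ~24h of CPU work here.
        return f"frame_{i:05d}.exr"

    if __name__ == "__main__":
        X = 8  # workers rendering in parallel
        with Pool(processes=X) as pool:
            outputs = pool.map(render_frame, range(240))
        # If X frames are always in flight and each takes 24h, the farm's
        # effective cost per frame is 24h/X, as described above.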

replies(3): >>25616259, >>25616605, >>25617310
chrisseaton | No.25616259
Wall-clock usually refers to the time actually taken, in practice, with the particular configuration they use, not the time that could be taken if they used a configuration that minimises start-to-finish time.
klodolph | No.25616605
I suspect hobbyist experience isn't relevant here. My experience running workloads at large scale (similar to Pixar's scale) is that as you increase scale, thinking of it as "enormously parallelizable" starts to fall apart.
dodobirdlord | No.25617310
Ray tracing is embarrassingly parallel, but it requires having most if not all of the scene in memory. If you have X,000 machines and X,000 frames to render in a day, it almost certainly makes sense to pin each render to a single machine, to avoid moving a ton of data around the network and in and out of memory on a bunch of machines. In that case, the actual wall-clock time to render a frame on a single machine devoted to the render becomes the number to care about and to talk about.
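
A sketch of that scheduling idea, with hypothetical names throughout (not Pixar's actual system):

    # Pin each frame to one machine, so the scene is loaded into memory
    # once per render instead of shuffled around the network mid-frame.
    def assign_frames(frames, machines):
        plan = {}
        for i, frame in enumerate(frames):
            plan[frame] = machines[i % len(machines)]  # round-robin pinning
        return plan

    frames = [f"frame_{i:05d}" for i in range(1000)]
    machines = [f"node-{j:04d}" for j in range(1000)]
    plan = assign_frames(frames, machines)  # one frame per machine per day
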
replies(1): >>25617362
chrisseaton | No.25617362
Exactly - move the compute to the data, not the data to the compute.