
Pixar's Render Farm

(twitter.com)
382 points by brundolf | 5 comments
klodolph No.25615970
My understanding (I am not an authority) is that for a long time, it has taken Pixar roughly the same amount of time to render one frame of film, from one production to the next. Something on the order of 24 hours. I don’t know what the real units are, though (core-hours? machine-hours? simple wall clock?)

I am not surprised that they “make the film fit the box”, because managing compute expenditures is such a big deal!

(Edit: When I say "simple wall clock", I'm talking about the elapsed time from start to finish for rendering one frame, disregarding how many other frames might be rendering at the same time. Throughput != 1/latency, and all that.)
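
A minimal sketch of that throughput-vs-latency distinction, with made-up numbers (illustrative assumptions, not Pixar’s figures):

    # Toy numbers, purely illustrative.
    frame_latency_h = 24      # wall clock from "render" click to finished frame
    frames_in_flight = 5000   # frames the farm works on concurrently (assumed)

    # Latency: any single frame still takes a full day.
    # Throughput: the farm as a whole finishes thousands of frames per day.
    frames_per_day = frames_in_flight * 24 / frame_latency_h
    print(frames_per_day)     # 5000.0, even though 1/latency is 1 frame/day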

replies(6): >>25615994 #>>25616015 #>>25616474 #>>25617115 #>>25617883 #>>25618498 #
ChuckNorris89 No.25616015
Wait, what? 24 hours per frame?!

At the standard 24 fps, that's 24 days per second of film, which works out to about 473 years for an average 2-hour film. That can't be right.
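
The arithmetic does check out if the 24 hours are treated as strictly serial wall-clock time; a quick Python check under exactly that assumption:

    hours_per_frame = 24
    fps = 24
    film_seconds = 2 * 60 * 60                                    # a 2-hour film

    days_per_film_second = fps * hours_per_frame / 24             # 24.0 days per second of film
    total_frames = fps * film_seconds                              # 172,800 frames
    serial_years = total_frames * hours_per_frame / (24 * 365)     # ~473 years, one frame at a time
    print(days_per_film_second, total_frames, round(serial_years))

The catch, per the parent comment, is that the 24 hours is per-frame latency, not the pace of the whole film, since many frames render concurrently.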

replies(7): >>25616045 #>>25616061 #>>25616115 #>>25616213 #>>25616559 #>>25616561 #>>25617639 #
klodolph No.25616559
Again, I'm not sure whether this is core-hours, machine-hours, or wall clock. And to be clear, when I say "wall clock", what I'm talking about is latency between when someone clicks "render" and when they see the final result.

My experience running massive pipelines is that there's a limited amount of parallelization you can do. It's not like you can just slice the frame into rectangles and farm them out.

replies(1): >>25617401 #
1. capableweb No.25617401
> It's not like you can just slice the frame into rectangles and farm them out.

Funny thing, you sure can! Distributed rendering of single frames has been a thing for a long time already.

replies(1): >>25618153 #
2. klodolph No.25618153
What about GI (global illumination)? You can't just slice GI into pieces.
replies(3): >>25618297 #>>25620501 #>>25621006 #
3. dahart No.25618297
Why are you thinking GI wouldn’t work? Slicing the image plane pretty much works for parallelizing GI just as well as it does for raster. It does help to use small-ish tiles; that way you get some degree of automatic load balancing.
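
A minimal sketch of that tiling scheme, assuming a stand-in trace_pixel() in place of a real path tracer; pulling small tiles from a shared pool is where the automatic load balancing comes from:

    from multiprocessing import Pool

    WIDTH, HEIGHT, TILE = 1920, 1080, 32   # small-ish tiles help load balancing

    def trace_pixel(x, y):
        # Stand-in for a real path tracer; with GI the true cost varies a lot per pixel.
        return (x * 31 + y * 17) % 256

    def render_tile(tile):
        # Each tile renders independently of every other tile.
        x0, y0, x1, y1 = tile
        return tile, [[trace_pixel(x, y) for x in range(x0, x1)] for y in range(y0, y1)]

    def tiles():
        for y in range(0, HEIGHT, TILE):
            for x in range(0, WIDTH, TILE):
                yield (x, y, min(x + TILE, WIDTH), min(y + TILE, HEIGHT))

    if __name__ == "__main__":
        with Pool() as pool:
            # chunksize=1: a worker grabs the next tile as soon as it finishes one,
            # so an expensive (GI-heavy) tile doesn't leave the other cores idle.
            for tile, pixels in pool.imap_unordered(render_tile, tiles(), chunksize=1):
                pass  # composite `pixels` into the frame buffer here

Each worker still needs the whole scene in memory, because a GI ray traced from any tile can bounce anywhere in the scene, but the image-plane work itself splits cleanly.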
4. capableweb No.25620501
From how I've seen it work in the past, it totally works with GI (and, more generally, raytracing). If the frame to be rendered is CPU-bound rather than I/O-bound (because of heavy scenes), the whole project is farmed out to the workers, so each one has a full copy of what's to be rendered and is then assigned which part of the frame to render. Normally this happens locally: if you have 8 CPU cores, each one is responsible for a small slice of the frame. If you're doing distributed rendering, replace "CPU core" with "a full machine" and you have the same principle.

Obviously this doesn't work for every frame/scene/project, only when most of the time is spent on actual rendering on the CPU/GPU. Most of the time when doing distributed rendering, the CPU isn't actually the bottleneck; it's transferring everything the rendering needs (the project/scene data structures each worker has to have).
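
A rough break-even sketch of that trade-off, with every figure assumed for illustration:

    # All numbers are assumptions, not measurements.
    scene_gb = 50               # full scene copy every worker needs
    link_gbps = 10              # per-worker network bandwidth, gigabits/s
    render_hours = 24           # single-machine render time for the frame
    workers = 100

    transfer_s = scene_gb * 8 / link_gbps                  # 40 s to ship the scene to one worker
    render_s_per_worker = render_hours * 3600 / workers    # 864 s of rendering per worker

    # Splitting the frame pays off only while rendering dominates data movement;
    # at workers = 2000 the 40 s transfer already exceeds the ~43 s of render work.
    print(transfer_s, render_s_per_worker)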

5. Hard_Space No.25621006
This has been possible even for CGI tinkerers like me with C4D for more than ten years.