
Pixar's Render Farm (twitter.com)
382 points by brundolf | 14 comments
klodolph ◴[] No.25615970[source]
My understanding (I am not an authority) is that for a long time, it has taken Pixar roughly the same amount of time to render one frame of film: something on the order of 24 hours. I don’t know what the real units are, though (core-hours? machine-hours? simple wall clock?)

I am not surprised that they “make the film fit the box”, because managing compute expenditures is such a big deal!

(Edit: When I say "simple wall clock", I'm talking about the elapsed time from start to finish for rendering one frame, disregarding how many other frames might be rendering at the same time. Throughput != 1/latency, and all that.)

replies(6): >>25615994 #>>25616015 #>>25616474 #>>25617115 #>>25617883 #>>25618498 #
1. ChuckNorris89 ◴[] No.25616015[source]
Wait, what? 24 hours per frame?!

At the standard 24 fps, that's 24 days of rendering per second of film, which works out to about 473 years for an average 2-hour film. That can't be right.
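
A quick sanity check of that arithmetic, as a sketch in Python (it assumes a strictly serial render at 24 wall-clock hours per frame, which is exactly the assumption the replies below push back on):

    HOURS_PER_FRAME = 24
    FPS = 24
    FILM_SECONDS = 2 * 60 * 60                    # a 2-hour film

    total_frames = FPS * FILM_SECONDS             # 172,800 frames
    total_hours = total_frames * HOURS_PER_FRAME  # 4,147,200 hours
    print(total_hours / 24)                       # 172,800 days
    print(total_hours / 24 / 365)                 # ~473 years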

replies(7): >>25616045 #>>25616061 #>>25616115 #>>25616213 #>>25616559 #>>25616561 #>>25617639 #
2. dralley ◴[] No.25616045[source]
24 hours scaled to a normal computer, not 24 hours for the entire farm per frame.
3. noncoml ◴[] No.25616061[source]
Maybe they mean 24 hours per frame per core
4. mattnewton ◴[] No.25616115[source]
Not saying it's true, but I assume this is all parallelizable, so 24 cores would complete that 1 second in 1 day, and 3600*24 cores would complete the first hour of the film in a day, etc. And each frame might have parallelizable processes to get it under 1 day of wall time, but it would still cost 1 "day" of core-hours.
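
The core-hours version of that reasoning, sketched out (the core count is the illustrative 3600*24 figure above, not an actual farm size):

    CORE_HOURS_PER_FRAME = 24
    FPS = 24
    CORES = 3600 * 24                                     # 86,400 cores, purely illustrative

    frames_per_day = CORES * 24 / CORE_HOURS_PER_FRAME   # 86,400 frames/day
    film_per_day = frames_per_day / FPS / 3600            # in hours of film
    print(film_per_day)                                    # 1.0 hour of film per day
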
5. dagmx ◴[] No.25616213[source]
It's definitely not 24 hours per frame outside of gargantuan shots, at least by wall time. If you're going by core time, then that figure assumes everything is serial, which is never the case.

That also doesn't include rendering multiple shots at once. It's all about parallelism.

Finally, those frame counts for a film only assume the final render. There's a whole slew of work-in-progress renders too, so a given shot may be rendered 10-20 times. Often they'll render every other frame to spot-check, and render at lower resolutions to get results back quickly.

6. klodolph ◴[] No.25616559[source]
Again, I'm not sure whether this is core-hours, machine-hours, or wall clock. And to be clear, when I say "wall clock", what I'm talking about is latency between when someone clicks "render" and when they see the final result.

My experience running massive pipelines is that there's a limited amount of parallelization you can do. It's not like you can just slice the frame into rectangles and farm them out.

replies(1): >>25617401 #
7. berkut ◴[] No.25616561[source]
In high-end VFX, 12-36 hours (wall clock) per frame is a roughly accurate range for a final 2K frame at final quality.

36 is at the high end of things, and the distribution is skewed towards the lower end rather than > 30 hours, but frames that long are relatively common.

Frames can be parallelised, so multiple frames in a shot/sequence are rendered at once, on different machines.
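
A minimal sketch of that frame-level parallelism (node names and the frame range are made up; each node renders its frames independently, so the shot's wall-clock time stays close to the per-frame time rather than their sum):

    from collections import defaultdict

    frames = range(1001, 1097)                     # a 96-frame shot
    nodes = [f"render{n:02d}" for n in range(24)]  # 24 render blades

    assignments = defaultdict(list)
    for i, frame in enumerate(frames):
        assignments[nodes[i % len(nodes)]].append(frame)

    for node, work in sorted(assignments.items()):
        print(node, work)                          # 4 frames per node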

replies(1): >>25631262 #
8. capableweb ◴[] No.25617401[source]
> It's not like you can just slice the frame into rectangles and farm them out.

Funny thing, you sure can! Distributed rendering of single frames has been a thing for a long time already.

replies(1): >>25618153 #
9. KaiserPro ◴[] No.25617639[source]
yup, you've also got to remember that a final frame will have been rendered many times.

Each and every asset, animation, lighting, texturing, sim and final comp will go through a number of revisions before being accepted.

So in all actuality that final frame could have been rendered 20+ times.

VFX farms are huge. In 2014 I worked on one that had 36K CPUs and about 15 PB of storage. It's probably around the 200K CPU mark now.

10. klodolph ◴[] No.25618153{3}[source]
What about GI (global illumination)? You can't just slice GI into pieces.
replies(3): >>25618297 #>>25620501 #>>25621006 #
11. dahart ◴[] No.25618297{4}[source]
Why do you think GI wouldn’t work? Slicing the image plane works for parallelizing GI pretty much as well as it does for raster. It does help to use small-ish tiles; that way you get some degree of automatic load balancing.
12. capableweb ◴[] No.25620501{4}[source]
From how I've seen it work in the past, it totally works with GI (and raytracing more generally). If the frame to be rendered is CPU bound rather than I/O bound (because of heavy scenes), the whole project is farmed out to the workers, so each one has a full copy of what's to be rendered and is told which part of the frame it's responsible for. Normally this happens locally: if you have 8 CPU cores, each one is responsible for a small slice of the frame. For distributed rendering, replace "CPU core" with "full machine" and you have the same principle.

Obviously this doesn't work for every frame/scene/project, only when the bulk of the time is spent on actual rendering on the CPU/GPU. Most of the time with distributed rendering, compute isn't actually the bottleneck; it's transferring the necessary data (the project/scene data structures each worker needs).
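
A toy illustration of that tile-based scheme (hypothetical names; multiprocessing stands in for a render farm, and render_tile is a placeholder for whatever the real renderer would do, assuming each worker already holds a full copy of the scene):

    from multiprocessing import Pool

    WIDTH, HEIGHT, TILE = 2048, 1080, 128

    def tiles(width, height, size):
        # Yield (x0, y0, x1, y1) rectangles covering the image plane.
        for y in range(0, height, size):
            for x in range(0, width, size):
                yield (x, y, min(x + size, width), min(y + size, height))

    def render_tile(rect):
        # A real worker would load the shared scene and path-trace this
        # rectangle (GI included); here we just hand the rectangle back.
        return rect

    if __name__ == "__main__":
        with Pool() as pool:
            finished = pool.map(render_tile, tiles(WIDTH, HEIGHT, TILE))
        print(f"assembled {len(finished)} tiles into one frame")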

13. Hard_Space ◴[] No.25621006{4}[source]
This has been possible, even for CGI tinkerers like me using C4D, for more than ten years.
14. franzb ◴[] No.25631262[source]
Hi Berkut, I'd love to get in touch with you, unfortunately I couldn't find any contact info in your profile. You can find my email in my profile. Cheers!