
Pixar's Render Farm

(twitter.com)
382 points | brundolf | 2 comments
mmcconnell1618 ◴[] No.25616372[source]
Can anyone comment on why Pixar uses standard CPU for processing instead of custom hardware or GPU? I'm wondering why they haven't invested in FPGA or completely custom silicon that speeds up common operations by an order of magnitude. Is each show that different that no common operations are targets for hardware optimization?
replies(12): >>25616493 #>>25616494 #>>25616509 #>>25616527 #>>25616546 #>>25616623 #>>25616626 #>>25616670 #>>25616851 #>>25616986 #>>25617019 #>>25636451 #
berkut ◴[] No.25616527[source]
Because the expense is not really worth it - even GPU rendering (while around 3-4x faster than CPU rendering) is memory-constrained compared to CPU rendering, and as soon as you try to go out-of-core on the GPU, you're back at CPU speeds. So there's usually no point doing GPU rendering for entire scenes (which can take > 48 GB of RAM for all geometry, accel structures, textures, etc.) given the often large memory requirements.

High-end VFX/CG usually tessellates geometry down to micropolygons, so you have roughly 1 quad (or two triangles) per pixel in terms of geometry density. You can often have > 150,000,000 polys in a scene, along with per-vertex primvars to control shading, and many textures (which can be paged fairly well with shade-on-hit).
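For a sense of scale, here is a back-of-envelope sketch of where numbers like that come from. The overdraw multiplier below is my own illustrative assumption, not a figure from the thread:

```python
# Back-of-envelope micropolygon count at 4K (assumed numbers, for illustration).
width, height = 3840, 2160
pixels = width * height              # ~8.3M pixels on screen

# Tessellating to ~1 quad per pixel covers the directly visible surfaces,
# but scenes carry far more geometry than that: depth complexity,
# geometry outside the frame, motion blur, reflections, etc.
overdraw_factor = 20                 # assumed multiplier for hidden/offscreen geo

total_quads = pixels * overdraw_factor
total_triangles = total_quads * 2    # 1 quad = 2 triangles

print(f"{total_quads:,} quads ~= {total_triangles:,} triangles")
```

Even a modest multiplier over the visible quads lands well past the 150M-poly figure quoted above.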

Using ray tracing pretty much means having all of that in memory at once so that intersection/traversal is fast (paging of geometry and accel structures generally performs poorly; it's been tried in the past).
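A rough estimate of what "all of that in memory at once" means. All per-element sizes below are my own assumptions for illustration, not data from any real renderer:

```python
# Rough resident-memory estimate for ray tracing a micropolygon-dense scene
# (all element sizes are illustrative assumptions).
GB = 1024 ** 3

triangles = 300_000_000          # assumed scene size
verts = triangles // 2           # shared vertices, very rough

vertex_bytes = verts * 12        # 3 x float32 positions per vertex
index_bytes = triangles * 12     # 3 x uint32 indices per triangle
bvh_bytes = triangles * 32       # ~1 BVH node (~32 B) per primitive, typical
primvar_bytes = verts * 16       # assumed per-vertex shading data (normals, UVs...)

total = vertex_bytes + index_bytes + bvh_bytes + primvar_bytes
print(f"~{total / GB:.1f} GB before textures and framework overhead")
```

Even before textures, this lands in the mid-teens of GB; add paged textures, instancing overhead, and per-frame buffers and the > 48 GB figure quoted above is easy to reach - and hard to fit on a 24 GB GPU.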

Doing lookdev on individual assets (i.e. turntables) is one place where GPU rendering can be used, as the memory requirements are much smaller - but only if the look you get is identical to the one you get using CPU rendering, which isn't always the case (some of the algorithms are hard to get working correctly on GPUs, e.g. volumetrics).

RenderMan (the renderer Pixar uses, and develops in Seattle) isn't really GPU-ready yet (they're attempting to release XPU this year, I think).

replies(4): >>25616832 #>>25617017 #>>25617606 #>>25620652 #
dahart ◴[] No.25617017[source]
> Because the expense is not really worth it

I disagree with this takeaway. But full disclosure, I'm biased: I work on OptiX. There is a reason Pixar and Arnold and V-Ray and most other major industry renderers are moving to the GPU: the trends are clear, and it has recently become 'worth it'. Many renderers are reporting speedup factors of 2-10x for production-scale scene rendering. (Here's a good example: https://www.youtube.com/watch?v=ZlmRuR5MKmU) There definitely are tradeoffs, and you've accurately pointed out several of them - memory constraints, paging, micropolygons, etc. Yes, it does take a lot of engineering to make the best use of the GPU, but the scale of scenes in production with GPUs today is already firmly well past being limited to turntables, and the writing is on the wall - the trend is clearly moving toward GPU farms.

replies(4): >>25617080 #>>25617265 #>>25619363 #>>25622440 #
berkut ◴[] No.25617080[source]
I write a production renderer for a living :)

So I'm well aware of the trade-offs. As I mentioned, for lookdev and small scenes, GPUs do make sense currently (if you're willing to pay the penalty of getting code to work on both CPU and GPU, and GPU dev is not exactly trivial in terms of debugging / building compared to CPU dev).

But until GPUs exist with > 64 GB RAM, for rendering large-scale scenes it's just not worth it given the extra burdens (increased development costs, heterogeneous sets of machines in the farm, extra debugging, support), so for high-end scale, we're likely 3-4 years away yet.

replies(4): >>25617276 #>>25617360 #>>25619796 #>>25620580 #
foota ◴[] No.25617276[source]
Given that current consumer GPUs are at 24 GB, I think 3-4 years is likely overly pessimistic.
replies(1): >>25617327 #
berkut ◴[] No.25617327[source]
They've been at 24 GB for two years, though - and they cost an arm and a leg compared to a CPU with a similar amount of RAM.

It's not just about them existing, they need to be cost effective.
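To make the cost-effectiveness point concrete, here's a toy per-GB-of-memory comparison. The prices and capacities below are hypothetical placeholders, not market data:

```python
# Toy cost-per-GB-of-memory comparison (all figures are hypothetical
# placeholders, purely to illustrate the cost-effectiveness argument).
cpu_node = {"price_usd": 6_000, "ram_gb": 256}   # assumed 2-socket CPU render node
gpu_card = {"price_usd": 5_000, "ram_gb": 24}    # assumed high-end GPU

cpu_usd_per_gb = cpu_node["price_usd"] / cpu_node["ram_gb"]
gpu_usd_per_gb = gpu_card["price_usd"] / gpu_card["ram_gb"]

print(f"CPU node: ${cpu_usd_per_gb:.0f}/GB, GPU: ${gpu_usd_per_gb:.0f}/GB")
# For memory-bound scenes, the GPU's speedup has to outweigh this
# per-GB-of-RAM premium before the switch pays off.
```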

replies(2): >>25617419 #>>25620838 #
lhoff ◴[] No.25617419[source]
Not anymore. The new Ampere-based Quadros and Teslas just launched with up to 48 GB of RAM. A special datacenter version with 80 GB has also already been announced: https://www.nvidia.com/en-us/data-center/a100/

They are really expensive, though. But chassis and rackspace aren't free either. If one beefy node with a couple of GPUs can replace half a rack of CPU-only nodes, the GPUs are totally worth it.

I'm not too familiar with 3D rendering, but in other workloads the GPU speedup is so huge that if it's possible to offload to the GPU, it makes sense to do so from an economic perspective.

replies(1): >>25620882 #
erosenbe0 ◴[] No.25620882[source]
Hashing and linear algebra kernels get much more speedup on a GPU than a VFX pipeline does. But I am glad to see reports here detailing how VFX optimization is progressing.