
Pixar's Render Farm

(twitter.com)
382 points | brundolf | 1 comment
supernova87a No.25616522
I would love to know the answers to a few questions, for example:

If there's a generally static scene with just characters walking through it, does the renderer take advantage of that by rendering the static parts of the scene once, and then overlaying and recomputing only the small differences caused by the moving objects in each individual frame?

Or, alternatively what "class" of optimizations does something like that fall into?

Is rendering of video games more similar to rendering for movies, or for VFX?

What are some of the physics "cheats" that look good enough but massively reduce compute intensity?

What are some interesting scaling laws about compute intensity / time versus parameters that the film director may have to choose between? "Director X, you can have <x> but that means to fit in the budget, we can't do <y>"

Can anyone point to a nice introduction to some of the basic compute-relevant techniques that rendering uses? Thanks!

lattalayta No.25617013
Here's a kind of silly but accurate view of path tracing for animated features https://www.youtube.com/watch?v=frLwRLS_ZR0

Typically, most path tracers use a technique called Monte Carlo estimation: they repeatedly loop over every pixel in an image and average together the incoming light from randomly traced light paths. To calculate motion blur, they typically sample the scene at least twice (once at camera shutter open, and again at shutter close). Adaptive sampling techniques will typically converge faster when there is less motion blur.
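The Monte Carlo loop described above can be sketched in a few lines of Python. Here `radiance_sample` is a hypothetical stand-in for tracing one random light path (a real renderer would shoot a ray and evaluate materials and lights); the point is just that a pixel's value is the running average of many noisy samples, with error shrinking roughly as 1/sqrt(n):

```python
import random

def radiance_sample(rng):
    # Hypothetical stand-in for tracing one random light path.
    # Here it just returns a noisy measurement of a "true" pixel
    # value of 0.5 (uniform on [0, 1)).
    return rng.random()

def estimate_pixel(n_samples, seed=0):
    """Monte Carlo estimate of one pixel: average the contributions
    of n_samples randomly traced light paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += radiance_sample(rng)
    return total / n_samples

# More samples -> the average converges toward the true value
# (0.5 in this toy setup); a coarse estimate is much noisier.
coarse = estimate_pixel(16)
fine = estimate_pixel(4096)
```

An adaptive sampler would keep drawing samples per pixel only until the running estimate's variance drops below a threshold, which is why still regions (no motion blur) converge with fewer samples.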

One of the biggest time-saving techniques lately is machine-learning-powered image denoising [1]. This allows the renderer to compute significantly fewer samples and then run a 2D post-process over the image that guesses what it would look like if it had been rendered with more samples.
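To make the pipeline position concrete, here is a toy sketch in which a plain box filter stands in for the learned denoiser. A real ML denoiser (such as NVIDIA's OptiX denoiser linked below) learns a far smarter, edge-aware filter, but it occupies the same slot: a 2D post-process applied to a noisy, low-sample render:

```python
import random

def box_denoise(img, radius=1):
    """Crude stand-in for a learned denoiser: average each pixel
    with its neighbors (a box filter). Smooths away per-pixel
    sampling noise at the cost of blurring detail."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A "render" of a flat grey wall, corrupted by per-pixel sampling noise.
rng = random.Random(1)
noisy = [[0.5 + (rng.random() - 0.5) * 0.4 for _ in range(16)]
         for _ in range(16)]
smooth = box_denoise(noisy)
```

The denoised image has visibly lower pixel-to-pixel variance than the input, which is exactly the trade the renderer makes: fewer traced samples, recovered smoothness in post.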

Animated movies and VFX render each frame in minutes or hours, while games need to render a frame in milliseconds. Many of the techniques used in game rendering are approximations of physically based light transport that look "good enough," whereas modern animated films and VFX come much closer to simulating reality, with true bounced lighting and reflections.

[1] https://developer.nvidia.com/optix-denoiser