    Pixar's Render Farm (twitter.com)
    382 points by brundolf | 13 comments
    1. supernova87a ◴[] No.25616522[source]
    I would love to know the answers to a few questions I'm curious about, for example:

    If there's a generally static scene with just characters walking through it, does the render take advantage of rendering the static parts for the whole scene once, and then overlay and recompute the small differences caused by the moving things in each individual sub frame?

    Or alternatively, what "class" of optimizations does something like that fall into?

    Is rendering of video games more similar to rendering for movies, or for VFX?

    What are some of the physics "cheats" that look good enough but massively reduce compute intensity?

    What are some interesting scaling laws about compute intensity/time versus parameters that the film director may have to choose between? "Director X, you can have <x>, but that means to fit in the budget, we can't do <y>"

    Can anyone point to a nice introduction to some of the basic compute-relevant techniques that rendering uses? Thanks!

    replies(5): >>25616535 #>>25616590 #>>25616686 #>>25617013 #>>25618032 #
    2. tayistay ◴[] No.25616535[source]
    Illumination is global, so each frame needs to be rendered separately, AFAIK.
    replies(1): >>25618070 #
    3. dagmx ◴[] No.25616590[source]
    If you're interested in production rendering for films, there's a great deep dive into all the major studio renderers: https://dl.acm.org/toc/tog/2018/37/3

    As for your questions:

    > Is rendering of video games more similar to rendering for movies, or VFX?

    This question is possibly based on an incorrect assumption that feature (animated) films are rendered differently than VFX. They're identical in terms of most of the tech stack, including rendering, and the process is largely similar overall.

    Games aren't really similar to either, since they're raster-based rather than path-traced, though the new RTX setups are bringing those worlds closer. However, older rendering setups like REYES, which Pixar used up until Finding Dory, are more similar to games' raster pipelines, though that's trivializing the differences.
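
    A toy sketch of that structural difference in Python (everything here is made up: a 1D "screen" and point "geometry" stand in for triangles, depth tests, shading, and bounces): rasterizers loop over geometry and splat it into pixels, while ray/path tracers loop over pixels and ask the scene what each one sees.

        # Toy contrast of the two loop structures; real pipelines obviously
        # involve triangles, depth tests, shading, and many light bounces.
        WIDTH = 8
        geometry = [2, 5]  # objects covering these screen positions

        # Raster-style (games, and loosely REYES): loop over geometry,
        # splat each object into the pixels it covers.
        raster = [0.0] * WIDTH
        for obj in geometry:
            raster[obj] = 1.0

        # Ray-style (modern film renderers): loop over pixels, query the
        # scene for what each pixel sees.
        traced = [1.0 if px in geometry else 0.0 for px in range(WIDTH)]

        assert raster == traced  # same image, opposite iteration order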

    A good intro to rendering is reading Ray Tracing in One Weekend (https://raytracing.github.io/books/RayTracingInOneWeekend.ht...) and Matt Pharr's PBRT book (http://www.pbr-book.org/).

    replies(2): >>25616622 #>>25617396 #
    4. supernova87a ◴[] No.25616622[source]
    Thanks!

    (I was also reading the OP, which says "...Our world works quite a bit differently than VFX in two ways...", hence my curiosity.)

    replies(2): >>25616698 #>>25616726 #
    5. raverbashing ◴[] No.25616686[source]
    > If there's a generally static scene with just characters walking through it, does the render take advantage of rendering the static parts for the whole scene once

    Given the level of rendering detail they work at, I'd say there's no such thing.

    As in: walking characters affect radiosity, shadows, and reflections, so there's no such thing as "the background is the same, only the characters are moving", because it isn't.

    6. lattalayta ◴[] No.25616698{3}[source]
    One way that animated feature films differ from VFX is schedules. Typically, an animated feature from Disney or Pixar will take 4-5 years from start to finish, and everything you see in the movie will need to be created and rendered from scratch.

    VFX schedules are usually significantly more compressed, typically 6-12 months, so it is often cheaper and faster to throw more compute power at a problem than to pay a group of highly knowledgeable rendering engineers and technical artists to optimize it (although VFX houses still employ rendering engineers and technical artists who know about optimization). Pixar has a dedicated group of people called Lightspeed technical artists whose sole job is to optimize scenes so that they can be rendered and re-rendered faster.

    Historically, Pixar is also notorious for not doing a lot of "post-work" on their rendered images (although they are slowly starting to embrace it on their most recent films). In other words, what you see on film is very close to what was produced by the renderer. In VFX, to save time, you often render different layers of the image separately and then composite them later in a software package like Nuke. Compositing later allows you to fix mistakes or make adjustments much faster than completely re-rendering the entire frame.
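
    As a rough illustration, layered compositing bottoms out in the Porter-Duff "over" operation that packages like Nuke build on. A minimal NumPy sketch with made-up layers:

        import numpy as np

        def over(fg_rgb, fg_alpha, bg_rgb):
            """Porter-Duff 'over': premultiplied foreground laid on a background."""
            return fg_rgb + bg_rgb * (1.0 - fg_alpha)

        # Hypothetical layers: a character render (with alpha) over a
        # separately rendered background.
        bg = np.full((4, 4, 3), 0.2)                    # background layer
        fg = np.zeros((4, 4, 3)); fg[1:3, 1:3] = 0.8    # premultiplied character layer
        alpha = np.zeros((4, 4, 1)); alpha[1:3, 1:3] = 1.0
        final = over(fg, alpha, bg)

    The point of keeping layers separate is that last line: you can tweak a layer and re-composite in seconds rather than re-render the frame.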

    7. dagmx ◴[] No.25616726{3}[source]
    I suspect they mean more the approach to render farm utilization and core stealing.

    A lot of VFX studios use off the shelf farm management solutions that package up a job as a whole to a node.

    I don't believe core stealing like they describe is unique to Pixar, but it's also not common outside Pixar, which is what they allude to, AFAIK. It's less an animation-vs-VFX comparison than a studio-vs-studio infrastructure comparison.

    8. lattalayta ◴[] No.25617013[source]
    Here's a kind of silly but accurate view of path tracing for animated features: https://www.youtube.com/watch?v=frLwRLS_ZR0

    Most path tracers use a technique called Monte Carlo estimation: they repeatedly loop over every pixel in an image and average together the incoming light from randomly traced light paths. To calculate motion blur, they typically sample the scene at least twice (once at camera shutter open, and again at shutter close). Adaptive sampling techniques will typically converge faster when there is less motion blur.
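
    A minimal, runnable sketch of that idea in Python (a toy 1D "scene" stands in for real path tracing, and all names are invented): each pixel averages many samples, and each sample poses the scene at a random time inside the shutter interval, interpolated between the shutter-open and shutter-close snapshots, which is what produces motion blur.

        import random

        SHUTTER_OPEN, SHUTTER_CLOSE = 0.0, 0.5  # shutter interval, in frame units

        class ToyScene:
            """Stand-in for a real scene: a bright dot whose position is
            keyed at shutter open/close and interpolated in between."""
            dot_open, dot_close = 10.0, 20.0  # dot's x position at the two shutter samples

            def radiance(self, x, t):
                u = (t - SHUTTER_OPEN) / (SHUTTER_CLOSE - SHUTTER_OPEN)
                dot_x = self.dot_open + (self.dot_close - self.dot_open) * u
                return 1.0 if abs(x - dot_x) < 1.0 else 0.05

        def render_pixel(scene, x, spp=256):
            """Monte Carlo estimate: average spp randomly timed samples.
            A real path tracer traces a full multi-bounce light path per sample."""
            total = 0.0
            for _ in range(spp):
                t = random.uniform(SHUTTER_OPEN, SHUTTER_CLOSE)
                total += scene.radiance(x, t)
            return total / spp

        scene = ToyScene()
        # The moving dot smears across pixels 9..21: motion blur.
        print([round(render_pixel(scene, x), 2) for x in range(8, 24)])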

    One of the biggest time-saving techniques lately is machine-learning-powered image denoising [1]. This allows the renderer to compute significantly fewer samples and then run a 2D post-process over the image that guesses what it would look like if it had been rendered with more samples.
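
    A hedged sketch of that pipeline shape, using a box blur purely as a stand-in for the learned denoiser (real denoisers like NVIDIA's OptiX denoiser run a trained network, typically fed extra albedo/normal buffers, and preserve detail far better than a blur):

        import numpy as np

        def fake_denoise(img, radius=2):
            """Crude 2D post-process stand-in for an ML denoiser:
            averages each pixel with its neighbors."""
            pad = np.pad(img, radius, mode="edge")
            out = np.zeros_like(img)
            k = 2 * radius + 1
            for dy in range(k):
                for dx in range(k):
                    out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            return out / (k * k)

        # Simulated low-sample render: a smooth gradient plus Monte Carlo noise.
        h = w = 64
        noisy = np.tile(np.linspace(0, 1, w), (h, 1)) + np.random.normal(0, 0.2, (h, w))
        clean = fake_denoise(noisy)  # render fewer samples, clean up in 2D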

    Animated movies and VFX render each frame in minutes or hours, while games need to render in milliseconds. Many of the techniques used in game rendering are approximations of physically based light transport that look "good enough", whereas modern animated films and VFX are much closer to simulating reality, with true bounced lighting and reflections.

    [1] https://developer.nvidia.com/optix-denoiser

    9. dodobirdlord ◴[] No.25617396[source]
    > This question is possibly based on an incorrect assumption that feature (animated) films are rendered differently than VFX. They're identical in terms of most of the tech stack, including rendering, and the process is largely similar overall.

    Showcased by the yearly highlights reel that the RenderMan team puts out.

    https://vimeo.com/388365999

    replies(1): >>25618294 #
    10. RantyDave ◴[] No.25618032[source]
    > If there's a generally static scene with just characters walking through it, does the render take advantage of rendering the static parts for the whole scene once, and then overlay and recompute the small differences caused by the moving things in each individual sub frame?

    Not anymore. It used to be that frames were rendered in pieces and then composited to make the final image. However, you then need lots of tricks to reflect what would have happened to the background as a result of the foreground ... shadows, for instance. So now the entire scene is given to the renderer, and the renderer is told to get on with it.

    Regarding physics cheats: it depends on the renderer, but basically none. AI despeckling is making a huge difference to render times, however.

    Directors don't get involved in scaling laws and stuff like that. Basically, a studio has a "look" that they'll quote around.

    Compute-relevant techniques? A renderer basically solves the rendering equation: https://en.wikipedia.org/wiki/Rendering_equation
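
    For reference, the rendering equation says the light leaving a surface point is what the point emits plus an integral over all incoming directions, weighted by the surface's BRDF and the angle of incidence. In LaTeX:

        L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o) \, L_i(\mathbf{x}, \omega_i) \, (\omega_i \cdot \mathbf{n}) \, d\omega_i

    Monte Carlo path tracing estimates that integral by averaging random incoming directions, which is why frames start out noisy and converge as samples accumulate.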

    And have a look at Mitsuba! http://rgl.epfl.ch/publications/NimierDavidVicini2019Mitsuba...

    11. CyberDildonics ◴[] No.25618070[source]
    That doesn't make any sense. Each frame is rendered separately because you need the granularity of individual static images. It has nothing to do with global illumination.
    12. NickNameNick ◴[] No.25618294{3}[source]
    For a 2020 showreel, there sure were a lot of 2019 and earlier movies in there.

    I'm pretty sure one of those shots was from Alien: Covenant (2017).

    replies(1): >>25619160 #
    13. djmips ◴[] No.25619160{4}[source]
    A showreel is more like a resume / sales sheet than a summary of a particular year. The 2020 date would only mean it included stuff up to and including 2020.