
Pixar's Render Farm

(twitter.com)
382 points by brundolf | source
mmcconnell1618 ◴[] No.25616372[source]
Can anyone comment on why Pixar uses standard CPU for processing instead of custom hardware or GPU? I'm wondering why they haven't invested in FPGA or completely custom silicon that speeds up common operations by an order of magnitude. Is each show that different that no common operations are targets for hardware optimization?
replies(12): >>25616493 #>>25616494 #>>25616509 #>>25616527 #>>25616546 #>>25616623 #>>25616626 #>>25616670 #>>25616851 #>>25616986 #>>25617019 #>>25636451 #
aprdm ◴[] No.25616623[source]
FPGA is really expensive at the scale of a modern studio render farm; we're talking around 40–100k cores per datacenter. And because 40–100k cores isn't Google scale either, it also doesn't seem to make sense to invest in custom silicon.

There's a huge I/O bottleneck as well, since you're reading huge textures (I've seen textures as big as 1 TB) and constantly writing the renderer's output to disk.

Other than that, most of the tooling that modern studios use is off the shelf: for example, Autodesk Maya for modelling or SideFX Houdini for simulations. If you had a custom architecture you would have to ensure that every piece of software you use is optimized for / works with it.

There are studios using GPUs for some workflows but most of it is CPUs.

replies(2): >>25616693 #>>25616904 #
nightfly ◴[] No.25616693[source]
I'm assuming these 1 TB textures are procedurally generated or composites? Where do textures this large come from?
replies(3): >>25616722 #>>25616850 #>>25617045 #
aprdm ◴[] No.25616722[source]
Can be either. You usually have digital artists creating them.

https://en.wikipedia.org/wiki/Texture_artist

replies(1): >>25617054 #
CyberDildonics ◴[] No.25617054[source]
Texture artists aren't painting 1 terabyte textures dude.
replies(1): >>25620347 #
forelle2 ◴[] No.25620347[source]
The largest texture sets are heading towards 1 TB in size, or at least they were when I was last involved in production support. I saw Mari projects north of 650 GB, and that was 5 years ago. Disclaimer: I wrote Mari, the VFX industry's standard painting system.

Note though that these are not single 1 TB textures; they're multiple sets of textures, plus all of the layers that constitute them. Some large robots in particular had 65k 4K textures if you count the layers.

replies(1): >>25622741 #
CyberDildonics ◴[] No.25622741[source]
I think we both realize that it's a bit silly to have so much data in textures that you have 100x the pixel data of a 5 second shot at 4K with 32-bit float RGB. 650 GB of textures would mean that even with 10 Gb Ethernet (which I'm not sure is common yet) you would wait roughly nine minutes at line rate, longer in practice, just for the textures to get to the computer before rendering could start, and rendering 100 frames at a time would mean on the order of 100 GB/s from a file server for a single shot. Even a single copy of the textures to freeze an iteration would cost thousands in expensive disk space.
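The bandwidth arithmetic is easy to sanity-check. A sketch assuming an idealized 10 Gb/s link with no protocol overhead; real-world throughput is lower, which pushes the wait time well past this figure:

```python
# Back-of-the-envelope transfer time for a 650 GB texture set over
# 10 Gb Ethernet, assuming an ideal link with zero protocol overhead.
texture_set_bytes = 650e9      # 650 GB of textures for one asset
link_bits_per_sec = 10e9       # 10 Gb Ethernet

seconds = texture_set_bytes * 8 / link_bits_per_sec  # bits / (bits per second)
print(f"{seconds / 60:.1f} minutes at line rate")    # ~8.7 minutes, best case
```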

I know it doesn't make sense to tell your clients that what they are doing is nonsense, but if I saw something like that going on, the first thing I would do is chase down why it happened. Massive waste like that is extremely problematic, while needing to make a sharper texture for some tiny piece that gets close to the camera is not a big deal.

replies(1): >>25624059 #
forelle2 ◴[] No.25624059[source]
Texture caching in modern renderers tends to be on-demand and paged, so it is very unlikely the full texture set is ever pulled from the filers.
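As an illustration of what "on-demand and paged" means, here is a toy LRU tile cache; the names, tile size, and API are made up for the sketch, and production renderers use far more sophisticated systems such as OpenImageIO's ImageCache:

```python
from collections import OrderedDict

TILE = 64  # tile edge in texels (illustrative)

class TileCache:
    """Toy on-demand texture tile cache with LRU eviction."""

    def __init__(self, max_tiles, load_tile):
        self.max_tiles = max_tiles
        self.load_tile = load_tile        # callable: (texture, tx, ty) -> tile data
        self.tiles = OrderedDict()        # (texture, tx, ty) -> resident tile
        self.misses = 0

    def lookup(self, texture, u, v, res):
        # Map a UV coordinate to the tile that contains it; only that tile
        # is ever loaded from the filer, not the whole texture.
        key = (texture, int(u * res) // TILE, int(v * res) // TILE)
        if key in self.tiles:
            self.tiles.move_to_end(key)   # mark most-recently-used
        else:
            self.misses += 1
            if len(self.tiles) >= self.max_tiles:
                self.tiles.popitem(last=False)  # evict least-recently-used
            self.tiles[key] = self.load_tile(*key)
        return self.tiles[key]
```

Coherent lookups (neighbouring shading points) hit the same resident tile, which is why camera-coherent access patterns were so cache-friendly; incoherent ray-traced lookups churn the cache instead.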

Over texturing like this can be a good decision depending on the production. Asset creation often starts a long time before shots or cameras are locked down.

If you don’t know how an asset is to be used it makes sense to texture all of it upfront as if it will be full screen, 4K.

Taking an asset off final to 'upres' it for a shot can be a pain in the ass and more costly than just detailing it up in the first place.

In isolation it's an insane amount of detail, and given perfect production planning it is normally not needed, but until directors lock down the scripts and shots it can be the simplest option.

replies(1): >>25625195 #
CyberDildonics ◴[] No.25625195[source]
> Texture caching in modern renders tends to be on demand and paged so it is very unlikely the full texture set is ever pulled from the filers.

This was easier to rely on in the days before ray tracing, when texture filtering was consistent because everything was from the camera. Ray differentials from incoherent rays aren't quite as forgiving.

> If you don’t know how an asset is to be used it makes sense to texture all of it upfront as if it will be full screen, 4K.

4K textures for large parts of the asset in the UV layout can be an acceptable amount of overkill. That's not the same as putting 65,000 4K textures on something because each little part is given its own 4K texture. I know that you know this, but I'm not sure why you would conflate those two things.

> Taking an asset off final to ‘Upres’ it for a can be a pain in the ass and more costly than just detailing it up in the first place

It is very rare that specific textures need to be redone like that and it is not a big deal.

650GB of textures for one asset drags everything from iterations to final renders to disk usage to disk activity to network usage down for every shot in a completely unnecessary way. There isn't a fine line between these things, there is a giant gap between that much excessive texture resolution and needing to upres some piece because it gets close to the camera.

> Asset creation often starts a long time before shots or cameras are locked down.

This is actually fairly rare.

> In isolation it’s a insane amount of detail and given perfect production planning it is normally not needed, but until directors lock down the scripts and shots it can be the simplest option.

That's rarely how the timeline fits together. It's irrelevant though, because there is no world where 65,000 4K textures on a single asset makes sense. It's multiple orders of magnitude out of bounds of reality.

I am glad that you have that insane amount of scalability as a focus since you are making tools that people rely on heavily, and I wish way more people on the tools end thought like this. Still, it is about 1000x what would set off red flags in my mind.

I apologize on behalf of whoever told you that was necessary, because they need to learn how to work within reasonable resources (which is not difficult given modern computers), no matter what project or organization they are attached to.

replies(1): >>25627505 #
forelle2 ◴[] No.25627505[source]
Mari was designed in production at Weta, based off the lessons learned from, well, everything that Weta does.

Take for example, a large hero asset like King Kong.

Kong look development started many months before a script was locked down. Kong is 60ft tall, our leading lady is 5’2”.

We think we need shots where she’ll be standing in Kong’s hands, feet, be lifted up to his face, nose etc.

So we need fingerprints that will stand up at 4K render resolution, tear ducts, pores on the inside of the nose, etc., but we don't know. All of which will have to match shot plates in detail.

We could address each of these as the shots turn up and tell the director (who owns the company) that he needs to wait a few days for his new shot, or we can break Kong into 500 patches and create a texture for each of the inputs to our shaders: diffuse, 3 spec, 3 subsurface, 4 bump, dirt, blood, dust, scratch, fur, flow, etc.

Let's say we have 500 UDIM patches for Kong so we can sit our leading lady on the finger tips, and 20 channels to drive our shaders and effects systems.

When working, the artist uses 6 paint layers for each channel (6 is a massive underestimate for most interesting texture work).

So we have 500 patches * 20 channels * 6 layers, which gives us 60k images. Not all of these will need to be at 4K, however.
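Spelled out, with an assumed per-image size that is mine, not from the post (4K square, 4 channels, 16-bit, uncompressed; production files are compressed tiled EXR/TIFF and, as noted, not every patch is 4K, which is how the on-disk sets land in the hundreds-of-GB range rather than here):

```python
# The arithmetic above: patches x channels x layers.
patches, channels, layers = 500, 20, 6
images = patches * channels * layers
print(images)                                 # 60000 images

# Assumed worst case: 4K x 4K, 4 channels, 16-bit half float, uncompressed.
bytes_per_image = 4096 * 4096 * 4 * 2
total_tb = images * bytes_per_image / 1e12
print(f"{total_tb:.1f} TB uncompressed")      # ~8.1 TB worst case
```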

For Kong, substitute any hero asset where shots will be placed more "in and on" the asset than "at" it: heli-carriers, oil rigs, elven great halls, space ships, giant robots... The line between asset and environment is blurred at that point, so maybe think "set" rather than "asset".

replies(1): >>25628170 #
CyberDildonics ◴[] No.25628170[source]
500 separate 4K texture patches for a character covered in fur is excessively wasteful. Things like three 4K subsurface maps on each of 500 patches on a black-skinned creature that is mostly covered by fur are grossly unnecessary no matter who tells you they're needed.

We both know that stuff isn't showing up on film and that the excess becomes a CYA case of the emperor's new clothes where no one wants to be the one to say it's ridiculous.

> When working the artist uses 6 paint layers for each channel ( 6 is a massive underestimate for most interesting texture work).

This is intermediary and not what is being talked about.

replies(1): >>25628631 #
aprdm ◴[] No.25628631[source]
Your opinion on something doesn't mean much when confronted with real-world experience from the biggest studios.
replies(1): >>25633365 #
CyberDildonics ◴[] No.25633365[source]
Maybe some day I'll know what I'm talking about. Which part specifically do you think is wrong?
replies(1): >>25635444 #
aprdm ◴[] No.25635444[source]
Focusing on the technical steps and what might be technically feasible or not, versus the existing world and artists' workflows. Also, speaking as an authority that knows best while patronizing those who actually work in the industry.
replies(1): >>25636318 #
CyberDildonics ◴[] No.25636318[source]
> Focusing on the technical steps and what might be technically feasible or not versus the existing world and artists workflows.

I would say it's the opposite. There is nothing necessary about 10,000 4k maps and definitely nothing typical. Workflows trade a certain amount of optimization for consistency, but not like this.

> patronizing who actually works in the industry.

I don't think I was patronizing. This person is valuable in that they are trying to make completely excessive situations work. Telling people (or demonstrating to them) they are being ridiculous is not his responsibility and is a tight rope to walk in his position.

> Also speaking as an authority that knows best

If I said that 2 + 2 = 4 would you ask about a math degree? This is an exercise in appeal to authority. This person and myself aren't even contradicting each other very much.

He is describing the extremes he has seen; I'm saying that 10,000 pixels of texture data for each pixel in a frame is enormous excess.

The only contradiction is that he seems to think that because someone did it, it must be a necessity.

Instead of confronting what I'm actually saying, you are trying to rationalize why you don't need to.

replies(1): >>25636394 #
aprdm ◴[] No.25636394[source]
> This person is valuable in that they are trying to make completely excessive situations work. Telling people (or demonstrating to them) they are being ridiculous is not his responsibility and is a tight rope to walk in his position.

Usually the way VFX works is that technology (R&D) is kept far removed from production. The artists' job is getting the shot done regardless of technology, and they have very short deadlines. They usually push the limits.

Digital artists are not very tech-savvy in a lot of disciplines, and it is not feasible to have a TD involved given the delivery deadlines of the shots for a show.

The person at Weta also told you how Weta actually worked on Kong, which is very typical. You don't know upfront what you need. And you dismissed it as something unnecessary; still, this is how every big VFX studio works. Do you feel that you know better and/or that everyone is doing something wrong and hasn't really thought about it? If that is the case you might have a business opportunity for a more efficient VFX studio!

replies(1): >>25637088 #
CyberDildonics ◴[] No.25637088[source]
Your post is an actual example of being patronizing. Before I was just trying to explain what the person I replied to probably already knew intuitively.

> how Weta actually worked in Kong which is very typical

It is not typical to have 10,000 4K maps on a creature. What has been typical when rendering at 2K is a set of 2K maps for face, torso, arms and legs. Maybe a single arm and leg can be painted and the UVs mirrored, though mostly texture painters will lay out the UVs separately and duplicate the texture themselves to leave room for variations.

> it is not feasible to have a TD in the delivery deadlines of the shots for a show.

Actually most of the people working on shots are considered TDs. Specific asset work for some sequence with a hero asset is actually very common, which makes sense if you think about it from a story point of view of needing a visual change to communicate a change of circumstances.

4K rendering (was the 2017 King Kong rendered in 4K?) and all the closeups of King Kong mean that higher-resolution maps and more granular sections are understandable, but it doesn't add up to going from 16 2K maps to 10,000 4K maps. Maps like diffuse, specular and subsurface albedo are also just multiplicative, so there is no reason to have multiple maps unless they need to be rebalanced against each other per shot (such as for variations).
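To illustrate the multiplicative point with made-up scalar values (real maps are images, and the map names here are only examples): inputs that only ever multiply into the shading result can be pre-baked into a single map with no change in output.

```python
# Toy model of a shader whose inputs only multiply together.
def shade(light, *maps):
    result = light
    for m in maps:
        result *= m   # each map is just another multiplicative factor
    return result

# Three separate per-texel values, e.g. diffuse, dirt, and dust maps.
diffuse, dirt, dust = 0.8, 0.9, 0.5

# Combine them offline into one map: one texture fetch instead of three,
# identical shading result.
baked = diffuse * dirt * dust
assert shade(1.0, diffuse, dirt, dust) == shade(1.0, baked)
```

This is exactly why keeping the maps separate only pays off when they need to be rebalanced against each other per shot.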

You still never actually explained a problem or inconsistency with anything I've said.

replies(3): >>25638237 #>>25638372 #>>25638788 #
aprdm ◴[] No.25638237{3}[source]
I do not think you said anything wrong; it's much less about what you're saying and more about how you're saying it (as if it were a simple thing to get right and people are dumb for not doing it in an optimal way).

> Actually most of the people working on shots are considered TDs.

That's not true in the studios I've been at. TD is usually reserved for folks closer to the pipeline who aren't doing shot work (as in, delivering shots); they're supporting the folks who do.

For the record, I haven't downvoted you at all.