

Pixar's Render Farm

(twitter.com)
382 points by brundolf | 13 comments
1. shadowofneptune ◴[] No.25615950[source]
It's good to know they care about optimization. I had the assumption that all CGI is a rather wasteful practice where you just throw more hardware at the problem.
replies(5): >>25615993 #>>25616001 #>>25616034 #>>25616237 #>>25622695 #
2. ChuckNorris89 ◴[] No.25615993[source]
Most of these studios are tech-first, since they wouldn't have gotten where they are now without prioritizing tech.
3. mathattack ◴[] No.25616001[source]
In the end it’s still more profitable to hire performance engineers than to buy more hardware. For the last decade I’ve heard the “toss more HW” argument. It hasn’t held, because the amount of compute and storage needed goes up too.
replies(2): >>25616100 #>>25616142 #
4. unsigner ◴[] No.25616034[source]
No, artists can throw more problems at the hardware faster than you can throw hardware at the problem. There are enough quality sliders to make each render infinitely expensive if you feel like it.
5. Retric ◴[] No.25616100[source]
That’s very much an exaggeration. Pixar, Google, etc. can’t run on a single desktop CPU and spend a lot of money on hardware. The best estimate I’ve seen is that it’s scale dependent: at small budgets you’re generally spending most of the money on people, but as the budget increases, the ratio tends to shift toward ever more hardware.
replies(2): >>25616183 #>>25618455 #
6. theptip ◴[] No.25616142[source]
In reality it’s never as simple as a single soundbite. If you are a startup with $1k/mo AWS bills, throwing more hardware at the problem can be orders of magnitude cheaper. If you are running resource-intensive workloads then at some point efficiency work becomes ROI-positive.

The reason the rule of thumb is to throw more hardware at the problem is that most (good) engineers bias towards wanting to make things performant, in my experience often beyond the point where it’s ROI-positive. But of course you should not take that rule of thumb as a universal law; rather, it’s another reminder of a cognitive bias to keep an eye on.

replies(1): >>25618774 #
7. cortesoft ◴[] No.25616183{3}[source]
It is absolutely about scale. An employee costs $x regardless of how many servers they are managing, and might improve performance by y%. That only becomes worth it if y% of your hardware costs is greater than the $x for the employee.
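That break-even condition can be sketched in a few lines of Python (the dollar figures below are purely illustrative, not from the thread):

```python
def worth_hiring(engineer_cost: float, hardware_spend: float,
                 speedup_fraction: float) -> bool:
    """True if the hardware savings from an engineer's optimizations
    exceed the engineer's cost (y% of hardware > $x)."""
    return hardware_spend * speedup_fraction > engineer_cost

# Hypothetical numbers: a $150k/yr engineer who trims 10% off the bill.
assert not worth_hiring(150_000, 500_000, 0.10)    # $50k saved < $150k
assert worth_hiring(150_000, 2_000_000, 0.10)      # $200k saved > $150k
```

At Pixar or Google scale, the hardware term dominates, which is exactly why the break-even tips toward hiring the engineer.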
replies(1): >>25616417 #
8. dagmx ◴[] No.25616237[source]
CGI is heavily about optimization. I recommend checking out ACM SIGGRAPH papers, and there's a great collection by them on production renderers.

Every second spent rendering or processing is time an artist is not working on a shot. Optimizations add up to incredible cost savings.

replies(1): >>25622757 #
9. Retric ◴[] No.25616417{4}[source]
The issue is that extra employees run out of low-hanging fruit to optimize, so that y% isn’t a constant. Extra hardware benefits from all the optimized code your team has already written, whereas extra manpower has to improve code that is already optimized.
10. mathattack ◴[] No.25618455{3}[source]
Eventually you have to buy more hardware. The Pixars and Googles have the most to gain from added expertise.
11. mathattack ◴[] No.25618774{3}[source]
Yes - context matters. This article is about Pixar, where it pays to have someone think about performance. My data points are only from companies of 50 people and up; in those cases cloud consumption was the number 2 cost line item behind people, so it matters there. In most cases the people who understand cost performance also tend to be good at fighting latency - you need the same skills.

This may not apply on smaller side projects or places where technology is secondary.

12. Narann ◴[] No.25622695[source]
> I had the assumption that all CGI is a rather wasteful practice where you just throw more hardware at the problem.

Hardware can get overloaded quickly if you don't pay attention to it. You still need some engineering to keep everything under control.

I suspect this assumption comes from the fact that CGI has a lot of different things to render, so you try to get hardware that covers the problems you will have by and large. You can't optimize every single problem, but you can optimize for the 90% use case; the remaining 10% can then eat up most of the artist time while still keeping deadlines safe.

13. Narann ◴[] No.25622757[source]
About this: one SIGGRAPH course focused on multithreading for VFX. The results were so valuable they were made into a book.

I reviewed this book (2014): https://www.fevrierdorian.com/blog/post/2014/08/24/Multithre...