
Pixar's Render Farm

(twitter.com)
382 points by brundolf | 1 comment | source
banana_giraffe ◴[] No.25616781[source]
One of the things they mentioned briefly in a little documentary on the making of Soul is that all of the animators work on fairly dumb terminals connected to a back end instance.

I can appreciate that working well when people are in the office, but I'm amazed it worked out for them when people moved to working from home. I have trouble getting some of my engineers a connection stable enough for VS Code's remote mode. I can't imagine trying to use a modern GUI over these connections.

replies(6): >>25616815 #>>25616858 #>>25617057 #>>25617074 #>>25618038 #>>25628067 #
mroche ◴[] No.25617057[source]
The entire studio is VDI-based (except for the Mac stations; unsure about Windows), utilizing the Teradici PCoIP protocol, 10Zig zero clients, and Teradici host cards for the workstations (at the time, at least; I'm not sure if they've since started testing the graphical agent).

I was an intern on Pixar's systems team in 2019 (at Blue Sky now), and we're also using a mix of PCoIP and NoMachine for home users. We finally figured out a quirk with the VPN terminal we sent home with people that was throttling connections, and the experience after that fix is actually really good. There are a few things that can cause lag (such as moving around apps like Chrome/Firefox), but for the most part, unless your ISP is introducing problems, it's pretty stable. And everyone with a terminal setup has two monitors, either 2*1920x1200 or 1920x1200+2560x1440.

I have a 300Mbps/35Mbps plan (which turns into ~250/35 on VPN) and it's great. We see bandwidth usage ranging from 1Mbps to ~80Mbps, with the vast majority being sub-20. There are some outliers that end up in the mid-100s, but we still need to investigate those.
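For a rough sense of why dual 1920x1200 displays fit in so little bandwidth, here's a back-of-the-envelope sketch (my own illustration with an assumed frame rate and color depth, not anything we've measured against the protocol itself):

```python
# Back-of-the-envelope: how much a remote-display protocol has to reduce
# raw pixel data for a dual 1920x1200 setup to fit in ~20 Mbps.
# Frame rate and color depth are assumptions for illustration only.

monitors = 2
width, height = 1920, 1200
bits_per_pixel = 24          # uncompressed RGB
frames_per_second = 30       # assumed update rate

raw_mbps = monitors * width * height * bits_per_pixel * frames_per_second / 1e6
print(f"Uncompressed stream: ~{raw_mbps:,.0f} Mbps")            # ~3,318 Mbps

observed_mbps = 20
print(f"Reduction needed: ~{raw_mbps / observed_mbps:,.0f}x")   # ~166x
```

Protocols like PCoIP close that gap by only sending the screen regions that change and compressing them, which is also why busy, fast-updating windows (video, scrolling in Chrome/Firefox) are the things that spike bandwidth and lag.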

We did some cross-country tests with our sister studio ILM over the summer and were hitting ~70-90ms latency, which, although not fantastic, was still plenty workable.
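As a rough sanity check on where that number comes from (illustrative assumptions on my part, not measurements from those tests), fiber propagation alone already eats a big chunk of it:

```python
# Rough cross-country latency budget. Distance and fiber speed are
# illustrative assumptions, not measurements from the tests above.

distance_km = 4500            # assumed one-way coast-to-coast fiber path
fiber_km_per_ms = 200         # light in glass travels at roughly 2/3 of c

propagation_rtt_ms = 2 * distance_km / fiber_km_per_ms
print(f"Propagation alone (round trip): ~{propagation_rtt_ms:.0f} ms")   # ~45 ms

# Routing hops, VPN overhead, and encode/decode on the host card and
# zero client plausibly add a few more tens of ms, which lands in the
# observed 70-90ms range.
```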

replies(2): >>25617917 #>>25618614 #
jfindley ◴[] No.25618614[source]
A few years ago I spoke to some ILM people about their VDI setup, which at the time was cobbled together out of Mesos and a bunch of xorg hacks to get VDI server scheduling working on a pool of remote machines with GPUs (I think they might even have used AWS initially, but I'm not sure - this is going back a fair few years now). I was doing a lot of work with Mesos at the time, and we chatted a bit about this as our work overlapped a fair bit.
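For anyone who hasn't touched this space: stripped of the Mesos and xorg specifics, the scheduling problem is basically placing each artist's session on a host that still has a free GPU. A minimal sketch of that allocation logic (my own illustration with invented names, not ILM's actual framework):

```python
# Toy version of the placement decision a Mesos-style framework makes for
# VDI sessions: pick a host in the GPU pool with capacity, reserve a slot,
# then (in the real system) start a GPU-backed desktop there and hand the
# connection details back to the remote-display client.
# All names and structures here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    gpus_total: int
    sessions: list = field(default_factory=list)

    @property
    def gpus_free(self) -> int:
        return self.gpus_total - len(self.sessions)

def place_session(pool: list, user: str):
    """Pick the least-loaded host with a free GPU; None if the pool is full."""
    candidates = [h for h in pool if h.gpus_free > 0]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: h.gpus_free)
    best.sessions.append(user)
    return best

pool = [Host("gpu-node-01", 2), Host("gpu-node-02", 4)]
print(place_session(pool, "artist_a").name)   # gpu-node-02 (most free GPUs)
```

The hard parts in practice are everything around that decision: getting an X server pinned to the right GPU, tearing sessions down cleanly, and reconnecting users to existing sessions instead of new ones - presumably where the xorg hacks came in.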

Are you still using a similar sort of setup to orchestrate the backend of this, and if so have you published anything about it? I've had a few people ask me about this sort of problem lately and there aren't too many great resources out there I can point people new to this sort of tech towards.

replies(2): >>25619559 #>>25643048 #
mroche ◴[] No.25619559[source]
I wish I could answer this, but I really can't. Not because of any NDA, just that I don't know. I wasn't involved with the workstation team at Pixar (or ILM at all); I was part of the Network and Server Admin [NSA] team, specifically focused on OpenShift. There are a lot of tools that Pixar uses, and I don't have the full picture of how they fit together.

Here at Blue Sky we are in our infancy with thin-client-based work. Remote terminals aren't too new here, as they were used for contract workers and artists who needed to WFH on the prior show, but we don't have VDI since we still use deskside workstations. For COVID, the workstations have been retrofitted with Teradici remote workstation host cards, and we send the artists home with a VPN client and a zero client, utilizing direct connect. It was enough to get us going, but we have a long road ahead in optimizing this stack and eventually (if our datacenters can handle it) switching over to VDI.