Basically though, Pixar doesn't have the scale to justify custom chips: all of Pixar, or even all of Disney put together, is pretty small next to, say, a single Google or Amazon cluster.
Until recently, GPUs also didn't have enough memory to handle production film rendering, particularly the volume of texture data touched per frame (even on CPUs, textures are handled out-of-core through a texture cache, rather than somehow read in all up front). I think the recent HBM-based GPUs will make this a more likely scenario, especially when/if OptiX/RTX gains a serious texture cache for this kind of usage. Even then, though, those GPUs are extremely expensive; for folks who can squeeze into the 16 GiB of an NVIDIA T4, it's just about right.
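To make "out-of-core with a texture cache" concrete, here's a minimal sketch of the idea: keep a bounded pool of texture tiles resident in memory, fault tiles in from disk on demand, and evict the least-recently-used tile when the pool is full. The class and loader names here are made up for illustration; real caches (e.g. OpenImageIO's ImageCache, which several production renderers build on) layer mipmapping, tile locking, and heavy multithreading on top of the same basic principle.

```python
from collections import OrderedDict

class TextureTileCache:
    """Toy out-of-core texture cache: holds at most `capacity` tiles
    in RAM, evicting the least-recently-used tile when full."""

    def __init__(self, capacity, load_tile):
        self.capacity = capacity    # max tiles resident at once
        self.load_tile = load_tile  # callback: (texture, x, y) -> tile data
        self.tiles = OrderedDict()  # (texture, x, y) -> tile data, LRU order

    def get(self, texture, tile_x, tile_y):
        key = (texture, tile_x, tile_y)
        if key in self.tiles:
            self.tiles.move_to_end(key)  # mark as most recently used
            return self.tiles[key]
        data = self.load_tile(texture, tile_x, tile_y)  # disk read on a miss
        self.tiles[key] = data
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict the LRU tile
        return data

# Hypothetical loader; a real one would decode a tile out of a .tx file.
def load_from_disk(texture, x, y):
    print(f"miss: loading tile ({x}, {y}) of {texture}")
    return bytes(64 * 64 * 4)  # pretend 64x64 RGBA tile

cache = TextureTileCache(capacity=1000, load_tile=load_from_disk)
texel_tile = cache.get("rock_diffuse.tx", 12, 7)  # disk hit
texel_tile = cache.get("rock_diffuse.tx", 12, 7)  # served from cache
```

The point is that the working set of tiles a frame actually samples is far smaller than the total texture data on disk, which is how CPU renderers get away with far less RAM than the textures would naively require; GPUs only recently got enough memory (and fast enough host transfers) to play the same trick well.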
tl;dr: The economics don't work out yet. You'll probably start seeing more and more studios using GPUs (particularly with RTX) for shot work, especially in VFX, shorts, or simpler films, but until memory per card (here now!) and $/GPU (nope) are both competitive, it'll be a tradeoff.