Incremental builds and diff-only pulls are not enough in a modern workflow. You either need to keep a fleet of warm builders, or store and sync the previous build state to fresh machines.
Games (and, I'm sure, many other kinds of apps) fall into this category: long builds, large assets, and lots of intermediate build files. You don't even need multiple apps in a repo to hit this problem. There's no simple off-the-shelf solution.
For example, the League of Legends source repo contains millions of files and hundreds of GB of data, because it includes things like game assets, vendored compiler toolchains for all of our target platforms, etc. But to compile the game code, only about 15,000 files and 600 MB of data are needed from the repo.
That means 99% of the repo is not needed at all for building the code, which is why we're seeing a lot of success with VFS-based tech like the one described in this blog post. In our case, we built our own virtual filesystem for source code on top of our existing content-defined patching tech (which we wrote about a while ago [1]). It's similar to Meta's EdenFS in that it's built on the ProjFS API on Windows and on NFSv3 on macOS and Linux. We can mount a view into the multimillion-file repo in 3 seconds, and file data (compressed, deduplicated, and served through a CDN) is downloaded transparently when a process requests it. We use a normal caching build system, FASTBuild in our case, to actually run the build.
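If content-defined chunking is new to you, here's a rough sketch of the idea in Python. This is generic gear-hash CDC with illustrative constants, not our production code; the point is just that chunk boundaries are derived from the content itself, so a local edit only re-chunks nearby data:

    import hashlib
    import random

    random.seed(0)  # fixed seed so chunk boundaries are stable across runs
    GEAR = [random.getrandbits(32) for _ in range(256)]  # random value per byte

    MASK = (1 << 13) - 1             # cut when low 13 bits are zero -> ~8 KiB average chunk
    MIN_CHUNK, MAX_CHUNK = 2048, 65536

    def chunks(data: bytes):
        """Yield (sha256_hex, chunk) pairs with content-defined boundaries."""
        start, h = 0, 0
        for i, b in enumerate(data):
            # Gear rolling hash: the left shift ages each byte out of the
            # 32-bit state after 32 steps, giving a sliding-window effect.
            h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
            size = i - start + 1
            if (size >= MIN_CHUNK and (h & MASK) == 0) or size >= MAX_CHUNK:
                chunk = data[start:i + 1]
                yield hashlib.sha256(chunk).hexdigest(), chunk
                start, h = i + 1, 0
        if start < len(data):
            tail = data[start:]
            yield hashlib.sha256(tail).hexdigest(), tail

Two revisions of a file then share every chunk whose hash matches, so only genuinely new chunks have to be stored or pulled through the CDN.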
I recently timed it, and I can go from having nothing at all on disk to having locally built versions of the League of Legends game client and server in 20 seconds on a 32-core machine. This is with 100% cache hits, similar to the build timings mentioned in the article.
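For anyone curious about the FASTBuild side: the cache is mostly just configuration. A minimal example (the path below is a placeholder, not our actual setup):

    Settings
    {
        ; Shared cache location; placeholder path, not our real one.
        .CachePath = '\\build-cache\fbuild'
    }

Then run fbuild with -cache to read and write the cache, or -cacheread on machines that should only consume it.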
[1] https://technology.riotgames.com/news/supercharging-data-del...