I'd be surprised if increased load has a negative effect on 1.1.1.1's performance.
We run a homogeneous architecture -- that is, every machine in our fleet is capable of handling every type of request. The same machines that currently handle 10% of all HTTP requests on the internet, serve authoritative DNS for our customers, and serve the DNS F root server are now handling recursive DNS at 1.1.1.1 as well. These machines are not sitting idle, and all of these services draw from the same pool of resources, which is, obviously, enormous. This service will scale easily to any plausible level of demand.
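To make the homogeneous model concrete, here's a toy sketch (my own illustration, not Cloudflare's actual code): every machine runs the same dispatcher, so adding a service means adding a branch to that dispatcher rather than standing up a dedicated fleet, and the new service draws on the whole fleet's capacity from day one.

    # Toy sketch of a homogeneous fleet (illustrative only; service names
    # and fleet size are made up). Every machine runs identical software
    # and dispatches on service type, so any machine can serve any request.

    from enum import Enum, auto
    import random

    class Service(Enum):
        HTTP = auto()
        AUTHORITATIVE_DNS = auto()
        F_ROOT = auto()
        RECURSIVE_DNS = auto()  # the new 1.1.1.1 workload

    class Machine:
        def handle(self, service: Service, request: str) -> str:
            # Same dispatcher everywhere: a new service is a new branch
            # here, not a new dedicated sub-fleet of machines.
            return f"{service.name}: handled {request!r}"

    fleet = [Machine() for _ in range(1000)]  # size is illustrative

    # The load balancer can pick *any* machine for *any* service.
    print(random.choice(fleet).handle(Service.RECURSIVE_DNS, "example.com. IN A"))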
In fact, in this kind of architecture, a little-used service is likely to be penalized in terms of performance, because it's spread so thin that it loses cache efficiency (for all kinds of caches -- CPU cache, DNS cache, etc.): each machine sees so few queries for any given name that cached entries expire before they're reused. More load should actually make it faster, as long as there is capacity -- and there is a lot of capacity.
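Here's a back-of-the-envelope model of that effect for the DNS cache (my own sketch; the Poisson-arrival assumption and the numbers are illustrative, not Cloudflare's data). With a TTL-based cache, each miss starts a TTL window, so with per-machine arrival rate lam for a given name, the expected hit ratio is lam*ttl / (1 + lam*ttl). Spreading a fixed total query rate across more machines lowers lam, and the hit ratio falls with it.

    # Toy model of DNS cache hit ratio vs. fleet size (illustrative only).
    # Assumes Poisson query arrivals and a simple TTL cache: a miss fetches
    # the answer and caches it for `ttl` seconds; queries in that window hit.
    # Expected hits per miss = lam * ttl, so hit ratio = lam*ttl / (1 + lam*ttl).

    def hit_ratio(total_qps: float, machines: int, ttl: float) -> float:
        lam = total_qps / machines  # per-machine query rate for this one name
        return lam * ttl / (1 + lam * ttl)

    for machines in (1, 10, 100, 1000):
        print(f"{machines:>5} machines: hit ratio "
              f"{hit_ratio(total_qps=50.0, machines=machines, ttl=30.0):.3f}")

With 50 qps for one name on a single machine the hit ratio is ~0.999, but spread across 1000 machines it drops to 0.600 -- and doubling the total load pushes it back up to 0.750, which is the sense in which more load makes the service faster.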
Meanwhile, Cloudflare is rapidly adding new locations -- 31 new locations in March alone, bringing the current total to 151. This not only adds capacity for running the service, but also reduces the distance between users and their closest service location.
In the past I worked at Google. I don't know specifically how their DNS resolver works, but my guess is that it is backed by a small set of dedicated containers scheduled via Borg, since that's how Google does things. To be fair, they have way too many services to run them all on every machine. That said, they're pretty good at scheduling more instances as needed to cover load, so they should be fine too.
In all likelihood, what really makes the difference is the design of the storage layer. But I don't know those details for either Google's or Cloudflare's resolvers, so I won't speculate on that.