Well, as hardware becomes more powerful, what's possible in a small footprint keeps growing, and distributed software for disaggregated storage is becoming more accessible. Put these two together, and running on-prem footprints at the scale of $50-100M capex makes a lot of sense. In my personal experience, at this scale (if your cloud bill for compute + storage + local network is $50M+/year), you can get 2-4x more on-prem private-cloud capacity for the same money. Of course, this only makes sense if you already have an in-house software engineering team and the marginal cost of adding another 50-100 engineers to build and operate it is strategically valuable to your business.
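To make the 2-4x claim concrete, here is a back-of-envelope sketch of how that multiple could be computed. All of the inputs (capex, opex, amortization horizon, and the on-prem vs cloud unit-cost ratio) are hypothetical illustration values, not figures from any real deployment:

```python
# Back-of-envelope sketch: compare N years of cloud spend against
# on-prem capex + opex, and ask what capacity multiple the same
# total money buys. All parameter values below are assumptions.

def onprem_capacity_multiple(
    cloud_annual_spend: float,      # e.g. $50M/yr for compute+storage+network
    years: float,                   # amortization horizon
    capex: float,                   # upfront hardware + buildout
    annual_opex: float,             # power, space, and the engineering team
    onprem_unit_cost_ratio: float,  # on-prem $ per unit of capacity vs cloud $
) -> float:
    """Capacity multiple of on-prem vs cloud for the same total spend."""
    cloud_total = cloud_annual_spend * years
    onprem_total = capex + annual_opex * years
    # The same budget stretches (cloud_total / onprem_total) as far,
    # and each on-prem dollar buys 1/onprem_unit_cost_ratio as much
    # raw capacity as a cloud dollar.
    return (cloud_total / onprem_total) / onprem_unit_cost_ratio

# Illustrative numbers: $50M/yr cloud bill over 5 years, vs $75M capex,
# $25M/yr opex, and on-prem capacity at ~0.35x the cloud unit price.
multiple = onprem_capacity_multiple(50e6, 5, 75e6, 25e6, 0.35)
print(f"~{multiple:.1f}x on-prem capacity for the same money")
```

With these made-up inputs the sketch lands around 3.6x, inside the 2-4x range above; the result is obviously very sensitive to the unit-cost ratio and to how fully the hardware is utilized.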
In the big-data and AI space, this is exactly what's happening right now with roughly the 20th through 100th largest companies in the world.