
528 points sealeck | 4 comments
daxfohl No.31391669
Heroku was magic for hosting college projects of 2010s complexity. The failure wasn't the prohibitive cost at scale (though that factor didn't help); it was that for most real-world stuff we need IaaS, not PaaS. That has become more and more evident over the last ten years.

I think for fly to succeed, they need to figure out edge IaaS, and not put all their eggs in the edge PaaS basket. And I hope they do! I'm curious what a successful edge IaaS looks like!

replies(1): >>31392354 #
mrkurt No.31392354
This is pretty much what I believe. This isn't HN frontpage worthy, but one of the things I'm most excited about is people running production CockroachDB clusters on Fly.io. It is still a little more difficult to use the underlying infrastructure than it should be, but we're getting close.
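The shape of it today is roughly one app scaled out across regions, a volume attached to each instance, and the nodes finding each other over the private network. A rough sketch of the config -- the app name, volume name, and version here are invented, and fly.toml details have changed over time:

    # fly.toml -- sketch only; names and versions are invented
    app = "example-crdb"

    [build]
      image = "cockroachdb/cockroach:v21.2.10"

    [experimental]
      # nodes discover each other over the private 6PN network
      cmd = ["start", "--insecure",
             "--join=example-crdb.internal:26257",
             "--advertise-addr=fly-local-6pn"]

    [mounts]
      source = "crdb_data"      # created with: fly volumes create crdb_data
      destination = "/cockroach/cockroach-data"

    [[services]]
      internal_port = 26257     # SQL and intra-node traffic
      protocol = "tcp"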
replies(1): >>31392488 #
1. daxfohl No.31392488
Neat, I'd say that is absolutely HN frontpage worthy!
replies(1): >>31392652 #
2. daxfohl No.31392652
And disclosure, I used to be on the Azure Front Door team and was the lead for Azure Edge Zones development. I really wanted to do something like fly with that. But it turned out that too many people wanted too many different things. Some needed GPUs (some Nvidia, some AMD), some FPGAs, some general compute. Surprisingly few cared about Functions (our Lambda-like service), or even Web Apps (our Heroku equivalent). SQL Server was a big request; CosmosDB was not. Remote Desktop was another big one. They also wanted availability zones within the edge zone itself. And even when latency to the nearest region was tolerable, we had to put almost all our infra services local too, because everything needed to keep working through a network outage. And the extra annoying thing that shouldn't have had to be annoying -- everybody wanted IPv4.

So it ended up hardly making sense to deploy anything less than like 80 racks per site, at which point it's basically a small region minus a few small pieces.

Then there's just the risk that the people who wanted whatever special GPU or SSD combination would quit wanting it, and the hardware would just sit there unused indefinitely after that. Or stockouts when demand spiked around a conference or whatever, which would tarnish the brand. And of course nobody wanted to pay more than like a 10% markup. They were more amenable to long-term contracts, though. It was just hard to figure out the right use case and make it profitable.

Seemed like what customers really wanted out of them were nearby replacements for pieces of their own datacenter. It was exactly the opposite direction of where I was hoping things would go, which was something between fly and cloudflare workers. Not sure what they're doing now; I left about 18 months ago.

replies(1): >>31392963 #
3. mrkurt No.31392963
This is part of why the PaaS take has worked so well for us. People who think they want edge have all kinds of different needs. When we realized that all full-stack devs could benefit from something kind-of-like-edge, it helped us do more focused work.
replies(1): >>31393132 #
4. daxfohl No.31393132
Yeah, I hope it works! Prior to MS I was the solo dev for Smilebooth (a photobooth company), and when I joined Edge Zones my north star was to shrink the minimum footprint (I think it was three racks initially) so that you could load an edge zone host on, like, a Smilebooth console or something and manage photobooth fleets by deploying Web Apps directly to them. (I realized that was ridiculously far-fetched at the time, and that there were probably better ways of achieving that outcome, but I certainly didn't foresee the minimum footprint growing so dramatically!)

And, like I said earlier, I hope to see what a real edge IaaS solution looks like too, if such a thing is even possible. Maybe an IaaS that would let you build your own CDN.
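For a sense of what I mean by build-your-own-CDN: the unit you'd deploy to each edge site is something like a tiny caching reverse proxy, with the IaaS handling placement and anycast or geo-DNS routing. A toy sketch in Go -- the origin URL, TTL, and port are invented, and there's no eviction or Cache-Control handling:

    // edgecache.go -- toy CDN node: an HTTP caching reverse proxy.
    // Hypothetical throughout: origin URL, TTL, and port are invented
    // for illustration; no eviction or Cache-Control handling.
    package main

    import (
        "io"
        "log"
        "net/http"
        "sync"
        "time"
    )

    const (
        origin = "https://origin.example.com" // hypothetical origin server
        ttl    = 60 * time.Second             // fixed freshness window
    )

    type entry struct {
        header  http.Header
        body    []byte
        expires time.Time
    }

    var (
        mu    sync.RWMutex
        cache = map[string]entry{}
    )

    // copyHeader copies all header values from src onto the response.
    func copyHeader(w http.ResponseWriter, src http.Header) {
        for k, vs := range src {
            for _, v := range vs {
                w.Header().Add(k, v)
            }
        }
    }

    func handle(w http.ResponseWriter, r *http.Request) {
        key := r.URL.String()

        // Fast path: serve a fresh cached copy.
        mu.RLock()
        e, ok := cache[key]
        mu.RUnlock()
        if ok && time.Now().Before(e.expires) {
            copyHeader(w, e.header)
            w.Header().Set("X-Cache", "HIT")
            w.Write(e.body)
            return
        }

        // Miss (or stale): fetch from the origin and cache the body.
        resp, err := http.Get(origin + key)
        if err != nil {
            http.Error(w, "origin unreachable", http.StatusBadGateway)
            return
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            http.Error(w, "bad origin response", http.StatusBadGateway)
            return
        }
        mu.Lock()
        cache[key] = entry{header: resp.Header.Clone(), body: body, expires: time.Now().Add(ttl)}
        mu.Unlock()

        copyHeader(w, resp.Header)
        w.Header().Set("X-Cache", "MISS")
        w.Write(body)
    }

    func main() {
        http.HandleFunc("/", handle)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Run one of those per edge site and you have the skeleton of a CDN; the hard parts -- global routing, invalidation, capacity planning -- are exactly what the edge IaaS would have to make easy.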