
1226 points bishopsmother | 2 comments
samwillis ◴[] No.35046486[source]
Fundamentally I think some of the problems come down to the difference between what Fly set out to build and what the market currently wants.

Fly (to my understanding) at its core is about edge compute. That is where they started and what the team are most excited about developing. It's a brilliant idea, they have the skills and expertise. They are going to be successful at it.

However, at the same time the market is looking for a successor to Heroku: a zero-devops PaaS with instant deployment, dirt-simple managed Postgres, a generous free tier, lower cost as you scale, and a few regions around the world. That isn't what Fly set out to do... exactly, but it's sort of the market they find themselves in now that Heroku has basically told its low-value customers to go away.

It's that slight misalignment of strategy and market fit that results in decisions being made that benefit the original vision, but not necessarily the immediate influx of customers.

I don't envy the stress the Fly team are under, but what an exciting set of problems they are trying to solve; that I do envy!

replies(20): >>35046650 #>>35046685 #>>35046754 #>>35046953 #>>35047128 #>>35047302 #>>35047334 #>>35047345 #>>35047376 #>>35047603 #>>35047656 #>>35047786 #>>35047788 #>>35047937 #>>35048244 #>>35048674 #>>35049946 #>>35050285 #>>35051885 #>>35056048 #
mattbillenstein ◴[] No.35046754[source]
Yeah, distributed systems at global scale are very, very difficult. At least with the Heroku-style problem you'd be scaling within a single datacenter, I think; deployments to multiple datacenters wouldn't share dependencies.

I do wonder, however, if they'd be better off using less l33t tech: do almost everything on Postgres instead of consul and vault, etc. Scaling, failover, consistency, and so on are better-understood problems there, and far more people have run mainstream DBs at tremendous scale than have run the alternatives.

Simplicity is the key to reliability, but this isn't a simple product, so idk.
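
To make the "do it on Postgres" idea concrete: one thing Consul provides is leader election, and Postgres can cover that with advisory locks. Below is a minimal sketch (not Fly's actual setup; the service name and SQL usage are illustrative). An instance that successfully takes `pg_try_advisory_lock(key)` on the shared database is the leader; everyone else gets `false`.

```python
# Sketch: leader election on Postgres advisory locks instead of a
# dedicated coordination service. Hypothetical example, not Fly's stack.
import hashlib


def advisory_lock_key(name: str) -> int:
    """Map a service name to a signed 64-bit key, the type
    pg_try_advisory_lock() expects."""
    digest = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(digest[:8], "big", signed=True)


def try_become_leader_sql(name: str) -> str:
    """SQL each instance would run against the shared database;
    Postgres grants the lock to exactly one session at a time."""
    return f"SELECT pg_try_advisory_lock({advisory_lock_key(name)})"


# Every instance derives the same key, so they contend for one lock.
print(try_become_leader_sql("scheduler"))
```

The lock is session-scoped, so leadership is released automatically if the holder's connection dies, which is the same liveness property a Consul session gives you, just with Postgres's (single-DC) availability story.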

replies(2): >>35049032 #>>35049768 #
1. lmm ◴[] No.35049032[source]
> I do wonder, however, if they'd be better off using less l33t tech: do almost everything on Postgres instead of consul and vault, etc. Scaling, failover, consistency, and so on are better-understood problems there, and far more people have run mainstream DBs at tremendous scale than have run the alternatives.

In my experience, people who ran Postgres distributed across a WAN tended to use obscure third-party plugins at best, and more often a pile of dodgy Perl scripts. Using something designed from the ground up to be clustered seems to have a much better chance of working out than trying to make a system that's been built single-instance for decades work across the internet.

replies(1): >>35050493 #
2. mattbillenstein ◴[] No.35050493[source]
Yeah, point taken; I wasn't thinking of clustering across the WAN, more like an API wrapping Postgres in a single DC. But then you pay the price of read latency, I guess... it's a hard problem, no doubt.
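
The read-latency price is easy to put a rough number on: with the database in one DC, every sequential query from a remote edge region pays a full round trip. The RTT figures below are illustrative, not measured:

```python
# Back-of-envelope: cost of keeping Postgres in a single DC and
# querying it from remote edge regions. RTTs are illustrative.
rtt_ms = {"same-dc": 1, "cross-country": 70, "cross-atlantic": 90}


def request_latency_ms(region: str, sequential_queries: int) -> int:
    """Each dependent (non-pipelined) query pays one full round trip."""
    return sequential_queries * rtt_ms[region]


# A page that issues 5 dependent queries: trivial in-DC,
# but painful when served from the far side of an ocean.
print(request_latency_ms("same-dc", 5))
print(request_latency_ms("cross-atlantic", 5))
```

That gap is exactly why edge platforms end up reaching for read replicas or clustered stores instead of one central Postgres.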