But is he right? How do we know? Well, for starters, look at his CV. He has never managed servers for a living. The closest he's come is working on FPGAs. So what's he basing all these opinions on? Musings? Thoughts? Feelings? Hope?
He makes a couple of claims that aren't obviously bunk, so I'll address them here, in reverse order.
"microservice architectures in general add a lot of overhead to a system for dubious gain when you are running on one big server" - Microservices architectures are not about overhead or efficiency. They are an attempt to use good software design principles to address Conway's Law. If you design the microservice correctly, you can enable many different groups in an organization to develop software independently, and come up with a highly effective and flexible organization and stable products. Proof? Amazon. But the caveat is, you have to design them correctly. Almost everyone fails at this.
"It's impossible to get the benefits of a CDN, both in latency improvements and bandwidth savings, with one big server" - This is so dumb I'm not sure I have to refute it? But, uh, no, CDNs absolutely give a heap of benefits whether you have 1 server or 1,000. And CloudFlare Free Plan is Free.
"My Workload is Really Bursty - Cloud away." - Unless your workload involves massive amounts of storage or ingress/egress and your profit margin tiny, in which case you may save more by building out a small fleet of unreliable poorly-maintained colocated servers (emphasis on may).
"The "high availability" architectures you get from using cloudy constructs and microservices just about make up for the fragility they add due to complexity. ... Remember that we are trying to prevent correlated failures. Cloud datacenters have a lot of parts that can fail in correlated ways. Hosting providers have many fewer of these parts. Similarly, complex cloud services, like managed databases, have more failure modes than simple ones (VMs)." - Argument from laziness, or ignorance? He's trying to say that because something is complex it's also less reliable. Which completely ignores the reliability engineering aspect of that complexity. You mitigate higher numbers of failure modes by designing the system to fail over reliably. And you also have warm bodies running around replacing the failing parts, which fights entropy. You don't get that in a single server; once your power supply, disk, motherboard, network interface, RAM, etc fails, and assuming your server has a redundant pair, you have a ticking clock to repair it until the redundant pair fails. How lucky do you feel? (oh, and you'll need downtime to repair it.)
As usual, the cloud costs quoted are MSRP, and if you're paying retail, you're a fool. Almost all cloud costs can be brought down by 25%-75%, spot instances are a fraction of the on-demand price, and efficient use of cheaper cloud services reduces your need to buy compute at all.
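A purely illustrative example of what that looks like (my made-up fleet and discount levels, not any provider's actual price sheet):

```python
# Hypothetical monthly bill: same fleet, list price versus using the standard levers.
on_demand_list = 10_000  # made-up monthly MSRP for the whole fleet

committed = 0.6 * on_demand_list * (1 - 0.40)  # steady load on reserved/committed-use, ~40% off
spot      = 0.3 * on_demand_list * (1 - 0.70)  # interruptible work on spot/preemptible instances
leftover  = 0.1 * on_demand_list               # the bit you genuinely leave on-demand

print(committed + spot + leftover)  # ~5,500: roughly 45% below "retail"
```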
"The big drawback of using a single big server is availability. Your server is going to need downtime, and it is going to break. Running a primary and a backup server is usually enough, keeping them in different datacenters. A 2x2 configuration should appease the truly paranoid: two servers in a primary datacenter (or cloud provider) and two servers in a backup datacenter will give you a lot of redundancy. If you want a third backup deployment, you can often make that smaller than your primary and secondary." - Wait... so One Big Server isn't enough? Huh. So this was a clickbait article? I'm shocked!
"One Server (Plus a Backup) is Usually Plenty" - Plenty for what? I mean we haven't even talked system architecture or application design. But let's assume it's a single microservice that gets 1RPS. Is your backup server a hot spare, cold spare, or live mirror? If it's live, it's experiencing the same wear, meaning it will fail at about the same time. If it's hot, there's less wear, but it's still experiencing some. If it's cold, you get less wear, but you're less sure it'll boot up again. And then there's system configuration. The author mentions the "complexity" of managing a cluster, but actually it's less complex than managing just two servers. With a fleet of servers, you know you have to use automation, so you spend the time to automate their setup and run updates frequently. With a backup, you probably won't do any maintenance on the backup, and you definitely won't perform the same operations on the backup as the server. So the system state will drift wildly, and the backup's software will be useless. It would be better to just have it as spare part.
The author never talks about the true failure modes of "one big server". When parts start to need replacing, it's never cheap: smart-hands fees, the cost of parts plus shipping, the cost of the downtime. And often you'll find there are delays - delays in getting smart hands to actually repair it correctly, delays in shipping, delays in part ordering and availability. Then there's running out of power, running out of space, temperatures running too high, "flaky" parts you can't diagnose, backups and restores, datacenter issues, routing issues, backbone issues. You'll tell yourself these are "probably rare" - but these are all failure modes, and as the author tells us, you should be wary of lots of failure modes. Anecdotes will tell you somebody has run a server for 10 years with no issue, while another person had a server with 3 faults in a month. To say nothing of the need to run "burn-in" on a new server to discover faults once it's racked.
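To make the "never cheap" part concrete, here's one "routine" part swap with hypothetical but not unusual numbers:

```python
# Hypothetical numbers for a single part swap on a colocated server.
smart_hands_hourly = 150   # remote-hands rate; varies by facility
hours_of_hands     = 2
replacement_part   = 400   # say, an enterprise disk or a PSU
shipping           = 60
downtime_hours     = 36    # waiting on the part, then scheduling the swap
revenue_per_hour   = 50    # whatever an hour of being up is worth to you

incident_cost = (smart_hands_hourly * hours_of_hands
                 + replacement_part + shipping
                 + downtime_hours * revenue_per_hour)
print(incident_cost)  # 2560, for one failure, before any of the delays above compound
```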
Go ahead and do whatever you want. Cloud, colo, one server, many servers. There will be failures and complexity no matter what. You want to tell yourself a comforting story that there is "one piece of advice" to follow, some black-and-white world where only one piece of folksy wisdom applies. But here's my folksy wisdom: design your application, design your system to fit it, try not to pinch every penny, build something, and become educated enough to know what problems to expect and how to deal with them. Or if not, pay someone who can, and listen to them.