I needed to make a landing page for an ad campaign to test out an idea for PMF.
Claude crapped out a workable landing page in ~30 seconds of prompting. I updated the copy on the page; total time was less than an hour.
The odds of me spending more than an hour just picking a color theme for the page, or finding the SVG icons it used, are pretty much 100%.
------------
I had a bug in some async code; it hit rarely, but often enough to be noticeable. I had narrowed down which file it was in, but after over an hour of staring at the code I wasn't finding it.
Popped into Cursor, asked it to look for async bugs in the current file. "You forgot to clean up a resource on this line here."
Bug fixed.
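My bug wasn't literally this, but the shape was the classic one: an async path that can exit without releasing what it acquired. A minimal TypeScript sketch with hypothetical names:

    // Hypothetical types, just to make the sketch self-contained.
    interface Conn {
      query(sql: string): Promise<unknown[]>;
      release(): Promise<void>;
    }
    interface Pool { acquire(): Promise<Conn>; }

    // Buggy shape: if query() rejects, release() never runs and the pool slowly drains.
    async function handleJob(pool: Pool) {
      const conn = await pool.acquire();
      const rows = await conn.query("SELECT 1");
      await conn.release();
      return rows;
    }

    // Fix: guarantee cleanup no matter how the function exits.
    async function handleJobFixed(pool: Pool) {
      const conn = await pool.acquire();
      try {
        return await conn.query("SELECT 1");
      } finally {
        await conn.release();
      }
    }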
------------
"Here is my nginx config, what is wrong with the block I just added for this new site I'm throwing up?"
------------
"Write a regex to do nnnnnn"
------------
"This page isn't working on mobile, something is wrong, can you investigate and tell me what the issues may be?"
Oh, that won't go well: all of the models get super confused about CSS at some point and end up in doom spirals, applying incorrect fixes again and again.
> Googles and Amazons of this world would be beating the crap out of every aspiring startup in any niche the big tech cared about, even remotely. And that simply is not happening.
This is already a well-explored and well-understood space, to the extent that big tech companies have at times spun teams off to work independently to gain the advantage of startup-like velocity.
The more infra you have, the more overhead you have. Deploying a company's first service to production is really easy: no infra needed, no DevOps, just publish.
Deploying the 5th service, eh.
Deploying the 50th service, well, by now you need to have a host of meetings before work even starts to make sure you aren't duplicating effort and that the libraries you use mesh with the department's strategic technical vision. By the time those meetings are done, a startup will have already put 3 things into prod.
The communication overhead within large orgs is also famously non-linear.
I spent 10 years working at Microsoft, then 3 years at HBO Max (a lean tech company: ~200 engineers, amazing DevOps), and now I'm working at startups of various sizes.
At Microsoft, pre-Azure, it could take weeks just to get a machine provisioned to test an idea out on. Actually getting a project up and running in a repo was... hard at times. Build systems were complex, tooling was complex, and you sure as hell weren't getting anything pushed to users without a lot of checks in place. Now, many of those checks were in place for damn good reasons: wrongly drawn lines on a map inside Windows are a literal international incident[1], and we had separate localizations for different variants of English around the world. (And I'd argue that Microsoft's agility at deploying software around the entire world at the same time is unmatched; the people I worked with there were amazing at sorting through the cultural and legal problems!)
Also if Google launches a new service and it goes down from too much traffic, it is embarrassing. Everything they do has to be scalable and load balanced, just to avoid bad press. If a startup hits the front page of HN and their website goes down from being too popular, they get to write a follow up blog post about how their announcement was so damn popular their site crashed! (And if they are lucky, hit the front page of HN again!)
The difference in designing for different levels of scale is huge.
At Microsoft it was "expect potentially a billion users." At HBO it was "expect tens of millions of users." At many startups it is "if we hit 10k users we'll turn a profit, and we can figure out how to scale out later."
10K DAU is a load balancer and 3 instances of NodeJS (for rolling updates), each running on a potato of a CPU.
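To put a rough number on that (back-of-the-envelope, with request volume and spikiness purely assumed):

    // Rough load math for 10K DAU; the per-user figures are assumptions for illustration.
    const dailyActiveUsers = 10_000;
    const requestsPerUserPerDay = 100;   // assumed
    const peakToAverageRatio = 5;        // assumed traffic spikiness

    const avgRps = (dailyActiveUsers * requestsPerUserPerDay) / 86_400; // ~11.6 req/s
    const peakRps = avgRps * peakToAverageRatio;                        // ~58 req/s

    console.log({ avgRps, peakRps }); // comfortably within what one small Node instance serves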
> So, I don't think I'm biased toward bureaucratic environments, where developers code in MS Word rather than VS Code.
I've worked in those environments, and the level of engineering quality can be much higher. The number of bugs that can be hammered out and avoided in spec reviews is huge. Technology designs end up being serviceable for years to decades instead of "until the next rewrite." The actual code tends to flow much faster as well, or at least as fast as it can flow in the large, sprawling code bases that exist at big tech companies. At other times, those specs are needed so that one has a path forward while working through messy legacy code bases.
Both styles have their place. Sometimes you need to iterate quickly, get lots of code down, and see what works; other times it is worth thinking through edge cases, usage scenarios, and performance characteristics. Heck, I've done memory bus calculations for different designs; when you are working at that level you don't just "write code and see what works", you first spend a few days (or a week!) with some other smart engineers and try to narrow down what you should even be trying to do!
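For the curious, that memory-bus math is nothing exotic. A sketch with invented numbers (none of these figures are from a real project):

    // Back-of-the-envelope memory bandwidth check for a candidate design.
    const bytesTouchedPerOp = 256;            // reads + writes per operation (assumed)
    const targetOpsPerSecond = 50_000_000;    // throughput the design must sustain (assumed)
    const usableBusBandwidth = 25e9;          // ~25 GB/s of realistic memory bandwidth (assumed)

    const requiredBandwidth = bytesTouchedPerOp * targetOpsPerSecond;   // 12.8 GB/s
    const utilization = requiredBandwidth / usableBusBandwidth;         // ~51%

    console.log(`Memory bus utilization: ${(utilization * 100).toFixed(0)}%`);
    // If this lands above ~70-80%, the design gets rethought before anyone writes code.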
[1]https://www.upi.com/Archives/1995/09/09/Microsoft-settles-In...