It's not about AI development; it's about something mentioned earlier in the article: "make as much money as I can". The problems we see with AI have little to do with AI "development"; they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting: if you're doing something for the enjoyment or challenge of it, you won't do it at a scale beyond what you personally can enjoy. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas, as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization; he was stopped by an external authority (like his parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.