
449 points by lemper | 1 comment
rvz No.45036860
We're likely to see a similar incident very soon if we continue with the cult of 'vibe-coding', throwing basic software engineering principles out of the window, as I said before. [0]

Take this post-mortem [1] as a warning: it highlights exactly what can go horribly wrong when the LLM misreads comments.

What's even scarier: each time I stumble across a freshly minted project on GitHub with a considerable amount of attention, not only is it 99% vibe-coded (very easy to detect), it completely lacks any tests.

It makes me question whether the person prompting the code in the first place even understands how to write robust, battle-tested software.

[0] https://news.ycombinator.com/item?id=44764689

[1] https://sketch.dev/blog/our-first-outage-from-llm-written-co...

1. mrguyorama No.45042994
God, that "post mortem" is such a portent of things to come. I've seen this exact failure path happen locally nearly every time I use Claude. It very obviously just picks what goes where based on weighted random chance, and at some point that chance will not go in your favor, in a way that no amount of training or job experience can prevent, because no, a human would not have made this mistake.
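
To make that failure class concrete, here is a minimal sketch in Go. It is illustrative only, with made-up Message/deliver names, not the actual code from the sketch.dev incident: two identifiers share the same underlying type, so swapping them compiles cleanly and only misbehaves at runtime.

    package main

    import "fmt"

    // Hypothetical types for illustration; not from the real outage.
    type Message struct {
        UserID string // who sent the message
        ChatID string // which conversation it belongs to
    }

    // deliver routes a message body to a conversation by its chat ID.
    func deliver(chatID, body string) {
        fmt.Printf("delivering to chat %s: %s\n", chatID, body)
    }

    func main() {
        m := Message{UserID: "u-42", ChatID: "c-7"}
        // The plausible-but-wrong substitution: both fields are plain
        // strings, so this compiles and passes a shallow review.
        // It should be m.ChatID.
        deliver(m.UserID, "hello")
    }

Note that a distinct named type (type ChatID string) would turn the bad call into a compile error, which is exactly the kind of basic engineering discipline being thrown out the window.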

This is the kind of mistake that fails people out of CS101; it's obvious the student is just manipulating symbols they don't really "get" rather than reasoning about the code. Throwing the Chinese room thought experiment at your codebase is bad engineering.