
466 points by 0x63_Problems | 2 comments
perrygeo No.42138092
> Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases will struggle to adopt them. In other words, the penalty for having a ‘high-debt’ codebase is now larger than ever.

This mirrors my experience using LLMs on personal projects. They can provide good advice only to the extent that your project stays within the bounds of well-known patterns. As soon as your codebase gets a little bit "weird" (i.e., trying to do anything novel and interesting), the model chokes, starts hallucinating, and makes your job considerably harder.

Put another way, LLMs make the easy stuff easier, but royally screw up the hard stuff. The gap appears to be widening, not shrinking. They work best where we need them the least.

dcchambers No.42138350
Like most of us, it appears LLMs really only want to work on greenfield projects.
hyccupi No.42138525
Good joke, but the reality is they falter even more on truly greenfield projects.

See: https://news.ycombinator.com/item?id=42134602

MrMcCall No.42138662{3}
That is because, by definition, their models are based upon the past. And woe unto thee if that training data was not pristine. Error propagation is a feature; it's a part of the design, unless one is suuuuper careful. As some have said, "Fools rush in."
Terr_ No.42140455
Or, in comic form: https://www.smbc-comics.com/comic/rise-of-the-machines