
467 points 0x63_Problems | 4 comments
perrygeo ◴[] No.42138092[source]
> Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases will struggle to adopt them. In other words, the penalty for having a ‘high-debt’ codebase is now larger than ever.

This mirrors my experience using LLMs on personal projects. They can provide good advice only to the extent that your project stays within the bounds of well-known patterns. As soon as your codebase gets a little bit "weird" (i.e., trying to do anything novel and interesting), the model chokes, starts hallucinating, and makes your job considerably harder.

Put another way, LLMs make the easy stuff easier, but royally screw up the hard stuff. The gap appears to be widening, not shrinking. They work best where we need them the least.

replies(24): >>42138267 #>>42138350 #>>42138403 #>>42138537 #>>42138558 #>>42138582 #>>42138674 #>>42138683 #>>42138690 #>>42138884 #>>42139109 #>>42139189 #>>42140096 #>>42140476 #>>42140626 #>>42140809 #>>42140878 #>>42141658 #>>42141716 #>>42142239 #>>42142373 #>>42143688 #>>42143791 #>>42151146 #
1. anthonyskipper ◴[] No.42138690[source]
This is only partly true. AI works really well on very legacy codebases like COBOL and mainframe systems, and it's very good at converting them to modern languages and architectures. It's all the stuff from roughly 2001-2015 that it gets weird on.
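To make the claim concrete, here is the kind of mechanical COBOL-to-Python translation being described. The COBOL fragment and all names (GROSS-PAY, gross_pay, the overtime rule) are hypothetical illustrations, not taken from the thread; the point is that such logic maps onto well-known patterns an LLM has seen many times.

```python
# Hypothetical COBOL source being translated:
#
#   COMPUTE GROSS-PAY = HOURS-WORKED * HOURLY-RATE.
#   IF HOURS-WORKED > 40
#       COMPUTE GROSS-PAY = GROSS-PAY +
#           (HOURS-WORKED - 40) * HOURLY-RATE * 0.5.
#
# A faithful modern rendering. Decimal mirrors COBOL's fixed-point
# arithmetic, which would silently change under binary floats.
from decimal import Decimal


def gross_pay(hours_worked: Decimal, hourly_rate: Decimal) -> Decimal:
    """Straight pay, plus time-and-a-half for hours over 40."""
    pay = hours_worked * hourly_rate
    if hours_worked > Decimal(40):
        pay += (hours_worked - Decimal(40)) * hourly_rate * Decimal("0.5")
    return pay
```

Translations like this are tractable precisely because both the source idiom (batch payroll COBOL) and the target idiom (a typed Python function) are heavily represented patterns, which is the crux of the disagreement in the replies below.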
replies(1): >>42138720 #
2. dartos ◴[] No.42138720[source]
> AI works really well on very legacy codebases like cobol and mainframe

Any sources? It seems unlikely that LLMs would be good at something with so little training data available on the public internet.

replies(1): >>42140055 #
3. true_religion ◴[] No.42140055[source]
LLMs are good at taking the underlying structure of one medium and repeating it using another medium.
replies(1): >>42146828 #
4. dartos ◴[] No.42146828{3}[source]
That assumes both mediums are reasonably well represented in the training data, which brings me back to my comment.