
117 points soraminazuki | 1 comment | source
1. jokethrowaway No.45081043
After one of my clients forced all employees and contractors to use AI, my boss, who was previously reasonable, started:

- Regurgitating AI output in every answer, often just replying with a ChatGPT / Claude screenshot
- Being unable to explain code, but "don't worry, I got Claude to generate some tests and the tests pass"
- Introducing random bots in Slack and GitHub that print tons of noise humans just skip, because they're not accurate enough.

The effects on the team of developers, with various levels of experience, started showing up as well:

The application architecture turned into a horrible mess, worse than what junior engineers would produce. The application started exhibiting tons of hard-to-debug issues, because the generated code was too low level and didn't cover corner cases.
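To give a flavor of what "too low level and not covering corner cases" looks like in practice (this is a hypothetical sketch, not code from the actual application): hand-rolled parsing logic that works on the happy path but silently breaks on inputs a standard library would handle.

```python
# Hypothetical illustration: generated code often reimplements parsing by hand
# and misses corner cases that a battle-tested library covers.
import csv
import io

def parse_line_naive(line: str) -> list[str]:
    # "Generated"-style code: splits on commas, ignoring CSV quoting rules.
    return line.split(",")

def parse_line_robust(line: str) -> list[str]:
    # The standard csv module handles quoted fields containing commas.
    return next(csv.reader(io.StringIO(line)))

line = 'alice,"Smith, Jane",42'
print(parse_line_naive(line))   # the quoted field is wrongly split in two
print(parse_line_robust(line))  # ['alice', 'Smith, Jane', '42']
```

Both versions pass a trivial test on unquoted input, which is exactly why "the tests pass" is not reassuring.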

Every attempt by the AI-reliant engineers to fix an issue generated one more class wrapping the existing codebase, with a fix that never worked (e.g. ConnectionManagerWithTimeouts).
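The shape of that anti-pattern looks roughly like this (a minimal sketch; only the name ConnectionManagerWithTimeouts comes from the comment, everything else is hypothetical): instead of fixing the underlying class, each "fix" adds a delegating wrapper that never actually enforces anything.

```python
# Hypothetical sketch of the wrapper-on-wrapper anti-pattern described above.
class ConnectionManager:
    def connect(self, host: str) -> str:
        # Original low-level code; no timeout handling at all,
        # so a connect can hang indefinitely.
        return f"connected to {host}"

class ConnectionManagerWithTimeouts:
    """Wrapper added to 'fix' hangs -- but it only delegates; the bug remains."""
    def __init__(self, inner: ConnectionManager, timeout: float = 5.0):
        self.inner = inner
        self.timeout = timeout  # stored, but never actually applied

    def connect(self, host: str) -> str:
        # The timeout is ignored: the hang is still possible here.
        return self.inner.connect(host)
```

The real fix is to put timeout handling inside the one place that owns the connection, not to stack another layer on top.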

Eventually we basically had to rewrite the application, throwing away most of the code twice: once just to get something working with the existing architecture without crashing every hour, and again to adopt a framework and eliminate the last bugs that occurred every now and then.

LLMs need to be in incredibly capable hands to be used safely, and engineers will have to fight their instincts and not be swayed by the LLM telling them they're right.