399 points nomdep | 2 comments
bdamm ◴[] No.44295003[source]
No offense intended, but this is written by a guy who has the spare time to write the blog. I can only assume his problem space is pretty narrow. I'm not sure what his workflow is like, but personally I am interacting with so many different tools, in so many different environments, with so many unique problem sets, that being able to use AIs for error evaluation, and yes, for writing code, has indeed been a game changer. In my experience it doesn't replace people at all, but they sure are powerful tools. Can they write unsupervised code? No. Do you need to read the code they write? Yes, absolutely. Can the AIs produce bugs that take time to find? Yes.

But despite all that, the tools can find problems, get information, and propose solutions so much faster and across such a vast set of challenges that I simply cannot imagine going back to working without them.

This fellow should keep on working without AIs. All the more power to him. And he can ride that horse all the way into retirement, most likely. But it's like ignoring the rise of IDEs, or Google search, or AWS.

replies(2): >>44295035 #>>44295098 #
satisfice ◴[] No.44295098[source]
You think he's not using the tools correctly; I think you aren't doing your job responsibly. You must think he isn't trying very hard; I think you are not trying very hard...

Those are the two sides of the argument. It could be settled, in principle, only if both sides were directly observing each other's work in real time.

But I've tried that, too: twenty years ago, in a debate between dedicated testers and a group of Agilists who believed all testing should be automated. We worked together on a project for a week, and the last day broke down into chaos. Each side interpreted the events and evidence differently. To this day the same debate continues.

replies(1): >>44300934 #
bdamm ◴[] No.44300934[source]
I am absolutely responsible for my work. That's why I spend so much time reading the code that I and others on my team write, and it's why I spend so much time building enormous test systems, and pulling deeply on the work of others. Thousands and thousands of hours go into work that the customer will never see, because I am responsible.

People's lives are literally at stake. If my systems screw up, people can die.

And I will continue to use AI to help get through all that. It doesn't make me any less responsible for the result.

replies(1): >>44388323 #
1. satisfice ◴[] No.44388323[source]
Why should any responsible person believe you? I'm serious. You assure us that you are properly supervising a tool that is famous for doing shockingly bad work at random times. The only way anyone could verify that is by watching you work, literally all the time.

We don't have a theory of LLMs that provides a basis on which to trust them. The people who create them do not test them in a way that passes muster with experts in the field of testing. Numerous articles by people at least as qualified as you cast strong doubt on the reliability of LLMs.

But you say "trust me!"

Stockton Rush assured us that his submersible was safe, despite warnings from experts. He also made noises about being responsible.

replies(1): >>44446671 #
2. bdamm ◴[] No.44446671[source]
I'm responsible for my work and I don't need to prove that to you or anyone else except the people who pay me for my work.

The fact that AI is involved doesn't change the nature of the work. Engineers and coders are paid to produce functioning results, and thorough code review is sometimes, but not always, part of that. None of it changes. Software developers make mistakes whether or not an AI is involved, so introducing AI changes nothing about the validation chain.

If you're trying to prevent a Stockton Rush type personality from creating larger social problems, then you're talking about regulating the software industry, presumably the way the engineering profession is regulated. But again, that changes nothing about the tools themselves, only who bears responsibility and how it flows.