378 points | todsacerdoti | 6 comments
xnorswap ◴[] No.44984684[source]
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, with legacy stuff you can get into a situation where "everyone knows" that the Foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
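
To illustrate the kind of mismatch I mean, here's a minimal sketch (the Foo/Frob names and the setting are made up):

    # Legacy config: the key everyone calls "foo" (hypothetical name)
    config = {"foo_enabled": True}

    class Frob:
        """Tribal knowledge: the "foo_enabled" key actually toggles Frob."""
        def __init__(self, config):
            # A human on the team knows to flip foo_enabled to control Frob.
            self.enabled = config["foo_enabled"]

    frob = Frob(config)
    print(frob.enabled)  # True

    # An LLM asked to "enable Frob" tends to look for a frob_enabled key,
    # invent a new setting, or start reimplementing a "Foo" feature instead.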

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

replies(14): >>44984808 #>>44984938 #>>44984944 #>>44984959 #>>44985002 #>>44985018 #>>44985019 #>>44985160 #>>44985639 #>>44985759 #>>44986197 #>>44986656 #>>44987830 #>>44989514 #
1. ivanjermakov ◴[] No.44984959[source]
> I was getting back ChatGPT output

I would ask them for an apple pie recipe and report to HR

replies(3): >>44985009 #>>44987228 #>>44988112 #
2. japhyr ◴[] No.44985009[source]
I get that this is a joke, but the bigger issue is that there's no easy fix for this because other humans are using AI tools in a way that destroys their ability to meaningfully work on a team with competent people.

There are a lot of people reading replies from more knowledgeable teammates, feeding those replies into LLMs, and pasting the response back to their teammates. It plays out in public on open source issue threads.

It's a big mess, and it's wasting so much of everyone's time.

replies(1): >>44985248 #
3. ivanjermakov ◴[] No.44985248[source]
As with every other problem that has no easy fix, if it is important it should be regulated. It should not be hard for a company to prohibit LLM-assisted communication if management believes it is inherently destructive (e.g. feeding generated messages into message summarizers).
4. wildzzz ◴[] No.44987228[source]
I had a QA inspector asking me a question on Teams about some procedure steps before we ran a test. I answered, and he replied with a message absolutely dripping in AI slop. I was expecting "ok thanks, I'll tell them" and instead got back "Thank you. I really appreciate your response. I'll let them know and I'm sure they will feel relieved to know your opinion." Like wtf is that? I had to make sure I was talking to the right guy. This guy definitely doesn't talk like that in person. It's not my opinion, and I highly doubt anyone was worried to the point they'd feel relief at my clarification.
replies(1): >>44987331 #
5. ivanjermakov ◴[] No.44987331[source]
The fun part is that it's immediately obvious to anyone who has worked with LLMs. I wonder what future "enhancements" big tech will come up with to make slop speech less robotic/recognizable.

And it's unfortunate that many people will start assuming long texts are generated by default. Related XKCD: https://xkcd.com/3126/

6. chasd00 ◴[] No.44988112[source]
> I would ask them for an apple pie recipe and report to HR

I do this sometimes, except I reply asking them to rephrase their comment in the form of a poem. Then I screenshot the response and add it as an attachment before the actual human deletes the comment.