
378 points | todsacerdoti | 1 comment
xnorswap No.44984684
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
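
To make that kind of mismatch concrete, here is a made-up sketch (foo and Frob are the placeholders from the paragraph above; everything else below is hypothetical, not from any real codebase): the config key and the subsystem it actually controls share no name, and only tribal knowledge connects them.

    # legacy_config.py (hypothetical example)
    # Tribal knowledge: the "foo" key has nothing to do with any "Foo" feature.
    # For historical reasons it is the level switch for the Frob subsystem.
    CONFIG = {
        "foo": 3,           # actually: Frob aggressiveness level, 0-5
        "frob_legacy": "",  # dead key, ignored for years
    }

    class Frob:
        """The subsystem actually controlled by CONFIG["foo"]."""

        def __init__(self, config):
            # Nothing in this class mentions "foo" except this one lookup,
            # so a reader skimming the code can easily miss the link.
            self.level = config.get("foo", 0)

        def run(self):
            return f"frobbing at level {self.level}"

    print(Frob(CONFIG).run())  # -> "frobbing at level 3"

Without that tribal knowledge, an assistant reading this cold tends to either invent a new frob-named setting or start writing a Foo feature from scratch, which is exactly the failure mode described above.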

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

BitwiseFool No.44984808
>"It didn't help that the LLM was confidently incorrect."

Has anyone else ever dealt with a somewhat charismatic know-it-all who knows just enough to give authoritative answers? LLM output often reminds me of such people.

SamBam No.44985184
That’s a great question — and one that highlights a subtle misconception about how LLMs actually work.

At first glance, it’s easy to compare them to a charismatic “know-it-all” who sounds confident while being only half-right. After all, both can produce fluent, authoritative-sounding answers that sometimes miss the mark. But here’s where the comparison falls short — and where LLMs really shine:

(...ok ok, I can't go on.)

ryandrake No.44985537
Most of the most charismatic, confident know-it-alls I have ever met have been in the tech industry. And not just the usual suspects (founders, managers, thought leaders, architects) but regular rank-and-file engineers. The whole industry is infested with know-it-alls. Hell, HN is infested with know-it-alls. So it's no surprise that one of the biggest products of the decade is an Automated Know-It-All machine.
flatb No.44986836
Thereby self-correcting, perhaps.
sigotirandolas No.44995598
I'd say the opposite: LLMs are know-it-nothing machines that perfectly suit know-it-alls. Unlike with a human, it isn't that hard to get the machine to say what you want, and then generate enough crap to 'defeat' any human challenger.