
504 points puttycat | 1 comment
theoldgreybeard No.46182214
If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing, otherwise we are tacitly endorsing it.

Forgeties79 No.46182527
If my calculator gives me the wrong number 20% of the time, yeah, I should've identified the problem, but ideally it wouldn't have been sold to me as a functioning calculator in the first place.
theoldgreybeard No.46182712
If it were a well-understood property of calculators that they gave incorrect answers randomly, then you would need to adjust the way you use the tool accordingly.
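That adjustment has a concrete form: when a tool fails randomly and independently, redundancy recovers reliability. A toy sketch in Python, where `flaky_add` and the 20% error rate are illustrative stand-ins invented for this example, not anything from the thread:

```python
import random
from collections import Counter

random.seed(0)  # deterministic run for this illustration

def flaky_add(a, b, error_rate=0.2, rng=random):
    """Hypothetical calculator: silently returns a wrong sum ~20% of the time."""
    if rng.random() < error_rate:
        return a + b + rng.choice([-1, 1])  # off by one, with no error signal
    return a + b

def voted_add(a, b, trials=5):
    """Mitigation: repeat the unreliable call and keep the majority answer."""
    results = Counter(flaky_add(a, b) for _ in range(trials))
    return results.most_common(1)[0][0]

# With independent errors, 5-way voting fails far less often than a single call.
wrong_single = sum(flaky_add(2, 2) != 4 for _ in range(10_000))
wrong_voted = sum(voted_add(2, 2) != 4 for _ in range(10_000))
```

The catch, and arguably the thread's point of contention, is that this only works when errors are independent and detectable by disagreement; an LLM that confabulates the same plausible-looking citation on every retry defeats the scheme.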
Forgeties79 No.46187937
Generally I’d ditch that tool because it doesn’t work. A calculator is supposed to calculate. If it can’t reliably calculate, then it’s not a functioning tool and I am tired of people insisting it is functioning properly.

LLMs simply aren’t good enough for all the use cases some people insist they are. They’re powerful tools that have been far too broadly applied, and there’s too much money and too many reputations on the line to acknowledge the obvious limitations. Frankly, I’m sick of it.

I had somebody on HN a few months ago insist to me that, because we value art and fiction, LLMs being wrong when we need them to be correct (in ways that are also not always easy to identify) was desirable. I don’t even know what to do with that kind of logic other than chalk it up as trolling. I don’t want my computer to trick me into false solutions.