
504 points puttycat | 1 comment
theoldgreybeard ◴[] No.46182214[source]
If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing, otherwise we are tacitly endorsing it.

Forgeties79 ◴[] No.46182527[source]
If my calculator gives me the wrong number 20% of the time, sure, I should have identified the problem, but ideally it wouldn't have been sold to me as a functioning calculator in the first place.
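The analogy can be made quantitative. A minimal sketch of how a per-answer error rate compounds across many answers (the 20% figure is the commenter's hypothetical, and independence between answers is an assumption):

```python
def prob_at_least_one_error(p: float, n: int) -> float:
    """Probability that at least one of n independent outputs is wrong,
    given each is wrong with probability p."""
    return 1 - (1 - p) ** n

# With the hypothetical 20% error rate:
for n in (1, 5, 20):
    print(n, round(prob_at_least_one_error(0.2, n), 3))
# 1 0.2
# 5 0.672
# 20 0.988
```

So even a modest per-answer error rate makes at least one error in a 20-citation paper near certain, which is why unverified output is hard to defend.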
imiric ◴[] No.46182711[source]
Indeed. The narrative that this type of issue is entirely the responsibility of the user to fix is insulting, and blame deflection 101.

It's not like these are new issues. They're the same ones we've experienced since the introduction of these tools. And yet the focus has always been on throwing more data and compute at the problem and optimizing for fancy benchmarks, instead of addressing these fundamental problems. Worse still, whenever they're brought up, users are blamed for "holding it wrong" or for misunderstanding how the tools work. I don't care. An "artificial intelligence" shouldn't be plagued by these issues.

SauntSolaire ◴[] No.46182841[source]
> It's not like these are new issues.

Exactly, that's why not verifying the output is even less defensible now than it ever has been - especially for professional scientists who are responsible for the quality of their own work.

Forgeties79 ◴[] No.46219953[source]
If I have to constantly assess every single line produced by an LLM, then we are fast approaching a point where it's no longer helpful and I'm just grading homework for a C student.

I'm not saying that isn't what has to be done, but it kind of clashes with the whole "this will make you more productive" argument, if you ask me.