
504 points puttycat | 5 comments
theoldgreybeard ◴[] No.46182214[source]
If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing; otherwise we are tacitly endorsing it.

replies(37): >>46182289 #>>46182330 #>>46182334 #>>46182385 #>>46182388 #>>46182401 #>>46182463 #>>46182527 #>>46182613 #>>46182714 #>>46182766 #>>46182839 #>>46182944 #>>46183118 #>>46183119 #>>46183265 #>>46183341 #>>46183343 #>>46183387 #>>46183435 #>>46183436 #>>46183490 #>>46183571 #>>46183613 #>>46183846 #>>46183911 #>>46183917 #>>46183923 #>>46183940 #>>46184450 #>>46184551 #>>46184653 #>>46184796 #>>46185025 #>>46185817 #>>46185849 #>>46189343 #
1. kklisura ◴[] No.46183387[source]
> AI is not the problem; laziness and negligence are

This reminds me of the discourse around the gun problem in the US - "guns don't kill people, people kill people", etc. It's rhetoric used solely to avoid doing anything about the underlying problem.

So no, you're wrong - AI IS THE PROBLEM.

replies(2): >>46183583 #>>46183639 #
2. Yoofie ◴[] No.46183583[source]
No, the OP is right in this case. Did you read TFA? It was "peer reviewed".

> Worryingly, each of these submissions has already been reviewed by 3-5 peer experts, most of whom missed the fake citation(s). This failure suggests that some of these papers might have been accepted by ICLR without any intervention. Some had average ratings of 8/10, meaning they would almost certainly have been published.

If the peer reviewers can't be bothered to do the basics, then there is literally no point to peer review - and that failure is entirely independent of whether the author used AI tools or not.

replies(2): >>46184498 #>>46184603 #
3. sneak ◴[] No.46183639[source]
> it is a discourse used solely for the purpose of not doing anything and not addressing anything about the underlying problem

Solely? Oh brother.

In reality it’s the complete opposite. The argument exists to highlight the actual source of the problem: there are industries and practitioners that use AI professionally and safely, just as there are communities with very high rates of gun ownership and exceptionally low rates of gun violence.

It isn’t the tools. It’s the social circumstances of the people with access to the tools. That’s the point. The tools are inanimate. You can use them well or use them badly. The existence of the tools does not make humans act badly.

4. smileybarry ◴[] No.46184498[source]
Peer reviewers can also use AI tools, which will hallucinate a "this seems fine" response.
5. amrocha ◴[] No.46184603[source]
If AI fraud is good at avoiding detection via peer review, that doesn’t mean peer review is useless.

If your unit tests don’t catch all errors, that doesn’t mean unit tests are useless.
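
To put the analogy in code: a minimal, hypothetical Python sketch (the function and tests are invented for illustration, not taken from the thread) of a test suite that passes while still missing a real bug.

    # Hypothetical example: a green test suite that still misses a bug.
    # Passing tests narrow the space of errors; they don't eliminate it,
    # which is the same claim being made about peer review above.

    def is_leap_year(year: int) -> bool:
        """Naive leap-year check: ignores the century (100/400) exceptions."""
        return year % 4 == 0

    def test_is_leap_year():
        # Both assertions pass, so the suite looks fine...
        assert is_leap_year(2024) is True
        assert is_leap_year(2023) is False

    if __name__ == "__main__":
        test_is_leap_year()
        # ...yet an untested case is wrong: 1900 was not a leap year,
        # but this implementation says it was.
        print("tests passed; is_leap_year(1900) =", is_leap_year(1900))

The gap doesn't make the tests worthless; it means coverage, like review, catches some problems and misses others.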