If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.
If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.
AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing, otherwise we are tacitly endorsing it.
Scientists who use LLMs to write papers are crappy scientists indeed. They need to be held accountable, even ostracised by the scientific community. But something is missing from the picture: why did they come up with this idea in the first place? Who has been peddling the impression (never an outright lie - they are very careful) that LLMs are these almost sentient systems with emergent intelligence that will alleviate all your problems, blah blah blah? Where is the god damn cure for cancer the LLMs were supposed to invent? Who else do we need to hold accountable, scrutinise and ostracise for the ever-increasing mountains of AI crap that is flooding not just Internet content but now also penetrating science, everyday work, daily lives, conversations, etc.? And if someone released a tool that, in multiple instances we know of by now, enabled and encouraged people to commit suicide - and we have known since the infamous "plandemic" Facebook trend that the tech bros are more than happy to tolerate worsening societal conditions in the name of platform growth - then who else do we need to hold accountable, scrutinise and ostracise as a society, I wonder?
> Where is the god damn cure for cancer the LLMs were supposed to invent?
Assuming "cure" is meant as hyperbole, how about https://www.biorxiv.org/content/10.1101/2025.04.14.648850v3 ?
AI models being used for bad purposes doesn't preclude them being used for good purposes.