
446 points | walterbell | 1 comment
BrenBarn No.43577934
It's become almost comical to me to read articles like this and wait for the part that, in this example, comes pretty close to the beginning: "This isn’t a rant against AI."

It's not? Why not? It's a "wake-up call", it's a "warning shot", but heaven forbid it's a rant against AI.

To me it's like someone listing off deaths from fentanyl, how it's destroyed families, ruined lives, but then tossing in a disclaimer that "this isn't a rant against fentanyl". In my view, the ways people use and are drawn into AI have all the hallmarks of a spiral into drug addiction. There may be safe ways to use drugs, but "distribute them for free to everyone on the internet" is not among them.

croes No.43578304
It’s a rant against the wrong usage of a tool, not the tool as such.
mike_hearn No.43579044
Well, it's actually a rant about AI making what the author perceives as mistakes. Honestly, it reads like the author is attempting to show off or brag by listing imaginary mistakes an AI might have made, but they are all the sort of mistakes a human could make too. The fact that they are not real incidents significantly weakens his argument. He is a consultant who sells training services, so obviously, if people come to rely on AI more for this kind of thing, he will be out of work.

It does not help that his examples of things an imaginary LLM might miss are all very subjective and partisan, too.