
421 points sohkamyung | 2 comments
roguecoder ◴[] No.45670387[source]
I am curious if LLM evangelists understand how off-putting it is when they knee-jerk rationalize how badly these tools are performing. It makes it seem like it isn't about technological capabilities: it is about a religious belief that "competence" is too much to ask of either them or their software tools.
replies(7): >>45670776 #>>45670799 #>>45670830 #>>45671500 #>>45671741 #>>45672916 #>>45673109 #
1. GolfPopper ◴[] No.45673109[source]
They've conned themselves with the LLMs they use, and are desperate to keep the con going: "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con"

https://softwarecrisis.dev/letters/llmentalist/

replies(1): >>45674584 #
2. tim333 ◴[] No.45674584[source]
I had a look at that and am not convinced.

>people are convinced that language models, or specifically chat-based language models, are intelligent... But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this...

and it says it must be a con, but then how come LLMs pass most of the exams designed to test humans, often scoring better than humans do?

And there are mechanisms, like transformer attention, that may do something like human intelligence.
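
For anyone unfamiliar with the mechanism being referred to, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. It assumes only numpy; the function names (attention, softmax), variable names (Q, K, V), and the toy shapes are illustrative, not taken from any particular model.

    # Minimal sketch of scaled dot-product attention, the core
    # transformer operation. All names and shapes here are
    # illustrative; this is not any specific model's code.
    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the max before exponentiating for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Each query is compared against every key; the scaled dot
        # products become weights used to mix the value vectors.
        d_k = Q.shape[-1]
        scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
        weights = softmax(scores, axis=-1)
        return weights @ V

    # Toy example: 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    out = attention(x, x, x)  # self-attention: tokens mix information
    print(out.shape)          # (4, 8)

Whether this kind of learned weighted averaging amounts to anything like human intelligence is, of course, exactly what the thread is arguing about.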