Death by AI

(davebarry.substack.com)
583 points | ano-ther | 2 comments
arendtio ◴[] No.44623913[source]
I tend to think of LLMs more like 'thinking' than 'knowing'.

I mean, when you give an LLM good input, it seems to have a good chance of producing a good result. However, when you ask an LLM to retrieve facts, it often fails. And when you look at the inner workings of an LLM, that should not surprise us. After all, they are designed to apply logical relationships between input nodes. However, this is more akin to applying broad concepts than recalling detailed facts.

So if you want LLMs to succeed at their task, provide them with the knowledge they need for that task (or at least the tools to obtain the knowledge themselves).
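For example, a minimal sketch of what "provide the knowledge" can look like in practice. call_llm is a hypothetical stand-in for whatever chat API you use, and the retrieval step is deliberately naive keyword overlap, not a real vector search:

    def retrieve(question, documents, k=3):
        # Rank documents by crude word overlap with the question.
        q_words = set(question.lower().split())
        scored = sorted(documents,
                        key=lambda d: -len(q_words & set(d.lower().split())))
        return scored[:k]

    def answer_with_context(question, documents, call_llm):
        # Put the relevant facts into the prompt instead of relying on recall.
        context = "\n".join(retrieve(question, documents))
        prompt = ("Answer using only the context below.\n\n"
                  "Context:\n" + context + "\n\n"
                  "Question: " + question)
        return call_llm(prompt)

The point is just that the model reasons over facts you hand it, rather than being asked to dredge them up from its weights.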

replies(1): >>44627265 #
1. gtsop ◴[] No.44627265[source]
> more like 'thinking' than 'knowing'.

it's neither, really.

> After all, they are designed to apply logical relationships between input nodes

They are absolutely not. Unless you assert that logical === statistical (which it isn't).
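To make the "statistical" part concrete, here is a toy sketch of what the output layer does: it turns scores into a probability distribution over next tokens and samples from it. The vocabulary and logits are made up for illustration:

    import numpy as np

    vocab = ["Paris", "Lyon", "banana"]
    logits = np.array([4.0, 1.5, -2.0])            # hypothetical next-token scores

    probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities
    next_token = np.random.choice(vocab, p=probs)  # sampled, not deduced

    print(dict(zip(vocab, probs.round(3))), "->", next_token)

Nothing in that step is a logical inference; it is a weighted draw from a learned distribution.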

replies(1): >>44628620 #
2. arendtio ◴[] No.44628620[source]
So what is it (in your opinion)?

For clarification: yes, when I wrote 'logical,' I did not mean Boolean logic, but rather something like probabilistic/statistical logic.