
277 points | simianwords
farceSpherule:
I wish they would come up with a better term. Computers do not have brains or conscientiousness.

They erroneously construct responses (i.e., confabulation).

ACCount37:
You should anthropomorphize LLMs more. Anthropomorphizing LLMs is at least directionally correct 9 times out of 10.

LLMs, in a very real way, have "conscientiousness". As in: it's a property that can be measured and affected by training, and also the kind of abstract concept that an LLM can recognize and act on.

If you can just train an LLM to be "more evil", you can almost certainly train an LLM to be "more conscientious" or "less conscientious".
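A minimal sketch of what "measured and affected" could look like in practice, using the activation-steering approach: estimate a trait direction from contrasting prompts, then add it to the residual stream at inference time. This assumes GPT-2 via HuggingFace transformers purely for illustration; the layer index, the contrast prompts, and the scale alpha are hypothetical choices, not anything specified in the thread.

    # Sketch: estimate a "conscientiousness-like" direction as the difference
    # of mean residual-stream activations on contrasting prompts, then add it
    # back during generation to steer the model. Layer, prompts, and alpha are
    # all hypothetical choices for illustration.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    LAYER = 6  # hypothetical: a middle block of GPT-2's 12

    def mean_activation(prompts):
        # Collect the block's output hidden states, averaged over tokens.
        acts = []
        def hook(_mod, _inp, out):
            acts.append(out[0].mean(dim=(0, 1)))  # GPT-2 blocks return a tuple
        handle = model.transformer.h[LAYER].register_forward_hook(hook)
        with torch.no_grad():
            for p in prompts:
                model(**tok(p, return_tensors="pt"))
        handle.remove()
        return torch.stack(acts).mean(dim=0)

    # Hypothetical contrast set for the trait.
    pos = ["I double-check every detail before I answer.",
           "I follow the instructions carefully, step by step."]
    neg = ["I guess quickly and move on.",
           "I skip the details and wing it."]
    trait = mean_activation(pos) - mean_activation(neg)

    def steer(_mod, _inp, out, alpha=4.0):
        # Returning a value from a forward hook replaces the module's output.
        return (out[0] + alpha * trait,) + out[1:]

    handle = model.transformer.h[LAYER].register_forward_hook(steer)
    ids = tok("My approach to this task is", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=25)[0]))
    handle.remove()

Flipping alpha negative pushes the other way, which is the "less conscientious" direction the comment gestures at; fine-tuning on trait-labeled data would bake the same shift into the weights rather than applying it at inference.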

patrickmay:
> You should anthropomorphize LLMs more.

No, you shouldn't. They hate that.