
178 points themgt | 1 comment
xanderlewis No.45777174
Given that this is 'research' carried out (and seemingly published) by a company with a direct interest in selling you a product (or, rather, getting investors excited/panicked), can we trust it?
replies(6): >>45777183 #>>45777199 #>>45779098 #>>45779186 #>>45780472 #>>45781756 #
bobbylarrybobby No.45777199
Would knowing that Claude is maybe kinda sorta conscious lead more people to subscribe to it?

I think Anthropic genuinely cares about model welfare and wants to make sure they aren't spawning consciousness, torturing it, and then killing it.

replies(4): >>45777638 #>>45778064 #>>45779830 #>>45780094 #
DennisP No.45777638
This is just about seeing whether the model can accurately report on its internal reasoning process. If so, that could help make models more reliable.

They say it doesn't have that much to do with the kind of consciousness you're talking about:

> One distinction that is commonly made in the philosophical literature is the idea of “phenomenal consciousness,” referring to raw subjective experience, and “access consciousness,” the set of information that is available to the brain for use in reasoning, verbal report, and deliberate decision-making. Phenomenal consciousness is the form of consciousness most commonly considered relevant to moral status, and its relationship to access consciousness is a disputed philosophical question. Our experiments do not directly speak to the question of phenomenal consciousness. They could be interpreted to suggest a rudimentary form of access consciousness in language models. However, even this is unclear.

replies(2): >>45777853 #>>45778909 #
versteegen No.45778909
> They say it doesn't have that much to do with the kind of consciousness you're talking about

Not much, but it likely has something to do with it, so experiments on access consciousness can still bear on that question. You also seem to be implying something about their motivations that is clearly wrong: they've been saying for years that they do care about (phenomenal) consciousness, as bobbylarrybobby said.

replies(2): >>45780628 #>>45790404 #
walleeee No.45780628
On what grounds do you think it likely that this phenomenon is at all related to consciousness? The latter is hardly understood. We can identify its correlates in beings with constitutions very near to ours, which lends credence (but offers zero proof) to the claim that they're conscious.

Language models are a novel/alien form of algorithmic intelligence with scant relation to biological life, except in their use of language.