
178 points by themgt | 1 comment
andy99:
This was posted from another source yesterday. Like similar work, it anthropomorphizes ML models: it describes an interesting behaviour, but (because we literally know how LLMs work) nothing related to consciousness, sentience, or thought.

My comment from yesterday - the questions might be answered in the current article: https://news.ycombinator.com/item?id=45765026

baq:
> we literally know how LLMs work

Yeah, in the same way we know how the brain works because we understand carbon chemistry.