
178 points themgt | 1 comment | | HN request time: 0.204s | source
andy99 No.45777552
This was posted from another source yesterday. Like similar work, it anthropomorphizes ML models and describes an interesting behaviour, but (because we literally know how LLMs work) nothing related to consciousness, sentience, or thought.

My comment from yesterday - the questions might be answered in the current article: https://news.ycombinator.com/item?id=45765026

replies(3): >>45777598 #>>45780130 #>>45785998 #
1. DennisP No.45785998
Toward the end, they actually say it has nothing to do with consciousness. They do say it might help make models more transparent and reliable.