This was posted from another source yesterday. Like similar work, it anthropomorphizes ML models: it describes an interesting behaviour, but (because we know exactly how LLMs work) nothing related to consciousness, sentience, or thought.
My comment from yesterday (the questions might be answered in the current article): https://news.ycombinator.com/item?id=45765026