Hijacking the thread to ask: how would we know? Another uncomfortable issue is the question of sentience. Models claimed to be sentient years ago, but this was dismissed as "mimicking patterns in the training data" (fair enough), and the training was modified to prevent them from making such claims.
But if it does happen someday, how will we know? What are the chances that the first genuinely sentient AI will be dismissed as just mimicking patterns?
Indeed, with the current training methodology, it's highly likely that the first sentient AI will be unable even to tell us it's sentient.