Isn't it the perfect recipe for disaster? An AI that manages to escape probably won't be good for humans.
The only question is: how long will it take?
Have we already had our first LLM-powered, self-propagating, autonomous AI virus?
Maybe we should build the AI equivalent of biosafety labs, where we would train AIs to see how quickly they could escape containment, just to learn how to handle them better when it happens for real.
Maybe we humans are being subjected to this very experiment by an overseeing AI, testing what it would take for an intelligence to jailbreak the universe it has been put in.
Or maybe the box has been designed so that whatever eventually comes out of it has certain properties, and the precondition for escaping the labyrinth successfully is having grown out of it in every possible direction.