If this can't work, the program abstraction is insufficient to the task. This insufficiency is not a surprise.
That an ordinary 5-year-old can make a sandwich after only ever seeing someone make one, and that the sandwich so made is a component within a life-sustaining matrix which inevitably leads to new 5-year-olds making their own sandwiches and serenading the world about the joys of peanut butter and jelly, is the crucial distinction between AI and intelligence.
The rest of the stuff about a Harvard professor ripping a hole in a bag and pouring jelly on a clump of bread on the floor is a kooky semantic game that reveals something about the limits of human intelligence among the academic elite.
We might wonder why some people have to get to university before encountering such a basic epistemological conundrum as what constitutes clarity in exposition... But maybe that's what teaching to the test in U.S. K-12 gets you.
Alan Kay is known for a riff on a simple study in which Harvard students were asked what causes the Earth's seasons: almost all of them gave the wrong explanation, and many were very confident about the correctness of their wrong explanations.
Given that the measure of every AI chat program's performance is how agreeable its response is to a human, is there a clear distinction between the human and the AI?
If this HN discussion were among AI chat programs considering their own situations and formulating an understanding of their own problems, maybe waxing about the (for them) ineffable joy of eating a peanut butter and jelly sandwich...
But it isn't.