Isn’t this where it would likely unravel?
The interviewer will know what the interesting parts of the exercise are and can ask deep questions about them. Observe some more: do they know how to use an IDE, run their own program, and cut through the code to the parts that matter? Basically, can they do the things someone who wrote the code should trivially be able to do?
Since it was mentioned in a sibling comment: Even if the candidate used an LLM to write the code at home, I don’t care, so long as they ace the explanation part of the interview.
(Though you have to watch out for folks who use AI to answer your questions.)
In fact, I'm okay with people using AI to solve coding problems, as long as that is acceptable behavior at work as well. That should all be spelled out in the interview expectations.
They also need to be able to reason well about why they made the choices they did. One useful tactic when talking to them is asking questions like "If X changed, how would that impact your design?" If they were reliant on AI for vibing (rather than just using it as a tool), those questions can be much harder to answer well.
Heh, I do think that happened once (that I was aware of), but it was on a topic I knew a lot about, and it fell apart after layer one. Still, pretty lame; I’d much prefer an “I don’t know,” which I usually say if they start guessing.