They want to throw LLMs at everything, even when it doesn't make sense. The same goes for the whole AI agent craze: https://medium.com/thoughts-on-machine-learning/langchains-s...
Even the one idea the article takes seriously is throwing unreliable LLMs at verification! If there's any place to use something that doesn't work most of the time, I guess that's it.
In these situations, I've been able to constrain the agent well enough that I haven't run into much of the issue you describe. Consistency is a feature.