Stop trying to treat these things as more than they are. Stop trying to be clever. These models are the single most complex things ever created by humans: the culmination of decades of research, trillions in capex, and countless hours from thousands of people smarter than you and me. You will not meaningfully add to their capabilities with some hacked-together reasoning workflows. Work within the confines of what they can actually do; anything else is complete delusion.
This is a nonsensical opinion by a person who doesn't know what they're talking about, and probably didn't read the article.
These models are tools, and LLM products bundle these tools with other tools; 90% of UX amounts to bundling them well. The article here gives a great sense of what that takes.
The AI bundling problem won't matter for long, and neither will the user interface problem. You won't need a UI for your apps in a few years; agents are going to drive _EVERYTHING_. If you want a display for some data, the agent will slap together a dashboard on the fly from a composable UI library that's easy to work with, all hot-loaded and live-revised based on your needs.
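To make that concrete, here's a purely hypothetical sketch of what "the agent slaps together a dashboard" could mean. Nothing below is a real library; the `Widget` spec and `renderToHtml` are invented for illustration. The idea is that the agent emits a declarative spec and a thin client interprets it, so "live revision" is just the agent emitting a new spec.

```typescript
// Hypothetical: an agent returns a declarative UI spec instead of anyone
// hand-building screens. The spec shape and renderer are invented for this
// sketch; no real library is implied.

type Widget =
  | { kind: "metric"; label: string; value: string }
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "row"; children: Widget[] };

// What an agent might emit after being asked "show me signups this week".
const spec: Widget = {
  kind: "row",
  children: [
    { kind: "metric", label: "Signups (7d)", value: "1,204" },
    {
      kind: "table",
      columns: ["Day", "Signups"],
      rows: [["Mon", "162"], ["Tue", "198"]],
    },
  ],
};

// Tiny interpreter: walks the spec and emits HTML. A real client would map
// spec nodes onto a component library and hot-swap the tree as specs change.
function renderToHtml(w: Widget): string {
  switch (w.kind) {
    case "metric":
      return `<div class="metric"><b>${w.label}</b>: ${w.value}</div>`;
    case "table": {
      const head = w.columns.map((c) => `<th>${c}</th>`).join("");
      const body = w.rows
        .map((r) => `<tr>${r.map((c) => `<td>${c}</td>`).join("")}</tr>`)
        .join("");
      return `<table><tr>${head}</tr>${body}</table>`;
    }
    case "row":
      return `<div class="row">${w.children.map(renderToHtml).join("")}</div>`;
  }
}

console.log(renderToHtml(spec));
```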
You must be an easy person to market to.
I use agents to do so much stuff on my computer. MCP servers are easy to roll, so you can give agents whatever powers you want, and being able to just direct agents to do stuff on my computer via voice is amazing. The direct driving still sucks, so they're not a general UI yet, and the models need to be a bit more consistent/smarter in general, but it'll be there very soon.
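For anyone wondering what "easy to roll" means: this is roughly what a minimal MCP server looks like with the official TypeScript SDK (`@modelcontextprotocol/sdk`), as I understand its API. The server name and the `file_head` tool are arbitrary examples of the powers you can hand an agent.

```typescript
// Minimal MCP server sketch: one stdio server exposing a single tool.
// Uses @modelcontextprotocol/sdk and zod (ESM, so top-level await is fine).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { z } from "zod";

const server = new McpServer({ name: "local-powers", version: "0.1.0" });

// Each tool is a named, schema-validated capability the agent can call.
server.tool(
  "file_head",
  { path: z.string(), lines: z.number().int().positive().default(20) },
  async ({ path, lines }) => {
    const text = await readFile(path, "utf8");
    const head = text.split("\n").slice(0, lines).join("\n");
    return { content: [{ type: "text", text: head }] };
  }
);

// Agent hosts (Claude Desktop, etc.) spawn this process and speak JSON-RPC
// with it over stdin/stdout.
await server.connect(new StdioServerTransport());
```

Point your agent host's server config at this script and the tool shows up alongside everything else the model can call.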
What do you do with agents?
I use them as an intelligence layer over disk cleanup tools and to manage deployments and cloud configs; I have big repo-organization workflows; they manage my KDE system settings; I use them as editors on documents all over my filesystem (to add comments for revision, not to rewrite; that isn't consistent enough yet); and I use them to do deep research on topics and save reports, and to look at my Google Analytics and SEO data and suggest changes to my pages. Frankly, if I had my druthers I wouldn't use a mouse at all: the agent would use visual tracking (eye and hand) along with words and body language to quickly figure out what I want.
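As an example of the "intelligence layer over disk cleanup" pattern, here's a sketch (assuming a Unix `du` on PATH; `largestEntries` is a name I made up). The script does the mechanical ranking and hands the agent a compact summary; the agent, not the script, makes the judgment calls about what's safe to delete.

```typescript
// Sketch: rank everything under a directory by size and print a compact
// listing an agent can reason over. Assumes a Unix `du` on PATH.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

interface DiskEntry {
  kibibytes: number;
  path: string;
}

// Rank every file and directory under `dir` by size, keep the top `limit`.
async function largestEntries(dir: string, limit = 15): Promise<DiskEntry[]> {
  // `du -ak` prints "<size-in-KiB>\t<path>" for every file and directory.
  const { stdout } = await run("du", ["-ak", dir], {
    maxBuffer: 64 * 1024 * 1024,
  });
  return stdout
    .trimEnd()
    .split("\n")
    .map((line) => {
      const tab = line.indexOf("\t");
      return { kibibytes: Number(line.slice(0, tab)), path: line.slice(tab + 1) };
    })
    .sort((a, b) => b.kibibytes - a.kibibytes)
    .slice(0, limit);
}

// Emit the ranking as plain text sized for a model's context window.
async function main() {
  const entries = await largestEntries(process.argv[2] ?? ".");
  for (const e of entries) {
    console.log(`${(e.kibibytes / 1024).toFixed(1)} MiB\t${e.path}`);
  }
}

main().catch(console.error);
```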