This conviction doesn't seem to acknowledge the problem at scale. Decades of great UI development will still leave out edge cases that users need the tool to handle. This happens fundamentally because the people who need the tools are not the people who make them; they rarely even talk to each other (instead users are "studied" via analytics).
When /humans/ bring up the idea of integrating LLMs into UIs, I think most of the time the sentiment comes from legitimate frustration with how the UI is currently designed. To be clear, this is a very different thing than a company shimming copilot into the UI, because the way these companies use LLMs is by delegating tasks away from users rather than improving their existing interfaces so users can complete those tasks themselves. There are /decades/ of HCI research on adaptive interfaces that address this, dating back to the advent of expert systems, long before LLMs -- it's more relevant than ever, yet in most implementations it's all going out the window!
My experience with accounting ^H^H^H^H^H^H^H^H^H^H bookkeeping / LLMs in general resonates with this. In GnuCash I wanted to bulk re-organize some transactions, but I couldn't find a way to do it quickly through the UI. All the books are kept in a SQL db, and I didn't want to study the schema myself. So I experimented with getting the LLM to emit a python script that would make the appropriate manipulations to the DB. This seemed to take the best from all worlds -- the script was relatively straightforward to verify, and even though I used a closed-source model, it had no access to the DB that contained the transactions.
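To make the pattern concrete, here's a minimal sketch of the kind of script I mean -- bulk-moving splits from one account to another in a GnuCash SQLite book. The table and column names (`accounts`, `splits`, `account_guid`) reflect my understanding of GnuCash's SQL schema; verify them against your own book, and of course run this on a copy first:

```python
# Hedged sketch: reassign all splits from one account to another in a
# GnuCash SQLite book. Schema names are assumptions -- check them
# against your own database, and always work on a backup copy.
import sqlite3

def move_splits(db_path, from_account, to_account, dry_run=True):
    """Return the number of splits that would move; apply if dry_run=False."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    # Look up the GUIDs of both accounts by name.
    guids = {}
    for name in (from_account, to_account):
        row = cur.execute(
            "SELECT guid FROM accounts WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            conn.close()
            raise ValueError(f"no account named {name!r}")
        guids[name] = row[0]
    # Count affected splits first, so a dry run is informative.
    (n,) = cur.execute(
        "SELECT COUNT(*) FROM splits WHERE account_guid = ?",
        (guids[from_account],),
    ).fetchone()
    if not dry_run:
        cur.execute(
            "UPDATE splits SET account_guid = ? WHERE account_guid = ?",
            (guids[to_account], guids[from_account]),
        )
        conn.commit()
    conn.close()
    return n
```

The `dry_run` default is the point: the script reports what it would change before it changes anything, which is exactly what makes LLM-generated DB manipulation verifiable before you commit to it.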
Sure, other tools may have solved this problem directly. But again, the point isn't to expect someone to make a great tool for you, but to have a tool that helps you make it better for yourself. Given the verifiability, maybe this /is/ in fact one of the best applications for it.