Conversations are error prone and noisy.
UI distills down the mode of interaction into something defined and well understood by both parties.
Humans have been able to speak to each other for a long time, but we fill out forms for anything formal.
If you look at many of the current innovations around working with LLMs and agents, they largely center on constraining and tracking context in a structured way. Emergent patterns for this will likely settle over time; for now I'm implementing my own approach, with hopefully good enough abstractions to allow future portability.
For sure! UIs are also most of the past and present way to interact with a computer, offline or online. Even Hacker News - which is mostly text - has some UI to vote, navigate, flag…
Imagine the mess of a text-field-only interface where you had to type "upvote the upper ActionHank message" or "open the third article's comments on the front page, the one that talks about On-demand UI generation…" and then press enter.
Don't get me wrong: LLMs are great, and it's fascinating to see experiments with them. Kudos to the author.