Microsoft already tried this in Office, when menu items reordered themselves based on usage frequency. People hated it.
Is "learning" now a synonym of "friction" in the product and design world? I gather this from many modern thinkpieces. If I am wrong, I would like to see an example of this kind of UI that actually feels both learnable and seamless. Clarity, predictability, learnability, reliability, interoperability, are all sacrificed on this altar.
> The explosive popularity of AI code generation shows users crave more control and flexibility.
I don't see how this follows.
The chart with lines and circles is quite thought-leadershipful. I do not perceive meaning in it, however (lines are jagged/bad, circles are smooth/good?).
Personalized interfaces are bad. I don't want to configure anything, and I don't want anything automatically configured on my behalf. I want it to just work; that kind of design takes effort & there's no way around it.
Your UI should be clear and predictable. A chatbot should not be moving around the buttons. If I'm going to compare notes with my friend on how to use your software, all the buttons need to be in the same place. People hate UI redesigns for a reason: Once they've learned how to use your software, they don't want to re-learn. A product that constantly redesigns itself at the whims of an inscrutable chatbot which thinks it knows what you want is the worst of all possible products.
ALSO: Egregiously written article. I assume it's made by an LLM.
I do a lot of CAD. Every single keyboard shortcut I know was learned only because I needed to do something that was either *highly repetitive* or *highly frustrating*, leading me to dig into Google and find the fast way to do it.
However, everything that is only moderately repetitive/frustrating and below is still being done the simple way. And I've used these programs for years.
I have always dreamed of user interfaces having competent, contextual user tutorials that space out learning about advanced and useful features over the entire duration that you use them. Video games do this well, having long since replaced singular "tutorial sections" with a stepped gameplay-mechanic rollout that gradually teaches people incredibly complex game mechanics over time.
A simple example to counter the auto-configuration interpretation most of the other commenters are thinking of. In a toolbar dropdown, highlight all the features I already know how to use regularly. When you detect me trying to learn a new feature, help me find it, highlight it in a "currently learning" color, and slowly change the highlight color to "learned" in proportion to my muscle memory.
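Sketched out, the tracking could be as simple as this (the type names and thresholds are made up for illustration, not anything a real product does):

```typescript
// Hypothetical sketch: track per-feature usage and derive a highlight state.
type HighlightState = "unknown" | "learning" | "learned";

interface FeatureStats {
  uses: number;     // how many times the user invoked the feature
  lastUsed: number; // epoch millis of the most recent use
}

const stats = new Map<string, FeatureStats>();

function recordUse(featureId: string): void {
  const s = stats.get(featureId) ?? { uses: 0, lastUsed: 0 };
  s.uses += 1;
  s.lastUsed = Date.now();
  stats.set(featureId, s);
}

// Map usage counts onto the "currently learning" -> "learned" gradient.
function highlightFor(featureId: string): HighlightState {
  const s = stats.get(featureId);
  if (!s || s.uses === 0) return "unknown";
  return s.uses >= 20 ? "learned" : "learning"; // arbitrary cut-off
}
```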
Every. Single. Time. I spend at least the first 3 hours relearning how to use all the tools again with Claude reminding me where modifiers are, and which modifier allows what. And which hotkey slices. Etc etc.
MS Paint opens.
"No, Copilot! Make me a drawing app with feature X."
Fans turn on. Laptop gets hot. Ten minutes pass as the entire MS Paint codebase is downloaded and recompiled.
Finally, MS Paint opens. There's an extra button in the toolbar. It doesn't work.
The solution could be search. It's not a House of Leaves.
Nothing "just works" for everyone. You are a product of your environment, people say apple interfaces/OSX are intuitive, I found them utterly unusable until I was forced to spend a lot of time to learn them.
Depending on which software you grew up using, you either find it intuitive or don't. If you found someone that has never used technology, no modern UI would be intuitive.
Personally, I hate it when software that I have to use daily is not configurable (and ideally extensible via programming). It's basically designed for the lowest common denominator of some group of users, according to what the product/design teams have decided is "intuitive".
> People hate UI redesigns for a reason...
I do agree here: stop changing things for the sake of changing things. When I owned some internal tools, I would go out of my way not to break user workflows. Even for minor things like tab order, which I think most people don't think about, I had browser automation tests to make sure it remained consistent.
Most UIs are fundamentally dumbed down; they're only good for repetitive tasks.
If you're doing any task that is non-repetitive enough that the UI needs to change, what you really need, or would like, is an "assistant" you can talk things through with, get feedback from, and then do the thing. Up until very recently, that assistant probably had to be human, but, perhaps obviously, people are now working quite a bit on the virtual one.
On-the-job-training, honestly; like we've been doing for decades, restated as:
Employer-mandated training in ${Product} competence: consisting of a "proper" guided introduction to the advanced and undiscovered features of a product, combined with a proficiency examination where the end-user must demonstrate both understanding a feature and actually using it.
(With the obvious caveat that you'll probably want to cut off Internet access during the exam part, to avoid people delegating their thinking to an LLM again, or mindlessly following someone else's instructions in general.)
My pet example is when ("normal") people use MS Word without understanding how defined styles work, and instead treat everything in Word as a very literal 1:1 WYSIWYG. So to "make a heading" they'll select a line of text, then manually set the font, size, and alignment (bonus points if they think underlining text for emphasis is ever appropriate typography (it isn't)), and they probably think there's nothing more to learn. I'll bet that someone like that is never going to explore and understand the Styles system of their own volition (they're there to do a job, not to spontaneously decide they want to learn Word inside out, even on company time).
Separately, there are things like the "onboarding popups" you see in web applications these days, where users are prompted to learn about new and underused features. But I feel they're ineffective, or user-hostile, because those popups only appear when users are trying to use the software for something else, so they'll ignore or dismiss them, never to be seen again.
> By corollary, how do you turn more casual users into power users?
Unfortunately for our purposes, autism isn't transmissible.
UX and UI take work, and it's mostly work getting back to simplicity: things like "think more like a user and less like your organisation" in terms of naming conventions and structures, or making sure that content works harder than navigation in orienting users. I don't think there's any sort of quick fix here; it's hard to get it right.
Simplicity is surprisingly complex :-)
The user needs to be able to discover the capabilities and limitations of the system they are using.
For most practical examples I can think of, this approach would complicate that, if not make it nearly impossible.
Secondly, the concrete example is not generative UI, it’s just generated data getting put into a schema.
I think the hard part of design is that you must consider the trade off between a new user and a power user. Overwhelm against progressive disclosure. It’s an art form in and of itself.
Is an AI-driven feed not a changing UI? Feeds are incredibly successful, yet the buttons change on every refresh.
UIs do not need to be static. The key is that there is a coherent pattern to what's changing.
When you look at it through that lens it doesn't seem so exotic.
But there is a possible world where you can have both - every 'feature' your users would ever want without overwhelming complexity or steep learning curves, but with the possible downside/cost of reducing consistency.
Consider building your own Blender-like software. If you know nothing about 3D, you start off in your own language, and the LLM will happily produce UI for your level of understanding, which is limited. Over time, as your understanding grows, the UI you end up with will look just like the software you were trying to replicate.
Currently the ecosystem around UI changes so much because it's always been a solved problem that people just keep reinventing to have... something to do, I guess?
This has nothing to do with laziness or attention span. 20 years ago you'd have maybe a dozen programs tops to juggle, and they were much better designed, because they were made by people who actually used the software, not by some bored designer at a FAANG sweatshop who operates on metrics. Now you have 3-5 chat clients and 20 different web UIs for everything on 3 different OSes, all with different shortcuts. And on top of that it CONSTANTLY changes (looking at you, Android and Material 3).
Five things deserve knowing in depth: your browser (not a specific website, but the browser itself), your text editor, a spreadsheet application, the terminal, and whatever other software you use to put bread on your table.
For any VCs who seriously think I'll invest a non-trivial amount of time into learning their unique™ software: you're delusional. Provide shortcuts that resemble browser/vim/emacs/vscode and don't waste your time and mine.
Take Claude Code - after I've described my requirement it gives me a customised UI that asks me to make choices specific to what I have asked it to build (usually a series of dropdown lists of 3-4 options). How would a static UI do that in a way that was as seamless?
The example used in the article is a bit more specific but fair - if you want to calculate the financial implications of a house purchase in the 'old software paradigm' you probably have to start by learning excel and building a spreadsheet (or using a dodgy online calculator someone else built, which doesn't match your use case). The spreadsheet the average user writes might be a little simplified - are we positive that they included stamp duty and got the compounding interest right? Wouldn't it be great if Excel could just give you a perfectly personalised calculator, with toggle switches, without users needing to learn =P(1+(k/m))^(mn) but while still clearly showing how everything is calculated? Maybe Excel doesn't need to be a tool which is scary - it can be something everyone can use to help make better decisions regardless of skill level.
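For what it's worth, that scary formula is just compound growth. A minimal sketch, with variable names mirroring the formula (deliberately leaving out mortgage specifics like stamp duty):

```typescript
// Compound amount: A = P * (1 + k/m)^(m*n)
// P = principal, k = annual interest rate, m = compounding periods per year,
// n = number of years. Illustrative only; real mortgage maths adds fees etc.
function compoundAmount(P: number, k: number, m: number, n: number): number {
  return P * Math.pow(1 + k / m, m * n);
}

// e.g. 500,000 at 5% compounded monthly for 25 years:
console.log(compoundAmount(500_000, 0.05, 12, 25)); // ≈ 1,740,655
```

A generated calculator can expose these inputs as labelled toggles and sliders while still showing exactly this calculation underneath.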
So yes, if you think of software only doing what it has done in the past, Gen UI does not make sense. If you think of software doing things it has never done before we need to think of new interaction modes (because hopefully we can do something better than just a text chat interface?).
[0] - https://kenobi.ai
B2B is a lot more rewarding in this sense. When you've found your power user, any piece of feedback is useful. If it's good enough for them, the rest typically follows.
This also keeps my motivation when developing UI, because I know someone else cares.
Businesses forgot about this, and I ended up in a job where I just do whatever my PM says.
Back in the early-to-mid '90s, Apple Computer and IBM (and, I seem to remember, some other tech nonsense peddlers) formed a joint venture (I'm not looking this up, all from memory), with a name something like Talagent; forgettable.
But their product was supposedly this uber-duper new non-language that was going to completely take over software development. Named "Script-X", it starts with each programmer defining the language they want to use, the syntax and whatnot of the language itself, and then they work in blissful joy writing code in the style they prefer.
I cannot believe they actually managed to create a joint venture, mount a huge industry PR campaign, and start selling this utter shite without thinking this pure idiocy through…
No two programmers working on the same project could read one another's code. The developers spent a large amount of time changing their minds on the specifics of the "optimal language" they wanted, which made previous work incompatible with whatever new language and mental model that programmer had chosen. Not a single project using their Script-X shipped; it was a total and complete failure.
I was at Philips Media while this was taking place, and, being a little software-language author myself, I watched this play out with dismay that these participants could be so short-sighted.
You enter a term, and depending on what you entered, you get a very different UI.
"best sled for toddler" -> search modifiers (wood, under $20, toboggan, etc.), search options, pictures of sleds for sale from different retailers, related products, and reviews.
"what's a toboggan" -> AI overview, Wikipedia summary, People Also Ask section, and a block of short videos on toboggans.
"directions to mt. trashmore" -> customized map of my current location to Mt. Trashmore (my local sledding hill)
Google has spent an immense amount of time and effort identifying the underlying intent behind all kinds of different searches, and it shows very different "UI" for each in a way that makes a very fluid kind of sense to users.
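A toy version of that routing, just to make the pattern concrete (the intent names and keyword matching here are stand-ins; Google's real classification is vastly more sophisticated):

```typescript
// Toy sketch of intent -> UI routing; keyword matching stands in for a
// real query classifier.
type Intent = "shopping" | "definition" | "navigation" | "general";

function classify(query: string): Intent {
  if (/\b(best|under \$\d+|buy|for sale)\b/i.test(query)) return "shopping";
  if (/^(what's|what is|define)\b/i.test(query)) return "definition";
  if (/^(directions to|how do i get to)\b/i.test(query)) return "navigation";
  return "general";
}

function uiFor(intent: Intent): string[] {
  switch (intent) {
    case "shopping":   return ["filters", "product-grid", "reviews"];
    case "definition": return ["ai-overview", "wiki-summary", "videos"];
    case "navigation": return ["map", "route", "travel-time"];
    case "general":    return ["links"];
  }
}

console.log(uiFor(classify("best sled for toddler"))); // filters, product-grid, reviews
```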
I know that personally I hit buggy forms and UIs way more than I should, preventing me from proceeding.
So I think there is an opportunity to instead have n permutations in natural language, where the interface is consistent with how the user inputs; it will just be up to developers to support some UI for confirmation and for structuring more complex input within chat itself. The biggest issue will become discovery of what you can and cannot do without stationary UIs hinting at capabilities.
Anyways, we are in new territory, so it will be interesting to see how this plays out. I like to think of it as on-demand UI, but I'm curious how others are toying with this paradigm.
We are testing a mostly display-only interface for output, where the majority of input comes in from chat and chat UI components, just to see how this would work in practice.
Microsoft tried hiding less commonly-used menu options a decade or so ago with Office, and it was so terrible they abandoned it, only to try the same approach with the Windows 11 Explorer menu.
I absolutely hate that rigid "Basic" vs. "Advanced" distinction, but one of our image processing UIs was so complicated a customer really pressed us to add that. We tried and tried and couldn't come up with something better, so we settled on an approach that I still feel is suboptimal.
So I welcome seeing what AI/LLMs may be able to contribute to the UI design space, and customizing per user based on their usage seems like an interesting experiment. But at the same time I'm skeptical that AI will really excel in this very human and subjective area.
Besides, HCI will inevitably change because after 30 years of incremental user interface refinement, your average person still struggles to use Excel or Photoshop, but those same users are asking ChatGPT to help them write formulas or asking Gemini to help edit their photos.
I don't accept the premise that the interfaces were ever actually that good. For simple apps users can get around fine, but for anything moderately complex users have mostly struggled or needed training, IMO. Blender, as an example, is an amazing piece of software, but ask any user who has never used it before to draw and render a bird without referring to the documentation (they won't be able to). If we want users to be able to use software like Blender without needing to formally train them, then we need a totally different approach (which would be great, as I suspect artistic ability and the technical ability to use Blender are not necessarily correlated that strongly).
That's how I think software will work in the future. I'm not suggesting that the UI should be completely different on every render. Some predictability is essential. That's one reason I don't think codegen on every render is worthwhile. I'm simply suggesting that software should look different from user to user based on their individual needs.
When I'm having trouble with software, I often turn to Google to figure out how to use it. I'm then directed to a YouTube video, help article, or blog post with instructions.
My take is that people are already accustomed to this question-and-answer model. They're just not used to finding it within the application itself.
AI can give you a smart starting point (like a pre-filled formula) instead of making you start from scratch, and then you dig deeper only if you need to. That's what I mean by personalized software.
You get something that actually works right away, and you can tweak it from there. No need to watch YouTube tutorials or read docs just to figure out where to start.
These apps will need lots of documentation, but instead of having to "search" for the section you need, it would just be exposed as you need it.
"How do I do X?"
You do X by filling out this form: [FORM]
I'm not sure I see the problem here?
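Something like this, where each documented task resolves directly to the form that performs it (the registry and all the names here are invented for illustration):

```typescript
// Invented illustration: map documented tasks to the form that performs them,
// so "How do I do X?" returns the form itself instead of a help article.
interface TaskEntry {
  keywords: string[];
  formId: string;  // id of the form component to render
  docUrl: string;  // fallback documentation link
}

const registry: TaskEntry[] = [
  { keywords: ["export", "csv"], formId: "export-form", docUrl: "/docs/export" },
  { keywords: ["invite", "user"], formId: "invite-form", docUrl: "/docs/invites" },
];

function answer(question: string): string {
  const q = question.toLowerCase();
  const hit = registry.find(e => e.keywords.some(k => q.includes(k)));
  return hit
    ? `You do that by filling out this form: [${hit.formId}]`
    : "Sorry, I couldn't find a matching task in the docs.";
}

console.log(answer("How do I export my data to CSV?")); // -> export-form
```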
We will need a new set of tools to help developers with this.
Just like a whole set of web analytics, logging, feedback tools were built when we moved to the web.
Before, you could just gather feedback through your employees' interactions, or by watching customers in your store.
I don't see this as categorically different, but the tools do not exist today.
- Reduced information density
- Abuse of paradigms we learned in the past (e.g. blue button to accept, links to decline) to make us accept harmful settings
- Notifications for just about everything
- Popups inside the app
- Buttons for functionality that's not included in your subscription, which show a nagscreen once you click on them
- UI upgrades which add nothing (see all of the above)
- Removal of agency: I don't want the upgrade, nor do I want to be reminded in 3 days
A changing UI sounds like absolute hell to me. When I was growing up I broke my Windows installation a lot by changing settings. That's how I learned.

When users get stuck in an app today, they have three options: 1) clicking through menus, 2) reading docs/watching tutorials, 3) getting hands-on help from a coworker or support person.
Some apps try to do progressive disclosure as you get better at using them, but that's really hard to scale. Works okay for simpler apps but breaks down as complexity grows.
With generative UI, I think you're basically building option 3 directly into the app.
Users learn to just ask the app how to do something or describe their problem, and it surfaces the right tools or configures things for them.
Still early days though. I think users will also have to adopt new behaviors to get the most out of generative apps.
What I'm arguing for isn't chat-only interfaces. It's giving users both options: use the UI directly for quick changes you already know how to make, and chat when you don't know where to find something or have a complex multi-step task.
Different users will prefer different methods for different tasks. The goal is software that works how the user wants it to work, not just optimized for one interaction style.
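One way to picture "both options" is a single action layer that the buttons and the chat agent both dispatch into. A hypothetical sketch (none of these names come from any real product):

```typescript
// Hypothetical: buttons and the chat agent share one action registry,
// so chat is an alternate entry point rather than a replacement for the UI.
type Action = (args: Record<string, string>) => void;

const actions = new Map<string, Action>([
  ["setTheme", args => console.log(`theme -> ${args.value}`)],
  ["exportData", args => console.log(`exporting as ${args.format}`)],
]);

// Direct manipulation: a button click invokes the action immediately.
function onThemeButtonClick(): void {
  actions.get("setTheme")!({ value: "dark" });
}

// Chat path: free text resolves to the same action and arguments.
// (Keyword matching stands in for a real LLM tool call.)
function handleChat(utterance: string): void {
  if (/dark mode/i.test(utterance)) actions.get("setTheme")!({ value: "dark" });
  else if (/export/i.test(utterance)) actions.get("exportData")!({ format: "csv" });
}

onThemeButtonClick();                 // the power user's one click
handleChat("turn on dark mode plz"); // the casual user's request, same result
```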
Reliance on remote LLMs is a real constraint right now.
I'm hoping for a class of generative apps that can be run entirely locally. I believe it will exist, just not right now.
I'm also not convinced this makes traditional methods (walkthroughs, support videos, trainings, etc.) impossible.
My friend just discovered coding agents (lol), and he's constantly finding new things they can do for him...
"Oh it can ssh into my raspberrypi and run the code to test it. Wow"
That was an emergent property of the CLI coding agent that had no "traditional" discoverability.
As an example: I was searching for an item to purchase earlier. It's a very particular design; I already know that it's going to send me a bunch of slightly-wrong knockoffs. The first thing I want to see is all of the images that are labeled like my query, as many as possible at once, so that I can pick through them. Instead, it shows me the shopping UI, which fills the screen with pricing and other information for a bunch of things that I'm definitely not going to buy, because they're not what I'm looking for. Old Google would have had the images tab in a predictable place; I'd be on it without even thinking. Now? Yet another frustrating micro-experience with Nu-gle.
Photoshop has thousands of possible panel arrangements, yet users develop their own workflows.
The question isn't whether permutations exist; it's whether the system helps you find your optimal permutation faster. Do you think the problem is unpredictability itself, or the lack of a predictable meta-pattern for how changes occur?
I'm suggesting you don't generally need menus. The pattern is closer to search like Apple Spotlight or Chrome New Tab... I think most people do that now instead of bookmarks or clicking through menus? Am I wrong?
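A bare-bones version of that pattern, with substring matching standing in for real fuzzy search and ranking:

```typescript
// Minimal command-palette sketch: every command is searchable by name,
// so there's no fixed menu hierarchy to memorize.
interface Command {
  name: string;
  run: () => void;
}

const commands: Command[] = [
  { name: "Open file",        run: () => console.log("opening...") },
  { name: "Toggle dark mode", run: () => console.log("toggled") },
  { name: "Export as PDF",    run: () => console.log("exporting") },
];

function palette(query: string): Command[] {
  const q = query.toLowerCase();
  return commands.filter(c => c.name.toLowerCase().includes(q));
}

palette("dark")[0]?.run(); // "toggled"
```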
Excel is neither simple nor explicit, yet it's the most successful end-user programming tool ever made.
Could generative UI be a path to making powerful tools feel simple by hiding complexity until it's needed, rather than dumbing down the tool itself?
Doesn't that suggest the "curriculum" has to be personalized? And if it's personalized, aren't we back to something generative?
I work on an internal app for an insurance company that allows viewing and editing insurance product configuration data. Stuff like what coverages we offer, what limits and deductibles apply to those, etc. We have built out a very very detailed data model to spell out the insurance contract fully. It has over 20 distinct top-level components comprising an "insurance product". The data generated is then used to populate quoting apps with applicable selections, tie claims to coverage selections, and more.
Ultimately these individual components have a JSON representation, and the "power user" editor within our app is just a guided JSON editor providing intellisense and validation. For less technical users, we have a "visual editor" that is almost fully generated from our schema. I thought perhaps this article referred to something like that.

Since our initial release, a handful of new top-level components have been added to the schema to further define the insurance product details. For the most part, these have not required any additional coding to get a good experience in our "visual editor". The components for our visual editor are aligned to data types: displaying numbers, enums, arrays, arrays of arrays, etc., which any new schema objects are likely to be built from. That also applies to nested objects, i.e. limits are built from primitives, coverages are built from limits. Given user feedback we can make minor changes to the display, but it's been very convenient for us to have it dynamically rendered based on the schema itself.
The schema is also versioned and our approach ensures that the data can be viewed and edited regardless of schema version. When a user checks out a coverage to edit it, the associated schema version is retrieved, the subschema for coverages is retrieved, and a schema parser maps properties of the schema to the appropriate React editor components.
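In spirit, the mapping might look something like this (a drastically simplified sketch; the component names are placeholders, not our actual code):

```typescript
// Simplified sketch of schema-driven rendering: pick an editor component
// from the JSON Schema type, recursing into arrays and objects.
interface JsonSchema {
  type: "string" | "number" | "boolean" | "object" | "array";
  enum?: string[];
  properties?: Record<string, JsonSchema>;
  items?: JsonSchema;
}

function componentFor(schema: JsonSchema): string {
  if (schema.enum) return "EnumSelect";
  switch (schema.type) {
    case "string":  return "TextInput";
    case "number":  return "NumberInput";
    case "boolean": return "Checkbox";
    case "array":   return `ArrayEditor<${componentFor(schema.items!)}>`;
    case "object":  return "ObjectEditor"; // recurse per property in practice
  }
}

// e.g. a "limit" built from primitives:
const limit: JsonSchema = {
  type: "object",
  properties: {
    amount: { type: "number" },
    basis: { type: "string", enum: ["per-claim", "aggregate"] },
  },
};
console.log(componentFor(limit)); // "ObjectEditor"
```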
p.s. These patterns might be commonplace and I'm just ignorant of them. I'm a backend dev who joined a new team that was advertised as a backend gig, but quickly learned that the primary focus would be a React Typescript app, neither of which I had any professional experience with.
What are you using to modify the site for each person?
I can see how you personalize software with use, but how do you personalize a landing page before you have any user context?
The question is: Why expose it to the user if you can use an LLM to surface only the relevant information, contextualized to what they are doing?
We use this in our app, and it reads our docs to provide context when rendering the UI. I hope that most users never actually read our docs, and eventually learn to ask our app.
It can generally show the right UI, help them configure it, and use docs to ground it.
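As a sketch, the docs-grounding step can be as simple as retrieving the relevant doc chunks and handing them to the model along with the user's request (the function names and prompt shape here are assumptions, not our exact implementation):

```typescript
// Assumed shape of a docs-grounded UI request; retrieveDocs() and callLLM()
// are placeholders for whatever retrieval and model APIs the app uses.
async function renderUiFor(userRequest: string): Promise<string> {
  const docs = await retrieveDocs(userRequest); // top-k relevant doc chunks
  const prompt = [
    "You generate UI configurations for this app.",
    `Relevant documentation:\n${docs.join("\n---\n")}`,
    `User request: ${userRequest}`,
    "Respond with the id of the UI panel to show and its initial settings.",
  ].join("\n\n");
  return callLLM(prompt); // e.g. '{"panel": "export", "format": "csv"}'
}

declare function retrieveDocs(query: string): Promise<string[]>;
declare function callLLM(prompt: string): Promise<string>;
```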
What I mean is that 'old' software has focussed on giving users lots of tools to complete their tasks, and when we talk about UIs we are talking about how to arrange all those tools in a way that is easy for a user to navigate. Users usually have to learn about all the tools that are there, so they know which ones to use (and over time, the ones who learn about the existence of all the tools become 'power users').
But 'new' software might be more about completing users' tasks for them, and these higher-level tasks are the ones that are hard to define, because there are so many permutations of what a user might want to do. As the software helps users more, end-users might never need to know about the existence of all the tools, as they are abstracted away.
But my pessimism may be unfounded or based on ignorance. At some point AI will probably get better at these things as well, either with better LLMs or by augmenting LLMs with outboard spatial reasoning modules that they can interact with.
They just want to see a menu of available reports, and if the one they want isn't there, move on to a different way of doing what they need.