

1481 points sandslash | 11 comments
hgl ◴[] No.44315520[source]
It’s fascinating to think about what a true GUI for an LLM could be like.

It immediately makes me think of an LLM that can generate a customized GUI for the topic at hand, one you can interact with in a non-linear way.

replies(8): >>44315537 #>>44315540 #>>44315566 #>>44315572 #>>44315981 #>>44316784 #>>44316877 #>>44319465 #
1. jonny_eh ◴[] No.44315981[source]
An ever-shifting UI sounds unlearnable, and therefore unusable.
replies(4): >>44315991 #>>44316305 #>>44317803 #>>44318703 #
2. dang ◴[] No.44315991[source]
It wouldn't be unlearnable if it fits the way the user is already thinking.
replies(1): >>44316857 #
3. OtherShrezzing ◴[] No.44316305[source]
A mixed ever-shifting UI can be excellent, though: you've got some tools that interact with UI components consistently, while the UI itself changes frequently.

Take, for example, world-building video games like Cities Skylines / Sim City, or procedural sandboxes like Minecraft. There are 20-30 consistent buttons (tools) in the game's UX, while the rest of the game is an unbounded, ever-shifting UI.

replies(1): >>44318371 #
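The split described above — a small, stable set of tools acting on an unbounded, shifting world — can be sketched roughly as follows. This is a hypothetical illustration; the names (`World`, `TOOLS`, `build`, `demolish`) are invented and don't come from any real game API.

```python
# Sketch: the 20-30 consistent "buttons" live in a fixed registry,
# while the world state they act on is open-ended and ever-changing.
from dataclasses import dataclass, field


@dataclass
class World:
    # The shifting part: arbitrary content keyed by location.
    buildings: dict = field(default_factory=dict)


TOOLS = {}  # the stable part of the UI: tool name -> handler


def tool(name):
    """Register a handler under a fixed, learnable tool name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register


@tool("build")
def build(world, site, kind):
    world.buildings[site] = kind


@tool("demolish")
def demolish(world, site):
    world.buildings.pop(site, None)


world = World()
TOOLS["build"](world, "3rd & Main", "house")
TOOLS["build"](world, "5th & Oak", "shop")
TOOLS["demolish"](world, "3rd & Main")
```

The point of the design is that the tool registry never changes shape, so users keep their muscle memory even though the world the tools operate on is unbounded.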
4. guappa ◴[] No.44316857[source]
AI is not mind reading.
replies(2): >>44318817 #>>44321123 #
5. sotix ◴[] No.44317803[source]
Like Spotify, ugh.
6. skydhash ◴[] No.44318371[source]
The rest of the game is very deterministic: its state is controlled by the buttons. The slight variation comes from the simulation engine and follows consistent patterns (you can't have a building on fire if there's no building yet).
7. 9rx ◴[] No.44318703[source]
Tools like v0 are a primitive example of what the above is talking about. The UI maintains familiar conventions, but is laid out dynamically based on surrounding context. I'm sure there are still weird edge cases, but for the most part people have no trouble figuring out how to use the output of such tools already.
8. NitpickLawyer ◴[] No.44318817{3}[source]
A sufficiently advanced prediction engine is indistinguishable from mind reading :D
9. dang ◴[] No.44321123{3}[source]
Behavioral patterns are not unpredictable. Who knows how far an LLM could get by pattern-matching what a user is doing and generating a UI to make it easier. Since the user could immediately say whether they liked it or not, this could turn into a rapid and creative feedback loop.
replies(1): >>44325340 #
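The feedback loop described above — pattern-match what the user is doing, propose a UI change, and let the user immediately accept or reject it — can be sketched as below. Everything here is hypothetical: `propose_layout` is a stand-in for an LLM call, and the layout format is invented for illustration.

```python
# Sketch of a rapid UI-adaptation feedback loop.
def propose_layout(recent_actions, current_layout):
    """Stand-in for an LLM: promote the user's most-used action
    into the toolbar if it isn't there already."""
    most_used = max(set(recent_actions), key=recent_actions.count)
    if most_used not in current_layout["toolbar"]:
        return {**current_layout,
                "toolbar": current_layout["toolbar"] + [most_used]}
    return current_layout


def feedback_loop(layout, recent_actions, user_likes):
    """Propose a change, then keep it only if the user approves."""
    candidate = propose_layout(recent_actions, layout)
    return candidate if user_likes(candidate) else layout


layout = {"toolbar": ["save"]}
actions = ["export", "export", "save"]

# User accepts the change: "export" gets promoted into the toolbar.
accepted = feedback_loop(layout, actions, user_likes=lambda l: True)

# User rejects the change: the original layout is kept untouched.
rejected = feedback_loop(layout, actions, user_likes=lambda l: False)
```

The reject branch also addresses the objection downthread: a user who never approves changes simply keeps a static UI.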
10. kevinventullo ◴[] No.44325340{4}[source]
So, if the user likes UIs that don’t change, the LLM will figure out that it should do nothing?

One problem LLMs don’t fix is the misalignment between app developers’ incentives and users’ incentives. Since the developer controls the LLM, I imagine a “smart” shifting UI would quickly devolve into automated dark patterns.

replies(1): >>44332697 #
11. dang ◴[] No.44332697{5}[source]
A user who doesn't want such changes shouldn't be subjected to them in the first place, so there would be nothing for an LLM to figure out.

I'm with you on disliking dark patterns but it seems to me a separate issue.