
1479 points by sandslash | 1 comment
hgl
It’s fascinating to think about what a true GUI for an LLM could be like.

It immediately makes me think of an LLM that can generate a customized GUI for the topic at hand, one you can interact with in a non-linear way.

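A minimal sketch of what this could look like, assuming the model is prompted to return a declarative UI spec rather than prose. The `llm_complete` callback, the JSON shape, and the widget types are all hypothetical stand-ins for illustration, not any real API:

    import json
    from typing import Callable

    # Invented spec format: the model is told to answer with JSON only.
    SPEC_INSTRUCTIONS = (
        'Respond ONLY with JSON shaped like '
        '{"title": "...", "widgets": [{"type": "slider|dropdown|text", '
        '"label": "..."}]}. Topic: '
    )

    def generate_ui(topic: str, llm_complete: Callable[[str], str]) -> dict:
        """Ask the model for a UI spec instead of a prose answer."""
        raw = llm_complete(SPEC_INSTRUCTIONS + topic)
        return json.loads(raw)  # a real client would validate against a schema

    def render(spec: dict) -> None:
        # Stub renderer: a real app would map widget types to native controls,
        # letting the user poke at the topic non-linearly instead of re-prompting.
        print(spec["title"])
        for widget in spec["widgets"]:
            print(f"  [{widget['type']}] {widget.get('label', '')}")

The point of the spec-plus-renderer split is that the LLM only proposes structure; ordinary, deterministic widget code does the rendering.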
jonny_eh
An ever-shifting UI sounds unlearnable, and therefore unusable.
dang
It wouldn't be unlearnable if it fits the way the user is already thinking.
guappa
AI is not mind reading.
dang
Behavioral patterns are not unpredictable. Who knows how far an LLM could get by pattern-matching what a user is doing and generating a UI to make it easier. Since the user could immediately say whether they liked it or not, this could turn into a rapid and creative feedback loop.
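A minimal sketch of that feedback loop, assuming explicit user confirmation at every step. The event format, `propose_change`, and `user_accepts` are hypothetical callbacks, not any real library:

    from typing import Callable

    def adapt_ui(spec: dict,
                 recent_events: list[str],
                 propose_change: Callable[[dict, list[str]], dict],
                 user_accepts: Callable[[dict], bool]) -> dict:
        """One observe -> propose -> confirm iteration."""
        # e.g. an LLM call fed the current spec plus a log of user actions
        proposal = propose_change(spec, recent_events)
        if user_accepts(proposal):  # explicit opt-in: no silent UI shifting
            return proposal
        return spec                 # rejected proposals leave the UI untouched

Because rejected proposals return the spec unchanged, a user who keeps saying no simply keeps a stable UI.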
kevinventullo
So, if the user likes UIs that don’t change, the LLM will figure out that it should do nothing?

One problem LLMs don’t fix is the misalignment between app developers’ incentives and users’ incentives. Since the developer controls the LLM, I imagine a “smart” shifting UI would quickly devolve into automated dark patterns.

dang
A user who doesn't want such changes shouldn't be subjected to them in the first place, so there should be nothing for an LLM to figure out.

I'm with you on disliking dark patterns but it seems to me a separate issue.