
1480 points | sandslash | 1 comment
hgl No.44315520
It’s fascinating to think about what a true GUI for LLMs could be like.

It immediately makes me think of an LLM that can generate a customized GUI for the topic at hand, one you can interact with in a non-linear way.
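One common way to sketch this idea is to have the model emit a declarative UI spec (e.g. JSON) that a client renders dynamically. The schema, widget kinds, and example output below are all hypothetical assumptions for illustration, not any real product's API:

```python
import json

# Hypothetical minimal schema: the model returns a JSON object with a
# "widgets" list, and the client renders only the widget kinds it knows.
ALLOWED_WIDGETS = {"slider", "button", "text", "chart"}

def parse_ui_spec(model_output: str) -> list[dict]:
    """Validate a model-emitted UI spec, dropping unknown widget kinds."""
    spec = json.loads(model_output)
    widgets = []
    for w in spec.get("widgets", []):
        if w.get("kind") in ALLOWED_WIDGETS and "label" in w:
            widgets.append({"kind": w["kind"], "label": w["label"]})
    return widgets

# Example: what a model might emit for a "compare two mortgages" topic.
raw = ('{"widgets": [{"kind": "slider", "label": "interest rate"},'
       ' {"kind": "hologram", "label": "ignored"}]}')
print(parse_ui_spec(raw))
```

Validating against an allow-list keeps the client in control even when the model hallucinates widget kinds it can't render.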

karpathy No.44315566
Fun demo of an early idea was posted by Oriol just yesterday :)

https://x.com/OriolVinyalsML/status/1935005985070084197

asterisk_ No.44325125
I feel like one quickly hits a partial-observability problem similar to that of, e.g., motion-sensor lights: how often do you wave your arms around, annoyed, because the light turned off?

To get _truly_ self-driving UIs you need to read your users' minds. It's heavy-tailed distributions all the way down. An interesting research problem in its own right.
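A toy way to see the heavy-tail point: if user workflows follow a Zipf-like distribution, a default tuned to the head covers most users, while the long tail stays effectively unpredictable. A minimal sketch (the distribution and population sizes are illustrative assumptions, not data):

```python
import random
from collections import Counter

random.seed(0)

# Assume workflow popularity is Zipf-like: workflow k has weight 1/k.
NUM_WORKFLOWS = 10_000
weights = [1.0 / k for k in range(1, NUM_WORKFLOWS + 1)]

# Sample a population of users, each with one preferred workflow.
users = random.choices(range(NUM_WORKFLOWS), weights=weights, k=100_000)
counts = Counter(users)

def coverage(top_n: int) -> float:
    """Fraction of users served by a default supporting only the top-N workflows."""
    top = {w for w, _ in counts.most_common(top_n)}
    return sum(c for w, c in counts.items() if w in top) / len(users)

for n in (1, 10, 100, 1000):
    print(f"top-{n:>4} workflows cover {coverage(n):.0%} of users")
```

Under these assumptions, coverage grows quickly for the first few workflows and then flattens: supporting ten times more workflows buys progressively less, which is roughly why a sane default beats most hand-rolled personalization.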

We already have adaptive UIs (profiles in VS Code, anyone? Vim, Emacs?), but they're mostly under-utilized: they take time to set up, and most people aren't better at designing their own workflow than the sane default is.