
71 points fka | 3 comments | source
jFriedensreich ◴[] No.44006098[source]
I was working on exactly this back in the GPT-3 days, and I still believe ad hoc generation of super specific, contextually relevant UIs will solve a lot of the problems and friction that purely textual or speech-based conversational interfaces pose, especially if UI elements like sliders provide some form of live feedback of their effect and can be scrolled back to or pinned so you can change them anytime.
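
To make that concrete, here is a rough sketch of the loop I mean, in TypeScript; the type, field names, and render/onChange wiring are all hypothetical, just to illustrate "model emits a declarative control spec, the host renders it and feeds value changes back into the conversation":

  // Minimal sketch, not any real product's API. The model emits a
  // declarative spec for a control instead of (or alongside) prose;
  // the host renders it and reports value changes back as structured
  // events, so the control stays live, pinnable, and scroll-back-able.
  type GeneratedControl = {
    id: string;       // stable id so the control can be pinned or revisited
    kind: "slider";   // only sliders sketched here; buttons, pickers, etc. would follow
    label: string;
    min: number;
    max: number;
    step: number;
    value: number;
    effect: string;   // what changing the value does, for the live-feedback text
  };

  // What the model might emit for "make the image warmer":
  const spec: GeneratedControl = {
    id: "color-temp",
    kind: "slider",
    label: "Color temperature (K)",
    min: 2000,
    max: 9000,
    step: 100,
    value: 5500,
    effect: "Shifts the white balance of the live preview",
  };

  // Host-side wiring (hypothetical): render the control and pipe changes
  // back so the model (and the preview) can react on the next turn.
  function renderControl(
    c: GeneratedControl,
    onChange: (id: string, value: number) => void,
  ): void {
    // A real app would create a DOM element or native widget here.
    console.log(`[${c.id}] ${c.label}: ${c.value} (${c.min}-${c.max})`);
    onChange(c.id, 6500); // simulate the user dragging the slider
  }

  renderControl(spec, (id, value) => {
    // Appended to the conversation state, so the user can scroll back
    // to this control later and adjust it again.
    console.log(`control ${id} changed to ${value}; update live preview`);
  });

The point is just that the spec, not the rendering, is what the model produces, so each control is contextual to the current turn but persists as state the user can return to.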
replies(2): >>44006257 #>>44009045 #
1. WillAdams ◴[] No.44006257[source]
This always felt like something which the LCARS interface addressed, at least conceptually (though I've never seen an implementation which was more than just a skin).

I'd love to see folks find the same sort of energy and innovation that drove early projects such as Momenta, PenPoint, and so forth.

replies(2): >>44008096 #>>44009639 #
2. bhj ◴[] No.44008096[source]
Yes, there’s a video where Michael Okuda (with Adam Savage, I think?) recalls the TNG cast being worried about where to tap, and his response was essentially “you can’t press a wrong button.”
3. jFriedensreich ◴[] No.44009639[source]
Thanks for bringing this up, I totally forgot the connection even though I looked at it before, and I also remember the Adam Savage interview.