
261 points | david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
1. msoloviev:
I'm working on a text editor (https://github.com/blackhole89/autopen/) that continuously analyses the buffer with a local LLM to compute token surprisal and generate candidate completions starting from any point, and lets you switch back and forth between the different candidates by walking a tree structure. This is quite different from the usual way people interact with LLMs, and it has lots of interesting applications - for example, if you are using the LLM to translate and don't like a particular word choice, you can "dig through" the top alternatives on the spot or even insert your own.
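To make the surprisal part concrete, here's a minimal sketch of the kind of per-token computation involved. The model choice (gpt2) and the Hugging Face transformers API are assumptions for illustration only, not what autopen actually uses as its local backend:

```python
# Sketch: per-token surprisal over a text buffer with a small causal LM.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisal(text: str):
    """Return (token, surprisal in bits) for every token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab)
    # Log-probability of each actual token given its prefix.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]
    nll = -log_probs[torch.arange(next_ids.size(0)), next_ids]
    surprisal_bits = nll / math.log(2)        # convert nats -> bits
    tokens = tokenizer.convert_ids_to_tokens(next_ids.tolist())
    return list(zip(tokens, surprisal_bits.tolist()))

for tok, bits in token_surprisal("The cat sat on the mat."):
    print(f"{tok!r}: {bits:.2f} bits")
```

High-surprisal tokens are the ones the editor can flag as "unexpected" and offer alternatives for.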

Applying the same approach to chain-of-thought reasoning gave me the feeling that I might be looking at a form of realistic UX for some sort of science-fiction neural AI augmentation - you can let the CoT run on and do its thing, but also interject at any point and insert a "thought" of your own, or go back and revise a thought you did not like, and then let it continue. Imagine such a stream hooked up with a two-way pipe into your phonological loop (https://www.sciencedirect.com/topics/psychology/phonological... - perhaps more attainable with existing tech).
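The "go back, revise, and continue" interaction boils down to a tree of continuations branching from any point. Here's a rough sketch of that idea; the class and method names are hypothetical and not autopen's actual data structures:

```python
# Sketch of a completion tree: each node holds a span of text, and children
# are alternative continuations (model-proposed or user-inserted) from that point.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str                                  # text generated or typed at this step
    children: list["Node"] = field(default_factory=list)
    parent: "Node | None" = None

    def branch(self, text: str) -> "Node":
        """Add an alternative continuation branching from this node."""
        child = Node(text, parent=self)
        self.children.append(child)
        return child

    def path_text(self) -> str:
        """Reassemble the buffer by walking from the root down to this node."""
        parts, node = [], self
        while node is not None:
            parts.append(node.text)
            node = node.parent
        return "".join(reversed(parts))

# Usage: two candidate continuations of a chain of thought, plus an interjected "thought".
root = Node("Therefore, ")
a = root.branch("the answer is 42.")
b = root.branch("we should check the base case first. ")
user = b.branch("Actually, reconsider step 2. ")
print(user.path_text())
```

Switching between candidates is then just moving to a different leaf and re-rendering its path, and interjecting a thought is just adding a child at the current node before letting generation continue from there.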