
279 points | nnx
ChuckMcM ◴[] No.43543501[source]
This clearly elucidated a number of things I've tried to explain to people who are so excited about "conversations" with computers. The example I've used (with varying levels of effectiveness) was to get someone to think about driving their car by only talking to it. Not a self-driving car that does the driving for you, but telling it things like: turn, accelerate, stop, slow down, speed up, put on the blinker, turn off the blinker, etc. It would be annoying and painful, and you couldn't talk to your passenger while you were "driving" because that might make the car do something weird. My point, and I think it was the author's as well, is that you aren't "conversing" with your computer, you are making it do what you want. There are simpler, faster, and more effective ways to do that than to talk at it with natural language.
replies(11): >>43543657 #>>43543721 #>>43543740 #>>43543791 #>>43543890 #>>43544393 #>>43544444 #>>43545239 #>>43546342 #>>43547161 #>>43551139 #
phyzix5761 ◴[] No.43543740[source]
You're onto something. We've learned to make computers and electronic devices feel like extensions of ourselves. We move our bodies and they do what we expect. Having to switch now to using our voice breaks that connection. It's no longer an extension of ourselves but a thing we interact with.
replies(1): >>43543986 #
namaria ◴[] No.43543986[source]
Two key things that make computers useful, specificity and exactitude, are thrown out of the window by interposing NLP between the person and the computer.

I don't get it at all.

replies(3): >>43544143 #>>43546069 #>>43546495 #
TeMPOraL ◴[] No.43544143[source]

   [imprecise thinking]
         v <--- LLMs do this for you
   [specific and exact commands]
         v
   [computers]
         v
   [specific and exact output]
         v <--- LLMs do this for you
   [contextualized output]
In many cases, you don't want or need that. In some, you do. Use the right tool for the job, etc.
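(Concretely, the top arrow might look like this toy Python sketch; llm() here is a hypothetical stand-in for whatever model API you'd actually call:)

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a real model call; returns the reply text."""
        raise NotImplementedError

    # [imprecise thinking] -> [specific and exact commands]
    cmd = llm("Rewrite this request as one exact ffmpeg invocation, "
              "reply with the command only: "
              "shrink video.mov so it fits in an email")
    # e.g. -> "ffmpeg -i video.mov -vf scale=640:-2 -crf 30 video_small.mp4"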
replies(2): >>43544577 #>>43551590 #
shakna ◴[] No.43544577[source]
I don't think they give a specific and exact output, considering how nondeterminism plays a role in most models.
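(The nondeterminism mostly enters at the sampling step rather than in the network itself; a rough numpy sketch of what a decoder does per token, with logits standing in for the model's raw scores:)

    import numpy as np

    rng = np.random.default_rng()

    def sample_next_token(logits: np.ndarray, temperature: float) -> int:
        if temperature == 0.0:
            # Greedy decoding: always the top-scoring token, so repeatable.
            return int(np.argmax(logits))
        # Temperature sampling: a fresh random draw from the softmax
        # distribution on every call; this is where runs diverge.
        z = logits / temperature
        z = z - z.max()  # numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        return int(rng.choice(len(logits), p=probs))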
replies(2): >>43545096 #>>43545818 #
TeMPOraL ◴[] No.43545818{4}[source]
I'll need to work on the diagram to make it clearer next time.

What it's trying to communicate is that, in general, a human operating a computer has to turn their imprecise thinking into "specific and exact commands", and subsequently understand the "specific and exact output" in whatever terms they're thinking of, prioritizing and filtering out data based on situational context. LLMs enter the picture in two places:

1) In many situations, they can do the "imprecise thinking" -> "specific and exact commands" step for the user;

2) In many situations, they can do the "specific and exact output" -> "contextualized output" step for the user.

In such scenarios, LLMs are not replacing software; they're slotted in as an intermediary between the user and classical software, so the user can operate closer to what's natural for them instead of translating between that and a rigid computer language.
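(A toy sketch of the second arrow, again with llm() as a hypothetical stand-in for a real model call and df as the classical software in the middle:)

    import subprocess

    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a real model call; returns the reply text."""
        raise NotImplementedError

    # [computers] -> [specific and exact output]
    raw = subprocess.run(["df", "-h"], capture_output=True, text=True).stdout
    # [specific and exact output] -> [contextualized output]
    print(llm("The user asked: 'am I running out of disk space?' "
              "Answer in one sentence, using only this df -h output:\n" + raw))

The computer in the middle stays specific and exact; the LLM only handles the translation at both ends.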

This is not applicable everywhere, but then, this is also not the only way LLMs are useful - it's just one broad class of scenarios in which they are.