
279 points nnx | 2 comments | source
ChuckMcM ◴[] No.43543501[source]
This clearly elucidated a number of things I've tried to explain to people who are so excited about "conversations" with computers. The example I've used (with varying levels of effectiveness) was to get someone to think about driving their car by only talking to it. Not a self-driving car that does the driving for you, but telling it things like: turn, accelerate, stop, slow down, speed up, put on the blinker, turn off the blinker, etc. It would be annoying and painful, and you couldn't talk to your passenger while you were "driving" because that might make the car do something weird. My point, and I think it was the author's as well, is that you aren't "conversing" with your computer, you are making it do what you want. There are simpler, faster, and more effective ways to do that than to talk at it with natural language.
replies(11): >>43543657 #>>43543721 #>>43543740 #>>43543791 #>>43543890 #>>43544393 #>>43544444 #>>43545239 #>>43546342 #>>43547161 #>>43551139 #
phyzix5761 ◴[] No.43543740[source]
You're onto something. We've learned to make computers and electronic devices feel like extensions of ourselves. We move our bodies and they do what we expect. Now having to switch to using our voice breaks that connection. It's no longer an extension of ourselves but a thing we interact with.
replies(1): >>43543986 #
namaria ◴[] No.43543986[source]
Two key things that make computers useful, specificity and exactitude, are thrown out of the window by interposing NLP between the person and the computer.

I don't get it at all.

replies(3): >>43544143 #>>43546069 #>>43546495 #
TeMPOraL ◴[] No.43544143[source]

   [imprecise thinking]
         v <--- LLMs do this for you
   [specific and exact commands]
         v
   [computers]
         v
   [specific and exact output]
         v <--- LLMs do this for you
   [contextualized output]
In many cases, you don't want or need that. In some, you do. Use the right tool for the job, etc.
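
To make those two arrows concrete, here is a minimal sketch in Python. The complete(prompt) callable stands in for whatever LLM you use, and the JSON command schema is made up for illustration:

    import json

    def imprecise_to_exact(request, complete):
        # Arrow 1: translate a vague request into one exact,
        # machine-checkable command (the schema is a made-up example).
        prompt = ('Translate the request into one JSON object of the form '
                  '{"action": "turn|accelerate|stop", "args": {}}. '
                  'Request: ' + repr(request))
        return json.loads(complete(prompt))  # raises if the model emits junk

    def exact_to_contextualized(raw_output, complete):
        # Arrow 2: wrap exact machine output in plain language for the user.
        return complete('Explain this program output briefly: ' + repr(raw_output))

The deterministic part in the middle stays deterministic; the LLM only sits at the edges.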
replies(2): >>43544577 #>>43551590 #
namaria ◴[] No.43551590[source]
Despite feeling like a "let me draw it for you" answer is a tad condescending, I want to address something here.

This would be great if LLMs did not tend to output nonsense. Truly it would be grand. But they do. So it isn't. It's wasting resources hoping for a good outcome and risking frustration, misapprehensions, prompt injection attacks... It's like running a non-deterministic algorithm and hoping P=NP, except instead of branching at every decision you're searching by tweaking vectors whose values you don't even know and whose influence on the outcome is impossible to foresee.
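
To be concrete about what non-deterministic means here: the same logits, sampled at any non-zero temperature, give a different answer run to run. This is plain softmax sampling, not any particular vendor's API:

    import math, random

    def sample(logits, temperature=1.0):
        # Sample one token from softmax(logits / temperature).
        weights = {t: math.exp(l / temperature) for t, l in logits.items()}
        r = random.random() * sum(weights.values())
        for token, w in weights.items():
            r -= w
            if r <= 0:
                return token
        return token  # floating-point edge case

    logits = {"turn left": 2.0, "turn right": 1.5, "stop": 0.5}
    print([sample(logits) for _ in range(5)])  # different answers every run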

Sure, a VC-subsidized LLM is a great way to make CVs in LaTeX (I do it all the time), translate text, maybe even generate some code if you know what you need and can describe it well. I will give you that. I even created a few - very mediocre - songs. Am I contradicting myself? I don't think I am, because I would love to live in a hotel if I only had to pay a tiny fraction of the cost. But I would still think that building hotels would be a horrible way to address the housing crisis in modern metropolises.

replies(2): >>43552390 #>>43552872 #
TeMPOraL ◴[] No.43552390{5}[source]
> Despite feeling like a "let me draw it for you" answer is a tad condescending, I want to address something here.

I didn't mean it to be condescending - though I can see how it could come across as such. FWIW, I opted for a diagram after I typed half a page's worth of "normal" text and realized I still wasn't able to elucidate my point - so I deleted it and drew something matching my message more closely.

> This would be great if LLMs did not tend to output nonsense. Truly it would be grand. But they do. So it isn't.

I find this critique to be tiring at this point - it's just as wrong as assuming LLMs work perfectly and all is fine. Both views are too definite, too binary. In reality, LLMs are just non-deterministic - that is, they have an error rate. How big it is, and how small it can get in practice for a given task - those are the important questions.

Pretty much every aspect of computing is only probabilistically correct - either because the algorithm is explicitly so (UUIDs and primality testing, for starters), or just because it runs on real hardware, and physics happens. Most people get away with pretending that our systems are either correct or not, but that's only possible because the error rate is low enough. But it's never that low by accident - it got pushed there by careful design at every level, hardware and software. LLMs are just another probabilistically correct system that, over time, we'll learn to use in ways that get the error rate low enough to stop worrying about it.
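
Primality testing makes the point nicely. A sketch of Miller-Rabin, the standard probabilistic test: each round is wrong with probability at most 1/4, so k rounds push the error rate below 4^-k - engineered down until nobody worries about it:

    import random

    def is_probably_prime(n, rounds=40):
        # Miller-Rabin: "composite" answers are certain; "prime" answers
        # are wrong with probability below 4**-rounds.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:          # write n - 1 as d * 2**r with d odd
            d //= 2
            r += 1
        for _ in range(rounds):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False       # witness found: definitely composite
        return True                # probably prime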

How can we get there - now, that is an interesting challenge.

replies(1): >>43554183 #
namaria ◴[] No.43554183[source]
Natural language has a high entropy floor. It's a very noisy channel. This isn't anything like bit flipping or component failure. This is a whole different league. And we've been pouring outrageous amounts of resources into diminishing returns. OpenAI keeps touting AGI and burning cash. It's being pushed everywhere as a silver bullet, helping spin layoffs as a good thing.
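
A toy way to see the entropy floor: compare the Shannon entropy of "which utterance carries this intent" for a fixed command language versus natural language. The numbers below are invented; the contrast is the point:

    import math

    def entropy(probs):
        # Shannon entropy in bits: H = -sum(p * log2(p)).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    command_language = [1.0]                           # one intent, one utterance
    natural_language = [0.3, 0.2, 0.2, 0.1, 0.1, 0.1]  # many near-synonyms

    print(entropy(command_language))  # 0.0 bits - nothing to disambiguate
    print(entropy(natural_language))  # ~2.45 bits the parser must resolve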

LLMs are cool technology sure. There's a lot of cool things in the ML space. I love it.

But don't pretend like the context of this conversation isn't the current hype and that it isn't reaching absurd levels.

So yeah, we're all tired. Tired of the hype, of pushing LLMs, agents, whatever, as some sort of silver bullet. Tired of the corporate smoke screen around it. NLP is still a hard problem; we're nowhere near solving it, and bolting it onto everything is not a better idea now than it was before transformers and scaling laws.

On the other hand, my security research business is booming, and hey, the rational thing for me to say is: by all means, keep putting NLP everywhere.