323 points steerlabs | 4 comments
keiferski No.46192154
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect them to give me a short reply that includes a follow-up question or asks for details/clarification. A conversation is thus an ongoing "dance" where the questioner and answerer gradually arrive at the same shared meaning.

LLMs don't do this. Instead, every question is immediately answered with extreme confidence in a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights how it's not operating in a way remotely similar to the human-human exchange I mentioned above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.

1. jacquesm No.46193161
If you're paying per token then there is a big business incentive for the counterparty to burn tokens as much as possible.
replies(3): >>46193224 >>46193960 >>46194192
2. lxgr No.46193224
As long as there's no moat (and current LLM inference APIs are arguably far from having one), it doesn't really matter what unit users pay by.

The only things I care about are whether the answer helps me out and how much I paid for it, whether it took the model a million tokens or one to get there.

3. dboon No.46193960
Making a few pennies more from inference is not even on the radar of the labs making frontier models. The financial stakes are so much higher than that for them.
4. lkbm No.46194192
If I were only paying to get a fixed result, sure. But I'd expect a Jevons paradox effect: if LLMs got me results twice as fast for the same cost, I'd use them more and end up paying more in total.

Maximizing the utility of your product for users is usually the winning strategy.
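
A back-of-the-envelope sketch of that Jevons-style effect, using made-up prices and usage numbers purely for illustration:

    # Illustrative sketch only: hypothetical numbers, showing how a lower
    # cost per useful result can still raise total spend if usage grows enough.
    price_per_result = 0.10   # hypothetical dollars per useful answer
    results_per_week = 50     # hypothetical baseline usage

    baseline_spend = price_per_result * results_per_week        # $5.00/week

    # The model gets twice as efficient (half the cost per result), and the
    # lower friction triples usage, i.e. the Jevons-paradox effect.
    new_spend = (price_per_result / 2) * (results_per_week * 3)  # $7.50/week

    print(f"before: ${baseline_spend:.2f}/wk  after: ${new_spend:.2f}/wk")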