
523 points noperator | 1 comment
gorgoiler No.44497736
Interesting article. Bizarrely, it makes me wish I’d used Pocket more! Tangentially, with LLMs I’m getting very tired of the standard patter one sees in their responses. You’ll recognize the general format of chatty output:

Platitude! Here’s a bunch of words that a normal human being would say followed by the main thrust of the response that two plus two is four. Here are some more words that plausibly sound human!

I realize that this is of course how it all actually works underneath — LLMs have to waffle their way to the point because of the nature of their training — but is there any hope of being able to post-process out the fluff? I want to distill down to an actual answer inside the inference engine itself, without having to use more language-corpus machinery to do so.

It’s like the age old problem of internet recipes. You want this:

  500g wheat flour
  280ml water
  10g salt
  10g yeast
But what you get is this:

  It was at the age of five, sitting
  on my grandmother’s lap in the
  cool autumn sun on West Virginia
  that I first tasted the perfect loaf…
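The post-processing wish above can at least be approximated outside the inference engine, if crudely. A minimal sketch: the phrase list and the `strip_fluff` helper below are hypothetical, and a regex heuristic like this only catches stock openers — it is nowhere near a real distillation step inside the model:

```python
import re

# Hypothetical list of filler openers often seen in chatty LLM output.
# A real filter would need far more than a handful of regexes.
FLUFF_OPENERS = [
    r"great question[!.,]?\s*",
    r"certainly[!.,]?\s*",
    r"i'd be happy to help[!.,]?\s*",
    r"it's worth noting that\s*",
]

def strip_fluff(text: str) -> str:
    """Drop leading platitudes from each line and discard lines left empty."""
    pattern = re.compile("|".join(FLUFF_OPENERS), re.IGNORECASE)
    lines = [pattern.sub("", line) for line in text.splitlines()]
    return "\n".join(line for line in lines if line.strip())

print(strip_fluff("Great question! Two plus two is four."))
```

This only trims surface patter; it cannot recover the "500g flour"-style answer when the substance itself is buried mid-paragraph.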
1. dicethrowaway1 No.44499091
FWIW, o3 seems to get to the point more quickly than most of the other LLMs. So much so that, if you're asking about a broad topic, it may abbreviate a lot and make it difficult to parse just what it's saying.