
523 points noperator | 13 comments
1. gorgoiler ◴[] No.44497736[source]
Interesting article. Bizarrely it makes me wish I’d used Pocket more! Tangentially, with LLMs I’m getting very tired with the standard patter one sees in their responses. You’ll recognize the general format of chatty output:

Platitude! Here’s a bunch of words that a normal human being would say followed by the main thrust of the response that two plus two is four. Here are some more words that plausibly sound human!

I realize that this is of course how it all actually works underneath — LLMs have to waffle their way to the point because of the nature of their training — but is there any hope of being able to post-process out the fluff? I want to distill things down to an actual answer inside the inference engine itself, without having to use more language-corpus machinery to do so.
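
A toy sketch of what naive post-processing might look like, in Python; the fluff-phrase list and the sample response below are my own assumptions, and real fluff is of course far less regular:

  import re

  # Naive fluff-stripper: remove a chatty opening phrase from each line
  # and drop any lines left empty.
  FLUFF = re.compile(
      r"^(great question|certainly|sure|of course|happy to help)[!,.:]?\s*",
      re.IGNORECASE,
  )

  def strip_fluff(answer: str) -> str:
      lines = [FLUFF.sub("", line).strip() for line in answer.splitlines()]
      return "\n".join(line for line in lines if line)

  print(strip_fluff("Great question! Two plus two is four."))
  # -> Two plus two is four.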

It’s like the age-old problem of internet recipes. You want this:

  500g wheat flour
  280ml water
  10g salt
  10g yeast
But what you get is this:

  It was at the age of five, sitting
  on my grandmother’s lap in the
  cool autumn sun in West Virginia
  that I first tasted the perfect loaf…
replies(5): >>44497793 #>>44497920 #>>44498626 #>>44499091 #>>44500377 #
2. apsurd ◴[] No.44497920[source]
How do you trust the recipe without context?

People say they want one thing but then their actions and money go to another.

I do agree there's unnecessary fluff. But "just give me the recipe" isn't really what people want. And I don't think you represent some outlier take, because really: have you ever gotten a recipe exactly as you outlined — zero context — and cared enough to actually make it?

replies(5): >>44498306 #>>44498822 #>>44498850 #>>44500561 #>>44502401 #
3. aniviacat ◴[] No.44498306[source]
Yesterday I baked some muffins from an internet recipe that had a list of ingredients and four sentences on what to do. They're pretty nice.
4. mattmanser ◴[] No.44498626[source]
I just add "be concise" to the end. Works pretty well.

I'm no expert, but with the "thinking" models I'd hope the "be concise" step happens at the end, so the model can waffle all it wants to itself before giving me the answer.
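
For anyone hitting the API directly, a minimal sketch of the same trick with the OpenAI Python client; the model name and exact wording are just placeholder choices:

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def ask(question: str) -> str:
      # Tack the brevity instruction onto the end of the prompt.
      resp = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat model works
          messages=[{"role": "user", "content": question + " Be concise."}],
      )
      return resp.choices[0].message.content

  print(ask("What is two plus two?"))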

5. T0Bi ◴[] No.44498822[source]
The biggest cooking/recipe app in Germany (Chefkoch) works perfectly fine for millions of people without any of the fluff. It's a list of ingredients and cooking steps; that's it. I don't know a single person who cooks that doesn't use it regularly.
6. lan321 ◴[] No.44498850[source]
> How do you trust the recipe without context?

Ratings or poster reputation.

I often use recipes from a particular chef's website, which are formulated with specific ingredients, steps, and, optionally, a video. I trust the chef since I've yet to try a bad recipe from him.

I also often use baking recipes from King Arthur based on ratings. They're also pretty consistently good and don't have much fluff.

replies(1): >>44502784 #
7. dicethrowaway1 ◴[] No.44499091[source]
FWIW, o3 seems to get to the point more quickly than most of the other LLMs. So much so that, if you're asking about a broad topic, it may abbreviate a lot and make it difficult to parse just what it's saying.
8. sram1337 ◴[] No.44500377[source]
That is an issue with general-use LLM apps like ChatGPT: they have to have wide appeal, so if you want replies that differ from what the average user wants, you're going to have a bad time.

OpenAI has said they are working on making ChatGPT's output more configurable.

9. Brendinooo ◴[] No.44500561[source]
> But "just give me the recipe" isn't really what people want.

The structure of recipe sites has less to do with revealed preferences and more to do with playing the SEO game.

10. marssaxman ◴[] No.44502401[source]
> How do you trust the recipe without context?

Well, I just read it. The stakes are not that high!

> have you ever gotten a recipe exactly as you outlined — zero context — and cared enough to actually make it?

Of course: there are a great many useful cookbooks written exactly this way.

replies(1): >>44502758 #
11. apsurd ◴[] No.44502758{3}[source]
The book is the context! It was published; it presumably has a vetted, influential author.

Maybe I'm coming off as too flippant. I'm just trying to say there's a spectrum between fluff and context. If AIs literally just gave us answers and lists of recipes, they wouldn't be as useful as they are with context backing up where an answer came from, why this list, and so on.

replies(1): >>44503841 #
12. apsurd ◴[] No.44502784{3}[source]
Those are good examples. A trusted chef's website can list just the recipe because it sits within a pre-vetted context. I do this as well.

I'm advocating for those kinds of trust signals. If an AI literally just listed ingredients, I wouldn't trust it. How could I?

13. marssaxman ◴[] No.44503841{4}[source]
Well, that's a reasonable point!

Perhaps it also depends on one's approach to cooking. I often read recipes not because I intend to follow them, but to understand the range of variation in the dish before I make my own version. "Somebody liked this enough to bother writing it up" is enough context for that use.