1479 points sandslash | 22 comments
mentalgear ◴[] No.44316934[source]
Meanwhile, I asked this morning Claude 4 to write a simple EXIF normalizer. After two rounds of prompting it to double-check its code, I still had to point out that it makes no sense to load the entire image for re-orientating if the EXIF orientation is fine in the first place.

Vibe vs reality, and anyone actually working in the space daily can attest to how brittle these systems are.

Maybe this changes in SWE with more automated tests in verifiable simulators, but the real world is far too complex to simulate in its vastness.

replies(7): >>44317104 #>>44317116 #>>44317136 #>>44317214 #>>44317305 #>>44317622 #>>44317741 #
1. ramon156 ◴[] No.44317136[source]
The real question is how long it'll take until they're not brittle
replies(3): >>44317160 #>>44317197 #>>44317483 #
2. kubb ◴[] No.44317160[source]
Or will they ever be reliable. Your question is already making an assumption.
replies(3): >>44317316 #>>44317424 #>>44317731 #
3. guappa ◴[] No.44317197[source]
4. diggan ◴[] No.44317316[source]
They're already reliable if you change the way you approach them. These probabilistic token generators will probably never be "reliable" if you expect them to output exactly what you had in mind 100% of the time, without iterating in user-space (the prompts).
replies(1): >>44317546 #
5. vFunct ◴[] No.44317424[source]
It's perfectly reliable for the things you know it to be reliable at, such as operations that fit within its context window.

Don't ask LLMs to "Write me Microsoft Excel".

Instead, ask it to "Write a directory tree view for the Open File dialog box in Excel".

Break your projects down into the smallest chunks you can for the LLMs. The more specific you are, the more reliable it's going to be.

The rest of this year is going to be companies figuring out how to break down large tasks into smaller tasks for LLM consumption.
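
As a rough sketch of that kind of decomposition (the sub-task prompts and model name are hypothetical, assuming the OpenAI Python SDK):

    from openai import OpenAI

    client = OpenAI()

    # Instead of one giant "Write me Microsoft Excel" prompt, feed the model
    # narrowly scoped sub-tasks, one at a time.
    subtasks = [
        "Write a directory tree view component for an Open File dialog box.",
        "Write the file-type filter dropdown for the same dialog box.",
        "Write unit tests for the directory tree view component.",
    ]

    for task in subtasks:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": task}],
        )
        print(response.choices[0].message.content)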

6. yahoozoo ◴[] No.44317483[source]
“Treat it like a junior developer” … 5 years later … “Treat it like a junior developer”
replies(2): >>44317582 #>>44317623 #
7. kubb ◴[] No.44317546{3}[source]
I also think they might never become reliable.
replies(2): >>44317591 #>>44317599 #
8. agile-gift0262 ◴[] No.44317582[source]
    from datetime import timedelta
    from time import sleep

    while True:
        print("This model that just came out changes everything. It's flawless. It doesn't have any of the issues the model from 6 months ago had. We are 1 year away from AGI and becoming jobless")
        sleep(timedelta(days=180).total_seconds())
9. diggan ◴[] No.44317591{4}[source]
But what does that mean? If you tell the LLM "Say just 'hi' without any extra words or explanations", do you not get "hi" back from it?
replies(2): >>44317612 #>>44318187 #
10. flir ◴[] No.44317599{4}[source]
There is a bar below which they are reliable.

"Write a Python script that adds three numbers together".

Is that bar going up? I think it probably is, although not as fast/far as some believe. I also think that "unreliable" can still be "useful".

11. TeMPOraL ◴[] No.44317612{5}[source]
That's literally the wrong way to use LLMs though.

LLMs think in tokens; the fewer they emit, the dumber they are, so asking them to be concise, or to give the answer before the explanation, is extremely counterproductive.

replies(1): >>44317636 #
12. TeMPOraL ◴[] No.44317623[source]
Usable LLMs are 3 years old at this point. ChatGPT, not GitHub Copilot, is the marker.
replies(1): >>44320349 #
13. diggan ◴[] No.44317636{6}[source]
I was trying to make a point regarding "reliability", not a point about how to prompt or how to use them for work.
replies(1): >>44317746 #
14. dist-epoch ◴[] No.44317731[source]
I remember when people were saying here on HN that AIs would never be able to generate pictures of hands with just 5 fingers because they just "don't have common sense"
15. TeMPOraL ◴[] No.44317746{7}[source]
This is relevant. Your example may be simple enough, but for anything more complex, letting the model have its space to think/compute is critical to reliability - if you starve it for compute, you'll get more errors/hallucinations.
replies(1): >>44317850 #
16. diggan ◴[] No.44317850{8}[source]
Yeah I mean I agree with you, but I'm still not sure how it's relevant. I'd also urge people to have unit tests they treat as production code, and proper system prompts, and X and Y, but it's really beyond the original point of "LLMs aren't reliable" which is the context in this sub-tree.
17. kubb ◴[] No.44318187{5}[source]
Sometimes I get "Hi!", sometimes "Hey!".
replies(1): >>44318270 #
18. diggan ◴[] No.44318270{6}[source]
Which model? I just tried a bunch: ChatGPT, OpenAI's API, Claude, Anthropic's API, and DeepSeek's API, with both chat and reasoner models; every single one replied with a single "hi".
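
For reference, a minimal sketch of that kind of check against the OpenAI API (model name illustrative; the other providers' SDKs look much the same):

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # The user message quoted upthread; a system prompt could be added as well.
            {"role": "user", "content": "Say just 'hi' without any extra words or explanations"},
        ],
    )
    print(response.choices[0].message.content)  # the thread reports a plain "hi" here
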
replies(1): >>44318659 #
19. throwdbaaway ◴[] No.44318659{7}[source]
o3-mini-2025-01-31 with high reasoning effort replied with "Hi" after 448 reasoning tokens.

gpt-4.5-preview-2025-02-27 replied with "Hi!"

replies(1): >>44319506 #
20. diggan ◴[] No.44319506{8}[source]
> o3-mini-2025-01-31 with high reasoning effort replied with "Hi" after 448 reasoning tokens.

I got "hi", as expected. What is the full system prompt + user message you're using?

https://i.imgur.com/Y923KXB.png

> gpt-4.5-preview-2025-02-27

Same "hi": https://i.imgur.com/VxiIrIy.png

replies(1): >>44324512 #
21. LtWorf ◴[] No.44320349{3}[source]
Usable for fun yes.
22. throwdbaaway ◴[] No.44324512{9}[source]
Ah right, my bad. Somehow I thought the prompt was only:

    Say just 'hi'
while the "without any extra words or explanations" part was for the readers of your comment. Perhaps kubb also made a similar mistake.

I used an empty system prompt.