
1480 points sandslash | 21 comments
OJFord No.44324130
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
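The prompt-as-program idea above can be sketched as real Python. This is a minimal sketch under stated assumptions: `complete` and `llm_main` are hypothetical names, and `complete` stands in for any prompt-to-text callable (a real LLM client call would be plugged in there).

```python
import sys

# In this paradigm the "source code" of the program is just the prompt.
PROMPT = (
    "You are a calculator. Given an input expression, you compute the "
    "result and print it to stdout, exiting 0. Should you be unable to "
    "do this, you print an explanation to stderr and exit 1."
)

def llm_main(expr, complete):
    """Run the prompt-as-program.

    `complete` is any prompt -> text callable; a real implementation
    would call an LLM API here. This keeps the sketch self-contained.
    """
    reply = complete(f"{PROMPT}\n\nInput: {expr}").strip()
    if reply.startswith("ERROR:"):
        print(reply, file=sys.stderr)  # model reported failure
        return 1
    print(reply)  # model produced a result
    return 0
```

Swapping in a deterministic `complete` (say, one backed by a conventional parser for testing) makes the same "program" deterministic, which is part of the point: the program lives in the prompt, not in the interpreter behind it.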
1. diggan No.44326489
> It makes no sense at all, it's cuckooland, are you all on crazy pills?

The first step towards understanding something you obviously have strong feelings about is to try to avoid hitting those triggers while you think about the thing; otherwise they cloud your judgment. Not a requirement by any measure, just a tip.

> are you telling me people will do three years university to learn to prompt?

Are people going to university for three years to write "1.0" or "2.0" software? I certainly didn't, and I don't think the majority of software developers have either, at least in my personal experience, but YMMV.

> I do not understand where there is anything here to be "not sure" on?

They're not sure about the specific naming, not the concept or talk as a whole.

> LLMs making non-deterministic mistakes

Everything they do is non-deterministic when the temperature is set to anything above 0.0, as that's the entire point. The "correct" answers are as non-deterministic as the "mistakes", although I'm not sure "mistake" is the right word, because the model did choose valid tokens; it's just that you didn't like or expect it to choose those particular tokens.
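The temperature point can be made concrete with a toy sampler (a minimal sketch, not a real inference stack; `sample_token` is a hypothetical name, and real decoders add top-k/top-p filtering on top of this):

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Pick a token index from raw logits.

    temperature == 0: greedy argmax, fully deterministic.
    temperature > 0:  softmax sampling, so even the "correct" token is
    only the most *likely* outcome, never a guaranteed one.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]
```

At temperature 0 the same logits always yield the same token; above 0, the "right" and "wrong" continuations are drawn from the same distribution, which is the commenter's point.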

2. bgwalter No.44326722
> It makes no sense at all, it's cuckooland, are you all on crazy pills?

Frequent LLM usage impairs thinking. The LLM has no connection to reality, and it takes over people's minds.

3. boppo1 No.44326752
>Frequent LLM usage impairs thinking

Is there hard evidence on this?

4. MattRix No.44326923
yes

https://www.mdpi.com/2075-4698/15/1/6

5. bgwalter No.44326929
If you are the type who prefers studies:

https://time.com/7295195/ai-chatgpt-google-learning-school/

Otherwise, read pro-LLM blogs, which are mostly rambling nonsense that overpromises while almost no actual LLM-written software exists.

You can also see how the few open source developers who have jumped on the LLM bandwagon now have worse blogging and programming output than they had pre-LLM.

6. fat_cantor No.44327015
Some preliminary evidence:

https://news.ycombinator.com/item?id=44286277

7. infecto No.44327095
Do you think you could condense your point of view without hyperbole and rudeness so the rest of us can understand it?
8. infecto No.44327102
Sounds like you're taking crazy pills.

It's far too early, given any of the studies done so far, to come to your conclusion.

9. infecto No.44327122
Short answer: no.

Longer answer: there was a study posted this week that compared it to using search and then, what was it, raw thinking or something similar. I could totally understand that in certain cases you are not activating parts of your brain as much; I don't know that any of it proves much in aggregate.

10. bgwalter No.44327371
The LLM proponents are so desperate now that they have to resort to personal insults. Are investors beginning to realize the scam?
11. infecto No.44327481
It’s strange how often criticism gets deflected with claims of personal attack. You’re citing a study that doesn’t say anything close to what you’re claiming. You’re fabricating conclusions that simply aren’t there.
12. bgwalter No.44327585
I quoted zero studies in the comment you're responding to and had no intention of doing so. I quoted a study, as well as personal observations, under duress after a citation demand appeared.
13. infecto No.44327663
I honestly have no idea what point you’re trying to make now. You opened with bold claims and zero evidence, then acted like being asked for a citation was some kind of duress. If you’re going to assert sweeping conclusions, expect to be challenged. That’s not an attack, it’s basic discourse.
14. Kiro No.44327809
You were the one who started with the insults.
15. bgwalter No.44327830
Saying to someone "you are more intelligent if you don't use an LLM" is a compliment, not an insult.
16. Workaccount2 No.44328025
I have 7 different 100% LLM-written programs in use at my company daily, some going back to GPT-4 and some as recent as Gemini 2.5.

Software engineers are so lost in the weeds of sprawling, feature-packed, endlessly flexible programs that they have completely lost sight of simple narrow-scope programs. I can tell an LLM exactly how we need the program to work (forgoing endless settings and option menus) and exactly what it needs to do (forgoing endless branching possibilities for every conceivable user workflow) and get a lean, lightweight program that takes the user from A to B in 3k LOC.

Is the program something that could be sold? No. Would it work for other companies/users? Probably not. Does it replace a massive 1M+ LOC $20/mo software package for that user in our bespoke use case? Yes.

17. Kiro No.44328287
You're not fooling anyone.
18. bopbopbop7 No.44341212
Do you really need a study to tell you that offloading your thinking to something else impairs your thinking?

But yes, there are studies to prove the most obvious statement in the world.

https://news.ycombinator.com/item?id=44286277

19. infecto No.44346508
It's not obvious to me, but perhaps you are approaching it from a biased perspective. Sure, if you leave all higher-order functions to an LLM, thinking about homework by simply parsing it all through a chatbot, of course you are losing out. There is a lot of nuance to it, and I am not sure that very first study captures it. Everyone is different and YMMV, but I suspect it will come down to how you use the tools, not a simple blanket statement like yours.

Do you really latch on to a single early study to draw conclusions about the world? Wild. Next time, before going down the path of rudeness, why don't you share a real anecdote or thought? We have all seen that study linked many times already.

20. bopbopbop7 No.44346905
You asked for a study, you got a study. Yet you're still playing mental gymnastics to somehow prove that not using your brain doesn't impair thinking. You want an anecdote now, after dismissing a study? Wild. I doubt an anecdote will convince you if a study won't; you're just grasping at straws.

And no, it’s not nuanced at all. If you stop using your brain, you lose cognitive abilities. If you stop working out, you lose muscle. If you stop coding and let someone else do it, you lose coding abilities.

No one is being rude, that’s just what it feels like when someone calls you out with evidence.

21. infecto No.44355062
You didn't "call me out with evidence"; you linked a single early study that doesn't prove the absolutist claim you're making. You're taking a broad, complex topic and flattening it into a gym analogy. That's lazy thinking. It's far too early to support either position. I also never asked for a study; I asked the parent to back up claims that far exceed anything that study may show. He has nothing. While you might be unable to hold a constructive discussion, I will repeat my position.

I suspect there are shades of gray in how these tools are being used, from one extreme of using it as a magic ball for decision making, where no thinking goes into the process, to the other end, where a constructive Q&A-style discussion is being had. I would be surprised if having a constructive discussion with an LLM did not activate the brain, but, like I said, I welcome more studies to dig in, as what we have so far is not what I would call conclusive or showing much.

Green accounts being green.
