1479 points | sandslash
OJFord ◴[] No.44324130[source]
I'm not sure about the 1.0/2.0/3.0 classification, but it did lead me to think about LLMs as a programming paradigm: we've had imperative & declarative, procedural & functional languages, maybe we'll come to view deterministic vs. probabilistic (LLMs) similarly.

    def __main__:
        You are a calculator. Given an input expression, you compute the result and print it to stdout, exiting 0.
        Should you be unable to do this, you print an explanation to stderr and exit 1.
(and then, perhaps, a bunch of 'DO NOT express amusement when the result is 5318008', etc.)
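The prompt-as-program idea above can be sketched concretely. This is a minimal, hypothetical sketch: the `complete()` function stands in for whatever LLM API you'd actually call, and is stubbed deterministically here (with a restricted `eval`) so the example runs at all. The point is the shape, not the implementation: the "source code" is natural-language intent, and the runtime is probabilistic.

```python
import sys

# The "program" is a prompt, per the parent comment.
PROGRAM = """You are a calculator. Given an input expression, you compute
the result and print it to stdout, exiting 0. Should you be unable to do
this, you print an explanation to stderr and exit 1."""


def complete(prompt: str, user_input: str) -> str:
    """Hypothetical LLM call. Stubbed deterministically here so the
    sketch is runnable; a real version would send `prompt` and
    `user_input` to a model and return its reply."""
    try:
        # The stub "interprets" the prompt by actually doing the
        # arithmetic, which is what we hope the model would do.
        return str(eval(user_input, {"__builtins__": {}}, {}))
    except Exception as exc:
        raise ValueError(f"cannot compute {user_input!r}: {exc}")


def main(expression: str) -> int:
    """Deterministic harness around the probabilistic 'program'."""
    try:
        print(complete(PROGRAM, expression))
        return 0
    except ValueError as err:
        print(err, file=sys.stderr)
        return 1
```

With a real model behind `complete()`, the same harness gives you the deterministic contract (exit codes, stdout/stderr) wrapped around a probabilistic core, which is roughly the paradigm shift the comment is pointing at.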
replies(10): >>44324398 #>>44324762 #>>44325091 #>>44325404 #>>44325767 #>>44327171 #>>44327549 #>>44328699 #>>44328876 #>>44329436 #
aaron695[dead post] ◴[] No.44325404[source]
[flagged]
bgwalter ◴[] No.44326722[source]
> It makes no sense at all, it's cuckooland, are you all on crazy pills?

Frequent LLM usage impairs thinking. The LLM has no connection to reality, and it takes over people's minds.

replies(2): >>44326752 #>>44327102 #
infecto ◴[] No.44327102[source]
Sounds like you’re taking crazy pills.

It's far too early, given the studies done so far, to come to your conclusion.

replies(2): >>44327371 #>>44341212 #
bopbopbop7 ◴[] No.44341212[source]
Do you really need a study to tell you that offloading your thinking to something else impairs your thinking?

But yes, there are studies to prove the most obvious statement in the world.

https://news.ycombinator.com/item?id=44286277

replies(1): >>44346508 #
infecto ◴[] No.44346508[source]
It’s not obvious to me, but perhaps you are approaching it from a biased perspective. Sure, if you offload all higher-order thinking to an LLM, thinking about homework and simply parsing it all through a chatbot, of course you are losing out. There is a lot of nuance to it, and I am not sure that very first initial study captures it. Everyone is different and YMMV, but I suspect it will come down to how you use the tools, not a simple blanket statement like yours.

Do you really latch on to a single early study to make conclusions in the world? Wild. Next time before going down the path of rudeness, why don’t you share a real anecdote or thought. We have all seen that study linked many times already.

replies(1): >>44346905 #
bopbopbop7 ◴[] No.44346905[source]
You asked for a study, you got a study. Yet you’re still playing mental gymnastics to somehow prove that not using your brain doesn’t impair thinking. You want an anecdote now? After dismissing a study? Wild. I doubt that will convince you if a study won’t, just grasping at straws.

And no, it’s not nuanced at all. If you stop using your brain, you lose cognitive abilities. If you stop working out, you lose muscle. If you stop coding and let someone else do it, you lose coding abilities.

No one is being rude, that’s just what it feels like when someone calls you out with evidence.

replies(1): >>44355062 #
infecto ◴[] No.44355062[source]
You didn’t “call me out with evidence,” you linked a single early study that doesn’t prove the absolutist claim you’re making. You’re taking a broad, complex topic and flattening it into a gym analogy. That’s lazy thinking. It’s far too early to defend either position. I also never asked for a study; I asked the parent to back up claims that far exceed anything that study may show. He has nothing. While you might be unable to hold a constructive discussion, I will repeat mine.

I suspect there are shades of gray in how these tools are being used. On one extreme end, just using it as a magic ball of decision making where no thinking goes into the process; on the other, a constructive Q&A-style discussion. I would be surprised if having a constructive discussion with an LLM did not activate the brain, but like I said, I welcome more studies to dig into it, as what we have today is not what I would call conclusive or showing much.

Green accounts being green.

replies(1): >>44355534 #