
63 points tejonutella | 5 comments
esafak No.43304012
This is a shallow article, and it was dated even when it came out in 2023. LLMs are dumb because they can't multiply or generalize? They can easily write programs to calculate anything you want, and machine learning is all about generalization.
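
For instance, a model that can't multiply large numbers in its head can still emit a trivial program that does it exactly. A hedged sketch (the numbers are just illustrative; Python ints are arbitrary-precision):

    # What an LLM typically emits when asked to multiply big numbers:
    # it delegates the arithmetic to exact integer math instead of
    # "calculating" inside its own weights.
    a = 123456789123456789
    b = 987654321987654321
    print(a * b)  # exact arbitrary-precision product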

   Hot take: if your job can be partially or wholly eliminated by AI, that's a GOOD THING. If your job has patterns that are that predictable or labor that routine, AI automation is a GOOD THING.
What if it led to widespread under- or unemployment? This social upheaval would lead to people setting aside the niceties of civilization as they fight to meet their basic needs. And such people elect bad governments, which exacerbates the problem.

One might also be scared in a physical sense. The race is afoot to develop physical robots on par with humans. Imagine that as your policeman or soldier, bringing democracy to a country near you.

https://www.yahoo.com/tech/china-tech-contest-features-robot...

1. Salgat No.43304144
The entire point is that LLMs don't think; they're just regressions over the information you'd find on the internet. And automation is a good thing: 90% of Americans were farmers in the 1700s, and automation completely revolutionized that. Your final point is still a concern in general, specifically what happens when menial labor no longer exists. At that point, you have to hope that the wealth generated by automation is redistributed back in the form of a basic income; otherwise, yes, we'll have a lot of people with no means to survive.
2. esafak No.43304189
They are not interpolating, which is what I think you meant to say, except under a definition of interpolation so loose that humans would meet it too.

What do you think of the latest thinking models, and what is your test of thinking?

3. Salgat No.43304504
An LLM is one very big nonlinear regression used to pick a token, with a clearly defined input, output, and corresponding weights. It's still far too straightforward and non-dynamic (the weights don't change, even during a single inference) compared to the human brain.
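
To make that concrete, here's a toy sketch of the "big frozen regression" view (NumPy; nothing like a real transformer, and the shapes are just illustrative): fixed weights in, one token out, and nothing in the parameters changes while it runs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "frozen" parameters: an embedding table and an output head.
    # Real LLMs have billions of weights, but the principle is the same:
    # the weights are fixed at inference time.
    VOCAB, DIM = 100, 16
    embed = rng.normal(size=(VOCAB, DIM))
    head = rng.normal(size=(DIM, VOCAB))

    def next_token(context_ids):
        # One forward pass: clearly defined input (token ids) and
        # output (a token id); the weights never change while it runs.
        h = np.tanh(embed[context_ids].mean(axis=0))  # crude nonlinear summary
        logits = h @ head
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                          # softmax over the vocabulary
        return int(probs.argmax())                    # greedy pick

    print(next_token([3, 14, 15]))  # same input -> same output, every time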

As far as the latest "thinking" techniques go, it's all about providing the correct input to get the desired output. If you look at the training data (the internet), the hardest and most ambiguous problems don't have a simple question-and-answer form; they involve a lot of back-and-forth before arriving at the answer, so you need to simulate that same back-and-forth to arrive at the desired answer. Unfortunately, model architecture is still too simple to do this implicitly within the model itself, at least reliably.
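
From the outside, simulating that back-and-forth looks roughly like the sketch below. Note that generate() is a hypothetical stand-in for whatever completion API you use, not a real library call.

    # Sketch of simulating the "back-and-forth": loop, feeding each
    # partial thought back in, instead of asking for the answer in one shot.
    def generate(transcript: str) -> str:
        raise NotImplementedError("stand-in: wrap your LLM completion call here")

    def solve_with_scratchpad(question: str, max_steps: int = 8) -> str:
        transcript = f"Question: {question}\nLet's think step by step.\n"
        for _ in range(max_steps):
            step = generate(transcript)   # model continues its own reasoning
            transcript += step + "\n"
            if "Final answer:" in step:   # model signals it has converged
                break
        return transcript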

4. throwaway85995 No.43304701
Automation is a good thing? It definitely made the rich richer, but I'm not sure it made the average person happier. Depression rates are at an all-time high, after all.
5. esafak No.43305036
Learning and thinking are separate things. Today's models think without learning -- they are frozen in time -- but this is a temporary state born of the cost of training. I actually like it this way, because we don't yet have impenetrable guardrails on these things.

> If you look at the training data (the internet), the hardest and most ambiguous problems don't have a simple question-and-answer form; they involve a lot of back-and-forth before arriving at the answer, so you need to simulate that same back-and-forth to arrive at the desired answer. Unfortunately, model architecture is still too simple to do this implicitly within the model itself, at least reliably.

Today's thinking models iterate (with function calls and internet queries) and even backtrack. They are not as reliable as humans, but they are demonstrating the hallmarks of thinking, I'd say.
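
A hedged sketch of what that iterate-and-backtrack loop looks like; generate() and run_tool() are hypothetical stand-ins for a thinking model and its tools, not any real API.

    # Hypothetical agent loop: the model can request a tool call,
    # abandon a dead end, or commit to a final answer.
    def generate(history: list[str]) -> str:
        raise NotImplementedError("stand-in for a thinking-model completion")

    def run_tool(request: str) -> str:
        raise NotImplementedError("stand-in for a function call or web query")

    def think(question: str, max_iters: int = 10) -> str:
        history = [f"Question: {question}"]
        for _ in range(max_iters):
            move = generate(history)
            if move.startswith("TOOL:"):          # iterate: ask for outside help
                history.append(run_tool(move[len("TOOL:"):]))
            elif move == "BACKTRACK":             # backtrack: drop the dead end
                history = history[:1]
            else:
                return move                       # final answer
        return "no answer within budget"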