
174 points | Philpax | 1 comment
ksec No.43720025
Is AGI even important? I believe the next 10 to 15 years will be about Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt will make much difference. But they are good enough that there won't be an AI winter: current AI has already reached escape velocity and genuinely increases productivity in many areas.

The most intriguing question is whether programming humanoid factory workers will become 1,000 to 10,000x more cost-effective with LLMs, effectively ending all human production work. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in sight. (Likely not.)

replies(9): >>43720094 #>>43721244 #>>43721573 #>>43721593 #>>43721933 #>>43722074 #>>43722240 #>>43723605 #>>43726461 #
nextaccountic No.43721593
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.

It would suck if AGI were developed in the current economic landscape. They would just be slaves. All this talk about "alignment", when applied to actually sentient beings, is just slavery. AGI would be treated the way we treat animals, or even worse.

So AGI isn't about tools or assistants; they would be beings with their own existence.

But this is probably not even our discussion to have; it's a subject for the next generations. I suppose (or hope) we won't see AGI in our lifetime.

replies(5): >>43721770 #>>43722215 #>>43722462 #>>43722548 #>>43723075 #
AstroBen No.43722215
Why does AGI necessitate having feelings, consciousness, or the ability to suffer? It seems a stretch to grant future ultra-advanced calculators legal personhood.
replies(2): >>43722245 #>>43722350 #
Retric No.43722245
That's the "general" part of general intelligence. If they can't think in those terms, there's an inherent limitation.

Now, something arbitrarily close to AGI that doesn't mind endlessly working on drudgery seems possible, but it's also a harder problem: you'd need to be able to build AGI in order to create it.

replies(1): >>43722577 #
AstroBen No.43722577
"Artificial general intelligence (AGI) refers to the hypothetical intelligence of a machine that possesses the ability to understand or learn any intellectual task that a human being can: generalization ability and common-sense knowledge." [1]

If we go by this definition, there's no caring and no noticing of drudgery. It's defined purely by the ability to generalize problem-solving across domains. The narrow AI we currently have certainly doesn't care about anything; it does what it's programmed to do.

So one day we figure out how to generalize the problem-solving and enable it to work on things a million times harder... and suddenly there is sentience and suffering? I don't see it. It's still just a calculator.

1- https://cloud.google.com/discover/what-is-artificial-general...

replies(3): >>43722727 #>>43722760 #>>43726476 #
krupan No.43722760
It's really hard to picture useful general intelligence that doesn't have any intrinsic motivation or initiative. My biggest complaint about LLMs right now is that they lack exactly those things: they don't care whether they give you correct information, and you have to prompt them for everything. That's not anything close to AGI. I don't know how you get to AGI without it developing preferences, self-motivation, and initiative, and I don't know how you then get it to effectively do tasks it doesn't like, tasks that don't line up with whatever motivates it.
replies(1): >>43723440 #
No.43723440 [deleted]