
170 points by bookofjoe | 4 comments
kogus ◴[] No.43644640[source]
I think we need to consider what the end goal of technology is at a very broad level.

Asimov says in this piece that there are things computers will be good at, and things humans will be good at. By embracing that complementary relationship, we can advance as a society and be free to do the things that only humans can do.

That is definitely how I wish things were going. But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be. Even now we are not far from giving a prompt such as "Write another volume of the Foundation series, in the style of Isaac Asimov" and getting back a complete novel that needs no editing, needs no review, and matches or exceeds the quality of the originals.

When that goal is achieved, what then are humans "for"? Humans need purpose, and we are going to be in a position where we don't serve any purpose. I am worried about what will become of us after we have made ourselves obsolete.

replies(12): >>43644692 #>>43644695 #>>43644736 #>>43644771 #>>43644824 #>>43644846 #>>43644847 #>>43644881 #>>43644933 #>>43645048 #>>43646501 #>>43647117 #
1. empath75 ◴[] No.43644692[source]
> But it's becoming clear that within a few more years, computers will be far better at absolutely everything than human beings could ever be.

Comparative advantage. Even if that's true, AI can't possibly do _everything_: its capacity is finite, so it still pays to hand humans the work where its relative edge is smallest. China is better than most countries on earth at manufacturing pretty much anything, but that doesn't mean China is the only country in the world that does manufacturing.
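
To make the comparative-advantage point concrete, here is a back-of-envelope sketch in Python. The parties, tasks, and productivity numbers are all invented for illustration, not taken from the thread: even if "A" is absolutely better at both tasks, its hours are finite, so meeting a fixed demand for essays by handing that work to "B" frees A for the task where its edge is largest.

    # Minimal comparative-advantage sketch with invented numbers.
    # "A" is absolutely better at both tasks, but its hours are finite,
    # so total output rises when "B" covers the task where A's relative
    # edge is smallest.

    productivity = {                          # units produced per hour (hypothetical)
        "A": {"code": 10.0, "essays": 8.0},   # better at both tasks
        "B": {"code": 1.0,  "essays": 4.0},
    }
    HOURS = 8            # hours available to each party
    ESSAYS_NEEDED = 32   # fixed demand for essays

    def code_output(essay_producer: str) -> float:
        """Code produced by A when `essay_producer` covers the essay demand."""
        hours_left = {"A": HOURS, "B": HOURS}
        hours_left[essay_producer] -= ESSAYS_NEEDED / productivity[essay_producer]["essays"]
        return productivity["A"]["code"] * hours_left["A"]

    print(code_output("A"))  # 40.0 -- A covers essays itself, leaving less time for code
    print(code_output("B"))  # 80.0 -- B covers essays, A specializes in code

Same essay output either way, but twice the code when the absolutely-worse party takes the job it is relatively least bad at.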

replies(1): >>43644717 #
2. Philpax ◴[] No.43644717[source]
> AI can't possibly do _everything_

Why not? There's the human bias of wanting to consume things created by humans - that's fine, I'm not questioning that - but objectively, if we get to human-threshold AGI and continue scaling, there's no reason why it couldn't do everything, and better.

replies(1): >>43646259 #
3. seadan83 ◴[] No.43646259[source]
Why not? IMO you perhaps underestimate human complexity. There was a Guardian article where researchers created a map of one cubic millimeter of a mouse's brain: it contains 45 km worth of neurons and billions of synapses. IMO the AGI crowd is suffering from expert-beginner syndrome.
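For a rough sense of what that sample implies at human scale, here is a back-of-envelope calculation. This is my own sketch: the per-cubic-millimeter synapse count is taken at face value from the comment above, the ~1,200 cm³ human brain volume is a standard rough figure, and the GPT-3 parameter count is included only for a loose comparison (a synapse is not an equivalent unit to a parameter).

    # Back-of-envelope scaling, taking the comment's "billions of synapses
    # per cubic millimeter" at face value and a commonly cited human brain
    # volume of roughly 1,200 cm^3. Order-of-magnitude illustration only.

    synapses_per_mm3 = 1e9        # "billions of synapses" in the mapped sample
    human_brain_mm3 = 1.2e6       # ~1,200 cm^3 expressed in mm^3

    human_synapse_estimate = synapses_per_mm3 * human_brain_mm3
    print(f"~{human_synapse_estimate:.1e} synapses")                    # ~1.2e+15

    # GPT-3 has 1.75e11 parameters -- several orders of magnitude fewer,
    # though synapses and parameters are not directly comparable.
    gpt3_parameters = 1.75e11
    print(f"ratio ~ {human_synapse_estimate / gpt3_parameters:,.0f}x")  # ~6,857x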
replies(1): >>43646778 #
4. Philpax ◴[] No.43646778{3}[source]
Humans are one solution to the problem of intelligence, but they are not the only solution, nor are they the most efficient one. Today's LLMs are capable of outperforming your average human in a variety of fields (not all, obviously!), despite being of wholly different origin and complexity.