
Interview with gwern

(www.dwarkeshpatel.com)
308 points | synthmeat | 2 comments
keiferski ◴[] No.42135432[source]
By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine whether AI-generated media succeeds in replacing human-made media, or whether it will succeed at all. It is all very theoretical and, to me, shows a fundamental flaw in this style of "sit in a room reading papers/books and make supposedly rational conclusions about the future of the world."

A good example is: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when it was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so for the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc., I am doubtful. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.

replies(6): >>42135577 #>>42135773 #>>42135911 #>>42137616 #>>42140517 #>>42142527 #
mapt ◴[] No.42135911[source]
With AI you need to think long and hard about the concept (borrowed from cryptography): "Today, the state of the art in AI is the worst it will ever be."

Humanity is pinning its future on the thought that we will hit intractable information-theoretic limitations, providing some sort of diminishing returns on performance before a hard takeoff. But the idea that the currently demonstrated methods are high up on some sigmoid curve does not, at this point, seem credible. AI models are dramatically better this year than last year, were dramatically better last year than the year before, and will probably continue to improve for the next few years.

That's sufficient to dramatically change a lot of social & economic processes, for better and for worse.

replies(3): >>42136135 #>>42136832 #>>42141458 #
1. keiferski ◴[] No.42136832[source]
I don’t disagree that it’ll change a lot of things in society.

But that isn’t the claim being made, which is that some sort of AI god is being constructed which will develop entirely without the influence of how real human beings actually act. This to me is basically just sci-fi, and it’s frankly kind of embarrassing that it’s taken so seriously.

replies(1): >>42146133 #
2. mapt ◴[] No.42146133[source]
It is enormously easier to code an AI agent that pursues a task doggedly by iterating down its list of available tools, without any consistent human moral values, than to code in some kind of human morality manually. For that, you would need to have solved philosophy — mathematically, as a proof, and with a high degree of certainty that you are correct.

Good luck.