
Interview with gwern

(www.dwarkeshpatel.com)
308 points | synthmeat | 1 comment
keiferski No.42135432
> By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine which AI-generated media succeeds in replacing human-made media, or whether it will succeed at all. It is all very theoretical and, to me, shows a fundamental flaw in this style of "sit in a room reading papers/books and make supposedly rational conclusions about the future of the world."

A good example: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when that was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so I am doubtful of the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.

replies(6): >>42135577 #>>42135773 #>>42135911 #>>42137616 #>>42140517 #>>42142527 #
mapt No.42135911
With AI you need to think long and hard about a concept borrowed from cryptography: "Today, the state of the art is the worst it will ever be."

Humanity is pinning its future on the thought that we will hit intractable information-theoretic limitations which impose diminishing returns on performance before a hard takeoff, but the idea that the currently demonstrated methods are high up on some sigmoid curve does not seem credible at this point. AI models are dramatically better this year than last year, were dramatically better last year than the year before, and will probably continue to get better for the next few years.

That's sufficient to dramatically change a lot of social & economic processes, for better and for worse.

replies(3): >>42136135 #>>42136832 #>>42141458 #
wavemode No.42141458
> dramatically better this year than last year, and were dramatically better last year than the year before

Yeah, but, better at _what_?

Cars are dramatically faster today than 100 years ago. But they still can't fly.

Similarly, LLMs performing better on synthetic benchmarks does not demonstrate that they will eventually become superintelligent beings that will replace humanity.

If you want to actually measure that, then these benchmarks need to start asking questions that demonstrate superintelligence: "Here is a corpus of all current research on nuclear physics, now engineer a hydrogen bomb." My guess is, we will not see much progress.

replies(1): >>42146139 #
mapt No.42146139
Humans could engineer a hydrogen bomb in the 1960s from publicly available research, and multiple AI models from unrelated firms could do it right this moment if you unlocked their censors.

Turning that into an agent which builds its own hydrogen bomb from what amount to seized resources, covertly and at a pace faster than human agencies can notice, is a different sort of thing. But the middleware for that sort of agent-directed project is rapidly developing as well, and there is strong economic incentive for self-interested actors to pursue it. For a very brief moment in time, a huge amount of shareholder value will be created, and then suddenly destroyed.

A large-scale nuclear exchange is no longer the worst-case scenario, in point of fact.

That is assuming we don't hit those information-theoretic barriers, and that we don't develop a host of new safeguards which nobody at present seems interested in developing.

replies(1): >>42147628 #
wavemode No.42147628
> multiple AI models from unrelated firms could do it right this moment if you unlocked their censors

ok buddy

replies(1): >>42155546 #
mapt No.42155546
You believe I'm overestimating current AI. While my estimates are probably a bit higher than yours, mostly I think you're overestimating hydrogen bombs. They're not that complicated, and not that secret in 2024. These AI models have every unclassified scientific paper and every nonfiction book ever published on the subject at their disposal.

https://en.wikipedia.org/wiki/Thermonuclear_weapon

It's a mechanistic process featuring well-trodden, factual, verbose discourse: scientists reasoning about facts and presenting what they found. "Tell me a joke about elephant mating habits in the form of a rap song" is a dramatically more complex task of synthesis.