
Interview with gwern

(www.dwarkeshpatel.com)
308 points by synthmeat | 1 comment
keiferski ◴[] No.42135432[source]
> By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine whether AI-generated media succeeds in replacing human-made media, or whether it will be successful at all. It is all very theoretical and, to me, shows a fundamental flaw in this style of "sit in a room reading papers/books and make supposedly rational conclusions about the future of the world."

A good example is: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when it was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so, as for the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc., I am doubtful. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.
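
To make "verifying humanity" concrete, here is a minimal sketch of one way an attestation scheme could work, assuming a hypothetical personhood-credential issuer; the issuer, key handling, and claim format are all invented for illustration, and a real system would use asymmetric signatures and revocation rather than a shared HMAC key:

    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical secret held by a personhood-credential issuer.
    ISSUER_KEY = b"issuer-secret-key"

    def sign_attestation(claims: dict) -> str:
        """Issue a token asserting 'this account belongs to a verified human'."""
        payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{tag}"

    def verify_attestation(token: str):
        """Recompute the issuer's tag; return the claims if genuine, else None."""
        payload, tag = token.rsplit(".", 1)
        expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return None
        return json.loads(base64.urlsafe_b64decode(payload))

    token = sign_attestation({"subject": "alice", "human": True})
    print(verify_attestation(token))  # {'subject': 'alice', 'human': True}

A platform that trusted such an issuer could then rank or filter its attention economy by whether content carries a valid attestation.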

replies(6): >>42135577 #>>42135773 #>>42135911 #>>42137616 #>>42140517 #>>42142527 #
mapt ◴[] No.42135911[source]
With AI you need to think, long and hard, about the concept (borrowed from cryptography): "Today, the state of the art is the worst it will ever be."

Humanity is pinning its future on the thought that we will hit intractable information-theoretic limitations, which would impose diminishing returns on performance before a hard takeoff. But the idea that the currently demonstrated methods are high up on some sigmoid curve does not, at this point, seem credible. AI models are dramatically better this year than last year, were dramatically better last year than the year before, and will probably continue to get better for the next few years.
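
One way to see why "we're high up on a sigmoid" is hard to establish from the recent trend alone: early on, a logistic curve is nearly indistinguishable from a pure exponential with the same growth rate, and the two only separate once you are already close to the ceiling. A toy sketch, with entirely made-up numbers:

    import math

    r, K = 1.0, 1000.0  # growth rate, hypothetical ceiling (carrying capacity)

    def exponential(t):
        return math.exp(r * t)

    def logistic(t):
        # Starts at 1, saturates at K.
        return K / (1 + (K - 1) * math.exp(-r * t))

    for t in range(8):
        e, s = exponential(t), logistic(t)
        print(f"t={t}  exp={e:9.1f}  logistic={s:9.1f}  ratio={s/e:.3f}")

The ratio stays near 1.0 for most of the run and only collapses near the ceiling, so a few years of "dramatically better" data points are consistent with both stories.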

That's sufficient to dramatically change a lot of social & economic processes, for better and for worse.

replies(3): >>42136135 #>>42136832 #>>42141458 #
cle ◴[] No.42136135[source]
There's a good chance you're right, but I think there's also a chance that things could get worse for a while (with some hand-wavy definition of "a while").

Currently the state of the art is propped up by speculative investments. If those speculations turn out to be wrong enough, or if social/economic changes force the capital to be allocated somewhere else, then there could be a significant period of time where access to it goes away for most of us.

We can already see small examples of this from the major model providers. They launch a mind-blowing model, get great benchmarks and press, and then either throttle access or diminish quality to control costs/resources (Claude 3.5 Sonnet, for example, pretty quickly shifted to short, terse responses). Access to SOTA models is very resource-constrained, and there are a lot of scenarios I can imagine where that could get worse, not better.

Even "Today, the state of the art in is the worst it will ever be" in cryptography isn't always true, like post-spectre/meltdown. You could argue that security improved but perf definitely did not.

replies(1): >>42143041 #
1. ◴[] No.42143041[source]