
Interview with gwern

(www.dwarkeshpatel.com)
308 points by synthmeat
keiferski ◴[] No.42135432[source]
By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine what AI-generated media is successful in replacing human-ones, or if it will even be successful at all. It is all very theoretical and to me, shows a fundamental flaw in this style of "sit in a room reading papers/books and make supposedly rational conclusions about the future of the world."

A good example: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when it was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so for the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc., I am doubtful. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.

motohagiography ◴[] No.42137616[source]
I've been writing for decades with the belief that I was training a future AI, and I used to say that the Turing test wasn't mysterious at all, because it was a solved problem in economics: an indifference curve showing where people cared whether or not they were dealing with a person or a machine.

the argument against AI taking over is that we organize around symbols and narratives and are hypersensitive to waning or inferior memes, therefore AI would need to reinvent itself as "not-AI" every time so we don't learn to categorize it as slop.

I might agree, but if there's an analogy in music, some limited variations have been dominant for decades, and there are precedents where dominant memes generated from slop have entrained millions of minds for entire lifetimes. Pop stars are slop from an industry machine that is indistinguishable from AI, and as evidence, current AI can simulate their entire catalogs of meaning. the TV Tropes website even identifies all the elements of cultural slop people should be immune to, but there are still millions of people walking around living out characters and narratives they received from pop-slop.

there will absolutely be a long tail of people whose ontology is shaped by AI slop, just like there is a long tail of people whose ontology is shaped by music, tv, and movies today. that's as close to being swept up in an AI simulation as anything, and perhaps a lot more subtle. or maybe we'll just shake it off.

keiferski ◴[] No.42138521[source]
That is a good point, and fundamentally I agree that these big budget pop star machines do function in a way analogous to an AI, and that we're arguing metaphysics here.

But even if a future AI becomes like this, that doesn't prevent independent writers (like gwern) from still having a unique, non-assimilated voice where they write original content. The arguments tend to be "AI will eat everything, therefore get your writing out there now" and not "this will be a big thing, but not everything."