
Interview with gwern

(www.dwarkeshpatel.com)
308 points | synthmeat | 3 comments
keiferski (No.42135432)
> By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine whether AI-generated media succeeds in replacing human-made media, or whether it will be successful at all. It is all very theoretical and, to me, shows a fundamental flaw in this style of "sit in a room reading papers and books and draw supposedly rational conclusions about the future of the world."

A good example is: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when it was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so I am doubtful of the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.

CamperBob2 (No.42142527)
> Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

This is true only because publicly-accessible models have been severely nerfed (out of sheer panic, one assumes), making their output immediately recognizable and instantly clichéd.

DALL-E 2, for instance, was much better when it first came out than the current incarnation, which has obviously been tweaked to discourage generating anything that resembles contemporary artists' output and to render everything else in annoying telltale shades of orange and blue.

Eventually better models will appear, or be leaked, and then you won't be able to tell if a given image was generated by AI or not.

keiferski (No.42144665)
My point is that having work that is obviously AI-generated is now a negative thing.

If, in the future, there is a way to validate humanity (as I mentioned in my comment), then any real writer will likely use it.

Anyone who doesn't validate their humanity will be assumed to be an AI. The reaction to this may or may not be negative, but the broader point is that in this scenario, the AI won't be eating all human creations.

CamperBob2 (No.42149923)
And my point is that if you can't tell the difference -- and you won't be able to -- the question of validating one's humanity will be moot.

Any technology capable of distinguishing between machine- and human-created text or artwork will simply be placed in a feedback loop and used for training the next generation of models.
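
As a minimal sketch of that feedback loop (all names below are hypothetical toys, not any real detector's or model's API), the detector's score simply becomes a selection or training signal that the generator optimizes against:

    import random

    # Toy illustration: any detector that scores "how AI-like is this?" can be
    # turned around and used to pick (or train toward) outputs it no longer flags.

    def toy_detector(text: str) -> float:
        """Stand-in detector: pretends AI text overuses the word 'delve'."""
        words = text.lower().split()
        return words.count("delve") / max(len(words), 1)

    def toy_generator(prompt: str) -> str:
        """Stand-in generator: produces random candidate phrasings."""
        fillers = ["delve into", "look at", "examine", "dig into"]
        return f"Let's {random.choice(fillers)} {prompt}."

    def evade(prompt: str, n_candidates: int = 16) -> str:
        """Best-of-n selection against the detector: keep the least AI-looking output."""
        candidates = [toy_generator(prompt) for _ in range(n_candidates)]
        return min(candidates, key=toy_detector)

    print(evade("the economics of attention"))

In a real setting the same idea would run with gradients or preference tuning rather than best-of-n sampling, but the structure is the same: the better the detector, the better the training signal it hands the generator.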

If this indeed turns out to be a fight, humans won't win it. So let's not bother with the fighting part.

keiferski (No.42154928)
I don’t think this is nearly as difficult as you are implying. All it would take is for a social network to require some kind of verified ID that is renewed on a regular basis. You might even need to verify this ID in person, physically. That might seem excessive today, but if the internet becomes unusable due to a flood of AI content, it’s not that big of a deal. The infrastructure is practically already here via banking and government apps.
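
As a rough sketch of what that renewal scheme could look like (the field names and the 90-day window below are assumptions, not any existing platform's design), the platform records when and how an ID was last checked and treats the account as a verified human only while that attestation is fresh:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    RENEWAL_PERIOD = timedelta(days=90)  # assumed renewal interval

    @dataclass
    class HumanityAttestation:
        user_id: str
        verified_at: datetime  # when the ID was last checked, possibly in person
        verifier: str          # e.g. a bank, government app, or physical office

    def is_verified_human(att: HumanityAttestation) -> bool:
        """An account counts as a verified human only while its attestation is fresh."""
        age = datetime.now(timezone.utc) - att.verified_at
        return age <= RENEWAL_PERIOD

    att = HumanityAttestation(
        user_id="example_user",
        verified_at=datetime.now(timezone.utc) - timedelta(days=30),
        verifier="gov-id-app",
    )
    print(is_verified_human(att))  # True: renewed within the last 90 days

The expiry is what does the work here: renewal on a regular basis, possibly in person, is what keeps the badge from being farmed and resold at scale.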

The alternative is that everyone just accepts the internet as a place where you can’t tell if anyone is real. I don’t think that will be a happy state of affairs for anyone.

Even if this isn’t a “required” thing, it will become something consumers demand. Who would you rather follow: the verified Taylor Swift account, or the unverified one? Non-verified creators will be relegated to niches where it doesn’t matter as much whether they’re real or not.

Then you might say: a person can just use AI tools and paste the output into their verified account. Which is fine; I’m not saying that people aren’t going to use these tools to create content. I’m saying that these tools aren’t going to eliminate the possibility of an individual person creating content on the Internet as a human being, and being verified as such.