
Interview with gwern

(www.dwarkeshpatel.com)
308 points by synthmeat | 23 comments
1. keiferski No.42135432
> By writing, you are voting on the future of the Shoggoth using one of the few currencies it acknowledges: tokens it has to predict. If you aren't writing, you are abdicating the future or your role in it. If you think it's enough to just be a good citizen, to vote for your favorite politician, to pick up litter and recycle, the future doesn't care about you.

These AI predictions never, ever seem to factor in how actual humans will determine whether AI-generated media is successful in replacing human-made media, or whether it will even be successful at all. It is all very theoretical and, to me, shows a fundamental flaw in this style of "sit in a room reading papers/books and make supposedly rational conclusions about the future of the world."

A good example is: today, right now, it is a negative thing for your project to be known as AI-generated. The window of time when it was trendy and cool has largely passed. Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

And so for the prediction that everything gets swept up by an ultra-intelligent AI that subsequently replaces human-made creations, essays, writings, videos, etc., I am doubtful. Just because it will have the ability to do so doesn't mean that it will be done, or that anyone is going to care.

It seems vastly more likely to me that we'll end up with a solid way of verifying humanity – and thus an economy of attention still focused on real people – and a graveyard of AI-generated junk that no one interacts with at all.

replies(6): >>42135577 #>>42135773 #>>42135911 #>>42137616 #>>42140517 #>>42142527 #
2. MichaelZuo No.42135577
O1-preview is already indistinguishable from the 50th percentile HN commenter with the right prompts… no editing of the output needed at all.
3. notahacker No.42135773
I think the wider question mark over that sentence is that even if LLMs that ingest the internet and turn it into different words are the future of humanity, there's an awful lot of stuff in an AI corpus, and a comparatively small number of intensively researched blogs probably aren't going to shift the needle very much.

I mean, you'd probably get more of a vote using generative AI to spam stuff that aligns with your opinions, or moving to Kenya to do low-wage RLHF stuff...

replies(1): >>42148136 #
4. mapt No.42135911
With AI you need to think, long and hard, about the concept (borrowed from cryptography): "Today, the state of the art is the worst it will ever be."

Humanity is pinning its future on the thought that we will hit intractable information-theoretic limitations which provide some sort of diminishing returns on performance before a hard takeoff, but the idea that the currently demonstrated methods are high up on some sigmoid curve does not seem credible at this point (a quick sketch of why is below). AI models are dramatically higher performance this year than last year, and were dramatically better last year than the year before, and will probably continue to get better for the next few years.

That's sufficient to dramatically change a lot of social & economic processes, for better and for worse.
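
On the sigmoid point, here is a rough sketch (illustrative numbers only, not real benchmark data) of why recent progress alone can't settle it: the early portion of a logistic curve is nearly indistinguishable from an exponential, so both models below fit the same hypothetical trajectory about equally well.

    import numpy as np
    from scipy.optimize import curve_fit

    years = np.arange(8)                                # hypothetical timeline
    true_cap = 100 / (1 + np.exp(-(years - 10)))        # a logistic that saturates later
    data = true_cap * (1 + 0.05 * np.random.default_rng(1).normal(size=8))

    def exp_model(t, a, b):
        return a * np.exp(b * t)                        # pure exponential growth

    def logistic_model(t, L, k, t0):
        return L / (1 + np.exp(-k * (t - t0)))          # sigmoid with ceiling L

    for name, model, p0 in [("exponential", exp_model, (0.01, 1.0)),
                            ("logistic", logistic_model, (100, 1, 10))]:
        params, _ = curve_fit(model, years, data, p0=p0, maxfev=10000)
        resid = np.sum((model(years, *params) - data) ** 2)
        print(f"{name:11s} residual: {resid:.4f}")      # both fit the early data well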

replies(3): >>42136135 #>>42136832 #>>42141458 #
5. cle No.42136135
There's a good chance you're right, but I think there's also a chance that things could get worse for a while (with some hand-wavy definition of "a while").

Currently the state of the art is propped up by speculative investment; if those speculations turn out to be wrong enough, or social/economic changes force the capital to be allocated somewhere else, then there could be a significant period of time where access to it goes away for most of us.

We can already see small examples of this from the major model providers. They launch a mind-blowing model, get great benchmarks and press, and then either throttle access or diminish quality to control costs / resources (like Claude Sonnet 3.5 pretty quickly shifted to short, terse responses). Access to SOTA is very resource-constrained and there are a lot of scenarios I can imagine where that could get worse, not better.

Even "Today, the state of the art in is the worst it will ever be" in cryptography isn't always true, like post-spectre/meltdown. You could argue that security improved but perf definitely did not.

replies(1): >>42143041 #
6. keiferski No.42136832
I don’t disagree that it’ll change a lot of things in society.

But that isn’t the claim being made, which is that some sort of AI god is being constructed which will develop entirely without the influence of how real human beings actually act. This to me is basically just sci-fi, and it’s frankly kind of embarrassing that it’s taken so seriously.

replies(1): >>42146133 #
7. motohagiography No.42137616
I've been writing for decades with the belief that I was training a future AI, and I used to say that the Turing test wasn't mysterious at all, because it was a solved problem in economics: an indifference curve showing where people cared whether or not they were dealing with a person or a machine.
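
To make the indifference-curve framing concrete, here's a toy sketch with invented utility numbers (not from any actual study): the consumer is indifferent at exactly the price discount that offsets whatever premium they place on dealing with a human.

    def utility(quality, human, price, human_premium=5.0):
        # toy linear utility: value of quality minus price, plus a flat
        # bonus when the counterparty is a real person
        return quality - price + (human_premium if human else 0.0)

    human_offer = utility(quality=8.0, human=True, price=10.0)

    # lower the machine's price until the consumer stops caring
    machine_price = 10.0
    while utility(8.0, False, machine_price) < human_offer:
        machine_price -= 0.5

    print(f"indifferent once the machine is {10.0 - machine_price:.1f} cheaper")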

the argument against AI taking over is that we organize around symbols and narratives and are hypersensitive to waning or inferior memes; therefore AI would need to reinvent itself as "not-AI" every time so we don't learn to categorize it as slop.

I might agree, but if there were an analogy in music, some limited variations are dominant for decades, and there are precedents where you can generate dominant memes from slop that entrains millions of minds for entire lifetimes. Pop stars are slop from an industry machine that is indistinguishable from AI, and as evidence, current AI can simulate their entire catalogs of meaning. the TV Tropes website even identifies all the elements of cultural slop people should be immune to, but there are still millions of people walking around living out characters and narratives they received from pop-slop.

there will absolutely be a long tail of people whose ontology is shaped by AI slop, just like there is a long tail of people whose ontology is shaped by music, tv, and movies today. that's as close to being swept up in an AI simulation as anything, and perhaps a lot more subtle. or maybe we'll just shake it off.

replies(2): >>42138521 #>>42148702 #
8. keiferski No.42138521
That is a good point, and fundamentally I agree that these big budget pop star machines do function in a way analogous to an AI, and that we're arguing metaphysics here.

But even if a future AI becomes like this, that doesn't prevent independent writers (like gwern) from still having a unique, non-assimilated voice where they write original content. The arguments tend to be "AI will eat everything, therefore get your writing out there now" and not "this will be a big thing, but not everything."

9. No.42140517
10. wavemode No.42141458
> dramatically higher performance this year than last year, and were dramatically better last year than the year before

Yeah, but, better at _what_?

Cars are dramatically faster today than 100 years ago. But they still can't fly.

Similarly, LLMs performing better on synthetic benchmarks does not demonstrate that they will eventually become superintelligent beings that will replace humanity.

If you want to actually measure that, then these benchmarks need to start asking questions that demonstrate superintelligence: "Here is a corpus of all current research on nuclear physics, now engineer a hydrogen bomb." My guess is, we will not see much progress.

replies(1): >>42146139 #
11. CamperBob2 No.42142527
> Having an obviously AI-generated header image on your blog post was cool two years ago, but now it is passé and marks you as behind the trends.

This is true only because publicly-accessible models have been severely nerfed (out of sheer panic, one assumes), making their output immediately recognizable and instantly clichéd.

Dall-E 2, for instance, was much better when it first came out, compared to the current incarnation that has obviously been tweaked to discourage generating anything that resembles contemporary artists' output, and to render everything else in annoying telltale shades of orange and blue.

Eventually better models will appear, or be leaked, and then you won't be able to tell if a given image was generated by AI or not.

replies(1): >>42144665 #
12. No.42143041{3}
13. keiferski No.42144665
My point is that having work that is obviously AI-generated is now a negative thing.

If, in the future, there is a way to validate humanity (as I mentioned in my comment), then any real writers will likely use it.

Anyone that doesn't validate their humanity will be assumed to be an AI. The reaction to this may or may not be negative, but the broader point is that in this scenario, the AI won't be eating all human creations.

replies(1): >>42149923 #
14. mapt No.42146133{3}
It is enormously easier to code for an AI agent which pursues a task doggedly by iterating down its list of available tools, without any consistent human moral values, than to code in some kind of human morality manually. For that, you need to have solved philosophy. Mathematically, as a proof, and with a high degree of certainty that you are correct.

Good luck.

15. mapt No.42146139{3}
Humans could engineer a hydrogen bomb in the 1960s from publicly available research, and multiple AI models from unrelated firms could do it right this moment if you unlocked their censors.

Turning that into an agent which builds its own hydrogen bomb using what amount to seized resources, and does so covertly at a pace faster than human agencies can notice, is a different sort of thing. But the middleware for that sort of agent-directed project is rapidly developing as well, and there is a strong economic incentive for self-interested actors to pursue it. For a very brief moment in time, a huge amount of shareholder value will be created, and then suddenly destroyed.

A large-scale nuclear exchange is no longer the worst case scenario, in point of fact.

That's assuming we don't hit those information-theoretic barriers, and that we don't develop a host of new safeguards which nobody at the present time seems interested in developing.

replies(1): >>42147628 #
16. wavemode No.42147628{4}
> multiple AI models from unrelated firms could do it right this moment if you unlocked their censors

ok buddy

replies(1): >>42155546 #
17. jddj No.42148136
There is plenty of low-wage RLHF stuff in western/southern Europe for now as well.
18. mlsu No.42148702
This is an interesting comment. Let's extend it further.

The pop industry is a:

- machine

- which takes authentic human meaning

- and produces essentially a stochastic echo of it ("slop")

- in an optimization algorithm

- to predict the next most profitable song (the song that is most "likely")

So, this sounds an awful lot like something else that's very in vogue right now. Only it was invented in 1950 or 1960, not in 2017.
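
A minimal sketch of that 1950s-era machinery (the corpus here is made up for illustration): a Shannon-style bigram model already does "predict the most likely next thing" from past data, no 2017 transformer required.

    from collections import Counter, defaultdict

    # toy "catalog" of song beats; invented data for illustration
    corpus = ["heartbreak", "chorus", "drop", "heartbreak", "chorus",
              "bridge", "heartbreak", "chorus", "drop"]

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1                  # count observed transitions

    def most_likely_next(token):
        # the statistically safest follow-up, i.e. the machine's pick
        return bigrams[token].most_common(1)[0][0]

    print(most_likely_next("heartbreak"))        # -> chorus
    print(most_likely_next("chorus"))            # -> drop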

replies(1): >>42149558 #
19. pontsprit No.42149558{3}
yes, but the pop industry is just one facet of music as a transcendental cultural/spiritual/secular meaning-making activity as a whole. it's a place where both Bob Dylan and Taylor Swift can exist on the same plane, and the implications of that cannot ever be reduced to data. "if the only tool you have..." not to undermine the implications of complex automation as symptomatic of the current epoch, but I would argue that art offers a materially grounded experience irreducible to data. that doesn't mean the affects of art won't be harnessed by corporate interests in ever more accentuated ways; it seems like an obvious and natural development. on one level it's simply the cyclical production of crappification occurring in tighter spirals of output/dynamism, weeee!
replies(1): >>42150862 #
20. CamperBob2 No.42149923{3}
And my point is that if you can't tell the difference -- and you won't be able to -- the question of validating one's humanity will be moot.

Any technology capable of distinguishing between machine- and human-created text or artwork will simply be placed in a feedback loop and used for training the next generation of models.
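
A toy numpy sketch of that feedback loop (purely illustrative, not any lab's actual training setup): the detector's judgment becomes the training signal, and the generator drifts until the detector is reduced to coin-flipping.

    import numpy as np

    rng = np.random.default_rng(0)
    human = rng.normal(0.0, 1.0, 1000)      # stand-in for human-made work
    gen_mean = 3.0                          # generator starts obviously "off"

    for step in range(200):
        fake = rng.normal(gen_mean, 1.0, 1000)
        # detector: the midpoint threshold that best separates the samples
        threshold = (human.mean() + fake.mean()) / 2.0
        detect_rate = np.mean(fake > threshold)
        # feedback: shift the generator toward whatever gets flagged less
        gen_mean -= 0.1 * (fake.mean() - human.mean())
        if detect_rate <= 0.51:             # detector can no longer tell
            break

    print(f"converged after {step} steps; detect rate {detect_rate:.2f}")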

If this indeed turns out to be a fight, humans won't win it. So let's not bother with the fighting part.

replies(1): >>42154928 #
21. mlsu No.42150862{4}
Yeah, I think we agree here. I thought your comment -- what is "slop" vs. what is "not slop" -- got at what is "AI" and what is "not AI" in an interesting way.
22. keiferski No.42154928{4}
I don’t think this is nearly as difficult as you are implying. All it would take is for a social network to require some kind of verified ID that is renewed on a regular basis. You might even need to verify this ID in person, physically. This might seem excessive today, but if the internet becomes unusable due to excessive AI content, it’s not that big of a deal. The infrastructure is practically already here via banking and government apps.
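
A minimal sketch of what that could look like mechanically, assuming a hypothetical scheme (the issuer, field names, and 90-day renewal period are all invented here; real systems like BankID or eIDAS differ): a bank or government signs an identity attestation with an expiry date, and the platform checks the signature and the freshness.

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    issuer_key = Ed25519PrivateKey.generate()      # held by the issuing authority
    attestation = json.dumps({"subject": "user123",
                              "verified_human": True,
                              "expires": time.time() + 90 * 86400}).encode()
    signature = issuer_key.sign(attestation)       # issued at in-person renewal

    def platform_accepts(attestation, signature, issuer_public_key):
        try:
            issuer_public_key.verify(signature, attestation)   # forged -> exception
        except InvalidSignature:
            return False
        claims = json.loads(attestation)
        # "renewed on a regular basis": expired attestations are rejected
        return claims["verified_human"] and claims["expires"] > time.time()

    print(platform_accepts(attestation, signature, issuer_key.public_key()))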

The alternative is that everyone just accepts the internet as a place where you can’t tell if anyone is real. I don’t think that will be a happy state of affairs for anyone.

Even if this isn’t a “required” thing, it will become a desired one by consumers. Who would you rather follow - the verified Taylor Swift account, or the unverified one? Non-verified creators will be relegated to niches where it doesn’t matter as much if they’re real or not.

Then you might say - but a person can just use AI tools and paste them into their verified account. Which is fine - I’m not saying that people aren’t going to use these tools to create content. I’m saying that these tools aren’t going to eliminate the possibility of an individual person creating content on the Internet as a human being, and being verified as such.

23. mapt No.42155546{5}
You believe I'm overestimating current AI. While my estimations are probably a bit higher than yours, mostly I think you're overestimating hydrogen bombs. They're not that complicated, and not that secret in 2024. These AI models have every unclassified scientific paper and every nonfiction book ever published on the subject at their disposal.

https://en.wikipedia.org/wiki/Thermonuclear_weapon

It's a mechanistic process featuring well-trodden, factual, verbose discourse: scientists reasoning about facts and presenting what they found. "Tell me a joke about elephant mating habits in the form of a rap song" is a dramatically more complex task of synthesis.