
63 points cjbarber | 2 comments
cjbarber ◴[] No.45305786[source]
This is written by Kevin Bryan from University of Toronto. He has good tweets on the economics of AI, too (https://x.com/Afinetheorem).

My recap of the PDF is something like:

1. There are good books about the near-term economics of AI.

2. There aren't many good books about "what if the AI researchers are right" (e.g. rapid scientific acceleration) and the economic and political impacts of those cases.

3. The Second Machine Age: Digital progress boosts the bounty but widens the spread, i.e. more relative inequality. Wrong on speed, though (e.g. self-driving tech vs the pace of regulatory change).

4. Prediction Machines: AI = cheaper prediction, which raises the value of human judgement, because judgement is a complement to prediction (see the toy sketch after this list).

5. Power and Prediction: Value comes when the whole system is redesigned, not just from point fixes. Electrification's benefits arrived when factories reorganized, not just when they added electricity to existing layouts. Diffusion is slow because things need to be rebuilt.

6. The Data Economy: Data is a nonrivalrous asset. As models get stronger and cheaper, unique private data grows in relative value.

7. The Skill Code: Apprenticeship pathways may disappear. E.g. surgical robots prevent juniors from getting practice reps.

8. Co-Intelligence: Diffusion is slowed by the jagged frontier (AI is spiky). Superhuman at one thing, subhuman at another.

9. Situational Awareness: By ~2027, $1T/yr in AI capex, big power demand, and hundreds of millions of automated AI researchers compressing a decade of algorithmic progress into less than a year. (The author doesn't say he agrees, but argues economists should analyze what happens if it plays out.)

10. Questions: If the AGI-pilled AI researchers are right, what will the economic and policy implications be?
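
To make the complement logic in (4) concrete, here's a toy numerical sketch (my own illustration, not from the book): model decision value as a function of prediction and judgement inputs with positive cross-effects, and watch the marginal value of judgement rise as cheaper AI lets you use more prediction.

    # Toy sketch of "prediction and judgement are complements" (illustrative
    # Cobb-Douglas form chosen by me, not taken from Prediction Machines).
    def decision_value(prediction: float, judgement: float) -> float:
        return prediction ** 0.5 * judgement ** 0.5

    def marginal_value_of_judgement(prediction: float, judgement: float,
                                    eps: float = 1e-6) -> float:
        # Finite-difference estimate of dV/d(judgement).
        return (decision_value(prediction, judgement + eps)
                - decision_value(prediction, judgement)) / eps

    judgement = 1.0
    for prediction in (1.0, 4.0, 16.0):  # cheaper AI => more prediction consumed
        print(prediction, round(marginal_value_of_judgement(prediction, judgement), 3))
    # Prints: 1.0 0.5 / 4.0 1.0 / 16.0 2.0 -- more prediction raises the
    # marginal value of judgement, so judgement's value goes up, not down.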

replies(2): >>45306301 #>>45306889 #
catigula ◴[] No.45306889[source]
If AI researchers are wrong they're gonna have a lot of explaining to do.
replies(2): >>45306909 #>>45306999 #
rhetocj23 ◴[] No.45306999[source]
TBH it's far more likely they are wrong than right.

Investors are incredibly overzealous not to miss out on a repeat of what happened with certain stocks during the personal computing, web 2.0, and smartphone diffusions.

replies(1): >>45307011 #
catigula ◴[] No.45307011[source]
There's a certain anthropic quality to the idea that if we lived in a doomsday timeline we'd be unlikely to be here observing it.
replies(1): >>45311003 #
uoaei ◴[] No.45311003[source]
Humanist, maybe. The anthropic argument is tautological: nothing is a doomsday without there being someone for whom the scenario spells certain doom.
replies(1): >>45314063 #
catigula ◴[] No.45314063[source]
How is it tautological? Some form of it is the very basis of atheism.

Doomsday timelines have fewer observers. In all timelines where you are no longer an observer, i.e. all current doomsday timelines, your observation has ceased.
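
A minimal numerical sketch of that observer-selection point (my own formalization; the priors and observer counts below are made-up placeholders): weight each timeline's prior probability by its number of surviving observers and renormalize, and timelines with no observers drop out of the conditional entirely.

    # Observer-weighted ("anthropic") update -- toy numbers, purely illustrative.
    priors = {"doom": 0.5, "no_doom": 0.5}             # hypothetical prior over timelines
    observers = {"doom": 0, "no_doom": 8_000_000_000}  # doom timelines: ~no observers left

    weights = {t: priors[t] * observers[t] for t in priors}
    total = sum(weights.values())
    posterior = {t: w / total for t, w in weights.items()}
    print(posterior)  # {'doom': 0.0, 'no_doom': 1.0}: conditioning on "I am still
                      # observing" pushes all the weight onto non-doom timelines.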