E.g.: What is the unit cost of serving a token? It is the cost of electricity plus the amortized cost of the GPU (GPUs would normally be capex, but given their fast depreciation rate, you can argue they should be treated as opex). Given this cost structure, every SOTA lab (Google, Anthropic, and OpenAI) is profitable on inference, with high margins of 50-60%.
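The electricity-plus-amortization framing can be sketched as a back-of-the-envelope calculation. Every number below is a hypothetical placeholder I'm making up for illustration (GPU price, power draw, throughput, token price), not a measured figure, so the resulting margin is only an example of the shape of the math, not a claim about any lab's actual economics:

```python
# Back-of-the-envelope per-token serving cost.
# ALL constants are hypothetical placeholders, not real measured figures.

GPU_COST_USD = 30_000           # hypothetical purchase price of one GPU
DEPRECIATION_YEARS = 3          # fast depreciation, per the opex argument above
POWER_KW = 1.0                  # hypothetical draw of GPU + its share of the host
ELECTRICITY_USD_PER_KWH = 0.10  # hypothetical industrial electricity rate
TOKENS_PER_SECOND = 500         # hypothetical aggregate throughput with batching

HOURS_PER_YEAR = 24 * 365

# Amortized GPU cost per hour (treating the GPU as opex, as argued above)
gpu_usd_per_hour = GPU_COST_USD / (DEPRECIATION_YEARS * HOURS_PER_YEAR)
power_usd_per_hour = POWER_KW * ELECTRICITY_USD_PER_KWH

tokens_per_hour = TOKENS_PER_SECOND * 3600
cost_per_million_tokens = (
    (gpu_usd_per_hour + power_usd_per_hour) / tokens_per_hour * 1_000_000
)

price_per_million_tokens = 2.00  # hypothetical list price per million tokens
margin = 1 - cost_per_million_tokens / price_per_million_tokens

print(f"cost per million tokens: ${cost_per_million_tokens:.2f}")
print(f"gross margin:            {margin:.0%}")
```

With these made-up inputs the margin lands in the rough range the comment cites, but the point is the structure: unit cost is dominated by amortization and power divided by throughput, so margin is extremely sensitive to the throughput and depreciation assumptions.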
With this margin and growth, the frontier labs could be profitable anytime they want. But they are sacrificing profitability for growth (as they should).
Where is Ed's analysis of this? Either he is disingenuous or clueless. Remember, people who voluntarily subscribe to Ed are coming to hear what they already believe.
If he is level-headed, show me an Ed article that is positive about AI
Why should those two things go together?
I happen to agree with the overall sentiment (that AI buildout is overextending the tech sector and the financial markets), but he is utterly fixated on the evils of AI and unable to admit either the current usefulness or the future potential of the technology. This does not make him look like an honest broker.
The rambling nature of his posts also makes it harder to argue against them properly, as he keeps repeating the same points over and over; some of them are decent, but there is certainly a Gish-gallop feel to the whole thing.
Not necessarily. That METR study was interesting in that participants reported that they were more productive, but the hard data disagreed. This is incredibly common when looking at humans; we're generally bad at knowing what hurts or helps us in this sphere.
And personally, I think LLMs are super useful, but I'm pretty sceptical about valuations and returns in this space over the short to medium term.
He definitely changed his mind on AI coding agents based on reader feedback. Ultimately, though, you need incredible productivity growth or massive layoffs to make the numbers work for the current spending, and right now I don't see large signs of either.
> I happen to agree with the overall sentiment (that AI buildout is overextending the tech sector and the financial markets), but he is utterly fixated on the evils of AI and unable to admit either the current usefulness or the future potential of the technology. This does not make him look like an honest broker.
I think this is probably because he feels like he's taking crazy pills when he hears what CEOs/leaders are saying about this. It's some kind of mind virus. Like, I was at a meetup a few months back where a senior data/code person was saying that nobody would write code in 5 years, which (if you've used the tools heavily) seems pretty absurd.
FWIW, I personally think they're correct on both bitcoin and Tesla, but apparently people disagree.