
388 points by reaperducer | 1 comment
jacquesm ◴[] No.45772081[source]
These kinds of deals were very much à la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy those companies' services and spend the money on advertising again. The end result was immense revenue without profit.
replies(6): >>45772090 #>>45772213 #>>45772293 #>>45772318 #>>45772433 #>>45774073 #
zemvpferreira ◴[] No.45772318[source]
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.

OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

replies(13): >>45772378 #>>45772392 #>>45772490 #>>45772554 #>>45772661 #>>45772731 #>>45772738 #>>45772759 #>>45773088 #>>45773089 #>>45773096 #>>45773105 #>>45774229 #
api ◴[] No.45772554[source]
The assumption is that they have a large moat.

If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.

This will be true if, as I believe, AI plateaus as we run out of training data. As that happens, CPU process improvements and increased competition in the AI chip/GPU space will make it progressively cheaper to train and run large models. Eventually the cost of building models equivalent in power to OpenAI's drops geometrically, to the point that many organizations can do it... maybe even, eventually, groups of individuals with crowdfunding.

OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster, cheaper AI chips, the cheaper it gets for someone else to train GPT-5+ competitors.

The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
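To make that switching cost concrete, here is a minimal sketch, assuming both the incumbent and the replacement expose an OpenAI-compatible chat-completions endpoint; the base URLs, model names, and environment variables below are placeholders, not anything from this thread. Swapping vendors is mostly a config change plus reusing the same prompt.

    # Sketch: swapping one hosted model for another behind the same interface.
    # Assumes both endpoints speak the OpenAI-compatible chat-completions API;
    # URLs, model names, and environment variables are illustrative placeholders.
    import os
    from openai import OpenAI

    PROVIDERS = {
        "incumbent": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
        # Any OpenAI-compatible host (a hosted competitor, a vLLM server with an
        # open model, etc.) slots in here without touching the call sites.
        "challenger": {
            "base_url": os.environ.get("CHALLENGER_BASE_URL", "http://localhost:8000/v1"),
            "model": os.environ.get("CHALLENGER_MODEL", "open-model"),
        },
    }

    SYSTEM_PROMPT = "You are a terse assistant. Answer in plain prose."  # same prompt either way

    def ask(provider: str, question: str) -> str:
        cfg = PROVIDERS[provider]
        client = OpenAI(base_url=cfg["base_url"], api_key=os.environ["LLM_API_KEY"])
        resp = client.chat.completions.create(
            model=cfg["model"],
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    # Switching providers is a one-word change at the call site:
    # ask("incumbent", "Summarize this contract clause.")
    # ask("challenger", "Summarize this contract clause.")

If the challenger is similar in power, the main migration work is re-checking prompts, which is exactly why the network-effect moat looks thin here.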

replies(2): >>45772632 #>>45772671 #
simgt ◴[] No.45772632[source]
> This will be true if, as I believe, AI plateaus as we run out of training data.

Why would they run out of training data? They needed external data to bootstrap; now it's going directly to them through ChatGPT or Codex.

replies(1): >>45772769 #
delis-thumbs-7e ◴[] No.45772769[source]
As much as ChatGPT says I'm basically a genius for asking it for good vegan cake recipes, I don't think that's providing it any data it doesn't already have, or making it any better. Also, at this point the massive increases in data and computing power seem to bring ever-decreasing improvements (and sometimes outright decline), so it seems we are simply hitting a limit on what this kind of architecture can achieve no matter what you throw at it.
replies(1): >>45773117 #
DenisM ◴[] No.45773117[source]
ChatGPT chat logs contain a massive amount of data teased out of people's brains. But much of it is lore, biases, misconceptions, and memes. There are nuggets of gold in there, but it's not at all clear whether there's a good way to extract them, and until then chat logs will make things worse, not better.

I'm thinking they'll eventually figure out who the sources of good data are for a given domain, maybe.

Even if that is solved, models are terrible at the long tail.
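If someone did try to extract those nuggets, one hedged guess at the mechanics, purely to make the "who is a good source for a given domain" idea concrete: score transcripts per contributor and per domain, and keep only the slice from contributors who are reliably good there. The field names, scoring signal, and threshold below are all hypothetical.

    # Hypothetical sketch: keep chat logs only from contributors who score well in a domain.
    # The quality signal, field names, and threshold are invented for illustration.
    from collections import defaultdict
    from statistics import mean

    def transcript_quality(t: dict) -> float:
        """Stand-in score in [0, 1], e.g. agreement with a verifier model or
        with curated reference answers for that domain."""
        return t["verifier_agreement"]  # assumed precomputed field

    def select_training_logs(transcripts: list[dict], min_avg: float = 0.8) -> list[dict]:
        # Average quality per (user, domain) pair.
        per_source = defaultdict(list)
        for t in transcripts:
            per_source[(t["user_id"], t["domain"])].append(transcript_quality(t))
        trusted = {k for k, scores in per_source.items() if mean(scores) >= min_avg}
        # Keep only logs from sources that are reliably good in that domain.
        return [t for t in transcripts if (t["user_id"], t["domain"]) in trusted]

Even a filter like that only addresses the "nuggets of gold" problem, not the long-tail weakness.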

replies(3): >>45773766 #>>45777609 #>>45781499 #
api ◴[] No.45773766[source]
When I say models will plateau I don't mean there will be no progress. I mean progress will slow down, since we'll be scraping the bottom of the barrel for training data. We might never quite run out, but once we've sampled every novel, website, scientific paper, chat log, broadcast transcript, and so on, we'll have exhausted the rich sources of easy gains.
replies(1): >>45774292 #
DenisM ◴[] No.45774292[source]
Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have run out of human knowledge.

Or not - there's still knowledge in people's heads that isn't bleeding into AI chat.

One implication here is that chats will morph to elicit more conversation, to keep mining that mine. That may lead to the need to enrage users to keep up engagement.