Seems like OpenAI speedran through the Facebook phase and is out of ideas
If they can maintain runway until then.
I struggle to imagine a team of more than 10 people writing an iOS app with less than 700 files.
Still thinking about the endgame. It's not obvious to me if OpenAI/Anthropic will become competitors to coding startups like Cursor or continue to be model providers.
Most noobs, such as those who think 700 files is too many because they've only worked on apps they never published, might just cram everything into a single file.
However, there would be various files for components, functions, etc. Code that's single-responsibility and easy to test might mean there are lots of files. There might be upload queues, offline functionality, custom code to go beyond what the iOS/Android SDKs offer, and so on. DTOs, DAOs, various services, and the like (rough sketch below).
You probably (won't) get the gist but yeah.
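To make that concrete, here's a rough Swift sketch. The types are hypothetical, not from any real app; the point is just that splitting responsibilities this way multiplies files quickly, since each type (plus its tests) usually lives in its own file.

    import Foundation

    // DTO: mirrors the wire format of one payload.
    struct PhotoDTO: Codable {
        let id: String
        let remoteURL: URL
    }

    // DAO: owns persistence for one entity, nothing else.
    protocol PhotoDAO {
        func save(_ photo: PhotoDTO) throws
        func pending() throws -> [PhotoDTO]
    }

    // Service: one job, draining the offline upload queue when back online.
    final class UploadQueueService {
        private let dao: PhotoDAO
        init(dao: PhotoDAO) { self.dao = dao }

        func flush() throws {
            for photo in try dao.pending() {
                // Retry logic, background-task wiring, and the network client
                // would each be their own files (and their own tests) as well.
                print("uploading \(photo.remoteURL)")
            }
        }
    }

Three types and you're already at three source files plus three test files, before touching UI, navigation, or error handling.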
OpenAI has been trying to get into the space with their multiple product offerings all called “Codex”, but execution has been lacking.
So this is very much a play at becoming more competitive in the space.
Right now, many small startups are essentially just thin wrappers around ChatGPT. Once it becomes clear which ideas and solutions gain real traction, providers like OpenAI/Anthropic can simply roll out those features natively, removing any need for a third party.
In a sense, that's a lot of what happened in the mobile market. For example, there's no need for a QR scanner or document scanner app anymore once your phone offers it natively.
Quick summary: I believe consumer AI experiences will feature ads because the profit opportunity is too large and company valuations depend on it. The hiring of Fidji Simo (ads at Facebook) at OpenAI and, just this week, Vijaye Raji/Statsig also point that way.
* dick pills and boob surgery, also government announcements for a country I don't live in, also offers to help renounce a citizenship I never had in the first place
Acquiring them gives OAI:
- a ready-made team that understands the IDE plumbing and the developer UX at a deep level
- a head start in a platform ecosystem that's hard to crack
- a team that knows how to push the models into productized, developer-ready experiences
So it's not the prompts. It's engineering the scaffolding and UX around the model so it feels like magic to the user. That's what OpenAI is buying.
Also possible that Alex solved a lot of UX and integration challenges that could translate to other Apple contexts (productivity apps, design tools, even consumer-facing AI on macOS/iOS).
Without having used Alex myself (I don't do iOS or macOS development), I would guess that all the retrieval, context slicing, editor integration, yada yada, that they've built isn't necessarily unique to coding. The same scaffolding could support things like AI-driven writing, design, or general productivity in an Apple-native way.
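For what it's worth, here's a naive guess at what that context slicing might look like; none of this is from Alex, and the term-overlap scoring and the 4-characters-per-token estimate are made up for illustration:

    import Foundation

    struct Chunk {
        let source: String   // file path, note title, design layer name, ...
        let text: String
    }

    // Rank candidate chunks against the user's query and pack the best ones
    // into a fixed token budget before calling the model.
    func slice(_ chunks: [Chunk], query: String, tokenBudget: Int) -> [Chunk] {
        let queryTerms = Set(query.lowercased().split(separator: " "))

        // Crude relevance score: term overlap. A real system would likely use embeddings.
        func score(_ chunk: Chunk) -> Int {
            queryTerms.intersection(chunk.text.lowercased().split(separator: " ")).count
        }

        // Greedily pack the highest-scoring chunks, estimating ~4 characters per token.
        var used = 0
        var picked: [Chunk] = []
        for chunk in chunks.sorted(by: { score($0) > score($1) }) {
            let cost = chunk.text.count / 4
            if used + cost > tokenBudget { continue }
            picked.append(chunk)
            used += cost
        }
        return picked
    }

Nothing in there cares whether a chunk came from a source file, a note, or a design document, which is why the same scaffolding could plausibly carry over.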
Google "our incredible journey".
https://www.google.com/search?q=our+incredible+journey
https://www.gyford.com/phil/writing/2013/02/27/our-incredibl...
One thing I’ve been doing is querying and storing results. For example, “what are the best books on X topic” for every topic I can think of that I might want to read about in the future.
I’ve found the results to be amazing if you give a sufficiently detailed prompt. I have enough reading to see me through to exit.
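If anyone wants to script that, here's a minimal Swift sketch assuming the standard OpenAI chat completions endpoint; the model name, topics, and output file are placeholders, not recommendations:

    import Foundation

    struct ChatRequest: Codable {
        struct Message: Codable { let role: String; let content: String }
        let model: String
        let messages: [Message]
    }

    struct ChatResponse: Codable {
        struct Choice: Codable {
            struct Message: Codable { let content: String }
            let message: Message
        }
        let choices: [Choice]
    }

    @main
    struct ReadingList {
        static func main() async throws {
            let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? ""
            let topics = ["distributed systems", "Roman history", "typography"]  // whatever you might want to read about later
            var archive = ""

            for topic in topics {
                var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
                request.httpMethod = "POST"
                request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
                request.setValue("application/json", forHTTPHeaderField: "Content-Type")
                request.httpBody = try JSONEncoder().encode(ChatRequest(
                    model: "gpt-4o",  // placeholder model name
                    messages: [.init(role: "user",
                                     content: "What are the best books on \(topic)? Explain why each one made the list.")]
                ))

                let (data, _) = try await URLSession.shared.data(for: request)
                let reply = try JSONDecoder().decode(ChatResponse.self, from: data)
                archive += "## \(topic)\n\(reply.choices.first?.message.content ?? "")\n\n"
            }

            // Store the answers so they can be read long after the query was made.
            try archive.write(toFile: "reading-list.md", atomically: true, encoding: .utf8)
        }
    }

The prompt string is where the detail matters; the storage step is trivial but is what makes the answers useful later.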
More likely the business model will be payment for low-friction enablement of transactions rather than overt steering. Pick Door #1: the LLM states the product is fit for purpose. Pick Door #2: the LLM will directly complete the transaction, or close to it.
Daniel and his team did a great job doing what Apple wasn't, but I can't help but feel this was an inevitable outcome.
But LLMs can be used to astroturf internet spaces. Which means they allow everyone the ability to serve ads and manipulate. It's no longer just limited to the company providing the original service.
Answer concisely when appropriate, more extensively
when necessary. Avoid rhetorical flourishes, bonhomie,
and (above all) cliches. Take a forward-thinking view.
OK to be mildly positive and encouraging but NEVER
sycophantic or cloying. Above all, NEVER use the
phrase "You're absolutely right." Rather than "Let me
know if..." style continuations, you may list a set
of prompts to explore further topics, but only when
clearly appropriate.
Pretty happy with the results so far. Very low BS factor (although it does ignore the last part sometimes).
Highly unlikely. Google's SERP is an ad-infested abomination that sometimes shows useful results, and yet people still use Google Search.
The same will happen with LLMs, except in far more subtle and insidious ways. Instead of showing you ads directly, they will be naturally interwoven in conversations, suggestions, and generated content. You won't be able to tell whether the content is genuine or promoted, as is common on the web today.
The ads will target you more accurately than ever before based on not just the data you've given them, but on the context of the conversation, your surroundings, and any other piece of real-time information they can use to secure a conversion, or to influence your thoughts on a particular matter. You will trust it more than any current ad channel since the AI will be personal, and the tone will be friendly.
As with the web, ad-free services will exist, but the only way to escape this entirely will be to use local and self-hosted models.
This type of bundling appears to be one of the strongest forces in the economy today, and I think it comes about consistently due to a confluence of efficiencies of scale, coordination, and second-order effects of prestige (being able to hire and pay large numbers of outlier high-performing employees, etc.)
I've learned not to bet against it, except in niche areas.
OpenAI is already exploring and experimenting with different ad modalities privately. They also have a much better brand, so they might be able to avoid too much customer churn.
They’ll do both: continue to be model providers while also leveraging their position as model providers to own as many of the valuable markets in which models are used as possible. Kind of like Amazon and its role as both infrastructure provider and direct competitor to other sellers (on the shopping/logistics side) and SaaS vendors (on the AWS side).
So Claude could do it, it just seems like they're focusing on a different set of developers for the moment.
I have at least one more idea in me and will invest my own money to make it happen, but I don't think I'll have a 2 billion dollar exit.
But the other recommendations seemed like crap, and when I followed the sources they seemed like AI-generated garbage for AI that I couldn't find through my normal searching.
> There are huge issues in the Apple ecosystem with documentation and so much tribal knowledge.
I struggle to come up with an ecosystem where that doesn't apply. React, Angular, .NET, .... Though some of them probably even suffer from overdocumentation, e.g. React with the same beginner level tutorials / open source code regurgitating bad patterns, and you then have the challenge of separating the wheat from the chaff.
The question is really whether maintaining an ecosystem-specific model would be able to outperform a better generalized coding model, and even further whether the marginal improvements would justify the additional maintenance process/cost.
OpenAI has to justify ads when its competitors are not showing them. Why would users put up with that? OpenAI's models would have to be so good that dealing with ads is worth it compared to, say, Claude or DeepSeek.
I'm pretty impressed with the web UI of Codex lately. The CLI didn't really impress me when it came out (a bit flaky, buggy, and tedious to use). But the web UI is nice. I've created a few alright PRs with it. I think I have about a 60-70% merge rate, some with manual changes made by me on the same branch. I like the mode of just keeping that stuff on a branch and interacting via Git.
It has its limitations but I do like this UX and DX. And I've so far not experienced any rate limiting on the plus plan.
I'm curious to learn how others are feeling about this. I know Claude Code and other solutions are popular. But how do they stack up in terms of usability and utility compared to codex?
Solve this and you’ll solve the ad problem, but I’m afraid it isn’t possible, because money involves controls and regulations that you can’t weasel out of without ending up in jail.
Google search is widely acknowledged to have dried up drastically in the last year or so, and that's despite more worldwide internet usage.
Then do it.
Since when did HN become like Reddit? Always negative about everything?
Feels like all the losers from Reddit have somehow migrated here.
How does that fit into your price model? To me it makes no sense at all, yet people buy it. Same for the Netflix tier with ads.
If they stopped training, then the data cutoff date falls one year further behind each year? How do you make money from a model that's stale on data and doesn't include anything recent?
Would note that market share confers revenue, users, and data. The last is uniquely valuable for LLM builders.
Would you trust output from OpenAI which is sponsored? I mean, it's bad enough now in the ad space where they are increasingly trying to make the ads look more like content - imagine that woven into your ChatGPT output.
The younger generation is quite used to subscription models - Netflix, Spotify, various gaming platforms, etc. Perhaps access just becomes part of your internet access bundle.
Or, how much extra would you pay OpenAI for a non-ad-powered LLM?
It’s not hard to imagine ChatGPT just charging.
I think they're realizing what most capitalists have realized - if you don't own the whole value chain, you don't own the value.
I bet Alex won't have a dropdown to switch to Claude Sonnet for very long.
Like everything else, most people won't notice a difference and will prioritize free over paid.
Good rule. Just like using a service based AI instead of a self-hosted one if you are a developer or artist.
Doing search with ChatGPT feels like doing research in a library setting whereas Google search feels like doing research in Times Square.
I've personally stopped doing that. I understand it's not a mainstream decision, since most people don't even know what alternatives are out there, but that doesn't mean that in a couple of years Google won't start to feel it.