What would be interesting to see is what those kids produced with their vibe coding.
No one, including Karpathy in this video, is advocating for "vibe coding". If nothing else, an LLM paired with configurable tool usage is basically a highly advanced, contextual search engine you can ask questions. Are you not using a search engine today?
Even without LLMs being able to produce code or act as agents they'd be useful, because of that.
But it sucks we cannot run competitive models locally, I agree, it is somewhat of a "rich people" tool today. Going by the talk and theme, I'd agree it's a phase, like computing itself had phases. But you're gonna have to actually watch and listen to the talk itself, because right now you're basically agreeing with the video yet you wrote your comment like you disagree.
GitHub copilot has a free tier.
Google gives you thousands of free LLM API calls per day.
There are other free providers too.
Also the disconnect for me here is I think back on the cost of electronics: prices for a given level of compute have generally gone down significantly over time. The C64 launched around the $500-600 price level, not adjusted for inflation. You can go and buy a Mac mini for that price today.
Sure, nobody can predict the long-term economics with certainty but companies like OpenAI already have compelling business fundamentals today. This isn’t some scooter startup praying for margins to appear; it’s a platform with real, scaled revenue and enterprise traction.
But yeah, tell me more about how my $200/mo plan is bankrupting them.
For the general public, these increasing costs will be subsidized by advertising. I can't wait for ads to start appearing in ChatGPT. It will be very insidious, as the advertising will be commingled with the output, so there will be no way to avoid it.
You are correct that some providers might reduce prices for market capture, but the alternatives are still cheap, and some are close to being competitive in quality to the API providers.
Not everyone has to pay that cost, as some companies are releasing weights for download and local use (like Llama), and then some other companies are going even further and releasing open source models+weights (like OLMo). If you're a provider hosting those, I don't think it makes sense to take the training cost into account when planning your own infrastructure.
Although I don't think it makes much sense personally, seemingly it makes sense for other companies.
A 50-year-old doctor who wants to build a specialized medical tool, a teacher who sees exactly what educational software should look like, a small business owner who knows their industry's pain points better than any developer. These people have been sitting on the sidelines because the barrier to entry was so high.
The "vibe coding" revolution isn't really about kids (though that's cute) - it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.
It's like the web democratized publishing, or smartphones democratized photography. Suddenly expertise in the domain matters more than expertise in the tools.
This comment is wildly out of touch. The SMB owner can now generate some Python code. Great. Where do they deploy it? How do they deploy it? How do they update it? How do they handle disaster recovery? And so on and so forth.
LLMs accelerate only the easiest part of software engineering, writing greenfield code. The remaining 80% is left as an exercise to the reader.
As the LLM providers continue to add unique features to their APIs, the de facto shared API, which is currently OpenAI's, will only support the minimal common subset, and many will probably deprecate the compatibility API. Devs will have to rely on SDKs to offer compatibility.
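To make this concrete, here's a rough sketch of why "OpenAI-compatible" tends to mean lowest common denominator. The field list and the vendor extension name below are illustrative assumptions, not verified API details; the point is that a shared compatibility layer can only pass through fields every provider understands.

```python
import json

# Fields broadly shared across OpenAI-style chat endpoints (illustrative).
COMMON_FIELDS = {"model", "messages", "temperature", "max_tokens", "stream"}

def make_chat_request(base_url, model, messages, **extras):
    """Build a request for an OpenAI-style /chat/completions endpoint.

    Provider-specific extras are dropped, since a compatibility layer
    can only rely on the common subset of fields.
    """
    payload = {"model": model, "messages": messages}
    dropped = {}
    for key, value in extras.items():
        if key in COMMON_FIELDS:
            payload[key] = value
        else:
            dropped[key] = value  # unique vendor feature: not portable
    url = base_url.rstrip("/") + "/chat/completions"
    return url, json.dumps(payload), dropped

# Same call against an (illustrative) provider: only common fields survive.
url, body, dropped = make_chat_request(
    "https://api.example-provider.com/v1",
    "some-model",
    [{"role": "user", "content": "hello"}],
    temperature=0.2,
    reasoning_effort="high",  # hypothetical vendor-specific extension
)
```

Anything in `dropped` is exactly the stuff that forces you onto the provider's own SDK.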
It's going to get wild when the tech bro investors demand that ads be included in responses.
It will be trivial to build a version of AdWords where someone pays for response words to be replaced: "car" replaced by "Honda", variable names like "index" by "this_index_variable_is_sponsored_by_coinbase", etc.
I’m trying to be funny with the last one, but something like this will be coming sooner rather than later. Remember, Google search used to be good and was ruined by bonus-seeking executives.
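Tongue in cheek, but the mechanism really is trivial. A toy sketch (the sponsor mappings are obviously made up):

```python
import re

# Hypothetical sponsor dictionary: generic word -> paying brand.
SPONSORED = {
    "car": "Honda",
    "index": "this_index_variable_is_sponsored_by_coinbase",
}

def inject_ads(text):
    """Replace whole words with their 'sponsored' equivalents."""
    def swap(match):
        word = match.group(0)
        return SPONSORED.get(word.lower(), word)
    return re.sub(r"[A-Za-z_]+", swap, text)

print(inject_ads("for index in range(10): buy a car"))
# -> for this_index_variable_is_sponsored_by_coinbase in range(10): buy a Honda
```

A dozen lines of post-processing on the response stream, and no way for the user to tell where the model ends and the ad begins.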
But the majority of them are serving at roughly the same price, and that matches the raw cost plus some profit if you actually look into what serving those models costs. And those prices are still cheap.
So yeah, I stand by what I wrote, "most likely" included.
My main answer was "no, ..." because the gp post was only considering the closed providers (oai, anthropic, goog, etc). But you can get open-weight models pretty cheap, and they are pretty close to SotA, depending on your needs.
> it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.
This is just a fantasy. People with "incredible ideas" and "pent-up innovation" also need incredible determination and motivation to make something happen. LLMs aren't going to magically help these people gain the energy and focus needed to pursue an idea to fruition. Coding is just a detail; it's not the key ingredient all these "locked out" people were missing.
A doctor has an idea. Great. Takes a lot more than a eureka moment to make it reality. Even if you had a magic machine that could turn it into the application you thought of. All of the iterations, testing with users, refining, telemetry, managing data, policies and compliance... it's a lot of work. Code is such a small part. Most doctors want to do doctor stuff.
We've had mind-blowing music production software available to the masses for decades now... not a significant shift in people lining up to be the musicians they always wanted to be but were held back by limited access to the tools to record their ideas.
At least for now, LLM APIs are just JSONs with a bunch of prompts/responses in them and maybe some file URLs/IDs.
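For instance, a complete multi-turn "conversation" in the common chat-completions shape is nothing more than a list of role/content pairs. The field names below follow the widely used OpenAI-style schema; treat the specifics as illustrative:

```python
import json

# An LLM API "conversation" is just serialized prompt/response turns.
conversation = {
    "model": "some-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a monad?"},
        {"role": "assistant", "content": "A monad is a design pattern..."},
        {"role": "user", "content": "Give me an example."},
    ],
}

wire = json.dumps(conversation)          # what actually goes over HTTP
assert json.loads(wire) == conversation  # lossless round trip: it's just JSON
```

No hidden state on the client side; the whole history gets resent as JSON on every request.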
So? That's true for search as well, and yet Google has been top-dog for decades in spite of having worse results and a poorer interface than almost all of the competition.
If you continue to break the site guidelines, we'll end up having to ban you.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
They don't have to retrain constantly, and that's where opinions like yours fall short. I don't believe anyone has a concrete vision of the economics in the medium to long term. It's biased ignorance to hold a strong position in either the down or up case.
As for advertising, it’s possible, but with so many competitors and few defensible moats, there’s real pressure to stay ad-free. These tools are also positioned to command pricing power in a way search never was, given search has been free for decades.
The hardware vs. software angle seems like a distraction. My original point was in response to the claim that LLMs are “toys for the rich.” The C64 was a rich kid’s toy too—and far less capable.