1480 points sandslash | 40 comments
1. darqis ◴[] No.44317373[source]
when I started coding at the age of 11 in machine code and assembly on the C64, the dream was to create software that creates software. Nowadays it's almost reality; almost, because the devil is always in the details. When you're used to writing code, writing code is relatively fast, and you need that knowledge to debug issues with generated code. Yet now you're telling the AI to fix the bugs in its own generated code. I see it a bit like layering: machine code gets overlaid with asm, which gets overlaid with C or some other higher-level language, which then adopts methodology like MVC, and on top of all that there's now the AI prompt-and-generation layer. But it's not widely available. Affording more than one computer is a luxury; many households are struggling just to get by. When you see those 5 or 7 Mac Minis, which average Joe can afford that, or even has the knowledge to set up an LLM at home? I don't. This is a toy for rich people, just like public clouds. I left AWS and GCP out because the cost is too high, and running my own is also too expensive when there are cheaper alternatives with way less overhead.

What would be interesting to see is what those kids produced with their vibe coding.

replies(5): >>44317396 #>>44317699 #>>44318049 #>>44319693 #>>44321408 #
2. diggan ◴[] No.44317396[source]
> those kids produced with their vibe coding

No one, including Karpathy in this video, is advocating for "vibe coding". If nothing else, an LLM paired with configurable tool usage is basically a highly advanced, contextual search engine you can ask questions. Are you not using a search engine today?

Even without LLMs being able to produce code or act as agents they'd be useful, because of that.

But I agree it sucks that we cannot run competitive models locally; it is somewhat of a "rich people" tool today. Going by the talk and its theme, I'd agree it's a phase, like computing itself had phases. But you're going to have to actually watch and listen to the talk itself: right now you're basically agreeing with the video, yet you wrote your comment as if you disagree.

3. dist-epoch ◴[] No.44317699[source]
> This is a toy for rich people

GitHub copilot has a free tier.

Google gives you thousands of free LLM API calls per day.

There are other free providers too.

replies(1): >>44317868 #
4. guappa ◴[] No.44317868[source]
1st dose is free
replies(2): >>44317929 #>>44318058 #
5. palmfacehn ◴[] No.44317929{3}[source]
Agreed. It is worth noting how search has evolved over the years.
6. infecto ◴[] No.44318049[source]
This is most definitely not a toy for rich people. Depending on your country it may perhaps count as rich, but I would comfortably say that for most of the developed world the costs of these tools are absolutely attainable; there is a reason ChatGPT has such a large subscriber base.

Also, there's a disconnect for me when I think back on the cost of electronics: prices for a given level of compute have generally gone down significantly over time. The C64 launched at around the $500-600 price level, not adjusted for inflation. You can go and buy a Mac mini for that price today.

replies(2): >>44318720 #>>44320337 #
7. infecto ◴[] No.44318058{3}[source]
LLM APIs are pretty darn cheap relative to most of the developed world's income levels.
replies(2): >>44318209 #>>44318307 #
8. guappa ◴[] No.44318209{4}[source]
Yeah, because they're bleeding money like crazy now.

You should consider how much it actually costs, not how much they charge.

How do people fail to consider this?

replies(4): >>44318223 #>>44318435 #>>44318736 #>>44320080 #
9. bdangubic ◴[] No.44318223{5}[source]
how much does it cost?
10. NoOn3 ◴[] No.44318307{4}[source]
It's cheap now. But if you take into account all the training costs, then at such prices they cannot make a profit in any way. This is called dumping to capture the market.
replies(3): >>44318415 #>>44319180 #>>44319311 #
11. infecto ◴[] No.44318415{5}[source]
No doubt the complete cost of training, and of getting to where we are today, has been significant, and I don't know how the accounting will look years from now, but you are making up the rest based on feelings. We know OpenAI is operationally profitable on purely the runtime side; nobody knows how that will look once you account for R&D, but you have no qualification to say they cannot make a profit in any way.
replies(2): >>44318506 #>>44319856 #
12. infecto ◴[] No.44318435{5}[source]
> You should consider how much it actually costs, not how much they charge. How do people fail to consider this?

Sure, nobody can predict the long-term economics with certainty but companies like OpenAI already have compelling business fundamentals today. This isn’t some scooter startup praying for margins to appear; it’s a platform with real, scaled revenue and enterprise traction.

But yeah, tell me more about how my $200/mo plan is bankrupting them.

13. NoOn3 ◴[] No.44318506{6}[source]
Yes, if you do not take into account the cost of training, then I think it is very likely profitable. The cost of running the models is not that high. This is just my opinion based on open models, and I admit that I have not carried out precise calculations.
14. bawana ◴[] No.44318720[source]
I suspect that economies of scale are different for software and hardware. With hardware, iteration leads to supply-chain optimization, volume discounts (since the marginal cost is so much less than the fixed cost), and lower prices over time; the purpose of the device remains fixed. With software, the product becomes ever more complex through technical debt: featuritis, patches, bugs, vulnerabilities, and an evolution of purpose that tries to capture more disparate functions under one environment in order to lock in users. Price tends to increase over time. (This trajectory is incidentally the opposite of the Unix philosophy of multiple small, fast, independent tools that can be concatenated to achieve a purpose.) At equilibrium this yields ever-increasing profits for software and decreasing profits for hardware.

In the development of AI we are already seeing this: first we had GPT, then chatbots, then agents, now integration with existing software architectures. Not only is each model ever larger and more complex (RNN -> transformer -> multi-head attention -> add fine-tuning/LoRA -> add MCP), but the bean counters will find ways to make you pay for each added feature. And bugs will multiply: prompt-injection attacks are already a concern, so now another layer is needed to mitigate those.

For the general public, these increasing costs will be subsidized by advertising. I can't wait for ads to start appearing in ChatGPT; it will be very insidious, as the advertising will be commingled with the output, so there will be no way to avoid it.

replies(1): >>44326906 #
15. NitpickLawyer ◴[] No.44318736{5}[source]
No, there are 3rd party providers that run open-weights models and they are (most likely) not bleeding money. Their prices are kind of similar, and make sense in a napkin-math kind of way (we looked into this when ordering hardware).

You are correct that some providers might reduce prices for market capture, but the alternatives are still cheap, and some are close to being competitive in quality to the API providers.

replies(1): >>44319946 #
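The napkin math alluded to above can be sketched in a few lines. Every figure here is a made-up round number for illustration (assumed server price, lifetime, and throughput), not a quote of real costs:

```python
# Napkin math for self-hosting an open-weights model: amortize an assumed
# GPU server price over its lifetime, then divide by sustained throughput.
server_cost_usd = 250_000          # assumption: 8-GPU server, all-in
lifetime_s = 3 * 365 * 24 * 3600   # assumption: 3-year amortization
tokens_per_s = 20_000              # assumption: aggregate batched throughput

# Hardware-only cost per million output tokens, ignoring power, staff, margin.
cost_per_million_tokens = server_cost_usd / lifetime_s / tokens_per_s * 1e6
print(f"~${cost_per_million_tokens:.2f} per million tokens (hardware only)")
```

With these assumed numbers the hardware-only figure lands around $0.13 per million tokens, which is the sense in which published open-weights prices "make sense in a napkin-math kind of way": the quoted prices sit plausibly above raw serving cost.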
16. diggan ◴[] No.44319180{5}[source]
> But if you take into account all the training costs

Not everyone has to pay that cost, as some companies release weights for download and local use (like Llama), and others go further and release open-source models plus weights (like OLMo). If you're a provider hosting those, I don't think it makes sense to take the training cost into account when planning your own infrastructure.

Although I don't think it makes much sense personally, seemingly it makes sense for other companies.

17. dist-epoch ◴[] No.44319311{5}[source]
There is no "capture" here; it's trivial to switch LLMs/providers, since they all use the OpenAI API. It's literally a URL change.
replies(2): >>44319913 #>>44321762 #
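A minimal sketch of what "literally a URL change" means in practice. The second base URL and both model names are placeholders, not real endpoints; the point is that the request body is identical across OpenAI-compatible providers:

```python
# Any OpenAI-compatible provider accepts the same Chat Completions body;
# only the base URL (and API key) differs.
PROVIDERS = {
    "openai": "https://api.openai.com/v1",
    "other": "https://example-provider.invalid/v1",  # placeholder URL
}

def chat_request(provider: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the (url, body) pair for a Chat Completions call."""
    url = f"{PROVIDERS[provider]}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

# Same body shape, different URL: "switching providers" touches only url.
u1, b1 = chat_request("openai", "gpt-4o-mini", "hi")
u2, b2 = chat_request("other", "some-open-model", "hi")
```

(Whether this stays true as vendors add proprietary features is exactly what the reply below this comment disputes.)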
18. kordlessagain ◴[] No.44319693[source]
Kids? Think about all the domain experts, entrepreneurs, researchers, designers, and creative people who have incredible ideas but have been locked out of software development because they couldn't invest 5-10 years learning to code.

A 50-year-old doctor who wants to build a specialized medical tool, a teacher who sees exactly what educational software should look like, a small business owner who knows their industry's pain points better than any developer. These people have been sitting on the sidelines because the barrier to entry was so high.

The "vibe coding" revolution isn't really about kids (though that's cute) - it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.

It's like the web democratized publishing, or smartphones democratized photography. Suddenly expertise in the domain matters more than expertise in the tools.

replies(3): >>44319780 #>>44319885 #>>44320587 #
19. nevertoolate ◴[] No.44319780[source]
It sounds too good to be true. Why do you think an LLM is better at coding than at knowing how educational software should be designed?
20. guappa ◴[] No.44319856{6}[source]
Except they have to retrain constantly, so why would you not consider the cost of training?
replies(1): >>44326856 #
21. pphysch ◴[] No.44319885[source]
> These people have been sitting on the sidelines because the barrier to entry was so high.

This comment is wildly out of touch. The SMB owner can now generate some Python code. Great. Where do they deploy it? How do they deploy it? How do they update it? How do they handle disaster recovery? And so on and so forth.

LLMs accelerate only the easiest part of software engineering, writing greenfield code. The remaining 80% is left as an exercise to the reader.

replies(1): >>44320065 #
22. jamessinghal ◴[] No.44319913{6}[source]
This is changing; OpenAI's newer API (Responses) is required if you want reasoning tokens kept in context across calls, reasoning summaries, and some of the OpenAI-provided tools. Google's OpenAI compatibility layer supports Chat Completions, not Responses.

As LLM developers continue to add unique features to their APIs, the shared API (which today is OpenAI's) will only support the minimal common subset, and many will probably deprecate their compatibility APIs. Devs will have to rely on SDKs to offer compatibility.

replies(1): >>44321359 #
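As a rough illustration of what such an SDK shim has to do, here is a hypothetical adapter mapping a minimal Responses-style request onto the Chat Completions shape. The `instructions`/`input` field names follow OpenAI's public Responses docs, but the mapping itself is illustrative and far from complete (no tools, no reasoning items, no streaming):

```python
# Hypothetical compatibility shim: translate a minimal Responses-style
# request dict into the older Chat Completions request shape.
def responses_to_chat(req: dict) -> dict:
    messages = []
    # Responses' top-level "instructions" roughly plays the system-message role.
    if "instructions" in req:
        messages.append({"role": "system", "content": req["instructions"]})
    # A plain-string "input" becomes a single user message.
    messages.append({"role": "user", "content": req["input"]})
    return {"model": req["model"], "messages": messages}

chat = responses_to_chat(
    {"model": "gpt-x", "instructions": "Be brief.", "input": "hi"}
)
```

Even this toy version shows why it is "significantly more effort than swapping a URL": the request shapes differ structurally, not just in naming.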
23. Eggpants ◴[] No.44319946{6}[source]
Starts with “No” then follows that up with “most likely”.

So in other words you don’t know the real answer but posted anyways.

replies(1): >>44320293 #
24. bongodongobob ◴[] No.44320065{3}[source]
All the devs I work with would have to go through me to touch the infra anyway, so I'm not sure I see the issue here. No one is saying they need to deploy fully through the stack. It's a great start for them and I can help them along the way just like I would with anyone else deploying anything.
replies(1): >>44320988 #
25. Eggpants ◴[] No.44320080{5}[source]
Just wait for the enshittification of LLM services.

It's going to get wild when the tech-bro investors demand that ads be included in responses.

It would be trivial to build a version of AdWords where someone pays to have response words replaced: "car" replaced by "Honda", variable names like "index" by "this_index_variable_is_sponsored_by_coinbase", etc.

I'm trying to be funny with the last one, but something like this will be coming sooner rather than later. Remember, Google search used to be good and was ruined by bonus-seeking executives.

26. NitpickLawyer ◴[] No.44320293{7}[source]
That "most likely" covers the case where they got their investment calculations wrong and won't be able to recoup their hardware costs. So I think it's safe to say there may be the outlier 3rd-party provider that loses money in the long run.

But the majority of them are serving at roughly the same price, and that matches the raw cost plus some profit if you actually look into serving those models. And those prices are still cheap.

So yeah, I stand by what I wrote, "most likely" included.

My main answer was "no, ..." because the GP post was only considering the closed providers (OAI, Anthropic, Goog, etc.). But you can get open-weight models pretty cheap, and they are pretty close to SotA, depending on your needs.

27. pton_xd ◴[] No.44320587[source]
> Think about all the domain experts, entrepreneurs, researchers, designers, and creative people who have incredible ideas but have been locked out of software development because they couldn't invest 5-10 years learning to code.

> it's about unleashing all the pent-up innovation from people who understand problems deeply but couldn't translate that understanding into software.

This is just a fantasy. People with "incredible ideas" and "pent-up innovation" also need incredible determination and motivation to make something happen. LLMs aren't going to magically help these people gain the energy and focus needed to pursue an idea to fruition. Coding is just a detail; it's not the key ingredient all these "locked out" people were missing.

replies(1): >>44320846 #
28. agentultra ◴[] No.44320846{3}[source]
100% this. There have been generations of tools built to help realize this idea and there is... not a lot of demand for it. COBOL, BASIC, Hypercard, the wasteland of no-code and low-code tools. The audience for these is incredibly small.

A doctor has an idea. Great. Takes a lot more than a eureka moment to make it reality. Even if you had a magic machine that could turn it into the application you thought of. All of the iterations, testing with users, refining, telemetry, managing data, policies and compliance... it's a lot of work. Code is such a small part. Most doctors want to do doctor stuff.

We've had mind-blowing music production software available to the masses for decades now... not a significant shift in people lining up to be the musicians they always wanted to be but were held back by limited access to the tools to record their ideas.

29. pphysch ◴[] No.44320988{4}[source]
In other words, most of the barriers to leveraging custom software are still present.
replies(1): >>44321793 #
30. dist-epoch ◴[] No.44321359{7}[source]
It's still trivial to map to a somewhat different API. Google has its Vertex/GenAI API flavors.

At least for now, LLM APIs are just JSONs with a bunch of prompts/responses in them and maybe some file URLs/IDs.

replies(1): >>44322820 #
31. kapildev ◴[] No.44321408[source]
>What would be interesting to see is what those kids produced with their vibe coding.

I think you are referring to what those kids in the vibe coding event produced. Wasn't their output available in the video itself?

32. lelanthran ◴[] No.44321762{6}[source]
> There is no "capture" here, it's trivial to switch LLM/providers, they all use OpenAI API. It's literally a URL change.

So? That's true for search as well, and yet Google has been top-dog for decades in spite of having worse results and a poorer interface than almost all of the competition.

33. dang ◴[] No.44321779{3}[source]
> You think of economics like a 6 years old.

If you continue to break the site guidelines, we'll end up having to ban you.

If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.

replies(1): >>44323613 #
34. dang ◴[] No.44321782{4}[source]
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.

https://news.ycombinator.com/newsguidelines.html

35. bongodongobob ◴[] No.44321793{5}[source]
Yes, the parts we aren't talking about that have nothing to do with LLMs, i.e. normal business processes.
36. jamessinghal ◴[] No.44322820{8}[source]
It isn't necessarily difficult, but it's significantly more effort than swapping a URL, which is what I was originally replying to.
37. LtWorf ◴[] No.44323613{4}[source]
He had already gotten replies to his multiple other comments stating the same thing.
replies(1): >>44324451 #
38. dang ◴[] No.44324451{5}[source]
Sorry, but I'm not following you here.
39. infecto ◴[] No.44326856{7}[source]
In the medium to long term that R&D matters. In the short term it's not as important a metric. I absolutely agree that from an underwriting perspective one would ideally consider those costs, but I also think it's dishonest to simply say they are bleeding money, end of story.

They don't have to retrain constantly, and that's where opinions like yours fall short. I don't believe anyone has a concrete vision of the economics in the medium to long term. It's biased ignorance to hold a strong position on either the down or the up case.

40. infecto ◴[] No.44326906{3}[source]
I'm struggling to follow your argument; it feels more speculative than evidence-based. Runtime costs have consistently fallen.

As for advertising, it’s possible, but with so many competitors and few defensible moats, there’s real pressure to stay ad-free. These tools are also positioned to command pricing power in a way search never was, given search has been free for decades.

The hardware vs. software angle seems like a distraction. My original point was in response to the claim that LLMs are “toys for the rich.” The C64 was a rich kid’s toy too—and far less capable.