1479 points sandslash | 2 comments | | HN request time: 0.416s | source
darqis ◴[] No.44317373[source]
When I started coding at the age of 11, in machine code and assembly on the C64, the dream was to create software that creates software. Nowadays it's almost reality; almost, because the devil is always in the details.

When you're used to writing code, writing it is relatively fast, and you need that knowledge to debug issues with generated code. Yet now you're telling the AI to fix the bugs in its own generated code. I see it as layering: machine code gets overlaid with assembly, which gets overlaid with C or some other higher-level language, which then adopts methodologies like MVC, and on top of all that there's now the AI input and generation layer.

But it's not widely available. Affording more than one computer is a luxury; many households are struggling just to get by. When you see those setups with five or seven Mac Minis, which average Joe can afford that, or even has the knowledge to set up an LLM at home? I don't. This is a toy for rich people. Just like with public clouds: I left AWS and GCP out because the cost is too high, running my own is also too expensive, and there are cheaper alternatives that not only cost less but also have far less overhead.

What would be interesting to see is what those kids produced with their vibe coding.

replies(5): >>44317396 #>>44317699 #>>44318049 #>>44319693 #>>44321408 #
dist-epoch ◴[] No.44317699[source]
> This is a toy for rich people

GitHub copilot has a free tier.

Google gives you thousands of free LLM API calls per day.

There are other free providers too.

replies(1): >>44317868 #
guappa ◴[] No.44317868[source]
1st dose is free
replies(2): >>44317929 #>>44318058 #
infecto ◴[] No.44318058[source]
LLM APIs are pretty darn cheap relative to most of the developed world's income levels.
replies(2): >>44318209 #>>44318307 #
guappa ◴[] No.44318209[source]
Yeah, because they're bleeding money like crazy now.

You should consider how much it actually costs, not how much they charge.

How do people fail to consider this?

replies(4): >>44318223 #>>44318435 #>>44318736 #>>44320080 #
NitpickLawyer ◴[] No.44318736[source]
No, there are 3rd party providers that run open-weights models and they are (most likely) not bleeding money. Their prices are kind of similar, and make sense in a napkin-math kind of way (we looked into this when ordering hardware).

You are correct that some providers might reduce prices for market capture, but the alternatives are still cheap, and some are close to being competitive in quality to the API providers.

replies(1): >>44319946 #
Eggpants ◴[] No.44319946[source]
Starts with “No” then follows that up with “most likely”.

So in other words you don’t know the real answer but posted anyways.

replies(1): >>44320293 #
NitpickLawyer ◴[] No.44320293[source]
That "most likely" covers the case where a provider got its investment calculations wrong and won't be able to recoup its hardware costs. So I think it's safe to say there may be the outlier 3rd-party provider that loses money in the long run.

But the majority of them are serving at ~ the same price, and that matches to the raw cost + some profit if you actually look into serving those models. And those prices are still cheap.
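The napkin math behind that claim can be sketched roughly like this. All figures here are illustrative assumptions for the sake of the arithmetic (a hypothetical $2/hour rented GPU sustaining 1000 batched tokens/s), not measured data from any actual provider:

```python
# Napkin-math sketch: serving cost per million output tokens for an
# open-weight model on rented hardware. Numbers are assumptions only.

def cost_per_million_tokens(gpu_hourly_usd: float, tokens_per_second: float) -> float:
    """USD cost per million tokens, given GPU rental price and sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / tokens_per_hour * 1_000_000

# Hypothetical example: $2/hour GPU, 1000 tokens/s with batching
cost = cost_per_million_tokens(gpu_hourly_usd=2.0, tokens_per_second=1000)
print(f"${cost:.2f} per million tokens")  # prints "$0.56 per million tokens"
```

If a provider charges a bit above that raw figure, the price covers hardware plus some profit, which is consistent with many open-weight providers clustering around similar, still-cheap prices.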

So yeah, I stand by what I wrote, "most likely" included.

My main answer was "no, ..." because the gp post was considering only the closed providers (oai, anthropic, goog, etc). But you can get open-weight models pretty cheap, and they are pretty close to SotA, depending on your needs.