
1479 points sandslash | 3 comments
darqis ◴[] No.44317373[source]
when I started coding at the age of 11 in machine code and assembly on the C64, the dream was to create software that creates software. Nowadays it's almost reality, almost because the devil is always in the details. When you're used to write code, writing code is relatively fast. You need this knowledge to debug issues with generated code. However you're now telling AI to fix the bugs in the generated code. I see it kind of like machine code becomes overlaid with asm which becomes overlaid with C or whatever higher level language, which then uses dogma/methodology like MVC and such and on top of that there's now the AI input and generation layer. But it's not widely available. Affording more than 1 computer is a luxury. Many households are even struggling to get by. When you see those what 5 7 Mac Minis, which normal average Joe can afford that or does even have to knowledge to construct an LLM at home? I don't. This is a toy for rich people. Just like with public clouds like AWS, GCP I left out, because the cost is too high and running my own is also too expensive and there are cheaper alternatives that not only cost less but also have way less overhead.

What would be interesting to see is what those kids produced with their vibe coding.

replies(5): >>44317396 #>>44317699 #>>44318049 #>>44319693 #>>44321408 #
dist-epoch ◴[] No.44317699[source]
> This is a toy for rich people

GitHub Copilot has a free tier.

Google gives you thousands of free LLM API calls per day.

There are other free providers too.

replies(1): >>44317868 #
guappa ◴[] No.44317868[source]
1st dose is free
replies(2): >>44317929 #>>44318058 #
infecto ◴[] No.44318058[source]
LLM APIs are pretty darn cheap relative to most of the developed world's income levels.
replies(2): >>44318209 #>>44318307 #
NoOn3 ◴[] No.44318307[source]
It's cheap now. But once you take training costs into account, they can't be making a profit at these prices. This is called dumping to capture the market.
replies(3): >>44318415 #>>44319180 #>>44319311 #
dist-epoch ◴[] No.44319311[source]
There is no "capture" here; it's trivial to switch LLMs/providers, since they all use the OpenAI API. It's literally a URL change.
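A minimal sketch of what "literally a URL change" means in practice. The endpoint URLs and model name here are illustrative assumptions; check each provider's docs for the real values.

```python
import json

# Two OpenAI-compatible endpoints (illustrative URLs, not authoritative).
PROVIDERS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "gemini": "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions",
}

# The request body is the same JSON regardless of provider; switching is
# just a matter of which URL (and API key) it gets POSTed to.
payload = json.dumps({
    "model": "some-model",  # hypothetical model name
    "messages": [{"role": "user", "content": "Hello"}],
})

for name, url in PROVIDERS.items():
    print(f"{name}: POST {url} body={payload}")
```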
replies(2): >>44319913 #>>44321762 #
jamessinghal ◴[] No.44319913[source]
This is changing: OpenAI's newer Responses API is required to include reasoning tokens in the context, to get reasoning summaries, and to use some of the OpenAI-provided tools. Google's OpenAI compatibility layer supports Chat Completions, not Responses.

As LLM developers continue to add unique features to their APIs, the shared API surface (currently OpenAI's) will only cover the minimal common subset, and many will probably deprecate their compatibility APIs. Devs will have to rely on SDKs for compatibility.
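To make the divergence concrete, here is a sketch of the two request shapes side by side. Field names follow OpenAI's public API docs as I understand them; treat the model names and the exact `reasoning` options as assumptions.

```python
# Chat Completions: the de-facto shared format most providers implement.
chat_completions_body = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses: OpenAI's newer format. "input" replaces "messages", and
# reasoning options live in their own object; compatibility layers that
# only implement Chat Completions can't express this request.
responses_body = {
    "model": "o4-mini",  # illustrative model name
    "input": [{"role": "user", "content": "Hello"}],
    "reasoning": {"summary": "auto"},
}

# The divergence is visible at the top level already:
print(sorted(chat_completions_body), sorted(responses_body))
```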

replies(1): >>44321359 #
dist-epoch ◴[] No.44321359[source]
It's still trivial to map to a somewhat different API. Google has its Vertex/GenAI API flavors.

At least for now, LLM APIs are just JSON with a bunch of prompts/responses in them and maybe some file URLs/IDs.
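A minimal sketch of such a mapping: converting Chat Completions-style messages into a Gemini-style `contents` list. Field names follow Google's Gemini REST docs as I understand them; treat the details (role names, `systemInstruction` handling) as assumptions.

```python
def to_gemini_contents(messages):
    """Map OpenAI-style messages to Gemini-style contents (sketch)."""
    role_map = {"user": "user", "assistant": "model"}
    contents = []
    for m in messages:
        if m["role"] == "system":
            # Gemini takes system text via a separate systemInstruction
            # field rather than an in-band message, so skip it here.
            continue
        contents.append({
            "role": role_map[m["role"]],
            "parts": [{"text": m["content"]}],
        })
    return contents

msgs = [
    {"role": "system", "content": "Be brief."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]
print(to_gemini_contents(msgs))
```

Since both sides are plain JSON, the adapter is a few dozen lines, not a rewrite, which is the point being made here.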

replies(1): >>44322820 #
jamessinghal ◴[] No.44322820[source]
It isn't necessarily difficult, but it's significantly more effort than swapping a URL, which is the claim I originally replied to.