Tools: Code Is All You Need

(lucumr.pocoo.org)
313 points by Bogdanp | 13 comments
pclowes ◴[] No.44454741[source]
Directionally I think this is right. Most LLM usage at scale tends to be filling the gaps between two hardened interfaces. The reliability comes not from the LLM inference and generation but from the interfaces themselves, which only allow certain configurations to work with them.

LLM output is often coerced back into something more deterministic such as types, or DB primary keys. The value of the LLM is determined by how well your existing code and tools model the data, logic, and actions of your domain.
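
Concretely, that coercion usually looks something like the sketch below. This is a rough illustration, not anyone's production code: call_llm and the TicketTriage schema are hypothetical stand-ins, and I'm assuming a pydantic-style validation layer.

    # Rough sketch: coerce free-form LLM output into a typed, validated shape.
    # call_llm and TicketTriage are hypothetical; swap in your own client and schema.
    from pydantic import BaseModel, ValidationError

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever LLM client/API you actually use.
        raise NotImplementedError

    class TicketTriage(BaseModel):
        priority: int      # must parse as an int, not "pretty urgent"
        team: str
        customer_id: int   # resolved against a real DB primary key downstream

    def triage(raw_email: str) -> TicketTriage | None:
        raw = call_llm("Return JSON with priority, team, customer_id for:\n" + raw_email)
        try:
            # Hard gate: output that doesn't fit the schema never reaches the DB.
            return TicketTriage.model_validate_json(raw)
        except ValidationError:
            return None  # retry or escalate to a human rather than storing garbage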

In some ways I view LLMs today a bit like 3D printers, both in terms of hype and in terms of utility. They excel at quickly connecting parts, much like rapid prototyping with 3D-printed parts. For reliability and scale you want either the LLM or an engineer to replace the printed/inferred connector with something durable and deterministic (metal/code) that is cheap and fast to run at scale.

Additionally, there was a minute during the 3D printer Gartner hype cycle when there were notions that we would all just print substantial amounts of consumer goods, when the reality is that the high-utility use cases are much narrower. There is a corollary here to LLM usage. While LLMs are extremely useful, we cannot rely on them to generate or infer our entire operational reality, or even to engage meaningfully with it, without some sort of pre-existing digital modeling as an anchor.

replies(4): >>44455110 #>>44455475 #>>44455505 #>>44456514 #
whiplash451 ◴[] No.44455505[source]
Interesting take but too bearish on LLMs in my opinion.

LLMs have already found large-scale usage (deep research, translation), which makes them more ubiquitous today than 3D printers ever will be or could have been.

replies(7): >>44455662 #>>44455664 #>>44456263 #>>44456415 #>>44456476 #>>44456575 #>>44458961 #
1. benreesman ◴[] No.44455664[source]
What we call an LLM today (by which almost everyone means an autoregressive language model from the Generative Pretrained Transformer family tree, and BERTs are still doing important work, believe that) is actually an offshoot of neural machine translation.

This isn't (intentionally at least) mere HN pedantry: they really do act like translation tools in a bunch of observable ways.

And while they have recently crossed the threshold into "yeah, I'm always going to have a gptel buffer open now" territory at the extreme high end, their utility outside of the really specific, totally non-generalizing code-lookup-gizmo use case remains a claim unsupported by robust profits.

There is a hole in the ground into which something between $100 billion and a trillion dollars has gone, against about $20B in revenue (not profit) coming back annually.

AI is going to be big (it was big ten years ago).

LLMs? Look more and more like the Metaverse every day as concerns the economics.

replies(2): >>44455935 #>>44456578 #
2. rapind ◴[] No.44455935[source]
> There is a hole in the ground into which something between $100 billion and a trillion dollars has gone, against about $20B in revenue (not profit) coming back annually.

This is a concern for me. I'm using claude-code daily and find it very useful, but I'm expecting the price to continue getting jacked up. I do want to support Anthropic, but they might eventually cross a price threshold at which I bail. We'll see.

I expect at some point the more open models and tools will catch up when the expensive models like ChatGPT plateau (assuming they do plateau). Then we'll find out if these valuations measure up to reality.

Note to the Hypelords: It's not perfect. I need to read every change and intervene often enough. "Vibe coding" is nonsense as expected. It is definitely good though.

replies(3): >>44456502 #>>44457037 #>>44457552 #
3. benreesman ◴[] No.44456502[source]
Vibe coding is nonsense, and it's really kind of uncomfortable to realize that a bunch of people you had tons of respect for are either ignorant or dishonest/bought enough to say otherwise. There's a cold wind blowing, and as for the bunker-building crowd, well, let's just say I won't shed a tear.

You don't stock antibiotics and bullets in a survival compound because you think that's going to keep out a paperclip optimizer gone awry. You do that in the forlorn hope that when the guillotines come out that you'll be able to ride it out until the Nouveau Regime is in a negotiating mood. But they never are.

4. sebzim4500 ◴[] No.44456578[source]
>LLMs? Look more and more like the Metaverse every day as concerns the economics.

ChatGPT has 800M+ weekly active users. How is that comparable to the Metaverse in any way?

replies(2): >>44456905 #>>44467623 #
5. benreesman ◴[] No.44456905[source]
I said "as concerns the economics." It's clearly more popular than the Oculus or whatever, but it's still a money bonfire and shows no signs of changing on that front.
replies(2): >>44457858 #>>44463069 #
6. juped ◴[] No.44457037[source]
I'm just taking advantage and burning VCs' money on useful but not world-changing tools while I still can. We'll come out of it with consumer-level okay tools even if they don't reach the levels of Claude today, though.
7. strgcmc ◴[] No.44457552[source]
As a thought exercise -- assume models continue to improve. Right now, "using claude-code daily" is something you choose to do because it's useful, but it is not yet at the level of "absolute necessity, can't imagine work without it". What if it does become that level of absolute necessity?

- Is your demand inelastic at that point, if having claude-code becomes effectively required to sustain your livelihood? Does pricing continue to increase until it's 1%/5%/20%/50% of your salary (because hey, what's the alternative? if you don't pay, then you won't keep up with other engineers and will just lose your job completely)?

- But if tools like claude-code become such a necessity, wouldn't enterprises be the ones paying? Maybe, but maybe like health-insurance in America (a uniquely dystopian thing), your employer may pay some portion of the premiums, but they'll also pass some costs to you as the employee... Tech salaries have been cushy for a while now, but we might be entering a "K-shaped" inflection point --> if you are an OpenAI elite researcher, then you might get a $100M+ offer from Meta; but if you are an average dev doing average enterprise CRUD, maybe your wages will be suppressed because the small cabal of LLM providers can raise prices and your company HAS to pay, which means you HAVE to bear the cost (or else what? you can quit and look for another job, but who's hiring?)

This is a pessimistic take of course (and vastly oversimplified / too cynical). A more positive outcome might be that increasing quality of AI/LLM options leads to a democratization of talent, or a blossoming of "solo unicorns"... Personally, I have toyed with calling this something like a "techno-Amish utopia", in the sense that Amish people believe in self-sufficiency and are not wholly resistant to technology (it's actually quite clever what sorts of technology they allow for themselves or not), so what if we could take that further?

If there were a version of that Amish mentality of loosely federated, self-sufficient communities (they have newsletters! they travel to each other! but they largely feed themselves, build their own tools, fix their own fences, etc.!), where engineers + their chosen LLM partner could launch companies from home, manage their home automation / security tech, run a high-tech small farm, live off-grid from cheap solar, use excess electricity to mine Bitcoin if they choose to, etc., maybe there is actually a libertarian world that can arise, where we are no longer as dependent on large institutions to marshal resources, deploy capital, scale production, etc., if some of those things are more in reach for regular people in smaller communities, assisted by AI. This of course assumes that the cabal of LLM model creators can be broken, and that you don't need to pay for Claude if the cheaper open-source-ish Llama-like alternative is good enough.

replies(1): >>44457825 #
8. rapind ◴[] No.44457825{3}[source]
Well, my business doesn't rely on AI as a competitive advantage, at least not yet anyway. So as it stands, if Claude got 100x as effective but cost 100x more, I'm not sure I could justify the cost, because my market might just not be large enough. Which means I could either ditch it (for an alternative, if one exists) or expand into other markets... which is appealing but a huge change from what I'm currently doing.

As usual, the answer is "it depends". I guarantee though that I'll at least start looking at alternatives when there's a huge price hike.

Also, I suspect that a 100x improvement (if even possible) wouldn't just cost 100 times as much, but probably 100,000+ times as much. I also suspect that an improvement of 100x will be hyped as an improvement of 1,000x at least :)

Regardless, AI is really looking like a commodity to me. While I'm thankful for all the investment that got us here, I doubt anyone investing this late in the game at these inflated numbers is going to see a long-term return (other than Ponzi selling).

9. threetonesun ◴[] No.44457858{3}[source]
LLMs as we know them via ChatGPT were a way to disrupt the search monopoly Google had held for so many years. And my guess is that the reason Google was in no rush to jump into that market is that they knew the economics of it sucked.
replies(1): >>44459428 #
10. benreesman ◴[] No.44459428{4}[source]
Right, and inb4 ads on ChatGPT to stop the bleeding. That's the default outcome at this point: quantize it down gradually to the point where it can be ad supported.

You can just see the scene from the Sorkin film where Fidji is saying to Altman: "It's time to monetize the site."

"We don't even know what it is yet, we know that it is cool."

11. sebzim4500 ◴[] No.44463069{3}[source]
I suppose in that sense it is more like the early days of social media, where there were huge numbers of users but no one was sure how to monetize it properly.

In this case though I think the ChatGPT product line is profitable albeit not enough to cover the R&D costs of OpenAI.

12. player1234 ◴[] No.44467623[source]
I can give away 800M+ of anything for free. How many of these users are willing to pay OpenAI enough for full ROI and profits on top?
replies(1): >>44471525 #
13. sebzim4500 ◴[] No.44471525{3}[source]
>I can give away 800M+ of anything for free

No you can't; be serious. 10% of the global population is using their service, and you can't just pretend that isn't impressive.

There are a lot of free websites, they do not have 800M users.