625 points by lukebennett | 11 comments
LASR No.42140045
Question for the group here: do we honestly feel like we've exhausted the options for delivering value on top of the current generation of LLMs?

I lead a team exploring cutting-edge LLM applications and end-user features. My intuition from experience is that we have a LONG way to go.

GPT-4o / Claude 3.5 are the go-to models for my team. Every combination of technical investment + LLMs yields a new list of potential applications.

For example, combining a human-moderated knowledge graph with an LLM via RAG lets you build "expert bots" that understand your business context / your codebase / your specific processes and act almost like a coworker on your team.
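
A minimal sketch of that idea, assuming the OpenAI Python SDK; the entity names and "facts" are illustrative stand-ins for a real human-moderated store, not anyone's actual system:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Toy stand-in for a human-moderated knowledge graph: entity -> vetted facts.
    # A real system would use a graph store (e.g. Neo4j) with a review workflow.
    KNOWLEDGE_GRAPH = {
        "billing-service": [
            "Owned by the payments team.",
            "Talks to Stripe; all retry logic lives in stripe_client.py.",
            "Deploys are frozen on Fridays.",
        ],
    }

    def expert_bot(question: str, entity: str) -> str:
        """Answer like a coworker, grounded only in curated facts (RAG)."""
        facts = "\n".join(KNOWLEDGE_GRAPH.get(entity, []))
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "You are a senior coworker. Answer using only "
                            "these vetted facts:\n" + facts},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(expert_bot("Can I ship a billing fix this Friday?", "billing-service"))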

If you now add some predictive / simulation capability - e.g. simulate the execution of a task such as creating a GitHub PR, then test it against the expert bot above for code review - you can have LLMs produce reasonable code changes, with automatic review and iteration.
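
A hedged sketch of that generate -> review -> iterate loop; the APPROVE convention and the round limit are assumptions for illustration, not a real protocol:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def ask(prompt: str) -> str:
        r = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}])
        return r.choices[0].message.content

    def propose_and_review(task: str, max_rounds: int = 3) -> str:
        """Draft a code change, have an 'expert bot' review it, revise, repeat."""
        draft = ask(f"Write a code change, as a unified diff, for: {task}")
        for _ in range(max_rounds):
            review = ask("Act as a strict code reviewer for our codebase. "
                         "Reply APPROVE if acceptable, else list problems.\n\n" + draft)
            if review.strip().startswith("APPROVE"):
                break  # simulated review passed; a real system would now open the PR
            draft = ask(f"Revise the diff to address this review:\n{review}\n\nDiff:\n{draft}")
        return draft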

Similarly, there are many more capabilities you can layer on top of LLMs to get increasingly productive outputs from them.

Chasing model improvements and "GPT-5 will be PhD-level" is moot imo. When did you ever hire a PhD coworker who was productive on day 0? You need to onboard them with human expertise, then give them execution space / long-term memory etc. to be productive.

Model vendors might struggle to build something more intelligent. But my point is that we already have so much intelligence and don't know what to do with it. There is a LOT you can do with high-schooler-level intelligence at superhuman scale.

Take a naive example: 200k-token context windows are now available. Most people, through ChatGPT, type out maybe 1,500 tokens. That's a huge amount of untapped capacity; no human is going to type out 200k tokens of context. That's why we need RAG, and additional forms of input (e.g. simulation outcomes), to fully leverage it.
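
To make the capacity point concrete, here's a rough sketch of greedily packing retrieved material into a big window; the 190k budget and the pre-ranked input are my assumptions, and it needs a recent tiktoken that knows gpt-4o:

    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4o")
    BUDGET = 190_000  # leave headroom under a ~200k-token window

    def pack_context(question: str, ranked_chunks: list[str]) -> str:
        """Greedily pack retrieved chunks (RAG hits, simulation outcomes,
        logs) into the prompt until the budget is spent -- orders of
        magnitude more context than the ~1,500 tokens a person types."""
        used = len(enc.encode(question))
        kept = []
        for chunk in ranked_chunks:  # assumed already ranked by relevance
            cost = len(enc.encode(chunk))
            if used + cost > BUDGET:
                break
            kept.append(chunk)
            used += cost
        return "\n\n".join(kept) + "\n\nQuestion: " + question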

replies(43): >>42140086 #>>42140126 #>>42140135 #>>42140347 #>>42140349 #>>42140358 #>>42140383 #>>42140604 #>>42140661 #>>42140669 #>>42140679 #>>42140726 #>>42140747 #>>42140790 #>>42140827 #>>42140886 #>>42140907 #>>42140918 #>>42140936 #>>42140970 #>>42141020 #>>42141275 #>>42141399 #>>42141651 #>>42141796 #>>42142581 #>>42142765 #>>42142919 #>>42142944 #>>42143001 #>>42143008 #>>42143033 #>>42143212 #>>42143286 #>>42143483 #>>42143700 #>>42144031 #>>42144404 #>>42144433 #>>42144682 #>>42145093 #>>42145589 #>>42146002 #
alangibson No.42140383
I think you're playing a different game than the Sam Altmans of the world. The level of investment and profit they are looking for can only be justified by creating AGI.

The >100 P/E ratios we are already seeing can't be justified by something as quotidian as the exceptionally good productivity tools you're talking about.

replies(3): >>42140539 #>>42140666 #>>42140680 #
1. JumpCrisscross No.42140680
> level of investment and profit they are looking for can only be justified by creating AGI

What are you basing this on?

IT outsourcing is a $500+ billion industry. If OpenAI et al. can run even a 10% margin on it, that business alone justifies their valuation.
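
Back-of-envelope: the $500B and 10% are the comment's figures; the ~20x earnings multiple is an assumption added purely for illustration:

    market = 500e9   # annual IT outsourcing revenue (comment's figure), USD
    margin = 0.10    # assumed share of that revenue captured as profit
    profit = market * margin   # $50B/year
    valuation = profit * 20    # assumed ~20x earnings multiple
    print(f"${profit/1e9:.0f}B profit/yr -> ~${valuation/1e12:.1f}T implied valuation")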

replies(2): >>42141388 #>>42144909 #
2. HarHarVeryFunny No.42141388
It seems you are missing a lot of "ifs" in that hypothetical!

Nobody knows how things like coding assistants or other AI applications will pan out. Maybe it'll be Oracle selling Meta-licensed solutions that gets the lion's share of the market. Maybe custom coding goes away for many business applications as off-the-shelf solutions get smarter.

A future where the only thing AI (or some hypothetical AGI) changes is that work now done by humans is instead done by machines seems way too linear.

replies(1): >>42141592 #
3. JumpCrisscross No.42141592
> you are missing a lot of "ifs" in that hypothetical

The big one being that I'm not assuming AGI. Low-level coding tasks, the kind frequently outsourced, are within reach of known methods at prices competitive with offshoring. My point is we don't need to assume AGI for these valuations to make sense.

replies(2): >>42141668 #>>42145913 #
4. HarHarVeryFunny No.42141668
Current AI coding assistants are best at writing functions or adding minor features to an existing code base. They are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typical of the work that is being outsourced. AI is a tool whose full-cycle productivity benefit seems questionable; it is not a replacement for a human.
replies(2): >>42141727 #>>42141740 #
5. JumpCrisscross No.42141727
> they are not agentic systems that can develop an entire solution from scratch given a specification, which in my experience is more typical of the work that is being outsourced

If there is one domain where we're seeing tangible progress from AI, it's in working towards this goal. Difficult projects aren't in scope, but most tech, especially most of what gets branded IT, is not difficult. Not everyone needs an inventory or customer-complaint system designed from scratch. Current AI is good at cutting through that cruft.

replies(1): >>42142846 #
6. senko No.42141740
There are a number of agentic systems that can develop more complex solutions. Just a few off the top of my head: Pythagora, Devin, OpenHands, Fume, Tusk, Replit, Codebuff, Vly. I'm sure I've missed a bunch.

Are they good enough to replace a human yet? Questionable[0], but they are improving.

[0] You wouldn't believe how low the outsourcing contractors' quality can go. Easily surpassed by current AI systems :) That's a very low bar tho.

7. ehnto No.42142846
There have been off-the-shelf solutions for many common software use cases for decades now. I think the reason we still see so much custom software is that the devil is always in the details, and strict details are not an LLM's strong suit.

LLMs are, in my opinion, hamstrung at the starting gate when it comes to replacing software teams, as they would need to understand complex business requirements perfectly, which we know they cannot (humans can't either). It takes a business-requirements / integration-logic / code-generation pipeline, and I think the industry is focused on code generation rather than the integration step.
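
A rough sketch of that three-stage pipeline, with a human resolving ambiguities before any code is generated; the function names and prompts are illustrative, assuming the OpenAI SDK:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def ask(prompt: str) -> str:
        r = client.chat.completions.create(
            model="gpt-4o", messages=[{"role": "user", "content": prompt}])
        return r.choices[0].message.content

    def build_feature(raw_requirements: str) -> str:
        # 1. Business requirements: surface the ambiguities only a human can settle.
        questions = ask("List every ambiguity in these requirements:\n" + raw_requirements)
        clarifications = input(questions + "\nAnswers: ")  # human in the loop
        # 2. Integration logic: pin things down to strict interfaces and edge cases.
        spec = ask("Write a precise integration spec (APIs, data shapes, edge cases) for:\n"
                   + raw_requirements + "\nClarifications:\n" + clarifications)
        # 3. Code generation: only now is the LLM on comparatively solid ground.
        return ask("Implement this spec:\n" + spec)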

I think there needs to be a re-imagining of how software is built, by and for interaction with AI, if it is ever to take over from human software teams, rather than trying to get AI to reflect what humans do.

replies(1): >>42145940 #
8. Barrin92 No.42144909
If the AI business is a bit more mundane than Altman thinks and there are diminishing returns, the market is going to be even more commodified than it already is, and you're not going to make any margins or somehow own the entire market. That's already the case: Anthropic works about as well, other companies are a few months behind, and open source is like a year behind.

That's literally Zucc's entire play: in 5 years this stuff is going to be so abundant you'll get access to good-enough models for pennies, and he'll win because he can slap ads on it, while OpenAI sits there on its gargantuan research costs.

replies(1): >>42145951 #
9. netdevnet No.42145913
I don't know what your experience with outsourcing is, but people outsource full projects, not the writing of a couple of methods. With LLMs still unable to fully understand relatively simple things, you can't expect them to deliver a project whose specification (like most software projects) contains ambiguities that only an experienced dev can detect and ask deep questions about, probing the intention and purpose of the project. LLMs are nowhere near that: handling external uncertainty and turning it into certainty, explaining why technical decisions were made, understanding the purpose of a project and how the work maps to it, handling the general messiness of writing code alongside other people's code. All this is stuff outsourced teams do well, but LLMs won't be anywhere near good at it for at least a decade. I'm calling it.
10. netdevnet No.42145940
This. Code is written by humans, for humans; LLMs cannot compete no matter how much data you throw at them. In a world where software is written by AI, the code likely won't be readable by humans, and that is dangerous for anything where people's health, privacy, finances, or security is involved.
11. netdevnet No.42145951
Genius move by Mark; this could make them the Google of LLMs.