
111 points | mirrir | 3 comments
adityashankar ◴[] No.46176854[source]
Due to perverse incentives and the history of models over-claiming accuracy, it's very hard to believe anything until it's open source and can be tested out.

That being said, I do very much believe that the computational efficiency of models is going to go up [correction] drastically over the coming months, which does pose interesting questions about Nvidia's throne.

*previously miswrote and said computational efficiency will go down

replies(3): >>46176877 #>>46176899 #>>46177234 #
ACCount37 ◴[] No.46177234[source]
I don't doubt the increase in efficiency. I doubt the "drastically".

We already see models become more and more capable per weight and per unit of compute. I don't expect a state-change breakthrough. I expect: more of the same. A SOTA 30B model from 2026 is going to be ~30% better than one from 2025.

Now, expecting that to hurt Nvidia? Delusional.

No one is going to stop and say "oh wow, we got more inference efficiency - now we're going to use less compute". A lot of people are going to say "now we can use larger and more powerful models for the same price" or "with cheaper inference for the same quality, we can afford to use more inference".

replies(1): >>46177314 #
colechristensen ◴[] No.46177314[source]
Eh.

Right now, Claude is good enough. If LLM development hit a magical wall and never got any better, Claude would still be terrifically useful, and there are diminishing returns on how much good we get out of it being at $benchmark.

Say we're satisfied with that... well, how many years until efficiency gains from one side and consumer hardware from the other meet in the middle, so that "good enough for everybody" open models are available to anyone willing to pay for a $4,000 MacBook (and after another couple of years a $1,000 MacBook, and several more after that, a fancy wristwatch)?

Point being, unless we get to a point where we start developing "models" that deserve civil rights and citizenship, the days are numbered on NEEDING cloud infrastructure and datacenters full of racks and racks of $x0,000 hardware.

I strongly believe the top end of the S curve is nigh, and with it we're going to see these trillion dollar ambitions crumble. Everybody is going to want a big-ass GPU and a ton of RAM, but that's going to quickly become boring, because open models are going to exist that eat everybody's lunch, and the trillion dollar companies trying to beat them with a premium product aren't going to stack up outside of niche use cases and more ordinary cloud compute needs.

replies(2): >>46177428 #>>46178229 #
buu700 ◴[] No.46178229[source]
Coding capability in and of itself may be "good enough" or close to it, but there's a long way to go before AI can build and operate a product end-to-end. In fairness, a lot of the gap may be tooling.

But the end state in my mind is telling an AI "build me XYZ", having it ask all the important questions over the course of a 30-minute chat while making reasonable decisions on all lower-level issues, then waking up the next morning to a live cloud-hosted test environment at a subdomain of the domain it said it would buy along with test builds of native apps for Android, iOS, Linux, macOS, and Windows, all with near-100% automated test coverage and passing tests. Coding agents feel like magic, but we're clearly not there yet.

And that's just coding. If someone wanted to generate a high-quality custom feature-length movie within the usage limits of a $20/mo AI plan, they'd be sorely disappointed.

replies(3): >>46179251 #>>46180359 #>>46181482 #
colechristensen ◴[] No.46179251[source]
>But the end state in my mind is telling an AI "build me XYZ", having it ask all the important questions over the course of a 30-minute chat while making reasonable decisions on all lower-level issues, then waking up the next morning to a live cloud-hosted test environment at a subdomain of the domain it said it would buy along with test builds of native apps for Android, iOS, Linux, macOS, and Windows, all with near-100% automated test coverage and passing tests. Coding agents feel like magic, but we're clearly not there yet.

I'm pretty sure we're there. I'm not sure how interested I am in completely closing that loop and completely removing the human from the loop. But I'm also pretty confident that I could do it with nothing but existing models and software built around them.

replies(1): >>46179346 #
buu700 ◴[] No.46179346[source]
I'm not aware that we are there, but would be very interested if you have information to the contrary. Even if we were there, the product/service that does it would have to be at a reasonable cost in order to be useful for most people.

As I said, a lot of the gap may be tooling. But I'm skeptical that even the models themselves are capable of that given sufficiently advanced tooling. I'm not saying we're not close (certainly much closer than we were at the start of the decade), but if we were actually there, you would have zero reservations about removing the human from the loop of an initial prototype.

replies(1): >>46179757 #
colechristensen ◴[] No.46179757{3}[source]
My information to the contrary is my experience over the last few weeks building things with LLMs, including tooling to help build things with LLMs. The experience is one of... I'm a product manager and devsecops engineer bullying an LLM with the psychology of a toddler into building great software, which it can do very successfully. A single instance of a model with a single rolling context window and one set of prompts absolutely can't do what you want, but that's not what I've been doing.
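For readers unfamiliar with the pattern, here is a minimal sketch of what "more than one context window and one set of prompts" can look like. This is not the commenter's actual tooling: `call_model` is a stub standing in for any real LLM API, and the role prompts are invented for illustration.

```python
from dataclasses import dataclass, field

def call_model(system_prompt: str, messages: list[str]) -> str:
    """Placeholder for a real LLM API call (an HTTP request in
    practice). Here it just echoes, so the sketch runs offline."""
    role = system_prompt.split(":")[0]
    return f"[{role}] handled: {messages[-1]}"

@dataclass
class Agent:
    """One role with its own system prompt and its own rolling context."""
    system_prompt: str
    history: list[str] = field(default_factory=list)

    def ask(self, message: str) -> str:
        self.history.append(message)
        reply = call_model(self.system_prompt, self.history)
        self.history.append(reply)
        return reply

# Separate contexts per role, instead of one giant rolling window.
planner = Agent("planner: break features into tasks")
coder = Agent("coder: implement one task at a time")
reviewer = Agent("reviewer: critique diffs, demand fixes")

plan = planner.ask("Build a CLI todo app")
patch = coder.ask(plan)        # the coder never sees planner chatter
verdict = reviewer.ask(patch)  # the reviewer only sees the patch
```

The point of the structure is that each role's context stays small and on-topic, which is one way a "product manager bullying an LLM" setup can outperform a single chat session.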

One-shotting applications isn't interesting to me because I do want to be involved: there are things I have opinions about that I won't know I have until we get there, and there are definitely times when I want to pivot a little or a lot in the middle of development based on experience, an actually agile development cycle.

In the same way I wouldn't want to hire a wedding planner or house builder to plan my wedding or build my home based entirely on a single short meeting before anything started, I don't want to one-shot software.

There are all sorts of things where I want to get myself out of the loop because they're stupid problems, some of them I've fixed, others I'd rather fix later because doing the thing is more interesting than pausing and building the tools to make the thing.

There is, I think, an inverse relationship between the complexity of the tooling and the amount of human involvement; for me, I've reached or am quite near the level of human involvement where I'm much more excited about building stuff than about saving more of my attention.

I'm being a bit vague because I'm not sure I want to share all of my secrets just yet.

replies(1): >>46180177 #
buu700 ◴[] No.46180177{4}[source]
Just to be clear, what I was proposing was a single tool which would, on the basis of a single ~30-minute interaction, purchase a domain name, set up a cloud environment, build a full-stack application + cross-platform native apps + useful tests with near-100% coverage, deploy a live test environment, and compile each platform's native app — all entirely autonomously. Are you saying you've used or built something similar to that? That is super interesting if so, even if you're unable to share. A major subset of that could also still be incredibly useful, but the whole solution I described is a very high bar.

I've been very successful building with custom LLM workflows and automation myself, but that's beyond the capabilities of any tooling I've seen, and I wouldn't necessarily expect great results with current models even if current tooling were fully capable of what I described. Even with such tooling, the cost of inference is high enough to deter careless usage without much more rigorous work on the initial spec and/or micromanagement of the development process.

I'm not necessarily advocating for one-shotting in any given context. I'm simply pointing out that there would be huge advantages to LLMs and tooling sufficiently advanced to be fully capable of doing so end-to-end, especially at dramatically lower cost than current models and at superhuman quality. Such an AI could conceivably one-shot any possible project idea, in the same sense that a competent human dev team with nothing but a page of vague requirements and unlimited time could at least eventually produce something functional.

The value of such an AI is that we'd use it in ways that sound ridiculous today. Maybe a chat with some guy at a bar randomly inspires a neat idea, so you quickly whip out your phone and fire off some bullet point notes; by the time you get home, you have 10 different near-production-ready variations to choose from, each with documentation on the various decisions its agent made and why, and each one only cost $5 in account credit. None is quite perfect, but through the process you've learned a lot and substantially refined the idea; you give it a second round of notes and wake up to a new testable batch. One of those has the functional requirements just right, so you make the final decisions on non-functional requirements and let it roll one last time with strict attention to detail on code quality and a bunch of cycles thrown at security review.

That evening, you check back in and find a high-quality final implementation that meets all of your requirements with a performant and scalable architecture, with all infrastructure deployed and apps submitted to all stores/repositories. You subsequently allocate a sales and marketing budget to the AI, and eventually notice that you suddenly have a new source of income. Now imagine that instead of you, this was actually your friend who's never written a line of code and barely knows how to use a computer.

I still agree with you that current models have been "good enough" for some time, in the sense that if LLMs froze today we could spend the next decade collectively building on and with them and it would totally transform the economy. But at the same time, there's definitely latent demand for more and/or better inference. If LLMs were to become radically more efficient, we wouldn't start shuttering data centers; the economy would just become that much more productive.

replies(1): >>46181506 #
nl ◴[] No.46181506{5}[source]
Have you tried Lovable, Replit, V0, etc.?

Outside of purchasing the domain and building the native apps for you, they cover a very significant amount of this.

If you insist on native apps, it's possible Google Jules could do it. With Gemini 2.5 it wasn't strong enough, but I think it has Gemini 3 now, which can definitely do native apps just fine.

replies(1): >>46184966 #
buu700 ◴[] No.46184966{6}[source]
Thanks for the recommendations. Regarding your other comment, Flutter is what I've landed on as well for my next cross-platform app project, and I'm currently in the middle of developing a spec for a fairly complex agentic system that I'm going to try having Codex two-shot (basic project setup + file stubs + exhaustive tests -> manual checkpoint -> TDD the rest).
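The "two-shot" flow described above can be reduced to a pipeline with one human checkpoint between the shots. This is a hedged sketch only: `run_agent` is a stub (a real version might shell out to a coding-agent CLI), and the prompt wording is invented.

```python
def run_agent(prompt: str) -> str:
    """Stub for a real coding-agent invocation; here it only
    records what it was asked to do."""
    return f"done: {prompt}"

def two_shot(spec: str, approve) -> list[str]:
    """Shot 1: project setup + file stubs + exhaustive failing tests.
    Manual checkpoint: a human reviews before any implementation.
    Shot 2: implement against the now-frozen tests (TDD the rest)."""
    log = [run_agent("From this spec, create the project setup, file "
                     "stubs, and an exhaustive failing test suite: " + spec)]
    if not approve(log[-1]):  # the manual checkpoint
        raise RuntimeError("checkpoint rejected; revise the spec first")
    log.append(run_agent("Implement every stub until the frozen test "
                         "suite passes; do not modify the tests."))
    return log

# Auto-approve here purely so the sketch runs end to end.
steps = two_shot("a fairly complex agentic system", approve=lambda _: True)
```

The design choice worth noting is that the tests are written and frozen before any implementation exists, so the second shot has an objective target rather than a vibe.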

I haven't tried Lovable, V0, or Jules, but I really like Replit for certain things. Having said that, based on my experience, I would characterize it as an amazing tool for rapid frontend iteration with prototype-level backend creation. I'm sure it's gotten better at one-shotting since I tried Agent 2 with Sonnet 3.7 in May, but would still be very (pleasantly) surprised to see that Agent 3 with current models could meet the incredibly high bar of wholly replacing a human dev team.

The fact that tools like Replit also include their own hosting environments is definitely neat, but not really what I was getting at as far as deployment. What I had in mind was managing arbitrary cloud platforms, setting up an optimal architecture for your anticipated scale and usage patterns — whether that's a single Hetzner instance with SQLite or horizontally scaled app servers behind an API gateway with Kafka, Valkey, and Spanner or ScyllaDB — and doing all the DevOps to handle that along with things like CI/CD.

I'm not downplaying how amazing these capabilities are. Being able to generate high-quality code from natural language feels like magic. But all the parts beyond narrow application code are half of the thing I described:

* I'm saying you should be able to send a single off-the-cuff drunk text to an AI and later find a complete production-ready SaaS startup that fully aligns with a reasonable interpretation of your message.

* The other half of the whole thing is >=human-level execution. If the AI can't autonomously deliver work comparable to what an experienced CTO would (given the same requirements, an arbitrarily large hiring budget, and a stipulation to never contact you again until the work was done), it's not there yet.

Again, none of this is to dunk on agentic coding. My point is that I set an absurdly high bar because I want it to one day be met. Just as a $100 storage budget today is equivalent to $100m a few decades ago, I want to live to see a $100 engineering budget reach equivalency with last decade's $100m.

replies(1): >>46186329 #
nl ◴[] No.46186329{7}[source]
If you haven't tried these things since Sonnet 4.5 came out then it's time to give them another try.

Sonnet 4.5 and especially Codex 5.1 have completely changed the way I build software.

> The fact that tools like Replit also include their own hosting environments is definitely neat, but not really what I was getting at as far as deployment. What I had in mind was managing arbitrary cloud platforms, setting up an optimal architecture for your anticipated scale and usage patterns — whether that's a single Hetzner instance with SQLite or horizontally scaled app servers behind an API gateway with Kafka, Valkey, and Spanner or ScyllaDB — and doing all the DevOps to handle that along with things like CI/CD.

I think this is all possible now. But I don't think it'd work the first time, because there are so many environmental issues (service auth etc.) that can go wrong. Maybe it'd be OK if you gave it a root AWS account...

replies(1): >>46186833 #
buu700 ◴[] No.46186833{8}[source]
Just in case it was unclear, I extensively use AI and agentic coding with current models on a daily basis. The only thing I haven't tried in a few months is specifically one-shotting a greenfield project.

I know computer-use agents exist, and theoretically have tooling and permission to do all the things a human sitting in front of a computer can. I just haven't heard of anyone successfully claiming to have had one do exactly what I described for a non-toy project in one shot with zero mistakes, or of any tool like Replit claiming to support such a capability.

I'd be very interested to know if my impression is out of date. As in, if I could send a single message to some AI service and say "Here's my credit card, banking info, and entity info/EIN; build me a production-ready Google Drive clone with religious branding and 10x higher pricing called God Drive with native Android/iOS/Linux/macOS/Windows apps, then deploy it to production on an optimal cloud architecture capable of scaling to a billion users at whatever domain name you like best and release the apps to all major app stores/repositories", then go to bed with high confidence that I'd be able to start creating God Drive docs/spreadsheets/presentations for work the following morning.

If that isn't the case, it isn't a criticism of the technology. The fact that we're even seriously discussing the scenario is incredible.

replies(1): >>46187437 #
colechristensen ◴[] No.46187437[source]
Well... they're not oracles and never will be. The things I'm creating follow recognizable development practices. It's not build-once-and-done; it's an elaborate design/build/test cycle that happens in many flavors, because unless you've already done something and are copying it, that's how you create, and language models aren't going to get away from that.
replies(1): >>46187497 #
buu700 ◴[] No.46187497[source]
Whether or not it will one day get there is anyone's guess, but it sounds like we agree that it at least isn't currently there. I brought up that goalpost to illustrate why more efficient models will only improve the aggregate volume and/or quality of output for the foreseeable future, as opposed to creating a glut of supply that destroys the economics of data centers.