
108 points by bertman | 2 comments
ebiester No.43821540
First, I think it's fair to say that today, an LLM cannot replace a programmer fully.

However, I have two counters:

- First, the rational argument right now is that one person, plus money spent on LLMs, can replace three or more programmers. This argument has a three-year bound: the current technology will improve, and developers will learn how to use it to its potential.

- Second, the optimistic argument is that an LLM combined with larger context windows and other supporting technology around it will be able to emulate a theory of mind similar to the average programmer's. Consider Go or chess: we didn't think computers had the theory of mind to beat a human, but they found other ways. For humans, Naur's advice stands; we cannot assume it holds for tools with strengths and weaknesses different from ours.

replies(2): >>43821634, >>43822188
1. ActionHank No.43821634
I think that everyone is misjudging what will improve.

There is no doubt it will improve, but if you look at a car today, it is still the same fundamental "shape" as a Model T.

There are niceties and conveniences, and efficiency went way up, but we don't have flying cars.

I think we are going to land somewhere in the middle: AI features will eventually find their niche, and people will continue to leverage whatever tools and products are available to build the best thing they can.

I believe that a future of self-writing code pooping out products, AI doing all the other white-collar jobs, and robots doing the rest cannot work. Fundamentally, there is no "business" without customers, and no customers if no one is earning.

replies(1): >>43824673
2. ebiester No.43824673
You cannot build a tractor unit (the engine-cab half of a tractor-trailer) with Model T technology, even though the two are close.

And the changes will come in the auxiliary features. We will figure out ways to have LLMs understand APIs without retraining them. We will figure out ways to better focus their context. We will chain LLM requests and contexts in ways that solve problems better. We will figure out ways to pass context from session to session, so that an LLM can effectively have a learning memory. And we will work out our own best practices to emphasize their strengths and minimize their weaknesses. (We will build better roads.)
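A minimal sketch of that chaining-and-memory idea, in Python. Everything here is an illustration under stated assumptions: call_llm is a hypothetical stand-in for whatever model API you use, and the JSON file is a stand-in for a real cross-session memory store.

    import json
    from pathlib import Path

    MEMORY = Path("session_memory.json")  # hypothetical cross-session store

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in; wire this to an actual model API."""
        raise NotImplementedError

    def load_notes() -> list[str]:
        # Notes distilled from earlier sessions: the "learning memory".
        return json.loads(MEMORY.read_text()) if MEMORY.exists() else []

    def solve(task: str) -> str:
        notes = load_notes()
        # Step 1: focus the context - pull out only what matters for this task.
        plan = call_llm(
            "Prior notes:\n" + "\n".join(notes) +
            f"\n\nTask: {task}\nList the relevant facts and a short plan."
        )
        # Step 2: chain a second request that works from the focused plan.
        answer = call_llm(f"Plan:\n{plan}\n\nCarry out the plan and give the result.")
        # Step 3: distill a lesson so the next session starts smarter.
        lesson = call_llm(f"Task: {task}\nResult: {answer}\nOne short note worth remembering.")
        MEMORY.write_text(json.dumps(notes + [lesson], indent=2))
        return answer

The point is the shape, not the details: each call gets a narrower, better-curated context than the last, and the distilled note is the memory that carries forward to the next session.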

And as much as you want to say they are close: a Model T was uncomfortable, had a range of about 150 miles between fill-ups, and maxed out at 40-45 mph. It also broke down frequently and required significant maintenance. Even today, setting maintenance issues aside, it might take 13-14 days to drive a Model T from New York to Los Angeles, while a modern car can make it reliably in 4-5 days if you drive legally and don't push more than 10 hours a day.
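A rough back-of-envelope behind those trip times; the ~2,800 road miles and the effective average speeds are my assumptions, not figures from the comment:

    # Back-of-envelope: NY to LA is roughly 2,800 road miles (assumption).
    miles = 2800
    # Model T: ~20 mph effective average once rough roads and 150-mile
    # fill-ups are factored in, at 10 driving hours per day.
    model_t_days = miles / (20 * 10)   # 14.0 days
    # Modern car: ~65 mph average, same 10 hours per day.
    modern_days = miles / (65 * 10)    # ~4.3 days
    print(model_t_days, round(modern_days, 1))  # 14.0 4.3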

I too think that self-writing code is not going to happen, but I do think there is a lot of efficiency to be gained.