
108 points bertman | 9 comments
1. ebiester ◴[] No.43821540[source]
First, I think it's fair to say that today, an LLM cannot replace a programmer fully.

However, I have two counters:

- First, the rational argument right now is that one person plus money spent on LLMs can replace three or more programmers. This argument comes with a roughly three-year horizon: the current technology will improve, and developers will learn how to use it to its potential.

- Second, the optimistic argument is that a combination of LLMs with larger context windows and other supporting technology around them will let them emulate a theory of mind similar to the average programmer's. Consider Go or chess: we didn't think computers had the theory of mind to beat a human, but they found other ways to win. For humans, Naur's advice stands. We cannot assume it still holds for tools with different strengths and weaknesses than ours.

replies(2): >>43821634 #>>43822188 #
2. ActionHank ◴[] No.43821634[source]
I think that everyone is misjudging what will improve.

There is no doubt it will improve, but if you look at a car, it is still the same fundamental "shape" as a Model T.

There are niceties and conveniences, and efficiency went way up, but we don't have flying cars.

I think we are going to land somewhere in the middle: AI features will eventually find their niche, and people will continue to leverage whatever tools and products are available to build the best thing they can.

I believe that a future of self-writing code pooping out products, AI doing all the other white-collar jobs, and robots doing the rest cannot work. Fundamentally, there is no "business" without customers, and no customers if no one is earning.

replies(1): >>43824673 #
3. rowanseymour ◴[] No.43822188[source]
If you forced me to put a number on how much more productive Copilot makes me, I would say less than 5%, so I'm struggling to see how anyone can just assert that "the rational argument right now" is that I can be 200% more productive.

Maybe as a senior dev working on a large, complex, established project I don't benefit from LLMs as much as others do, because as I and the project mature, productivity becomes less and less correlated with lines of code and more about the ability to comprehend the bigger picture and how different components interact: things that even LLMs with bigger contexts aren't good at.

replies(3): >>43822711 #>>43824435 #>>43824786 #
4. spacemadness ◴[] No.43822711[source]
This is what I tried explaining to our management, who are using lines-of-code metrics on engineers working on an established codebase. Aside from lines of code being a terrible metric in general, they don't seem to understand, or care to understand, the difference.
5. edanm ◴[] No.43824435[source]
> If you forced me to put a number on how much more productive having copilot makes me I think I would say < 5%, so I'm struggling to see how anyone can just assert that "the rational argument right now" is that I can be 200% more productive.

If you're thinking about Copilot, you're simply not talking about the same thing that most people who claim a 200% speedup are talking about. They're talking about either chat-oriented workflows, where you ask Claude or a similar model to generate code wholesale, often in an IDE like Cursor, or coding agents like Claude Code, which can be more productive still.

You might still be right! They might still be wrong! But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it.

(Personally, I'm not 200% more productive with coding agents, for various reasons, but given the number of people I admire who are, I believe this is something that will change, and soon.)

replies(1): >>43825843 #
6. ebiester ◴[] No.43824673[source]
You cannot build a tractor unit (the engine-cab half of a tractor-trailer) with Model T technology, even if they are close.

And the changes will be in the auxiliary features. We will figure out ways to have LLMs understand APIs better without retraining them. We will figure out ways to better focus their context. We will chain LLM requests and contexts in ways that help solve problems better. We will figure out ways to pass context from session to session so that an LLM effectively has a learning memory. And we will figure out our own best practices to emphasize their strengths and minimize their weaknesses. (We will build better roads.)
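The session-to-session memory idea can be sketched in a few lines. This is a hypothetical illustration, not any particular product's design: `call_llm` is a stand-in for whatever model API you use (it just echoes here so the sketch runs without credentials), and `session_memory.json` is an assumed storage location.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("session_memory.json")  # hypothetical storage location

def call_llm(prompt: str) -> str:
    # Stand-in for a real model API call; it just echoes a prefix here
    # so the sketch is runnable without network access or credentials.
    return f"summary of: {prompt[:40]}"

def run_step(task: str) -> str:
    # Load notes persisted by earlier sessions so the model starts
    # with accumulated context rather than a blank slate.
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    answer = call_llm("\n".join(memory + [task]))
    # Chain a second request that compresses what was learned, then
    # persist it for the next session: a crude "learning memory".
    memory.append(call_llm(f"Note for future sessions: {answer}"))
    MEMORY_FILE.write_text(json.dumps(memory))
    return answer
```

Each call to `run_step` chains two requests (answer, then summarize) and grows the persisted memory, which is roughly the shape of the "context from session to session" idea.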

And as much as you want to lean on that comparison: a Model T was uncomfortable, had a range of about 150 miles between fill-ups, and maxed out at 40-45 mph. It also broke down frequently and required significant maintenance. It might take 13-14 days to get a Model T from New York to Los Angeles today, maintenance issues aside, while a modern car could make it reliably in 4-5 days if you drive legally and no more than 10 hours a day.

I too think that self-writing code is not going to happen, but I do think there is a lot of efficiency to be made.

7. ebiester ◴[] No.43824786[source]
I don't think about it in lines of code, but let me say that there are some efficiencies being left on the table.

It helps because I am quicker to reach for a script to automate a process instead of handling it manually: I can bang it out in 15 minutes rather than an hour.

I am more likely to try a quick prototype of a refactor because I can throw the idea at it and just see what it looks like in ten minutes. If the code has good testing and I tell it what not to change, it can do a reasonable job getting 80% done, and I can think through the rest.

It generates mock data quicker than I can, and can write good-enough tests through chat. I can point it at legacy code and it does a good job writing characterization tests, sometimes catching things I don't.
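Characterization tests of the kind mentioned here are quick to sketch. A minimal example, with `legacy_price` as a hypothetical stand-in for the real legacy function: the assertions pin down observed behavior (including surprises) rather than intended behavior, so a later refactor can be checked against them.

```python
# Characterization ("golden master") test sketch: record what legacy
# code currently does before changing it. `legacy_price` is a
# hypothetical stand-in for the real function under test.
def legacy_price(quantity: int, unit_cost: float) -> float:
    # Imagine this is tangled legacy logic we don't fully understand yet.
    total = quantity * unit_cost
    if quantity > 10:
        total *= 0.9  # surprise bulk discount discovered by running it
    return round(total, 2)

def test_characterize_legacy_price():
    # Assertions capture observed behavior, not intended behavior.
    assert legacy_price(1, 5.0) == 5.0
    assert legacy_price(12, 5.0) == 54.0  # the 10% discount above 10 units
```

If a refactor breaks one of these assertions, that is either a regression or a deliberate behavior change to document, which is exactly the signal characterization tests exist to give.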

Sometimes, when I'm tired, I can throw easy tasks at it that require minimal thought and can get through "it would be nice if" issues.

It's not great at writing documentation, but it's pretty good at taking a Slack chat and writing up a howto that I otherwise wouldn't have the time or motivation to produce.

All of those are small, but they definitely add up.

That's today, measured against that 5% estimate. I think the real improvements come as we learn more.

8. geraneum ◴[] No.43825843{3}[source]
> But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it

You can use Claude, Gemini, etc. through Copilot, and you can use the agent mode. Maybe you do or maybe you don't have a well-formed opinion of the parent's workflow.

replies(1): >>43842807 #
9. mhast ◴[] No.43842807{4}[source]
For me personally, there was a very big step going from Copilot to Cursor, much bigger than going from "normal" programming to Copilot.

Copilot seems to perpetually be 3+ months behind the competition.