628 points cratermoon | 14 comments
1. simonw ◴[] No.44461833[source]
Looks like I was the inspiration for this post then. https://bsky.app/profile/simonwillison.net/post/3lt2xbayttk2...

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".

I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.

replies(4): >>44461877 #>>44461949 #>>44462734 #>>44464002 #
2. ninetyninenine ◴[] No.44461877[source]
You have to realize that we're only a couple of years into widespread adoption of LLMs as agentic coding partners. It's obvious to everyone, including you, that LLMs currently cannot replace coders.

People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a very different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs will eliminate programming as a career. If not in 5 years, give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, where AI takes on more and more responsibilities until one day it's doing everything.

I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:

>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

I mean, it's an obvious and complete misrepresentation. People are talking about the future, not the status quo, and we ALL know this, yet we still make comments like that.

replies(2): >>44461939 #>>44462158 #
3. simonw ◴[] No.44461939[source]
The more time I spend using LLMs for code (and being impressed at how much better they are than six months ago), the less I worry for my career.

Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.

My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.

An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.

No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.

It's also about 80% of what I do as a software developer already.

4. squidbeak ◴[] No.44461949[source]
The thing is that at this stage, LLMs, and perhaps AI in other forms, also have careers. Right now they're junior developers. But whose career will develop faster or go further? Theirs, or the new programmer's?
replies(1): >>44462760 #
5. indigoabstract ◴[] No.44462158[source]
I don't know what will come in the future, but to me it's obvious that no variation of LLMs, however advanced, will replace a skilled human who knows what they're doing.

Through no fault of their own, they're literally blind. They don't have eyes to see, ears to hear, or fingers to touch and feel, and they have no clue whether what they've produced is any good for its original purpose. They are still only (amazing) tools.

replies(1): >>44464897 #
6. rcxdude ◴[] No.44462734[source]
Also, if you don't have a table saw, just cutting a straight line efficiently and accurately is a fairly important baseline skill in carpentry. That becomes much less of an issue with a table saw, which makes some of the carpenter's skillset less important for getting good results (especially if you then only make things consisting of straight lines, so you also don't need to handle more complex shapes well). I think it's a pretty decent analogy.
7. wiseowise ◴[] No.44462760[source]
Who cares?
replies(1): >>44465127 #
8. latexr ◴[] No.44464002[source]
> Looks like I was the inspiration for this post then.

You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):

> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week)

The post had already been started; your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).

> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

Despite your restating it, your point still reads to me as the opposite of what you claim to have intended. The invention of the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It’s the grifts and the irresponsible shoving of it down everyone’s throats that are the problem. That’s why the comparison fails: you’re juxtaposing things which aren’t even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.

replies(1): >>44464642 #
9. simonw ◴[] No.44464642[source]
For your benefit I will make two minor edits to things I have said.

> Looks like I was the inspiration for this post then

I replace that with:

> Looks like I was the inspiration for finishing this post then

And this:

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

I can rephrase as:

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.

replies(1): >>44464967 #
10. ninetyninenine ◴[] No.44464897{3}[source]
LLMs produce video and audio data and can parse and modify audio and visual data. They hear, see, and read, and the only reason they can’t touch is that we don’t have the training data.

You do not know that LLMs in the future can’t replace humans. You can only say that right now they can’t. In the future the structure of the LLM may be modified, or it may become one module among several required for AGI.

These are all plausible possibilities. But you have narrowed it all down to a “no”: LLMs are just tools with no future.

The real answer is that nobody knows. But there are legitimate possibilities here. We have a five-year trend line projecting further growth into the future.

replies(1): >>44465654 #
11. latexr ◴[] No.44464967{3}[source]
> For your benefit

If that’s your true impetus, please don’t bother. There’s nothing that benefits me in your words being clearer and less open to misinterpretation. You are, of course, completely welcome to disagree with and ignore my suggestions.

> thanks to the introduction of the table saw.

That makes absolutely no difference at all. And it doesn’t matter anymore either: the harm to your point is already done, and no one’s going back to reinterpret it now. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, it’s entirely up to you what you do with the feedback.

replies(1): >>44465446 #
12. squidbeak ◴[] No.44465127{3}[source]
The person who'd chosen programming as a career, if AI overtakes human programmers.
13. indigoabstract ◴[] No.44465654{4}[source]
> In the future the structure of the LLM may be modified, or it may become one module among several required for AGI.

> The real answer is that nobody knows.

This is all just my opinion of course, but it's easy to expect that an LLM which knows everything there is to know about every subject in books and on the internet would be enough to do any office work that can be done with a computer. Yet strangely enough, it isn't.

At this point they still lack the necessary feedback mechanisms (the senses) and the ability to learn on the job, so they can't yet function independently on their own. People also have to trust them not to fail in some horrible way, and things like that. Without all of this they can still be very helpful, but they can't really "replace" a human in most activities. And some people seem to possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.

I agree that nobody knows the answer. If and when they do arrive at that point, the LLM part will probably be just a tiny fraction of how they function. Maybe we can start worrying then. Or maybe we could just find something else to do, because people aren't tools, even when they're economically worthless.

replies(1): >>44467160 #
14. ninetyninenine ◴[] No.44467160{5}[source]
I disagree. The output of an LLM is a crapshoot: it might work or it might not, maybe 40 to 60 percent of the time. That in itself tells us it’s not a small component of something bigger. It’s likely a large component and the core structure of what is to come. We’ve already closed the gap about halfway.