
627 points cratermoon | 6 comments
simonw ◴[] No.44461833[source]
Looks like I was the inspiration for this post then. https://bsky.app/profile/simonwillison.net/post/3lt2xbayttk2...

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".

I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.

replies(4): >>44461877 #>>44461949 #>>44462734 #>>44464002 #
1. ninetyninenine ◴[] No.44461877[source]
You have to realize that we're only a couple of years into widespread adoption of LLMs as agentic coding partners. It's obvious to everyone, including you, that LLMs currently cannot replace coders.

People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a very different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs will eliminate programming as a career. If not in 5 years... give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, where it takes on more and more responsibility until one day it's doing everything.

I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:

>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

I mean, it's an obvious and complete misrepresentation. People are talking about the future, not the status quo, and we ALL know this, yet we still make comments like that.

replies(2): >>44461939 #>>44462158 #
2. simonw ◴[] No.44461939[source]
The more time I spend using LLMs for code (and being impressed at how much better they are compared to six months ago) the less I worry for my career.

Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.

My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.

An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.

No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.

It's also about 80% of what I do as a software developer already.

3. indigoabstract ◴[] No.44462158[source]
I don't know what will come in the future, but to me it's obvious that no variation of LLMs, no matter how advanced, will replace a skilled human who knows what they're doing.

Through no fault of their own, they're literally blind. They don't have eyes to see, ears to hear, or fingers to touch and feel, and they have no clue whether what they've produced is any good for the original purpose. They are still only (amazing) tools.

replies(1): >>44464897 #
4. ninetyninenine ◴[] No.44464897[source]
LLMs produce video and audio data and can parse and change audio and visual data. They hear, see, and read, and the only reason they can't touch is that we don't have the training data.

You do not know that LLMs in the future can't replace humans. You can only say that right now they can't. In the future the structure of the LLM may be modified, or it may become one module among several that are required for AGI.

These are all plausible possibilities. But you have narrowed it all down to a “no”: LLMs are just tools with no future.

The real answer is that nobody knows. But there are legitimate possibilities here. We have a 5-year trend line projecting further growth into the future.

replies(1): >>44465654 #
5. indigoabstract ◴[] No.44465654{3}[source]
> In the future the structure of the LLM may be modified, or it may become one module among several that are required for AGI.

> The real answer is that nobody knows.

This is all just my opinion of course, but it's easy to assume that an LLM that knows all there is to know about every subject written in books and on the internet would be enough to do any office work that can be done with a computer. Yet strangely enough, it isn't.

At this point they still lack the necessary feedback mechanisms (the senses) and the ability to learn on the job, so they can't function independently on their own. And people have to trust them not to fail in some horrible way, and things like that. Without all of these they can still be very helpful, but they can't really "replace" a human in most activities. Also, some people possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.

I agree that nobody knows the answer. If and when they arrive at that point, the LLM part will probably be just a tiny fraction of their functioning. Maybe we can start worrying then. Or maybe we could just find something else to do. Because people aren't tools, even when economically worthless.

replies(1): >>44467160 #
6. ninetyninenine ◴[] No.44467160{4}[source]
I disagree. The output of an LLM is a crapshoot: it might work, it might not, maybe 40 to 60 percent of the time. That in itself tells us it's not a small component of something bigger; it's likely a large component and the core structure of what is to come. We've closed the gap about halfway.