440 points | pseudolus | source
muldvarp ◴[] No.45052736[source]
Brutal that software engineering went from one of the least automatable jobs to a job that is universally agreed to be "most exposed to automation".

Was good while it lasted though.

replies(15): >>45052803 #>>45052830 #>>45052911 #>>45052938 #>>45053022 #>>45053037 #>>45056787 #>>45056886 #>>45057129 #>>45057182 #>>45057448 #>>45057657 #>>45057837 #>>45058585 #>>45063626 #
grim_io ◴[] No.45052911[source]
Maybe it's just the nature of being early adopters.

Other fields will get their turn once a baseline of best practices is established that the consultants can sell training for.

In the meantime, memes aside, I'm not too worried about being completely automated away.

These models are extremely unreliable when unsupervised.

It doesn't feel like that will change fundamentally with just incrementally better training.

replies(2): >>45053115 #>>45053192 #
muldvarp ◴[] No.45053192[source]
> These models are extremely unreliable when unsupervised.

> It doesn't feel like that will change fundamentally with just incrementally better training.

I could list several things that I thought wouldn't get better with more training and then got better with more training. I don't have any hope left that LLMs will hit a wall soon.

Also, LLMs don't need to be better programmers than you are; they only need to be good enough.

replies(1): >>45053376 #
grim_io ◴[] No.45053376[source]
No matter how much better they get, I don't see any actual sign of intelligence, do you?

There is a lot of handwaving around the definition of intelligence in this context, of course. My definition would be actual on-the-job learning and reliability I don't need to second-guess every time.

I might be wrong, but those two requirements don't seem compatible with the current approach and hardware limitations.

replies(1): >>45053643 #
muldvarp ◴[] No.45053643[source]
Intelligence doesn't matter. To quote "Superintelligence: Paths, Dangers, Strategies":

> There is an important sense, however, in which chess-playing AI turned out to be a lesser triumph than many imagined it would be. It was once supposed, perhaps not unreasonably, that in order for a computer to play chess at grandmaster level, it would have to be endowed with a high degree of general intelligence.

The same thing might happen with LLMs and software engineering: LLMs will not be considered "intelligent" and software engineering will no longer be thought of as something requiring "actual intelligence".

Yes, current models can't replace software engineers. But they are getting better at it with every release. And they don't need to be as good as actual software engineers to replace them.

replies(3): >>45054263 #>>45056927 #>>45057050 #
grim_io ◴[] No.45054263[source]
There is a reason chess was "solved" so fast. The game maps very nicely onto computers in general.

A grandmaster-level chess-playing AI is no better at driving a car than my calculator from the 90s.

replies(1): >>45054356 #
muldvarp ◴[] No.45054356[source]
Yes, that's my point. AI doesn't need to be general to be useful. LLMs might replace software engineers without ever being "general intelligence".
replies(1): >>45054798 #
grim_io ◴[] No.45054798[source]
Sorry for not making my point clear.

I'm arguing that the category of the problem matters a lot.

Chess is, compared to self-driving cars and (in my opinion) programming, very limited: fixed rules, a fixed board size, and no "fog of war".

replies(3): >>45056700 #>>45058535 #>>45059281 #
romeros1 ◴[] No.45056700[source]
"It is difficult to get a man to understand something when his salary depends upon his not understanding it" ~ Upton Sinclair

Your stance was the widely held stance not just on Hacker News but also among the leading proponents of AI when ChatGPT was first launched. A lot of people thought the hallucination problem simply couldn't be overcome, and that LLMs were nothing but glorified stochastic parrots.

Well, things have changed quite dramatically lately. AI could plateau. But the pace at which it is improving is pretty scary.

Regardless of real "intelligence" or not, the current reality is that AI can already do quite a lot of traditional software work. This wasn't even remotely true six months ago.

replies(3): >>45056975 #>>45057483 #>>45057721 #
anthem2025 ◴[] No.45056975[source]
Ironic to post that quote about AI, considering the hype comes pretty much entirely from people who stand to make obscene wealth from it.