
371 points ulrischa | 3 comments
bigstrat2003 ◴[] No.43236872[source]
> Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these systems—they dropped them at the first hurdle.

This seems like a very flawed assumption to me. My take is that people look at hallucinations and say "wow, if it can't even get the easiest things consistently right, no way am I going to trust it with harder things".

replies(2): >>43236953 #>>43237304 #
JusticeJuice ◴[] No.43236953[source]
You'd be surprised. I know a few people who couldn't really code before LLMs, but now with LLMs they can just brute-force through problems. They seem pretty undeterred about 'trusting' the solution: if they ran it and it worked for them, it gets shipped.
replies(1): >>43238109 #
tcoff91 ◴[] No.43238109[source]
Well, I hope this isn't backend code, because the number of vulnerabilities that will come from these practices will be staggering.
replies(1): >>43239365 #
namaria ◴[] No.43239365[source]
The backlash will be enormous. In the near future, there will be fewer competent coders and a tsunami of bad code to fix. If 2020 was annoying to hiring managers, they have no idea how bad it will become.
replies(2): >>43241045 #>>43242619 #
naasking ◴[] No.43242619[source]
> The backlash will be enormous. In the near future, there will be less competent coders and a tsunami of bad code to fix

These code AIs are just going to get better and better. Fixing this "tsunami of bad code" will consist of just passing it through the better AIs that will easily just fix most of the problems. I can't help but feel like this will be mostly a non-problem in the end.

replies(1): >>43244562 #
dns_snek ◴[] No.43244562[source]
> Fixing this "tsunami of bad code" will consist of just passing it through the better AIs that will easily just fix most of the problems.

At this point in time there's no obvious path to that reality, it's just unfounded optimism and I don't think it's particularly healthy. What happens 5, 10, or 20 years down the line when this magical solution doesn't arrive?

replies(1): >>43245077 #
naasking ◴[] No.43245077[source]
I don't know where you're getting your data that there's no obvious path, or that it's unfounded optimism. When the chatbots first came out they were unusable for code, now they're borderline good for many tasks and excellent at others, and it's only been a couple of years. Every tool has its limitations at any given time, and I think your pessimism is entirely speculative.
replies(2): >>43245590 #>>43264900 #
krupan ◴[] No.43245590[source]
Nobody has to prove a negative, my friend
replies(1): >>43245842 #
naasking ◴[] No.43245842[source]
Anybody making a claim should be able to justify it or admit it's conjecture.
replies(1): >>43247127 #
namaria ◴[] No.43247127[source]
It goes both ways. Extrapolating the trend line of the past couple of years in some particular direction isn't much more than an article of faith.
replies(1): >>43249532 #
naasking ◴[] No.43249532[source]
It's more than the past couple of years; steady improvements in machine learning stretch back decades at this point. There is no indication this is stopping or slowing down, quite the contrary. We also already know that better is possible, because the human brain is still better in many ways, and it exists.

You can claim that continued progression is speculative, and some aspects are, but it's hardly "an article of faith", unlike "we've suddenly hit a surprising wall we can't surmount".

replies(2): >>43250152 #>>43251566 #
Izkata ◴[] No.43250152{3}[source]
> steady improvements in machine learning stretch back decades at this point

Except that's not how it's actually gone. It's more like, improvements happen in erratic jumps as new methods are discovered, then improvements slow or stall out when the limits of those methods are reached.

replies(1): >>43254300 #
naasking ◴[] No.43254300{4}[source]
No, that's just how it looked from the outside if you weren't tracking closely. Even emergent abilities are a mirage when you look at the actual data:

https://hai.stanford.edu/news/ais-ostensible-emergent-abilit...

replies(1): >>43257621 #
Izkata ◴[] No.43257621{4}[source]
I'm not talking "past 3 years", I'm talking "past 50 years": https://en.m.wikipedia.org/wiki/AI_winter

And really, there was a version of what I'm talking about in the shorter timespan with LLMs - OpenAI's GPT models existed for several years before someone got the idea to put one behind a chat interface, and the popularity / apparent capability exploded a few years ago.

replies(1): >>43275293 #
naasking ◴[] No.43275293[source]
> OpenAI's GPT models existed for several years before someone got the idea to put it behind a chat interface and the popularity / apparent capability exploded a few years ago.

That's exactly what I said in the post you responded to: there weren't erratic jumps, there was steady progress over decades.

replies(1): >>43292949 #
Izkata ◴[] No.43292949[source]
You keep switching back and forth between short and long time periods, as if the rapid, steady growth of the past couple of years is how it's gone for decades. This is not the case: we're currently in a short* period of rapid growth after a decade or so of stagnation. That's what "erratic" means; it has not been steady. Over the past several decades there have been several times where we've seen rapid growth for a short period, then it hits a wall and we see very little or no growth until the next breakthrough.

* Granted we don't know for sure it'll be short this time, but hints are that we're starting to hit that wall with improvements slowing down.