
371 points ulrischa | 1 comment
bigstrat2003 ◴[] No.43236872[source]
> Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these systems—they dropped them at the first hurdle.

This seems like a very flawed assumption to me. My take is that people look at hallucinations and say "wow, if it can't even get the easiest things consistently right, no way am I going to trust it with harder things".
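A concrete instance of the kind of hallucination being argued about (a sketch; the method name `read_json_lines` is invented for illustration and is not part of `pathlib`):

```python
from pathlib import Path

# An LLM can confidently suggest a plausible-looking but nonexistent
# method; the call fails immediately with an AttributeError.
try:
    rows = Path("data.jsonl").read_json_lines()  # hallucinated method
except AttributeError as err:
    print(err)  # e.g. "'PosixPath' object has no attribute 'read_json_lines'"
```

The article's claim is that this kind of error is loud and immediate; the objection here is about what such errors imply for trust.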

replies(2): >>43236953 #>>43237304 #
JusticeJuice ◴[] No.43236953[source]
You'd be surprised. I know a few people who couldn't really code before LLMs, but who can now brute-force through problems with them. They seem pretty undeterred about 'trusting' the solution: if they ran it and it worked for them, it gets shipped.
replies(1): >>43238109 #
tcoff91 ◴[] No.43238109[source]
Well, I hope this isn't backend code, because the number of vulnerabilities that are going to come from these practices will be staggering
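A sketch of the failure mode being described: code that passes the "I ran it and it worked" test on normal input while shipping a textbook SQL injection (the table and data here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

def get_secret(name):
    # Interpolating user input directly into SQL: gives the right answer
    # for ordinary input, so "it worked" -- but it is injectable.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return [row[0] for row in conn.execute(query)]

print(get_secret("alice"))         # ['s3cret'] -- looks correct
print(get_secret("x' OR '1'='1"))  # ['s3cret', 'hunter2'] -- leaks every row
```

The fix (parameterized queries) is well known, but nothing in "run it and see" surfaces the bug.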
replies(1): >>43239365 #
namaria ◴[] No.43239365[source]
The backlash will be enormous. In the near future there will be fewer competent coders and a tsunami of bad code to fix. If 2020 was annoying for hiring managers, they have no idea how bad it will get.
replies(2): >>43241045 #>>43242619 #
naasking ◴[] No.43242619[source]
> The backlash will be enormous. In the near future there will be fewer competent coders and a tsunami of bad code to fix

These code AIs are just going to get better and better. Fixing this "tsunami of bad code" will consist of passing it through better AIs that will fix most of the problems. I can't help but feel this will mostly be a non-problem in the end.

replies(1): >>43244562 #
dns_snek ◴[] No.43244562[source]
> Fixing this "tsunami of bad code" will consist of passing it through better AIs that will fix most of the problems.

At this point in time there's no obvious path to that reality; it's just unfounded optimism, and I don't think it's particularly healthy. What happens 5, 10, or 20 years down the line when this magical solution doesn't arrive?

replies(1): >>43245077 #
naasking ◴[] No.43245077[source]
I don't know where you're getting your data that there's no obvious path, or that it's unfounded optimism. When the chatbots first came out they were unusable for code; now, only a couple of years later, they're borderline good at many tasks and excellent at others. Every tool has its limitations at any given time, and I think your pessimism is entirely speculative.
replies(2): >>43245590 #>>43264900 #
krupan ◴[] No.43245590[source]
Nobody has to prove a negative, my friend
replies(1): >>43245842 #
naasking ◴[] No.43245842{3}[source]
Anybody making a claim should be able to justify it or admit it's conjecture.
replies(1): >>43247127 #
namaria ◴[] No.43247127{4}[source]
That goes both ways. Extrapolating the line from the past couple of years the way you do isn't much more than an article of faith.
replies(1): >>43249532 #
naasking ◴[] No.43249532{5}[source]
It's more than the past couple of years; steady improvements in machine learning stretch back decades at this point. There is no indication this is stopping or slowing down, quite the contrary. We also already know that better is possible, because the human brain is still better in many ways, and it exists.

You can claim that continued progression is speculative, and some aspects are, but it's hardly "an article of faith", unlike "we've suddenly hit a surprising wall we can't surmount".

replies(2): >>43250152 #>>43251566 #