
371 points ulrischa | 1 comment | source
bigstrat2003 ◴[] No.43236872[source]
> Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these systems—they dropped them at the first hurdle.

This seems like a very flawed assumption to me. My take is that people look at hallucinations and say "wow, if it can't even get the easiest things consistently right, no way am I going to trust it with harder things".
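For what it's worth, a hallucinated method is the kind of error that surfaces on the very first run. A minimal Python sketch (the method name here is made up, the way an LLM might invent one):

```python
# "reverse_words" does not exist on str -- a plausible-sounding hallucination.
try:
    "hello world".reverse_words()
except AttributeError as err:
    # It fails immediately and loudly, which is why some people call
    # hallucinations a small roadblock -- and why others read them as a
    # sign the model can't be trusted with anything harder.
    print(err)  # 'str' object has no attribute 'reverse_words'
```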

replies(2): >>43236953 #>>43237304 #
JusticeJuice ◴[] No.43236953[source]
You'd be surprised. I know a few people who couldn't really code before LLMs, but now with LLMs they can just brute-force through problems. They seem pretty undeterred about 'trusting' the solution: if they ran it and it worked for them, it gets shipped.
replies(1): >>43238109 #
tcoff91 ◴[] No.43238109[source]
Well I hope this isn't backend code, because the number of vulnerabilities that will come out of these practices is going to be staggering
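A hypothetical example of the "ran it and it worked" failure mode, sketched in Python with sqlite3: the happy-path test passes, so it ships, but the string-built query is injectable.

```python
import sqlite3

# Toy backend lookup against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 0), ("bob", 1)])

def find_user(name):
    # Happy-path input behaves fine, so a quick manual test "works" --
    # but f-string interpolation lets the caller rewrite the query.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

print(find_user("alice"))        # [('alice',)]
print(find_user("' OR '1'='1"))  # both rows: the WHERE clause was rewritten
```

The fix is the boring, well-known one (parameterized queries, e.g. `conn.execute("... WHERE name = ?", (name,))`), which is exactly the kind of thing someone who only checks "did it run" never learns to reach for.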
replies(1): >>43239365 #
namaria ◴[] No.43239365[source]
The backlash will be enormous. In the near future there will be fewer competent coders and a tsunami of bad code to fix. If 2020 was annoying for hiring managers, they have no idea how bad it will become.
replies(2): >>43241045 #>>43242619 #
naasking ◴[] No.43242619[source]
> The backlash will be enormous. In the near future, there will be less competent coders and a tsunami of bad code to fix

These code AIs are just going to get better and better. Fixing this "tsunami of bad code" will consist of just passing it through the better AIs that will easily just fix most of the problems. I can't help but feel like this will be mostly a non-problem in the end.

replies(1): >>43244562 #
dns_snek ◴[] No.43244562[source]
> Fixing this "tsunami of bad code" will consist of just passing it through the better AIs that will easily just fix most of the problems.

At this point in time there's no obvious path to that reality, it's just unfounded optimism and I don't think it's particularly healthy. What happens 5, 10, or 20 years down the line when this magical solution doesn't arrive?

replies(1): >>43245077 #
naasking ◴[] No.43245077[source]
I don't know where you're getting your data that there's no obvious path, or that it's unfounded optimism. When the chatbots first came out they were unusable for code, now they're borderline good for many tasks and excellent at others, and it's only been a couple of years. Every tool has its limitations at any given time, and I think your pessimism is entirely speculative.
replies(2): >>43245590 #>>43264900 #
dns_snek ◴[] No.43264900[source]
What we have now are LLMs that some consider to be good at tasks that are incremental, limited in scope, and require constant human oversight with many iterations.

What you want is an LLM that is exceptionally good at completely rewriting a poorly written codebase spanning tens or hundreds of thousands of lines of code, and that works reliably with minimal oversight and without introducing hundreds of critical, hard-to-diagnose bugs.

Not realizing that these tasks are many orders of magnitude apart in complexity is where the "unfounded optimism" comment comes from.