
418 points | floverfelt | 2 comments
jeppester No.45057505
In my company I feel that we're getting totally overrun with code that's 90% good, 10% broken, and almost exactly what was needed.

We are producing more code, but quality is definitely taking a hit now that no-one is able to keep up.

So instead of slowly inching towards the result, we get 90% of the way there in no time, and then spend lots and lots of time getting to know the code, fixing it, and fine-tuning everything.

Maybe we ARE faster than before, but it wouldn't surprise me if the two approaches are closer than one might think.

What bothers me the most is that I much prefer building stuff to fixing code I'm not intimately familiar with.

replies(8): >>45057537 #>>45058508 #>>45061118 #>>45061272 #>>45061732 #>>45062347 #>>45065856 #>>45070745 #
utyop22 No.45058508
"but quality is definitely taking a hit now that no-one is able to keep up."

And it's going to get worse! So please explain to me how, on net, you are going to be better off? You're not.

I think most people haven't taken a decent economics class and don't deeply understand the notion of trade-offs and the fact that there is no free lunch.

replies(4): >>45060469 #>>45060956 #>>45065064 #>>45065157 #
globular-toast No.45060956
Yep, my strong feeling is that the net benefit of all of this will be zero. The time you have to spend holding the LLM's hand is almost equal to the time you would have spent writing it yourself. But then you've got a codebase you didn't write, and we all know hunting bugs in someone else's code is way harder than in code you had a part in designing/writing.

People are honestly just drunk on this thing at this point. The sunk cost fallacy has people pushing on (i.e. spending more time) when LLMs aren't getting it right. People are happy to trade everything else for convenience; just look at junk food, where people trade away flavour and their health. And ultimately we are in a time when nobody is building for the future, it's all get-rich-quick schemes: squeeze, then get out before anyone asks why the river ran dry. LLMs are like the perfect drug for our current society.

Just look at how technology has helped us in the past decades. Instead of launching us towards some kind of Star Trek utopia, most people now just work more for less!

replies(2): >>45061660 #>>45061670 #
jama211 No.45061660
Only when purely vibe coding. AI currently saves a LOT of time if you get it to generate boilerplate, diagnose bugs, or assist with sandboxed issues.

The proof is in the pudding. The work I do takes me half as long as it used to and is just as high in quality, even though I manage and carefully curate the output.

replies(2): >>45063095 #>>45069463 #
globular-toast No.45069463
I don't write much boilerplate anyway. I long ago figured out ways not to do that (I use a computer to do repetitive tasks for me). So when people talk about boilerplate, I feel like they're only just catching up to me, not surpassing me.

As for catching bugs, maybe, but I feel like it's pot luck. Sometimes it can find something; sometimes it's just complete rubbish. It's sometimes worth giving it a spin, but I'm still not convinced it's saving that much. Then again, I don't spend much time hunting down bugs in unfamiliar code bases.

replies(1): >>45077494 #
jama211 No.45077494
Like any tool, it has use cases where it excels and use cases where it’s pretty useless.

Unfamiliar code bases are a great example: if it’s able to find the bug, it can do so almost instantly, as opposed to a human spending ages reading through the code base. But someone who is intimately familiar with a code base will probably solve the problem way faster, especially if it’s subtle.

Also, say your job is taking image designs and building them in HTML/CSS: just feeding it an image, getting it to dump out an HTML/CSS framework, and then cleaning up the details yourself will save you a lot of time. But on the flip side, if you need to write safety-critical software where every line matters, you’ll be way faster on your own.

People want to give a black-and-white “AI is bad” or “AI is great”, but the truth _as always_ is “it depends”. Humans aren’t very good at “it depends”.