
38 points by 01-_- | 8 comments
1. ssivark ◴[] No.44396070[source]
If I have decent autocomplete where I type half the characters and the AI predicts the other half, that technically satisfies this metric.

Notice the loophole: there’s no qualification of how much problem context the AI started from. Most of the problem -> code “work” would still be done by a human in that situation — even if technically 50% of the code is “AI generated” [because the human did all the hard work of generating the context necessary for those tokens, including the preceding tokens of code].

As the saying goes… lies, damned lies, and statistics.
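The loophole can be made concrete with a toy sketch (nothing from the article; the function, the editing session, and the numbers are all made up): a naive character-count metric credits every accepted autocomplete suggestion to the AI, no matter how much of the context a human supplied.

```python
# Hypothetical sketch of a naive "% AI-generated" metric based on raw
# character counts. Every autocomplete acceptance counts as "AI-generated",
# even though the human supplied all the surrounding context.

def ai_generated_fraction(events):
    """events: list of (source, text) pairs, where source is 'human' or 'ai'."""
    human = sum(len(text) for source, text in events if source == "human")
    ai = sum(len(text) for source, text in events if source == "ai")
    total = human + ai
    return ai / total if total else 0.0

# The human designs the API and writes the hard part; autocomplete
# fills in one line of boilerplate.
session = [
    ("human", "def parse_config(path):"),      # human chooses name & signature
    ("ai",    "\n    with open(path) as f:"),  # tab-completed boilerplate
    ("human", "\n        return json.load(f)"),
]
print(f"{ai_generated_fraction(session):.0%} 'AI generated'")
```

By this count roughly a third of the characters are "AI generated", even though the human did essentially all of the problem-to-code work.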

replies(3): >>44396426 #>>44397536 #>>44399676 #
2. nunez ◴[] No.44396426[source]
Funnily and ironically enough, I turned off autocorrect on iOS after it moved to a GPT-2 model, because it grew increasingly inaccurate the more I used it. (The Markov chain implementation that preceded it wasn't much better, though I remember autocorrect on iOS being significantly better many years ago.)
3. AndrewKemendo ◴[] No.44397536[source]
> there’s no qualification of how much problem context the AI started from

Infer it from the article:

“as much as 30% to 50% of the company’s work is now completed by AI”

There. That’s not nothing.

You can and should call bs on all corporate claims, but this idea that coding agents at scale don't work, or are just total fluff, is just wrong.

What I’m seeing is that people over 25 who like to write code and have spent their lives “perfecting” their environment and code generation process, can’t stand that businesses prefer lower quality code that’s created faster and cheaper than their “perfect” code.

Software engineers (and engineers generally) are closer economically to day laborers than theoretical physicists - but we/they refuse to believe that.

This is why unionization matters but you can’t unionize divas until they actually start losing jobs.

replies(3): >>44397811 #>>44399236 #>>44400143 #
4. belter ◴[] No.44397811[source]
AI is the intern now, still does 50% of the work, nobody trusts it with anything important, gets praised by the CEO for “transforming the business.” :-)
replies(1): >>44398660 #
5. riku_iki ◴[] No.44398660{3}[source]
Many humans can't be trusted with important work either; that's why we have all the job interviewing and performance review processes, which are messy, costly, and inefficient.

With AI, companies can build rigid analytics/tests/benchmarks, which could be used at scale.

6. pier25 ◴[] No.44399236[source]
It's not about perfect code but maintainable code that can be debugged when (not if) things go wrong.
7. beefnugs ◴[] No.44399676[source]
yeah, no one is inserting "AI generated: 46%" into code commits

This guy asked his 20 developers "how many of you use AI?" and 10 said yes. So he does the CEO thing and tells the world 50%: oh my god, we can fire so many people now!

8. tuckerman ◴[] No.44400143[source]
The big question is whether the work in question was conceived of, designed, and scoped by a human, with 30% to 50% of the doc/code written by a human and then finished by AI, or whether there is some 30% to 50% of the work where all of that was owned by the AI.

Both are useful, but the former situation is one that just makes good engineers more useful/in demand imho.