
The AI Investment Boom

(www.apricitas.io)
271 points m-hodges | 1 comment | source
apwell23 ◴[] No.41896263[source]
> AI products are used ubiquitously to generate code, text, and images, analyze data, automate tasks, enhance online platforms, and much, much, much more—with usage expected only to increase going forward.

Why does every hype article start with this? Personally, my Copilot usage has gone down while coding. I tried and tried, but it always gets lost and starts spitting out subtle bugs that take me more time to debug than if I had written the code myself.

I always have this feeling of 'this might fail in production in unknown ways' because I might not have checked the code thoroughly enough. I know I am not the only one; my coworkers and friends have expressed similar feelings.
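
A hypothetical illustration (made up for this comment, not from any real project) of the kind of subtle bug I mean, in Python. The function reads fine and passes a quick local test, but state silently leaks between calls in a long-running process:

    # BUG: the default set is created once at definition time and shared
    # by every call, so ids "seen" in one request leak into the next.
    def dedupe_events(events, seen=set()):
        fresh = []
        for e in events:
            if e["id"] not in seen:
                seen.add(e["id"])
                fresh.append(e)
        return fresh

    print(dedupe_events([{"id": 1}]))  # [{'id': 1}]
    print(dedupe_events([{"id": 1}]))  # [] -- silently dropped

Nothing about the first call hints at the problem; it only shows up once the process has served more than one request.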

I even tried the new 'chain of thought' model, which for some reason seems to be even worse.

replies(10): >>41896295 #>>41896310 #>>41896325 #>>41896327 #>>41896363 #>>41896380 #>>41896400 #>>41896497 #>>41896670 #>>41898703 #
bongodongobob ◴[] No.41896295[source]
Well, I have the exact opposite experience. I don't know why people struggle to get good results with LLMs.
replies(4): >>41896332 #>>41896335 #>>41896492 #>>41897988 #
thuuuomas ◴[] No.41896332[source]
Would you feel comfortable pushing generated code to production unaudited?
replies(2): >>41896359 #>>41896360 #
bongodongobob ◴[] No.41896359[source]
Would you feel comfortable pushing human code to production unaudited?
replies(3): >>41896393 #>>41896438 #>>41904561 #
dijksterhuis ◴[] No.41896438[source]
depends on the human.

but i would never push llm generated code. never.

-

edit to add some substance:

if it’s someone who

* does a lot of manual local testing

* adds good unit / integration tests (a minimal sketch of what i mean is at the end of this comment)

* writes clear and well documented PRs

* knows the code style, and when to break it

* tests themselves in a staging environment, independent of any QA team or reviews

* monitors the changes after they’ve gone out

* has repeatedly found things in their own PRs and asked to hold off release to fix them

* is reviewing other people’s PRs and spotting things before they go out

yea, sure, i’ll release the changes. they’re doing the auditing work for me.

they clearly care about the software. and i’ve seen enough to trust them.

and if they got it wrong, well, shit, they did everything good enough. i’m sure they’ll be on the ball when it comes to rolling it back and/or fixing it.

an llm does not do those things. an llm *does not care about your software* and never will.

i’ll take people who give a shit any day of the week.
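
a minimal sketch of the kind of unit test i mean above (python; dedupe_events and the event shape are made up for illustration):

    import unittest

    def dedupe_events(events, seen=None):
        """return events whose ids haven't been seen; fresh state per call."""
        seen = set() if seen is None else seen
        fresh = []
        for e in events:
            if e["id"] not in seen:
                seen.add(e["id"])
                fresh.append(e)
        return fresh

    class TestDedupeEvents(unittest.TestCase):
        def test_removes_duplicates_within_a_batch(self):
            events = [{"id": 1}, {"id": 1}, {"id": 2}]
            self.assertEqual(dedupe_events(events), [{"id": 1}, {"id": 2}])

        def test_calls_do_not_share_state(self):
            # the exact failure mode a shared mutable default would cause
            self.assertEqual(dedupe_events([{"id": 1}]), [{"id": 1}])
            self.assertEqual(dedupe_events([{"id": 1}]), [{"id": 1}])

    if __name__ == "__main__":
        unittest.main()

the second test is the point: it pins down behavior across calls, which is exactly the sort of thing that slips past a casual read of a diff.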

replies(1): >>41896687 #
amonith ◴[] No.41896687{3}[source]
I'd say it depends more on "the production" than on the human. There are legal means to hold people accountable for their actions ("gross negligence" and all that), so you can generally trust that people will fix what they broke, given the chance. If you can afford for production to be broken (e.g. the downtime will just annoy some people), you might as well let your team deploy straight to prod without audits. It's not that rare, actually.