1479 points by sandslash | 2 comments
abdullin No.44316210
Tight feedback loops are the key to working productively with software. I see that in codebases of up to 700k lines of code (legacy 30-year-old 4GL ERP systems).

The best part is that AI-driven systems are fine with running even tighter loops than a sane human would tolerate.

E.g. running the full linting, testing and E2E/simulation suite after any minor change. Or generating 4 versions of a PR for the same task so that the human can just pick the best one.
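
A rough sketch of such a loop (Python here, with ruff and pytest as stand-ins for whatever linter and test runner a project actually uses; the tool choices are assumptions, not something abdullin specified):

    import subprocess
    import sys

    # Gates to run after every change; the concrete tools are placeholders.
    GATES = [
        ["ruff", "check", "."],          # full lint pass
        ["pytest", "-q"],                # unit tests
        ["pytest", "-q", "tests/e2e"],   # E2E/simulation suite
    ]

    def run_gates() -> bool:
        """Run each gate in order and stop at the first failure."""
        for cmd in GATES:
            if subprocess.run(cmd).returncode != 0:
                print("FAIL:", " ".join(cmd), file=sys.stderr)
                return False  # the agent gets the failure and retries
        return True

    if __name__ == "__main__":
        sys.exit(0 if run_gates() else 1)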

replies(7): >>44316306 #>>44316946 #>>44317531 #>>44317792 #>>44318080 #>>44318246 #>>44318794 #
latexr No.44317792
> Or generating 4 versions of PR for the same task so that the human could just pick the best one.

That sounds awful. A truly terrible and demotivating way to work and produce anything of real quality. Why are we doing this to ourselves and embracing it?

A few years ago, it would have been seen as a joke to say “the future of software development will be to have a million monkey interns banging on a million keyboards, submitting a million PRs, and then choosing one”. Today, it’s lauded as a brilliant business and cost-saving idea.

We’re beyond doomed. The first major catastrophe caused by sloppy AI code can’t come soon enough. The sooner it happens, the better chance we have to self-correct.

replies(6): >>44317876 #>>44317884 #>>44317997 #>>44318175 #>>44318235 #>>44318625 #
diggan No.44317884
> A truly terrible and demotivating way to work and produce anything of real quality

You clearly have strong feelings about it, which is fine, but it would be much more interesting to know exactly why it would be terrible and demotivating, and why it cannot produce anything of quality. And what is "real quality"? Does that mean "fake quality" exists?

> million monkey interns banging on one million keyboards and submit a million PRs

I'm not sure if you misunderstand LLMs or the famous "monkeys writing Shakespeare" thought experiment, but that example is more about randomness and infinity than about probabilistic machines somewhat working towards a goal with some non-determinism.

> We’re beyond doomed

The good news is that we've been doomed for a long time, yet we persist. If you take a look at how the internet is basically held up by duct tape at this point, I think you'd feel slightly more comfortable with how crap absolutely everything is. Maybe 1% of software is actually Good Software, while the rest barely works on a good day.

replies(2): >>44317983 #>>44318020 #
bgwalter No.44318020
If "AI" worked (which fortunately isn't the case), humans would be degraded to passive consumers in the last domain in which they were active creators: thinking.

Moreover, you would have to pay centralized corporations that stole all of humanity's intellectual output for engaging in your profession. That is terrifying.

The current reality is also terrifying: mediocre developers are enabled to produce 10x the volume (not quality). Mediocre execs like that and force everyone to use the "AI" snake oil. The profession becomes even more bureaucratic, tool-oriented and soulless.

People without a soul may not mind.

replies(1): >>44319044 #
diggan No.44319044
> If "AI" worked (which fortunately isn't the case), humans would be degraded to passive consumers in the last domain in which they were active creators: thinking.

"AI" (depending on what you understand that to be) is already "working" for many, including myself. I've basically stopped using Google because of it.

> humans would be degraded to passive consumers in the last domain in which they were active creators: thinking

Why? I still think (I think, at least). Why would I stop thinking just because I have yet another tool in my toolbox?

> you would have to pay centralized corporations that stole all of humanity's intellectual output for engaging in your profession

Assuming we'll forever be stuck in the "mainframe" phase, then yeah. I agree that local models aren't really close to SOTA yet, but the ones you can run locally can already be useful in a couple of focused use cases, and judging by the speed of improvements, we won't always be stuck in this mainframe phase.
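
For what it's worth, "useful in a couple of focused use cases" can be as small as a single call against a locally running model. A minimal sketch assuming an Ollama server on localhost (the model name is a placeholder):

    import json
    from urllib import request

    def ask_local(prompt: str, model: str = "llama3") -> str:
        """Send one prompt to a local Ollama server and return the reply."""
        body = json.dumps(
            {"model": model, "prompt": prompt, "stream": False}
        ).encode()
        req = request.Request(
            "http://localhost:11434/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("Write a one-line commit message for a lint fix."))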

> Mediocre developers are enabled to have a 10x volume (not quality).

In my experience, which has admittedly been mostly in startups and smaller companies, this has always been the case. Most developers seem to prefer producing MORE code over BETTER code; I'm not sure why that is, but I don't think LLMs will change people's minds about this, in either direction. Shitty developers will be shit, with or without LLMs.

replies(1): >>44322276 #
zelphirkalt No.44322276
AI as it currently stands will not come up with that new app idea or that clever, innovative way of implementing an application. It will endlessly rehash the training data it has ingested. Sure, you can tell an AI to spit out a CRUD app, and maybe it will even eventually work in some sane way, but that's not innovative and not necessarily good software. It is blindly copying existing approaches to implement something. That something may even end up working, but it lacks any special sauce to make it special.

Example: I am currently building a web app. My goal is to keep it entirely static: traditional template rendering, just using the web as a GUI framework. If I had just told the AI to build this, it would have thrown tons of JS at the problem, because that is what the mainstream does these days and what it mostly saw as training data. Then my back button would most likely no longer work, I would not be able to use bookmarks properly, the app would not automatically have an API as powerful as the web UI (usable from any script), and the whole thing would have gone to shit.
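
To make the contrast concrete, here is a minimal sketch of that server-rendered style (Flask is an assumption for illustration, not necessarily what zelphirkalt uses): all state lives in the URL, so the back button, bookmarks, and scripted access work for free.

    from flask import Flask, render_template_string, request

    app = Flask(__name__)

    PAGE = """
    <h1>Results for {{ q }}</h1>
    <ul>{% for r in rows %}<li>{{ r }}</li>{% endfor %}</ul>
    """

    @app.get("/search")
    def search():
        q = request.args.get("q", "")                  # all state lives in the URL
        rows = [f"{q} result {i}" for i in range(3)]   # placeholder data
        return render_template_string(PAGE, q=q, rows=rows)

    # /search?q=foo is bookmarkable, back-button friendly, and just as
    # usable from curl or any script -- the "free API" described above.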

If the AI tools were as good at what I am doing as I am, and I relied on that, then I would not have spent time thinking through the principles of my app, as I did when coming up with it myself. As it stands, the AI would not even have managed to prevent duplicate results from showing up in the UI: I had a GPT-4 session about how to prevent that, none of the suggested answers worked, and in the end I did what I had thought I might have to do when I first discovered the issue.
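
(The generic pattern for that class of duplicate-results bug is deduplication by a stable key while preserving order. A sketch of that pattern follows; whether it matches the commenter's actual fix is unknown.)

    def dedupe(rows, key=lambda r: r):
        """Drop later duplicates, keeping first occurrences in order."""
        seen = set()
        out = []
        for row in rows:
            k = key(row)
            if k not in seen:
                seen.add(k)
                out.append(row)
        return out

    assert dedupe([1, 2, 2, 3, 1]) == [1, 2, 3]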

replies(1): >>44322973 #
diggan No.44322973
> The AI as it is currently, will not come up with that new app idea or that clever innovative way of implementing an application

Who has claimed that they can do that sort of stuff? I don't think my comment hints at that, nor does the talk in the submission.

You're absolutely right with most of your comment, and seem to just be rehashing what Karpathy talks about, but with different words. Of course it won't create good software unless you specify exactly what "good software" means for you, and tell it that. Of course it won't know you want "traditional static template rendering" unless you tell it so. Of course it won't create an API you can use from anywhere unless you say so. Of course it'll follow what's in the training data. Of course it won't automatically implement whatever features you imagine your project should have, unless you tell it about them.

I'm not sure if you're just expanding on the talk but chose my previous comment to attach it to, or if you're replying to something I said in my comment.