1455 points nromiun | 20 comments
1. xmprt ◴[] No.45076575[source]
This is one of the reasons I fear AI will harm the software engineering industry. AI doesn't have any of these limitations, so it can write extremely complex and unreadable code that works... until it doesn't. And then no one can fix it.

It's also why I urge junior engineers not to rely on AI so much: even though it makes writing code so much faster, it prevents them from learning the quirks of the codebase, and eventually they'll lose the ability to write code on their own.

replies(5): >>45076587 #>>45076630 #>>45076642 #>>45080089 #>>45082070 #
2. fsckboy ◴[] No.45076587[source]
>AI can write extremely complex and unreadable code that works... until it doesn't. And then

AI can fix it

I'm not defending or encouraging AI, just saying that argument doesn't work

replies(2): >>45076613 #>>45076629 #
3. xmprt ◴[] No.45076613[source]
I'm talking about cases where even AI can't fix it. I've heard a lot of stories where people vibe-code their applications to 80% and then get stuck in a loop where AI is unable to solve their problems.

It's been well documented that LLMs collapse after a certain complexity level.

replies(3): >>45076911 #>>45077465 #>>45077526 #
4. mupuff1234 ◴[] No.45076630[source]
Or maybe it will actually increase the quality of software engineering, because it will shift cognitive load away from low-level design and free it up for higher-level architecture.
replies(1): >>45076675 #
5. pessimizer ◴[] No.45076642[source]
Thus far, and granted I don't have as much experience as others, I just demand that the AI simplify the code until I understand everything it is doing. If I see it doing something in a convoluted way, I demand that it do it the obvious way. If it's adding too many dependencies, I tell it to remove the goofy ones and write it the long way, either with the less capable stdlib function or with something I already depend on.

It's writing something for me, not for itself.

replies(1): >>45077218 #
6. dsego ◴[] No.45076675[source]
That's my fear: it will become a sort of compiler. Prompts will be the code and code will be the assembly, and nobody will even try to understand the details of the generated code unless there is something to debug. This will cause codebases to be less refined, with less abstraction and more duplication and bloat, but we will accept it as progress.
replies(2): >>45076722 #>>45077707 #
7. mupuff1234 ◴[] No.45076722{3}[source]
Funny, I'd say that codebases nowadays usually have too many abstractions.
replies(1): >>45077177 #
8. fsckboy ◴[] No.45076911{3}[source]
You were also talking about the future (as AIs get better and better). As of now, AIs cannot write code too complex for better programmers to understand. Your point holds for armies of low-skill programmers, but you're just raising a fear and haven't come close to proving the case you're trying to make. We already know, as a counterweight, that being first to market with very substandard code generally wins over taking your time to get it right, so why should it be different with AI?
replies(1): >>45079237 #
9. dsego ◴[] No.45077177{4}[source]
Some certainly do. I have also noticed that the format and structure of code depend more on the tools and hardware the developer uses than on some philosophical ideal. A programmer with a big monitor might prefer big blocks of uninterrupted code with long variable names: because of the large screen area, they can see the whole outline and understand the flow of a long chunk of code. Someone on a small 13" laptop might tend to split big pieces of code into smaller chunks so they won't have to scroll so much, because things would otherwise get hidden. The other factor is the IDE or editor in use. A coder who relies on the built-in go-to-symbol feature might not care as much about organizing the folder and file structure, since they can just click on the method name or use the command palette to jump to that piece of code. Their colleague might need the code to live in a well-organized file structure because they click through folders to reach the method.
replies(1): >>45077344 #
10. balder1991 ◴[] No.45077218[source]
Yeah, as much as I don’t like to use AI to write large portions of code, I’m using it to help me learn web development, and it can feel like following a tutorial, but one tailored to the exact project I want.

My current approach is creating something like a Gem on Gemini with custom instructions and the updated source code of the project as context.

I just discuss what I want, and it gives me the code to do it; then I write it by hand, ask for clarifications, and suggest changes until I feel the current approach is actually a good one. So it’s not really “vibe-coding”, though I guess a large number of software developers who care about keeping the project sane must be doing this.

11. mupuff1234 ◴[] No.45077344{5}[source]
Those are all examples of why having a single source of code generation would most likely simplify things - basically we'd have a universal code style and logic, instead of every developer reinventing the wheel.

And let's face it, 95% of software isn't exactly novel.

12. YetAnotherNick ◴[] No.45077465{3}[source]
> AI doesn't have any of these limitation

> AI is unable to solve their problems.

You are contradicting yourself. AI works worse than humans where cognitive load is required, so it can't cross the cognitive-load boundary. And if, say, it becomes better at managing cognitive load in the future, then it doesn't matter anyway: you could ask it to reduce the cognitive load in the code, and it would.

13. CamperBob2 ◴[] No.45077526{3}[source]
Most programmers can't make much sense of the output of a C compiler, either. We'll all be in that boat before long.

(To anticipate the usual reaction when I point that out: if you're going to sputter with rage and say that compilers are deterministic while AI isn't, well... save it for a future argument with someone who can be convinced that it matters.)

replies(1): >>45081029 #
14. lukeschlather ◴[] No.45077707{3}[source]
For me, I think it makes it more likely that I will pick simple abstractions that have good software verification. Right now the idea of a webservice that has been proven correct against a spec is ridiculous: no one has time to write that. But it seems likely that sort of thing will become ordinary. Yes, I won't be able to hold the webservice in my head, but reviewing it and making correct and complete statements about how it functions will be easier.
15. AstroBen ◴[] No.45079237{4}[source]
> We already know as counterweight that being first to the market with very substandard code generally wins

...we do?

Who created short stories as used in TikTok/IG?

The first touch screen phone?

First social media app?

Was Google the first?

I mean, I almost see the opposite of what you're saying...

16. inkyoto ◴[] No.45080089[source]
> It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster […]

I am afraid the cat is out of the bag, and there is no turning back with GenAI and coding – juniors have got a taste of GenAI-assisted coding and will persevere. The best we can do is educate them on how to use it correctly and responsibly.

The approach I have taken involves small group huddles where we talk to each other as equals, and where I emphasise the importance of understanding the problem space, the importance of the depth and breadth of knowledge, i.e. going across the problem domain – as opposed to focusing on a narrow part of it. I do not discourage the junior engineers from using GenAI, but I stress the liability factor and the cost: «if you use GenAI to write code, and the code falls apart in production, you will have a hard time supporting it if you do not understand the generated code, so choose your options wisely». I also highlight the importance of simplicity over complexity of the design and implementation, and that simplicity is hard, although it is something we should strive for as an aspiration and as a delivery target.

I reflect on and adjust the approach based on new observations, the feedback loop (commits) and other indirect signs and metrics – this area is still new, and the GenAI-assisted coding framework is still fledgling.

17. cowboylowrez ◴[] No.45081029{4}[source]
On the other hand, I'd love to read an argument that persuades me that determinism doesn't matter in this case, because I can't form any mental model that makes determinism-ed-ness a non-factor in my decision making. Of course, this comes with the disclaimer that I have no experience as a vibe coder lol
replies(1): >>45085130 #
18. edem ◴[] No.45082070[source]
I'd argue that it has already harmed the industry.
19. CamperBob2 ◴[] No.45085130{5}[source]
Real-world code bases are already far too large for any one programmer to internalize. They might as well be uninspectable. This problem isn't going to get anything but worse in the years to come. Incremental improvements in languages and methodologies won't suffice; future systems will simply have to be designed differently. They will be based on very high-level descriptions that can be used to synthesize tests that programs written and maintained by AI models can satisfy.

Human involvement will end after the first phase, once the required test targets exist.

Basically we are going to have to stop micromanaging our toolchains and start telling them what outcomes we want. Call it "vibe coding lol" or whatever, that's how it's going to work. When you write code in a traditional high-level language, you are telling your compiler how to generate the code that actually gets executed by the CPU. But this object code, again, might as well be completely inaccessible as far as most developers are concerned. If the compiler were to achieve the same result in different, unknowable ways each time it runs, that's a horrifying notion to those same developers... but we all might as well get used to it, because soon there will be no other way forward.

Notice that the math guys are already having to deal with similar problems. Mochizuki or somebody drops a thousand-page proof on the community, and experts at the very highest levels have to waste years looking for bugs in it. They can't go on like this and they know it. They will have to give up the quaint idea of understanding something completely in order to prove or refute it, just as programmers and engineers will have to relinquish control over how our own creations work.

replies(1): >>45089415 #
20. cowboylowrez ◴[] No.45089415{6}[source]
I suspect that some sort of AI will progress to some point of usefulness; the fact that it can do as much as it can right now should count for something. On the other hand, these LLMs still seem to go off into the weeds time and again, and from what I read of folks' experiences, I can't drop the deterministic mindset as easily as folks like you can. Still, it doesn't matter: like you say, these things are here and everyone's throwing time and money at them, so hopefully my concerns are overblown and unfounded.