
75 points throwaway-ai-qs | 17 comments

Between code reviews and AI-generated rubbish, I've had it. Whether it's people relying on AI to write pull request descriptions (which are crap, by the way) or using it to generate tests, I'm sick of it.

Over the past year, I've been doing a tonne of consulting. In the last three months alone I've watched at least 8 companies embrace AI generation for coding, testing, and code reviews. Honestly, the best suggestions I've seen come from linters in CI and spell checkers. Is this what we've come to?

My question for my fellow HNers: is this what the future holds? Is this everywhere? I think I'm finally ready to get off the ride.

1. barrell ◴[] No.45279108[source]
I'm not convinced it's what the future holds for three main reasons:

1. I was a pretty early adopter of LLMs for coding. It got to the point where most of my code was written by an LLM. Eventually this tapered off week by week to the level it is now... which is literally 0. It's more effort to explain a problem to an LLM than it is to just think it through. I can't imagine I'm that special, just a year ahead of the curve.

2. The maintenance burden of code that has no real author is felt months or years after the code is written. Organizations then react a few months or years after that.

3. The quality is not getting better (see GPT-5) and the cost is not going down (see Claude Code, Cursor, etc.). Eventually the bills will come due, and at the very least that will reduce the amount of code generated by an LLM.

I very easily could be wrong, but I think there is hope, and when anyone tells me "it's the future" I just hear "it's the present". No one knows what the future holds.

I'm looking for another technical co-founder (in addition to me) to come work on fun, hard problems in a hand-written Elixir codebase (the frontend is ClojureScript, because <3 functional programming), if anyone is looking for a non-LLM-coded product! https://phrasing.app

replies(3): >>45279156 #>>45279259 #>>45279390 #
2. koakuma-chan ◴[] No.45279156[source]
I agree on all points, but I also have PTSD from the pre-LLM era, when people kept telling me my code was garbage because it wasn't SOLID or whatever. I prefer the way it is now.
replies(2): >>45279345 #>>45279376 #
3. james2doyle ◴[] No.45279259[source]
Totally agree. I use it for chores that are well understood (generate an initial README, document the changes from this diff, summarize this release, scaffold out a new $LANG/$FRAMEWORK project). I have also been using it to work in languages that I can write, and have written in the past, but am out of practice with (Python), though I'm still babysitting it.

I recently used it to write a Sublime Text plugin for me, and I forked a Chrome extension and added a bunch of features to it. Both are open source and pretty trivial projects.

However, I rarely use it to write code for me in client projects. I need to know and understand everything going out that we are getting paid for.

replies(1): >>45280211 #
4. majorbugger ◴[] No.45279345[source]
And what does the LLM have to do with your PTSD?
replies(1): >>45279388 #
5. skydhash ◴[] No.45279376[source]
SOLID is a nice set of principles. And like any principles, there are valid reasons to break them. To use them or not is a decision best taken after you've become a master, when you know the tradeoffs and costs.

Learn the rules first, then learn when to break them.
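
As a small illustration (a Python sketch with hypothetical names, not anything prescriptive): the first version follows the single-responsibility principle to the letter, while the second knowingly breaks it because, in a throwaway script, the extra indirection isn't worth it.

    # By the book: parsing and validation live in separate classes,
    # so each can change (and be tested) independently.
    class ConfigParser:
        def parse(self, text: str) -> dict:
            return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

    class ConfigValidator:
        def validate(self, config: dict) -> bool:
            return "host" in config and "port" in config

    # Breaking the rule on purpose: for a small script, one function that
    # parses and validates is simpler than two classes and a seam between them.
    def load_config(text: str) -> dict:
        config = dict(line.split("=", 1) for line in text.splitlines() if "=" in line)
        if "host" not in config or "port" not in config:
            raise ValueError("config must define host and port")
        return config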

replies(2): >>45279746 #>>45280100 #
6. koakuma-chan ◴[] No.45279388{3}[source]
It's relevant because those assholes will no longer tell me that I should have written an abstract factory or some shit. AI-generated code is so fucking clean and SOLID.
7. risyachka ◴[] No.45279390[source]
This.

If someone says "most of my code is AI", there are only 3 reasons for it:

1. They do something very trivial on a daily basis (and that's not a bad thing, you just need to be clear about it).

2. The skill is not there, so they have to use AI; otherwise it would be faster to DIY it than to explain the complex case, and how to solve it, to the AI.

3. They prefer explaining to an LLM over writing the code themselves. Again, no issue with this. But we must be clear here: it's not faster. It's just that someone else is writing the code for you while you explain to it in detail what to do.

replies(3): >>45280268 #>>45280826 #>>45282437 #
8. koakuma-chan ◴[] No.45279746{3}[source]
This is idealistic. Do you actually sit down and evaluate whether the code is SOLID, or is it more like you're just vibe-checking it? In that case it doesn't actually matter whether you call it SOLID or DRY or whatever letters of the alphabet you prefer. Meanwhile your project is just a PostgreSQL proxy.
replies(1): >>45279871 #
9. skydhash ◴[] No.45279871{4}[source]
These are principles, not mathematical equations. It's like drawing a human face. The general rule is that, viewed from the front, the eyes are spaced one eye-length apart. Or that the intervals between the chin, the base of the nose, the eyebrows, and the hairline are equal. That doesn't fit every face, and artists do break these rules. But a beginner breaks them for the wrong reasons.

So there are a lot of heuristics in code quality. But sometimes it's just plain bad.

10. mattmanser ◴[] No.45280100{3}[source]
I actually sat down to really learn what SOLID meant a few years ago when I was getting a new contract and it came up in a few job descriptions. Must have some deep wisdom if everyone wants SOLID code, right?

At least two parts of the SOLID acronym are basically anachronisms, nonsense in modern coding (O + L). And I is basically handled for you with DI frameworks. D doesn't mean what most people think it does.

S is the only bit left and it's pretty much open to interpretation.

I don't really see them as anything meaningful; these days it boils down to "make your classes have a single responsibility". It's on the level of KISS, but less general.
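
For what it's worth, here's a tiny Python sketch of what the D (dependency inversion) is usually taken to mean, with made-up names purely for illustration: high-level code depends on an abstraction rather than a concrete class, which is also roughly the seam a DI framework wires up for you.

    from typing import Protocol

    # The abstraction the high-level code depends on.
    class Notifier(Protocol):
        def send(self, message: str) -> None: ...

    # A concrete, low-level detail; the report code never references it directly.
    class EmailNotifier:
        def send(self, message: str) -> None:
            print(f"emailing: {message}")

    # High-level policy depends only on the Notifier abstraction,
    # so swapping email for Slack or SMS requires no change here.
    def send_daily_report(notifier: Notifier) -> None:
        notifier.send("daily report ready")

    send_daily_report(EmailNotifier())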

11. bdangubic ◴[] No.45280211[source]
> I need to know and understand everything going out that we are getting paid for.

What is preventing you from that even if you are not the one typing it up? You can actually understand more when you remove the burden of typing: keep asking questions, iterate on the code, do code review, security review, performance review… If done "right", you can end up not only understanding the code better but also learning a bunch of stuff you didn't know along the way.

replies(1): >>45280878 #
12. bdangubic ◴[] No.45280268[source]
There's a 4 and a 5 and a 6… :)

Here's 4: there are senior-level SWEs who have spent their entire careers automating everything they had to do more than once. It's one of the core traits that differentiates "10x" SWEs from the others.

LLMs have taken that automation to another level, and the best SWEs I know use them every hour of every day to automate shit we never had tools to automate before.

13. barrell ◴[] No.45280826[source]
To be honest, I'm more inclined to attribute the rampant use of LLMs to the dopaminergic effect of using them. It feels productive. It feels futuristic. It feels like an unlock, quite viscerally. Whatever your seniority or skill level, you feel you can do whatever is within your wheelhouse, and more, faster.

Like most dopaminergic activities, though, you end up chasing that original rush, and eventually quit when you can't replicate it and/or realize it is a poor substitute for the real thing, and is likely stunting your growth.

14. barrell ◴[] No.45280878{3}[source]
I have never met an engineer whose abilities were limited by their typing speed. Most engineers can already type far faster than they can think critically or form memories.
replies(2): >>45281768 #>>45282398 #
15. bdangubic ◴[] No.45281768{4}[source]
My abilities are limited by time; I don't work (never have) more than 6-7 hours per day. Hence I automate everything that requires my time (and can be automated). If I have something that saves me 10 minutes per day, that is significant for me (it should be for you too, if you value your time). Now imagine if I have something that saves me 90 minutes per day…
16. JustExAWS ◴[] No.45282398{4}[source]
I don't use Claude Code or any of the newer coding assistants. I use ChatGPT and tell it the code I want, with the same type of specifications I would write when designing an implementation. It can definitely type 200+ lines of correct code faster than I could, especially since I would need to look up the API calls for the AWS SDK.

I treat it like a junior developer.
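
For a sense of the kind of routine AWS SDK code being described, here's a rough Python sketch using boto3; the function, bucket name, and prefix are placeholders rather than anything from this thread.

    import boto3
    from botocore.exceptions import ClientError

    # List every object key under a prefix, handling pagination; the sort of
    # SDK boilerplate an assistant can type faster than I can look up the calls.
    def list_keys(bucket: str, prefix: str) -> list[str]:
        s3 = boto3.client("s3")
        keys: list[str] = []
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            keys.extend(obj["Key"] for obj in page.get("Contents", []))
        return keys

    if __name__ == "__main__":
        try:
            print(list_keys("example-bucket", "reports/"))
        except ClientError as err:
            print(f"S3 call failed: {err}")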

17. JustExAWS ◴[] No.45282437[source]
I have been coding professionally for 30 years, and for 10 years before that as a hobbyist, writing assembly on four different architectures. The first 12 professional years were spent bit-twiddling in C across multiple architectures.

I doubt very seriously you could tell my code was LLM generated.

I would very much rather explain to an LLM than write the code myself. Explaining it to an LLM is like rubber ducking in advance.