
358 points andrewstetsenko | 12 comments
mycocola No.44360585
I think most programmers would agree that thinking represents the majority of our time. Writing code is no different than writing down your thoughts, and that process in itself can be immensely productive -- it can spark new ideas, grant epiphanies, or take you in an entirely new direction altogether. Writing is thinking.

I think an over-reliance, or perhaps any reliance, on AI tools will turn good programmers into slop factories, as they consistently skip over a vital part of creating high-quality software.

You could argue that the prompt == code, but then you are adding an intermediary step between you and the code, and something will always be lost in translation.

I'd say just write the code.

replies(1): >>44360816 #
1. sothatsit No.44360816
I think this misses the point. You're right that programmers still need to think. But you're wrong in thinking that AI doesn't help with that.

With AI, instead of starting from zero and building up, you can start with a result and iterate on it straight away. This process really shines when you have a good idea of what you want to do, and how you want it implemented. In these cases, it is really easy to review the code, because you knew what you wanted it to look like. And so, it lets me implement some basic features in 15 minutes instead of an hour. This is awesome.

For more complex ideas, AI can also be a great idea sparring partner. Claude Code can take a paragraph or two from me, and then generate a 200-800 line planning document fleshing out all the details. That document: 1) helps me to quickly spot roadblocks using my own knowledge, and 2) helps me iterate quickly in the design space. This lets me spend more time thinking about the design of the system. And Claude 4 Opus is near-perfect at taking one of these big planning specifications and implementing it, because the feature is so well specified.

So, the reality is that AI opens up new possible workflows. They aren't always appropriate. Sometimes the process of writing the code yourself and iterating on it is important to helping you build your mental model of a piece of functionality. But a lot of the time, there's no mystery in what I want to write. And in these cases, AI is brilliant at speeding up design and implementation.

replies(2): >>44360986 #>>44375145 #
2. mycocola No.44360986
Based on your workflow, I think there is considerable risk of you being wooed by AI into believing what you are doing is worthwhile. The plan AI offers is coherent, specific, it sounds good. It's validation. Sugar.
replies(1): >>44361519 #
3. sothatsit No.44361519
That is a very weak excuse to avoid these tools.

I know the tools and environments I am working in. I verify the implementations I make by testing them. I review everything I am generating.

The idea that AI is going to trick me is absurd. I'm a professional, not some vibe coding script kiddie. I can recognise when the AI makes mistakes.

Have the humility to see that not everyone using AI is someone who doesn't know what they are doing and just clicks accept on every idea from the AI. That's not how this works.

replies(1): >>44361994 #
4. mycocola No.44361994
AI is already tricking people -- images, text, video, voice. As these tools become more advanced, the cost of verification rises with them.
replies(1): >>44362198 #
5. sothatsit No.44362198
We're talking about software development here, not misinformation about politics or something.

Software is incredibly easy to verify compared to other domains. First, my own expertise can pick up most mistakes during review. Second, all of the automated linting, unit testing, integration testing, and manual testing is all but guaranteed to catch functionality that is wrong.
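To make the layered-verification point concrete, here is a minimal sketch (the function and the off-by-one mistake are hypothetical, not from the thread) of how even a tiny unit test catches a wrong generated implementation on the first run:

```python
# Hypothetical example: a routine helper plus the kind of unit test
# that exposes a wrong AI-generated implementation immediately.

def paginate(items, page, per_page):
    """Return the slice of items for a 1-indexed page."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

# A plausible generation mistake, e.g. starting the slice at
# `page * per_page`, fails these assertions on the first run.
assert paginate(list(range(10)), 1, 3) == [0, 1, 2]
assert paginate(list(range(10)), 4, 3) == [9]
assert paginate([], 1, 3) == []
```

The point being made in the comment is that these checks are cheap and mechanical in software, unlike manual review of prose.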

So, how exactly do you think AI is going to trick me when I'm asking it to write a new migration to add a new table, link that into a model, and expose that in an API? I have done each of these things 100 times. Because the process is so routine, it is immediately obvious to me when the AI makes a mistake. The notion that it will trick me is absurd.
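The migration-to-model-to-API change described above can be sketched roughly as below. This is only an illustration of the shape of the workflow, using `sqlite3` as a stand-in; the table and function names are invented, not anything from the thread:

```python
import sqlite3

# Hypothetical sketch of the routine change being described:
# add a table, wrap it in a thin model layer, expose a read endpoint.
conn = sqlite3.connect(":memory:")

# 1. The "migration": create the new table.
conn.execute("""
    CREATE TABLE tags (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL UNIQUE
    )
""")

# 2. The "model": thin functions over the table.
def create_tag(name):
    cur = conn.execute("INSERT INTO tags (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def list_tags():
    return [row[0] for row in conn.execute("SELECT name FROM tags ORDER BY name")]

# 3. The "API": the payload a GET handler would return.
def get_tags_response():
    return {"tags": list_tags()}

create_tag("ai")
create_tag("testing")
print(get_tags_response())  # → {'tags': ['ai', 'testing']}
```

A reviewer who has written this pattern many times spots a deviation at a glance, which is the commenter's argument.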

AI does have risks with people being lulled into a false sense of security. But that is a concern in areas like getting it to explain how a codebase works for you, or getting it to try to teach you about technologies. Then you can end up with a false idea about how something works. But in software development itself? When I have already worked with all of these tools for years? It just isn't a big issue. And the benefits far outweigh it occasionally inventing an API that doesn't exist, which I realise almost immediately when the code fails to run.

People who dismiss AI because it makes mistakes are tiresome. The lack of reliability of LLMs is just another constraint to engineer around. It's not magic.
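"Engineering around" LLM unreliability usually means validating outputs and retrying on failure. A minimal sketch of that pattern, where `generate` is a stub standing in for a real model call (not any specific API):

```python
import json

def generate(prompt, attempt):
    # Stub for a real model call. To exercise the retry path,
    # it pretends the first attempt returns invalid output.
    return "not json" if attempt == 0 else '{"ok": true}'

def is_valid_json(text):
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def generate_validated(prompt, validate, max_attempts=3):
    """Call the model until its output passes validation."""
    for attempt in range(max_attempts):
        output = generate(prompt, attempt)
        if validate(output):
            return output
    raise RuntimeError("no valid output after retries")

result = generate_validated("return JSON", is_valid_json)
print(result)  # → {"ok": true}
```

The validator can be anything mechanical: a JSON schema, a linter, or the project's own test suite, which is the sense in which unreliability becomes a constraint rather than a blocker.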

replies(2): >>44362765 #>>44366381 #
6. bluefirebrand No.44362765
> Software is incredibly easy to verify compared to other domains

This strikes me as an absurd thing to believe when there's almost no such thing as bug-free software

replies(1): >>44362875 #
7. sothatsit No.44362875
Yes, maybe using the word "verify" here is a bit confusing. The point was to compare software, where it is very easy to verify the positive case, to other domains where nothing can be verified automatically and manual review is all you get.

For example, a research document could sound good but be complete nonsense. There's no realistic way to check that an English document is correct other than to review it manually. Software, by contrast, has huge amounts of investment in tooling for testing whether it does what it should for a given environment and set of test cases.

Now, this is a bit different to formally verifying that the software is correct for all environments and inputs. But we definitely have a lot more verification tools at our disposal than most other domains.

replies(1): >>44366842 #
8. Verdex No.44366381

> Software is incredibly easy to verify compared to other domains.

Rice, Turing, and Gödel would like a word.
replies(1): >>44404797 #
9. bluefirebrand No.44366842
That's fair, makes sense. Thank you for explaining further
10. weatherlite No.44375145
> So, the reality is that AI opens up new possible workflows. They aren't always appropriate. Sometimes the process of writing the code yourself and iterating on it is important to helping you build your mental model of a piece of functionality. But a lot of the time, there's no mystery in what I want to write. And in these cases, AI is brilliant at speeding up design and implementation.

I agree, but I have a hunch we're all gonna be pushed by higher-ups to use AI always and for everything. Headcounts will drop, the amount of work will rise, and deadlines will become ever tighter. What the resulting codebases will look like years from now will be interesting.

replies(1): >>44376217 #
11. sothatsit No.44376217
Yeah, I am grateful that I work with a lot of other engineers and managers who care a lot about quality. If you have a manager who just cares about speed, the corner cutting that AI enables could become a nightmare.
12. sothatsit No.44404797
> compared to other domains

It's right there in the quote.