
94 points Eatcats | 3 comments

Small confession

I’ve been using Windsurf editor for about six months now, and it does most of the coding work for me.

Recently, I realized I no longer enjoy programming. It feels like I’m just going through the pain of explaining to the LLM what I want, then sitting and waiting for it to finish. If it fails, I just switch to another model—and usually, one of them gets the job done.

At this point, I’ve even stopped reviewing the exact code changes. I just keep pushing forward until the task is done.

On the bright side, I’ve gotten much better at writing design documents.

Anyone else feel the same?

John23832 No.44499470
I think that's just the way you're doing it?

I feel the opposite. I appreciate the ability to iterate and prototype in a way that lowers friction. Sure, I have to plan steps out ahead of time, but that's expected with any kind of software architecture. The stimulating part is the design and thought and learning, not digging the ditch.

If you're just firing off prompts all day with no design/input, yeah, I'm sure that sucks. You might as well "push the big red button" all day.

> If it fails, I just switch to another model—and usually, one of them gets the job done.

This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).

replies(4): >>44499607 #>>44499669 #>>44499670 #>>44522218 #
1. roenxi No.44499669
> This is a huge red flag that you have no idea what you're doing at the fundamental software architecture level imo. Or at least you have bad process (prior to LLMs).

Particularly in the present. If any of the current models can consistently make senior-level decisions, I'd like to know which ones they are. They're probably going to cross that boundary soon, but they aren't there yet; they go haywire too often. Anyone who codes only using the current generation of LLMs without reviewing the code is surely capping themselves in code quality in a way that will hurt maintainability.

replies(1): >>44500824 #
2. andrei_says_ No.44500824
> They're probably going to cross that boundary soon

How? There’s no understanding, just output of highly probable text suggestions that sometimes coincide with correct ones.

Correctness exists only in the understanding of humans.

In the case of writing code against tests, there are infinite ways to keep the tests green and break things anyway.
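To make that concrete, here's a hypothetical sketch (the function and test are invented for illustration): a suite that stays green while the code is broken for inputs it never exercises.

```python
# Hypothetical example: the test suite is green, but the function is still
# broken for inputs the tests never exercise.

def apply_discount(price: float, percent: float) -> float:
    # Bug: percent is never validated, so percent > 100 yields a negative price.
    return price * (1 - percent / 100)

def test_apply_discount():
    # The only test covers one happy path, so it passes despite the bug.
    assert apply_discount(100.0, 50.0) == 50.0

test_apply_discount()                  # green
print(apply_discount(100.0, 150.0))    # -50.0: tests pass, behavior is wrong
```

An LLM optimizing for "make the tests pass" has no reason to notice the unvalidated input; only a human (or a better spec) catches it.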

replies(1): >>44504583 #
3. roenxi No.44504583
> How?

The typical approach is to prompt an LLM with an outline of the problem and let it write the code itself by giving it access to files (and perhaps other tools). You could look into software packages like Windsurf (as in the original post) or the Cline extension for VS Code, which are both pretty good at this sort of thing.
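Roughly, these tools run an agent loop like the sketch below. This is an invented illustration, not Windsurf's or Cline's actual implementation; `call_llm` is a stub standing in for a real model endpoint that would return structured edit commands.

```python
# Sketch of an agent loop like the ones coding assistants run.
# `call_llm` is a stub; a real tool would send the context to a model
# and parse its reply into an action such as "write a file" or "done".

from pathlib import Path

def call_llm(context: str) -> dict:
    # Stub: always declares the task finished. A real implementation
    # would query a model and parse its structured reply.
    return {"action": "done"}

def agent_loop(task: str, workdir: Path, max_steps: int = 10) -> str:
    # Give the model the task plus a view of the working directory.
    context = f"Task: {task}\nFiles: {[p.name for p in workdir.iterdir()]}"
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply["action"] == "done":
            return "finished"
        if reply["action"] == "write":
            # The model proposed a file edit; apply it and loop again.
            (workdir / reply["path"]).write_text(reply["content"])
            context += f"\nWrote {reply['path']}"
    return "gave up"
```

The key point is that the model sees the results of its own edits on the next iteration, which is what lets it iterate toward a working change without the user reviewing each step.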

They perform at what I'd estimate is a mid-level programmer's ability right now and are rapidly improving in quality.