
416 points | floverfelt | 5 comments
skhameneh ◴[] No.45056427[source]
Many of the people I've worked with idolize Martin Fowler and treat his words as gospel. That is not me; I've found that attitude a nuisance, and at times it has made me overly critical of the actual content. For now I'm not working with such people, so I can appreciate the shared article without that bias clouding my view.

I like this article and generally agree with it; I think the take is good. However, after spending ridiculous amounts of time with LLMs (prompt engineering, writing tokenizers/samplers, context engineering, and, yes, vibe coding), at times in 10-hour days stretching into weekends, I have come to believe that many takes are a bit off the mark. This article is refreshing, but I disagree that people talking about the future are talking “from another orifice”.

I won't dare say I know what the future looks like, but the present very much appears to be an overall upskilling and a rework of how we collaborate. Just like every attempt before, some of it is right and some is simply misguided; for example, Agile for the sake of Agile isn't any more efficient than any other process.

We are headed in a direction where written code is no longer a time sink. Juniors can onboard faster and more independently with LLMs, while seniors can shift their focus to a higher level in application stacks. LLMs can lighten cognitive load and increase productivity, but just like any other productivity-enhancing tool, doing more isn't necessarily better. LLMs make it very easy to create, and if all you do is create [code], you'll create your own personal mess.

When I was using LLMs effectively, I found myself focusing more on higher-level goals, with code being less of a time sink. In the process I spent more time laying out documentation and context than I did on the actual code itself. Some days went purely to documentation and to health checks that keep all of that content in order.
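
To make that concrete, here is a minimal sketch of one such health check, assuming a layout where agent-facing context lives as Markdown under docs/ and references repo files in backticks; the paths, extensions, and conventions below are illustrative, not the exact setup I used. It flags doc references to files that no longer exist, so stale context gets caught before it misleads the model.

    #!/usr/bin/env python3
    """Hypothetical doc/context health check: flag references to missing files.

    Assumes agent-facing context lives as Markdown under docs/ and references
    repo-relative paths in backticks, e.g. `src/billing/invoice.py`. Both
    conventions are illustrative, not a real project's layout.
    """
    import re
    import sys
    from pathlib import Path

    REPO_ROOT = Path(__file__).resolve().parent
    DOCS_DIR = REPO_ROOT / "docs"
    # Backticked tokens that look like repo-relative file paths.
    PATH_RE = re.compile(r"`([\w./-]+\.(?:py|ts|go|md|yaml|json))`")

    def stale_references(doc: Path) -> list[str]:
        """Return paths referenced in `doc` that no longer exist in the repo."""
        text = doc.read_text(encoding="utf-8")
        return [p for p in PATH_RE.findall(text) if not (REPO_ROOT / p).exists()]

    def main() -> int:
        failures = 0
        for doc in sorted(DOCS_DIR.rglob("*.md")):
            for missing in stale_references(doc):
                print(f"{doc.relative_to(REPO_ROOT)}: references missing file {missing}")
                failures += 1
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())

Wired into CI or a pre-commit hook, a check like this keeps the written context trustworthy, which matters more than usual when an agent consumes that context verbatim.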

I know my comment is a bit sparse on specifics; I'm happy to engage and share details with those who have questions.

replies(3): >>45056585 #>>45056778 #>>45060603 #
manmal ◴[] No.45056585[source]
> written code is no longer a time sink

It still is, and should be. It's highly unlikely that you provided all the required info to the agent on the first try. The only way to fix that is to read and understand the code thoroughly and suspiciously, and to reshape it until we're sure it reflects the requirements as we understand them.

replies(1): >>45056715 #
skhameneh ◴[] No.45056715[source]
Vibe coding is not telling an agent what to do and checking back later. It's an active engagement, and the best results are achieved when everything is planned and laid out in advance, which can also be done via vibe coding.

No, written code is no longer a time sink. Vibe coding is >90% building without writing any code.

The written code and actions are literally presented in diffs as they are applied, if one so chooses.

replies(3): >>45057057 #>>45057574 #>>45057673 #
1. anskskbs ◴[] No.45057057[source]
> It's an active engagement and best results are achieved when everything is planned and laid out in advance

The most efficient way to communicate these plans is in code. English is horrible in comparison.
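
As a hypothetical illustration (the names and numbers below are made up, not from any particular project): a short typed snippet pins down a "plan" that a paragraph of English leaves ambiguous.

    from dataclasses import dataclass

    # English plan: "retry failed requests a few times, backing off between
    # attempts, but never wait more than ten seconds in total."
    # The same plan in code leaves far less room for interpretation.

    @dataclass(frozen=True)
    class RetryPolicy:
        max_attempts: int = 3         # "a few times" pinned to an exact number
        base_delay_s: float = 0.5     # first backoff interval
        backoff_factor: float = 2.0   # exponential growth between attempts
        total_budget_s: float = 10.0  # hard ceiling the prose only implied

        def delays(self) -> list[float]:
            """Concrete wait times between attempts, truncated to the budget."""
            out, spent = [], 0.0
            for i in range(self.max_attempts - 1):
                d = self.base_delay_s * self.backoff_factor ** i
                if spent + d > self.total_budget_s:
                    break
                out.append(d)
                spent += d
            return out

    print(RetryPolicy().delays())  # [0.5, 1.0]

Handing an agent (or a colleague) the dataclass instead of the sentence removes the back-and-forth about what "a few times" actually means.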

When you're using an agent and not reviewing every line of code, you're offloading thinking to the AI. That's fine in some scenarios, but it often isn't what people would call high-quality software.

Writing code was never the slow part for a competent dev. Agent swarming and the like is mostly snake oil sold by those who profit off LLMs.

replies(1): >>45058868 #
2. x0x0 ◴[] No.45058868[source]
In my experience, this is the problem. When I have to deeply understand what an LLM created, I don't see much of a speed improvement over writing it myself.

With an engineer you can hand off work and trust that it works, whereas I find I have to treat LLM output as hostile when reviewing it. It will comment out auth or delete failing tests.

replies(1): >>45060336 #
3. bluefirebrand ◴[] No.45060336[source]
> When I have to deeply understand what an LLM created

Which should always be the case, in my opinion.

Are people really pushing code to production that they don't understand?

replies(1): >>45061831 #
4. discreteevent ◴[] No.45061831{3}[source]
They are, because in fairness, in a lot of cases it just doesn't matter. It's some website to get clicks for ads, and as long as you can vibe-use it, it's good enough to vibe-code it.
replies(1): >>45076385 #
5. bluefirebrand ◴[] No.45076385{4}[source]
I wouldn't be caught dead building garbage like that.