

416 points floverfelt | 18 comments
1. skhameneh ◴[] No.45056427[source]
Many people I've worked with idolize Martin Fowler and treat his words as gospel. That's not me, and I've found it to be a nuisance, sometimes leading me to be overly critical of the actual content. As of now I'm not working with such people, and I can appreciate the shared article without that bias clouding my read.

I like this article and generally agree with it; I think the take is good. However, after spending ridiculous amounts of time with LLMs (prompt engineering, writing tokenizers/samplers, context engineering, and... yes... vibe coding), at times 10-hour days stretching into weekends, I have come to believe that many people are a bit off the mark. This article is refreshing, but I disagree that people talking about the future are talking "from another orifice".

I won't dare say I know what the future looks like, but the present very much appears to be an overall upskilling and a rework of collaboration. Just like every attempt before, some things are right and some are simply misguided; e.g., Agile for the sake of Agile isn't any more efficient than any other process.

We are headed in a direction where written code is no longer a time sink. Juniors can onboard faster and more independently with LLMs, while seniors can shift their focus to a higher level in application stacks. LLMs can lighten cognitive load and increase productivity, but, just like any other productivity-enhancing tool, doing more isn't necessarily better. LLMs make it very easy to create, and if all you do is create [code], you'll create your own personal mess.

When I was using LLMs effectively, I found myself focusing more on higher-level goals, with code being less of a time sink. In the process I spent more time laying out documentation and context than on the actual code itself. I spent some days purely on documentation and on health systems to keep all of that content in check.
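For the curious, a "health system" here can be as simple as a script that cross-checks a docs index against what's actually on disk. A minimal sketch (the INDEX.md-with-markdown-links convention is just an illustrative assumption, not something the article prescribes):

```python
from pathlib import Path
import re

def check_docs_index(root: Path) -> list[str]:
    """Report docs referenced in INDEX.md that are missing on disk,
    and .md files on disk that the index never mentions."""
    index = (root / "INDEX.md").read_text()
    # Pull link targets like (getting-started.md) out of markdown links.
    referenced = set(re.findall(r"\(([\w/.-]+\.md)\)", index))
    on_disk = {str(p.relative_to(root)) for p in root.rglob("*.md")} - {"INDEX.md"}
    problems = [f"missing: {doc}" for doc in sorted(referenced - on_disk)]
    problems += [f"unindexed: {doc}" for doc in sorted(on_disk - referenced)]
    return problems
```

Run on every commit, a check like this keeps the context an LLM starts from honest about what documentation actually exists.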

I know my comment is a bit sparse on specifics; I'm happy to engage and share details with anyone who has questions.

replies(3): >>45056585 #>>45056778 #>>45060603 #
2. manmal ◴[] No.45056585[source]
> written code is no longer a time sink

It still is, and should be. It's highly unlikely that you provided all the required info to the agent on the first try. The only way to fix that is to read and understand the code thoroughly and suspiciously, reshaping it until we're sure it reflects the requirements as we understand them.

replies(1): >>45056715 #
3. skhameneh ◴[] No.45056715[source]
Vibe coding is not telling an agent what to do and checking back later. It's an active engagement, and the best results are achieved when everything is planned and laid out in advance, which can also be done via vibe coding.

No, written code is no longer a time sink. Vibe coding is >90% building without writing any code.

The written code and actions are literally presented in diffs as they are applied, if one so chooses.

replies(3): >>45057057 #>>45057574 #>>45057673 #
4. sfink ◴[] No.45056778[source]
> We are headed in a direction where written code is no longer a time sink.

Written code has never been a time sink. The actual time that software developers have spent actually writing code has always been a very low percentage of total time.

Figuring out what code to write is a bigger deal. LLMs can help with part of this. Figuring out what's wrong with written code, and figuring out how to change and fix the code, is also a big deal. LLMs can help with a smaller part of this.

> Juniors can onboard faster and more independently with LLMs,

Color me very, very skeptical of this. Juniors previously spent a lot more of their time writing code, and they don't have to do that anymore. On the other hand, that's how they became not-juniors; the feedback loop from writing code and seeing what happened as a result is the point. Skipping part of that breaks the loop. "What the computer wrote didn't work" or "what the computer wrote is too slow" or even to some extent "what the computer wrote was the wrong thing" is so much harder to learn from.

Juniors are screwed.

> LLMs have the ability to lighten cognitive loads and increase productivity,

I'm fascinated to find out where this is true and where it's false. I think it'll be very unevenly distributed. I've seen a lot of silver bullets fired and disintegrate mid-flight, and I'm very doubtful of the latest one in the form of LLMs. I'm guessing LLMs will ratchet forward part of the software world, will remove support for other parts that will fall back, and it'll take us way too long to recognize which part is which and how to build a new system atop the shifted foundation.

replies(1): >>45057116 #
5. anskskbs ◴[] No.45057057{3}[source]
> It's an active engagement and best results are achieved when everything is planned and laid out in advance

The most efficient way to communicate these plans is in code. English is horrible in comparison.

When you’re using an agent and not reviewing every line of code, you’re offloading thinking to the AI. Which is fine in some scenarios, but often not what people would call high quality software.

Writing code was never the slow part for a competent dev. Agent swarming etc is mostly snake oil by those who profit off LLMs.

replies(1): >>45058868 #
6. skhameneh ◴[] No.45057116[source]
> Figuring out what code to write is a bigger deal. LLMs can help with part of this. Figuring out what's wrong with written code, and figuring out how to change and fix the code, is also a big deal. LLMs can help with a smaller part of this.

I found exactly this is what LLMs are great at assisting with.

But it also requires context with guiding points into the documentation. The starting context has to contain just enough overview, with pointers to expand context as needed. Many projects lack that level of documentation refinement, which leaves major gaps for LLM tooling (reducing efficacy and increasing unwanted hallucinations).
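To make that concrete, one way to structure "just enough overview, with pointers to expand" is a two-tier loader: an always-included overview plus topic docs pulled in on demand under a budget. A hypothetical sketch (the file layout, the `load_context` name, and the character budget are all illustrative assumptions):

```python
from pathlib import Path

def load_context(root: Path, topics: list[str], budget_chars: int = 8000) -> str:
    """Always include the project overview; expand named topic docs
    until the character budget runs out."""
    parts = [(root / "OVERVIEW.md").read_text()]
    used = len(parts[0])
    for topic in topics:
        doc = root / "docs" / f"{topic}.md"
        if doc.exists():
            text = doc.read_text()
            if used + len(text) > budget_chars:
                break  # stay under the model's context budget
            parts.append(text)
            used += len(text)
    return "\n\n".join(parts)
```

The point isn't this exact shape; it's that the expansion points are decided up front, instead of pasting the whole repo into the prompt.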

> Juniors are screwed.

Mixed. It's like saying "if you start with Python, you're going to miss lower-level fundamentals," which is true in some regards. Juniors don't inherently have to know the inner workings; they get to skip a lot of the steps. It won't inherently make them worse off, but it does change the learning process a lot. I'd push back by noting that I somewhat naively wrote a tokenizer, because the >3MB ONNX tokenizer for Gemma written in JS seemed absurd. I went in not knowing what I didn't know, and I learned what I didn't know through the process of building with an LLM. In other words, I learned hands-on, at a faster pace, with less struggle. That's pretty valuable and will create more paths for juniors to learn.
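For a sense of scale, the naive core of such a hand-rolled tokenizer (greedy longest-match, not real BPE; the vocabulary handling here is simplified for illustration) is only a few lines:

```python
def greedy_tokenize(text: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match: at each position take the longest vocab entry,
    falling back to a single character when nothing matches."""
    tokens, i = [], 0
    max_len = max(map(len, vocab), default=1)
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if piece in vocab or length == 1:
                tokens.append(piece)
                i += length
                break
    return tokens
```

Starting from something this naive and letting the LLM explain where it diverges from real BPE merges is exactly the hands-on learning loop described above.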

Sure, we may see many lacking fundamentals, but I suppose that isn't so different from the criticism I heard when I wrote most of my first web software in PHP. I do believe we'll see a lot more Python and linguistic influenced development in the future.

> I'm guessing LLMs will ratchet forward part of the software world, will remove support for other parts that will fall back, and it'll take us way too long to recognize which part is which and how to build a new system atop the shifted foundation.

I entirely agree, in fact I think we're seeing it already. There is so much that's hyped and built around rough ideas that's glaringly inefficient. But FWIW inefficiency has less of an impact than adoption and interest. I could complain all day about the horrible design issues of languages and software that I actually like and use. I'd wager this will be no different. Thankfully, such progress in practice creates more opportunities for improvement and involvement.

replies(1): >>45057418 #
7. sfink ◴[] No.45057418{3}[source]
> Sure, we may see many lacking fundamentals, but I suppose that isn't so different from the criticism I heard when I wrote most of my first web software in PHP.

It's not just the fundamentals, though you're right that those are an easy casualty. I also agree that LLMs can greatly help with some forms of learning. Previously, you kind of had to follow the incremental path, where you couldn't really do anything complex without having the skills it built on, because 90% of your time and brain would be spent on getting the syntax right or whatever, and you'd lose track of the higher-level thing you were exploring. With an LLM, it's nice to be able to (temporarily) skip that learning and explore different areas at will, especially when that motivates the desire to go back and learn the basics.

But my real fear is about the skill acquisition, or simply the thinking. We are human, we don't want to have to go through the learning stage before we start doing, and we won't if we don't have to. It's difficult, it takes effort, it requires making mistakes and being unhappy about them, unhappy enough to be motivated to learn how to not make them in the future. If we don't have to do it, we won't, even if we logically know that we'd be better off.

Especially if the expectations are raised to the point where the pressure to be "productive" makes it feel like you're wasting other people's time and your paycheck to learn anything that the LLM can do for you. We're reaching the point where it feels irresponsible to learn.

(Sometimes this is ok. I'm fairly bad at long division now, but I don't think it's holding me back. But juniors can't know what they need to know before they know it!)

replies(1): >>45058387 #
8. epolanski ◴[] No.45057574{3}[source]
> It's an active engagement and best results are achieved when everything is planned and laid out in advance — which can also be done via vibe coding.

No.

The generally assumed definition of vibe coding (hence the word "vibe") is that coding becomes an iterative process guided by intuition rather than by specs and processes.

What you describe is literally the opposite of vibe coding; it feels like the term is being warped into "coding with an LLM".

replies(1): >>45058279 #
9. mehagar ◴[] No.45057673{3}[source]
How could you possibly plan out "everything" in advance? Code itself would be the only way to explicitly specify the "everything".
replies(1): >>45058258 #
10. skhameneh ◴[] No.45058258{4}[source]
Have a documentation system in place, and have the LLM plan the high level before it writes any code.

You can always just wing it, but if you do and there isn't adequate existing context, you're going to struggle with slop and hallucinations more frequently.
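As a sketch of what "plan before code" can look like when scripted, with `llm` standing in for any prompt-to-text callable (the prompts and the three-step shape are illustrative assumptions, not a fixed recipe):

```python
def run_task(task: str, llm) -> str:
    """Plan-first loop: the model must produce and revise a written plan
    before it is ever asked for code. `llm` is any callable prompt -> text."""
    plan = llm(f"Read the project docs and write a step-by-step plan for: {task}. No code yet.")
    revised = llm(f"Review this plan for gaps and missing context, then revise it:\n{plan}")
    return llm(f"Implement the following approved plan, one step at a time:\n{revised}")
```

Each intermediate artifact (the plan, the revision) is also a natural place for a human to intervene before any diff is produced.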

11. skhameneh ◴[] No.45058279{4}[source]
I've described an iterative process where one never needs to touch code or documents directly.

Leaving out specs and documentation leads to more slop and hallucinations, especially with smaller models.

12. skhameneh ◴[] No.45058387{4}[source]
> But my real fear is about the skill acquisition, or simply the thinking. We are human, we don't want to have to go through the learning stage before we start doing, and we won't if we don't have to. It's difficult, it takes effort, it requires making mistakes and being unhappy about them, unhappy enough to be motivated to learn how to not make them in the future. If we don't have to do it, we won't, even if we logically know that we'd be better off.

I've noticed the effects of this first hand from intense LLM engagement.

I relate it more to the effects of portable calculators, navigation systems, and tools like Wikipedia. I'm under the impression this is valid criticism, but we may be overly concerned simply because it's a new and powerful tool. There are even surveys/studies showing generational differences in how LLMs are perceived WRT productivity.

I'm more concerned with the potential loss of critical-thinking skills than anything else. And on a related note, there were concerns about critical-thinking skills before this mass adoption of LLMs. I'm also concerned with the impact of LLMs on the quality of information: we're seeing a huge jump in quantity while quality sometimes lags. It bothers me when an LLM confidently presents incorrect information that's seemingly trivial to validate. Web searches have given me incorrect information from LLM tooling at a much greater frequency than I've ever experienced before. It's even more unsettling when the LLM gives the wrong answer while the correct answer sits in the description of the top result.

replies(1): >>45058491 #
13. utyop22 ◴[] No.45058491{5}[source]
"I'm also concerned with the impact of LLMs on the quality of information."

You have finally made an astute observation...

I have already made the assumption that use of LLMs is going to add new mounds of BS atop the mass of crap that already exists on the internet, as part of my startup thesis.

These things are not obvious in the here and now, but I try to take the view of: how would the present day look from 50 years in the future, looking backwards?

14. x0x0 ◴[] No.45058868{4}[source]
IME this is the problem. When I have to deeply understand what an LLM created, I don't see much of a speed improvement vs writing it myself.

With an engineer you can hand off work and trust that it works, whereas I find I have to treat code review of LLM output as reviewing hostile code. It will comment out auth or delete failing tests.

replies(1): >>45060336 #
15. bluefirebrand ◴[] No.45060336{5}[source]
> When I have to deeply understand what an llm created

Which should be always in my opinion

Are people really pushing code to production that they don't understand?

replies(1): >>45061831 #
16. osullivj ◴[] No.45060603[source]
Written code is an auditable entity in regulated businesses, like banks. Fowler suggests we get more comfortable with uncertainty because other eng domains have developed good risk mgmt. Cart before horse. Other engineering domains strive to mitigate the irreducible uncertainty of the physical world. AI adds uncertainty to any system. And there are many applications where increased uncertainty will lead to an increase in human suffering.
17. discreteevent ◴[] No.45061831{6}[source]
They are, because in fairness, in a lot of cases it just doesn't matter. It's some website to get clicks for ads, and as long as you can vibe-use it, it's good enough to vibe-code it.
replies(1): >>45076385 #
18. bluefirebrand ◴[] No.45076385{7}[source]
I wouldn't be caught dead building garbage like that