
214 points Brajeshwar | 2 comments
manoDev ◴[] No.45084137[source]
“AI” is great for coding in the small: it’s like having a powerful semantic code editor, or pairing with a junior developer who can look up some info online quickly. The hardest part of the job was never typing or figuring out some API bullshit anyway.

But trying to use it as “please write this entire feature for me” (what vibe coding is supposed to mean) is, IMO, the wrong way to handle the tool. It turns into a specification problem.

replies(3): >>45084676 #>>45088106 #>>45089397 #
dboreham ◴[] No.45084676[source]
Yes, but in my experience, actually no. At least not with today's bleeding-edge models. I've been able to get LLMs to write whole features, to the point that I'm quite surprised at the result. Perhaps I'm talking to it right (the new "holding it right"?). I tend to begin by asking for an empty application with the characteristics I want (CLI, has subcommands, ...), then ask it to add a simple feature. Get that working, then ask it to enhance functionality progressively, testing as we go. Once the functionality is working, I ask for a refactor (often it puts 1500 LOC in one file, for example), documentation, improved help text, and so on. Basically the same way you'd manage a human.
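For concreteness, the "empty application with the characteristics I want (CLI, has subcommands)" starting point described above might look something like this skeleton. This is a hypothetical illustration (a `todo` tool, Python's `argparse`; all names are invented), roughly the scaffolding you'd expect from that first prompt, before asking for real features:

```python
import argparse

def cmd_add(args):
    # Stub: the first prompt only asks for structure, not behavior.
    print(f"add: {args.item}")

def cmd_list(args):
    # Stub: later prompts would flesh this out incrementally.
    print("list: (nothing yet)")

def build_parser():
    parser = argparse.ArgumentParser(prog="todo", description="Example CLI skeleton")
    sub = parser.add_subparsers(dest="command", required=True)

    p_add = sub.add_parser("add", help="add an item")
    p_add.add_argument("item")
    p_add.set_defaults(func=cmd_add)

    p_list = sub.add_parser("list", help="list items")
    p_list.set_defaults(func=cmd_list)
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    args.func(args)

if __name__ == "__main__":
    main()
```

With a skeleton like this in place, each subsequent prompt ("add a `remove` subcommand", "persist items to a file") is a small, testable delta rather than an open-ended specification.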

I've also been close to astonished at LLMs' capability to draw conclusions from very large, complex codebases. For example, I wanted to understand the details of a distributed replication mechanism in an enormous project. Pre-LLM, I'd have spent a couple of days crawling through the code with grep and perhaps IDE tools, making notes on paper; I'd probably have had to run the code, or instrument it with logging and look at the results in a test deployment. Instead, I found I could ask the LLM to take a look at the p2p code and tell me how it works. Then ask it how the peer set is managed. Then ask whether all reachable peers are known at all nodes. It's almost better at this than I am, and it's what I've done for a living for 30 years. Certainly it's very good for very low cost and effort, and while it's chugging I can think about higher-order things.

I say all this as a massive AI skeptic dating back to the 1980s.

replies(2): >>45085174 #>>45088065 #
svachalek ◴[] No.45088065[source]
All the hype is about asking an LLM to start from an empty project with loose requirements. Asking it to work on a million lines of legacy code (inadequately tested, as all legacy code is) with ancient and complex contracts is a completely different experience.
replies(1): >>45088556 #
1. felipeerias ◴[] No.45088556[source]
Very large projects are an area where AI tools can really empower developers without replacing them.

It is very useful to be able to ask basic questions about the code I'm working on without having to read through dozens of other source files. That frees up a lot of time to actually get stuff done.

replies(1): >>45089778 #
2. patrick451 ◴[] No.45089778[source]
AI makes so many mistakes that I can't trust it to tell me the truth about how a large codebase works.