←back to thread

416 points floverfelt | 1 comments | | HN request time: 0.209s | source
Show context
archeantus ◴[] No.45058432[source]
If all you’ve done with AI is just use it for autocomplete, you’re missing out big time. I built a slick react app using Lovable yesterday and then created a Node BE using Claude Code today. I told Claude Code to look through the FE code to understand the requirements and purpose of the site, and to build a detailed plan (including proposed DB schemas) for a system that could support that functionality.

It generated a thousand line file with a robust breakdown of everything that needed to be done and at my command it did it. We went module by module and I made sure that each module Had comprehensive unit test coverage and that the repo built well as we went. After a few hours of back and forth we made 9 modules, 60+ APIs across 10 different tables, and hundreds of unit tests that all are passing.

Does that mean that I’m all done and ready to deploy to prod? Unlikely. But it does mean that I got a ton of boilerplate stuff put into place really quickly and that I’m eight hours into a project that would have taken at least a month before.

Once the BE was done I had it generate extensive documentation for the agent that would handle the FE integration as a sort of instruction guide - in case we need it. As issues and bugs arise during integration (they will!) the model has everything it needs to keep on track and finish the job it set out to do.

What a time to be alive!

replies(5): >>45058540 #>>45058614 #>>45058616 #>>45062017 #>>45065325 #
1. computerex ◴[] No.45058540[source]
I feel exactly the same way.

Even this post by Martin Fowler shows he's an aging dinosaur stuck in denial.

> I’ve often heard, with decent reason, an LLM compared to a junior colleague. But I find LLMs are quite happy to say “all tests green”, yet when I run them, there are failures. If that was a junior engineer’s behavior, how long would it be before H.R. was involved?

I don't know what "LLM's" he's using but I just simply don't get hallucinations like that with cursor or claude code.

He ends with this: > LLMs create a huge increase in the attack surface of software systems. Simon Willison described the The Lethal Trifecta for AI agents: an agent that combines access to your private data, exposure to untrusted content, and a way to externally communicate (“exfiltration”). That “untrusted content” can come in all sorts of ways, ask it to read a web page, and an attacker can easily put instructions on the website in 1pt white-on-white font to trick the gullible LLM to obtain that private data.

Not sure why he is re-iterating well known prompt injection vulnerability, passing it off as a general weakness of LLM's that applies to all LLM use when that's not the reality.