
4 points by not_that_d | 1 comment

Let me be clear first: I don't dislike LLMs. I query them, trigger agents to do stuff where I kind of know what the end goal is, and use them to analyze small parts of an application.

That said, every time I give one something a little more complex than a single-file script, it fails me horribly. Either the code is really bad, the approach is as bad as someone who doesn't really know what to do, or it plain starts doing things that I explicitly said not to do in the initial prompt.

I have sometimes asked my LLM-fan coworkers to come and help when that happens, and they are not able to "fix it" either, but somehow I am the one doing it wrong due to a "wrong prompt" or "lack of correct context".

I have created a lot of "Agents.md" files, dropped files into the context window... Nothing.

When I need to do greenfield stuff or PoCs it delivers fast, but applying it inside an existing big application fails.

The only place where I feel as "productive" as other people say they are is when I do stuff in languages or technologies I don't know at all, but then again, I also don't know if the working code I get at the end is broken in ways I am not aware of.

Are any of you guys really using LLMs to create full features in big enterprise apps?

linesofcode ◴[] No.46214984[source]
The quality of an LLM's output depends greatly on how many guard rails you have set up to keep it on track and heuristics to point it in the right direction (type checking + running tests after every change, for example).
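
A rough sketch of what I mean, assuming a Python project that uses mypy and pytest (swap in whatever type checker and test runner you actually use; the script name and commands are only illustrative):

    # check.py - hypothetical guard-rail script: run it after every change
    # (manually, from a hook, or by telling the agent to call it) so broken
    # edits fail fast instead of piling up.
    import subprocess
    import sys

    # Assumes mypy for type checking and pytest for tests; replace with your
    # project's own tooling.
    CHECKS = [
        ["mypy", "."],
        ["pytest", "-q"],
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            result = subprocess.run(cmd)
            if result.returncode != 0:
                # A non-zero exit tells the agent (or the hook that ran this)
                # that the last change broke something.
                return result.returncode
        print("all checks passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Giving the agent a single command like this to run after every edit is the kind of heuristic that keeps it from wandering off.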

What is the health of your enterprise code base? If it's anything like the ones I've experienced (a legacy mess), then it's absolutely understandable that an LLM's output is subpar when taking on larger tasks.

It also depends on the models and the plan you're on. There is a significant increase in quality when comparing Cursor's default model on a free plan vs Opus 4.5 on a max Claude plan.

I think a good exercise is to prohibit yourself from writing any code manually and force yourself to go LLM-only. It might sound silly, but it will develop that skill set.

Try Claude Code in thinking mode with Superpowers - https://github.com/obra/superpowers

I routinely make an implementation plan with Claude and then step away for 15 minutes while it spins - the results aren't perfect, but fixing that remaining 10% is better than writing 100% of it myself.

replies(2): >>46215355 #>>46215367 #
1. not_that_d ◴[] No.46215367[source]
Besides my other response, it could also be that I am not smart enough for it.