
Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points by Stwerner | 1 comment | source
karel-3d ◴[] No.44616917[source]
Reading articles like this feels like being in a different reality.

I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

Also I am scared that any library that I am using through the myriad of dependencies is written like this.

On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me

replies(20): >>44617013 #>>44617014 #>>44617030 #>>44617053 #>>44617173 #>>44617207 #>>44617235 #>>44617244 #>>44617297 #>>44617336 #>>44617355 #>>44617366 #>>44617387 #>>44617482 #>>44617686 #>>44617879 #>>44617958 #>>44617997 #>>44618547 #>>44618568 #
lordnacho ◴[] No.44617013[source]
But you also can't not swim with the tide. If you drove a horse-buggy 100 years ago, it was probably worth your while to keep your eye on whether motor-cars went anywhere.

I was super skeptical about a year ago. Copilot was making nice predictions, but that was it. This agent stuff is truly impressive.

replies(7): >>44617059 #>>44617096 #>>44617165 #>>44617303 #>>44617421 #>>44617514 #>>44618157 #
bloppe ◴[] No.44617165[source]
Am I the only one who has to constantly tell Claude and Gemini to stop making edits to my codebase because they keep messing things up, breaking the build ten times in a row, duplicating logic everywhere, etc.? I keep hearing about how impressive agents are. I wish they could automate me out of my job faster.
replies(9): >>44617236 #>>44617257 #>>44617322 #>>44617596 #>>44617644 #>>44618327 #>>44618377 #>>44619630 #>>44620251 #
vishvananda ◴[] No.44618377[source]
I'm really baffled why the coding interfaces have not implemented a locking feature for some code. It seems like an obvious feature to be able to select a section of your code and tell the agent not to modify it. This could remove a whole class of problems where the agent tries to change tests to match the code or removes key functionality.

One could even imagine going a step further and having a confidence level associated with different parts of the code, that would help the LLM concentrate changes on the areas that you're less sure about.
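No agent implements this natively today, but one could approximate a hard lock deterministically, outside the model. A minimal sketch, assuming a Node/TypeScript project and an invented `// ai-lock` / `// ai-unlock` comment convention (the marker names and the script are hypothetical, not a feature of any tool):

```ts
// check-locks.ts (hypothetical pre-commit guard, not part of any agent).
// Invented convention: code between `// ai-lock` and `// ai-unlock` must
// never change. Compares each staged file's locked regions against HEAD
// and aborts the commit if any differ.
import { execSync } from "node:child_process";
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Collect the text of every locked region in a source file.
function lockedRegions(source: string): string[] {
  const regions: string[] = [];
  let current: string[] | null = null;
  for (const line of source.split("\n")) {
    if (line.includes("// ai-lock")) current = [];
    else if (line.includes("// ai-unlock") && current) {
      regions.push(current.join("\n"));
      current = null;
    } else if (current) current.push(line);
  }
  return regions;
}

// Hash all locked regions of a file into one fingerprint.
function fingerprint(contents: string): string {
  const h = createHash("sha256");
  for (const region of lockedRegions(contents)) h.update(region);
  return h.digest("hex");
}

// Only modified (not added/deleted) staged files can break a lock.
const staged = execSync("git diff --cached --name-only --diff-filter=M", {
  encoding: "utf8",
}).split("\n").filter((f) => f.endsWith(".ts"));

for (const file of staged) {
  const before = execSync(`git show HEAD:${file}`, { encoding: "utf8" });
  const after = readFileSync(file, "utf8");
  if (fingerprint(before) !== fingerprint(after)) {
    console.error(`locked region changed in ${file}; aborting commit`);
    process.exit(1);
  }
}
```

Wired in via `.git/hooks/pre-commit`, this rejects the change no matter what the model decided to do, which is the whole appeal of locking over prompting.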

replies(1): >>44619462 #
Benjammer ◴[] No.44619462[source]
Why are engineers so obstinate about this stuff? You really need a GUI built for you in order to do this? You can't take the time to just type the instruction to the LLM? Do you realize that's possible? You can just write "Don't modify XYZ.ts under any circumstances." Not to mention all the tools have simple hotkeys to dismiss changes for an entire file if you really want to ignore them. In Cursor you can literally select a block of code and press a hotkey to highlight it to the LLM in the chat, then tell it "READ BUT DON'T TOUCH THIS CODE": instructions directly tied to specific lines of code, literally the feature you are describing. BUT, you have to work with the LLM and the tooling; it's not just going to be a button for you.
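For instance, a few standing lines in the CLAUDE.md that `/init` generates (or in your Cursor rules file) cover the file-level case; the paths below are just placeholders:

```markdown
## Editing constraints

- Treat `XYZ.ts` and everything under `src/legacy/` as read-only context:
  read them to understand call sites, but do not modify them under any
  circumstances.
- Never edit tests to make them pass; fix the code under test, or stop
  and ask.
```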

You can also literally do exactly what you said with "going a step further".

Open Claude Code and run `/init`. Download Superwhisper, create a new file at the project root called BRAIN_DUMP.md, put your cursor in the file, activate Superwhisper, and talk stream-of-consciousness about all the parts of the code and your confidence level in each, with any details you want to include. Then go to your LLM chat, tell it to read @BRAIN_DUMP.md and organize the contents into a new file, CODE_CONFIDENCE.md: the parts of the codebase, each with the model's best assessment of the developer's confidence in it, given the details and tone of the brain dump. Delete the brain dump file if you want. Now you literally have what you asked for: an "index" that tells your LLM the parts of the codebase and the developer's confidence in each. Just refer to that file in your project prompting.
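The organized CODE_CONFIDENCE.md might come out something like this (the areas and ratings are invented for illustration):

```markdown
# CODE_CONFIDENCE.md

| Area         | Confidence | Notes                                       |
|--------------|------------|---------------------------------------------|
| src/auth/    | high       | Stable and well-tested; keep diffs minimal. |
| src/api/     | medium     | Works, but error handling is ad hoc.        |
| src/reports/ | low        | Prototype quality; refactor freely.         |
```

Referencing that file in your prompts tells the agent where to be conservative and where it can be aggressive, which is exactly the confidence weighting the parent asked for.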

Please, everyone, for the love of god, just start prompting. Instead of posting on Hacker News or Reddit about your skepticism, literally talk to the LLM about it and ask it questions; it can help you work through almost any of this stuff people rant about.

replies(3): >>44620215 #>>44620666 #>>44622866 #
lightbulbish ◴[] No.44620215{3}[source]
_All_ the models I've tried have had, and still have, problems with ignoring rules. I'm actually quite shocked that someone with experience in the area would write this, as it so clearly contrasts with my own experience.

Despite explicit instructions in all sorts of rules and .md files, the models still make changes where they should not. When caught, they innocently say "you're right, I shouldn't have done that, as it directly goes against your rule of <x>".

Just to be clear, are you suggesting that currently, with your existing setup, the AIs always follow the instructions in your rules and prompts? If so, I want your rules, please. If not, I don't understand why you would diss a solution that aims to hardcode away some of the LLM prompt-interpretation problems that exist.