
Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points Stwerner | 55 comments
karel-3d ◴[] No.44616917[source]
Reading articles like this feels like being in a different reality.

I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

Also I am scared that any library that I am using through the myriad of dependencies is written like this.

On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me

replies(20): >>44617013 #>>44617014 #>>44617030 #>>44617053 #>>44617173 #>>44617207 #>>44617235 #>>44617244 #>>44617297 #>>44617336 #>>44617355 #>>44617366 #>>44617387 #>>44617482 #>>44617686 #>>44617879 #>>44617958 #>>44617997 #>>44618547 #>>44618568 #
1. lordnacho ◴[] No.44617013[source]
But you also can't not swim with the tide. If you drove a horse-buggy 100 years ago, it was probably worth your while to keep your eye on whether motor-cars went anywhere.

I was super skeptical about a year ago. Copilot was making nice predictions, that was it. This agent stuff is truly impressive.

replies(7): >>44617059 #>>44617096 #>>44617165 #>>44617303 #>>44617421 #>>44617514 #>>44618157 #
2. rafaelmn ◴[] No.44617059[source]
More like the people who told us, 10 years ago, that there would be no more professional drivers on the road within 5-10 years. Agents are like lane assist, not even up to current self-driving levels.
replies(2): >>44617149 #>>44630320 #
3. mnky9800n ◴[] No.44617096[source]
I think the agent stuff is impressive because we are giving the AI scaffolding, tools, and things to do. That is why it is impressive: it has some directive. But it is obvious that if you don't give it good directives it doesn't know what to do. So for me, I think a lot of jobs will be making agents do things, but a lot won't. I think it's really strange that people are so against all this stuff. It's cool new computer tooling; does nobody actually like computers anymore?
replies(4): >>44617132 #>>44617159 #>>44617335 #>>44622092 #
4. oblio ◴[] No.44617132[source]
People are afraid that instead of skilled craft guild members they will become assembly line workers like Charlie Chaplin in Modern Times. And in 10 years unemployed like people in the Rust Belt.
replies(2): >>44617223 #>>44618167 #
5. miltonlost ◴[] No.44617149[source]
So many people are hyping AI like it's Musk's FSD, with the same fraudulence in overestimating its capabilities.
replies(1): >>44617379 #
6. prinny_ ◴[] No.44617159[source]
A lot of people join this profession because they like building stuff. They enjoy thinking about a problem and coming up with a solution and then implementing and testing it. Prompting is not the same thing and it doesn't scratch the same itch and at the end of the day it's important to enjoy your job, not only be efficient at it.

I have heard the take that "writing code is not what makes you an engineer, solving problems and providing value is what makes you an engineer" and while that's cool and all and super important for advancing in your career and delivering results, I very much also like writing code. So there's that.

replies(6): >>44617289 #>>44617310 #>>44617354 #>>44617386 #>>44617536 #>>44618236 #
7. bloppe ◴[] No.44617165[source]
Am I the only one who has to constantly tell Claude and Gemini to stop making edits to my codebase because they keep messing things up, breaking the build like ten times in a row, duplicating logic everywhere, etc.? I keep hearing about how impressive agents are. I wish they could automate me out of my job faster.
replies(9): >>44617236 #>>44617257 #>>44617322 #>>44617596 #>>44617644 #>>44618327 #>>44618377 #>>44619630 #>>44620251 #
8. avidphantasm ◴[] No.44617223{3}[source]
This, and no one will understand the software that is created. Then you are beholden to AI companies who can charge you whatever they want to maintain the AI code. Will this be cheaper than paying software engineers? Maybe, but I could also see it costing much more.
9. mbrumlow ◴[] No.44617236[source]
Did you tell them to not duplicate code?
10. Benjammer ◴[] No.44617257[source]
Are you paying for the higher end models? Do you have proper system prompts and guidance in place for proper prompt engineering? Have you started to practice any auxiliary forms of context engineering?

This isn't a magic code genie, it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.

replies(4): >>44617299 #>>44617324 #>>44618056 #>>44618063 #
11. closewith ◴[] No.44617289{3}[source]
Most people don't enjoy their jobs and go to work for one reason only - to support themselves and their families. The itch is to get paid. This is as true in software as it is in other fields.

That's not to say there aren't vocations, or people in software who feel the way you do, but it's a tiny minority.

12. goalieca ◴[] No.44617299{3}[source]
It ain’t a magic code genie. And developers don’t spend most of their day typing lines of code. Lots of it is designing, figuring out what to build, understanding the code, weighing maintenance considerations, and adhering to the style of whatever file you’re in. All these agents need local context and still spit out junk.
13. verisimilidude ◴[] No.44617303[source]
AI's superpower is doing mediocre work at high speed. That's okay. Great, even. There's lots of mediocre work to do. And mediocre still clears below average.

But! There's still room for expertise. And this is where I disagree about swimming with the tide. There will be those who are uninterested in using the AI. They will struggle. They will hone their craft. They will have muscle memory for the tasks everyone else forgot how to do. And they will be able to perform work that the AI users cannot.

The future needs both types.

replies(1): >>44617446 #
14. theferret ◴[] No.44617310{3}[source]
That's an interesting take - that you like the act of writing code. I think a lot of builders across a variety of areas feel this way. I like writing code too.

I've been experimenting with a toolchain in which I use speech-to-text to talk to agents, navigate the files with vim and autocomplete, and have Grok think through some math for me. It's pretty fun. I wonder whether tuning agents to write code through that process in a semi-supervised manner will also be fun? I don't know, but I'm open to the idea that as we progress I will find toolchains that bring me into flow as I build.

15. ◴[] No.44617322[source]
16. dingnuts ◴[] No.44617324{3}[source]
guy 1: I put money in the slot machine everyone says wins all the time and I lose

you: HAVE YOU PUT MORE TOKENS IN???? ARE YOU PUTTING THEM IN THE EXPENSIVE MACHINES???

super compelling argument /s

if you want to provide working examples of "prompt engineering" or "context engineering" please do but "just keep paying until the behavior is impressive" isn't winning me as a customer

it's like putting out a demo program that absolutely sucks and promising that if I pay, it'll get good. why put out the shit demo and give me this impression, then, if it sucks?

replies(1): >>44617410 #
17. majormajor ◴[] No.44617335[source]
> does nobody actually like computers anymore

I think this is a really interesting question and an insight into part of the divide.

Places like HN get a lot of attention from two distinct crowds: people who like computers and related tech and people who like to build. And the latter is split into "people who like to build software to help others get stuff done" and "people who like to build software for themselves" too. Even in the professional-developer-world that's a lot of the split between those with "cool" side projects and those with either only-day-job software or "boring" day-job-related side projects.

I used to be in the first group, liking computer tech for its own sake. The longer I work in the profession of "using computer tools to build things for people" the less I like the computer industry, because of how much the marketing/press/hype/fandom elements go overboard. Building-for-money often exposes, very directly, the difference between "cool tools" and "useful and reliable tools" - all the bugs I have to work around, all the popular much-hyped projects that run into the wall in various places when thrown into production, all the times simple and boring beats cool when it comes to winning customers. So I understand when it makes others jaded about the hype too. Especially if you don't have the intrinsic "cool software is what I want to tinker with" drive.

So the split in reactions to articles like this falls on those lines, I think.

If you like cool computer stuff, it's a cool article, with someone doing something neat.

If you are a dev enthusiast who likes side projects and such (regardless of if it's your day job too or not), it's a cool article, with someone doing something neat.

If you are in the "I want to build stuff that helps other people get shit done" crowd then it's probably still cool - who doesn't like POCs and greenfield work? - but it also seems scary for your day to day work, if it promises a flood of "adequate", not-well-tested software that you're going to be expected to use and work with and integrate for less-technical people who don't understand what goes into reliable software quality. And that's not most people's favorite part of the job.

(Then there's a third crowd which is the "people who like making money" crowd, which loves LLMs because they look like "future lower costs of labor." But that's generally not what the split reaction to this particular sort of article is about, but is part of another common split between the "yay this will let me make more profit" and "oh no this will make people stop paying me" crowds in the biz-oriented articles.)

18. SoftTalker ◴[] No.44617354{3}[source]
Rick Beato posted a video recently where he created a fictitious artist and a couple of songs based on a few prompts. The results were somewhat passable, generic indie/pop music but as he said (I'm paraphrasing) "I didn't create anything here. I prompted a computer to put together a bunch of words and melodies that it knew from what other people had written."
19. dingnuts ◴[] No.44617379{3}[source]
it's exactly like this. we're 3 years into being told all white collar jobs are going to be gone next year, just like we're ten years into being told we'll have self driving cars next year
replies(1): >>44618645 #
20. johannes1234321 ◴[] No.44617386{3}[source]
There is code which is interesting to write, even if it isn't the area with clever algorithms or big architecture decisions or something.

But there is also the area of boilerplate, where non-LLM IDE features have already helped a lot for a few decades with templates and "smart" completion. Current AI systems widen that area.

The trouble with AI is when you are reaching the boundary of its capabilities. The trivial stuff it does well. For the complex stuff it fails spectacularly. In between, you have to review carefully, which easily becomes less fun than simply writing it oneself.

replies(1): >>44618474 #
21. lordnacho ◴[] No.44617410{4}[source]
The way I ended up paying for Claude max was that I started on the cheap plan, it went well, then it wanted more money, and I paid because things were going well.

Then it ran out of money again, and I gave it even more money.

I'm in the low 4 figures a year now, and it's worth it. For a day's pay each year, I've got a junior dev who is super fast, makes good suggestions, and makes working code.

replies(1): >>44617689 #
22. kellyjprice ◴[] No.44617421[source]
I'm not trying to discount the analogy, but I'd much rather live without cars (or with a lot fewer of them).
23. jon-wood ◴[] No.44617446[source]
My ongoing concern is that most of us probably got to being able to do good work via several years of doing mediocre work. We put in the hours and along the way learned what good looks like, and various patterns that allow us to see the path to solving a given problem.

What does the next generation do when we’ve automated away that work? How do they learn to recognise what good looks like, and when their LLM has got stuck on a dead end and is just spewing out nonsense?

replies(2): >>44619081 #>>44622149 #
24. beefnugs ◴[] No.44617514[source]
This doesn't make aaaaany sense: IF this actually worked, then why would all the biggest companies in the world be firing people? They would be forcing them all to DO THE TIDE and multiply their 10-billion-dollar dominance into 100 billion dollars or more of dominance.

The truth is something like: for this to work, there are huge requirements in tooling/infrastructure/security/simulation/refinement/optimization/cost-saving that just could never be figured out by the big companies. So they are just like... well, let's trick as many investors and plebs into trying to use this as possible; maybe one of them will come up with some breakthrough we can steal.

replies(1): >>44618275 #
25. mnky9800n ◴[] No.44617536{3}[source]
Yeah but I write the code that is interesting to solve and let the LLM solve the problems that are not so important. Like making yet another webscraper tool is not the most exciting part of the process when you are trying to make some kind of real time inference tool for what people post on the internet.
26. esafak ◴[] No.44617596[source]
Create and point them to an agent.md file http://agent.md/
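For illustration, a minimal sketch of what such a file might contain; the section headings, commands, and project details below are assumptions, not a format any particular tool requires:

```markdown
# AGENT.md (illustrative sketch)

## Project overview
Backend API plus a small web frontend; Postgres for storage.

## Commands
- Run tests: `make test`
- Lint/format: `make lint`

## Conventions
- Do not duplicate existing helpers; search `src/utils/` before adding new ones.
- Keep comments sparse; only explain non-obvious logic.

## Off-limits
- Never edit files under `migrations/`.
- Never modify existing tests just to make them pass.
```

The point is simply to give the agent stable project context it would otherwise have to rediscover, or guess, every session.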
replies(1): >>44618361 #
27. csomar ◴[] No.44617644[source]
They need "context engineering", which is what I'd best describe as "railing" them in. If you give them a bit of loose space, they'll massacre your code base. You can use their freedom for exploration but not for implementation.

In essence, you have to do the "engineering" part of the app and they can write the code pretty fast for you. They can help you with the engineering part, but you still need to be able to weigh whatever crap they recommend and adjust accordingly.

28. Avicebron ◴[] No.44617689{5}[source]
> For a day's pay each year

For anyone doing the back-of-the-napkin math at $1,000/yr as the low four figures: averaged as one day's pay over ~260 working days, the baseline salary where this makes sense is about $1,000 × 260 ≈ $260,000/yr? Is that about right lordnacho?

replies(2): >>44618010 #>>44618201 #
29. lordnacho ◴[] No.44618010{6}[source]
Yeah I thought that was a reasonable number in the ballpark. I mean, it probably makes sense to pay a lot more for it. A grand is probably well within the range where you shouldn't care about it, even if you only get a basic salary and it's a terrible year with no bonus.

And that's not saying AI tools are the real deal, either. It can be a lot less than a fully self driving dev and still be worth a significant fraction of an entry level dev.

30. tempodox ◴[] No.44618056{3}[source]
That's the beauty of the hype: anyone who cannot replicate it is "holding it wrong".
replies(2): >>44618116 #>>44622478 #
31. QuantumGood ◴[] No.44618063{3}[source]

> it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.

Of course this is and would be expected to be true. Yet adoption of this mindset has been orders of magnitude slower than the increase in AI features and capabilities.
32. ◴[] No.44618116{4}[source]
33. fzeroracer ◴[] No.44618157[source]
Sometimes it's a good thing to not swim with the tide. Enshittification comes from every single dipshit corporation racing to the bottom, and right now said tide is increasingly filling with sewage.

There's a huge disconnect I notice where experienced software engineers rage about how shitty things are nowadays while diving directly into using AI garbage, to the point where they couldn't explain what their code is doing if their lives depended on it.

34. lucumo ◴[] No.44618167{3}[source]
There's a kind of karmic comedy in this. Programmers' job has always been to automate other people's jobs. The panic of programmers about their own jobs now is immensely funny to me.

As has been the case for all those jobs changed by programmers, the people who keep an open mind and are willing to learn new ways of working will be fine or even thrive. The people rusted to their seat, who are barely adding value as is, will be forced to choose between changing or struggling.

replies(1): >>44618716 #
35. antihipocrat ◴[] No.44618201{6}[source]
I assume it's after tax too..
36. fragmede ◴[] No.44618236{3}[source]
Ah yes, that "is that 6 spaces or 8" in a yaml file itch that just has to be scratched. Programming has a lot of doldrums. LLMs still get stuck at places, and that's just where the new itch to scratch is. Yeah, it's not the same as code golfing an algorithm really neatly into a few lines of really expressive C++, but things change and life goes on. Programming isn't the same as when it was on punch cards either.
37. fragmede ◴[] No.44618275[source]
> why would all the biggest companies in the world be firing people

Because of section 174, now hopefully repealed. Money makes the world go round, and the money people talk to the people with firing authority.

38. exographicskip ◴[] No.44618327[source]
Duplicate logic is definitely a thing. That and littering comments all over the place.

Worth it to me as I can fix all the above after the fact.

Just annoying haha

replies(1): >>44619891 #
39. vishvananda ◴[] No.44618377[source]
I'm really baffled why the coding interfaces have not implemented a locking feature for some code. It seems like an obvious feature to be able to select a section of your code and tell the agent not to modify it. This could remove a whole class of problems where the agent tries to change tests to match the code or removes key functionality.

One could even imagine going a step further and having a confidence level associated with different parts of the code, that would help the LLM concentrate changes on the areas that you're less sure about.
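As far as I know no mainstream agent tool ships a locking feature, but a rough approximation can live outside the agent entirely. A minimal sketch in Python, assuming locked regions are bracketed with hypothetical LOCK-START / LOCK-END comments and the script runs as a pre-commit hook (the marker names and the hook wiring are assumptions for illustration, not an existing feature):

```python
#!/usr/bin/env python3
"""Illustrative sketch of "code locking": fail if staged changes touch locked regions.

LOCK-START / LOCK-END are hypothetical markers, not part of any agent tool.
"""
import subprocess
import sys

LOCK_START = "LOCK-START"
LOCK_END = "LOCK-END"


def locked_ranges(path):
    """Return 1-based (start, end) line ranges bracketed by lock markers."""
    ranges, start = [], None
    try:
        with open(path, encoding="utf-8") as f:
            lines = f.read().splitlines()
    except OSError:
        return ranges
    for i, line in enumerate(lines, 1):
        if LOCK_START in line:
            start = i
        elif LOCK_END in line and start is not None:
            ranges.append((start, i))
            start = None
    return ranges


def changed_lines(path):
    """Collect new-side line numbers touched by staged changes, from diff hunk headers."""
    out = subprocess.run(
        ["git", "diff", "--cached", "-U0", "--", path],
        capture_output=True, text=True,
    ).stdout
    touched = set()
    for line in out.splitlines():
        if line.startswith("@@"):
            # Hunk header looks like "@@ -10,2 +12,3 @@": new side starts at 12, spans 3 lines.
            new_side = line.split("+")[1].split(" ")[0]
            start, _, count = new_side.partition(",")
            touched.update(range(int(start), int(start) + int(count or 1)))
    return touched


if __name__ == "__main__":
    blocked = False
    for path in sys.argv[1:]:
        touched = changed_lines(path)
        for lo, hi in locked_ranges(path):
            if any(lo <= n <= hi for n in touched):
                print(f"{path}: staged edit inside locked region (lines {lo}-{hi})")
                blocked = True
    sys.exit(1 if blocked else 0)
```

The confidence-level idea could extend the same sketch, e.g. warning rather than failing for medium-confidence regions.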

replies(1): >>44619462 #
40. ModernMech ◴[] No.44618474{4}[source]
> But there is also the area of boilerplate, where non-LLM-AI-based IDEs for a few decades already help a lot with templates and "smart" completion.

The thing for me is that AI writing the boilerplate feels like the brute force solution, compared to investing in better language and tooling design that may obviate the need for such boilerplate in the first place.

replies(1): >>44618594 #
41. johannes1234321 ◴[] No.44618594{5}[source]
Yeah, but building tooling is a hard sell considering the ability of contemporary AI.

The energy cost is absurdly high for the result, but in current economics, where it's paid by investors rather than users, it's hidden. It will be interesting to see what happens when AI companies get to the level where they have to make profits, and how much optimisation there is to come...

42. johnnienaked ◴[] No.44618645{4}[source]
15 years into bitcoin replacing the USD too
43. oblio ◴[] No.44618716{4}[source]
The problem is that these days we're talking about millions of people.

Those kinds of masses of people don't pivot on a dime.

44. commakozzi ◴[] No.44619081{3}[source]
They will be judging the merit of work in a much broader context.
45. Benjammer ◴[] No.44619462{3}[source]
Why are engineers so obstinate about this stuff? You really need a GUI built for you in order to do this? You can't take the time to just type up this instruction to the LLM? Do you realize that's possible? You can just write instructions "Don't modify XYZ.ts file under any circumstances". Not to mention all the tools have simple hotkeys to dismiss changes for an entire file with the press of a button if you really want to ignore changes to a file or whatever. In Cursor you can literally select a block of text and press a hotkey to "highlight" that code to the LLM in the chat, and you could absolutely tell it "READ BUT DON'T TOUCH THIS CODE" or something, directly tied to specific lines of code, literally the feature you are describing. BUT, you have to work with the LLM and tooling, it's not just going to be a button for you or something.

You can also literally do exactly what you said with "going a step further".

Open Claude Code, run `/init`. Download Superwhisper, open a new file at project root called BRAIN_DUMP.md, put your cursor in the file, activate Superwhisper, and talk in stream-of-consciousness style about all the parts of the code and your own confidence level, with any details you want to include. Go to your LLM chat, tell it to "Read file @BRAIN_DUMP.md" and organize all the contents into your own new file, CODE_CONFIDENCE.md. Tell it to list the parts of the code base and give its best assessment of the developer's confidence in that part of the code, given the details and tone in the brain dump for each part. Delete the brain dump file if you want. Now you literally have what you asked for, an "index" of sorts for your LLM that tells it the parts of the codebase and developer confidence/stability/etc. Now you can just refer to that file in your project prompting.
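To make that concrete, the organized file might end up looking something like this (the file name comes from the workflow above; the entries themselves are invented purely as illustration):

```markdown
# CODE_CONFIDENCE.md (illustrative)

- `auth/` — High confidence. Well tested; don't restructure, only touch when asked.
- `billing/webhooks.ts` — Medium. Works, but the retry logic is fragile; flag risky edits.
- `scraper/` — Low. Prototype code; safe to refactor aggressively.
```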

Please, everyone, for the love of god, just start prompting. Instead of posting on hacker news or reddit about your skepticism, literally talk to the LLM about it and ask it questions, it can help you work through almost any of this stuff people rant about.

replies(3): >>44620215 #>>44620666 #>>44622866 #
46. dvfjsdhgfv ◴[] No.44619630[source]
It happens to me, yes. Sometimes they get stuck in the process. I've learned how to work around certain issues, but it's very annoying.
47. viraptor ◴[] No.44619891{3}[source]
Ask for no comments. Or extremely infrequent ones for complex sections only. Agreeing with https://news.ycombinator.com/item?id=44619462 here.
48. lightbulbish ◴[] No.44620215{4}[source]
_all_ the models I've tried keep ignoring rules, continuously and to this day. I'm actually quite shocked someone with experience in the area would write this, as it so clearly contrasts with my own experience.

Despite explicit instructions in all sorts of rules and .md’s, the models still make changes where they should not. When caught they innocently say ”you’re right I shouldn’t have done that as it directly goes against your rule of <x>”.

Just to be clear, are you suggesting that currently, with your existing setup, the AI’s always follow your instructions in your rules and prompts? If so, I want your rules please. If not, I don’t understand why you would diss a solution which aims to hardcode away some of the llm prompt interpretation problems that exist

49. bradly ◴[] No.44620251[source]
This is why, even as a paid user, I stick to the browser-tab LLMs. I am a context control freak, constantly grabbing a new session and starting over. I don't try to fix a session, and a subscription vs. API-token payment model comes with inverse incentives.
50. vishvananda ◴[] No.44620666{4}[source]
I am by no means an AI skeptic. It is possible to encode all sorts of things into instructions, but I don’t think the future of programming is every individual constructing and managing artisan prompts. There are surely some new paradigms to be discovered here. A code locking interface seems like an interesting one to explore. I’m sure there are others.
51. hooverd ◴[] No.44622092[source]
I like computers quite a lot, and the direction of the tech industry has been to destroy every single thing I like, or thought would be good, about them.
52. hooverd ◴[] No.44622149{3}[source]
they don't!
53. orangecat ◴[] No.44622478{4}[source]
Or maybe it works well in some cases and not others?
54. gjadi ◴[] No.44622866{4}[source]
Or, you know, chmod -w XYZ.ts
55. throw234234234 ◴[] No.44630320[source]
Not sure the analogy is the same. A driverless car that does something wrong can cause death - the cost of failure is very high. If my code doesn't come out right, as other posts here have said, I can just "re-roll the slot machine" until it does come out as acceptable - the cost is extremely low. Most of the "reasoning" models just use RL to increase the probability that the re-run will match most people's preferences; newer agents know how to run tools, etc., to gather more data, so the output is viable within a good timeframe and doesn't run into an endless loop most of the time. Software, until it is running, is after all just text on a page.

Sure, I have to be sure what I'm committing and running is good, especially in critical domains. The cheap cost of iteration before an actual commit is, IMO, the one reason why LLMs are disruptive in software and other "generative" domains in the digital world. Conversely, real-time requirements, software that needs to be relied on (e.g. a life support system?), things that post opinions in my name online, etc. will probably, even if written by an LLM, need someone accountable verifying the output.

Again, as per many other posts, "I want to be wrong", given I'm a senior in my career and would find it hard to change now given my age. I don't like how our career is concentrating in the big AI labs/companies rather than in our own intelligence/creativity. But rationally it's hard to see how software continues to be the same career going forward, and if I don't adapt I might die. Going forward I will most likely, similar to what I do with my current team, just define and verify.