
Nobody knows how to build with AI yet

(worksonmymachine.substack.com)
526 points Stwerner | 95 comments
1. karel-3d ◴[] No.44616917[source]
Reading articles like this feels like being in a different reality.

I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

Also I am scared that any library that I am using through the myriad of dependencies is written like this.

On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me

replies(20): >>44617013 #>>44617014 #>>44617030 #>>44617053 #>>44617173 #>>44617207 #>>44617235 #>>44617244 #>>44617297 #>>44617336 #>>44617355 #>>44617366 #>>44617387 #>>44617482 #>>44617686 #>>44617879 #>>44617958 #>>44617997 #>>44618547 #>>44618568 #
2. lordnacho ◴[] No.44617013[source]
But you also can't not swim with the tide. If you drove a horse-buggy 100 years ago, it was probably worth your while to keep your eye on whether motor-cars went anywhere.

I was super skeptical about a year ago. Copilot was making nice predictions, that was it. This agent stuff is truly impressive.

replies(7): >>44617059 #>>44617096 #>>44617165 #>>44617303 #>>44617421 #>>44617514 #>>44618157 #
3. raincole ◴[] No.44617014[source]
> in a different reality.

It is. And one reality is getting bigger each day and the other is shrinking.

4. logicchains ◴[] No.44617030[source]
>I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

It suggests you've had very positive life experiences, that you trust human developers so much more than computers.

replies(2): >>44617158 #>>44617376 #
6. rafaelmn ◴[] No.44617059[source]
More like people telling us there will be no more professional drivers on the road in 5-10 years 10 years ago. Agents are like lane assist, not even up to the current self driving levels.
replies(2): >>44617149 #>>44630320 #
7. mnky9800n ◴[] No.44617096[source]
I think the agent stuff is impressive because we are giving the AI scaffolding and tools and things to do. That is why it is impressive: it has some directive. But it is obvious that if you don't give it good directives, it doesn't know what to do. So for me, I think a lot of jobs will be making agents do things, but a lot won't. I think it's really strange that people are all so against this stuff. It's cool new computer tools; does nobody actually like computers anymore?
replies(4): >>44617132 #>>44617159 #>>44617335 #>>44622092 #
8. oblio ◴[] No.44617132{3}[source]
People are afraid that instead of skilled craft guild members they will become assembly line workers like Charlie Chaplin in Modern Times. And in 10 years unemployed like people in the Rust Belt.
replies(2): >>44617223 #>>44618167 #
9. miltonlost ◴[] No.44617149{3}[source]
So many people are hyping AI like it's Musk's FSD, with the same fraudulence in overestimating its capabilities.
replies(1): >>44617379 #
10. sbalough ◴[] No.44617158[source]
I don’t think that was his argument. It would be one thing if we reach a point where humans trust a higher AI intelligence to create/keep software systems predictably meeting requirements. We aren’t there yet. So, it’s important to make sure any AI code is reviewed and approved by humans.
11. prinny_ ◴[] No.44617159{3}[source]
A lot of people join this profession because they like building stuff. They enjoy thinking about a problem and coming up with a solution and then implementing and testing it. Prompting is not the same thing and it doesn't scratch the same itch and at the end of the day it's important to enjoy your job, not only be efficient at it.

I have heard the take that "writing code is not what makes you an engineer, solving problems and providing value is what makes you an engineer" and while that's cool and all and super important for advancing in your career and delivering results, I very much also like writing code. So there's that.

replies(6): >>44617289 #>>44617310 #>>44617354 #>>44617386 #>>44617536 #>>44618236 #
12. bloppe ◴[] No.44617165[source]
Am I the only one who has to constantly tell Claude and Gemini to stop making edits to my codebase because they keep messing things up and breaking the build like ten times in a row, duplicating logic everywhere, etc.? I keep hearing about how impressive agents are. I wish they could automate me out of my job faster.
replies(9): >>44617236 #>>44617257 #>>44617322 #>>44617596 #>>44617644 #>>44618327 #>>44618377 #>>44619630 #>>44620251 #
13. majormajor ◴[] No.44617173[source]
The market for utility software like this predates the internet, we used to pass them around on floppies. It was never subject to particularly high QA or scrutiny. It just has to be "adequate."

But it's never displaced the market for highly-produced, highly-planned, "central" software pieces that the utilities glue together and help you work with, etc.

The growth of that software-as-big-business has only enlarged the need for utilities, really, to integrate everything, but it's a tough space to work in - "it's hard to compete with free." One classic move is selling support, etc.

Might be tough to do non-LLM-driven software development there - the selling support for your LLM-created-products model is still viable, but if there's an increase in velocity in useful utility creation or maintenance, possibly the dev headcount needs are lower.

But does anyone know how to use LLMs to make those giant ones yet? Or to make those central core underlying libraries you mention? Doesn't seem like it. Time will tell if there's a meaningful path that is truly different from "an even higher level programming language." Even on the edges - "we outgrew the library and we have to fork it because of [features/perf/bugs]" is a pretty common pattern when working on those larger projects already, and the more specific the exact changes you need are, the less the LLM might be able to do it for you (e.g. the "it kept assuming this function existed because it exists in a lot of similar things" problem).

What I hope is that we can find good ways to leverage these for quality control and testing and validation. (Though this is the opposite of the sort of greenfield dev demos that get the most press right now.)

Testing/validation is hard and expensive enough that basically nobody does a thorough job of it right now, especially in the consumer space. It would be wonderful if we could find ways to release higher quality software without teams of thousands doing manual validation.

14. quantiq ◴[] No.44617207[source]
This has to be someone working solely on personal projects right? Because I don't know anyone who actually works like this and frequently the code that AI will spit out is actually quite bad.
15. avidphantasm ◴[] No.44617223{4}[source]
This, and no one will understand the software that is created. Then you are beholden to AI companies who can charge you whatever they want to maintain the AI code. Will this be cheaper than paying software engineers? Maybe, but I could also see it costing much more.
17. mbrumlow ◴[] No.44617236{3}[source]
Did you tell them to not duplicate code?
19. Benjammer ◴[] No.44617257{3}[source]
Are you paying for the higher end models? Do you have proper system prompts and guidance in place for proper prompt engineering? Have you started to practice any auxiliary forms of context engineering?

This isn't a magic code genie, it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.

replies(4): >>44617299 #>>44617324 #>>44618056 #>>44618063 #
20. closewith ◴[] No.44617289{4}[source]
Most people don't enjoy their jobs and go to work for one reason only - to support themselves and their families. The itch is to get paid. This is as true in software as it is in other fields.

That's not to say there aren't vocations, or people in software who feel the way you do, but it's a tiny minority.

21. weitendorf ◴[] No.44617297[source]
I've been working on AI dev tools for a bit over a year and I don't love using AI this way either. I mostly use it for boilerplate, ideas, or to ask questions about error messages. But I've had a very open mind about it ever since I saw it oneshotting what I saw as typical Google Cloud Functions tasks (glue together some APIs, light http stuff) a year ago.

I think in the last month we've entered an inflection point with terminal "agents" and new generations of LLMs trained on their previously spotty ability to actually do the thing. It's not "there" yet and results depend on so many factors like the size of your codebase, how well-represented that kinda stuff is in its training data, etc but you really can feed these things junior-sized tickets and send them off expecting a PR to hit your tray pretty quickly.

Do I want the parts of my codebase with the tricky, important secret sauce to be written that way? Of course not, but I wouldn't give them to most other engineers either. A 5-20 person army of ~interns-newgrads is something I can leverage for a lot of the other work I do. And of course I still have to review the generated code, because it's ultimately my responsibility, but I prefer that over having to think about http response codes for my CRUD APIs. It gives me more time to focus on L7 load balancing and cluster discovery and orchestration engines.

replies(1): >>44620227 #
22. goalieca ◴[] No.44617299{4}[source]
It ain’t a magic code genie. And developers don’t spend most of their day typing lines of code. Lots of it is designing, figuring out what to build, understanding the code, maintenance considerations, and adhering to the style of whatever file you’re in. All these agents need local context and still spit out junk.
23. verisimilidude ◴[] No.44617303[source]
AI's superpower is doing mediocre work at high speed. That's okay. Great, even. There's lots of mediocre work to do. And mediocre still clears below average.

But! There's still room for expertise. And this is where I disagree about swimming with the tide. There will be those who are uninterested in using the AI. They will struggle. They will hone their craft. They will have muscle memory for the tasks everyone else forgot how to do. And they will be able to perform work that the AI users cannot.

The future needs both types.

replies(1): >>44617446 #
24. theferret ◴[] No.44617310{4}[source]
That's an interesting take - that you like the act of writing code. I think a lot of builders across a variety of areas feel this way. I like writing code too.

I've been experimenting with a toolchain in which I use speech-to-text to talk to agents, navigate the files with vim and autocomplete, and have Grok think through some math for me. It's pretty fun. I wonder whether tuning agents to write code through that process in a semi-supervised manner will also be fun? I don't know, but I'm open to the idea that as we progress I will find toolchains that bring me into flow as I build.

26. dingnuts ◴[] No.44617324{4}[source]
guy 1: I put money in the slot machine everyone says wins all the time and I lose

you: HAVE YOU PUT MORE TOKENS IN???? ARE YOU PUTTING THEM IN THE EXPENSIVE MACHINES???

super compelling argument /s

if you want to provide working examples of "prompt engineering" or "context engineering" please do but "just keep paying until the behavior is impressive" isn't winning me as a customer

it's like putting out a demo program that absolutely sucks and promising that if I pay, it'll get good. why put out the shit demo and give me this impression, then, if it sucks?

replies(1): >>44617410 #
27. majormajor ◴[] No.44617335{3}[source]
> does nobody actually like computers anymore

I think this is a really interesting question and an insight into part of the divide.

Places like HN get a lot of attention from two distinct crowds: people who like computers and related tech and people who like to build. And the latter is split into "people who like to build software to help others get stuff done" and "people who like to build software for themselves" too. Even in the professional-developer-world that's a lot of the split between those with "cool" side projects and those with either only-day-job software or "boring" day-job-related side projects.

I used to be in the first group, liking computer tech for its own sake. The longer I work in the profession of "using computer tools to build things for people" the less I like the computer industry, because of how much the marketing/press/hype/fandom elements go overboard. Building-for-money often exposes, very directly, the difference between "cool tools" and "useful and reliable tools" - all the bugs I have to work around, all the popular much-hyped projects that run into the wall in various places when thrown into production, all the times simple and boring beats cool when it comes to winning customers. So I understand when it makes others jaded about the hype too. Especially if you don't have the intrinsic "cool software is what I want to tinker with" drive.

So the split in reactions to articles like this falls on those lines, I think.

If you like cool computer stuff, it's a cool article, with someone doing something neat.

If you are a dev enthusiast who likes side projects and such (regardless of if it's your day job too or not), it's a cool article, with someone doing something neat.

If you are in the "I want to build stuff that helps other people get shit done" crowd then it's probably still cool - who doesn't like POCs and greenfield work? - but it also seems scary for your day-to-day work, if it promises a flood of "adequate", not-well-tested software that you're going to be expected to use and work with and integrate for less-technical people who don't understand what goes into reliable software quality. And that's not most people's favorite part of the job.

(Then there's a third crowd which is the "people who like making money" crowd, which loves LLMs because they look like "future lower costs of labor." But that's generally not what the split reaction to this particular sort of article is about, but is part of another common split between the "yay this will let me make more profit" and "oh no this will make people stop paying me" crowds in the biz-oriented articles.)

28. gabrieledarrigo ◴[] No.44617336[source]
I know, it's scary. But I guess it's the direction we are aiming for.
replies(1): >>44617532 #
29. SoftTalker ◴[] No.44617354{4}[source]
Rick Beato posted a video recently where he created a fictitious artist and a couple of songs based on a few prompts. The results were somewhat passable, generic indie/pop music but as he said (I'm paraphrasing) "I didn't create anything here. I prompted a computer to put together a bunch of words and melodies that it knew from what other people had written."
33. dingnuts ◴[] No.44617379{4}[source]
it's exactly like this. we're 3 years into being told all white collar jobs are going to be gone next year, just like we're ten years into being told we'll have self driving cars next year
replies(1): >>44618645 #
34. johannes1234321 ◴[] No.44617386{4}[source]
There is code which is interesting to write, even if it isn't the area with clever algorithms or big architecture decisions or something.

But there is also the area of boilerplate, where non-LLM-AI-based IDEs for a few decades already help a lot with templates and "smart" completion. Current AI systems widen that area.

The trouble with AI is when you are reaching the boundary of its capabilities. The trivial stuff it does well. For the complex stuff it fails spectacularly. In the in between you got to review carefully, which easily becomes less fun than simply writing by oneself.

replies(1): >>44618474 #
35. richardw ◴[] No.44617387[source]
Scary part is: what if it’s inevitable? We don’t get to choose our environment, and a new one is forming around us.

A friend’s dad only knows assembly. He’s the CEO of his company and they do hardware, and he’s close to retirement now, but he finds this newfangled C and C++ stuff a little too abstract. He sadly needs to trust “these people” but really he prefers being on the metal.

36. lordnacho ◴[] No.44617410{5}[source]
The way I ended up paying for Claude max was that I started on the cheap plan, it went well, then it wanted more money, and I paid because things were going well.

Then it ran out of money again, and I gave it even more money.

I'm in the low 4 figures a year now, and it's worth it. For a day's pay each year, I've got a junior dev who is super fast, makes good suggestions, and makes working code.

replies(1): >>44617689 #
37. kellyjprice ◴[] No.44617421[source]
I'm not trying to discount the analogy, but I'd much rather live without cars (or with a lot fewer of them).
38. jon-wood ◴[] No.44617446{3}[source]
My ongoing concern is that most of us probably got to being able to do good work via several years of doing mediocre work. We put in the hours and along the way learned what good looks like, and various patterns that allow us to see the path to solving a given problem.

What does the next generation do when we’ve automated away that work? How do they learn to recognise what good looks like, and when their LLM has got stuck on a dead end and is just spewing out nonsense?

replies(2): >>44619081 #>>44622149 #
39. vitaflo ◴[] No.44617482[source]
>I don't want to work with somebody who works like this.

You will most likely get your wish but not in the way you want. In a few years when this is fully matured there will be little reason to hire devs with their inflated salaries (especially in the US) when all you need is someone with some technical know-how and a keen eye on how to work with AI agents. There will be plenty of those people all over the globe who will demand much less than you will.

Hate to break it to you but this is the future of writing software and will be a reckoning for the entire software industry and the inflated salaries it contains. It won't happen overnight but it'll happen sooner than many devs are willing to admit.

replies(2): >>44617627 #>>44622114 #
40. beefnugs ◴[] No.44617514[source]
This doesn't make aaaaany sense: IF this actually worked, then why would all the biggest companies in the world be firing people? They would be forcing them all to DO THE TIDE and multiply their 10-billion-dollar dominance into 100-billion-dollar-or-more dominance.

The truth is something like: for this to work, there are huge requirements in tooling/infrastructure/security/simulation/refinement/optimization/cost-saving that just could never be figured out by the big companies. So they are just like... well, let's trick as many investors and plebs into trying this as possible, maybe one of them will come up with some breakthrough we can steal.

replies(1): >>44618275 #
41. recursive ◴[] No.44617532[source]
Just to clarify, I'm not a member of that "we".
replies(1): >>44618833 #
42. mnky9800n ◴[] No.44617536{4}[source]
Yeah but I write the code that is interesting to solve and let the LLM solve the problems that are not so important. Like making yet another webscraper tool is not the most exciting part of the process when you are trying to make some kind of real time inference tool for what people post on the internet.
43. esafak ◴[] No.44617596{3}[source]
Create and point them to an agent.md file http://agent.md/
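For anyone who hasn't seen the format: an agent.md is just a markdown file of standing instructions the agent reads before working. A minimal sketch might look like this (the section names and rules below are illustrative, not a required schema):

```markdown
# Project guide for coding agents

## Build & verify
- Run `make test` and make sure it passes before declaring a task done.

## Conventions
- Search for an existing helper before writing a new one; no duplicated logic.
- Do not add comments except for genuinely non-obvious code.

## Off-limits
- Never modify files under `migrations/` or the CI config.
```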
replies(1): >>44618361 #
44. dingnuts ◴[] No.44617627[source]
yes yes the chainsaw made lumberjacks obsolete
45. csomar ◴[] No.44617644{3}[source]
They need "context engineering", which I'd best describe as "railing" them in. If you give them a bit of loose space, they'll massacre your code base. You can use their freedom for exploration but not for implementation.

In essence, you have to do the "engineering" part of the app and they can write the code pretty fast for you. They can help you in the engineering part, but you still need to be able to weigh in whatever crap they recommend and adjust accordingly.

46. esafak ◴[] No.44617686[source]
Imagine a future where creating software is about designing the UX, overseeing the architecture, and quality assurance. Implementation is farmed out.
replies(1): >>44618128 #
47. Avicebron ◴[] No.44617689{6}[source]
> For a day's pay each year

For anyone trying the back-of-the-napkin math with $1000/year as the 4 figures, treated as one day's salary: the baseline salary where this makes sense is about ~$260,000/yr? Is that about right lordnacho?
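Spelled out, the arithmetic is just (assuming the standard ~260 working days per year):

```python
# Back-of-the-napkin check: if ~$1000/year in tooling equals one day's pay,
# what annual salary does that imply? (260 workdays per year assumed.)
annual_tool_cost = 1000
working_days_per_year = 260  # ~52 weeks x 5 days

implied_salary = annual_tool_cost * working_days_per_year
print(implied_salary)  # 260000
```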

replies(2): >>44618010 #>>44618201 #
48. intended ◴[] No.44617879[source]
I promise everyone one thing - there ain’t no such thing as a free lunch.

A lot of what is “working” in the article is closer to “jugaad”/prototyping.

Something the author acknowledges in their opening- it’s a way to prototype and get something off the ground.

Technical debt will matter for those products that get off the ground.

49. stillsut ◴[] No.44617958[source]
> Just keep it away from me

I'm reminded of teaching bootcamp software engineering, where on day #1 we go through simple git workflows, and it seems very intimidating to students and they don't understand the value. Which is fair enough, because git has a steep learning curve and you need to use it practically to start picking it up.

I think this might be analogous to the shift going on with ai-generated and agent-generated coding, where you're introducing an unfamiliar tool with a steep learning curve, and many people haven't seen the why? for its value.

Anyways, I'm 150 commits into a vibe coding project that's still standing strong. If you're curious as to how this can work, you can see all the prompts and the solutions in this handy markdown I've created: https://github.com/sutt/agro/blob/master/docs/dev-summary-v1...

replies(2): >>44618455 #>>44620232 #
50. tempodox ◴[] No.44617997[source]
It does sound horrible. No more getting in the flow, no more thinking about anything, no more understanding anything. Just touch it with a ten-foot pole every few hours, then get distracted again.

I guess if all you do is write React To-Do apps all day, it might even work for a bit.

replies(1): >>44618319 #
51. lordnacho ◴[] No.44618010{7}[source]
Yeah I thought that was a reasonable number in the ballpark. I mean, it probably makes sense to pay a lot more for it. A grand is probably well within the range where you shouldn't care about it, even if you only get a basic salary and it's a terrible year with no bonus.

And that's not saying AI tools are the real deal, either. It can be a lot less than a fully self driving dev and still be worth a significant fraction of an entry level dev.

52. tempodox ◴[] No.44618056{4}[source]
That's the beauty of the hype: anyone who cannot replicate it is “holding it wrong”.
replies(2): >>44618116 #>>44622478 #
53. QuantumGood ◴[] No.44618063{4}[source]

> it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.

Of course this is and would be expected to be true. Yet adoption of this mindset has been orders of magnitude slower than the increase in AI features and capabilities.
55. karel-3d ◴[] No.44618128[source]
But the architecture is the important (and hard) part!!! Not the UX!
replies(2): >>44618211 #>>44620761 #
56. fzeroracer ◴[] No.44618157[source]
Sometimes it's a good thing to not swim with the tide. Enshittification comes from every single dipshit corporation racing to the bottom, and right now said tide is increasingly filling with sewage.

There's a huge disconnect I notice where experienced software engineers rage about how shitty things are nowadays while diving directly into using AI garbage, where they cannot explain what their code is doing if their lives depended on it.

57. lucumo ◴[] No.44618167{4}[source]
There's a kind of karmic comedy in this. Programmers' job has always been to automate other people's jobs. The panic of programmers about their own jobs now is immensely funny to me.

As has been the case for all those jobs changed by programmers, the people who keep an open mind and are willing to learn new ways of working will be fine or even thrive. The people rusted to their seat, who are barely adding value as is, will be forced to choose between changing or struggling.

replies(1): >>44618716 #
58. antihipocrat ◴[] No.44618201{7}[source]
I assume it's after tax too..
59. esafak ◴[] No.44618211{3}[source]
Does it matter if the computer can do it? Can you calculate the cube root of 4?

Users see and care about the UX; the product. They only notice the engineering when it goes wrong.

60. fragmede ◴[] No.44618236{4}[source]
Ah yes, that "is that 6 spaces or 8" in a yaml file itch that just has to be scratched. Programming has a lot of doldrums. LLMs still get stuck at places, and that's just where the new itch to scratch is. Yeah, it's not the same as code golfing an algorithm really neatly into a few lines of really expressive C++, but things change and life goes on. Programming isn't the same as when it was on punch cards either.
61. fragmede ◴[] No.44618275{3}[source]
> why would all the biggest companies in the world be firing people

Because of section 174, now hopefully repealed. Money makes the world go round, and the money people talk to the people with firing authority.

62. fragmede ◴[] No.44618319[source]
Unfortunately, I think the evolution of LLMs is going to put more areas of programming within this "React Todo app" envelope of capability that you suggest, and to have it work for longer, rather than going away.
63. exographicskip ◴[] No.44618327{3}[source]
Duplicate logic is definitely a thing. That and littering comments all over the place.

Worth it to me as I can fix all the above after the fact.

Just annoying haha

replies(1): >>44619891 #
64. vishvananda ◴[] No.44618377{3}[source]
I'm really baffled why the coding interfaces have not implemented a locking feature for some code. It seems like an obvious feature to be able to select a section of your code and tell the agent not to modify it. This could remove a whole class of problems where the agent tries to change tests to match the code or removes key functionality.

One could even imagine going a step further and having a confidence level associated with different parts of the code, that would help the LLM concentrate changes on the areas that you're less sure about.
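As far as I know no mainstream agent interface exposes this today, but the check itself is simple. A hypothetical sketch (the LLM-LOCK/LLM-UNLOCK comment markers are invented for illustration): guard regions with markers and reject any edit that alters them:

```python
# Hypothetical "locked code" check, sketching the idea from the comment above.
# The LLM-LOCK / LLM-UNLOCK markers are invented; no existing agent tool
# defines them. A wrapper could run this before accepting a proposed edit.

def locked_regions(text: str) -> list[str]:
    """Return the contents of every LLM-LOCK ... LLM-UNLOCK block."""
    regions, current = [], None
    for line in text.splitlines():
        if "LLM-UNLOCK" in line and current is not None:
            regions.append("\n".join(current))
            current = None
        elif "LLM-LOCK" in line:
            current = []  # start collecting a new locked block
        elif current is not None:
            current.append(line)
    return regions

def edit_touches_locked_code(before: str, after: str) -> bool:
    """Reject the edit if any locked block changed or disappeared."""
    return locked_regions(before) != locked_regions(after)
```

A confidence level per region, as suggested above, could be a third marker argument that the wrapper surfaces to the model as context rather than a hard constraint.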

replies(1): >>44619462 #
65. fragmede ◴[] No.44618455[source]
To the article's point, I built my own version of your agro tool that I use to manage my own git worktrees. Even if I had known about your project, I still would have built my own, because if I build it (with LLM assistance, obvs) then I get to design it for myself.

Looking at other industries, music production is probably the one to look at. What was once the purview of record labels with recording studios that cost a million dollars to outfit is now a used MacBook and, like, $1,000 of hardware/software. The music industry has changed dramatically as a result of the march of technology, and thus so will software. So writing software will go the way of the musician. What used to be a middle-class job as a trumpet player in NYC before the advent of records is now only a hobby, except for the truly elite-level practitioners.

66. ModernMech ◴[] No.44618474{5}[source]
> But there is also the area of boilerplate, where non-LLM-AI-based IDEs for a few decades already help a lot with templates and "smart" completion.

The thing for me is that AI writing the boilerplate feels like the brute force solution, compared to investing in better language and tooling design that may obviate the need for such boilerplate in the first place.

replies(1): >>44618594 #
67. fragmede ◴[] No.44618547[source]
> and maybe most importantly I don't want to work with somebody who works like this.

Which, of course, is your prerogative, but in what other ways do we, as fellow programmers, judge software libraries and dependencies so harshly? As a Vim user, do I care that Django was written with a lot of emacs? Or that Linus used emacs to write git? Or maybe being judgemental about programming languages; ugh, that's "just" a scripting language, it's not "real" programming unless you use a magnet up against a hard drive to program in ones and zeros. As a user, do I care that Calibre is written in Python, and not something "better"? Or that curl is written in good ole C. Or how about being opinionated as to whether the programmer used GDB or printf debugging to make the library?

68. AndrewKemendo ◴[] No.44618568[source]
Genuinely this is what it sounds like to accept obsolescence and I just can’t understand it.

What are you attached to and identify with that you’re rejecting new ways to work?

Change is the only constant and tools now look like superhuman tools created for babies compared to the sota at bell or NASA in the 1960s when they were literally trying to create superhuman computing.

We have more access to powerful compute and it’s never been easier to build your own everything.

What’s the big complaint?

replies(1): >>44622147 #
69. johannes1234321 ◴[] No.44618594{6}[source]
Yeah, but building tooling is a hard sell considering the ability of contemporary AI.

The energy cost is absurdly high for the result, but in current economics, where it's paid by investors not users, it's hidden. Will be interesting to see when AI companies got to the level where they have to make profits and how much optimisation there is to come ...

70. johnnienaked ◴[] No.44618645{5}[source]
15 years into bitcoin replacing the USD too
71. oblio ◴[] No.44618716{5}[source]
The problem is that these days we're talking about millions of people.

Those kinds of masses of people don't pivot on a dime.

72. gabrieledarrigo ◴[] No.44618833{3}[source]
And that's fine, it's your choice. But everything, driven by multiple forces (from hype, to marketing, to real progress, to early adopters) is pointing to that future.
replies(1): >>44620315 #
73. commakozzi ◴[] No.44619081{4}[source]
They will be judging the merit of work in a much broader context.
74. Benjammer ◴[] No.44619462{4}[source]
Why are engineers so obstinate about this stuff? You really need a GUI built for you in order to do this? You can't take the time to just type up this instruction to the LLM? Do you realize that's possible? You can just write instructions "Don't modify XYZ.ts file under any circumstances". Not to mention all the tools have simple hotkeys to dismiss changes for an entire file with the press of a button if you really want to ignore changes to a file or whatever. In Cursor you can literally select a block of text and press a hotkey to "highlight" that code to the LLM in the chat, and you could absolutely tell it "READ BUT DON'T TOUCH THIS CODE" or something, directly tied to specific lines of code, literally the feature you are describing. BUT, you have to work with the LLM and tooling, it's not just going to be a button for you or something.

You can also literally do exactly what you said with "going a step further".

Open Claude Code, run `/init`. Download Superwhisper, open a new file at project root called BRAIN_DUMP.md, put your cursor in the file, activate Superwhisper, talk in stream-of-consciousness style about all the parts of the code and your own confidence level, with any details you want to include. Go to your LLM chat, tell it to "Read file @BRAIN_DUMP.md" and organize all the contents into your own new file CODE_CONFIDENCE.md. Tell it to list the parts of the code base and give its best assessment of the developer's confidence in that part of the code, given the details and tone in the brain dump for each part. Delete the brain dump file if you want. Now you literally have what you asked for, an "index" of sorts for your LLM that tells it the parts of the codebase and developer confidence/stability/etc. Now you can just refer to that file in your project prompting.

Please, everyone, for the love of god, just start prompting. Instead of posting on hacker news or reddit about your skepticism, literally talk to the LLM about it and ask it questions, it can help you work through almost any of this stuff people rant about.

replies(3): >>44620215 #>>44620666 #>>44622866 #
75. dvfjsdhgfv ◴[] No.44619630{3}[source]
It happens to me, yes. Sometimes they get stuck in the process. I learned how to work around certain issues, but it's very annoying.
76. viraptor ◴[] No.44619891{4}[source]
Ask for no comments. Or extremely infrequent ones for complex sections only. Agreeing with https://news.ycombinator.com/item?id=44619462 here.
77. lightbulbish ◴[] No.44620215{5}[source]
_All_ models I’ve tried have had, and still have, problems with ignoring rules. I’m actually quite shocked someone with experience in the area would write this, as it so clearly contrasts with my own experience.

Despite explicit instructions in all sorts of rules and .md’s, the models still make changes where they should not. When caught they innocently say ”you’re right I shouldn’t have done that as it directly goes against your rule of <x>”.

Just to be clear, are you suggesting that currently, with your existing setup, the AIs always follow your instructions in your rules and prompts? If so, I want your rules, please. If not, I don’t understand why you would diss a solution which aims to hardcode away some of the LLM prompt-interpretation problems that exist.

78. bluefirebrand ◴[] No.44620227[source]
> but you really can feed these things junior-sized tickets and send them off expecting a PR to hit your tray pretty quickly

This really hasn't been my experience

Maybe I just expect more out of juniors than most people, though

79. ◴[] No.44620232[source]
80. bradly ◴[] No.44620251{3}[source]
This is why, even as a paid user, I stick to the browser-tab LLMs. I am a context-control freak and constantly just grab a new session and start over. I don't try to fix a session, and a subscription vs. an API-token payment model have inverse incentives here.
81. ath3nd ◴[] No.44620315{4}[source]
Yes, another of the web3, crypto, and NFT "inevitable" futures. Just give Sama a couple more trillion and AGI is juuuust around the corner.

It's a fact that models aren't getting more cost-efficient or better at the same rate that the costs of training and running them are increasing. It's also a fact that they are so unprofitable that Anthropic feels like it has to rug-pull your Claude tokens (https://news.ycombinator.com/item?id=44598254#44602695) without telling you. But sure, let's just ignore those facts and fanboy, eyes wide shut, about that future.

A future framed as "inevitable" by a bunch of people whose job/wealth depends on framing it as such. Nah, hard pass.

replies(1): >>44624526 #
82. vishvananda ◴[] No.44620666{5}[source]
I am by no means an AI skeptic. It is possible to encode all sorts of things into instructions, but I don’t think the future of programming is every individual constructing and managing artisan prompts. There are surely some new paradigms to be discovered here. A code locking interface seems like an interesting one to explore. I’m sure there are others.
83. lobf ◴[] No.44620761{3}[source]
A customer doesn't care about architecture. They want a good UX.
replies(1): >>44635522 #
84. hooverd ◴[] No.44622092{3}[source]
I like computers quite a lot, and the direction of the tech industry has been to destroy every single thing I like or thought would be good about them.
85. hooverd ◴[] No.44622114[source]
> some technical know-how

how do they develop the technical know-how? how will you review the AI agents when you understand nothing?

replies(1): >>44626650 #
86. hooverd ◴[] No.44622147[source]
I think it's a great technology but if people's ability to put food on the table is compromised what's their incentive not to ventilate people working for AI labs?
replies(1): >>44624924 #
87. hooverd ◴[] No.44622149{4}[source]
they don't!
88. orangecat ◴[] No.44622478{5}[source]
Or maybe it works well in some cases and not others?
89. gjadi ◴[] No.44622866{5}[source]
Or, you know, chmod -w XYZ.ts
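A sketch of that one-liner in action (XYZ.ts is the thread's placeholder name; note this doesn't stop a process running as root, and you restore access with `chmod 644`):

```shell
# Write-protect a file so an agent's edits fail loudly instead of
# silently clobbering it. XYZ.ts is a stand-in for the real file.
printf 'export const x = 1;\n' > XYZ.ts
chmod 444 XYZ.ts             # read-only for everyone (chmod -w also works)
ls -l XYZ.ts | cut -c1-10    # shows -r--r--r--

# A normal (non-root) write attempt now errors out:
echo 'overwritten' > XYZ.ts 2>/dev/null || echo 'write blocked'
```

Crude, but unlike a prompt rule it's enforced by the OS rather than by the model's willingness to comply.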
90. gabrieledarrigo ◴[] No.44624526{5}[source]
> A future framed as "inevitable" by a bunch of people whose job/wealth depends on framing it as such. Nah, hard pass.

I agree with you! I'm not saying that I like it; this is the perfect example of turbo capitalism applied to innovation.

I also like to code and to build software, and the joy that comes from the act of creation. Only, I'm quite sure it's not going to last.

91. AndrewKemendo ◴[] No.44624924{3}[source]
If people have shown anything recently, it’s the unwillingness to actually do what you just said.

If anyone cared enough to do anything, they would be burning everything down already

It’s a lot of impotent rage, because the only virtue people have is consumption; they don’t actually believe in anything. The ones who do believe in fairy tales are part of a dwindling population (religion) that is rightfully crashing.

Welcome to the wasteland of the real

92. vitaflo ◴[] No.44626650{3}[source]
How do you review the machine code generated by the compiler?
replies(1): >>44627399 #
93. hooverd ◴[] No.44627399{4}[source]
surely you can appreciate the difference between a compiler and a non-deterministic natural language interface?
94. throw234234234 ◴[] No.44630320{3}[source]
Not sure the analogy holds. If a driverless car does something wrong, it can cause death; the cost of failure is very high. If my code doesn't come out right, as other posts here have said, I can just "re-roll the slot machine" until it does come out as acceptable; the cost is extremely low. Most of the "reasoning" models just increase the probability that a re-run will match most people's preferences, with RL making that viable, and tools like the new agents know how to run tools etc. to gather more data, so the odds become workable within a reasonable timeframe without running into an endless loop most of the time. Software, until it is running, is after all just text on a page.

Sure, I have to be sure what I'm committing and running is good, especially in critical domains. The cheap cost of iteration before the actual commit is, IMO, the one reason why LLMs are disruptive in software and other "generative" domains in the digital world. Conversely, real-time requirements, software that needs to be relied on (e.g. a life-support system?), things that post opinions in my name online, etc. will probably, even if written by an LLM, need someone accountable and verifying the output.

Again, as per many other posts, "I want to be wrong", given I'm a senior in my career and would find it hard to change now given my age. I don't like how our career is concentrating into the big AI labs/companies rather than our own intelligence/creativity. But rationally it's hard to see how software continues to be the same career going forward, and if I don't adapt I might die. Going forward I will most likely, similar to what I do with my current team, just define and verify.

95. karel-3d ◴[] No.44635522{4}[source]
The customer wants a working product!