
358 points andrewstetsenko | source
hintymad ◴[] No.44362187[source]
Copying from another post. I'm very puzzled as to why people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

replies(22): >>44362234 #>>44362259 #>>44362323 #>>44362411 #>>44362713 #>>44362779 #>>44362791 #>>44362811 #>>44363426 #>>44363487 #>>44363510 #>>44363707 #>>44363719 #>>44364280 #>>44364282 #>>44364296 #>>44364302 #>>44364456 #>>44365037 #>>44365998 #>>44368818 #>>44371963 #
andyferris ◴[] No.44362234[source]
I'm not sure what the answer is - but I will say that LLMs do help me wrangle with essential complexity / real-world issues too.

Most problems businesses face have been seen by other businesses; perhaps some knowledge is in the training set, or perhaps some problems are so easy to reason through that an LLM can do the "reasoning" more-or-less from first principles and your problem description.

I am speculating that AI will help with both sides of the No Silver Bullet dichotomy?

replies(2): >>44363027 #>>44365101 #
daxfohl ◴[] No.44363027[source]
Yeah, I give it about two years until we get to "Hey AI, what should we do today?" "Hi, I've noticed an increase in users struggling with transactions across individual accounts that they own. It appears some aspect of multitenancy would be warmly received by a significant fraction of our userbase. I have compiled a report on the different approaches taken by medium and large tech companies in this regard, and created a summary of user feedback that I've found on each. Based on this, and with the nuance of our industry, current userbase, the future markets we want to explore, and the ability to fit it most naturally into our existing infrastructure, I have boiled it down to one of these three options. Here are detailed design docs for each; they cover all downstream services affected and all data schema changes, list any concerns about backwards compatibility and user interface nuances, and include all the new operational and adoption metrics that we will want to monitor. Please read these through and let me know which one to start, and if you have any questions or suggestions I'll be more than happy to take them. For the first option, I've already prepared a list of PRs that I'm ready to commit and deploy in the designated order; I've tested them e2e across all affected services, and the feature is currently up and running in a test cluster if you would like to explore it. It will take me a couple hours to do the same with the other two options if you'd like. If I get the green light today, I can sequence the deployments so that they don't conflict with other projects and have it in production by the end of the week, along with communication and optional training for the users I feel would find the feature most useful. Of course any of this can be changed, postponed, or dropped if you have concerns, would like to take a different approach, or think the feature should not be pursued."
replies(1): >>44363466 #
achierius ◴[] No.44363466[source]
Luckily, by that point it won't just be SWEs who'll be out of a job :)
replies(2): >>44365271 #>>44368675 #
daxfohl ◴[] No.44368675[source]
Yeah, PM, data science, compliance, accounting... all largely automatable. You just need a few directors to call the shots on big risks. But even that goes away at some point, because in a few months it'll have implemented everything you were thinking about doing for the next ten years, and it simply runs out of stuff for humans to do.

What happens after that, I have no idea.

Seems like OpenAI (or whoever wins) could easily just start taking over whole industries at that point, or at least those that are mostly tech-based, since it can replicate anything they can do, but cheaper. By that point, probably the only tech jobs left will be building safeguards so that AI doesn't destroy the planet.

Which sounds niche, but conceivably, could be a real, thriving industry. Once AI outruns us, there'll probably be a huge catastrophe at some point, after which we'll realize we need to "dumb down" AI in order to preserve our own species. It will serve almost as a physical resource, or maybe like a giant nuclear reactor, where we mine it as needed but don't let it run unfettered. Coordinating that balance to extract maximal economic growth without blowing everything up could end up being the primary function of human intelligence in the AI age.

Whether something like that can be sustained, in a world with ten billion different opinions on how to do so, remains to be seen.