358 points andrewstetsenko | 64 comments
1. hintymad ◴[] No.44362187[source]
Copying from another post. I’m very puzzled as to why people don’t talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?

replies(22): >>44362234 #>>44362259 #>>44362323 #>>44362411 #>>44362713 #>>44362779 #>>44362791 #>>44362811 #>>44363426 #>>44363487 #>>44363510 #>>44363707 #>>44363719 #>>44364280 #>>44364282 #>>44364296 #>>44364302 #>>44364456 #>>44365037 #>>44365998 #>>44368818 #>>44371963 #
2. andyferris ◴[] No.44362234[source]
I'm not sure what the answer is - but I will say that LLMs do help me wrangle with essential complexity / real-world issues too.

Most problems businesses face have been seen by other businesses; perhaps some knowledge is in the training set, or perhaps some problems are so easy to reason through that an LLM can do the "reasoning" more-or-less from first principles and your problem description.

I am speculating that AI will help with both sides of the No Silver Bullet dichotomy?

replies(2): >>44363027 #>>44365101 #
3. giancarlostoro ◴[] No.44362259[source]
Kind of, but the models also output really awful code, even if it appears to work, and people (especially juniors) will push that awful code into PRs, and people eventually approve it because there are engineers who don't care about the craft, only collecting a paycheck. Then when things break or slow to a crawl, nobody has any idea how to fix it because it's all AI-generated goop.
4. lubujackson ◴[] No.44362323[source]
What you are saying is true. In a way, programmers still need to think about and wrestle with architectural complexity. And I agree the biggest overall gain is adding another layer of abstraction. But combine those two things and suddenly you have junior engineers that can very quickly learn how to architect systems. Because that will be the bulk of the job and they will be doing it every day.

Once you remove all the roadblocks with syntax and language blindspots, the high cost of refactoring, the tedium of adding validation and tests, the challenges of integrating systems... suddenly, the work becomes more pure. Yes, you need to know how to do advanced structural things. But you don't need to learn very much about all the rest of it.

And we very quickly get to a point where someone who can break down problems into tidy Jira tickets is effectively programming. Programming was never really about learning languages but about making computers do things, which is a transferable skill and why so many engineers know so many languages.

replies(1): >>44362747 #
5. foolswisdom ◴[] No.44362411[source]
People (especially people who don't have a lot of hands-on tech experience, or students who also aren't into building things) get the sense that writing software requires learning a lot of arcane tools. And the idea is to promise that anyone who can write a specification should be able to make software (yes, handwaving away learning to specify well, which is a real skill with many dependent skills). This was the promise of no-code, and then they realized that the no-code system (in addition to usually being limited in power) is actually complex and requires specialized learning, and more so the more powerful the system is. The "LLMs will replace SWEs" approach is another take on that, because you don't need to learn a system: you prompt in natural language, and the model knows how to interface with the underlying system so you don't have to. In that sense, vibe coding is already the culmination of this goal (despite weaknesses such as maintainability issues).

I've seen it written that the main reason managers tend to want to get rid of SWEs is because they don't understand how to interface with them. Using an LLM solves that problem, because you don't need a nerd to operate it.

replies(4): >>44362722 #>>44362965 #>>44363568 #>>44364404 #
6. ehnto ◴[] No.44362713[source]
The split between essential and incidental complexity is a really key insight for thinking about how far AI can be pushed into software development. I think it's likely the detail many developers are feeling intuitively but are unable to articulate when explaining why they won't be replaced just yet.

It's certainly how actually using AI in earnest feels. I have been doing my best to get agents like Claude to work through problems in a complex codebase shaped by enormous amounts of outside business logic. The inability to truly intuit the business requirements and their deep context means it cannot make business-related code changes. But it can help with very small-context code changes, i.e. incidental complexity unrelated to the core role of a good developer, which is translating real-world requirements into a system.

But I will add that it shouldn't be underestimated how many of us are actually solving the distribution problem, not technical problems. I still would not feel confident replacing a junior with AI, the core issue being lack of self-correction. But certainly people will try, and businesses built around AI development will be real and undercut established businesses. Whether that's net good or bad will probably not matter to those who lose their jobs in the scuffle.

7. drekipus ◴[] No.44362722[source]
Just use an LLM to interface with nerds /s

Oh god please kill me

replies(1): >>44365185 #
8. andoando ◴[] No.44362747[source]
I think we're still far from turning Jira tickets into code.

Even the simplest tickets, ones that end up requiring a one-line change, can require hours of investigation to fully understand and vet the effects of that change.

And perhaps I haven't used the greatest or latest, but in my experience LLMs break down hard at anything sufficiently large. They make changes and introduce new errors, they end up changing the feature, or in the worst case just outright break everything.

I'd never trust it unless you have an extensive suite of good tests for validation.

9. Aeolun ◴[] No.44362779[source]
Making any non-trivial system with AI only highlights this problem. My repo is littered with specs the AI has to refer to in order to build the system. But the specs are unclear files that have been added to and have grown outdated over time, so now we often end up going back and forth without much progress.
10. rr808 ◴[] No.44362791[source]
You're missing the part where building a modern website is a huge amount of dev time for largely UI work. Also, modern deployment is 100x more complicated than in Brooks's day. I'd say 90% of my project time goes to these two parts, which really shows how far productivity has gone down (and what AI can fix).
replies(4): >>44362913 #>>44362981 #>>44363152 #>>44363255 #
11. mrbungie ◴[] No.44362811[source]
Actually, you're not missing anything. The thing is, hype cycles are just that, cycles. They come around with a mix of genuine amnesia, convenient amnesia, and junior enthusiasm, because cycles require a society (and/or industry) both able and willing to repeat exploration and decisions, whether they end up as wins or losses. Some people start to get the pattern after a while, but they are seen as cynics. After all, the show must go on: "what if this or the next cycle is the one that leads us to tech nirvana?"

Software engineering for any non-trivial problem means a baseline level of essential complexity that isn't going away, no matter the tool, not even if we someday "code" directly from our minds in some almost-free way via parallel programming thought diffusion. That's because (1) depth and breadth of choice and (2) coordination and social factors, mostly but not solely stemming from (1), are the real bottlenecks.

Sure, accidental complexity can shrink, if you design in a way that's aligned with the tools, but even then, the gains are often overhyped. These kinds of "developer accelerators" (IDEs, low-code platforms, etc.) are always oversold in depth and scope, LLMs included.

The promise of the "10x engineer" is always there, but the reality is more mundane. For example, IDEs and LSPs are helpful, but not really transformative. To the point that people are being paid right now who don't use them at all, and they still deliver in an "economically justifiable" (to someone) way.

Today it's LLMs. Tomorrow it'll be LISP Machines v2.

replies(1): >>44362840 #
12. pjmlp ◴[] No.44362840[source]
I thought that was Python notebooks. :)
13. monkeyelite ◴[] No.44362913[source]
This is mostly self-inflicted though. We create complex deployments with the promise that the incremental savings will overtake the upfront costs, when they rarely do (never mind the hidden complexity costs).

So it seems AI will just let us stretch further and make more accidentally complex systems.

replies(1): >>44362989 #
14. skydhash ◴[] No.44362965[source]
> I've seen it written that the main reason managers tend to want to get rid of SWEs is because they don't understand how to interface with them

That’s because software is nebulous enough that you can get away with promising the moon to customers/boss, but in the next meeting, you’re given a reality check by the SWEs. And then you realize the mess you’ve thrown everyone into.

Managers know how to interface with SWEs well (people interface with professionals all the time). Most just hate going back to the engineers to get real answers when they fancy themselves product owners.

15. skydhash ◴[] No.44362981[source]
Modern development is more complex, not more complicated. We’re still using the same categories of tools. What’s changed is the tower of abstraction we put between ourselves and the problem.
16. rezonant ◴[] No.44362989{3}[source]
The value of automation ("complex deployments") is not only incremental cost savings (ie because you don't need to do the work over and over), but also the reduction or outright elimination of human error, which especially in the case of security-sensitive activities like deploying software on the Internet can be orders of magnitude more costly than the time it takes to automate it.
replies(1): >>44363035 #
17. daxfohl ◴[] No.44363027[source]
Yeah, I give it about two years until we get to "Hey AI, what should we do today?" "Hi, I've noticed an increase in users struggling with transactions across individual accounts that they own. It appears some aspect of multitenancy would be warmly received by a significant fraction of our userbase. I have compiled a report on the different approaches taken by medium and large tech companies in this regard, and created a summary of user feedback that I've found on each. Based on this, and with the nuance of our industry, current userbase, the future markets we want to explore, and the ability to fit it most naturally into our existing infrastructure, I have boiled it down to one of these three options. Here are detailed design docs for each, that includes all downstream services affected, all data schema changes, lists out any concerns about backwards compatibility, user interface nuances, and has all the new operational and adoption metrics that we will want to monitor. Please read these through and let me know which one to start, and if you have any questions or suggestions I'll be more than happy to take them. For the first option, I've already prepared a list of PRs that I'm ready to commit and deploy in the designated order, and have tested e2e in a test cluster of all affected services, and it is up and running in a test cluster currently if you would like to explore it. It will take me a couple hours to do the same with the other two options if you'd like. If I get the green light today, I can sequence the deployments so that they don't conflict with other projects and have it in production by the end of the week, along with communication and optional training to the users I feel would find the feature most useful. Of course any of this can be changed, postponed, or dropped if you have concerns, would like to take a different approach, or think the feature should not be pursued."
replies(1): >>44363466 #
18. monkeyelite ◴[] No.44363035{4}[source]
That is a benefit of automation. But it does not appear to correlate with tool complexity, nor to be the primary focus of commercial offerings.

E.g. the most complex deployments are not the ones that are least error-prone or require the least intervention.

replies(1): >>44363223 #
19. dehrmann ◴[] No.44363152[source]
Back when IE was king and IE6 was still 10% of users, I did frontend web work. I remember sitting next to our designer with multiple browsers open playing with pixel offsets to get the design as close as practically possible to the mockups for most users and good enough for every one else. This isn't something LLMs do without a model of what looks good.
replies(1): >>44363243 #
20. rezonant ◴[] No.44363223{5}[source]
What do you consider a complex deployment?
replies(1): >>44374081 #
21. pjerem ◴[] No.44363243{3}[source]
My current job involves exactly this (thankfully not on IE) and AI is, as you said, absolutely bad at it.

And I’m saying this as someone who kind of adopted AI pretty early for code and who learned how to prompt it.

The best way to make AI worth your time is to make it work towards a predictable output. TDD is really good for this: you write your test cases and you make the AI do the work.
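
A minimal sketch of what that loop looks like (pytest; the slugify_impl module and its slugify() function are made-up stand-ins, not anything from a real project):

    # test_slugify.py -- written by the human, BEFORE any prompting.
    # The tests are the spec; the model only gets to write slugify().
    import pytest
    from slugify_impl import slugify  # hypothetical module the AI fills in

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"

    def test_collapses_repeated_separators():
        assert slugify("a  --  b") == "a-b"

    def test_rejects_blank_input():
        with pytest.raises(ValueError):
            slugify("   ")

The loop is then mechanical: run pytest, paste the failures back to the model, repeat until green. The output is predictable because the target is executable.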

But when you want a visual result? It will have no feedback of any kind, and will always answer "Ok, I solved this" while making things worse. Even if the model is visual, giving it screenshots as feedback is useless too.

22. jayd16 ◴[] No.44363255[source]
Can AI fix it? Most of that complexity is from a need to stand out.
replies(1): >>44366047 #
23. mynti ◴[] No.44363426[source]
i think the difference is that now someone with no coding knowledge could start describing software and have the agent build that software iteratively. so for example, a mechanical engineer wants to build some simulation tool. you still need to define the requirements and understand what you want to do, but the work could be done by the agent, not a human programmer (and this is still the big if: if agents become good enough for this sort of work). i do not see that happening at the moment, but it still changes the dynamic. you are right that it is not a silver bullet and a lot of the complexity is impossible to get rid of. but i wonder if for a lot of use cases there will not be a software engineer in the loop. for bigger systems, for sure, but for a lot of smaller business software?
replies(3): >>44363473 #>>44363511 #>>44366086 #
24. achierius ◴[] No.44363466{3}[source]
Luckily, by that point it won't just be SWEs who'll be out of a job :)
replies(2): >>44365271 #>>44368675 #
25. ivan_gammel ◴[] No.44363473[source]
> for a lot of smaller business software?

Small businesses often understand the domain less, not more, because they cannot invest as much as big businesses in building expertise. They may achieve something within that limited understanding, but the outcome will limit their growth. Of course, AI can help with discovery, but it may overcomplicate things. Product discovery is the art of figuring out what to do without doing too much or too little, which AI has not mastered yet.

26. crvdgc ◴[] No.44363487[source]
I think the crux is that specification has been neglected since even before AI.

Stakeholders (client, managers) have been "vibe coding" all along. They send some vague descriptions and someone magically gives back a solution. Does the solution completely work? No one knows. It kinda works, but no one knows for sure.

Most of the time, it's actually the programmers' understanding of the domain that fills out the details (we all know what a correct form submission webpage looks like).

Now that the other end has become AI, it remains to be seen whether this can be replicated.

replies(4): >>44363550 #>>44363569 #>>44364898 #>>44367765 #
27. boxed ◴[] No.44363510[source]
> aren’t they just using AI to eliminate accidental complexity

After using Claude Code to vibe code some stuff, it seems to me that AI doesn't eliminate accidental complexity, it just creates more of it and takes away some of the pain of really bad APIs.

28. globular-toast ◴[] No.44363511[source]
A mechanical engineer has a job to do. They can't all spend their time yak-shaving with an AI agent building software that they then use to do their actual job. The whole point of building software is that it's more efficient to build it once and then use it many times. Why would "someone with no coding knowledge" do this when someone with coding knowledge could do it?
29. Traubenfuchs ◴[] No.44363550[source]
> we all know what a correct form submission webpage looks like

Obviously we don’t, as phone numbers can be split into up to 4 fields with conflicting varieties of validation, or just be one field.
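
To make it concrete, here are two defensible validators (a made-up Python sketch) that will each reject inputs the other would bless:

    import re

    # Style A: one free-form field, lenient. Strips formatting, then
    # checks the digit count against the E.164 maximum of 15 digits.
    def valid_phone_one_field(raw: str) -> bool:
        digits = re.sub(r"[^\d]", "", raw)
        return 7 <= len(digits) <= 15

    # Style B: four strict fields. A single string like "+49 30 1234567"
    # never even fits this shape, yet Style A accepts it happily.
    def valid_phone_four_fields(country: str, area: str,
                                local: str, ext: str) -> bool:
        return (country.isdigit() and 1 <= len(country) <= 3
                and area.isdigit() and 2 <= len(area) <= 5
                and local.isdigit() and 4 <= len(local) <= 8
                and (ext == "" or ext.isdigit()))

    # Both are "correct" specs; they just disagree about the same user.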

replies(1): >>44364121 #
30. bdangubic ◴[] No.44363569[source]
> we all know what a correct form submission webpage looks like

millions of forms around the web would like to have a word… :)

replies(1): >>44366614 #
31. MangoToupe ◴[] No.44363568[source]
> I've seen it written that the main reason managers tend to want to get rid of SWEs is because they don't understand how to interface with them.

SWEs are also about the most expensive kind of employee imaginable. I imagine that’s incentive enough.

32. hamstergene ◴[] No.44363707[source]
An easy answer for what's missing is that the industry isn't run by people who read "No Silver Bullet".

- Articles about tricky nature of tech debt aren't written by people who call the shots on whether the team can spend the entire next week on something that a customer can't see.

- Articles about systems architecture aren't written by people who decide how much each piece of work was significant for business.

- Books on methodologies are an optional read for engineers, not essential for their job, and adoption happens only when they push it upwards.

Most of the buzz about AI replacing coding is coming from people who don't see a difference between generating a working MVP of an app, and evolving an app codebase for a decade and fixing ancient poor design choices in a spaceship which is already in flight.

I've even seen a manager who proposed allocating 33% time every day across 3 projects, and engineers had to push back. That this doesn't work is old, common knowledge, yet apparently still not a job requirement in 2025. Even though organizing and structuring project time allocation is a management competency and not an engineering skill, it is de facto entirely up to engineers to make sure it's done right. The same managers are now proud to demonstrate their "customer focus" by proposing to ask AI to resolve all the tech debt and write all the missing tests so that engineers can focus on business requests, and the same engineers have to figure out how to explain why it didn't just work when they tried.

To talk about complexity is to repeat the same old mistake. I am sure most engineers already know this, and I have yet to see an experienced engineer who believes their job will be taken by simple prompts. The problem we should be talking about should be titled something like,

"Software Engineering Has Poor Management Problem, and AI is Amplifying It"

33. a_c ◴[] No.44363719[source]
You first start using a hammer well, and then internalize when to use a hammer. Most are now getting excited about the new shiny hammer. Few know a hammer is not for everything. Some will never know. It has always been the case. Microservices, NoSQL, Kubernetes, crypto, web3, now LLMs. They range from useful some of the time to completely useless. But they surely appeared to be a panacea at some time to some people.
34. Tade0 ◴[] No.44364121{3}[source]
Also the format varies depending on region and type of connection.
replies(1): >>44364887 #
35. al_borland ◴[] No.44364280[source]
I think AI also introduces a new form of accidental complexity. When using Copilot, I often find myself telling it something to the effect of, “this seems needlessly complex and confusing, is there a better way to do this, or is this level of complexity justified?” It almost always apologizes and comes back with a solution I find much more pleasing, though on rare occasions it does justify its solution as a form of the original accidental complexity you mention. The more we lean on these tools, the more this accidental complexity from the model itself compounds.
36. agos ◴[] No.44364282[source]
You're spot on. Building software is first and foremost making a team of people understand a problem. The fact that part of it is solved by writing code is almost a byproduct of that understanding, and certainly does not come before it.

On this topic I suggest that everybody who works in our industry read Peter Naur's "Programming as Theory Building"[1] and a nice corollary from Baldur Bjarnason: "Theory-building and why employee churn is lethal to software companies"[2]

[1]: https://pages.cs.wisc.edu/~remzi/Naur.pdf [2]: https://www.baldurbjarnason.com/2022/theory-building/

37. ozim ◴[] No.44364296[source]
There is a lot of truth in No Silver Bullet and I had the same idea in my mind.

The downside is that there is much more non-essential busy work, which is what gave a lot of people their jobs, and now loads of those people will lose them.

People who do work on the real essential complexity of systems are few and far between. People who say things like "proper professionals will always have work" are utter assholes, mostly thinking that they are those proper professionals.

In reality, AI will be like an F1 racing team no longer needing pit crews and keeping only drivers. How many drivers are there? Around 20, so 10 teams each with 2 drivers. Yet each team has 300-1000 people who do all the other things.

At the corporate level, let's say 1 person working on essential complexity requires 10-20 people doing all kinds of non-essential stuff that will be taken over by AI; or, more realistically, instead of 10-20 people that person will need a headcount of 5.

That is still 15 people out of a job. Could those people take over some essential complexity at a different company, or in a different area of the same company? Some could, but it also depends on whether they would want to. So those people will be pushed around or end up jobless, bitter, whatever.

That is not a great future coming.

38. austin-cheney ◴[] No.44364302[source]
> What am I missing here?

A terrifyingly large percentage of people employed to write software cannot write software. Not even a little. These are the people that can be easily replaced.

In my prior line of work I wrote JavaScript for a living. There were people doing amazing, jaw-droppingly astounding things. Those people were almost exclusively hobbyists. At work most people struggled to do little more than copy/paste in a struggle just to put text on screen. Sadly, that is not an exaggeration.

Some people did what they considered to be advanced engineering against these colossal frameworks, but the result was just the same: little more than copy/paste and a struggle to put text on screen. Yes, they might be solving for advanced complexity, but it is almost always completely unnecessary and frequently related to code vanity.

Virtually none of those people could write original applications, measure anything, write documentation, or do just about anything else practical.

> So how do we reconcile this?

Alienate your workforce by setting high standards, like a bar exam to become a lawyer. Fire those people that fail to rise to the occasion. Moving forward employ people who cannot meet the high standards only as juniors or apprentices, so that the next generation of developers have the opportunity to learn the craft without rewarding failure.

replies(1): >>44365955 #
39. wqaatwt ◴[] No.44364404[source]
> Using an LLM solves that problem, because you don't need a nerd to operate it.

Until you do. LLMs are great at building prototypes, but at some point, if you don’t know what you’re doing, you’ll end up with an unmaintainable mess and you won’t have anyone to fix it.

I mean LLMs perhaps are capable of doing that too but they still need to be guided by people who are capable of understanding their output.

Being able to reduce the number of engineers that you need by e.g. 80% would still be a great deal though.

40. js8 ◴[] No.44364456[source]
I agree on the essential complexity, but I think there is a missing piece: we don't really have good mental tools for operating on (composing) SW systems under uncertainty. Something like fuzzy logic?

I think there is a promise of that in AI and LLMs (but I remain skeptical, because I think it needs a formal, not ad hoc, definition). The idea is that you can build the systems using fuzzy human language and things will somehow work out.
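
As a toy illustration of what I mean, the classic Zadeh operators compose degrees of truth in [0, 1] instead of booleans (just a sketch, not a claim that this is the right formalism):

    # Zadeh's fuzzy operators: truth values live in [0, 1].
    def f_and(a: float, b: float) -> float:
        return min(a, b)

    def f_or(a: float, b: float) -> float:
        return max(a, b)

    def f_not(a: float) -> float:
        return 1.0 - a

    # "request looks like spam" AND "account is new"
    spam, new_account = 0.8, 0.6
    print(f_and(spam, new_account))         # 0.6
    print(f_or(f_not(spam), new_account))   # 0.6

The open question is whether anything like this scales from toy predicates to composing whole systems, which is why I'd want a formal treatment rather than vibes.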

41. illiac786 ◴[] No.44364887{4}[source]
Or you are in Germany, where phone numbers have variable length. They even have the word “number street” to mean “all phone numbers starting with 0123456”, for example. It’s not a block, it’s a street, which can branch out into blocks of different lengths. Completely insane.
42. burnt-resistor ◴[] No.44364898[source]
What they want: Computer: Make the room a wild west bar from 1900.

What they have: An undergraduate intern who is a former used car salesperson used to BSing their way through life.

replies(1): >>44367783 #
43. conartist6 ◴[] No.44365037[source]
Nothing at all. The people who could and should understand this point are indisposed towards criticizing the AI narrative.

They've started a business selling the exact opposite message to everyone who will buy it.

44. conartist6 ◴[] No.44365101[source]
So in other words it's helping you race to the median. It can give your business the advantage of moving always in a direction that's average and uninteresting. Nobody will need to lead anymore, so nobody will have the skill of a leader anymore.

It sounds to me like a corporate equivalent of a drug-fueled rager. They want everything good now while deferring all the expenses to tomorrow.

replies(1): >>44381385 #
45. junek ◴[] No.44365185{3}[source]
Please step away from the lathe
46. weatherlite ◴[] No.44365271{4}[source]
You're right, but I think we will be among the first to take the hit; we don't have the regulatory protections many doctors, accountants and lawyers have.
47. spwa4 ◴[] No.44365955[source]
> Alienate your workforce by setting high standards, like a bar exam to become a lawyer ...

This would work if the world was willing to pay for software. So at the very least you'd have to outlaw the ad-based business model, or do what lawyers do: things that are absolutely critical for software development (think "program needs to be approved or it won't execute", that deep) that normal people aren't allowed ... and unable ... to do.

replies(1): >>44366316 #
48. AstroBen ◴[] No.44365998[source]
> it should increase opportunities by lowering the cost of iteration and scaling our impact

This is very much up for debate and the weakest point of the argument, I think. If developers are now 2-3x as productive (remains to be seen...), what will happen to the job market?

I suppose it depends on how much potential software is out there that's not currently viable due to cost? How much would scope be increased on current products?

49. spwa4 ◴[] No.44366047{3}[source]
... and approvals. The fact that the vast majority of companies just don't have infrastructure. The only thing that made a dent in that is VMware.
50. spwa4 ◴[] No.44366086[source]
This totally breaks down due to complexity. This "works", except that AI destroys earlier functionality when adding new things. Also, there is a fairly low complexity threshold beyond which AI just blows up and won't work anymore, not even a little bit.

(and certainly Google's newest AI model is actually a step backwards on this front)

Add to that that nothing changes for difficult work. Writing a driver still requires hardware knowledge ... of the actual hardware. Which AI doesn't have ... and doesn't make any attempt to acquire.

I've seen articles suggesting that this part can actually be fixed. If you create a loop where the AI is forced to slowly build up low-level knowledge, it can actually find this sort of thing. But you need 10 expert AI researchers to do anything like this.

(frankly I don't see how this part could be fixed by better AI models)

What's in danger is the coding job consisting of "write me a form with these 20 fields". The 100% repetitive ones.

51. austin-cheney ◴[] No.44366316{3}[source]
From a purely economic perspective it's all the same whether you are paying for products or paying for people, and whether your revenue comes from media or sales. Those cost/profit-first concerns are entirely the wrong questions to ask though, because they limit the available routes of revenue generation.

The only purpose of software is automation. All cost factors should derive from that one source of truth. As a result the only valid concerns should be:

* Lowering liabilities

* Increasing capabilities

From a business perspective that means not paying money for unintended harms, and simultaneously either taking market share from the competition or inventing new markets. If your people aren't capable of writing software, or your only options are the free choices provided to you, then you are at the mercy of catastrophic opportunity costs that even the smallest players can sprint past.

52. ivandenysov ◴[] No.44366614{3}[source]
We all know. But we all have a different vision of a ‘correct’ form
replies(1): >>44377198 #
53. stego-tech ◴[] No.44367765[source]
This is spot-on, and a comment I wish I could pin for others to see.

GenAI is such a boon at present because it occasionally delivers acceptable mediocrity to PMs and stakeholders who will accept said mediocrity because they have no real clue what they (or their customers) actually want. It’s a system of roles and output that delivers based on patterns in incomprehensibly large data sets, provided to humans who cannot hope to validate that the information is accurate or legitimate (and not just random patterns in a large enough data set), but passed along as gospel from digital machinegods. To a withering empire obsessed with nostalgia and whose institutions are crumbling beneath it, GenAI appears as a savior; to those confident in their position in the world order, it is merely a thin tool in a larger toolbox, to be used toward greater ends rather than middling output.

Those who understand the specification problems are in a position to capitalize off such monumental shifts, while those blindly chasing get-rich-quick schemes, grifts, and fads will be left behind.

replies(1): >>44383279 #
54. stego-tech ◴[] No.44367783{3}[source]
They want the Holodeck, but not the post-scarcity society that makes it possible.
replies(1): >>44372396 #
55. daxfohl ◴[] No.44368675{4}[source]
Yeah, PM, data science, compliance, accounting...all largely automatable. You just need a few directors to call the shots on big risks. But even that goes away at some point because in a few months it'll have implemented everything you were thinking about doing for the next ten years and it simply runs out of stuff for humans to do.

What happens after that, I have no idea.

Seems like OpenAI (or whoever wins) could easily just start taking over whole industries at that point, or at least those that are mostly tech based, since it can replicate anything they can do, but cheaper. By that point, probably the only tech jobs left will be building safeguards so that AI doesn't destroy the planet.

Which sounds niche, but conceivably, could be a real, thriving industry. Once AI outruns us, there'll probably be a huge catastrophe at some point, after which we'll realize we need to "dumb down" AI in order to preserve our own species. It will serve almost as a physical resource, or maybe like a giant nuclear reactor, where we mine it as needed but don't let it run unfettered. Coordinating that balance to extract maximal economic growth without blowing everything up could end up being the primary function of human intelligence in the AI age.

Whether something like that can be sustained, in a world with ten billion different opinions on how to do so, remains to be seen.

56. HumblyTossed ◴[] No.44368818[source]
> people don’t talk more about essential complexity

That doesn't sell stock. Firing high paying employees sells a lot of stock.

57. chipsrafferty ◴[] No.44371963[source]
> lowering the cost of iteration

I don't think anyone thinks engineers are going away. But companies will hire fewer employees to do the same amount of work, and they won't pay engineers more.

58. andrekandre ◴[] No.44372396{4}[source]
of course, because there's no money in it...

just look at software: cost of duplication is basically zero and here we are paying subscriptions (to huge profits) every month for it

switching to a true post-scarcity economy is gonna take more than just technology i think...

59. monkeyelite ◴[] No.44374081{6}[source]
1. The degree to which it can prevent me from doing my job if it's not working.

2. The level of expertise and skill required to set it up and maintain it.

60. bdangubic ◴[] No.44377198{4}[source]
with all due respect, what does this mean? we can either "all know", or we can all not have a clue and merely have our own "visions" of the correct form; the two are complete opposites.

decades of building garbage, barely-working forms is proof that we just do not know (much like we (generally) don't know how to center a div on a page, so once per year, without fail, a top story on HN is "how to center a div" :)).

replies(1): >>44389738 #
61. butlike ◴[] No.44381385{3}[source]
Isn't that business though? Buying power is power, selling is weakness
62. whattheheckheck ◴[] No.44383279{3}[source]
Can you expand into the "those who understand the specification problems are in a position to capitalize off such shifts?"

Is this just business domain knowledge and good communication?

replies(1): >>44392234 #
63. ivandenysov ◴[] No.44389738{5}[source]
We all think we know
64. hyperadvanced ◴[] No.44392234{4}[source]
Always has been meme