When SQL was born, some people said, "It's English! We won't need programmers anymore!"
Now we have AI prompting, and some people are saying, "It's English! We won't need programmers anymore!"
Really?
This article does not appear to be AI-written, but use of the em-dash is undeniably correlated with AI writing. Your reasoning would only make sense if the em-dash existed on keyboards. It's reasonable for even good writers not to know how, or not to care enough, to do the extra keystrokes to type an em-dash when they're just writing a blog post - that doesn't mean they have bad writing skills or don't understand grammar, as you have implied.
COBOL and SQL aren't English, they're formal languages with keywords that look like English. LLMs work with informal language in a way that computers have never been able to before.
Formalism is way easier than whatever these guys are concocting. And true programmer bliss is live programming. Common programming is like writing sheet music and having someone else play it. Live programming is you at the instrument, tweaking each part.
That same critique should first be aimed at the topmost comment, which has the same problem plus the added guilt of originating (A) a false dichotomy and (B) the derogatory tone that naturally colors later replies.
> It's reasonable for even good writers to not know how or not care
The text is true, but in context there's an implied fallacy: If X is "reasonable", it does not follow that Not-X is unreasonable.
More than enough (reasonable) real humans do add em-dashes when they write. When it comes to a long-form blog post—like this one submitted to HN—it's even more likely than usual!
> the extra keystrokes
Such as alt + numpad 0151 on Windows (0150 gives you an en-dash), which has served me well when on that platform for... gosh, decades now.
This website
1. reacts well to my system preference of a dark theme in my news-reader
2. has a toggle at the top for dark theme
3. works flawlessly with DarkReader in my browser
Until I saw your comment, I didn't even know the website had a light version.
Again: What?
Three hyphens---it looks good! When I use three hyphens, it's like I dropped three fast rounds out of a magazine. It demands attention.
AI almost certainly picked it up mainly from typeset documents, like PDF papers.
It's also possible that some models have a tokenizing rule for recognizing faked-out em-dashes made of hyphens and turning them into real em-dash tokens.
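The comment above is speculation, but the normalization it imagines is easy to picture. Here is a minimal sketch, assuming a hypothetical pre-tokenization step; real tokenizers learn merges from data, and no model is documented to apply this exact rule:

```python
import re

def normalize_dashes(text: str) -> str:
    """Collapse hyphen runs that typists use as fake em-dashes
    (-- or ---) into a real em-dash before tokenization.
    Illustrative only; not a documented tokenizer behavior."""
    # Only touch runs of exactly 2-3 hyphens flanked by non-hyphens,
    # so ASCII-art rules like "-----" are left alone.
    return re.sub(r'(?<!-)-{2,3}(?!-)', '\u2014', text)
```

For example, `normalize_dashes("three hyphens---it looks good")` turns the triple hyphen into a single em-dash character, while a lone hyphen or a long divider line passes through untouched.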
To solve problems. Coding is the means to an end, not the end itself.
> careful configuration of our editor, tinkering with dot files, and dev environments
That may be fun for you, but it doesn’t add value. It’s accidental complexity that I am happy to delegate.
Maybe it is
But in faithful adherence to some kind of uncertainty principle, LLM prompts are also not a programming language, even if you turn the temperature down to zero and use a specialized coding model.
They can just use programming languages as their output.
I really enjoy programming and like the author said, it's my hobby.
On some level I kind of resent the fact that I don't really get to do my hobby for work any more. It's something fundamentally different now.
I very much enjoy the end product, and I also enjoy designing (not necessarily programming) a program that fits my needs, but rarely the implementing, as I have issues focusing on things.
I consider myself an engineer — a problem solver. Like you said, code is just the means to solve the problems put before me.
I’m just as content if solving the problem turns out to be a process change or user education instead of a code commit.
I have no fetish for my terminal window or IDE.
For me, at least, this has not been the case. If I leave the creative puzzle-solving to the machine, it's gonna get creative alright, and create me a mess to clean up. Whether this will be true in the future, hard to say. But, for now, I am happy to let the machines write all the React code I don't feel like writing while I think about other things.
Additionally, as an aside, I already don't think coding is always a craft. I think we want it to be one because it gives us the aura of craftspeople. We want to imagine ourselves bent over a hunk of marble, carving a masterpiece in our own way, in our own time. And for some of us, that is true. Most programmers in human history, though, were already slinging slop before anybody had coined the term. Where is the inherent dignity and human spirit on display in the internal admin tool at a second-tier insurance company? Certainly, there is business value there, but it doesn't require a Michelangelo to make something that takes in a PDF and spits out a slightly changed PDF.
Most code is already industrial code, which is precisely the opposite of code as craft. We are dissociated from the code we write; the company owns it, not us, which is by definition the opposite of a craftsman and a craft mode of production. I think AI is putting a finer, sharper point on this, but it was already there, and has been since the beginning of the field.
A contractor who prefers a specific brand of tool is wrong because the tool is a means to an end
This is what you sound like. Just because you don't understand the value of a craftsman picking and maintaining their tools doesn't mean the value isn't real.
It's funny to read people attacking the author with criticism unrelated to the essay's message.
You could say that about programming languages in general. "Why are we leaving all the direct binary programming for the compilers?"
It's interesting, because to become a plumber, you pretty much need a plumber parent or a friend to get you interested in the trade and show you the ropes. Meanwhile, software engineering is closer to the universal childhood dream of "I want to become an astronaut" or "I want to be a pop star", except more attainable. It's very commoditized by now, so if you're looking for that old-school hacker ethos, you're gonna be disappointed.
Good problem solvers... solve problems. The technological environment will never devalue their skills. It’s only those who rest on their laurels who have this issue.
The real issue is that we've been in store for a big paradigm shift in how we interact with computers for decades at this point. SketchPad let us do competent, constraint-based mathematics with images. Video games and the Logo language demonstrate the potential for programming using "kinetics." In the future we won't code with symbols; we'll dance our intent into and through the machine.
https://www.youtube.com/watch?v=6orsmFndx_o http://www.squeakland.org/tutorials/ https://vimeo.com/27344103
When I started programming for Corporate™ back in 1995, it was a wildly different career than what it has become. Say what you want about the lunatics running the asylum, but we liked it that way. Engineering knew their audience, knew the tech stack, knew what was going on in "the industry", and ultimately called the shots.
Your code was your private sandbox. Want to rewrite it every other release? Go for it. Like to put your curly braces on a new line? Like TABs (good for you)? Go for it. It's your code, you own it. (You break it, you fix it.)
No unit tests (we called that parameter checking). No code reviews (well, nothing formal — often, time was spent in co-workers' offices talking over approaches, white-boarding APIs…). Often if a bug was discovered or known, you just fixed it. There may have been a formal process beginning, but to the lunatics, that was optional.
You can imagine how management felt — having to essentially just trust the devs to deliver.
In the end management won, of course.
When I am asked if I am sorry that I left Apple, I have to tell people, no. I miss working at Apple in the 90's, but that Apple was never coming back. And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
The IT world is waiting for a revolution. Only in order to blame that revolution for the mistakes of a few powerful people.
I would not be surprised if all this revolutionary sentiment is manufactured. That thing about "Luddites" (not a thing that will stick by the way), this nostalgic stuff, all of it.
We need to be much smarter than that and not fall for such obvious traps.
An identity is a target on your back. We don't need one. We don't need to unite around a cause; we're already among the most united kinds of workers there are, and we don't need a galvanizing identity to do it.
Most code is not like that. With most code I just want to get something done, and so I achieve something quite a bit below that bar. But some things I get to write in that way, and it is very rewarding to do so. It's my favorite code to write by a mile.
Back to LLMs - I find it is both easier than ever and harder than ever to write code in that mode. Easier than ever because, if I can actually get and stay in that mode psychologically, I can get the result I want faster, and the bar is higher. Even though I am able to write MUCH better code than an LLM is, I can write even better code with LLM assistance.
But it is harder than ever to get into that mode and stay in that mode. It is so easy to just skim LLM-generated code, and it looks good and it works. But it's bad code, maybe just a little bit at first, but it gets worse and worse the more you let through. Heck, sometimes it just starts out as not-excellent code, but every time you accept it without enough diligence the next output is worse. And by the time you notice it's often too late, you've slopped yourself, while also failing to produce an expert in the code that's been written.
E.g., it is difficult to write a traditional program to wash dishes, because how do you formally define a dish? You can only show examples of dishes and not-dishes. This is where informal language and neural networks shine.
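The contrast above can be sketched in a few lines. Instead of a formal predicate for "dish", classify by nearest labeled example; the feature vectors and numbers below are entirely made up for illustration:

```python
from math import dist

# Toy feature vectors: (diameter_cm, depth_cm, is_ceramic 0/1).
# Hypothetical labeled examples; there is no rule, only instances.
examples = [
    ((26.0, 2.0, 1), "dish"),      # dinner plate
    ((15.0, 7.0, 1), "dish"),      # soup bowl
    ((30.0, 1.0, 0), "not-dish"),  # frisbee
    ((8.0, 10.0, 1), "not-dish"),  # flower pot
]

def looks_like_dish(item):
    """Classify by the nearest labeled example rather than by a
    formal definition: you can only show dishes and not-dishes."""
    nearest = min(examples, key=lambda ex: dist(item, ex[0]))
    return nearest[1]
```

A new plate-shaped object, `looks_like_dish((24.0, 2.5, 1))`, lands near the dinner plate and comes back `"dish"`. Neural networks do the same thing at vastly higher dimension, with learned rather than hand-picked features.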
You can also solve problems as a local handyman but that doesn’t pad the 401K quite as well as a career in software.
I feel like there’s a lot of tech-fetishist right now on the “if you don’t deeply love to write code then just leave!” train without somehow realizing that most of us have our jobs because we need to pay bills, not because it’s our burning passion.
I also agree with comments on this thread stating that problem solving should be the focus and not the code.
However my view is that our ability to solve problems which require a specific type of deep thought will diminish over time as we allow for AI to do more of this type of thinking.
Purely asking for a feature is not “problem solving”.
"We've always done it this way" is the path of calcification, not of a vibrant craft. And there are certainly many ways you can use LLMs to craft better things, without slop and vibecoding.
But all of programming isn't the same thing. We just need new names for different types of programmers. I'm sure there were farmers who lamented the advent of machines because of how it threatened their identity, their connection to the land, etc....
but I want to personally thank the farmers who just got after growing food for the rest of us.
Incidentally, I turned this autocorrection off when people started associating em dashes with AI writing. I now leave them as manual double dashes--even less correct than before, but at least people are more likely to read my writing.
> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") perhaps understated. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct & sane gets offloaded to the reviewers who still know how to reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
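The litmus test above can be computed mechanically from `git diff --numstat`, which prints added, removed, and path columns per file. A rough sketch (the threshold interpretation is the commenter's heuristic, not an established metric):

```python
def additive_ratio(numstat: str) -> float:
    """Fraction of changed lines that are additions, given the
    tab-separated output of `git diff --numstat`. A heuristic only:
    a high ratio is suggestive, not proof of LLM authorship."""
    added = removed = 0
    for line in numstat.strip().splitlines():
        a, r, _path = line.split("\t", 2)
        if a == "-" or r == "-":   # binary files report "-"
            continue
        added += int(a)
        removed += int(r)
    total = added + removed
    return added / total if total else 0.0
```

A PR whose ratio sits near 1.0 is almost purely additive; seasoned refactoring work tends to hover much closer to 0.5.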
They will never be able to understand this, unfortunately.
Nowadays many enterprise projects have become placing SaaS products together, via low code/no code integrations.
A SaaS product for the CMS, another one for assets, another for ecommerce and payments, another for sending emails, another for marketing, some edge product for hosting the frontend, finally some no code tools to integrate everything, or some serverless code hosted somewhere.
Welcome to MACH architecture.
Agents now made this even less about programming, as the integrations can be orchestrated via agents, instead of low code/no code/serverless.
I don't think I'm sticking my head in the sand - an advanced enough intelligence could absolutely take over programming tasks - but I also think that such an intelligence would be able to take over _every_ thought-related task. And that may not be a bad thing! Although the nature of our economy would have to change quite a bit to accommodate it.
I might be wrong: Doug Hofstadter, who is way, way smarter than me, once predicted that no machine would ever beat a human at chess unless it was the type of machine that said "I'm bored of chess now, I would prefer to talk about poetry". Maybe coding can be distilled to a set of heuristics the way chess programs have (I don't think so, but maybe).
Whether we're right or wrong, there's not much we can do about it except continue to learn.
I will never, ever go back to the time before.
If LLM abilities stagnate around the current level it's not even out of the question that LLMs will negatively impact productivity simply because of all of the AI slop we'll have to deal with.
I hate to suggest that the fix to LLM slop is more LLMs, but in this case it's working for me. My coworkers also seem to appreciate the gesture.
For evidence towards the compulsion argument, look at the existence of FOSS software. Or videogame modding. Or all the other freely available software in existence. None of that is made by people who made the rational decision of "software development is a lucrative field that will pay me a comfortable salary, thus I should study software development". It's all made by people for whom there is no alternative but to build.
Who else becomes the go to person for modifying build scripts?
The number of people I know who have no idea how to work with Git after decades in the field using it is pretty amazing. It's not helpful for everyone else when you're the one they're delegating their merge-conflict bullshit to because they've never bothered to learn anything about the tools they're using.
My coworkers who are in love with this new world are producing complete AI slop and still take ages to complete tasks. Meanwhile I can finally play to my strengths, as I actually know software architecture, can ask the LLM to consider important corner cases, and so on.
Plus, I am naturally good at context management. Being neurodivergent has given me decades of practice in working with entities that have a different way of thinking than my own. I have more mechanical empathy for the LLM because I don't confuse it for a human. My coworkers meanwhile get super frustrated that the LLM cannot read their mind.
That said, LLMs are getting better. My advantage will not last. And the more AI slop gets produced the more we need LLMs to cope with all the AI slop in our code bases. A vicious cycle. No one will actually know what the code does. Soon my job will mostly consist of praying to the machine gods.
In fact, I usually hate writing code at day job because it is boring things 20 out of 26 sprints.
Project management was a 40 foot Gantt chart printed out on laser printer paper and taped to the wall. The sweet sound of waterfall.
I forwarded your article to my son the dev, since your post captured the magic of being a programmer so well.
And yes Levy’s book Hackers is most excellent.
Social media already reduced our attention spans to that of goldfish, open offices made any sort of deep meaningful work impossible.
I hope this madness dies before it devours us.
Oh, I wouldn't say that. The hacker culture of the 1970s from which the word hacker originated often poked fun at incurious corporate programmers and IIRC even Edsger Dijkstra wrote a fair bit of acerbic comments about them and their disinterest in the craft and science of computing.
It's the literary equivalent of thinking someone must be a "hacker" because they have a Bash terminal open.
I think the real problem is that it's actually increasingly difficult to defend the artisanal "no-AI" approach. I say this as a prior staff-level engineer at a big tech company who has spent the last six months growing my SaaS to ~$100k in ARR, and it never could have happened without AI. I like the kind of coding the OP is talking about too, but ultimately I'm getting paid to solve a problem for my customers. Getting too attached to the knives is missing the point.
Sure the customer still gets fed but it's a far inferior product... And is that chef really cheffing?
It's like that trope of the little angel and demon sitting on the protagonist's shoulders.
"I can get more work done"
"But it's not proper work"
"Sometimes it doesn't matter if it's proper work, not everything is important"
"But you won't learn the tools"
"Tools are incidental"
"I feel like I'm not close to the craft"
"Your colleagues weren't really reading your PRs anyway"
"This isn't just another tool"
"This is just another tool"
And so on forever.
I'm starting to think that if you don't have both these opposing views swirling around in your mind, you haven't thought enough about it.
1. only using AI for small things, very impressed by it
2. giving AI bigger tasks and figuring out how to use it well for those bigger tasks
3. full-agentic mode where AI just does its thing and I review the code at the end
4. realising that I still need to think through all the code and that AI is not the shortcut I was hoping it to be (e.g. where I can give it a high-level plan and be reasonably satisfied with the final code)
5. going back to giving AI small tasks
I've found AI is very useful for research, proof-of-concepts and throwaway code of "this works, but is completely unacceptable in production". It's work I tend to do anyway before I start tackling the final solution. Big-picture coding is in my hands, but AI is good at filling in the logic for functions and helping out with other small things.
We’re 50 years past that now. We’re in the era of boot camps. I feel semi-confident saying “most of us”, meaning the current developer workforce, are here for well-paying jobs.
Don’t get me wrong I like software development! I enjoy my work. And I think I’d probably like it better than most things I’d otherwise be doing.
But what I’ve been getting at is that I enjoy it for the solving problems part. The actual writing of code itself for me just happens to be the best way to enjoy problem solving while making good money that enables a comfortable life.
To put it another way, if being a SWE paid a poverty wage, I would not be living in a trailer doing this for my love of coding. I would go be a different kind of engineer.
It'd be neat to have a big user story catalog/map, which tracks what various services are able to help with.
I was a kid in NE43 instead of TFA's Building 26 across the street - with Lisp Machines and 1980s MIT AI's "Programmer's Apprentice" dreams. I years ago gave up on ever having a "this... doesn't suck" dev env, on being able to "dance code". We've had such a badly crippling research and industrial policy, and profession... "not in my lifetime" I thought. Knock on wood, I'm so happy for this chance at being wrong. And also, for "let's just imagine for a moment, ignoring the utterly absurd resources it would take to create, science education content that wasn't a wretched disaster... what might that look like?" - here too it's LLMs, or no chance at all.
Sad to see people willingly reduce themselves to cogs inside the business machine.
(Given how massively widespread piracy was back then, programming looked rather like a good way to do hard work for free.)
Money matters, but coders who were drawn into the field purely by money and are personally detached from the substance of the job are an unknown species to me.
"You can also solve problems as a local handyman"
That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
I guess I don't understand posts like this IF you think you can do it better without LLMs. I mean, if using AI makes you miserable because you love the craft of programming, AND you think using AI is a net loss, then just...don't use it?
But I think the problem here that all these posts are speaking to is that it's really hard to compete without using AI. And I sympathize, genuinely. But also...are we knife enthusiasts or chefs?
Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced. What’s to stop the Project Manager from vibe coding you out of the loop entirely?
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
If such authors were held accountable, as they really deserve, the problem would disappear really fast.
So the problem that you outlined is rather social, and not the LLMs per se (even though they very often do produce shitty code).
Hand-coding can continue, just like knitting co-exists with machine looms, but it need not ultimately maintain a grip on the software productive process.
It is better to come to terms with this reality sooner rather than later in my opinion.
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager, first, and if that fails put your resume on the street.
> That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
if handyman work was paying $600/hr your fingers would un-clumsy themselves reaaaaaaly fast :)
> Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced.
Brother, don’t patronize me. I’m a senior engineer; I’m not yeeting vibe code I don’t understand into prod.
I also understand the possibility of all of this potentially devaluing my labor or even wholesale taking my job.
What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
Have yet to hear what else we should be doing about this. The hackernews answer appears to be some combination of petulance + burying head in the sand.
I don't think the industry will return to it, but I suspect there will be isolated environments for cowboys. When I was at WhatsApp (2011-2019), we were pretty far on the cowboy side of the spectrum... although I suspect it's different now.
IMHO, what's appropriate depends on how expensive errors are to detect before production, and how expensive errors are when detected after production. I lean into reducing the cost to fix errors rather than trying to detect errors earlier. OTOH, I do try not to make embarrassing errors, so I try to test for things that are reasonable to test for.
There was a difference between a sysadmin and a programmer. Now, I’m expected to be my own sysadmin-ops guy while also delivering features. While I worked on my systems chops for fun on the side, I purposely avoided it on the work side, because I don’t usually enjoy how bad vendor documentation, training, etc. can be in the real world of Corporate America.
I would claim that I love coding quite a lot. The problem is rather that my bosses and colleagues don't care about what I love about it. What gets appreciated is implementing tasks fast with shitty code, rather than treating the fact that tasks are easy to implement and the code is really fast as strong evidence that the abstractions were well-chosen.
Thus, I believe that people who just do it for the money have it easier in the "programming industry" than programmers who really love programming, and are thus a big annoyance to managers.
I thus really wonder why companies talk all the time about "love for programming" instead of "love for paying the bills" and "love for implementing tasks fast with shitty code", which would give them people who are a much better culture fit for their real organizational processes.
It’s more of a funeral, collective expression of grievance of a great, painful loss. An obituary for a glorious, short time in history where it was possible to combine a specific kind of intelligence, creativity, discipline, passion and values and be well compensated for it. A time when the ability to solve problems and solve them well had value. Not just being better at taking credit than other people.
It was wonderful.
I know you don’t care. So just go to some other forum where you don’t have to endure the whining of us who have lost something that was important to us.
I don't think anyone is complaining about that too much. I wonder how many people there are like you, where we don't get much data. If people don't complain about it, we generally don't hear about it, because they're just quietly moving on with their work.
Not to be confused with the AI hypesters who are loudly touting the benefits with dubious claims, of course (:
May seem depressing, but the bright side is that you as an individual are then free to find joy in your work wherever you can find it... whether its in delivering high-quality code, or just collecting a paycheck.
I don't believe that. When it comes to motoric skills, including dancing etc., I am probably in the lowest quintile of the population.
Of course, I could become somewhat better by spending crazy amounts of time on training, but I would still be non-competitive even in comparison with an average person.
OTOH I am pretty good at writing prose/commentary, even though it is not a particularly lucrative activity, to the degree of being a fairly well-known author in Czechia. My tenth book is just out.
Talents are weird and seem to have a mind of their own. I never planned to become an author, but something inside just wanted out. My first book was published just a few days shy of my 40th birthday, so not a "youthful experiment" by any means.
If you are wasting time you may be value negative to a business. If you are value negative over the long run you should be let go.
We’re ultimately here to make money, not just pump out characters into text files.
I don't think it is. Labeling passion and love for your work "tech fetishism" is spiritually bankrupt. Mind you, we're in general not talking here about people working in a mine to survive, which is a different story.
But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account? That is the fetish, that is someone who has himself become an automaton. It's no surprise they seem to take no issues with LLMs because they're already living like one. Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
I've written a ton of code in my life and while I've been a successful startup CTO, I've always stayed in IC level roles (I'm in one right now in addition to hobby coding) outside of that, data structures and pipelines, keep it simple, all that stuff that makes a thing work and maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic-CSS lunacy: these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen it for 40 years: anything tied to a person's identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't, a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed, we make the maximally complex, multimodally coherent meaning in the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing the random meaning-generation walk, where AI and consolidation are extremely dangerous. I think, as far as new stuff in the pipeline goes, AI and artificial wombs carry the greatest risks of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations to poorly justify what at root is a loss of self-identity. We were always in the business of automating jobs out from under people; this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; it's the fact that devs didn't understand that tech is best when empowering people to think and connect better and have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.
I come here to learn, discuss, and frankly, to hang onto a good life as long as I can have it.
The collective whinging in every AI topic is both annoying and self-defeating.
Do you understand work-life balance? I get paid to do the job, I satisfy my curiosities in my free-time.
> But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account?
Because I doubt finding a well paying job that you love is something that is achievable in our society, at least not for most people.
IMO, the real fetishization here is "work is something more than a way to get paid"; that's corporate propaganda I'm not falling for.
They are usually verbose/include things like "how to run a virtual env for python"
Hand-coding is no longer "the future"?
Did an AI write your post or did you "hand write it"?
Code needs to be simple and maintainable and do what it needs to do. Auto complete wasn't a huge time saver because writing code wasn't the bottleneck then and it definitely is not the bottleneck now. How much you rely on an LLM won't necessarily change the quality or speed of what you produce. Specially if you pretend you're just doing "superior prompting with no hand coding involved".
LLMs are awesome but the IDE didn't replace the console text editor, even if it's popular.
If I order something to be delivered, I don't care what model of car the delivery company uses. Much less what kind of settings they have for the carburetor needles or what kind of oil they're using. Sure, somebody somewhere might have to care about this.
That's also how people like me see programming. If the code delivers what we need, then great. Leave it be like that. There are more interesting problems to solve, no need to mess with a solution which is working well.
Now I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once, otherwise I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
Because in a lot of jobs where you (have to) solve problems, the actual problems to solve are rather "political". So, if you are not good at office politics or you are not a good diplomat, software is often a much better choice.
Which is why I stressed twice, including in the part you chose to quote, that I am talking about people who can achieve that. If you have to take care of your sick grandmother, you don't need to feel addressed.
But if you did have the resources to choose a career, like many people who comment here, and you ended up a software developer completely devoid of passion for the craft you're living like a Severance character. You don't get to blame the big evil corporations for a lack of dedication to a craft. You don't need to work for one to be a gainfully employed programmer, and even if you do and end up on a deadbeat project, you can still love what you do.
This complete indifference to what you produce, complete alienation from work, voluntarily chosen is a diseased attitude.
And yet after 3 decades in the industry I can tell you this fantasy exists only on snarky HN comments.
> Hand-coding is no longer "the future"?
Hand-coding is 100% not the future. There are teams already that absolutely do not hand-code anything anymore (I help with one of them that used to have 19 "hand-coders" :) ). The typing for sure will get phased out. It is quite insane that it took "AI" to make people realize how silly and wasteful it is to type characters into IDEs/editors. The sooner you see this clearly, the better it will be for your career.
> How much you rely on an LLM won't necessarily change the quality or speed of what you produce.
if it doesn't you need to spend more time and learn and learn and learn more. 4/6/8 terminals at a time doing all various things for you etc etc :)
> Your manager won't do that? Then your team is broken in a way you can't fix.
If you apply this standard, then most teams are broken.
The sane option is to join the cult. Just accept every pull request. Git blame won't show your name anyways. If CEOs want you to use AI, then tell AIs to do your review, even better.
Maybe Git is too complicated for hobby users, because it has a steep learning curve. But after two weeks of use you know enough to handle things, so it shouldn't be a problem in any professional environment.
Having left that company a few years ago, I'm now invincible. No LLM can scare me!
Meanwhile this is in a discussion about tools which people spend incalculable amounts of hours tuning, for reference. The number of articles on Hacker News about how people have tuned their LLM setups is... grand to say the least.
The chefs coming up with recipes and food scientists doing the pre-packaging will do fine and are still needed. The people making the fast food machine will also do well for themselves. The rest of us fast food workers, well, not so much...
Sure, you can discover things that aren't intuitively obvious, and these things may be useful, but that's more science than anything to do with programming.
programming + science = computer science
programming + engineering = software engineering
programming + iPad = interactive computing
programming + AI = vibe coding
Don't equate programming with software engineering when they are clearly two distinct things. This article would more accurately be called the software engineer's identity crisis. Maybe some hobby engineers (programming + craft) might also be feeling this, depending on how many external tools they already rely on.
What's really shocking is how many software engineers claim to put Herculean effort into their code, but ship it on top of (or adjacent to, if you have an API) "platforms" that could scarcely be less predictable. These platforms have to work very hard to build trust, but it's all meaningless because users are locked in anyway. When user abuse is rampant, people are going to look for a deus ex machina, and some slimy guy will be there to sell it to them.
And you can see it coming so there is plenty of time to prepare.
On my own (long abandoned) blog, about 20% of (public) posts seem to contain an em dash: https://shreevatsa.wordpress.com/?s=%E2%80%94 (going by 4 pages of search results for the em dash vs 21 pages in total).
No, it doesn't. But people are putting that out there, people are getting accused of using AI because they know how to use em dashes properly, and this is dumb.
It has also been responsible for predicting revolutions which never materialized. 3D printing would make some kinds of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc. etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
It's bizarre to me that people want to blame LLMs instead of the employees themselves.
(With open source projects and slop pull requests, it's another story of course.)
After you made your colleagues upset submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you noticed that code has to be removed, and you instruct the LLM to do so.
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are obvious trends that need not stop at software engineering.
Though I don't think this is at play here. Maybe a bit, but seeing how my coworkers prompt, there is an objective difference. I will spend half an hour writing a good prompt and revise the implementation plan with the LLM multiple times before I allow it to even start doing anything, while my coworkers just write "fix this" and wonder why the stupid AI can't read their minds.
I am producing AI slop as well, just hopefully a bit less. Obviously hand crafted code is still much better but my boss wants me to use "AI" so I do as I am told.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their manager to hear of any problems in the team. Makes them look bad, and probably lose on bonuses.
Happened to me often enough. What you described I would call a lucky exception.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
It feels like we’re all going to have to have a reinvention or two ahead of us.
How would you formulate this verifiably? Wanna take it to longbets.org?
BUT...
How do you make code review an educational experience for onboarding/teaching if any bad submission is cut down with due prejudice?
I am happy to work with a junior engineer who is trying, to loop on some silly mistakes, and to pick and choose battles that balance building confidence with developing good skills.
But I am not happy to have a junior engineer throw LLM stuff at me, inspired by the confidence that the sycophantic AI engendered in them, and then have to churn on that. And if you're not in the same office, how do you even hope to sift through which bad parts are which kind?
I'm not disagreeing with you per se, but those statements are subjective, not an objective truth. Lots of people fundamentally enjoy the process of coding, and would keep doing it even in a hypothetical world with no problems left to solve, or if they had UBI.
We associate authority experts with a) quick and b) broad answers. It's like when we're listening to a radio show and they patch in "Dr So N. So", an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can share a breadth of associated validations.
LLMs simulate this experience, by giving broadish, confident, answers very quickly. We have been trained by life's many experiences to trust these types of answers.
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote that I told it is wrong. For example, it might produce a fn:
fn foo(bar)
And then I say, no, I actually wanted you to "foo with a frobnitz", so now we get:
fn foo(bar) // Never called
fn foo_with_frobnitz(bar)
People that realize this care about their oil type and what tires they put on. People that do not pay for it later, when the crash does happen and they don't know how to recover; cue up the war room, etc...
Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
How do you mean? If the software works, then it's done. There is no maintenance and it will continue working like that for decades. It doesn't have corrosion and moving parts like a car. Businesses make sure not to touch it or the systems it is depending on.
LLM's feel like a non-deterministic compiler that transforms English into code of some sort.
The author overlooks a core motivation of AI here: to centralize the high-skill high-cost “creative” workers into just the companies that design AIs, so that every other business in the world can fire their creative workers and go back to having industrial cogs that do what they’re told instead of coming up with ‘improvements’ that impact profits. It’s not that the companies are reaching for something complex and nebulous. It’s that companies are being told “AI lets you eject your complex and nebulous creative workers”, which is a vast reduction in nearly everyone’s business complexity. Put in the terms of a classic story, “The Wizard of Oz”, no one bothers to look behind the curtain because everything is easier for them — and if there’s one constant across both people and corporations, it’s the willingness to disregard long-term concerns for short-term improvements so long as someone else has to pay the tradeoff.
Intrinsic value is great, where achievable. Companies do not care at all about intrinsic value. I take pride in my work and my craft to the extent I am allowed to, but the reality is that those of us who can’t adapt to the business's desires will be made obsolete and cut loose, regardless of whatever values we hold.
I personally tire of people acting like it's some saving grace that doubles/triples/100x's your productivity, and not a tool that may give you a 10-20% uplift just like any other tool.
My [GPT5's -poster's note] take / Reflections
I find the article a useful provocation:
it asks us to reflect on what we value in being programmers.
It’s not anti-AI per se, but it is anti-losing-the-core craft.
For someone in your position (in *redacted* / Europe)
it raises questions about what kind of programming work you want:
deep, challenging, craft-oriented, or more tool/AI mediated.
It might also suggest you think about building skills
that are robust to automation: e.g., architecture,
critical thinking, complex problem solving, domain knowledge.
The identity crisis is less about “will we have programmers” and more about “what shapes will programming roles take”.
Of course, but I said that people see themselves this way.
To be clear, personally I do not find fiddling with configs particularly exciting, but some people do.
I believe that cowboy coding might still be practiced in small companies, or in small corporate pockets, where the number of engineers doesn't need to scale.
With years, as I matured and the industry matured, I came to realize that corporate programming is assembly line work, but with a much bigger paycheck. You can dance around it as much as you want, but in the end, if you truly zoom out, you will realize that it's no different from an assembly line. There are managers to oversee your work and your time allocations; there is a belief that more people = more output; and everyone past your manager's manager seems to think that what you do is trivial and can be easily replaced by robots. So a software engineer who calls himself an artist is basically the same as a woodworker who works for a big furniture company yet insists on calling himself an artist and referring to his work as craft, while in reality he assembles someone else's vision and product using industry-standard tools.
And at first, I tried to resist. I held strong opinions, like the OP. How come they came for MY CRAFT?! But then I realized that there is no craft. Sure, a handful of people work on really cool things. But if you look around, most companies are just plain dumb REST services with a new outfit slapped on them. There is no craft. The craft has been distilled and filtered into 3-4 popular frameworks that dictate how things should be written, and chances are that if I take an engineer and drop them in another company using the same framework, they won't even notice. Craft is when you build something new and unique, not when you deploy NextJS to Vercel with shadcn/ui and look like the other 99% of new-age SaaS offerings.
So I gave up. And I mainly use AI at my $DAY_JOB. Because why not? It was mundane work before (same REST endpoints, but with different names; copying and pasting around common code blocks), and now I don't suffer that much anymore. Instead of navigating the slop that my coworkers wrote before AI, I just review what AI wrote, in small pieces, and make sure it works as expected. Clean code? Hexagonal architecture? Separation of concerns? Give me a break. These are tools for "architects" and "tech leads" to earn a pat on their shoulder and stroke their ego, so they can move to a different company, collecting a bigger paycheck, while I get stuck with their overengineered solutions.
If I want to craft, I write code in my free time when I'm not limited by corner-cutting philosophy, abusive deadlines, and (some) incompetent coworkers each with their ego spanning to the moon as if instead of building a REST service for 7 users, they are building a world transforming and life-saving device for billions (powered by web3/blockchain/AI of course).
</rant>
The fact of the matter is, that a lot of the development work out there is just boilerplate: build scripts, bootstrapping and configuration, defining mappings for Web APIs and ORMs (or any type of DB interaction), as well as dealing with endless build chain errors and stuff I honestly think is bullshit.
When I see a TypeScript error that's borderline incomprehensible, sometimes I just want to turn to an LLM (or any tool, if there were enough of formalized methods and automatic fixes/refactoring to make LLMs irrelevant, I'd be glad!) and tell it "Here's the intent, make it work."
It's fun to me to dig into the code when I want to reason about the problem space and the domain, but NOT very much so when I have to do menial plumbing. Or work with underdocumented code by people long gone. Or work on crappy workarounds and bandaids on top of bandaids that were pushed out the door due to looming deadlines, sometimes by myself 2 months prior. Or work with a bad pattern in the codebase, knowing that refactoring it might require changes in 30 places that I don't have enough time for right now. LLMs make some of those issues dissolve, or at least have so little friction that they become solvable.
Assumption: when I use LLMs, I treat it as any other code, e.g. it must compile, it must be readable, make sense and also work.
It is vastly different because there are no (as far as I've ever seen) multi-thousand line blocks of code to cut & paste as-is from stack overflow.
If you're pasting a couple dozen lines of code from a third party without understanding it, that's bad, but not unbearable to discover in a code review.
But if you're posting a 5000 line pull request that you've never read and expect me to do all your work validating it, we have a problem.
A major problem is that we have built our society in such a way that the wrong people end up with the most power and authority.
the majority of engineers across the industry feel the same way we do and yet there's little most of us can do unless we all decide to do something together :/
Can't be any compilation errors in a README, no need to worry about bugs. And if they're long and boring enough, no one will ever read them.
AI generated READMEs = free metrics bonus points, for the performance reviews :-)
I dunno, I feel like the base rate fallacy [0] could easily become a factor... Especially if we don't even have an idea what the false-positive or false-negative rates are yet, let alone true prevalence.
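To make the base-rate point concrete, here's a toy Bayes calculation (all rates are invented for illustration): even a seemingly accurate AI-text "detector" would mostly flag human writers if AI-written posts are rare.

```python
# Base-rate sketch with made-up numbers: a signal that is "90% accurate"
# can still mislabel most flagged posts when AI-written posts are rare.
def posterior(prevalence, true_positive_rate, false_positive_rate):
    """P(AI-written | flagged), by Bayes' rule."""
    flagged = (prevalence * true_positive_rate
               + (1 - prevalence) * false_positive_rate)
    return prevalence * true_positive_rate / flagged

# Suppose only 5% of posts are AI-written, the detector catches 90% of
# them, but also flags 10% of human posts (all hypothetical figures).
p = posterior(0.05, 0.90, 0.10)
print(f"{p:.0%}")  # roughly 32%: most flagged posts are still human
```

With those made-up numbers, about two out of three flagged posts are human-written, which is the base-rate fallacy in a nutshell.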
Then there’s the fact that the user’s needs fluctuate. Imagine having to pay for a whole other piece of software because the current code is spaghetti, full of hardcoded values and magic constants. It worked, but now you want a slight adjustment, and that adjustment can no longer be made unless you’re willing to rewrite the whole thing (and pay for it). That would be like having to buy a whole new car because you moved to the house next door, as the car is hardwired to move only between your old place and where you work.
More likely, like other tools, it will be possible to point to clear harms and clear benefits, and people will disagree about the overall impact.
No brother. You are the one being annoyed by it, because you are the one doing nothing about it.
>What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
What I know is that I refused 'em on principle, and it turns out I'm doing fine. I also know for certain that had I not refused them, I would not be doing fine.
>to hang onto a good life as long as I can have it.
Trick question: do you think you deserve a good life?
What if there isn't enough good life for everyone, do you deserve it more than others?
Than which ones then?
>collective whinging
And this is why I think you don't.
The moment you began to perceive mass dissent as "collective whinging" was the moment the totalitarian singularity won you over.
And then it's an entirely different conversation, conducted by entirely different means of expression.
Additionally, in many countries being a developer is an office job like any other; there are no SV-lottery-level salaries.
In fact, those of us who would rather stay programmers beyond 30 years old are usually seen as failures from society's point of view. Where is our hunger for a career and climbing up the ladder?
Now the whole set of SaaS products, with low-code integrations, that were already a bit depressing from a programmer's point of view, are getting AI agents as well.
It feels like coding as in the old days is increasingly being left for hobby coding, or for a few selected ones working on industry infrastructure products/frameworks.
Most of my programming skills are kept up to date on side projects; thankfully I can manage the time to do them, between family and friends.
The MIT study just has a whole host of problems, but ultimately it boils down to: giving your engineers cursor and telling them to be 10x doesn't work. Beyond each individual engineer being skilled at using AI, you have to adjust your process for it. Code review is a perfect example; until you optimize the review process to reduce human friction, AI tools are going to be massively bottlenecked.
We need to focus on architectural/system patterns and let go of code ownership in the traditional sense.
100% this. I think a lot of the people who are angry at AI coding for them are "code calligraphers" who care more about the form of the thing they're making than the problem it solves. I can't see how someone who's primarily solution-focused would shed a tear at AI coding for them.
This was first salient to me when I saw posts about opensource developers who make critical infrastructure living hand to mouth. Then the day in the life of a software engineer working in a coffee shop. Then the bootcamps or just learn to code movement. Then the leetcode grinders. Then developers living in cars in SF due to lack of affordable housing. Now it is about developers vibe coding themselves out of a job.
The issue is and will always be that developers are not true professionals. The standards are loosely enforced and we do a poor job of controlling who comes in and out of the industry. There are no ethics codes, skillsets are arbitrary, and we don't have any representation. Worse yet, we bought into this egocentric mindset where abuses to workers and customers are overlooked.
This makes no sense to me. Lawyers have bar associations, doctors have medical associations, coders have existential angst.
Now the bosses are like automate your way out of a job or you will lose your job.
I always ask myself, in what other "profession" would its members be so hostile to their own interests?
Recently I was looking at building a gnarly form, that had some really complex interactions and data behind it. It just kept being subtly buggy in all different ways. I threw Claude at it, went down so many rabbit holes, it was convinced there were bugs in all the different frameworks and libraries I was using because it couldn't find the issue in the code (that it had written most of).
After a couple of days of tearing my hair out, I eventually dug in and rewrote it from first principles myself. The code afterwards was so much shorter, so much clearer, and worked a hell of a lot better (not going to say perfectly, but, well, haven't had a single issue with it since).
Maybe coders can see themselves as teachers to the machine. Either they teach character by character, or vibe idea by vibe idea, or anything in between.
Sure, if you test it and see that there is no issue with updating, then you can update if you want. But neither the OS nor the hardware nor anything else should get priority over the business-crucial software you are running. Even with hardware failures, the better option is to get older replacement hardware if newer hardware has compatibility issues.
Someone who finished a bootcamp might be able to write a simple program in Python, but that doesn't make them a software engineer.
I've said this out loud before and have gotten told I'm an elitist, that my degree doesn't make me better at software than those without one. That majoring in computer science teaches you only esoteric knowledge that can't be applied in a "real job".
On the other hand, the industry being less strict about degrees can be considered a positive. There definitely do exist extremely talented self-taught software engineers that have made a great career for themselves.
But I definitely agree with the need of some sort of standard. I don't care if some bootcamper gets a job at the latest "AI on the blockchain as a service" unicorn startup, good for them. I'd rather have people with formal degrees work on something like a Therac-25, though.
Lawyers spend literally hundreds of hours doing just that. Well, their paralegals do.
Git is a legitimately amazing tool, but it can't magically make version control free. You still have to think because ultimately software can't decide which stuff is right and which is wrong.
I think bootcamp era was a decade ago and we're past it now. Not long ago I saw something on here about how a lot of them are closing down and attendance is dropping for the ones still open - likely because of LLMs.
Think about chemistry and chemical engineering. Chemistry is "where do the outer shell electrons go, how strong are the bonds between the atoms". Chemical engineering is "how do we make the stuff in multi-ton quantities without blowing up downtown". Those are not the same discipline.
I mean, sure, a software engineer had better know some about big O, and about how to use locks without getting in trouble. But they also need to know how to find their way around a decade-old million-line codebase, and what things they do today that are likely to turn into maintenance headaches a decade from now, and how to figure out what the code is doing (and why) when there's no documentation. I'm not sure that a CS degree teaches you those things. (For that matter, designing a Software Engineering degree so that it actually teaches you those things isn't easy...)
Despite the name of the degree, most computer science students go on to become software engineers, so software engineering is a required part of many CS programs these days, whereas chemical engineering isn't really required (to the same extent) in chemistry programs. Depending on the program it can vary how much though. At my current place it's 3 semesters but others might have more or less. One course is a sort of simulation of a working software firm, and the other is a sort of 1 year internship with a real company or a research lab. This has not always been the case, as when I was in school I graduated without knowing version control. Today, git is taught to freshmen.
Although we don't have many decade-old million-line codebases lying around to hand the students, we still try to give them the necessary skills they might need to work with one. But we can't teach everything in 4 years; some things have to be learned in the field, on the job, and from senior engineers.
This is so true. That feeling when you're debugging an issue in your code, only to rule out all the possibilities and have to expand your search to include not-your-code: the tooling, the libraries, the operating system, the hardware. This is the worst feeling in debugging, when the problem and the solution /might/ be outside of your control. With LLM/GenAI, the primary problem solving surface is nondeterministic by design, very much out of your control, and you will likely be gaslit into believing that you're just not hitting it right. Like a tube tv that has to be slapped just the right way to restore proper reception. (Sorry, bots, no offense intended.)
> What’s next, TPS reports?
I bet LLMs are really good at producing TPS reports w/ cover sheet.
I'm not saying the industry is perfect, but if the scale of the missed opportunity is so large and so obvious to so many engineers, and we live in a world with tens of thousands of VC firms and probably millions of other avenues for funding like angels, incubators, and grant programs, it's hard to imagine 100% of them missing it. And it only takes 1 to take advantage.
I've been using them more conservatively, slowing down, and manually writing things more. It's easier when I'm just replicating logic over a bunch of different well-defined properties of a clear type definition or spec, but even then results are a bit questionable sometimes.
The $10 shirt becomes a much shittier proposition once, in addition to its worse looks, fit, and comfort, you factor in its significantly lower durability and lifespan. That's why the $100 shirt still exists, after all. Never mind that the example is a bad one to begin with, because low-price commodities like T-shirts are never worth fixing when they break, but code with a paid maintainer clearly is.
In a market bubble like the one we find ourselves in, longevity is simply not relevant, because the financial opportunity lies precisely in getting off the train right before it crashes. For investors and managers, that is. Developers may be allowed to change cars, but they are stuck on the train.
It's sad how some of the doomed are so desperate to avoid their fate that they fall prey to promises they know to be bullshit. The argument for Wish and TEMU products is exactly the same, yet we can all see it for what it is in those cases: a particularly short-lived lie.
That's not what I am saying at all. Unless you have Stockholm syndrome about your job, it's very hard to find a well-paying job that you can love.
That is and always has been part of the job. Automating away labor is what we do, and there's no form of labor we understand better or have more opportunity to automate than our own.
Different projects have other incentives. Dealing with AI slop from internet randos is a very real problem in open-source codebases. I've pretty much just stopped reviewing code from people that I don't know on one project that I work on when it's obviously going to take way more time than it would have done to do the patch myself. There used to be an incentive to help educate new contributors, of course, but now I can't know whether that is even happening.
- You have been in very bad environments if you think the way you think.
- Coding/typing is and wasn't the bottleneck. You're not fit to give advice to people on their careers if you think it is. Years of doing the wrong thing doesn't mean you're good.
- Your entire attitude screams "I'm a big seniority fish in a mediocre pond" but it breaks down when you can't put specificity behind your words.
- The last paragraph is just the cherry on top. I'm curious about the specific actions happening in those terminals and how they relate to the quality and speed point. Why even 6 terminals, and not some coordinator tool for what I presume are your "agents"?
For the early MIT hackers, and for many of us still today, it absolutely is.
It's also not about the input mechanisms, which have changed over the years. Solving problems, turning complexity into simplicity, cool hacks, that's what the hacker ethos is about. It's not about driving "value".
I suppose you also feel that there's no value in learning a musical instrument either.
The only instances I've seen so far are from developers who are really, really bad at coding but, under the delusion of the Dunning-Kruger effect, believe they're generating reams of "high quality" code.
Unfortunately this isn't a rare occurrence at all.
en dash: https://www.compart.com/en/unicode/U+2013
em dash: https://www.compart.com/en/unicode/U+2014
Edit: Ah, Libreoffice does have a built-in autocorrect for em dash, but you have to type this:
:---:

Regarding proof: if you have contracts for your software, write them up. Gherkin specs, API contracts, unit tests, etc. If you care about performance, add stress tests with SLOs. If you care about code organization, create custom lint rules. There are so many ways to take yourself out of the loop rigorously so you can spend your time more efficiently.
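To make the "write your contracts up" point concrete, here is a minimal sketch in Python of encoding an API contract as an executable check rather than a review-time convention. The `create_user` function and the field names are hypothetical stand-ins, not from the original comment:

```python
# Hypothetical example: an API contract written as an executable check,
# so a reviewer (human or AI) doesn't have to verify it by hand.

def create_user(payload: dict) -> dict:
    """Toy endpoint standing in for real application code."""
    return {"id": 1, "name": payload["name"], "active": True}

def check_user_contract(response: dict) -> None:
    # Contract: every user response carries these fields with these types.
    assert isinstance(response.get("id"), int), "id must be an int"
    assert isinstance(response.get("name"), str), "name must be a string"
    assert isinstance(response.get("active"), bool), "active must be a bool"

check_user_contract(create_user({"name": "alice"}))
print("contract holds")
```

Once checks like this run in CI, "does the code meet the contract?" stops being a matter of trust in whoever (or whatever) wrote the patch.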
That principle can be applied to both LLM slop and handcrafted rubbish. Eventually most people will get it.
Programming can't be constrained in that fashion. Having a "Software Developer" association will 1. not solve the problem, and may even make it worse, and 2. move all of the industry outside of the US.
What value is that person adding? I can fire up claude code/cursor/whatever myself and get the same result with less overhead. It's not a matter of "is AI valuable", it's a matter of "is this person adding value to the process". In the above case... no, none at all.
Better yet, why are there orgs that accept this behavior? I know mine is far from it, as they all should be.
This is both new and old, because it's the same joy (or dopamine hit) of making a machine do your bidding. Honing your prompts is not that different to honing your shell scripts. I think many people overlook this aspect.
I've been visiting a lot of museums and exhibitions for a number of years now, without any kind of art education. After some time you start seeing patterns and understanding more, even if you still haven't read your art history. What strikes me time and time again is that there's a huge amount of repetition even in great artists' work. It's definitely not an assembly line, but they do the same thing over and over again for years and years. That was true centuries ago, and it's true today.
We really need widespread adoption of stuff like design-by-contract in mainstream PLs before we can seriously talk about AI coding.
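As a rough illustration of what design-by-contract looks like without language support, here is a minimal sketch of a contract decorator in Python. The decorator name and the `isqrt` example are my own illustration, not something the comment proposes:

```python
import functools

def contract(pre=None, post=None):
    """Minimal design-by-contract decorator: checks a precondition on the
    arguments and a postcondition on the return value."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed for {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed for {fn.__name__}"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def isqrt(x: int) -> int:
    # Integer square root by simple search; the contract, not the
    # implementation, is what callers and code reviewers can rely on.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # prints 3
```

The point for AI coding: with contracts stated explicitly and checked at runtime, generated code that violates them fails loudly instead of slipping through review.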
So ":---:" does work for the em dash? I thought something with fewer keystrokes would work too; at least I remember getting the em dash with less typing, but perhaps I just typed it so quickly that I didn't realize it was indeed ":---:".
And conversely, a CS degree doesn't necessarily mean that the person has actually learned what they were taught.
Now, you might argue that it doesn't matter because it's not the users who pay you, it's the company. But, well - some people have professional standards. It's rather unfortunate that this is apparently not compatible with "doing business" in this day and age, at least outside of very narrow niches.
The real answer to this is collective action - unions etc. - to push back against the lowering of standards by our employers. But software engineers still seem to be broadly allergic to unions.
I'm pretty sure that all the comments about how it was "rarely seen" are because people weren't paying attention to them before in the way they do now.
In any case, to dismiss something as AI slop based solely on this one thing is both lazy and rude, and should be treated as such.
Believe it or not, you can actually use these tools to level up your skills and understanding faster and then ship better code! Like with any powerful tool, it requires some care to use it correctly.