A balance sheet always balances (assets = liabilities + equity), so in amount you always have as much on the liability side as on the asset side. Despite the negative connotation of the word, some liabilities are good (earnings[1] are the best; equity and long-term debt at a low interest rate are nice too), but some are bad.
And it's the same for code: churning out more code to do the same thing is akin to issuing too many shares, and having brittle code that needs constant intervention to keep working is akin to high-interest debt.
[1] yes, your company's earnings belong on the right side of its balance sheet.
Well, there is some dimension pertaining to how well the code is understood by the employees of Companies A & B.
The theoretical advantage of a smaller codebase is of scant practical value if that codebase cannot be maintained.
And maybe this point is captured by the "very similar companies" statement.
Debt is proportional to the time it takes to understand, change, and extend the codebase.
Lines of code map imperfectly to tech debt. And if this metric has been minimized at the expense of readability, that almost certainly makes the debt worse.
So I think the better argument here is that "lack of theory is debt"; ironically, then, perhaps the shorter codebase is the debt in the LLM analogy, because LLM usage minimizes theory building. But this assumes AI doesn't continue to progress, and that its ability to build a theory of the project (and communicate it correctly to the engineer) remains constrained.
The bit that is a challenge with the depreciation view of things is that if the code isn't sufficiently maintained (size outstrips developers), then the cost of fixing the problem increases disproportionately with time; it's like paying interest on the outstanding depreciation debt. So I understand why people use "debt", but it's not quite right.
Company A uses a good framework that allows underlings to produce, for their middle managers, reports which those managers use to demonstrate that their portion of the company is providing value. Upper management is well pleased, and refrains from laying off those productive portions.
Company B has no framework, and underlings cannot produce good reports for middle managers. Upper management fires at random, and great lamentations go up across the land.
More seriously: You can spend money to make money in the software world, and making a good abstraction is well worth the effort. There might just be a reason we use OSes these days instead of writing every application to sit directly on hardware. That might be the result of some actual thought.
I wonder how many servers Company A and Company B are running.
A few years ago, I tried that again with a remote team of 20 coders. I failed; I couldn't keep up with the barrage of pull requests.
Today, pair programming with Claude Code and GPT feels more like the latter.
I think there is an opportunity here for smart refactoring, but it needs a larger context window. I tried this on some legacy code with Cursor and Claude Opus 4.1, but a million tokens is not enough. I dunno; maybe translate between a private and a shared LLM. Has anyone tried this?
In addition, LLMs recognize "meaning" mostly as they would in prose, i.e., they infer the code's behaviour more from the names of the variables and functions than from the actual behaviour. When everything is named randomly, an LLM is far worse at recognizing what the code is doing.
This means that when the code is already confusing, with naming that doesn't match behaviour, the LLM is more likely to tell you the story the names suggest rather than the story of the actual behaviour.
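A hypothetical Rust sketch of that failure mode: the function below is named like a sort, but the body only deduplicates. A reader (or a model) narrating from the name alone will tell the wrong story.

    use std::collections::HashSet;

    // Hypothetical example: the name promises a sort, but the body only
    // removes duplicates; nothing in here ever reorders anything.
    fn sort_items(items: &mut Vec<i32>) {
        let mut seen = HashSet::new();
        // retain() keeps the first occurrence of each value, drops the rest
        items.retain(|x| seen.insert(*x));
    }

    fn main() {
        let mut v = vec![3, 1, 3, 2, 1];
        sort_items(&mut v);
        println!("{:?}", v); // prints [3, 1, 2]: deduplicated, never sorted
    }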
A burden, a liability, a drag. But not debt.
It may be even more so with AI, since models have a fixed context window and every token counts.
You can make a very real decision right now - spend a lot of time making 10,000 lines of good code that you know well, or spend VERY LITTLE time (maybe 1-2% as long) making 20,000 lines of bad code that does the same thing and that you don't know well at all.
Personally, I'm still trying to find the middle ground, because if you want to continue building past that point, I think you've actually lost that time in the first case.
Other than that, clean code that is easy to read and understand is way more important.
And I do wear these other hats sometimes. I think nothing of scripting a useful utility or cranking something out in R or VBA for a presentation. But when it comes to production code, I'll spend a lot of time trying to think of ways to reduce the amount of code required.
But it's two completely different philosophies regarding code, and unfortunately in some organizations AI is starting to blur the lines.
If you think it’s phoning it in, tell them to study upstream and downstream consumers, consider edge and corner cases and what assertions any tests make about the code in question, and redo their analysis.
This does have the implication baked in that AI will generate more code than people would to do the same thing. Seeing as it has language features not everyone knows about in its training data, alongside a bunch of algorithms and programming paradigms, sometimes the opposite will be true: the human code will be longer and therefore carry more debt.
Of course, others have also pointed out that you'd probably want to look at how complex or simple the code is, not just how long it is, but that's beside the point I'm making.
The lack of knowledge about your own codebase will only go deeper and deeper until the debt is insurmountable and the organization can't function at all because it has no knowledgeable engineers.
There is only computer-generated code cobbled together into some kind of business solution.
Personally, I have only used AI to write actual code for Bash and Python scripts that are self-contained. In my case, self-contained means they are interfaced with via the command line, so their boundaries are very well defined.
I have never returned to look at any of the code.
I would never use it to generate domain code for my codebase, because then I'd have to code review it anyway. I mean, if I have an agentic AI solving an issue and generating a PR, great: I can review that and give it feedback on how to change the code before it's accepted.
Unless I can either throw the code away or review it for maintainability rather than correctness, I have no need for a tool that writes my code for me.
Oh, unless the AI can be the product owner and understand the financial ramifications of not doing its job correctly. But then I'd worry that its solution is to not have a product at all, by reducing the users to ash.
Before that, reminds me of when people talked about monkey patching RoR.
Seems like the general trend in programming since the beginning: when asked to choose among "cheap, fast, good", the winners are always cheap and fast. Regular people have a hard time conceptualizing something they cannot see.
1. Just because you use AI to write code does not imply there is more code. You can write concise code with AI, you just have to set yourself (and the AI / tools) up for it.
2. The opposite of debt is not value.
I agree that less code is (usually) better.
When it comes to AI, folks are far too quick to use models and tools without properly understanding how to get the outcomes they want, then make blanket statements (or complain) when things don't go the direction they want.
Not all tools are created equal; some are far more inclined to create an over-engineered mess (Kiro, Cursor). With agentic coding tools like Cline or Claude Code, plus the right agent tools, rules, and workflows, you can achieve high-quality outcomes.
Recently a colleague and I wrote a tiny Linux init system that is used in production at several orgs. It's under 600 lines of Rust, standard library only, no dependencies, and it boots in under 1 second. Turns out you do not need all the risk and disk I/O of systemd and a million lines of C to run an appliance-like Linux server.
Seriously, if you write it yourself you end up with a lot less code and systems you can understand top to bottom.
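Not our actual code, but a minimal sketch of the idea, assuming a single hypothetical service at /usr/bin/app: as PID 1 you spawn what you need, then sit in a reap loop and restart the service if it dies. A real init also needs mounts, signal handling, and clean shutdown on top of this.

    use std::process::Command;

    // waitpid(2), declared directly so the build stays dependency-free
    // (the C library is linked into ordinary Rust binaries anyway).
    extern "C" {
        fn waitpid(pid: i32, status: *mut i32, options: i32) -> i32;
    }

    // Hypothetical service path; a real init would read this from config.
    const SERVICE: &str = "/usr/bin/app";

    fn spawn_service() -> i32 {
        Command::new(SERVICE)
            .spawn()
            .expect("failed to start service")
            .id() as i32
    }

    fn main() {
        let mut service_pid = spawn_service();
        loop {
            let mut status = 0;
            // PID 1 inherits every orphaned process; reap whatever exits.
            let pid = unsafe { waitpid(-1, &mut status, 0) };
            if pid == service_pid {
                // The service died: restart it rather than wedge the box.
                service_pid = spawn_service();
            }
            // Any other pid was an orphan we just reaped; nothing more to do.
        }
    }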
On the other hand, a lot of people do think about all sorts of almost irrelevant stuff like formal verification.
The thing that bugs me about formal verification is the fact that it's extremely wasteful and pointless when you haven't even thought about the answer to the first question.
People wrongly assume that all code engineers produce is good code that serves a worthwhile purpose. This is very wrong. Most coders invent unnecessary abstractions all the time, and those create more work.
A lot of work in software engineering adds negative value. Sometimes it adds both negative and positive value so it's hard to tell.
Fewer lines of code are better if functionality is constant. But that doesn’t make code debt. By that logic, factories, employees, equipment, processes, and policies would also be debt — since fewer of them would likewise be better if revenue stayed constant.
Code is fundamentally an asset: it generates revenue, scales with near-zero marginal cost, and embodies intellectual property. Like any asset, it carries liabilities. Well-designed and reliable code behaves like a productive factory; buggy or hard-to-maintain code behaves like debt, draining resources instead of producing returns.
Is longer code more “debt” if it’s more readable and more verbose? I could save lines of code by nesting ternary operators, shortening variable names, using obscure language-specific features, or stuffing complicated expressions into one-liners (there's a sketch of this below).
Is longer code more “debt” if it’s more extensible and flexible for future growth and change?
Is longer code more “debt” if it has more test coverage? This is especially relevant to the article because AI is generally great at assisting with writing tests especially when the alternative is engineers not doing it.
We can’t just say “all else being equal” because that’s not how life works.
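To make the readability question concrete, here's a hypothetical Rust pair: the same HTTP-status classifier written once as a line-count-minimizing one-liner, and once the way you'd actually want to read it during an incident.

    // Same behaviour, two shapes. The short one "wins" on line count.
    fn classify_short(c: u16) -> &'static str { if c < 300 { "ok" } else if c < 400 { "redirect" } else if c < 500 { "client error" } else { "server error" } }

    // More lines, identical behaviour, far easier to scan and extend.
    fn classify(code: u16) -> &'static str {
        match code {
            0..=299 => "ok",
            300..=399 => "redirect",
            400..=499 => "client error",
            _ => "server error",
        }
    }

Which of those two carries more "debt"? Not the longer one.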
Code is an asset. It is the product of software companies. Having more assets certainly increases complexity, but this is almost definitionally true. Imagine saying “the US interstate highway system is debt, because it’s complex and difficult to maintain.” The premise is true, but the conclusion is such a one-dimensional way of seeing things.
The AI stuff aside, in light of the above, what is the author’s thesis here? “For the same code, all else being equal, it’s better to have less complexity than more complexity”? Sure, true, but that’s a pretty easy and obvious point.
It seems this entire article could have been profitably boiled down to “make sure your AI coding tools aren’t adding unnecessary complexity to your finished code.”
My files tend to be fairly long, but I also have a 50/50 comment-to-code ratio.
I'll bet that you could prompt an LLM to both reduce cyclomatic complexity and add lots of comments.
At a previous place we used a dreadful email marketing SaaS tool and it caused us no end of fire-fighting, even though we probably only had 500 lines of integration code. We ended up rewriting the functionality we needed and bringing it in-house and saved a ton of pain and money, and added ~3k lines.
If you use them promptly, in the right way, you'll make money. If you let it sit and degrade, or spill it, then it's a liability.
While source code, unlike certain chemicals, doesn't spontaneously change on its own, its fitness for purpose does, as the organization constantly shifts goals and processes.
Postulate that you have a viable software business, and work backwards from there. Everything else, every purchase and every pay period in which you pay someone to work for you, is a cost from this point of view, and undesirable unless necessary for the welfare of the business. And every cost is a step away from the platonic ideal of a business: one with arbitrarily high profits and arbitrarily low costs.
I agree that complexity is worse than simplicity, to the extent that complexity really is more expensive. But cost, whether long-term or short-term, capital or operational, is what really matters.
The question is whether LLM-generated code increases or reduces cost. Size, complexity, and a million other details not considered in the OP or in your comment all matter a lot.
Nice to be needed, yes. Peace is also nice.
Interesting talk about complexity: complexity is when your systems interact. A complex system can become unreasonable. A complex system can be generative and surprising!
In theory, we should have been able to delete that role without much thought, but without any extra layer of validation and some code updates, it still would have broken things.
The uncertainty is where most of the risk lies, especially when some people do odd things like this.
And before someone tries to refute the "debt is mostly bad for the debt holder" statement by citing some coked-up economist in his first semester: I know how money is created today. I also know about the opportunities new debt can open. That doesn't make debt something positive; the field really lacks structured thought... I forgive you if you are the lawyer of Greece, but ffs... some concepts are simple and fundamental.
I guess the author tried to connect debt and technical debt, and there are certain similarities: in both cases, work has to be done without direct compensation, aside from working off the debt. Perhaps that is why it is called debt.
1) You don't need AI to generate code because you don't need any code at all? If you don't need any code, then this discussion is useless.
2) AI generates more and worse code. Well, if you start with this assumption, of course the conclusion is "don't use AI". But that is a big assumption to make; you don't back it up with anything, and you don't even mention it at all.
Let's consider other possibilities:
1) AI will generate less and better code than you would. Shouldn't you use it?
2) AI will generate code of the same quality as you would, but in 50% less time. Shouldn't you use it then, too?