Recently I refactored about 8,000 lines of vibe-coded bloat down to about 40 lines that ran ten times as fast, required 1/20 as much memory, and eliminated both the defect I was tasked with resolving and several others I found along the way. (Tangentially, LLM-generated unit tests never cease to amaze me.) The PHBs didn't particularly appreciate my efforts: we've got a very expensive Copilot Enterprise license to continue justifying.
There will be vibe-coded and amateur banged-out hustle trash, which will be the cheap plastic cutlery of the software world.
There will be code lovingly hand-crafted by experts (possibly using some AI, but in the hands of someone who knows their shit) that will be like the fine stuff and will cost many times more.
A lot of stuff will get prototyped as crap and then, if it gets traction, reimplemented with quality.
For what it's worth, here's quicksort in five lines of Haskell: https://stackoverflow.com/questions/7717691/why-is-the-minim...
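The snippet in question is the classic list-comprehension version, roughly (elegant, though not a true in-place quicksort, since it allocates new lists at every step):

    -- Classic Haskell quicksort: pick a pivot, recurse on the
    -- smaller and not-smaller partitions.
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (p:xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]

Note that it works for any type with an Ord instance, not just integers.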
Then as now, if you let the machine do the thinking for you, the result was a steaming mess. Up to you if that was acceptable (and for many, it was).
I’m using AI a lot too. I don’t accept all the changes if they look bad. I also keep things concise. I’ve never seen it generate something so bad I could delete 99 percent of it.
They couldn't. I would go find the code that caused a bug, fix it, and discover that the bug was still there, because previous students, rather than add a parameter to a function, would make a copy of it and slightly modify it.
I deleted about 3/4 of their code base (thousands of lines of Turbo Pascal) that fall.
Bonus: the customer was the Department of Energy, and the program managed nuclear material inventory. Sleep tight.
In the case I saw, it was Rust code and the LLM typed some argument as an Arc<Mutex<_>> when it absolutely did not need to, which caused the entire PR to inflate. The vibe coder apparently didn't catch this and just kept on vibing... Technically the code did what it needed to do, but it was super inefficient.
It would have been easy for me to just accept the PR. It technically worked. But it was garbage.
In addition to not breaking existing code, it also has the added benefit of boosting personal contribution metrics in the eyes of management. Oh, and it's really easy to revert things: all I have to do is find the latest copy and delete it. It'll work great, promise.
Aggggressively "You can write Java in any language" style JavaScript (`Factory`, `Strategy`, etc) plus a whole mini state machine framework that was replaceable with judicious use of iterators.
(This was at Google, and I suspected it was a promo project gone metastatic.)
If the vision were true, we should see it happen with normal goods too. Quality physical goods do not beat the shit goods in the market: crap furniture is the canonical example (with blog articles discussing the issue).
Software (and movies) is free for subsequent copies, so at first sight you might think software is completely different from physical goods.
However, for most factory-produced goods, designing and building the factory is the major cost. The marginal cost of producing each copy of an item might be reasonably low (highly dependent on raw materials and labor costs?).
Many expensive physical goods are dominated by the initial design costs, so an expensive Maserati might be complete shit (bought for image, status, or Veblen reasons, not because it is high quality). There's a reason why the best products are often midrange. The per-unit cost of reproducing units 2..n of cheap physical goods is low almost by definition.
Some parts of iPhone software are high quality (e.g. the security is astounding). Some parts are bad. Apple's monetisation adds non-optional features that have negative value to me; however, those features have positive value to Apple.
Negative 2000 Lines of Code (1982) - https://news.ycombinator.com/item?id=33483165 - Nov 2022 (167 comments)
-2000 Lines of Code - https://news.ycombinator.com/item?id=26387179 - March 2021 (256 comments)
-2000 Lines of Code - https://news.ycombinator.com/item?id=10734815 - Dec 2015 (131 comments)
-2000 lines of code - https://news.ycombinator.com/item?id=7516671 - April 2014 (139 comments)
-2000 Lines Of Code - https://news.ycombinator.com/item?id=4040082 - May 2012 (34 comments)
-2000 lines of code - https://news.ycombinator.com/item?id=1545452 - July 2010 (50 comments)
-2000 Lines Of Code - https://news.ycombinator.com/item?id=1114223 - Feb 2010 (39 comments)
-2000 Lines Of Code (metrics == bad) (1982) - https://news.ycombinator.com/item?id=1069066 - Jan 2010 (2 comments)
Note for anyone wondering: reposts are ok after a year or so (https://news.ycombinator.com/newsfaq.html). In addition to it being fun to revisit perennials sometimes (though not too often), this is also a way for newer cohorts to encounter the classics for the first time—an important function of this site!
That was the first and last time we had to do it, as the soft drinks returned the following week.
These 5 lines are probably my favorite example.
But it’s pretty obvious when it produces garbage. So you’d reject it there and then. At the very least code review will raise so many questions. How did 8000 lines make it into the code base?
I still remember the behemoth of a commit that was "-60,000 (or similar) lines of code". Best commit I ever pushed.
Those were fun times. Hadn't done anything algorithmically impressive since.
I couldn't believe my eyes. I was working on my own project beside this team with the list, so thankfully I was left out of the whole disaster.
A guy I knew wasn't that lucky. I saw how he suffered from this harmful list. Then I told him a story I had recently heard about the Danish film director Lars von Trier. von Trier was going to be chosen to appear in a "canon" list of important Danish artists that the government was responsible for. He then made a short film in which he took the Danish flag (red with a white cross), cut out the white lines, and stitched it together again, forming a red communist flag. von Trier was immediately made persona non grata and removed from the "canon".
Later that day my friend approached the bugs-caused/fixed list, cut out his own line, taped the list together, and put it on the wall again. I'll never forget how a PL came into the room later, stood and gazed at the list for a long time before he realized what had happened. "Did you do this?" he asked my friend. "Yes", he answered. "Why?", said the PL. "I don't want to be part of that list", he answered. The next day the list was gone.
A dear memory of successful subversion.
My manager has it pinned on the breakroom wall.
At that time at Apple, even as an IC, Bill had lines of communication to Steve and was extremely valued. There's absolutely no doubt he could make "middle manager shenanigans" go away simply by not complying or by maliciously complying. Hell, I've seen ICs far less valuable, even of net negative value, get away with stunts far worse than these, succeed, and keep their jobs. Out of all the stories on Folklore.org, this is the one you have an issue with?!
Pre-vibe-coding it was more like the difference between fine silverware and cheap stamped metal stuff.
The outcome where all of a sudden leadership just shits its pants, doesn't communicate at all, and never follows up... It's like writing "and then everyone clapped" for programmers.
There were three performance optimizations in total, one of which I rejected because the gain was minimal for the typical use case, and there are still some memory allocation optimizations that I have deferred because I'm in the middle of a major refactor of the code. The LLM has already written down plans to restart this process later when I have more time.
https://forum.cursor.com/t/cursor-yolo-deleted-everything-in...
How long would a quicksort (say, of integers) be in 68000 assembly? Maybe 20 lines? My 68000 isn't very good. The real advantage of writing it in Haskell is that it's automatically applicable to anything that's Ord, that is, ordered.
I've told this story to every client who tried schemes to benchmark productivity by some single-axis metric. The fact that it was Atkinson demonstrates that real productivity is only benchmarkable by utility, and if you can get a truly accurate quantification for that then you're on the shortlist for a Nobel in economics.
2. It’s a direct recollection from someone who was there, not an unnamed “my cousin’s best friend” or literal folklore that is passed down by oral tradition. Andy knew Bill and was there. There is no clear motivation to tell a fictional story when there were so many real ones.
3. The specifics line up very well with what we know about Bill Atkinson and some of his wizardry needed to make the Mac work.
Given this, it’s much easier to assume that your assertion is what is made up.
This is not only a possible outcome, it is a common one. When leadership realizes it was a mistake to institute one of these types of "productivity motivators", it is easier to disappear it and never (officially) speak of it again.
Is this, from elsewhere in the thread, a system rethink, https://github.com/dotnet/runtime/pull/36715/files ?
I've worked on a product that reinvented parts of the standard library in confusing and unexpected ways, meaning that a lot of the code could easily be compacted 10-50 times in many places, i.e. 20-50 lines could be turned into 1-5 or so. I argued for doing this and deleting a lot of the code base, which didn't take hold before I and every other dev except one had left. Nine months after that they had deleted half the code base out of necessity, roughly 2 MLOC down to 1 MLOC, because most of it wasn't actually used much by the customers and the lone developer just couldn't manage the mess on his own.
I wouldn't call that a system rethink.
Otherwise just downvote or flag, I guess, but this comment of yours reads as an insult to a person who maybe did not put the most effort into writing their comment, but seems genuine to me at least.
The ideals probably worked for that time and that place. Many places in other parts of the world, at other times, would have different ideals to deal with different priorities. America in the '80s had no survival struggle, wars, cultural stigmas, pandemics, or famines. Literacy and business were blooming. Great minds and workers were lured with great promises. A natural result was accelerated innovation. Plenty of food and materials. Individualism, fun, and luxury were the goals for most, and the businesses delivered all of it. Personal computing was an exact fit for that business.
But block-mode terminals that did forms had been a thing for over a decade at that point. Not that this was likely at Apple. But there were definitely contemporary ways in which one could have been entering this stuff via a computer.
Indeed, an IBM 3270 could be told that a field was numeric. That wouldn't have the terminal prevent negative numbers; the host would have to have done that upon ENTER. But the idea of unsigned numbers in form data had been around in (say) COBOL PIC strings since the 1960s: PIC 9(4) declares a four-digit unsigned numeric field, and you need S9(4) to allow a sign at all.
* https://ibm.com/docs/en/cics-ts/5.6.0?topic=terminals-3270-f...
Code is an artifact, undesired debris.
The fewer lines, the better.
About 70 lines, once you strip out the comments and blank lines.
https://github.com/historicalsource/supermario/blob/9dd3c4be...
I think I have mentioned this before on HN too. I am not from a CS background and just learnt the trade as I was doing the job, I mean even the normal stuff.
We have a project that tries to reify live objects into human-readable form. The final representation is quite complicated, with a lot of types; the initial representation is less complicated.
In order to make it readable, if there are any common or similar data nodes, we have to compare them and try to combine them, i.e. find places that can be made into methods and find the relevant arguments for all the calls (kind of).
The initial implementation did the transformation into the final form first, and then started the comparison. So the comparison had to deal with all the different combinations of the types in the final representation, which made the whole thing so complex, and it had been maintained by generations of engineers such that nobody had a clear idea how it was working.
Then I read about how hashmaps are implemented (yep, I am that dumb) and it was a revelation. So we did the following things:
1. We created a hash for the skeleton that has to remain the same through the whole set of comparisons and transformations of the "common nodes" (they can be considered something similar to methods or arguments), and did the comparison only for nodes with matching skeletal hashes; and
2. created a separate layer that does the comparison and creates the common nodes on the initial, primitive form, and then does the transformation as a second layer (so you don't have to deal with all the types in the final representation); and
3. Don't type. Yes: data is the simplest abstraction, and if your logic can be made into data or some properties, please do yourself a favor and make it so. We found a lot of places where weird class hierarchies could be converted into data properties.
Basically, it is a dumb multi-pass decompiler.
That did not just speed up the process; it resulted in much more readable and understandable abstractions and code. I do not know if this is widely useful, but it helped in one project. There is no silver bullet, but types were the actual problem for us, and so we solved it this way.
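A minimal sketch of the skeleton-hash idea from step 1, in Haskell for brevity (the node shape, field names, and use of the hashable package are all hypothetical; our real types were messier):

    import qualified Data.Map.Strict as M
    import Data.Hashable (hash)  -- from the "hashable" package

    -- Hypothetical node shape: a kind, children, and a payload
    -- that the skeleton deliberately ignores.
    data Node = Node { kind :: String, children :: [Node], payload :: String }

    -- Skeleton hash: structure and kinds only. Two nodes can only be
    -- "common" if their skeletons agree, so equal hashes gate the
    -- expensive full comparison.
    skeletonHash :: Node -> Int
    skeletonHash (Node k cs _) = hash (k, map skeletonHash cs)

    -- Bucket candidates by skeleton hash; only nodes that share a
    -- bucket ever need a pairwise comparison.
    buckets :: [Node] -> M.Map Int [Node]
    buckets = foldr (\n -> M.insertWith (++) (skeletonHash n) [n]) M.empty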
I have mixed feelings: now it's so much simpler, but the frustration of having had to write those lines in the first place is so annoying. That's what happens when specs aren't clear.
the select-a-bunch-of-code-and-then-zap-it-with-the-Del-key is the best hardware algorithm.
uncatchable, so I won't even try.
One way to arrive at it is often to just draw some graphs, on paper or a whiteboard, and manually step through examples, pointing with your finger or pen, drawing changes, and sometimes drawing a table. You'll get a better idea of what has to happen, and what the opportunities are.
This sounds "Then draw the rest of the owl," but it can work, once you get immersed.
Then code it up. And when you spot a clever opportunity and find the right language to document your solution, it can sound like a brilliant insight you could just pull out of the air because you are so knowledgeable and smart in general, when actually you had to work through that specific problem, to the point you understood it, like Feynman would want you to.
I think Feynman would tell us to work through problems. And that Feynman would really f-ing hate Leetcode performance art interviews (like he was dismayed when he found students who'd rote-memorize the things to say). Don't let Leetcode asshattery make you think you're "not good at" algorithms.
The lower (more negative) the score, the better (given a fixed set of features).
“Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” ― Antoine de Saint-Exupéry, Airman's Odyssey
Source code for each portal was stored in a separate Git repository. I asked the original authors how I was supposed to fix bugs that affected all the portals or develop new functionality for all of them. The answer was to backport all fixes manually to all copies of the source code.
Then I asked: isn't it possible to use a single source repository and feature flags to customize the appearance and features of each portal? The original authors said it was impossible.
In 2-3 months I merged the code of 4-5 portals into one repository, added feature flags, and upgraded the framework version. The release went flawlessly, and it became possible to fix a bug simultaneously for all the portals or develop new functionality available across all the countries where the company operated. It was a huge relief for me, as copying bugfixes manually was a tedious and error-prone process.
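A minimal sketch of what the feature flags amounted to, in Haskell for illustration (flag and country names are made up): one codebase, one record of flags per portal, consulted at runtime.

    -- Hypothetical per-portal configuration.
    data PortalFlags = PortalFlags
      { showLoyaltyBanner :: Bool
      , useNewCheckout    :: Bool
      }

    -- Each country portal gets its own flag record; everything else
    -- is shared code.
    flagsFor :: String -> PortalFlags
    flagsFor "de" = PortalFlags { showLoyaltyBanner = True,  useNewCheckout = False }
    flagsFor "fr" = PortalFlags { showLoyaltyBanner = False, useNewCheckout = True }
    flagsFor _    = PortalFlags { showLoyaltyBanner = False, useNewCheckout = False }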
Tangent but -- one furniture hack I've found is that if you don't want to pay a lot, go for the simplest design you can find, made of basic wood or metal. It'll be... a wood or metal kit that assembles into the basic form of what is needed. The wood is often unfinished or minimally finished. That stuff is pretty durable. Things that look "fancy" but are cheap tend to be utter trash, made of the worst materials with poor tolerances. A more elaborate or artistic design plus quality equals expensive.
When I say minimal I mean minimal. A cheap quality bed frame is a rack the mattress sits on. A cheap quality dresser is basically bins on tracks.
Ironically, places like Amazon are where you find this cheap, quality, minimal stuff. Furniture stores are complete trash unless they are artisan, often local; I live in Ohio and there are artisan Amish furniture sellers that sell good (but $$$) stuff that is literally hand made. But find one that is actually sourcing from or even tied to an Amish community. You don't have to look into the store, just the stuff inside: it will be solid, built via obvious craft joinery, etc., and will weigh a ton. (And you're supporting a local community.)
So I wonder if software will start to look like that. Pay a lot (like enterprise prices) for highly regarded pro software or find something minimal "hand made" by a 1-5 person shop. The world of quality native Mac apps comes to mind for the latter.
The developer who wrote it was a smart guy, but he had never worked on any other JS project. All state was stored in the DOM in custom attributes, .addEventListeners EVERYWHERE... I joke that it was as if you took a monk, gave him a book about javascript, and then locked him in a cell for 10 years.
I started refactoring pieces into web components, and after about 6 months had removed 50k lines of code. Now knowing enough about the app, I started a complete rewrite. The rewrite is about 80% feature parity, and is around 17k lines of code (not counting libraries like Vue/pinia/etc).
So, soon, I shall have removed over 200,000 loc from the project. I feel like I should retire then, as I will never top that.
:)
This is exactly where these comparisons break down. Obviously you don't need as much code to get passable implementations of a fraction of all the features.
You make a fair point that a basic framework can be expressed with much less code.
And that the remaining 20% probably contains more edge cases with proportionally more code.
But do you think the last 20% will eventually make up anywhere near 233k lines of code?
The real savings here come from rewriting: seeing all the common denominators and knowing what's ahead.
I'd rather have 250,000 lines of code where 230,000 of them are in battle-tested libraries, and only 20,000 are lines we ever need to read or write.
Too many forget that it's one of the few legal ways to supply your employees with performance enhancing drugs.
It’s because every task was doing a database call, but they had a whole repo and AWS Lambdas for running it. Stupidest thing I’ve ever seen.
I'm trying to code up a version in ARM assembly to compare, and it looks like it'll be about 30 lines; when I get that working I can compare the two to see where the difference comes from. In some ways the 68000 is more expressive than ARM, like being able to reference memory directly, even twice in one instruction.
(Am I misunderstanding this, or is this the source code to Apple System 7.1? There seems to have been a mailing list about this codebase from 02018 to 02021: https://lists.ucc.gu.uwa.edu.au/pipermail/cdg5/)
But a lot of it is opportunity. Like, I had the opportunity to work on an old PHP backend with 500 ms to 1 second response times (thanks in part to it writing everything to a giant XML string, which was then parsed and converted to a JSON blob before being sent back over the line). Simply rewriting it in naive / best-practices Go changed response times to 10 ms. In hindsight the project was far too big to rewrite on my own and I should have spent six months to a year trying to optimize and refactor it, but, hindsight.
The demanding / loud person can and should be ignored; as a developer, you are responsible for code quality and maintainability, not your / their manager.
I've had a similar experience (see my other comment): the original author was a junior developer at best in skill, but unfortunately also a middle-aged, experienced developer, one of the founders of the company, and very productive. But obviously not someone who had ever worked in a team or had someone else work on their codebase.
Think functions thousands of lines long, nested switch/case/if/else/ternary things ten levels deep, concatenated SQL queries (it was PHP because of course), concatenated JS/HTML/HTML-with-JS (it was Dojo front-end), no automated tests of any sort, etc.
You are starting at a specific node in the graph and saying that if there’s an isomorphism the target tree root node must be equivalent to that specific starting node in the original graph.
You just walk through the original graph following the pattern of the target tree, and if something doesn’t match it’s false, otherwise true? Am I mistaken here? Again, the target being a tree is a bit irrelevant. This will work for any subgraph as long as you are also given starting-point nodes for both the target and the original graph?
The graph that is to be determined to be a subgraph is a tree. From there he says it can be done with an algorithm that traverses every node at most once.
I’m assuming he’s also given a starting node in the original graph and the algorithm just traverses both graphs at the same time starting from the given start node in the original graph and the root in the tree to see if they match? Standard DFS or BFS works here.
I may be mistaken, because I don’t see any other way to do it in one walk-through unless you are given a starting node in the original graph.
To your other point: the algorithm inherently has to be stateful, too. All traversal algorithms for graphs have to keep long-term state, simply because if you’re at a node in a graph that has, say, 40 paths to other places, you can literally only go down one path at a time, and you have to statefully remember that the node has another 39 paths to come back to later.
Given two graphs, one of which is a tree, can you not determine whether the tree is a subgraph of the other graph in one walk-through?
It’s only possible if you’re given additional information? Like a starting node to search from? I’m genuinely confused?
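For what it's worth, here is a minimal Haskell sketch of the simultaneous walk being described. It assumes exactly the extra information discussed above: a labelled directed graph, a given start node, and adjacency lists whose order lines up with the pattern tree's children (without that ordering assumption you would have to try child-to-neighbour assignments, and the single-pass claim gets harder).

    import qualified Data.Map as M

    -- The pattern to find: a rose tree of labels.
    data Tree = Node String [Tree]

    -- The graph: node id -> (label, ordered out-neighbours).
    type Graph = M.Map Int (String, [Int])

    -- Walk pattern and graph together from the given start node; each
    -- (pattern node, graph node) pair is visited at most once.
    matchesFrom :: Graph -> Int -> Tree -> Bool
    matchesFrom g n (Node lbl kids) =
      case M.lookup n g of
        Nothing -> False
        Just (lbl', outs) ->
          lbl == lbl'
            && length kids <= length outs
            && and (zipWith (matchesFrom g) outs kids)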
I'm distrustful of unit testing, as I've seen too many tests written to hit code-coverage numbers that don't actually test the functions they are aimed at. A non-trivial number run the function asynchronously and then report a successful run before the function even finishes executing, meaning that even thrown errors don't fail the tests (granted, part of that is on the testing framework for letting unexpected errors ever result in a pass).
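That failure mode is easy to reproduce. A hypothetical sketch, in Haskell rather than a JS test framework, but the shape is the same: the "test" forks the work, reports success immediately, and any later assertion failure is lost.

    import Control.Concurrent (forkIO, threadDelay)

    -- Anti-pattern: the "test" kicks off the work asynchronously and
    -- reports success before it finishes. The error thrown in the
    -- forked thread kills only that thread, so this can never fail.
    badTest :: IO Bool
    badTest = do
      _ <- forkIO $ do
        threadDelay 1000            -- simulate slow work
        error "assertion failed"    -- never propagates to the result
      pure True                     -- "pass", before the work even ran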
The lead dev was a hard-core C programmer and had no Perl experience before this job. He handed me a 200-line uncommented function that he had written and that was not working. It was a pattern matcher. I replaced it with six lines of commented Perl with a regex that was very readable (for a regex).
Since he had no idiomatic understanding of Perl, he did not accept it and complained to management. We had to bring in the local Perl demigod to arbitrate (at 21, he was half my age at the time, but smart as a whip). He ruled in my favor and the lead was pissed.
There’s also a role called algorithms engineer at standard tech companies (typically for lower-level work like networking, embedded systems, or graphics), but the lack of an engineering background may hamstring you there. Engineers working in crypto also use a fair bit of algorithms knowledge.
I do low level work at a top company, and you only use algorithms knowledge on the job a couple of times a year at best.
(I skipped clarifying in the GP post that they took the soft drinks out of the fridge and emailed the new policy, rather than merely being a little slow in restocking.)