Unless you are a product owner with paying clients that love you and your product and won't simply ditch it in favour of a new clone, you are really screwed.
Correction -- sadly, we're already well within this era
Slight pushback on this. The web has been spammed with subpar tutorials for ages now. The kind of medium "articles" that are nothing more than "getting started" steps + slop that got popular circa 2017-2019 is imo worse than the listy-boldy-emojy-filled articles that the LLMs come up with. So nothing gained, nothing lost imo. You still have to learn how to skim and get signals quickly.
I'd actually argue that now it's easier to winnow the slop. I can point my cc running in a devcontainer to a "tutorial" or lib / git repo and say something like "implement this as an example covering x and y, success condition is this and that, I want it to work like this, etc.", and come back and see if it works. It's like a litmus test of a tutorial/approach/repo. Can my cc understand it? Then it'll be worth my time looking into it. If it can't, well, find a different one.
I think we're seeing the "low hanging fruit" of slop right now, and there's an overcorrection of attitude against "AI". But I also see that I get more and more workflows working for me, more or less tailored, more or less adapted for me and my uses. That's cool. And it's powered by the same underlying tech.
The problem is that AI makes all of that far, far easier.
Even using tooling to filter articles doesn't scale as slop grows to be a larger and larger percentage of content, and it means I'm going to have to consider prompt injections and running arbitrary code. All of this is a race to the bottom of suck.
Prompting is just 50% of the work (and the easy part, actually). Ask the Head of Product or whoever is there to deploy something valuable to production and maintain it for 6 months while not losing money. It's just not going to happen, not even with true AGI.
That’s the thing: open source is the only place where the true value (or lack of value) of these tools can be established — the only place where one can test mettle against metal in a completely unconstrained way.
Did you ever want to build a compiler (or an equally complex artifact) but got stuck on various details? Try now. It’s going to stand up something half-baked, and as you refine it, you will learn those details — but you’ll also learn that you can productively use AI to reach past the limits of your knowledge, to make what’s beyond a little more palatable.
All the things people say about AI are true to some degree: my take is that some people are rolling the slots to win a CRUD app, and others are trying to use it to do things that they could only imagine before — and open source tends to be the home of the latter group.
1. Reducing dependencies is the wrong success metric. You just end up doing more work yourself, except you can't be an expert in everything, so your code is often strictly worse.
2. Regenerating the same solutions with a probabilistic machine will produce bugs a certain percentage of the time. Dependencies are always the same code (when versioned).
3. Cognitive overhead for human review is higher with LLM-generated libs, for no additional benefit.
Trivial NPM libraries were never needed, but LLMs really are the nail in the coffin for them even when it comes to the most incompetent programmers because now they can literally just ask an LLM to spit out the exact same thing.
Except it's just not true in many cases, because of the social systems we've built. If I want to ship software to Debian, I have to make sure that every single one of my third-party dependencies is registered and packaged as a proper Debian package - a lot of the time it takes much less work to rewrite some code than to get 25 hundred-line micro-libraries accepted into Debian.
It's really not. Every project of any significance is now fending off AI submissions from people who have not the slightest fucking clue about what is involved in working on long-running, difficult projects or how offensive it is to just slather some slop on a bug report and demand it is given scrutiny.
Even at the 10,000 feet view it has wasted people's time because they have to sit down and have a policy discussion about whether to accept AI submissions, which involves people reheating a lot of anecdotal claims about productivity.
Having learned a bit about how to write compilers, I know enough to guarantee you that an AI cannot help you solve the difficult problems that compiler-building tools and existing libraries cannot solve.
It's the same as it is with any topic: the tools exist and they could be improved, but instead we have people shoehorning AI bollocks into everything.
Thankfully (nothing against blob-util specifically, as I've never intentionally used it), I wouldn't completely blame LLMs either, since languages like Go never had this dependency hell.
npm is a security nightmare not just because of npm the package manager, but because the culture of the language rewards behavior such as "left-pad".
Instead of writing endless utilities for other projects to re-use, write actual working things instead - that's where the value/fun is.
It shouldn't be a radical idea, it is how science overall works.
Also, on the educational side: in the modern software ecosystem, I find I don't want to learn everything. Excellent new things or dominantly popular new things, sure, but there are a lot of branching paths of what to learn next, and having Claude Code whip up a good-enough solution is fine and lets me focus on less, more deeply.
(Note: I tried leaving this comment on the blog but my phone keyboard never opened despite a lot of clicking, and on mastodon but hit the length limit).
“A little copying is better than a little dependency.”
This is a point where the lack of alignment between the free beer crowd and those they depend on is all too clear. The free beer enthusiast cannot imagine benefiting from anything other than a finished work. They are concerned about the efficient use of scarce development bandwidth without consciousness of why it is scarce or that it is not theirs to direct. They view solutions without a hot package cache as a form of waste, oblivious to how such solutions expedite the development of all other tools they depend on, commercial or free.
So if fewer people are including silly dependencies like isEven or leftPad, then I see that as a positive outcome.
Further: sitting down to discuss how your project will adapt to change is never a waste of time, I’m surprised you stated it like that.
In such a setting, you’re working within a trusted party — and for a major project, that likely means extremely competent maintainers and contributors.
I don’t think these people will have any difficulty adapting to the usage of these tools …
Use of an AI to write your code is also a form of dependency. When the LLM spits out code and you just dump it in your project with limited vetting, that's not really that different from vendoring a dependency. It has a different set of risks, but it still has risks.
PS. I think this is much less clear and much less settled law than you are suggesting.
AI are thieves!
> I don’t know which direction we’re going in with AI (well, ~80% of us; to the remaining holdouts, I salute you and wish you godspeed!), but I do think it’s a future where we prize instant answers over teaching and understanding.
Google ruined its search engine years before AI came along.
The big problem I see is that we have become WAY too dependent on these mega-corporations. Which browser are people using? Typically chrome. An evil company writes the code. And soon it will fire the remaining devs and replace them with AI. Which is kind of fitting.
> Even now there’s a movement toward putting documentation in an llms.txt file, so you can just point an agent at it and save your brain cells the effort of deciphering English prose. (Is this even documentation anymore? What is documentation?)
Documentation in general sucks. But documentation is also a hard problem.
I love examples. Small snippets. FAQs. Well, many projects barely have these.
Look at Ruby WebAssembly/wasm or Ruby Opal: their documentation is about 99% useless. Or, even worse, Rack in Ruby. I did not notice this in the past, in part because e.g. StackOverflow still worked and there were many blogs which helped fill in missing information too. Well, all of that is largely gone now or has been slurped up by AI spam.
> the era of small, low-value libraries like blob-util is over. They were already on their way out thanks to Node.js and the browser taking on more and more of their functionality (see node:glob, structuredClone, etc.), but LLMs are the final nail in the coffin.
I still think they have value, but watching organisations such as rubygems.org disrupt the ecosystem and bleed it dry by kicking out small hobbyists, I think there is indeed a trend towards eliminating the silly solo devs who think their unpaid spare time is not worthy of anything at all, while the big organisations eagerly throw more and more restrictions onto them. My favourite example is the arbitrary 100k download limit for gems hosted at rubygems.org, but look at the new shiny corporate rules on rubygems.org too - this is what happens when corporations take over the infrastructure and control it. Ironically, this also happened to pypi, and they admit it indirectly: https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f... - of course they deny that corporations control pypi now, but the denial itself is telling, because this is exactly how hobbyists get eliminated: just throw more and more restrictions at them without paying them, and sooner or later they decide to do something better with their time.
Now, that does not mean it has no value, but it is a trade-off. After about 14 years, for instance, I retired permanently from rubygems.org in 2024 due to the 100k download limit (and now I wouldn't use it anyway after the shameful moves RubyCentral made, as well as the new shiny corporate rules I couldn't operate within; it is now a private structure owned by Shopify. Good luck finding people who want to invest their own unpaid spare time into anything tainted by corporations here).
Well, yes, there's your problem. But people have been doing this with random snippets found on the internet for a while now. The truth is that irresponsible developers will produce irresponsible code, with or without LLMs.
For some definitions of "small piece of code" that may be OK, but sometimes it's also more than folks consider.
The Tragedy Of The Commons is always about this: people want what they want, and they do not care to prevent the tragedy, if they even recognise it.
> Project owners should instead be working on ways to filter out careless code more efficiently.
Great. So the industry creates a burden and then forces people to deal with it — I guess it's an opportunity to sell some AI detection tools.
It is a waste of time for large-scale volunteer-led projects who now have to deal with tons of shit — when the very topic is "how do we fend off this stuff that we do not want, because our project relies on much deeper knowledge than these submissions ever demonstrate?"
Not including the unused features not only makes the code you are adding easier to read and understand, it may also be more efficient for your specific use case, since you don't have to account for all the other possible use cases you don't care about.
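As a hypothetical sketch (all names mine, not from any particular library): a trailing-only debounce covers the one case many apps actually need in a few lines, where the generic library version also carries leading/trailing/maxWait/cancel options you never call.

    // Trailing-only debounce: just the one behaviour this app uses.
    function debounce<A extends unknown[]>(
      fn: (...args: A) => void,
      ms: number
    ): (...args: A) => void {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), ms);
      };
    }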
AI, to me, is a character test: I'm regularly fascinated by finding out who fails it.
For example, in my personal life I have been treated to AI-generated comms from someone that I would never have expected it from. They don't know I know, and they don't know that I think less of them, and I always will.
Do you? It doesn't seem even remotely like an apples-to-apples comparison to me.
If you're the author of a library, you have to cover every possible way in which your code might be used. Most of the "maintenance" ends up being due to some bug report coming from a user who is not doing things in the way you anticipated, and you have to adjust your library (possibly causing more bugs) to accommodate, etc.
If you instead imagine the same functionality being just another private thing within your application, you only need to make sure that functionality works in the one single way you're using it. You don't have to make it arbitrarily general-purpose. You can do error handling elsewhere in your app. You can test it only against the range of inputs you've already ensured are the case in your app, etc. The amount of "maintenance" is tiny by comparison to what a library maintainer would have to do.
It seems obvious to me that "maintenance" means a much more limited thing when talking about some functionality that the rest of your app is using (and which you can test against the way you're using it), versus a public library that everyone is using and needs to work for everyone's usage of it.
I have used this "instant legacy code" concept before. It's absolutely true, IMO. But people really, really, really hate hearing it.
But what this article and the comments don't say: open source is mainly a quality metric. I re-use code from popular open-source repos in part because others have used it without complaints (or have documented the bugs), in part because people are embarrassed to write poor-quality open source so it's above-par code, and in part because if there are issues in this corner of the world, this dependency will solve them over time (and I can watch and wait when I don't have time to fix and contribute).
The quality aspect drives me to prefer dependencies over AI when I don't want full ownership, so I'll often ask AI to show open-source projects that do something well.
(As an aside, this article is about AI, but AI is so pervasive now as an issue that it doesn't even need saying in the title.)
You can pin the dependency and review the changes for security reasons, but fully grasping the logic is non-trivial.
Smaller dependencies are fine to copy at first, but at some point the codebase becomes too big, so you abstract it and at that point it becomes a self-maintained dependency. Which is a fair decision, but it is all about tradeoffs and sometimes too costly.
Now, we can argue that a typical SEO-optimized garbage article is not better, but I feel like a typical person's trust in them was lower on average.
This isn't "small" open source, "small" would be something you put together in a week or weekend. These are like "micro" projects, where more work goes into actually publishing and maintaining the repository than actually writing the library.
I like the approach C sometimes takes, with the "tiny header file" type of libraries. Though I guess that also stems from the lack of a central build system.
Or don't people do code review any more? I suppose one could outsource the code review to an AI, preferably not the one that wrote it though. But if you do that surely you will end up building systems that no one understands at all.
Now that both have dried up I hope we can close the vault door on js and have people learn how to code again.
I think they're going to be using porn and terrorism (as usual) to do that, but also child suicide. I also think they're going to leverage this rhetoric to lock down OSes in general, by making them uninstallable on legally-available hardware unless approved, because approved OSes will only be able to run approved LLMs.
Meaning that I think LLMs/generative AI will be the lever to eliminate general-purpose computing. As mobile went, so will desktop.
I think this is inevitable. The real question for me is whether China will partner with the west on this, or whether we will be trading Chinese CPUs with each other like contraband in order to run what we want.
> any true semblance of real AGI like systems.
This is the only part I don't agree with. This isn't going to happen, but I'm not even sure it would be more useful than what we have. We have billions of full-AGI machines walking around, and most of them aren't great. I'm talking about restrictions on something technically barely better than what we have now, maybe only somewhat more compute-efficient. Training techniques will probably be where we get the most improvements.
I don't see how the conclusion follows from this.
There will be many LLM-generated functions purporting to do the same thing, and when a bug in one of them gets fixed, only that one project gets the fix, instead of every project that uses the NPM package as a dependency.
This is not a worry with NPM. You can just specify a specific version of a dependency in your package.json, and it'll never be updated ever.
I have noticed for years that the JS community is obsessed with updating every package to the latest version no matter what. It's maddening. If it's not broke, don't fix it!
So why not do the same thing with a dependency? Install it once and never update it (and therefore hacked and malicious versions can never arrive in your dependency tree).
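For example (package name and version made up), the whole difference is the version specifier in package.json:

    {
      "dependencies": {
        "some-small-util": "1.2.3"
      }
    }

An exact "1.2.3" instead of npm's default "^1.2.3" means installs never silently resolve to a newer (possibly compromised) release; any update is one you made deliberately.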
You're a JS developer, right? That's the group that thinks a programmer's job includes constantly updating dependencies to the latest version.
You don't actually. You write the library for how you use it, and you accept pull requests that extend it if you feel it has merit.
If you don't, people are free to fork it and pull in your improvements periodically. Or their fork gets more popular, and you get to swap in a library that is now better-maintained by the community.
As long as you pin your package, you're better off. Replicating code pretty quickly stops making sense.
Meanwhile, Java goes the other way: twenty-year-old packages that are serious blockers to improved readability. Running Java that doesn't even support Option (or Maybe or whatever it's called in Java).
Is there any upside to opensourcing anything anymore? Anything published today becomes training data for the next model, with no attribution to the original work.
If the goal is to experiment, share ideas, or let others learn from the work, maybe the better default now is "source available", instead of FOSS in the classic sense. It gives people visibility while setting clearer boundaries on how the work can be used.
I learned most of what I know thanks to FOSS projects so I'm still on the fence on this.
Maybe programming languages will be designed for AIs in the future. Maybe they'll have features that make grafting unknown generated code easier.
I look at it as why not have the best of both worlds? The docs for my JS framework all have the option of being returned as LLM-friendly text [1].
When I utilize this myself, it's to get help fleshing out skeleton code inside of an app built with my framework (e.g., Claude Sonnet with these docs in context builds a nearly 90-100% accurate implementation for most stuff I throw at it, anything from little lib stuff up to full-blown API endpoint design, and even framework-level implementations of small stuff like refactoring the built-in middleware loading). It's not for a lack of desire to read, but rather a lack of desire to build every little thing from scratch when I know an LLM is perfectly capable (and much faster than me).
[1] https://docs.cheatcode.co/joystick/ui/component/dynamic-page...
It depends. For stuff I don’t care about I’m happy to treat it as a black box. Conversely, AI now allows me to do deep dive on essentially anything I’m interested in, which has been a massive boon to my learning ability.
If you're not learning to code, then you want efficient code, so the comments are wasted bytes (ok, not a huge expense, but still).
If you are learning to code, or just want to understand how this code works, then asking an LLM is going to get a lot better result. LLMs are fantastic tutors. Endlessly patient, go down any rabbit hole with you, will continue explaining a concept until you get it, etc. I think they're going to revolutionise education, especially for practical subjects like coding.
Respect to the author for trying to educate, though.
Is there any modern compiler where the output code has anything to do with the comments in the source?
What's the point?
All of my personal projects for the past few months have been entirely private, I don't even host them on Github anymore, I have a private Forgejo instance I use instead.
I also don't trust any new open source project I stumble upon anymore unless I know it was started at least a year ago.
Huh? What if your once-off installation or vendored copy IS a hacked and malicious version, and you never realise because you never update it? That's worse.
I've seen so many tiny packages pull in lodash for some little utility method: 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've also seen plenty of projects which somehow include multiple copies of lodash in their dependency tree.
It's such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. Javascript includes almost everything you need these days anyway. And when it doesn't, the kind of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that manually rather than pull in some 70kb dependency.
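For instance (a rough sketch from memory, not lodash's actual source), the two helpers I see pulled in most often come out to a few lines each:

    // Roughly lodash's chunk: split an array into size-length slices.
    function chunk<T>(items: T[], size: number): T[][] {
      const out: T[][] = [];
      for (let i = 0; i < items.length; i += size) {
        out.push(items.slice(i, i + size));
      }
      return out;
    }

    // Roughly lodash's groupBy, taking a callback rather than
    // lodash's string/iteratee shorthand.
    function groupBy<T>(items: T[], key: (item: T) => string): Record<string, T[]> {
      const out: Record<string, T[]> = {};
      for (const item of items) {
        const k = key(item);
        if (!out[k]) out[k] = [];
        out[k].push(item);
      }
      return out;
    }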
[1]: https://github.com/Hypfer/Valetudo#valetudo-is-a-garden
Otherwise, writing it myself is much better. It's more customizable and often much smaller in size, because a library has to generalize, and that comes with bloat.
Using AI to help write it is great, because I should understand that code anyway, whether AI writes it or not, and whether or not it's in an external library.
One example recently is that I built a virtual list myself. The code is much smaller and simpler compared to other popular libraries. But of course it's not as generalizable.
EDIT: Obvious from the rest of your responses in this thread that this is trolling, leaving this up for posterity only
Why would randomized code be more robust? Also, how is searching/reading the docs slower than checking the correctness of a randomized function?
Or is it the attribution? There are many many libraries I have used and continue to use and I don't know the author's internet handle or Christian name. Does that matter? Why?
I have written a lot of code that my name is no longer attached to. I don't care and I don't know why anyone does. If it were valuable I would have made more money off of it in the first place, and I don't have the ego to just care that people know it's my code either.
I want the things I do today to have an upside for people in the future. If that means I write code that gets incorporated into a model that people use to build things N number of years from now, that's great. That's awesome. Why the hell is that apparently so demotivating to some people?
I came across this once before in the form of a react hooks library that had no dependency to install. It was just a website and when you found the hook you wanted you were meant to paste it into your project.
In decent ecosystems there should be low or zero overhead to that.
> Not including the unused features not only makes the code you are adding easier to read and understand, it may also be more efficient for your specific use case, since you don't have to account for all the other possible use cases you don't care about.
Maybe. I find generic code is often easier to read than specialised custom implementations, because there is necessarily a proper separation of concerns in the generic version.
"here's the type of message that the author of this page is trying to convey" is not what most people think is a simple question
It's also not the question I asked. I'm literally trying to parse out what question was asked. That's what makes AI slop so infuriating: it's entirely orthogonal to the information I'm after.

Asking Google "what is the flag for preserving metadata using scp" and getting, instead of the flag name, an SEO article with a misleading title that goes on about some third-party program you can download that does exactly that, while never actually telling you the answer, is ridiculous. I am happy AI has helped reduce the clickbait.
Except that the AI slop Google and Microsoft and DDG use for summaries masks whether or not a result is SEO nonsense. Instead of using excerpts of the page, the AI summary simply suggests that the SEO garbage is answering the question you asked. These bullshit AI summaries make it infinitely harder to parse out what's actually useful. I suppose that's the goal though: hide that most of the results are low quality and force you to click through to more pages (ad views) to find something relevant. AI slop changes the summaries from "garbage in, garbage out" to simply "garbage out".

Sure, web search companies moved away from direct keyword matching to much more complex "semantics-adjacent" matching algorithms. But we don't have the counterfactual keyword-based Google search algorithm from 2000 running on data from 2025, so we can't say whether it's just search getting worse, or the problem simply getting much harder over time and Google failing to keep up with it.
In light of that, I'm much more inclined to believe that it's SEO spam becoming an industry that killed web search instead of companies "nerfing their own search engines".
Yes, use of LLMs is still developers not writing original code, but it's still an improvement (a minor one) over the copy/paste of micro-dependencies.
Ultimately developers aren’t going to figure out writing original code until they are forced to do so from changed conditions within their employer.
I've been playing a lot recently with various models, lately with the expensive Claude models (API, for the large context windows), and in every attempt things are really impressive at the beginning and start going south once the codebase reaches about 10k to 15k lines of code. Even with tools split out into a separate library with separate documentation, at that point it has a tendency to regenerate tool functions in the module it's currently working on rather than use the one already defined in the helper library.
In the long run I think it's time to starve the system of input until its attitude reverts to reciprocal. It's not what I'd want, but it seems necessary. People learn from consequences, not from words alone.
Take a generic function that recursively converts snake_case object keys to PascalCase. That's about 10 lines of JavaScript; you can write it in 2 minutes if you're a competent dev. Figuring out the types for it can be done, but you really need a lot of TS expertise to pull it off.
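A sketch of both halves (all names mine): the runtime part really is about ten lines, and the type-level part is where the TS expertise goes.

    // Runtime: recursively PascalCase the keys of plain objects/arrays.
    function pascalizeKeys(value: unknown): unknown {
      if (Array.isArray(value)) return value.map(pascalizeKeys);
      if (value !== null && typeof value === "object") {
        return Object.fromEntries(
          Object.entries(value).map(([k, v]) => [
            k.replace(/(?:^|_)([a-z])/g, (_, c) => c.toUpperCase()),
            pascalizeKeys(v),
          ])
        );
      }
      return value;
    }

    // Types: template literal types plus key remapping (TS 4.1+).
    type PascalCase<S extends string> = S extends `${infer H}_${infer T}`
      ? `${Capitalize<H>}${PascalCase<T>}`
      : Capitalize<S>;

    type PascalizeKeys<T> = T extends Array<infer U>
      ? Array<PascalizeKeys<U>>
      : T extends object
        ? { [K in keyof T as K extends string ? PascalCase<K> : K]: PascalizeKeys<T[K]> }
        : T;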
That's not automatically a problem, however. The problem is that even if you do come up with a really cool idea that an LLM is not capable of autocompleting, and you release it under a copyleft license (to ensure the project survives and volunteer contributors' work is not adopted and extinguished by some commercial interest), it will get incorporated into the dataset regardless of the licensing, and thereafter the LLM will be capable of spitting it out, and its large corporate operator will be able to monetise your code (allowing anyone with money to build a commercial product based on it).
Given that some 80% of developers are now using AI in their regular work, blob-util is almost certainly the kind of thing that most developers would just happily have an LLM generate for them. Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Letting an LLM write utility code is a sword that cuts both ways. You often create throw-away code that is unproven and requires maintenance. There is no guarantee that the blobutil or toString or whatever created by AI won't fail on some edge cases. That's why, e.g., in Java there is Apache Commons, which is perceived as an industry standard nowadays.

[0]: https://stackoverflow.com/questions/60269936/typescript-conv...
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
This only shows how limited and/or impractical the dependency-management story is. The whole idea behind semver is that, at the public-interface level, the patch version does not matter at all and minor versions can be upped without breaking changes; therefore a release build should be safe to include only major versions referenced (or, to be on the safe side, the highest version referenced).
> It's such a common junior move. Ugh.
I can see this happening if a version is pinned at an exact patch version, which is good for reproducibility, but that's what lockfiles are for. The junior moves are to pin a package at an exact patch version and break backwards compatibility promises made with semver.
> Experienced engineers know how to pull in just what they need from lodash. But ...
IMO partial imports are an antipattern. I don't see much value in having the exact members imported listed out in the preamble; however, default syntax pollutes the global namespace, which outweighs any potential benefit you get from members listed out in the preamble. Any decent compiler should be able to shake dead code in source dependencies anyway, so there should not be any functional difference between importing specific members and importing the whole package.
I have heard the argument that partial imports let you see exactly which `sort` is used, but IMO that's moot, because you still have to perform static code analysis to check that no sorts from other imported packages are used.
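For reference, the alternatives being debated look like this (as I understand the lodash packaging; lodash-es is the separate ESM build):

    // Alternatives, not to be combined in one file:
    import _ from "lodash";             // whole package; often defeats shaking
    import { sortBy } from "lodash-es"; // named import; shakeable if the bundler cooperates
    import sortBy from "lodash/sortBy"; // per-method path; pulls in only that method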
"SEO" is not some magic, it is "compliance with ranking rules of the search engine". Google wanted to make their lives easier, implemented heuristics ranking slop higher, resulting in two things happening simultaneously: information to slop ratio decreasing AND information getting buried deeper and deeper within SRPs.
> do you have direct evidence that Google actively made search worse?
https://support.google.com/google-ads/answer/10286719?hl=en-... Google is literally rewriting the queries. Not only do results with better potential for ads outrank more organic results, it is impossible to instruct the search engine not to show you storefronts, even if you tried.
It isn't, but then everyone does it, and then everyone does it recursively, and 70kb becomes 300MB, and then it matters. Not to mention that "well constructed, well tested, well distributed" libraries are often actually overengineered and poorly maintained.
Spam by its nature is low effort and low yield anyway. They don't particularly care that they're making scraps, since their pipeline is nearly automated.
sure. https://www.wheresyoured.at/the-men-who-killed-google/
>These emails — which I encourage you to look up — tell a dramatic story about how Google’s finance and advertising teams, led by Raghavan with the blessing of CEO Sundar Pichai, actively worked to make Google worse to make the company more money. This is what I mean when I talk about the Rot Economy — the illogical, product-destroying mindset that turns the products you love into torturous, frustrating quasi-tools that require you to fight the company’s intentions to get the service you want.
Of course, it's hard to "objectively" prove that they literally made search worse, but it's clear they were fine with stagnating in order to maximize ad revenue.
I see it as the same way Tinder works if you want the mentality. There's a point where being "optimal" hurts your bottom line, so you don't desire achieving a perfect algorithm. Meanwhile, it can be so bad for Google that directly searching for a blog title at times can leave me unsuccessful.
>Less incentive to write small libraries. Less incentive to write small tutorials on your own website.
What weitendorf posted is definitely not a library, nor is there a small tutorial for the code.
>Unless you are a hacker or a spammer where your incentives have probably increased. We are entering the era of cheap spam of everything with little incentive for quality.
Considering the low effort to post and the high effort to understand what weitendorf wrote, he might be considered a spammer in this context. The code quality is also low, since his application could easily be replicated by a bunch of echo calls in a bash script, which makes me lean towards thinking he is a low-quality spammer.
>All this for the best case outcome of most people being made unemployed and rolling the dice on society reorganising to that reality.
I'm not sure you can argue that weitendorf sufficiently addressed this. He put too much emphasis on an obvious strawman (real programmer) which is completely out of context. Nobody is questioning here whether someone is a programmer or not. There is no gatekeeping whatsoever. You're free to use LLMs.
I'll also complain about your use of "salient" here, which generally has two meanings. The first is that something is "eye-catching" (making me think more of spam); the second is "relevance/importance" to a specific thing, and that's where weitendorf falls completely flat.
Now, you might counter and argue that he packaged all of his salient points inside the statement "want to lay off bread-and-butter red-blooded american programmers"; then your position is incredibly weak, because you're deflecting from one strawman to another strawman, or alternatively your counterargument will rely heavily on reinterpretation, which again just means the point wasn't salient.
Closest thing in the YT space would be Nebula, but Nebula's scope is very narrow (by design).
Because javascript isn't compiled. It's distributed as source, and that means the browser needs to actually parse all that code before it can be executed. Parsing javascript is surprisingly slow.
70kb isn't much on its own these days, but it adds up fast. Add react (200kb), a couple of copies of momentjs (with bundled timezone databases, of course; 250kb or something each) and some actual application code, and it's easy to end up with ~1mb of minified javascript. Load that onto a creaky old android phone and your website will chug.
For curiosity's sake, I just took a look at reddit in dev tools. Reddit loads 9.4mb of javascript (compressed to 3mb). Fully 25% of the CPU cycles loading reddit (in firefox on my mac) were spent in EvaluateModule.
This is one of the reasons wasm is great. Wasm modules are often bigger than JS modules, but wasm is packed in an efficient binary format. Browsers parse wasm many times faster than they parse javascript.
I'm kinda hoping that Github will provide an Anubis-equivalent for issue submissions by default.
I believe the perspective here is "I make code for fellow hackers to look into, critique, be educated on, or simply play with". If you see the hacker scene as a social one, LLMs are an awful black hole that sucks up everything around it and ruins this collaboration.
Not to mention that the hacker scene was traditionally thought to be a rejection of what we now call "Big Tech". Corporate was free to grab the code, but it didn't matter much as long as the scene was kept. Now even that invisible social contract is broken.
But I suppose if you're of a diehard FOSS mentality, "Free" means "Free": free to be used to build or destroy society at its whim; a hivemind to meld into that progresses the overall understanding of science, for science's sake.
I'll admit the last few years have had me questioning where I truly stand between these two mentalities.
>Blaming AI is blaming the wrong problem. AI is a tool, like a spreadsheet. Project owners should instead be working on ways to filter out careless code more efficiently.
When care leaves, the entire commons starts to fall apart. New talent doesn't come in. Old talent won't put up with it and retires out of the scene. They already have so much work to do; needing to add in non-development work to build better spam filters may very well be the final straw.
Even when the careless leave, it won't bring back the talent lost. Directing the blame onto the maintainers sure won't do that.
It's not the tools, it's the quality. No FOSS dev would care where the code came from if it followed the contributor's guidelines and coding style.
This is why it's a spam issue. a bunch of low quality submissions only gum up the time of such developers and slows the entire process down.
>that likely means extremely competent maintainers and contributors.
Your assumption falls apart here, sadly. Dunning-Kruger hits hard here for new contributors powered by LLMs and the maintainers suffer the brunt of the hit.
I also don't get how code can be so massively inefficient. left-pad needs 9kb to download and the code is a handful of lines: https://www.npmjs.com/package/left-pad?activeTab=code
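For comparison, a sketch of essentially the whole behaviour (give or take left-pad's edge cases), plus the built-in that has existed since ES2017:

    function leftPad(str: string, len: number, ch: string = " "): string {
      return str.length >= len ? str : ch.repeat(len - str.length) + str;
    }

    leftPad("5", 3, "0");  // "005"
    "5".padStart(3, "0");  // "005" via the String.prototype.padStart built-in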
If my unit tests pass, I don't have 'unproven' code. I have well-working code which doesn't need to go through a dependency-hell upgrade cycle just because one function in that lib, which I don't even use, has some CVE too high to be ignored.
Yes, in the case of Google:
- They make more money from ads if the organic results are not as good (especially if it's not clear which results are ads)
- They get more impressions if you don't find the answer at the first search and have to try a different query
So the time you're talking about is a window when Google existed, but before they gave up on fighting spam.
Is "reversing a binary tree" actually a thing, or is this a cute kind of "rocket surgery" phrase intentionally mixing reversing a linked list and searching a binary tree?
I can only imagine reversing a binary tree would imply changing the "<" comparison in nodes to ">", which would be a useless exercise
... Sorry, what does that have to do with tree shaking?
I fear that rather than keep pushing for a JavaScript standard library, which would encompass all these smaller functions, we'll just get more or less the same defective implementations generated by LLMs, but hidden in thousands of repos where tools won't find security issues. At least with NPM we can pull in updated versions, and NPM tells us when we're running an outdated version. Who is going to traverse your proprietary code base and let you know that the vibe-coded left-pad Claude put in three years ago is buggy?
Maintenance demands ("your library X doesn't work with Python Y, please maintain another version for me") I'd shrug off. Wait for me, pay me, or fix it yourself.
I think what’s missing is some amount of organization to make them more discoverable.
You have to manually update for any releases you care about, but that is also an incentive to keep dependency count low.
Part of the problem is that a javascript module is (or at least used to be) just a normal function body that gets executed. In javascript you can write any code you want at the global scope - including code with side effects. This makes dead code elimination in the compiler waay more complicated.
Modules need to opt in to even allowing tree shaking by adding sideEffects: false in package.json - which is something most people don't know to do.
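That is, something like this in the library's own package.json (a sketch; sideEffects can also be an array naming the specific files that do have side effects):

    {
      "name": "some-util-lib",
      "sideEffects": false
    }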
> I don't see much value in having exact members imported listed out at the preamble
The benefit of having exact members explicitly imported is that you don't need to rely on a "sufficiently advanced compiler". As you say, if it's done correctly, the result is indistinguishable anyway.
In my mind, anything that helps stop all of lodash being pulled in unnecessarily is a win in my books. A lot of javascript projects need all the help they can get.
Why not just disallow issues without a vetting process?
Many of these things could be explored -- you're right: it's a spam issue. But we have solutions to spam issues ... filters. LLMs have shown that "praying for the best" with permissive repository settings is not sufficient. We can and will improve our filters, no?
It doesn't have to be this way: coupled with verification (to mitigate hallucination), LLMs can help so much with personal education.
It's a purely judgement free environment and for a lot of people that's incredibly meaningful.
If we replace code written by 20 of those organizations with code written by ChatGPT, we've gone from 20 code "vendors" we don't know much about who have no formal agreement to speak of with us, to 1 tools vendor which we can even make an enterprise agreement with and all that jazz.
So whatever else the outcome may be, from this perspective it reduces uncertainty to quit using random npm packages and generate your utility classes with ChatGPT. I think this is the conclusion many businesses may reach.
Node.js is very good for IO and it has decent performance even for CPU-intensive work considering it's a dynamic language, but it would sure be nice to have a rich core library like Ruby or Clojure has.
The fact that ClojureScript can do it proves that it's doable even for front-end javascript (using advanced optimisations).
The term I like is that AI has _industrialised_ those behaviours. Natives hunted buffalo for a long time without it being destructive; it wasn't until the hunting was industrialised [1] that it became truly destructive.
lol, behavior like this is way more destructive to personal relationships than AI ever will be.
If we somehow paid directly for search, then Google's incentives would be to make search good so that we'd be happy customers and come back again, rather than find devious ways to show us more ads.
Most people put up with the current search experience because they'd rather have "free" than "good" and we see this attitude in all sorts of other markets as well, where we pay for cheap products that fail over and over rather than paying once (but more) for something good, or we trade our personal information and privacy for a discount.
I think most would agree with this, but the way things work today doesn't support it. As of now, AI gains are privatized while the losses are socialized. Until that one-sided imbalance is addressed, LLMs' "use" of open source is unbounded and nonreciprocal.
Attribution is a big part of the human experience. Your response frames it as ego driven, but it's also what motivates people to maintain code that is not usually compensated, it's also what builds reputation, trust, communities, and even careers.
Until that’s figured out, we can still share, but maybe in ways that are closer to one another, or under distribution models that reflect the reality we’re in rather than the one we used to have.
I assume GP's point was that assembly-language literacy is a pointless skill nowadays. I found it quite useful, precisely because it's no longer a ubiquitous skill, so you can shine with your expertise in some situations.
okay, you can keep thinking that. I'll just reject anything that has a whiff of AI and lacks care. No point campaigning in this admin to regulate anything, so that's off the table for 1-3 years.
That flag has always been a non-standard mostly-just-Webpack-specific thing. It's still useful to include in package.json for now, because Webpack still has a huge footprint.
It shouldn't be an opt-in that anything written and published purely as ESM needs; it was a hack to paper over problems with CommonJS. That's one of the reasons to be excited about dropping CommonJS support everywhere: we are finally getting to the other side of the long and ugly transition, into a much more ESM-native JS world.
You can definitely argue we hit that point a long time ago, but this will exacerbate it.
I’ve been trying to encourage forking my libraries, or have people just copy them into their codebase and adapt them, e.g. https://github.com/mastrojs/mastro/ (especially the “extension” libs.) But it’s an uphill battle against the culture of convenience over understanding.