Now that both have dried up I hope we can close the vault door on js and have people learn how to code again.
I don't see how the conclusion follows from this.
There will be many LLM-generated functions purporting to do the same thing, and when a bug in one of them gets fixed, only that one project is fixed - instead of every project that uses the NPM package as a dependency getting the fix.
I've seen so many tiny packages pull in all of lodash for some little utility method. 400 bytes of source code becomes 70kb in an instant, all because someone doesn't know how to filter items in an array. And I've seen plenty of projects that somehow include multiple copies of lodash in their dependency tree.
It's such a common junior move. Ugh.
Experienced engineers know how to pull in just what they need from lodash. But ... most experienced engineers I know & work with don't bother with it. Javascript includes almost everything you need these days anyway. And when it doesn't, the kind of helper functions lodash provides are usually about 4 lines of code to write yourself. Much better to do that manually rather than pull in some 70kb dependency.
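To make that concrete, here's a hand-rolled groupBy, the kind of helper people usually reach for lodash for. This is a rough sketch of my own, not lodash's actual implementation:

    // Group array items by the key computed for each item.
    function groupBy(items, keyFn) {
      const groups = {};
      for (const item of items) {
        const key = keyFn(item);
        (groups[key] ??= []).push(item);
      }
      return groups;
    }

    groupBy([6.1, 4.2, 6.3], Math.floor); // { 4: [4.2], 6: [6.1, 6.3] }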
Yes, use of LLMs is still developers not writing original code, but it's still an improvement (a minor one) over the copy/paste of micro-dependencies.
Ultimately developers aren't going to figure out writing original code until they are forced to do so by changed conditions at their employer.
I've been playing a lot recently with various models, lately with the expensive claude models (via the API, for the large context windows), and every attempt is really impressive at the beginning, then starts going south once the codebase reaches about 10k to 15k lines of code. Even with tools split out into a separate library with its own documentation, at that point it has a tendency to generate tool functions afresh in the module it's currently working on instead of using the ones already defined in the helper library.
Take a generic function that recursively converts snake_case object keys to camelCase. That's about 10 lines of Javascript; you can write it in 2 mins if you're a competent dev. Figuring out the types for it can be done [0], but you really need a lot of ts expertise to pull it off.
[0]: https://stackoverflow.com/questions/60269936/typescript-conv...
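A plain-JS sketch of the kind of function the parent describes (untyped - the typed version is the hard part per [0] - and the name is mine):

    // Recursively convert snake_case keys to camelCase in plain objects/arrays.
    function camelizeKeys(value) {
      if (Array.isArray(value)) return value.map(camelizeKeys);
      if (value === null || typeof value !== 'object') return value;
      return Object.fromEntries(
        Object.entries(value).map(([k, v]) => [
          k.replace(/_([a-z])/g, (_, c) => c.toUpperCase()),
          camelizeKeys(v),
        ])
      );
    }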
Not wanting to use well constructed, well tested, well distributed libraries to make code simpler and more robust is not motivated by any practical engineering concern. It's just nostalgia and fetishism.
This only shows how limited and/or impractical the dependency-management story is. The whole idea behind semver is that, at the public-interface level, the patch version does not matter at all and minor versions can be bumped without breaking changes; therefore a release build should be safe including only the major versions referenced (or, to be on the safe side, the highest version referenced).
> It's such a common junior move. Ugh.
I can see this happening if a version is pinned at an exact patch version, which is good for reproducibility, but that's what lockfiles are for. The junior moves are pinning a package at an exact patch version, and breaking the backwards-compatibility promises made with semver.
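For illustration, the two styles side by side in package.json (version numbers are just examples). The caret range accepts compatible minor/patch updates while the lockfile records the exact version actually installed; the bare version is an exact pin that forecloses even security patches:

    {
      "dependencies": {
        "lodash": "^4.17.21",
        "left-pad": "1.3.0"
      }
    }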
> Experienced engineers know how to pull in just what they need from lodash. But ...
IMO partial imports are an antipattern. I don't see much value in having the exact imported members listed out in the preamble, and that syntax pollutes the module's namespace with all those names, which outweighs any potential benefit of the preamble listing. Any decent compiler should be able to shake dead code out of source dependencies anyway, so there should be no functional difference between importing specific members and importing the whole package.
I have heard the argument that partial imports let you see exactly which `sort` is used, but IMO that's moot, because you still have to perform static code analysis to check that no sorts from other imported packages are in play.
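Concretely, the styles being compared, with lodash as the example (lodash-es is its ESM build; these are alternatives, not meant for the same file):

    // Partial (named) import - members listed in the preamble:
    import { sortBy } from 'lodash-es';

    // Whole-package import - one binding, everything namespaced behind it:
    import _ from 'lodash';
    _.sortBy(users, 'age');

    // Per-method import - no tree shaking needed at all:
    import sortBy from 'lodash/sortBy';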
It isn't, but then everyone does it, and everyone does it recursively, and 70kb becomes 300MB, and then it matters. Not to mention that "well constructed, well tested, well distributed" libraries are often actually overengineered and poorly maintained.
Because javascript isn't compiled. It's distributed as source, which means the browser needs to actually parse all that code before it can be executed. Parsing javascript is surprisingly slow.
70kb isn't much on its own these days, but it adds up fast. Add react (200kb), a couple of copies of momentjs (with bundled timezone databases, of course) (250kb or so each) and some actual application code, and it's easy to end up with ~1mb of minified javascript. Load that on a creaky old android phone and your website will chug.
For curiosity's sake, I just took a look at reddit in dev tools. Reddit loads 9.4mb of javascript (compressed to 3mb). Fully 25% of the CPU cycles loading reddit (in firefox on my mac) were spent in EvaluateModule.
This is one of the reasons wasm is great. Wasm modules are often bigger than JS modules, but wasm is packed in an efficient binary format. Browsers parse wasm many times faster than they parse javascript.
... Sorry, what does that have to do with tree shaking?
My fear is that rather than keep pushing for a Javascript standard library that would encompass all these smaller functions, we'll now just get more or less the same defective implementations generated by LLMs, but hidden in thousands of repos where tools won't find the security issues. At least with NPM we can pull in updated versions when NPM tells us we're running an outdated one. Who is going to traverse your proprietary code base and let you know that the vibe-coded left-pad Claude put in three years ago is buggy?
Part of the problem is that a javascript module is (or at least used to be) just a normal function body that gets executed. In javascript you can write any code you want at the global scope - including code with side effects. This makes dead code elimination in the compiler way more complicated.
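For example, a module like this can't be safely dropped even if nothing imports from it (module name and globals are hypothetical):

    // analytics.js - top-level statements run as soon as the module loads.
    console.log('analytics loaded');     // side effect at import time
    globalThis.tracker = { events: [] }; // mutates global state

    export function track(event) {
      globalThis.tracker.events.push(event);
    }
    // Even if `track` is never imported anywhere, removing this module
    // would silently change behaviour - so bundlers keep it by default.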
Modules need to opt in to even allowing tree shaking by adding `"sideEffects": false` in package.json, which is something most people don't know to do.
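That is, something like this in the package's package.json (package name hypothetical; an array of files that do have side effects can be whitelisted instead of `false`):

    {
      "name": "my-utils",
      "sideEffects": false
    }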
> I don't see much value in having the exact imported members listed out in the preamble
The benefit to having exact members explicitly imported is that you don't need to rely on a "sufficiently advanced compiler". As you say, if it's done correctly, the result is indistinguishable anyway.
Anything that helps stop all of lodash being pulled in unnecessarily is a win in my book. A lot of javascript projects need all the help they can get.
That flag has always been a non-standard, mostly-just-Webpack-specific thing. It's still useful to include in package.json for now, because Webpack still has a huge footprint.
It shouldn't be an opt-in that anything written and published purely as ESM needs; it was a hack to paper over problems with CommonJS. That's one of the reasons to be excited about dropping CommonJS support everywhere: we're now mostly on the other side of that long and ugly transition, getting to a much more ESM-native JS world.