I also think there is an element of, "rewrite in rust" is just easy to say, where changing data structures or whatever requires analysis of the problem at hand.
Yes, the language can bring a nice speed up, or might give you better control of allocations which can save a lot of time. But in many cases, simply picking the correct algorithm will deliver you most of the performance.
As someone who doesn’t JavaScript a lot, I’d definitely prefer a tool written in go and available on brew over something I need to invoke node and its environment for.
It's true, it has some really bad parts but you can avoid them.
If I could design the perfect language for myself, it would have the syntax of JavaScript and the portability of JavaScript but it would use Python's strong duck typing approach.
It's not that hard to do the same for a less terrible language. Choose something markedly different, i.e. a low level language like rust, and you will learn a lot in the process. More so because now you can see and understand the programming world from two different vantage points. Plus, it never hurts to understand what's going on on a lower level, without an interpreter and ecosystem abstracting things away so much. This can then feed back into your skills and understanding of JS.
Rewriting in more performant languages spares you from the pain of optimization. These tools written in Rust are somehow 100× as fast despite not being optimized at all.
JavaScript is so slow that you have to optimize stuff, with Rust (and other performant languages) you don't even need to because performance just doesn't bubble up as a problem at all, letting you focus on building the actual tool.
- Easier concurrency.
- The fact that things are actually getting rewritten with the purpose of speeding them up.
- A lot of the JS tooling getting speedups deals heavily with string parsing, tokenizing, and generating and manipulating ASTs. Being able to have shared references to slices of strings, carefully manage when strings are copied, and have strict typing of the AST nodes enables things to be much faster than in JavaScript.
Webpack has an enormous community of third-party plugins, it would be very hard to do something similar with e.g. Go or Zig.
If you write non-portable code, there might be an important reason (like writing OS components, which you won't do in JS).
He’s making contributions in Rust already. His opinion isn’t invalid just because he has a bias, he opens by acknowledging his bias.
The JStockholm syndrome.
For example, if you install psycopg you'll get a pure Python implementation which is easy to debug and hack. But you can also install psycopg[binary] to obtain a faster, compiled version of the library. https://www.psycopg.org/psycopg3/docs/basic/install.html
I know what you’re referring to but these problems have also taught me a lot about language performance. python and JS array access is just 100x slower than C. Some difficult problems become much harder due to this limitation.
Also, this was a thing before Rust. I've rewritten several things in C or C++ for Python back ends, and most Python performance-critical code is already an API to a shared library. You'd be surprised to run OR tools and find Fortran libraries loaded by your Python code.
This raises the question, is JavaScript more prone to premature optimization?
(but of course, the vast majority of the code, even in widely used tools, isn't properly designed for optimization in the first place)
I only dabble in javascript, but `tsc` is abominable.
The world is full of slow software because one chose the wrong algorithm: https://randomascii.wordpress.com/2019/04/21/on2-in-createpr... https://randomascii.wordpress.com/2019/12/08/on2-again-now-i... https://randomascii.wordpress.com/2021/02/16/arranging-invis... ...
For analogies, look no further than ASM in the early days and the motivations that brought us C, but with the lessons learned as well.
Rust is fine for this, except for interoperability.
Is it though? Rust/Zig/Go programs are pretty much all incredibly easy to checkout and compile, it's one of the big selling points of those languages. And at the end of the day how often are javascript developers fixing the tooling they use even when it's written in javascript?
I've always felt learning new languages give me not only new tools to use but shapes the way I think about solving problems.
I don't care if you don't know how to write a merge sort from scratch. I do care about you knowing not to write an O(n^2) loop when it can be avoided.
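To make that concrete, here's a minimal sketch (hypothetical data, any JS runtime) of the accidentally-quadratic pattern in question and its usual fix:

```javascript
// O(n*m): Array.prototype.includes() re-scans haystack for every needle.
function intersectNaive(needles, haystack) {
  return needles.filter((n) => haystack.includes(n));
}

// O(n + m): build a Set once, then each lookup is O(1) on average.
function intersectFast(needles, haystack) {
  const seen = new Set(haystack);
  return needles.filter((n) => seen.has(n));
}
```

Both return the same result; only the second one survives a million-element input.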
I lost you here. JavaScript doesn't work around type issues, no language really can. It just pushes the type issues to a later time.
It gets really old to get something like "NoneType does not have blah" in a deeply nested, complicated data structure in python, but obviously only at runtime and only in that hard to hit corner case, when all you did is forget to wrap something in the right number of square brackets in some other part of the code.
I haven't fully given up on python, but I only deal with it using mypy, which adds static typing, anymore.
https://stackoverflow.com/questions/65000209/how-to-call-rus...
- you’re less likely to hear about a failed rewrite
- rewrites often gain from having a much better understanding of the problem/requirements than the existing solution which was likely developed more incrementally
- if you know you will care about performance a lot, you hopefully will think about how to architect things in a way that is capable of achieving good performance. (Non-cpu example: if you are gluing streams of data with processing steps together, you may not think much about buffering; if you know you will care about throughput, you will probably have to think about batching and maybe also some kind of fan-out->map->fan-in; if you know you will care about latency you will probably think about each extra hop or batch-building step)
- hopefully people do a bit of napkin math to decide if rewriting something to be faster will achieve the goals, and so you only see the rewrites that people thought would be beneficial (because eg you’re touching a lot of memory so a better memory layout could help)
I feel like you’re much more likely to see ‘we found some JavaScript that was too useful for its own good, figured out how to rewrite it with better algorithms/data structures, concurrency, and SIMD instructions, which we used Rust to get’ than ‘our service receives one request, sends 10 requests to 5 different services, collects the results and responds; we rewrote it in Rust but the performance is the same because it turns out most of what our service did was waiting’.
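The non-CPU example above (batching plus fan-out/fan-in) can be sketched in a few lines. This is a hypothetical helper, not anything from the article:

```javascript
// Fan-out -> map -> fan-in: start `batchSize` requests at once and
// collect them together, instead of awaiting one item at a time.
async function processAll(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // fan-out the whole batch, then fan the results back in
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}
```

If the workers mostly wait on the network, this restructuring buys far more than a language change would.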
Like, I can't imagine most people using Javascript would want to rewrite in Rust without some decent reason.
For example, I'd expect that Rust (or rustc I guess) can auto-vectorize more than Node/Deno/etc.
Wouldn't it be amazing though? Maybe some combination of JIT and runtime static analysis could do it.
Personally, I never assign different types to the same variable unless it's part of a union (e.g. string | HTMLObject | null, in JS).
It would probably require getting rid of `eval` though, which I am fine with. On average, eval() tends to be naughty, and those needs could be better met in other ways than blindly executing a string.
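For what it's worth, the usual replacement for string-built dispatch is a plain lookup table; a hypothetical sketch:

```javascript
// Instead of eval("handle_" + name + "(arg)"), look the function up by key.
const handlers = {
  greet: (who) => `hello, ${who}`,
  shout: (who) => `HELLO, ${who.toUpperCase()}`,
};

function dispatch(name, arg) {
  // Guard against inherited keys like "toString" sneaking in as handlers.
  if (!Object.hasOwn(handlers, name)) {
    throw new Error(`unknown handler: ${name}`);
  }
  return handlers[name](arg);
}
```

Same flexibility, no string execution, and static analysis can still see every handler.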
Yes, I agree that is very sad
Python is achingly slow. I know the Python people want to address this, which I do not understand. Python makes sense as a scripting/job control language, where execution speed does not matter.
As an application development language it is diabolical. For a lot of reasons, not just speed
Yeah, JavaScript is sloppy, but you can always monkey-patch it by modifying tool-controlled files. Great idea. Not.
JS is just not a good language. The JIT and the web of packages made it slightly more usable, but it's still Not Good. There's no real way to do real parallel processing, async/await are hellish to debug, etc.
It's unavoidable in browsers, but we _can_ avoid using it for tools. Look at Python, a native PIP replacement improved build times for HomeAssistant by an order of magnitude: https://developers.home-assistant.io/blog/2024/04/03/build-i...
The webpack ecosystem on the other hand is its own OS.
My hope is one of the Next Big Things in programming languages is the widespread adoption of incremental typing systems.
So during the early stages of dev you get the productivity benefits of dynamic and loose/duck typing as much as you want, and then as the code matures - as the design firms up - you begin layering in the type information on different parts of the program (and hopefully the toolset gives you a jump start by suggesting a lot of this type info for you, or maybe you specify it only in places where the type info can't be deduced).
Then those parts of the program (and hopefully eventually the entire program) are strongly and statically typed, and you get all of the associated goodies.
So yeah, Python is not great for systems programming
I have been dragged, through straight misrepresentation, into the Node.js world.
OMG, awful hardly begins to touch it.
I have not used Go, but as far as I can tell everything the Node.js people do is done better in Go.
I do not recommend Rust. I have a lot of experience with Rust, and unless you actually need the real time responsiveness it will bog you down.
Some problems are much more complicated, where you have to take, for example, locality (cache hierarchy etc.) and concurrency considerations like lock contention into account. This may affect your choice of algorithm, but by the time you reach that, you've almost certainly thought about the algorithm a lot already.
Like a JS/TS that can have compiled blocks specified in the same language, preferably inline? I'm reaching here.
I didn't know about this before, I wonder how much overhead?
The reason I am reluctant to rely on JS tools for anything CLI is Node.js instability: version sensitivity, and impossible-to-fix-without-reinstalling-the-OS low-level libc errors.
Compared to Go, Rust, or Python, the odds that any given CLI.js program will run across my (small) fleet of machines are very low, by a factor of 10x or more compared to the alternatives. Some boxes I don't want to reinstall from scratch every 4 years; they're not public facing and life is too short.
For instance, take function definitions. By just adding types to a function's arguments, you're potentially saving the reader a ton of time and mental overhead, since they don't have to chase down the chain of function calls to figure out what exactly it is (or is supposed to be) that's getting passed in.
Typed arrays help a lot, but I’m still doubtful. Maybe if all the processing is restricted to idioms in the asm.js subset? And even then you’re getting bounds checking.
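A minimal illustration of why typed arrays help (hypothetical data, plain Node):

```javascript
// A Float64Array is a flat buffer of doubles: no holes, no boxed values,
// no per-element type dispatch beyond the bounds check.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

const typed = new Float64Array([1.5, 2.5, 3.0]); // guaranteed homogeneous
const mixed = [1.5, "2.5", null];                // forces generic slow paths
```

The same `sum` over `mixed` falls back to string concatenation and generic dispatch, which is exactly the kind of deopt that the asm.js subset was designed to rule out.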
In my work, it’s hard to justify using something other than JS/TS — incredible type system, fast, unified code base for server/mobile/web/desktop, world’s biggest package ecosystem for anything you need, biggest hiring pool from being the best known language, etc.
It’s just such a joy to work with, ime. Full-stack JS has been such a superpower for me, especially on smaller teams.
The dissonance between how the silent majority feels about JS (see, e.g., the SO yearly survey) and the animus it receives on platforms like HN is sad. So here’s my attempt at bringing a little positivity and appreciation to the comments haha.
Let me cross-compile a C++ project any day ...
I’ve heard several folks say that about Kubernetes, but in my experience the *nix core always resurfaces the second things get weird.
javascript doesn’t have a compiler is my main point.
NPM has done a pretty great job of showing everyone else what to avoid doing.
The mere mention of “web pack” sends most of the FE devs I’ve met into borderline trauma flashbacks.
There’s seemingly half a dozen package management tools, some of which also seem to be compilers? There’s also bundlers, but again some of these seem integrated. Half of the frameworks seem to ship their own tools?
So?
Some tool got written and did its job sufficiently well that it became a bottleneck worth optimizing.
That's a win.
"Finishing the task" is, by far, the most difficult thing in programming. And the two biggest contributors to that are 1) simplicity of programming language and 2) convenience of ecosystem.
Python and Javascript are so popular because they tick both boxes.
E.g. C++ vs Node.js here: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Couldn't find C vs JS easily with the new benchmarksgame UI.
> mpzjs. This library wraps around libgmp's integer functions to perform infinite-precision arithmetic
And then the “array”:
> Buffer.allocUnsafe
So is this a good JavaScript benchmark?
Though I don't see an issue with tools for JS built without JS. It's just that I don't think it's a bad thing for a JavaScript dev to want the ecosystem around JavaScript to be written in JS. JS is orders of magnitude faster than Python in any case.
Or perhaps another way to look at it, if you care enough about performance to choose a particular algorithm, you shouldn't be using a slow language in the first place unless you're forced to due to functional requirements.
Buffer.allocUnsafe just allocates the memory without zero-initializing it, just like e.g. malloc does. Probably usually not worth it, but in a benchmark it's comparable to malloc vs calloc.
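A small sketch of the difference, using Node's Buffer API:

```javascript
// Buffer.alloc zero-fills (like calloc); allocUnsafe skips the fill (like malloc).
const zeroed = Buffer.alloc(4);     // guaranteed <00 00 00 00>
const raw = Buffer.allocUnsafe(4);  // may contain stale memory from the pool
raw.fill(0);                        // caller must initialize before reading
```

In a benchmark where the buffer is fully overwritten anyway, skipping the zero-fill is free performance; in application code it's a footgun unless every byte gets written.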
Try:
A) Find JS in the box plot charts
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
or
B) Find JS in the detail
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The n-body looks most like canonical JS to me. It’s a small array, but it’s accessed many times.
Unfortunately the c++ version is simd optimized, so I don’t think that’s a fair comparison.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
You wouldn't conflate Windows development with "C" (and completely discount UNIX along the way) just because of Win32. But that's about how bonkers it is when it comes to JS and people do the same with its relationship to Node—not only was JS not created to serve the Node ecosystem, the prescriptions that NPM and Node programmers insist on often cut against the grain of the language. And that's just when we're focused on convention and haven't even gotten to the outright incompatibilities between Node and the language standard (or Node's proprietary APIs).
node_modules, for example? That has fuck-all to do with ECMA262/JS. Tailwind, Rollup, Prettier, all this other stuff—even the misleadingly named ESLint? Same. You're having a terrible experience because you're interacting with terrible software. It doesn't matter that it's written in JS (or quasi-JS). Rewrite these implementations all in other languages, and the terrible experience will remain.
Besides, anyone who's staking out the position that a language can be slow, and that JS is one of them, is wrong in two ways, and you don't have to listen to or legitimize them.
Take a look at Rollup, Vite, etc. These tools are essentially replacing webpack, which is written in JS. Modern Rollup (^4) uses SWC (a Rust-based compiler), and Vite is currently using a mix of esbuild (Go) and Rollup. I think they're switching to SWC in v6 though.
The point here is that for certain operations JS is not nearly as fast as lower-level languages like the aforementioned. Bundling is one of those performance-critical areas where every second counts.
That said, as a TypeScript developer I agree with the sentiment that JS tools should be written in JS, but this isn't a hard and fast rule. Sometimes performance matters more. I think the reasonable approach is to prefer JS – or TS, same difference – for writing JS tools. If that doesn't work, reach for something with more performance like Rust, Go, or C++. So far I've only had to do the latter for 2 use cases, one of which is hardware profiling.
From my point of view, I'm happy if I can convince my juniors to learn a scripting language. Okay? I don't care which one--any one. I'd prefer that they learn one of the portable ones but even PowerShell is fine.
I have seen sooooo many junior folks struggle for days to do something that is 10 lines in any scripting language.
Those folks who program but don't know a scripting language far outnumber the rest of us.
I miss that brief era when coding culture had a moment of trying to be nice, of not crudely shooting our mouths off at each other's stuff.
JS, particularly with TypeScript, is a pretty fine language. There's a lot of bad developers and many bad organizations not doing their part to enable & tend to their codebases, but any very popular language will likely have that problem & it's not the language's fault.
It's a weakness & a strength that JS is so flexible, can be so many different things to different people. Even though the language is so much the same as it was a decade & even two ago, how we use it has gone through multiple cycles of diversification & consolidation. Like perl, it is a post-modern language; adaptable & changing, not prescriptive. http://www.wall.org/~larry/pm.html
If you do have negative words to say, at least have the courage & ownership to say something distinct & specific, with some arguments about what it is you are feeling.
It then literally had decades of ECMAScript committee effort to shape it into something more usable.
I could repeat the numerous criticisms, but there are enough funny videos about it that do a much better job of pointing out its shortcomings and, sometimes, its downright craziness.
> but any very popular language will likely have that problem & it's not the languages fault.
No, sorry, just no. I get where you are coming from, but in the case of JavaScript, its history and idiosyncrasies alone set it apart from many (most?) other languages.
Perl for example was made with love and with purpose, I don’t think it’s comparable.
Hillel Wayne posted about this recently:
https://www.linkedin.com/posts/hillel-wayne_pet-peeve-people...
Go programs start at 20MB. The Go AWS Terraform provider is something like 300MB.
A massive amount of the complexity/difficulty in the webdev build-tools space has to do with optimizing delivery sizes on the web platform.
Node.js tooling is straightforward comparatively.
Brendan Eich himself calls JS a “rush job” with many warts, though, having had to add aspects that in retrospect he wouldn’t have. This snippet from your link is consistent with that:
Also, most of JavaScript's modern flaws do *not* come from the prototyping phase. The prototype didn't have implicit type conversion (`"1" == 1`), which was added due to user feedback. And it didn't have `null`, which was added to 1.0 for better Java Interop.
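That feedback-driven coercion lives on in `==`, which is why the strict operators were added later:

```javascript
// The coercing comparison early user feedback asked for:
console.log("1" == 1);           // true  -- string coerced to number
console.log(null == undefined);  // true

// The strict operators avoid coercion entirely:
console.log("1" === 1);          // false
console.log(null === undefined); // false
```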
Like many people, I find JS super frustrating to use.
i write plaintext at uris, progressively enhance that to hypertext using a port with a deno service, a runtime that unifies browser js with non browser js.
that hypertext can optionally load javascript and at no point was a compiler required aside from the versioned browser i can ask my customers to inspect or a version of deno we get on freebsd using pkg install.
node is not javascript would be my biggest point if i had to conclude why i responded.
microsoft failed at killing the web with internet explorer and only switched to google’s engine after securing node’s package manager overtly through github and covertly through typescript.
microsoft is not javascript is my final point after circling back to my original point of microsoft also being one of the aforementioned reasons c-compilers are politically fought over instead of things that just work.
The type system was weakened after the 10 day prototyping phase when he was pressured by user feedback to allow implicit conversions for comparisons between numbers and serialized values from a database. So it wasn't because he was rushing, it was because he caved to some early user feedback.
And with TypeScript or linting, many of the strange comparison/conversion issues go away.
I struggle to find any substantial arguments against the js language, in spite of a lot of strong & vocal disdainful attitudes against it.
What exactly makes JavaScript so unsuitable?
Minor speedbumps like installing Rust don't stop me now, and probably don't stop you either, but they might have at the start of my career. You have to think about the marginal developers here: how many people are able to debug the simple thing who would be unable or unwilling to do it for the complicated thing? As you note, it's already quite rare to fix up one's tooling, so we can't afford to lose too many potential contributors.
I like learning new languages too, but not to the extent that I'd choose to debug my toolchain in Zig while under time pressure. This is something I've actually done before, most notably for FontCustom, which was a lovably janky Ruby tool for generating font icons popular about a decade ago.
I made the difficult choice to rewrite it in English again, even though French might have been more performant.
Pretty much all the usual, boring offenders everyone's familiar with: truthy/falsy, errors passing silently, exceptions, and differences in importing behaviour between bundlers and runtimes. These things are admittedly quite simple to fix when it's your code, but when you multiply that by 1000 dependencies, which is a conservative number for a JS project, a whole host of difficult-to-detect issues will rear their heads over time.
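A hypothetical example of the truthy/falsy class of bug mentioned above:

```javascript
// JS has exactly these falsy values; everything else is truthy.
const falsy = [false, 0, -0, 0n, "", null, undefined, NaN];

// Classic silent bug: a legitimate 0 takes the "missing" branch.
function describe(count) {
  if (!count) return "no items"; // also fires for count === 0
  return `${count} items`;
}
```

The fix is an explicit `count === undefined` (or `count == null`) check, but nothing in the language flags the original as suspicious.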
> If you use TypeScript then IME many of the costs are mitigated.
TS meaningfully helps, but it still falls short of the mark imho. Turning on 99% of TS lints to error is the only solid way I've found to prevent a lot of the issues I've encountered. But that's really hard to introduce into existing codebases. It's doable, but with a lot of friction and effort.
Other things worth mentioning are the unusual scoping (by default at least), prototypes, “undefined”, and its role versus "null"... the list goes on.
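For anyone who left before `let`/`const` arrived, a quick sketch of the scoping and `undefined`-vs-`null` points:

```javascript
// `var` is function-scoped and hoisted; `let` is block-scoped.
function scoping() {
  if (true) {
    var a = 1; // visible throughout the function
    let b = 2; // visible only inside this block
  }
  return `${typeof a},${typeof b}`; // "number,undefined"
}

// `undefined` means "never set"; `null` means "deliberately empty".
const obj = { x: null };
// obj.x === null, obj.y === undefined
```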
I give TypeScript a lot of credit for cleaning up at least some of that mess, maybe more. But TypeScript is effectively another language on top of JS, not everyone in the ecosystem has the luxury of only dealing with it, and across all layers and components.
Is my knowledge about JavaScript outdated and obsolete? Certainly. Is the above stuff deprecated and turned off by default now? Probably. I left web development more than 10 years ago and never looked back. I’m a bit of a programming language geek, so I’ve used quite a few languages productively, and looked at many more. But not many serious programming languages have left quite the impression that JavaScript and PHP have.
In the meantime, I have always remembered that one conversation I had with someone who was an ECMAscript committee member at that time: They were working really hard to shape this language into something that makes sense and compiles well. Maybe against its will.
EDIT: Dear god, I completely forgot about JavaScript’s non-integerness, and its choice of using IEEE 754 as its basic Number type. Is that still a thing?
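It is still a thing: every Number is an IEEE 754 double, though BigInt (ES2020) later added true integers:

```javascript
console.log(0.1 + 0.2);               // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);       // false
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 (2^53 - 1)

// Past 2^53, adjacent integers collapse onto the same double:
console.log(9007199254740993 === 9007199254740992); // true

// BigInt restores exact integer arithmetic:
console.log(9007199254740993n === 9007199254740992n); // false
```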
This is one of the biggest falsehoods in software engineering that I know of.
Language is collaboration glue and influences the way of thinking that guides solution development. As an analogy: you can make a statue from glass or from ice, and while both can be of the same shape and be equally awed upon, the process and qualities will differ.
For prototypes and throwaways the context doesn’t matter. That’s why all short-lived contests, golfs and puzzles ignore it. Yet when software is to be developed not over a week but over decades, and (hopefully) delivered to thousands if not millions of computers, it’s the technological context (language, architecture, etc.) that matters the most.
Let me rephrase that.
I do, but only in very, very rare circumstances: basically only when a) you know that the typical use case is going to involve large n, like millions to billions, b) the loop body takes a long time per invocation, or c) you have profiled a performance issue and found that improving it would help.
If you're working with sets of 10 items, just write the damn nested loop and move on. Code jockeying is unlikely to be faster, and even if it is, it doesn't help enough to matter anyway.
Computer science theory likes to ignore constants. Big-O notation does that explicitly. But in the real world, it's usually the constants that kill you. Constants, and time to market.
Neither is easier than the other. Whichever one you already know will be easier for you, and that’s it.
That sounds very exhausting; at a fraction of that effort you could build on better foundations.
> Those folks who program but don't know a scripting language far outnumber the rest of us.
What domain are you in? This sounds like the complete inverse of every company I've ever worked at.
Entire products are built on Python, Node etc., and the time after the initial honeymoon phase (if it exists) is spent retrofitting types on top in order to get a handle, any handle, on the complexity that arises without static analysis and compile-time errors.
At around the same time, services start OOM'ing left and right, parallelism=1 becomes a giant bottleneck, the JIT fails in one path bringing the service performance down an order of magnitude every now and then, etc...
> Congratulations on being a programming god. This discussion isn't for you.
On the behalf of mediocre developers everywhere, a lot of us prefer statically typed languages because we are mediocre; I cannot hold thousands of implicit types and heuristics in my head at the same time. Luckily, the type system can.
If you look at JavaScript's history (especially for backend development), it reads like a series of accidents: First, the JS language was hacked together at Netscape in the space of a few months in 1995, and after that it was quickly baked in into all web browsers and therefore became very hard to change in a meaningful way. Then, Google developed the V8 engine for Chrome, and someone thought it would be a great idea to use it for running JS in the backend. My thoughts on that were always: "just because you can do something doesn't mean that you should"...
If you are working with a hardcoded 10 items, and you are certain that won't change significantly, sure.
If not I strongly disagree, because I've seen way too often such cases blow up due to circumstances changing.
Now, if it is very difficult to avoid a nested loop then we can discuss.
But it can simply be due to being unaware that some indexed library call is in fact O(n) or something like that, and avoiding it by using a dictionary or some other approach is not hard.
While constants matter to some degree, the point of big-O is that they don't so much if you get handed two orders of magnitude more data than you expected.
I'll gladly sacrifice a tiny bit of performance for code that doesn't suddenly result in the user not being able to use the application.
>"JavaScript is, in my opinion, a working-class language. It’s very forgiving of types (this is one reason I’m not a huge TypeScript fan)."
Being "forgiving of types" is not a good thing. There's a reason most "type-less" languages have added type hints and the like (Python, Typescript, etc) and it's because the job of a programming language is to make it easier for me to tell the CPU what to do. Not having types is detrimental to that.
I would like to clarify that even without typing python is a LOT less "forgiving of types" than javascript. It has none of the "One plus object is NaN" shenanigans you run into with javascript.
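For the record, here is what JS actually does with those shenanigans (addition isn't even the NaN case):

```javascript
// `+` prefers string concatenation once an operand coerces to a string:
console.log(1 + {});   // "1[object Object]"
console.log(1 + []);   // "1"  ([] coerces to "")

// `-` has no string meaning, so coercion to number yields NaN:
console.log(1 - {});   // NaN

// Python, by contrast, raises TypeError for 1 + {} instead of guessing.
```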
Or you could use the source code already downloaded by a package manager and do similar tweaks locally with the build manager picking them up and compiling for you
I have absolutely no interest in getting into a pissing match about whose language and ecosystem is better, and I in fact agree that the Rust tooling is less complicated than JS to start with. Nevertheless, the article is not about choosing either JS or Rust, it's about rewriting tools for working with JS in Rust, which necessarily makes you learn Rust on top of JS if you want to modify them.
I've worked on multiple rewrites of existing systems in both JS and PHP to Go and those projects were usually re-written strictly 1:1 (bugs becoming features and all that). It was pretty typical to see an 8-10x performance improvement by just switching language.
Too bad JS is not the best candidate for many optimizations.
I wonder if we'll get to the point of having a compiled version of JS that allows more static optimizations to be done.
WebAssembly might occupy that niche if it gets a nice standardized runtime.
On top of that some languages don't have support for SIMD/NEON and parallel libraries or GPU processing libraries - those things can significantly improve performance
For a smallish batch processing script I had written in node, I just fed it to chatgpt and got the golang version. It went from being unusable with over 100K records to handling 1M on exactly the same machine.
And only then I started adding things like channels, parallelism, and smart things.
I wonder if the author would feel differently if they spent more time writing in more languages on tooling like this. My life got a lot easier when I stopped trying to write TypeScript everywhere and leveraged other languages for their strengths where it made sense. I really wanted to stick to one language I felt most capable with, but seeing how much easier it could be made me change my mind in an instant.
The desire for stronger duck typing is confusing to me, but to each their own. I find Rust allows me to feel far, far more confident in tooling specifically because of its type system. I love that about it. I wish Go’s was a bit more sane, but there are tons of people who disagree with me.
All of this is under the assumption that whatever you're writing has some degree of complexity to it (an assumption which is satisfied very quickly). Five line python glue scripts don't necessarily benefit from static typing.
If the JIT detects the array as homogenous it will compile it to low level array access. JS JITs are very good.
> I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster
and then
> Sometimes I look at truly perf-focused JavaScript, such as the recent improvements to the Chromium DevTools using mind-blowing techniques like using Uint8Arrays as bit vectors, and I feel that we’ve barely scratched the surface.
Bit vectors are trivial?
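For anyone curious, the Uint8Array-as-bit-vector trick really is small; a minimal hypothetical sketch:

```javascript
// 1 bit per flag: byte index is i >> 3, bit-within-byte is i & 7.
class BitVector {
  constructor(n) {
    this.bits = new Uint8Array(Math.ceil(n / 8));
  }
  set(i)   { this.bits[i >> 3] |= 1 << (i & 7); }
  clear(i) { this.bits[i >> 3] &= ~(1 << (i & 7)); }
  has(i)   { return (this.bits[i >> 3] & (1 << (i & 7))) !== 0; }
}
```

Versus an array of booleans this is 8x denser, which matters less for raw storage than for cache behaviour when scanning millions of flags.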
I think the author is too ignorant about those "faster languages". Sure, maybe you can optimize javascript code, but the space of optimizations is only a small subset of what is possible in those other languages (e.g. control over allocations, struct layout, SIMD, ...)
That said, I don't think the author understands performance when it comes to language details. There are several layers of untapped performance, all which JS makes hard to access - optimal vectorization, multi-threading, memory/cache usage, etc.
> I find Rust allows me to feel far, far more confident in tooling specifically because of its type system.
Usually JS projects become really hard to work on as they grow. Good JS needs a lot of discipline from the team of devs working on it: it gets messy easily and refactoring becomes very hard. Type systems help with that. TypeScript helps, but only so much... Going with a language that both has a sound type system (like Rust) and allows lots of perf improvements (like Rust) becomes an attractive option.
So yes, you can do clever tricks with ArrayBuffers, and the JS VMs will do incredibly clever optimizations for you, but as long as your code is running on one core you cannot be competitive. (Unless your problem is inherently serial, but very few "tool"-type problems are.)
JavaScript and Python have types, and Python has always been strongly typed (type hints have not changed that). Neither TypeScript nor Python uses type hints at runtime to help tell the CPU what to do.
What type hints in these languages do is make it easier for you to describe more specifics of what your code does to your tooling, your future self, and other programmers.
Really though, my entire career has taught me to never ever talk about performance with other developers... especially JavaScript developers or other developers working on the web. Everybody seems to want performance but only within the most narrow confines of their comfort zone, otherwise cowardice is the giant in the room and everything goes off the rails.
The bottom line is that if you want to go faster then you need to step outside your comfort zone, and most developers are hostile to such. For example if you want to drive faster than 20 miles per hour you have to be willing to take some risks. You can easily drive 120 miles per hour, but even the mere mention of increased speed sends most people into anxiety apocalypse chaos.
The reactions about performance from other developers tend to be so absolutely over the top extreme that I switched careers. I just got tired of all the crying from such extremely insecure people who claim to want something when they clearly want something entirely different. You cannot reasonably claim to want to go faster and simultaneously expect an adult to hold your hand the entire way through it.
> Rather than empowering the next generation of web developers to achieve more, we might be training them for a career of learned helplessness. Imagine what it will feel like for the average junior developer to face a segfault rather than a familiar JavaScript Error.
I feel this slightly misses the point. We should be making sure that the next generation of Software Engineers have a solid grounding in programming machines that aren't just Google's V8 JavaScript engine, so that they are empowered to do more, and make better software.
We should be pushing people to be more than just Chrome developers.
Also, while I understand what the author is getting at, referring to lower-level developers as demigods is a little unhelpful. As someone who switched careers from high-level languages to being a C++ engineer, I can attest that this stuff is learnable if you are willing to put the time and effort into learning it. It's not magic knowledge. It just takes time to learn.
It's incredible how I don't need tooling at all, except for a basic IDE integrated language server. No package manager, no transpiler, no linter/formatter, no extensive configuration files. Need to add a dependency? Just copy paste the code you need from a github repo. It's still readable and editable if you need since it's the source code, not some transpiled/minified/optimized mess.
Ever had ESM/CommonJS dependencies conflicting with your tsconfig.json, right when you need to deploy an urgent hotfix? Forget about that madness. It is such a great and simple DX compared to JS.
Edit: Before I'm dismissed, I'll add that my Odin project is becoming as complex as any other JS website I've worked on and it can run in a browser thanks to wasm compilation. So I'm not even comparing apples and oranges.
Look at the performance gains in build tool land (esbuild specifically) and you’ll see the performance gains with native languages.
For most webservers and UIs it’s plenty fast though.
The "more difficult" in this quote makes me somewhat angry.
`This breaks down if JavaScript library authors are using languages that are different (and more difficult!) than JavaScript.`
JS is absolutely not easy!
It is not class-oriented but uses funky prototypes, with classes slapped on, PHP-style.
Types are bonkers, so someone bolted on TypeScript.
It has a dual-wield footgun in the form of null/undefined: the billion-dollar mistake, made twice!
The whole Javascript tooling and ecosystem is a giant mess with no fix in sight (hence all the rewrites).
The whole JavaScript ecosystem is ludicrously complicated with lots of opinions on everything.
Tooling is especially bad because you need a VM to run stuff (so lots of rewrites).
This is why Java never got much traction in that space either.
Go, for example, is way easier to learn than JavaScript.
Here I mean to a level of proficiency which goes beyond making some buttons blink or loading a bit of stuff from some database.
Tooling just works. There is no thought to spend on how to format stuff or which tool to use to run things.
And even somewhat difficult (and in my opinion useless) features like classes are absent.
Want to do concurrency? Just do `go func whatever()`. Want it to communicate across threads? Use a channel; it makes stuff go from A -> B.
Try this in JS and you have to know concepts like Promises, WebWorkers, and a VM which is not really multithreaded to begin with.
I think people often overlook this factor when doing rewrites and making big claims about the results.
Chances are if you’d done the rewrite in the same language you’d get similar results.
I don’t know if it’d be possible to empirically prove that. I’ve only seen it happen a few times.
https://blog.nginx.org/blog/server-side-webassembly-nginx-un...
https://github.com/WebAssembly/wasi-http
Write in any language, compile to WebAssembly, have it run on the server no matter what the server's CPU architecture, achieve better performance with high compatibility.
The only reason for wasm is portability. If you can't compile your code for the server you're going to be running it on, then the original argument of choosing wasm over JavaScript is already moot.
Wasting time on optimising cases that don't occur is just wasteful. Go solve something that's a real problem, not some imagined performance problem.
The same does not hold for frontend use, as it's for one user, and latency trumps throughput in the perception of being fast. You need great single-thread performance, and an ability to offload stuff to parallel threads where possible, to keep the overall latency low. That's basically the approach of game engines.
You can benefit from 1000x (!) speed ups just rewriting sync Python in sync Rust, in my measured experience, because the compiler helps exponentially more the more abstract your code is, and Rust can absolutely do high level systems.
The main blocker is when you’re missing some library because it doesn’t exist in Rust, but that’s almost always a big opportunity for open source innovation
I have a particular stereotypical programmer in mind. The one that rewrites their entire program in X, because it's fast. Not because they understand the data dependencies and run-time performance characteristics of their program.
Typically these folks misattribute the performance gains they experience in such projects to the language itself rather than the tacit knowledge they have of the original program.
I've been in several past-midnight war rooms due to exactly that mindset.
Customer suddenly gets a new client which results in 10x as much data as anyone imagined and boom, there you are getting dragged out of bed at 2am and it's not even your fault.
I'm not saying you should spend time optimizing prematurely.
I'm saying you should be aware of what you're doing, and avoid writing bad code. Because almost always it does not take any significant time to avoid writing bad code.
If you know that indexed library function is O(n) and you need to check all items, don't write an indexed for loop; use a while loop with its .first() and .next() functions, which are O(1).
Or reach for a dictionary to cache items. Simple stuff; just be aware of it so someone isn't dragged out of bed at 2am.
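A sketch of the "reach for a dictionary" advice, with invented names. Indexing once turns an O(n·m) nested scan into O(n + m):

```javascript
// Bad version (commented out): for each order, scan all customers.
//   orders.map(o => customers.find(c => c.id === o.customerId))
// Good version: build a Map in one O(m) pass, then do O(1) lookups.
function attachCustomers(orders, customers) {
  const byId = new Map(customers.map(c => [c.id, c]));
  return orders.map(o => ({ ...o, customer: byId.get(o.customerId) }));
}

const customers = [{ id: 1, name: "Ada" }, { id: 2, name: "Linus" }];
const orders = [{ orderId: "a", customerId: 2 }];
console.log(attachCustomers(orders, customers)[0].customer.name); // "Linus"
```

Same output, and nobody gets paged when the customer list grows 10x.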
I really like duck typing when I'm working on small programs - under 10,000 lines of code. Don't make me worry about stupid details like that, you know what I mean so just do the $%^#@ thing I want and get out of my way.
When I work with large programs (more than 50k lines of code - I work with some programs with more than 10 million lines and I know of several other projects that are much larger - and there is reason to believe many other large programs exist where those who work on them are not allowed to talk about them) I'm glad for the discipline that strong typing forces on me. You quickly reach a point in code where types save you from far more problems than their annoyance costs.
In Rust, I’d have added Rayon as a dependency to my Cargo.toml, inserted `use rayon::prelude::*;` (or a more specific import, if I preferred) into my file, changed one `.iter()` to `.par_iter()`, and voilà, it’d have compiled (all the types would have satisfied Send) and given probably at least a 6–7× speedup.
Seriously, when you get to talking about a lot of performance tricks and such (I’m thinking things like the bit maps referred to at the end), even when they’re possible* in JavaScript, they’re frequently—I suspect even normally—way easier to implement in Rust.
I’m sure you could get something with similar performance in JS. I’ve messed around with JS daemons, so you don’t care about startup time for programs like tsc and whatnot. The problem is that it’s just a pain in the ass to get any of this to work, whereas ESBuild is just fast.
Maybe these problems with JS will get solved at some point, because we haven’t exhausted all of the possibilities for making JS faster (like the author says). However, when you write the tools in Rust or Go or whatever, you get a fast tool without trying very hard.
Yeah, it's just that at about 10k LoC, as I've also noticed, you don't actually know what you yourself mean! It's probably because that amount of code is almost never written in one sitting, so you end up forgetting that e.g. you've switched, for this particular field, from a stack of strings to just a single string (you manage the stacking elsewhere) and now your foo[-1] gives you hilarious results.
Anecdotally I have had to do this in js a few times. I have never had to do this in Rust. Probably because Rust projects are likely to ship with fewer bugs.
Also, Rust is harder to pick up, but what are you going to do, use the most accessible tool to solve every problem, regardless of its efficacy? I am not a Rust expert by any means, but just reading the Rust book and doing a couple projects made me a better programmer in my daily driver languages (js and Python).
I think speed is less important here than correctness. Every time you ship a buggy library you are wasting the time of every single end user. The correctness alone probably saves more time in total than any performance gains.
No.
The word means something.
It's bad enough when it gets misused colloquially e.g. by folks on Twitter and clueless podcasters trying to spice up their talking points, but in a thread like this one, it has no place getting dropped into the discussion except if talking about something that actually fits an exponential curve.
It is.
Brilliant engineers have spent decades making it faster than you might expect, subject to many caveats, and after the JIT has had plenty of time to warm up, and if you're careful to write your code in such a way that it doesn't fall off the JIT's optimization paths, etc.
Meanwhile, any typical statically typed language with a rudimentary ahead of time compiler will generally be faster than a JS VM will ever approach. And you don't have to wait for the JIT to warm up.
There are a lot of good things about dynamically typed languages, but if you're writing a large program that must startup quickly and where performance is critical, I think the right answer is a sound typed language.
Still anecdotal, but I have worked on a large Rust codebase (Materialize) for six years, worked professionally in JavaScript before that, and I definitely wouldn’t say that Rust projects have fewer bugs than JavaScript projects. Rust projects have plenty of bugs. Just not memory safety bugs—but then you don’t have those in JavaScript either. And with the advent of TypeScript, many JS projects now have all the correctness benefits of using a language with a powerful type system.
We’ve forked dozens of Rust libraries over the years to fix bugs and add missing features. And I know individual Materialize developers have had to patch log lines into our dependencies while debugging locally many a time—no record of that makes it into the commit log, though.
I think having more choices is a good thing, and sometimes rewriting something from scratch will result in a cleaner/better version. The community at large is going to decide which tooling becomes the standard way to do it, so the author should make an argument on why the js tooling is better instead of weak statements like the one I quoted.
I avoid any tool which forces me to pull in a gazillion npm packages, while I gladly use esbuild for example because it looks and feels like a nice little compact tool.
I suspect Java is fast. JavaScript is also fast. They are both fast. Without comparing measures the only significant distinction between the two is the time to compile. In that case Java is slow, or at least just substantially slower than JavaScript.
Fortunately there are comparative benchmarks: the Computer Language Benchmarks Game. It is not always the best, but it is certainly better than naught.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
https://web.dev/case-studies/google-sheets-wasmgc
Any application that is written in JavaScript will have more and more of it replaced with WebAssembly.
I would blame numpy for Python's popularity today. Writing code as fast as C or Fortran in Python is awesome (and keeps me employed).
The very notion of "fast and slow languages" is nonsense. A language is just an interface for a compiler, translator, or interpreter of some sort. A language is only a steering wheel and pedals, not the whole car, so the whole argument about which one is faster is stupid.
In our case, AOT compilation backfired. We used (contractually had to) support older architectures and our Eigen built with meager SSE2 support couldn't possibly outrun Numpy built with AVX-512.
So we stopped rewriting. And then Numba (built on the same LLVM as clang) came up. And then not one but several AOT Python compilers. And now a JIT compiler is in standard Python.
But the vast majority of slow JS I've encountered was slow because of an insane dependency tree or wildly inefficient call stacks. Faster languages cannot fix polynomial or above complexity issues.
That's what C++ templates always have been, and got way, way tighter with concepts circa C++23.
Rust's traits are also strong duck typing if you squint a little.
The idea in both cases is simple: write the algorithm first, figure out what can go into it later — which allows you to write the code as if all the parts have the types you need.
But then, have the compiler examine the ducks before the program runs, and if something doesn't quack, the compiler will.
Related comments: https://news.ycombinator.com/item?id=35045520
Direct comparison I did between Python and C++ semantics - Oil's Parser is 160x to 200x Faster Than It Was 2 Years Ago - https://www.oilshell.org/blog/2020/01/parser-benchmarks.html
This is the same realistic program in both Python and C++ -- no amount of "optimizing Python" is going to get you C++ speed.
---
FWIW I agree with you about the debates -- many people can't seem to hold 2 ideas in their head at once.
Like that C++ unordered_map is atrociously slow, but C++ is a great language for writing hash tables.
And that Python was faster than Go for hash table based workloads when Go first came out, but also Python is slow for AST workloads.
Performance is extremely multi-dimensional, and nuanced, but especially with programming languages people often want to summarize/compress that info in inaccurate ways.
Low level stuff is mostly c++ to talk to v8 or do system calls, talk to libuv, etc... but even that stuff has a bunch of js to wrap and abstract and provide a clean DX.
These are very different than your average JavaScript program
And that's exactly where it starts to be the case that JavaScript semantics are the issue
Take it from Lars Bak and Emery Berger (based on their actions, not just opinions): https://lobste.rs/s/ytjc8x/why_i_m_skeptical_rewriting_javas... :)
Back in the 1980s it was my greatest ambition to go on The Price Is Right and play Plinko. However all I could accomplish was making this cursed programming language instead. You'll love it.
What do you mean by a "sound typed language". Go and Java have unsound type systems, and run circles around JS and Dart. Considering your involvement with Dart, I find contradictory information [1].
Just because JS can be fast doesn't mean it's a pleasure to write fast JS
esbuild versus webpack performance is never a fair fight. Most of the other behemoths are still "just" webpack configurations plus bundles of plugins. It will take a while for the build tools in that model to settle down/slim down.
(esbuild versus Typescript for "Typescript is the only build tool" workflows is a much more interesting fight. esbuild doesn't do type checking, only type stripping, so it is also not a fair fight, and you really most often want both, but "type strip-only" modes in Typescript are iterating to compete with esbuild in fun ways, so it is also good for the ecosystem to see the fight happening.)
I appreciate esbuild, but I also appreciate esbuild had so much of the benefit of a lot of hindsight and not developing in the open as an ecosystem of plugins like webpack did but rather baking in the known best practices as one core tool.
I only mean that utilizing extra CPU cores in JS is a bit easier for an API server, with tons of identical parallel requests running, and where the question is usually in RPS and tail latency, than for single-task use cases like parallelized builds.
I can’t think of any in the mainstream, however.
Also both Deno and Bun have more optimized startup times in general by default, some of that startup time is just Node, not a reflection of the language itself.
Maybe the problem people have is that node/npm are becoming a similarly “essential” build system piece much like python. That much I can certainly understand.
The biggest problem with JavaScript is that it's an extremely footgunny language. IMO, of the C++ variety, but probably worse.
1. The type system is unsound and complicated. Often things "work" but silently do something unexpected. The implicit type conversion thing is just one example, but I know you've seen "NaN" on a page or "[object Object]" on a page. Things can pass through and produce zero errors, but give weird results.
2. JS has two NULLs - null and undefined. The error checking around these is fragile and inherently more complex than what you'd find in even C++.
3. JS has an awful standard library. This is footgunny because then basic functionality needs to be reimplemented, so now basic container types have bugs.
4. JS has no sane error handling. Exceptions are half-baked and barely used, which sounds good until you remember you can't reliably do errors-as-values because JS has no sane type system. So it's mostly the wild wild west of error handling.
5. The APIs for interacting with the DOM are verbose and footgunny. Again things can look as though they work but they won't quite. We develop tools like JSX to get around this, but that means we take all the downsides of that too.
6. Typescript is not a savior. Typescript has an okay-ish type system but it's overly complex. Languages with nominal typing like C# are often safer (no ducks slipping through), but they're also easier to work with. You don't need to do type Olympics for most languages that are statically typed, but you do in TS. This also doesn't address the problem of libraries not properly supporting typescript (footgun), so you often mix highly typed code with free-for-all code, and that's asking for trouble. And it will become trouble, because TS has no runtime constraints.
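A few of these footguns in the flesh. Every line below runs without an error and silently produces something you probably didn't want:

```javascript
// (1) Implicit conversion: the source of "[object Object]" on real pages.
console.log([] + {});               // "[object Object]"

// Default sort is lexicographic, even for numbers.
console.log([10, 1, 2].sort());     // [ 1, 10, 2 ]

// Minus coerces to number; plus concatenates strings.
console.log("5" - 1, "5" + 1);      // 4 "51"

// (2) Two nulls, and typeof null is famously "object".
console.log(typeof null, typeof undefined); // "object" "undefined"
```

None of these throw, which is exactly the problem: the errors surface far away from where they were made.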
A benefit to a good JIT, though, is that you can converge to such optimizations over time based on practical usage information. You trade off less optimized startup paths for Profiler Guided Optimization on the live running application, in real time based on real data structures.
JS has some incredible JITs very well optimized for browser tab life-cycles. They can eventually optimize things at a low level far further than you might expect. The eventually of a JIT is of course the rough trade-off, but this also is well optimized for a lot of the browser tab life-cycle: you generally have an interesting balance of short-lived tabs where performance isn't critical and download size is worth minimizing, versus tabs that are short-lived but you return to often and can cache compiled output so each new visit is slightly faster than the last, versus a few long-lived tabs where performance matters and they generally have plenty of time to run and optimize.
This is why Node/Deno/et al excel in long-running server applications/services (including `--watch` modes) and "one-off"/"single run" build tools can be a bit of a worst case, they may not give the JIT enough time or warning to fully optimize things, especially when they start with no previous compilation cache every time. (The article points out that this is something you can turn on now in Node.)
I don’t think there’s a great way to be sure of this. Parcel 2 (my personal favorite), for example, doesn’t include, by default, much of the cruft from mid-2010s JavaScript, but esbuild is still faster.
Theoretically, being able to use multiple cores would bring speed improvements to a lot of the tree manipulation tasks involved in building js projects.
> esbuild versus webpack performance is never a fair fight.
Yeah webpack is just the worst. Bloated from day 1
I mean that if the type checker concludes that an expression or variable has type T, then no execution of the program will ever lead to a value not of type T being observed in that variable or expression.
In most languages today, this property is enforced with a combination of static and runtime checks. Mostly the former, but things like checked casts, runtime array covariance checks, etc. are common.
That in turn means that a compiler can safely rely on the type system to generate more efficient code.
Java intended to have a sound type system, but a hole or two have been found (which are fortunately caught at runtime by the VM). Go's type system is sound as far as I know. Dart's type system is sound and we certainly rely on that fact in the compiler.
There is no contradictory information as far as I know, but many people seem to falsely believe that soundness requires zero runtime checks, which isn't the case.
> ...it’s straightforward to modify JavaScript dependencies locally. I’ve often tweaked something in my local node_modules folder when I’m trying to track down a bug or work on a feature in a library I depend on. Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.
I too often find myself inserting `console.log` inside node_modules to figure out why the toolchain doesn't work as I'm expecting it to. It has gotten me out of some very nasty situations when StackOverflow/Google/GPT didn't help at all.
Had it been written in Rust, I wouldn't have had a chance.
> The WebAssembly sandbox’s linear memory is initialized with the HTTP context of the current request and the finalized response is sent back to the router for transmission to the client.
They can feel free to clarify that multiple requests can concurrently use a shared context as well if that's true. Or if that's not true, then the thing will of course be slow assuming it needs to do some kind of IO like a database request.
Note that major FaaS implementations like AWS Lambda don't let you have concurrent requests that share context, so it's not exactly crazy to think this wouldn't either.
I've dipped into V8 to understand a bug...exactly once. Even then, I didn't have to build it, which is good because building node and V8 from source used to take hours and probably still does. It's just a more stable piece of software, because Google has a very strong incentive to keep it that way.
The thing is, there is no requirement to ever touch lower level languages in order to work as a JS developer. I would hazard a guess that most JavaScript developers don't. If you need to touch C++ in order to do certain things, then most JS developers will choose not to do them. Expanding the number of tools that can't be fixed by most of their own users has downsides.
More than 90% of performance in JavaScript comes down to:
* comfort with events and callbacks
* avoiding string parsing: queryStrings, innerHTML, and so on
* a solid understanding of transmission and messaging. I wrote my own WebSocket library
None of that, except figuring out your own home-grown WebSocket engine, is complicated, but it takes some trial and error to get it right universally.
Spending a week or two getting familiar with the way things are done in a language, and then gradually becoming effective in it and the specific codebase I'd be working on, would for me at least beat having to work in an environment with 50 years' worth of irreconcilable technical debt inherent to the language.
My focus is message-passing, as opposed to instance-passing. Passing around instances can lead to 'spooky action at a distance' if multiple parts of the code hold on to a reference of the same instance, so I avoid it as much as possible.
The main advantage of static typing is that it helps you to safely pass around complex instances, which I happen to avoid doing anyway. So while I don't see static typing as inherently harmful, it offers me diminishing returns as my coding style improves.
In JavaScript land though, TypeScript forces me to add a transpilation step which forces bundling of my code and adds complexity which causes a range of really annoying problems in various situations. As people like DHH (founder of Ruby on Rails) have shown, we have the opportunity to move away from bundling and it yields a lot of benefits... but it's not possible to do with TypeScript in its current form.
It's particularly difficult for me because I actually like the syntax of TypeScript and its concept of interfaces. Interfaces can be consistent with the idea of passing simple objects which serve as structured messages between functions/components; rather than live instances instantiated from a class. I can treat the object as named parameters and not hold on it.
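A tiny sketch of that message-passing style, with invented names: components exchange frozen plain objects, so nothing downstream can mutate shared state at a distance, and no transpilation step is needed.

```javascript
// Illustrative only: plain, frozen message objects instead of live instances.
function makeOrderMessage(id, items) {
  return Object.freeze({
    type: "order",
    id,
    items: Object.freeze([...items]), // defensive copy, then freeze
  });
}

// Consumers treat the message as pure data: no hidden state,
// trivially serializable, safe to hand to anyone.
function handleOrder(msg) {
  return `order ${msg.id}: ${msg.items.length} item(s)`;
}

const msg = makeOrderMessage(7, ["book", "pen"]);
console.log(handleOrder(msg));     // "order 7: 2 item(s)"
console.log(Object.isFrozen(msg)); // true
```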
Not true. JavaScript’s threading model is entirely insufficient for something like Rayon. You could get either something that only worked with shared byte arrays, or something that was vastly less efficient due to structured clone. Either way, at best you have something far more manual and somewhat slower, or something a little more manual and much slower.
Rayon is a magnificent example of making something that is impossible in scripting languages easy. Of making something that you can only feebly imitate with some difficulty, trivial.
Add to that amazing tooling with hot reload (bye bye 2-5 minutes compile times), billions of investments from Big tech to make it better and faster, ability to reuse same code between mobile/backend/frontend, integration into browser and you’ll quickly find that JS literally has no rival.
https://web.dev/case-studies/google-sheets-wasmgc
And with the advent of WebAssembly, any language integrates with the browser.
So why am I using JavaScript again?
Which is not a lot when it comes to web. Sure some algo heavy stuff like Figma will benefit from it, but GUI around it is still written in what?
Works for Dart with Flutter:
https://www.youtube.com/watch?v=Nkjc9r0WDNo
https://www.youtube.com/watch?v=qx42r29HhcM
Works for C# with Avalonia UI:
https://www.youtube.com/watch?v=6mwQDPlbF5Y
And so on.
I’m not sure what you were aiming for here, but you only reinforced me that JS is amazing if rewriting calculation worker yields only 2x improvement.
> So why am I using JavaScript again?
Re-read my comment, it’s all there.
If your stack is FP-ish, and you hire FP-ish developers, it's fine. But having non-FP devs write Haskell? Maybe I've been unlucky, but it's near impossible in my experience.
Try not to worry about it. Welcome your WebAssembly overlords and be happy.
I've rewritten some ~10 small node servers to Go, Java and C#, and they've always been >10X faster without changing algorithms.
Even in the few cases where dynamic languages catch up, they're often written in an unidiomatic style (read: optimized) and compete with unoptimized/naive C/C++.
Let’s regress to the level of native apps without the benefits of said native apps. No standardization, no performance, no unified integration. Let’s get rid of browser plugins that allow us to fight invasive ads and malicious JS scripts, let’s dump decades of expertise and optimizations, let’s undo all advancements of the web just to be able to write the same old <div> in C#. Nothing better than a single blob of <canvas>.
It’s ironic that some of the people in this thread accuse JS devs of using only JS, and then you use those “frameworks” as an example of a good thing when they don’t even have a separation of presentation (like HTML and JS) that would allow other languages to tap into it.
And all of this is with much worse performance and stability.*
* - for now
No, it’s not a narrow use case. I wake up my phone to spreadsheet calculation, I open HN - a little bit more spreadsheet, my kettle heats water via power of spreadsheet algorithm. Amazon purchase? Only via spreadsheets.
You can use Numba that uses the same LLVM clang does and write all the computation kernels yourself instead of using what Numpy provides. The only difference there would be JIT vs AOT compilation.
Or you can use Codon, that uses the same LLVM clang does and then there will be no difference at all.
Language is just an interface for a compiler.
WebAssembly has all of these things. WebAssembly is already there, lurking in your browser. That's why it will succeed.
It's interesting how threatened you are by WebAssembly. But change is normal. Embrace the change.
Is that why Flutter demo you’ve linked takes 7 seconds to load on Firefox on iPhone 14 Pro and then barely works skipping frames?
I can’t even select text on the page, since it’s just a big canvas, lmao.
> WebAssembly has all of these things. WebAssembly already there, lurking your browser. That's why it will succeed.
You mean how every one of those frameworks that you’ve listed has to reimplement a11y every time, since WASM is pure logic? How all of them have to reimplement OS shortcuts and OS integrations? Is that what you call “unified integration”?
> It's interesting how threatened you are by WebAssembly. But change is normal. Embrace the change.
Why did I even bother replying to you, sigh.
My point was the API simplicity not the technical correctness, which is why my post discussed threading in the first place.
Yes, Rayon itself isn't possible in JS, but a Rayon-like multi-threaded library that you can reach for in cases where it makes sense is absolutely doable.
> Why did I even bother replying to you, sigh.
I think it's because you're overwrought. Don't fear WebAssembly.
> I think it's because you're overwrought. Don't fear WebAssembly.
Not sure what’s you deal with these comments, as I’m not even a JS dev by trade, but okay.
Rayon’s approach lets you write code that will run in arbitrary other threads, inline and wherever you want to. That’s absolutely essential to Rayon’s API, but you can’t do that in JavaScript, at all: workers don’t execute the same code (it’s not based on forking), and interaction between workers is limited to transferable objects, or things that work with structured clone, which excludes functions.
No, you can’t get anything even vaguely like Rayon in JavaScript. You could get a feeble and hobbled imitation with untenable limitations or extra compilation step requirements (and still nasty limitations), and that’s about it.
With Rayon, you can add parallelism to existing code trivially. With JavaScript, the best you can manage, which is nowhere near as powerful or effective even then, requires that you architect your entire program differently, significantly differently in many cases, and in ways that are generally quite a bit harder to maintain.
If you wish to contest this, if you reckon I’ve overlooked something, I’m open to hearing. I’m looking for something along these lines to work:
import { f1, f2 } from "./f.js";
let n1 = Math.random();
let n2 = Math.random();
await par_iter([1, 2, 3, 4, 5])
.map(n => f1(n + n1))
.filter(n => f2(n + n2))
.collect();
Where the mapping and filtering will be executed in a different worker, and collect() gives you back a Promise<Array>. The fact that f1 and f2 are defined elsewhere is deliberate—if it didn’t close over any variables, you could just stringify the function and recompile it in the worker.
https://stackoverflow.com/questions/75029322/does-numpy-use-...
I've been on way too many overdue projects where people went "but this doesn't scale" for stuff that's just not going to happen. Don't waste time on avoiding so called bad algorithms if there's just not that much data going through it. But yeah, don't write a badly scaling algorithm if it does.
Most lists are just way too small to care about the difference. Literal single digit number of items, and a small loop body. You can go up 3, 4, 5 orders of magnitude before you can even measure the nested loop being slower than lower big-O solutions, and a few more before it becomes a problem.
But if you have that one loop that goes into the millions of items and/or has a big body, you'd better be thinking about what you're doing.