
3 points by wkyleg | 1 comment

Right now, JavaScript scales well with a single-threaded event loop. It's certainly not as fast as something like Go for async tasks, but it's fast enough to power much of the web and easy to write.

Why hasn't anyone abstracted the event loop model to scale across multiple machines or utilize modern processors? Perhaps with something more like an Actor model or Erlang's BEAM?

It seems like exposing the JavaScript concurrency model as an abstraction over multicore or multi-machine concurrency would be one of the easiest ways to achieve this. I realize that this is still technically difficult, but programming tends towards "just porting things to JavaScript." I would love to have something like the Phoenix framework, just built with JavaScript/TypeScript, where I could scale a back end by bumping up the machine size or by scaling horizontally.

toast0 No.42177015
I'm not in the node ecosystem, but can't you "just" use node worker-threads for multicore, and run node on multiple machines for multi-machine?
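For what it's worth, a minimal sketch of the worker-threads half of that suggestion (not from the thread; the per-core number crunching is just made-up filler):

    // multicore.js -- minimal worker_threads sketch: one worker per core,
    // each running its own event loop on its own thread.
    const { Worker, isMainThread, parentPort } = require('node:worker_threads');
    const os = require('node:os');

    if (isMainThread) {
      const cores = os.availableParallelism ? os.availableParallelism() : os.cpus().length;
      for (let i = 0; i < cores; i++) {
        const worker = new Worker(__filename);
        worker.on('message', (msg) => console.log(`worker ${i}:`, msg));
        // Hand each worker a slice of made-up CPU-bound work.
        worker.postMessage({ start: i * 1e6, end: (i + 1) * 1e6 });
      }
    } else {
      parentPort.on('message', ({ start, end }) => {
        let sum = 0;
        for (let n = start; n < end; n++) sum += n; // stand-in for real work
        parentPort.postMessage({ start, end, sum });
        process.exit(0); // done with our one job
      });
    }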

If you want the features of BEAM/dist plus running some javascript, I'd suggest you build your coordination layer in a BEAM language and have some glue to run javascript as a spawned port, or possibly connect node as a c_node to dist.
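The JavaScript half of the spawned-port idea can be as small as a stdin/stdout loop. This sketch (mine, not from the thread) assumes the BEAM side spawns the script with line-based framing and speaks newline-delimited JSON; the message shape is invented for illustration:

    // port_worker.js -- the Node side of an Erlang/Elixir port.
    // Assumes the BEAM side spawns "node port_worker.js" with line-based
    // framing and sends one JSON message per line over stdin.
    const readline = require('node:readline');

    const rl = readline.createInterface({ input: process.stdin });

    rl.on('line', (line) => {
      let msg;
      try {
        msg = JSON.parse(line);
      } catch {
        process.stdout.write(JSON.stringify({ error: 'bad json' }) + '\n');
        return;
      }
      // Do the real JavaScript work here; echo a result back for the sketch.
      process.stdout.write(JSON.stringify({ id: msg.id, ok: true }) + '\n');
    });

    // When the BEAM side closes the port, stdin ends and we exit cleanly.
    rl.on('close', () => process.exit(0));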

replies(1): >>42177229
wkyleg No.42177229
Nobody really uses multithreading for Node. I mean, I'm sure some projects do (maybe pnpm?), but the threading support isn't great and the event loop already performs well.
replies(1): >>42183767
pier25 No.42183767
Almost everyone running Node on a machine with multiple cores is using multithreading.

Node is multithreaded by default. I believe the default libuv threadpool size is 4 threads. Most of Node itself is written in C++.

The JS code written by end users is single-threaded (most of it, at least), but I/O etc. is executed by libuv, which runs file I/O, DNS, and some crypto work on its thread pool.

https://docs.libuv.org/en/v1.x/threadpool.html#threadpool
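To make that concrete, here's a rough demo (mine, not from the thread) of the libuv thread pool in action: crypto.pbkdf2 is dispatched to the pool, so with the default UV_THREADPOOL_SIZE of 4 the eight jobs below finish in two waves, while starting Node with UV_THREADPOOL_SIZE=8 collapses them into one.

    // threadpool_demo.js -- illustrative only: queue eight CPU-heavy hashes,
    // which Node hands off to libuv's worker threads rather than the JS thread.
    //   node threadpool_demo.js
    //   UV_THREADPOOL_SIZE=8 node threadpool_demo.js   (compare the timings)
    const crypto = require('node:crypto');

    const started = Date.now();
    for (let i = 0; i < 8; i++) {
      crypto.pbkdf2('secret', 'salt', 500_000, 64, 'sha512', () => {
        // With the default 4-thread pool the callbacks arrive in two batches.
        console.log(`hash ${i} done after ${Date.now() - started} ms`);
      });
    }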