268 points aapoalas | 6 comments

We're building a different kind of JavaScript engine, based on data-oriented design and a willingness to try something quite out of left field. This is most concretely visible in our major architectural choices:

1. All data allocated on the JavaScript heap is placed into a type-specific vector. Numbers go into the numbers vector, strings into the strings vector, and so on.

2. All heap references are type-discriminated indexes: a heap number is identified by its discriminant value and the index it points to in the numbers vector (see the sketch after this list).

3. Objects are also split up into object-kind-specific vectors. Ordinary objects go into one vector, Arrays into another, DataViews into yet another, and so on.

4. Unordinary objects' heap data does not contain ordinary object data; instead, it holds an optional index into the ordinary objects vector.

5. Objects are aggressively split into parts so that common use cases avoid reading parts that are known to be unused.
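
In sketch form, the heap and its references look roughly like this (illustrative Rust, not Nova's actual type definitions):

    // Hypothetical sketch, not Nova's actual types: a heap reference is a
    // discriminant plus a u32 index into the matching type-specific vector.
    #[derive(Clone, Copy)]
    enum Value {
        Number(u32), // index into Heap::numbers
        String(u32), // index into Heap::strings
        Array(u32),  // index into Heap::arrays
        Object(u32), // index into Heap::objects (ordinary objects)
    }

    struct Heap {
        numbers: Vec<f64>,
        strings: Vec<String>,
        arrays: Vec<ArrayData>,
        objects: Vec<ObjectData>,
    }

    // Point 4: an Array's heap data does not embed ordinary object data;
    // it optionally points into the ordinary objects vector instead.
    struct ArrayData {
        object_index: Option<u32>,
        elements: Vec<Value>,
    }

    struct ObjectData {
        prototype: Option<Value>,
        keys: Vec<Value>,
        values: Vec<Value>,
    }

    impl Heap {
        // Following a reference is a bounds-checked index lookup rather
        // than a pointer chase.
        fn get_number(&self, value: Value) -> Option<f64> {
            match value {
                Value::Number(index) => self.numbers.get(index as usize).copied(),
                _ => None,
            }
        }
    }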

If this sounds interesting, I've written a few blog posts on the internals of Nova over on our blog; you can jump in here: https://trynova.dev/blog/what-is-the-nova-javascript-engine

Etheryte ◴[] No.42171237[source]
Architectural choices are interesting to talk about, but I think most people reading this won't have any context to compare against, me included. How does this compare to, e.g., the architecture of V8? What benefits do these choices give compared against other engines? Etc. Reading through the list it's easy to nod along, but it's hard to actually have an intuition about whether these are good choices or not.
replies(3): >>42171249 #>>42171285 #>>42171820 #
VPenkov ◴[] No.42171285[source]
They seem to have a blog post on that: https://trynova.dev/blog/why-build-a-js-engine

It reads like an experimental approach that someone decided to will into existence. That, and to see if they can achieve better performance because of the architectural choices.

> Luckily, we do have an idea, a new spin on the ECMAScript specification. The starting point is data-oriented design (...)

> So, when you read a cache line you should aim for the entire cache line to be used. The best data structure in the world, bar none, is the humble vector (...)

> So what we want to explore is then: What sort of an engine do you get when almost everything is a vector or an index into a vector, and data structures are optimised for cache line usage? Join us in finding out (...)

replies(3): >>42171319 #>>42171414 #>>42175089 #
aapoalas ◴[] No.42171319[source]
The impetus for the engine design is indeed, as you say, "someone decided to will it into existence."

A friend of mine who works in the gaming industry told me about the Entity Component System architecture and I thought: Hey, wouldn't that work for a JavaScript engine? So I decided to find out.

Nova itself had already been created at that point, and I was part of the project, but it was little more than a README. I then started pushing it towards my vision, and the rest is not-quite-history.

replies(2): >>42171643 #>>42183007 #
kitd ◴[] No.42171643[source]
> A friend of mine who works in the gaming industry told me about the Entity Component System architecture and I thought: Hey, wouldn't that work for a JavaScript engine?

That was the first thing I thought of when I saw your description. But the reason ECS works well is cache coherence. (Why) would a general-purpose runtime environment like a JS engine benefit from ECS? Or alternatively, have you seen performance improvements as a result?

replies(1): >>42171694 #
aapoalas ◴[] No.42171694[source]
I guess the opposite could also be asked: Why would a game benefit from ECS? A player in the game can do basically anything, there's no guarantee that things are always perfectly accessed in a linear order.

It comes down to statistics: large data sets in a general-purpose runtime environment are still created through parsing or looping, and they are consumed by looping. A human can manually create small data sets of entirely heterogeneous data, but anything more than a hundred items is already unlikely.

Finally, the garbage collector is a kind of "System" in the ECS sense. So even if the JavaScript code has managed to create very nonlinear data sets, the garbage collector will still enjoy the benefits. (Tracing the data is still "pointer chasing", but we don't need to trace in data order: we can instead gather the heap references we've seen, sort them, and then trace them in index order.)
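
In sketch form (simplified, not Nova's actual collector), the idea is to buffer discovered references and sort them before marking:

    // Simplified sketch: `edges[i]` lists the object indices reachable
    // from object i. Each pass sorts and dedupes the pending references
    // so the marking loop walks the heap vector in linear index order.
    fn mark_objects(marked: &mut [bool], roots: Vec<u32>, edges: &[Vec<u32>]) {
        let mut work_list = roots;
        while !work_list.is_empty() {
            work_list.sort_unstable();
            work_list.dedup();
            let mut next = Vec::new();
            for index in work_list {
                if !marked[index as usize] {
                    marked[index as usize] = true;
                    // References discovered now are traced in the next
                    // sorted pass.
                    next.extend_from_slice(&edges[index as usize]);
                }
            }
            work_list = next;
        }
    }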

replies(1): >>42171985 #
kaoD ◴[] No.42171985[source]
> Why would a game benefit from ECS? A player in the game can do basically anything, there's no guarantee that things are always perfectly accessed in a linear order.

There's actually a guarantee that things are mostly going to be accessed in a linear order because player actions don't matter to the execution of the simulation. The whole simulation is run at 1/FPS intervals across the whole set of entities, regardless of player input (or lack thereof).

In an ECS the whole World is run by Systems, which operate on Components. This is why cache locality works there: when the Movement System is acting, it operates on the Position Component of all (or at least many) Entities, so a linear array access pattern is very favorable. Any other component in your cache is going to be unused until the next system runs (and then the Position Component will become the useless data in cache). That's why you'd rather have an array of Components in cache than an array of Entities.

This access pattern is very suitable for games, because the simulation runs continuously in an infinite loop (the game loop) consisting of even more loops (the Systems running), but not so much for general-purpose computation, where access patterns are mostly random. (EDIT: or rather, local to each "entity".)
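
A minimal Rust sketch of the contrast (illustrative field names, not any real engine's layout):

    // Array-of-structs: iterating positions drags health and sprite data
    // into cache even though the movement system never reads them.
    struct EntityAoS {
        x: f32,
        y: f32,
        vx: f32,
        vy: f32,
        health: u32,
        sprite_id: u32,
    }

    // Struct-of-arrays (the ECS layout): each component lives in its own
    // dense vector, so a System touches only the columns it needs.
    struct World {
        xs: Vec<f32>,
        ys: Vec<f32>,
        vxs: Vec<f32>,
        vys: Vec<f32>,
        healths: Vec<u32>,
        sprite_ids: Vec<u32>,
    }

    // The Movement System: every cache line it pulls in is fully used.
    fn movement_system(world: &mut World, dt: f32) {
        for i in 0..world.xs.len() {
            world.xs[i] += world.vxs[i] * dt;
            world.ys[i] += world.vys[i] * dt;
        }
    }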

replies(1): >>42172071 #
aapoalas ◴[] No.42172071[source]
It is very true that general-purpose computation can theoretically do anything and mess up the linearity of access patterns entirely. But in practice, programs do most of their work in a very linear fashion. It's not by chance that e.g. V8 will try to write objects parsed from a JSON array of objects one right after the other. So in a sense we can say that the JavaScript program itself becomes the System with a capital S.

That is not to say that Nova's heap vectors will necessarily make sense: the two big potential stumbling blocks are 1) growing a heap vector possibly taking too long, and 2) compacting the heap vectors during GC taking too long.

The first point basically comes down to the fact that, at present, each heap vector is a single Rust Vec. When it can no longer fit all the heap data, it needs to reallocate. Imagine you have 2 billion ordinary objects and suddenly the ordinary objects vector needs to reallocate: this will cause horrible stalls in the VM. It can be mitigated by splitting each heap vector into chunks, but that comes at the cost of an extra indirection and some loss of linearity in the memory layout, as sketched below.
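
For illustration, the chunking mitigation could look something like this (a sketch with an assumed chunk size, not how Nova works today):

    // Growing allocates one new fixed-size chunk; existing chunks never
    // move, so there is no huge reallocate-and-copy stall.
    const CHUNK_SIZE: usize = 1 << 16; // assumed: 65,536 slots per chunk

    struct ChunkedVec<T> {
        chunks: Vec<Vec<T>>,
        len: usize,
    }

    impl<T> ChunkedVec<T> {
        fn new() -> Self {
            ChunkedVec { chunks: Vec::new(), len: 0 }
        }

        fn push(&mut self, value: T) -> usize {
            if self.len % CHUNK_SIZE == 0 {
                self.chunks.push(Vec::with_capacity(CHUNK_SIZE));
            }
            self.chunks.last_mut().unwrap().push(value);
            self.len += 1;
            self.len - 1
        }

        // The price: every access pays an extra indirection (chunk, then
        // slot), and linearity breaks at chunk boundaries.
        fn get(&self, index: usize) -> Option<&T> {
            self.chunks.get(index / CHUNK_SIZE)?.get(index % CHUNK_SIZE)
        }
    }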

The second point is more or less a repeat of the first: imagine you have 2 billion ordinary objects and a single one at the beginning of the vector is removed by GC: the GC now has to move every remaining object in the vector down a step to make the vector dense again. This is something I cannot really do anything about: I can make it less frequent by introducing a "minor GC", but eventually a "major GC" must happen and something like this can then be experienced. I can only hope that this sort of thing is rare.

The alternative would be a "swap to tail": the last item in the vector is moved into the removed item's place. But that would mean linear access is no longer guaranteed. It also plays havoc with how our GC is implemented, but that's kind of a side point.
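
Both strategies, sketched on a plain Rust Vec for illustration (a real collector would also have to rewrite every heap reference whose index changes):

    // Compacting removal: shifts every later element down one slot, so
    // indices stay ordered and access stays linear, but it is O(n) per
    // removal (the "major GC" cost described above).
    fn remove_compacting<T>(items: &mut Vec<T>, index: usize) {
        items.remove(index);
    }

    // Swap-to-tail removal: O(1), but the tail element now lives at
    // `index`, so insertion order (and guaranteed linear access) is lost.
    fn remove_swap_to_tail<T>(items: &mut Vec<T>, index: usize) {
        items.swap_remove(index);
    }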

Software engineering and architecture are full of trade-offs :) I'm just hoping that the ones we've made will prove to make sense.

replies(3): >>42172153 #>>42172232 #>>42173654 #
kaoD ◴[] No.42172153[source]
> It's not by chance that e.g. V8 will try to write objects parsed from a JSON array of objects one right after the other.

Yes, but note this is still a different pattern of access (array of "entities"). V8 does this because it assumes that e.g. `foo.name` is very likely going to be accessed along with `foo.lastName` (which is likely the 99% case for general computing) whereas ECS assumes `foo.name` is very likely going to be accessed along with `foo2.name`, `foo3.name`, ..., `fooN.name` (which is the 99% case for videogame timestep loops).

> Software engineering and architecture are full of trade-offs :) I'm just hoping that the ones we've made will prove to make sense.

To clarify: my comment is not a criticism of Nova's design decisions. I was only trying to clarify the answer to "Why would a game benefit from ECS?" for those foreign to ECS's existential motive.

I'm sure Nova's tradeoffs make sense for some workloads and I wish you the best!

replies(1): >>42172370 #
aapoalas ◴[] No.42172370[source]
Thank you very much for your well-wishes <3

> Yes, but note this is still a different pattern of access (array of "entities").

I was referring to the `[foo, foo2, foo3]` objects themselves; V8 does use a "cache local" placement for those, so you'll find them laid out in memory as:

> [foo_proto, foo_elems, foo_props, foo_name, foo_lastName, foo2_proto, foo2_elems, foo2_props, foo2_name, foo2_lastName, ...]

For what it's worth, I am interested in laying object properties out in an ECS-like manner in Nova, so the properties would be laid out as `[foo.name, foo2.name, foo3.name, ...]`, but currently the properties are laid out similarly to V8: `[foo.name, foo.lastName]`. The only difference is that we do not have "in-object properties".

That being said: I am obviously biased, but I do wonder if an ECS-like layout wouldn't be nearly universally beneficial. Think of the `foo.name` and `foo.lastName` accesses: if those are on the same cache line, then accessing the two only reads one cache line. This is nice. But if there are more properties in the objects (and there often are), those will pollute the cache. If you do this access once, it doesn't matter. If you do it a million times, the cache pollution becomes a real issue: in Node.js even the optimal case would be that you read 625,000 cache lines' worth of data of which only 250,000 cache lines are data you actually want.
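
To ground those numbers, here is the back-of-the-envelope arithmetic as a runnable sketch (my assumptions: 64-byte cache lines, 8-byte slots, and five slots per object: proto, elements, properties, name, lastName):

    // Derives the 625,000 / 250,000 figures quoted above.
    fn main() {
        const CACHE_LINE: usize = 64; // bytes, assumed
        const SLOT: usize = 8; // bytes per field, assumed
        const OBJECTS: usize = 1_000_000;

        // Traditional layout: all five slots of every object are read.
        let lines_read = OBJECTS * 5 * SLOT / CACHE_LINE; // 625,000
        // Only `name` and `lastName` (two slots) are actually wanted.
        let lines_wanted = OBJECTS * 2 * SLOT / CACHE_LINE; // 250,000
        println!("{lines_read} lines read, {lines_wanted} actually wanted");
    }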

If instead we use an ECS-like layout, then accessing these two properties reads two cache lines: that's bad, but if it happens once it won't even make a blip on the screen. If a million of these accesses are done, you might think we'd be slow as molasses, but now the ECS-like layout is probably going to help you: you're more likely to be reading the next `name` and `lastName` property values on each access. If you have it bad and only a third of the property data you read is actually the `name` and `lastName` properties you want, then you read 750,000 cache lines and lose out to the traditional engine by 125,000 cache lines. At a roughly 40% "hit rate" (250,000 useful lines out of the 625,000 read) you break even. And that's comparing against the case where the objects contain only `name` and `lastName` and nothing more.

It of course all comes down to statistics but... I'm very interested in the potential benefits here :)

Again, thank you for your comments, I've enjoyed discussing and pondering this <3

replies(1): >>42173584 #
dinfuehr ◴[] No.42173584[source]
Would this mean that each shape/structure/map gets its own vector for each field in order for this to work?
replies(1): >>42174039 #
aapoalas ◴[] No.42174039{3}[source]
That would be one way; it would offer the best theoretical memory layout for well-behaved programs. But I expect it would be very painful to work with, and it would come with some performance penalties in mixed use cases due to the extra indirection required.

No, my thinking is that properties would be stored in tables based on their size class: all objects that have 4-7 properties go in the same table, with all of their first properties in one slice, all of their second properties in another, and so on.
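
A rough sketch of the idea (invented names, not Nova's actual code): one table per size class, stored column-major, so slot N of every object in the class sits contiguously in memory:

    // All objects with up to `columns.len()` properties share one table;
    // column k holds property k of every object in the size class, so
    // reading one property across many objects is a linear scan.
    type SlotValue = u64; // stand-in for a real tagged value type

    struct PropertyTable {
        columns: Vec<Vec<Option<SlotValue>>>,
    }

    impl PropertyTable {
        // E.g. `PropertyTable::new(7)` for the 4-7 properties size class.
        fn new(slots: usize) -> Self {
            PropertyTable { columns: vec![Vec::new(); slots] }
        }

        // Adds an object's properties, returning its row in this table.
        fn insert(&mut self, props: &[SlotValue]) -> usize {
            let row = self.columns[0].len();
            for (k, column) in self.columns.iter_mut().enumerate() {
                column.push(props.get(k).copied());
            }
            row
        }

        // Property k of object `row`; iterating over rows for a fixed k
        // walks one dense column.
        fn property(&self, row: usize, k: usize) -> Option<SlotValue> {
            self.columns[k][row]
        }
    }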

replies(1): >>42177894 #
munificent ◴[] No.42177894{4}[source]
That sounds to me like you'll end up getting little benefit from ECS then. Let's say the JavaScript program is iterating over a hundred thousand instances of some Foo class which happens to have 6 properties. You'd ideally want good use of cache, but if your object vector that has all of the Foo objects also happens to have all sorts of instances of other types that have the same field count mixed in there, then you're going to spend a lot of time skipping over those unrelated objects and refreshing the cache.

I know that ECS is treated as a silver bullet by a lot of people, but my experience is that it really only works well when the data you're working with is statically typed so that you can actually partition into arrays where each array does represent a single meaningful type.

replies(1): >>42178232 #
aapoalas ◴[] No.42178232{5}[source]
That is definitely possible: if it turns out not to make sense, I can at least always go back to the current system, where all properties of an object are stored in a single array.

It's not as ECS-y as one would hope, but it's at least proven technology :D