
aapoalas | 269 points

We're building a different kind of JavaScript engine, based on data-oriented design and a willingness to try something quite out of left field. This is most concretely visible in our major architectural choices:

1. All data allocated on the JavaScript heap is placed into a type-specific vector. Numbers go into the numbers vector, strings into the strings vector, and so on.

2. All heap references are type-discriminated indexes: A heap number is identified by its discriminant value and the index it points to in the numbers vector.

3. Objects are also split up into object-kind-specific vectors. Ordinary objects go into one vector, Arrays go into another, DataViews into yet another, and so on.

4. Non-ordinary objects' heap data does not contain ordinary object data; instead, it holds an optional index into the ordinary objects vector.

5. Objects are aggressively split into parts so that common use cases avoid reading parts that are known to be unused. (A rough sketch of this layout follows below.)
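
In rough Rust-like terms, the shape of the heap is something like this (an illustrative sketch, not our actual type definitions):

    // Typed indexes into the heap vectors (point 2).
    struct NumberIndex(u32);
    struct ObjectIndex(u32);
    struct ArrayIndex(u32);

    // A type-discriminated heap reference.
    enum Value {
        Number(NumberIndex),
        Object(ObjectIndex),
        Array(ArrayIndex),
        // ...strings, DataViews, and so on
    }

    // Arrays do not embed ordinary object data; they optionally point to it (point 4).
    struct ArrayHeapData {
        backing_object: Option<ObjectIndex>, // present only when actually needed
        elements: Vec<Value>,
    }

    struct ObjectHeapData { /* keys, values, prototype, ... */ }

    // One vector per type (points 1 and 3).
    struct Heap {
        numbers: Vec<f64>,
        strings: Vec<String>,
        objects: Vec<ObjectHeapData>,
        arrays: Vec<ArrayHeapData>,
    }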

If this sounds interesting, I've written a few blog posts on Nova's internals over on our blog; you can jump in here: https://trynova.dev/blog/what-is-the-nova-javascript-engine

Etheryte
Architectural choices are interesting to talk about, but I think most people reading this, me included, won't have any context to compare against. How does this compare to e.g. the architecture of V8? What benefits do these choices give when compared against other engines? Etc. Reading through the list, it's easy to nod along, but it's hard to actually build an intuition about whether these are good choices or not.

VPenkov
They seem to have a blog post on that: https://trynova.dev/blog/why-build-a-js-engine

It reads like an experimental approach that exists because someone decided to will it into existence, and to see if they can achieve better performance thanks to the architectural choices.

> Luckily, we do have an idea, a new spin on the ECMAScript specification. The starting point is data-oriented design (...)

> So, when you read a cache line you should aim for the entire cache line to be used. The best data structure in the world, bar none, is the humble vector (...)

> So what we want to explore is then: What sort of an engine do you get when almost everything is a vector or an index into a vector, and data structures are optimised for cache line usage? Join us in finding out (...)

aapoalas
The impetus for the engine design is indeed, as you say, "someone decided to will it into existence."

A friend of mine who works in the gaming industry told me about the Entity Component System architecture and I thought: Hey, wouldn't that work for a JavaScript engine? So I decided to find out.

Nova itself had already been created at that point and I was part of the project, but it was little more than a README. I then started to push it towards my vision, and the rest is not-quite-history.

kitd
> A friend of mine who works in the gaming industry told me about the Entity Component System architecture and I thought: Hey, wouldn't that work for a JavaScript engine?

That was the first thing I thought of when I saw your description. But the reason ECS works well is cache coherence. (Why) would a general-purpose runtime environment like a JS engine benefit from ECS? Or alternatively, have you seen performance improvements as a result?

aapoalas
I guess the opposite could also be asked: Why would a game benefit from ECS? A player in the game can do basically anything; there's no guarantee that things are always perfectly accessed in a linear order.

It comes down to statistics: Large data sets in a general-purpose runtime environment are still created through parsing or looping, and they are consumed by looping. A human can manually create small data sets of entirely heterogeneous data, but anything more than a hundred items is already unlikely.

Finally, the garbage collector is a kind of "System" in the ECS sense. So even if the JavaScript code has managed to create very nonlinear data sets, the garbage collector will still enjoy benefits. (Tracing the data is still "pointer chasing", but we don't need to trace in data order: we can instead gather the heap references we've seen, sort them, and then trace them in index order.)
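
As a rough sketch of that idea (hypothetical names, not our actual GC code):

    // Queue up the object indexes seen while tracing, then visit them in
    // index order so the marking pass walks the heap vector linearly
    // instead of chasing references in discovery order.
    fn mark_ordinary_objects(queued: &mut Vec<u32>, marked: &mut [bool]) {
        queued.sort_unstable();
        queued.dedup();
        for &index in queued.iter() {
            marked[index as usize] = true;
            // ...collect the references this object holds into further
            //    per-type queues for the next round.
        }
    }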

kaoD
> Why would a game benefit from ECS? A player in the game can do basically anything; there's no guarantee that things are always perfectly accessed in a linear order.

There's actually a guarantee that things are mostly going to be accessed in a linear order because player actions don't matter to the execution of the simulation. The whole simulation is run at 1/FPS intervals across the whole set of entities, regardless of player input (or lack thereof).

In an ECS the whole World is run by Systems, which operate on Components. This is why cache locality works there: when the Movement System is acting, it's operating on the Position Component for all (or at least many) Entities, so a linear array access pattern is very favorable. Any other component in your cache is going to be unused until the next system runs (and then the Position Component will become the useless data in cache). That's why you'd rather have an array of Components in cache instead of an array of Entities.

This access pattern is very suitable for games because the simulation is running continuously in an infinite loop (the game loop) consisting of even more loops (the Systems running), but not so much for general-purpose computation where access patterns are mostly random. (EDIT: or rather, local to each "entity".)
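
A minimal sketch of that layout (made-up components, not tied to any particular ECS library):

    struct Position { x: f32, y: f32 }
    struct Velocity { dx: f32, dy: f32 }

    // One densely packed vector per Component, not a vector of entity structs.
    struct World {
        positions: Vec<Position>,
        velocities: Vec<Velocity>,
    }

    // The Movement System: a linear walk over exactly the two component
    // vectors it needs; nothing else is pulled into cache.
    fn movement_system(world: &mut World, dt: f32) {
        for (pos, vel) in world.positions.iter_mut().zip(&world.velocities) {
            pos.x += vel.dx * dt;
            pos.y += vel.dy * dt;
        }
    }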

aapoalas
It is very true that general-purpose computation can theoretically do anything and entirely mess up the linearity of access patterns. But in practice, programs do most of their work in a very linear fashion. It's not by chance that e.g. V8 will try to write objects parsed from a JSON array one right after the other. So in a sense, the JavaScript program itself becomes the System with a capital S.

That is not to say that Nova's heap vectors will necessarily make sense: The two big possible stumbling blocks are 1) growing of heap vectors possibly taking too long, and 2) compacting of heap vectors during GC taking too long.

The first point basically comes down to the fact that, at present, each heap vector is truly a single Rust Vec. When it can no longer fit all the heap data, it needs to reallocate. Imagine you have 2 billion ordinary objects and suddenly the ordinary objects vector needs to reallocate: this will cause horrible stalls in the VM. This can be mitigated by splitting each heap vector into chunks, but that of course comes at the cost of an extra indirection and some loss of linearity in the memory layout.
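
A chunked version would look roughly like this (illustrative sketch only; the real trade-off analysis is more involved):

    const CHUNK_SIZE: usize = 1 << 16;

    struct ChunkedVec<T> {
        chunks: Vec<Vec<T>>, // each inner Vec holds up to CHUNK_SIZE items
        len: usize,
    }

    impl<T> ChunkedVec<T> {
        fn push(&mut self, value: T) -> usize {
            if self.len % CHUNK_SIZE == 0 {
                // Growth allocates one new chunk; existing data never moves,
                // so there is no huge reallocate-and-copy stall.
                self.chunks.push(Vec::with_capacity(CHUNK_SIZE));
            }
            self.chunks.last_mut().unwrap().push(value);
            self.len += 1;
            self.len - 1
        }

        fn get(&self, index: usize) -> Option<&T> {
            // The price: an extra indirection on every access.
            self.chunks.get(index / CHUNK_SIZE)?.get(index % CHUNK_SIZE)
        }
    }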

The second point is more or less a repeat of the first: Imagine you have 2 billion ordinary objects and suddenly a single one at the beginning of the vector is removed by GC: the GC now has to move every object remaining in the vector down a step to make the vector dense again. This is something I cannot really do anything about: I can make it less frequent by introducing a "minor GC", but eventually a "major GC" must happen and something like this can then be experienced. I can only hope that this sort of thing is rare.

The alternative would be to do a "swap to tail", where the last item in the vector is moved to take the removed item's place. But that then means that linear access is no longer guaranteed. It also plays havoc with how our GC is implemented, but that's kind of a side point.
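
On a plain Vec the two options look like this (illustrative only; the real GC compacts many slots in one pass and rewrites the affected indexes):

    // Option 1: keep the vector dense and in creation order. Everything
    // after the dead slot shifts down one step: O(n) moves.
    fn compact_remove<T>(heap_vec: &mut Vec<T>, dead_index: usize) {
        let _ = heap_vec.remove(dead_index);
    }

    // Option 2: O(1), but the last item now lives at `dead_index`, so
    // creation-order linearity is lost and that item's index has to be
    // rewritten everywhere it is referenced.
    fn swap_to_tail_remove<T>(heap_vec: &mut Vec<T>, dead_index: usize) {
        let _ = heap_vec.swap_remove(dead_index);
    }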

Software architecture is full of trade-offs :) I'm just hoping that the ones we've made will prove to make sense.

dinfuehr
> The GC now has to move every object remaining in the vector down a step to make the vector dense again

Is there something which forces you to compact everything here? Or could you do what most GCs do and track that free entry in a free list?

aapoalas
I would have to keep a free list per vector, and there are a lot of those, and it would defeat the point of keeping data packed and temporally colocated: I want to ensure that data that was created together stays together and only ever gets more compact as it grows older.

Filling in empty slots would mean that likely unrelated data comes and pollutes the cache for the old data :(

dinfuehr
I see. Thanks for the answers btw!

I get your point, but a few gaps here and there likely don't matter at all for performance. At least it's a lot better than making everything super compact all the time. Assuming you are splitting vectors into chunks at some point: in such a world you could choose to get rid of chunks with a lot of gaps and move the remaining entries into other chunks. At that point you really have a regular GC.

And the free list could be stored in the vector itself. E.g. if an entry is empty, it would store the index of the next free entry. So all you need is a single head/tail index for each vector.
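
Roughly what I have in mind (made-up types, just to illustrate):

    enum Slot<T> {
        Occupied(T),
        // An empty entry stores the index of the next free entry, if any.
        Free { next_free: Option<u32> },
    }

    struct SlotVec<T> {
        slots: Vec<Slot<T>>,
        free_head: Option<u32>, // the single head index per vector
    }

    impl<T> SlotVec<T> {
        fn insert(&mut self, value: T) -> u32 {
            match self.free_head {
                // Reuse a gap: pop the head of the free list.
                Some(index) => {
                    if let Slot::Free { next_free } = &self.slots[index as usize] {
                        self.free_head = *next_free;
                    }
                    self.slots[index as usize] = Slot::Occupied(value);
                    index
                }
                // No gaps: grow the vector as before.
                None => {
                    self.slots.push(Slot::Occupied(value));
                    (self.slots.len() - 1) as u32
                }
            }
        }

        fn remove(&mut self, index: u32) {
            self.slots[index as usize] = Slot::Free { next_free: self.free_head };
            self.free_head = Some(index);
        }
    }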

I also wonder how you handle pointers that could point into one of many vectors. E.g. a field could easily point either to an object or an array. Do you plan to pack the vector ID into the 32-bit value? If so, wouldn't there be a lot of dispatch like this as well:

    if ((index & VECTOR_ID_MASK) == OBJECTS_VECTOR_ID) { return objects[index & VECTOR_INDEX_MASK]; } else { /* ... */ }

I hope it's clear what I mean by this.

aapoalas
Thank you for the discussion!

A few gaps won't matter, and that to me speaks for a split between major and minor GC making sense. However, I'm not really sold on that meaning a free list makes sense: for one, if I split the heap vectors into single-value parts, then holding free-slot data in any of them becomes somewhat complicated. Hence, at least for the foreseeable future, I'm 100% in on the compacted heap vectors idea :) Time will tell whether the aggressive compacting makes sense or not.

Our JavaScript Values are the full 8 bytes (yes, this is large and it pains me, but it does give us all integers on stack, most doubles on stack, and up to 7 bytes of string data on stack), so a field that can point to any kind of object stores a byte tag and a u32 index. I might pack this down to 1+3 bytes or so, at the cost of supporting a smaller maximum number of objects in the engine. The JS Value itself would still probably remain 8 bytes because of the stack-data benefits.
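
Roughly speaking (a simplified sketch, not our exact definitions), such a field looks like:

    // A heap reference that can point to any kind of object: the enum
    // discriminant is the byte tag, the payload is the u32 index into the
    // corresponding heap vector.
    enum Object {
        Ordinary(u32),
        Array(u32),
        DataView(u32),
        // ...one variant per object-kind vector
    }
    // In a plain Rust enum like this, one tag byte plus a u32 payload rounds
    // up to 8 bytes due to alignment; packing down to 1 + 3 bytes would cap
    // each vector at roughly 16 million entries.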

There is indeed dynamic dispatching through match statements, though it generally happens at the specification method level. E.g. a specification method to get a property from an object will match on the tag and then dispatch to a concrete method with the index as a parameter. The indexes are typed as well, so from that point on we statically know we're dealing with e.g. an Array.
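
In sketch form (hypothetical helper names, building on the Object enum above; Heap, PropertyKey and Value stand in for the real types):

    // Dispatch happens once, at the specification method boundary. After
    // the match, the index is statically typed and each concrete method
    // knows exactly which heap vector it is working with.
    fn internal_get(heap: &Heap, object: Object, key: PropertyKey) -> Value {
        match object {
            Object::Ordinary(index) => ordinary_object_get(heap, index, key),
            Object::Array(index) => array_get(heap, index, key),
            Object::DataView(index) => data_view_get(heap, index, key),
        }
    }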

So there is dynamic dispatch, yes, but we try to eliminate it at the earliest opportunity. We probably still have more of it than a traditional engine would, though: a traditional engine keeps the tag on the heap and does some dynamic dispatch based on that, but at least its data lookup isn't based on dispatch.