The graphics stack continues to be one of the biggest bottlenecks in portability. One day I realized that WebAssembly (Wasm) actually held the solution to the madness. It’s runnable anywhere, embeddable into anything, and performant enough for real-time graphics. So I quit my job and dove into the adventure of creating a portable, embeddable WASM-based graphics framework from the ground up: high-level enough for app developers to easily make whatever graphics they want, and low-level enough to take full advantage of the GPU and everything else needed for a high-performance application.
I call it Renderlet to emphasize the embeddable aspect — you can make self-contained graphics modules that do just what you want, connect them together, and make them run on anything or in anything with trivial interop.
If you think of how Unity made it easy for devs to build cross-platform games, the idea is to do the same thing for all visual applications.
Somewhere along the way I got into YC as a solo founder (!) but mostly I’ve been heads-down building this thing for the last 6 months. It’s not quite ready for an open alpha release, but it’s close—close enough that I’m ready to write about it, show it off, and start getting feedback. This is the thing I dreamed of as an application developer, and I want to know what you think!
When Rive open-sourced their 2D vector engine and made a splash on HN a couple weeks ago (https://news.ycombinator.com/item?id=39766893), I was intrigued. Rive’s renderer is built as a higher-level 2D API similar to SVG, whereas the Wander renderer (the open-source runtime part of Renderlet) exposes a lower-level 3D API over the GPU. Could Renderlet use its GPU backend to run the Rive Renderer library, enabling any 3D app to have a 2D vector backend? Yes it can - I implemented it!
You can see it working here: https://vimeo.com/929416955 and there’s a deep technical dive here: https://github.com/renderlet/wander/wiki/Using-renderlet-wit.... The code for my runtime Wasm Renderer (a.k.a. Wander) is here: https://github.com/renderlet/wander.
I’ll come back and do a proper Show HN or Launch HN when the compiler is ready for anyone to use and I have the integration working on all platforms, but I hope this is interesting enough to take a look at now. I want to hear what you think of this!
Looks like it supports geometry and textures now, any plans to support shaders?
Was looking at several different approaches to this, one of which would be cross-compiling wasm to spir-v. Most likely will expose a higher-level shader API (think shadertoy) and have wander compile to the platform backend. Also will be able to run WGSL shaders directly through Wasm with wasi-gfx support.
Deno seems to work on that idea [0], but having a WASI-like standard would be better, of course.
[0] https://github.com/deno-windowing
PS: How much work was it to "port" the Rive renderer? Would be great to see a blog post or similar about how you approached that and about any difficulties on the way :)
What we do on top of that is compile the graphics code to wasm and provide a well-defined interface around it, so it can run/work inside any application.
Also a simple wasi-audio API is needed (preferably something less overengineered than WebAudio; just a simple sample-streaming API would be perfect).
Getting rive-renderer working was not hard because in the demo it's running on the host side, not in Wasm yet, although compiling for Windows/DX11 took some minor changes. Getting it fully working in Wasm outside of the browser looks to be non-trivial but doable, and will likely require upstream changes.
But why wouldn't I "just" use Unity?
I agree with you. Nobody cares about the platform specific details anymore, and people are willing to pay a little bit of money for an end-all-be-all middleware. I have gone my whole life not paying attention to a single Apple-specific API, and every single time, someone has written a better, more robust, cross-platform abstraction.
But Unity is already this middleware. I already can make a whole art application on top of Unity (or Unreal). People do. Sometimes people build whole platforms on top of Unity and are successful (Niantic) and some are not (Improbable). You're one guy. You are promising to create a whole game engine - you're going to get hung up on not using the word game engine, but that is the intellectually honest term, it is a game engine - which a lot of people 1,000x better capitalized than you have promised, and those people have been unable to reach parity with Unity after years of product development. So while I want you to succeed, I feel like a lot of Y Combinator guys have this "We make no mistakes, especially we do not make strategic mistakes" attitude. It's going to be a long 3 years!
It comes with physics engines, telemetry, networking, a c# runtime and probably even more.
I don’t think that any of the adobe suite would ever be built in unity bc why do they need to ship a physics engine with their photo editor.
Not to mention that unity is backed by an, imo, untrustworthy company that's obviously willing to change its pricing structure on a dime, and retroactively.
People can use Unity to build games and non-games. I personally don't think it fits a lot of different use-cases or application models and that it tends to be most successful in specific gaming verticals, but if it works well for you, by all means use it!
I'm strategically betting both on the lines blurring between what is viewed as a game and what is not, as well as on developers needing a friendlier, more flexible way of building this kind of interactive content. I'm by no means under the illusion that strategic mistakes won't be made, or that this won't be a 10-year+ journey - realistically many (most?) successful companies have a very nonlinear path, including Unity themselves.
Have a look into supporting Ruffle/SWF content, Lottie, etc.
Also, for a renderer there is one by Mozilla called Pathfinder: https://github.com/servo/pathfinder
I know you narrowly mean "rigidbody physics for the purpose of videogames." But Adobe did ship a physics engine with their photo editor! They discontinued their "3D" support, and raytracing is most definitely physics with a capital P, but they were shipping it for a long time. If you have an even more normal definition of physics, to include optical physics, well they have a CV solution for many features like camera RAW processing, removing distortion, etc.
> It comes with physics engines, telemetry, networking, a c# runtime and probably even more.
Because that is what people need to make multimedia applications.
The primary issue with things that include their own Wasm env is that moving that system to the web doesn't work, because you can't run wasm in wasm.
That's exactly the goal - one wasm binary with defined input/outputs that can be loaded either in a browser or running in any app outside of a browser.
How did you overcome the shared array buffer accessibility problem on safari vs access to ad networks which is important for online games?
I call them single-threaded vs regular builds.
Hope to help make sure there's a diverse set of rendering kernels for everyone.
Edited: Link to our work on making portable 3D graphics on the web with an editor. https://editor.godotengine.org/releases/latest/
We were impressed by your work, https://github.com/rive-app and https://graphite.rs/
From an Adobe perspective - it doesn't. If you go to photoshop.adobe.com in Safari, you will see the answer. Things can work in a single-threaded build, but that is not production code.
I can't speak for the Safari team, but I do see this getting traction soon with the current priorities for Wasm. Seems like now the most common answer is just to use Chrome.
For context, my team has spent the past few years porting Unreal Engine 5 to WebGPU and WebAssembly - we have a multi-threaded renderer as well as an asset streaming system that fetches in at runtime asynchronously (as needed) so users don't need to download an entire game/app upfront. This also frees up needing to have the whole application in memory at once. We've also built out a whole hosting platform and backend for developers to deploy their projects to online.
You can learn more about SimplyStream here:
Website: https://simplystream.com/
Blog post: https://simplystream.com/create/blog/latest
I'm in a rush so I can't look too closely now, but I have a few questions (and please forgive any stupid questions, I'm not a graphics dev, just a hobbyist):
What's the runtime like? Is there an event loop driving the rendering? (who calls the `render` on each frame? are there hooks into that? ) FFI story? Who owns the window pointer?
I'm interested in audio plugins, and VSTs (etc) have a lot of constraints on what can be done around event loops and window management. JUCE is pretty much the de-facto solution there, but it's pretty old and feels crufty.
You could run a browser and record all the page faults, then remove all the code you didn't run.
https://fgiesen.wordpress.com/2012/04/08/metaprogramming-for...
The host app owns the event loop. I don't foresee that changing even once we re-architect around WebGPU (allowing the Wasm guest to control shaders), as the host app is responsible for "driving" the render tree, including passing in state (like a timer used for animations). The host app owns the window pointer, as renderlets are always designed to be hosted in an environment (either an app or a browser). Open to feedback on this, though.
FFI is coming with the C API soon!
I don't know much about audio but I see a ton of parallels - well-defined data flow across a set of components running real-time, arbitrary code. Simulations also come to mind.
I was sad when UE4 sunset HTML5 support, and glad to see a spiritual successor! There are a lot of parallels to other large in-browser apps in terms of load time for games - not just for the content but the size of game code itself. Are you able to use streaming compilation or some sort of plugin model?
AFAIK https://wgpu.rs/ makes this possible with Rust.
---
But this is very different than what was demonstrated in the vimeo video.
For anyone else that ends up here, I also had to click the radio button for "Map mode" too.
[0] https://docs.google.com/document/d/1peUSMsvFGvqD5yKh3GprskLC...
If you want more, use the capability built into LV2, AU and VST3 for out-of-process GUIs for a plugin (LV2 has had this for more than a decade). CLAP has, I think, abandoned plans to support this based on lack of uptake elsewhere.
I'd hardly call JUCE "pretty old", but then I'm a lot older than JUCE. And it's likely only crufty if you're more used to other styles of GUI toolkits; in terms of the "regular" desktop GUI toolkits, it's really not bad at all.
Worth noting that their original GPU backend was Skia, and now they are retooling around Flutter GPU (Impeller)[0], which is kind of designed similarly as an abstract rendering interface over platform-specific GPU APIs.
https://docs.google.com/document/u/0/d/1peUSMsvFGvqD5yKh3Gpr...
I think the ideal in that article is that people can write components in whatever languages they want, and when they compile to WASM, they can all interoperate. It reminds me of all of those compile-to-Javascript languages for writing micro-frontends, although there is not as much interoperability from a React boundary to say, a ClojureScript boundary.
By the way, what are you building as a solo founder for YC? Is it related to this project? For this project, I'm curious to see how exactly WASM interoperates with the GPU directly, bypassing the platform specific APIs. Do you still have to write GPU-specific parts for each of the GPU manufacturers? I wonder if there would be an open standard called WASM-GPU in the future that abstracts over these but doesn't necessarily touch any of the OS directly.
WebGPU is pretty far behind what AAA games are using even as of 6 years ago. There's extra overhead and security in the WebGPU spec that AAA games do not want. Browsers do not lend themselves to downloading 300gb of assets.
Additionally, indie devs aren't using Steam for the technical capabilities. It's purely about marketshare. Video games are a highly saturated market. The users are all on Steam, getting their recommendations from Steam, and buying games in Steam sales. Hence all the indie developers publish to Steam. I don't see a web browser being appealing as a platform, because there's no way for developers to advertise to users.
That's also only indie games. AAA games use their own launchers, because they don't _need_ the discoverability from being on Steam. So they don't, and avoid the fees. If anything users _want_ the Steam monopoly, because they like the platform, and hate the walled garden launchers from AAA companies.
EDIT: As a concrete example of the type of problems WASM for games faces, see this issue we discovered (you can't unload memory after you've loaded it, meaning you can never save memory by dropping asset data after uploading assets to the GPU, unless you load your assets in a very specific, otherwise suboptimal sequence): https://github.com/bevyengine/bevy/issues/12057#issuecomment...
(I work on high end rendering features for the Bevy game engine https://bevyengine.org, and have extensive experience with WebGPU)
Currently it is the low level, cross platform layer that is the most complex and the biggest hurdle towards making a game engine viable. If it wasn't so insanely complex, and the technical barrier towards making your own engine is reduced, the tired cliche of "don't build an engine" wouldn't hold as much weight, and it opens the doors to building a bespoke, fit for purpose engine for every game you create. Don't underestimate what an individual or small teams can produce if they are operating on a solid platform that facilitates a rich ecosystem of tools.
I see a banner mentioning "Rive for Game UI" which is great to see but really the whole platform should be a Flash replacement. It shouldn't just be for doing UIs in games or animated content, it could be used to make full 2D games. Flash was so popular because of its versatility. There were middleware taking Flash content directly into game UIs (ScaleForm) and there is middleware supporting WebKit for game UIs (Coherent labs). Both of these have extensive scripting support (respectively ActionScript and JavaScript) allowing UI designers and coders to create reactive and flexible content, even procedural content like lists of things etc.
By the way, the only way from mobile to get to the downloads link on the main site is only behind the online editor login. I get why but I thought at first that the Editor was online only because of that.
To me, this reads like the intersection of "Web Components as Wasm" and "The Browser as an OS" - almost something analogous to WASI as browser APIs that are delivered via Wasm ABI instead of JS/WebIDL. It's an interesting take, and as long as it can operate alongside existing code, I'm all for that.
There are strong parallels to what we're building - small modules of Wasm graphics code that can interoperate across a common interface.
Check the repo for the GPU integration - it's like a super trimmed down version of wgpu, where graphics data is copied out of Wasm linear memory and a host specific API (WebGPU/OpenGL/DirectX) takes care of the upload to the GPU. There is a wasi-webgpu WebAssembly L1 proposal that I am involved with in the works, driven by Mendy Berger, and at some point all of this will be tooled on top of that with wgpu as a backend.
For renderlet the company, the goal is to build developer tools that make it easy to build renderlets and these kinds of applications without having to write raw Wasm code. The meta-compiler in the video is the first step in that direction! The runtime itself will always be open-source.
I couldn't agree more. My goal is not to simply build "a better game engine", but to make this kind of low-level tech accessible at a higher level and with much better dev tools to a broader class of developers and applications
> Don't underestimate what an individual or small teams can produce if they are operating on a solid platform
This gets into my motivations for building a company - larger companies have the resources to build moats, but often can't quickly realign themselves to go after novel technical opportunities. It's not either / or - both models exist for very valid reasons.
I agree that the feature set around WebGPU is constrained and becoming outdated tech compared to native platforms. It shouldn't have taken this long just to get compute shaders into a browser, but here we are. The lack of programmable mesh pipelines is a barrier for a lot of games, and I know that's just the beginning.
For memory, architecturally, that's why I'm treating wander as a tree of nodes, each containing Wasm functions - everything gets its own stack, and there is a strategy for managing Store sizes in wasmtime. Deleting a node is the only way to free its memory, versus a singular application compiled to Wasm with one stack/heap/etc. It's more of a data-driven visualization framework than a full engine like Bevy, which I still think is one of the most elegant ways to build browser-based games and 3D environments.
Have you seen Kha by any chance? It has similar goals. I find it quite awesome, but it won't gain mass adoption for a bunch of reasons. https://github.com/Kode/Kha
Someone built an immediate mode renderer on top https://github.com/armory3d/zui, which is utilised by ArmorPaint https://armorpaint.org. I also use Zui for my own bespoke 2D game engine.
I find this tech and tooling really quite amazing (just look at how little source code Zui has) given just how small the ecosystem around it is. I think Kha really illustrates what is achievable if the lower levels have robust but simple APIs, just exposing the bare minimum as a standard for others to build upon. I really suggest taking a look at the graphics2 (2D canvas-like) API.
For the kind of project I work on (mostly 2D games), I think it would be really awesome if your framework also supported low-level audio, and a variety of inputs such as keyboards, mice, and gamepads. If it also had decent text rendering support it would basically be my dream library/framework.
On average, running the Wasm guest code is about 80% of the speed of a native build I use. That is both dependent on what is running in Wasm and not a very scientific measurement - wander needs better benchmarks. We think that performance profile is sufficient for anything that needs a GPU except the highest-performance 3D games.
It’s still just experimental (I’m waiting for some upstream Dart fixes to land around WASM FFI, and shared memory support would be nice in Flutter too) but I think it’s promising. Bundle size is a bit of an issue at the moment too.
Regarding file limits, stay tuned for some announcements there.
Regarding Flash, yep that’s where we’re headed (and most of the use cases on the site should support that). We have some big features launching this year like audio, fluid layouts, and scripting. The banner was added because we’ve been attending game conferences and the game ui market segment is something we’re highlighting right now. Game UI is in dire need of better tools and it’s a market segment we can quickly lead with our current feature set.
The renderlet is a bundle of WebAssembly code that handles data flow for graphics objects. Input is just function parameters; output writes serialized data to a specific place in Wasm linear memory. With the Wasm Component Model, in the future it will be able to use much more complex types as input and output.
LoadFromFile() - Instantiates the Wasm module
Render() - runs the code in the module, wander uploads the output data to the GPU
Functions on the render tree - do things with the uploaded GPU data - like bind a texture to a slot, or ID3D11DeviceContext::Draw, for example.
There's some nuance about shading. In the current version, the host app is still responsible for attaching a shader, so there should be no issue using the data in a deferred shading pipeline. In the future, the renderlet needs to be able to attach its own shaders, in which case it would have to be configured to use a host app's deferred shading pipeline. I think it is possible, but complicated, to build an API for this, where the host and then the renderlet are both involved in a lighting pass.
Of course, if all shading is handled within the renderlet, it entirely sidesteps the concept of deferred shading, and this becomes an easier problem to solve.
The state of shared memory for Wasm is not great, although raw SharedArrayBuffers work ok in a browser for running multiple guests. Getting multi-memory properly working through llvm is likely a better solution.
We've got a bundle size issue as well, even with -O3. I thought it was due to the amount of templated glm SIMD code we run, but now I'm convinced it's deeper than that, into Emscripten. Haven't been able to look into it deeply yet.
As someone who's been using the Rend3/WGPU/Vulkan stack for over three years, I'd like to see some of these renderer projects ship something close to a finished product. We have too many half-finished back ends. I encourage people who want to write engines to get behind one of the existing projects and push.
Which is quite different than a renderer that targets wasm/webgpu. I think super highly of wgpu and have used it a fair amount.
I just interpreted Renderlet to have different goals.
On the "backend", we will switch fully to wgpu as we retool around wasi-webgpu. I explicitly don't want to rebuild a project like wgpu, and everybody should commit upstream to that - we will likely have stuff to upstream as well.
AFAICT (I was peripherally involved with one of the companies that did this work), this really went nowhere, even though it offered "play this new game from any java-equipped browser".
Text / fonts is very much on the roadmap! For input and audio I would have to think through the scope.
That's basically correct, although there is also a cross-platform runtime called HashLink, but it is unsupported by Kha.
Yes, I think JUCE is great, it's very well made, but it drives you into a very narrow path of either using everything in the library or leaving you to fend for yourself (which I admit may be a normal experience for C++ devs). For instance, the ValueTrees frequently used for UI state are very powerful, but they're not very type safe (or thread safe), and they feel clunky compared to more contemporary reactive state management patterns like signals.
I'm sure folks who use ValueTrees are happy, but I don't see much advancement to that pattern being shared in the JUCE forums. If y'all have some better tricks over in the Ardour project I'd love to know! (BTW, I'm a fan of y'all's work. I really enjoyed reading some of the development resources, like the essay on handling time [0]).
Can you elaborate on what the “graphics code” might be in this case? Many Rust graphics engines seem to cover the same ground by having asset loading cfg’d on the target (wasm vs native). What does your project provide that a dev wouldn’t get with Rust + a wasm compatible engine?
Keep it up! Bookmarked.
...but this..?
> Graphics data and code can be developed together in the same environment, packaged together into a WebAssembly module called a renderlet, and rendered onto any canvas. With WebAssembly, we compile graphics code to portable bytecode that allows it to safely run on any processor and GPU
So what is a renderlet?
> The renderlet compiler is currently in closed preview - please contact us for more information.
Hm... what this seems to be is a C++ library that lets you take compiled WASM and run it to generate and render graphics.
Which, I think it is fair to say, it's surprising, because you can already render graphics using C++.
Only here, you can render graphics using an external WASM binary.
So, why?
Specifically, if you're already using C++:
1) Why use WASM?
2) Why use renderlet instead of webGPU, which is already a high level cross platform abstraction including shader definitions?
What is this even for?
> wander is designed to be a rendering engine for any high-performance application. It primarily is designed as the runtime to run renderlet bundles
...but, why would I use a renderlet, if I already need to be writing C++?
I. Get. It. A cross platform GPU accelerated rendering library you can use from any platform / browser / app would be great. ...but that is not what this is.
This is a C++ library runtime that you can use to run graphics in any circumstance where you can currently use C++.
...but, in circumstances where I can use C++, I have many other options for rendering graphics.
Look at the workflow:
Rendering code -> Renderlet compiler -> renderlet binary
App -> load renderlet binary -> start renderlet runtime -> execute binary on runtime -> rendered
vs. App -> rendering code (WebGPU) -> rendered
or, if you're writing a new cross-platform API over the top of webGPU: App -> Fancy API -> WebGPU -> rendered
I had a good read of the docs, but I honestly fail to see how this is more useful than just having a C++ library that renders graphics cross platform like SDL. Shaders? Well, we also already have a good cross platform rendering library in webGPU; it already runs on desktop and browsers (maybe even some mobile devices); it already has a cross platform shader pipeline; it's already usable from C++.
I'm not going to deny the webGPU API is kind of frustrating to use, and the tooling for building WASM binaries is too, but... it does actually exist.
Is this like a 'alternative to webGPU' with a different API / easy mode tooling?
...or, have I missed it completely and there's something more to this?
No, the goal is not to create a C++ API to give you GPU functions.
The C++ API for wander is used to embed the WebAssembly module of graphics code into the application. The API footprint is very small - load a file, pass parameters to it, iterate through the tree it produces.
This could be viewed as logically equivalent to programmatically loading a flash/swf file. Or similar to what Rive has built with a .riv, although this is static content, not code.
> 1) Why use WASM?
You're loading arbitrary, third-party code into an app - that is the renderlet. The benefit is to have a sandboxed environment to run code to put data on the GPU.
2) Why use renderlet instead of webGPU, which is already a high level cross platform abstraction including shader definitions?
WebGPU is a low-level API. If you are a graphics programmer, and want to build an app around WebGPU, go for it! A renderlet is more of a graphics plugin system than an entire first-party app.
> The renderlet compiler is currently in closed preview - please contact us for more information.
This is the system to build the renderlet. This is not writing raw C++ code to talk to WebGPU; these can be higher-level functions (build a grid, perform a geometric extrusion, generate a gradient) - you can see in the video it is a yaml specification. The compiler generates the necessary commands, vertex buffers, textures, etc. (and soon, shaders) to do this, and builds a Wasm module out of it.
> Is this like a 'alternative to webGPU' with a different API / easy mode tooling?
I certainly wouldn't describe it as an alternative to WebGPU, but easy(er) tooling to build graphics, yes.
> What is the use-case for 'I've compiled a part of my application only into a cross platform binary renderlet and I can now run that cross platform ... after I've compiled the rest of my application into a platform specific binary for the platform I'm running it on?'
Let's take an example - Temporal Anti-Aliasing. There are libraries that exist to implement this, or you can implement it through raw code. This requires structural changes to your pipeline - to your render targets, additional outputs to your shaders, running additional shaders, etc. Wouldn't it be nice to easily connect a module to your graphics pipeline that contains the code for this, and the shader stages, and works across platforms and graphics APIs, with data-driven configuration? That is the vision.
> ... rest of your application into WASM/platform native code... is that not strange? It seems strange to me
There is not really such a thing as a standalone Wasm application. It has seen great success as a data-driven plugin model. In a browser, it is hosted with / interacts with JavaScript. Even built for pure WASI, as a standalone app where everything is compiled into a single module, there is still a runtime/host environment.
Does that help clarify?
Just because it happens, doesn't mean it makes sense.
Anyway, people write their own game engines, and programming languages for game engines, because it is intellectually stimulating to do so, and something you spend 100h/wk to yield 1h of gameplay is still giving you more gameplay than something boring you spend 0h/wk on.
Then, the people who use those engines you are naming, they end up porting to Unity anyway. If you want to deploy on iOS and Switch with one codebase, it is the only game in town. And that's sometimes 60% of revenue.
> Don't underestimate what an individual or small teams can produce if they are operating on a solid platform that facilitates a rich ecosystem of tools.
Unity fits this bill exactly. I too want more competition. But in the real world I live in, if someone were to ask me, "what solid platform should I choose to make my multimedia application, as a small team, that also has a rich ecosystem of tools, and will enable me to make pretty much anything I can think of?" I would say, use Unity. Because I want them to succeed.
I see.
So this is basically flash?
A high level API to build binary application bundles (aka .swf files, ie. renderlets) and a runtime that lets you execute arbitrary applications in a sandbox.
renderlet = .swf file
wander = flash runtime
renderlet compiler = magic sauce, macromedia flash editor
yeah?
> Let's take an example - Temporal Anti-Aliasing. There are libraries that exist to implement this, or you can implement it through raw code.
Mhm. You can certainly do it in a cross platform way using webGPU, but I suppose I can see the vision of 'just download this random binary and it'll add SMAA' but it sounds a lot like "and then we'll have a marketplace where people can buy and sell GPU plugins" or "if you're building a web browser" rather than "and this is something that is useful to someone developing a visualization application from scratch".
The majority of these features could exist with just a C++ library and no requirement to 'pre-compile' some of your code into a renderlet... hosting external arbitrary 3rd party binaries in your application seems... niche.
Really, the only reason you would normally ever not just do it from source as a monolithic part of your application was if you didn't have the source code for some reason (eg. because you bought it as a WASM binary from someone).
Smells like Flash, and I'm not sure I like that, but I guess I can see the vision now, thanks for explaining.
https://www.youtube.com/watch?v=CkV-nWFXvbs
Disclaimer: I currently work at a company in the WebAssembly space that was involved with this conference
https://news.ycombinator.com/item?id=33452920
You can get your game/app ideas across far faster by building skills in Unreal/Unity than by using some bespoke little engine. Collaborate with more people, too.
Someone could definitely build another "last" cross-platform application development toolkit with WebAssembly right now, and have it actually work reasonably well, and be slightly more desirable than Flutter (and it could absolutely use Flutter/Skia underneath), since you could build without Dart (for those who don't necessarily prefer Dart).
WebGL and WebGPU are mostly fine for visualization and ecommerce, and that is about it.
Ah, and Shadertoy-like demos as well, probably their biggest use case.
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
https://www.khronos.org/webgl/public-mailing-list/public_web...
We're successfully using a Wasm build of HarfBuzz to render text in a web-based design tool with relatively high usage, so there should be no issues integrating it :)
To over-simplify, the Component Model manages language interop, and WIT constrains the boundaries with interfaces.
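To make that concrete, a WIT file is just a typed contract over the Wasm boundary; a hypothetical sketch (the package, interface, and function names here are invented for illustration, not taken from renderlet):

```wit
// Hypothetical WIT sketch; all names are invented.
package example:graphics;

interface renderer {
  record vertex {
    x: f32,
    y: f32,
    z: f32,
  }

  // The host calls into the component to emit geometry.
  draw: func(vertices: list<vertex>) -> result<_, string>;
}

world renderlet {
  export renderer;
}
```

The Component Model then generates the bindings on both sides of that boundary, so the host and the module can be written in different languages without hand-rolled glue.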
IMO the problem here is defining a 90% solution for most window, tab, button, etc. management, then building embeddings in Qt, Flutter/Skia, and other lower-level engines. Getting a good cross-platform way of doing data passing, triggering re-renders, and serializing window state is probably the meat of the interesting work.
On top of that, you really need great UX. This is normally where projects fall short -- why should I use this solution instead of something like Tauri[2], which is excellent, or Electron?
[0]: https://github.com/WebAssembly/component-model/blob/main/des...
[1]: https://github.com/WebAssembly/component-model/blob/main/des...
[2]: https://tauri.app/
Recently I investigated a few Wasm runtimes and honestly could not manage to achieve this. The only suggestion I got was to load a bunch of packages using the Termux package manager and compile and run example projects in a shell environment on Android.
I would appreciate a link to some project that produces an APK which (as part of its work) calls a Wasm function in non-interpreted mode on Android (ARM/x86).
And besides that point, what is wrong with the "kid" stuff? A bunch of masterpieces have been created in such kid stuff. Celeste, Hotline Miami, and Dead Cells come to mind. I can't wait for the day that actual kids are building their own cross-platform game engines on a better tech foundation.
Decent support there would be a differentiator in my view.
But besides that point, the very reason why many games are ported from their niche library to Unity or Unreal is mostly just cross-platform support, not because the game creator has a preference for Unity or Unreal. They are forced into it through lack of choice if they want cross platform. If Love2D, Phaser, Flixel, or any other niche 2D game library had an easy way to target consoles, they would get a whole lot more use; but they don't, because the lower levels are extremely complex and engine/framework/library developers can't support them. WebGPU appears to offer a path forward in that regard.
Which is how Steam gets away with charging 30%. Devs yell at Apple because they can say Apple is overcharging thanks to its "monopoly"; they can't pin a monopoly on Steam.
With Steam, devs recognize that retailers get paid for shelf space, both as a percentage of 'retail price' the buyer pays above wholesale, and as literal payments for shelf space, inclusion in weekly mailings, posters on the windows, and more.
That these models worked like this long before digital distribution, and still work like this on platforms with no technical barrier to creating competing stores, gets ignored.
There are many reasons to complain, but 30% surely isn't it, as much as they make it out to be.
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
https://news.microsoft.com/2001/10/22/massive-industry-and-d...
"The EM intermediate language is a family of intermediate languages created to facilitate the production of portable compilers."
"EM is a real programming language and could be implemented in hardware; a number of the language front-ends have libraries implemented in EM assembly language.", namely C, Pascal, Modula-2, Occam, and BASIC.
https://en.wikipedia.org/wiki/EM_intermediate_language
"The high-level instruction set (called TIMI for "Technology Independent Machine Interface" by IBM), allows application programs to take advantage of advances in hardware and software without recompilation"
https://en.wikipedia.org/wiki/IBM_AS/400#Technology_Independ...
WASM is just another take on this.
https://www.linkedin.com/pulse/what-ctscls-fcl-bcl-crl-net-f...
For instance, I grab individual elements of the UI all the time for sharing screenshots on macOS (like an individual menu, in a transparent PNG with drop shadow). I have several text shortcuts that work everywhere except in Electron apps. Or, for example, how would accessibility work?
Like I said, I think a future where a cross platform open source web stack becomes the standard for UI development is kind of inevitable. I just hope it’s a great stack, with the best ideas from Mac, Windows, Gnome, KDE, etc, and not a lowest common denominator, which is usually the case.
Anyway, this is extremely cool and I’ll keep an eye on the project.
But that's the core of the issue. In one case (Steam), the developers pay 30% because they estimate the services they are getting from Steam are worth it; in Apple's case, the devs pay because they have to.
The problem is not the business model or the % cut Apple takes; the problem is that the business relies on monopolistic behavior. The solution would be simple: decouple iOS the platform from the App Store the service. If the App Store is really worth a 30% cut, the market would re-converge to that price.
> With Steam, devs recognize that retailers get paid for shelf space, both as a percentage of 'retail price' the buyer pays above wholesale, and as literal payments for shelf space, inclusion in weekly mailings, posters on the windows, and more.
> That these models worked like this long before digital distribution, and still work like this on platforms with no technical barrier to creating competing stores, gets ignored.
I would argue that digital distribution platforms are fundamentally different from brick-and-mortar retailers.
For one, the marginal cost of an app on the store versus space on the shelves is different. My understanding is that what actually drives the cost of shelf space is competition between product manufacturers, and the price setting is closer to an auction than a set price. Nobody would have an issue if all Apple was doing was selling promotion/ad spots on the App Store.
Also, Apple's share of digital distribution is much larger than that of any single retail chain in the US, giving it extreme pricing power.
For the rive-renderer / 2D integration, it is going to be a much longer path to get working in a browser together with wander.
The problem isn't the tech to run the game, it's the marketplace - how do you actually sell the games without losing the huge customer base that buys through Steam and platform-specific stores? If you're popular enough you probably still get a lot of customers, but I doubt it's anywhere near what Steam does for you.
Oh, piracy and anti-cheat are also a problem, because you just can't have a AAA game without Denuvo and kernel backdoors anymore (greetings to the Apex Legends players out there!).
There are probably still a few issues that would have to be solved on the game engine side, but I'm willing to say that the game engine is not the problem with browser-based games.
Indeed.
> They are forced into it through lack of choice if they want cross platform... because the lower levels are extremely complex
That is what I am saying. There are countless indie game engines of great repute, but because they are "1 guy," they cannot reach feature parity with Unity. They never will.
The Blizzard that developed the Overwatch engine had immense experience and a notoriously frugal (in terms of employee pay) culture. It still cost them about $100m and many years to develop a great engine for 3 platforms (Windows/Xbox, Switch, PS5). What hope does Godot have with $8m, or MonoGame with kopeks?
Nobody can vibe their way past the math problem of multi-platform game engines.
This is only controversial because Unity has received so much ill will, and because the indie games business and social media are very sensitive to vibes.
> WebGPU appears to offer a path forward in that regard [like supporting the Nintendo Switch].
While I would love for this to be true, it is significantly more aspirational than saying that because of Game Porting Toolkit, DirectX offers "a path forward" on macOS.