AMA ;-)
Can you say how it's similar to, and how it differs from, superficially similar-sounding work?
(1) https://github.com/linebender/vello , dual Apache/MIT, by Raph Levien et al
(2) https://sluglibrary.com/ , proprietary, by Eric Lengyel (Terathon)
The first priority was to solve paths to pixels efficiently, including text (50,000 glyphs @ 60fps).
Gradients will be added when time allows, as I have code from a previous engine.
The coverage algorithm can be extended to support cheap box blurs, which could be used for drop shadows.
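A box blur over coverage is just a windowed average, applied once per axis for the 2D result. A hedged CPU sketch in Python (`box_blur_1d` is a hypothetical name, not the engine's actual code):

```python
def box_blur_1d(coverage, radius):
    """Blur one row of coverage values with a box filter of the given
    radius.  The window is clamped at the borders but the divisor is
    not, so coverage fades toward 0 at the edges -- exactly the soft
    falloff you want for a drop shadow."""
    n = len(coverage)
    window = 2 * radius + 1
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(coverage[lo:hi]) / window)
    return out
```

Run it once over rows and once over columns and a hard coverage mask becomes a soft shadow footprint.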
Always neat to see this kind of stuff however. Very cool.
Vello is general purpose, like Rasterizer, but is based on GPU compute. Rasterizer uses the 'traditional' GPU pipeline. Performance numbers for both look very competitive, although Vello seems to have issues with GPU lock-ups at certain zoom scales. Rasterizer has been heavily tested with huge scenes at any scale.
Paths can be any size, and the problem is hard to parallelize. GPUs like stuff broken into small regular chunks.
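One common way to square that circle, and only a guess at what Rasterizer does internally, is to bin path segments into fixed-size screen tiles by bounding box, so each tile becomes a small regular chunk of work. A Python sketch (`bin_segments` is hypothetical):

```python
def bin_segments(segments, tile_size, tiles_x, tiles_y):
    """Bucket line segments into fixed-size tiles by bounding box.
    Each segment is appended to every tile its bounds overlap, so an
    arbitrarily large path decomposes into per-tile work lists."""
    bins = {}
    for seg in segments:
        (x0, y0), (x1, y1) = seg
        # Tile range covered by the segment's bounding box, clamped to the grid.
        tx0 = max(0, int(min(x0, x1)) // tile_size)
        tx1 = min(tiles_x - 1, int(max(x0, x1)) // tile_size)
        ty0 = max(0, int(min(y0, y1)) // tile_size)
        ty1 = min(tiles_y - 1, int(max(y0, y1)) // tile_size)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(seg)
    return bins
```

The hard part, as the comment above notes, is that one huge path can touch every tile, so the per-tile lists are wildly uneven.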
TestFiles/Manchester_Union_Democrat_office_1877.svg is composed of a single huge path, which was a great torture test.
SDFs are expensive to calculate, and have too many limitations to be practical for a general-purpose vector engine.
@mindbrix does it blend colors in linear space/are colors linearized internally?
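For context on why the question matters: a 50/50 blend of black and white computed in sRGB space differs visibly from one computed in linear light. A plain-Python sketch using the standard sRGB transfer functions (nothing here is Rasterizer's actual pipeline):

```python
def srgb_to_linear(c):
    # Standard sRGB decode (c in 0..1).
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # Standard sRGB encode.
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
naive = 0.5 * black + 0.5 * white                    # blended in sRGB space
linear = linear_to_srgb(0.5 * srgb_to_linear(black)
                        + 0.5 * srgb_to_linear(white))
# naive is 0.5; the linear-light blend encodes to roughly 0.735,
# i.e. visibly lighter.
```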
Without having looked at your particular shader code, I can only imagine the horrors and countless hours that went into writing and debugging it...
Which OpenGL and GLSL versions are you targeting?
I've been thinking about prototyping an SVG renderer integrated into my game engine that would rasterize textures from .svg files on content load. It would offer improved packing, better scaling, and resolution independence. A GPU-based solution would have the benefit of skipping the whole "rasterize on CPU then upload" dance and instead rasterizing directly into a texture render target in some FBO, then using the texture later on. That being said, a CPU-based solution is definitely easier and more bulletproof ;-)
Wouldn't it make sense to do a "first pass" and eliminate paths that intersect themselves? (by splitting them into 2+ paths)
I never understood why these are supported in the SVG spec.
It seems like a pathological case. Once self-intersecting paths are eliminated the problem gets simpler.. no?
Or would a CPU pass be cheating?
Heck, that's what people expect with CSS for example.
Fleshing out the spec is planned, but I cannot provide a timeline as this has all been done at my own considerable expense. Maybe if my tips grow: https://paypal.me/mindbrix
The current version uses Metal. I haven't even considered GPU ports yet, as my methodology is to get it working well on one platform first.
For single-pass SVG --> texture, a CPU approach would probably offer the lowest latency. For repeat passes, the GPU would probably win.
You can switch between CG and Rasterizer in the demo app using the 0 key to see the difference for yourself.
It uses this "personal use zlib license". Earlier it was actually licensed under the plain zlib license, which I think of as something similar to the MIT license (I think; I am not a lawyer).
My issue is that the personal use zlib license feels like it was made up by the author, and that you need to contact the author for a commercial license?
At this point, he should've just used something like a dual license: AGPL + commercial.
Honestly, I get it. I also wish there were some OSI-compliant license that made open source make sense for a developer, as open source is a really weak link in this economy, but such licenses basically make your project merely source-available.
I have nothing against that, and honestly just wanted this to be discussed here. I had a blast looking at all the licenses on Wikipedia and the opensource.com website. The Artistic License seems really cool if you want to relicense or something; I am looking more into it. I genuinely wish something like the SSPL could've been considered open source, as it doesn't impact 90% of users, only something like AWS/big tech.
Rasterizer can solve quadratic curves in the fragment shaders, which massively reduces the geometry needed for a scene.
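For readers wondering how a fragment shader can "solve" a quadratic: the classic Loop-Blinn trick assigns texture coordinates (0,0), (1/2,0), (1,1) to the three control points and tests the sign of u² − v per fragment. Whether Rasterizer uses exactly this formulation is my assumption; a CPU sketch:

```python
def curve_side(p, b0, b1, b2):
    """Loop-Blinn style test for a quadratic Bezier segment.
    Interpolates the coordinates (0,0), (1/2,0), (1,1) assigned to
    b0, b1, b2 across the control triangle and returns u^2 - v:
    zero on the curve, negative on the filled (concave) side."""
    # Barycentric weights of p in triangle (b0, b1, b2).
    det = (b1[0] - b0[0]) * (b2[1] - b0[1]) - (b2[0] - b0[0]) * (b1[1] - b0[1])
    w1 = ((p[0] - b0[0]) * (b2[1] - b0[1]) - (b2[0] - b0[0]) * (p[1] - b0[1])) / det
    w2 = ((b1[0] - b0[0]) * (p[1] - b0[1]) - (p[0] - b0[0]) * (b1[1] - b0[1])) / det
    u = 0.5 * w1 + w2   # interpolated (u, v) at p
    v = w2
    return u * u - v
```

On the GPU the interpolation is free, so each fragment pays roughly one multiply-subtract, which is why evaluating the curve in the shader beats tessellating it into tiny triangles.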
Also, the native rasterizer only supports MSAA, which is inferior to reference analytic area AA.
From what I recall they are converting it to triangles. Your solution (curves in the shaders?) seems both cheaper and more accurate, so I'm wondering if they could use it!
https://developer.download.nvidia.com/assets/gamedev/files/N...
and this:
https://developer.nvidia.com/nv-path-rendering-videos
The FAQ, points 30 & 31, says it uses multisampling (up to 32 samples per pixel) for AA, and the winding at each sample is calculated analytically from the curve.
From other searching, it seems no other vendor supports that extension.
Mark Kilgard surveyed some path rendering engines with a few curves that most have trouble with. It’d be fun to see how Rasterizer stacks up, and perhaps a nice bragging point if you dominate the competition. https://arxiv.org/pdf/2007.12254
Having used the quadratic solver code that you found on ShaderToy (sqBezier), you might be able to shave some cycles by doing a constant-folding pass on it; it can be simplified from the state it's in. Also, the constant in there, 1.73205, is just sqrt(3), and IMO it's nicer to see sqrt(3) than the magic number; it won't slow anything down, since the compiler will compute the constant for you. It might also be nice to link to the original source of that code on pouet.
Consider modern C++ practices as outlined here: https://github.com/cpp-best-practices/cppbestpractices/blob/...