What WebGL2 is still missing is MSAA texture objects (it only supports MSAA render buffers), which makes it impossible to directly load individual samples in a shader (useful for custom-resolve render passes). That's only possible in WebGPU.
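For anyone curious what such a custom resolve looks like where MSAA textures are exposed, here is a minimal sketch in desktop GLSL 3.30 (WebGL2's GLSL ES 3.00 has no sampler2DMS, which is exactly the gap described above; the uniform names are made up):

```glsl
#version 330 core
// Custom MSAA resolve: fetch each sample individually instead of letting
// glBlitFramebuffer average them. Not possible in WebGL2, which only
// exposes MSAA renderbuffers, not MSAA textures.
uniform sampler2DMS uColorMS; // multisampled color attachment
uniform int uSampleCount;     // e.g. 4
out vec4 fragColor;

void main() {
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec3 sum = vec3(0.0);
    for (int i = 0; i < uSampleCount; ++i) {
        vec3 s = texelFetch(uColorMS, coord, i).rgb;
        // per-sample logic would go here, e.g. tone-mapping before the average
        sum += s;
    }
    fragColor = vec4(sum / float(uSampleCount), 1.0);
}
```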
1. antialiasing should be done in linear rgb space instead of srgb space [1] [2] (see the sketch after the links below)
2. because of the lack of (1) for decades, fonts have been tweaked to compensate, so sometimes srgb is better [3] [4]
Do you have advice on linear vs srgb space antialiasing?
[1] https://www.puredevsoftware.com/blog/2019/01/22/sub-pixel-ga...
[2] http://hikogui.org/2022/10/24/the-trouble-with-anti-aliasing...
[3] https://news.ycombinator.com/item?id=12023985
[4] http://hikogui.org/2022/10/24/the-trouble-with-anti-aliasing...
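To make point (1) concrete, here is a minimal GLSL sketch of what blending "in linear space" means, using the standard sRGB transfer functions (the function names are just for illustration):

```glsl
// Blend an antialiased edge in linear space: decode sRGB to linear,
// mix by coverage there, then encode back. Mixing the raw sRGB values
// instead (what many naive pipelines do) distorts the perceived edge ramp.
vec3 srgbToLinear(vec3 c) {
    return mix(c / 12.92, pow((c + 0.055) / 1.055, vec3(2.4)), step(0.04045, c));
}
vec3 linearToSrgb(vec3 c) {
    return mix(c * 12.92, 1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055, step(0.0031308, c));
}

vec3 blendLinear(vec3 srcSrgb, vec3 dstSrgb, float coverage) {
    vec3 lin = mix(srgbToLinear(dstSrgb), srgbToLinear(srcSrgb), coverage);
    return linearToSrgb(lin);
}
```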
I have done a few live-visualization-based blog posts, and they take me ages to do. I kind of think that's the right idea though. There is so much content out there that taking longer to produce less content at a higher quality benefits everyone.
Found it by going to the comments: since the comments are GitHub issues, the "x comment" count is a link to the issues page.
For font and 2D vector rendering it's likely possible; in fact, afaik some solutions, such as Slug, already do it.
But for 3d rendering I don't know of any solutions.
For an intuition, consider two triangles that intersect the same pixel.
Say one has 20% coverage and the other 30%: does that pixel end up 50% covered (the two footprints are disjoint), only 30% (one footprint lies entirely inside the other), or any other conceivable mix in between where part of the coverage overlaps? It's very difficult to say without selecting specific points and sampling directly.
[1] https://github.com/FrostKiwi/treasurechest/commits/main/post...
Still, non-standard rendering approaches are very much a thing [1], and I could see setups like [2] being used in scientific particle visualizations (rough sketch of the idea after the links).
[1] https://www.youtube.com/watch?v=9U0XVdvQwAI
[2] https://bgolus.medium.com/rendering-a-sphere-on-a-quad-13c92...
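The core trick in [2] is a per-fragment ray/sphere intersection on a camera-facing quad; a rough GLSL sketch of that idea (variable names invented, and the real article additionally writes a correct depth value):

```glsl
#version 300 es
precision highp float;
// Ray-traced sphere impostor: the quad only decides which pixels run,
// the visible surface comes from an analytic ray/sphere test.
in vec3 vWorldPos;       // interpolated world-space position on the quad
uniform vec3 uCameraPos;
uniform vec3 uCenter;    // sphere center
uniform float uRadius;
out vec4 fragColor;

void main() {
    vec3 rd = normalize(vWorldPos - uCameraPos);   // view ray through this fragment
    vec3 oc = uCameraPos - uCenter;
    float b = dot(oc, rd);
    float disc = b * b - (dot(oc, oc) - uRadius * uRadius);
    if (disc < 0.0) discard;                       // ray misses the sphere
    vec3 hit = uCameraPos + (-b - sqrt(disc)) * rd;
    vec3 n = normalize(hit - uCenter);             // analytic normal, perfectly round
    fragColor = vec4(n * 0.5 + 0.5, 1.0);          // debug-shade with the normal
    // a full version would also write gl_FragDepth from 'hit'
}
```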
It just so happens to produce visible artifacts in this case. I suppose for 3D scenes it's mostly fine.
How to blend multiple shapes on the same quad within a single draw call is shown in the section "Drawing multiple?". There you fully control the intersection with the blending math.
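Not the post's exact code, but the idea of resolving the intersection inside one draw call boils down to something like this (two hypothetical circle SDFs, combined before the edge is faded):

```glsl
// Combining two shapes inside one fragment shader invocation: because both
// signed distances are known here, the union/intersection is decided
// analytically instead of via framebuffer blending.
float circle(vec2 p, vec2 c, float r) { return length(p - c) - r; }

vec4 shade(vec2 p) {
    float dA = circle(p, vec2(-0.3, 0.0), 0.5);
    float dB = circle(p, vec2( 0.3, 0.0), 0.5);
    float d  = min(dA, dB);                   // union; max() would intersect
    float w  = fwidth(d);                     // roughly one pixel in distance units
    float alpha = 1.0 - smoothstep(-w, w, d); // antialiased edge
    vec3 color = (dA < dB) ? vec3(1.0, 0.4, 0.2) : vec3(0.2, 0.6, 1.0);
    return vec4(color, alpha);
}
```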
Finally, there is always the gamma question, which applies to all AA solutions. It's not covered in the post, but it might mess with the results, as is true for any kind of blending.
[1] https://developer.mozilla.org/en-US/docs/Web/API/WebGLRender...
A big customer was furious that they had bought a part that didn't perform the way they wanted, so I was voluntold to fix it.
I was given a ridiculous timeframe to come up with a solution and present it to our customer in a big in-person meeting with all the decision makers. I managed to implement three different alternatives so that the customer would feel they had some agency in selecting the one they liked the most. The best looking by far was a form of AAA.
This was one out of several of these big last-minute fires I was assigned to solve. Years later my manager told me how great it was knowing that they could throw any crap at me and I would be able to fix it in time.
However, these sorts of experiences are why I struggled with burnout during my career, which led me to save like crazy to retire as early as possible, which I did.
For younglings out there: when they ask you to do the impossible, remember that failure IS an option. Push back if you think they are asking you for something unreasonable.
Unfortunately, this is completely context dependent. One central point is whether or not the graphics pipeline is instructed to perform corrections (GL_FRAMEBUFFER_SRGB in OpenGL), as that changes the answer. Another is the context in which blending is performed. Luckily the developer has full freedom here and can even specify separate blending functions for alpha and color [1], something the GPU-accelerated terminal emulator Alacritty makes use of [2], though it doesn't do MSDF rendering.
One thing I can say though: the alpha, the fading of the edge, has to be linear in the end result, or perceived as such. Or rather, if the edge were stretched to 10 pixels, each pixel has to be a 0.1 alpha step. (If smoothstep is used, the alpha has to follow that curve in the end result.) Otherwise the anti-aliasing will be strongly diminished. This is something you can always verify at the end. Correct blending of colors is of course a headache and context specific.
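In shader terms, that linear ramp across the edge is the usual distance-over-fwidth construction; a generic sketch, not tied to any particular implementation:

```glsl
// 'dist' is the signed distance to the edge (e.g. decoded from an MSDF).
// Dividing by fwidth(dist) scales the ramp to span roughly one pixel, and
// the clamp yields the linear 0..1 alpha steps described above: stretch the
// edge over N pixels and each pixel advances alpha by 1/N.
float edgeAlpha(float dist) {
    return clamp(dist / fwidth(dist) + 0.5, 0.0, 1.0);
}
```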
> fonts have been tweaked to compensate, so sometimes srgb is better
This should not concern MSDF rendering. These tweaks happened at specific resolutions, for the monitors popular at that time. Especially when considering the HiDPI modes of modern window systems, all bets are off: DPI scaling completely overrides any of that. MSDF is size independent, and the "tweaks" are mainly thickness adjustments, which MSDF has control over. So if the font doesn't match how it looks in another rendering type, MSDF can correct for it.
[1] https://developer.mozilla.org/en-US/docs/Web/API/WebGLRender...
[2] https://github.com/search?q=repo%3Aalacritty%2Falacritty+ble...
> Whole communities rally around fixing this, like the reddit communities “r/MotionClarity” or the lovingly titled “r/FuckTAA”, all with the understanding, that Anti-Aliasing should not come at the cost of clarity. FXAA creator Timothy Lottes mentioned, that this is solvable to some degree with adjustments to filtering, though even the most modern titles suffer from this.
I certainly agree that the current trend of relying on upscalers has gone too far and results in blurry, artifact-riddled AAA game experiences for many. But after seeing this [1] deep dive by Digital Foundry I find the arguments he makes quite compelling. There is a level of motion stability and clarity only tech like DLSS can achieve, even outperforming SSAA. So I've shifted my stance from "TAA == blurry" to "TAA + ML, when used right, == the best AA currently possible for 3D games".
Thoughts?