Nvidia won, we all lost

(blog.sebin-nyshkim.net)
981 points by todsacerdoti | 2 comments
bigyabai | No.44468594
> Pretty much all upscalers force TAA for anti-aliasing and it makes the entire image on the screen look blurry as fuck the lower the resolution is.

I feel like this is a misunderstanding, though I admit I'm splitting hairs here. DLSS is a form of TAA, and so is FSR and most other modern upscalers. You generally don't need an extra antialiasing pipeline if you're getting an artificially supersampled image.

We've seen this technique developed in various forms across the lifespan of realtime raster graphics: first checkerboard rendering, then TAA, and now DLSS and frame generation. It has upsides and downsides, and some TAA implementations were genuinely good for their time.

replies(2): >>44468710 #>>44471110 #
kbolino | No.44468710
Every kind of TAA that I've seen creates artifacts around fast-moving objects. This may sound like a niche problem only found in fast-twitch games but it's cropped up in turn-based RPGs and factory/city builders. I personally turn it off as soon as I notice it. Unfortunately, some games have removed traditional MSAA as an option, and some are even making it difficult to turn off AA when TAA and FXAA are the only options (though you can usually override these restrictions with driver settings).
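The ghosting described here falls directly out of how TAA accumulates frames over time. A minimal 1-D sketch of temporal accumulation (illustrative only; the function name and parameters are my own, and real TAA also reprojects history with motion vectors and clamps or clips samples to reduce exactly this artifact):

```python
# Minimal 1-D sketch of TAA-style temporal accumulation.
def taa_accumulate(history, current, alpha=0.1):
    """Blend the new frame into the running history buffer (exponential average)."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# A bright "object" moves one pixel per frame across a dark background.
width, frames = 8, 4
history = [0.0] * width
for t in range(frames):
    current = [1.0 if x == t else 0.0 for x in range(width)]
    history = taa_accumulate(history, current)

# Pixels the object has already left still hold residual energy: the ghost trail.
trail = [round(v, 3) for v in history[:frames]]
print(trail)
```

Because each output pixel is a running average of past frames, a fast-moving object leaves decaying copies of itself behind; motion-vector reprojection and history clamping are the standard mitigations, and their failure cases are the artifacts people notice.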
replies(2): >>44468805 #>>44469066 #
user____name | No.44469066
The sad truth is that with rasterization, every renderer needs to be designed around a specific set of antialiasing solutions. Antialiasing is like a big wall in your rendering pipeline: there's the stuff you can do before resolving and the stuff you can do afterwards. The problem with MSAA is that it is tightly coupled to nearly all of your architectural rendering decisions. TAA, by contrast, is the easiest to implement and kills a lot of proverbial birds with one stone; it can be implemented as essentially a post-processing effect, with far less of that tight coupling.

MSAA only helps with geometric edges; shader aliasing can be combated with prefiltering, but even then it's difficult to eliminate completely. MSAA also needs beefy multisampled intermediate buffers, which makes it pretty much a non-starter in heavily deferred rendering pipelines, which throw away coverage information to fit their framebuffer budget. On top of that, the industry moved to stochastic effects to render all kinds of things that were too expensive before, the latest being actual realtime path tracing. I know people moan about TAA and DLSS, but doing realtime path tracing at 4K is sort of nuts, really. I still consider it a bit of a miracle that we can do it at all.

Personally, I wish there was more research by big players into things like texture space lighting, which makes shading aliasing mostly go away, plays nice with alpha blending and would make MSAA viable again. The issue there is with shading only the stuff you see and not wasting texels.

replies(1): >>44469114 #
kbolino | No.44469114
There's another path, which is to raise the pixel densities so high we don't need AA (as much) anymore, but I'm going to guess it's a) even more expensive and b) not going to fix all the problems anyway.
replies(1): >>44469530 #
MindSpunk | No.44469530
That's just called supersampling (SSAA): render at 4K or higher and downsample to your target display. It's as expensive as it sounds.
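The resolve step of supersampling is just an averaging filter over the high-resolution render. A toy 2x2 box-filter resolve (function and variable names are illustrative, not from any real API):

```python
# Toy 2x2 SSAA resolve: render at twice the target resolution in each axis,
# then box-filter each 2x2 block down to one output pixel.
def ssaa_resolve(hi_res):
    h, w = len(hi_res), len(hi_res[0])
    return [
        [
            (hi_res[2 * y][2 * x] + hi_res[2 * y][2 * x + 1]
             + hi_res[2 * y + 1][2 * x] + hi_res[2 * y + 1][2 * x + 1]) / 4.0
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

# A hard diagonal edge rendered at 4x4 ...
hi = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
]
# ... resolves to fractional coverage values at 2x2: the edge is antialiased.
lo = ssaa_resolve(hi)
print(lo)
```

The cost is the giveaway: every output pixel required four shaded samples, i.e. 4x the fill and shading work of rendering at the target resolution directly.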
replies(1): >>44469592 #
kbolino | No.44469592
No, I mean high pixel densities all the way to the display.

SSAA is an even older technique than MSAA but the results are not visually the same as just having a really high-DPI screen with no AA.

replies(1): >>44476539 #
int_19h | No.44476539
Up to a point. I would argue that 8K downsampled to 4K is practically indistinguishable from native 8K.