It looks like none of the proposed approaches works well, and the problem seems to be much more complicated than it looks.
I think what might work properly is:
- A "fractal" dither pattern so that it can be zoomed out and in smoothly and is scale invariant
- Doing things in texel space so that both camera movement and object movement work properly
- Doing bilinear filtering (perhaps keeping all samples instead of storing the weighted average) or perhaps supersampled rendering of the dithered pattern, and then using some sort of error diffusion pass in screen space (with a compute shader)
But I'm not actually sure whether this works in practice.
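For the "fractal" pattern in the first bullet, the classic recursive Bayer matrix already has the self-similar structure I mean: integer-dividing the thresholds of a larger matrix by 4 recovers the next-coarser matrix, so the pattern looks consistent across zoom levels. A minimal CPU sketch in Python (the function name `bayer` and the plain-list representation are just for illustration, not shader code):

```python
def bayer(n):
    """Recursively build an n x n ordered-dither (Bayer) matrix, n a power of 2.

    Thresholds are the integers 0 .. n*n-1. The recursion places 4*B(n/2)
    plus the offsets [[0, 2], [3, 1]] into the four quadrants, which is
    what makes the pattern self-similar across scales.
    """
    if n == 1:
        return [[0]]
    half = bayer(n // 2)
    m = [[0] * n for _ in range(n)]
    h = n // 2
    for y in range(h):
        for x in range(h):
            v = 4 * half[y][x]
            m[y][x] = v              # top-left:     offset 0
            m[y][x + h] = v + 2      # top-right:    offset 2
            m[y + h][x] = v + 3      # bottom-left:  offset 3
            m[y + h][x + h] = v + 1  # bottom-right: offset 1
    return m

# Scale invariance: coarsening an 8x8 matrix (integer division by 4)
# reproduces the 4x4 matrix, tiled — the property that would let the
# pattern zoom smoothly.
b4, b8 = bayer(4), bayer(8)
assert all(b8[y][x] // 4 == b4[y % 4][x % 4] for y in range(8) for x in range(8))
```

In a shader this would just be a threshold lookup `(m[y % n][x % n] + 0.5) / (n * n)` compared against the pixel value, with `n` chosen from the current mip/zoom level.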
If that's not enough, an alternative would be to do things in screen space "naively", then reverse-map the screen-space rendering to texel space (in a resolution-preserving way), and use the texel-space information on the next frame to create a screen-space solution compatible with the texel-space one, map that back to texel space, and so on, effectively building up the fractal per-texel pattern incrementally at runtime. This might be the best solution but seems very expensive in terms of memory, computation and complexity.
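For the error-diffusion pass mentioned in the third bullet, a minimal scalar sketch of classic Floyd–Steinberg (names and the binary quantizer are my assumptions; a real compute-shader version would need a parallel-friendly variant, since this scheme is inherently sequential along the scanline):

```python
def floyd_steinberg(img, levels=2):
    """Dither a grayscale image (list of lists of floats in [0, 1]) to
    `levels` output levels, pushing quantization error onto unvisited
    neighbours with the standard 7/16, 3/16, 5/16, 1/16 weights."""
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]   # working copy that accumulates error
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = round(old * (levels - 1)) / (levels - 1)
            out[y][x] = new
            err = old - new
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

The serial dependency is exactly why I'm unsure about the compute-shader pass: each pixel's decision depends on errors from pixels above and to the left, so the GPU version would have to use something like a wavefront ordering or a block-local approximation.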
That is wild! It's not every day I get to see a completely new, unique visual effect. Kudos.
I'd love to see a video with vastly slower movement, so I can pay attention to what's actually happening. The fast movement turns it all into a blur (literally).