...but... to avoid overflow issues when storing the exponential depth, the paper stores linear depth instead, deferring the exponentiation to the depth comparison and using ln to recover the proper value for filtering. This means emulating (separable) filtering in a shader. :(
Nevertheless, my current implementation uses regular hardware filtering with linear depth, which is likely the cause of the massive light bleeding when the occluder is near the receiver, and poor filtering when they're far apart.
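To see why the shortcut misbehaves, here's a quick numeric sketch (plain Python with made-up depth values, purely for illustration): hardware filtering averages the linear depths and the shader then takes a single exp, while proper ESM filtering averages the exponential values themselves. Since exp is convex, the two disagree, and the gap grows with the depth spread under the filter kernel.

```python
import math

k = 32.0  # same slope coefficient as in the post

# Two texels averaged by bilinear filtering: illustrative linear
# depths with a large spread across the filter footprint.
z = [0.5, 0.9]
w = [0.5, 0.5]

# Shortcut: hardware-filter the linear depths, then one exp.
filtered_linear = sum(wi * zi for wi, zi in zip(w, z))
approx = math.exp(k * filtered_linear)

# Proper ESM filtering: average the exponential values.
exact = sum(wi * math.exp(k * zi) for wi, zi in zip(w, z))

# exp is convex, so exp(mean(z)) <= mean(exp(k*z)) by Jensen's
# inequality; the two only agree when all depths in the footprint
# are equal, and drift apart as the depth spread grows.
print(approx, exact, approx <= exact)
```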
The depth comparison is e^(k(o-r)), where o is the occluder depth, r the receiver depth, and k the slope coefficient; raising k reduces light bleeding at the expense of poorer filtering. k=32 for the above screenshot.
uniform sampler2D shadowSampler;
varying vec4 shadowCoord;

const float k = 32.0; // slope coefficient

float GetInvShadowImpl()
{
    // Occluder depth, stored linearly in the shadow map.
    float moment = texture2D(shadowSampler, shadowCoord.xy).r;
    // e^(k(o - r)), clamped to [0, 1]: ~1 when lit, falling off
    // exponentially as the receiver moves behind the occluder.
    return clamp(exp(k * (moment - shadowCoord.z)), 0.0, 1.0);
}
I saved 8ms per frame by using this algorithm, so it shouldn't be a big deal to sacrifice a few of those ms to emulate filtering for better quality.
I'm glad you're experimenting with my technique :)
One thing is not clear to me: are you pre-filtering the shadow map at all?
If you store linear depth you obviously need to use ln filtering; on the other hand, if you don't need to support shadows cast by 'distant' objects, you can just store exp(depth).
In both cases you need to prefilter your map!
Marco
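A sketch of the ln filtering Marco mentions (plain Python with illustrative values; `logsumexp_filter` is my name for it, not from the paper): filter the e^(k·z) values while only ever storing linear depth, by factoring the largest exponent out of the weighted sum so no intermediate overflows.

```python
import math

def logsumexp_filter(depths, weights, k):
    """Return f such that exp(k*f) equals sum(w_i * exp(k * z_i)),
    without ever forming the large exponentials directly."""
    z_max = max(depths)
    # Every exponent in the sum is now <= 0, so nothing overflows.
    s = sum(w * math.exp(k * (z - z_max)) for w, z in zip(weights, depths))
    return z_max + math.log(s) / k

# With a large k, exp(k*z) overflows a 32-bit float for z near 1,
# but the log-space filter stays in range throughout.
k = 80.0
depths = [0.96, 0.97, 0.99, 1.0]
weights = [0.25, 0.25, 0.25, 0.25]
filtered = logsumexp_filter(depths, weights, k)

# The result is itself a linear depth, so the shadow test is still
# clamp(exp(k * (filtered - receiver_depth)), 0, 1).
receiver = 1.0
shadow = min(max(math.exp(k * (filtered - receiver)), 0.0), 1.0)
print(filtered, shadow)
```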