Here's a little spreadsheet to very simply illustrate how I'm thinking about
this.
Hold down F9 and watch the iterations of random sampling cycle.
Lower the tightness value to increase the clustering at the higher values.
sqrt(x) is a nice little function, ranging between 0 and 1 over the
interval 0 to 1.
The 'probability' (the running integral of sqrt(x)) ranges from 0 to 2/3.
I calculate the function and its probability, and that's the top graph.
Next, the probability gets normalized and then raised to a fractional power
representing the tightness term.
That importance-modified probability corresponds to an x value, which itself
corresponds to an F(x) value.
That gets plotted on the lower graph.
sin(x*(pi/2)) would give a (radially symmetric) spherical coordinate.
That's my implementation of employing the probability. Choosing a suitable
function to evaluate that has the same shape as the BRDF is the first
challenge, and choosing one that can be integrated easily and quickly is
the second.
Perhaps some sort of sigmoid shape would give you the peak and F'(x) = 0
at y = 1.
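In rough code form, the chain of steps is something like this (a sketch,
not the spreadsheet itself; the tightness value, sample count and variable
names are arbitrary, and the 2/3 power is just the inverse of the
normalized integral):

import math
import random

TIGHTNESS = 0.5   # fractional power; lower values cluster results toward 1
N = 10000

samples = []
for _ in range(N):
    x = random.random()                 # uniform sample in [0, 1]
    f = math.sqrt(x)                    # F(x) = sqrt(x), the top graph
    prob = (2.0 / 3.0) * x ** 1.5       # running integral of sqrt(t) from 0 to x
    prob_norm = prob / (2.0 / 3.0)      # normalized so it spans 0..1
    prob_mod = prob_norm ** TIGHTNESS   # importance-modified probability
    x_new = prob_mod ** (2.0 / 3.0)     # back to an x value (inverse of x^(3/2))
    samples.append(math.sqrt(x_new))    # F(x_new), what lands on the lower graph

Dropping TIGHTNESS below 1 pushes the results toward 1, which is the
clustering effect mentioned above.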
Attachments:
Download 'probability.xlsx.zip' (33 KB)
On 02.09.2016 at 06:06, Bald Eagle wrote:
> Here's a little spreadsheet to very simply illustrate how I'm thinking about
> this.
That's actually a pretty neat spreadsheet to illustrate why that
approach is wrong.
You generate a set of non-uniformly distributed random numbers by
applying a transformation function F(x)=sqrt(x), and you give a
probability function f(x)=(2/3)*x^(3/2) (which I presume is the integral
of the transformation function). So far so good.
But then you take those non-uniformly distributed random numbers and
apply a second transformation function, G(x)=x^n, and this time you fail
to account for the fact that this transformation again changes the
numbers' distribution.
Been there, done that. Doesn't work, I can tell you from experience ;)
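To see it numerically, here's a quick sketch (mine, not from the
spreadsheet; n and the sample count are arbitrary):

import math
import random

N = 100_000
n = 0.5                                  # stands in for the tightness exponent

ys = [math.sqrt(random.random()) for _ in range(N)]  # actual pdf of these is 2*y on [0, 1]
zs = [y ** n for y in ys]                            # second transform G(y) = y^n

# After G the density is no longer 2*z; a change of variables gives
#   pdf_Z(z) = 2 * z^(1/n) * (1/n) * z^(1/n - 1) = (2/n) * z^(2/n - 1)
print(sum(ys) / N)   # ~ 2/3, the mean under pdf 2*y
print(sum(zs) / N)   # ~ (2/n) / (2/n + 1) = 0.8 for n = 0.5, not 2/3

Any weighting that still assumes the original density after the second
transform is weighting against the wrong distribution.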
clipka <ano### [at] anonymousorg> wrote:
> That's actually a pretty neat spreadsheet to illustrate why that
> approach is wrong.
They say stick with what you're good at. ;)
"I have not failed. I've just found 10,000 ways that won't work."
> But then you take those non-uniformly distributed random numbers and
> apply a second transformation function, G(x)=x^n, and this time you fail
> to account for the fact that this transformation again changes the
> numbers' distribution.
Ah. Because now you don't have a function describing the probability of the
G(x) data. Right.
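If I follow, the fix is to fold everything into a single transform: pick
the target density, integrate it to a CDF, invert that, and push the
uniform numbers through the inverse once. A rough sketch with sqrt(x)
standing in for the BRDF-shaped function (the normalization and exponent
algebra here are mine):

import random

# Target density proportional to sqrt(x) on [0, 1]:
#   p(x)   = (3/2) * sqrt(x)    (normalized so it integrates to 1)
#   CDF(x) = x^(3/2)
#   inverse CDF: x = u^(2/3)
def sample_sqrt_shaped():
    u = random.random()
    return u ** (2.0 / 3.0)

xs = [sample_sqrt_shaped() for _ in range(100_000)]
print(sum(xs) / len(xs))   # ~ 0.6, the mean of p(x) = (3/2)*sqrt(x)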