public inbox for gsl-discuss@sourceware.org
* discrete random distributions: test
@ 2004-12-19 18:22 Jerome BENOIT
  2004-12-21 17:00 ` Brian Gough
  0 siblings, 1 reply; 7+ messages in thread
From: Jerome BENOIT @ 2004-12-19 18:22 UTC (permalink / raw)
  To: gsl-discuss

Hello List,

I have just read, in the GSL 1.5 source, the tests used
to check the discrete random distributions, but I cannot
clearly figure out the idea behind them:
any hint is welcome.

Thanks in advance,
Jerome

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: discrete random distributions: test
  2004-12-19 18:22 discrete random distributions: test Jerome BENOIT
@ 2004-12-21 17:00 ` Brian Gough
  2004-12-21 17:12   ` Jerome BENOIT
  0 siblings, 1 reply; 7+ messages in thread
From: Brian Gough @ 2004-12-21 17:00 UTC (permalink / raw)
  To: jgmbenoit; +Cc: gsl-discuss

Jerome BENOIT writes:
 > I have just read, in the GSL 1.5 source, the tests used
 > to check the discrete random distributions, but I cannot
 > clearly figure out the idea behind them:
 > any hint is welcome.

randist/test.c samples the distribution and compares the resulting
histogram with the pdf itself to see if they are compatible within
statistical errors.

-- 
Brian Gough

Network Theory Ltd,
Commercial support for GSL --- http://www.network-theory.co.uk/gsl/


* Re: discrete random distributions: test
  2004-12-21 17:00 ` Brian Gough
@ 2004-12-21 17:12   ` Jerome BENOIT
  2004-12-24 19:00     ` Brian Gough
  0 siblings, 1 reply; 7+ messages in thread
From: Jerome BENOIT @ 2004-12-21 17:12 UTC (permalink / raw)
  To: gsl-discuss

Hello,

thank you very much for the answer.

Brian Gough wrote:
> Jerome BENOIT writes:
>  > I have just read, in the GSL 1.5 source, the tests used
>  > to check the discrete random distributions, but I cannot
>  > clearly figure out the idea behind them:
>  > any hint is welcome.
> 
> randist/test.c samples the distribution and compares the resulting
> histogram with the pdf itself to see if they are compatible within
> statistical errors.
> 

I understood the sampling part and the comparing part.
What confuses me is the compatibility criterion.
In particular, why is the sigma variable (a difference
over a square root), which does not seem dimensionless to me,
compared to a dimensionless value (a constant)?


Thanks,
Jerome


* Re: discrete random distributions: test
  2004-12-21 17:12   ` Jerome BENOIT
@ 2004-12-24 19:00     ` Brian Gough
  2004-12-24 19:36       ` Jerome BENOIT
  0 siblings, 1 reply; 7+ messages in thread
From: Brian Gough @ 2004-12-24 19:00 UTC (permalink / raw)
  To: jgmbenoit; +Cc: gsl-discuss

Jerome BENOIT writes:
 > I understood the sampling part and the comparing part.
 > What confuses me is the compatibility criterion.
 > In particular, why is the sigma variable (a difference
 > over a square root), which does not seem dimensionless to me,
 > compared to a dimensionless value (a constant)?

The number of counts in a bin is a dimensionless quantity.

-- 
best regards,

Brian Gough

Network Theory Ltd,
Commercial support for GSL --- http://www.network-theory.co.uk/gsl/


* Re: discrete random distributions: test
  2004-12-24 19:00     ` Brian Gough
@ 2004-12-24 19:36       ` Jerome BENOIT
  2004-12-25  6:16         ` Robert G. Brown
  0 siblings, 1 reply; 7+ messages in thread
From: Jerome BENOIT @ 2004-12-24 19:36 UTC (permalink / raw)
  To: gsl-discuss

Thanks for the reply.

Brian Gough wrote:
> Jerome BENOIT writes:
>  > I understood the sampling part and the comparing part.
>  > What confuses me is the compatibility criterion.
>  > In particular, why is the sigma variable (a difference
>  > over a square root), which does not seem dimensionless to me,
>  > compared to a dimensionless value (a constant)?
> 
> The number of counts in a bin is a dimensionless quantity.
> 

I guess there is a misunderstanding on my side:
is there anything in the (classical) literature
that could clarify these criteria for me?

Thanks in advance,
Jerome


* Re: discrete random distributions: test
  2004-12-24 19:36       ` Jerome BENOIT
@ 2004-12-25  6:16         ` Robert G. Brown
  2004-12-25 20:26           ` Jerome BENOIT
  0 siblings, 1 reply; 7+ messages in thread
From: Robert G. Brown @ 2004-12-25  6:16 UTC (permalink / raw)
  To: Jerome BENOIT; +Cc: gsl-discuss

On Fri, 24 Dec 2004, Jerome BENOIT wrote:

> Thanks for the reply.
> 
> Brian Gough wrote:
> > Jerome BENOIT writes:
> >  > I understood the sampling part and the comparing part.
> >  > What confuses me is the compatibility criterion.
> >  > In particular, why is the sigma variable (a difference
> >  > over a square root), which does not seem dimensionless to me,
> >  > compared to a dimensionless value (a constant)?
> > 
> > The number of counts in a bin is a dimensionless quantity.
> > 
> 
> I guess there is a misunderstanding on my side:
> is there anything in the (classical) literature
> that could clarify these criteria for me?

Most generators of random distributions (uniform or otherwise) are
tested by comparing their output with the (or an) expected result.  As
in: "Suppose I generate some large number of samples from this
distribution, and they are truly random.  Then I >>know<< the actual
distribution of values that I >>should<< get.  If I compare the
distribution of values that I >>did<< get to the one I should have
gotten, I can calculate the probability of getting the one I got.  If
that probability is very, very low, there is a pretty good chance that
my generator is broken."

For example, suppose I am generating heads or tails via a coin flip, or
random bits with a generator.  If I generate a large number of them (N)
and M of them turn out to be heads or 1's, I can compute very exactly
from the binomial distribution what the probability is that I got the
particular N/M pair that I did get.  If that probability is small
(perhaps I generated 1000 samples and 900 turned out to be heads) then
we would doubt the generator, or if it were a coin we would doubt that
the coin was an unbiased coin.  We might begin to suspect that if we
flipped it a second 1000 times, we would be significantly more likely to
get heads than tails.

The value computed is called the "p-value" -- the probability of getting
the distribution you got presuming a truly random distribution
generator.  Of course, p-values are THEMSELVES distributed randomly,
presumably uniformly, between 0 and 1.  Sometimes generators might fail
by getting too CLOSE to the "expected result" -- like a coin that always
flipped exactly 500 heads out of 1000 flips, you'd start to look to see
if it were generating sequences like HTHTHT that aren't random at all.

So you can do a bit better by performing lots of trials and generating a
distribution of p-values, and comparing that distribution to a uniform
one to obtain ITS p-value.  Usually one uses a Kolmogorov-Smirnov test
to do this (and/or to compare a nonuniform distribution generator to the
expected nonuniform distribution in the first place).  Alternatively,
one can plot a histogram of p-values and compare it to uniform, although
that isn't quantitatively as sound.

This kind of testing (and more) is described in Knuth's The Art of
Computer Programming, Volume 2 (Seminumerical Algorithms), and is also
described in some detail in the white paper associated with the NIST
STS suite for testing RNGs.  A less detailed description is given in
the documents
associated with George Marsaglia's Diehard suite of random number
generator tests.  There are links to both of these sites near the top of
the main project page for an RNG tester I've been writing here:

  http://www.phy.duke.edu/~rgb/General/dieharder.php

If you grab one of the source tarballs from this site, in the docs
directory are both the STS white paper and diehard.txt from diehard (as
well as several other white papers of interest from e.g. FNAL and CERN).

So in a nutshell, most tests are ultimately based on the central limit
theorem.  From theory one gets a mean (expected value) and standard
deviation for some sampled quantity at some sample size.  One generates
a sample (large enough that the CLT has some validity, minimally more
than 30, sometimes much larger).  One then compares the difference
between the value actually obtained and the value expected against the
standard deviation, and uses the error function (for example) or the
chi-squared distribution to determine the probability of getting what
you got.  If
(very) small, maybe bad.  If not, you can either accept it as good
(really, as "not obviously bad") or work harder to resolve a problem.

Hope this helps...and Merry Christmas!

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu



* Re: discrete random distributions: test
  2004-12-25  6:16         ` Robert G. Brown
@ 2004-12-25 20:26           ` Jerome BENOIT
  0 siblings, 0 replies; 7+ messages in thread
From: Jerome BENOIT @ 2004-12-25 20:26 UTC (permalink / raw)
  Cc: gsl-discuss

Thank you very much for the explanation and your time:
I have a clear image now, and I will certainly consult
the cited literature.

Thanks again,
Jerome

-- 
Dr. Jerome BENOIT
room A2-26
Complexo Interdisciplinar da U. L.
Av. Prof. Gama Pinto, 2
P-1649-003 Lisboa, Portugal
email: jgmbenoit@wanadoo.fr or benoit@cii.fc.ul.pt
--
If you are convinced by the necessity of a European research
initiative, please visit http://fer.apinc.org


end of thread, other threads:[~2004-12-25 20:26 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-12-19 18:22 discrete random distributions: test Jerome BENOIT
2004-12-21 17:00 ` Brian Gough
2004-12-21 17:12   ` Jerome BENOIT
2004-12-24 19:00     ` Brian Gough
2004-12-24 19:36       ` Jerome BENOIT
2004-12-25  6:16         ` Robert G. Brown
2004-12-25 20:26           ` Jerome BENOIT

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).