Date: Sat, 25 Dec 2004 06:16:00 -0000
From: "Robert G. Brown"
To: Jerome BENOIT
Cc: gsl-discuss@sources.redhat.com
Subject: Re: discret random distributions: test

On Fri, 24 Dec 2004, Jerome BENOIT wrote:

> Thanks for the reply.
>
> Brian Gough wrote:
> > Jerome BENOIT writes:
> > > I understood the sampling part and the comparing part.
> > > What confuses me is the compatibility criteria.
> > > In particular, why is the non-dimensionless sigma
> > > variable (a difference over a square root) compared
> > > to a dimensionless value (a constant)?
> >
> > The number of counts in a bin is a dimensionless quantity.
>
> I guess that there is a misunderstanding on my side:
> is there somewhere in the (classical) literature
> something which can clarify my understanding of the criteria?

Most generators of random distributions (uniform or otherwise) are
tested by comparing their output with the (or an) expected result.  As
in: "Suppose I generate some large number of samples from this
distribution, and they are truly random.  Then I >>know<< the actual
distribution of values that I >>should<< get.  If I compare the
distribution of values that I >>did<< get to the one I should have
gotten, I can calculate the probability of getting the one I got.  If
that probability is very, very low, there is a pretty good chance that
my generator is broken."

For example, suppose I am generating heads or tails via a coin flip, or
random bits with a generator.  If I generate a large number of them (N)
and M of them turn out to be heads or 1's, I can compute exactly, from
the binomial distribution, the probability of getting the particular
N/M pair that I did get.  If that probability is small (perhaps I
generated 1000 samples and 900 turned out to be heads) then we would
doubt the generator, or if it were a coin we would doubt that the coin
was unbiased.  We might begin to suspect that if we flipped it a second
1000 times, we would be significantly more likely to get heads than
tails.

The value computed is called the "p-value" -- the probability of
getting the distribution you got, presuming a truly random generator.
Of course, p-values are THEMSELVES distributed randomly, presumably
uniformly, between 0 and 1.
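In code, the whole coin-flip test is only a few lines.  Something like
the following (an untested sketch; gsl_cdf_binomial_P and
gsl_cdf_binomial_Q live in gsl/gsl_cdf.h in current GSL, and the
900-out-of-1000 numbers are just the example above):

  /* Exact two-sided binomial p-value for M heads in N fair-coin flips.
     Compile with: gcc cointest.c -lgsl -lgslcblas -lm */
  #include <stdio.h>
  #include <gsl/gsl_cdf.h>

  int main (void)
  {
    unsigned int N = 1000;     /* flips */
    unsigned int M = 900;      /* observed heads */
    double p = 0.5;            /* fair coin */

    /* lower tail P(X <= M) and upper tail P(X >= M) */
    double lower = gsl_cdf_binomial_P (M, p, N);
    double upper = (M == 0) ? 1.0 : gsl_cdf_binomial_Q (M - 1, p, N);

    /* two-sided p-value: twice the smaller tail, capped at 1 */
    double pvalue = 2.0 * (lower < upper ? lower : upper);
    if (pvalue > 1.0) pvalue = 1.0;

    printf ("N = %u, M = %u, p-value = %g\n", N, M, pvalue);
    return 0;
  }

Try M = 520 as well, to see the sort of p-value an unbiased coin
routinely produces.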
Sometimes generators might fail by getting too CLOSE to the "expected
result" -- like a coin that always flipped exactly 500 heads out of
1000 flips; you'd start to look to see if it were generating sequences
like HTHTHT... that aren't random at all.  So you can do a bit better
by performing lots of trials, generating a distribution of p-values,
and comparing that distribution to a uniform one to obtain ITS p-value.
Usually one uses a Kolmogorov-Smirnov test to do this (and/or to
compare a nonuniform distribution generator to the expected nonuniform
distribution in the first place); see the sketch in the P.S. below.
Alternatively, one can plot a histogram of p-values and compare it to
uniform by eye, although that isn't as quantitatively sound.

This kind of testing (and more) is described in Knuth's The Art of
Computer Programming, volume II (Seminumerical Algorithms), and is also
described in some detail in the white paper associated with the NIST
STS suite for testing RNGs.  A less detailed description is given in
the documents associated with George Marsaglia's Diehard suite of
random number generator tests.  There are links to both of these sites
near the top of the main project page for an RNG tester I've been
writing here:

  http://www.phy.duke.edu/~rgb/General/dieharder.php

If you grab one of the source tarballs from this site, the docs
directory contains both the STS white paper and diehard.txt from
Diehard (as well as several other white papers of interest from e.g.
FNAL and CERN).

So in a nutshell, most tests are ultimately based on the central limit
theorem.  From theory one gets a mean (expected value) and standard
deviation for some sampled quantity at some sample size.  One generates
a sample (large enough that the CLT has some validity -- minimally more
than 30 points, sometimes much larger).  One compares the difference
between the value actually obtained and the value expected against the
standard deviation, and uses the error function (for example) or the
chisq distribution to determine the probability of getting what you
got; the P.P.S. below reduces this recipe to code.  If that probability
is (very) small, maybe the generator is bad.  If not, you can either
accept the generator as good (really, as "not obviously bad") or work
harder to resolve a problem.

Hope this helps... and Merry Christmas!

   rgb

-- 
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb@phy.duke.edu
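P.S. Here is what the second-level KS comparison looks like in code --
again an untested sketch, with ten made-up p-values standing in for
real test output.  The D statistic and the asymptotic series for its
p-value are the textbook forms (Knuth volume II has them):

  /* KS comparison of a batch of p-values against uniform on [0,1]. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  static int cmp_double (const void *a, const void *b)
  {
    double x = *(const double *) a, y = *(const double *) b;
    return (x > y) - (x < y);
  }

  /* asymptotic Kolmogorov tail:
     Q(t) = 2 sum_{k>=1} (-1)^(k-1) exp(-2 k^2 t^2) */
  static double ks_q (double t)
  {
    double sum = 0.0;
    int k;
    for (k = 1; k <= 100; k++)
      sum += 2.0 * ((k % 2) ? 1.0 : -1.0) * exp (-2.0 * k * k * t * t);
    return (sum < 0.0) ? 0.0 : (sum > 1.0) ? 1.0 : sum;
  }

  int main (void)
  {
    double p[] = { 0.02, 0.15, 0.22, 0.31, 0.44,
                   0.58, 0.63, 0.71, 0.86, 0.95 };   /* made up */
    int i, n = sizeof (p) / sizeof (p[0]);
    double d = 0.0;

    qsort (p, n, sizeof (double), cmp_double);

    /* D = max distance between the empirical CDF and F(x) = x */
    for (i = 0; i < n; i++)
      {
        double above = (i + 1.0) / n - p[i];
        double below = p[i] - (double) i / n;
        if (above > d) d = above;
        if (below > d) d = below;
      }

    printf ("D = %g, KS p-value ~ %g\n", d, ks_q (sqrt ((double) n) * d));
    return 0;
  }

For a serious test you would use hundreds of p-values (the asymptotic
series is only trustworthy for largish n), but the structure doesn't
change.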
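P.P.S. And the central-limit recipe from the last paragraph, reduced to
code (one more untested sketch; the mu and sigma here are the fair-coin
values for 1000 flips from the example above, and the observed value
543 is invented):

  /* One-sample CLT test: how improbable is the observed value, given
     the theoretical mean and standard deviation?  erfc() is C99. */
  #include <stdio.h>
  #include <math.h>

  int main (void)
  {
    double mu = 500.0;        /* expected number of heads */
    double sigma = 15.8114;   /* sqrt(N p (1-p)) = sqrt(250) */
    double x = 543.0;         /* observed number of heads */

    /* sigmas out, and the two-sided normal tail probability */
    double nsigma = fabs (x - mu) / sigma;
    double pvalue = erfc (nsigma / sqrt (2.0));

    printf ("%.2f sigma out, p-value = %g\n", nsigma, pvalue);
    return 0;
  }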