From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <41CDCCFC.3010003@wanadoo.fr>
Date: Sat, 25 Dec 2004 20:26:00 -0000
From: Jerome BENOIT
Reply-To: jgmbenoit@wanadoo.fr
CC: gsl-discuss@sources.redhat.com
Subject: Re: discret random distributions: test
References: <41C5C6DC.5010904@wanadoo.fr> <16840.19530.535732.244729@network-theory.co.uk> <41C8594D.90707@wanadoo.fr> <16844.25861.806194.746113@network-theory.co.uk> <41CC6FCB.1020907@wanadoo.fr>

Thank you very much for the explanation and your time: I have a clear
picture now, and I will certainly consult the cited literature.

Thanks again,
Jerome

Robert G. Brown wrote:
> On Fri, 24 Dec 2004, Jerome BENOIT wrote:
>
>> Thanks for the reply.
>>
>> Brian Gough wrote:
>>
>>> Jerome BENOIT writes:
>>>  > I understood the sampling part and the comparing part.
>>>  > What confuses me is the compatibility criterion.
>>>  > In particular, why is the non-dimensionless sigma
>>>  > variable (a difference over a square root) compared
>>>  > to a dimensionless value (a constant)?
>>>
>>> The number of counts in a bin is a dimensionless quantity.
>>
>> I guess there is a misunderstanding on my side:
>> is there something in the (classical) literature
>> which can clarify my understanding of the criterion?
>
> Most generators of random distributions (uniform or otherwise) are
> tested by comparing their output with the (or an) expected result.  As
> in: "Suppose I generate some large number of samples from this
> distribution, and they are truly random.  Then I >>know<< the actual
> distribution of values that I >>should<< get.  If I compare the
> distribution of values that I >>did<< get to the one I should have
> gotten, I can calculate the probability of getting the one I got.  If
> that probability is very, very low, there is a pretty good chance that
> my generator is broken."
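In GSL-flavoured C, that kind of comparison might look roughly like the
minimal sketch below: draw many uniform deviates, bin them, and ask the
chi-squared distribution how surprising the observed bin counts are.
This is only an illustration of the idea, not GSL's actual test-suite
code; the generator, sample size, and bin count are arbitrary choices.

/* Sketch: bin N uniform deviates into K equal-width bins and compare
   the observed counts with the expected N/K via chi-squared. */
#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_cdf.h>

int main (void)
{
  const size_t N = 100000;          /* samples (arbitrary) */
  const size_t K = 10;              /* bins (arbitrary) */
  double counts[10] = { 0.0 };      /* must match K */
  double expected = (double) N / K;
  double chisq = 0.0;
  size_t i, k;

  gsl_rng *r = gsl_rng_alloc (gsl_rng_mt19937);

  for (i = 0; i < N; i++)
    {
      double u = gsl_rng_uniform (r);    /* uniform on [0,1) */
      counts[(size_t) (u * K)] += 1.0;   /* u < 1, so index < K */
    }

  /* Pearson chi-squared statistic over the K bins */
  for (k = 0; k < K; k++)
    {
      double d = counts[k] - expected;
      chisq += d * d / expected;
    }

  /* p-value: upper tail of chi-squared with K-1 degrees of freedom */
  printf ("chisq = %g, p = %g\n", chisq, gsl_cdf_chisq_Q (chisq, K - 1));

  gsl_rng_free (r);
  return 0;
}

Compile with something like: gcc chisq.c -lgsl -lgslcblas -lm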
> For example, suppose I am generating heads or tails via a coin flip,
> or random bits with a generator.  If I generate a large number of them
> (N) and M of them turn out to be heads or 1's, I can compute exactly,
> from the binomial distribution, the probability of getting the
> particular N/M pair that I did get.  If that probability is small
> (perhaps I generated 1000 samples and 900 turned out to be heads) then
> we would doubt the generator, or if it were a coin we would doubt that
> the coin was unbiased.  We might begin to suspect that if we flipped
> it a second 1000 times, we would be significantly more likely to get
> heads than tails.
>
> The value computed is called the "p-value" -- the probability of
> getting the distribution you got, presuming a truly random
> distribution generator.
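For the 900-heads-in-1000-flips example, the exact tail probability is
available directly from GSL's binomial CDF.  A minimal sketch (the
figures are just the example numbers above):

/* Sketch: exact probability of at least 900 heads in 1000 flips of a
   fair coin, plus the distance from the mean in standard deviations. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_cdf.h>

int main (void)
{
  unsigned int n = 1000;   /* flips */
  unsigned int m = 900;    /* observed heads */
  double p = 0.5;          /* fair coin */

  /* P(X >= m) = P(X > m-1): upper tail of Binomial(n, p) */
  double tail = gsl_cdf_binomial_Q (m - 1, p, n);

  /* How many standard deviations above the expected n*p heads? */
  double sigma = sqrt (n * p * (1.0 - p));
  printf ("P(X >= %u) = %g  (%.1f sigma above the mean)\n",
          m, tail, (m - n * p) / sigma);
  return 0;
}

For 900 of 1000 the observation sits about 25 sigma above the mean, so
the tail probability is astronomically small and the generator (or
coin) is clearly suspect.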
> Of course, p-values are THEMSELVES distributed randomly, presumably
> uniformly, between 0 and 1.  Sometimes generators fail by getting too
> CLOSE to the "expected result" -- with a coin that always flipped
> exactly 500 heads out of every 1000 flips, you'd start to look for
> sequences like HTHTHT that aren't random at all.
>
> So you can do a bit better by performing lots of trials and generating
> a distribution of p-values, and comparing that distribution to a
> uniform one to obtain ITS p-value.  Usually one uses a
> Kolmogorov-Smirnov test to do this (and/or to compare a nonuniform
> distribution generator to the expected nonuniform distribution in the
> first place); a sketch of such a check appears after this message.
> Alternatively, one can plot a histogram of p-values and compare it to
> uniform, although that isn't quantitatively as sound.
>
> This kind of testing (and more) is described in Knuth's The Art of
> Computer Programming, volume 2 (Seminumerical Algorithms), and is also
> described in some detail in the white paper associated with the NIST
> STS suite for testing RNGs.  A less detailed description is given in
> the documents associated with George Marsaglia's Diehard suite of
> random number generator tests.  There are links to both of these sites
> near the top of the main project page for an RNG tester I've been
> writing here:
>
>   http://www.phy.duke.edu/~rgb/General/dieharder.php
>
> If you grab one of the source tarballs from this site, the docs
> directory contains both the STS white paper and diehard.txt from
> Diehard (as well as several other white papers of interest from
> e.g. FNAL and CERN).
>
> So in a nutshell, most tests are ultimately based on the central limit
> theorem.  From theory one gets a mean (expected value) and standard
> deviation for some sampled quantity at some sample size.  One
> generates a sample (large enough that the CLT has some validity,
> minimally more than 30, sometimes much larger), compares the
> difference between the value one actually got and the value one
> expected to the standard deviation, and uses the error function (for
> example) or the chi-squared distribution to determine the probability
> of getting what one got.  If that probability is (very) small, the
> generator is maybe bad.  If not, you can either accept it as good
> (really, as "not obviously bad") or work harder to resolve a problem.
>
> Hope this helps...and Merry Christmas!
>
>    rgb

-- 
Dr. Jerome BENOIT
room A2-26
Complexo Interdisciplinar da U. L.
Av. Prof. Gama Pinto, 2
P-1649-003 Lisboa, Portugal
email: jgmbenoit@wanadoo.fr or benoit@cii.fc.ul.pt
--
If you are convinced by the necessity of a European research
initiative, please visit http://fer.apinc.org
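As far as I know GSL itself does not ship a Kolmogorov-Smirnov routine,
so the sketch below computes the KS statistic D for a batch of p-values
by hand and converts it to a significance with the standard asymptotic
series (the small-n correction to sqrt(n) follows Numerical Recipes'
probks).  The uniform deviates merely stand in for p-values collected
from repeated trials of a real test; everything here is illustrative.

/* Sketch: KS test of a batch of "p-values" against uniform on [0,1]. */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_sort.h>

#define NPV 1000

/* Asymptotic KS significance Q_KS(t) = 2 sum_j (-1)^(j-1) exp(-2 j^2 t^2),
   with the small-n correction used by Numerical Recipes' probks() */
static double
ks_significance (double d, size_t n)
{
  double rn = sqrt ((double) n);
  double t = (rn + 0.12 + 0.11 / rn) * d;
  double sum = 0.0, sign = 1.0;
  int j;
  for (j = 1; j <= 100; j++)
    {
      sum += sign * 2.0 * exp (-2.0 * j * j * t * t);
      sign = -sign;
    }
  return sum;
}

int main (void)
{
  double pv[NPV];
  double d = 0.0;
  size_t i;

  /* Stand-in for real p-values: plain uniform deviates, so this run
     should pass.  Replace with p-values from repeated trials. */
  gsl_rng *r = gsl_rng_alloc (gsl_rng_mt19937);
  for (i = 0; i < NPV; i++)
    pv[i] = gsl_rng_uniform (r);

  gsl_sort (pv, 1, NPV);   /* empirical CDF needs sorted data */

  /* D = largest distance between the empirical CDF and the uniform
     CDF F(x) = x, checked on both sides of each step */
  for (i = 0; i < NPV; i++)
    {
      double lo = fabs (pv[i] - (double) i / NPV);
      double hi = fabs ((double) (i + 1) / NPV - pv[i]);
      if (lo > d) d = lo;
      if (hi > d) d = hi;
    }

  printf ("D = %g, p = %g\n", d, ks_significance (d, NPV));
  gsl_rng_free (r);
  return 0;
}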