Thank you, Ingo.

> The checksum approach isn't quite useful, I think, since algorithms
> are and should be changed by hand, and communication between developers
> should be as direct as possible. If you introduce checksums, you
> provide a tool that simulates stability where there is none. There
> is no fire-and-forget algorithm that you haven't developed and
> documented quite well.

Think of a checksum as a means to secure a message. Every secure protocol connection relies on that, so it is useful at least in that case.

Now take this model: a programmer has proved (e.g. by Knuth's rules) and tested her code thoroughly. She computed a checksum to be absolutely sure of what she tested. Then the software is validated independently; again, no errors. The validator compares the checksum to the one the programmer recorded. They are equal. What have we got? Two independent people saying this is good software, and they are certainly talking about the same thing.

Now the message is sent: from the lab to field operations. Copied several times. In the end it appears somewhere in memory. Check the checksum! Now you know nothing was changed: the module contributing to the system is the same one that passed the module test.

John is right, there are other things you can mention: same code/checksum but different behavior. But I'd like to come back to that problem once I'm finished with the first one.

The checksum is a matter of securing a message, as in a secure protocol. The only difference here is that the message is a module doing work in some software, a module the first tester sent through an insecure channel until it arrived at some operation.

I think, Ingo, checksums are a good and usual idea for that sort of transmission error.

Rolf
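
A minimal sketch of the compare-the-checksum workflow described above, using SHA-256 via Python's standard library; the module contents and variable names are hypothetical, chosen only to illustrate the programmer/validator/field comparison:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return the SHA-256 digest of a module's bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# The programmer checksums the exact module she proved and tested.
tested_module = b"def frobnicate(x):\n    return x + 1\n"
programmer_sum = checksum(tested_module)

# The independent validator checksums the copy she validated.
validator_sum = checksum(tested_module)
assert programmer_sum == validator_sum  # both tested the same artifact

# In the field, after the module was copied lab -> operations,
# the deployed bytes are checked against the recorded sum.
deployed_copy = tested_module
if checksum(deployed_copy) == programmer_sum:
    print("unchanged: same module as in the module test")
else:
    print("corrupted or altered in transit")
```

A single flipped bit anywhere in `deployed_copy` would change the digest and take the second branch, which is exactly the transmission-error case the checksum is meant to catch.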