public inbox for gcc-help@gcc.gnu.org
* how to make code stay invariant
@ 2006-07-16 23:06 Rolf Schumacher
  2006-07-21  0:44 ` John Carter
  0 siblings, 1 reply; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-16 23:06 UTC (permalink / raw)
  To: gcc-help

Dear gcc professionals,

is it possible to code software in a style so that no bit of the object 
code changes even if referenced objects change?

After an object is loaded into memory I'd like to check the integrity of
the code by means of a checksum on a regular basis.
In order to predict the checksum, no bit in the code should change.
I'd like to have an unchanged object in memory even if referenced
objects change.
(This is to reduce test effort.)

Which areas would such coding rules have to cover?
Do you know of a source where such things are discussed or solved?

Something like an explicit v-table mechanism or so?

Thank you for help

Rolf




* Re: how to make code stay invariant
  2006-07-16 23:06 how to make code stay invariant Rolf Schumacher
@ 2006-07-21  0:44 ` John Carter
  2006-07-23  5:22   ` Rolf Schumacher
  0 siblings, 1 reply; 16+ messages in thread
From: John Carter @ 2006-07-21  0:44 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

On Mon, 17 Jul 2006, Rolf Schumacher wrote:

> is it possible to code software in a style so that no bit of the object code 
> changes even if referenced objects change?

Hmm. I'm not entirely clear as to what you mean, but here are some ideas
that may or may not be relevant....

Compile to position independent code.

Reference no globals, const strings or statics.

Somehow I suspect this is a case of: if we really knew which (larger)
problem you were trying to solve, instead of what you are trying to do,
there would be a much, much easier way...



John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand

Carter's Clarification of Murphy's Law.

"Things only ever go right so that they may go more spectacularly wrong later."

From this principle, all of life and physics may be deduced.


* Re: how to make code stay invariant
  2006-07-21  0:44 ` John Carter
@ 2006-07-23  5:22   ` Rolf Schumacher
  2006-07-23 22:05     ` John Carter
  0 siblings, 1 reply; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-23  5:22 UTC (permalink / raw)
  To: gcc-help

John Carter wrote:

Thank you, John; it seems you know what I mean,
as your recommendation is helpful.
However, let me be a bit more specific;
you may have more recommendations.
> On Mon, 17 Jul 2006, Rolf Schumacher wrote:
>
>> is it possible to code software in a style so that no bit of the 
>> object code changes even if referenced objects change?
>
> Hmm. I'm not entirely clear as to what you mean, but here are some ideas
> that may or may not be relevant....
>
> Compile to position independent code. 
> Reference no globals, const strings or statics.
O.k., fine. Position independent code would help.
Also - in embedded systems - you can determine
a fixed position for a module, even after changes.
(module == some compilation units but not the whole software)
But you're right, this is not the solution to the problem.
>
> Somehow I suspect this is a case of: if we really knew which (larger)
> problem you were trying to solve, instead of what you are trying to do,
> there would be a much, much easier way...
The larger problem:

As an assessor for safety-critical software in embedded systems
I'm regularly faced with statements like:
"we have only made a little change to the source code,
we do not want to retest all modules"

The problem is, even if a change is made to one module only,
how can I demonstrate that all other modules are unchanged?

To put it a bit more technically:

Unchanged source code is easy to demonstrate.
But this is not the whole story.

If I change the source of one module and recompile it,
the relative and absolute addresses in that module will change.
Any other module that has references to the changed one
will change as well, at least after linking.
Additionally, the absolute location of any other module
may change after linking with the changed one,
resulting in inner changes to that module
(if the code is not position-independent).
Also, if I recompile unchanged code, metadata contained
in the object (is there any?) could change.
As the tooling machines and the compiler itself
are not safety-approved, errors could have been introduced
into the unchanged, formerly tested modules.
The only way to find these (unlikely) errors nowadays
is to retest all modules.

Or to ignore the problem, with some unknown risk.
But: full retests are too expensive, and neglecting the risk
has its own probability of failure.

The idea is to have a checksum for each module that has passed all
tests successfully and this checksum is unchanged even if any other
module becomes changed.

It would help to have reference tables between all modules;
the tables may all change if one module changes, but the modules
themselves stay invariant. (Only one test for each table entry
proves it to be correct.)
That way you can rely on the old module tests for old modules
linked into a new software build.

Keeping it short: if an object module's checksum hasn't changed,
you don't have to retest it.

I'm not a specialist in gcc; to be honest I do not know much about it,
John,
but I would like to point the developers in the right direction.
As such, I would like to know whether there is a chance of success on this
subject before doing the detailed investigation.

Thanks for your kind help.



* Re: how to make code stay invariant
  2006-07-23  5:22   ` Rolf Schumacher
@ 2006-07-23 22:05     ` John Carter
  2006-07-24 12:19       ` Ingo Krabbe
  2006-07-24 22:38       ` Rolf Schumacher
  0 siblings, 2 replies; 16+ messages in thread
From: John Carter @ 2006-07-23 22:05 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

On Sun, 23 Jul 2006, Rolf Schumacher wrote:

> The larger problem:
>
> As an assessor for safety critical software of embedded systems
> I'm regularily faced with statements like:
> "we only have made a little change to the source code
> we do not want to retest all modules"
>
> The problem is, even if a change is made to one module only,
> how can I demonstrate that all other modules are unchanged?

The short answer is "You can't. Not even in Principle."

The longer answer is (greetings from Werner by the way!)...

* A common form of bug in C/C++ is an uninitialized variable. Nine times
   out of ten, if "zero" (or whatever the previous value was) is an
   acceptable value, the bug escapes the notice of the testers, no matter how rigorous.

   However, change the memory layout, change the activation pathway of the code,
   and the uninitialized memory can hold something nasty.

* Bugs infest the boundaries of state space. In a smallish system you
   can easily have 2000 variables, i.e. more states than there
   are atoms in the universe!

   I.e. your previous testing, no matter how rigorous, did not explore the
   full state space. In fact it explored only a tiny fraction.

   So a small change entirely external to your "unchanged" module can cause your
   "unchanged" module to move through a different region of state space.

   An untested region.

So, far more useful than checksumming, I would recommend....

* Compile -W -Wall -Werror and perhaps use www.splint.org as well.

* Design your system in strict layers.

* Decrease the size of the state subspace in each layer.

* Decrease the number of "user configurable items".

* Decrease the number of product variants.

* Place a good suite of automated Unit tests at each layer. Test Driven Development
   is Very Good for this.

* Brace the interior of the code well with asserts. This is like testing
   from inside the code, both during bench test and production test.

* Think your assert handling policy through so you run exactly the same code
   under test as in production.

* Have an automated suite of functional tests around your product as a whole.

* Run all tests on all builds.

* Preferably run all the non-timing-critical tests
   under something like valgrind as well. http://www.valgrind.org/

All this will do _way_ more to increase safety than any checksumming scheme.


> Unchanged source code is easy to demonstrate.
> But this is not the whole story.

I was recently called in to track down the difference between two apparently
identical bits of source code resulting in different executables and
behaviour...

Answer? The __FILE__ macro (commonly used in asserts and the like)
includes the full path. One guy had built his code in /home/joe and the
other in /home/fred, resulting in different string lengths! (And hence
different offsets everywhere!)

> If I change the source of one module and recompile it....

All too true. Perhaps if they were separate processes / executables you would
be in the nice state of being able to verify byte for byte that they are the
same.

The Unixy notion of separate processes with separate virtual address spaces
is not just a pretty face. It gives a lot of really good, really hard
safety guarantees!

> The only way to find these (unlikely) errors nowadays
> is to retest all modules.
>
> Or to ignore this problem with some unknown risk.
> But: retests are too expensive and risk negligence
> has its probability.

If I were to push (hard) on some aspect of the problem I would push
(hard) on the cost of retest.

> The idea is to have a checksum for each module that has passed all
> tests successfully and this checksum is unchanged even if any other
> module becomes changed.

Checksum the individual .o object files. That's the best balance of
difficulty versus safety.

It's in no way a guarantee (a test is still better), but it would be cheap
and better than nothing.

Notes about checksumming the .o files....

They include debug data which has absolute file paths.

They include time stamps.

You'll probably need to use the objdump (or maybe objcopy / readelf)
utilities to read out the sections you care about and checksum those.

Beware of $Version Control$ tags ending up in strings. They too cause
spurious diffs.



* Re: how to make code stay invariant
  2006-07-23 22:05     ` John Carter
@ 2006-07-24 12:19       ` Ingo Krabbe
  2006-07-24 22:39         ` Rolf Schumacher
  2006-07-24 22:38       ` Rolf Schumacher
  1 sibling, 1 reply; 16+ messages in thread
From: Ingo Krabbe @ 2006-07-24 12:19 UTC (permalink / raw)
  To: gcc-help

I would strongly recommend reading the released volumes of D. E. Knuth's TAOCP
(The Art of Computer Programming) and other works by him and his students.

He describes, and has always cared about, the ability to prove a computer
algorithm mathematically.  I think that besides his own MMIX language, C is an
excellent language for writing algorithms that admit mathematical validation,
since C is well defined, and so are most parts of libc.

I think this is the only approach to algorithms that are really invariant.
There is no "external" tool to test the "internal" invariance of algorithms,
since this would mean testing an algorithm under all conditions that could
ever appear.

The checksum approach isn't very useful, I think, since algorithms are and
should be changed by hand, and communication between developers should be as
direct as possible.  If you introduce checksums, you provide a tool that
simulates stability where there is none.  There is no fire-and-forget
algorithm that you haven't developed and documented quite well.


* Re: how to make code stay invariant
  2006-07-23 22:05     ` John Carter
  2006-07-24 12:19       ` Ingo Krabbe
@ 2006-07-24 22:38       ` Rolf Schumacher
  2006-07-24 23:22         ` John Carter
  1 sibling, 1 reply; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-24 22:38 UTC (permalink / raw)
  To: John Carter; +Cc: Rolf Schumacher, gcc-help

Thanks, John.

I see you're experienced in what you're talking about.
And I'd like to thank you for the examples.
Splint looks good. Valgrind too, but the majority
of software I'm dealing with is non-Linux, RT-based.
I'll give it a second look.

You're absolutely right with your examples.
But think twice: the alternatives are:
1. accept the code with a few immediate ad hoc tests, or
2. supply some rules that reduce the probability of
failures without changing the amount of effort needed.

I'd like to supply the rules, to end up with more safety,
that is: less probability of unacceptable failures.

John, we're living in a real world. I can tell you:
you can't say "retest 100,000 lines of code"
upon a small change. Believe me.

We have to come up with some better idea.

For now I'd like to focus solely on dynamic errors:
errors that happen while compiling, linking, loading and running.
For now, forget about dangerous errors in one module
that were covered so far by the unchanged other one.
For now, forget about programming errors
(even if these are the most likely source).

An error I'd like to uncover is: I'm linking on a PC/XP,
and somehow a bit changes just before the linker
packs the object to be written to disk. The checksum is OK,
but the object is bad.

If I had a checksum from last linking and made no change
I could point to the failure immediately, e.g. at load time.

In the first place I'm not thinking about a checksum
to be generated while the object is on disk (this would be nice, though).
I could have an MD5 algorithm run at initialization time.
I just have to look at the code as one (or several) array(s) of bytes,
calculate the sum, and not start the software
if it is not as expected. I'll do that on three computers
at the same time with some tricky hardware voting.

I have to do this anyway on a regular basis
in order to check whether my software has changed
where it shouldn't. So this is no extra cost.

I guess you're right in thinking of each module
as a separate task that takes messages to perform
functions and supplies messages as results.
That points in the right direction.

Then the question is: what's the simplest messaging system
that leaves the receiver code invariant
upon changes in the sender, and vice versa?
It should look familiar to programmers who think in
function calls.



The point that I'm asking is,


* Re: how to make code stay invariant
  2006-07-24 12:19       ` Ingo Krabbe
@ 2006-07-24 22:39         ` Rolf Schumacher
  2006-07-25  4:47           ` Ingo Krabbe
  0 siblings, 1 reply; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-24 22:39 UTC (permalink / raw)
  To: Ingo Krabbe; +Cc: gcc-help

Thank you, Ingo.
> The checksum approach isn't very useful, I think, since algorithms
> are and should be changed by hand, and communication between developers
> should be as direct as possible.  If you introduce checksums, you
> provide a tool that simulates stability where there is none.  There
> is no fire-and-forget algorithm that you haven't developed and
> documented quite well.
>   
Think of a checksum as a means of securing a message.
Any secure protocol connection relies on that. It's useful at least in
that case.

Now take this model:

A programmer has proved (e.g. by Knuth's rules) and tested her code
hard. She makes a checksum so she knows exactly what she has tested.
Then the software is validated independently; again, no errors.
The validator compares his checksum to the one the programmer knows.
They are equal.
What have we got? Two independent people saying this is good software,
and they are certainly talking about the same thing.
Now the message is sent: from lab to field operations. Copied several
times.
In the end it appears somewhere in memory. Check the checksum!
Now you know nothing was changed; the module contributes to the system,
and you're sure it's the same one as in the module test.

John is right, there are other things you can mention:
same code/checksum but different behaviour.
But I'd like to come back to that problem when I'm finished with the first
one.

The checksum is a matter of securing a message, as in a secure protocol.
The only difference here is that the message is a module doing work in some
software.
It's a module that was sent by the first tester through an insecure channel
until it arrived at some operation.

I think, Ingo, checksums are a good and usual remedy for that sort of
transmission error.

Rolf



* Re: how to make code stay invariant
  2006-07-24 22:38       ` Rolf Schumacher
@ 2006-07-24 23:22         ` John Carter
  2006-07-25 22:16           ` Rolf Schumacher
  0 siblings, 1 reply; 16+ messages in thread
From: John Carter @ 2006-07-24 23:22 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: Rolf Schumacher, gcc-help

On Tue, 25 Jul 2006, Rolf Schumacher wrote:

> John, we're living in a real world. I can tell you:
> you can't say "retest 100,000 lines of code"
> upon a small change. Believe me.

That's why you do Test Driven Development with a test harness to run all
automated tests.

It really really does work in very real world environments with even
larger code bases. It really really does improve design. You really can
rerun all tests on 100000 lines of code.

If you can't rerun all tests, it is quite simply because you designed it
wrong. You didn't design it for testability.

And in a safety critical App that is gravely remiss.

Places to start reading are...
  http://www.objectmentor.com/resources/bookstore/books/welc/
  http://www.agiledata.org/essays/tdd.html

> We have to come up with some better idea.

TDD _is_ the better idea.

> For now I'd like to focus solely on dynamic errors.
> Errors to happen while compiling, linking, loading and running.

You know something, I have been bitten by some compiler bugs in my time.

Pretty rare, but they happen.

I would estimate, looking at my current project (200000+ LOC, a man-decade
or two of development, real-time embedded C), that we have about three
full orders of magnitude more programmer bugs than compiler bugs. None
of the compiler bugs were sporadic: a correct program simply failed to compile.

We have never been bitten by linker bugs at all. Well, admittedly
writing gnu ld script is actively user hostile, but it either worked or
it didn't.

We have had lots of loader bugs, but then for various strange reasons,
we wrote our own. In all my years programming I have never been bitten
by an OS loader bug. There is a moral there...

> An error I'd like to uncover is: I'm linking on a PC/XP
> and somehow a bit changes just before the linker
> packs the object to be written to the disk. Checksum is ok,
> the object is bad.

Wow! That is such a low-probability risk compared to Good Old Human stuff-ups,
I wouldn't even give it a moment's thought unless I had actually
seen it happen once.

If you really are having such errors, you have a buggy linker (time for a
newer, or older, version fast) or you have buggy hardware. I.e. fix the
tool; don't create a kludgy workaround around the broken tool.

> If I had a checksum from last linking and made no change
> I could point to the failure immediately, e.g. at load time.

Some (targets/versions) of the GCC linker do relaxation passes, i.e.
change long jumps to short jumps and long references to short
offsets. And since the size of the code has shrunk, they do that again,
and again, until it converges.

Basically you want each module to be a DLL/sharable object so the linker
does the absolute minimum of fix ups.

You also need a strict acyclic dependency graph between the sharable
objects and then link each layer with lower layers.

Follow the standard tricks to make a sharable object / DLL.

You still need the objdump tricks I mentioned to pull out just the
sections you care about.

> The point that I'm asking is,

Somehow your mailer lost everything you wrote after this point in your post!





* Re: how to make code stay invariant
  2006-07-24 22:39         ` Rolf Schumacher
@ 2006-07-25  4:47           ` Ingo Krabbe
  0 siblings, 0 replies; 16+ messages in thread
From: Ingo Krabbe @ 2006-07-25  4:47 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

On Tuesday, 25 July 2006 at 00:39, Rolf Schumacher wrote:
> Thank you, Ingo.
>
> > The checksum approach isn't very useful, I think, since algorithms
> > are and should be changed by hand, and communication between developers
> > should be as direct as possible.  If you introduce checksums, you
> > provide a tool that simulates stability where there is none.  There
> > is no fire-and-forget algorithm that you haven't developed and
> > documented quite well.
>
> Think of a checksum as a means to secure a message.
> Any secure protocol connection lives upon that. It's useful at least in
> that case.

Ouch, sorry, Rolf, but you seem to have misunderstood me.  I never meant
that checksums are complete rubbish, but I don't think they are really
useful for securing the stability of code.

All I wanted to say is that the stability of code has to be controlled by the
underlying logic, and you will fail if you rely on valgrind or splint.  They
are useful for locating errors that have already been detected in a complex
system, I think.

If you really want invariant concepts in your code, and want them to stay
that way, you have to specify, prove and implement carefully.

Of course, if you think your objects or your code may be attacked by someone,
you are right to implement some checksumming.  If you have several people
installing submodules into one system, I would prefer gpg-signing by the
installer.


* Re: how to make code stay invariant
  2006-07-24 23:22         ` John Carter
@ 2006-07-25 22:16           ` Rolf Schumacher
  2006-07-26  6:47             ` John Carter
  2006-07-28 23:35             ` Rolf Schumacher
  0 siblings, 2 replies; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-25 22:16 UTC (permalink / raw)
  To: John Carter; +Cc: Rolf Schumacher, gcc-help

Thank you John.

I need two more links. Please see below.

Rolf

John Carter wrote:
> That's why you do Test Driven Development with a test harness to run all
> automated tests.
agree, that's the preferred solution
>
> It really really does work in very real world environments with even
> larger code bases. It really really does improve design. You really can
> rerun all tests on 100000 lines of code.
In complex and safety-critical command-and-control systems
you rely heavily on good simulation in testing. The problem
of trusting the simulators often has to be solved by giving
the system a trial period after going operational.
That's the most expensive part to test.
Even for that problem, a solution to "small changes" would count for a lot.
>
> If you can't rerun all tests, it is quite simply because you designed it
> wrong. You didn't design it for testability.
trusted systems are often old, hard-to-exchange legacy systems ...
>
> And in a safety critical App that is gravely remiss.
100% agreed.
>
> Places to start reading are...
> http://www.objectmentor.com/resources/bookstore/books/welc/
ordered that, thanks
> http://www.agiledata.org/essays/tdd.html
I'm currently coaching a first project taking steps towards Agile methods,
so I'm aware of what you mean. However ....
>
>> We have to come up with some better idea.
>
> TDD _is_ the better idea.
You should do that first, OK. But it's not enough; there could be more.
>
>> For now I'd like to focus solely on dynamic errors.
>> Errors to happen while compiling, linking, loading and running.
>
> You know something, I have been bitten by some compiler bugs in my time.
>
> Pretty rare, but they happen.
We just had to recall projects, expensively, over a difference between gcc
compiling for SUN and for Intel (const in parameters).
Debugging was done on a SUN; delivery was for Intel.
>
> I would estimate looking at my current project (200000+ LOC, a man
> decade or two of development, real time embedded C) that we have about 3
> full orders of magnitude more programmer bugs than compiler bugs. None
> of them were sporadic. A correct program simply failed to compile.
If you manage to overcome that, you're a professional better than all
the rest.
Even then, what have you got? You still have the sporadic errors:
e.g. critical-region failures, state machines without conflict
resolution, ...
and every programmer's limited accuracy as well.

On your magnitudes, I estimate: an average programmer
puts a failure into the code before testing at every 20th decision on average,
with one decision per 10 LOC:
that's 0.005. After thorough module testing, about 1 in ~50 is not found.
That's 10**-3. Integration, validation and system integration bring this to
10**-6. In safety-critical systems we have to demonstrate (!) 10**-9.
For example, systems in a nuclear power plant
have to be secure to 10**-13 (afaik). They are not allowed to add more risk.
You have to have risk-reduction technologies, because you can't reach those
figures with software.
>
> We have never been bitten by linker bugs at all. Well, admittedly
> writing gnu ld script is actively user hostile, but it either worked or
> it didn't.
Do you count "oh, sorry, somehow I used an outdated makefile"?
And "maybe the SCCS had an error"?
>
> We have had lots of loader bugs, but then for various strange reasons,
> we wrote our own. In all my years programming I have never been bitten
> by an OS loader bug. There is a moral there...
That's it, if I go for checksums I have to write my own loader.
>
>> An error I'd like to uncover is: I'm linking on a PC/XP
>> and somehow a bit changes just before the linker
>> packs the object to be written to the disk. Checksum is ok,
>> the object is bad.
>
> Wow! That is such a low probability risk compare to Good Old Human stuff
> ups, I wouldn't even give it a moments thought unless I had actually
> seen it happening once.
We estimate that probability at 10**-5 .. 10**-6; at least we are not 
able to demonstrate
better figures. If you know a way to demonstrate better figures ...
>
> If you really having such errors you have a buggy linker, time for a
> newer (or older) version fast, or you have buggy hardware. ie. Fix the
> tool, don't create a kludgy workaround patch around the broken tool.
This is the important point: I never had such an error. And it should
never happen,
at least not undetected. But we are only one company. Take Ariane or Skylab:
a thousand companies deliver software to such a project.
What would you do if you were in charge of safety in the purchasing 
department?
Believe the suppliers that it never happened in the past?

Just the fact that you can conceive of an error creates the obligation
to assign an accepted figure to it: 1. HAZOP, 2. FMEA, at least FTA;
you do not have any statistics. The error need never have actually occurred.
>
>> If I had a checksum from last linking and made no change
>> I could point to the failure immediately, e.g. at load time.
>
> Some (targets/versions) of the GCC linker do relaxation passes. ie.
> Change long jumps to short jumps, change long references to short
> offsets. And since the size of the code has shrunk, they do that again,
> and again until it converges.
Can I switch that off?
>
> Basically you want each module to be a DLL/sharable object so the linker
> does the absolute minimum of fix ups.
>
> You also need a strict acyclic dependency graph between the sharable
> objects and then link each layer with lower layers.
>
> Follow the standard tricks to make a sharable object / DLL.
Now that's it: I need a link here to update my knowledge.
>
> You still need the objdump tricks I mentioned to pull just the sections
> you care about out.
ditto
>
>> The point that I'm asking is,
>
> Somehow your mailer lost everything you wrote after this point in your 
> post!
Sorry I was interrupted and couldn't finish.

What I wanted to tell you is,
that you're completely right with the example of the Unix loader
separating tasks by means of address space.

I have to look at a module as a task that takes messages and responds
with messages, as in UML sequence charts.

What is the easiest way to implement a messaging system, e.g. by macros,
for programmers who prefer function-call syntax?

This question seems to be just another way to look at the problem.

(I had a bit more text here, but as far as I remember that's the core.)

kind regards

Rolf



* Re: how to make code stay invariant
  2006-07-25 22:16           ` Rolf Schumacher
@ 2006-07-26  6:47             ` John Carter
  2006-07-29 18:50               ` Rolf Schumacher
  2006-07-28 23:35             ` Rolf Schumacher
  1 sibling, 1 reply; 16+ messages in thread
From: John Carter @ 2006-07-26  6:47 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

Hmm, you probably should be scanning Wheeler's handy...
   http://www.dwheeler.com/essays/high-assurance-floss.html

High Assurance (for Security or Safety) and Free-Libre / Open Source
Software (FLOSS)... with Lots on Formal Methods


On Wed, 26 Jul 2006, Rolf Schumacher wrote:

>> Pretty rare, but they happen.
> We just had to recall projects in an expensive way
> upon a difference in gcc compiling for SUN
> and for Intel. (const in parameters)
> Debuggin was done on a SUN, delivery was for Intel.

Test like you fly, fly what you tested...

But hmm, you said debug not test... So I think there is more to that
issue than meets the eye...

> In safety critical systems we have to demonstrate (!) 10**-9.
> For example, systems in an atomic power plant
> have to be secure to 10**-13 (asaik). They are not allowed to add more risk.
> You have to have risk reduction technologies because you can't reach that
> figures with software.

I'm reminded of the Bad Old Days when there were MilSpec computers.

Until they realized that the sheer weight of consumer COTS products
meant that what was available from the corner store was...
  * Way way cheaper.
  * Way way faster.
  * And much more reliable!

Happened again with handheld GPS during the Gulf War. The COTS /
consumer GPSs were just so much better than the MilSpec ones (even with
the deliberate signal fuzzing!!) that they gave up and used the COTS.

The other thought that comes to mind is a variant of a very old joke....

   Patient to Doctor, "Doctor! Doctor! I need to be incredibly hugely
                       impossibly painfully costly reliable to do this."

   Doctor, "Well don't do that then."

> Just the fact that you can think about an error draws the responsibility
> to give an accepted figure for it: 1. HAZOP, 2. FMEA at least FTA,
> you do not have any statistics. It hasn't to be real at all in any past.

Wow! That is really Amazing! You are _so_ deep in the Dilbert Zone! Do
you _ever_ see sunlight there?

http://www.dilbert.com/comics/dilbert/archive/dilbert-20060724.html

>> Some (targets/versions) of the GCC linker do relaxation passes. ie.
>> Change long jumps to short jumps, change long references to short
>> offsets. And since the size of the code has shrunk, they do that again,
>> and again until it converges.
> Can I switch that off?

Only applies to very few CPUs; I don't know which one you are using. I
met it on the HC12. Search "info gcc" for "relax".

>> Basically you want each module to be a DLL/sharable object so the linker
>> does the absolute minimum of fix ups.
>> 
>> You also need a strict acyclic dependency graph between the sharable
>> objects and then link each layer with lower layers.
>> 
>> Follow the standard tricks to make a sharable object / DLL.
> Now that's it: I need a link here to update my knowledge.

http://www.dwheeler.com/program-library/Program-Library-HOWTO/x36.html
http://people.redhat.com/drepper/dsohowto.pdf

In fact Drepper's whole page is a gold mine of detailed info on ELF.
http://people.redhat.com/~drepper/

In fact I'll make a wild guess....

If you really understood all the niches and corners of ELF, which is
quite a large and hairy domain, what you want is already in there
somewhere.

>> You still need the objdump tricks I mentioned to pull just the sections
>> you care about out.
> dito

info binutils


> What I wanted to tell you is,
> that you're completely right with the example of the Unix loader
> separating tasks by means of address space.
>
> I have to look at a module as a task that takes messages and respond
> with messages. As in UML sequence charts.
>
> What is the easiest way to implement a messaging system e.g. by macros
> for programmers that like to use function calls?

Make it simple to use, complex == more lines of code == programmer
mistakes.

The one we are using involves declaring and packing and unpacking
structs all over the place. Yuck! Tedious and error prone.

I itch to rewrite using a simple convention that looks like an ordinary
function declaration, definition and reference.

And then add a bit of Ruby code generation magic to generate a header
pulled in by the client and a header to be pulled in by the server. Oh,
and glue it together with a small, possibly entirely non-portable bit of
C that understands varargs to serialize the arguments across the
messaging interface.

I bet I can get a huge reduction in code size, much simpler, much more
reliable and better code.


John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand

Carter's Clarification of Murphy's Law.

"Things only ever go right so that they may go more spectacularly wrong later."

From this principle, all of life and physics may be deduced.


* Re: how to make code stay invariant
  2006-07-25 22:16           ` Rolf Schumacher
  2006-07-26  6:47             ` John Carter
@ 2006-07-28 23:35             ` Rolf Schumacher
  1 sibling, 0 replies; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-28 23:35 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: John Carter, gcc-help


John Carter wrote:
> Hmm, you probably should be scanning Miller's handy...
> http://www.dwheeler.com/essays/high-assurance-floss.html
>
> High Assurance (for Security or Safety) and Free-Libre / Open Source
> Software (FLOSS)... with Lots on Formal Methods
Thanks for that link, John. Wheeler has collected and weighted a snapshot
of open source projects somehow related to security issues.
Unfortunately I can't find much about safety: all the levels are security
levels, not safety levels. And, to be honest, no one has yet persuaded me
that formal methods beat semi-formal ones like UML.
In practice they do no better.
But I may not be up to date with the state of the art.

However, its relationship to our subject here is loose.
>
>
> But hmm, you said debug not test... So I think there is more to that
> issue than meets the eye...
hmm, nice, you caught that difference.
Yes, that was the problem:
Debugging is about reducing the number of errors;
apart from statistics, no public demonstration is required.
Testing a module is about proving the absence of failures,
as far as possible in that step,
with a strong demonstration to independent people representing the public.
And I agree this has to be, and will be, done on the target system.
Debugging is not testing.
> Wow! That is really Amazing! You are _so_ deep in the Dilbert Zone! Do
> you _ever_ see sunlight there?
>
> http://www.dilbert.com/comics/dilbert/archive/dilbert-20060724.html
Sorry, I didn't mean to impress you, nor to lose you here.
Just to show how safety targets work,
or at least how they work currently; we are learning all the time.
It takes time to develop a nose for 10**-9. A dog is born with it; you
have to train yours.
> http://www.dwheeler.com/program-library/Program-Library-HOWTO/x36.html
> http://people.redhat.com/drepper/dsohowto.pdf
Thanks, the latter one taught me something: ELF and PIE, GOT and PLT.
(Almost every time they talk about "cost" in terms of speed and memory,
I read "solution" in terms of independence and invariance.)
> In fact Drepper's whole page is a gold mine of detailed info on ELF.
> http://people.redhat.com/~drepper/
He seems to know what he's talking about.
>
> In fact I'll make a wild guess....
>
> If you really understood all the niches and corners of ELF, which is
> quite a large and hairy domain, what you want is already in there
> somewhere.
That's not a wild guess; from what I see, it's true.
What I also see is that it could be a lot of tricky work.
As I mentioned above, I'm not so much interested in cost
(memory and speed) as everyone else is.
It would be nice if someone had already done that work
with more knowledge of ELF than I'm equipped with.

>
>> What I wanted to tell you is,
>> that you're completely right with the example of the Unix loader
>> separating tasks by means of address space.
>>
>> I have to look at a module as a task that takes messages and respond
>> with messages. As in UML sequence charts.
>>
>> What is the easiest way to implement a messaging system e.g. by macros
>> for programmers that like to use function calls?
>
> Make it simple to use, complex == more lines of code == programmer
> mistakes.
>
> The one we are using involves declaring and packing and unpacking
> structs all over the place. Yuck! Tedious and error prone.
>
> I itch to rewrite using a simple convention that looks like an ordinary
> function declaration, definition and reference.
>
> And then add a bit of Ruby code generation magic to generate a header
> pulled in by the client and a header to be pulled in by the server. Oh,
> and glue it together with a small, possibly entirely non-portable bit of
> C that understands varargs to serialize the arguments across the
> messaging interface.
>
> I bet I can get a huge reduction in code size, much simpler, much more
> reliable and better code.
Yes, here is what I understand so far about finding the solution I'm
looking for:

1. Understand ELF until you're able to play with it
2. Generate PIEs in ELF format
3. Persuade gcc to route every external reference through GOTs and PLTs;
check that the code stays invariant; if needed, define coding rules and
build a checker.
4. Make sure the GOTs and PLTs are not contained in code regions
5. Give programmers a handy tool to reach functions as usual (maybe 
this step is not needed)
6. Generate meta-information on the absolute start and length of a code
region in memory and do the checksumming
7. Check the checksumming by moving the code around and by using
different implementations of e.g. MD5.

I thought it would be easier, and that I would not be the first to ask
for such a thing.

What have I overlooked, John?
What could be made simpler?
I'm mostly worried about step 1.
There's also a risk of being unsuccessful in steps 3 and 4.
What other risks haven't I identified?



Rolf




* Re: how to make code stay invariant
  2006-07-26  6:47             ` John Carter
@ 2006-07-29 18:50               ` Rolf Schumacher
  2006-07-30 22:33                 ` John Carter
  0 siblings, 1 reply; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-29 18:50 UTC (permalink / raw)
  To: John Carter; +Cc: gcc-help


Hi, John.

For a first step I found a much simpler solution.
I checked it with gcc -fPIC, and then applied prelink.

Consider a module m1 calling function o2 in module m2.
If we define an interface compilation unit for m2, say m2if,
that implements o2if, which just calls o2, and m1 calls
o2if instead of o2, then m1 stays invariant as long
as the interface is invariant. I can make as many "small changes"
to m2 as I like; the checksum of m1 (in memory and on disk)
stays invariant.

in code now, prior to invariance:

m1.c:
#include "m2.h"
int main(void){o2();}

m2.h:
void o2();

m2.c:
#include <stdio.h>
#include "m2.h"
void o2() {printf("hello world");}

and now after changes for invariance:

m1.c:
#include "m2if.h"
int main(void){o2if();}

m2if.h:
void o2if();

m2if.c:
#include "m2if.h"
#include "m2.h"
void o2if(){o2();}

m2.h:
void o2();

m2.c:
#include <stdio.h>
#include "m2.h"
void o2() {printf("hello world");}

Conclusion:
The object code invariance is gained by coding rules alone.
Introducing object code invariance is applicable even to existing software.
Benefit: all my module tests of m1 still apply in the new software,
regardless of changes to m2. I don't have to repeat them.

Remark:
To reduce validation tests of the software product
as a whole against requirements, I still need a reliable
impact analysis. Invariance doesn't help here.
Object code invariance gives evidence only of code integrity for
the invariant part of the software, nothing more.
However, that's still a lot.

I'll do that!

Thanks for your great help and all the philosophical hints.
I wouldn't have managed without them, even if the resulting solution is
that simple.

kind regards

Rolf

John Carter wrote:
> Hmm, you probably should be scanning Miller's handy...
> http://www.dwheeler.com/essays/high-assurance-floss.html
>
> High Assurance (for Security or Safety) and Free-Libre / Open Source
> Software (FLOSS)... with Lots on Formal Methods
>
>
> On Wed, 26 Jul 2006, Rolf Schumacher wrote:
>
>>> Pretty rare, but they happen.
>> We just had to recall projects in an expensive way
>> upon a difference in gcc compiling for SUN
>> and for Intel. (const in parameters)
>> Debuggin was done on a SUN, delivery was for Intel.
>
> Test like you fly, fly what you tested...
>
> But hmm, you said debug not test... So I think there is more to that
> issue than meets the eye...
>
>> In safety critical systems we have to demonstrate (!) 10**-9.
>> For example, systems in an atomic power plant
>> have to be secure to 10**-13 (asaik). They are not allowed to add 
>> more risk.
>> You have to have risk reduction technologies because you can't reach 
>> that
>> figures with software.
>
> I'm reminded of the Bad Old Days when there were MilSpec computers.
>
> Until they realized that the sheer weight of consumer COTS products
> meant that what was available from the corner store was...
> * Way way cheaper.
> * Way way faster.
> * And much more reliable!
>
> Happened again with handheld GPS during the Gulf War. The COTS /
> Consumer GPS's were just so much better than the MilSpec ones (even with
> the delibrate signal fuzzing!!) that they gave up and used the COTS.
>
> The other thought that comes to mind is a variant of a very old joke....
>
> Patient to Doctor, "Doctor! Doctor! I need to be incredibly hugely
> impossibly painfully costly reliable to do this."
>
> Doctor, "Well don't do that then."
>
>> Just the fact that you can think about an error draws the responsibility
>> to give an accepted figure for it: 1. HAZOP, 2. FMEA at least FTA,
>> you do not have any statistics. It hasn't to be real at all in any past.
>
> Wow! That is really Amazing! You are _so_ deep in the Dilbert Zone! Do
> you _ever_ see sunlight there?
>
> http://www.dilbert.com/comics/dilbert/archive/dilbert-20060724.html
>
>>> Some (targets/versions) of the GCC linker do relaxation passes. ie.
>>> Change long jumps to short jumps, change long references to short
>>> offsets. And since the size of the code has shrunk, they do that again,
>>> and again until it converges.
>> Can I switch that off?
>
> Only applies to very few CPU's, don't know which one you are using. I
> met it on the HC12. Search "info gcc" for "relax".
>
>>> Basically you want each module to be a DLL/sharable object so the 
>>> linker
>>> does the absolute minimum of fix ups.
>>>
>>> You also need a strict acyclic dependency graph between the sharable
>>> objects and then link each layer with lower layers.
>>>
>>> Follow the standard tricks to make a sharable object / DLL.
>> Now that's it: I need a link here to update my knowledge.
>
> http://www.dwheeler.com/program-library/Program-Library-HOWTO/x36.html
> http://people.redhat.com/drepper/dsohowto.pdf
>
> In fact Drepper's whole page is a gold mine of detailed info on ELF.
> http://people.redhat.com/~drepper/
>
> In fact I'll make a wild guess....
>
> If you really understood all the niches and corners of ELF, which is
> quite a large and hairy domain, what you want is already in there
> somewhere.
>
>>> You still need the objdump tricks I mentioned to pull just the sections
>>> you care about out.
>> dito
>
> info binutils
>
>
>> What I wanted to tell you is,
>> that you're completely right with the example of the Unix loader
>> separating tasks by means of address space.
>>
>> I have to look at a module as a task that takes messages and respond
>> with messages. As in UML sequence charts.
>>
>> What is the easiest way to implement a messaging system e.g. by macros
>> for programmers that like to use function calls?
>
> Make it simple to use, complex == more lines of code == programmer
> mistakes.
>
> The one we are using involves declaring and packing and unpacking
> structs all over the place. Yuck! Tedious and error prone.
>
> I itch to rewrite using a simple convention that looks like an ordinary
> function declaration, definition and reference.
>
> And then add a bit of Ruby code generation magic to generate a header
> pulled in by the client and a header to be pulled in by the server. Oh,
> and glue it together with a small, possibly entirely non-portable bit of
> C that understands varargs to serialize the arguments across the
> messaging interface.
>
> I bet I can get a huge reduction in code size, much simpler, much more
> reliable and better code.
>
>
> John Carter Phone : (64)(3) 358 6639
> Tait Electronics Fax : (64)(3) 359 4632
> PO Box 1645 Christchurch Email : john.carter@tait.co.nz
> New Zealand
>
> Carter's Clarification of Murphy's Law.
>
> "Things only ever go right so that they may go more spectacularly 
> wrong later."
>
>> From this principle, all of life and physics may be deduced.
>
>




* Re: how to make code stay invariant
  2006-07-29 18:50               ` Rolf Schumacher
@ 2006-07-30 22:33                 ` John Carter
  2006-07-30 23:11                   ` John Carter
  2006-07-31 20:28                   ` Rolf Schumacher
  0 siblings, 2 replies; 16+ messages in thread
From: John Carter @ 2006-07-30 22:33 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

Ok, so let's threat model that....

Let's say probability of bug per line of code is p.

Let's say the number of bugs of the sort of bug you are trying to
prevent (link time corruption bugs not found on initial test) is q.

Let's say the probability of bug removal from ordinary code (by
inspection, by test, by static tool etc.) is d

So the number of bugs in the ordinary code (as written) is pN, where N is
the number of lines of code in the program.

The number of bugs after bug removal, including the link-time /
corruption bugs, is pN(1-d)+q.

Now you have introduced a further M lines of interface code. Assume
approximately the same bug rate, maybe less: say the bug rate in the
interface code is a*p, where a is a number between 0.1 and 1.

So you now have (pN+apM)(1-d) == p(1+aM/N)N(1-d)

So compare the two bug rates...

pN(1-d)+q vs p(1+aM/N)N(1-d)
   or
pN(1-d)+q vs pN(1-d)+apM(1-d)

So whether you have gained anything from this activity depends on whether
  q is greater than apM(1-d)

You will have to plug your own numbers into that.

My guess is the answer is a resounding "No!" since q is so very small
compared to pM.


On Sat, 29 Jul 2006, Rolf Schumacher wrote:

> Hi, John.
>
> for the first place I found a much simpler solution.
> I checked it with gcc -fPIC, and than applied prelink.
>
> Consider a module m1 calling function o2 in module m2.
> If we define an interface compilation unit for m2, say m2if,
> that implements o2if just calling o2 and m1 is not using
> o2 anymore instead of o2if, m1 stays invariant as long
> as the interface is invariant. I can do as much "small changes"
> to m2 as I like, the checksum of m1 (in memory and on disk)
> stays invariant.
>
> in code now, prior to invariance:
>
> m1.c:
> #include "m2.h"
> int main(void){o2();}
>
> m2.h:
> void o2();
>
> m2.c:
> #include "m2.h"
> void o2() {printf("hello world");}
>
> and now after changes for invariance:
>
> m1.c:
> #include "m2if.h"
> int main(void){o2if();}
>
> m2if.h:
> void o2if();
>
> m2if.c:
> #include "m2if.h"
> #include "m2.h"
> void o2if(){o2();}
>
> m2.h:
> void o2();
>
> m2.c:
> #include "m2.h"
> void o2() {printf("hello world");}
>
> Conclusion:
> The object code invariance is gained just by coding rules.
> Introduce of object code invariance is applicable even to existing software.
> Benefit: all my module tests to m1 apply in the new software
> regardless of changes to m2. I haven't to repeat them.
>
> Remark:
> For reduction of validation tests of the software product
> as a whole against requirements I still need a reliable
> impact analysis in order to reduce tests. Invariance doesn't help here.
> Code object invariance gives evidence only for code integrity on
> the invariant part of the software, nothing more.
> However, that's still a lot.
>
> I'll do that!
>
> Thanks for your great help and all the philosophical hints.
> I wouldn't have done without even if the resulting solution is that simple.
>
> kind regards
>
> Rolf
>
> John Carter wrote:
>> Hmm, you probably should be scanning Miller's handy...
>> http://www.dwheeler.com/essays/high-assurance-floss.html
>> 
>> High Assurance (for Security or Safety) and Free-Libre / Open Source
>> Software (FLOSS)... with Lots on Formal Methods
>> 
>> 
>> On Wed, 26 Jul 2006, Rolf Schumacher wrote:
>> 
>>>> Pretty rare, but they happen.
>>> We just had to recall projects in an expensive way
>>> upon a difference in gcc compiling for SUN
>>> and for Intel. (const in parameters)
>>> Debuggin was done on a SUN, delivery was for Intel.
>> 
>> Test like you fly, fly what you tested...
>> 
>> But hmm, you said debug not test... So I think there is more to that
>> issue than meets the eye...
>> 
>>> In safety critical systems we have to demonstrate (!) 10**-9.
>>> For example, systems in an atomic power plant
>>> have to be secure to 10**-13 (asaik). They are not allowed to add more 
>>> risk.
>>> You have to have risk reduction technologies because you can't reach that
>>> figures with software.
>> 
>> I'm reminded of the Bad Old Days when there were MilSpec computers.
>> 
>> Until they realized that the sheer weight of consumer COTS products
>> meant that what was available from the corner store was...
>> * Way way cheaper.
>> * Way way faster.
>> * And much more reliable!
>> 
>> Happened again with handheld GPS during the Gulf War. The COTS /
>> Consumer GPS's were just so much better than the MilSpec ones (even with
>> the delibrate signal fuzzing!!) that they gave up and used the COTS.
>> 
>> The other thought that comes to mind is a variant of a very old joke....
>> 
>> Patient to Doctor, "Doctor! Doctor! I need to be incredibly hugely
>> impossibly painfully costly reliable to do this."
>> 
>> Doctor, "Well don't do that then."
>> 
>>> Just the fact that you can think about an error draws the responsibility
>>> to give an accepted figure for it: 1. HAZOP, 2. FMEA at least FTA,
>>> you do not have any statistics. It hasn't to be real at all in any past.
>> 
>> Wow! That is really Amazing! You are _so_ deep in the Dilbert Zone! Do
>> you _ever_ see sunlight there?
>> 
>> http://www.dilbert.com/comics/dilbert/archive/dilbert-20060724.html
>> 
>>>> Some (targets/versions) of the GCC linker do relaxation passes. ie.
>>>> Change long jumps to short jumps, change long references to short
>>>> offsets. And since the size of the code has shrunk, they do that again,
>>>> and again until it converges.
>>> Can I switch that off?
>> 
>> Only applies to very few CPU's, don't know which one you are using. I
>> met it on the HC12. Search "info gcc" for "relax".
>> 
>>>> Basically you want each module to be a DLL/sharable object so the linker
>>>> does the absolute minimum of fix ups.
>>>> 
>>>> You also need a strict acyclic dependency graph between the sharable
>>>> objects and then link each layer with lower layers.
>>>> 
>>>> Follow the standard tricks to make a sharable object / DLL.
>>> Now that's it: I need a link here to update my knowledge.
>> 
>> http://www.dwheeler.com/program-library/Program-Library-HOWTO/x36.html
>> http://people.redhat.com/drepper/dsohowto.pdf
>> 
>> In fact Drepper's whole page is a gold mine of detailed info on ELF.
>> http://people.redhat.com/~drepper/
>> 
>> In fact I'll make a wild guess....
>> 
>> If you really understood all the niches and corners of ELF, which is
>> quite a large and hairy domain, what you want is already in there
>> somewhere.
>> 
>>>> You still need the objdump tricks I mentioned to pull just the sections
>>>> you care about out.
>>> dito
>> 
>> info binutils
>> 
>> 
>>> What I wanted to tell you is,
>>> that you're completely right with the example of the Unix loader
>>> separating tasks by means of address space.
>>> 
>>> I have to look at a module as a task that takes messages and respond
>>> with messages. As in UML sequence charts.
>>> 
>>> What is the easiest way to implement a messaging system e.g. by macros
>>> for programmers that like to use function calls?
>> 
>> Make it simple to use, complex == more lines of code == programmer
>> mistakes.
>> 
>> The one we are using involves declaring and packing and unpacking
>> structs all over the place. Yuck! Tedious and error prone.
>> 
>> I itch to rewrite using a simple convention that looks like an ordinary
>> function declaration, definition and reference.
>> 
>> And then add a bit of Ruby code generation magic to generate a header
>> pulled in by the client and a header to be pulled in by the server. Oh,
>> and glue it together with a small, possibly entirely non-portable bit of
>> C that understands varargs to serialize the arguments across the
>> messaging interface.
>> 
>> I bet I can get a huge reduction in code size, much simpler, much more
>> reliable and better code.
>> 
>> 
>> John Carter Phone : (64)(3) 358 6639
>> Tait Electronics Fax : (64)(3) 359 4632
>> PO Box 1645 Christchurch Email : john.carter@tait.co.nz
>> New Zealand
>> 
>> Carter's Clarification of Murphy's Law.
>> 
>> "Things only ever go right so that they may go more spectacularly wrong 
>> later."
>> 
>>> From this principle, all of life and physics may be deduced.
>> 
>> 
>
>



John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand

Carter's Clarification of Murphy's Law.

"Things only ever go right so that they may go more spectacularly wrong later."

From this principle, all of life and physics may be deduced.


* Re: how to make code stay invariant
  2006-07-30 22:33                 ` John Carter
@ 2006-07-30 23:11                   ` John Carter
  2006-07-31 20:28                   ` Rolf Schumacher
  1 sibling, 0 replies; 16+ messages in thread
From: John Carter @ 2006-07-30 23:11 UTC (permalink / raw)
  To: Rolf Schumacher; +Cc: gcc-help

On Mon, 31 Jul 2006, John Carter wrote:

> Ok, so let's threat model that....

Let me distill that into much simpler terms....

Adding code to increase reliability is a bit like trying to put a fire
out by swamping it with petrol.

Yip, you can do it.

But generally it's not a Good Idea.


John Carter                             Phone : (64)(3) 358 6639
Tait Electronics                        Fax   : (64)(3) 359 4632
PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
New Zealand

Carter's Clarification of Murphy's Law.

"Things only ever go right so that they may go more spectacularly wrong later."

From this principle, all of life and physics may be deduced.


* Re: how to make code stay invariant
  2006-07-30 22:33                 ` John Carter
  2006-07-30 23:11                   ` John Carter
@ 2006-07-31 20:28                   ` Rolf Schumacher
  1 sibling, 0 replies; 16+ messages in thread
From: Rolf Schumacher @ 2006-07-31 20:28 UTC (permalink / raw)
  To: John Carter; +Cc: gcc-help

Ok, John. Well done.

I think I can follow your thoughts.
Let's try to put my figures into
"q is greater than apM(1-d)".

You can't lower p below a certain point,
and everything depends on a and d.
If I get a tool to generate the interfaces
I can lower a a lot.
If the interfaces are really simple and regular,
I get a better d for interface code
than for average code under test.
(d depends on the complexity of the code:
the simpler the code, the higher d becomes,
since it's harder for faults to go undetected in simple code)

And remember we're talking about well-tested code
at the highest safety level (IEC 61508 calls that SIL4),
so d is near 1: state of the art, the best you can think of
(at least that is what is claimed, and I work on it all day).

My guess is that a failure arising from p(1-d) in safety-critical code
has about the same magnitude as the probability that a bit
flips randomly in RAM or on disk and goes uncaught in an average PC.
It happens in both cases.
(We do something extra because we are required to do better.)

Besides undetected hardware faults during software production,
the question is: how many faults does a linker have?
Or a copy program, or the OS running the copy program?
What is the probability that these will affect my linking and copying?
The less we run such programs of unknown quality,
the better off we are.

The idea is: once I have positively tested (a binary object of) a module,
I'd like to rely on those tests for as long as possible.

And, John, do not forget the task:
"just make a small change to a big piece of software".
We can put that in numbers, too:

1 MB of code is not a big program, not even in embedded systems today.
A module may contribute 1 kB, because the majority of the 1 MB
is libraries and the like. Modules are small and simple.

So there's another 0.001 reduction to q.

I'm still convinced that it would be good to have
a checksum over the 0.999 MB of code that stays invariant,
as opposed to the 0.001 MB of changed code that may introduce as yet unknown errors.

One other point is also important:
with checksums I'm able to exclude handling errors
(outside the 0.001 that was deliberately changed),
such as accidentally picking up older versions, with their older failures, ...
SCCS and scripts prevent all these failures to a certain extent
(while introducing their own).
But if programmers are in a hurry - and they constantly are -
how would you estimate the probability that they override
an error message from a script by manual intervention?
How would you estimate the probability of undetected errors from that?
(You know the excuses: "Shit, I thought that ...")

If I have a checksum, I can be (almost) absolutely sure
it's the same thing as last time.

kind regards

Rolf

John Carter wrote:
> Ok, so let's threat model that....
>
> Let's say probability of bug per line of code is p.
>
> Let's say the number of bugs of the sort of bug you are trying to
> prevent (link time corruption bugs not found on initial test) is q.
>
> Let's say the probability of bug removal from ordinary code (by
> inspection, by test, by static tool etc.) is d
>
> So the number of bugs in the ordinary code (as written) is pN where N is
> the number of lines of code in the program
>
> The number of bugs after bug removal, including link-time / corruption
> bugs, is pN(1-d)+q
>
> Now you have introduced a further M lines of interface code. Assume 
> approximately
> the same bug rate, maybe less. Let's say the bug rate in the interface
> code is a*p where a is a number between 0.1 and 1
>
> So you now have (pN+apM)(1-d) == p(1+aM/N)N(1-d)
>
> So compare the two bug rates...
>
> pN(1-d)+q vs p(1+aM/N)N(1-d)
> or
> pN(1-d)+q vs pN(1-d)+apM(1-d)
>
> So whether you have gained anything from this activity depends on whether
> q is greater than apM(1-d)
>
> You will have to plug your own numbers into that.
>
> My guess is the answer is a resounding "No!" since q is so very small
> compared to apM(1-d).
>
>
> On Sat, 29 Jul 2006, Rolf Schumacher wrote:
>
>> Hi, John.
>>
>> for the first place I found a much simpler solution.
>> I checked it with gcc -fPIC, and then applied prelink.
>>
>> Consider a module m1 calling function o2 in module m2.
>> If we define an interface compilation unit for m2, say m2if,
>> that implements o2if (which just calls o2), and m1 calls
>> o2if instead of o2, then m1 stays invariant as long
>> as the interface is invariant. I can make as many "small changes"
>> to m2 as I like; the checksum of m1 (in memory and on disk)
>> stays invariant.
>>
>> in code now, prior to invariance:
>>
>> m1.c:
>> #include "m2.h"
>> int main(void){o2();}
>>
>> m2.h:
>> void o2();
>>
>> m2.c:
>> #include <stdio.h>
>> #include "m2.h"
>> void o2() {printf("hello world");}
>>
>> and now after changes for invariance:
>>
>> m1.c:
>> #include "m2if.h"
>> int main(void){o2if();}
>>
>> m2if.h:
>> void o2if();
>>
>> m2if.c:
>> #include "m2if.h"
>> #include "m2.h"
>> void o2if(){o2();}
>>
>> m2.h:
>> void o2();
>>
>> m2.c:
>> #include <stdio.h>
>> #include "m2.h"
>> void o2() {printf("hello world");}
>>
>> Conclusion:
>> Object code invariance is gained just by coding rules.
>> Introducing object code invariance is possible even for existing
>> software.
>> Benefit: all my module tests for m1 still apply in the new software
>> regardless of changes to m2. I don't have to repeat them.
>>
>> Remark:
>> To reduce the validation tests of the software product
>> as a whole against its requirements, I still need a reliable
>> impact analysis. Invariance doesn't help here.
>> Object code invariance gives evidence only of code integrity for
>> the invariant part of the software, nothing more.
>> However, that's still a lot.
>>
>> I'll do that!
>>
>> Thanks for your great help and all the philosophical hints.
>> I wouldn't have gotten there without them, even if the resulting
>> solution is that simple.
>>
>> kind regards
>>
>> Rolf
>>
>> John Carter wrote:
>>> Hmm, you probably should be scanning Miller's handy...
>>> http://www.dwheeler.com/essays/high-assurance-floss.html
>>>
>>> High Assurance (for Security or Safety) and Free-Libre / Open Source
>>> Software (FLOSS)... with Lots on Formal Methods
>>>
>>>
>>> On Wed, 26 Jul 2006, Rolf Schumacher wrote:
>>>
>>>>> Pretty rare, but they happen.
>>>> We just had to recall projects in an expensive way
>>>> upon a difference in gcc compiling for SUN
>>>> and for Intel. (const in parameters)
>>>> Debugging was done on a SUN, delivery was for Intel.
>>>
>>> Test like you fly, fly what you tested...
>>>
>>> But hmm, you said debug not test... So I think there is more to that
>>> issue than meets the eye...
>>>
>>>> In safety-critical systems we have to demonstrate (!) 10**-9.
>>>> For example, systems in a nuclear power plant
>>>> have to be safe to 10**-13 (afaik). They are not allowed to add
>>>> more risk.
>>>> You have to have risk-reduction technologies because you can't
>>>> reach those figures with software.
>>>
>>> I'm reminded of the Bad Old Days when there were MilSpec computers.
>>>
>>> Until they realized that the sheer weight of consumer COTS products
>>> meant that what was available from the corner store was...
>>> * Way way cheaper.
>>> * Way way faster.
>>> * And much more reliable!
>>>
>>> Happened again with handheld GPS during the Gulf War. The COTS /
>>> Consumer GPS's were just so much better than the MilSpec ones (even 
>>> with
>>> the deliberate signal fuzzing!!) that they gave up and used the COTS.
>>>
>>> The other thought that comes to mind is a variant of a very old 
>>> joke....
>>>
>>> Patient to Doctor, "Doctor! Doctor! I need to be incredibly hugely
>>> impossibly painfully costly reliable to do this."
>>>
>>> Doctor, "Well don't do that then."
>>>
>>>> Just the fact that you can think of an error creates the
>>>> responsibility to give an accepted figure for it: 1. HAZOP, 2. FMEA,
>>>> at least FTA. You do not have any statistics; the error need not ever
>>>> have actually occurred.
>>>
>>> Wow! That is really Amazing! You are _so_ deep in the Dilbert Zone! Do
>>> you _ever_ see sunlight there?
>>>
>>> http://www.dilbert.com/comics/dilbert/archive/dilbert-20060724.html
>>>
>>>>> Some (targets/versions) of the GCC linker do relaxation passes. ie.
>>>>> Change long jumps to short jumps, change long references to short
>>>>> offsets. And since the size of the code has shrunk, they do that 
>>>>> again,
>>>>> and again until it converges.
>>>> Can I switch that off?
>>>
>>> Only applies to very few CPUs; I don't know which one you are using. I
>>> met it on the HC12. Search "info gcc" for "relax".
>>>
>>>>> Basically you want each module to be a DLL/sharable object so the 
>>>>> linker
>>>>> does the absolute minimum of fix ups.
>>>>>
>>>>> You also need a strict acyclic dependency graph between the sharable
>>>>> objects and then link each layer with lower layers.
>>>>>
>>>>> Follow the standard tricks to make a sharable object / DLL.
>>>> Now that's it: I need a link here to update my knowledge.
>>>
>>> http://www.dwheeler.com/program-library/Program-Library-HOWTO/x36.html
>>> http://people.redhat.com/drepper/dsohowto.pdf
>>>
>>> In fact Drepper's whole page is a gold mine of detailed info on ELF.
>>> http://people.redhat.com/~drepper/
>>>
>>> In fact I'll make a wild guess....
>>>
>>> If you really understood all the niches and corners of ELF, which is
>>> quite a large and hairy domain, what you want is already in there
>>> somewhere.
>>>
>>>>> You still need the objdump tricks I mentioned to pull just the 
>>>>> sections
>>>>> you care about out.
>>>> dito
>>>
>>> info binutils
>>>
>>>
>>>> What I wanted to tell you is that you're completely right
>>>> about the example of the Unix loader
>>>> separating tasks by means of address space.
>>>>
>>>> I have to look at a module as a task that takes messages and respond
>>>> with messages. As in UML sequence charts.
>>>>
>>>> What is the easiest way to implement a messaging system e.g. by macros
>>>> for programmers that like to use function calls?
>>>
>>> Make it simple to use, complex == more lines of code == programmer
>>> mistakes.
>>>
>>> The one we are using involves declaring and packing and unpacking
>>> structs all over the place. Yuck! Tedious and error prone.
>>>
>>> I itch to rewrite using a simple convention that looks like an ordinary
>>> function declaration, definition and reference.
>>>
>>> And then add a bit of Ruby code generation magic to generate a header
>>> pulled in by the client and a header to be pulled in by the server. Oh,
>>> and glue it together with a small, possibly entirely non-portable 
>>> bit of
>>> C that understands varargs to serialize the arguments across the
>>> messaging interface.
>>>
>>> I bet I can get a huge reduction in code size, much simpler, much more
>>> reliable and better code.
>>>
>>>
>>> John Carter                             Phone : (64)(3) 358 6639
>>> Tait Electronics                        Fax   : (64)(3) 359 4632
>>> PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
>>> New Zealand
>>>
>>> Carter's Clarification of Murphy's Law.
>>>
>>> "Things only ever go right so that they may go more spectacularly 
>>> wrong later."
>>>
>>> From this principle, all of life and physics may be deduced.
>>>
>>>
>>
>>
>
>
>
> John Carter                             Phone : (64)(3) 358 6639
> Tait Electronics                        Fax   : (64)(3) 359 4632
> PO Box 1645 Christchurch                Email : john.carter@tait.co.nz
> New Zealand
>
> Carter's Clarification of Murphy's Law.
>
> "Things only ever go right so that they may go more spectacularly 
> wrong later."
>
> From this principle, all of life and physics may be deduced.
>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2006-07-31 20:28 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-07-16 23:06 how to make code stay invariant Rolf Schumacher
2006-07-21  0:44 ` John Carter
2006-07-23  5:22   ` Rolf Schumacher
2006-07-23 22:05     ` John Carter
2006-07-24 12:19       ` Ingo Krabbe
2006-07-24 22:39         ` Rolf Schumacher
2006-07-25  4:47           ` Ingo Krabbe
2006-07-24 22:38       ` Rolf Schumacher
2006-07-24 23:22         ` John Carter
2006-07-25 22:16           ` Rolf Schumacher
2006-07-26  6:47             ` John Carter
2006-07-29 18:50               ` Rolf Schumacher
2006-07-30 22:33                 ` John Carter
2006-07-30 23:11                   ` John Carter
2006-07-31 20:28                   ` Rolf Schumacher
2006-07-28 23:35             ` Rolf Schumacher
