From: "John W. Stevens"
To: craig@jcb-sc.com
Cc: gcc@gcc.gnu.org
Subject: Re: type based aliasing again
Date: Thu, 30 Sep 1999 18:02:00 -0000
Message-ID: <199909212246.QAA03752@basho.fc.hp.com>
References: <19990921182843.25292.qmail@deer>

[Snippage abounds, mostly personal attacks, bald assertions and
repetitions, but some other things may have been inadvertently snipped.]

> >> >I believe this is a false premise.  Standards change.
> >>
> >> Not in this respect, apparently.
> >
> >As I understand it, we are talking about a change that moves a construct
> >from "undefined" to "incorrect".
>
> No, read the standard, then come back and tell us what the difference
               ^^^^^^^^

This is, indeed, the point I am trying to make: there exists a body of
code that was written before whatever version of the standard you are
referring to by the use of the word "standard".

> is between "undefined" and "incorrect" in the standard.

According to the second edition of "The C Programming Language" (which
was based on the draft ANSI C standard), casting pointers from one type
to another is neither incorrect nor undefined; it is "implementation
dependent", with the warnings being related to alignment problems.

> If the standard does not define the behavior of a construct, then
> the construct *is* undefined.

In the first edition of "The C Programming Language", this appears to
be undefined.

> My impression is that's what
> was the case with K&R C.

That is my impression as well, though of course this is difficult to
pinpoint precisely, because the language of the first edition of that
book was not written as a language standard.

> I assume that for the purposes of this
> discussion; else -traditional should imply -fno-alias-analysis.

I am not familiar with any of the discussion re: -traditional.  If
-traditional means "K&R C", then perhaps this switch should imply the
use of -fno-alias-analysis, under the assumption that code being
compiled under this flag will probably contain undefined or
implementation dependent constructs.

> Right, in the sense that you could not be sent to prison simply because
> your code performed that construct.  Same for division by zero,
> square root of a negative number, dereferencing a NULL pointer, etc.

Excuse me, but the two classes are not analogous.  Division by zero
(etc.) is a *mathematically* undefined operation, and on some
processors such an action is grounds for raising a run time exception.

The act of doing:

    {
        auto struct sockaddr     sockStructure;
        auto struct sockaddr_in *inetSockStructurePointer;

        /* Get a pointer to the internet socket structure. */
        inetSockStructurePointer = (struct sockaddr_in *) &sockStructure;
    }

is not the same class of "undefined" (it is not a mathematically
undefined operation).  It is, in short, a very common operation.  One
that now seems to be inadvisable, at best.

> I don't recall K&R I ever using language like "defined as illegal",
> but, for all intents and purposes, that is *exactly* the same
> as "undefined", within the context of a manual describing a language
> like C.

I disagree with this statement.  And, in fact, both K and R wrote code
that violated the new standard.  There is a fair body of work written
by K&R still floating around out there, some of which was released into
the public domain, that will demonstrate this.
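To return to the cast example above and make the aliasing question
concrete, here is the disputed class of construct boiled down to its
simplest form, next to an alternative that is well defined under the
ISO C aliasing rules.  This is a sketch only; the function and variable
names are invented for illustration:

    #include <string.h>

    long float_bits(float f)
    {
        long bits = 0;

        /*
         * The disputed construct: read the float object through an
         * lvalue of type long.  ISO C leaves this undefined, which is
         * precisely the license that type based alias analysis relies
         * on -- the optimizer may reorder or discard these accesses:
         *
         *     bits = *(long *) &f;
         */

        /* A well defined alternative: copy the object representation
           instead of aliasing the object.  (This sketch assumes
           sizeof(float) <= sizeof(long) on the target; real code
           would have to check that.) */
        memcpy(&bits, &f, sizeof f);

        return bits;
    }

Whether, and how loudly, the compiler should warn about the
commented-out line is, of course, exactly what this thread is arguing
about.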
> >Excuse, but the point I'm making is: there isn't necessarily a bug
> >in *either* of them.  It doesn't have to be a yes/no, black/white,
> >binary kind of thing.
>
> But, in this case, it is *exactly* that.  The user code is undefined
> according to ISO C.  Period.  "Undefined".

Thank you.  That was my point: it isn't a bug, it is undefined.

The whole discussion really lies in just this: "bug" is an inflammatory
term that assigns blame.  Its use, especially in a public forum or in
an exchange with the user community, is detrimental to both your stated
aims and the good of the project (the GNU project) as a whole.

> How many times must I, and others, say that before you'll a) believe
> us or b) read the ISO C standard for yourself?

I've read it.  I've reread the relevant portions.  Your interpretation
disagrees with mine.  This is not an issue of belief; it is an issue of
interpretation.

Incidentally, nobody but you has told me that this code constitutes a
"bug", and in fact, I've received some private email indicating that
some people agree with me: IOW, this is not a bug in the user's
program, or in GCC; it is a *mismatch* between tool and input.

IOW, when you write implementation dependent code, you create a
relationship between code and tool.  This relationship must be taken
into account if you choose to change the tools you use.

> >And where we differ is in your use of the term: "bug".  I wouldn't consider
> >either GCC, or your program, to be buggy.  The act of mixing
> >incompatible pieces is probably a mistake, though.
>
> Though I've already corrected you several times,

Excuse me, but you have yet to correct me.  In fact, you admit above
that this is not a bug; it is simply either undefined or implementation
dependent.

> I'll also point
> out that when a mistake is the act of mixing Y and Z, and Z is a
> specification-conforming off-the-shelf part

Which GCC is not.

> while Y is fully under the
> control of the person doing the mixing (i.e. user code), then the
> mistake is on the part of the person who mixed the two, if not
> the person who created Y.

While your explanation breaks down on point one, I agree that the
mistake lies in the person's actions, not in Y or Z.

But then again, I've already said this multiple times.  I'm happy to
see that you finally understand this, and agree with me.

;->

> It is never on the part of those who created Z.

I never (*NEVER*, *NEVER*) said that it was.  *NEVER*.  You will not
find one shred, segment or phrase that I wrote that assigns blame to
GCC for this.

What you will find is generalized support for what has been done,
except that I wholeheartedly agree with the suggestion that warning
code be added, where reasonable and feasible, to let the user know what
GCC is thinking.

> >And I don't believe that anybody is actually advocating this.
>
> Actually, yes, that's what has been advocated, repeatedly, by at
> least two or three people -- that the responsibility for making
> buggy code work is on GCC's shoulders in this case.

Again, that word . . .

I have seen some mail that suggests that backwards compatibility should
be a high priority.  I've never seen *any* mail that suggests that GCC
should be responsible for making buggy code work.

When I do:

    auto char *tp;              /* never initialized . . . */
    auto char  bf[32];

    strcpy(bf, tp);             /* . . . yet copied from here */

I don't expect GCC to make this buggy code work.  I *appreciate* the
warnings I *can* get re: uninitialized variables.  But that is not the
same as saying you *SHOULD* give me such warnings.
I will go on record, though, as saying just that: if possible, feasible
and reasonable, such warnings should be given.

> Then don't throw them in -- teach them.

Well, yes, that is what I was advocating.  Was I being unclear?

> GCC is not a teaching compiler.

I disagree.  I don't think such strong classifications are reasonable.
In fact, I know of many situations where GCC is, indeed, used in a
teaching environment.

> >People that *have* to have this environment to learn should not,
> >at the very least, be allowed to play with anything that might
> >injure or kill others (learning not to point a loaded gun at
> >somebody should not be something you learn by trial and error).
>
> GCC is not a loaded gun,

I did not say it was.  GCC is a compiler suite.  I was drawing an
analogy.

Allow me to point out that mission critical software failures can
indeed cost lives.  I can, if you need me to, point you to one of the
more famous cases of a software bug killing people . . . And C is
definitely a language that allows, indeed encourages, people to do
things "outside the box".

> That should not be the responsibility of GCC,

I'm not saying that it should be the *responsibility* of GCC.  I'm
saying that backwards compatibility, even if only for a limited period
of time, is a "good thing".

> But, we're now told, it *is* to be thought of as the responsibility
> of GCC.  And I'm told not to object to that anymore.

Actually, you were *requested*, quite politely, not to exaggerate so
much.

You seem unable to view the situation as anything other than
black-or-white, my-way-or-yours, right-or-wrong.  There is a compromise
position that is superior to either extreme, and it seems that RMS has
hit on that compromise (or, at least, come darn close to it).  No
surprise there, as he has been doing this a very long time.

Perhaps when you get some time, gain some distance, get some
perspective, you will see this.  Maybe not.

> Which is exactly what GCC provided in 2.95, but that's not good
> enough, apparently.

It is for me.  I don't let my code fall into an "unmaintained state".
I will still modify/fix/change the very first free software package I
ever released, though I freely admit that due to email address changes
and time, it is highly unlikely that some of my original users can in
fact get hold of me . . .

:-(

> Exactly.  And GCC will continue to be plagued by arguments over
> that line, which is fuzzy at best, and will be disagreed about among
> the hundreds of programmers discussing the issue at any one time.

Which is why RMS was seeking a compromise, not laying out a dictate.

> There *is* a bright line available: ISO C, for example.  Too bad GCC
> chooses to ignore that,

This is an exaggeration.  GCC already is relatively ISO C compliant; it
does not ignore the standard.

Please try not to use such absolute terms . . . especially since, in
doing so, you actually damage your own cause.  If someone new to GCC
were to read this, they might well conclude that there is no benefit to
writing ISO C compliant code when their intention is to feed that code
to GCC . . . which, as you can see, will cause further problems of the
type you seem to be trying to avoid.

> >Agreed.  But GCC extensions do not make GCC transparent.  They
> >simply add interface.
>
> If that's the case, they can be done without adding extensions -- just
> by adding interfaces, since the ISO C language has the concept of
> procedure (function) interfaces.

Excuse me, but the extensions add capabilities not supported by the C
language.
The intention in using the terminology above was to indicate, by a kind
of shorthand, that the implementation details of something like:

    __builtin_apply

are best kept secret by adding this capability, along with its
associated interface, to the language and compiler.  This allows for a
clean separation between the responsibilities of the compiler (which
are to translate high level language constructs into the relevant
assembly language) and the responsibilities of the program designer.

The addition of "asm" support to the compiler is a much bigger
"transparency issue" than the addition of extensions.  Machine
dependencies can and should be hidden by the high level language
design.

> The number counts too.

The effect of number is very small (not zero, but very small).  The
following bit of code doesn't suffer from the same degradation curve as
you'd expect in the "physical world":

    #define NUMBER_OF_ARRAY_ELEMENTS 100

    {
        auto int         i;
        auto DemoStruct *array[NUMBER_OF_ARRAY_ELEMENTS];

        /* DemoStruct stands in for any application structure type;
           calloc() is declared in <stdlib.h>. */
        for (i = 0; i < NUMBER_OF_ARRAY_ELEMENTS; i++)
            array[i] = calloc(1, sizeof(DemoStruct));
    }

This code doesn't become noticeably more complex if you change the
definition of NUMBER_OF_ARRAY_ELEMENTS to 1000.  Obviously, it becomes
more complex when you add error handling and an error handling control
flow path . . . but that is, as I am sure you will agree, independent
of the number of objects, yes?

> If you depend on 2 billion transistors working
> correctly instead of 2 million, your risk of failure goes up on
> that basis alone.

Your analogy . . . isn't.  You're comparing hardware manufacturing to
software manufacturing.  The two are not so totally separate that there
is no relationship, but instantiating two million instances of the same
class does not increase the complexity of a program in the same ratio
as manufacturing a circuit that uses two million transistors does.

In hardware manufacture, a piece of dirt could damage one of your two
million transistors, but in software design this danger is much reduced
and is, in general, considered to be quite low (if not negligible).  If
it does become a problem, then you have a *HARDWARE* problem, not a
software problem (IOW, replace your damaged memory, adjust your
processor clock speed, put better fans in the box, replace your hard
drive, what have you).

> It doesn't matter if all of them are exactly the
> same, when assessing that particular dimension.

The problem with attempting to apply an understanding of physical
engineering to software engineering is that you may fail to question
some of your assumptions.  And while I am not saying you have a degree
in engineering, it does appear to be the case that you have failed to
question some of your assumptions.

> But, yes, generally, the more *distinct* components, the more of a
> problem you have as well, because understanding how they interact,
> or might interact, gets harder.  (Especially if some are out of
> spec, e.g. buggy C code.)

The more distinct components, the more code.  And more code means (as a
general rule) more complexity.

> >Agreed.  But GCC extensions are, indeed, published black-box behavioral
> >descriptions.
>
> Not by my standards they aren't

The extensions are documented.  That documentation is freely available.
That documentation comes with every copy of GCC.

The ISO C standard does *NOT* come with every copy of GCC.  In some
respects, the GCC extensions are more public, and better "published",
than the ISO C standard is, from the standpoint of a GCC user.

:-)
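As an example of what that documentation describes, here is a sketch of
what use of the __builtin_apply extension mentioned above looks like.
It is illustrative only: the function names are invented, it assumes
the __builtin_apply_args/__builtin_apply/__builtin_return interface as
given in the GCC manual of this era, and the 64 is merely a guessed
upper bound on the size of the argument block:

    #include <stdio.h>

    static void target(int a, int b)
    {
        printf("%d %d\n", a, b);
    }

    /* Forward our own arguments, whatever they are, on to target(),
       then return whatever target() returned -- without ever naming
       the arguments individually. */
    static void forwarder(int a, int b)
    {
        void *args   = __builtin_apply_args();
        void *result = __builtin_apply((void (*)()) target, args, 64);

        __builtin_return(result);
    }

    int main(void)
    {
        forwarder(1, 2);    /* prints "1 2" via target() */
        return 0;
    }

The point being: none of the stack layout machinery underneath this
leaks into the user's code, which is exactly the separation of
responsibilities argued for above.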
> >Disagree.  Instantiation with parameters does not increase the
> >transparency of the instantiated object.
>
> You're ignoring the effect of such instantiation on the maintenance
         ^^^^^^^^
> requirements of the *code* that requires those parameters, and the
> ease with which bugs in that code can be assessed.

The discussion was re: transparency, not complexity.  And I do not
ignore it.

It is true that, by reducing functionality, you *can* reduce
complexity.  And I am not ignoring maintenance costs.  Obviously, one
way to reduce maintenance costs is to reduce functionality (thereby
reducing the amount of code, thereby reducing complexity).

The question then becomes one of "necessity" (or, if you prefer,
"design" or "responsibility"), not one of transparency or complexity.
If an interface-and-implementation is definable as "unnecessary", you'd
have a clear win by eliminating it.  But that definition is not solely
an engineering decision.  Which takes us out of the realm of pure
engineering, and back to the (superset) realm of group dynamics *plus*.

Which, incidentally, seems to be the real failure here: you seem to
view this decision as one that should be made purely and solely from an
engineering standpoint, when in reality this decision must be made
within the context of a wider and much larger system.

> Witness this whole thread, which has been repeated several times in
> the past.  *All* because of bugs in user code.

I disagree.  The thread has been repeated several times due to a
failure in communications.  Some of the GCC maintainers have been
talking/thinking solely in an "engineering" mode, when in reality this
kind of decision transcends mere engineering.

> *All* of which could
> have been avoided had the clarity of responsibility been understood
> by all parties in the first place.  The result would be that,
> *today*, there'd be *no* code widely deployed with this bug, because
> we'd have collectively devoted our resources to *finding* and *fixing*
> it, rather than *arguing* about just how we should *accommodate* it.

Your statement is in error.  In point of fact, what would have happened
is that an indeterminate set of users would have written GCC off as a
"bad show", and gone elsewhere.

While this prospect may not bother you, I submit to you that it should.

The words "feasible, reasonable and practical" are indeed fuzzy, but
this is an advantage, not a disadvantage, as it makes the system as a
whole more dynamic, more adaptive.

> Viewing the accommodating of a bug as equivalent to passing the number of
> lines for `tail' to display strikes me as rather unwise.

Probably because that is not what I was talking about.

> It's certainly
> not a view a true engineer would take of an entire system.

Please try to avoid jumping topics.  The topic was re: complexity
control, not "accommodating bugs".

As for "the view a true engineer would take", I submit to you that this
is a statement of belief: in short, this is not a statement that a true
engineer would make.

;-> ;-> ;-> ;->

An engineer stands at the junction where math, science, politics,
economics and group dynamics collide.  Decisions must be made within
the context of the system as a whole, and not solely within the context
of a single, small subsystem.

> That's a stunning statement, since it implies that even when the
> switch no longer has the effect necessary for the *system* to
> work, the component being given the switch cannot reject it so
> as to warn the designer, or the other component, that something
> is no longer working.

Yes.  Indeed.
You are aware that this is already common practice in GNU systems, are
you not?  Within 30 seconds I can dig up documentation for three
different GNU programs that describes a switch as being "accepted for
compatibility reasons", but documented as doing nothing, or as being
unnecessary.

> It may be reasonable.  It *is* unnecessary, by definition: nobody
> *needs* GCC to compile code with this bug,

You are incorrect.

Consider the economic impact of being required to change a body of code
that is more than 1 million lines . . . Then multiply that by the total
number of such bodies of code.

> since, as long as GCC
> compiles ISO-C-conforming code correctly (which *is* presumed to
> be necessary), any code believed to contain this bug can be
> fixed,

Have you ever managed a project?  Believe it or not, it is sometimes
cheaper to spend 10 million dollars over six years than to get exactly
the same thing for three million dollars all at once.

[Accounting isn't black magic, it just sometimes looks like it!]

> That's *exactly* what some people have suggested.

Again, I have not seen any mail on the list that I would interpret in
this fashion.

> Look, it *should* be this simple:

Another statement of belief?  I thought you were the proponent of pure
engineering thought?

> What *has* been happening is the maintainer tracks down the bug,
> sees it's the result of undefined behavior, but reports it as
> a bug to the GCC lists anyway, because "that's not the way GCC
> worked last week".

And this is a valid action on the part of the user.

It is equally valid for the GCC maintainers to take into account all of
the relevant factors, then make a decision as to what response (ranging
from "you gotta fix your code" to "oops, sorry, we'll fix GCC so it
doesn't do that") they will give.

But the response given must arise from the philosophical basis for the
project: IOW, the GNU project has a philosophical basis that should
guide the decision making process.

Which was, in short, what RMS was doing.

> In other words, the whole discussion is about making GCC's internal
> processing more "open", on the assumption it already *is* "open".

I don't equate the above to making GCC "open".  In fact, changing
implementation dependent results from one version of the compiler to
the next does more to make GCC transparent than acceding to a user's
request that the change *not* be made.

> Please go back and read the entire thread, as well as the ISO C
> standard from front to back, before responding to this email with
> further arguments against these basic points.

You would be well advised to go back and read more than just the
current version of the ISO C standard.  Start with K&R's original book,
then read the various "draft" versions of the standard, then check the
date on which the "finished" version of the standard was released and
estimate in your head just how much code was written before the
"finished" version was released.

It may be worth the effort if you also attempt to estimate the *cost*
of changing that much code.

> >This is your opinion.  You are free to state it.  The GCC user (as by
> >now is abundantly clear) does not share your opinion on what "matters".
>
> Of course not.

:-(

> To those GCC users, what matters is a free ride --

Again, an unfair characterization.

> >A compromise is in order.  A good engineer cares about what his
> >customers/users care about.
>
> False.

I would be shocked by this, but at this point, I am simply saddened by
this response, not shocked.
> A good engineer cares about the correct results being obtained.

"Correct" . . . is a slippery term.  In some respects, that is a
characterization that could be applied to the user: he just cares about
getting correct results, and the new version of GCC didn't give him
correct results, so he is pissed off.

> Now, if you want to know who *does* care about what his customers/users
> care about -- that's called a "marketing person", or a "product manager".

If your engineers do not care about the customer, then your engineers
are a waste of money.  The only reason for having engineers is to meet
the needs of the customer.

In short: a software craftsperson is analogous to a counter person
working at McDonald's . . . you are only useful so long as you serve
your customer.

> Ask the engineers who worked on those dams in China.  What their
> customers/users -- the government -- cared about was saving money,
> etc.  The result?  The largest technological disaster, in terms of
> lives lost, in this century.

Again, physical engineering is not equivalent to software engineering.

Again, decisions must take into account all factors.  That includes,
but is *NOT* limited to, "engineering correctness".  Limiting a
decision to "economic correctness" is just as counterproductive as
limiting a decision to "engineering correctness".  Both factors (and
more!) must be taken into account, and given weight.

> Now, had they cared first and foremost about doing the *right* thing,

The "right thing" takes into account such factors as money.  It does
not depend solely on money, but it must take money into account.

> But if you look at every technological disaster this century, pretty
> much in every instance you can find a marketing type working in an
> engineering role, i.e. doing just what *you* say they should be doing --
> accounting first for what someone "wants", rather than what the
> realities of engineering *require*.

And there are many examples of failed companies that had engineers in
the marketing department who cared not a whit about the customer.  Or
who, worse yet, treated the marketing department as the enemy, instead
of as a partner, a valued and respected part of the organization.

And, again, *want* and *require* must be judged within the larger
context.  There are *wants* that outweigh petty engineering
"requirements", because in the context of the larger picture that
*want* is a requirement, while that "engineering requirement" is simply
a difficulty that can be worked around.

> That "one" therefore is incapable of distinguishing between the risks
> we all take by living and the specific risks introduced by tolerating
> incompetent design strategies.

Consider the fact that no system can be proven "correct".  Consider,
also, that most systems are built from compromises.  Consider the
effect that these compromises have on the system.

> Therefore, that "one" should be disallowed from working on *any* system
> of any import in our society.

That "one" pretty much describes the vast majority of the people who
control systems design and implementation.  Like it or not, compromises
are a fact of life.  These compromises may reduce the *engineering*
correctness of a system, but they do not invalidate that system.  They
do, in general, make most systems more failure prone than they
(theoretically) need to be, but as that theoretical basis is "unreal",
it is not a valid point to argue from.
at > >the cost of making the *whole* system *MUCH* more complex. > > False. No, it's true. This is so true, that it has become a working principle for engineers: reusable design patterns. The GCC extensions arise out of repeatedly solving the same types of problems. Eventually, this fact is recognized, an analysis is done, and a generically applicable solution is constructed and installed. This reduces complexity enourmously. Think: library (generic implmentations of reusable design patterns). __builin_apply is an extension that greatly reduces the complexity of the system as a whole. > In fact, the code GCC compiled would get *easier* to maintain > because it'd conform *better* to the *one* widely known standard > for the C language: ISO C. You contradict yourself. ;-> First you complain that nobody reads or understands the ISO C standard, then you claim that it is widely known. > >This statement seems to be based on the false premise that adding > >interface, makes an object more transparent (it does not). > > Your statement *is* based on the false premise that adding > an interface to cause one component to be more complicated just > to accommodate another component being out of spec is in no > way different than adding an interface for other, legitimate reasons. Nope. I recognize the difference, though I disagree with your use of the phrase "legitimate". Adding interface to *an* object to accomodate a *set* of out-of-spec components is both legitimate, and indeed, it reduces the complexity of the system as a whole. Changing 200 lines in GCC to avoid changing 2 million lines of application/OS source code may indeed constitute a legitimate change. > If you can't distinguish between those cases, you'll *never* be > able to prevent all sorts of new interfaces being added to > component A to accommodate bugs in component B. It can be prevented in exactly the way it has always been prevented: through the group dynamic. If a small number of people want an accomodation, it will probably not occur. If a large number of people want an accomodation, it probably will (and should) occur. > *That's* the whole > point of my objection -- while, in any one instance, it might > indeed seem "helpful" to do that, there's unlikely to be the > backbone necessary to prevent that one instance from turning into > a flood. Your fears are ungrounded. The group dynamic will (and HAS!) prevent this from occurring. > (Go back and read the archives, especially Linus Torvald's posts of months > ago, where he basically says, as he has said so often, "why not > accommodate *my* needs, when you've accommodated X, Y, and Z?". Excuse, but your paraphrase is not entirely accurate. Linus argued that the needs of Linux are symptomatic of a much larger body of code. And yes, I've been following this list for quite some time. I do not need to go back to the archives: I was there, and read it as it came in. > You would, if you paid attention, > quickly learn the lesson: never accommodate anyone's "needs", just > write to the pertinent standards/specifications...or get into > marketing.) I pay attention. And I do accomodate needs. And I am not a marketeer. > Well, it could be said we already *had* the dialog vis-a-vis > this construct: user says "I want to program in C"; GCC says > "I compile ISO C to run pretty fast"; users says "great, I'll > use it". The only problem is: that is a mis-statement, as "C" is a moving target. 
The problem (as is almost always the case) is one of miscommunication:
in most cases the user isn't saying "I want to program in pure,
latest-available-ISO-C-standard C"; they are saying "I want a compiler
that generates correct code from my source", which turns out to be
"de facto standard C".

If it makes it any clearer, think of C, like every other language, as
the result of group consensus.  A standards document may say one thing,
but the group may say something else.  Which is why RMS believes in
paying attention to the standard, but not in being slavishly bound to
it.

> >The success of the GUI simply proves what has been known for a
> >very long time: dialog is better than monologue, and monologue
> >is better than no communications what-so-ever.
>
> Tell that to blind users.  And I don't mean that to be flippant:
> having studied GUIs vs. CLIs, and the issues arising therefrom,
> I cannot possibly agree with your flat-out statement as used
> in this context.

The GUI is an *instance* of a user interface design pattern based on
the concept of dialog.  It isn't the *GRAPHICAL* part of GUI that makes
GUIs preferable for most people to CLIs; it is the *STATIC* *DIALOG*
part that is important.  Dialog based UIs do not need to be graphical
(dialog does not mean "GUI dialog box"; it refers to a language based
information exchange).

And in fact, my statement stands as correct: dialog is always better
than monologue, and monologue is always better than no communication
whatsoever.  A blind person may require that the dialog be audio
instead of written, but the design pattern is still correct.  I've
written handicapped accessible UI systems.  The dialog UI design
pattern is so basic, it can be found in nearly every successful
software system.

> The future will prove me right in
> this case as well: the warning will be assailed by many as inadequate,
> incorrect, etc.

Darn near every warning, error or notice has been assailed at one time
or another.  This does not mean that we should not have warnings,
errors or notices.

If a large body of users *DOES* complain, though, that *DOES* indicate
we need to fix/modify what they are complaining about.

> Somewhere, somehow, responsibility for these bugs has to *stop*.
> The buck must stop somewhere.  In this case, since the bug is
> in user code, it does *not* stop with GCC.

OK, I cannot resist doing what you do to me, so right back at ya: I've
corrected you multiple times.  When will you read the language
definitions and understand this?  It isn't a bug if it was 1) undefined
or 2) implementation dependent.  It is a tool vs. input *MISMATCH*
(re: a Phillips head screw and a flat blade screwdriver).

The responsibility lies with the person misusing the tool.  Surely
you've read those warnings about "this product not intended to be
inhaled", or some such?

Accommodating the knowledgeable user who is, unfortunately, caught in
the situation of being forced to use the new tool with old code
(because, after all, the GCC maintainers do not maintain old versions
of the compiler) is not only reasonable, it is a requirement.

> The job of a compiler is not to communicate: it is to *compile*.

The *PRIMARY* job of the compiler is to translate what the user says
into something the machine can understand.  As with any translation
from one language to another, the mapping isn't one to one.
Misunderstandings can and will occur; therefore the compiler *MUST*
communicate what it is thinking to the user, so that the user can catch
and fix these misunderstandings.
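To give one concrete instance of that dialog (a sketch only; the exact
flag and wording of the diagnostic vary between compiler versions):
given

    int main(void)
    {
        auto int n;     /* never given a value . . . */

        return n;       /* . . . but used here anyway */
    }

a compiler invoked with warnings enabled (say, gcc -Wall -O) can tell
the user something like "warning: `n' might be used uninitialized in
this function".  The compiler said what it was thinking, and the
misunderstanding gets caught at compile time instead of at run time.
That is the whole argument in miniature.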
> But when you say the most *important* facet is to talk, you support
> those who choose to have their software talk, even when the result
> is the wrong thing (files lost, lives lost, whatever).

No, I don't support that.  The two do not have to be related.  In
short, *BAD* use of dialog does not constitute a valid reason for not
using dialog.  It merely indicates that you should fix your program:
it's buggy!

;->

> >Warnings should be issued *even* *if* adding them increases internal
> >complexity of the object.
>
> No.

Yes.  Increasing the complexity of a single object in order to reduce
the complexity of the entire system is good engineering practice.

> I wasn't talking about an object with no interface.  I was talking
> about one that does what it is supposed to do, and no more.

Yes.  And part of what "it is supposed to do" is communicate with the
user.

> crash?

Certainly that would be a warning to the developer . . . but a very
poor communication technique, unless the crash (core dump, anyone?)
comes with information specifying *WHY* the crash occurred.

> Print a message to stderr?

This may indeed be a valid response for certain kinds of programs.
What, you don't like logging?  You never use Unix?  Or you do, but you
never read the log files to check for intrusion *WARNINGS* from your
system?

;->

How do you keep your systems secure, then?

> Which one of these
> warns the *caller* that it might not mean what it says?

But, in response, consider what you wrote: how can a module warn that
its warnings may not be valid?

How about this: "This input may be invalid" . . . catch the word "may"
there.  To the user, this should imply that the compiler sees the
construct in question as ambiguous.

> False on all points.

Since it is this object (the compiler) that defines the translation, it
is the compiler's responsibility to communicate this information, when
appropriate, back to the user.

> It is *not* the responsibility of the *compiler*
> to define proper-and-appropriate.

"Implementation defined".

> To the extent the *product* of
> which it is a part does that, the knowledge can be communicated
> via documentation;

Making the system much, much more complex.  Putting this information
solely in the documentation, while the *enforcement* responsibility
lies with the compiler, creates a huge and vastly complex relationship
between two objects.  You talk about complexity, then turn around and
want to vastly increase it.

Surely in your experience you've come across documentation that
disagrees with the actual code?

> via a tool that analyses code and spots problems
> (issuing warnings); or similar.

Which, again, vastly increases complexity.  You are talking about
implementing two objects that have overlapping responsibilities.

> What I'm actually suggesting is that cc1 should not produce diagnostics,
> that a separately maintained program should.

It's been tried.  It was abandoned as a bad design.

John S.