public inbox for gcc@gcc.gnu.org
* Re: Ada subtypes and base types
@ 2006-02-21 20:33 Richard Kenner
  2006-02-21 20:54 ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Richard Kenner @ 2006-02-21 20:33 UTC (permalink / raw)
  To: law; +Cc: gcc

     So in the case above, the set of permissible values is
     [1, 0x7fffffff] after the addition, right?   

Well, not quite.  The addition isn't done in type X, but in type X'Base,
which does not have the restricted TYPE_{MIN,MAX}_VALUES.  But, as we've all
said, there are conversions in there so VRP can use its normal logic.

If you have a different case:

	type X is range 0 .. 16#7fff_ffff#;

	Y, Z : X;

	Z := Y + 1;

you can then conclude that [1, 0x7fffffff] is the set of permissible values
for Z.  But what the last statement actually becomes (in pseudo-C) is:

	Z = (X) ((X'Base) Y + (X'Base) 1);

So the above is not the range for the *addition* (which has the type
unrestricted by the bounds), just for Z.  The reason this distinction is
relevant is that if checking is enabled, the conversion to X will involve
a bounds check.  VRP should be able to deduce that the bounds check can
be replaced by "/= 0x80000000" and that is indeed a correct deduction.
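
To make that deduction concrete, here is a rough C rendering of the checked
conversion (purely illustrative: the names and the exact shape of the check
are mine, not what the Ada front-end actually emits):

#include <limits.h>

/* Hand-written stand-in for Z = (X) ((X'Base) Y + (X'Base) 1), with the
   addition done in a type wider than X.  If y is known to be in
   [0, INT_MAX] (type X), then sum is in [1, (long long) INT_MAX + 1], so
   VRP can reduce the two-sided range check below to the single test
   sum != (long long) INT_MAX + 1, i.e. the "/= 0x80000000" above.  */
static int
x_plus_one (int y)
{
  long long sum = (long long) y + 1;    /* the addition, in the "base" type */
  if (sum < 0 || sum > INT_MAX)         /* range check for the conversion to X */
    __builtin_trap ();                  /* stands in for raising Constraint_Error */
  return (int) sum;
}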


* Re: Ada subtypes and base types
  2006-02-21 20:33 Ada subtypes and base types Richard Kenner
@ 2006-02-21 20:54 ` Jeffrey A Law
  0 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-21 20:54 UTC (permalink / raw)
  To: Richard Kenner; +Cc: gcc

On Tue, 2006-02-21 at 15:40 -0500, Richard Kenner wrote:
>      So in the case above, the set of permissible values is
>      [1, 0x7fffffff] after the addition, right?   
> 
> Well, not quite.  The addition isn't done in type X, but in type X'Base,
> which does not have the restricted TYPE_{MIN,MAX}_VALUES.  But, as we've all
> said, there are conversions in there so VRP can use its normal logic.
Umm, why bring up the base type nonsense at all?  The arithmetic
is done in whatever type is associated with the expression, not
the base type.  Nothing else makes sense.  ie, conversions are
explicit.

> 
> If you have a different case:
> 
> 	type X is range 0 .. 16#7fff_ffff#;
> 
> 	Y, Z : X;
> 
> 	Z := Y + 1;
> 
> you can then conclude that [1, 0x7fffffff] is the set of permissible values
> for Z.  But what the last statement actually becomes (in pseudo-C) is:
> 
> 	Z = (X) ((X'Base) Y + (X'Base) 1);
> 
> So the above is not the range for the *addition* (which has the type
> unrestricted by the bounds), just for Z.  The reason this distinction is
> relevant is that if checking is enabled, the conversion to X will involve
> a bounds check.  VRP should be able to deduce that the bounds check can
> be replaced by "/= 0x80000000" and that is indeed a correct deduction.
You're just making things more complicated -- we don't have to
worry about base types; any base type stuff should be explicit, and
I don't think there's any argument about that.

So, back to my example.  If I have an object with a range
[0, 0x7fffffff] based on the type of the object and I add
one to that object, then I can safely conclude that the
result of the addition has the range [1, 0x7fffffff].  Right?

Jeff



* Re: Ada subtypes and base types
  2006-03-30 17:47                             ` Jeffrey A Law
  2006-03-30 17:52                               ` Andrew Haley
  2006-03-30 19:00                               ` Tom Tromey
@ 2006-03-31 15:58                               ` Duncan Sands
  2 siblings, 0 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-31 15:58 UTC (permalink / raw)
  To: gcc, law; +Cc: tromey, Diego Novillo, Waldek Hebisch

> > On irc today we were discussing handling 'this' in gcj.  We can add an
> > attribute to the argument to mark it as non-null... but strangely
> > there doesn't seem to be a way to mark other local variables as
> > known-non-null -- a curious deficiency.
> It seems to me that for other locals the non-null property
> ought to be a property of how those locals are assigned a value.
> ie, if we're going to be able to derive a non-null for a local we
> should be able to derive it from the RHS of the assignment(s) to
> that local.
>
> IIRC "this" is actually a parameter and thus there's no assignment
> we can derive information from.

Ada has a notion of "non-null pointer type".  A variable declared
to be of such a type is required to have a non-null initial value,
so VRP can presumably work out that it is non-null from the initialization.
The main interest of such a type is in declaring subroutine parameters and
function return values, in which case the existing attribute is likely
good enough.  So the Ada way of handling these types maps pretty well to
what you said.
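
For reference, "the existing attribute" on the C side is GCC's nonnull
function attribute; a minimal sketch (all names made up):

/* Minimal sketch of the nonnull parameter attribute.  Passing a literal
   null pointer to process() draws a warning, and the same "p is never
   null" fact is what VRP could, in principle, pick up for p's default
   definition.  */
extern int consume (int x);

extern int process (int *p) __attribute__ ((nonnull (1)));

int
process (int *p)
{
  return consume (*p);
}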

Ciao,

Duncan.


* Re: Ada subtypes and base types
  2006-03-31 11:30                                     ` Jeffrey A Law
@ 2006-03-31 15:38                                       ` Diego Novillo
  0 siblings, 0 replies; 105+ messages in thread
From: Diego Novillo @ 2006-03-31 15:38 UTC (permalink / raw)
  To: law; +Cc: Andrew Haley, tromey, Duncan Sands, gcc, Waldek Hebisch

On 03/31/06 04:08, Jeffrey A Law wrote:

> Then we just need to make VRP not be so quick to ignore statements
> with virtual operands (ie, the same problem we need to solve if
> VRP is going to be able to use non-null attributes on function
> return values).
> 
Certainly.  The approach I had sketched out earlier should be able to
accommodate all this:

- The front-end encodes in a langhook the notion of non-null attribute
for symbols or function return values or types.

- Propagators propagate that attribute via the propagate_value helper.
This attribute becomes an SSA name attribute (we need flow sensitivity),
so we will need to be able to encode various attributes on SSA names
(perhaps using the aux field).

- VRP (and others) can pick up those propagated attributes when doing
their thing.
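
Very roughly, and only to fix ideas, the langhook of the first point might
look like this (the hook and the TYPE_IS_NONNULL_P macro are hypothetical,
not an existing GCC interface):

/* Hypothetical langhook: the front-end answers whether DECL is known to
   be non-null, e.g. an Ada "not null access" object.  Sketch only.  */
static bool
gnat_decl_known_nonnull (tree decl)
{
  return POINTER_TYPE_P (TREE_TYPE (decl))
         && TYPE_IS_NONNULL_P (TREE_TYPE (decl));
}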


* Re: Ada subtypes and base types
  2006-03-31 10:40                                   ` Andrew Haley
@ 2006-03-31 11:30                                     ` Jeffrey A Law
  2006-03-31 15:38                                       ` Diego Novillo
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-31 11:30 UTC (permalink / raw)
  To: Andrew Haley; +Cc: tromey, Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

On Fri, 2006-03-31 at 09:55 +0100, Andrew Haley wrote:

> Well, it's not just functions but also global variables.  AFAICS
> there are three sources of data: args, retvals, and globals.  And
> there are quite a few of these globals we can usefully tag as "known
> never to be null".
Seems to me the same mechanism we should use for tagging
functions as returning non-null values should be used to tag
globals -- it's just an attribute after all.

Then we just need to make VRP not be so quick to ignore statements
with virtual operands (ie, the same problem we need to solve if
VRP is going to be able to use non-null attributes on function
return values).

Jeff


* Re: Ada subtypes and base types
  2006-03-30 18:34                                 ` Jeffrey A Law
@ 2006-03-31 10:40                                   ` Andrew Haley
  2006-03-31 11:30                                     ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Andrew Haley @ 2006-03-31 10:40 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: tromey, Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

Jeffrey A Law writes:
 > On Thu, 2006-03-30 at 18:39 +0100, Andrew Haley wrote:
 > > Jeffrey A Law writes:
 > >  > On Wed, 2006-03-29 at 14:28 -0700, Tom Tromey wrote:
 > >  > 
 > >  > > On irc today we were discussing handling 'this' in gcj.  We can add an
 > >  > > attribute to the argument to mark it as non-null... but strangely
 > >  > > there doesn't seem to be a way to mark other local variables as
 > >  > > known-non-null -- a curious deficiency.
 > >  > It seems to me that for other locals the non-null property
 > >  > ought to be a property of how those locals are assigned a value.
 > >  > ie, if we're going to be able to derive a non-null for a local we
 > >  > should be able to derive it from the RHS of the assignment(s) to
 > >  > that local.
 > > 
 > > Right, that's true.  The idea is that a great many library functions
 > > either return a valid object or throw an exception, and we can
 > > propagate that information to VRP.
 > > 
 > >  > IIRC "this" is actually a parameter and thus there's no assignment
 > >  > we can derive information from.
 > > 
 > > Mmm, yeah.  The point is that we solved the problem for "this", but
 > > not for other things.

 > And I think the way to solve this is to mark those interesting
 > library functions, then fix VRP so that it's not so eager to ignore
 > function calls :-)

Well, it's not just functions but also global variables.  AFAICS
there are three sources of data: args, retvals, and globals.  And
there are quite a few of these globals we can usefully tag as "known
never to be null".

Andrew.


* Re: Ada subtypes and base types
  2006-03-30 17:47                             ` Jeffrey A Law
  2006-03-30 17:52                               ` Andrew Haley
@ 2006-03-30 19:00                               ` Tom Tromey
  2006-03-31 15:58                               ` Duncan Sands
  2 siblings, 0 replies; 105+ messages in thread
From: Tom Tromey @ 2006-03-30 19:00 UTC (permalink / raw)
  To: law; +Cc: Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

>>>>> "Jeff" == Jeffrey A Law <law@redhat.com> writes:

Jeff> It seems to me that for other locals the non-null property
Jeff> ought to be a property of how those locals are assigned a value.

Yeah, thanks for the re-frame.  This is PR 20318, and also related to
PR 21856.

Tom


* Re: Ada subtypes and base types
  2006-03-30 17:52                               ` Andrew Haley
@ 2006-03-30 18:34                                 ` Jeffrey A Law
  2006-03-31 10:40                                   ` Andrew Haley
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-30 18:34 UTC (permalink / raw)
  To: Andrew Haley; +Cc: tromey, Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

On Thu, 2006-03-30 at 18:39 +0100, Andrew Haley wrote:
> Jeffrey A Law writes:
>  > On Wed, 2006-03-29 at 14:28 -0700, Tom Tromey wrote:
>  > 
>  > > On irc today we were discussing handling 'this' in gcj.  We can add an
>  > > attribute to the argument to mark it as non-null... but strangely
>  > > there doesn't seem to be a way to mark other local variables as
>  > > known-non-null -- a curious deficiency.
>  > It seems to me that for other locals the non-null property
>  > ought to be a property of how those locals are assigned a value.
>  > ie, if we're going to be able to derive a non-null for a local we
>  > should be able to derive it from the RHS of the assignment(s) to
>  > that local.
> 
> Right, that's true.  The idea is that a great many library functions
> either return a valid object or throw an exception, and we can
> propagate that information to VRP.
> 
>  > IIRC "this" is actually a parameter and thus there's no assignment
>  > we can derive information from.
> 
> Mmm, yeah.  The point is that we solved the problem for "this", but
> not for other things.
And I think the way to solve this is to mark those interesting
library functions, then fix VRP so that it's not so eager to ignore
function calls :-)

jeff
> 
> Andrew.


* Re: Ada subtypes and base types
  2006-03-29  1:11                         ` Duncan Sands
  2006-03-29 22:54                           ` Tom Tromey
@ 2006-03-30 18:01                           ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-30 18:01 UTC (permalink / raw)
  To: Duncan Sands; +Cc: Diego Novillo, gcc, Waldek Hebisch

On Tue, 2006-03-28 at 23:49 +0200, Duncan Sands wrote:

> That still leaves the problem of how the Ada front-end tells the
> middle-end that a variable is known to be in a certain range.  Due
> to the way the language works, the front-end often does know a useful
> range, helpful for optimisation.  If using types with strange ranges
> is out, and ASSERT_EXPRs aren't appropriate, what is left?  After
> further thought, I'm not so against the use of these ranges as I was...
As I've stated before, I really don't think that we're gaining that
much from those front-end ranges and the way the front-ends are
generating code is likely getting in the way of more useful 
optimizations.

I haven't thought a whole lot about how to get the benefit of those
ranges without the penalties.  The only guidelines I can immediately
come up with are to not expose the subtypes to the optimizers except
for VRP.  ie, do *everything* in the base type and not emit all the
conversions between the base and subtypes, then have some way for
VRP to query objects for subtype range information.
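
In other words, instead of today's

	Z = (X) ((X'Base) Y + (X'Base) 1);

the front-end would emit, in pseudo-C, just

	Z = Y + 1;

entirely in the base type, and VRP would learn that Z is in
[0, 0x7fffffff] by asking some yet-to-be-designed query about Z's declared
subtype instead of looking at explicit conversions.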

> I'm playing with the code right now, getting rid of VR_VARYING.  Do
> you have any suggestions for benchmarking it?
Most of us have a bucket of .i and .ii files that we use for compile
time testing.  You might even be able to find a tarball of such files
on Diego's home page IIRC.

You might consider running those benchmarks with a compiler that
has enable-checking turned off.  You're not interested in the
checking bits and they'll just make it harder to find/interpret
any interesting results from your tests.


jeff



* Re: Ada subtypes and base types
  2006-03-30 17:47                             ` Jeffrey A Law
@ 2006-03-30 17:52                               ` Andrew Haley
  2006-03-30 18:34                                 ` Jeffrey A Law
  2006-03-30 19:00                               ` Tom Tromey
  2006-03-31 15:58                               ` Duncan Sands
  2 siblings, 1 reply; 105+ messages in thread
From: Andrew Haley @ 2006-03-30 17:52 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: tromey, Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

Jeffrey A Law writes:
 > On Wed, 2006-03-29 at 14:28 -0700, Tom Tromey wrote:
 > 
 > > On irc today we were discussing handling 'this' in gcj.  We can add an
 > > attribute to the argument to mark it as non-null... but strangely
 > > there doesn't seem to be a way to mark other local variables as
 > > known-non-null -- a curious deficiency.
 > It seems to me that for other locals the non-null property
 > ought to be a property of how those locals are assigned a value.
 > ie, if we're going to be able to derive a non-null for a local we
 > should be able to derive it from the RHS of the assignment(s) to
 > that local.

Right, that's true.  The idea is that a great many library functions
either return a valid object or throw an exception, and we can
propagate that information to VRP.

 > IIRC "this" is actually a parameter and thus there's no assignment
 > we can derive information from.

Mmm, yeah.  The point is that we solved the problem for "this", but
not for other things.

Andrew.


* Re: Ada subtypes and base types
  2006-03-29 22:54                           ` Tom Tromey
  2006-03-30 13:22                             ` Duncan Sands
@ 2006-03-30 17:47                             ` Jeffrey A Law
  2006-03-30 17:52                               ` Andrew Haley
                                                 ` (2 more replies)
  1 sibling, 3 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-30 17:47 UTC (permalink / raw)
  To: tromey; +Cc: Duncan Sands, Diego Novillo, gcc, Waldek Hebisch

On Wed, 2006-03-29 at 14:28 -0700, Tom Tromey wrote:

> On irc today we were discussing handling 'this' in gcj.  We can add an
> attribute to the argument to mark it as non-null... but strangely
> there doesn't seem to be a way to mark other local variables as
> known-non-null -- a curious deficiency.
It seems to me that for other locals the non-null property
ought to be a property of how those locals are assigned a value.
ie, if we're going to be able to derive a non-null for a local we
should be able to derive it from the RHS of the assignment(s) to
that local.


IIRC "this" is actually a parameter and thus there's no assignment
we can derive information from.

jeff



* Re: Ada subtypes and base types
  2006-03-30 13:22                             ` Duncan Sands
@ 2006-03-30 15:45                               ` Diego Novillo
  0 siblings, 0 replies; 105+ messages in thread
From: Diego Novillo @ 2006-03-30 15:45 UTC (permalink / raw)
  To: Duncan Sands; +Cc: tromey, gcc, Waldek Hebisch, Anthony Green

On 03/30/06 07:24, Duncan Sands wrote:
> On Wednesday 29 March 2006 23:28, Tom Tromey wrote:
>
>> On irc today we were discussing handling 'this' in gcj.  We can add an
>> attribute to the argument to mark it as non-null... but strangely
>> there doesn't seem to be a way to mark other local variables as
>> known-non-null -- a curious deficiency.
> 
> If the front-end was allowed to place ASSERT_EXPRs in the trees it passes
> to the middle-end, then you could place such assertions on the appropriate
> local variables and voila.  Jeff claimed that this would cause serious
> problems for the copy propagation pass, but I don't see why.
> 
When I initially implemented VRP, I was adding assertions very early in
the compilation pipeline.  This has the advantage of getting the SSA
form on assertions built "for free".

However, it has negative effects on passes that rely on equality
properties like copy-propagation, value numbering, etc.  So not only
do you need to "fix" every pass to see through ASSERT_EXPRs, but you also
pay a compile-time penalty for doing so.
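
A small hand-written illustration of the problem (pseudo-GIMPLE, not
actual compiler output):

  if (x_1 > 0)
    {
      y_2 = ASSERT_EXPR <x_1, x_1 > 0>;
      /* Uses of x_1 below the assertion are renamed to y_2, so ...  */
      z_3 = y_2 + w_4;  /* ... a pass that does not look through the
                           ASSERT_EXPR no longer sees that y_2 is just
                           a copy of x_1.  */
    }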

My recommendation is to first try and use langhooks that you can then
query in the assert insertion phase in tree-vrp.c.

In fact, ISTR something along these lines that Anthony Green added as a
proof of concept sometime last year.  Anthony?  Do you remember what it was?


* Re: Ada subtypes and base types
  2006-03-29 22:54                           ` Tom Tromey
@ 2006-03-30 13:22                             ` Duncan Sands
  2006-03-30 15:45                               ` Diego Novillo
  2006-03-30 17:47                             ` Jeffrey A Law
  1 sibling, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-30 13:22 UTC (permalink / raw)
  To: tromey; +Cc: Diego Novillo, gcc, Waldek Hebisch

On Wednesday 29 March 2006 23:28, Tom Tromey wrote:
> >>>>> "Duncan" == Duncan Sands <baldrick@free.fr> writes:
> 
> Duncan> That still leaves the problem of how the Ada front-end tells the
> Duncan> middle-end that a variable is known to be in a certain range.  Due
> Duncan> to the way the language works, the front-end often does know a useful
> Duncan> range, helpful for optimisation.  If using types with strange ranges
> Duncan> is out, and ASSERT_EXPRs aren't appropriate, what is left?
> 
> Yeah, there's got to be something.
> 
> On irc today we were discussing handling 'this' in gcj.  We can add an
> attribute to the argument to mark it as non-null... but strangely
> there doesn't seem to be a way to mark other local variables as
> known-non-null -- a curious deficiency.

If the front-end was allowed to place ASSERT_EXPRs in the trees it passes
to the middle-end, then you could place such assertions on the appropriate
local variables and voila.  Jeff claimed that this would cause serious
problems for the copy propagation pass, but I don't see why.

All the best,

Duncan.


* Re: Ada subtypes and base types
  2006-03-29  1:11                         ` Duncan Sands
@ 2006-03-29 22:54                           ` Tom Tromey
  2006-03-30 13:22                             ` Duncan Sands
  2006-03-30 17:47                             ` Jeffrey A Law
  2006-03-30 18:01                           ` Jeffrey A Law
  1 sibling, 2 replies; 105+ messages in thread
From: Tom Tromey @ 2006-03-29 22:54 UTC (permalink / raw)
  To: Duncan Sands; +Cc: Diego Novillo, gcc, Waldek Hebisch

>>>>> "Duncan" == Duncan Sands <baldrick@free.fr> writes:

Duncan> That still leaves the problem of how the Ada front-end tells the
Duncan> middle-end that a variable is known to be in a certain range.  Due
Duncan> to the way the language works, the front-end often does know a useful
Duncan> range, helpful for optimisation.  If using types with strange ranges
Duncan> is out, and ASSERT_EXPRs aren't appropriate, what is left?

Yeah, there's got to be something.

On irc today we were discussing handling 'this' in gcj.  We can add an
attribute to the argument to mark it as non-null... but strangely
there doesn't seem to be a way to mark other local variables as
known-non-null -- a curious deficiency.

Tom


* Re: Ada subtypes and base types
  2006-03-28 18:50                       ` Jeffrey A Law
@ 2006-03-29  1:11                         ` Duncan Sands
  2006-03-29 22:54                           ` Tom Tromey
  2006-03-30 18:01                           ` Jeffrey A Law
  0 siblings, 2 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-29  1:11 UTC (permalink / raw)
  To: law; +Cc: Diego Novillo, gcc, Waldek Hebisch

Hi Jeff, thanks for the info.

> > I agree that this kind of special casing by the middle-end is all
> > wrong - the front-end should do it.
> I'd disagree.  While it may at first seem useful to have ASSERT_EXPRs
> live through the entire optimization pipeline, they're actually going
> to get in the way of a fair amount of simple optimizations, specifically
> copy propagation and the secondary optimizations exposed by 
> copy propagation.
> 
> 
> > 
> > <RANT>
> > Personally I think the Ada frontend should never pass types with
> > a restricted range to the middle-end (in Ada terms, this means
> > that the middle-end would only ever see base types).
> I certainly do agree with this -- I've been pretty vocal about
> that already :-)

That still leaves the problem of how the Ada front-end tells the
middle-end that a variable is known to be in a certain range.  Due
to the way the language works, the front-end often does know a useful
range, helpful for optimisation.  If using types with strange ranges
is out, and ASSERT_EXPRs aren't appropriate, what is left?  After
further thought, I'm not so against the use of these ranges as I was...

> > So that means that initialising variables to [TYPE_MIN,TYPE_MAX]
> > is likely not acceptable in general.
> No, the problems referenced above are latent bugs that need to
> be fixed.  They do not argue that initializing to [TYPE_MIN, TYPE_MAX]
> is the wrong direction.

You are right.

> What needs to drive the representational solution is the ability to
> get the optimizations we want without paying a hefty compile-time
> or maintenance penalty.

I'm playing with the code right now, getting rid of VR_VARYING.  Do
you have any suggestions for benchmarking it?

Ciao,

Duncan.


* Re: Ada subtypes and base types
  2006-03-28  1:20                     ` Duncan Sands
@ 2006-03-28 18:50                       ` Jeffrey A Law
  2006-03-29  1:11                         ` Duncan Sands
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-28 18:50 UTC (permalink / raw)
  To: Duncan Sands; +Cc: Diego Novillo, gcc, Waldek Hebisch

On Tue, 2006-03-28 at 01:03 +0200, Duncan Sands wrote:
> Hi Jeff,
> 
> On Monday 27 March 2006 21:00, Jeffrey A Law wrote:
> > On Sat, 2006-03-25 at 10:35 -0500, Diego Novillo wrote:
> > 
> > > Start by looking at tree-vrp.c:infer_value_range.
> > I'm not so sure this is the best place to start.
> 
> it seems a good place for adding ASSERT_EXPRs on function return
> values though.
No.  For assignments you really want to discover the range in
visit_assignment.

infer_value_range is really best suited for discovering ranges
that are implicit in the statement.  For example, if a statement
dereferences a pointer, then you can infer that the pointer must
not be null.  A conditional or switch statement allows you to
infer a value on one or more outgoing edges from the block
containing the conditional/switch.
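
A tiny hand-written example of the pointer case:

  tmp_6 = *p_5;      /* this dereference lets VRP infer p_5 != 0B, ...  */
  if (p_5 == 0B)     /* ... so a later null test like this one can be
                        folded away.  */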


> I agree, however I don't yet know how to walk the parameter list, so
> any hints would be welcome :)
Search for loops which iterate over DECL_ARGUMENTS.  Here's an example
from function.c:

  /* Process all parameters of the function.  */
  for (decl = DECL_ARGUMENTS (fndecl); decl; decl = TREE_CHAIN (decl))
    {
      instantiate_decl (DECL_RTL (decl));
      instantiate_decl (DECL_INCOMING_RTL (decl));
      if (DECL_HAS_VALUE_EXPR_P (decl))
        {
          tree v = DECL_VALUE_EXPR (decl);
          walk_tree (&v, instantiate_expr, NULL, NULL);
        }
    }


But again, I would caution against going down this path without
first determining what we're going to do about the representational
issues we need to resolve.  In fact, resolving those issues will
make insertion of ASSERT_EXPRs for the arguments a pointless exercise.


> I agree that this kind of special casing by the middle-end is all
> wrong - the front-end should do it.
I'd disagree.  While it may at first seem useful to have ASSERT_EXPRs
live through the entire optimization pipeline, they're actually going
to get in the way of a fair amount of simple optimizations, specifically
copy propagation and the secondary optimizations exposed by 
copy propagation.


> 
> <RANT>
> Personally I think the Ada frontend should never pass types with
> a restricted range to the middle-end (in Ada terms, this means
> that the middle-end would only ever see base types).
I certainly do agree with this -- I've been pretty vocal about
that already :-)


> Also, I now think it would be a mistake to initialise all
> variable ranges to [TYPE_MIN, TYPE_MAX].  I implemented this,
> and got great optimisation - a bit too much optimisation in fact!
> For example, you can't test whether a variable is in range using
> the obvious "if" statement anymore, since VRP will eliminate it
> (after all, it "knows" the variable is in range already, because
> it was told so).
Right, but that's exactly what you'd expect and exactly what
other optimizations in GCC already do.  And if you've got such
a conditional, then it's clearly useless (which usually means
it's testing the wrong thing).


> Do the Ada people want this level of optimisation?  I'm pretty sure
> they don't: I've heard Robert Dewar say that he considers exactly this
> optimisation (eliminating an "if" on an uninitialised variable based
> on the range given in the type definition) to be a bad idea.
There's definitely a corner case with Ada's uninitialized variables
which is unresolved (I didn't follow the discussion closely, so
perhaps they did come up with a solution).

> So that means that initialising variables to [TYPE_MIN,TYPE_MAX]
> is likely not acceptable in general.
No, the problems referenced above are latent bugs that need to
be fixed.  They do not argue that initializing to [TYPE_MIN, TYPE_MAX]
is the wrong direction.

What needs to drive the representational solution is the ability to
get the optimizations we want without paying a hefty compile-time
or maintenance penalty.



Jeff


* Re: Ada subtypes and base types
  2006-03-27 21:34                   ` Jeffrey A Law
@ 2006-03-28  1:20                     ` Duncan Sands
  2006-03-28 18:50                       ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-28  1:20 UTC (permalink / raw)
  To: law; +Cc: Diego Novillo, gcc, Waldek Hebisch

Hi Jeff,

On Monday 27 March 2006 21:00, Jeffrey A Law wrote:
> On Sat, 2006-03-25 at 10:35 -0500, Diego Novillo wrote:
> 
> > Start by looking at tree-vrp.c:infer_value_range.
> I'm not so sure this is the best place to start.

it seems a good place for adding ASSERT_EXPRs on function return
values though.

> It seems to me that the asserts could be registered at the
> start of insert_range_assertions.
> 
> Just walk the parameter list and extract the default definition
> for each parameter and register an appropriate assertion based
> on the parameter's type.

I agree, however I don't yet know how to walk the parameter list, so
any hints would be welcome :)

> Note that this basically avoids reaching any kind of conclusion
> about the best way to handle our representational issue with
> VR_VARYING vs just creating a range from TYPE_MIN_VALUE to
> TYPE_MAX_VALUE.  There's a part of me which would recommend
> reaching some conclusions on that issue before going down the
> path of special casing parameters.

I agree that this kind of special casing by the middle-end is all
wrong - the front-end should do it.

<RANT>
Personally I think the Ada frontend should never pass types with
a restricted range to the middle-end (in Ada terms, this means
that the middle-end would only ever see base types).  If the
front-end knows (or is prepared to have the optimisers assume)
that the value of a variable is in some restricted range, then
the front-end should itself insert an appropriate ASSERT_EXPR
in the tree that it is going to pass to the middle-end.

Of course this means reworking the middle-end a bit so it can
handle assertions right from the beginning.  Not to mention
reworking the Ada front-end.  But it gives more control to the
front-end, and it would likely be an overall code simplification
for the middle-end, and surely a conceptual simplification,
since no-one would have to worry about funny ranges any more,
and ASSERT_EXPRs are easy to understand.

After all, saying that a variable belongs to a type T with a
restricted range means: (A) it belongs to the base type (int,
for example), and (B) the value that it holds is within some
subrange.  (B) is essentially an assertion, exactly of the kind
ASSERT_EXPR was introduced to capture.  The problem with using
types like T is that sometimes (B) holds, sometimes it doesn't.
A variable declared to be of type T won't usually satisfy (B)
until it gets an initial value.  If the uninitialised variable
is passed to a procedure, now you have a procedure parameter
of type T which doesn't satisfy (B).  So when are the optimisers
allowed to assume (B)?  The Ada standard carefully defines the
possible outcomes of using an out-of-range variable; the middle-end
could try to implement these subtle semantics.  But surely it
shouldn't even try: it should leave Ada semantics to the Ada
front-end, and provide a means by which the front-end can tell
it what it may assume.  ASSERT_EXPRs seem like a decent way
of doing this, in any case a better way than the current type
based system.
</RANT>

That said, I'm still planning to implement the ASSERT_EXPR on
subroutine parameters scheme, to check that it really is feasible,
and to learn about the middle-end.

> Note that this basically avoids reaching any kind of conclusion
> about the best way to handle our representational issue with
> VR_VARYING vs just creating a range from TYPE_MIN_VALUE to
> TYPE_MAX_VALUE.  There's a part of me which would recommend
> reaching some conclusions on that issue before going down the
> path of special casing parameters.

If exotic TYPE_MIN, TYPE_MAX values did not exist, would you
still feel there was an issue here?

Also, I now think it would be a mistake to initialise all
variable ranges to [TYPE_MIN, TYPE_MAX].  I implemented this,
and got great optimisation - a bit too much optimisation in fact!
For example, you can't test whether a variable is in range using
the obvious "if" statement anymore, since VRP will eliminate it
(after all, it "knows" the variable is in range already, because
it was told so).

Do the Ada people want this level of optimisation?  I'm pretty sure
they don't: I've heard Robert Dewar say that he considers exactly this
optimisation (eliminating an "if" on an uninitialised variable based
on the range given in the type definition) to be a bad idea.

So that means that initialising variables to [TYPE_MIN,TYPE_MAX]
is likely not acceptable in general.  Actually, it might be acceptable
if the Ada front-end was modified to only declare a variable to be of a
type with a restricted range when it is sure (or is willing to have
the optimisers assume) that the value is in that range.  For example,
uninitialised variables would have to be declared to be of the base type
(no funny range), and would be cast to the restricted range type when the
value is known to be in range, perhaps because Ada semantics ensure it is
in range, or because a range check was just performed.  [In the scheme I
suggested in my rant, the front-end would insert an ASSERT_EXPR here, rather
than a type cast].

Maybe the Ada people *are* happy if VRP assumes that uninitialised
variables are in range.  I'm really hoping that some Ada gurus will step
in here and enlighten us.  Or maybe a Pascal guru :)

A final remark: ASSERT_EXPRs and types with restricted ranges are
actually equivalent (at least if you are always allowed to assume
that a variable is in the range of its type).  Proof?  To get rid
of ASSERT_EXPRs and only use restricted ranges, proceed as follows:
every time you want to insert an ASSERT_EXPR in VRP, introduce a new
type instead, with [TYPE_MIN,TYPE_MAX] equal to the range that the
assertion would have defined, and cast the variable to that type; the
rest of VRP would then simply examine variable types and their ranges,
rather than ASSERT_EXPRs.  Conversely, to get rid of types with restricted
ranges, every time you see a variable of such a type, redeclare the
variable to be of the corresponding unrestricted "base" type, and insert
an ASSERT_EXPR that the variable is in the range [TYPE_MIN,TYPE_MAX].

The point of this remark is that there are two mechanisms for doing
the same thing: ASSERT_EXPRs and types with funny ranges.  Supporting
them both is simply a duplication of effort, and asking for trouble.
One of these mechanisms is redundant.  Since everyone understands
ASSERT_EXPRs, and almost no-one understands funny ranges, I think
funny ranges should be eliminated and ASSERT_EXPRs used more widely.

Sorry to have ranted so long.

All the best,

Duncan.


* Re: Ada subtypes and base types
  2006-03-25 18:00                 ` Diego Novillo
@ 2006-03-27 21:34                   ` Jeffrey A Law
  2006-03-28  1:20                     ` Duncan Sands
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-27 21:34 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Duncan Sands, gcc, Waldek Hebisch

On Sat, 2006-03-25 at 10:35 -0500, Diego Novillo wrote:

> Start by looking at tree-vrp.c:infer_value_range.
I'm not so sure this is the best place to start.

It seems to me that the asserts could be registered at the
start of insert_range_assertions.

Just walk the parameter list and extract the default definition
for each parameter and register an appropriate assertion based
on the parameter's type.

Note that this basically avoids reaching any kind of conclusion
about the best way to handle our representational issue with
VR_VARYING vs just creating a range from TYPE_MIN_VALUE to
TYPE_MAX_VALUE.  There's a part of me which would recommend
reaching some conclusions on that issue before going down the
path of special casing parameters.

Jeff


* Re: Ada subtypes and base types
  2006-03-25 16:26               ` Duncan Sands
@ 2006-03-25 18:00                 ` Diego Novillo
  2006-03-27 21:34                   ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Diego Novillo @ 2006-03-25 18:00 UTC (permalink / raw)
  To: Duncan Sands; +Cc: law, gcc, Waldek Hebisch

On 03/25/06 09:12, Duncan Sands wrote:

> I'm quite interested in trying out this scheme.  Unfortunately it's not clear
> to me how I would go about finding which variables are subroutine parameters,
> and adding ASSERT_EXPRs for them; any hints would be welcome.
> 
Probably not a bad idea to test.  You can recognize parameters with the
predicate TREE_CODE (sym) == PARM_DECL.  The SSA name on which you want
to add the assertion is the so called "default definition" for that
PARM_DECL.

A default definition is an SSA name that is created for a variable SYM
when the very first reference to SYM is a load/read.  For instance

foo (int i)
{
  return i - 1;
}

The SSA form for the above is:

foo (int i)
{
   return i_1 - 1;
}

i_1 is the "default definition" for 'i'.  You find that out with the
predicate 'name == default_def (sym)'.

Start by looking at tree-vrp.c:infer_value_range.
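
Putting the pieces together, the walk might look roughly like this
(sketch only: register_range_assertion is a made-up helper, the rest are
existing macros/functions):

  tree parm;

  for (parm = DECL_ARGUMENTS (current_function_decl);
       parm;
       parm = TREE_CHAIN (parm))
    {
      tree name = default_def (parm);

      /* Only integral parameters with a default definition are
         interesting here.  */
      if (name && INTEGRAL_TYPE_P (TREE_TYPE (parm)))
        register_range_assertion (name,
                                  TYPE_MIN_VALUE (TREE_TYPE (parm)),
                                  TYPE_MAX_VALUE (TREE_TYPE (parm)));
    }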


* Re: Ada subtypes and base types
  2006-03-25  0:09             ` Jeffrey A Law
@ 2006-03-25 16:26               ` Duncan Sands
  2006-03-25 18:00                 ` Diego Novillo
  0 siblings, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-25 16:26 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

Hi Jeff,

> > By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
> > variables to [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
> > Ada range checks.  Maybe the compiler will even bootstrap :)
> The thing to check will be compile-time performance -- in general
> with propagators such as VRP, CCP and CPROP it's compile-time
> advantageous to drop to a VARYING state in the lattice as quickly
> as possible.  The flipside is you can sometimes miss transformations
> when there's cases when you can use a VARYING object in an expression
> and get a useful value/range.
> 
> Basically the two extremes we need to look at are:
> 
>   1. Give everything a range, even if it's just TYPE_MIN/TYPE_MAX.  In
>      this case VR_VARYING disappears completely as it's meaningless.
> 
>   2. Anytime we're going to record a range TYPE_MIN/TYPE_MAX, drop to
>      VARYING.  The trick then becomes to find all those cases where we
>      have an expression involving a VARYING which produces a useful
>      range (such as widening typecasts, IOR with nonzero, etc etc).

I've thought of a different approach: add ASSERT_EXPRs on those subroutine
parameters, function return values and (perhaps) global variables which have
exotic TYPE_MIN/TYPE_MAX values ("exotic" means different from what you'd
expect given the signedness and TYPE_PRECISION).  The assertion is that the
variable is in the range [TYPE_MIN,TYPE_MAX].  Except for this, VRP would
completely ignore exotic TYPE_MIN/TYPE_MAX values.  In particular, nothing
special would be done for local variables of a type with an exotic range.

For example, consider

int v (restricted_int i) {
        restricted_int j;

        if (j > 100) return -1;
        j = i;
        if (j > 100) return -1;
        return 0;
}

where restricted_int has TYPE_MAX=100 (this type does not exist in C of course).
Then a single ASSERT_EXPR would be added, something like this (I know the syntax
is wrong, this is just to convey the idea):

int v (restricted_int i) {
        ASSERT_EXPR (i <= 100);

        restricted_int j;

        if (j > 100) return -1;
        j = i;
        if (j > 100) return -1;
        return 0;
}

Thus the second "if" statement would be eliminated but not the first.
As mentioned, except for inserting the ASSERT_EXPR, VRP can ignore
the fact that the type has an exotic TYPE_MAX, and reason as if TYPE_MAX
was that given by TYPE_PRECISION, as for a standard C int.  The
exoticness is all captured in the ASSERT_EXPR.

I think this scheme is enough to get good optimisation for a language like
Ada.  Advantages of this approach:

(A) No overhead for languages like C which lack exotic type ranges.
(B) Simplifies VRP, since (except for inserting the ASSERT_EXPRs) it
can completely ignore the existence of exotic ranges; and people working
on it won't need to think about them either.  From a maintenance point
of view this is excellent: as second class citizens, types with exotic
ranges are sure to be forgotten about 99% of the time when someone modifies
VRP; with this scheme, being forgotten about doesn't matter.
(C) Avoids over-aggressive optimisation on uninitialised variables.  Consider
the variable j in the example above.  If VRP considered it to be in the range
[-INF,100] because it is of type restricted_int, then it would eliminate the
first "if" statement.  This kind of thing is extremely dangerous.  Probably
the programmer placed the "if" statement there to check that the variable
really was in its range.  Eliminating it would be legal in Ada (indeed the
program is wrong); however, in practice this kind of thing is unwise: it doesn't
gain you much from an optimisation point of view, but it can make programs
behave in very unexpected ways.  It may be unwise to add ASSERT_EXPRs for
global variables with exotic ranges, since they could be uninitialised.

Furthermore, this scheme agrees well with Ada semantics.  What does
it mean to say that a subroutine parameter is of type restricted_int?  It
really means that (1) it is of type int; and (2) its value has been checked,
before the subroutine call, and is guaranteed to be <= 100.  The code to do
the checking is automatically inserted by the compiler, which adds some code
like this at the point of the subroutine call:

if (x > 100)
        raise_some_exception;
else
        result = v (x);

Running VRP on this would cause an ASSERT_EXPR to be placed before the call:

if (x > 100)
        raise_some_exception;
else {
        ASSERT_EXPR (x <= 100);
        result = v (x);
}

I'm basically suggesting that this same ASSERT_EXPR be duplicated inside the
subroutine v.  This is OK, because the compiler guarantees that any parameter
passed to v always satisfies this assertion, for every call to v.

The compiler also guarantees that the result of a function call is in the range
of the return type (by inserting a check at the end of the function, if necessary).
Thus an ASSERT_EXPR can be placed on the result of a function call.

Finally, the language does not guarantee that an uninitialised variable has a
value that is in range.  Thus there is no need to add ASSERT_EXPRs for variable
declarations in general.

I'm quite interested in trying out this scheme.  Unfortunately it's not clear
to me how I would go about finding which variables are subroutine parameters,
and adding ASSERT_EXPRs for them; any hints would be welcome.

Ciao,

Duncan.


* Re: Ada subtypes and base types
  2006-03-24 19:41           ` Duncan Sands
@ 2006-03-25  0:09             ` Jeffrey A Law
  2006-03-25 16:26               ` Duncan Sands
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-25  0:09 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

On Fri, 2006-03-24 at 18:50 +0100, Duncan Sands wrote:

> Hi Jeff, while your patch catches many cases, the logic seems a bit wonky
> for types with TYPE_MIN/TYPE_MAX different to those that can be deduced
> from TYPE_PRECISION.  For example, there is nothing stopping inner_type
> having a bigger TYPE_PRECISION than outer_type, but a smaller
> [TYPE_MIN,TYPE_MAX] range.  For example, outer_type could be a byte with
> range 0 .. 255, and inner_type could be an integer with range 10 .. 20.
> I think this logic:
I really wouldn't expect that to happen terribly often.  If you think
it's worth handling, then feel free to cobble some code together and
submit it.  It shouldn't be terribly complex.


> By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
> variables to [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
> Ada range checks.  Maybe the compiler will even bootstrap :)
The thing to check will be compile-time performance -- in general
with propagators such as VRP, CCP and CPROP it's compile-time
advantageous to drop to a VARYING state in the lattice as quickly
as possible.  The flipside is you can sometimes miss transformations
when there are cases where you can use a VARYING object in an expression
and get a useful value/range.

Basically the two extremes we need to look at are:

  1. Give everything a range, even if it's just TYPE_MIN/TYPE_MAX.  In
     this case VR_VARYING disappears completely as it's meaningless.

  2. Anytime we're going to record a range TYPE_MIN/TYPE_MAX, drop to
     VARYING.  The trick then becomes to find all those cases where we
     have an expression involving a VARYING which produces a useful
     range (such as widening typecasts, IOR with nonzero, etc etc).


Jeff


* Re: Ada subtypes and base types
  2006-03-21 22:42         ` Jeffrey A Law
  2006-03-22 21:29           ` Duncan Sands
  2006-03-23 10:36           ` Duncan Sands
@ 2006-03-24 19:41           ` Duncan Sands
  2006-03-25  0:09             ` Jeffrey A Law
  2 siblings, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-24 19:41 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

On Tuesday 21 March 2006 21:59, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
> 
> > Hi Jeff, on the subject of seeing through typecasts, I was playing around
> > with VRP and noticed that the following "if" statement is not eliminated:
> > 
> > int u (unsigned char c) {
> >         int i = c;
> > 
> >         if (i < 0 || i > 255)
> >                 return -1; /* never taken */
> >         else
> >                 return 0;
> > }
> > 
> > Is it supposed to be?
> Fixed thusly.  Bootstrapped and regression tested on i686-pc-linux-gnu.

Hi Jeff, while your patch catches many cases, the logic seems a bit wonky
for types with TYPE_MIN/TYPE_MAX different to those that can be deduced
from TYPE_PRECISION.  For example, there is nothing stopping inner_type
having a bigger TYPE_PRECISION than outer_type, but a smaller
[TYPE_MIN,TYPE_MAX] range.  For example, outer_type could be a byte with
range 0 .. 255, and inner_type could be an integer with range 10 .. 20.
I think this logic:

!       if (vr0.type == VR_RANGE
!         || (vr0.type == VR_VARYING
!             && TYPE_PRECISION (outer_type) > TYPE_PRECISION (inner_type)))
!       {
!         tree new_min, new_max, orig_min, orig_max;

should really test whether
	TYPE_MAX (inner_type) < TYPE_MAX (outer_type) || TYPE_MIN (inner_type) > TYPE_MIN (outer_type)
and take the appropriate action if so.
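
In code, the suggested test might look roughly like this (just a sketch,
and it assumes both types' bounds are constants; as noted below they can
also be symbolic):

        if (vr0.type == VR_RANGE
            || (vr0.type == VR_VARYING
                && (tree_int_cst_lt (TYPE_MAX_VALUE (inner_type),
                                     TYPE_MAX_VALUE (outer_type))
                    || tree_int_cst_lt (TYPE_MIN_VALUE (outer_type),
                                        TYPE_MIN_VALUE (inner_type)))))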

By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
variables to [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
Ada range checks.  Maybe the compiler will even bootstrap :)
This approach has advantages, even if TYPE_MIN/MAX are those given by
TYPE_PRECISION.  For example, after incrementing a variable you automatically
get the range [TYPE_MIN+1,INF].  The approach also has disadvantages: for
example, the notion of an "interesting" range has to be adjusted, since now
VR_RANGE can be pretty uninteresting; and how interesting is [TYPE_MIN+1,INF]
in practice?  Also, there's likely to be extra overhead, due to pointless
computations and comparisons on [TYPE_MIN, TYPE_MAX] ranges, especially if
they are symbolic.  Furthermore, in the case of symbolic TYPE_MIN and/or TYPE_MAX,
it may be possible to deduce better ranges using the max/min range given by
TYPE_PRECISION, since these are compile time constants, rather than using
[TYPE_MIN,TYPE_MAX].

As a final thought, it struck me that it might make sense to associate the
range [TYPE_MIN, TYPE_MAX] to the type (rather than to variables of the type),
and extend the notion of equivalence so that a variable's range can be
equivalent to the range of its type.   This is probably nonsensical, but
I can't tell given my current minuscule understanding of VRP ;)

All the best,

Duncan.


* Re: Ada subtypes and base types
  2006-03-23 16:37                 ` Andrew Pinski
@ 2006-03-23 20:44                   ` Duncan Sands
  0 siblings, 0 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-23 20:44 UTC (permalink / raw)
  To: Andrew Pinski; +Cc: Richard Guenther, law, gcc, Waldek Hebisch

On Thursday 23 March 2006 17:14, Andrew Pinski wrote:
> 
> On Mar 23, 2006, at 11:10 AM, Richard Guenther wrote:
> >
> >
> > Well - we could hack a new type attribute to specify min/max values...
> 
> Or maybe try using C++ instead, since C++ rules for enums are different
> from C's :).

Well, I like the attribute idea since it gives much more flexibility
in making examples.  Also, though I guess this only matters to me, in
order to produce C++ examples I'd have to build the compiler with C++
support, which I'd rather not do since I have no use for C++ otherwise.

Ciao,

Duncan.


* Re: Ada subtypes and base types
  2006-03-23 16:26               ` Richard Guenther
@ 2006-03-23 16:37                 ` Andrew Pinski
  2006-03-23 20:44                   ` Duncan Sands
  0 siblings, 1 reply; 105+ messages in thread
From: Andrew Pinski @ 2006-03-23 16:37 UTC (permalink / raw)
  To: Richard Guenther; +Cc: law, Duncan Sands, gcc, Waldek Hebisch


On Mar 23, 2006, at 11:10 AM, Richard Guenther wrote:
>
>
> Well - we could hack a new type attribute to specify min/max values...

Or maybe try using C++ instead, since C++ rules for enums are different
from C's :).

-- Pinski


* Re: Ada subtypes and base types
  2006-03-23 16:15             ` Jeffrey A Law
@ 2006-03-23 16:26               ` Richard Guenther
  2006-03-23 16:37                 ` Andrew Pinski
  0 siblings, 1 reply; 105+ messages in thread
From: Richard Guenther @ 2006-03-23 16:26 UTC (permalink / raw)
  To: law; +Cc: Duncan Sands, gcc, Waldek Hebisch

On 3/23/06, Jeffrey A Law <law@redhat.com> wrote:
> On Thu, 2006-03-23 at 10:40 +0100, Duncan Sands wrote:
> > Hi Jeff, this seems to work nicely - thanks again.  I still see a bunch
> > of suboptimal range calculations in the Ada code I'm looking at, but these
> > now just coming from having everything initialised to VR_VARYING rather than
> > [TYPE_MIN, TYPE_MAX].  Do you know of any way of getting variables with
> > non-trivial TYPE_MIN/TYPE_MAX values in C?  I ask because I'd rather produce
> > test cases in C than Ada, since everyone has access to a C compiler :)
> You can't -- min/max will be extended to cover the type's precision,
> even for enumerated types.
>
> enum x
> {
>   RED = 0,
>   GREEN = 1,
>   BLUE = 2,
>   PURPLE = 3,
> };
>
> for (color = RED; color <= PURPLE; color++)
> {
>    ...
> }
>
> Note carefully that for the loop test to terminate, color
> must take the value PURPLE+1 (4).  If the type min/max were set to
> 0, 3 respectively then the loop exit test would be optimized away...

Well - we could hack a new type attribute to specify min/max values...

Richard.


* Re: Ada subtypes and base types
  2006-03-23 10:36           ` Duncan Sands
  2006-03-23 10:44             ` Eric Botcazou
@ 2006-03-23 16:15             ` Jeffrey A Law
  2006-03-23 16:26               ` Richard Guenther
  1 sibling, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-23 16:15 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

On Thu, 2006-03-23 at 10:40 +0100, Duncan Sands wrote:
> Hi Jeff, this seems to work nicely - thanks again.  I still see a bunch
> of suboptimal range calculations in the Ada code I'm looking at, but these
> now just come from having everything initialised to VR_VARYING rather than
> [TYPE_MIN, TYPE_MAX].  Do you know of any way of getting variables with
> non-trivial TYPE_MIN/TYPE_MAX values in C?  I ask because I'd rather produce
> test cases in C than Ada, since everyone has access to a C compiler :)
You can't -- min/max will be extended to cover the type's precision,
even for enumerated types. 

enum x
{
  RED = 0,
  GREEN = 1,
  BLUE = 2,
  PURPLE = 3,
};

for (color = RED; color <= PURPLE; color++)
{
   ...
}

Note carefully that for the loop test to terminate, color
must take the value PURPLE+1 (4).  If the type min/max were set to
0, 3 respectively then the loop exit test would be optimized away...

Jeff


* Re: Ada subtypes and base types
  2006-03-23 12:30               ` Duncan Sands
@ 2006-03-23 15:59                 ` Eric Botcazou
  0 siblings, 0 replies; 105+ messages in thread
From: Eric Botcazou @ 2006-03-23 15:59 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, law, Waldek Hebisch

> Which ones?

PR tree-optimization/26797.

-- 
Eric Botcazou


* Re: Ada subtypes and base types
  2006-03-23 10:44             ` Eric Botcazou
@ 2006-03-23 12:30               ` Duncan Sands
  2006-03-23 15:59                 ` Eric Botcazou
  0 siblings, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-23 12:30 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: gcc, law, Waldek Hebisch

On Thursday 23 March 2006 11:31, Eric Botcazou wrote:
> > Hi Jeff, this seems to work nicely - thanks again.
> 
> Well, this has introduced 3 regressions in the ACATS testsuite on x86/x86-64.

Which ones?

Ciao,

Duncan.


* Re: Ada subtypes and base types
  2006-03-23 10:36           ` Duncan Sands
@ 2006-03-23 10:44             ` Eric Botcazou
  2006-03-23 12:30               ` Duncan Sands
  2006-03-23 16:15             ` Jeffrey A Law
  1 sibling, 1 reply; 105+ messages in thread
From: Eric Botcazou @ 2006-03-23 10:44 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, law, Waldek Hebisch

> Hi Jeff, this seems to work nicely - thanks again.

Well, this has introduced 3 regressions in the ACATS testsuite on x86/x86-64.

-- 
Eric Botcazou


* Re: Ada subtypes and base types
  2006-03-21 22:42         ` Jeffrey A Law
  2006-03-22 21:29           ` Duncan Sands
@ 2006-03-23 10:36           ` Duncan Sands
  2006-03-23 10:44             ` Eric Botcazou
  2006-03-23 16:15             ` Jeffrey A Law
  2006-03-24 19:41           ` Duncan Sands
  2 siblings, 2 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-23 10:36 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

On Tuesday 21 March 2006 21:59, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
> 
> > Hi Jeff, on the subject of seeing through typecasts, I was playing around
> > with VRP and noticed that the following "if" statement is not eliminated:
> > 
> > int u (unsigned char c) {
> >         int i = c;
> > 
> >         if (i < 0 || i > 255)
> >                 return -1; /* never taken */
> >         else
> >                 return 0;
> > }
> > 
> > Is it supposed to be?
> Fixed thusly.  Bootstrapped and regression tested on i686-pc-linux-gnu.

Hi Jeff, this seems to work nicely - thanks again.  I still see a bunch
of suboptimal range calculations in the Ada code I'm looking at, but these
now just come from having everything initialised to VR_VARYING rather than
[TYPE_MIN, TYPE_MAX].  Do you know of any way of getting variables with
non-trivial TYPE_MIN/TYPE_MAX values in C?  I ask because I'd rather produce
test cases in C than Ada, since everyone has access to a C compiler :)

All the best,

Duncan.


* Re: Ada subtypes and base types
  2006-03-21 22:42         ` Jeffrey A Law
@ 2006-03-22 21:29           ` Duncan Sands
  2006-03-23 10:36           ` Duncan Sands
  2006-03-24 19:41           ` Duncan Sands
  2 siblings, 0 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-22 21:29 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

On Tuesday 21 March 2006 21:59, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
> 
> > Hi Jeff, on the subject of seeing through typecasts, I was playing around
> > with VRP and noticed that the following "if" statement is not eliminated:
> > 
> > int u (unsigned char c) {
> >         int i = c;
> > 
> >         if (i < 0 || i > 255)
> >                 return -1; /* never taken */
> >         else
> >                 return 0;
> > }
> > 
> > Is it supposed to be?
> Fixed thusly.  Bootstrapped and regression tested on i686-pc-linux-gnu.

Thanks a lot - building now.

Best wishes,

Duncan.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 14:46       ` Duncan Sands
  2006-03-21 16:29         ` Jeffrey A Law
@ 2006-03-21 22:42         ` Jeffrey A Law
  2006-03-22 21:29           ` Duncan Sands
                             ` (2 more replies)
  1 sibling, 3 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-21 22:42 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

[-- Attachment #1: Type: text/plain, Size: 490 bytes --]

On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:

> Hi Jeff, on the subject of seeing through typecasts, I was playing around
> with VRP and noticed that the following "if" statement is not eliminated:
> 
> int u (unsigned char c) {
>         int i = c;
> 
>         if (i < 0 || i > 255)
>                 return -1; /* never taken */
>         else
>                 return 0;
> }
> 
> Is it supposed to be?
Fixed thusly.  Bootstrapped and regression tested on i686-pc-linux-gnu.



[-- Attachment #2: PPP --]
[-- Type: text/plain, Size: 4789 bytes --]

	* tree-vrp.c (extract_range_from_unary_expr): Derive ranges for
	type conversions of a VR_VARYING source to a wider type.

	* gcc.dg/tree-ssa/vrp28.c: New test.

Index: tree-vrp.c
===================================================================
*** tree-vrp.c	(revision 112250)
--- tree-vrp.c	(working copy)
*************** extract_range_from_unary_expr (value_ran
*** 1641,1654 ****
        return;
      }
  
!   /* Refuse to operate on varying and symbolic ranges.  Also, if the
!      operand is neither a pointer nor an integral type, set the
!      resulting range to VARYING.  TODO, in some cases we may be able
!      to derive anti-ranges (like nonzero values).  */
!   if (vr0.type == VR_VARYING
!       || (!INTEGRAL_TYPE_P (TREE_TYPE (op0))
! 	  && !POINTER_TYPE_P (TREE_TYPE (op0)))
!       || symbolic_range_p (&vr0))
      {
        set_value_range_to_varying (vr);
        return;
--- 1641,1652 ----
        return;
      }
  
!   /* Refuse to operate on symbolic ranges, or if neither operand is
!      a pointer or integral type.  */
!   if ((!INTEGRAL_TYPE_P (TREE_TYPE (op0))
!        && !POINTER_TYPE_P (TREE_TYPE (op0)))
!       || (vr0.type != VR_VARYING
! 	  && symbolic_range_p (&vr0)))
      {
        set_value_range_to_varying (vr);
        return;
*************** extract_range_from_unary_expr (value_ran
*** 1681,1700 ****
  	 or equal to the new max, then we can safely use the newly
  	 computed range for EXPR.  This allows us to compute
  	 accurate ranges through many casts.  */
!       if (vr0.type == VR_RANGE)
! 	{
! 	  tree new_min, new_max;
  
! 	  /* Convert VR0's min/max to OUTER_TYPE.  */
! 	  new_min = fold_convert (outer_type, vr0.min);
! 	  new_max = fold_convert (outer_type, vr0.max);
  
  	  /* Verify the new min/max values are gimple values and
! 	     that they compare equal to VR0's min/max values.  */
  	  if (is_gimple_val (new_min)
  	      && is_gimple_val (new_max)
! 	      && tree_int_cst_equal (new_min, vr0.min)
! 	      && tree_int_cst_equal (new_max, vr0.max)
  	      && compare_values (new_min, new_max) <= 0
  	      && compare_values (new_min, new_max) >= -1)
  	    {
--- 1679,1714 ----
  	 or equal to the new max, then we can safely use the newly
  	 computed range for EXPR.  This allows us to compute
  	 accurate ranges through many casts.  */
!       if (vr0.type == VR_RANGE
! 	  || (vr0.type == VR_VARYING
! 	      && TYPE_PRECISION (outer_type) > TYPE_PRECISION (inner_type)))
! 	{
! 	  tree new_min, new_max, orig_min, orig_max;
! 
! 	  /* Convert the input operand min/max to OUTER_TYPE.   If
! 	     the input has no range information, then use the min/max
! 	     for the input's type.  */
! 	  if (vr0.type == VR_RANGE)
! 	    {
! 	      orig_min = vr0.min;
! 	      orig_max = vr0.max;
! 	    }
! 	  else
! 	    {
! 	      orig_min = TYPE_MIN_VALUE (inner_type);
! 	      orig_max = TYPE_MAX_VALUE (inner_type);
! 	    }
  
! 	  new_min = fold_convert (outer_type, orig_min);
! 	  new_max = fold_convert (outer_type, orig_max);
  
  	  /* Verify the new min/max values are gimple values and
! 	     that they compare equal to the orignal input's
! 	     min/max values.  */
  	  if (is_gimple_val (new_min)
  	      && is_gimple_val (new_max)
! 	      && tree_int_cst_equal (new_min, orig_min)
! 	      && tree_int_cst_equal (new_max, orig_max)
  	      && compare_values (new_min, new_max) <= 0
  	      && compare_values (new_min, new_max) >= -1)
  	    {
*************** extract_range_from_unary_expr (value_ran
*** 1717,1722 ****
--- 1731,1746 ----
  	}
      }
  
+   /* Conversion of a VR_VARYING value to a wider type can result
+      in a usable range.  So wait until after we've handled conversions
+      before dropping the result to VR_VARYING if we had a source
+      operand that is VR_VARYING.  */
+   if (vr0.type == VR_VARYING)
+     {
+       set_value_range_to_varying (vr);
+       return;
+     }
+ 
    /* Apply the operation to each end of the range and see what we end
       up with.  */
    if (code == NEGATE_EXPR
Index: testsuite/gcc.dg/tree-ssa/vrp28.c
===================================================================
*** testsuite/gcc.dg/tree-ssa/vrp28.c	(revision 0)
--- testsuite/gcc.dg/tree-ssa/vrp28.c	(revision 0)
***************
*** 0 ****
--- 1,32 ----
+ /* { dg-do compile } */
+ /* { dg-options "-O2 -fdump-tree-vrp1" } */
+ 
+ int f(_Bool a)
+ {
+   int t = a;
+   if (t != 2)
+    return 0;
+   return 1;
+ }
+ 
+ int f1(unsigned char a)
+ {
+   int t = a;
+   if (t != 256)
+    return 0;
+   return 1;
+ }
+ 
+ int f3 (unsigned char c)
+ {
+   int i = c;
+   if (i < 0 || i > 255)
+     return -1;
+   else
+     return 0;
+ }
+ 
+ /* { dg-final { scan-tree-dump-times "if " 0 "vrp1" } } * /
+ /* { dg-final { cleanup-tree-dump "vrp1" } } */
+ 
+ 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 16:42             ` Diego Novillo
@ 2006-03-21 22:30               ` Laurent GUERBY
  0 siblings, 0 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-21 22:30 UTC (permalink / raw)
  To: Diego Novillo
  Cc: law, Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On Tue, 2006-03-21 at 11:29 -0500, Diego Novillo wrote:
> On 03/21/06 03:25, Laurent GUERBY wrote:
> 
> > A casual read of tree-vrp.c showed that symbolic_range_p is mostly
> > used to do nothing; did I miss something? Maybe it's in another file.
> > 
> That's correct.  We mostly punt on symbolic ranges because they are
> fairly expensive to track.  We do try to use symbolic range information
> in some places like 'if (a_3 > b_10)'.  If b_10's range is [a_3, +INF]
> then we know the conditional is false.
> 
> Do you have anything specific in mind that you would like to track?  Any
> kind of heavy symbolic processing will likely be very expensive.  And
> VRP already is a hog.

The typical Ada check we'd want to remove is below. I suspect Java
and Pascal are similar (Java has only one variable bound IIRC,
the first is always zero).

procedure T is

   -- Natural means [0 .. 2**31-1]
   procedure P (X : in out String; N1, N2 : in Natural) is
      First : constant Natural := X'First;
      Last  : constant Natural := X'Last;
   begin
      for I in First + N1 .. Last - N2 loop
         X (I) := ' '; -- CHECK that I>=X'First and I<=X'Last
      end loop;
   end P;

   S : String := "123456";

begin
   P (S, 2, 1);
   if S /= "12   6" then
      raise Program_Error;
   end if;
end T;

The Ada front-end expanded code looks like this (gcc -c -gnatdg t.adb):

   procedure t__p (x : in out string; n1 : in natural; n2 : in natural) is
      subtype t__p__S1b is string (x'first(1) .. x'last(1));
      first : constant natural := {x'first};
      last : constant natural := {x'last};
      L_1 : label
   begin
      S2b : integer;
      S2b := first + n1;
      S3b : integer;
      S3b := last - n2;
      L_1 : for i in S2b .. S3b loop
         [constraint_error when not (integer(i) in x'first .. x'last) "index check failed"]
         x (i) := ' ';
      end loop L_1;
      return;
   end t__p;


So we know:

- First, Last, N1, N2 are locally constant
- N1 >= 0 
- N2 >= 0
- First + N1 didn't overflow
- Last - N2 didn't overflow

And within the loop body:

- I >= First + N1
- I <= Last - N2
- First + N1 <= Last - N2

We'd need to track enough to prove that the following are always true

- I >= First 
- I <= Last

So the FE generated check can be removed.
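
For illustration, here is a rough C rendering of the same pattern (a
sketch with hypothetical struct and names, not front-end output); the
test inside the loop is exactly the check we'd want VRP to prove away:

#include <stdlib.h>

struct bounded_string {
   int low, high;    /* X'First, X'Last; no aliasing assumed */
   char *data;       /* element I lives at data[I - low] */
};

void p (struct bounded_string *x, int n1, int n2)
{
   int first = x->low;
   int last  = x->high;
   int i;

   /* Known from the types: n1 >= 0, n2 >= 0, and neither bound
      computation overflowed.  */
   for (i = first + n1; i <= last - n2; i++)
     {
       /* FE-generated check: provably never taken, because
          first <= first + n1 <= i <= last - n2 <= last.  */
       if (i < first || i > last)
         abort ();
       x->data[i - x->low] = ' ';
     }
}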

Looks like not very far from what you describe, but I'm not a specialist
of those issues.

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 17:23               ` Duncan Sands
@ 2006-03-21 20:20                 ` Jeffrey A Law
  0 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-21 20:20 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

On Tue, 2006-03-21 at 18:13 +0100, Duncan Sands wrote:

> Is memory use a problem here?  VR_VARYING has the advantage of using
> a bit less memory.  On the other hand, I guess you could introduce the
> convention that VR_RANGE with null min/max means to use TYPE_MIN/
> TYPE_MAX, or something along those lines.
Well, more than anything, I just don't think we really thought
about the issue of what the best representation would be in this
kind of case.

If I was to lean any particular direction I would tend to lean
towards exposing the range and removing VR_VARYING.  The net
result would be good consistency in the transformations within
VRP.  ie, we would not need to special case stuff like

VR_VARYING | ~[0, 0]  -> ~[0, 0]

Similarly for typecasts and a variety of other stuff.

jeff


> 
> Ciao,
> 
> D.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 17:26           ` Tom Tromey
@ 2006-03-21 18:20             ` Diego Novillo
  0 siblings, 0 replies; 105+ messages in thread
From: Diego Novillo @ 2006-03-21 18:20 UTC (permalink / raw)
  To: tromey
  Cc: law, Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On 03/21/06 12:14, Tom Tromey wrote:
>>>>>> "Jeff" == Jeffrey A Law <law@redhat.com> writes:
> 
> Jeff> Note that this is closely related to some of the bounds checking
> Jeff> elimination we want to support for Java one day IIRC.
> 
> My understanding from the PR (21855) is that this is stuck on
> aliasing, not VRP -- the VRP parts supposedly work.
> 
Correct.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21  3:38         ` Jeffrey A Law
  2006-03-21  9:14           ` Laurent GUERBY
@ 2006-03-21 17:26           ` Tom Tromey
  2006-03-21 18:20             ` Diego Novillo
  1 sibling, 1 reply; 105+ messages in thread
From: Tom Tromey @ 2006-03-21 17:26 UTC (permalink / raw)
  To: law; +Cc: Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

>>>>> "Jeff" == Jeffrey A Law <law@redhat.com> writes:

Jeff> Note that this is closely related to some of the bounds checking
Jeff> elimination we want to support for Java one day IIRC.

My understanding from the PR (21855) is that this is stuck on
aliasing, not VRP -- the VRP parts supposedly work.

Tom

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 17:20             ` Jeffrey A Law
@ 2006-03-21 17:23               ` Duncan Sands
  2006-03-21 20:20                 ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-21 17:23 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

On Tuesday 21 March 2006 18:01, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 17:41 +0100, Duncan Sands wrote:
> 
> > Should it be?  I was surprised to see that all ranges are initialised
> > to VR_VARYING in the vrp pass, since many types have natural ranges
> > associated with them, for example [0, 255] for the above unsigned char;
> > starting off with this natural range is, well, natural, and surely
> > simplifies a bunch of code, by making things more uniform, eg: no need
> > to special case unsigned values, type conversions, ...
> Depends on who you talk to -- we certainly have a problem in that we
> have two ways to represent the same thing and the optimizer treats
> the two representations differently.
> 
> In a world where we could handle VR_VARYING without pessimizing so
> much, then I don't think either representation would be clearly
> better than the other.

Is memory use a problem here?  VR_VARYING has the advantage of using
a bit less memory.  On the other hand, I guess you could introduce the
convention that VR_RANGE with null min/max means to use TYPE_MIN/
TYPE_MAX, or something along those lines.
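
Something like this, say (a minimal hypothetical C sketch, not the real
tree-vrp.c types):

struct range {
  const int *min;   /* NULL would mean "use TYPE_MIN of the variable's type" */
  const int *max;   /* NULL would mean "use TYPE_MAX of the variable's type" */
};

/* Effective bounds under the "null means type bound" convention.  */
static int
range_min (const struct range *r, int type_min)
{
  return r->min ? *r->min : type_min;
}

static int
range_max (const struct range *r, int type_max)
{
  return r->max ? *r->max : type_max;
}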

Ciao,

D.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 17:13           ` Duncan Sands
@ 2006-03-21 17:20             ` Jeffrey A Law
  2006-03-21 17:23               ` Duncan Sands
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-21 17:20 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

On Tue, 2006-03-21 at 17:41 +0100, Duncan Sands wrote:

> Should it be?  I was surprised to see that all ranges are initialised
> to VR_VARYING in the vrp pass, since many types have natural ranges
> associated with them, for example [0, 255] for the above unsigned char;
> starting off with this natural range is, well, natural, and surely
> simplifies a bunch of code, by making things more uniform, eg: no need
> to special case unsigned values, type conversions, ...
Depends on who you talk to -- we certainly have a problem in that we
have two ways to represent the same thing and the optimizer treats
the two representations differently.

In a world where we could handle VR_VARYING without pessimizing so
much, then I don't think either representation would be clearly
better than the other.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 16:29         ` Jeffrey A Law
  2006-03-21 16:40           ` Andrew Pinski
@ 2006-03-21 17:13           ` Duncan Sands
  2006-03-21 17:20             ` Jeffrey A Law
  1 sibling, 1 reply; 105+ messages in thread
From: Duncan Sands @ 2006-03-21 17:13 UTC (permalink / raw)
  To: law; +Cc: gcc, Waldek Hebisch

On Tuesday 21 March 2006 17:15, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
> 
> > Hi Jeff, on the subject of seeing through typecasts, I was playing around
> > with VRP and noticed that the following "if" statement is not eliminated:
> > 
> > int u (unsigned char c) {
> >         int i = c;
> > 
> >         if (i < 0 || i > 255)
> >                 return -1; /* never taken */
> >         else
> >                 return 0;
> > }
> > 
> > Is it supposed to be?
> Depends...
> 
> The "problem" is the parameter is marked a VR_VARYING (as it
> should be).

Should it be?  I was surprised to see that all ranges are initialised
to VR_VARYING in the vrp pass, since many types have natural ranges
associated with them, for example [0, 255] for the above unsigned char;
starting off with this natural range is, well, natural, and surely
simplifies a bunch of code, by making things more uniform, eg: no need
to special case unsigned values, type conversions, ...

> Unfortunately, the VR_VARYING source operand prevents 
> us from extracting a range for the result of the typecast.
>
> Sigh.  We've got this "interesting" problem in VRP, namely that 
> we can sometimes extract ranges for operations on VR_VARYING
> objects, but generally we punt.  To compensate we often create
> a range, say 0..MAXINT for an unsigned object -- that's effectively
> a VARYING range, but the rest of VRP doesn't pessimize so much
> when presented with 0..MAXINT vs VARYING.
> 
> If we weren't so pessimistic with VR_VARYING or we always created 
> ranges, even if they cover TYPE_MIN .. TYPE_MAX then this wouldn't
> be a problem.
> 
> I'll see what I can cobble together...

That would be great.

All the best,

Duncan.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21  9:14           ` Laurent GUERBY
@ 2006-03-21 16:42             ` Diego Novillo
  2006-03-21 22:30               ` Laurent GUERBY
  0 siblings, 1 reply; 105+ messages in thread
From: Diego Novillo @ 2006-03-21 16:42 UTC (permalink / raw)
  To: Laurent GUERBY
  Cc: law, Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On 03/21/06 03:25, Laurent GUERBY wrote:

> A casual read of tree-vrp.c showed that symbolic_range_p is mostly
> used to do nothing; did I miss something? Maybe it's in another file.
> 
That's correct.  We mostly punt on symbolic ranges because they are
fairly expensive to track.  We do try to use symbolic range information
in some places like 'if (a_3 > b_10)'.  If b_10's range is [a_3, +INF]
then we know the conditional is false.

Do you have anything specific in mind that you would like to track?  Any
kind of heavy symbolic processing will likely be very expensive.  And
VRP already is a hog.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 16:29         ` Jeffrey A Law
@ 2006-03-21 16:40           ` Andrew Pinski
  2006-03-21 17:13           ` Duncan Sands
  1 sibling, 0 replies; 105+ messages in thread
From: Andrew Pinski @ 2006-03-21 16:40 UTC (permalink / raw)
  To: law; +Cc: Duncan Sands, gcc, Waldek Hebisch


On Mar 21, 2006, at 11:15 AM, Jeffrey A Law wrote:

> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
>
>> Hi Jeff, on the subject of seeing through typecasts, I was playing 
>> around
>> with VRP and noticed that the following "if" statement is not 
>> eliminated:
>>
>> int u (unsigned char c) {
>>         int i = c;
>>
>>         if (i < 0 || i > 255)
>>                 return -1; /* never taken */
>>         else
>>                 return 0;
>> }
>>
>> Is it supposed to be?
> Depends...
>
> The "problem" is the parameter is marked a VR_VARYING (as it
> should be).  Unfortunately, the VR_VARYING source operand prevents
> us from extracting a range for the result of the typecast.


This is also PR 23745.

-- Pinski

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21 14:46       ` Duncan Sands
@ 2006-03-21 16:29         ` Jeffrey A Law
  2006-03-21 16:40           ` Andrew Pinski
  2006-03-21 17:13           ` Duncan Sands
  2006-03-21 22:42         ` Jeffrey A Law
  1 sibling, 2 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-21 16:29 UTC (permalink / raw)
  To: Duncan Sands; +Cc: gcc, Waldek Hebisch

On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:

> Hi Jeff, on the subject of seeing through typecasts, I was playing around
> with VRP and noticed that the following "if" statement is not eliminated:
> 
> int u (unsigned char c) {
>         int i = c;
> 
>         if (i < 0 || i > 255)
>                 return -1; /* never taken */
>         else
>                 return 0;
> }
> 
> Is it supposed to be?
Depends...

The "problem" is the parameter is marked a VR_VARYING (as it
should be).  Unfortunately, the VR_VARYING source operand prevents
us from extracting a range for the result of the typecast.

Sigh.  We've got this "interesting" problem in VRP, namely that 
we can sometimes extract ranges for operations on VR_VARYING
objects, but generally we punt.  To compensate we often create
a range, say 0..MAXINT for an unsigned object -- that's effectively
a VARYING range, but the rest of VRP doesn't pessimize so much
when presented with 0..MAXINT vs VARYING.

If we weren't so pessimistic with VR_VARYING or we always created 
ranges, even if they cover TYPE_MIN .. TYPE_MAX then this wouldn't
be a problem.
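
As a toy illustration of the difference (hypothetical C, not GCC code):
if a varying source operand falls back to its type bounds when widened,
the cast result gets a usable range and the comparison folds:

#include <limits.h>

struct iv { long lo, hi; int known; };  /* known == 0 plays the role of VR_VARYING */

/* Widen an unsigned char operand to int.  If the source range is
   "varying", fall back to the bounds of the source type, [0, UCHAR_MAX],
   which remain valid after the widening conversion.  */
static struct iv
widen_uchar_to_int (struct iv src)
{
  if (!src.known)
    return (struct iv) { 0, UCHAR_MAX, 1 };
  return src;
}

/* With that fallback the cast result gets the range [0, 255], so a test
   like "i < 0 || i > 255" folds away; treating the source as VARYING
   gives no range at all and the test survives.  */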

I'll see what I can cobble together...

Jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-17 21:06     ` Jeffrey A Law
  2006-03-18 14:52       ` Laurent GUERBY
@ 2006-03-21 14:46       ` Duncan Sands
  2006-03-21 16:29         ` Jeffrey A Law
  2006-03-21 22:42         ` Jeffrey A Law
  1 sibling, 2 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-21 14:46 UTC (permalink / raw)
  To: gcc, law; +Cc: Waldek Hebisch

> > I think that it is easy for back end to make good use of
> > TYPE_MIN_VALUE/TYPE_MAX_VALUE. Namely, consider the assignment
> > 
> > x := y + z * w;
> > 
> > where variables y, z and w have values in the interval [0,7] and
> > x has values in [0,1000]. Pascal converts the above to the
> > following C like code:
> > 
> > int tmp = (int) y + (int) z * (int) w;
> > x = (tmp < 0 || tmp > 1000)? (Range_Check_Error (), 0) : tmp;
> >  
> > I expect VRP to deduce that tmp will have values in [0..56] and
> > eliminate the range check. Also, it should be clear that in the
> > assignment above arithmetic can be done using any convenient
> > precision.
> VRP can certainly do this -- I added the ability to see through
> more typecasts a while back.  In general, I've tried to improve
> the ability of our optimizers to eliminate or at least "see through"
> some typecasts.  However, that capability is often very limited
> and the ability to see through type casts is not pervasive in
> the optimization pipeline.

Hi Jeff, on the subject of seeing through typecasts, I was playing around
with VRP and noticed that the following "if" statement is not eliminated:

int u (unsigned char c) {
        int i = c;

        if (i < 0 || i > 255)
                return -1; /* never taken */
        else
                return 0;
}

Is it supposed to be?

Thanks,

Duncan.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-21  3:38         ` Jeffrey A Law
@ 2006-03-21  9:14           ` Laurent GUERBY
  2006-03-21 16:42             ` Diego Novillo
  2006-03-21 17:26           ` Tom Tromey
  1 sibling, 1 reply; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-21  9:14 UTC (permalink / raw)
  To: law; +Cc: Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On Mon, 2006-03-20 at 19:47 -0700, Jeffrey A Law wrote:
> On Sat, 2006-03-18 at 10:24 +0100, Laurent GUERBY wrote:
> > On Fri, 2006-03-17 at 12:51 -0700, Jeffrey A Law wrote:
> > > I'm not suggesting the FEs deduce more types and track ranges;
> > > that would be rather absurd.  What I'm saying is that exposing
> > > these types outside the FE is most likely costing you both on
> > > the compile-time side and on the run-time side.
> > 
> > About optimization, in most languages with array bounds and
> > range checks, the FE will build a structure with bounds
> > and code like the following for a typical array loop (sorry
> > for poor C):
> > 
> > struct {
> >    int low,high; /* no alias */
> >    double *data;
> > } X;
> > 
> > int first_i=X.low+2; 
> > int last_i=X.high-3;
> > int i;
> > 
> > if (first_i<=last_i) {
> >    for(i=first_i;i<=last_i;i++) {
> >       if (i<X.low||i>X.high) raise_bound_error(); /* CHECK */
> >       do_something(array_with_bounds[i]);
> >    }
> > }
> > 
> > The interesting thing performance wise would be to be able to remove the
> > CHECK in the BE. 
> > 
> > Is there some optimization pass able to do that?
> > 
> > And when "+2" and "-3" are replaced by "+x" and "-y" and we
> > know through typing that x>=0 and y>=0?
> Not sure, mostly because of the structure accesses and my lack
> of knowledge about the symbolic range support.  I know we have
> symbolic range support, but haven't looked to see how good it
> really is.  What we're doing now is certainly better than what
> DOM did for symbolic ranges (nothing).

A casual read of tree-vrp.c showed that symbolic_range_p is mostly
used to do nothing; did I miss something? Maybe it's in another file.

> Note that this is closely related to some of the bounds checking
> elimination we want to support for Java one day IIRC.

Pascal and Ada will benefit (maybe Fortran too, I don't know enough to
say).

> Note also that if i, first_i and/or last_i are globals, then the
> odds of these kind of tests being deleted go down considerably
> as they're likely call-clobbered by the call to do_something.

Very likely to be all locals in typical Ada code.

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-18 14:52       ` Laurent GUERBY
@ 2006-03-21  3:38         ` Jeffrey A Law
  2006-03-21  9:14           ` Laurent GUERBY
  2006-03-21 17:26           ` Tom Tromey
  0 siblings, 2 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-21  3:38 UTC (permalink / raw)
  To: Laurent GUERBY
  Cc: Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On Sat, 2006-03-18 at 10:24 +0100, Laurent GUERBY wrote:
> On Fri, 2006-03-17 at 12:51 -0700, Jeffrey A Law wrote:
> > I'm not suggesting the FEs deduce more types and track ranges;
> > that would be rather absurd.  What I'm saying is that exposing
> > these types outside the FE is most likely costing you both on
> > the compile-time side and on the run-time side.
> 
> About optimization, in most languages with array bounds and
> range checks, the FE will build a structure with bounds
> and code like the following for a typical array loop (sorry
> for poor C):
> 
> struct {
>    int low,high; /* no alias */
>    double *data;
> } X;
> 
> int first_i=X.low+2; 
> int last_i=X.high-3;
> int i;
> 
> if (first_i<=last_i) {
>    for(i=first_i;i<=last_i;i++) {
>       if (i<X.low||i>X.high) raise_bound_error(); /* CHECK */
>       do_something(array_with_bounds[i]);
>    }
> }
> 
> The interesting thing performance wise would be to be able to remove the
> CHECK in the BE. 
> 
> Is there some optimization pass able to do that?
> 
> And when "+2" and "-3" are replaced by "+x" and "-y" and we
> know through typing that x>=0 and y>=0?
Not sure, mostly because of the structure accesses and my lack
of knowledge about the symbolic range support.  I know we have
symbolic range support, but haven't looked to see how good it
really is.  What we're doing now is certainly better than what
DOM did for symbolic ranges (nothing).

Note that this is closely related to some of the bounds checking
elimination we want to support for Java one day IIRC.

Note also that if i, first_i and/or last_i are globals, then the
odds of these kind of tests being deleted go down considerably
as they're likely call-clobbered by the call to do_something.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-17 21:06     ` Jeffrey A Law
@ 2006-03-18 14:52       ` Laurent GUERBY
  2006-03-21  3:38         ` Jeffrey A Law
  2006-03-21 14:46       ` Duncan Sands
  1 sibling, 1 reply; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-18 14:52 UTC (permalink / raw)
  To: law; +Cc: Andrew Pinski, Roger Sayle, Richard Guenther, Waldek Hebisch, gcc

On Fri, 2006-03-17 at 12:51 -0700, Jeffrey A Law wrote:
> I'm not suggesting the FEs deduce more types and track ranges;
> that would be rather absurd.  What I'm saying is that exposing
> these types outside the FE is most likely costing you both on
> the compile-time side and on the run-time side.

About optimization, in most languages with array bounds and
range checks, the FE will build a structure with bounds
and code like the following for a typical array loop (sorry
for poor C):

struct {
   int low,high; /* no alias */
   double *data;
} X;

int first_i=X.low+2; 
int last_i=X.high-3;
int i;

if (first_i<=last_i) {
   for(i=first_i;i<=last_i;i++) {
      if (i<X.low||i>X.high) raise_bound_error(); /* CHECK */
      do_something(array_with_bounds[i]);
   }
}

The interesting thing performance wise would be to be able to remove the
CHECK in the BE. 

Is there some optimization pass able to do that?

And when "+2" and "-3" are replaced by "+x" and "-y" and we
know through typing that x>=0 and y>=0?

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-14  2:29   ` Waldek Hebisch
  2006-03-14  9:55     ` Duncan Sands
@ 2006-03-17 21:06     ` Jeffrey A Law
  2006-03-18 14:52       ` Laurent GUERBY
  2006-03-21 14:46       ` Duncan Sands
  1 sibling, 2 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-17 21:06 UTC (permalink / raw)
  To: Waldek Hebisch; +Cc: gcc

On Tue, 2006-03-14 at 03:16 +0100, Waldek Hebisch wrote:

> I think that it is easy for back end to make good use of
> TYPE_MIN_VALUE/TYPE_MAX_VALUE. Namely, consider the assignment
> 
> x := y + z * w;
> 
> where variables y, z and w have values in the interval [0,7] and
> x has values in [0,1000]. Pascal converts the above to the
> following C like code:
> 
> int tmp = (int) y + (int) z * (int) w;
> x = (tmp < 0 || tmp > 1000)? (Range_Check_Error (), 0) : tmp;
>  
> I expect VRP to deduce that tmp will have values in [0..56] and
> eliminate the range check. Also, it should be clear that in the
> assignment above arithmetic can be done using any convenient
> precision.
VRP can certainly do this -- I added the ability to see through
more typecasts a while back.  In general, I've tried to improve
the ability of our optimizers to eliminate or at least "see through"
some typecasts.  However, that capability is often very limited
and the ability to see through type casts is not pervasive in
the optimization pipeline.

What I seriously doubt is that you have enough cases of the nature
you posted to matter in practice.

I would expect that for the vast majority of cases the
source and destination operands are going to have the same ranges.
And in that case exposing the tighter min/max data on your ranges
does you no good.  It just creates a lot of silly nop-casts which
waste memory and will tend to inhibit optimizations.


> In principle the Pascal front end could deduce more precise types (ranges),
> but that would create some extra type conversions and a lot
> of extra types. Moreover, I assume that VRP can do a better job
> at tracking ranges than the Pascal front end.
I'm not suggesting the FEs deduce more types and track ranges;
that would be rather absurd.  What I'm saying is that exposing
these types outside the FE is most likely costing you both on
the compile-time side and on the run-time side.

You'd present far cleaner code to the optimizers if you just
did *everything* in the object's base type.  You'd generate
far fewer nodes, saving compile-time and memory and the
optimizers will have a much better opportunity to optimize
the code.
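
To make the contrast concrete, a hypothetical C rendering of the two
expansions of "x := y + z * w" (names invented, just a sketch):

typedef unsigned char sub7;   /* stand-in for a [0,7] subtype */

/* Subtype-exposed form: every operand is converted to the base type
   (the conversions are explicit nodes in the tree, even though C would
   insert them implicitly anyway).  */
int
expand_with_subtypes (sub7 y, sub7 z, sub7 w)
{
  return (int) y + (int) z * (int) w;
}

/* Base-type-only form: the operands are already carried as the base
   type, so no conversion nodes are generated at all.  */
int
expand_in_base_type (int y, int z, int w)
{
  return y + z * w;
}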

But that's your decision to make -- I think we've all agreed
that as long as you don't allow values outside the min/max
range to be stored into variables with these types, then it
should be safe to continue to do things the way you're doing
them.

jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-17 16:48       ` Waldek Hebisch
@ 2006-03-17 19:51         ` Laurent GUERBY
  0 siblings, 0 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-17 19:51 UTC (permalink / raw)
  To: Waldek Hebisch; +Cc: Robert Dewar, law, gcc

On Fri, 2006-03-17 at 15:07 +0100, Waldek Hebisch wrote:
> Robert Dewar wrote:
> > Laurent GUERBY wrote:
> > > On Mon, 2006-03-13 at 15:31 -0700, Jeffrey A Law wrote:
> > >> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> > >>
> > >>> What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> > >>> allowed by given type.
> > >> As long as you're *absolutely* clear that  a variable with a
> > >> restricted range can never hold a value outside the
> > >> restricted range in a conforming program, then I'll back off
> > >> the "abuse" label and merely call it pointless :-)
> > > 
> > > Variables in a non erroneous Ada program all have their value between
> > > their type bounds from the optimizer perspective (the special 'valid
> > > case put aside).
> > 
> > Not quite right. If you have an uninitialized variable, the value is
> > invalid and may be out of bounds, but this is a bounded error situation,
> > not an erroneous program. So the possible effects are definitely NOT
> > unbounded, and the use of such values cannot turn a program erroneous.
> > (that's an Ada 95 change, this used to be erroneous in Ada 83).
> > 
> 
> What about initializing _all_ variables? 

In Ada this is the effect of the configuration pragma Normalize_Scalars

<<
H.1 Pragma Normalize_Scalars


1     This pragma ensures that an otherwise uninitialized scalar object is set
to a predictable value, but out of range if possible.

    1.a   Discussion: The goal of the pragma is to reduce the impact of a
          bounded error that results from a reference to an uninitialized
          scalar object, by having such a reference violate a range check and
          thus raise Constraint_Error.


                                   Syntax

2     The form of a pragma Normalize_Scalars is as follows:

3       pragma Normalize_Scalars;
>>

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16  4:10     ` Robert Dewar
  2006-03-16  7:33       ` Geert Bosch
  2006-03-16 11:48       ` Laurent GUERBY
@ 2006-03-17 16:48       ` Waldek Hebisch
  2006-03-17 19:51         ` Laurent GUERBY
  2 siblings, 1 reply; 105+ messages in thread
From: Waldek Hebisch @ 2006-03-17 16:48 UTC (permalink / raw)
  To: Robert Dewar; +Cc: Laurent GUERBY, law, Waldek Hebisch, gcc

Robert Dewar wrote:
> Laurent GUERBY wrote:
> > On Mon, 2006-03-13 at 15:31 -0700, Jeffrey A Law wrote:
> >> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> >>
> >>> What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> >>> allowed by given type.
> >> As long as you're *absolutely* clear that  a variable with a
> >> restricted range can never hold a value outside the
> >> restricted range in a conforming program, then I'll back off
> >> the "abuse" label and merely call it pointless :-)
> > 
> > Variables in a non erroneous Ada program all have their value between
> > their type bounds from the optimizer perspective (the special 'valid
> > case put aside).
> 
> Not quite right. If you have an uninitialized variable, the value is
> invalid and may be out of bounds, but this is a bounded error situation,
> not an erroneous program. So the possible effects are definitely NOT
> unbounded, and the use of such values cannot turn a program erroneous.
> (that's an Ada 95 change, this used to be erroneous in Ada 83).
> 

What about initializing _all_ variables? If we allow out of range values
at one place then we lose a useful invariant. I see no way that a frontend
could allow out of range values and simultaneously use TYPE_MAX_VALUE for
optimization. Also, out of range values would make TYPE_MAX_VALUE of little
use to the backend.

I see two drawbacks if we initialize all variables:
1) a bigger program and lower runtime efficiency
2) we lose diagnostics about uninitialized variables

I would guess that the cost of extra initializations will be small, because
the backend will remove unused variables and obviously redundant initializations,
so only tricky cases will remain. Also, an extra initialization is likely to
be cheaper than the extra tests which are needed otherwise.

I am more concerned about the loss of diagnostics. That could be resolved if
the frontend could mark generated initializations so that the backend would
generate code to initialize the variable, but also issue diagnostics if such an
initialization cannot be removed (yes, I understand that it is conceptually a
small change, but it would probably require a largish change to the backend code).

-- 
                              Waldek Hebisch
hebisch@math.uni.wroc.pl 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 13:32                 ` Andrew Pinski
@ 2006-03-16 16:11                   ` Richard Guenther
  0 siblings, 0 replies; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 16:11 UTC (permalink / raw)
  To: Andrew Pinski
  Cc: Geert Bosch, law, gcc, Laurent GUERBY, Robert Dewar, Waldek Hebisch

On 3/16/06, Andrew Pinski <pinskia@physics.uc.edu> wrote:
>
> On Mar 16, 2006, at 8:10 AM, Richard Guenther wrote:
>
> > The above was for 4.1.0 - with mainline gigi now generates
> >
> >   if (i == 0 || i > 10)
> >     {
> >       __gnat_rcheck_06 ("t2.adb", 7);
> >     }
> >   else
> >     {
> >
> >     }
> >   x[(<unnamed type>) (t2__TrB) i]{lb: 1 sz: 4} = 0;
> >
> > huh?  That's even more bogus.
>
> It might not be gigi that is producing the above, but fold.
> I want to make sure that people know that most of the
> subtype bugs have been in fold lately and nowhere else.

Right.  Though the original problem is not using VIEW_CONVERT_EXPR,
fold rightfully might strip away the NOP_EXPR here.

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 13:27               ` Richard Guenther
@ 2006-03-16 13:32                 ` Andrew Pinski
  2006-03-16 16:11                   ` Richard Guenther
  0 siblings, 1 reply; 105+ messages in thread
From: Andrew Pinski @ 2006-03-16 13:32 UTC (permalink / raw)
  To: Richard Guenther
  Cc: Geert Bosch, law, gcc, Laurent GUERBY, Robert Dewar, Waldek Hebisch


On Mar 16, 2006, at 8:10 AM, Richard Guenther wrote:

> The above was for 4.1.0 - with mainline gigi now generates
>
>   if (i == 0 || i > 10)
>     {
>       __gnat_rcheck_06 ("t2.adb", 7);
>     }
>   else
>     {
>
>     }
>   x[(<unnamed type>) (t2__TrB) i]{lb: 1 sz: 4} = 0;
>
> huh?  That's even more bogus.

It might not be gigi that is producing the above, but fold.
I want to make sure that people know that most of the
subtype bugs have been in fold lately and nowhere else.

-- Pinski

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 12:30             ` Richard Guenther
@ 2006-03-16 13:27               ` Richard Guenther
  2006-03-16 13:32                 ` Andrew Pinski
  0 siblings, 1 reply; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 13:27 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Geert Bosch, Robert Dewar, law, Waldek Hebisch, gcc

On 3/16/06, Richard Guenther <richard.guenther@gmail.com> wrote:
> On 3/16/06, Laurent GUERBY <laurent@guerby.net> wrote:
> > procedure T2 is
> >    type R is range 1 .. 10;
> >    type T is array (R) of Integer;
> >    I : R;
> >    X : T;
> > begin
> >    X (I) := 0;
> > end T2;
> >
> > The Ada FE will insert an explicit check, as seen when using
> > gcc -c -gnatdg t2.adb:
> >
> >    [constraint_error when not (interfaces__unsigned_32!(i) >= 1 and then
> >      interfaces__unsigned_32!(i) <= 10) "invalid data"]
> >
> > Will the ME or FE remove the check?
>
> Yes it will - as we see from -O0 -fdump-tree-original:
>
>   if ((interfaces__unsigned_32) i == 0 || i > 10)
>     {
>       __gnat_rcheck_06 ("t2.adb", 7);
>     }
>   else
>     {
>
>     }
>
> it uses a regular NOP/CONVERT_EXPR which VRP happily will see through
> (validly so).
> It also misses the conversion for the i>10 check completely.  It needs to print
>
>   if (VIEW_CONVERT_EXPR<interfaces__unsigned_32>(i) == 0
>      || VIEW_CONVERT_EXPR<interfaces__unsigned_32>(i) > 10)
>
> So, this is a bug in gigi here.

The above was for 4.1.0 - with mainline gigi now generates

  if (i == 0 || i > 10)
    {
      __gnat_rcheck_06 ("t2.adb", 7);
    }
  else
    {

    }
  x[(<unnamed type>) (t2__TrB) i]{lb: 1 sz: 4} = 0;

huh?  That's even more bogus.

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 11:48           ` Laurent GUERBY
  2006-03-16 12:03             ` Richard Guenther
@ 2006-03-16 12:30             ` Richard Guenther
  2006-03-16 13:27               ` Richard Guenther
  1 sibling, 1 reply; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 12:30 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Geert Bosch, Robert Dewar, law, Waldek Hebisch, gcc

On 3/16/06, Laurent GUERBY <laurent@guerby.net> wrote:
> On Thu, 2006-03-16 at 10:43 +0100, Richard Guenther wrote:
> > On 3/16/06, Geert Bosch <bosch@adacore.com> wrote:
> > >
> > > On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> > > > Not quite right. If you have an uninitialized variable, the value is
> > > > invalid and may be out of bounds, but this is a bounded error
> > > > situation,
> > > > not an erroneous program. So the possible effects are definitely NOT
> > > > unbounded, and the use of such values cannot turn a program erroneous.
> > > > (that's an Ada 95 change, this used to be erroneous in Ada 83).
> > >
> > > Actually, that's a good point and raises some potential issues:
> > > if we never establish the invariant that a value of a type is in
> > > range, we can only use the base range for variables that might be
> > > used uninitialized. Any read of such a variable would then involve
> > > a range check.
> > >
> > >    package Uninitialized is
> > >       N : Positive;
> > >    end Uninitialized;
> > >
> > >    with Uninitialized;
> > >    procedure Test is
> > >    begin
> > >       for J in 1 .. Uninitialized.N loop
> > >          ...
> > >       end loop;
> > >    end Test;
> > >
> > > In this case, GCC might replace the loop with
> > >     declare
> > >        J : Integer := 1;
> > >     begin
> > >        while J /= Uninitialized.N loop
> > >           ...
> > >           J := J + 1;
> > >        end loop;
> > >     end;
> > >
> > > which would be incorrect for N = 0.
> >
> > Uh - what do you expect here??  Does the Ada standard require an out-of-range
> > exception upon the first use of N?
>
> <<
> 13.9.1 Data Validity
>
>                           Bounded (Run-Time) Errors
>
> 9     {invalid representation} {bounded error (cause) [partial]} If the
> representation of a scalar object does not represent a value of the object's
> subtype (perhaps because the object was not initialized), the object is said
> to have an invalid representation. It is a bounded error to evaluate the value
> of such an object. {Program_Error (raised by failure of run-time check)}
> {Constraint_Error (raised by failure of run-time check)} If the error is
> detected, either Constraint_Error or Program_Error is raised. Otherwise,
> execution continues using the invalid representation. The rules of the
> language outside this subclause assume that all objects have valid
> representations. The semantics of operations on invalid representations are as
> follows:
>
>     9.a   Discussion: The AARM is more explicit about what happens when the
>           value of the case expression is an invalid representation.
>
>     9.b/2 Ramification: {AI95-00426-01} This includes the result object of
>           functions, including the result of Unchecked_Conversion, T'Input,
>           and imported functions.
>
> 10    If the representation of the object represents a value of the object's
>       type, the value of the type is used.
>
> 11    If the representation of the object does not represent a value of the
>       object's type, the semantics of operations on such representations is
>       implementation-defined, but does not by itself lead to erroneous or
>       unpredictable execution, or to other objects becoming abnormal.
>
>     11.a/2 Implementation Note: {AI95-00426-01} This means that the
>           implementation must take care not to use an invalid representation
>           in a way that might cause erroneous execution. For instance, the
>           exception mandated for case_statements must be raised. Array
>           indexing must not cause memory outside of the array to be written
>           (and usually, not read either). These cases and similar cases may
>           require explicit checks by the implementation.
> >>
>
> So in this case the behaviour is implementation defined, from my reading
> an infinite loop is not contrary to the standard.
>
> > In this case, the frontend needs
> > to insert a proper
> > check.  You cannot expect the middle-end to avoid the above transformation, so
> > this is a frontend bug.
>
> Do the ME or the BE have a representation for potentially uninitialized
> variables? In the following case:
>
> procedure T2 is
>    type R is range 1 .. 10;
>    type T is array (R) of Integer;
>    I : R;
>    X : T;
> begin
>    X (I) := 0;
> end T2;
>
> The Ada FE will insert an explicit check, as seen when using
> gcc -c -gnatdg t2.adb:
>
>    [constraint_error when not (interfaces__unsigned_32!(i) >= 1 and then
>      interfaces__unsigned_32!(i) <= 10) "invalid data"]
>
> Will the ME or FE remove the check?

Yes it will - as we see from -O0 -fdump-tree-original:

  if ((interfaces__unsigned_32) i == 0 || i > 10)
    {
      __gnat_rcheck_06 ("t2.adb", 7);
    }
  else
    {

    }

it uses a regular NOP/CONVERT_EXPR which VRP happily will see through
(validly so).
It also misses the conversion for the i>10 check completely.  It needs to print

  if (VIEW_CONVERT_EXPR<interfaces__unsigned_32>(i) == 0
     || VIEW_CONVERT_EXPR<interfaces__unsigned_32>(i) > 10)

So, this is a bug in gigi here.

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 12:03             ` Richard Guenther
@ 2006-03-16 12:19               ` Richard Guenther
  0 siblings, 0 replies; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 12:19 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Geert Bosch, Robert Dewar, law, Waldek Hebisch, gcc

On 3/16/06, Richard Guenther <richard.guenther@gmail.com> wrote:
> On 3/16/06, Laurent GUERBY <laurent@guerby.net> wrote:
> > On Thu, 2006-03-16 at 10:43 +0100, Richard Guenther wrote:
> > > On 3/16/06, Geert Bosch <bosch@adacore.com> wrote:
> > > >
> > > > On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> > > > > Not quite right. If you have an uninitialized variable, the value is
> > > > > invalid and may be out of bounds, but this is a bounded error
> > > > > situation,
> > > > > not an erroneous program. So the possible effects are definitely NOT
> > > > > unbounded, and the use of such values cannot turn a program erroneous.
> > > > > (that's an Ada 95 change, this used to be erroneous in Ada 83).
> > > >
> > > > Actually, that's a good point and raises some potential issues:
> > > > if we're never establish the invariant that a value of a type is in
> > > > range, we can only use the base range for variables that might be
> > > > used uninitialized. Any read of such a variable would then involve
> > > > a range check.
> > > >
> > > >    package Uninitialized is
> > > >       N : Positive;
> > > >    end Uninitialized;
> > > >
> > > >    with Uninitialized;
> > > >    procedure Test is
> > > >       for J in 1 .. Uninitialized.N loop
> > > >          ...
> > > >       end loop;
> > > >    end Test;
> > > >
> > > > In this case, GCC might replace the loop with
> > > >     declare
> > > >        J : Integer := 1;
> > > >     begin
> > > >        while J /= Uninitialized.N loop
> > > >           ...
> > > >           J := J + 1;
> > > >        end loop;
> > > >     end;
> > > >
> > > > which would be incorrect for N = 0.
> > >
> > > Uh - what do you expect here??  Does the Ada standard require an out-of-range
> > > exception upon the first use of N?
> >
> > <<
> > 13.9.1 Data Validity
> >
> >                           Bounded (Run-Time) Errors
> >
> > 9     {invalid representation} {bounded error (cause) [partial]} If the
> > representation of a scalar object does not represent a value of the object's
> > subtype (perhaps because the object was not initialized), the object is said
> > to have an invalid representation. It is a bounded error to evaluate the value
> > of such an object. {Program_Error (raised by failure of run-time check)}
> > {Constraint_Error (raised by failure of run-time check)} If the error is
> > detected, either Constraint_Error or Program_Error is raised. Otherwise,
> > execution continues using the invalid representation. The rules of the
> > language outside this subclause assume that all objects have valid
> > representations. The semantics of operations on invalid representations are as
> > follows:
> >
> >     9.a   Discussion: The AARM is more explicit about what happens when the
> >           value of the case expression is an invalid representation.
> >
> >     9.b/2 Ramification: {AI95-00426-01} This includes the result object of
> >           functions, including the result of Unchecked_Conversion, T'Input,
> >           and imported functions.
> >
> > 10    If the representation of the object represents a value of the object's
> >       type, the value of the type is used.
> >
> > 11    If the representation of the object does not represent a value of the
> >       object's type, the semantics of operations on such representations is
> >       implementation-defined, but does not by itself lead to erroneous or
> >       unpredictable execution, or to other objects becoming abnormal.
> >
> >     11.a/2 Implementation Note: {AI95-00426-01} This means that the
> >           implementation must take care not to use an invalid representation
> >           in a way that might cause erroneous execution. For instance, the
> >           exception mandated for case_statements must be raised. Array
> >           indexing must not cause memory outside of the array to be written
> >           (and usually, not read either). These cases and similar cases may
> >           require explicit checks by the implementation.
> > >>
> >
> > So in this case the behaviour is implementation defined, from my reading
> > an infinite loop is not contrary to the standard.
> >
> > > In this case, the frontend needs
> > > to insert a proper
> > > check.  You cannot expect the middle-end to avoid the above transformation, so
> > > this is a frontend bug.
> >
> > Do the ME or the BE have a representation for potentially uninitialized
> > variables?
>
> No.
>
> > In the following case:
> >
> > procedure T2 is
> >    type R is range 1 .. 10;
> >    type T is array (R) of Integer;
> >    I : R;
> >    X : T;
> > begin
> >    X (I) := 0;
> > end T2;
> >
> > The Ada FE will insert an explicit check, as seen when using
> > gcc -c -gnatdg t2.adb:
> >
> >    [constraint_error when not (interfaces__unsigned_32!(i) >= 1 and then
> >      interfaces__unsigned_32!(i) <= 10) "invalid data"]
> >
> > Will the ME or FE remove the check?
>
> If you do the check in the base type then it will only be removed if the
> compiler can prove the comparisons are always true.  For instance if there
> is a preceding initialization of i to, say, 3.  Also, if the basetype is
> somehow known to have a value outside of the range, the constraint error
> will be raised unconditionally (but as I understand there is no way the
> frontend can produce such knowledge).

Note that gigi needs to use VIEW_CONVERT_EXPR to create a view in the
base type for such checks (likewise for the 'Valid case).  Now for VRP
suppose we have
(sorry for the C-ish Ada code ;))

  derived uninitialized;
  if (VIEW_CONVERT_EXPR(base_type, uninitialized) < 0)
   abort();

now VRP infers a range of 1..10 for uninitialized, inferred from
TYPE_MIN/MAX_VALUE.
If VRP wants to propagate ranges through VIEW_CONVERT_EXPRs (which it wants,
if the above check should be optimized in case the value is
initialized), it needs to
somehow exclude range information inferred from TYPE_MIN/MAX_VALUE.  So, for

  derived initialized = 1;
  if (VIEW_CONVERT_EXPR(base_type, initialized) < 0)
   abort();

VRP should be able to remove the test (I guess at the moment VRP simply stops
if seeing VIEW_CONVERT_EXPRs).

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 11:48           ` Laurent GUERBY
@ 2006-03-16 12:03             ` Richard Guenther
  2006-03-16 12:19               ` Richard Guenther
  2006-03-16 12:30             ` Richard Guenther
  1 sibling, 1 reply; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 12:03 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Geert Bosch, Robert Dewar, law, Waldek Hebisch, gcc

On 3/16/06, Laurent GUERBY <laurent@guerby.net> wrote:
> On Thu, 2006-03-16 at 10:43 +0100, Richard Guenther wrote:
> > On 3/16/06, Geert Bosch <bosch@adacore.com> wrote:
> > >
> > > On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> > > > Not quite right. If you have an uninitialized variable, the value is
> > > > invalid and may be out of bounds, but this is a bounded error
> > > > situation,
> > > > not an erroneous program. So the possible effects are definitely NOT
> > > > unbounded, and the use of such values cannot turn a program erroneous.
> > > > (that's an Ada 95 change, this used to be erroneous in Ada 83).
> > >
> > > Actually, that's a good point and raises some potential issues:
> > > if we never establish the invariant that a value of a type is in
> > > range, we can only use the base range for variables that might be
> > > used uninitialized. Any read of such a variable would then involve
> > > a range check.
> > >
> > >    package Uninitialized is
> > >       N : Positive;
> > >    end Uninitialized;
> > >
> > >    with Uninitialized;
> > >    procedure Test is
> > >    begin
> > >       for J in 1 .. Uninitialized.N loop
> > >          ...
> > >       end loop;
> > >    end Test;
> > >
> > > In this case, GCC might replace the loop with
> > >     declare
> > >        J : Integer := 1;
> > >     begin
> > >        while J /= Uninitialized.N loop
> > >           ...
> > >           J := J + 1;
> > >        end loop;
> > >     end;
> > >
> > > which would be incorrect for N = 0.
> >
> > Uh - what do you expect here??  Does the Ada standard require an out-of-range
> > exception upon the first use of N?
>
> <<
> 13.9.1 Data Validity
>
>                           Bounded (Run-Time) Errors
>
> 9     {invalid representation} {bounded error (cause) [partial]} If the
> representation of a scalar object does not represent a value of the object's
> subtype (perhaps because the object was not initialized), the object is said
> to have an invalid representation. It is a bounded error to evaluate the value
> of such an object. {Program_Error (raised by failure of run-time check)}
> {Constraint_Error (raised by failure of run-time check)} If the error is
> detected, either Constraint_Error or Program_Error is raised. Otherwise,
> execution continues using the invalid representation. The rules of the
> language outside this subclause assume that all objects have valid
> representations. The semantics of operations on invalid representations are as
> follows:
>
>     9.a   Discussion: The AARM is more explicit about what happens when the
>           value of the case expression is an invalid representation.
>
>     9.b/2 Ramification: {AI95-00426-01} This includes the result object of
>           functions, including the result of Unchecked_Conversion, T'Input,
>           and imported functions.
>
> 10    If the representation of the object represents a value of the object's
>       type, the value of the type is used.
>
> 11    If the representation of the object does not represent a value of the
>       object's type, the semantics of operations on such representations is
>       implementation-defined, but does not by itself lead to erroneous or
>       unpredictable execution, or to other objects becoming abnormal.
>
>     11.a/2 Implementation Note: {AI95-00426-01} This means that the
>           implementation must take care not to use an invalid representation
>           in a way that might cause erroneous execution. For instance, the
>           exception mandated for case_statements must be raised. Array
>           indexing must not cause memory outside of the array to be written
>           (and usually, not read either). These cases and similar cases may
>           require explicit checks by the implementation.
> >>
>
> So in this case the behaviour is implementation defined, from my reading
> an infinite loop is not contrary to the standard.
>
> > In this case, the frontend needs
> > to insert a proper
> > check.  You cannot expect the middle-end to avoid the above transformation, so
> > this is a frontend bug.
>
> Does the ME or the BE have a representation for potentially uninitialized
> variables?

No.

> In the following case:
>
> procedure T2 is
>    type R is range 1 .. 10;
>    type T is array (R) of Integer;
>    I : R;
>    X : T;
> begin
>    X (I) := 0;
> end T2;
>
> The Ada FE will insert an explicit check, as seen when using
> gcc -c -gnatdg t2.adb:
>
>    [constraint_error when not (interfaces__unsigned_32!(i) >= 1 and then
>      interfaces__unsigned_32!(i) <= 10) "invalid data"]
>
> Will the ME or FE remove the check?

If you do the check in the base type, then it will only be removed if the
compiler can prove the comparisons are always true -- for instance, if there
is a preceding initialization of i to, say, 3.  Also, if the basetype is
somehow known to have a value outside of the range, the constraint error will
be raised unconditionally (but as I understand it, there is no way the
frontend can produce such knowledge).
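
(As a rough pseudo-C sketch of that first case -- the names here are only
illustrative, raise_constraint_error stands in for whatever the GNAT runtime
actually calls:)

    extern void raise_constraint_error (void);   /* placeholder name */

    void t2_with_init (void)
    {
      int x[10];
      int i = 3;                     /* preceding initialization of i       */
      if (!(i >= 1 && i <= 10))      /* front-end range check on the index  */
        raise_constraint_error ();   /* CCP/VRP can prove this test false,  */
      x[i - 1] = 0;                  /* so only the store remains           */
    }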

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 10:20         ` Richard Guenther
  2006-03-16 11:48           ` Laurent GUERBY
@ 2006-03-16 11:49           ` Geert Bosch
  1 sibling, 0 replies; 105+ messages in thread
From: Geert Bosch @ 2006-03-16 11:49 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Robert Dewar, Laurent GUERBY, law, Waldek Hebisch, gcc


On Mar 16, 2006, at 10:43, Richard Guenther wrote:
> Uh - what do you expect here??  Does the Ada standard
> require an out-of-range exception upon the first use of N?
> In this case, the frontend needs to insert a proper check.
> You cannot expect the middle-end to avoid the above
> transformation, so this is a frontend bug.

That's exactly the point I'm making.

   -Geert

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16  4:10     ` Robert Dewar
  2006-03-16  7:33       ` Geert Bosch
@ 2006-03-16 11:48       ` Laurent GUERBY
  2006-03-17 16:48       ` Waldek Hebisch
  2 siblings, 0 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-16 11:48 UTC (permalink / raw)
  To: Robert Dewar; +Cc: law, Waldek Hebisch, gcc

On Wed, 2006-03-15 at 23:09 -0500, Robert Dewar wrote:
> Laurent GUERBY wrote:
> > On Mon, 2006-03-13 at 15:31 -0700, Jeffrey A Law wrote:
> >> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> >>
> >>> What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> >>> allowed by given type.
> >> As long as you're *absolutely* clear that  a variable with a
> >> restricted range can never hold a value outside of the
> >> restricted range in a conforming program, then I'll back off
> >> the "abuse" label and merely call it pointless :-)
> > 
> > Variables in a non erroneous Ada program all have their value between
> > their type bounds from the optimizer perspective (the special 'valid
> > case put aside).
> 
> Not quite right. If you have an uninitialized variable, the value is
> invalid and may be out of bounds, but this is a bounded error situation,
> not an erroneous program. So the possible effects are definitely NOT
> unbounded, and the use of such values cannot turn a program erroneous.
> (that's an Ada 95 change, this used to be erroneous in Ada 83).

Yes, thanks for the clarification.  I'm not sure that such a difference
can reach the back-end in the current state of affairs; do you have
a case in mind?

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16 10:20         ` Richard Guenther
@ 2006-03-16 11:48           ` Laurent GUERBY
  2006-03-16 12:03             ` Richard Guenther
  2006-03-16 12:30             ` Richard Guenther
  2006-03-16 11:49           ` Geert Bosch
  1 sibling, 2 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-16 11:48 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Geert Bosch, Robert Dewar, law, Waldek Hebisch, gcc

On Thu, 2006-03-16 at 10:43 +0100, Richard Guenther wrote:
> On 3/16/06, Geert Bosch <bosch@adacore.com> wrote:
> >
> > On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> > > Not quite right. If you have an uninitialized variable, the value is
> > > invalid and may be out of bounds, but this is a bounded error
> > > situation,
> > > not an erroneous program. So the possible effects are definitely NOT
> > > unbounded, and the use of such values cannot turn a program erroneous.
> > > (that's an Ada 95 change, this used to be erroneous in Ada 83).
> >
> > Actually, that's a good point and raises some potential issues:
> > if we never establish the invariant that a value of a type is in
> > range, we can only use the base range for variables that might be
> > used uninitialized. Any read of such a variable would then involve
> > a range check.
> >
> >    package Uninitialized is
> >       N : Positive;
> >    end Uninitialized;
> >
> >    with Uninitialized;
> >    procedure Test is
> >       for J in 1 .. Uninitialized.N loop
> >          ...
> >       end loop;
> >    end Test;
> >
> > In this case, GCC might replace the loop with
> >     declare
> >        J : Integer := 1;
> >     begin
> >        while J /= Uninitialized.N loop
> >           ...
> >           J := J + 1;
> >        end loop;
> >     end;
> >
> > which would be incorrect for N = 0.
> 
> Uh - what do you expect here??  Does the Ada standard require an out-of-range
> exception upon the first use of N?  

<<
13.9.1 Data Validity

                          Bounded (Run-Time) Errors

9     {invalid representation} {bounded error (cause) [partial]} If the
representation of a scalar object does not represent a value of the object's
subtype (perhaps because the object was not initialized), the object is said
to have an invalid representation. It is a bounded error to evaluate the value
of such an object. {Program_Error (raised by failure of run-time check)}
{Constraint_Error (raised by failure of run-time check)} If the error is
detected, either Constraint_Error or Program_Error is raised. Otherwise,
execution continues using the invalid representation. The rules of the
language outside this subclause assume that all objects have valid
representations. The semantics of operations on invalid representations are as
follows:

    9.a   Discussion: The AARM is more explicit about what happens when the
          value of the case expression is an invalid representation.

    9.b/2 Ramification: {AI95-00426-01} This includes the result object of
          functions, including the result of Unchecked_Conversion, T'Input,
          and imported functions.

10    If the representation of the object represents a value of the object's
      type, the value of the type is used.

11    If the representation of the object does not represent a value of the
      object's type, the semantics of operations on such representations is
      implementation-defined, but does not by itself lead to erroneous or
      unpredictable execution, or to other objects becoming abnormal.

    11.a/2 Implementation Note: {AI95-00426-01} This means that the
          implementation must take care not to use an invalid representation
          in a way that might cause erroneous execution. For instance, the
          exception mandated for case_statements must be raised. Array
          indexing must not cause memory outside of the array to be written
          (and usually, not read either). These cases and similar cases may
          require explicit checks by the implementation.
>>

So in this case the behaviour is implementation-defined; from my reading,
an infinite loop is not contrary to the standard.

> In this case, the frontend needs
> to insert a proper
> check.  You cannot expect the middle-end to avoid the above transformation, so
> this is a frontend bug.

Does the ME or the BE have a representation for potentially uninitialized
variables? In the following case:

procedure T2 is
   type R is range 1 .. 10;
   type T is array (R) of Integer;
   I : R;
   X : T;
begin
   X (I) := 0;
end T2;

The Ada FE will insert an explicit check, as seen when using 
gcc -c -gnatdg t2.adb:

   [constraint_error when not (interfaces__unsigned_32!(i) >= 1 and then
     interfaces__unsigned_32!(i) <= 10) "invalid data"]

Will the ME or FE remove the check?
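
(For reference, a rough pseudo-C rendering of what that expanded check
amounts to by the time the middle-end sees it -- raise_constraint_error is
just a placeholder for the real runtime call:)

    extern void raise_constraint_error (const char *msg);

    void t2 (void)
    {
      int x[10];
      int i;      /* deliberately uninitialized, models the 1 .. 10 subtype */
      if (!((unsigned int) i >= 1 && (unsigned int) i <= 10))
        raise_constraint_error ("invalid data");
      x[i - 1] = 0;
    }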

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16  7:33       ` Geert Bosch
@ 2006-03-16 10:20         ` Richard Guenther
  2006-03-16 11:48           ` Laurent GUERBY
  2006-03-16 11:49           ` Geert Bosch
  0 siblings, 2 replies; 105+ messages in thread
From: Richard Guenther @ 2006-03-16 10:20 UTC (permalink / raw)
  To: Geert Bosch; +Cc: Robert Dewar, Laurent GUERBY, law, Waldek Hebisch, gcc

On 3/16/06, Geert Bosch <bosch@adacore.com> wrote:
>
> On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> > Not quite right. If you have an uninitialized variable, the value is
> > invalid and may be out of bounds, but this is a bounded error
> > situation,
> > not an erroneous program. So the possible effects are definitely NOT
> > unbounded, and the use of such values cannot turn a program erroneous.
> > (that's an Ada 95 change, this used to be erroneous in Ada 83).
>
> Actually, that's a good point and raises some potential issues:
> if we never establish the invariant that a value of a type is in
> range, we can only use the base range for variables that might be
> used uninitialized. Any read of such a variable would then involve
> a range check.
>
>    package Uninitialized is
>       N : Positive;
>    end Uninitialized;
>
>    with Uninitialized;
>    procedure Test is
>       for J in 1 .. Uninitialized.N loop
>          ...
>       end loop;
>    end Test;
>
> In this case, GCC might replace the loop with
>     declare
>        J : Integer := 1;
>     begin
>        while J /= Uninitialized.N loop
>           ...
>           J := J + 1;
>        end loop;
>     end;
>
> which would be incorrect for N = 0.

Uh - what do you expect here??  Does the Ada standard require an out-of-range
exception upon the first use of N?  In this case, the frontend needs to insert
a proper check.  You cannot expect the middle-end to avoid the above
transformation, so this is a frontend bug.

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16  4:10     ` Robert Dewar
@ 2006-03-16  7:33       ` Geert Bosch
  2006-03-16 10:20         ` Richard Guenther
  2006-03-16 11:48       ` Laurent GUERBY
  2006-03-17 16:48       ` Waldek Hebisch
  2 siblings, 1 reply; 105+ messages in thread
From: Geert Bosch @ 2006-03-16  7:33 UTC (permalink / raw)
  To: Robert Dewar; +Cc: Laurent GUERBY, law, Waldek Hebisch, gcc


On Mar 16, 2006, at 05:09, Robert Dewar wrote:
> Not quite right. If you have an uninitialized variable, the value is
> invalid and may be out of bounds, but this is a bounded error  
> situation,
> not an erroneous program. So the possible effects are definitely NOT
> unbounded, and the use of such values cannot turn a program erroneous.
> (that's an Ada 95 change, this used to be erroneous in Ada 83).

Actually, that's a good point and raises some potential issues:
if we never establish the invariant that a value of a type is in
range, we can only use the base range for variables that might be
used uninitialized. Any read of such a variable would then involve
a range check.

   package Uninitialized is
      N : Positive;
   end Uninitialized;

   with Uninitialized;
   procedure Test is
   begin
      for J in 1 .. Uninitialized.N loop
         ...
      end loop;
   end Test;

In this case, GCC might replace the loop with
    declare
       J : Integer := 1;
    begin
       while J /= Uninitialized.N loop
          ...
          J := J + 1;
       end loop;
    end;

which would be incorrect for N = 0.
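
(A rough pseudo-C sketch of the kind of check that would make the
transformation safe again -- the names below are placeholders, not what
GNAT actually emits:)

    extern int uninitialized__n;             /* N : Positive, possibly never assigned  */
    extern void raise_program_error (void);  /* Constraint_Error would also be allowed */

    void test (void)
    {
      int n = uninitialized__n;
      if (n < 1)                             /* validity check on the read of N */
        raise_program_error ();
      /* From here on n >= 1 really holds, so rewriting the counted loop into a
         "loop until inequality" form can no longer run forever for N = 0.  */
      for (int j = 1; j <= n; j++)
        ;                                    /* ... */
    }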

   -Geert

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-16  1:53   ` Laurent GUERBY
@ 2006-03-16  4:10     ` Robert Dewar
  2006-03-16  7:33       ` Geert Bosch
                         ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Robert Dewar @ 2006-03-16  4:10 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: law, Waldek Hebisch, gcc

Laurent GUERBY wrote:
> On Mon, 2006-03-13 at 15:31 -0700, Jeffrey A Law wrote:
>> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
>>
>>> What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
>>> allowed by given type.
>> As long as you're *absolutely* clear that  a variable with a
>> restricted range can never hold a value outside of the
>> restricted range in a conforming program, then I'll back off
>> the "abuse" label and merely call it pointless :-)
> 
> Variables in a non erroneous Ada program all have their value between
> their type bounds from the optimizer perspective (the special 'valid
> case put aside).

Not quite right. If you have an uninitialized variable, the value is
invalid and may be out of bounds, but this is a bounded error situation,
not an erroneous program. So the possible effects are definitely NOT
unbounded, and the use of such values cannot turn a program erroneous.
(that's an Ada 95 change, this used to be erroneous in Ada 83).

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-13 23:31 ` Jeffrey A Law
  2006-03-14  2:29   ` Waldek Hebisch
  2006-03-15 17:23   ` Michael Matz
@ 2006-03-16  1:53   ` Laurent GUERBY
  2006-03-16  4:10     ` Robert Dewar
  2 siblings, 1 reply; 105+ messages in thread
From: Laurent GUERBY @ 2006-03-16  1:53 UTC (permalink / raw)
  To: law; +Cc: Waldek Hebisch, gcc

On Mon, 2006-03-13 at 15:31 -0700, Jeffrey A Law wrote:
> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> 
> > What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> > allowed by given type.
> As long as you're *absolutely* clear that  a variable with a
> restricted range can never hold a value outside of the
> restricted range in a conforming program, then I'll back off
> the "abuse" label and merely call it pointless :-)

Variables in a non-erroneous Ada program all have their values between
their type bounds from the optimizer's perspective (the special 'Valid
case put aside).

The Ada FE and RTS are currently compiled with compiler checks off,
so of course programming errors are magnified by VRP (as seen in one
case already).

> The scheme you're using "promoting" to a base type before all
> arithmetic creates lots of useless type conversions and means
> that the optimizers can't utilize TYPE_MIN_VALUE/TYPE_MAX_VALUE
> anyway.  ie, you don't gain anything over keeping that knowledge
> in the front-end.

It just means that the optimizer has to be smarter than it currently is;
maybe one day it will be, so yes, front-ends should pass on their knowledge.

There's no reason in language design (unless you want to cripple
performance and usability) not to do what Ada and Pascal are doing:
intermediate values are kept in an implementation-defined
"base" type.  If you have a reason to do otherwise, please
let us know.

Anyway, I'm pretty sure that it's possible to find cases where
the current optimizer, thanks to VRP improvements, is able to remove
checks that the Ada front-end wasn't able to remove, and check removal
is usually a pretty important optimization in Ada and Pascal (of course
mostly useless in C :).
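
(As one illustration of that -- a hand-written pseudo-C sketch, not the
output of any compiler; raise_constraint_error is a placeholder name.  The
front-end emits the subtype check locally and cannot remove it on its own,
while VRP sees the earlier guard and can fold it:)

    extern void raise_constraint_error (void);

    void store (int i, int *a)
    {
      if (i < 1 || i > 10)          /* guard written by the user               */
        return;
      if (i < 1 || i > 10)          /* front-end check for a 1 .. 10 subtype;  */
        raise_constraint_error ();  /* VRP knows i is in [1,10] here, folds it */
      a[i - 1] = 0;
    }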

Laurent


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-13 23:31 ` Jeffrey A Law
  2006-03-14  2:29   ` Waldek Hebisch
@ 2006-03-15 17:23   ` Michael Matz
  2006-03-16  1:53   ` Laurent GUERBY
  2 siblings, 0 replies; 105+ messages in thread
From: Michael Matz @ 2006-03-15 17:23 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Waldek Hebisch, gcc

Hi,

On Mon, 13 Mar 2006, Jeffrey A Law wrote:

> > What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> > allowed by given type.
> As long as you're *absolutely* clear that  a variable with a
> restricted range can never hold a value outside of the
> restricted range in a conforming program, then I'll back off
> the "abuse" label and merely call it pointless :-)

I really really can't understand your strong opposition to the
sensible semantics of TYPE_MAX_VALUE.  If the type has only a certain
range, then it obviously is a good thing to make this knowledge available
to the optimizers, even if that range doesn't end at a power of two.  That
the value of variables of that type has to stay in that range is a
tautology (i.e. wherever that's not the case it's a bug somewhere, either
in the source, in the frontend or the optimizer).  It doesn't make the
notion of TYPE_MAX_VALUE pointless at all.

> The scheme you're using "promoting" to a base type before all
> arithmetic creates lots of useless type conversions and means
> that the optimizers can't utilize TYPE_MIN_VALUE/TYPE_MAX_VALUE
> anyway.  ie, you don't gain anything over keeping that knowledge
> in the front-end.

Sure it can, namely in VRP and connected optimizations.  First: it's not
clear that the Pascal frontend also goes back to the base type for doing
arithmetic.  In fact you assume that arithmetic is done on such
variables at all, which is not necessary.  They can just be used as lookup
values in switches, for instance, or in comparisons.  And second: even if it
does promote to do arithmetic, the final result probably is again in the
restricted type, again providing the knowledge of the range to VRP
(obviously the frontend has to somehow make sure that the result of this
arithmetic indeed fits into the real type, but see above for why this is
not a very special request).
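
(A small sketch of such non-arithmetic uses, in plain C rather than trees --
assume the value is known, from its range type, to lie in [20, 50]:)

    int classify (int x)     /* x assumed in [20, 50] from its range type */
    {
      if (x > 60)            /* foldable to false given the range         */
        return -1;
      switch (x)             /* x used directly as a lookup value         */
        {
        case 10:             /* unreachable case, can be dropped          */
          return 0;
        case 42:
          return 1;
        default:
          return 2;
        }
    }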


Ciao,
Michael.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-14  2:29   ` Waldek Hebisch
@ 2006-03-14  9:55     ` Duncan Sands
  2006-03-17 21:06     ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Duncan Sands @ 2006-03-14  9:55 UTC (permalink / raw)
  To: gcc; +Cc: Waldek Hebisch, law

On Tuesday 14 March 2006 03:16, Waldek Hebisch wrote:
> Jeffrey A Law wrote:
> > On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> > 
> > > What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> > > allowed by given type.
> > As long as you're *absolutely* clear that  a variable with a
> > restricted range can never hold a value outside of the
> > restricted range in a conforming program, then I'll back off
> > the "abuse" label and merely call it pointless :-)
> > 
> > The scheme you're using "promoting" to a base type before all
> > arithmetic creates lots of useless type conversions and means
> > that the optimizers can't utilize TYPE_MIN_VALUE/TYPE_MAX_VALUE
> > anyway.  ie, you don't gain anything over keeping that knowledge
> > in the front-end.
> > 
> 
> Pascal arithmetic essentially is untyped: operators take integer
> arguments and are supposed to give mathematically correct result
> (provided all intermediate results are representable in machine
> arithmetic, overflow is treated as user error). OTOH for proper
> type checking front end have to track ranges associated to
> variables. So "useless" type conversions are needed due to
> Pascal standard and backend constraints.
> 
> I think that it is easy for back end to make good use of
> TYPE_MIN_VALUE/TYPE_MAX_VALUE. Namely, consider the assignment
> 
> x := y + z * w;
> 
> where variables y, z and w have values in the interval [0,7] and
> x have values in [0,1000]. Pascal converts the above to the
> following C like code:
> 
> int tmp = (int) y + (int) z * (int) w;
> x = (tmp < 0 || tmp > 1000)? (Range_Check_Error (), 0) : tmp;
>
> I expect VRP to deduce that tmp will have values in [0..56] and
> eliminate range check.

This is much the same in Ada.  However, the Ada runtime and compiler
are compiled using a special flag (-gnatp) that turns all checks off.
This is not conformant with Ada semantics.  If you look at what the
front end is generating during a bootstrap, you therefore see it
happily converting between types and base types, and completely ignoring
the possibility of out-of-range values.  Someone inspecting the output
of the front-end during a bootstrap could well wonder why it bothers setting
TYPE_MIN_VALUE/TYPE_MAX_VALUE, and what the point of all the conversions
to and from base types is.  The point is that usually there would be
range checks all over the place as in Waldek's example, but they have
been suppressed.

> Also, it should be clear that in the 
> assignment above arithmetic can be done using any convenient
> precision.
> 
> In principle Pascal front end could deduce more precise types (ranges),
> but that would create some extra type conversions and a lot
> of extra types. Moreover, I assume that VRP can do better job
> at tracking ranges than the Pascal front end.

Duncan.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-03-13 23:31 ` Jeffrey A Law
@ 2006-03-14  2:29   ` Waldek Hebisch
  2006-03-14  9:55     ` Duncan Sands
  2006-03-17 21:06     ` Jeffrey A Law
  2006-03-15 17:23   ` Michael Matz
  2006-03-16  1:53   ` Laurent GUERBY
  2 siblings, 2 replies; 105+ messages in thread
From: Waldek Hebisch @ 2006-03-14  2:29 UTC (permalink / raw)
  To: law; +Cc: gcc

Jeffrey A Law wrote:
> On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> 
> > What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> > allowed by given type.
> As long as you're *absolutely* clear that  a variable with a
> restricted range can never hold a value outside of the
> restricted range in a conforming program, then I'll back off
> the "abuse" label and merely call it pointless :-)
> 
> The scheme you're using "promoting" to a base type before all
> arithmetic creates lots of useless type conversions and means
> that the optimizers can't utilize TYPE_MIN_VALUE/TYPE_MAX_VALUE
> anyway.  ie, you don't gain anything over keeping that knowledge
> in the front-end.
> 

Pascal arithmetic essentially is untyped: operators take integer
arguments and are supposed to give a mathematically correct result
(provided all intermediate results are representable in machine
arithmetic; overflow is treated as a user error). OTOH, for proper
type checking the front end has to track the ranges associated with
variables. So the "useless" type conversions are needed due to the
Pascal standard and backend constraints.

I think that it is easy for the back end to make good use of
TYPE_MIN_VALUE/TYPE_MAX_VALUE. Namely, consider the assignment

x := y + z * w;

where the variables y, z and w have values in the interval [0,7] and
x has values in [0,1000]. Pascal converts the above to the
following C-like code:

int tmp = (int) y + (int) z * (int) w;
x = (tmp < 0 || tmp > 1000)? (Range_Check_Error (), 0) : tmp;

I expect VRP to deduce that tmp will have values in [0..56] and
eliminate the range check. Also, it should be clear that in the
assignment above the arithmetic can be done using any convenient
precision.

In principle the Pascal front end could deduce more precise types (ranges),
but that would create some extra type conversions and a lot
of extra types. Moreover, I assume that VRP can do a better job
at tracking ranges than the Pascal front end.
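
(Spelling the deduction out as a standalone C model -- not compiler output;
y, z and w are assumed to be in [0,7] as above:)

    int deduce (int y, int z, int w)
    {
      int tmp = y + z * w;     /* VRP range: [0 + 0*0, 7 + 7*7] = [0, 56]   */
      /* the "tmp < 0 || tmp > 1000" test is then foldable, since [0, 56]
         lies inside [0, 1000], and only the plain assignment remains       */
      return tmp;
    }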

-- 
                              Waldek Hebisch
hebisch@math.uni.wroc.pl 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-27 19:12 Waldek Hebisch
  2006-02-28 21:01 ` Laurent GUERBY
@ 2006-03-13 23:31 ` Jeffrey A Law
  2006-03-14  2:29   ` Waldek Hebisch
                     ` (2 more replies)
  1 sibling, 3 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-03-13 23:31 UTC (permalink / raw)
  To: Waldek Hebisch; +Cc: gcc

On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:

> What do you mean by "abuse"?  TYPE_MAX_VALUE means maximal value
> allowed by given type.
As long as you're *absolutely* clear that a variable with a
restricted range can never hold a value outside of the
restricted range in a conforming program, then I'll back off
the "abuse" label and merely call it pointless :-)

The scheme you're using -- "promoting" to a base type before all
arithmetic -- creates lots of useless type conversions and means
that the optimizers can't utilize TYPE_MIN_VALUE/TYPE_MAX_VALUE
anyway.  ie, you don't gain anything over keeping that knowledge
in the front-end.

jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-27 19:12 Waldek Hebisch
@ 2006-02-28 21:01 ` Laurent GUERBY
  2006-03-13 23:31 ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-02-28 21:01 UTC (permalink / raw)
  To: Waldek Hebisch; +Cc: Jeffrey A Law, gcc

On Mon, 2006-02-27 at 20:08 +0100, Waldek Hebisch wrote:
> Jeffrey A Law wrote:
> > My suspicions appear to be correct.  This never triggers except for
> > Ada code and it's relatively common in Ada code.  No surprise since
> > I don't think any other front-end abuses TYPE_MAX_VALUE in the way
> > the Ada front-end does.  This wouldn't be the first time we've had
> > to hack up something in the generic optimizers to deal with the
> > broken TYPE_MAX_VALUE.
> 
> What do you mean by "abuse"?  

I'd like to know too :). Both Pascal and Ada came up with the same
representation of ranged types for the back-end; if anything else
should be used, by all means let us know the proposal.

Laurent


> TYPE_MAX_VALUE means maximal value
> allowed by a given type.  For range types it is clearly the upper
> bound of the range.  Of course, upper bound should be representable,
> so TYPE_MAX_VALUE <= (2**TYPE_PRECISION - 1) for unsigned types
> and TYPE_MAX_VALUE <= (2**(TYPE_PRECISION - 1) - 1) for signed types.
> However, if the language has non-trivial range types you can expect
> strict inequality.  Note, that if you do not allow strict inequality
> above, TYPE_MAX_VALUE would be redundant.
> 
> FYI GNU Pascal is using such representation for range types, so for
> example:
> 
> type t = 0..5;
> 
> will give you TYPE_PRECISION equal to 32 (this is an old decision
> which tries to increase speed at the cost of space, otherwise 8
> would be enough) and TYPE_MAX_VALUE equal to 5.
> 
> GNU Pascal promotes arguments of operators, so that arithmetic takes
> place in "standard" types -- I believe Ada is doing the same.
> 
> BTW, setting TYPE_PRECISION to 3 for the type above used to cause
> wrong code, so the way above was forced by the backend.
> 
> If you think that such behaviour is "abuse" then why have a separate
> TYPE_MAX_VALUE? How to represent range types so that optimizers
> will know about allowed ranges (and use them!)? And how about debug
> info?
> 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-27 18:26         ` Jeffrey A Law
@ 2006-02-27 19:25           ` Zdenek Dvorak
  0 siblings, 0 replies; 105+ messages in thread
From: Zdenek Dvorak @ 2006-02-27 19:25 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Sebastian Pop, Richard Kenner, gcc

Hello,

> > > Jeffrey A Law wrote:
> > > > Another possibility is to simply not allow conversions between a
> > > > subtype and basetype. 
> > > 
> > > Such a patch also solves the problem.  But I'm not sure to understand
> > > the impact on other codes.  Is this kind of conversion between a type
> > > and its basetype specific to Ada?
> > 
> > this still seems unnecessarily conservative to me.  I would just check
> > for types whose TYPE_MIN and TYPE_MAX do not match the natural values
> > derived from the type precision (i.e., those returned by
> > upper_bound_in_type (type, type) and lower_bound_in_type (type, type)).
> I doubt it matters in any significant way -- your proposed solution
> may allow optimization in a few more cases of Ada code, but at a 
> compile-time cost (probably not significant).

just forbidding casts to a subtype does not seem enough -- a cast from any
type to a type in which TYPE_MIN and MAX do not match TYPE_PRECISION
may cause problems.  I am not sure whether such casts really appear,
but I do not see any principled reason why they should not.

Zdenek

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-27 19:12         ` Sebastian Pop
@ 2006-02-27 19:17           ` Jeffrey A Law
  0 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-27 19:17 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Mon, 2006-02-27 at 20:26 +0100, Sebastian Pop wrote:
> Jeffrey A Law wrote:
> > On Fri, 2006-02-24 at 19:47 +0100, Sebastian Pop wrote:
> > > Jeffrey A Law wrote:
> > > > Another possibility is to simply not allow conversions between a
> > > > subtype and basetype. 
> > > 
> > > Such a patch also solves the problem.  But I'm not sure to understand
> > > the impact on other codes.  Is this kind of conversion between a type
> > > and its basetype specific to Ada?
> > My suspicions appear to be correct.  This never triggers except for
> > Ada code and it's relatively common in Ada code.  No surprise since
> > I don't think any other front-end abuses TYPE_MAX_VALUE in the way
> > the Ada front-end does.  This wouldn't be the first time we've had
> > to hack up something in the generic optimizers to deal with the
> > broken TYPE_MAX_VALUE.
> > 
> 
> This is good to know, thanks.
> 
> I was testing on amd64 this patch that was suggested by Zdenek, it
> passed bootstrap, but is still not finished the test.  As you said,
> this is a little bit more computation inefficient, so probably the
> other patch is preferable instead.
I'm testing a patch which implements Zdenek's suggestion --
but in a way that's more computationally efficient than yours.


There's no need to use fold_build2, creating junk nodes in the
process.  We just get the bounds and use operand_equal_p to
compare them to the type's min/max values. 

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Ada subtypes and base types
@ 2006-02-27 19:12 Waldek Hebisch
  2006-02-28 21:01 ` Laurent GUERBY
  2006-03-13 23:31 ` Jeffrey A Law
  0 siblings, 2 replies; 105+ messages in thread
From: Waldek Hebisch @ 2006-02-27 19:12 UTC (permalink / raw)
  To: gcc

Jeffrey A Law wrote:
> My suspicions appear to be correct.  This never triggers except for
> Ada code and it's relatively common in Ada code.  No surprise since
> I don't think any other front-end abuses TYPE_MAX_VALUE in the way
> the Ada front-end does.  This wouldn't be the first time we've had
> to hack up something in the generic optimizers to deal with the
> broken TYPE_MAX_VALUE.

What do you mean by "abuse"?  TYPE_MAX_VALUE means the maximal value
allowed by a given type.  For range types it is clearly the upper
bound of the range.  Of course, the upper bound should be representable,
so TYPE_MAX_VALUE <= (2**TYPE_PRECISION - 1) for unsigned types
and TYPE_MAX_VALUE <= (2**(TYPE_PRECISION - 1) - 1) for signed types.
However, if the language has non-trivial range types you can expect
strict inequality.  Note that if you do not allow strict inequality
above, TYPE_MAX_VALUE would be redundant.

FYI, GNU Pascal is using such a representation for range types, so for
example:

type t = 0..5;

will give you TYPE_PRECISION equal to 32 (this is an old decision
which tries to increase speed at the cost of space, otherwise 8
would be enough) and TYPE_MAX_VALUE equal to 5.

GNU Pascal promotes arguments of operators, so that arithmetic takes
place in "standard" types -- I believe Ada is doing the same.

BTW, setting TYPE_PRECISION to 3 for the type above used to cause
wrong code, so the way above was forced by the backend.

If you think that such behaviour is "abuse" then why have a separate
TYPE_MAX_VALUE?  How should range types be represented so that optimizers
will know about the allowed ranges (and use them!)?  And how about debug
info?
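
(To make that concrete -- a plain-C model of what gets recorded for
"type t = 0..5", not the actual GCC data structures:)

    typedef int t;                   /* storage/arithmetic: TYPE_PRECISION == 32 */
    enum { t_min = 0, t_max = 5 };   /* declared bounds: TYPE_MIN/MAX_VALUE      */

    /* an optimizer that trusts the recorded bounds may fold tests like this: */
    int always_false (t x)           /* x assumed to really be in 0 .. 5 */
    {
      return x > t_max;
    }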

-- 
                              Waldek Hebisch
hebisch@math.uni.wroc.pl 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-27 18:23       ` Jeffrey A Law
@ 2006-02-27 19:12         ` Sebastian Pop
  2006-02-27 19:17           ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Sebastian Pop @ 2006-02-27 19:12 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:
> On Fri, 2006-02-24 at 19:47 +0100, Sebastian Pop wrote:
> > Jeffrey A Law wrote:
> > > Another possibility is to simply not allow conversions between a
> > > subtype and basetype. 
> > 
> > Such a patch also solves the problem.  But I'm not sure to understand
> > the impact on other codes.  Is this kind of conversion between a type
> > and its basetype specific to Ada?
> My suspicions appear to be correct.  This never triggers except for
> Ada code and it's relatively common in Ada code.  No surprise since
> I don't think any other front-end abuses TYPE_MAX_VALUE in the way
> the Ada front-end does.  This wouldn't be the first time we've had
> to hack up something in the generic optimizers to deal with the
> broken TYPE_MAX_VALUE.
> 

This is good to know, thanks.

I was testing on amd64 this patch that was suggested by Zdenek; it
passed bootstrap, but testing has not yet finished.  As you said,
this is a little bit more computationally inefficient, so probably the
other patch is preferable instead.

Index: tree-chrec.c
===================================================================
--- tree-chrec.c	(revision 111416)
+++ tree-chrec.c	(working copy)
@@ -1210,6 +1210,22 @@ chrec_convert_aggressive (tree type, tre
   if (TYPE_PRECISION (type) > TYPE_PRECISION (inner_type))
     return NULL_TREE;
 
+  /* In Ada, casts to subtypes are important, as the operations are
+     performed on the base type and then casted back to the subtype.
+     We have to preserve these types unless we can prove that the
+     sequence does not wrap.  Subtypes are recognized as the types
+     whose TYPE_{MIN,MAX}_VALUE are different than the min and max
+     values computed for that precision.  Avoid the aggressive
+     conversion of these types.  */
+  if (!POINTER_TYPE_P (type) && !POINTER_TYPE_P (inner_type)
+      && (integer_zerop (fold_build2 (EQ_EXPR, boolean_type_node,
+				      lower_bound_in_type (type, inner_type),
+				      TYPE_MIN_VALUE (inner_type)))
+	  || integer_zerop (fold_build2 (EQ_EXPR, boolean_type_node,
+					 upper_bound_in_type (type, inner_type),
+					 TYPE_MAX_VALUE (inner_type)))))
+    return NULL_TREE;
+
   left = CHREC_LEFT (chrec);
   right = CHREC_RIGHT (chrec);
   lc = chrec_convert_aggressive (type, left);

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-25  8:48       ` Zdenek Dvorak
@ 2006-02-27 18:26         ` Jeffrey A Law
  2006-02-27 19:25           ` Zdenek Dvorak
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-27 18:26 UTC (permalink / raw)
  To: Zdenek Dvorak; +Cc: Sebastian Pop, Richard Kenner, gcc

On Sat, 2006-02-25 at 09:48 +0100, Zdenek Dvorak wrote:
> Hello,
> 
> > Jeffrey A Law wrote:
> > > Another possibility is to simply not allow conversions between a
> > > subtype and basetype. 
> > 
> > Such a patch also solves the problem.  But I'm not sure to understand
> > the impact on other codes.  Is this kind of conversion between a type
> > and its basetype specific to Ada?
> 
> this still seems unnecessarily conservative to me.  I would just check
> for types whose TYPE_MIN and TYPE_MAX do not match the natural values
> derived from the type precision (i.e., those returned by
> upper_bound_in_type (type, type) and lower_bound_in_type (type, type)).
I doubt it matters in any significant way -- your proposed solution
may allow optimization in a few more cases of Ada code, but at a 
compile-time cost (probably not significant).

Either approach is OK with me.    I'll probably get one spun completely
today which will hopefully get Ada building again.

jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 18:46     ` Sebastian Pop
  2006-02-24 20:16       ` Jeffrey A Law
  2006-02-25  8:48       ` Zdenek Dvorak
@ 2006-02-27 18:23       ` Jeffrey A Law
  2006-02-27 19:12         ` Sebastian Pop
  2 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-27 18:23 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Fri, 2006-02-24 at 19:47 +0100, Sebastian Pop wrote:
> Jeffrey A Law wrote:
> > Another possibility is to simply not allow conversions between a
> > subtype and basetype. 
> 
> Such a patch also solves the problem.  But I'm not sure to understand
> the impact on other codes.  Is this kind of conversion between a type
> and its basetype specific to Ada?
My suspicions appear to be correct.  This never triggers except for
Ada code and it's relatively common in Ada code.  No surprise since
I don't think any other front-end abuses TYPE_MAX_VALUE in the way
the Ada front-end does.  This wouldn't be the first time we've had
to hack up something in the generic optimizers to deal with the
broken TYPE_MAX_VALUE.

jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 18:46     ` Sebastian Pop
  2006-02-24 20:16       ` Jeffrey A Law
@ 2006-02-25  8:48       ` Zdenek Dvorak
  2006-02-27 18:26         ` Jeffrey A Law
  2006-02-27 18:23       ` Jeffrey A Law
  2 siblings, 1 reply; 105+ messages in thread
From: Zdenek Dvorak @ 2006-02-25  8:48 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Jeffrey A Law, Richard Kenner, gcc

Hello,

> Jeffrey A Law wrote:
> > Another possibility is to simply not allow conversions between a
> > subtype and basetype. 
> 
> Such a patch also solves the problem.  But I'm not sure to understand
> the impact on other codes.  Is this kind of conversion between a type
> and its basetype specific to Ada?

this still seems unnecessarily conservative to me.  I would just check
for types whose TYPE_MIN and TYPE_MAX do not match the natural values
derived from the type precision (i.e., those returned by
upper_bound_in_type (type, type) and lower_bound_in_type (type, type)).

Zdenek

> Index: tree-chrec.c
> ===================================================================
> --- tree-chrec.c	(revision 111416)
> +++ tree-chrec.c	(working copy)
> @@ -1207,7 +1207,9 @@ chrec_convert_aggressive (tree type, tre
>      return NULL_TREE;
>  
>    inner_type = TREE_TYPE (chrec);
> -  if (TYPE_PRECISION (type) > TYPE_PRECISION (inner_type))
> +  if (TYPE_PRECISION (type) > TYPE_PRECISION (inner_type)
> +      /* Conversions between a subtype and its basetype are not allowed.  */
> +      || TREE_TYPE (type) == TREE_TYPE (chrec))
>      return NULL_TREE;
>  
>    left = CHREC_LEFT (chrec);

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 20:16       ` Jeffrey A Law
@ 2006-02-24 22:14         ` Sebastian Pop
  0 siblings, 0 replies; 105+ messages in thread
From: Sebastian Pop @ 2006-02-24 22:14 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:
> I suspect, but certainly have not confirmed that these conversions
> are very common in Ada, but relatively rare elsewhere.  Confirming
> that suspicion one way or the other is on my TODO list for the weekend.
> 

okay, thanks for your investigation.  I won't be able to read my email
until Monday as I'm going to the FOSDEM in Bruxelles during the weekend.

Sebastian

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 18:46     ` Sebastian Pop
@ 2006-02-24 20:16       ` Jeffrey A Law
  2006-02-24 22:14         ` Sebastian Pop
  2006-02-25  8:48       ` Zdenek Dvorak
  2006-02-27 18:23       ` Jeffrey A Law
  2 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-24 20:16 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Fri, 2006-02-24 at 19:47 +0100, Sebastian Pop wrote:
> Jeffrey A Law wrote:
> > Another possibility is to simply not allow conversions between a
> > subtype and basetype. 
> 
> Such a patch also solves the problem.  But I'm not sure to understand
> the impact on other codes.  Is this kind of conversion between a type
> and its basetype specific to Ada?
I suspect, but certainly have not confirmed that these conversions
are very common in Ada, but relatively rare elsewhere.  Confirming
that suspicion one way or the other is on my TODO list for the weekend.

jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
@ 2006-02-24 19:22 Richard Kenner
  0 siblings, 0 replies; 105+ messages in thread
From: Richard Kenner @ 2006-02-24 19:22 UTC (permalink / raw)
  To: sebastian.pop; +Cc: gcc

    So if I understand correctly, if we can prove that the operation does
    not overflow in natural___XDLU_0___2147483647, then there is no need
    of a cast to the base type and back.

This stuff is complex, but I'm fairly sure that's correct.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 18:15   ` Jeffrey A Law
@ 2006-02-24 18:46     ` Sebastian Pop
  2006-02-24 20:16       ` Jeffrey A Law
                         ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Sebastian Pop @ 2006-02-24 18:46 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:
> Another possibility is to simply not allow conversions between a
> subtype and basetype. 

Such a patch also solves the problem.  But I'm not sure I understand
the impact on other code.  Is this kind of conversion between a type
and its basetype specific to Ada?

Index: tree-chrec.c
===================================================================
--- tree-chrec.c	(revision 111416)
+++ tree-chrec.c	(working copy)
@@ -1207,7 +1207,9 @@ chrec_convert_aggressive (tree type, tre
     return NULL_TREE;
 
   inner_type = TREE_TYPE (chrec);
-  if (TYPE_PRECISION (type) > TYPE_PRECISION (inner_type))
+  if (TYPE_PRECISION (type) > TYPE_PRECISION (inner_type)
+      /* Conversions between a subtype and its basetype are not allowed.  */
+      || TREE_TYPE (type) == TREE_TYPE (chrec))
     return NULL_TREE;
 
   left = CHREC_LEFT (chrec);

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 17:48   ` Jeffrey A Law
@ 2006-02-24 18:22     ` Sebastian Pop
  0 siblings, 0 replies; 105+ messages in thread
From: Sebastian Pop @ 2006-02-24 18:22 UTC (permalink / raw)
  To: Jeffrey A Law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:
> One possibility that isn't as drastic would be to add one to the
> TYPE_MAX_VALUE of the inner type, then see if the result fits in
> the outer type.  

Yes, that's basically the idea that is implemented in chrec_convert.

Because we're in a loop and we keep adding "one"s to the base, we have
to know how many times we are going to add the step to the base.  Thus
there are three restrictions for a static analyzer: when the base is a
name, when the step is a name and when the number of iterations isn't
known.

If the base is a name and it is not possible to have a bound on its
value, then even if we know that the loop runs a constant number of
times, we cannot infer that the sequence will not overflow.  The same
is true for an unknown step and for an unknown number of iterations.

We can also get some more information on the base and step from the
range of their type.  This is not yet implemented in chrec_convert,
but it will not catch enough cases if we have to stop using the
aggressive convert.
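
(A conceptual sketch of the kind of test involved, in plain C rather than
the tree-chrec API -- for an affine evolution {base, +, step} that runs for
niter iterations, the narrower type can be kept when the last value still
fits:)

    /* all inputs assumed already proven constant or suitably bounded */
    int evolution_fits (long base, long step, long niter,
                        long type_min, long type_max)
    {
      long last = base + step * niter;   /* done in a wide enough type       */
      /* for a monotone affine sequence, checking the two endpoints suffices */
      return base >= type_min && base <= type_max
             && last >= type_min && last <= type_max;
    }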

> Perhaps also limiting that test to unsigned types.
> 

yes, there is a test for flag_wrapv in chrec_convert: it calls
scev_probably_wraps_p.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 16:39 ` Sebastian Pop
  2006-02-24 16:53   ` Jeffrey A Law
  2006-02-24 17:48   ` Jeffrey A Law
@ 2006-02-24 18:15   ` Jeffrey A Law
  2006-02-24 18:46     ` Sebastian Pop
  2 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-24 18:15 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Fri, 2006-02-24 at 18:42 +0100, Sebastian Pop wrote:
> Richard Kenner wrote:
> >     Just to make sure I've dotted the i's and crossed the t's, this is not
> >     what's happening when we hang in VRP when compiling a-textio.
> > 
> >     We convert the incoming object from natural___XDLU_0___2147483647
> >     into its base type, perform the addition in the base type, then
> >     convert back to XDLU_0_2147483647.
> > 
> > The above is exactly what I thought everybody agrees is and should be
> > happening, so I'm confused by your "this is not what's happening"
> > comment above.
> 
> So if I understand correctly, if we can prove that the operation does
> not overflow in natural___XDLU_0___2147483647, then there is no need
> of a cast to the base type and back.
> 
> chrec_convert is checking for non overflowing sequences before
> removing a cast, and that is missing from the aggressive convert.
> Aggressive convert has been intentionally implemented this way because
> the conservative chrec_convert has caused performance regressions (see
> sixtrack slowdowns from last August).
> 
> A patch like the following would solve the problem too, but will
> introduce performance regressions... so I'm not sure that it is a good
> solution.
Another possibility is to simply not allow conversions between a
subtype and basetype.  Again, that might not be as drastic as
your change.

jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 16:39 ` Sebastian Pop
  2006-02-24 16:53   ` Jeffrey A Law
@ 2006-02-24 17:48   ` Jeffrey A Law
  2006-02-24 18:22     ` Sebastian Pop
  2006-02-24 18:15   ` Jeffrey A Law
  2 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-24 17:48 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Fri, 2006-02-24 at 18:42 +0100, Sebastian Pop wrote:
> Richard Kenner wrote:
> >     Just to make sure I've dotted the i's and crossed the t's, this is not
> >     what's happening when we hang in VRP when compiling a-textio.
> > 
> >     We convert the incoming object from natural___XDLU_0___2147483647
> >     into its base type, perform the addition in the base type, then
> >     convert back to XDLU_0_2147483647.
> > 
> > The above is exactly what I thought everybody agrees is and should be
> > happening, so I'm confused by your "this is not what's happening"
> > comment above.
> 
> So if I understand correctly, if we can prove that the operation does
> not overflow in natural___XDLU_0___2147483647, then there is no need
> of a cast to the base type and back.
> 
> chrec_convert is checking for non overflowing sequences before
> removing a cast, and that is missing from the aggressive convert.
> Aggressive convert has been intentionally implemented this way because
> the conservative chrec_convert has caused performance regressions (see
> sixtrack slowdowns from last August).
> 
> A patch like the following would solve the problem too, but will
> introduce performance regressions... so I'm not sure that it is a good
> solution.
One possibility that isn't as drastic would be to add one to the
TYPE_MAX_VALUE of the inner type, then see if the result fits in
the outer type.  Perhaps also limiting that test to unsigned types.
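
(Roughly, in plain C rather than the tree API -- widen both bounds, add one
to the inner type's maximum and see whether the result still fits under the
outer type's maximum:)

    /* sketch only; inner_max/outer_max stand for the TYPE_MAX_VALUEs,
       widened to a 64-bit host value */
    int inner_max_plus_one_fits (long long inner_max, long long outer_max)
    {
      /* same as inner_max + 1 <= outer_max, written without risk of overflow */
      return inner_max < outer_max;
    }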

Then again, that could just be papering over the problem; I
really don't know this code anywhere near enough to know what
a good solution will be.

jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 16:39 ` Sebastian Pop
@ 2006-02-24 16:53   ` Jeffrey A Law
  2006-02-24 17:48   ` Jeffrey A Law
  2006-02-24 18:15   ` Jeffrey A Law
  2 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-24 16:53 UTC (permalink / raw)
  To: Sebastian Pop; +Cc: Richard Kenner, gcc

On Fri, 2006-02-24 at 18:42 +0100, Sebastian Pop wrote:
> Richard Kenner wrote:
> >     Just to make sure I've dotted the i's and crossed the t's, this is not
> >     what's happening when we hang in VRP when compiling a-textio.
> > 
> >     We convert the incoming object from natural___XDLU_0___2147483647
> >     into its base type, perform the addition in the base type, then
> >     convert back to XDLU_0_2147483647.
> > 
> > The above is exactly what I thought everybody agrees is and should be
> > happening, so I'm confused by your "this is not what's happening"
> > comment above.
> 
> So if I understand correctly, if we can prove that the operation does
> not overflow in natural___XDLU_0___2147483647, then there is no need
> of a cast to the base type and back.
And that's still a topic that needs to be thoroughly 
discussed.  We overflow the TYPE_MAX_VALUE, but do not
overflow its TYPE_PRECISION.  ie, TYPE_MIN_VALUE/TYPE_MAX_VALUE
do not cover the set of values allowed by TYPE_PRECISION.

I claim that is a bug in the Ada front-end.  Others might
disagree.

jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24 16:10 Richard Kenner
@ 2006-02-24 16:39 ` Sebastian Pop
  2006-02-24 16:53   ` Jeffrey A Law
                     ` (2 more replies)
  0 siblings, 3 replies; 105+ messages in thread
From: Sebastian Pop @ 2006-02-24 16:39 UTC (permalink / raw)
  To: Richard Kenner; +Cc: law, gcc

Richard Kenner wrote:
>     Just to make sure I've dotted the i's and crossed the t's, this is not
>     what's happening when we hang in VRP when compiling a-textio.
> 
>     We convert the incoming object from natural___XDLU_0___2147483647
>     into its base type, perform the addition in the base type, then
>     convert back to XDLU_0_2147483647.
> 
> The above is exactly what I thought everybody agrees is and should be
> happening, so I'm confused by your "this is not what's happening"
> comment above.

So if I understand correctly, if we can prove that the operation does
not overflow in natural___XDLU_0___2147483647, then there is no need
for a cast to the base type and back.

chrec_convert checks for non-overflowing sequences before
removing a cast, and that check is missing from the aggressive convert.
The aggressive convert has been intentionally implemented this way because
the conservative chrec_convert caused performance regressions (see the
sixtrack slowdowns from last August).

A patch like the following would solve the problem too, but would
introduce performance regressions... so I'm not sure that it is a good
solution.

Index: tree-chrec.c
===================================================================
--- tree-chrec.c	(revision 111416)
+++ tree-chrec.c	(working copy)
@@ -1202,6 +1202,8 @@ chrec_convert_aggressive (tree type, tre
 {
   tree inner_type, left, right, lc, rc;
 
+  return chrec_convert (type, chrec, NULL_TREE);
+
   if (automatically_generated_chrec_p (chrec)
       || TREE_CODE (chrec) != POLYNOMIAL_CHREC)
     return NULL_TREE;


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
@ 2006-02-24 16:10 Richard Kenner
  2006-02-24 16:39 ` Sebastian Pop
  0 siblings, 1 reply; 105+ messages in thread
From: Richard Kenner @ 2006-02-24 16:10 UTC (permalink / raw)
  To: law; +Cc: gcc

    Just to make sure I've dotted the i's and crossed the t's, this is not
    what's happening when we hang in VRP when compiling a-textio.

    We convert the incoming object from natural___XDLU_0___2147483647
    into its base type, perform the addition in the base type, then
    convert back to XDLU_0_2147483647.

The above is exactly what I thought everybody agrees is and should be
happening, so I'm confused by your "this is not what's happening"
comment above.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-24  0:59 ` Jeffrey A Law
@ 2006-02-24  9:48   ` Richard Guenther
  0 siblings, 0 replies; 105+ messages in thread
From: Richard Guenther @ 2006-02-24  9:48 UTC (permalink / raw)
  To: law; +Cc: Richard Kenner, gcc

On 2/24/06, Jeffrey A Law <law@redhat.com> wrote:
> On Wed, 2006-02-22 at 13:25 -0500, Richard Kenner wrote:
> >      When I speak about doing arithmetic in a type, I'm referring to the
> >      type of the expression & its input operands in a gimplified
> >      expression.  At that point I do not care about base types or anything
> >      like that.  All that should matter is TREE_TYPE (expr), nothing more,
> >      nothing less.  How the inputs are converted or how the output is later
> >      converted is not a concern -- all that matters is TREE_TYPE (expr) in
> >      a gimplified tree.
> >
> >      Can we agree on that?
> >
> > Yes.
> >
> > The base type reference is that I'm *also* saying "If you see an arithmetic
> > node where TREE_TYPE is *not* a base type, there's a bug someplace that
> > has to be fixed". (Well, with the exception of such things as sizetypes
> > or subtypes that don't actually change anything.)
> Just to make sure I've dotted the i's and crossed the t's, this
> is not what's happening when we hang in VRP when compiling a-textio.
>
> We convert the incoming object from natural___XDLU_0___2147483647
> into its base type, perform the addition in the base type, then
> convert back to XDLU_0_2147483647.

Which is exactly what I would expect, i.e. fine.  So VRP can assume for

 (XDLU..) ( (base) a + (base) b )

that a and b have values in the range of natural___XDLU_0___2147483647,
the sum is in the range of the base type, and this result gets truncated again
to ___XDLU_0___2147483647.

Which is my understanding of Ada semantics.  There is nothing in Ada semantics,
as far as I understand, that requires possible overflow to be preserved until
after the cast to (XDLU..) for checking with 'Valid - if there is, Ada needs to
use an explicit unchecked conversion, which again has to be implemented using a
VIEW_CONVERT_EXPR.

Any other semantics don't make sense, so the middle-end ought to use these
semantics, and bugs in the middle-end and the Ada frontend uncovered by it
should be fixed.  Better soon than later.

Richard.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-22 18:18 Richard Kenner
  2006-02-22 18:41 ` Jeffrey A Law
@ 2006-02-24  0:59 ` Jeffrey A Law
  2006-02-24  9:48   ` Richard Guenther
  1 sibling, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-24  0:59 UTC (permalink / raw)
  To: Richard Kenner; +Cc: gcc

On Wed, 2006-02-22 at 13:25 -0500, Richard Kenner wrote:
>      When I speak about doing arithmetic in a type, I'm referring to the
>      type of the expression & its input operands in a gimplified
>      expression.  At that point I do not care about base types or anything
>      like that.  All that should matter is TREE_TYPE (expr), nothing more,
>      nothing less.  How the inputs are converted or how the output is later
>      converted is not a concern -- all that matters is TREE_TYPE (expr) in
>      a gimplified tree.
> 
>      Can we agree on that?
> 
> Yes.
> 
> The base type reference is that I'm *also* saying "If you see an arithmetic
> node where TREE_TYPE is *not* a base type, there's a bug someplace that
> has to be fixed". (Well, with the exception of such things as sizetypes
> or subtypes that don't actually change anything.)
Just to make sure I've dotted the i's and crossed the t's, this
is not what's happening when we hang in VRP when compiling a-textio.

We convert the incoming object from natural___XDLU_0___2147483647
into its base type, perform the addition in the base type, then
convert back to XDLU_0_2147483647.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-22 18:18 Richard Kenner
@ 2006-02-22 18:41 ` Jeffrey A Law
  2006-02-24  0:59 ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-22 18:41 UTC (permalink / raw)
  To: Richard Kenner; +Cc: gcc

On Wed, 2006-02-22 at 13:25 -0500, Richard Kenner wrote:
> The base type reference is that I'm *also* saying "If you see an arithmetic
> node where TREE_TYPE is *not* a base type, there's a bug someplace that
> has to be fixed". (Well, with the exception of such things as sizetypes
> or subtypes that don't actually change anything.)
Ok.  That's actually good to know and something worth investigating.
Thanks,
Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
@ 2006-02-22 18:18 Richard Kenner
  2006-02-22 18:41 ` Jeffrey A Law
  2006-02-24  0:59 ` Jeffrey A Law
  0 siblings, 2 replies; 105+ messages in thread
From: Richard Kenner @ 2006-02-22 18:18 UTC (permalink / raw)
  To: law; +Cc: gcc

     When I speak about doing arithmetic in a type, I'm referring to the
     type of the expression & its input operands in a gimplified
     expression.  At that point I do not care about base types or anything
     like that.  All that should matter is TREE_TYPE (expr), nothing more,
     nothing less.  How the inputs are converted or how the output is later
     converted is not a concern -- all that matters is TREE_TYPE (expr) in
     a gimplified tree.

     Can we agree on that?

Yes.

The base type reference is that I'm *also* saying "If you see an arithmetic
node where TREE_TYPE is *not* a base type, there's a bug someplace that
has to be fixed". (Well, with the exception of such things as sizetypes
or subtypes that don't actually change anything.)

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 22:24   ` Laurent GUERBY
  2006-02-21 23:20     ` Robert Dewar
@ 2006-02-22 17:02     ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-22 17:02 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Richard Kenner, gcc

On Tue, 2006-02-21 at 23:23 +0100, Laurent GUERBY wrote:
> On Tue, 2006-02-21 at 15:02 -0700, Jeffrey A Law wrote:
> > Given an expression, we have to do computations in some other type than
> > the type of the expression? Now that's just silly.  If the expression
> > has some type X, then we should be doing our computations in type X.
> 
> That would obviously lead to very inefficient implementation if you
> put that in a language with user range types and bound checking since it
> would force a dynamic bound check after each operation.
I'm saying nothing about where/when bounds checking occurs -- 
Ada is free to insert bounds checking wherever it is necessary
to implement Ada semantics.


In our trees, if I have
PLUS_EXPR <TYPE, X, Y>

That says very simply to add X & Y in type TYPE -- anything else
is wrong.  If our Ada front-end assumes something different, then
that is braindamage in our Ada front-end.

Note carefully I'm saying absolutely nothing about ignoring
user range types, I'm saying nothing about where you insert
bounds checking.  I'm saying that within an arithmetic expression
the type of the expression specifies the type in which the
arithmetic will occur.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 23:11 ` Robert Dewar
@ 2006-02-22 16:55   ` Jeffrey A Law
  0 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-22 16:55 UTC (permalink / raw)
  To: Robert Dewar; +Cc: Richard Kenner, gcc

On Tue, 2006-02-21 at 18:11 -0500, Robert Dewar wrote:
> Richard Kenner wrote:
> 
> > Let me try again and take a simpler example.  If we have
> > 
> > 	subtype T is Integer range 20..50;
> > 
> > 	Y: T;
> > 
> > 	   ... Y + 1 ...
> > 
> > What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
> > T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
> > whose second operand is the constant 1 also of type Integer, not T.
> 
> Note that this *exactly* reflects the formal Ada semantics ...
And that's not what's at issue here.  We agreed on this some time
ago.  The nop-conversions are a necessary requirement to implement
Ada semantics and those nop conversions are and need to be explicit
in the IL.

Assuming the NOP conversions are explicit statements in the IL
(as they are by the time VRP runs), then within an expression we
are type consistent.  ie, TREE_TYPE (expr) == TREE_TYPE (op0) ==
TREE_TYPE (op1) where op0 & op1 are either SSA_NAMEs or invariants.
Furthermore, the operation specified by expr is to be carried out
in TREE_TYPE (expr).
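
To make that concrete, here is a rough C rendering of the gimplified form
of "Z := Y + 1" for a subtype of Integer (the type and temporary names are
invented for illustration):

        typedef int Integer;        /* the base type                         */
        typedef int T;              /* the subtype; narrowed TYPE_MIN/MAX
                                       in the real IL                        */

        T example (T Y)
        {
          Integer D_1 = (Integer) Y;   /* input conversion, its own statement */
          Integer D_2 = D_1 + 1;       /* expr, op0 and op1 all of type Integer */
          T Z = (T) D_2;               /* output conversion; range check site */
          return Z;
        }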

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 22:15 Richard Kenner
  2006-02-21 23:11 ` Robert Dewar
@ 2006-02-22 16:52 ` Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-22 16:52 UTC (permalink / raw)
  To: Richard Kenner; +Cc: gcc

On Tue, 2006-02-21 at 17:22 -0500, Richard Kenner wrote:
>      Given an expression, we have to do computations in some other type than
>      the type of the expression? Now that's just silly.  
> 
> Sure, but that's not what I said.
> 
>      If the expression has some type X, then we should be doing our
>      computations in type X.  
> 
> Right.
> 
> Let me try again and take a simpler example.  If we have
> 
> 	subtype T is Integer range 20..50;
> 
> 	Y: T;
> 
> 	   ... Y + 1 ...
> 
> What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
> T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
> whose second operand is the constant 1 also of type Integer, not T.
In a gimplified tree, for nodes such as PLUS_EXPR, the type
of the inputs, the type of the expression and the type of the
result are all one and the same.  Any input conversions occur
before the arithmetic statement and any output conversions 
occur after the arithmetic statement.  ie, within the statement
we are type consistent.  I think we're in agreement about that.

When I speak about doing arithmetic in a type, I'm referring to the
type of the expression & its input operands in a gimplified expression.
At that point I do not care about base types or anything like that.
All that should matter is TREE_TYPE (expr), nothing more, nothing
less.    How the inputs are converted or how the output is later
converted is not a concern -- all that matters is TREE_TYPE (expr)
in a gimplified tree.

Can we agree on that?

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-22 16:32       ` Jeffrey A Law
@ 2006-02-22 16:48         ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 2006-02-22 16:48 UTC (permalink / raw)
  To: law; +Cc: Laurent GUERBY, Richard Kenner, gcc

Jeffrey A Law wrote:

> TYPE_MIN_VALUE/TYPE_MAX_VALUE should cover the entire range of
> values that can be assigned to a particular object.

Do you mean assigned as in "with an assignment statement", or do
you mean "end up in the object by any mechanism"?
> 
> Most of the other stuff discussed so far is background material to
> that ultimate issue IMHO.
> 
> Jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 23:20     ` Robert Dewar
@ 2006-02-22 16:32       ` Jeffrey A Law
  2006-02-22 16:48         ` Robert Dewar
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-22 16:32 UTC (permalink / raw)
  To: Robert Dewar; +Cc: Laurent GUERBY, Richard Kenner, gcc

On Tue, 2006-02-21 at 18:20 -0500, Robert Dewar wrote:
> Laurent GUERBY wrote:
> 
> > You keep saying "brain damage", but please if you see a better design
> > (other than "forget about user range types" :), let us all know!
> 
> Actually I think everyone agrees on what is appropriate here. It is
> a matter of working out a clear view. I don't think there are any
> real disagreements, just some communication and documentation issues.
> (as well as making sure we don't lose conversions we need!)
Actually, I don't think we have an agreement on whether or not
TYPE_MIN_VALUE/TYPE_MAX_VALUE should cover the entire range of
values that can be assigned to a particular object.

Most of the other stuff discussed so far is background material to
that ultimate issue IMHO.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-22  9:52 ` Richard Guenther
@ 2006-02-22 12:27   ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 2006-02-22 12:27 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Richard Kenner, law, gcc

Richard Guenther wrote:
> On 2/21/06, Richard Kenner <kenner@vlsi1.ultra.nyu.edu> wrote:
>>      But if the values in there do not reflect the reality of what values
>>      are valid for the type, then I don't see how they can be generally
>>      useful -- that's my point.  We have two fields that are inaccurate,
>>      apparently on purpose, and as a result they are basically unusable.
>>
>> No, they *do* reflect the "reality of what values are valid for the type".
>> The only glitch, which is not what we're talking about here, is that you have
>> to have a way to implement the language-defined test to see if the value is
>> valid or not.  However, the need to implement that relatively uncommon test
>> shouldn't drive the basic methodology used to represent types.
> 
> As you mention in another post, an invalid value can only occur (as in, being
> not undefined behavior) with an unchecked conversion.

The use of the term invalid is confusing in connection
with Ada. Ada has two concepts of out-of-range values:

   Invalid values. Caused most notably by uninitialized
   variables. Use of such values is generally not
   erroneous, but rather a bounded error with a small
   subset of reasonable outcomes (program termination,
   exception, or just use the value, but such use cannot
   cause erroneous behavior).

   Abnormal values. Caused e.g. by two tasks messing with
   a variable at the same time. Any use of an abnormal
   variable is erroneous.

The result of a bogus unchecked_conversion is implementation defined.
For GNAT, in the case of discrete types, we say that such a
result is invalid rather than abnormal.

'Valid does not have to work for abnormal values, it must
work correctly for invalid values (in practice it will work
as expected for abnormal values).

> The Ada frontend
> has to make sure then that for
> 
>   BaseType i;
>   SubType x = <unchecked>(SubType)i;
> // now the value stored in x may exceed its range
>   if (Valid (x))
>    ...
> 
> that for the Valid (x) test, a VIEW_CONVERT_EXPR is used to do the comparison(s)
> in the BaseType again.  And using the VIEW_CONVERT_EXPR to tell the compiler
> it cannot look through the cast and infer range information from the type of x.

Sounds exactly right to me.
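
In C terms (the types and bounds below are purely illustrative), the shape
of such a test is:

        #include <string.h>

        typedef int BaseType;
        typedef int SubType;          /* nominal bounds, say, 0 .. 100        */

        int is_valid (const SubType *x)
        {
          BaseType raw;
          memcpy (&raw, x, sizeof raw);   /* plays the role of VIEW_CONVERT_EXPR */
          return raw >= 0 && raw <= 100;  /* comparison done in the base type    */
        }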
> 
> Richard.


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 22:24   ` Laurent GUERBY
@ 2006-02-21 23:20     ` Robert Dewar
  2006-02-22 16:32       ` Jeffrey A Law
  2006-02-22 17:02     ` Jeffrey A Law
  1 sibling, 1 reply; 105+ messages in thread
From: Robert Dewar @ 2006-02-21 23:20 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: law, Richard Kenner, gcc

Laurent GUERBY wrote:

> You keep saying "brain damage", but please if you see a better design
> (other than "forget about user range types" :), let us all know!

Actually I think everyone agrees on what is appropriate here. It is
a matter of working out a clear view. I don't think there are any
real disagreements, just some communication and documentation issues.
(as well as making sure we don't lose conversions we need!)

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 22:15 Richard Kenner
@ 2006-02-21 23:11 ` Robert Dewar
  2006-02-22 16:55   ` Jeffrey A Law
  2006-02-22 16:52 ` Jeffrey A Law
  1 sibling, 1 reply; 105+ messages in thread
From: Robert Dewar @ 2006-02-21 23:11 UTC (permalink / raw)
  To: Richard Kenner; +Cc: law, gcc

Richard Kenner wrote:

> Let me try again and take a simpler example.  If we have
> 
> 	subtype T is Integer range 20..50;
> 
> 	Y: T;
> 
> 	   ... Y + 1 ...
> 
> What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
> T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
> whose second operand is the constant 1 also of type Integer, not T.

Note that this *exactly* reflects the formal Ada semantics ...

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 22:03 ` Jeffrey A Law
@ 2006-02-21 22:24   ` Laurent GUERBY
  2006-02-21 23:20     ` Robert Dewar
  2006-02-22 17:02     ` Jeffrey A Law
  0 siblings, 2 replies; 105+ messages in thread
From: Laurent GUERBY @ 2006-02-21 22:24 UTC (permalink / raw)
  To: law; +Cc: Richard Kenner, gcc

On Tue, 2006-02-21 at 15:02 -0700, Jeffrey A Law wrote:
> ?!?  WTF
> 
> Given an expression, we have to do computations in some other type than
> the type of the expression? Now that's just silly.  If the expression
> has some type X, then we should be doing our computations in type X.

That would obviously lead to a very inefficient implementation if you
put that in a language with user range types and bound checking, since it
would force a dynamic bound check after each operation.

I don't see many other definitions that allow for an efficient
implementation other than the choice the Ada language designers made here
(see the sketch below):
- type bound check on user variable assignment
- intermediate computations made in a type chosen by the compiler with
equal or larger bounds, in practice a convenient machine word with
no checks at all on most intermediate computations
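
A rough C sketch of the difference (the 0 .. 1000 bounds and all names are
made up for illustration):

        #include <stdlib.h>

        typedef int sub_t;    /* user subtype, say 0 .. 1000                  */
        typedef int base_t;   /* base type chosen by the compiler             */

        static sub_t range_check (base_t v)
        {
          if (v < 0 || v > 1000)
            abort ();         /* stands in for raising Constraint_Error       */
          return (sub_t) v;
        }

        /* check after every operation: one check per '+' */
        sub_t sum3_everywhere (sub_t a, sub_t b, sub_t c)
        {
          return range_check (range_check (a + b) + c);
        }

        /* the Ada rule: intermediates in the base type, one check at the end */
        sub_t sum3_ada (sub_t a, sub_t b, sub_t c)
        {
          base_t t = (base_t) a + (base_t) b + (base_t) c;
          return range_check (t);
        }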

You keep saying "brain damage", but please if you see a better design
(other than "forget about user range types" :), let us all know!

Laurent

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
@ 2006-02-21 22:15 Richard Kenner
  2006-02-21 23:11 ` Robert Dewar
  2006-02-22 16:52 ` Jeffrey A Law
  0 siblings, 2 replies; 105+ messages in thread
From: Richard Kenner @ 2006-02-21 22:15 UTC (permalink / raw)
  To: law; +Cc: gcc

     Given an expression, we have to do computations in some other type than
     the type of the expression? Now that's just silly.  

Sure, but that's not what I said.

     If the expression has some type X, then we should be doing our
     computations in type X.  

Right.

Let me try again and take a simpler example.  If we have

	subtype T is Integer range 20..50;

	Y: T;

	   ... Y + 1 ...

What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
whose second operand is the constant 1 also of type Integer, not T.

So the expression is of type Integer and that's what we do the
computation in.

If the context of that expression is
	
	Y := Y + 1;

then there'll be a conversion to type T (and possibly a bounds check)
of the RHS of the MODIFY_EXPR.  VRP will know that the results of the
PLUS_EXPR are [21,51] (not [21,50]!).  The bounds check will be against
[20,50], so VRP could convert it to a test of "!= 51" if it wanted.
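
As a C model of that (the types and the check helper are made up to
illustrate the point, not what the frontend actually emits):

        #include <stdlib.h>

        typedef int Integer;      /* the base type                            */
        typedef int T;            /* subtype T is Integer range 20..50        */

        static T to_T (Integer v)
        {
          if (v < 20 || v > 50)   /* the check implied by the (T) conversion  */
            abort ();             /* stands in for raising Constraint_Error   */
          return (T) v;
        }

        T bump (T y)
        {
          Integer tmp = (Integer) y + 1;   /* PLUS_EXPR in Integer: [21,51]   */
          return to_T (tmp);               /* VRP may reduce the check to
                                              tmp != 51                       */
        }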


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 21:17 Richard Kenner
@ 2006-02-21 22:03 ` Jeffrey A Law
  2006-02-21 22:24   ` Laurent GUERBY
  0 siblings, 1 reply; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-21 22:03 UTC (permalink / raw)
  To: Richard Kenner; +Cc: gcc

On Tue, 2006-02-21 at 16:24 -0500, Richard Kenner wrote:

>      So, back to my example.  If I have an object with a range [0,
>      0x7fffffff] based on the type of the object and I add one to that
>      object, then I can safely conclude that the result of the addition has
>      the range [1, 0x7fffffff].  Right?
> 
> If the addition were in the type of the object, yes.  But it's not supposed
> to be.  It's supposed to be in the *base type* of the object which won't
> have the TYPE_MAX_VALUE restriction so that nobody would try to conclude
> that there was an upper-bound limit.
?!?  WTF

Given an expression, we have to do computations in some other type than
the type of the expression? Now that's just silly.  If the expression
has some type X, then we should be doing our computations in type X.
Not the basetype X'.  If Ada really expects this throughout GCC, then
we've got some major underlying problems.



Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
@ 2006-02-21 21:17 Richard Kenner
  2006-02-21 22:03 ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Richard Kenner @ 2006-02-21 21:17 UTC (permalink / raw)
  To: law; +Cc: gcc

     Umm, why bring up the basetype nonsense at all.  The arithmetic
     is done in whatever type is associated with the expression, not
     the base type.  Nothing else makes sense.  ie, conversions are
     explicit.

The conversions are explicit, but are to the base type, which is also
the type associated with the expression.  By mentioning the base type,
we're just saying what the type of the expression will be.

     So, back to my example.  If I have an object with a range [0,
     0x7fffffff] based on the type of the object and I add one to that
     object, then I can safely conclude that the result of the addition has
     the range [1, 0x7fffffff].  Right?

If the addition were in the type of the object, yes.  But it's not supposed
to be.  It's supposed to be in the *base type* of the object which won't
have the TYPE_MAX_VALUE restriction so that nobody would try to conclude
that there was an upper-bound limit.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 19:35   ` Ada subtypes and base types Robert Dewar
@ 2006-02-21 19:55     ` Jeffrey A Law
  0 siblings, 0 replies; 105+ messages in thread
From: Jeffrey A Law @ 2006-02-21 19:55 UTC (permalink / raw)
  To: Robert Dewar; +Cc: Richard Kenner, gcc

On Tue, 2006-02-21 at 14:34 -0500, Robert Dewar wrote:
> Jeffrey A Law wrote:
> 
> > So, if we have an object with the range based on its type of
> > [0, 0x7fffffff] and we add 1 to that object, the resulting range
> > should be [1, 0x7fffffff].   ie, 0x80000000 is not a valid value
> > for the type.  Right?
> 
> The actual rule in Ada works like this:
> 
>      type X is range 0 .. 16#7fff_ffff#;
> 
>      Y : X;
> 
>      Y := (Y + 1) - 1;
> 
> If Y is X'Last, then the addition of 1 must either raise an
> overflow exception or "work". Works means give its proper
> mathematical value.
> 
> So this assignment can either raise CE, or leave Y unchanged.
> Either is OK.
So in the case above, the set of permissible values is
[1, 0x7fffffff] after the addition, right?   It's also valid
to raise a CE.

Jeff

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 19:28 ` Jeffrey A Law
@ 2006-02-21 19:35   ` Robert Dewar
  2006-02-21 19:55     ` Jeffrey A Law
  0 siblings, 1 reply; 105+ messages in thread
From: Robert Dewar @ 2006-02-21 19:35 UTC (permalink / raw)
  To: law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:

> So, if we have an object with the range based on its type of
> [0, 0x7fffffff] and we add 1 to that object, the resulting range
> should be [1, 0x7fffffff].   ie, 0x80000000 is not a valid value
> for the type.  Right?

The actual rule in Ada works like this:

     type X is range 0 .. 16#7fff_ffff#;

     Y : X;

     Y := (Y + 1) - 1;

If Y is X'Last, then the addition of 1 must either raise an
overflow exception or "work".  "Work" means giving its proper
mathematical value.

So this assignment can either raise CE, or leave Y unchanged.
Either is OK.

If checks are off, then it is probably better to just let
the addition wrap (in fact, it seems pretty clear that Ada
would be better off enabling -fwrapv).
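
For illustration only (plain C, not the frontend's output), with wrapping
arithmetic the round trip is harmless:

        /* Needs -fwrapv (or equivalent) for the intermediate overflow at
           X'Last to be well defined rather than undefined behaviour.  */
        int roundtrip (int y)
        {
          return (y + 1) - 1;   /* wraps at the upper bound and wraps back */
        }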

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 19:04 ` Jeffrey A Law
@ 2006-02-21 19:19   ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 2006-02-21 19:19 UTC (permalink / raw)
  To: law; +Cc: Richard Kenner, gcc

Jeffrey A Law wrote:
> On Tue, 2006-02-21 at 13:57 -0500, Richard Kenner wrote:
>>      Can a conforming program set the object to a value outside of
>>      TYPE_MIN_VALUE/TYPE_MAX_VALUE.
>>
>> Let's forget about the obscure unchecked conversion -> 'Valid case
>> because we're going to handle that in whatever way we need to.
>>
>> So the answer is "no".
> OK.  So if a program sets an object to a value outside 
> TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
> invalid for the purposes of this discussion?

The program is either erroneous, in which case we don't have
to discuss it, or the invalid value is the result of a bounded
error, and we have to do something vaguely reasonable (use
the out of range value, for example, and possibly blow up
as a result).
> 
> Jeff


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-21 17:39       ` Jeffrey A Law
@ 2006-02-21 19:17         ` Robert Dewar
  0 siblings, 0 replies; 105+ messages in thread
From: Robert Dewar @ 2006-02-21 19:17 UTC (permalink / raw)
  To: law; +Cc: Richard Guenther, Laurent GUERBY, Richard Kenner, ebotcazou, gcc


>> Indeed.  Ada should in this case generate
>>
>>   R = (T)( (basetype)100 + (basetype)X - (basetype)X )

It does!
>>
>> i.e. carry out all arithmetic explicitly in the basetype and only for stores
>> and loads use the subtype.
> I'd tend to agree, furthermore, if a pass starts wiping out those
> type conversions, then we've got a bug. 

Right, the type conversions must not be wiped out, that's a
real issue!

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: Ada subtypes and base types
  2006-02-19 19:43 ` Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Laurent GUERBY
@ 2006-02-19 20:04   ` Robert Dewar
  2006-02-20 20:36   ` Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Jeffrey A Law
  1 sibling, 0 replies; 105+ messages in thread
From: Robert Dewar @ 2006-02-19 20:04 UTC (permalink / raw)
  To: Laurent GUERBY; +Cc: Richard Kenner, Jeffrey A Law, ebotcazou, gcc

Laurent GUERBY wrote:
> On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
>>     "Second, for a given integer type (such as
>>     natural___XDLU_0_2147483647), the type for the nodes in TYPE_MIN_VALUE
>>     and TYPE_MAX_VALUE really should be a natural___XDLU_0_2147483647.
>>     ie, the type of an integer constant should be the same as the type of
>>     its min/max values."
>>
>> No, the type of the bounds of a subtype should be the *base type*.  That's
>> how the tree has always looked, as far back as  I can remember.
> 
> This is because intermediate computations can produce results
> outside the subtype range but within the base type range (RM 3.5(6)),
> right?

No, it is because this is the way the language is defined; what
other type could the bounds of a subtype have? Clearly they are
not of the subtype itself (that would be a nonsense recursion).

^ permalink raw reply	[flat|nested] 105+ messages in thread

end of thread, other threads:[~2006-03-31 12:55 UTC | newest]

Thread overview: 105+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-02-21 20:33 Ada subtypes and base types Richard Kenner
2006-02-21 20:54 ` Jeffrey A Law
  -- strict thread matches above, loose matches on Subject: below --
2006-02-27 19:12 Waldek Hebisch
2006-02-28 21:01 ` Laurent GUERBY
2006-03-13 23:31 ` Jeffrey A Law
2006-03-14  2:29   ` Waldek Hebisch
2006-03-14  9:55     ` Duncan Sands
2006-03-17 21:06     ` Jeffrey A Law
2006-03-18 14:52       ` Laurent GUERBY
2006-03-21  3:38         ` Jeffrey A Law
2006-03-21  9:14           ` Laurent GUERBY
2006-03-21 16:42             ` Diego Novillo
2006-03-21 22:30               ` Laurent GUERBY
2006-03-21 17:26           ` Tom Tromey
2006-03-21 18:20             ` Diego Novillo
2006-03-21 14:46       ` Duncan Sands
2006-03-21 16:29         ` Jeffrey A Law
2006-03-21 16:40           ` Andrew Pinski
2006-03-21 17:13           ` Duncan Sands
2006-03-21 17:20             ` Jeffrey A Law
2006-03-21 17:23               ` Duncan Sands
2006-03-21 20:20                 ` Jeffrey A Law
2006-03-21 22:42         ` Jeffrey A Law
2006-03-22 21:29           ` Duncan Sands
2006-03-23 10:36           ` Duncan Sands
2006-03-23 10:44             ` Eric Botcazou
2006-03-23 12:30               ` Duncan Sands
2006-03-23 15:59                 ` Eric Botcazou
2006-03-23 16:15             ` Jeffrey A Law
2006-03-23 16:26               ` Richard Guenther
2006-03-23 16:37                 ` Andrew Pinski
2006-03-23 20:44                   ` Duncan Sands
2006-03-24 19:41           ` Duncan Sands
2006-03-25  0:09             ` Jeffrey A Law
2006-03-25 16:26               ` Duncan Sands
2006-03-25 18:00                 ` Diego Novillo
2006-03-27 21:34                   ` Jeffrey A Law
2006-03-28  1:20                     ` Duncan Sands
2006-03-28 18:50                       ` Jeffrey A Law
2006-03-29  1:11                         ` Duncan Sands
2006-03-29 22:54                           ` Tom Tromey
2006-03-30 13:22                             ` Duncan Sands
2006-03-30 15:45                               ` Diego Novillo
2006-03-30 17:47                             ` Jeffrey A Law
2006-03-30 17:52                               ` Andrew Haley
2006-03-30 18:34                                 ` Jeffrey A Law
2006-03-31 10:40                                   ` Andrew Haley
2006-03-31 11:30                                     ` Jeffrey A Law
2006-03-31 15:38                                       ` Diego Novillo
2006-03-30 19:00                               ` Tom Tromey
2006-03-31 15:58                               ` Duncan Sands
2006-03-30 18:01                           ` Jeffrey A Law
2006-03-15 17:23   ` Michael Matz
2006-03-16  1:53   ` Laurent GUERBY
2006-03-16  4:10     ` Robert Dewar
2006-03-16  7:33       ` Geert Bosch
2006-03-16 10:20         ` Richard Guenther
2006-03-16 11:48           ` Laurent GUERBY
2006-03-16 12:03             ` Richard Guenther
2006-03-16 12:19               ` Richard Guenther
2006-03-16 12:30             ` Richard Guenther
2006-03-16 13:27               ` Richard Guenther
2006-03-16 13:32                 ` Andrew Pinski
2006-03-16 16:11                   ` Richard Guenther
2006-03-16 11:49           ` Geert Bosch
2006-03-16 11:48       ` Laurent GUERBY
2006-03-17 16:48       ` Waldek Hebisch
2006-03-17 19:51         ` Laurent GUERBY
2006-02-24 19:22 Richard Kenner
2006-02-24 16:10 Richard Kenner
2006-02-24 16:39 ` Sebastian Pop
2006-02-24 16:53   ` Jeffrey A Law
2006-02-24 17:48   ` Jeffrey A Law
2006-02-24 18:22     ` Sebastian Pop
2006-02-24 18:15   ` Jeffrey A Law
2006-02-24 18:46     ` Sebastian Pop
2006-02-24 20:16       ` Jeffrey A Law
2006-02-24 22:14         ` Sebastian Pop
2006-02-25  8:48       ` Zdenek Dvorak
2006-02-27 18:26         ` Jeffrey A Law
2006-02-27 19:25           ` Zdenek Dvorak
2006-02-27 18:23       ` Jeffrey A Law
2006-02-27 19:12         ` Sebastian Pop
2006-02-27 19:17           ` Jeffrey A Law
2006-02-22 18:18 Richard Kenner
2006-02-22 18:41 ` Jeffrey A Law
2006-02-24  0:59 ` Jeffrey A Law
2006-02-24  9:48   ` Richard Guenther
2006-02-21 22:15 Richard Kenner
2006-02-21 23:11 ` Robert Dewar
2006-02-22 16:55   ` Jeffrey A Law
2006-02-22 16:52 ` Jeffrey A Law
2006-02-21 21:17 Richard Kenner
2006-02-21 22:03 ` Jeffrey A Law
2006-02-21 22:24   ` Laurent GUERBY
2006-02-21 23:20     ` Robert Dewar
2006-02-22 16:32       ` Jeffrey A Law
2006-02-22 16:48         ` Robert Dewar
2006-02-22 17:02     ` Jeffrey A Law
2006-02-21 19:08 Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Richard Kenner
2006-02-21 19:28 ` Jeffrey A Law
2006-02-21 19:35   ` Ada subtypes and base types Robert Dewar
2006-02-21 19:55     ` Jeffrey A Law
2006-02-21 18:51 Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Richard Kenner
2006-02-21 19:04 ` Jeffrey A Law
2006-02-21 19:19   ` Ada subtypes and base types Robert Dewar
2006-02-21 17:39 Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Richard Kenner
2006-02-22  9:52 ` Richard Guenther
2006-02-22 12:27   ` Ada subtypes and base types Robert Dewar
2006-02-19 19:17 Bootstrap failure on trunk: x86_64-linux-gnu Richard Kenner
2006-02-19 19:43 ` Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Laurent GUERBY
2006-02-19 20:04   ` Ada subtypes and base types Robert Dewar
2006-02-20 20:36   ` Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu) Jeffrey A Law
2006-02-20 21:00     ` Richard Guenther
2006-02-21 17:39       ` Jeffrey A Law
2006-02-21 19:17         ` Ada subtypes and base types Robert Dewar

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).