public inbox for gcc@gcc.gnu.org
* VTA merge?
@ 2009-06-05 10:06 Alexandre Oliva
From: Alexandre Oliva @ 2009-06-05 10:06 UTC (permalink / raw)
  To: gcc

It's been a very long time since I started working in the
var-tracking-assignments branch.  It is finally approaching a state in
which I'm comfortable enough to propose that it be integrated.

Alas, it's not quite finished yet and, unless it is merged, it might
very well never be.  New differences in final RTL between -g and -g0
keep popping up: I just found out one more that went in as recently as
last week (lots of -O3 -g torture testsuite failures in -fcompare-debug
test runs).  After every merge, I spent an inordinate amount of time
addressing such -fcompare-debug regressions.

I'm not complaining.  I actually enjoy doing that, it is fun.  But it
never ends, and it took time away from implementing the features that,
for a long time, were missing.  I think it's close enough to ready now
that I feel it is no longer unfair to request others to share the burden
of keeping GCC from emitting different code when given -g compared with
-g0, a property we should have always ensured.


== What is VTA?

This project aims at getting GCC to emit debug information for local
variables that is always correct, and as complete as possible.  By
correct, I mean, if GCC says a variable is at a certain location at a
certain point in the program, that location must hold the value of the
variable at that point.  By complete, I mean if the value of the
variable is available somewhere, or can be computed from values
available somewhere, then debug information for the variable should tell
the debug information consumer how to obtain or compute it.
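The "complete" half of that goal can be illustrated with a toy model (plain Python, all names and values hypothetical, not GCC internals): even when a variable has been optimized away entirely, debug info can record how to recompute it from values that do survive.

```python
# Toy model of "complete" debug info: variable i was eliminated by the
# optimizers, but j survives, and the compiler knew j == i * 4 at this
# point in the program.  Hypothetical illustration only.

available = {"j": 20}               # what the live locations hold

def debug_expr_for_i(env):
    """The recovery rule debug info would record for i."""
    return env["j"] // 4

print(debug_expr_for_i(available))  # a debugger could still print i: 5
```

DWARF location expressions play this role in practice: instead of naming a register or stack slot, they can describe a computation over values that are still available.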

The key to keeping the mapping between SL (source-level) variables and IR
objects from being corrupted or lost was to introduce explicit IR
mappings that, on the SL side, remained stable fixed points and, on the
IR side, expressions that got naturally adjusted as part of the
optimization process, without any changes to the optimization passes.

Alas, requiring no changes to the passes would have been too good to be
true.  It was
indeed true for several of them, but many that dealt with special
boundary cases such as single or no occurrences of references to a
value, or that counted references to make decisions, had to be taught
how to disregard references that appeared in these new binding IR
elements.  Others had to be taught to disregard these elements when
checking for the absence of intervening code between a pair of
statements or instructions.
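The kind of change involved can be modeled outside GCC.  A toy sketch (plain Python, every name hypothetical) of a use-counting analysis that disregards references occurring in debug binds:

```python
# Toy IR: each statement lists the variables it uses; debug binds are
# flagged.  None of these names correspond to actual GCC structures.

def count_uses(stmts, var):
    """Count references to `var`, skipping debug binds, so that an
    optimization keyed on use counts behaves identically under -g
    and -g0."""
    return sum(stmt["uses"].count(var)
               for stmt in stmts
               if not stmt.get("debug"))

ir = [
    {"uses": ["x"]},                   # y = x + 1
    {"uses": ["x"], "debug": True},    # # DEBUG x_var => x
    {"uses": ["x", "z"]},              # w = x * z
]

print(count_uses(ir, "x"))   # 2, not 3: the debug bind doesn't count
```

A single-use test ("is there exactly one reference?") built on such a count is not perturbed by the presence of debug binds, which is exactly the property the pass changes had to establish.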

In nearly all cases, the changes were trivial, and the need for them was
shown in -fcompare-debug or bootstrap-debug testing.  In a few cases,
changes had to be more elaborate, for disregarding debug uses during
analysis ended up requiring them to be explicitly adjusted afterwards.
For example, substituting a set into its single non-debug use required
adding code to substitute into the debug uses as well.  In most of these
cases, adjusting them would merely avoid loss of debug information.  In
a few, failing to do so could actually cause incorrect debug information
to be output, but there are safety nets in place that avoid this at the
SSA level, causing debug information to be dropped instead.
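The single-use substitution case can be sketched in the same toy model (hypothetical names throughout): folding a set into its use must also rewrite debug binds that mention the now-dead variable, or they would be left naming something that no longer exists.

```python
# Toy model: fold `var = expr` into the remaining references to `var`,
# debug binds included.  Not GCC code; an illustration of the idea.

def fold_single_use_set(stmts, var, expr):
    """Delete the assignment to `var` and substitute `expr` into every
    remaining reference, including debug binds.  Skipping the debug
    binds would leave them referring to a deleted variable, forcing a
    safety net to drop them as 'value unknown' later."""
    out = []
    for stmt in stmts:
        if stmt.get("sets") == var and not stmt.get("debug"):
            continue                          # the set itself goes away
        out.append({**stmt,
                    "uses": [expr if u == var else u
                             for u in stmt["uses"]]})
    return out

ir = [
    {"sets": "t", "uses": ["a"]},       # t = a
    {"uses": ["t"], "debug": True},     # # DEBUG v => t
    {"sets": "y", "uses": ["t"]},       # y = t  (the single real use)
]
folded = fold_single_use_set(ir, "t", "a")
# both the debug bind and the real use now refer to "a" directly
```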

Overall, the amount of changes to the average pass was ridiculously
small, compared both with the amount of code in the pass, and with the
amount of code that would have had to be added for the pass to update
debug info mappings as it performs its pass-specific transformations.  It
might be possible to cover some of these updates by generic code, but
it's precisely in the non-standard transformations that they'd require
additional code.  Simply letting them apply their work to the debug
stuff proved to be quite a successful approach, as I hope anyone who
bothers to look at the patches will verify.


After the binding points are carried and updated throughout
optimizations and IR conversions, we arrive at the var-tracking pass,
where we used to turn register and memory attributes into var_location
annotations.  It is here that VTA does more of its magic.

Using something that vaguely resembles global value numbering, but
without the benefits of SSA, we propagate the bindings and analyze
loads, stores, copies and computations, determining where all copies of
the value of each variable are, so that, if one location is modified, we
can still use another to refer to it in debug information.
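The effect can be mimicked with a toy location tracker (a hypothetical Python model, not the var-tracking code itself):

```python
class LocTracker:
    """Track every location currently holding each (value-numbered)
    value, so that when one location is clobbered, debug info can
    fall back to a surviving copy.  Toy model only."""

    def __init__(self):
        self.holders = {}                     # value -> set of locations

    def store(self, value, loc):
        for held in self.holders.values():    # loc no longer holds
            held.discard(loc)                 # whatever it held before
        self.holders.setdefault(value, set()).add(loc)

    def where(self, value):
        return self.holders.get(value, set())

t = LocTracker()
t.store("V1", "r0")        # x's value is computed into r0 ...
t.store("V1", "slot:x")    # ... and also spilled to x's stack slot
t.store("V2", "r0")        # r0 is then clobbered by another value
# debug info for x can still point at the surviving stack slot
```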

At control flow confluences, we merge the known locations, known values,
computing expressions, etc, as expected.  This is where some work is
still required: although we merge stuff in registers perfectly, we still
don't deal with stack slots properly.  Sometimes they work, but mostly
by chance.  It is the lack of this feature that makes VTA debug
information not uniformly superior to current debug information at this
point.
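The intended merge semantics at a join point can be sketched as an intersection (toy model; the real pass also has to merge value equivalences, which is where the stack-slot work remains):

```python
def merge_at_join(pred_states, value):
    """At a control-flow confluence, a location is known to hold
    `value` only if every incoming edge agrees.  Hypothetical model."""
    sets = [state.get(value, set()) for state in pred_states]
    return set.intersection(*sets) if sets else set()

# Two predecessors: the registers disagree, the stack slot agrees.
then_branch = {"V1": {"r0", "slot:x"}}
else_branch = {"V1": {"r3", "slot:x"}}
joined = merge_at_join([then_branch, else_branch], "V1")
# after the join, only slot:x can be trusted to hold V1
```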

This feature is next on my to-do list and shouldn't take long, but I
wanted to post the bulk of the changes before the GCC Summit, so that
you get a chance to discuss it there.  Unfortunately, I won't be there;
by the time budget for my attendance became available, I was already
committed to participating and organizing several other events later
this month.

Anyhow, since VTA is still missing at least one essential feature, it
shouldn't be enabled by default even if it goes into the trunk.  It
would be nice, however, to have it in, so that people can start testing
it out, verifying that it imposes essentially zero overhead when debug
information (or VTA itself) is not enabled, and that, when VTA is
enabled, the increase in memory use and compile time is tolerable.

Compile time overhead in the var tracking pass was pretty bad as
recently as a couple of months ago, but I managed to bring it down to
something that varies between negligible and not too bad, except for
some hopefully pathological and fixable cases I have yet to look into.
HTML_401F in libjava appears to be *the* worst-case scenario.  Once that
is taken care of, performance- and memory-related bug reports will be
useful.


Furthermore, if VTA goes in, but disabled by default, people can start
testing it on platforms I can't easily test on, and letting me know
about any problems introduced by VTA: hopefully none when it's disabled,
possibly some when it's enabled (say, I haven't tested it on any machine
with delayed branch slots yet), quite likely some when -fcompare-debug
or bootstrap-debug are in use.

I know there are some recent -fcompare-debug regressions on IA64 with
VTA enabled, which I haven't yet gotten around to fixing, and several others in
C++ and even in C with -O3 -g (mentioned above) that show up without
VTA.

I'd very much appreciate any other such reports, and I'm committed to
addressing them as quickly as possible, with the caveat that I won't be
around for most of the second half of June (one more reason to keep VTA
disabled by default at first).


== Submission plan

I've talked to a number of people about how to submit the patch.  There
was consensus that posting it as a single huge patch wouldn't fly.
OTOH, turning it into a series of dozens of small patches that would
have to be tested so that they could be applied incrementally would be
an inordinate amount of work.

An approach that everyone I talked to found acceptable was to first
clear the VTA-independent stuff out of the way (which I started early
this week, and that is now nearly completed), then break up the actual
VTA changes into conceptual components, which would ease review, but
that, save for a few exceptions, would still be applied as a unit.

I broke it up into the following patches, that I'm going to submit soon
to gcc-patches:

cmdline (7K) - new command line flags to turn VTA on or off, as well as
a few debugging options that helped me debug it

ssa (55K) - introduce debug bind stmts in the tree and tuples level

ssa-to-rtl (24K) - convert debug bind stmts to debug insns

rtl (48K) - introduce debug insns in the RTL level

tracking (176K) - turn debug insns into var_location notes

ssa-compare-debug (22K) - fix -fcompare-debug errors that showed up in
the presence of debug bind stmts

rtl-compare-debug (53K) - fix -fcompare-debug errors that showed up in the
presence of debug insns

sched (63K) - fix schedulers (except for sel-sched, which is only partially
fixed, so VTA is not ready for -O3 on IA64) to deal properly
with debug insns

ports (9K) - minor adjustments to ports, mostly to schedulers, to
avoid -fcompare-debug regressions

testsuite-guality (16K) - (still small) debug info quality testsuite

buildopts (4K) - new BUILD_CONFIG options that can test VTA more thoroughly

I realize the division is quite uneven, but I hope this will do.  Most
of the changes in the compare-debug patches are not interdependent and
could be broken up into smaller patches, and even go in after the rest.
The same is probably true for the last four as well, but the first five
pretty much have to go in as a unit.


I haven't fished the ChangeLog entries out of the VTA branch.  The patches
I'm going to post don't have ChangeLog entries at all.  I suppose the
purpose is clear (add VTA), but rather than just taking the incremental
changes to the VTA branch, I'd write a consolidated ChangeLog entry.
But if I did this, I wouldn't be able to post these patches tonight
(oops, it's morning already ;-), and then you probably wouldn't get to
see them before the Summit.  So, please bear with the lack of ChangeLogs
and, if you feel a need to understand some particular change without
asking me, all the patches along with their rationales were posted to
gcc-patches before, but perhaps ChangeLog.vta might be enough to clear
it up:
http://gcc.gnu.org/svn/gcc/branches/var-tracking-assignments-branch/gcc/ChangeLog.vta


For those of you attending the Summit, have a great one.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer


* Re: VTA merge?
From: Richard Guenther @ 2009-06-05 10:19 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc

On Fri, Jun 5, 2009 at 12:05 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
> It's been a very long time since I started working in the
> var-tracking-assignments branch.  It is finally approaching a state in
> which I'm comfortable enough to propose that it be integrated.
>
> Alas, it's not quite finished yet and, unless it is merged, it might
> very well never be.  New differences in final RTL between -g and -g0
> keep popping up: I just found out one more that went in as recently as
> last week (lots of -O3 -g torture testsuite failures in -fcompare-debug
> test runs).  After every merge, I spent an inordinate amount of time
> addressing such -fcompare-debug regressions.
>
> I'm not complaining.  I actually enjoy doing that, it is fun.  But it
> never ends, and it took time away from implementing the features that,
> for a long time, were missing.  I think it's close enough to ready now
> that I feel it is no longer unfair to request others to share the burden
> of keeping GCC from emitting different code when given -g compared with
> -g0, a property we should have always ensured.
>
>
> == What is VTA?
>
> This project aims at getting GCC to emit debug information for local
> variables that is always correct, and as complete as possible.  By
> correct, I mean, if GCC says a variable is at a certain location at a
> certain point in the program, that location must hold the value of the
> variable at that point.  By complete, I mean if the value of the
> variable is available somewhere, or can be computed from values
> available somewhere, then debug information for the variable should tell
> the debug information consumer how to obtain or compute it.
>
> The key to keep the mapping between SL (source-level) variables and IR
> objects from being corrupted or lost was to introduce explicit IR
> mappings that, on the SL hand, remained stable fixed points and, on the
> IR hand, expressions that got naturally adjusted as part of the
> optimization process, without any changes to the optimization passes.
>
> Alas, no changes to the passes would be too good to be true.  It was
> indeed true for several of them, but many that dealt with special
> boundary cases such as single or no occurrences of references to a
> value, or that counted references to make decisions, had to be taught
> how to disregard references that appeared in these new binding IR
> elements.  Others had to be taught to disregard these elements when
> checking for the absence of intervening code between a pair of
> statements or instructions.
>
> In nearly all cases, the changes were trivial, and the need for them was
> shown in -fcompare-debug or bootstrap-debug testing.

So if I understand the above right then VTA is a new source of
code-generation differences with -g vs. -g0.  A possibly quite
bad one (compared to what we have now).

IMHO a much more convincing way to avoid code generation
differences with -g vs. -g0 and VTA would be to _always_ have
the debug statements/instructions around, regardless of -g/-g0
or -fvta or -fno-vta (that would merely switch var-tracking into
the new mode).  This would also ensure we keep a very good
eye on compile-time/memory-usage overhead of the debug
instructions.

As for the var-tracking changes - do they make sense even with
the current state of affairs?  I remember us enhancing var-tracking
for the var-mappings approach as well.

Richard.


* Re: VTA merge?
From: Joseph S. Myers @ 2009-06-05 10:42 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc

On Fri, 5 Jun 2009, Alexandre Oliva wrote:

> testsuite-guality (16K) - (still small) debug info quality testsuite

Has this been reworked as per my previous comments 
<http://gcc.gnu.org/ml/gcc/2008-07/msg00595.html> to use DejaGnu 
interfaces to execute the debugger and test programs so that any host and 
target board files that work for the GDB testsuite will also work for 
running this testsuite?  (Or so board files will at most need small 
changes, since such changes are commonly needed in practice to support a 
new testsuite - but in any case, using the DejaGnu interfaces to support 
the wide range of supported hosts and targets with runtest --host_board 
--target_board.)

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: VTA merge?
From: Alexandre Oliva @ 2009-06-05 10:53 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc

On Jun  5, 2009, Richard Guenther <richard.guenther@gmail.com> wrote:

> So if I understand the above right then VTA is a new source of
> code-generation differences with -g vs. -g0.

It was, but that was before I spent several months stopping it from
being one ;-)

And once VTA is on and bootstrap-debug is the rule rather than the
exception (with RTH's suggestion, it will again be faster than normal
bootstrap, and catch even some regressions that current
BUILD_CONFIG=bootstrap-debug doesn't), it won't be just me catching and
fixing these ;-)

FTR, in the last two or three merges, I've had more -fcompare-debug
regressions with VTA disabled than with it enabled.  Perhaps we should
default to BUILD_CONFIG=bootstrap-debug?  It would be a start, but it
wouldn't have caught all of the recent regressions.  Some of them only
affected C++ and Ada testcases, and bootstrap-debug won't catch these.
It takes -fcompare-debug for the testsuite run or something equivalent
to do so.

Hopefully people who run automated testers can be talked into using the
-fcompare-debug option for the library builds and testsuite runs.

> IMHO a much more convincing way to avoid code generation
> differences with -g vs. -g0 and VTA would be to _always_ have
> the debug statements/instructions around, regardless of -g/-g0

That's an option I haven't discarded, but I wouldn't be able to claim
VTA had zero cost when disabled if that were so.

It might make sense to have an option that emitted all notes but just
discarded them at the end, rather than actually emitting location notes
out of them.  Although I'm not sure how useful it would be: as long as
you can still get debug info without VTA (and you can), you can get the
same effect as such an option:

-fno-var-tracking-assignments, with -g0 or -g, will get you the same
debug info we emit nowadays

-fvar-tracking-assignments followed by strip will get you the same
object code you'd have gotten with the approach you suggest

Since stripping is trivial, and probably the most common use, the most
interesting case is probably the one in which you start out from a
binary that fails and then find out the failure can't be duplicated once
you build with VTA.  Building with -fcompare-debug will let you know
you're running into one of these cases, and then you can resort to
disabling VTA and trying to make do with the sucky debug info we emit
today.

> This would also ensure we keep a very good eye on
> compile-time/memory-usage overhead of the debug instructions.

We can probably think of better ways to waste memory and compile time
;-)

Not that keeping them in check isn't something we should all strive to
do, mind you.

> As of the var-tracking changes - do they make sense even with
> the current state of affairs?

Most of it would just fit in, but it would obviously have to be
retargeted to take the input of known bindings from something else.

> I remember us enhancing var-tracking for the var-mappings approach as
> well.

Yeah, it should be pretty easy to retarget VTA to take, instead of debug
insns, any other source of information that correlates user variables
with locations at points in which they are known at first, and all the
machinery should propagate that information and figure out the rest:
equivalences, confluences, etc.



* Re: VTA merge?
From: Alexandre Oliva @ 2009-06-05 11:11 UTC (permalink / raw)
  To: Joseph S. Myers; +Cc: gcc

On Jun  5, 2009, "Joseph S. Myers" <joseph@codesourcery.com> wrote:

> On Fri, 5 Jun 2009, Alexandre Oliva wrote:
>> testsuite-guality (16K) - (still small) debug info quality testsuite

> Has this been reworked as per my previous comments

Sorry, no, I didn't complete the rework, although I made some changes
towards that end.  But it's still limited to native testing.
I've reworked the harness so that communication with the debugger is now
much simpler, which should make the next step easier.  But I have still
had my focus on adding missing features and fixing compare-debug
regressions, so I could never get 'round to teaching myself enough
dejagnu/expect to refit the harness as you suggested.

But don't think your suggestions were forgotten.  I even mentioned I
wasn't there yet the last time I posted the patch, and it's high on my
to-do list.  Hopefully once we manage to avoid new -fcompare-debug regressions
I'll have more time to complete that task.  Of course I wouldn't mind if
someone beat me to it or taught me the basics on how to do it.  I'm not
even sure where to begin.  I'd start out by looking at the GDB testsuite
to try to figure out how they do it, but any more specific pointers
would be definitely welcome.

Thanks,



* Re: VTA merge?
From: Richard Guenther @ 2009-06-05 11:18 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc

On Fri, Jun 5, 2009 at 12:53 PM, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jun  5, 2009, Richard Guenther <richard.guenther@gmail.com> wrote:
>
>> So if I understand the above right then VTA is a new source of
>> code-generation differences with -g vs. -g0.
>
> It was, but that was before I spent several months stopping it from
> being it ;-)

Obviously ;)

> And once VTA is on and bootstrap-debug is the rule rather than the
> exception (with RTH's suggestion, it will again be faster than normal
> bootstrap, and catch even some regressions that current
> BUILD_CONFIG=bootstrap-debug doesn't), it won't be just me catching and
> fixing these ;-)

IMHO we should make bootstrap-debug (that's the one building
stage2 w/o debug info and stage3 with debug info, correct?) the
default regardless of VTA going in or not.  If it works on the
primary and secondary targets of course ;)

Can you submit a separate patch to do so? (maybe you did already)

> FTR, in the last two or three merges, I've had more -fcompare-debug
> regressions with VTA disabled than with it enabled.  Perhaps we should
> default to BUILD_CONFIG=bootstrap-debug?  It would be a start, but it
> wouldn't have caught all of the recent regressions.  Some of them only
> affected C++ and Ada testcases, and bootstrap-debug won't catch these.
> It takes -fcompare-debug for the testsuite run or something equivalent
> to do so.

bootstrap-debug by default would be a start.

Honestly I don't care too much about -g vs. -g0 differences as we
build everything with -g and strip debug info later.  But passing
bootstrap-debug is a release goal that I will support.

> Hopefully people who run automated testers can be talked into using the
> -fcompare-debug option for the library builds and testsuite runs.
>
>> IMHO a much more convincing way to avoid code generation
>> differences with -g vs. -g0 and VTA would be to _always_ have
>> the debug statements/instructions around, regardless of -g/-g0
>
> That's an option I haven't discarded, but I wouldn't be able to claim
> VTA had zero cost when disabled if that was so.

So what is the overhead of having the debug stmts/insns if you
throw them away before var-tracking and do debug info the old way?

Thanks,
Richard.


* Re: VTA merge?
From: David Edelsohn @ 2009-06-05 12:28 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: gcc

I thought a number of people had concerns that VTA was too expensive
and disruptive for the perceived benefit.

David


* Re: VTA merge?
From: Alexandre Oliva @ 2009-06-05 19:18 UTC (permalink / raw)
  To: David Edelsohn; +Cc: gcc

On Jun  5, 2009, David Edelsohn <dje.gcc@gmail.com> wrote:

> I thought a number of people had concerns that VTA was too expensive
> and disruptive for the perceived benefit.

There were such concerns, indeed.

All we knew back then was that there was room for a lot of improvement
in the quality of debug information, and that debug info quality was a
priority for some and a non-concern for others.

Time went by, code was written, adjustments were made, initial steps
towards measuring debug info quality in our testsuite were taken.  I
guess it is now time to assess whether the concerns voiced before the
implementation started, which I shared myself and took into account in
its design, were sufficiently addressed in the design and in the
implementation.

We can measure some of these things now.  Some can even be measured
objectively ;-)



* Re: VTA merge?
From: Daniel Berlin @ 2009-06-05 20:56 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: David Edelsohn, gcc

>
> We can measure some of these things now.  Some can even be measured
> objectively ;-)

Do you have any of them handy (memory use, compile time with release
checking only, etc) so that we can start the public
argument^H^H^H^H^H^discussion?

;)


* Machine Description Template?
From: Graham Reitz @ 2009-06-05 22:11 UTC (permalink / raw)
  To: gcc


Is there a machine description template in the gcc source tree?

If there is also a template for the 'C header file of macro definitions',
that would be good to know too.

I did a file search for '.md' and there are tons of examples.
Although, I was curious if there was a generic template.
graham 


* Re: Machine Description Template?
From: Ramana Radhakrishnan @ 2009-06-05 22:31 UTC (permalink / raw)
  To: Graham Reitz; +Cc: gcc

On Fri, Jun 5, 2009 at 11:11 PM, Graham Reitz<grahamreitz@gmail.com> wrote:
>
> Is there a machine description template in the gcc file source tree?

There is no template as such but you could look at existing ports for
the basic templates. Google should give you results for previous
questions on this list regarding new ports. There are some links to
other documents about starting new ports in the gcc wiki under the
tutorials and documentation section.


>
> If there is also template for the 'C header file of macro definitions' that
> would be good to know too.

Most of the header files in config/<machinename>/*.h have a
description of the target macros and some values for them. You should
be able to find something there, though the authoritative descriptions
are in the internals documentation.

>
> I did a file search for '.md' and there are tons of examples.  Although, I
> was curious if there was a generic template.

Sadly, you'd have to keep such a template in sync with every version of
gcc, and no one has thought of maintaining something like that.
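In lieu of a maintained template, the general shape of an .md entry can still be sketched.  The following is a hedged, generic define_insn, not taken from any real port; predicates, constraints and assembler syntax all vary from machine to machine:

```lisp
;; Hypothetical sketch of a define_insn for 32-bit register addition.
;; "addsi3" is a standard pattern name; everything else is generic.
(define_insn "addsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (plus:SI (match_operand:SI 1 "register_operand" "%r")
                 (match_operand:SI 2 "register_operand" "r")))]
  ""                        ;; condition: pattern is always available
  "add\t%0, %1, %2")        ;; assembler output template
```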

Best of luck - HTH

Ramana
>
> graham
>


* Re: Machine Description Template?
From: Michael Hope @ 2009-06-05 22:46 UTC (permalink / raw)
  To: Graham Reitz; +Cc: gcc

I've found the MMIX port to be a good place to start.  It's a bit old
but the architecture is nice and simple and the implementation nice and
brief.  Watch out though, as it is a pure 64-bit machine - you'll need
to think SI every time you see DI.

The trick past there is to compare the significant features of your
machine with existing machines.  For example, GCC prefers a 68000
style machine with a set of condition codes, however many machines
only have one condition flag that changes meaning based on what you
are doing.

-- Michael

2009/6/6 Graham Reitz <grahamreitz@gmail.com>:
>
> Is there a machine description template in the gcc file source tree?
>
> If there is also template for the 'C header file of macro definitions' that
> would be good to know too.
>
> I did a file search for '.md' and there are tons of examples.  Although, I
> was curious if there was a generic template.
>
> graham
>


* Re: Machine Description Template?
From: Graham Reitz @ 2009-06-05 22:55 UTC (permalink / raw)
  To: gcc

Excellent!  Thanks Ramana and Michael.

I have been working through sections 16 & 17 of the gccint.info  
document and also read through Hans' 'Porting GCC for Dunces'.

He sure wasn't kidding when he mentioned you would need to read them
several times.

graham


On Jun 5, 2009, at 5:46 PM, Michael Hope wrote:

> I've found the MMIX port to be a good place to start.  It's a bit old
> but the archtecture is nice and simple and the implementation nice and
> brief.  Watch out though as it is a pure 64 bit machine - you'll need
> to think SI every time you see DI.
>
> The trick past there is to compare the significant features of your
> machine with existing machines.  For example, GCC prefers a 68000
> style machine with a set of condition codes, however many machines
> only have one condition flag that changes meaning based on what you
> are doing.
>
> -- Michael
>
> 2009/6/6 Graham Reitz <grahamreitz@gmail.com>:
>>
>> Is there a machine description template in the gcc file source tree?
>>
>> If there is also template for the 'C header file of macro  
>> definitions' that
>> would be good to know too.
>>
>> I did a file search for '.md' and there are tons of examples.   
>> Although, I
>> was curious if there was a generic template.
>>
>> graham
>>


* Re: Machine Description Template?
From: Jeff Law @ 2009-06-05 23:48 UTC (permalink / raw)
  To: Graham Reitz; +Cc: gcc

Graham Reitz wrote:
>
> Is there a machine description template in the gcc file source tree?
>
> If there is also a template for the 'C header file of macro definitions'
> that would be good to know too.
>
> I did a file search for '.md' and there are tons of examples.  
> Although, I was curious if there was a generic template.
>
>
Cygnus/Red Hat once had a generic template for ports; however, I 
seriously doubt it has been kept up-to-date.

The best suggestion I could give would be to identify supported chips 
with similar characteristics as your chip.  Then review how those ports 
handle each common characteristic.

Jeff

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-05 10:19 ` Richard Guenther
  2009-06-05 10:53   ` Alexandre Oliva
@ 2009-06-06  8:12   ` Eric Botcazou
  2009-06-07 21:32     ` Alexandre Oliva
  1 sibling, 1 reply; 39+ messages in thread
From: Eric Botcazou @ 2009-06-06  8:12 UTC (permalink / raw)
  To: Richard Guenther; +Cc: gcc, Alexandre Oliva

> So if I understand the above right then VTA is a new source of
> code-generation differences with -g vs. -g0.  A possibly quite
> bad one (compared to what we have now).

IIUC it's a paradigm shift: currently the absence of differences in the 
generated code is guaranteed by the absence of differences in the IR all the 
way from the initial GENERIC down to the final RTL.  In other words, unless a 
pass makes an active mistake, it preserves the invariant.

With the new approach, the absence of differences in the generated code is 
guaranteed by the absence of differences in the behavior of the compiler for 
different IRs.  In other words, unless a pass actively plays by the rules, it 
breaks the invariant.

-fcompare-debug or not, the former is inherently more robust and IMHO more 
trustworthy than the latter.  So, if we are to ditch the former in favor of 
the latter, the reward from the other side should be sufficiently high.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-05 20:56     ` Daniel Berlin
@ 2009-06-07 20:04       ` Alexandre Oliva
  2009-06-08 16:19         ` Frank Ch. Eigler
  2009-06-08 17:35         ` Diego Novillo
  0 siblings, 2 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-07 20:04 UTC (permalink / raw)
  To: Daniel Berlin; +Cc: David Edelsohn, gcc

On Jun  5, 2009, Daniel Berlin <dberlin@dberlin.org> wrote:

>> 
>> We can measure some of these things now.  Some can even be measured
>> objectively ;-)

> Do you have any of them handy (memory use, compile time with release
> checking only, etc) so that we can start the public
> argument^H^H^H^H^H^discussion?

> ;)

:-)

I don't, really.  Part of the guidance I expected was on what the
relevant measures should be.  I wouldn't want to decide that for myself,
because I can't say for sure that I fully understand the concerns in
people's minds, and I wouldn't want to spend a lot of time collecting
irrelevant data, or data that might be perceived as biased because of my
lack of experience with benchmarks.

That said, I planned on collecting and presenting at least some data,
but I ran out of time before the deadline I'd set for myself (must post
before the summit), while working on addressing the various suggestions
I'd received for the bug fixes and new features I recently submitted.

So the question is, what should I measure?  Memory use for any specific
set of testcases, summarized over a bootstrap with memory use tracking
enabled, something else?  Likewise for compile time?  What else?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-06  8:12   ` Eric Botcazou
@ 2009-06-07 21:32     ` Alexandre Oliva
  2009-06-08  2:49       ` Eric Botcazou
  0 siblings, 1 reply; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-07 21:32 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc

On Jun  6, 2009, Eric Botcazou <ebotcazou@adacore.com> wrote:

>> So if I understand the above right then VTA is a new source of
>> code-generation differences with -g vs. -g0.  A possibly quite
>> bad one (compared to what we have now).

> IIUC it's a paradigm shift: currently the absence of differences in the 
> generated code is guaranteed by the absence of differences in the IR all the 
> way from the initial GENERIC down to the final RTL.  In other words, unless a 
> pass makes an active mistake, it preserves the invariant.

It would be nice if it worked this way, but the dozens of patches to fix
-g/-g0 compile differences I posted over the last several months show
it's really not that simple, because the codegen IR does not tell the
whole story.  We have kind of IR extensions for debug info, for types
and templates, for aliasing information, even for GC of internal data
structures, and all of these do affect codegen, sometimes in very subtle
ways.

It would be nice if things were as you describe above, I agree, but
that's not where we are, and in ways other than the ones I mentioned
above.

Speaking specifically of debug information, the little attention given
to preserving information needed to generate correct debug info means
that introducing errors is not just a matter of active mistakes.  Most
of the debug information errors we have now do not follow from actively
breaking debug info data structures, but rather from passively failing
to keep them up to date, or even missing data structures to that end.

Now, once we realize we need additional data structures to retain a
correct mapping between source-level constructs and the result of
transformations that occur throughout compilation, it shouldn't be hard
to realize that there are two options:

1. maintain those IR data structures regardless of whether we're
emitting debug information, spending computation time and memory to keep
computation identical, so as to avoid risk, and in the end discard the
results the user didn't ask for; or

2. avoid the unnecessary computation and memory use by accepting that
there are going to be IR differences between compilations with or
without -g, and work towards minimizing the risks of such differences.


I can certainly understand the wish to keep debug info IR out of sight,
and have it all be maintained sort of by magic, without need for
developers to even think about it.  While I share that wish and even
tried to satisfy it in the design, I've come to the conclusion that it
can't be done.  And it's not just a “it can't be done without major
surgery in GCC as it is today”, it's a “it can't be done at all”.  Let
me share with you the example by which I proved it to myself.

Consider an IR API that offers interfaces to remove and add operations
to sequences.  If you want to move an operation, you remove it from its
original position and add it to another.  Problem is, the moment you
remove it, any debug info monitor running behind the scenes has to
behave as if the operation would no longer be seen, making any changes
to the debug info IR so as to minimize the absence of that operation,
and not keeping any references to it.  Then, when that operation is
re-added, debug info monitor must deal with it as a new operation, so it
can't fully recover from whatever loss of debug info the removal
caused.

The loss can be even greater if the operation, rather than being just
moved, is re-created, without concern for debug information.  Think
removing an insn and creating a new insn out of its pattern, without
preserving the debug info locators.

Would you consider this kind of transformation an active mistake, or
failure to play by the rules?
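To make the scenario concrete, here is a toy model of it -- invented
names throughout, not GCC's actual API -- showing why a remove/add pair
forces a debug-info monitor to drop its bindings, while an atomic move
primitive would let them survive:

```python
# Toy model of an IR sequence watched by a debug-info monitor.
# All class and method names here are hypothetical illustrations.

class DebugMonitor:
    """Tracks which operation currently defines each user variable."""
    def __init__(self):
        self.var_to_op = {}

    def on_add(self, op):
        # A newly added op defines its variable afresh.
        if op.var is not None:
            self.var_to_op[op.var] = op

    def on_remove(self, op):
        # The monitor must assume the op is gone for good: it may not
        # keep a dangling reference, so the binding is dropped and the
        # variable's location becomes "optimized out" at this point.
        if op.var is not None and self.var_to_op.get(op.var) is op:
            del self.var_to_op[op.var]

class Op:
    def __init__(self, name, var=None):
        self.name, self.var = name, var

class Sequence:
    def __init__(self, monitor):
        self.ops, self.monitor = [], monitor
    def add(self, op, pos=None):
        self.ops.insert(len(self.ops) if pos is None else pos, op)
        self.monitor.on_add(op)
    def remove(self, op):
        self.ops.remove(op)
        self.monitor.on_remove(op)
    def move(self, op, pos):
        # An atomic move never tells the monitor the op went away,
        # so the variable binding survives the reordering.
        self.ops.remove(op)
        self.ops.insert(pos, op)

mon = DebugMonitor()
seq = Sequence(mon)
op = Op("x = a + b", var="x")
seq.add(op)

# remove + re-add: the binding is dropped at removal time; whatever
# the monitor gave up in between cannot be fully recovered.
seq.remove(op)
assert "x" not in mon.var_to_op   # debug info degraded here
seq.add(op, pos=0)

# move: the binding is never dropped.
seq.move(op, 0)
assert mon.var_to_op["x"] is op
```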


Even if the API is extended so as to move operations without loss of
debug info, and all existing pairs of remove/add that could/should be
implemented in terms of this new interface, new code could still be
added that used remove and add rather than move.  This would generate
exactly the same executable code, but it would prevent debug information
from being preserved.

Would you qualify the addition of new such code as active mistakes, or
failure to play by the rules?

After pondering about this, do you agree that paying attention to debug
information concerns is not only something we already do routinely (just
not enough), it is something that can't really be helped?

If so, the question turns into how much computation you're willing to
perform and baggage you're willing to carry to reduce the risk of errors
caused by deviations from the rules.

Apparently most GCC developers don't mind carrying around the source
locator information in INSNs, EXPRs and STMTs, even though the lack of
care for checking their correctness has led to very many errors that
went undetected over the years.  Andrew Macleod and Aldy Hernandez have
been giving these issues a lot more attention than I have, and they can
probably say more about how deep the hole created by all these years of
erosion got.

Apparently most GCC developers don't mind carrying around the original
declaration information in attributes of REGs and MEMs, used exclusively
for debug information.  AFAICT, for codegen, alias set numbers in MEMs
would suffice, but it takes actual effort to maintain the attributes
during some transformations, and although there are routines that
simplify this, nothing stops people from using the “old way” of, say,
creating MEMs with different offsets, and new such occurrences would
show up every now and then shortly after the attrs were added.

However, there is a clear interest in reducing memory use and compile
time, and avoiding needless computation for debug info when no debug
info is wanted is one of the points of high concern during the
discussions held about better debug info over the last 2 years or so.
The design I proposed enables this reduction, but it can certainly work
in the wasteful way we've approached this issue so far.


And then, if we look into the risks of errors arising from the current
stance and the one of VTA, it appears to me that the current stance
favors codegen correctness without any concern for debug info
correctness, whereas the VTA approach introduces some risk of generation
of different but equally correct code, with much greater likelihood of
correct debug info.

I write different but equally correct because the presence of debug
stmts/insns can at most be an inhibitor to optimizations that don't
disregard them.  I'd be the last person to dismiss the requirement of
generating the same executable code regardless of debug info options,
but considering the reasoning below, I believe the risk is acceptable:

- the availability of -fcompare-debug, and its regular use as part of
the development process, will reduce by far the likelihood of running
into this kind of problem

- if you used VTA during the development phase and find that compilation
without debug info breaks your program, -fcompare-debug will confirm the
diagnosis and then you can compile with VTA debug info and strip it off
afterwards

- if you're investigating an error in a program originally compiled
without debug info and you can't duplicate it with VTA enabled, you can
confirm the diagnosis with -fcompare-debug and then refrain from
enabling VTA.  You'll then be no worse off than we are today, for you'll
then still be able to use the same debug info we generate today.

Considering how many latent -g/-g0 errors I've fixed myself because of
the introduction of machinery to detect them, and how many new ones were
introduced since I started monitoring them, I know the current design
doesn't offer the guarantees you seem to have been counting on.

This is obviously no excuse to go wild, counting on a safety net to keep
things right.  But the proposal on the table is certainly not a wild
one ;-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-07 21:32     ` Alexandre Oliva
@ 2009-06-08  2:49       ` Eric Botcazou
  2009-06-08 21:31         ` Alexandre Oliva
  0 siblings, 1 reply; 39+ messages in thread
From: Eric Botcazou @ 2009-06-08  2:49 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Richard Guenther, gcc

> It would be nice if it worked this way, but the dozens of patches to fix
> -g/-g0 compile differences I posted over the last several months show
> it's really not that simple, because the codegen IR does not tell the
> whole story.  We have kind of IR extensions for debug info, for types
> and templates, for aliasing information, even for GC of internal data
> structures, and all of these do affect codegen, sometimes in very subtle
> ways.

Yes, precisely, they are IR extensions, most passes shouldn't have to bother 
with them.  Fixing bugs there can probably be done once for all passes.

> Speaking specifically of debug information, the little attention given
> to preserving information needed to generate correct debug info means
> that introducing errors is not just a matter of active mistakes.  Most
> of the debug information errors we have now do not follow from actively
> breaking debug info data structures, but rather from passively failing
> to keep them up to date, or even missing data structures to that end.

I was only talking about code generation, not debug info generation.

> I can certainly understand the wish to keep debug info IR out of sight,
> and have it all be maintained sort of by magic, without need for
> developers to even think about it.  While I share that wish and even
> tried to satisfy it in the design, I've come to the conclusion that it
> can't be done.  And it's not just a “it can't be done without major
> surgery in GCC as it is today”, it's a “it can't be done at all”.

Well understood.  So, in the end, we seem to agree that your approach is 
fundamentally different from what we have now.  I only added that in my 
opinion it is inherently less robust as far as -g vs -g0 code is concerned 
(unless it is enabled unconditionally) and we shouldn't trade this loss of 
robustness for nothing.

-- 
Eric Botcazou

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-07 20:04       ` Alexandre Oliva
@ 2009-06-08 16:19         ` Frank Ch. Eigler
  2009-06-08 17:35         ` Diego Novillo
  1 sibling, 0 replies; 39+ messages in thread
From: Frank Ch. Eigler @ 2009-06-08 16:19 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Daniel Berlin, David Edelsohn, gcc

Alexandre Oliva <aoliva@redhat.com> writes:

>> Do you have any of them handy (memory use, compile time with release
>> checking only, etc) so that we can start the public
>> argument^H^H^H^H^H^discussion?

> I don't, really.  Part of the guidance I expected was on what the
> relevant measures should be.  [...]

Well, disregard "disruptiveness" for now, which people can judge for
themselves by looking at the new code.

As for "costs" in terms of compile time/space and output size, you
should definitely present some preliminary data please.  For example,
the time/space for a plain bootstrap with vs. without the vta patches
applied.  Then another one with "-g" vs "-g0" vs whatever corresponds
to "full vta" - local variable debuginfo.

As for "benefits", you could give some gdb (or systemtap :-) session
transcripts that show the new data being used.


- FChE

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-07 20:04       ` Alexandre Oliva
  2009-06-08 16:19         ` Frank Ch. Eigler
@ 2009-06-08 17:35         ` Diego Novillo
  2009-06-08 21:04           ` Alexandre Oliva
                             ` (2 more replies)
  1 sibling, 3 replies; 39+ messages in thread
From: Diego Novillo @ 2009-06-08 17:35 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Daniel Berlin, David Edelsohn, gcc

On Sun, Jun 7, 2009 at 16:04, Alexandre Oliva<aoliva@redhat.com> wrote:

> So the question is, what should I measure?  Memory use for any specific
> set of testcases, summarized over a bootstrap with memory use tracking
> enabled, something else?  Likewise for compile time?  What else?

Some quick measurements I'd be interested in:

- Size of the IL over some standard code bodies
  (http://gcc.gnu.org/wiki/PerformanceTesting).
- Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
- Compile time in cc1/cc1plus at -Ox -g.
- Performance differences over SPEC2006 and the other benchmarks
  we keep track of.

Do all these comparisons against mainline as of the last merge
point.

The other set of measurements that would be interesting are
probably harder to specify.  I would like to have a set of
criteria or guidelines of what should a pass writer keep in mind
to make sure that their transformations do not butcher debug
information.  From what I understand, there are two situations
that need handling:

- When doing analysis, passes should explicitly ignore certain
  artifacts that carry debugging info.

- When applying transformations, passes should
  generate/move/modify those artifacts.

Documentation should describe exactly what those artifacts are
and how should they be handled.

I'd like to have a metric of intrusiveness that can be tied to
the quality of the debugging information:

- What percentage of code in a pass is dedicated exclusively to
  handling debug info?
- What is the point of diminishing returns?  If I write 30% more
  to keep track of debug info, will the debug info get 30%
  better?
- What does it mean for debug info to be 30% better?  How do
  we measure 'debug info goodness'?
- Does keeping debug info up-to-date introduce algorithmic
  changes to the pass?

Clearly, if one needs to dedicate a large portion of the pass
just to handle debug information, that is going to be a very hard
sell.  Keeping perfect debug information at any cost is not
sustainable long term.


Diego.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-08 17:35         ` Diego Novillo
@ 2009-06-08 21:04           ` Alexandre Oliva
  2009-06-08 21:30             ` Joe Buck
  2009-06-18 11:04             ` Diego Novillo
  2009-06-18  8:37           ` Alexandre Oliva
  2009-07-03 10:15           ` VTA compile time memory usage comparison Jakub Jelinek
  2 siblings, 2 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-08 21:04 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Daniel Berlin, David Edelsohn, gcc

On Jun  8, 2009, Diego Novillo <dnovillo@google.com> wrote:

> - Performance differences over SPEC2006 and the other benchmarks
>   we keep track of.

This one is trivial: none whatsoever.  The generated code is the same,
and it *must* be the same.  Debug information must never change the
generated code, and VTA is all about debug information.  There's a lot
of infrastructure to ensure that code remains unchanged, and
-fcompare-debug testing backs this up.  It doesn't make much sense to
run the same code twice to verify that it performs the same, does it?
:-)


> Do all these comparisons against mainline as of the last merge
> point.

I'll start performing the other measurements you requested.  Please be
patient, it will take some time until I figure out how to use the
scripts you pointed at me and locate the code bases.

For the measurements, I won't use the last merge, but rather the trunk
(in which most of the infrastructure patches were already installed,
with minor changes) vs trunk+the posted patchset.  Or maybe I'll do
another merge into the branch, so that we have exact revisions in the
SVN tree to refer to.  I hope you don't mind that I make the tests in a
slightly different tree (it's easier for me, and shouldn't make any
difference for you), but if you insist, I'll do exactly what you
suggested.


> I would like to have a set of criteria or guidelines of what should a
> pass writer keep in mind to make sure that their transformations do
> not butcher debug information.

I've already written about this.  Butchering debug information with this
design is very hard.  Basically, you have to work very hard to break it,
because it's designed so that, unless you actively stop transformations
that are made to executable code from also applying to debug
annotations, you'll keep it up to date and correct.

What needs to be taken care of is something else: avoiding codegen
differences.  This means that whatever factors you use to make decisions
on whether or not to make a transformation shouldn't take debug
annotations into account.  E.g., if you count how many references there
are to a certain DEF, don't take the debug USEs into account.  If you
count how many STMTs there are in a function or block to decide whether
to inline it or duplicate it, don't count the annotations.

And then, in the cases in which a transformation is made when there is
only one (non-debug) reference to a name, it is probably useful to
update any debug insns that refer to that name.  If you don't, debug
info will be less complete, but still correct, at least in SSA land.  In
post-reload RTL it's more important to fix these things up, otherwise
you might end up with incorrect debug info.

That's all.  Doesn't sound that bad, does it?
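The counting rule above can be sketched in a few lines -- this is a toy
illustration with invented names, not GCC's real data structures or
predicates:

```python
# Toy sketch: when deciding whether a DEF has a single use, debug
# references must not be counted, or compiling with -g would change
# the optimization decision.  All names here are hypothetical.

class Stmt:
    def __init__(self, text, is_debug=False):
        self.text, self.is_debug = text, is_debug

def nondebug_uses(uses):
    """Keep only real uses; debug annotations are invisible to codegen."""
    return [u for u in uses if not u.is_debug]

uses_of_x = [
    Stmt("y = x + 1"),                          # real use
    Stmt("# DEBUG x_var => x", is_debug=True),  # debug annotation
]

# With -g the list has two entries, with -g0 only one -- but the
# decision must be identical, so filter before counting.
single_use = len(nondebug_uses(uses_of_x)) == 1
assert single_use

# If the transformation then eliminates x, any debug annotations that
# referred to it should be updated (e.g. to x's replacement) rather
# than left dangling: the info stays correct, if less complete.
```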

> From what I understand, there are two situations
> that need handling:

> - When doing analysis, passes should explicitly ignore certain
>   artifacts that carry debugging info.

Yup.  This is where most of the few changes go.  If you fail to do that
where you should, you get -fcompare-debug errors or slightly different
code.

> - When applying transformations, passes should
>   generate/move/modify those artifacts.

Only in very rare circumstances (the 1- or 0-reference special cases) do
they need special attention.  In nearly all cases, because of their
nature, they're correctly updated just like the optimizer would have to
do to any other piece of code.

> Documentation should describe exactly what those artifacts are
> and how should they be handled.

Are the 3 paragraphs above clear enough?

When in the documentation do you suggest this should go?

> - What percentage of code in a pass is dedicated exclusively to
>   handling debug info?

In nearly all of the tree passes, it's one or two lines per file, if
it's that much.  In RTL it's sometimes a bit more than that.

> - What is the point of diminishing returns?  If I write 30% more
>   to keep track of debug info, will the debug info get 30%
>   better?

See below.

> - What does it mean for debug info to be 30% better?  How do
>   we measure 'debug info goodness'?

I don't know how to measure “30% better” debug info.  Do you have a
criterion to suggest?

I see at least two dimensions for measuring debug info improvements:
correctness and completeness.  Currently we suck at both.

VTA's design is such that the infrastructure work I've done over its
development addresses the correctness problem once and for all.  The
remaining improvements are in completeness, and those are going to be
(i) in the var-tracking pass and debug info emitters, that still can't
or don't know how to use all the information that reaches them, and (ii)
in passes that currently discard or invalidate debug annotations (so
that variables end up marked as optimized out), but that could retain it
with a bit of additional work.  I don't have any actual examples of
(ii), I'm only aware of their theoretical possibility, so I can't
quantify additional work required for that.  That said, the additional
work would be explicitly optional, and certainly not necessarily taken
up by the maintainer of the pass, but rather by someone interested in
debug information.


> - Does keeping debug info up-to-date introduce algorithmic
>   changes to the pass?

For passes that make changes regardless of how many references there are
to names, no changes whatsoever are required.  Debug annotations will be
updated just like any other pieces of code.

For passes that make changes for boundary cases (say 1 or 0 references),
it is useful (but only mandatory post reload) to modify the algorithm to
also update any other debug stmts/insns that refer to the modified item.


> Clearly, if one needs to dedicate a large portion of the pass
> just to handle debug information, that is going to be a very hard
> sell.

Agreed.  That's why it was designed to be absolutely trivial.  All the
-fcompare-debug debugging strongly supports that it is so.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-08 21:04           ` Alexandre Oliva
@ 2009-06-08 21:30             ` Joe Buck
  2009-06-09  1:15               ` Alexandre Oliva
  2009-06-18 11:04             ` Diego Novillo
  1 sibling, 1 reply; 39+ messages in thread
From: Joe Buck @ 2009-06-08 21:30 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

On Mon, Jun 08, 2009 at 02:03:53PM -0700, Alexandre Oliva wrote:
> On Jun  8, 2009, Diego Novillo <dnovillo@google.com> wrote:
> 
> > - Performance differences over SPEC2006 and the other benchmarks
> >   we keep track of.
> 
> This one is trivial: none whatsoever.  The generated code is the same,
> and it *must* be the same.  Debug information must never change the
> generated code, and VTA is all about debug information.  There's a lot
> of infrastructure to ensure that code remains the unchanged, and
> -fcompare-debug testing backs this up.  It doesn't make much sense to
> run the same code twice to verify that it performs the same, does it?

I haven't kept careful track, but at one point you were talking about
inhibiting some optimizations because they made it harder to keep the
debug information precise.  Is this no longer an issue?  Do you require
that any optimizations that are now in the trunk be disabled?

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-08  2:49       ` Eric Botcazou
@ 2009-06-08 21:31         ` Alexandre Oliva
  0 siblings, 0 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-08 21:31 UTC (permalink / raw)
  To: Eric Botcazou; +Cc: Richard Guenther, gcc

On Jun  7, 2009, Eric Botcazou <ebotcazou@adacore.com> wrote:

>> It would be nice if it worked this way, but the dozens of patches to fix
>> -g/-g0 compile differences I posted over the last several months show
>> it's really not that simple, because the codegen IR does not tell the
>> whole story.  We have kind of IR extensions for debug info, for types
>> and templates, for aliasing information, even for GC of internal data
>> structures, and all of these do affect codegen, sometimes in very subtle
>> ways.

> Yes, precisely, they are IR extensions, most passes shouldn't have to bother 
> with them.

But they do, and we don't mind.  Just count the occurrences of
preserving/copying locations in expr trees, in insns, attributes in REGs
and MEMs.  It's quite a lot.

> Fixing bugs there can probably be done once for all passes.

Unfortunately that's not how things have worked in the past.  Every pass
had to be adjusted over time to stop debug info from being lost, and
Andrew says there's still a lot of work to be done just for correct line
number info, as he found out trying to stuff variable location
information into the infrastructure we used for line numbers.

On top of that, the assumption that the extensions don't require passes
to bother with them is unfortunately false.  The misuse of the
extensions has caused a number of codegen bugs over time, and the
patches I posted and installed recently are only a few examples of that;
several others were posted, approved and installed along the way over
the past year or so.  They weren't debug info errors, they were codegen
errors caused by existing debug info IR extensions.

Part of the problem, I think, is precisely that they were so invisible
that people often forgot their existence and their interaction, and got
sloppy about it beyond the point of rupture.  That's why, in my design,
I focused on optimizing for sloppiness: if you do nothing, you still get
debug info right, and if you care only about codegen, you will notice
codegen issues if you forgot to take debug info into account where it
mattered.

> So, in the end, we seem to agree that your approach is 
> fundamentally different from what we have now.

In some senses, yes.  In others, it's quite the opposite.

It's no different in that it's still there, but mostly unnoticed, like
file:line information and REG/MEM attrs.

It's completely different in that, if you totally forget about it, you
don't get broken auto var location debug info, and you might actually be
reminded, during testing, that your code failed to take debug info into
account, because it caused codegen differences which you do care about.

Isn't that great?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-08 21:30             ` Joe Buck
@ 2009-06-09  1:15               ` Alexandre Oliva
  0 siblings, 0 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-09  1:15 UTC (permalink / raw)
  To: Joe Buck; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

On Jun  8, 2009, Joe Buck <Joe.Buck@synopsys.COM> wrote:

> I haven't kept careful track, but at one point you were talking about
> inhibiting some optimizations because they made it harder to keep the
> debug information precise.  Is this no longer an issue?

No, it never was, it must have been some misunderstanding.  I've never
planned on inhibiting any optimizations whatsoever as part of VTA.  The
plan has always been to represent the result of optimizations, not to
modify optimizers.

I suppose there may have been some confusion because of the patch to do
less SSA coalescing to try to improve debug info, long before VTA even
started.  This issue came up again after VTA development was underway,
when it became clear that we could coalesce more, rather than less, and
still get correct and complete debug info.

It is the current trunk code that throttles optimization for better
debug information.  VTA doesn't need that.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: Machine Description Template?
  2009-06-05 22:55         ` Graham Reitz
@ 2009-06-09  8:57           ` Martin Guy
  0 siblings, 0 replies; 39+ messages in thread
From: Martin Guy @ 2009-06-09  8:57 UTC (permalink / raw)
  To: Graham Reitz; +Cc: gcc

On 6/5/09, Graham Reitz <grahamreitz@gmail.com> wrote:
> I have been working through sections 16 & 17 of the gccint.info
> document and also read through Hans' 'Porting GCC for Dunces'.

There is also "Incremental Machine Descriptions for GCC"
http://www.cse.iitb.ac.in/~uday/soft-copies/incrementalMD.pdf
which describes creation of a new, clean machine description from scratch

    M

* Re: Machine Description Template?
  2009-06-05 22:11     ` Machine Description Template? Graham Reitz
                         ` (2 preceding siblings ...)
  2009-06-05 23:48       ` Jeff Law
@ 2009-06-12 21:36       ` Michael Meissner
  3 siblings, 0 replies; 39+ messages in thread
From: Michael Meissner @ 2009-06-12 21:36 UTC (permalink / raw)
  To: Graham Reitz; +Cc: gcc

On Fri, Jun 05, 2009 at 05:11:06PM -0500, Graham Reitz wrote:
> 
> Is there a machine description template in the gcc file source tree?
> 
> If there is also template for the 'C header file of macro definitions'  
> that would be good to know too.
> 
> I did a file search for '.md' and there are tons of examples.   
> Although, I was curious if there was a generic template.

Many years ago, I wrote a generic machine description that was intended to be a template
for this, but it quickly became out of date and useless.  I'm not aware of a
more modern version.

-- 
Michael Meissner, IBM
4 Technology Place Drive, MS 2203A, Westford, MA, 01886, USA
meissner@linux.vnet.ibm.com

* Re: VTA merge?
  2009-06-08 17:35         ` Diego Novillo
  2009-06-08 21:04           ` Alexandre Oliva
@ 2009-06-18  8:37           ` Alexandre Oliva
  2009-06-18 10:00             ` Paolo Bonzini
  2009-06-21  5:01             ` Alexandre Oliva
  2009-07-03 10:15           ` VTA compile time memory usage comparison Jakub Jelinek
  2 siblings, 2 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-18  8:37 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Daniel Berlin, David Edelsohn, gcc

Hi,

Sorry that it's taking me so long to get back to you on this.  I wanted
to finish a bunch of patches and check them in, then perform a merge,
before proceeding to the tests, so that the performance results could be
easily duplicated.  This took longer than I'd anticipated.

On Jun  8, 2009, Diego Novillo <dnovillo@google.com> wrote:

> - Size of the IL over some standard code bodies
>   (http://gcc.gnu.org/wiki/PerformanceTesting).

I started looking at this wiki page last night.  I expected to find
something in there to measure the size of the IL, but nothing jumped out
at me.  Are you speaking of taking the sizes of tree or rtl dumps, or is
there a more accurate measure?

> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.

Wouldn't this be expected to be strongly correlated with the above?  Is
-fmem-report processed by mem-stats what you're after?

> - Compile time in cc1/cc1plus at -Ox -g.

While trying to figure out the items above, I've been working on this
first, although now I realize there's -ftime-report and a time-stats
that you might have wanted instead.

Anyhow...  I'll present the methodology and results I have so far,
mainly because I expect to be mostly away from computers starting some
time tomorrow (*), to return only on the 28th.

(*) still pending a fix for tickets that were purchased incorrectly for
a Free Software event in which I'm expected to speak; I might end up not
flying, and stay around till Tuesday.

On x86_64-linux-gnu, I bootstrapped and installed
tags/var-tracking-assignments-merge-148582-trunk (trunk@148582) and
tags/var-tracking-assignments-merge-148582-after
(branches/var-tracking-assignments-branch@148600), then used these
toolchains to build and install --disable-bootstrap --enable-languages=c
toolchains with -O2 -g0.  These C-only toolchains were the ones I used
for the performance tests below.

Then, I configured and built, out of the sources in the vta branch, 6
variants of GCC, all of them --disable-bootstrap --enable-languages=c,
with CFLAGS="-O2 -time=`pwd`.log" and
CC="/path/to/installed/$which/bin/gcc $gflags" ($which and $gflags are
defined below, where vt=var-tracking and vta=var-tracking-assignments)

#  name       user time  which  gflags
1  g0-trunk   18m57.284s trunk  -g0
2  g0         18m36.999s vta    -g0
3  g-novt     19m08.668s vta    -g -fno-$vt -fno-$vta
4  g-novta    19m35.518s vta    -g -f$vt    -fno-$vta
5  g-novt-vta 19m29.107s vta    -g -fno-$vt -f$vta
6  g          21m19.831s vta    -g -f$vt    -f$vta

This is a single run so far; I'm now running a few more of these to
average the results, but they already show some interesting points:

- using the trunk compiler, rather than the vta compiler, makes the
build slower.  AFAICT, the difference between the object files is mostly
limited to the compiler version in .comment, but I found cases in which
rodata and eh_frame were emitted by trunk, but not by vta.  I don't
recall any patch that might have this effect, and I haven't looked into
it further yet.

This difference might also be explained by caching: I built the 3
toolchains out of a top-level Makefile that recursed into them, using
-j3 for the top level, on a 4-processor box, so there were two
toolchains building out of the vta compiler while only one building out
of the trunk compiler.  This could keep the vta compiler hotter in the
cache or something.  Anyhow, this hopefully provides some evidence that
supporting VTA doesn't make the compiler slower, in spite of the testing
for debug stmts and insns.

I'll repeat the tests without -j, just to be on the safe side.


- emitting debug information, without any var tracking whatsoever,
incurs an overhead on -O2 compilations of 2.9%

- enabling var tracking raises the overhead over -g0 to 5.2%, or 2.2%
over non-VT debug info, and this doesn't count the overhead of carrying
REG and MEM attributes that are only used for debug information
purposes.

- carrying, maintaining and ignoring as needed the annotations needed
for VTA, without running the var-tracking pass, costs less than
var-tracking: 4.7% over -g0, or 1.7% over non-VT debug info

- carrying all the VTA debug annotations and running var-tracking with
support for them, and tracking values as needed to support VTA, raises
the overhead to 14.6% over -g0, or 11.3% over non-VT debug info.  This
is more than just adding the overheads of carrying the annotations and
that of running the old, much simpler VT pass: the VTA-supporting VT
pass maintains and propagates far more information in order to get
better debug info.
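As a sanity check, the overhead figures can be recomputed from the run-0 user times in the table above.  This is a rough sketch: the percentages quoted in the prose round slightly differently in a couple of places, possibly because they were taken against other runs or baselines.

```python
# Recompute the -g0-relative overheads from the run-0 user times above.
times = {                      # seconds, from the table's "user time" column
    "g0":         18 * 60 + 36.999,
    "g-novt":     19 * 60 +  8.668,
    "g-novta":    19 * 60 + 35.518,
    "g-novt-vta": 19 * 60 + 29.107,
    "g":          21 * 60 + 19.831,
}

def overhead(a, b):
    """Percent slowdown of configuration a relative to configuration b."""
    return (times[a] / times[b] - 1.0) * 100.0

for name in ("g-novt", "g-novta", "g-novt-vta", "g"):
    # e.g. "g" comes out near the 14.6% quoted in the text
    print(f"{name}: {overhead(name, 'g0'):.1f}% over -g0")
```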

The extra information exposes weaknesses in var-tracking data structures
and algorithms, such as excess memory use and algorithmic complexity.
This is not a privilege of VTA:
https://bugzilla.redhat.com/show_bug.cgi?id=503816 comes up in a
toolchain that has no traces whatsoever of VTA code, but it still
exhibits excessive memory use and compile time, very much like compiling
HTML401F in libjava with vs without VTA.

Redesigning the VT data structures for more efficient propagation of
information is something that we should look into, for it will benefit
VTA as well as non-VTA VT compilations.  But I hope that's not set as a
requirement to have VTA support integrated into the compiler.


Results from the second run (still with -j3) are just in:

#  name       run0       run1 
1  g0-trunk   18m57.284s 19m04.429s trunk  -g0
2  g0         18m36.999s 19m13.588s vta    -g0
3  g-novt     19m08.668s 19m16.078s vta    -g -fno-$vt -fno-$vta
4  g-novta    19m35.518s 20m06.529s vta    -g -f$vt    -fno-$vta
5  g-novt-vta 19m29.107s 19m52.220s vta    -g -fno-$vt -f$vta
6  g          21m19.831s 20m44.965s vta    -g -f$vt    -f$vta

The distortion between 1 and 2 appears to be fixed, the overhead for -g
without VT is much smaller, and the VTA overhead is down to 8.8%.  Ok,
so the results of the first run are not that significant, and I guess
I'll have to average the results over more runs, but maybe they can at
least give you a rough idea of where we'll be heading if we bring VTA
in.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

* Re: VTA merge?
  2009-06-18  8:37           ` Alexandre Oliva
@ 2009-06-18 10:00             ` Paolo Bonzini
  2009-06-18 12:31               ` Michael Matz
  2009-06-21  5:01             ` Alexandre Oliva
  1 sibling, 1 reply; 39+ messages in thread
From: Paolo Bonzini @ 2009-06-18 10:00 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc


>> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
> 
> Wouldn't this be expected to be strongly correlated with the above?  Is
> -fmem-report processed by mem-stats what you're after?

People usually just look at top's output, but Honza has a memory tester 
so I thought maybe you can script it...  Indeed here is a script to do that

#! /bin/bash
trap 'test -n "$pid" && kill -TERM $pid 2>/dev/null; :' 0 TERM INT
"$@" &
pid=$!
vsize=
rss=
while :; do
   set `cat /proc/$pid/statm 2>/dev/null || echo break`
   test $1 = break && break
   test $1 -gt ${vsize:-0} && vsize=$1
   test $2 -gt ${rss:-0} && rss=$2
   sleep 1
done
test -n "$vsize" && echo max. vmsize: $(($vsize * 4)) Kb >&2
test -n "$rss" && echo max. rss: $(($rss * 4)) Kb >&2

Paolo

* Re: VTA merge?
  2009-06-08 21:04           ` Alexandre Oliva
  2009-06-08 21:30             ` Joe Buck
@ 2009-06-18 11:04             ` Diego Novillo
  2009-06-18 20:58               ` Alexandre Oliva
  1 sibling, 1 reply; 39+ messages in thread
From: Diego Novillo @ 2009-06-18 11:04 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Daniel Berlin, David Edelsohn, gcc

On Mon, Jun 8, 2009 at 17:03, Alexandre Oliva<aoliva@redhat.com> wrote:

> For the measurements, I won't use the last merge, but rather the trunk

Comparing trunk as of the last merge point is the easiest thing to do
(just checkout trunk at the revision that you last merged with the
branch).  That's why I suggested that.  Additionally, it gives you a
clear picture of how the branch differs from mainline without any
other artifacts.

Of course, if you've recently merged, then it shouldn't make much difference.


> What needs to be taken care of is something else: avoiding codegen
> differences.  This means that whatever factors you use to make decisions
> on whether or not to make a transformation shouldn't take debug
> annotations into account.  E.g., if you count how many references there
> are to a certain DEF, don't take the debug USEs into account.  If you
> count how many STMTs there are in a function or block to decide whether
> to inline it or duplicate it, don't count the annotations.
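The counting rule quoted above can be illustrated with a toy model.  This is deliberately not GCC code: in real GIMPLE the test would be a predicate such as is_gimple_debug, and the statement representation is entirely hypothetical here.

```python
# Toy model: an optimization decision such as "inline if the body has at
# most N statements" must count only non-debug statements, so the answer
# is identical whether debug annotations are present (-g) or not (-g0).
def count_real_stmts(stmts):
    """Count statements, skipping debug annotations."""
    return sum(1 for s in stmts if not s["is_debug"])

body_g0 = [{"is_debug": False}] * 3                       # body compiled at -g0
body_g  = [{"is_debug": False}, {"is_debug": True}] * 3   # same body with VTA notes

assert count_real_stmts(body_g0) == count_real_stmts(body_g) == 3
print("inline decision identical under -g and -g0")
```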

This is going to be a source of headaches.  But I don't think we'll
ever really win this fight.  Dealing with debug information will be
painful in one dimension or another.  Hopefully this will be easier to
deal with than the current -O2 -g disaster.


> When in the documentation do you suggest this should go?

A new chapter in gccint.texi should be fine, I think.  It doesn't have
to start long, but we may add to it as time goes on.

> That said, the additional
> work would be explicitly optional, and certainly not necessarily taken
> up by the maintainer of the pass, but rather by someone interested in
> debug information.

I like it.  This is a good property.  In general, folks interested in
optimization are reluctant to care about debugging too much.  If we
can cater to both camps, we all win.


Diego.

* Re: VTA merge?
  2009-06-18 10:00             ` Paolo Bonzini
@ 2009-06-18 12:31               ` Michael Matz
  2009-06-18 13:35                 ` Diego Novillo
  0 siblings, 1 reply; 39+ messages in thread
From: Michael Matz @ 2009-06-18 12:31 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Alexandre Oliva, gcc

[-- Attachment #1: Type: TEXT/PLAIN, Size: 408 bytes --]

Hi,

On Thu, 18 Jun 2009, Paolo Bonzini wrote:

> > > - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
> 
> People usually just look at top's output, but Honza has a memory tester 
> so I thought maybe you can script it...  Indeed here is a script to do 
> that

The memory tester is based on the attached scripts, using strace for 
tracking, not polling on /proc output.


Ciao,
Michael.

[-- Attachment #2: Type: APPLICATION/x-shellscript, Size: 487 bytes --]

[-- Attachment #3: Type: TEXT/x-python, Size: 1928 bytes --]

#!/usr/bin/python

import string
import re
import sys

pidre = re.compile("^([0-9]*)")
brkre = re.compile("^[0-9 ]*brk.*= ([0-9a-fx]*)")
# munmap (0x40017000, 118685)
# mmap (NULL, 1174524, 
# mremap (void  *old_address,  size_t old_size , size_t new_size
mmapre = re.compile("^[0-9 ]*[mo][ml][ad][^\(]*\([^,]*, ([0-9a-fx]*)")
munmapre = re.compile("^[0-9 ]*munmap[^\(]*\([^,]*, ([0-9a-fx]*)")
mremapre = re.compile("^[0-9 ]*mremap[^\(]*\([^,]*, ([0-9a-fx]*), ([0-9a-fx]*)")

mmapmax = dict()
mmap = dict()
brkmin = dict()
brkmax = dict()

for line in sys.stdin:
	mo = pidre.search(line)
	pid = int(mo.group(1))
	mmapmax.setdefault(pid, 0)
	mmap.setdefault(pid, 0)
	mo = brkre.search(line)
	if mo:
		brkval = int(mo.group(1), 16)
		brkmin.setdefault(pid, brkval)
		brkmax.setdefault(pid, brkval)
		if brkval < brkmin[pid]:
			brkmin[pid] = brkval
		if brkval > brkmax[pid]:
			brkmax[pid] = brkval
	else:
		mo = mmapre.search(line)
		if mo:
			mmapval = int(mo.group(1))
			mmap[pid] += mmapval
		else:
			mo = munmapre.search(line)
			if mo:
				munmapval = int(mo.group(1))
				mmap[pid] -= munmapval
			else:
				mo = mremapre.search(line)
				if mo:
					mremappval = int(mo.group(1))
					mremapnval = int(mo.group(2))
					mmap[pid] += mremapnval - mremappval
		if mmap[pid] > mmapmax[pid]:
			mmapmax[pid] = mmap[pid]

ovrallmmap = 0
for pid, val in mmapmax.iteritems():
#	print "max mmap usage of pid %u is %u kB" % (pid, val/1024)
	if val > ovrallmmap:
		ovrallmmap = val

ovrallbrk = 0
for pid, val in brkmax.iteritems():
	use = val - brkmin[pid]
#	print "max brk usage of pid %u is %u kB" % (pid, use/1024)
	if use > ovrallbrk:
		ovrallbrk = use

#print "mmapmax: %u kB, brkmin: 0x%x, brkmax: 0x%x" % (mmapmax/1024, brkmin, brkmax)
#print "total: %u kB" % ((mmapmax + (brkmax-brkmin))/1024)

print "total: %u kB" % ((ovrallmmap + ovrallbrk)/1024)
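For illustration, here is a minimal Python 3 re-implementation of the accounting done by the attached (Python 2) script, run on a hypothetical four-line strace fragment; it handles only brk/mmap/munmap and omits mremap for brevity.

```python
import re

# Toy strace fragment (hypothetical pid and addresses), just enough to
# exercise the brk/mmap/munmap accounting.
trace = """\
1234 brk(0) = 0x804b000
1234 brk(0x8050000) = 0x8050000
1234 mmap(NULL, 8192, PROT_READ, MAP_PRIVATE, -1, 0) = 0x40017000
1234 munmap(0x40017000, 4096) = 0
"""

def peak_kb(log):
    brkre = re.compile(r"^[0-9 ]*brk.*= ([0-9a-fx]+)")
    mmapre = re.compile(r"^[0-9 ]*mmap[^(]*\([^,]*, ([0-9]+)")
    munmapre = re.compile(r"^[0-9 ]*munmap[^(]*\([^,]*, ([0-9]+)")
    brk_lo = brk_hi = None
    cur = peak = 0
    for line in log.splitlines():
        m = brkre.search(line)
        if m:
            v = int(m.group(1), 16)        # brk returns the break address
            brk_lo = v if brk_lo is None else min(brk_lo, v)
            brk_hi = v if brk_hi is None else max(brk_hi, v)
            continue
        m = mmapre.search(line)
        if m:
            cur += int(m.group(1))         # mmap length argument
        else:
            m = munmapre.search(line)
            if m:
                cur -= int(m.group(1))     # munmap length argument
        peak = max(peak, cur)
    # peak anonymous-mapping use plus total brk growth, in kB
    return (peak + (brk_hi - brk_lo)) // 1024

print(peak_kb(trace), "kB")  # -> 28 kB
```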

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: VTA merge?
  2009-06-18 12:31               ` Michael Matz
@ 2009-06-18 13:35                 ` Diego Novillo
  2009-06-18 18:05                   ` Gerald Pfeifer
  0 siblings, 1 reply; 39+ messages in thread
From: Diego Novillo @ 2009-06-18 13:35 UTC (permalink / raw)
  To: Michael Matz; +Cc: Paolo Bonzini, Alexandre Oliva, gcc

On Thu, Jun 18, 2009 at 08:31, Michael Matz<matz@suse.de> wrote:

> The memory tester is based on the attached scripts, using strace for
> tracking, not polling on /proc output.

Nice.  Could you upload them to http://gcc.gnu.org/wiki/PerformanceTesting?


Thanks.  Diego.

* Re: VTA merge?
  2009-06-18 13:35                 ` Diego Novillo
@ 2009-06-18 18:05                   ` Gerald Pfeifer
  0 siblings, 0 replies; 39+ messages in thread
From: Gerald Pfeifer @ 2009-06-18 18:05 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Michael Matz, Paolo Bonzini, Alexandre Oliva, gcc

On Thu, 18 Jun 2009, Diego Novillo wrote:
> Nice.  Could you upload them to http://gcc.gnu.org/wiki/PerformanceTesting?

How about gcc/contrib?

Gerald

* Re: VTA merge?
  2009-06-18 11:04             ` Diego Novillo
@ 2009-06-18 20:58               ` Alexandre Oliva
  0 siblings, 0 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-18 20:58 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Daniel Berlin, David Edelsohn, gcc

On Jun 18, 2009, Diego Novillo <dnovillo@google.com> wrote:

> On Mon, Jun 8, 2009 at 17:03, Alexandre Oliva<aoliva@redhat.com> wrote:
>> For the measurements, I won't use the last merge, but rather the trunk

> Comparing trunk as of the last merge point is the easiest thing to do
> (just checkout trunk at the revision that you last merged with the
> branch).

There had been too much debug-info-related patching after the previous
merge on both sides, and tracking them all down would have been a pain.
So I ran another merge.

> painful in one dimension or another.  Hopefully this will be easier to
> deal with than the current -O2 -g disaster.

+1

>> When in the documentation do you suggest this should go?

> A new chapter in gccint.texi should be fine, I think.  It doesn't have
> to start long, but we may add to it as time goes on.

Heh, I asked *when*, not *where*!  Doh.  Sorry, I hope it wasn't too
confusing.

>> That said, the additional work would be explicitly optional, and
>> certainly not necessarily taken up by the maintainer of the pass, but
>> rather by someone interested in debug information.

> I like it.  This is a good property.  In general, folks interested in
> optimization are reluctant to care about debugging too much.  If we
> can cater to both camps, we all win.

+1

That was a factor I took very much into consideration in the design.
I'm happy this is becoming clearer now that the smoke is clearing ;-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

* Re: VTA merge?
  2009-06-18  8:37           ` Alexandre Oliva
  2009-06-18 10:00             ` Paolo Bonzini
@ 2009-06-21  5:01             ` Alexandre Oliva
  2009-06-21 11:02               ` Richard Guenther
  1 sibling, 1 reply; 39+ messages in thread
From: Alexandre Oliva @ 2009-06-21  5:01 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Daniel Berlin, David Edelsohn, gcc

On Jun 18, 2009, Alexandre Oliva <aoliva@redhat.com> wrote:

>> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.

I had to use a different machine for this test.  The one I was using had
to be taken off line and moved, for reasons beyond my control, and I
probably won't be able to get into it to collect the results before I
hit the road later this week.  Sorry.


For the total memory uses below, I moved gcc to gcc.actual in both the
trunk and vta install trees, and installed a new gcc script that ran
maxmem2.sh $0.actual "$@".

I modified maxmem-pipe2.py to output to a named pipe, and for maxmem2.sh
to wait for the “cat >&2” from the named pipe to complete, just so that
I could correlate the memory use output with the command that produced
it.  Without this change, in a number of cases the python script output
the totals after make had already printed the following command, which
got the output mangled and confusing.

Having logged the build output of each of the trees that I had
configured before (-O2 is used for all of them), now with the maxmem
wrapper, I totaled the “total:” lines it printed for each of the builds,
resulting in the values in the memory column below.

#  name       mem(KiB) %Δ#1 which  gflags
1  g0-trunk   58114157 0    trunk  -g0
2  g0         58114261 0    vta    -g0
3  g-novt     59722133 2.77 vta    -g -fno-$vt -fno-$vta
4  g-novta    59840445 2.97 vta    -g -f$vt    -fno-$vta
5  g-novt-vta 59764629 2.84 vta    -g -fno-$vt -f$vta
6  g          59997781 3.24 vta    -g -f$vt    -f$vta

Conclusions: generating debug information incurred a memory penalty of
nearly 3% before VTA, for a C-only optimized GCC build.

Carrying VTA notes uses very little memory besides that which is
required to generate debug info without VT (0.07% more).

Actually using VTA notes to emit debug information in the VT pass
increases maximum memory use, when compared with VT without VTA, by as
little as 0.26%.
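The deltas above can be double-checked directly from the table; a quick sketch using the KiB totals as given:

```python
# Recheck the percentage deltas in the memory table above.
mem = {                       # total KiB summed over each build, from the table
    "g0-trunk":   58114157,
    "g0":         58114261,
    "g-novt":     59722133,
    "g-novta":    59840445,
    "g-novt-vta": 59764629,
    "g":          59997781,
}

def delta(a, b):
    """Percent increase of configuration a over configuration b."""
    return round((mem[a] / mem[b] - 1.0) * 100.0, 2)

print(delta("g-novt", "g0-trunk"))    # -> 2.77  (debug info, no var tracking)
print(delta("g-novt-vta", "g-novt"))  # -> 0.07  (carrying VTA notes)
print(delta("g", "g-novta"))          # -> 0.26  (using VTA notes in VT)
```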

Wow, this was actually much better than I had anticipated.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

* Re: VTA merge?
  2009-06-21  5:01             ` Alexandre Oliva
@ 2009-06-21 11:02               ` Richard Guenther
  2009-06-21 11:38                 ` Richard Guenther
  0 siblings, 1 reply; 39+ messages in thread
From: Richard Guenther @ 2009-06-21 11:02 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

On Sun, Jun 21, 2009 at 7:00 AM, Alexandre Oliva<aoliva@redhat.com> wrote:
> On Jun 18, 2009, Alexandre Oliva <aoliva@redhat.com> wrote:
>
>>> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
>
> I had to use a different machine for this test.  The one I was using had
> to be taken off line and moved, for reasons beyond my control, and I
> probably won't be able to get into it to collect the results before I
> hit the road later this week.  Sorry.
>
>
> For the total memory uses below, I moved gcc to gcc.actual in both the
> trunk and vta install trees, and installed a new gcc script that ran
> maxmem2.sh $0.actual "$@".
>
> I modified maxmem-pipe2.py to output to a named pipe, and for maxmem2.sh
> to wait for the "cat >&2" from the named pipe to complete, just so that
> I could correlate the memory use output with the command that produced
> it.  Without this change, in a number of cases the python script output
> the totals after make had already printed the following command, which
> got the output mangled and confusing.
>
> Having logged the build output of each of the trees that I had
> configured before (-O2 is used for all of them), now with the maxmem
> wrapper, I totaled the "total:" lines it printed for each of the builds,
> resulting the values in the memory column below.
>
> #  name       mem(KiB) %Δ#1 which  gflags
> 1  g0-trunk   58114157 0    trunk  -g0
> 2  g0         58114261 0    vta    -g0
> 3  g-novt     59722133 2.77 vta    -g -fno-$vt -fno-vta
> 4  g-novta    59840445 2.97 vta    -g -f$vt    -fno-$vta
> 5  g-novt-vta 59764629 2.84 vta    -g -fno-$vt -f$vta
> 6  g          59997781 3.24 vta    -g -f$vt    -f$vta
>
> Conclusions: generating debug information incurred a memory penalty of
> nearly 3% before VTA, for a C-only optimized GCC build.
>
> Carrying VTA notes uses very little memory besides that which is
> required to generate debug info without VT (0.07% more).
>
> Actually using VTA notes to emit debug information in the VT pass
> increases maximum memory use, when compared with VT without VTA, by as
> little as 0.26%.
>
> Wow, this was actually much better than I had anticipated.

The overhead of carrying VTA notes at -g0 vs not doing so would be
the same 0.07%?  I'm just curious because I try to be insisting on that
we do exactly this ;)

I wonder if the above figures apply to compiling a C++ application as well
(I see a lot of VTA notes - more than 50% of all tree instructions - when
compiling tramp3d for example).

Thanks,
Richard.

* Re: VTA merge?
  2009-06-21 11:02               ` Richard Guenther
@ 2009-06-21 11:38                 ` Richard Guenther
  2009-06-21 11:50                   ` Richard Guenther
  2009-07-01  5:26                   ` Alexandre Oliva
  0 siblings, 2 replies; 39+ messages in thread
From: Richard Guenther @ 2009-06-21 11:38 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

2009/6/21 Richard Guenther <richard.guenther@gmail.com>:
> On Sun, Jun 21, 2009 at 7:00 AM, Alexandre Oliva<aoliva@redhat.com> wrote:
>> On Jun 18, 2009, Alexandre Oliva <aoliva@redhat.com> wrote:
>>
>>>> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
>>
>> I had to use a different machine for this test.  The one I was using had
>> to be taken off line and moved, for reasons beyond my control, and I
>> probably won't be able to get into it to collect the results before I
>> hit the road later this week.  Sorry.
>>
>>
>> For the total memory uses below, I moved gcc to gcc.actual in both the
>> trunk and vta install trees, and installed a new gcc script that ran
>> maxmem2.sh $0.actual "$@".
>>
>> I modified maxmem-pipe2.py to output to a named pipe, and for maxmem2.sh
>> to wait for the "cat >&2" from the named pipe to complete, just so that
>> I could correlate the memory use output with the command that produced
>> it.  Without this change, in a number of cases the python script output
>> the totals after make had already printed the following command, which
>> got the output mangled and confusing.
>>
>> Having logged the build output of each of the trees that I had
>> configured before (-O2 is used for all of them), now with the maxmem
>> wrapper, I totaled the "total:" lines it printed for each of the builds,
>> resulting the values in the memory column below.
>>
>> #  name       mem(KiB) %Δ#1 which  gflags
>> 1  g0-trunk   58114157 0    trunk  -g0
>> 2  g0         58114261 0    vta    -g0
>> 3  g-novt     59722133 2.77 vta    -g -fno-$vt -fno-vta
>> 4  g-novta    59840445 2.97 vta    -g -f$vt    -fno-$vta
>> 5  g-novt-vta 59764629 2.84 vta    -g -fno-$vt -f$vta
>> 6  g          59997781 3.24 vta    -g -f$vt    -f$vta
>>
>> Conclusions: generating debug information incurred a memory penalty of
>> nearly 3% before VTA, for a C-only optimized GCC build.
>>
>> Carrying VTA notes uses very little memory besides that which is
>> required to generate debug info without VT (0.07% more).
>>
>> Actually using VTA notes to emit debug information in the VT pass
>> increases maximum memory use, when compared with VT without VTA, by as
>> little as 0.26%.
>>
>> Wow, this was actually much better than I had anticipated.
>
> The overhead of carrying VTA notes at -g0 vs not doing so would be
> the same 0.07%?  I'm just curious because I try to be insisting on that
> we do exactly this ;)
>
> I wonder if the above figures apply to compiling a C++ application as well
> (I see a lot of VTA notes - more than 50% of all tree instructions - when
> compiling tramp3d for example).

So I just tested tramp3d for memory usage (I hope I got the same flags as you,
base flags are -O2 -ffast-math -funroll-loops):

-g0 -fno-var-tracking -fno-var-tracking-assignments: 502361 kB
-g -fno-var-tracking -fno-var-tracking-assignments: 615305 kB +18.5%
-g -fno-var-tracking -fvar-tracking-assignments: 647773 kB +5%
-g -fvar-tracking -fvar-tracking-assignments: 655197 kB +1.2%

all %ages relative to the previous line.

So keeping the VTA notes for tramp3d is an overhead of 5%, or
6% on top of -g0 (extrapolated).  The rest may be interesting but
without a trunk build to compare I cannot state anything about
overhead of the branch (var-tracking or -g may be more expensive
due to changes on the branch).

The numbers are from a checking-enabled build, so due to more
GC the numbers are artificially low but independent of the amount
of memory in the machine and more realistically reflect the actual
overhead.  Can you specify how you built VTA?

tramp3d certainly is not representative for random C++ applications.
Your GCC testing may be representative for random C applications
(can you check a kernel build to assess compile-time, memory-usage
and debug-size differences?)

Thanks,
Richard.

* Re: VTA merge?
  2009-06-21 11:38                 ` Richard Guenther
@ 2009-06-21 11:50                   ` Richard Guenther
  2009-07-01  5:26                   ` Alexandre Oliva
  1 sibling, 0 replies; 39+ messages in thread
From: Richard Guenther @ 2009-06-21 11:50 UTC (permalink / raw)
  To: Alexandre Oliva; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

2009/6/21 Richard Guenther <richard.guenther@gmail.com>:
> 2009/6/21 Richard Guenther <richard.guenther@gmail.com>:
>
> So I just tested tramp3d for memory usage (I hope I got the same flags as you,
> base flags are -O2 -ffast-math -funroll-loops):
>
> -g0 -fno-var-tracking -fno-var-tracking-assignments: 502361 kB

-g0 -fno-var-tracking -fvar-tracking-assignments: 521869 kB +3.8%

(no idea what that actually tests)

Are the number and positions of VTA notes the same at -g vs. -g0?
If so the difference to the extrapolated 6% must be a GC artifact...?

> -g -fno-var-tracking -fno-var-tracking-assignments: 615305 kB +18.5%
> -g -fno-var-tracking -fvar-tracking-assignments: 647773 kB +5%
> -g -fvar-tracking -fvar-tracking-assignments: 655197 kB +1.2%

Richard.

* Re: VTA merge?
  2009-06-21 11:38                 ` Richard Guenther
  2009-06-21 11:50                   ` Richard Guenther
@ 2009-07-01  5:26                   ` Alexandre Oliva
  1 sibling, 0 replies; 39+ messages in thread
From: Alexandre Oliva @ 2009-07-01  5:26 UTC (permalink / raw)
  To: Richard Guenther; +Cc: Diego Novillo, Daniel Berlin, David Edelsohn, gcc

Hi,

I'm back!

On Jun 21, 2009, Richard Guenther <richard.guenther@gmail.com> wrote:

> So I just tested tramp3d for memory usage

Thanks!

> -g -fno-var-tracking -fno-var-tracking-assignments: 615305 kB +18.5%
> -g -fno-var-tracking -fvar-tracking-assignments: 647773 kB +5%

> So keeping the VTA notes for tramp3d is an overhead of 5%


> -g0 -fno-var-tracking -fno-var-tracking-assignments: 502361 kB
> -g0 -fno-var-tracking -fvar-tracking-assignments: 521869 kB +3.8%

> (no idea what that actually tests)

It tests the memory overhead of carrying the VTA notes in a compilation
without debug information.


> without a trunk build to compare I cannot state anything about
> overhead of the branch (var-tracking or -g may be more expensive
> due to changes on the branch).

Memory-wise, VTA shouldn't have significant changes compared with trunk,
at least when VTA isn't enabled.  When it is, the VTA pass itself will
tend to consume more memory (although sometimes it may use less, because
of the much-smaller data structure allocated for variables suitable for
gimple regs), and because of the notes carried throughout compilation.

> Can you specify how you built VTA?

Is http://gcc.gnu.org/ml/gcc/2009-06/msg00426.html complete enough?

> Your GCC testing may be representative for random C applications
> (can you check a kernel build to asses compile-time, memory-usage
> and debug-size differences?)

Sure.  It will take time away from implementing some improvements I'd
like to implement before the cut-off for 4.5, but...  I'll try.

> Are the number and positions of VTA notes the same at -g vs. -g0?

They ought to be, by construction, but I haven't devised a way to verify
this property.

> If so the difference to the extrapolated 6% must be a GC artifact...?

I guess...  I can't think of any other reason for it.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer

* VTA compile time memory usage comparison
  2009-06-08 17:35         ` Diego Novillo
  2009-06-08 21:04           ` Alexandre Oliva
  2009-06-18  8:37           ` Alexandre Oliva
@ 2009-07-03 10:15           ` Jakub Jelinek
  2 siblings, 0 replies; 39+ messages in thread
From: Jakub Jelinek @ 2009-07-03 10:15 UTC (permalink / raw)
  To: Diego Novillo; +Cc: Alexandre Oliva, gcc

On Mon, Jun 08, 2009 at 01:35:34PM -0400, Diego Novillo wrote:
> On Sun, Jun 7, 2009 at 16:04, Alexandre Oliva<aoliva@redhat.com> wrote:
> 
> > So the question is, what should I measure?  Memory use for any specific
> > set of testcases, summarized over a bootstrap with memory use tracking
> > enabled, something else?  Likewise for compile time?  What else?
> 
> Some quick measurements I'd be interested in:
> 
> - Size of the IL over some standard code bodies
>   (http://gcc.gnu.org/wiki/PerformanceTesting).

> - Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.

Here are data comparing trunk@148582 (the last merge point into the VTA
branch) with vta@149180 (the current VTA branch head), using maxmem2.sh
and maxmem-pipe2.py.  Both compilers were built with
--enable-checking=release; all numbers are on x86_64-linux, from cc1 for
*.i files and cc1plus for *.ii files, with -g -quiet and the options
listed in the header line.

The only major compile-time memory consumption problem is in VARIOUS/,
particularly pr28071.i, where var-tracking goes through the roof.  I hope
Alex will look into it; worst case, we could just silently turn off
flag_var_tracking_assignments in var-tracking.c for functions that have
too many basic blocks and too many VALUEs to track.

I'm now running compilations with -ftime-report; once that finishes I
will post statistics for compile time as well, then the size of the IL.

			-O0-m64	-O0-m32	-O1-m64	-O1-m32	-O2-m64	-O2-m32	-O3-m64	-O3-m32	-Os-m64	-Os-m32
GCC trunk@148582 avg	103772	102936	105455	104689	106334	105539	108212	107044	105339	104666
GCC vta@149180   avg	103773	102937	106076	105426	106963	106324	108964	107892	105864	105203
vta@149180/trunk@148582	100.00%	100.00%	100.59%	100.70%	100.59%	100.74%	100.69%	100.79%	100.50%	100.51%
GCC trunk@148582 max	528254	460362	587570	542154	598130	533138	625174	544654	591410	555526
GCC vta@149180   max	528254	460362	579338	540282	598014	547914	627214	561434	578970	553646
vta@149180/trunk@148582	100.00%	100.00%	98.60%	99.65%	99.98%	102.77%	100.33%	103.08%	97.90%	99.66%
FF3D trunk@148582 avg	160478	160379	169280	170735	174005	175407	179056	179874	164380	165596
FF3D vta@149180   avg	160463	160384	171352	173741	176746	178873	182519	183965	165770	167263
vta@149180/trunk@148582	99.99%	100.00%	101.22%	101.76%	101.58%	101.98%	101.93%	102.27%	100.85%	101.01%
FF3D trunk@148582 max	494298	493310	497114	492538	509234	508778	529734	540582	476798	493110
FF3D vta@149180   max	494298	493338	515690	517554	531498	530806	542822	552638	514798	534822
vta@149180/trunk@148582	100.00%	100.01%	103.74%	105.08%	104.37%	104.33%	102.47%	102.23%	107.97%	108.46%
MICO trunk@148582 avg	270379	240593	276524	248775	278922	251851	282331	254687	273297	246126
MICO vta@149180   avg	270328	240570	278185	251241	280705	254305	284368	257374	274451	247978
vta@149180/trunk@148582	99.98%	99.99%	100.60%	100.99%	100.64%	100.97%	100.72%	101.06%	100.42%	100.75%
MICO trunk@148582 max	497802	494486	537502	522602	528098	519598	557086	544478	528158	527110
MICO vta@149180   max	497838	494506	538998	524282	533038	522418	566886	550298	545302	553194
vta@149180/trunk@148582	100.01%	100.00%	100.28%	100.32%	100.94%	100.54%	101.76%	101.07%	103.25%	104.95%
SPEC2K trunk@148582 avg	101353	101091	102610	102519	103434	103358	105819	105165	102910	102884
SPEC2K vta@149180   avg	101349	101092	103087	103067	103936	103914	106482	105914	103360	103395
vta@149180/trunk@148582	100.00%	100.00%	100.46%	100.53%	100.49%	100.54%	100.63%	100.71%	100.44%	100.50%
SPEC2K trunk@148582 max	172526	172678	188594	188810	192194	193514	204014	202330	186690	189914
SPEC2K vta@149180   max	172526	172674	189394	192394	195182	197266	205002	206714	187870	192654
vta@149180/trunk@148582	100.00%	100.00%	100.42%	101.90%	101.55%	101.94%	100.48%	102.17%	100.63%	101.44%
TRAMP3D trunk@148582 avg	686766	685898	893478	889398	919662	942746	996782	995498	891466	893930
TRAMP3D vta@149180   avg	687558	686494	1030534	1024634	1023046	1020642	1045182	1053774	891838	897942
vta@149180/trunk@148582	100.12%	100.09%	115.34%	115.21%	111.24%	108.26%	104.86%	105.85%	100.04%	100.45%
TRAMP3D trunk@148582 max	686766	685898	893478	889398	919662	942746	996782	995498	891466	893930
TRAMP3D vta@149180   max	687558	686494	1030534	1024634	1023046	1020642	1045182	1053774	891838	897942
vta@149180/trunk@148582	100.12%	100.09%	115.34%	115.21%	111.24%	108.26%	104.86%	105.85%	100.04%	100.45%
DLV trunk@148582 avg	239153	237855	260962	261421	263187	263827	270711	268845	247990	248256
DLV vta@149180   avg	239135	237804	264511	265722	267777	269264	275713	275005	250084	250658
vta@149180/trunk@148582	99.99%	99.98%	101.36%	101.65%	101.74%	102.06%	101.85%	102.29%	100.84%	100.97%
DLV trunk@148582 max	375554	376498	383990	383798	385438	388962	401438	397650	381074	382142
DLV vta@149180   max	375558	376462	386966	392262	408638	412070	421038	427414	382886	383446
vta@149180/trunk@148582	100.00%	99.99%	100.78%	102.21%	106.02%	105.94%	104.88%	107.48%	100.48%	100.34%
VARIOUS trunk@148582 avg	415948	416633	518190	544250	684925	709474	703472	725017	565568	708509
VARIOUS vta@149180   avg	415933	416624	615444	646627	3143720	3642774	3133712	3607254	1646542	3351722
vta@149180/trunk@148582	100.00%	100.00%	118.77%	118.81%	458.99%	513.45%	445.46%	497.54%	291.13%	473.07%
VARIOUS trunk@148582 max	544654	547038	686858	705486	1388662	1511014	1413242	1511062	835282	1514758
VARIOUS vta@149180   max	544714	547190	953818	953810	13096222	15577234	13095842	15550054	5811502	14301914
vta@149180/trunk@148582	100.01%	100.03%	138.87%	135.20%	943.08%	1030.91%	926.65%	1029.08%	695.75%	944.17%

	Jakub


end of thread, other threads:[~2009-07-03 10:15 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-06-05 10:06 VTA merge? Alexandre Oliva
2009-06-05 10:19 ` Richard Guenther
2009-06-05 10:53   ` Alexandre Oliva
2009-06-05 11:18     ` Richard Guenther
2009-06-06  8:12   ` Eric Botcazou
2009-06-07 21:32     ` Alexandre Oliva
2009-06-08  2:49       ` Eric Botcazou
2009-06-08 21:31         ` Alexandre Oliva
2009-06-05 10:42 ` Joseph S. Myers
2009-06-05 11:11   ` Alexandre Oliva
2009-06-05 12:28 ` David Edelsohn
2009-06-05 19:18   ` Alexandre Oliva
2009-06-05 20:56     ` Daniel Berlin
2009-06-07 20:04       ` Alexandre Oliva
2009-06-08 16:19         ` Frank Ch. Eigler
2009-06-08 17:35         ` Diego Novillo
2009-06-08 21:04           ` Alexandre Oliva
2009-06-08 21:30             ` Joe Buck
2009-06-09  1:15               ` Alexandre Oliva
2009-06-18 11:04             ` Diego Novillo
2009-06-18 20:58               ` Alexandre Oliva
2009-06-18  8:37           ` Alexandre Oliva
2009-06-18 10:00             ` Paolo Bonzini
2009-06-18 12:31               ` Michael Matz
2009-06-18 13:35                 ` Diego Novillo
2009-06-18 18:05                   ` Gerald Pfeifer
2009-06-21  5:01             ` Alexandre Oliva
2009-06-21 11:02               ` Richard Guenther
2009-06-21 11:38                 ` Richard Guenther
2009-06-21 11:50                   ` Richard Guenther
2009-07-01  5:26                   ` Alexandre Oliva
2009-07-03 10:15           ` VTA compile time memory usage comparison Jakub Jelinek
2009-06-05 22:11     ` Machine Description Template? Graham Reitz
2009-06-05 22:31       ` Ramana Radhakrishnan
2009-06-05 22:46       ` Michael Hope
2009-06-05 22:55         ` Graham Reitz
2009-06-09  8:57           ` Martin Guy
2009-06-05 23:48       ` Jeff Law
2009-06-12 21:36       ` Michael Meissner
