public inbox for gcc@gcc.gnu.org
* Re: source mgt....[_HAS_ gcc relevance]
@ 2002-12-16  4:56 Richard Kenner
  2002-12-16  5:33 ` Tom Lord
  0 siblings, 1 reply; 14+ messages in thread
From: Richard Kenner @ 2002-12-16  4:56 UTC (permalink / raw)
  To: lord; +Cc: gcc, torvalds, zack

    but wouldn't it be nice if that were automated: so a developer could
    hit the "try to test and merge" button before going home for the
    night, coming back in the morning to either a commit email or a list
    of test failures

I'm not sure I like that kind of automation because of the potentially
unknown delay in the testing process (what if the queue that runs the
tests got stuck).  I'd want to be able to know and control exactly
*when* the change went in.

* Re: source mgt....[_HAS_ gcc relevance]
@ 2002-12-18 11:03 Robert Dewar
  0 siblings, 0 replies; 14+ messages in thread
From: Robert Dewar @ 2002-12-18 11:03 UTC (permalink / raw)
  To: dewar, lord; +Cc: gcc, torvalds, zack

> A lot of the thinking behind arch is to scale up and simplify adopting
> practices such as you describe so that they are applied by default to
> pretty much all of the free software (and "open source") projects in
> the world.  With your 35, you have social pressures and the power of
> the employer to enforce restrictions like "run the tests before
> committing to mainline" -- but wouldn't it be nice if that were
> automated: so a developer could hit the "try to test and merge" button
> before going home for the night, coming back in the morning to either
> a commit email or a list of test failures -- and if you _didn't_ have
> to write all your own tools for that automation because they were just
> there already, such that setting up a new project with these
> properties was as easy as creating a project on Savannah currently is
> (or, easier :).

Yes, indeed, automating requirements like this is always desirable.
Although you probably want ways to override requirements in emergencies.

What we find is that the key point is that it must be *easy* to follow
the procedures: if it is, they get followed; if it is not, no amount of
social pressure can guarantee conformance :-)
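[Editorial note: the two points above, automated requirements plus an emergency override, can be sketched as a commit gate. This is a hypothetical policy sketch, not a real ACT or GCC hook.]

```python
def commit_gate(tests_passed, override_reason=None):
    """Admit a commit only when the regression run passed, with a
    logged emergency override (illustrative policy, not a real hook)."""
    if tests_passed:
        return "allowed"
    if override_reason is not None:
        # Emergencies happen: permit the commit, but keep an audit trail.
        return f"allowed-with-override: {override_reason}"
    return "rejected"
```

The override path is what keeps an automated requirement from becoming an obstacle in a genuine emergency.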

* Re: source mgt....[_HAS_ gcc relevance]
@ 2002-12-16  3:05 Robert Dewar
  2002-12-16  4:15 ` Tom Lord
  0 siblings, 1 reply; 14+ messages in thread
From: Robert Dewar @ 2002-12-16  3:05 UTC (permalink / raw)
  To: lord, zack; +Cc: gcc, torvalds

> Well, my ideal is that changes to the mainline should occur only
> _after_ they have verifiably passed all the available tests on a wide
> range of platforms (a process that can be fully automated) and the
> changes have passed senior engineer reviews (a process that can be
> facilitated by substantial automated assistance).  Mainlines should
> increase in quality in a strictly monotonic fashion -- that's the
> essence of what "gatekeeper management" is all about.  Neither GCC nor
> lk have that property -- though better tools can do much to put us
> there.  With good tools, the release manager can ultimately be
> replaced by shell scripts.

With GNAT, we let everyone within ACT, which is quite a diverse set of folks,
about 35 in all, change anything in the mainline, but we guarantee the
monotonic property (I agree this is crucial) by enforcing fairly strenuous
requirements on anyone doing a change. No change of any kind (not even
something that is "obviously" safe) is allowed without doing a complete
bootstrap, and running the entire regression suite (which is pretty
comprehensive at this stage) first. Now we only require this on one target
for changes that are expected to be target independent, so it is possible
to have unanticipated hits on other targets. We deal with this by building
the system on all targets every night and running the regression suites on
all targets every night. If the reports in the morning indicate a problem,
then it is all hands on deck to fix the problem.
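[Editorial note: the nightly all-target run described above amounts to a small matrix driver. A minimal sketch, where `run_suite(target)` stands in for the real bootstrap-and-regression step:]

```python
def nightly_report(targets, run_suite):
    """Run the full regression suite on every target overnight and
    return the targets that failed, for the morning
    'all hands on deck' report."""
    return [target for target in targets if not run_suite(target)]
```

For example, feeding it the four beta targets mentioned below would flag only the ones whose suite regressed overnight.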

When we get GNAT properly integrated into GCC, which involves several
things still to be done:

1. We need to get to a release point internally where the GCC 3 based GNAT
passes all regression tests etc. We are close to this, and expecting to
do a beta release in January on selected targets (should include Solaris,
Windows, GNU/Linux, HPUX).

2. We need to get the sources and our internal source procedures more
amenable to GCC style (e.g. we have removed the version numbers from
our sources, and adjusted all our scripts for this change recently).

3. We need to establish the ACATS test suite so that anyone can run it. This
is not as comprehensive as our internal test suite (which is not distributable
since it is mostly proprietary code).

4. We need to set up procedures so we can run and test changes that others
make against our internal test suite.

... then hopefully we can duplicate at least some of these procedures
so that others outside ACT can follow a similar path. We regard
this kind of automatic testing as absolutely crucial.

> With good tools, the release manager can ultimately be
> replaced by shell scripts.

I don't believe that, based on our experience where we have elaborate
scripts that try to automate everything, but you still need a release
manager to coordinate activities and check that everything is working
as expected.

* Re: source mgt. requirements solicitation
@ 2002-12-14 21:34 Linus Torvalds
  2002-12-14 23:12 ` source mgt....[_HAS_ gcc relevance] Tom Lord
  0 siblings, 1 reply; 14+ messages in thread
From: Linus Torvalds @ 2002-12-14 21:34 UTC (permalink / raw)
  To: neroden, gcc


In article <20021215014255.GA1146@doctormoo> you write:
>
>In GCC, we've been known to lose development history when we merge a 
>branch, and merging branches has been incredibly painful.  So I'm not
>sure merging forks is actually harder; merging branches may be. ;-)  

Heh. That's a sad statement about CVS branches in itself.

>Fork merges get submitted as a series of patches (which then need to get 
>approved), and associated ChangeLog entries.  They go in pretty cleanly. 

This is actually not that different from the "old" Linux way, i.e. the SCM
does _nothing_ for merging stuff. It certainly worked fine for me, and
it's how about half of the Linux developers still work.

The advantage of the SCM-assisted merges is really that when you trust
the other side, it becomes a non-issue.  So to some degree you might as
well think of a SCM-assisted merge as having "write access" to the tree,
except it's a one-time event rather than a continuing process (but
unlike CVS write access it doesn't _need_ to be constant, since both
sides have access to their own SCM structures on their own, and don't
need to merge all the time).

>The fork developer can track his/her own internal change history however 
>he or she likes, but generally will submit an 'expurgated' history for 
>merging, devoid of the false starts, which makes the patches a lot easier 
>to review.  This is in fact an argument in favor of losing 
>development history.  ;-D

We do that with BK too, occasionally. It's sometimes just cleaner to
create a new clone with a "cleaned up" revision history. It's not needed
all that often, but I certainly agree that sometimes you just don't want
to see all the mistakes people initially made.

It's also needed in BK for things like merging from two totally
different repositories - you can't auto-merge just one fix from a
Linux-2.4.x BK tree into a 2.5.x BK tree, for example (when you merge in
BK, you merge _everything_ in the two repositories).  So those have to
be done as patches, kind of like the clean-up thing. 

>>Yet it is the _cheap_ branches that should be the true first-class
>>citizen. Potentially throw-away code that may end up being really 
>>really useful, but might just be a crazy pipe-dream. The experimental 
>>stuff that would _really_ want to have nice source control.
>
>Interestingly, I tend to find that this sort of stuff is exactly what
>*doesn't* need source control; source control simply obscures the 
>process by exposing too much development history, much of which has no 
>relevance to the current version.  Or did you mean code that already 
>works, and is being refined, rather than code in the 'rewrite from 
>scratch every two weeks' stage?

I personally find that _most_ changes by far tend to be fairly small and
simple, and take a few hours or days to do. Yet at the same time, you
want to have access to a lot of the SCM functionality (commit one set of
changes as "phase 1 - preparation", "phase 2 - update filesystems" etc).

At the same time, the tree often doesn't work until all phases are done,
so you do NOT want to commit "phase 1" to the CVS head - and creating a
CVS branch for something that is really not a big project is clearly not
something most people want to do. The pain of the branch is bigger than 
it's worth.

And THIS is where the distributed repository nature of BK really shines.
It's a totally everyday thing, not something odd or special. You can
work in your own repository, with all the SCM tools, and document your
changes as you make them (and undo something if you notice it was
wrong). Yet you do _not_ need to pollute anything that somebody else
might be working on.

And then, when you're ready, you just push your changes to some other
tree (in BK it's an atomic operation to push _multiple_ changesets), and
tell others that you're done.

See? I'm not talking about a big six-month project.  I'm talking about
something that potentially is just a few hours.  You might do your first
cut, and check it into your tree, verify that it works, and then you
might want to go back and make another few check-ins to handle other
cases. 

In gcc terms, let's say that you change the layout of something
fundamental, and you first make sure that the C front-end works. You
check that in and test it (on the C side only) as one stage. Only when
you're happy with that do you even _bother_ to start editing the C++ and
other front-ends.

With distributed trees, it's easy to make these kinds of multi-stage
things. Because nobody else sees what you're doing until you actually
decide to export the thing. With CVS, it's a total _disaster_ to do
this (and the way everybody works is to do all the work _without_ SCM
support, and then try to do one large check-in).
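[Editorial note: the private-commits-then-push workflow described above can be modeled in a few lines. The names and interface here are illustrative, not BitKeeper's actual commands.]

```python
class Repo:
    """Toy model of the distributed workflow: commit freely in a
    private tree, then push all pending changesets in one atomic step."""
    def __init__(self):
        self.changesets = []   # full history visible in this tree
        self.pending = []      # changesets not yet pushed anywhere

    def commit(self, description):
        # Private: nobody else sees this until an explicit push.
        self.changesets.append(description)
        self.pending.append(description)

    def push(self, other):
        # Atomic from the receiver's point of view: every pending
        # changeset lands together, never a half-done intermediate state.
        other.changesets.extend(self.pending)
        self.pending = []
```

The point of the model is the separation: "phase 1" and "phase 2" exist as real, documented commits locally, yet the shared tree only ever sees the completed whole.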

			Linus


Thread overview: 14+ messages
2002-12-16  4:56 source mgt....[_HAS_ gcc relevance] Richard Kenner
2002-12-16  5:33 ` Tom Lord
  -- strict thread matches above, loose matches on Subject: below --
2002-12-18 11:03 Robert Dewar
2002-12-16  3:05 Robert Dewar
2002-12-16  4:15 ` Tom Lord
2002-12-16  4:40   ` Tom Lord
2002-12-16 16:36   ` Florian Weimer
2002-12-17  0:38     ` Momchil Velikov
2002-12-17 11:41       ` Daniel Egger
2002-12-17 13:17       ` Tom Lord
2002-12-14 21:34 source mgt. requirements solicitation Linus Torvalds
2002-12-14 23:12 ` source mgt....[_HAS_ gcc relevance] Tom Lord
2002-12-14 22:12   ` Linus Torvalds
2002-12-15  3:04   ` Zack Weinberg
2002-12-15  3:23     ` Tom Lord
