From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (qmail 1887 invoked by alias); 24 Jul 2010 16:51:07 -0000
Received: (qmail 1879 invoked by uid 22791); 24 Jul 2010 16:51:06 -0000
X-SWARE-Spam-Status: No, hits=-2.0 required=5.0 tests=AWL,BAYES_00,TW_PD
X-Spam-Check-By: sourceware.org
Received: from vexpert.dbai.tuwien.ac.at (HELO vexpert.dbai.tuwien.ac.at) (128.131.111.2) by sourceware.org (qpsmtpd/0.43rc1) with ESMTP; Sat, 24 Jul 2010 16:50:52 +0000
Received: from acrux.dbai.tuwien.ac.at (acrux.dbai.tuwien.ac.at [128.131.111.60]) by vexpert.dbai.tuwien.ac.at (Postfix) with ESMTP id 3B8781E155; Sat, 24 Jul 2010 18:50:46 +0200 (CEST)
Date: Sat, 24 Jul 2010 16:51:00 -0000
From: Gerald Pfeifer
To: Benjamin Kosnik
cc: gcc@gcc.gnu.org
Subject: Re: onlinedocs/libstdc++ appears stale
In-Reply-To: <20100714165142.17315efd@shotwell>
Message-ID: 
References: <20100714165142.17315efd@shotwell>
User-Agent: Alpine 2.00 (LNX 1167 2008-08-23)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
X-IsSubscribed: yes
Mailing-List: contact gcc-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Id: 
List-Archive: 
List-Post: 
List-Help: 
Sender: gcc-owner@gcc.gnu.org
X-SW-Source: 2010-07/txt/msg00373.txt.bz2

On Wed, 14 Jul 2010, Benjamin Kosnik wrote:
> I would rather move to this structure for release documentation. And
> then just have some nightly run on gcc.gnu.org on a properly configured
> machine that just generates the documentation directly from the sources.

gcc.gnu.org will be preferable, I think. That allows a number of us to
help out if needed, re-running scripts, etc.

For the time being I suggest applying the patch below, though. What we
have in place as of today simply is broken (and has been for quarters,
at a minimum).

Objections?
Gerald

Index: update_web_docs_libstdcxx_svn
===================================================================
--- update_web_docs_libstdcxx_svn	(revision 162499)
+++ update_web_docs_libstdcxx_svn	(working copy)
@@ -37,11 +37,6 @@
 cd doc
 rm -f Makefile
 
-# build a compressed copy of the HTML, preserve directory structure
-for file in `find . -name "*.html" -print`; do
-  gzip --best < $file > $file.gz
-done
-
 # copy the tree to the onlinedocs area, preserve directory structure
 #find . -depth -print | cpio -pdv $WWWDIR
 find . -depth -print | cpio -pd $WWWDIR > /dev/null 2>&1
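For reference, here is a standalone sketch of what the removed hunk did:
compress every generated HTML page next to the original, preserving the
directory layout. The temporary directory and sample file below are
hypothetical stand-ins for the real doc tree, not the actual onlinedocs
paths.

```shell
#!/bin/sh
set -e

# Hypothetical stand-in for the generated doc tree.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/doc/sub"
echo '<html></html>' > "$tmpdir/doc/sub/index.html"

cd "$tmpdir/doc"

# Build a compressed copy of each HTML file, preserving directory
# structure (this is the loop the patch deletes from the script).
for file in $(find . -name "*.html" -print); do
  gzip --best < "$file" > "$file.gz"
done

# Each page now has a .gz sibling alongside it.
find . -name "*.html.gz"
```

The loop only makes sense if the web server is configured to serve the
precompressed .gz files; with that configuration broken, dropping the
loop is the simpler fix.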