From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 07 Apr 2004 19:05:00 -0000
From: "Frank Ch. Eigler" <fche@redhat.com>
To: Sourceware Overseers <overseers@sources.redhat.com>
Subject: more data re disk i/o
Message-ID: <20040407190458.GD11209@redhat.com>

Hi -

Plain I/O statistics show that /dev/sdb is being hammered, but they
don't reveal how the load is distributed below the whole-disk level.
LVM statistics give more detail, such as the cumulative reads and
writes for each partition:

device           mountpoint                     reads      writes
/dev/vg1/lvol1   /tmp                         1281310    11800345
/dev/vg1/lvol2   /sourceware/qmail           70145851    90972406
/dev/vg1/lvol3   /sourceware/ftp            203613864     5736522
/dev/vg1/lvol4   /sourceware/projects      1833947517   139725672
/dev/vg1/lvol5   /sourceware/snapshot-tmp    19699167    52943842
/dev/vg1/lvol6   /sourceware/www            257096451    33898986
/dev/vg1/lvol7   /sourceware/libre            9499367       89131
/dev/vg1/lvol8   /sourceware/cvs-tmp         49807668   428982596
/dev/vg1/lvol9   /sourceware/htdig          504302787   140198200
/dev/vg1/lvol10  /scratch                      112121           3
/dev/vg1/lvol11  /home                        1056439      412797

Some more live monitoring confirms that lvol8 and lvol2 see
relatively many writes per unit time, and that lvol3 and lvol4 see
relatively many reads.

Intuitively it would make sense to segregate some of this traffic a
bit better. There is some unused room on sdc that could, for
example, host cvs-tmp the next time we get a chance to move things
around.

- FChE
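
P.S. For anyone who wants to reproduce the "live monitoring" above,
below is a minimal present-day Python sketch of the same idea: it
samples the per-device read/write counters twice and prints the
per-second deltas. It assumes a 2.6-style /proc/diskstats (on a 2.4
kernel the equivalent counters live in /proc/partitions, with
different field offsets), and the ten-second interval and device
names are illustrative, not necessarily what was used here.

#!/usr/bin/env python3
# Minimal sketch: sample per-device I/O counters twice and print the
# per-second read/write deltas, to spot which volumes are busiest.
# Assumes a 2.6-style /proc/diskstats; the sampling interval is an
# arbitrary illustration.

import time

INTERVAL = 10  # seconds between the two samples

def sample():
    """Return {device_name: (reads_completed, writes_completed)}."""
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if len(fields) < 8:  # some partition lines carry fewer fields
                continue
            # fields: major minor name reads-completed ... writes-completed ...
            stats[fields[2]] = (int(fields[3]), int(fields[7]))
    return stats

before = sample()
time.sleep(INTERVAL)
after = sample()

print("%-16s %10s %10s" % ("device", "reads/s", "writes/s"))
for name in sorted(after):
    r0, w0 = before.get(name, after[name])
    r1, w1 = after[name]
    reads_per_s = (r1 - r0) / INTERVAL
    writes_per_s = (w1 - w0) / INTERVAL
    if reads_per_s or writes_per_s:
        print("%-16s %10.1f %10.1f" % (name, reads_per_s, writes_per_s))

No special privileges are needed, since /proc/diskstats is
world-readable; a longer interval smooths out short bursts.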