* more data re disk i/o
From: Frank Ch. Eigler @ 2004-04-07 19:05 UTC (permalink / raw)
To: Sourceware Overseers
Hi -
Plain I/O statistics show that /dev/sdb is being hammered, but the
load distribution is not obvious below that granularity. LVM
statistics give more detail, such as the cumulative reads and writes
for each partition:
device           mount point                   reads      writes
/dev/vg1/lvol1   /tmp                        1281310    11800345
/dev/vg1/lvol2   /sourceware/qmail          70145851    90972406
/dev/vg1/lvol3   /sourceware/ftp           203613864     5736522
/dev/vg1/lvol4   /sourceware/projects     1833947517   139725672
/dev/vg1/lvol5   /sourceware/snapshot-tmp   19699167    52943842
/dev/vg1/lvol6   /sourceware/www           257096451    33898986
/dev/vg1/lvol7   /sourceware/libre           9499367       89131
/dev/vg1/lvol8   /sourceware/cvs-tmp        49807668   428982596
/dev/vg1/lvol9   /sourceware/htdig         504302787   140198200
/dev/vg1/lvol10  /scratch                     112121           3
/dev/vg1/lvol11  /home                       1056439      412797
Some more live monitoring confirms that lvol8 and lvol2
have relatively many writes per unit time, and that lvol3 and
lvol4 have relatively many reads. On an intuitive level it
would make sense to segregate some of this traffic a bit better.
There is some unused room on sdc that could for example host
cvs-tmp, next time we get the chance to move things around.
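[For reference, the kind of live monitoring described above could be done
along these lines; the exact commands and proc paths are illustrative and
would need checking against the kernel and LVM version on the box:]

```shell
# Sample extended per-device statistics every 5 seconds (sysstat's iostat).
iostat -d -x 5 /dev/sdb /dev/sdc

# On an LVM1-era kernel, cumulative per-LV counters are exposed under
# /proc/lvm (path shown is the usual layout; verify it exists here).
cat /proc/lvm/VGs/vg1/LVs/lvol8
```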
- FChE
* Re: more data re disk i/o
From: Ian Lance Taylor @ 2004-04-07 19:12 UTC (permalink / raw)
To: Frank Ch. Eigler; +Cc: Sourceware Overseers
"Frank Ch. Eigler" <fche@redhat.com> writes:
> Some more live monitoring confirms that lvol8 and lvol2
> have relatively many writes per unit time, and that lvol3 and
> lvol4 have relatively many reads. On an intuitive level it
> would make sense to segregate some of this traffic a bit better.
> There is some unused room on sdc that could for example host
> cvs-tmp, next time we get the chance to move things around.
It might be a cunning move to put cvs-tmp on some sort of RAM disk.
There is no reason to preserve it across reboots. That's what we do
at Wasabi, and it seems to work fine.
Ian
* Re: more data re disk i/o
From: Christopher Faylor @ 2004-04-07 19:17 UTC (permalink / raw)
To: Sourceware Overseers
On Wed, Apr 07, 2004 at 03:04:58PM -0400, Frank Ch. Eigler wrote:
>Plain I/O statistics show that /dev/sdb is being hammered but the
>load distribution is not obvious below that granularity. LVM
>statistics give more details, like the cumulative reads and writes
>for each partition:
>
>/dev/vg1/lvol1 /tmp 1281310 11800345
>/dev/vg1/lvol2 /sourceware/qmail 70145851 90972406
>/dev/vg1/lvol3 /sourceware/ftp 203613864 5736522
>/dev/vg1/lvol4 /sourceware/projects 1833947517 139725672
>/dev/vg1/lvol5 /sourceware/snapshot-tmp 19699167 52943842
>/dev/vg1/lvol6 /sourceware/www 257096451 33898986
>/dev/vg1/lvol7 /sourceware/libre 9499367 89131
>/dev/vg1/lvol8 /sourceware/cvs-tmp 49807668 428982596
>/dev/vg1/lvol9 /sourceware/htdig 504302787 140198200
>/dev/vg1/lvol10 /scratch 112121 3
>/dev/vg1/lvol11 /home 1056439 412797
>
>Some more live monitoring confirms that lvol8 and lvol2
>have relatively many writes per unit time, and that lvol3 and
>lvol4 have relatively many reads. On an intuitive level it
>would make sense to segregate some of this traffic a bit better.
>There is some unused room on sdc that could for example host
>cvs-tmp, next time we get the chance to move things around.
sdc will eventually hold some parts of /www, which is close to becoming
full again.
Also, /scratch should eventually disappear and its space be distributed
elsewhere.
cgf
* Re: more data re disk i/o
From: Jason Molenda @ 2004-04-07 19:20 UTC (permalink / raw)
To: overseers, fche
On Wed, Apr 07, 2004 at 03:12:40PM -0400, Ian Lance Taylor wrote:
>
> It might be a cunning move to put cvs-tmp on some sort of RAM disk.
> There is no reason to preserve it across reboots. That's what we do
> at Wasabi, and it seems to work fine.
>
Yeah, Chris and I looked at doing this a while back. I think Chris had
even experimented with the cygwin repository, but I don't remember
clearly.
The only caveat here is that you need enough space to store a complete
checkout of a tree, worst-case. I remember a worst-case scenario at
Yahoo where a user did a 'cvs update' on a checkout and cvs thought
every file had been changed (I think his timestamps were off), so it
uploaded all of the files to the server. Or something much like that --
I can look up the e-mails to confirm the details if anyone doubts it.
But the point was that you need a pretty big RAM disk to handle this
worst-case scenario correctly.
Or we can just declare this scenario an expected failure (it doesn't
come up very often - normal usage is a directory-at-a-time) and clean
up by hand on those rare occasions when it happens. However, if this
ever does come up, cvs will be broken for everyone until someone logs
in and removes the big disk user.
J
* Re: more data re disk i/o
From: Frank Ch. Eigler @ 2004-04-07 19:29 UTC (permalink / raw)
To: Sourceware Overseers
Hi -
cgf wrote:
> [...]
> sdc will eventually hold some parts of /www, which is close to becoming
> full again.
>
> Also, /scratch should eventually disappear and its space be distributed
> elsewhere.
Would you agree with an experiment in the interim, consisting of
adding sdc1 (18GB) to the vg as a new pv, and then moving cvs-tmp
to this new pv? (It can be done online.) It would be straightforward
to undo. Plus, with its space freed up on sdb (and later with /scratch),
lvol6 (/www) could be enlarged in situ.
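[A sketch of the proposed experiment; device and LV names follow the
thread, but sizes and the filesystem-resize tool are assumptions to be
verified before running anything:]

```shell
pvcreate /dev/sdc1                 # initialize sdc1 as a physical volume
vgextend vg1 /dev/sdc1             # add it to the existing volume group

# Move only cvs-tmp's extents off sdb onto the new PV (works online):
pvmove -n lvol8 /dev/sdb /dev/sdc1

# Later, grow /www into the space freed on sdb (size is illustrative);
# the resize tool depends on the filesystem and kernel support:
lvextend -L +4G /dev/vg1/lvol6
resize2fs /dev/vg1/lvol6
```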
- FChE
* Re: more data re disk i/o
From: Ian Lance Taylor @ 2004-04-07 19:32 UTC (permalink / raw)
To: Jason Molenda; +Cc: overseers, fche
Jason Molenda <jason-swarelist@molenda.com> writes:
> The only caveat here is that you need enough space to store a complete
> checkout of a tree, worst-case. I remember a worst-case scenario at
> Yahoo where a user did a 'cvs update' on a checkout and cvs thought
> every file had been changed (I think his timestamps were off), so it
> uploaded all of the files to the server. Or something much like that -
> I can look up the e-mails to confirm the details if anyone doubts it.
> But the point was that you need a pretty big RAM disk to handle this
> worst-case scenario correctly.
>
> Or we can just declare this scenario an expected failure (it doesn't
> come up very often - normal usage is a directory-at-a-time) and clean
> up by hand on those rare occasions when it happens. However, if this
> ever does come up, cvs will be broken for everyone until someone logs
> in and removes the big disk user.
It's easy to understand the failure case, but it seems unfortunate
that it would break CVS until someone cleaned it up. The CVS server is
supposed to delete the temporary directory when it exits, even if that
is due to a failure, although there is an exception if CVS dumps core.
I do have a vague recollection that we used a RAM disk before, but it
caused trouble because CVS would eat up all of memory sometimes. Or
something. But maybe I misremember.
What we really need is a file system which stays in RAM up to a point,
and then swaps out to the swap file. But I don't suppose Linux has
anything like that.
Ian
* Re: more data re disk i/o
From: Christopher Faylor @ 2004-04-07 20:07 UTC (permalink / raw)
To: Frank Ch. Eigler; +Cc: Sourceware Overseers
On Wed, Apr 07, 2004 at 03:29:47PM -0400, Frank Ch. Eigler wrote:
>Hi -
>
>cgf wrote:
>> [...]
>> sdc will eventually hold some parts of /www, which is close to becoming
>> full again.
>>
>> Also, /scratch should eventually disappear and its space be distributed
>> elsewhere.
>
>Would you agree with an experiment in the interim, consisting of
>adding sdc1 (18gb) to the vg as a new pv, and then moving cvs-tmp
>to this new pv? (It can be done online.)
Sure. That's pretty much what I was planning on doing. Once it's part
of the vg, its space can be distributed as needed.
Most of /dev/sdc2 is currently just backup files -- gcc snapshots and www
logs. I was waiting for word from Angela that she'd gotten a snapshot of
these before I did anything else but maybe it's not all that important to
save them.
cgf
* Re: more data re disk i/o
From: Zack Weinberg @ 2004-04-07 20:51 UTC (permalink / raw)
To: overseers; +Cc: fche, jason-swarelist
Ian Lance Taylor <ian@wasabisystems.com> writes:
> What we really need is a file system which stays in RAM up to a point,
> and then swaps out to the swap file. But I don't suppose Linux has
> anything like that.
2.4/2.6 has tmpfs, which is exactly this. (It was originally
implemented for the sake of /dev/shm, but it's a general filesystem.)
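[A minimal sketch of what that could look like for cvs-tmp; the size
cap and mount point are illustrative, not a recommendation:]

```shell
# tmpfs lives in the page cache and spills to swap under memory pressure;
# size= caps growth so a runaway checkout can't exhaust RAM plus swap.
mount -t tmpfs -o size=2g,mode=1777 tmpfs /sourceware/cvs-tmp

# Or persistently, via /etc/fstab:
# tmpfs  /sourceware/cvs-tmp  tmpfs  size=2g,mode=1777  0 0
```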
zw
* Re: more data re disk i/o
From: Ian Lance Taylor @ 2004-04-07 21:00 UTC (permalink / raw)
To: Zack Weinberg; +Cc: overseers, fche, jason-swarelist
Zack Weinberg <zack@codesourcery.com> writes:
> Ian Lance Taylor <ian@wasabisystems.com> writes:
>
> > What we really need is a file system which stays in RAM up to a point,
> > and then swaps out to the swap file. But I don't suppose Linux has
> > anything like that.
>
> 2.4/2.6 has tmpfs which is exactly this. (originally implemented for
> the sake of /dev/shm, but it's a general filesystem)
Sounds promising, and I see it listed in /proc/filesystems on
sourceware. Does anybody here have any experience with this?
Ian