public inbox for gcc@gcc.gnu.org
* libstdc++ test suite still drives machine into swap
@ 2001-06-26 19:57 Zack Weinberg
  2001-06-27  3:10 ` Wolfram Gloger
  2001-07-31 18:53 ` Phil Edwards
  0 siblings, 2 replies; 18+ messages in thread
From: Zack Weinberg @ 2001-06-26 19:57 UTC (permalink / raw)
  To: gcc, libstdc++

The libstdc++ test suite still drives my machine, which has plenty of
RAM thank you, into swap.  I suspect this is because ulimit -d does
not actually work on Linux: libc allocates huge blocks with
anonymous mmap, and anonymous mmap ignores that particular limit.

The last time I brought this up I tried to explain why these tests
shouldn't be done at all and no one got it.  Code like

  try {
    csz01 = str01.max_size();
    std::string str03(csz01 - 1, 'A');
    VERIFY( str03.size() == csz01 - 1 );
    VERIFY( str03.size() <= str03.capacity() );
  } catch (std::bad_alloc) {
    VERIFY (true);
  }

*does not test* that size() is equal to csz01 - 1 or <= capacity().
It doesn't even test that std::bad_alloc is thrown.

What actually happens is that execution never gets past the
std::string constructor.  Malloc lies.  It gives you a pointer to
memory that does not exist, on the theory that you won't *really* use
all of it.  Then the operating system kills the process - if you're
lucky - while it's trying to write 'A' over a gigabyte of virtual
memory that the computer does not actually have.[1]

The *only* way you can test std::bad_alloc reliably is to override
libc malloc with something that fails on huge requests.  Resource
limits are not sufficient.  Assuming that malloc will return NULL 
most certainly does not cut it.
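A malloc replacement of the sort described might look like the
following sketch, done here at the operator new level (the cap value
and the name kAllocationCap are invented for illustration):

```cpp
#include <cstdlib>
#include <new>

// Hypothetical cap: any single request larger than this fails
// deterministically, instead of relying on the kernel to overcommit.
static const std::size_t kAllocationCap = 64 * 1024 * 1024;  // 64 MiB

void* operator new(std::size_t n)
{
    if (n == 0)
        n = 1;
    if (n > kAllocationCap)
        throw std::bad_alloc();   // huge request: fail immediately
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();       // genuine out-of-memory
}

void operator delete(void* p) noexcept
{
    std::free(p);
}
```

With something like this linked into the test, the catch clause in the
quoted code becomes reachable on any machine, regardless of how much
memory the kernel is willing to promise.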

The *only* way you can test the constraints on max_size, size,
capacity, etc. is if you can somehow create a basic_string<>
instantiation for which max_size is small enough that you *can*
allocate that much memory without bringing the computer to its knees.[2]
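One possible way to build such an instantiation is a custom allocator
whose max_size() is artificially small; basic_string derives its own
max_size() from the allocator's.  A sketch (the name tiny_alloc and
the limit of 64 are invented for illustration):

```cpp
#include <cstddef>
#include <new>
#include <string>

// Hypothetical allocator whose max_size() is tiny, so the "allocate
// max_size() characters" edge case fits comfortably in real memory.
template <typename T>
struct tiny_alloc {
    typedef T value_type;

    tiny_alloc() {}
    template <typename U> tiny_alloc(const tiny_alloc<U>&) {}

    static std::size_t max_size() { return 64; }  // artificially small

    T* allocate(std::size_t n) {
        if (n > max_size())
            throw std::bad_alloc();               // now actually reachable
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};

template <typename T, typename U>
bool operator==(const tiny_alloc<T>&, const tiny_alloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const tiny_alloc<T>&, const tiny_alloc<U>&) { return false; }

typedef std::basic_string<char, std::char_traits<char>,
                          tiny_alloc<char> > tiny_string;
```

A tiny_string of max_size() characters can then be constructed, and
the constraints on size() and capacity() actually checked, without
touching more than a few dozen bytes of heap.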

[Sample code above was taken from 21_strings/ctor_copy_dtor.cc.  That
particular test has been #if 0'ed out with a comment referencing one
of the previous threads on this subject.  However, not all the tests
with this problem have been disabled.  Right now I'm not sure which
test is the current problem.]

[I might add that surely it is only necessary to test this sort of
thing *once*, not the five or six times it appears to be done in the
current test suite.]

-- 
zw   The beginning of almost every story is actually a bone, something with
     which to court the dog, which may bring you closer to the lady.
     	-- Amos Oz, _The Story Begins_

[1] If you're unlucky, the computer crashes, or the operating system
kills half a dozen random innocent processes before it hits the one
that's eating all the memory.

[2] I tried to figure out how to do that and got lost in a maze of
twisty little template classes.  But I don't really speak C++.  I
would suggest that #define __STD_STRING_MAX_SIZE or some such should
override the default for std::string's max_size() method.

* Re: libstdc++ test suite still drives machine into swap
@ 2001-08-02 15:41 Zachary Weinberg
  2001-08-02 16:19 ` Phil Edwards
  0 siblings, 1 reply; 18+ messages in thread
From: Zachary Weinberg @ 2001-08-02 15:41 UTC (permalink / raw)
  To: Benjamin Kosnik, gcc, libstdc++

Benjamin Kosnik wrote:

> I think Zack and I will just have to disagree on what constitutes
> a testsuite. For me, it includes pathological edge cases as defined in
> the standard. He thinks they are tested elsewhere, I disagree. Since this
> topic seems to be activated by cron jobs every 3 months, I just thought
> I'd reply, yet again.

This is not exactly my position.  I agree that pathological edge
cases should be tested.  My argument is that the existing code
which claims to test them, doesn't.  This should be quite clear
just from inspection.  Allocating and initializing a string of
max_size() can only succeed on a machine with more than a gigabyte
of virtual memory.  On a normal machine with less memory, either
the process dies, or bad_alloc is thrown.  Either way, the actual
test - the code intended to be executed after the allocation
completes - never gets executed.

The problem of the machine getting driven into swap by the test is
an independent problem with the same cause.  I would suggest solving
that one by adding a utility routine to libstdc++ which makes the
appropriate setrlimit() calls to constrain process memory, and
calling it from test cases that are known to thrash the computer.
This should not, however, be seen as a solution to the problem that
the tests are not testing what they are intended to.
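The suggested utility routine might be sketched like this (the helper
name is hypothetical; on Linux it is RLIMIT_AS rather than
RLIMIT_DATA that actually constrains mmap-backed allocations, which
is why ulimit -d has no effect):

```cpp
#include <sys/resource.h>
#include <cstddef>

// Hypothetical testsuite helper: cap the process's total address
// space so a runaway allocation fails fast with bad_alloc (or
// ENOMEM) instead of driving the machine into swap.  RLIMIT_AS
// covers anonymous mmap; RLIMIT_DATA historically does not on Linux.
static bool limit_virtual_memory(std::size_t bytes)
{
    struct rlimit rl;
    rl.rlim_cur = bytes;
    rl.rlim_max = bytes;
    return setrlimit(RLIMIT_AS, &rl) == 0;
}
```

Called at the top of a known-thrashing test, this turns the
machine-killing failure mode into an ordinary, catchable allocation
failure.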

zw
semi-temporary new address
not subscribed to gcc lists at the moment


end of thread

Thread overview: 18+ messages
2001-06-26 19:57 libstdc++ test suite still drives machine into swap Zack Weinberg
2001-06-27  3:10 ` Wolfram Gloger
2001-06-27 10:28   ` Zack Weinberg
2001-07-31 18:53 ` Phil Edwards
2001-07-31 20:46   ` Gabriel Dos Reis
2001-07-31 22:59     ` Benjamin Kosnik
2001-08-01  0:22       ` Gabriel Dos Reis
2001-08-01 12:22       ` Phil Edwards
2001-08-01 13:29         ` Stephen M. Webb
2001-08-01 14:02           ` Phil Edwards
2001-08-02  6:01             ` Stephen M. Webb
2001-08-02 13:31               ` Phil Edwards
2001-08-02 15:41 Zachary Weinberg
2001-08-02 16:19 ` Phil Edwards
2001-08-02 16:40   ` Zack Weinberg
2001-08-03 10:27     ` Phil Edwards
2001-08-02 19:28   ` Hans-Peter Nilsson
2001-08-03  6:10     ` Stephen M. Webb
