* RE: Analysis of Mauve failures - The final chapter
@ 2002-04-04 16:45 Boehm, Hans
2002-04-04 17:14 ` Mark Wielaard
2002-04-05 1:15 ` Andrew Haley
0 siblings, 2 replies; 20+ messages in thread
From: Boehm, Hans @ 2002-04-04 16:45 UTC (permalink / raw)
To: 'Mark Wielaard', java
> From: Mark Wielaard [mailto:mark@klomp.org]
> > !java.lang.reflect.Array.newInstance
> Ugh, not fun. Running by hand also hangs, but turning on the -debug or
> -verbose flag makes it run... When not giving any flags it only prints
> Needed to allocate blacklisted block at 0x824b000
> The test actually tries to force an OutOfMemoryError exception which
> might explain this. But the Object.clone() test also seems to do this
> and that one just works.
>
Mark -
All tests that allocate large objects should ideally be run with the
environment variable GC_NO_BLACKLIST_WARNING defined. That will get rid of
the message. The occurrence of the warning is often less than 100%
deterministic, and that's expected. Was there an issue here beyond the
warning?
Hans
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-04 16:45 Analysis of Mauve failures - The final chapter Boehm, Hans
@ 2002-04-04 17:14 ` Mark Wielaard
2002-04-05 3:47 ` Mark Wielaard
2002-04-05 1:15 ` Andrew Haley
1 sibling, 1 reply; 20+ messages in thread
From: Mark Wielaard @ 2002-04-04 17:14 UTC (permalink / raw)
To: Boehm, Hans; +Cc: java
Hi,
On Fri, 2002-04-05 at 02:40, Boehm, Hans wrote:
> All tests that allocate large objects should ideally be run with the
> environment variable GC_NO_BLACKLIST_WARNING defined. That will get rid of
> the message. The occurrence of the warning is often less than 100%
> deterministic, and that's expected.
Good to know. Thanks for the tip.
> Was there an issue here beyond the warning?
There was an issue, but it wasn't the garbage collector. natArray.cc
seems to forget to check for a null object and crashes. But why that
didn't show up when using -verbose or -debug is unclear.
Cheers,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-04 17:14 ` Mark Wielaard
@ 2002-04-05 3:47 ` Mark Wielaard
2002-04-05 4:07 ` Andrew Haley
2002-04-08 13:40 ` Tom Tromey
0 siblings, 2 replies; 20+ messages in thread
From: Mark Wielaard @ 2002-04-05 3:47 UTC (permalink / raw)
To: java
Hi,
On Fri, 2002-04-05 at 03:10, Mark Wielaard wrote:
> On Fri, 2002-04-05 at 02:40, Boehm, Hans wrote:
>
> > Was there an issue here beyond the warning?
>
> There was an issue, but it wasn't the garbage collector. natArray.cc
> seems to forget to check for a null object and crashes. But why that
> didn't show up when using -verbose or -debug is unclear.
I spoke too soon. The null-pointer check does not seem to be the only issue.
This patch seems clearly needed:
--- natArray.cc 2001/10/02 13:44:32 1.11
+++ natArray.cc 2002/04/05 10:34:11
@@ -1,6 +1,6 @@
// natField.cc - Implementation of java.lang.reflect.Field native methods.
-/* Copyright (C) 1999, 2000, 2001 Free Software Foundation
+/* Copyright (C) 1999, 2000, 2001, 2002 Free Software Foundation
This file is part of libgcj.
@@ -16,6 +16,7 @@
#include <gcj/cni.h>
#include <java/lang/reflect/Array.h>
#include <java/lang/IllegalArgumentException.h>
+#include <java/lang/NullPointerException.h>
#include <java/lang/Byte.h>
#include <java/lang/Short.h>
#include <java/lang/Integer.h>
@@ -46,6 +47,8 @@
java::lang::reflect::Array::newInstance (jclass componentType,
jintArray dimensions)
{
+ if (! dimensions)
+ throw new java::lang::NullPointerException;
jint ndims = dimensions->length;
if (ndims == 0)
throw new java::lang::IllegalArgumentException ();
But what is really going on with the Mauve test is not yet clear to me.
The following program (extracted from the Mauve test), run under gdb,
gives:
import java.lang.reflect.Array;
public class Big
{
public static void main(String[] args)
{
String[][] t = (String[][]) Array.newInstance(String.class,
new int[] {Integer.MAX_VALUE, Integer.MAX_VALUE});
System.out.println(t.length);
}
}
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 23056)]
0x40250020 in java::lang::Class::isPrimitive() (this=0x0)
at ../../../gcc/libjava/java/lang/Class.h:208
208 return vtable == JV_PRIMITIVE_VTABLE;
Current language: auto; currently c++
(gdb) bt
#0 0x40250020 in java::lang::Class::isPrimitive() (this=0x0)
at ../../../gcc/libjava/java/lang/Class.h:208
#1 0x40222e89 in _Jv_NewMultiArrayUnchecked (type=0x80a2af0, dimensions=1,
sizes=0x8085e7c) at ../../../gcc/libjava/prims.cc:541
#2 0x40222f09 in _Jv_NewMultiArrayUnchecked (type=0x80a2a10, dimensions=2,
sizes=0x8085e78) at ../../../gcc/libjava/prims.cc:552
#3 0x40222fad in _Jv_NewMultiArray(java::lang::Class*, int, int*) (
type=0x80a2a10, dimensions=2, sizes=0x8085e78)
at ../../../gcc/libjava/prims.cc:566
#4 0x4025af14 in java::lang::reflect::Array::newInstance(java::lang::Class*, JArray<int>*) (componentType=0x8049350, dimensions=0x8085e70)
at ../../../gcc/libjava/java/lang/reflect/natArray.cc:63
#5 0x08048ab7 in Big.main(java.lang.String[]) (args=0x8089fe8) at Big.java:6
It seems to me that something like the following is needed, since with
Array.newInstance() the Class type does not have to be an array class:
--- prims.cc 2002/03/10 03:30:48 1.71.2.1
+++ prims.cc 2002/04/05 11:06:30
@@ -535,8 +535,11 @@
static jobject
_Jv_NewMultiArrayUnchecked (jclass type, jint dimensions, jint *sizes)
{
- JvAssert (type->isArray());
- jclass element_type = type->getComponentType();
+ jclass element_type;
+ if (type->isArray())
+ element_type = type->getComponentType();
+ else
+ element_type = type;
jobject result;
if (element_type->isPrimitive())
result = _Jv_NewPrimArray (element_type, sizes[0]);
But that dies horribly with:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 1024 (LWP 13418)]
GC_build_fl_clear2 (h=0x81cd000, ofl=0x0)
at ../../../gcc/boehm-gc/new_hblk.c:59
59 p[0] = (word)ofl;
(gdb) bt
#0 GC_build_fl_clear2 (h=0x81cd000, ofl=0x0)
at ../../../gcc/boehm-gc/new_hblk.c:59
#1 0x403f5675 in GC_build_fl (h=0x81cd000, sz=2, clear=136105984, list=0x0)
at ../../../gcc/boehm-gc/new_hblk.c:184
#2 0x403f194f in GC_generic_malloc_many (lb=8, k=0, result=0x405a9784)
at ../../../gcc/boehm-gc/mallocx.c:479
#3 0x403ef1ba in GC_local_malloc (bytes=4)
at ../../../gcc/boehm-gc/linux_threads.c:346
#4 0x403e7663 in _Jv_AllocArray(int, java::lang::Class*) (size=4,
klass=0x8120910) at ../../../gcc/libjava/boehm.cc:354
#5 0x40222bf9 in _Jv_NewObjectArray (count=2147483647,
elementClass=0x80a2af0, init=0x0) at ../../../gcc/libjava/prims.cc:463
#6 0x40222ed9 in _Jv_NewMultiArrayUnchecked (type=0x80a2af0, dimensions=1,
sizes=0x8085e7c) at ../../../gcc/libjava/prims.cc:547
#7 0x40222f20 in _Jv_NewMultiArrayUnchecked (type=0x80a2a10, dimensions=2,
sizes=0x8085e78) at ../../../gcc/libjava/prims.cc:554
#8 0x40222fc3 in _Jv_NewMultiArray(java::lang::Class*, int, int*) (
type=0x80a2a10, dimensions=2, sizes=0x8085e78)
at ../../../gcc/libjava/prims.cc:568
#9 0x4025af24 in java::lang::reflect::Array::newInstance(java::lang::Class*, JArray<int>*) (componentType=0x8049350, dimensions=0x8085e70)
at ../../../gcc/libjava/java/lang/reflect/natArray.cc:63
#10 0x08048ab7 in Big.main(java.lang.String[]) (args=0x8089fe8) at Big.java:6
Maybe someone who is more familiar with the Array code can take a look
at it.
Cheers,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-05 3:47 ` Mark Wielaard
@ 2002-04-05 4:07 ` Andrew Haley
2002-04-05 4:22 ` Mark Wielaard
2002-04-08 13:40 ` Tom Tromey
1 sibling, 1 reply; 20+ messages in thread
From: Andrew Haley @ 2002-04-05 4:07 UTC (permalink / raw)
To: Mark Wielaard; +Cc: java
Mark Wielaard writes:
>
> {
> + if (! dimensions)
> + throw new java::lang::NullPointerException;
> jint ndims = dimensions->length;
No, this is wrong, there's no need to do this check. The dereference
of dimensions will generate a SEGV and throw a NullPointerException,
unless it's running on a broken system that doesn't catch SEGV.
The thing to do with such systems is to fix them, not add checks to
the library.
Andrew.
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-05 4:07 ` Andrew Haley
@ 2002-04-05 4:22 ` Mark Wielaard
0 siblings, 0 replies; 20+ messages in thread
From: Mark Wielaard @ 2002-04-05 4:22 UTC (permalink / raw)
To: Andrew Haley; +Cc: java
Hi,
On Fri, 2002-04-05 at 13:47, Andrew Haley wrote:
> Mark Wielaard writes:
> >
> > {
> > + if (! dimensions)
> > + throw new java::lang::NullPointerException;
> > jint ndims = dimensions->length;
>
> No, this is wrong, there's no need to do this check. The dereference
> of dimensions will generate a SEGV and throw a NullPointerException,
> unless it's running on a broken system that doesn't catch SEGV.
You are right. Thanks for pointing that out. I should have known this. I
am so used to C/C++ code needing this kind of check that I forgot it is
not needed with CNI when dereferencing Java objects.
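For example, a minimal CNI sketch (the helper below is made up for
illustration, assuming a platform where libgcj turns the SIGSEGV into the
exception):
#include <gcj/cni.h>
// Deliberately no explicit null check: if 'dimensions' is null, the
// dereference below faults, and the runtime's SIGSEGV handler converts
// the fault into a thrown java::lang::NullPointerException.
static jint
dimension_count (jintArray dimensions)
{
  return dimensions->length;
}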
Cheers,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-05 3:47 ` Mark Wielaard
2002-04-05 4:07 ` Andrew Haley
@ 2002-04-08 13:40 ` Tom Tromey
2002-04-08 14:56 ` Mark Wielaard
1 sibling, 1 reply; 20+ messages in thread
From: Tom Tromey @ 2002-04-08 13:40 UTC (permalink / raw)
To: Mark Wielaard; +Cc: java
>>>>> "Mark" == Mark Wielaard <mark@klomp.org> writes:
Mark> import java.lang.reflect.Array;
Mark> public class Big
Mark> {
Mark> public static void main(String[] args)
Mark> {
Mark> String[][] t = (String[][]) Array.newInstance(String.class,
Mark> new int[] {Integer.MAX_VALUE, Integer.MAX_VALUE});
Mark> System.out.println(t.length);
Mark> }
Mark> }
I tried this. I get this result:
creche. gcj --main=Big -o Big Big.java -Wl,-rpath,/x1/gcc3/install/lib
creche. ./Big
Out of Memory! Returning NIL!
This isn't what I expected. I expected OutOfMemoryError to be thrown
and a stack trace to be printed.
I definitely didn't see the failure that you report.
Mark> It seems to me that something like the following is needed since
Mark> the Class type does not have to be an array class with
Mark> Array.newInstance()
In Array.newInstance(Class,int[]) we compute the array type, which we
then pass to _Jv_NewMultiArray:
jclass arrayType = componentType;
for (int i = 0; i < ndims; i++) // FIXME 2nd arg should
// be "current" loader
arrayType = _Jv_GetArrayClass (arrayType, 0);
So I don't think this patch is necessary. I think something else is
going on here. What do you think?
Tom
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-08 13:40 ` Tom Tromey
@ 2002-04-08 14:56 ` Mark Wielaard
2002-04-08 17:31 ` Mark Wielaard
0 siblings, 1 reply; 20+ messages in thread
From: Mark Wielaard @ 2002-04-08 14:56 UTC (permalink / raw)
To: tromey; +Cc: java
Hi,
On Mon, 2002-04-08 at 22:35, Tom Tromey wrote:
> >>>>> "Mark" == Mark Wielaard <mark@klomp.org> writes:
>
> Mark> import java.lang.reflect.Array;
> Mark> public class Big
> Mark> {
> Mark> public static void main(String[] args)
> Mark> {
> Mark> String[][] t = (String[][]) Array.newInstance(String.class,
> Mark> new int[] {Integer.MAX_VALUE, Integer.MAX_VALUE});
> Mark> System.out.println(t.length);
> Mark> }
> Mark> }
>
> I tried this. I get this result:
>
> creche. gcj --main=Big -o Big Big.java -Wl,-rpath,/x1/gcc3/install/lib
> creche. ./Big
> Out of Memory! Returning NIL!
>
> This isn't what I expected. I expected OutOfMemoryError to be thrown
> and a stack trace to be printed.
>
> I definitely didn't see the failure that you report.
Notice that you also didn't get any output from the println().
Try running the program under gdb. I just tried, and when I run it normally
it exits without printing anything. Running it under gdb always gives
the SIGSEGV (make sure you recompile libgcj without -O2, otherwise gdb
will give some misleading backtraces).
> Mark> It seems to me that something like the following is needed since
> Mark> the Class type does not have to be an array class with
> Mark> Array.newInstance()
>
> In Array.newInstance(Class,int[]) we compute the array type, which we
> then pass to _Jv_NewMultiArray:
>
> jclass arrayType = componentType;
> for (int i = 0; i < ndims; i++) // FIXME 2nd arg should
> // be "current" loader
> arrayType = _Jv_GetArrayClass (arrayType, 0);
>
> So I don't think this patch is necessary. I think something else is
> going on here. What do you think?
You are right. I missed that for loop; it must have been the
indenting/whitespace.
Hmmm. I am really lost here. I do not even have a clue why the program
segfaults when run under gdb but seems to terminate normally (although
without giving output) without gdb. And on another machine with less
memory it is just killed after a while by the kernel because it runs out
of memory.
Regards,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-08 14:56 ` Mark Wielaard
@ 2002-04-08 17:31 ` Mark Wielaard
2002-04-09 0:49 ` Bryce McKinlay
2002-04-19 19:59 ` Tom Tromey
0 siblings, 2 replies; 20+ messages in thread
From: Mark Wielaard @ 2002-04-08 17:31 UTC (permalink / raw)
To: tromey; +Cc: java
Hi,
On Mon, 2002-04-08 at 23:40, Mark Wielaard wrote:
> Hmmm. I am really lost here. I do not even have a clue why the program
> segfaults when run under gdb but seems to terminate normally (although
> without giving output) without gdb. And on another machine with less
> memory it is just killed after a while by the kernel because it runs out
> of memory.
On the other machine I was trying with "normal" big numbers (10000,
16000); when I change the first value of the dimension array to
Integer.MAX_VALUE it gives the same result (silent run or SEGV under
gdb). Replacing the first value with Integer.MAX_VALUE-1 always gives a
SEGV (with or without gdb). And using just a huge value like 2000000000
actually does give OutOfMemoryError!
Small numbers {100,200} -> OK.
Big numbers {10000, 16000} -> Out of swap space, kernel kill.
Huge numbers {2000000000, 1000} -> OutOfMemoryError
Almost MAXINT {Integer.MAX_VALUE-1, Integer.MAX_VALUE} -> SEGV.
MAXINT number {Integer.MAX_VALUE, Integer.MAX_VALUE} -> Silent failure.
So it seems to be some sort of trouble when we are really, really out of
memory and cannot even create an OutOfMemoryError.
Cheers,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-08 17:31 ` Mark Wielaard
@ 2002-04-09 0:49 ` Bryce McKinlay
2002-04-19 19:59 ` Tom Tromey
1 sibling, 0 replies; 20+ messages in thread
From: Bryce McKinlay @ 2002-04-09 0:49 UTC (permalink / raw)
To: Mark Wielaard; +Cc: tromey, java
Mark Wielaard wrote:
>So it seems to be some sort of trouble when we are really, really out of
>memory and cannot even create an OutOfMemoryError.
>
prims.cc has a special statically allocated OutOfMemoryError which can be
used in this situation, provided that throwing the exception itself
doesn't need to allocate. However, we have to make sure that printing
this exception doesn't itself require allocation (of a StringBuffer, for
example). It probably isn't possible to print a stack trace for this
reason. The stack trace printer will just give up if another exception
occurs while it's trying to print the stack trace.
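A minimal CNI sketch of that pattern (the names here are made up for
illustration; this is not the actual prims.cc code):
#include <gcj/cni.h>
#include <java/lang/OutOfMemoryError.h>
// Allocated once at startup, while memory is still available.
static java::lang::OutOfMemoryError *no_memory_error;
void
init_no_memory_error ()
{
  no_memory_error = new java::lang::OutOfMemoryError;
}
// Called when an allocation request cannot be satisfied; throwing the
// pre-built object needs no further heap allocation.
void
throw_no_memory_error ()
{
  throw no_memory_error;
}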
regards
Bryce.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-08 17:31 ` Mark Wielaard
2002-04-09 0:49 ` Bryce McKinlay
@ 2002-04-19 19:59 ` Tom Tromey
1 sibling, 0 replies; 20+ messages in thread
From: Tom Tromey @ 2002-04-19 19:59 UTC (permalink / raw)
To: Mark Wielaard; +Cc: java
>>>>> "Mark" == Mark Wielaard <mark@klomp.org> writes:
Mark> Small numbers {100,200} -> OK.
Mark> Big numbers {10000, 16000} -> Out of swap space, kernel kill.
Mark> Huge numbers {2000000000, 1000} -> OutOfMemoryError
Mark> Almost MAXINT {Integer.MAX_VALUE-1, Integer.MAX_VALUE} -> SEGV.
Mark> MAXINT number {Integer.MAX_VALUE, Integer.MAX_VALUE} -> Silent failure.
This isn't really a regression. I think we've always had problems
with this; at least when those Mauve tests were changed I remember
running into problems. Please file a PR for this.
Tom
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-04 16:45 Analysis of Mauve failures - The final chapter Boehm, Hans
2002-04-04 17:14 ` Mark Wielaard
@ 2002-04-05 1:15 ` Andrew Haley
1 sibling, 0 replies; 20+ messages in thread
From: Andrew Haley @ 2002-04-05 1:15 UTC (permalink / raw)
To: Boehm, Hans; +Cc: 'Mark Wielaard', java
Boehm, Hans writes:
> > From: Mark Wielaard [mailto:mark@klomp.org]
> > > !java.lang.reflect.Array.newInstance
> > Ugh, not fun. Running by hand also hangs, but turning on the -debug or
> > -verbose flag makes it run... When not giving any flags it only prints
> > Needed to allocate blacklisted block at 0x824b000
> > The test actually tries to force an OutOfMemoryError exception which
> > might explain this. But the Object.clone() test also seems to do this
> > and that one just works.
> All tests that allocate large objects should ideally be run with the
> environment variable GC_NO_BLACKLIST_WARNING defined. That will get rid of
> the message. The occurrence of the warning is often less than 100%
> deterministic, and that's expected. Was there an issue here beyond the
> warning?
Hans,
I don't understand. If the gc isn't buggy, why does it produce this
warning at all in a production quality system?
Andrew.
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
@ 2002-04-08 15:20 Boehm, Hans
0 siblings, 0 replies; 20+ messages in thread
From: Boehm, Hans @ 2002-04-08 15:20 UTC (permalink / raw)
To: 'tromey@redhat.com', Mark Wielaard; +Cc: java
The "Out of Memory! Returning NIL!" message ia a GC warning. The patch I'm
about to check in will make that clearer. It should not prevent the
exception from being thrown.
There is a strong argument that this warning should be turned off for gcj
once we're convinced that the exception is actually delivered correctly in
this case. I think on some platforms that will be quite hard, since
throwing an exception may involve reading debug information, which allocates
lots of memory.
Hans
> -----Original Message-----
> From: Tom Tromey [mailto:tromey@redhat.com]
> Sent: Monday, April 08, 2002 1:35 PM
> To: Mark Wielaard
> Cc: java@gcc.gnu.org
> Subject: Re: Analysis of Mauve failures - The final chapter
>
>
> >>>>> "Mark" == Mark Wielaard <mark@klomp.org> writes:
>
> Mark> import java.lang.reflect.Array;
> Mark> public class Big
> Mark> {
> Mark> public static void main(String[] args)
> Mark> {
> Mark> String[][] t = (String[][]) Array.newInstance(String.class,
> Mark> new int[] {Integer.MAX_VALUE,
> Integer.MAX_VALUE});
> Mark> System.out.println(t.length);
> Mark> }
> Mark> }
>
> I tried this. I get this result:
>
> creche. gcj --main=Big -o Big Big.java
> -Wl,-rpath,/x1/gcc3/install/lib
> creche. ./Big
> Out of Memory! Returning NIL!
>
> This isn't what I expected. I expected OutOfMemoryError to be thrown
> and a stack trace to be printed.
>
> I definitely didn't see the failure that you report.
>
> Mark> It seems to me that something like the following is needed since
> Mark> the Class type does not have to be an array class with
> Mark> Array.newInstance()
>
> In Array.newInstance(Class,int[]) we compute the array type, which we
> then pass to _Jv_NewMultiArray:
>
> jclass arrayType = componentType;
> for (int i = 0; i < ndims; i++) // FIXME 2nd arg should
> // be "current" loader
> arrayType = _Jv_GetArrayClass (arrayType, 0);
>
> So I don't think this patch is necessary. I think something else is
> going on here. What do you think?
>
> Tom
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
@ 2002-04-08 13:33 Boehm, Hans
0 siblings, 0 replies; 20+ messages in thread
From: Boehm, Hans @ 2002-04-08 13:33 UTC (permalink / raw)
To: 'tromey@redhat.com', Boehm, Hans
Cc: 'Andrew Haley', 'Mark Wielaard', java
I have a patch. I expect to check it in later today, along with the one to
fix the _DYNAMIC reference for statically linked executables.
Hans
> -----Original Message-----
> From: Tom Tromey [mailto:tromey@redhat.com]
> Sent: Monday, April 08, 2002 1:27 PM
> To: Boehm, Hans
> Cc: 'Andrew Haley'; 'Mark Wielaard'; java@gcc.gnu.org
> Subject: Re: Analysis of Mauve failures - The final chapter
>
>
> >>>>> "Hans" == Boehm, Hans <hans_boehm@hp.com> writes:
>
> Hans> 1) We unconditionally suppress all but every Nth instance of
> Hans> this warning. (N settable by environment variable replacing
> Hans> GC_NO_BLACKLIST_WARNING, defaulting to 3).
> Hans> 2) We change the warning message to something like
> Hans> "Repeated allocation of very large block (size %ld): may lead to
> Hans> poor GC performance and memory leak."
>
>
> What is the status of this?
> Is anybody working on it?
> If not, I will add it to my list.
>
> Tom
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
@ 2002-04-05 11:37 Boehm, Hans
2002-04-05 12:16 ` Per Bothner
2002-04-08 13:32 ` Tom Tromey
0 siblings, 2 replies; 20+ messages in thread
From: Boehm, Hans @ 2002-04-05 11:37 UTC (permalink / raw)
To: 'Andrew Haley'; +Cc: 'Mark Wielaard', java
How about the following "solution" for now:
1) We unconditionally suppress all but every Nth instance of this warning.
(N settable by environment variable replacing GC_NO_BLACKLIST_WARNING,
defaulting to 3). I expect this eliminates the warning completely for all
the standard test cases, and 90% of the rest. I'm not too uncomfortable
with that, since a bounded number of these usually indicates at most a
bounded space leak.
2) We change the warning message to something like
"Repeated allocation of very large block (size %ld): may lead to poor GC
performance and memory leak."
Observations:
- I think it's virtually guaranteed that you will run out of memory before
you get 2 billion of these. Thus setting the threshold sufficiently large
effectively turns off the warning. For backward compatibility,
GC_NO_BLACKLIST_WARNING can be implemented this way.
- If you get regularly repeating instances of this warning, you really do
want to know about it. Unfortunately, currently the only workaround is to
allocate data structures in smaller chunks, or perhaps to instead allocate a
permanent data structure once. (The latter is worth considering anyway,
since large object allocation is expensive in any garbage-collected system.)
On the other hand, I'm not sure anyone has encountered this situation, yet.
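A rough sketch of the rate-limiting idea in plain C++ (the function name
and the GC_LARGE_ALLOC_WARN_INTERVAL variable name are assumptions for
illustration, not necessarily what the collector will actually use):
#include <stdio.h>
#include <stdlib.h>
/* Print only every Nth occurrence of the large-block warning; N comes
   from an environment variable and defaults to 3. */
static long warn_interval;
static long warn_count;
void
maybe_warn_large_block (unsigned long size)
{
  if (warn_interval == 0)
    {
      const char *s = getenv ("GC_LARGE_ALLOC_WARN_INTERVAL");
      warn_interval = (s != 0 && atol (s) > 0) ? atol (s) : 3;
    }
  if (++warn_count % warn_interval == 0)
    fprintf (stderr, "GC warning: repeated allocation of very large block"
             " (size %lu):\n\tmay lead to poor GC performance and memory"
             " leak.\n", size);
}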
Does this sound reasonable?
Hans
> -----Original Message-----
> From: Andrew Haley [mailto:aph@cambridge.redhat.com]
> Sent: Friday, April 05, 2002 10:19 AM
> To: Boehm, Hans
> Cc: 'Mark Wielaard'; java@gcc.gnu.org
> Subject: RE: Analysis of Mauve failures - The final chapter
>
>
> Thank you for the reminder.
>
> Boehm, Hans writes:
>
> > As it stands, I'm hesitant to turn off the warnings by
> default, though
> > I can see arguments either way. If the warnings occur repeatedly,
> > they are indicative of a potential memory leak. If
> someone wants to
> > turn it off by default, and instead provide an environment
> variable to
> > turn it back on, I could probably be talked into that, too.
>
> It seems to me that we have to make up our minds.
>
> IMO: If we are shipping a production-quality system then we shouldn't
> output warnings about which we'll say "ah, don't worry about that
> message, we already know about that." It doesn't look good, and it
> will suggest to people that we don't have a serious offering. This is
> especially true if the warning message uses obscure and frightening
> terminology. This warning message looks like something major has
> failed.
>
> Andrew.
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-05 11:37 Boehm, Hans
@ 2002-04-05 12:16 ` Per Bothner
2002-04-08 13:32 ` Tom Tromey
1 sibling, 0 replies; 20+ messages in thread
From: Per Bothner @ 2002-04-05 12:16 UTC (permalink / raw)
To: Boehm, Hans; +Cc: java
Boehm, Hans wrote:
> How about the following "solution" for now:
>
> 1) We unconditionally suppress all but every Nth instance of this warning.
> (N settable by environment variable replacing GC_NO_BLACKLIST_WARNING,
> defaulting to 3). I expect this eliminates the warning completely for all
> the standard test cases, and 90% of the rest. I'm not too uncomfortable
> with that, since a bounded number of these usually indicates at most a
> bounded space leak.
>
> 2) We change the warning message to something like
>
> "Repeated allocation of very large block (size %ld): may lead to poor GC
> performance and memory leak."
>
> Does this sound reasonable?
That seems ok, though perhaps a higher threshold (10?) might be better.
--
--Per Bothner
per@bothner.com http://www.bothner.com/per/
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-05 11:37 Boehm, Hans
2002-04-05 12:16 ` Per Bothner
@ 2002-04-08 13:32 ` Tom Tromey
1 sibling, 0 replies; 20+ messages in thread
From: Tom Tromey @ 2002-04-08 13:32 UTC (permalink / raw)
To: Boehm, Hans; +Cc: 'Andrew Haley', 'Mark Wielaard', java
>>>>> "Hans" == Boehm, Hans <hans_boehm@hp.com> writes:
Hans> 1) We unconditionally suppress all but every Nth instance of
Hans> this warning. (N settable by environment variable replacing
Hans> GC_NO_BLACKLIST_WARNING, defaulting to 3).
Hans> 2) We change the warning message to something like
Hans> "Repeated allocation of very large block (size %ld): may lead to
Hans> poor GC performance and memory leak."
What is the status of this?
Is anybody working on it?
If not, I will add it to my list.
Tom
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
@ 2002-04-05 9:27 Boehm, Hans
2002-04-05 11:35 ` Andrew Haley
0 siblings, 1 reply; 20+ messages in thread
From: Boehm, Hans @ 2002-04-05 9:27 UTC (permalink / raw)
To: 'Andrew Haley', Boehm, Hans; +Cc: 'Mark Wielaard', java
This is a long story, and I think it has been discussed here before.
The "short" answer is that in an ideal world, the gcc back end should:
1) Guarantee GC-safety, i.e. guarantee that all live objects are referenced
by collector-recognizable pointers to the object. Currently it comes close
enough to guaranteeing this that nobody complains, and thus nobody is
sufficiently motivated to fix it. I believe currently all Java references
stored in statically allocated memory or the heap are in fact pointers to
the base of the corresponding object, and are thus guaranteed to be
collector recognizable. This is not true for temporaries in registers or
spilled to
the stack. But the collector always recognizes interior pointers from those
sources, so in nearly all cases, the collector makes up for this problem,
especially since Java arrays have a header. The compiler should guarantee
that it's true in ALL cases. I'm not sure there's an official bug report
about this. AFAIK, it has never been observed in practice with gcj. It's
easy enough to contrive C test cases for which it breaks with optimization,
at least on some architectures.
2) Ensure that an accessible array is always referenced by a pointer to,
or near, the beginning of the array. (The conjecture is that if we did (1),
this would be easy.)
If it did these, we could allocate large arrays such that interior pointers
would never need to be recognized, which would basically cause these
warnings to disappear, at least in the absence of native code that allocated
collectable objects. (The collector hooks to do this have been there for a
long time. They're the ...ignore_off_page() allocation calls.)
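For reference, a small standalone C++ example of those hooks, assuming a
Boehm GC installation whose header is visible as <gc.h> (this is plain
illustration, not libgcj code):
#include <gc.h>
#include <stdio.h>
int
main ()
{
  GC_INIT ();
  /* For a block this large, the collector need not honour interior
     pointers beyond the first page; the caller promises to keep a
     pointer to (or near) the start of the object alive. */
  char *big = (char *) GC_MALLOC_IGNORE_OFF_PAGE (16 * 1024 * 1024);
  if (big == 0)
    {
      fprintf (stderr, "allocation failed\n");
      return 1;
    }
  big[0] = 1;
  printf ("allocated 16 MB at %p\n", (void *) big);
  return 0;
}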
Clearly none of this will happen in time for 3.1.
As it stands, I'm hesitant to turn off the warnings by default, though I can
see arguments either way. If the warnings occur repeatedly, they are
indicative of a potential memory leak. If someone wants to turn it off by
default, and instead provide an environment variable to turn it back on, I
could probably be talked into that, too.
Hans
> -----Original Message-----
> From: Andrew Haley [mailto:aph@cambridge.redhat.com]
> Sent: Friday, April 05, 2002 1:08 AM
> To: Boehm, Hans
> Cc: 'Mark Wielaard'; java@gcc.gnu.org
> Subject: RE: Analysis of Mauve failures - The final chapter
>
>
> Boehm, Hans writes:
> > > From: Mark Wielaard [mailto:mark@klomp.org]
> > > > !java.lang.reflect.Array.newInstance
> > > Ugh, not fun. Running by hand also hangs, but turning on
> the -debug or
> > > -verbose flag makes it run... When not giving any flags
> it only prints
> > > Needed to allocate blacklisted block at 0x824b000
> > > The test actually tries to force an OutOfMemoryError
> exception which
> > > might explain this. But the Object.clone() test also
> seems to do this
> > > and that one just works.
>
> > All tests that allocate large objects should ideally be
> run with the
> > environment variable GC_NO_BLACKLIST_WARNING defined.
> That will get rid of
> > the message. The occurrence of the warning is often less than 100%
> > deterministic, and that's expected. Was there an issue
> here beyond the
> > warning?
>
> Hans,
>
> I don't understand. If the gc isn't buggy, why does it produce this
> warning at all in a production quality system?
>
> Andrew.
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: Analysis of Mauve failures - The final chapter
2002-04-05 9:27 Boehm, Hans
@ 2002-04-05 11:35 ` Andrew Haley
2002-04-05 11:41 ` Per Bothner
0 siblings, 1 reply; 20+ messages in thread
From: Andrew Haley @ 2002-04-05 11:35 UTC (permalink / raw)
To: Boehm, Hans; +Cc: 'Mark Wielaard', java
Thank you for the reminder.
Boehm, Hans writes:
> As it stands, I'm hesitant to turn off the warnings by default, though
> I can see arguments either way. If the warnings occur repeatedly,
> they are indicative of a potential memory leak. If someone wants to
> turn it off by default, and instead provide an environment variable to
> turn it back on, I could probably be talked into that, too.
It seems to me that we have to make up our minds.
IMO: If we are shipping a production-quality system then we shouldn't
output warnings about which we'll say "ah, don't worry about that
message, we already know about that." It doesn't look good, and it
will suggest to people that we don't have a serious offering. This is
especially true if the warning message uses obscure and frightening
terminology. This warning message looks like something major has
failed.
Andrew.
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: Analysis of Mauve failures - The final chapter
2002-04-05 11:35 ` Andrew Haley
@ 2002-04-05 11:41 ` Per Bothner
0 siblings, 0 replies; 20+ messages in thread
From: Per Bothner @ 2002-04-05 11:41 UTC (permalink / raw)
To: Boehm, Hans; +Cc: java
Andrew Haley wrote:
> It seems to me that we have to make up our minds.
>
> IMO: If we are shipping a production-quality system then we shouldn't
> output warnings about which we'll say "ah, don't worry about that
> message, we already know about that." It doesn't look good, and it
> will suggest to people that we don't have a serious offering. This is
> especially true if the warning message uses obscure and frightening
> terminology. This warning message looks like something major has
> failed.
I agree. Furthermore, it makes running the regression tests more
difficult, since some of them *may* cause the warning to be emitted.
I don't really think this situation is acceptable - the default
should be that the warning is *not* emitted.
--
--Per Bothner
per@bothner.com http://www.bothner.com/per/
^ permalink raw reply [flat|nested] 20+ messages in thread
* Analysis of Mauve failures - The final chapter
@ 2002-04-04 7:27 Mark Wielaard
0 siblings, 0 replies; 20+ messages in thread
From: Mark Wielaard @ 2002-04-04 7:27 UTC (permalink / raw)
To: java
Hi,
Here is the last overview of gcj Mauve issues.
I promise that this will be the last big list and that I will do
some real work (tm) to solve some of the issues myself, and not
just hope that others will take the lists and solve them for me.
Although I did trick some people into submitting fixes, so part
of the plan actually worked :)
Today we will look at the mauve-libgcj file, which contains tests
that we don't even try to run, and the libjava.mauve/xfails file,
which contains tests we do run but expect to fail.
mauve-libgcj currently contains the following:
> # These 2 are tests that fail with JDBC2.0 but the tags don't seem to
> # have the right effect.
> !java.sql.Connection.TestJdbc10
> !java.sql.DatabaseMetaData.TestJdbc10
We should investigate why the Mauve configury does not work properly, but
these failures are harmless: they occur because we implement JDBC 2.0
additions (extra methods in some abstract classes) that these tests do not
expect.
> # Cannot be compiled
> !java.text.ACIAttribute
We don't implement the inner class
java.text.AttributedCharacterIterator.Attribute.
> # The following tests seem to (sometimes) hang or crash the testsuite
> !java.io.ObjectInputOutput
The gnu.testlet.java.io.ObjectInputOutput.OutputTest testlet crashes on
the Test$HairyGraph case. This seems to be an infinite recursion, since the
backtrace in gdb gives hundreds of frames like:
#199 0x402823e1 in java.io.ObjectOutputStream.writeObject(java.lang.Object) (
this=0x81b9f00, obj=0x81c2b58)
at ../../../gcc/libjava/java/io/ObjectOutputStream.java:352
#200 0x40283e7c in java.io.ObjectOutputStream.writeFields(java.lang.Object,
java.io.ObjectStreamField[], boolean) (this=0x81b9f00, obj=0x81c2b70,
fields=0x81a1fa0)
at ../../../gcc/libjava/java/io/ObjectOutputStream.java:1173
> !java.lang.reflect.Array.newInstance
Ugh, not fun. Running by hand also hangs, but turning on the -debug or
-verbose flag makes it run... When not giving any flags it only prints
Needed to allocate blacklisted block at 0x824b000
The test actually tries to force an OutOfMemoryError exception which
might explain this. But the Object.clone() test also seems to do this
and that one just works.
> !java.util.ResourceBundle.getBundle
Don't know why that was in there. It seems to run fine, although it
gives some failures which should be investigated:
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: with locale of
Canada (number 4)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: with locale of
Canada (number 5)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: with locale of
France (number 4)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: book sample
(number 2)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: book sample
(number 5)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: book sample
(number 6)
FAIL: gnu.testlet.java.util.ResourceBundle.getBundle: book sample
(number 7)
7 of 23 tests failed
> !java.util.zip.GZIPInputStream.basic
Also seems to just work. Both tests seem to succeed.
Oh, duh. Enabling one or both of the last two tests seems to crash or hang
the testsuite again when run in full. Curious. Keep them disabled for now.
> !java.net.DatagramSocket.DatagramSocketTest2
Our DatagramSocket.receive() blocks until a packet is received, even when
the buffer of the DatagramPacket has zero length, which seems correct given
the spec. Wrong test?
The libjava.mauve/xfails file currently contains the following:
> FAIL: gnu.testlet.java.lang.Double.DoubleTest: Error: test_shortbyteValue failed - 5 (number 1)
> FAIL: gnu.testlet.java.lang.Float.FloatTest: Error: test_shortbyteValue failed - 5 (number 1)
Yeah! These are now XPASS.
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Four Byte Range Error (0) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Four Byte Range Error (1) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Five Bytes (0) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Five Bytes (1) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Six Bytes (0) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Six Bytes (1) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Orphan Continuation (1) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Orphan Continuation (2) (number 1)
> FAIL: gnu.testlet.java.io.Utf8Encoding.mojo: Four Byte Range Error (2) (number 1)
The test says: "Note that JDK 1.1 and JDK 1.2 don't currently pass these
tests; there are known problems in their UTF-8 encoding support at this
time." This is probably also true for libgcj.
> FAIL: gnu.testlet.java.io.ObjectStreamClass.Test: getSerialVersionUID (number 7)
Now passes!
> FAIL: gnu.testlet.java.text.DateFormatSymbols.Test: patterns (number 2)
> FAIL: gnu.testlet.java.text.SimpleDateFormat.Test: equals() (number 1)
> FAIL: gnu.testlet.java.text.SimpleDateFormat.Test: parse() strict (number 1)
> FAIL: gnu.testlet.java.text.SimpleDateFormat.getAndSet2DigitYearStart: get2DigitYearStart() initial (number 1)
These seem to be known issues with our java.text support.
> FAIL: gnu.testlet.java.net.URLConnection.URLConnectionTest: Error in test_Basics - 2 should not have raised Throwable here (number 1)
> FAIL: gnu.testlet.java.net.URL.URLTest: openStream (number 1)
> FAIL: gnu.testlet.java.net.URL.URLTest: sameFile (number 2)
> FAIL: gnu.testlet.java.net.URL.URLTest: Error in test_toString - 5 exception should not be thrown here (number 1)
> FAIL: gnu.testlet.java.net.URL.URLTest: new URL(protocol, host, file) (number 26)
> FAIL: gnu.testlet.java.net.URL.URLTest: new URL(protocol, host, file) (number 54)
We need to merge URL and URLConnection with Classpath and check again.
> FAIL: gnu.testlet.java.net.ServerSocket.ServerSocketTest: Error : test_params failed - 5getInetAddress did not return proper values (number 1)
> FAIL: gnu.testlet.java.net.Socket.SocketTest: Error : test_BasicServer failed - 11 exception was thrown :Illegal seek (number 1)
> FAIL: gnu.testlet.java.net.MulticastSocket.MulticastSocketTest: joinGroup() twice. (number 1)
No time to investigate. Keep them in for now.
Actions:
Character.unicode should probably also be added to the mauve-libgcj ignore list
since it generates a lot of spurious failures.
We seem to agree that some tests in Mauve are bogus or wrong. I will make
patches for those tests and submit them to the Mauve mailing list.
I will try to add to the xfails list all tests that FAIL now and whose
issues we know we will not solve for 3.1 (and submit bug reports).
Most issues can probably not be solved for 3.1 since that release is in about
10 days, but we can try. The goal is to have zero FAILs (but a couple of XFAILs)
for 3.1.
Cheers,
Mark
^ permalink raw reply [flat|nested] 20+ messages in thread
end of thread, other threads:[~2002-04-20 1:20 UTC | newest]
Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-04-04 16:45 Analysis of Mauve failures - The final chapter Boehm, Hans
2002-04-04 17:14 ` Mark Wielaard
2002-04-05 3:47 ` Mark Wielaard
2002-04-05 4:07 ` Andrew Haley
2002-04-05 4:22 ` Mark Wielaard
2002-04-08 13:40 ` Tom Tromey
2002-04-08 14:56 ` Mark Wielaard
2002-04-08 17:31 ` Mark Wielaard
2002-04-09 0:49 ` Bryce McKinlay
2002-04-19 19:59 ` Tom Tromey
2002-04-05 1:15 ` Andrew Haley
-- strict thread matches above, loose matches on Subject: below --
2002-04-08 15:20 Boehm, Hans
2002-04-08 13:33 Boehm, Hans
2002-04-05 11:37 Boehm, Hans
2002-04-05 12:16 ` Per Bothner
2002-04-08 13:32 ` Tom Tromey
2002-04-05 9:27 Boehm, Hans
2002-04-05 11:35 ` Andrew Haley
2002-04-05 11:41 ` Per Bothner
2002-04-04 7:27 Mark Wielaard