public inbox for gdb@sourceware.org
* Re: More SSE infrastructure
       [not found] ` <Pine.LNX.3.96.1000703211132.12211A-100000@masala.cygnus.co.uk>
@ 2000-07-03 14:02   ` Richard Henderson
  2000-07-03 15:31     ` Bernd Schmidt
  2000-07-03 17:20     ` Mark Kettenis
  0 siblings, 2 replies; 7+ messages in thread
From: Richard Henderson @ 2000-07-03 14:02 UTC (permalink / raw)
  To: Bernd Schmidt; +Cc: gcc-patches, gdb

[ For the GDB list, we're discussing what needs to be emitted for
  debug information for 128-bit integers used with SSE.

  Note that this is not the same as when a user has declared a proper
  128-bit vector type, which is given in the debugging information as
  a struct, but rather the __m128 type defined by the Intel API, which
  does not define the shape of the vector (float[4], int[4], short[8],
  ...) and so is represented as a plain int.  ]

On Mon, Jul 03, 2000 at 09:14:52PM +0100, Bernd Schmidt wrote:
> On Mon, 3 Jul 2000, Richard Henderson wrote:
> > On Mon, Jul 03, 2000 at 07:08:34PM +0100, Bernd Schmidt wrote:
> > > ... the only place in the compiler I've found so far that relies
> > > on it is debugging output (where TImode constants are used for
> > > TYPE_{MIN,MAX}_VALUE of 128-bit integers).
> > 
> > I wonder if we can just bail on that?
> 
> Possibly.  I don't know what the debugger could use this information for.

Neither do I.  It seems relatively certain that the debugger isn't going
to let us evaluate 128-bit int expressions; what other use it has for
the bounds of the type, I don't know.  Why it would even need to be told
the bounds at all, given the size of the type and its signedness, I
don't know.  (Possibly to represent Pascal-like integer subranges?)

> If you think that's OK, we could leave out that part for now.

I don't think we can simply do nothing right now.  We need to come to an
agreement with the gdb folks on what would be acceptable.

If we need to emit *something*, it would be possible for us to put
code in at this point to recognize that TYPE_{SIZE,PRECISION} is out
of range for what an INTEGER_CST could represent, and emit the max
bounds by hand, in octal, based on the known size of the type.
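
Something along these lines would do (a hypothetical helper, not actual
GCC code, just a sketch of "by hand, in octal"):

#include <stdio.h>

/* Hypothetical sketch: emit the all-ones (unsigned maximum) value of a
   PRECISION-bit type in octal, digit by digit, so we never need an
   INTEGER_CST wide enough to hold it.  */
static void
print_octal_all_ones (FILE *f, unsigned int precision)
{
  unsigned int lead = precision % 3;	/* bits in the leading octal digit */
  unsigned int sevens = precision / 3;	/* remaining full octal digits */

  fputc ('0', f);			/* leading 0 marks the number as octal */
  if (lead)
    fputc ('0' + ((1u << lead) - 1), f);
  while (sevens-- > 0)
    fputc ('7', f);
}

/* For a 128-bit type this prints "03" followed by 42 sevens.  */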


r~

PS: Oh, you'll have to watch type_for_mode, which does 

	#if HOST_BITS_PER_WIDE_INT >= 64
	  if (mode == TYPE_MODE (intTI_type_node))

and possibly a few other places in the compiler.  What I'd like you
to do while we're sorting out the debugging thing is to hack the debug
code to not crash (or just use -g0) and see where else you run into
problems compiling SSE code with HOST_BITS_PER_WIDE_INT=32.  Because
you'll probably need to have TYPE_{MIN,MAX}_VALUE=NULL, which could
well give fold-const (among other places) indigestion.


* Re: More SSE infrastructure
  2000-07-03 14:02   ` More SSE infrastructure Richard Henderson
@ 2000-07-03 15:31     ` Bernd Schmidt
  2000-07-03 17:20     ` Mark Kettenis
  1 sibling, 0 replies; 7+ messages in thread
From: Bernd Schmidt @ 2000-07-03 15:31 UTC (permalink / raw)
  To: Richard Henderson; +Cc: gcc-patches, gdb

On Mon, 3 Jul 2000, Richard Henderson wrote:

> What I'd like you
> to do while we're sorting out the debugging thing is to hack the debug
> code to not crash (or just use -g0) and see where else you run into
> problems compiling SSE code with HOST_BITS_PER_WIDE_INT=32.  Because
> you'll probably need to have TYPE_{MIN,MAX}_VALUE=NULL, which could
> well give fold-const (among other places) indigestion.

I did something like this last year.  The only difference in behaviour
I noticed was slightly different debugging output (TYPE_{MIN,MAX}_VALUE
contained bogus constant values rather than NULL).
I did not look at the whole compiler, but I believe things like
fold-const are relatively safe, as we aren't really building any "real"
expressions with 128-bit types.  These types only show up as function
call arguments, return values, and variable declarations.  This is
not something I can imagine fold-const ever wanting to touch.

What I did was by no means an exhaustive test, though.  I don't really
have a large chunk of SSE code to test with.

Bernd


* Re: More SSE infrastructure
  2000-07-03 14:02   ` More SSE infrastructure Richard Henderson
  2000-07-03 15:31     ` Bernd Schmidt
@ 2000-07-03 17:20     ` Mark Kettenis
  2000-07-05  0:16       ` Richard Henderson
  1 sibling, 1 reply; 7+ messages in thread
From: Mark Kettenis @ 2000-07-03 17:20 UTC (permalink / raw)
  To: rth; +Cc: bernds, gcc-patches, gdb

   Date: Mon, 3 Jul 2000 14:02:20 -0700
   From: Richard Henderson <rth@cygnus.com>

   [ For the GDB list, we're discussing what needs to be emitted for
     debug information for 128-bit integers used with SSE.

     Note that this is not the same as when a user has declared a proper
     128-bit vector type, which is given in the debugging information as
     a struct, but rather the __m128 type defined by the Intel API, which
     does not define the shape of the vector (float[4], int[4], short[8],
     ...) and so is represented as a plain int.  ]

Some time ago, Jim Blandy added support for SSE/SIMD to GDB[1].  We have
the following definitions in gdbtypes.h:

/* SIMD types.  We inherit these names from GCC.  */
extern struct type *builtin_type_v4sf;
extern struct type *builtin_type_v4si;
extern struct type *builtin_type_v8qi;
extern struct type *builtin_type_v4hi;
extern struct type *builtin_type_v2si;

And appropriate initializations in gdbtypes.c.  The "default" type for
the SSE registers is builtin_type_v4sf (see
config/i386/tm-i386.h:REGISTER_VIRTUAL_TYPE(N)).  I don't know why
(you'd have to ask Jim, but I believe he's on vacation until July 10).
It would make some sense though to make the __m128 type similar to the
SSE registers.  If builtin_type_v4sf is indeed the most suitable
return type, that would probably mean emitting the debug information
as a struct.

Mark

[1] This work was supposed to work together with a Linux kernel patch
    developed by Cygnus.  The stuff recently added to Linux 2.4.0test2
    is a bit different, and I'm in the process of changing GDB
    accordingly.  If people are interested I can post a (preliminary)
    patch.


* Re: More SSE infrastructure
  2000-07-03 17:20     ` Mark Kettenis
@ 2000-07-05  0:16       ` Richard Henderson
  2000-07-05  6:45         ` Mark Kettenis
  0 siblings, 1 reply; 7+ messages in thread
From: Richard Henderson @ 2000-07-05  0:16 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: bernds, gcc-patches, gdb

On Tue, Jul 04, 2000 at 02:19:58AM +0200, Mark Kettenis wrote:
> It would make some sense though to make the __m128 type similar to the
> SSE registers.

I don't agree.  If you want v4sf, you can use v4sf in the source.
Though not, admittedly, with Intel's interface.  But you can get it
from GCC easily enough.

In any case, the question at hand has absolutely nothing to do with
SSE specifically, but rather what we should give GDB for describing
a 128-bit integer.

The GCC infrastructure currently prohibits representing full 
arithmetic on items larger than twice the host word size.  If you
take away the actual arithmetic, and just manipulate the things as
data (movement and such), then the only thing left in GCC that 
appears to have problems is debugging.

The current solution in the Cygnus tree is to force the use of
long long as the `host' word size, which slows down the compiler
significantly.  We're trying to figure a way out of that.

So, the object being, fix the debugging code to cope with integer
objects larger than twice the host word size.  And in order to do
that, we need to know what GDB needs to do its job.

In the original message I sent, perhaps only to the GCC list (oops),
I quoted a fragment from the stabs emitter, wherein we look to see
if it's easy to print the upper bound for the type.  If it isn't
easy, we just print -1.
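
In spirit, the check amounts to something like this standalone sketch
(hypothetical code, not the actual emitter fragment):

#include <limits.h>
#include <stdio.h>

/* Hypothetical sketch of the fallback described above: print the upper
   bound of an unsigned PRECISION-bit type if it fits in a host long,
   otherwise punt and print -1.  */
static void
print_upper_bound (FILE *f, unsigned int precision)
{
  if (precision < sizeof (unsigned long) * CHAR_BIT)
    fprintf (f, "%lu", ~0UL >> (sizeof (unsigned long) * CHAR_BIT - precision));
  else
    fprintf (f, "-1");	/* the "not easy" case, e.g. 128-bit types */
}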

So the question is, do we really have to print the correct upper
and lower bounds for the 128-bit type, or can we just use 0 and -1,
as the existing code would suggest.


r~


* Re: More SSE infrastructure
  2000-07-05  0:16       ` Richard Henderson
@ 2000-07-05  6:45         ` Mark Kettenis
  2000-07-05 10:56           ` Richard Henderson
  0 siblings, 1 reply; 7+ messages in thread
From: Mark Kettenis @ 2000-07-05  6:45 UTC (permalink / raw)
  To: rth; +Cc: bernds, gcc-patches, gdb

   Date: Wed, 5 Jul 2000 00:16:46 -0700
   From: Richard Henderson <rth@cygnus.com>

   On Tue, Jul 04, 2000 at 02:19:58AM +0200, Mark Kettenis wrote:
   > It would make some sense though to make the __m128 type similar to the
   > SSE registers.

   I don't agree.  If you want v4sf, you can use v4sf in the source.
   Though not, admittedly, with Intel's interface.  But you can get it
   from GCC easily enough.

Whatever, I don't really care.

   In any case, the question at hand has absolutely nothing to do with
   SSE specifically, but rather what we should give GDB for describing
   a 128-bit integer.

[snip]

   So, the object being, fix the debugging code to cope with integer
   objects larger than twice the host word size.  And in order to do
   that, we need to know what GDB needs to do its job.

I'm assuming you're talking about stabs here.

   In the original message I sent, perhaps only to the GCC list (oops),
   I quoted a fragment from the stabs emitter, wherein we look to see
   if it's easy to print the upper bound for the type.  If it isn't
   easy, we just print -1.

If the lower bound is 0 and the upper bound is -1, GDB interprets the
type as an unsigned integer with the size of the host's natural integer
type (i.e. 4 bytes on a 32-bit machine).

   So the question is, do we really have to print the correct upper
   and lower bounds for the 128-bit type, or can we just use 0 and -1,
   as the existing code would suggest.

GDB uses the range info to determine the size of an object, even for
global symbols where it might be able to get the size from the symbol
table.  So using 0 and -1 wouldn't produce anything useful.

Giving the correct upper and lower bounds does work (tested on Solaris
2.6, with a recent GDB snapshot and egcs-2.91.66, where I added the
stab for a 128-bit type by hand).  GDB happily prints a 128-bit
hexadecimal constant when I ask it to print the value of a 128-bit
variable, but refuses to evaluate any complex expressions using this
variable, telling me that it cannot do such things on integer
variables larger than 8 bytes.

If printing correct lower and upper bounds is too hard for GCC, there
are alternatives.  If the lower bound is 0 and the upper bound is a
negative number, GDB assumes the size of the type (in bytes) is the
absolute value of the upper bound.  I've verified that emitting:

.stabs "__m128:t(0,20)=r(0,20);0;-16;",128,0,0,0

does indeed work.  The GNU stabs info file suggests that this is a
Convex convention.

It seems that GDB also supports type attributes (an AIX extension, see
the GNU stabs info file).  So

.stabs "__m128:t(0,20)=@s128;r(0,20);0;-1;",128,0,0,0

also works.

The relevant bits of code in GDB are stabsread.c:read_range_type()
(which interprets the lower and upper bound) and
stabsread.c:read_type() (which is supposed to interpret the entire
stabs type string, and interprets the `s' type attribute).
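
For reference, the decision logic boils down to something like the
following rough standalone sketch (not the actual GDB code):

#include <stdio.h>

/* Hypothetical sketch of the bound-interpretation rules described
   above, not the actual stabsread.c:read_range_type() code.  N2 is the
   lower bound from the stab, N3 the upper bound.  */
static void
classify_range (long n2, long n3)
{
  if (n2 == 0 && n3 == -1)
    /* Unsigned int of the host's natural size (e.g. 4 bytes).  */
    printf ("unsigned int, natural size\n");
  else if (n2 == 0 && n3 < 0)
    /* Convex convention: size in bytes is the absolute value of N3.  */
    printf ("unsigned integer, %ld bytes\n", -n3);
  else
    /* Otherwise the bounds are taken literally.  */
    printf ("integer bounded by [%ld, %ld]\n", n2, n3);
}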

Mark


* Re: More SSE infrastructure
  2000-07-05  6:45         ` Mark Kettenis
@ 2000-07-05 10:56           ` Richard Henderson
  2000-07-05 12:08             ` Mark Kettenis
  0 siblings, 1 reply; 7+ messages in thread
From: Richard Henderson @ 2000-07-05 10:56 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: bernds, gcc-patches, gdb

On Wed, Jul 05, 2000 at 03:44:42PM +0200, Mark Kettenis wrote:
> GDB uses the range info to determine the size of an object, even for
> global symbols where it might be able to get the size from the symbol
> table.  So using 0 and -1 wouldn't produce anything useful.

Ok.  I figured it couldn't be that easy.

> Giving the correct upper and lower bounds does work (tested on Solaris
> 2.6, with a recent GDB snapshot and egcs-2.91.66, where I added the
> stab for a 128-bit type by hand).

Next question: will GDB accept negative octal constants for
signed 128-bit types?  E.g. -017777.  I surely don't want to
do decimal output without libgmp on my side, which we don't
want to assume.

It wouldn't be the end of the world if we wound up considering
all such types unsigned in the debugger, but if it's possible...

> If printing correct lower and upper bounds is too hard for GCC, there
> are alternatives.  If the lower bound is 0 and the upper bound is a
> negative number, GDB assumes the size of the type (in bytes) is the
> absolute value of the upper bound.  I've verified that emitting:
> 
> .stabs "__m128:t(0,20)=r(0,20);0;-16;",128,0,0,0
> 
> does indeed work.  The GNU stabs info file suggests that this is a
> Convex convention.

Interesting to know.  However, I would imagine that not all
stabs system debuggers allow such a thing, so it'd be better
to go with printing proper bounds if possible.


r~


* Re: More SSE infrastructure
  2000-07-05 10:56           ` Richard Henderson
@ 2000-07-05 12:08             ` Mark Kettenis
  0 siblings, 0 replies; 7+ messages in thread
From: Mark Kettenis @ 2000-07-05 12:08 UTC (permalink / raw)
  To: rth; +Cc: bernds, gcc-patches, gdb

   Date: Wed, 5 Jul 2000 10:56:00 -0700
   From: Richard Henderson <rth@cygnus.com>

   Next question: will GDB accept negative octal constants for
   signed 128-bit types?  E.g. -017777.  I surely don't want to
   do decimal output without libgmp on my side, which we don't
   want to assume.

Not with the explicit minus sign.  You specify the sign bit as part of
the octal number, e.g. 020000 for your example (the range would be
specified as 020000;017777; in this case).  This is exactly what is
already done for `long' and `long long'.
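
For a signed 128-bit type the same scheme would presumably give 02
followed by 42 zeros as the lower bound (the bit pattern of -2^127) and
01 followed by 42 sevens as the upper bound (2^127 - 1), i.e. the range
would be specified as:

02000000000000000000000000000000000000000000;01777777777777777777777777777777777777777777;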

   It wouldn't be the end of the world if we wound up considering
   all such types unsigned in the debugger, but if it's possible...

Looks like GDB treats those extremely large integers as unsigned
anyway :-(.  I might take a look on a rainy day or so ...

   > If printing correct lower and upper bounds is too hard for GCC, there
   > are alternatives.  If the lower bound is 0 and the upper bound is a
   > negative number, GDB assumes the size of the type (in bytes) is the
   > absolute value of the upper bound.  I've verified that emitting:
   > 
   > .stabs "__m128:t(0,20)=r(0,20);0;-16;",128,0,0,0
   > 
   > does indeed work.  The GNU stabs info file suggests that this is a
   > Convex convention.

   Interesting to know.  However, I would imagine that not all
   stabs system debuggers allow such a thing, so it'd be better
   to go with printing proper bounds if possible.

I'd be really surprised if those stabs system debuggers would allow
large integers at all.

Mark


end of thread

Thread overview: 7+ messages
     [not found] <20000703122133.F25642@cygnus.com>
     [not found] ` <Pine.LNX.3.96.1000703211132.12211A-100000@masala.cygnus.co.uk>
2000-07-03 14:02   ` More SSE infrastructure Richard Henderson
2000-07-03 15:31     ` Bernd Schmidt
2000-07-03 17:20     ` Mark Kettenis
2000-07-05  0:16       ` Richard Henderson
2000-07-05  6:45         ` Mark Kettenis
2000-07-05 10:56           ` Richard Henderson
2000-07-05 12:08             ` Mark Kettenis
