public inbox for gdb@sourceware.org
From: Andrew Cagney <ac131313@redhat.com>
To: Robert Vazan <robertvazan@privateweb.sk>
Cc: gdb@sources.redhat.com
Subject: Re: Terminally slow (75 seconds) on some steps
Date: Fri, 15 Aug 2003 22:59:00 -0000
Message-ID: <3F3D65C5.2070005@redhat.com>
In-Reply-To: <Mahogany-0.65.0-1468-20030814-140545.00@pcw2k69.energoinfo.sk>

> On Tue, 05 Aug 2003 01:20:26 -0400 Andrew Cagney <ac131313@redhat.com>
> wrote:
>
>> Ok, during library load, GDB has to constantly start/stop the target
>> and that slows things down.  Would you be able to oprofile your entire
>> system so it's possible to see exactly where that 11 seconds goes?
> 
> OK, I have oprofile running here.  I am not sure what sort of report
> you need.  I can copy-and-paste reports to the list or publish larger
> files on my website.  Here are some simple statistics: a run without
> gdb [1], a run inside gdb [2], and symbol details from the gdb run [3].
> Oprofile cannot do call graphs; I hope you get something out of these
> numbers.

(I think they are working on the call graph problem :-)

I'm guessing that the first column is sample counts and the second the
percentage of total samples.
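
For reference, summaries like [1] and [2] and the per-symbol breakdown
in [3] are what oprofile's reporting tool prints.  A sketch of the kind
of session that produces them, assuming the 0.x-era opcontrol/opreport
interface:

  opcontrol --vmlinux=/boot/vmlinux --start
  # ... run the workload under test ...
  opcontrol --stop
  opreport                 # per-image summary, as in [1] and [2]
  opreport --symbols       # per-symbol details, as in [3]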

> [1]
> 
>      2371 30.3352 vmlinux
>       736  9.4166 XFree86
>       729  9.3270 ld-2.3.1.so
>       595  7.6126 libwx_based-2.5.so.0.0.0
>       487  6.2308 libwx_gtkd_core-2.5.so.0.0.0
>       471  6.0261 libc-2.3.1.so
>       454  5.8086 M
> ...
> 
> [2]
> 
>     10567 37.2826 vmlinux
>      8537 30.1203 gdb
>      4402 15.5312 libc-2.3.1.so
>       786  2.7732 ld-2.3.1.so
>       622  2.1945 XFree86
>       511  1.8029 libwx_based-2.5.so.0.0.0
>       455  1.6053 M
> ...

The first surprise is how much time, even without GDB, is spent in the
kernel and the link-loader (ld.so).  GDB adds about 7 percentage points
of kernel time (and, given the much larger sample total, far more in
absolute terms), which isn't too bad, but it would be useful to know
where it goes.

Second: yes, roughly 30 points for GDB itself and an extra 9 for glibc.

> [3]
> 
> 00000000 4402     15.5317     libc-2.3.1.so            (no symbols)
> 080e40e0 1134      4.0011     gdb                      bcache

That's striking.  About 4 of GDB's 30 points go to bcache, the code
that avoids creating duplicate copies of strings and other symbol
data.  The lack of a call graph is a shame; it would be useful to know
from where this was called.  Hmm, assuming no macro debug info, there
are only three callers, all in symtab.c:

- recording the symbol name (linkage name?)
- recording the demangled name
- recording a partial symbol

This is done to save a significant amount of memory: often two files
will contain the same partial symbol information (due to shared
headers et al.).
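
For readers unfamiliar with it: bcache is GDB's byte-cache, a
hash-consing table that stores each distinct byte string once and
hands back a canonical pointer.  A minimal sketch of the idea in C
(illustrative only, not GDB's actual bcache.c; the hash function and
bucket count are arbitrary choices):

#include <stdlib.h>
#include <string.h>

#define NBUCKETS 4096

struct entry
{
  struct entry *next;
  size_t len;
  unsigned char data[];		/* the cached bytes follow the header */
};

struct bcache
{
  struct entry *bucket[NBUCKETS];	/* zero-initialize before use */
};

/* Simple djb2-style hash over LEN bytes at P.  */
static unsigned long
hash_bytes (const void *p, size_t len)
{
  const unsigned char *s = p;
  unsigned long h = 5381;

  while (len-- > 0)
    h = h * 33 + *s++;
  return h;
}

/* Return a canonical copy of the LEN bytes at DATA, reusing an
   existing copy if the same bytes were cached before.  */
static const void *
bcache_insert (struct bcache *c, const void *data, size_t len)
{
  unsigned long slot = hash_bytes (data, len) % NBUCKETS;
  struct entry *e;

  for (e = c->bucket[slot]; e != NULL; e = e->next)
    if (e->len == len && memcmp (e->data, data, len) == 0)
      return e->data;		/* hit: no new allocation */

  e = malloc (sizeof *e + len);
  e->len = len;
  memcpy (e->data, data, len);
  e->next = c->bucket[slot];
  c->bucket[slot] = e;
  return e->data;
}

With a structure like this, linkage names, demangled names, and partial
symbols could each be given their own cache simply by instantiating
separate tables, which bears on the hit-rate question below.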

It's interesting how linkage and demangled names (strings) share the
same bcache as partial symbols (structs).  I wonder if separate caches
would lead to a better hit rate?

I also wonder if the full symbols are making use of the partial symbol 
name bcache.


> c01221cc 1011      3.5671     vmlinux                  file_read_actor
> c01dc330 955       3.3696     vmlinux                  __generic_copy_to_user

Does this copy-to-user appear when GDB isn't present?  Having GDB do
page-sized transfers (which a kernel should handle efficiently) is on
the todo list.
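
Those kernel symbols (access_process_vm, __generic_copy_to_user) are
the kernel copying inferior memory out to the debugger one request at
a time.  A sketch of what page-sized transfers could look like on
Linux, reading through /proc/<pid>/mem with one pread per page instead
of one word per PTRACE_PEEKDATA (illustrative only, not GDB's
target_xfer_memory; the target must already be ptrace-attached and
stopped for the kernel to permit the read):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

#define PAGE 4096

/* Read LEN bytes at ADDR in process PID's address space into BUF,
   one page per system call.  Returns the number of bytes read.  */
static ssize_t
read_inferior_mem (pid_t pid, unsigned long addr, void *buf, size_t len)
{
  char path[64];
  size_t done = 0;
  int fd;

  snprintf (path, sizeof path, "/proc/%d/mem", (int) pid);
  fd = open (path, O_RDONLY);
  if (fd < 0)
    return -1;

  while (done < len)
    {
      size_t chunk = len - done < PAGE ? len - done : PAGE;
      ssize_t n = pread (fd, (char *) buf + done, chunk, addr + done);
      if (n <= 0)
	break;
      done += n;
    }

  close (fd);
  return done;
}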

Have you tried `set trust-read-only-sections on'?
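
That setting tells GDB it may satisfy reads of read-only sections
(such as .text) from the executable file rather than from the
inferior, which avoids a lot of this ptrace traffic:

  (gdb) set trust-read-only-sections on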

> c011f6b8 663       2.3393     vmlinux                  do_anonymous_page
> 00000000 622       2.1946     XFree86                  (no symbols)
> 08134b74 574       2.0253     gdb                      read_partial_die

Ok, so it's spending a lot of time reading partial symbols.
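
Much of that work is decoding DWARF attribute forms;
read_unsigned_leb128 shows up further down this profile.  For context,
here is a minimal ULEB128 decoder as the DWARF spec defines the
encoding (a sketch, not GDB's exact code): each byte carries 7 payload
bits, and the high bit marks continuation.

static unsigned long
read_uleb128 (const unsigned char *p, unsigned int *bytes_read)
{
  unsigned long result = 0;
  unsigned int shift = 0, n = 0;
  unsigned char byte;

  do
    {
      byte = p[n++];
      result |= ((unsigned long) (byte & 0x7f)) << shift;
      shift += 7;
    }
  while ((byte & 0x80) != 0);	/* high bit set: more bytes follow */

  *bytes_read = n;
  return result;
}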

> c0118f2c 516       1.8206     vmlinux                  access_process_vm
> 080e3fc0 459       1.6195     gdb                      hash
> 00000000 384       1.3549     libglib-1.2.so.0.0.10    (no symbols)
> c0105370 369       1.3020     vmlinux                  default_idle
> 081b7f7c 360       1.2702     gdb                      htab_find_slot_with_hash
> 08134fe4 307       1.0832     gdb                      read_attribute_value
> 081b8218 292       1.0303     gdb                      htab_hash_string
> c0135408 292       1.0303     vmlinux                  link_path_walk
> c0106d54 283       0.9985     vmlinux                  system_call
> c0147650 273       0.9632     vmlinux                  proc_lookup
> 081351bc 246       0.8680     gdb                      read_attribute
> 00006e48 244       0.8609     ld-2.3.1.so              do_lookup
> 0813195c 240       0.8468     gdb                      add_partial_symbol
> 08135560 237       0.8362     gdb                      read_unsigned_leb128
> c0126294 231       0.8150     vmlinux                  kmem_cache_free
> 00000000 226       0.7974     libX11.so.6.2            (no symbols)
> 00007042 215       0.7586     ld-2.3.1.so              do_lookup_versioned
> c012da38 209       0.7374     vmlinux                  fput
> c0126100 208       0.7339     vmlinux                  kmem_cache_alloc
> 081354e4 204       0.7198     gdb                      read_indirect_string
> c0127834 200       0.7057     vmlinux                  rmqueue
> 0000dc18 188       0.6633     ld-2.3.1.so              strcmp
> c01397f4 185       0.6527     vmlinux                  do_select
> 00002210 168       0.5928     unix.o                   unix_poll
> c013e84c 164       0.5786     vmlinux                  clean_inode
> 00000000 160       0.5645     libgdk-1.2.so.0.9.1      (no symbols)
> 080a43f0 159       0.5610     gdb                      symbol_set_names
> c0127640 155       0.5469     vmlinux                  __free_pages_ok
> 081568b4 147       0.5187     gdb                      bfd_getl32
> 080aa8b0 139       0.4904     gdb                      add_psymbol_to_list
> c0135f24 138       0.4869     vmlinux                  open_namei
> c010fbe4 134       0.4728     vmlinux                  schedule
> 00000000 131       0.4622     libgtk-1.2.so.0.9.1      (no symbols)
> c013cc00 120       0.4234     vmlinux                  dput
> c013d204 117       0.4128     vmlinux                  d_alloc
> c013d3cc 110       0.3881     vmlinux                  d_lookup
> 08134b30 102       0.3599     gdb                      dwarf2_lookup_abbrev
> 081b08e8 101       0.3564     gdb                      demangle_prefix
> 08131800 97        0.3422     gdb                      scan_partial_symbols
> c012db08 97        0.3422     vmlinux                  fget
> 080fbba0 96        0.3387     gdb                      xmmalloc
> 080fd028 93        0.3281     gdb                      strcmp_iw_ordered
> c0146fd0 93        0.3281     vmlinux                  proc_pid_lookup
> c013ece0 92        0.3246     vmlinux                  iput
> 00000000 91        0.3211     librep.so.9.2.2          (no symbols)
> c0146d44 90        0.3175     vmlinux                  proc_base_lookup
> c019738c 89        0.3140     vmlinux                  sock_poll
> c011eb28 87        0.3070     vmlinux                  get_user_pages
> 080fb19c 82        0.2893     gdb                      do_my_cleanups
> 000098c3 82        0.2893     libpthread-0.10.so       __pthread_alt_unlock
> c01464f0 80        0.2823     vmlinux                  mem_read
> 000096bf 78        0.2752     libpthread-0.10.so       __pthread_alt_lock
> 081b71e0 75        0.2646     gdb                      dyn_string_resize
> 081b7510 73        0.2576     gdb                      dyn_string_append_char
> c01460a4 72        0.2540     vmlinux                  may_ptrace_attach
> 080de514 70        0.2470     gdb                      target_xfer_memory
> 08161014 70        0.2470     gdb                      bfd_elf32_slurp_symbol_table
> c0110cb0 70        0.2470     vmlinux                  add_wait_queue
> ...