* memory increased rapidly when adding a break
@ 2022-11-13 10:01 DeJiang Zhu
From: DeJiang Zhu @ 2022-11-13 10:01 UTC (permalink / raw)
To: gdb
Hi,
I compiled envoy (a big C++ project) with gcc 12.2.0 and am debugging it with
gdb 12.1.
But memory increased rapidly (over 40 GB, until OOM) when I added a breakpoint.
I got this backtrace by attaching to the gdb process while its memory was increasing.
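For reference, this is roughly how I attached to the running gdb to capture it
(the `pgrep` invocation is just one way to find the gdb PID):
```
$ pgrep -a gdb          # find the PID of the gdb process that is eating memory
$ gdb -p <gdb-pid>      # attach a second gdb to it
(gdb) bt
```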
```
(gdb) bt
#0 0x00007fa6893dc935 in _int_malloc () from /lib64/libc.so.6
#1 0x00007fa6893df6fc in malloc () from /lib64/libc.so.6
#2 0x0000000000468278 in xmalloc (size=4064) at alloc.c:60
#3 0x00000000008ecd95 in call_chunkfun (size=<optimized out>,
h=0x17a246ed0) at ./obstack.c:94
#4 _obstack_begin_worker (h=0x17a246ed0, size=<optimized out>,
alignment=<optimized out>) at ./obstack.c:141
#5 0x000000000052d0d3 in demangle_parse_info::demangle_parse_info
(this=0x17a246ec0) at cp-name-parser.y:1973
#6 cp_demangled_name_to_comp (demangled_name=demangled_name@entry=0x8d12c8c0
"std::stack<unsigned int, std::deque<unsigned int, std::allocator<unsigned
int> > >::size_type", errmsg=errmsg@entry=0x0) at cp-name-parser.y:2040
#7 0x000000000052ff5e in cp_canonicalize_string
(string=string@entry=0x8d12c8c0
"std::stack<unsigned int, std::deque<unsigned int, std::allocator<unsigned
int> > >::size_type") at cp-support.c:635
#8 0x0000000000570b98 in dwarf2_canonicalize_name (name=0x8d12c8c0
"std::stack<unsigned int, std::deque<unsigned int, std::allocator<unsigned
int> > >::size_type", cu=<optimized out>, objfile=0x2c3af10) at
dwarf2/read.c:22908
#9 0x0000000000590265 in dwarf2_compute_name (name=0x7fa55773c524
"size_type", die=0x172590eb0, cu=0xe2aeefd0, physname=0) at
dwarf2/read.c:10095
#10 0x000000000058bf39 in dwarf2_full_name (cu=0xe2aeefd0, die=0x172590eb0,
name=0x0) at dwarf2/read.c:10123
#11 read_typedef (cu=0xe2aeefd0, die=0x172590eb0) at dwarf2/read.c:17687
#12 read_type_die_1 (cu=0xe2aeefd0, die=0x172590eb0) at dwarf2/read.c:22531
#13 read_type_die (die=0x172590eb0, cu=0xe2aeefd0) at dwarf2/read.c:22473
#14 0x000000000059acda in dwarf2_add_type_defn (cu=0xe2aeefd0,
die=0x172590eb0, fip=0x7ffd8a1be3e0) at dwarf2/read.c:14740
#15 handle_struct_member_die (child_die=0x172590eb0, type=0x17a6becd0,
fi=0x7ffd8a1be3e0, template_args=<optimized out>, cu=0xe2aeefd0) at
dwarf2/read.c:15867
#16 0x0000000000597044 in process_structure_scope (cu=0xe2aeefd0,
die=0x172590920) at dwarf2/read.c:15908
#17 process_die (die=0x172590920, cu=0xe2aeefd0) at dwarf2/read.c:9698
#18 0x000000000059646d in read_namespace (cu=0xe2aeefd0, die=0x16802e140)
at dwarf2/read.c:17068
#19 process_die (die=0x16802e140, cu=0xe2aeefd0) at dwarf2/read.c:9737
#20 0x0000000000598df9 in read_file_scope (die=0x1594e8360, cu=0xe2aeefd0)
at dwarf2/read.c:10648
#21 0x0000000000595f32 in process_die (die=0x1594e8360, cu=0xe2aeefd0) at
dwarf2/read.c:9669
#22 0x000000000059c0c8 in process_full_comp_unit
(pretend_language=<optimized out>, cu=0xe2aeefd0) at dwarf2/read.c:9439
#23 process_queue (per_objfile=0x9d546c0) at dwarf2/read.c:8652
#24 dw2_do_instantiate_symtab (per_cu=<optimized out>,
per_objfile=0x9d546c0, skip_partial=<optimized out>) at dwarf2/read.c:2311
#25 0x000000000059c5f0 in dw2_instantiate_symtab (per_cu=0x9c886f0,
per_objfile=0x9d546c0, skip_partial=<optimized out>) at
dwarf2/read.c:2335
#26 0x000000000059c78a in
dw2_expand_symtabs_matching_one(dwarf2_per_cu_data *, dwarf2_per_objfile *,
gdb::function_view<bool(char const*, bool)>,
gdb::function_view<bool(compunit_symtab*)>) (per_cu=<optimized out>,
per_objfile=<optimized out>, file_matcher=..., expansion_notify=...) at
dwarf2/read.c:4204
#27 0x000000000059c94b in
dwarf2_gdb_index::expand_symtabs_matching(objfile*, gdb::function_view<bool
(char const*, bool)>, lookup_name_info const*, gdb::function_view<bool
(char const*)>, gdb::function_view<bool (compunit_symtab*)>,
enum_flags<block_search_flag_values>, domain_enum_tag, search_domain)
(this=<optimized out>, objfile=<optimized out>, file_matcher=...,
lookup_name=<optimized out>, symbol_matcher=
..., expansion_notify=..., search_flags=..., domain=UNDEF_DOMAIN,
kind=<optimized out>) at dwarf2/read.c:4421
#28 0x0000000000730feb in objfile::map_symtabs_matching_filename(char
const*, char const*, gdb::function_view<bool (symtab*)>) (this=0x2c3af10,
name=<optimized out>, name@entry=0x586f26f0 "utility.h",
real_path=<optimized out>, real_path@entry=0x0, callback=...) at
symfile-debug.c:207
#29 0x0000000000741abd in iterate_over_symtabs(char const*,
gdb::function_view<bool (symtab*)>) (name=name@entry=0x586f26f0
"utility.h", callback=...) at symtab.c:624
#30 0x00000000006311d7 in collect_symtabs_from_filename (file=0x586f26f0
"utility.h", search_pspace=<optimized out>) at linespec.c:3716
#31 0x0000000000631212 in symtabs_from_filename (filename=0x586f26f0
"utility.h", search_pspace=<optimized out>) at linespec.c:3736
#32 0x0000000000635e9f in parse_linespec (parser=0x7ffd8a1bf1b0,
arg=<optimized out>, match_type=<optimized out>) at linespec.c:2557
#33 0x0000000000636cac in event_location_to_sals (parser=0x7ffd8a1bf1b0,
location=0x51ed4da0) at linespec.c:3082
#34 0x0000000000636f73 in decode_line_full (location=location@entry=0x51ed4da0,
flags=flags@entry=1, search_pspace=search_pspace@entry=0x0,
default_symtab=<optimized out>, default_line=<optimized out>,
canonical=0x7ffd8a1bf4e0, select_mode=0x0, filter=<optimized out>) at
linespec.c:3161
#35 0x00000000004b1683 in parse_breakpoint_sals (location=0x51ed4da0,
canonical=0x7ffd8a1bf4e0) at breakpoint.c:8730
#36 0x00000000004b5d03 in create_breakpoint (gdbarch=0xeca5dc0,
location=location@entry=0x51ed4da0, cond_string=cond_string@entry=0x0,
thread=<optimized out>, thread@entry=-1, extra_string=0x0,
extra_string@entry=0x7ffd8a1bf650 "",
force_condition=force_condition@entry=false,
parse_extra=0, tempflag=0, type_wanted=bp_breakpoint, ignore_count=0,
pending_break_support=AUTO_BOOLEAN_TRUE, ops=0xc23c00
<bkpt_breakpoint_ops>, from_tty=0, enabled=1, internal=0, flags=0) at
breakpoint.c:9009
#37 0x0000000000674ba8 in mi_cmd_break_insert_1 (dprintf=0, argv=<optimized
out>, argc=<optimized out>, command=<optimized out>) at
mi/mi-cmd-break.c:361
```
Also, I found that it loops in `dwarf2_gdb_index::expand_symtabs_matching`:
I added a breakpoint on `dw2_expand_symtabs_matching_one`, and it hit that
breakpoint repeatedly.
```
if (lookup_name == nullptr)
{
for (dwarf2_per_cu_data *per_cu
: all_comp_units_range (per_objfile->per_bfd))
{
QUIT;
if (!dw2_expand_symtabs_matching_one (per_cu, per_objfile,
file_matcher, expansion_notify))
return false;
}
return true;
}
```
It seems `per_bfd->all_comp_units.size()` is `28776`.
I'm not sure whether that is a reasonable value.
```
(gdb) p per_objfile->per_bfd->all_comp_units
$423 = {<std::_Vector_base<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter>,
std::allocator<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter> > >> = {_M_impl =
{<std::allocator<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter> >> =
{<__gnu_cxx::new_allocator<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter> >> = {<No data fields>}, <No data fields>},
<std::_Vector_base<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter>,
std::allocator<std::unique_ptr<dwarf2_per_cu_data,
dwarf2_per_cu_data_deleter> > >::_Vector_impl_data> = {_M_start =
0x3b6f980, _M_finish = 0x3b769e8, _M_end_of_storage = 0x3b769e8}, <No data
fields>}}, <No data fields>}
(gdb) p 0x3b769e8-0x3b6f980
$424 = 28776
```
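A side note on that arithmetic: subtracting the raw `_M_start`/`_M_finish`
addresses gives a size in bytes, not elements, so if each element is an 8-byte
`unique_ptr` the real `size()` would be the byte difference divided by 8 (this
is just me double-checking the numbers, not gdb output):

```python
# Raw vector pointers copied from the gdb output above.
start = 0x3b6f980
finish = 0x3b769e8

span_bytes = finish - start
print(span_bytes)       # 28776 bytes, matching $424 above

# Assuming 8-byte unique_ptr elements, the element count would be:
print(span_bytes // 8)  # 3597
```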
I can see the memory increasing rapidly in that for loop.
I'm new to gdb's internal implementation,
so I'm not sure where the problem could be: gcc, gdb, or just wrong usage on my part.
Could you help point me in the right direction? I have the files to reproduce it
reliably.
Thanks a lot!
gcc version:
```
gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/home/zhudejiang.pt/work/gcc-12/libexec/gcc/x86_64-pc-linux-gnu/12.2.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ./configure --prefix=/home/zhudejiang.pt/work/gcc-12/ --with-gmp=/home/zhudejiang.pt/work/gnu/gmp --with-mpfr=/home/zhudejiang.pt/work/gnu/mpfr --with-mpfr-lib=/home/zhudejiang.pt/work/gnu/mpfr/lib --with-mpc=/home/zhudejiang.pt/work/gnu/mpc --disable-multilib
Thread model: posix
Supported LTO compression algorithms: zlib
gcc version 12.2.0 (GCC)
```
gdb version:
```
gdb -v
GNU gdb (GDB) 12.1
Copyright (C) 2022 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```
* Re: memory increased rapidly when adding a break
From: Simon Marchi @ 2022-11-14 0:26 UTC (permalink / raw)
To: DeJiang Zhu, gdb
On 11/13/22 05:01, DeJiang Zhu via Gdb wrote:
> Hi,
>
> I compiled envoy(a big c++ project) by using gcc-12.2.0, debug it by using
> gdb 12.1.
>
> But, memory increased rapidly(over 40+GB, until OOM), when adding a break.
>
> I got this backtrace, after attach the gdb process, when memory increasing.
>
> [backtrace snipped]
>
> Also, I found it's loop in `dwarf2_gdb_index::expand_symtabs_matching`.
> I added a break on `dw2_expand_symtabs_matching_one`, it hit this break
> repeatly.
>
> [code snipped]
>
> Seems, `per_bfd->all_comp_units.size()` is `28776`.
> I'm not sure if this is a reasonable value.
I think that's possible, if it's a big project. For instance, my gdb
binary has about 660 compile units, and gdb is not really big.
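If you want to double-check how many CUs the binary really has, readelf can
count them from .debug_info directly (a sketch; on a binary this size the dump
will take a while):
```
$ readelf --debug-dump=info your-binary | grep -c 'Compilation Unit @'
```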
>
> [output snipped]
>
> I can see the memory increasing rapidly in the for loop.
> I'm new to the gdb internal implementation.
> I'm not sure where could be the problem, gcc or gdb, or just a wrong use.
>
> Could you help to point the direction? I have the files to reproduce it
> stablely.
GDB works in two steps to read compile units. From your stack trace, it
looks like you are using an index (the .gdb_index kind). When GDB first
loads your binary, it reads in an index present in the binary (or in the
index cache) that lists all the entity names present in each compile
unit of the program. When you set a breakpoint using a name, GDB
"expands" all the compile units containing something that matches what
you asked for. "Expand" means that GDB reads the full debug information
from the DWARF for that compile unit, creating some internal data
structures to represent it.
It sounds like the breakpoint spec string you passed matches a lot of
compile units, and a lot of them get expanded. That creates a lot of
in-memory objects, eventually reaching some limit.
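If you want to see which symtabs ended up expanded, something like this should
show them (a sketch; the exact output format varies between versions):
```
(gdb) maint info symtabs
```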
Out of curiosity, what is the string you used to create your breakpoint?
From your stack trace, it sounds like it's "utility.h:LINE".
Expanding that many CUs could be legitimate, if there's really something
matching in all these CUs, or it could be a bug where GDB expands
unrelated CUs. There is an open bug related to a problem like this:
https://sourceware.org/bugzilla/show_bug.cgi?id=29105
Although I'm not sure this is what you see.
Is the project you build something open source that other people could
build and try?
Simon
* Re: memory increased rapidly when adding a break
From: DeJiang Zhu @ 2022-11-14 3:58 UTC (permalink / raw)
To: Simon Marchi; +Cc: gdb
On Mon, Nov 14, 2022 at 8:26 AM Simon Marchi <simark@simark.ca> wrote:
>
>
> On 11/13/22 05:01, DeJiang Zhu via Gdb wrote:
> > Hi,
> >
> > I compiled envoy(a big c++ project) by using gcc-12.2.0, debug it by
> using
> > gdb 12.1.
> >
> > But, memory increased rapidly(over 40+GB, until OOM), when adding a
> break.
> >
> > I got this backtrace, after attach the gdb process, when memory
> increasing.
> >
> > [backtrace snipped]
> >
> > Also, I found it's loop in `dwarf2_gdb_index::expand_symtabs_matching`.
> > I added a break on `dw2_expand_symtabs_matching_one`, it hit this break
> > repeatly.
> >
> > [code snipped]
> >
> > Seems, `per_bfd->all_comp_units.size()` is `28776`.
> > I'm not sure if this is a reasonable value.
>
> I think that's possible, if it's a big project. For instance, my gdb
> binary has about 660 compile units, and gdb is not really big.
>
> >
> > [output snipped]
> >
> > I can see the memory increasing rapidly in the for loop.
> > I'm new to the gdb internal implementation.
> > I'm not sure where could be the problem, gcc or gdb, or just a wrong use.
> >
> > Could you help to point the direction? I have the files to reproduce it
> > stablely.
>
> GDB works in two steps to read compile units. From you stack trace, it
> looks like you are using an index (the .gdb_index kind). When GDB first
> loads you binary, it reads in an index present in the binary (or in the
> index cache) that lists all the entity names present in each compile
> unit of the program. When you set a breakpoint using a name, GDB
> "expands" all the compile units with something in it that matches what
> you asked for. "Expand" means that GDB reads the full debug information
> from the DWARF for that compile unit, creating some internal data
> structures to represent it.
>
> It sounds like the breakpoint spec string you passed matches a lot of
> compile units, and a lot of them get expanded. That creates a lot of
> in-memory objects, eventually reaching some limit.
>
> Out of curiosity, what is the string you used to create your breakpoint?
> From you stack trace, it sounds like it's "utility.h:LINE".
>
Thanks for your detailed explanation.
Yes, it's `utility.h:560`; I added this breakpoint from VS Code.
> creating some internal data structures to represent it
I wonder what could allocate so much memory (at least 40 GB),
while the binary is only 985 MB including the whole debug info.
Maybe that many matches simply need that much memory?
```
$ ls -lh envoy-deadloop
-r-xr-xr-x 1 zhudejiang.pt users 985M Nov 13 15:34 envoy-deadloop
$ readelf -SW envoy-deadloop
There are 48 section headers, starting at offset 0x3d856690:
Section Headers:
  [Nr] Name              Type        Address          Off      Size     ES Flg Lk    Inf Al
  [ 0]                   NULL        0000000000000000 000000   000000   00      0      0  0
  [ 1] .interp           PROGBITS    00000000000002e0 0002e0   00001c   00   A  0      0  1
  [ 2] .note.ABI-tag     NOTE        00000000000002fc 0002fc   000020   00   A  0      0  4
  [ 3] .dynsym           DYNSYM      0000000000000320 000320   003810   18   A  7      1  8
  [ 4] .gnu.version      VERSYM      0000000000003b30 003b30   0004ac   02   A  3      0  2
  [ 5] .gnu.version_r    VERNEED     0000000000003fdc 003fdc   0001d0   00   A  7      6  4
  [ 6] .gnu.hash         GNU_HASH    00000000000041b0 0041b0   0005ac   00   A  3      0  8
  [ 7] .dynstr           STRTAB      000000000000475c 00475c   001cce   00   A  0      0  1
  [ 8] .rela.dyn         RELA        0000000000006430 006430   4c20d8   18   A  3      0  8
  [ 9] .rela.plt         RELA        00000000004c8508 4c8508   002550   18   A  3     29  8
  [10] .rodata           PROGBITS    00000000004cb000 4cb000   c1b474   00 AMS  0      0  4096
  [11] .gcc_except_table PROGBITS    00000000010e6474 10e6474  5f231c   00   A  0      0  4
  [12] protodesc_cold    PROGBITS    00000000016d87a0 16d87a0  079d68   00   A  0      0  32
  [13] flags_help_cold   PROGBITS    0000000001752520 1752520  001202   00   A  0      0  32
  [14] .eh_frame_hdr     PROGBITS    0000000001753724 1753724  4a1e4c   00   A  0      0  4
  [15] .eh_frame         PROGBITS    0000000001bf5570 1bf5570  13a6b0c  00   A  0      0  8
  [16] .text             PROGBITS    0000000002f9d080 2f9c080  3bbc38f  00  AX  0      0  64
  [17] .init             PROGBITS    0000000006b59410 6b58410  00001a   00  AX  0      0  4
  [18] .fini             PROGBITS    0000000006b5942c 6b5842c  000009   00  AX  0      0  4
  [19] malloc_hook       PROGBITS    0000000006b59436 6b58436  000786   00  AX  0      0  2
  [20] google_malloc     PROGBITS    0000000006b59bc0 6b58bc0  005bd0   00  AX  0      0  64
  [21] .plt              PROGBITS    0000000006b5f790 6b5e790  0018f0   00  AX  0      0  16
  [22] .tdata            PROGBITS    0000000006b62080 6b60080  000088   00 WAT  0      0  64
  [23] .tbss             NOBITS      0000000006b62140 6b60108  002500   00 WAT  0      0  64
  [24] .fini_array       FINI_ARRAY  0000000006b62108 6b60108  000008   08  WA  0      0  8
  [25] .init_array       INIT_ARRAY  0000000006b62110 6b60110  0064b0   08  WA  0      0  8
  [26] .data.rel.ro      PROGBITS    0000000006b685c0 6b665c0  1d7448   00  WA  0      0  32
  [27] .dynamic          DYNAMIC     0000000006d3fa08 6d3da08  000200   10  WA  7      0  8
  [28] .got              PROGBITS    0000000006d3fc08 6d3dc08  000760   00  WA  0      0  8
  [29] .got.plt          PROGBITS    0000000006d40368 6d3e368  000c88   00  WA  0      0  8
  [30] .data             PROGBITS    0000000006d42000 6d3f000  060c18   00  WA  0      0  64
  [31] .tm_clone_table   PROGBITS    0000000006da2c18 6d9fc18  000000   00  WA  0      0  8
  [32] .bss              NOBITS      0000000006da2c40 6d9fc18  2119e0   00  WA  0      0  64
  [33] .comment          PROGBITS    0000000000000000 6d9fc18  000060   01  MS  0      0  1
  [34] .debug_info       PROGBITS    0000000000000000 6d9fc78  8a32aa   00      0      0  1
  [35] .debug_abbrev     PROGBITS    0000000000000000 7642f22  093663   00      0      0  1
  [36] .debug_aranges    PROGBITS    0000000000000000 76d6590  62d2cd0  00      0      0  16
  [37] .debug_line       PROGBITS    0000000000000000 d9a9260  128ef333 00      0      0  1
  [38] .debug_str        PROGBITS    0000000000000000 20298593 1a392b   01  MS  0      0  1
  [39] .debug_addr       PROGBITS    0000000000000000 2043bebe 44b00a0  00      0      0  1
  [40] .debug_rnglists   PROGBITS    0000000000000000 248ebf5e 196f4db  00      0      0  1
  [41] .debug_loclists   PROGBITS    0000000000000000 2625b439 22e7fb   00      0      0  1
  [42] .debug_frame      PROGBITS    0000000000000000 26489c38 000060   00      0      0  8
  [43] .debug_line_str   PROGBITS    0000000000000000 26489c98 000569   01  MS  0      0  1
  [44] .gdb_index        PROGBITS    0000000000000000 2648a201 eef593d  00      0      0  1
  [45] .symtab           SYMTAB      0000000000000000 3537fb40 194d828  18     47 418626  8
  [46] .shstrtab         STRTAB      0000000000000000 36ccd368 0001fe   00      0      0  1
  [47] .strtab           STRTAB      0000000000000000 36ccd566 6b89128  00      0      0  1
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
L (link order), O (extra OS processing required), G (group), T (TLS),
C (compressed), x (unknown), o (OS specific), E (exclude),
l (large), p (processor specific)
```
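To see why expansion is so expensive for this binary, it helps to total the DWARF sections from the table above. A small sketch (the hex sizes are copied by hand from the readelf output; this is this binary's layout, not a general rule):

```python
# Sum the Size column (hex) of the .debug_* sections from the
# readelf output above, plus .gdb_index for comparison.
debug_sizes = {
    ".debug_info":     0x8a32aa,
    ".debug_abbrev":   0x093663,
    ".debug_aranges":  0x62d2cd0,
    ".debug_line":     0x128ef333,
    ".debug_str":      0x1a392b,
    ".debug_addr":     0x44b00a0,
    ".debug_rnglists": 0x196f4db,
    ".debug_loclists": 0x22e7fb,
    ".debug_frame":    0x60,
    ".debug_line_str": 0x569,
}
gdb_index = 0xeef593d

total = sum(debug_sizes.values())
print(f".debug_* total: {total / 2**20:.0f} MiB")      # ~503 MiB of DWARF
print(f".gdb_index:     {gdb_index / 2**20:.0f} MiB")  # ~239 MiB of index on top
```

So roughly three quarters of the 985 MB binary is debug data plus index, which GDB must walk and expand into much larger in-memory structures when resolving a breakpoint location.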
> Expanding that many CUs could be legitimate, if there's really something
> matching in all these CUs, or it could be a bug where GDB expands
> unrelated CUs. There is an open bug related to a problem like this:
>
> https://sourceware.org/bugzilla/show_bug.cgi?id=29105
That one looks like a CPU performance issue, so I don't think it's the same problem as mine.
>
> Although I'm not sure this is what you see.
>
> Is the project you build something open source that other people could
> build and try?
>
Yes, it's an open-source project.
It builds with Bazel and depends on Go, which may be a bit complicated.
Here is the doc for building and running it:
https://github.com/mosn/envoy-go-extension#envoy
>
> Simon
>
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: memory increased rapidly when adding a break
2022-11-14 3:58 ` DeJiang Zhu
@ 2022-11-14 14:47 ` Simon Marchi
2022-11-15 1:34 ` DeJiang Zhu
0 siblings, 1 reply; 5+ messages in thread
From: Simon Marchi @ 2022-11-14 14:47 UTC (permalink / raw)
To: DeJiang Zhu; +Cc: gdb
> Thanks for your detailed explanation.
> Yes, it's `utility.h:560`; I added this breakpoint from vscode.
>
>> creating some internal data structures to represent it
>
> I wonder what allocates so much memory (at least 40 GB)
> when the binary is only 985 MB including full debug info.
> Maybe too many match results require too much memory?
It's hard to tell whether it's a GDB bug, or whether it's working as
expected and GDB is just inefficient.
If you end up expanding all CUs, that much memory is not unrealistic. I
just ran gdb on itself and did "maint expand-symtabs" to force the
expansion of all symtabs; htop shows 4.6 GB of virtual memory used. So
I can imagine that a project 10 times bigger could take 10 times more
memory.
>>
>> Although I'm not sure this is what you see.
>>
>> Is the project you build something open source that other people could
>> build and try?
>>
>
> Yes, it's an open source project.
> It builds with Bazel and depends on Go, which may be a bit complicated.
> This is the doc for build & run.
> https://github.com/mosn/envoy-go-extension#envoy
I was curious, so I built it, but then I'm not sure what to do to
reproduce your case.
Simon
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: memory increased rapidly when adding a break
2022-11-14 14:47 ` Simon Marchi
@ 2022-11-15 1:34 ` DeJiang Zhu
0 siblings, 0 replies; 5+ messages in thread
From: DeJiang Zhu @ 2022-11-15 1:34 UTC (permalink / raw)
To: Simon Marchi; +Cc: gdb
[-- Attachment #1: Type: text/plain, Size: 1534 bytes --]
On Mon, Nov 14, 2022 at 10:47 PM Simon Marchi <simark@simark.ca> wrote:
> > Thanks for your detailed explanation.
> > Yes, it's `utility.h:560`; I added this breakpoint from vscode.
>
> It's hard to tell whether it's a GDB bug, or whether it's working as
> expected and GDB is just inefficient.
>
> If you end up expanding all CUs, that much memory is not unrealistic. I
> just ran gdb on itself and did "maint expand-symtabs" to force the
> expansion of all symtabs; htop shows 4.6 GB of virtual memory used. So
> I can imagine that a project 10 times bigger could take 10 times more
> memory.
>
> I'm curious, so I built that, but then I'm not sure what to do, how to
> reproduce your case.
>
Thanks! I tried again, and this time it actually finished, after 13 minutes.
Also, as `top` shows, it uses 30+ GB of memory.
```
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16357 zhudeji+ 20 0 31.9g 30.5g 7.2g S 0.0 49.0 13:34.93 gdb
```
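Two quick back-of-envelope checks on those numbers (values copied from the `top` line above and from Simon's 4.6 GB measurement earlier in the thread; plain arithmetic, not a gdb measurement):

```python
# From the top line above: gdb's resident set and its share of RAM.
res_gb = 30.5       # RES column
mem_pct = 49.0      # %MEM column
gdb_self_gb = 4.6   # Simon's figure for gdb expanding its own symtabs

total_ram_gb = res_gb / (mem_pct / 100)  # total RAM implied by %MEM
ratio = res_gb / gdb_self_gb             # envoy vs. gdb-on-itself
print(f"implied total RAM:  ~{total_ram_gb:.0f} GB")  # gdb alone holds about half of it
print(f"envoy/gdb ratio:    ~{ratio:.1f}x")           # within Simon's 10x linear-scaling guess
```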
Here are the steps, after building envoy:
```
$ gdb --args ./envoy
Reading symbols from ./envoy...
(gdb) b utility.h:560
Breakpoint 1 at 0x361130b: utility.h:560. (2 locations)
```
Sorry, I cannot reproduce the 40+ GB memory usage again.
I tried with vscode again; it also finished, using 30+ GB of memory.
Maybe I made a mistake, or something else changed.
After I hit 40+ GB once, I would `kill -9` gdb in later debugging sessions
before it ate huge amounts of memory, since it made the whole system nearly unresponsive.
I'm sorry for the wrong info, and thanks a lot again.
Also, I hope gdb can optimize this if there is room to.
^ permalink raw reply [flat|nested] 5+ messages in thread
end of thread, other threads:[~2022-11-15 1:35 UTC | newest]
Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-11-13 10:01 memory increased rapidly when adding a break DeJiang Zhu
2022-11-14 0:26 ` Simon Marchi
2022-11-14 3:58 ` DeJiang Zhu
2022-11-14 14:47 ` Simon Marchi
2022-11-15 1:34 ` DeJiang Zhu