public inbox for gcc-bugs@sourceware.org
From: "jakub at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug lto/65515] [5 Regression] FAIL: gcc.c-torture/compile/limits-fndefn.c   -O2 -flto -flto-partition=none  (ICE) -- SIGSEGV for stack growth failure
Date: Mon, 23 Mar 2015 17:20:00 -0000	[thread overview]
Message-ID: <bug-65515-4-OIW6Om3Ccq@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-65515-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65515

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|hppa64-hp-hpux11.11         |
                 CC|                            |jakub at gcc dot gnu.org
               Host|hppa64-hp-hpux11.11         |
              Build|hppa64-hp-hpux11.11         |

--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
This doesn't seem to be specific to hppa; I can reproduce it on x86_64-linux as
well, and need ulimit -s 46000 for the test to pass.
The Fedora default of ulimit -sH unlimited and ulimit -sS 8192 works, because
gcc automatically attempts to raise the stack limit to 64MB when possible.

The backtrace at the time of the crash is:
#0  0x0000000000bacbd0 in DFS::DFS_write_tree (this=<error reading variable:
Cannot access memory at address 0x7ffffd40cfc8>, ob=<error reading variable:
Cannot access memory at address 0x7ffffd40cfc0>, 
    from_state=<error reading variable: Cannot access memory at address
0x7ffffd40cfb8>, expr=<error reading variable: Cannot access memory at address
0x7ffffd40cfb0>, 
    ref_p=<error reading variable: Cannot access memory at address
0x7ffffd40cfac>, this_ref_p=<error reading variable: Cannot access memory at
address 0x7ffffd40cfa8>, 
    single_p=<error reading variable: Cannot access memory at address
0x7ffffd40cfa4>) at ../../gcc/lto-streamer-out.c:1343
#1  0x0000000000ba499f in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7fffef9d57f8, expr_state=0x2522dc0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:539
#2  0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x2522db0, expr=0x7fffef9d57f8, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
#3  0x0000000000ba5dae in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7fffef9d5820, expr_state=0x2522db0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:659
#4  0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x2522da0, expr=0x7fffef9d5820, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
...
#198591 0x0000000000ba5dae in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7fffef5a2fa0, expr_state=0x3d3f1d0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:659
#198592 0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x3d3f1c0, expr=0x7fffef5a2fa0, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
#198593 0x0000000000ba5dae in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7fffef5a2fc8, expr_state=0x3d3f1c0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:659
#198594 0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x3d3f1b0, expr=0x7fffef5a2fc8, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
#198595 0x0000000000ba5acb in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7ffff1975d20, expr_state=0x3d3f1b0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:646
#198596 0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x3d3f1a0, expr=0x7ffff1975d20, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
#198597 0x0000000000ba499f in DFS::DFS_write_tree_body (this=0x7fffffffdca0,
ob=0x3e4d9b0, expr=0x7ffff19791b0, expr_state=0x3d3f1a0, ref_p=false,
single_p=false) at ../../gcc/lto-streamer-out.c:539
#198598 0x0000000000bacf50 in DFS::DFS_write_tree (this=0x7fffffffdca0,
ob=0x3e4d9b0, from_state=0x0, expr=0x7ffff19791b0, ref_p=false,
this_ref_p=false, single_p=false) at ../../gcc/lto-streamer-out.c:1376
#198599 0x0000000000ba483e in DFS::DFS (this=0x7fffffffdca0, ob=0x3e4d9b0,
expr=0x7ffff19791b0, ref_p=false, this_ref_p=false, single_p=false) at
../../gcc/lto-streamer-out.c:512
#198600 0x0000000000bad6fc in lto_output_tree (ob=0x3e4d9b0,
expr=0x7ffff19791b0, ref_p=false, this_ref_p=false) at
../../gcc/lto-streamer-out.c:1571
#198601 0x0000000000bafa3d in write_global_stream (ob=0x3e4d9b0,
encoder=0x3e4d6f0) at ../../gcc/lto-streamer-out.c:2359
#198602 0x0000000000bafb70 in lto_output_decl_state_streams (ob=0x3e4d9b0,
state=0x3e4d6d0) at ../../gcc/lto-streamer-out.c:2406
#198603 0x0000000000bb0af1 in produce_asm_for_decls () at
../../gcc/lto-streamer-out.c:2776
#198604 0x0000000000c25e21 in write_lto () at ../../gcc/passes.c:2408
#198605 0x0000000000c26030 in ipa_write_summaries_1 (encoder=0x29263d0) at
../../gcc/passes.c:2469
#198606 0x0000000000c26285 in ipa_write_summaries () at ../../gcc/passes.c:2529
#198607 0x000000000084127d in ipa_passes () at ../../gcc/cgraphunit.c:2199
#198608 0x00000000008415e6 in symbol_table::compile (this=0x7ffff185e000) at
../../gcc/cgraphunit.c:2295
#198609 0x0000000000841908 in symbol_table::finalize_compilation_unit
(this=0x7ffff185e000) at ../../gcc/cgraphunit.c:2444
#198610 0x000000000069ceb9 in c_write_global_declarations () at
../../gcc/c/c-decl.c:10801
#198611 0x0000000000d1ebad in compile_file () at ../../gcc/toplev.c:608
#198612 0x0000000000d21067 in do_compile () at ../../gcc/toplev.c:2076
#198613 0x0000000000d21295 in toplev::main (this=0x7fffffffe010, argc=20,
argv=0x7fffffffe118) at ../../gcc/toplev.c:2174
#198614 0x0000000001660011 in main (argc=20, argv=0x7fffffffe118) at
../../gcc/main.c:39

I don't know if this is really something that should be fixed; a function with
100000 arguments is extremely unlikely in practice (nobody sane would write
that).
A possible fix for the case where a function has too many arguments would be to
allocate a temporary vector holding the arguments copied out of the chain, and
then stream the arguments from the last one to the first one instead of the
other way around - that way we don't really recurse on TREE_CHAIN.



Thread overview: 12+ messages
2015-03-22 15:02 [Bug lto/65515] New: " danglin at gcc dot gnu.org
2015-03-22 16:59 ` [Bug lto/65515] " danglin at gcc dot gnu.org
2015-03-23  3:49 ` hubicka at gcc dot gnu.org
2015-03-23  9:49 ` rguenth at gcc dot gnu.org
2015-03-23 17:20 ` jakub at gcc dot gnu.org [this message]
2015-03-23 17:38 ` jakub at gcc dot gnu.org
2015-03-23 18:12 ` dave.anglin at bell dot net
2015-03-24 10:18 ` rguenth at gcc dot gnu.org
2015-03-24 13:15 ` rguenther at suse dot de
2015-03-24 16:10 ` dave.anglin at bell dot net
2015-03-25 10:16 ` jakub at gcc dot gnu.org
2015-03-25 10:27 ` jakub at gcc dot gnu.org
