public inbox for gcc-patches@gcc.gnu.org
From: "Martin Liška" <marxin.liska@gmail.com>
To: Teresa Johnson <tejohnson@google.com>
Cc: Jan Hubicka <hubicka@ucw.cz>,
	Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>,
		"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	Steven Bosscher <stevenb.gcc@gmail.com>,
		Jeff Law <law@redhat.com>, Sriraman Tallam <tmsriram@google.com>
Subject: Re: [PATCH] Sanitize block partitioning under -freorder-blocks-and-partition
Date: Mon, 19 Aug 2013 19:56:00 -0000	[thread overview]
Message-ID: <CAObPJ3PqmH5LHAgM_t-ncKF9L3DCfbKWzBKD9grQR73MOsSrYQ@mail.gmail.com> (raw)
In-Reply-To: <CAAe5K+UnqBfxYrZxSkjRudq8NYni_9ih+t=us+2hH+UPsrwDLA@mail.gmail.com>

[-- Attachment #1: Type: text/plain, Size: 19977 bytes --]

Dear Teresa,

On 19 August 2013 19:47, Teresa Johnson <tejohnson@google.com> wrote:
> On Mon, Aug 19, 2013 at 8:09 AM, Jan Hubicka <hubicka@ucw.cz> wrote:
>>> Remember it isn't using dominance anymore. The latest patch was
>>> instead ensuring the most frequent path between hot blocks and the
>>> entry/exit are marked hot. That should be better than the dominance
>>> approach used in the earlier version.
>>
>> Indeed, that looks like a more reasonable approach.
>> Can you point me to the last version of patch? Last one I remember still
>> walked dominators...
>
> I've included the latest patch below. I still use dominators in the
> post-cfg-optimization fixup (fixup_partitions), but not in the
> partition sanitizing done during the partitioning itself
> (sanitize_hot_paths). The former is looking for hot bbs newly
> dominated by cold bbs after cfg transformations.
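[Editor's aside: the dominator-based fixup described above boils down to a small worklist — every block dominated by a cold block must itself become cold. A hedged Python sketch of that idea (a toy model over plain dicts, not GCC code; names mirror find_partition_fixes for readability):]

```python
def find_partition_fixes(cold_blocks, dom_children):
    """Toy model of the dominator walk in find_partition_fixes.

    cold_blocks  -- set of blocks initially in the cold partition
    dom_children -- block -> list of its immediate dominator-tree children
    Returns the list of blocks that must be re-marked cold.
    """
    cold = set(cold_blocks)
    worklist = list(cold_blocks)
    bbs_to_fix = []
    while worklist:
        bb = worklist.pop()
        for son in dom_children.get(bb, []):
            if son not in cold:
                # A hot block dominated by a cold one: re-mark it cold
                # and enqueue it so its own dominator subtree is processed.
                cold.add(son)
                bbs_to_fix.append(son)
                worklist.append(son)
    return bbs_to_fix
```

[On a dominator tree 0→{1,2}, 2→{3}, 3→{4} with block 2 cold, the walk re-marks 3 and 4.]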
>
>>>
>>> > We can commit it and work on better solution incrementally but it will
>>> > probably mean replacing it later.  If you think it makes things easier
>>> > to work on it incrementally, I think the patch is OK.
>>>
>>> Yes, I think this is a big step forward from what is there now for
>>> splitting, which does the splitting purely based on bb count in
>>> isolation. I don't have a much better solution in mind yet.
>>>
>>> >>
>>> >> > - I'll try building and profiling gimp myself to see if I can
>>> >> > reproduce the issue with code executing out of the cold section.
>>> >>
>>> >> I have spent some time this week trying to get the latest gimp Martin
>>> >> pointed me to configured and built, but it took a while to track down
>>> >> and configure/build all of the required versions of dependent
>>> >> packages. I'm still hitting some issues trying to get it compiled, so
>>> >> it may not yet be configured properly. I'll take a look again early
>>> >> next week.
>>> >
>>> > I do not think there is anything special about gimp.  You can probably
>>> > take any other bigger app, like GCC itself. With profiledbootstrap
>>> > and linker script to lock unlikely section you should get ICEs where
>>> > we jump into the cold section and should not.
>>>
>>> Ok, please point me to the linker script and I will try gcc
>>> profiledbootstrap as well. I wanted to try gimp if possible as I
>>> haven't seen this much jumping to the cold section in some of the
>>> internal apps I tried.
>>
>> You can also discuss with Martin the systemtap script to plot disk accesses
>> during the startup.  It is very handy for analyzing the code layout issues

I am attaching a linker script that preserves the .text.unlikely,
.text.exit, .text.startup and .text.hot sections. I tried to change
the memory access flags for the .text.unlikely section to read-only,
but I am not very familiar with linker scripting. Perhaps you can use
the MEMORY command to define a read-only memory region for
.text.unlikely.
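[Editor's aside: to make the MEMORY idea concrete, a fragment along these lines might work — purely a hypothetical, untested sketch. The region name, ORIGIN and LENGTH are invented, and omitting the 'w' attribute does not write-protect anything at runtime; at most the linker will diagnose writable sections assigned to the region:]

```
MEMORY
{
  /* Hypothetical read/execute-only region for unlikely code;
     ORIGIN and LENGTH are placeholders.  */
  unlikely_mem (rx) : ORIGIN = 0x00800000, LENGTH = 4M
}
SECTIONS
{
  .text.unlikely : { *(.text.unlikely .text.*_unlikely .text.unlikely.*) } > unlikely_mem
}
```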

For my graphs I disable the read-ahead function in the Linux kernel,
collect the disk accesses with a systemtap script, and plot the
results with the matplotlib library. If you are interested in this
tool chain, write to me and I can send you the scripts. I have also
been using valgrind to trace functions, so I will try to get the
locations of the called functions to identify which functions are
called from which sections.

Martin

> Ok. I am using linux perf to collect this info (fed through some
> scripts that munge and plot the data).
>
>>
>> It may be interesting to get similar script taking traces from valgrind
>> and ploting the most frequent calls in the final layout ;)
>
> I think linux perf -g to get a callgraph should give similar data.
>
> Teresa
>
>>
>> Honza
>
> 2013-08-05  Teresa Johnson  <tejohnson@google.com>
>             Steven Bosscher  <steven@gcc.gnu.org>
>
>         * cfgrtl.c (fixup_new_cold_bb): New routine.
>         (commit_edge_insertions): Invoke fixup_partitions.
>         (find_partition_fixes): New routine.
>         (fixup_partitions): Ditto.
>         (verify_hot_cold_block_grouping): Update comments.
>         (rtl_verify_edges): Invoke find_partition_fixes.
>         (rtl_verify_bb_pointers): Update comments.
>         (rtl_verify_bb_layout): Ditto.
>         * basic-block.h (fixup_partitions): Declare.
>         * cfgcleanup.c (try_optimize_cfg): Invoke fixup_partitions.
>         * bb-reorder.c (sanitize_hot_paths): New function.
>         (find_rarely_executed_basic_blocks_and_crossing_edges): Invoke
>         sanitize_hot_paths.
>
> Index: cfgrtl.c
> ===================================================================
> --- cfgrtl.c    (revision 201461)
> +++ cfgrtl.c    (working copy)
> @@ -1341,6 +1341,43 @@ fixup_partition_crossing (edge e)
>      }
>  }
>
> +/* Called when block BB has been reassigned to the cold partition,
> +   because it is now dominated by another cold block,
> +   to ensure that the region crossing attributes are updated.  */
> +
> +static void
> +fixup_new_cold_bb (basic_block bb)
> +{
> +  edge e;
> +  edge_iterator ei;
> +
> +  /* This is called when a hot bb is found to now be dominated
> +     by a cold bb and therefore needs to become cold. Therefore,
> +     its preds will no longer be region crossing. Any non-dominating
> +     preds that were previously hot would also have become cold
> +     in the caller for the same region. Any preds that were previously
> +     region-crossing will be adjusted in fixup_partition_crossing.  */
> +  FOR_EACH_EDGE (e, ei, bb->preds)
> +    {
> +      fixup_partition_crossing (e);
> +    }
> +
> +  /* Possibly need to make bb's successor edges region crossing,
> +     or remove stale region crossing.  */
> +  FOR_EACH_EDGE (e, ei, bb->succs)
> +    {
> +      /* We can't have fall-through edges across partition boundaries.
> +         Note that force_nonfallthru will do any necessary partition
> +         boundary fixup by calling fixup_partition_crossing itself.  */
> +      if ((e->flags & EDGE_FALLTHRU)
> +          && BB_PARTITION (bb) != BB_PARTITION (e->dest)
> +          && e->dest != EXIT_BLOCK_PTR)
> +        force_nonfallthru (e);
> +      else
> +        fixup_partition_crossing (e);
> +    }
> +}
> +
>  /* Attempt to change code to redirect edge E to TARGET.  Don't do that on
>     expense of adding new instructions or reordering basic blocks.
>
> @@ -1979,6 +2016,14 @@ commit_edge_insertions (void)
>  {
>    basic_block bb;
>
> +  /* Optimization passes that invoke this routine can cause hot blocks
> +     previously reached by both hot and cold blocks to become dominated only
> +     by cold blocks. This will cause the verification below to fail,
> +     and lead to now cold code in the hot section. In some cases this
> +     may only be visible after newly unreachable blocks are deleted,
> +     which will be done by fixup_partitions.  */
> +  fixup_partitions ();
> +
>  #ifdef ENABLE_CHECKING
>    verify_flow_info ();
>  #endif
> @@ -2173,6 +2218,101 @@ get_last_bb_insn (basic_block bb)
>    return end;
>  }
>
> +/* Sanity check partition hotness to ensure that basic blocks in
> +   the cold partition don't dominate basic blocks in the hot partition.
> +   If FLAG_ONLY is true, report violations as errors. Otherwise
> +   re-mark the dominated blocks as cold, since this is run after
> +   cfg optimizations that may make hot blocks previously reached
> +   by both hot and cold blocks now only reachable along cold paths.  */
> +
> +static vec<basic_block>
> +find_partition_fixes (bool flag_only)
> +{
> +  basic_block bb;
> +  vec<basic_block> bbs_in_cold_partition = vNULL;
> +  vec<basic_block> bbs_to_fix = vNULL;
> +
> +  /* Callers check this.  */
> +  gcc_checking_assert (crtl->has_bb_partition);
> +
> +  FOR_EACH_BB (bb)
> +    if ((BB_PARTITION (bb) == BB_COLD_PARTITION))
> +      bbs_in_cold_partition.safe_push (bb);
> +
> +  if (bbs_in_cold_partition.is_empty ())
> +    return vNULL;
> +
> +  bool dom_calculated_here = !dom_info_available_p (CDI_DOMINATORS);
> +
> +  if (dom_calculated_here)
> +    calculate_dominance_info (CDI_DOMINATORS);
> +
> +  while (! bbs_in_cold_partition.is_empty ())
> +    {
> +      bb = bbs_in_cold_partition.pop ();
> +      /* Any blocks dominated by a block in the cold section
> +         must also be cold.  */
> +      basic_block son;
> +      for (son = first_dom_son (CDI_DOMINATORS, bb);
> +           son;
> +           son = next_dom_son (CDI_DOMINATORS, son))
> +        {
> +          /* If son is not yet cold, then mark it cold here and
> +             enqueue it for further processing.  */
> +          if ((BB_PARTITION (son) != BB_COLD_PARTITION))
> +            {
> +              if (flag_only)
> +                error ("non-cold basic block %d dominated "
> +                       "by a block in the cold partition (%d)",
> +                       son->index, bb->index);
> +              else
> +                BB_SET_PARTITION (son, BB_COLD_PARTITION);
> +              bbs_to_fix.safe_push (son);
> +              bbs_in_cold_partition.safe_push (son);
> +            }
> +        }
> +    }
> +
> +  if (dom_calculated_here)
> +    free_dominance_info (CDI_DOMINATORS);
> +
> +  return bbs_to_fix;
> +}
> +
> +/* Perform cleanup on the hot/cold bb partitioning after optimization
> +   passes that modify the cfg.  */
> +
> +void
> +fixup_partitions (void)
> +{
> +  basic_block bb;
> +
> +  if (!crtl->has_bb_partition)
> +    return;
> +
> +  /* Delete any blocks that became unreachable and weren't
> +     already cleaned up, for example during edge forwarding
> +     and convert_jumps_to_returns. This will expose more
> +     opportunities for fixing the partition boundaries here.
> +     Also, the calculation of the dominance graph during verification
> +     will assert if there are unreachable nodes.  */
> +  delete_unreachable_blocks ();
> +
> +  /* If there are partitions, do a sanity check on them: A basic block in
> +     a cold partition cannot dominate a basic block in a hot partition.
> +     Fixup any that now violate this requirement, as a result of edge
> +     forwarding and unreachable block deletion.  */
> +  vec<basic_block> bbs_to_fix = find_partition_fixes (false);
> +
> +  /* Do the partition fixup after all necessary blocks have been converted to
> +     cold, so that we only update the region crossings the minimum number of
> +     places, which can require forcing edges to be non fallthru.  */
> +  while (! bbs_to_fix.is_empty ())
> +    {
> +      bb = bbs_to_fix.pop ();
> +      fixup_new_cold_bb (bb);
> +    }
> +}
> +
>  /* Verify, in the basic block chain, that there is at most one switch
>     between hot/cold partitions. This condition will not be true until
>     after reorder_basic_blocks is called.  */
> @@ -2219,7 +2359,8 @@ verify_hot_cold_block_grouping (void)
>  /* Perform several checks on the edges out of each block, such as
>     the consistency of the branch probabilities, the correctness
>     of hot/cold partition crossing edges, and the number of expected
> -   successor edges.  */
> +   successor edges.  Also verify that the dominance relationship
> +   between hot/cold blocks is sane.  */
>
>  static int
>  rtl_verify_edges (void)
> @@ -2382,6 +2523,14 @@ rtl_verify_edges (void)
>         }
>      }
>
> +  /* If there are partitions, do a sanity check on them: A basic block in
> +     a cold partition cannot dominate a basic block in a hot partition.  */
> +  if (crtl->has_bb_partition && !err)
> +    {
> +      vec<basic_block> bbs_to_fix = find_partition_fixes (true);
> +      err = !bbs_to_fix.is_empty ();
> +    }
> +
>    /* Clean up.  */
>    return err;
>  }
> @@ -2515,7 +2664,7 @@ rtl_verify_bb_pointers (void)
>       and NOTE_INSN_BASIC_BLOCK
>     - verify that no fall_thru edge crosses hot/cold partition boundaries
>     - verify that there are no pending RTL branch predictions
> -   - verify that there is a single hot/cold partition boundary after bbro
> +   - verify that hot blocks are not dominated by cold blocks
>
>     In future it can be extended check a lot of other stuff as well
>     (reachability of basic blocks, life information, etc. etc.).  */
> @@ -2761,7 +2910,8 @@ rtl_verify_bb_layout (void)
>     - check that all insns are in the basic blocks
>       (except the switch handling code, barriers and notes)
>     - check that all returns are followed by barriers
> -   - check that all fallthru edge points to the adjacent blocks.  */
> +   - check that all fallthru edge points to the adjacent blocks
> +   - verify that there is a single hot/cold partition boundary after bbro  */
>
>  static int
>  rtl_verify_flow_info (void)
> Index: basic-block.h
> ===================================================================
> --- basic-block.h       (revision 201461)
> +++ basic-block.h       (working copy)
> @@ -797,6 +797,7 @@ extern bool contains_no_active_insn_p (const_basic
>  extern bool forwarder_block_p (const_basic_block);
>  extern bool can_fallthru (basic_block, basic_block);
>  extern void emit_barrier_after_bb (basic_block bb);
> +extern void fixup_partitions (void);
>
>  /* In cfgbuild.c.  */
>  extern void find_many_sub_basic_blocks (sbitmap);
> Index: cfgcleanup.c
> ===================================================================
> --- cfgcleanup.c        (revision 201461)
> +++ cfgcleanup.c        (working copy)
> @@ -2807,10 +2807,21 @@ try_optimize_cfg (int mode)
>               df_analyze ();
>             }
>
> +         if (changed)
> +            {
> +              /* Edge forwarding in particular can cause hot blocks previously
> +                 reached by both hot and cold blocks to become dominated only
> +                 by cold blocks. This will cause the verification below to
> +                 fail, and lead to now cold code in the hot section. This is not easy
> +                 to detect and fix during edge forwarding, and in some cases
> +                 is only visible after newly unreachable blocks are deleted,
> +                 which will be done in fixup_partitions.  */
> +              fixup_partitions ();
> +
>  #ifdef ENABLE_CHECKING
> -         if (changed)
> -           verify_flow_info ();
> +              verify_flow_info ();
>  #endif
> +            }
>
>           changed_overall |= changed;
>           first_pass = false;
> Index: bb-reorder.c
> ===================================================================
> --- bb-reorder.c        (revision 201461)
> +++ bb-reorder.c        (working copy)
> @@ -1444,27 +1444,134 @@ fix_up_crossing_landing_pad (eh_landing_pad old_lp
>        ei_next (&ei);
>  }
>
> +
> +/* Ensure that all hot bbs are included in a hot path through the
> +   procedure. This is done by calling this function twice, once
> +   with WALK_UP true (to look for paths from the entry to hot bbs) and
> +   once with WALK_UP false (to look for paths from hot bbs to the exit).
> +   Returns the updated value of COLD_BB_COUNT and adds newly-hot bbs
> +   to BBS_IN_HOT_PARTITION.  */
> +
> +static unsigned int
> +sanitize_hot_paths (bool walk_up, unsigned int cold_bb_count,
> +                    vec<basic_block> *bbs_in_hot_partition)
> +{
> +  /* Callers check this.  */
> +  gcc_checking_assert (cold_bb_count);
> +
> +  /* Keep examining hot bbs while we still have some left to check
> +     and there are remaining cold bbs.  */
> +  vec<basic_block> hot_bbs_to_check = bbs_in_hot_partition->copy ();
> +  while (! hot_bbs_to_check.is_empty ()
> +         && cold_bb_count)
> +    {
> +      basic_block bb = hot_bbs_to_check.pop ();
> +      vec<edge, va_gc> *edges = walk_up ? bb->preds : bb->succs;
> +      edge e;
> +      edge_iterator ei;
> +      int highest_probability = 0;
> +      bool found = false;
> +
> +      /* Walk the preds/succs and check if there is at least one already
> +         marked hot. Keep track of the most frequent pred/succ so that we
> +         can mark it hot if we don't find one.  */
> +      FOR_EACH_EDGE (e, ei, edges)
> +        {
> +          basic_block reach_bb = walk_up ? e->src : e->dest;
> +
> +          if (e->flags & EDGE_DFS_BACK)
> +            continue;
> +
> +          if (BB_PARTITION (reach_bb) != BB_COLD_PARTITION)
> +          {
> +            found = true;
> +            break;
> +          }
> +          if (e->probability > highest_probability)
> +            highest_probability = e->probability;
> +        }
> +
> +      /* If bb is reached by (or reaches, in the case of !WALK_UP) another hot
> +         block (or unpartitioned, e.g. the entry block) then it is ok. If not,
> +         then the most frequent pred (or succ) needs to be adjusted.  In the
> +         case where multiple preds/succs have the same probability (e.g. a
> +         50-50 branch), then both will be adjusted.  */
> +      if (found)
> +        continue;
> +
> +      FOR_EACH_EDGE (e, ei, edges)
> +        {
> +          if (e->flags & EDGE_DFS_BACK)
> +            continue;
> +          if (e->probability < highest_probability)
> +            continue;
> +
> +          basic_block reach_bb = walk_up ? e->src : e->dest;
> +
> +          /* We have a hot bb with an immediate dominator that is cold.
> +             The dominator needs to be re-marked hot.  */
> +          BB_SET_PARTITION (reach_bb, BB_HOT_PARTITION);
> +          cold_bb_count--;
> +
> +          /* Now we need to examine newly-hot reach_bb to see if it is also
> +             dominated by a cold bb.  */
> +          bbs_in_hot_partition->safe_push (reach_bb);
> +          hot_bbs_to_check.safe_push (reach_bb);
> +        }
> +    }
> +
> +  return cold_bb_count;
> +}
> +
> +
>  /* Find the basic blocks that are rarely executed and need to be moved to
>     a separate section of the .o file (to cut down on paging and improve
>     cache locality).  Return a vector of all edges that cross.  */
>
> -static vec<edge>
> +static vec<edge>
>  find_rarely_executed_basic_blocks_and_crossing_edges (void)
>  {
>    vec<edge> crossing_edges = vNULL;
>    basic_block bb;
>    edge e;
>    edge_iterator ei;
> +  unsigned int cold_bb_count = 0;
> +  vec<basic_block> bbs_in_hot_partition = vNULL;
>
>    /* Mark which partition (hot/cold) each basic block belongs in.  */
>    FOR_EACH_BB (bb)
>      {
>        if (probably_never_executed_bb_p (cfun, bb))
> -       BB_SET_PARTITION (bb, BB_COLD_PARTITION);
> +        {
> +          BB_SET_PARTITION (bb, BB_COLD_PARTITION);
> +          cold_bb_count++;
> +        }
>        else
> -       BB_SET_PARTITION (bb, BB_HOT_PARTITION);
> +        {
> +          BB_SET_PARTITION (bb, BB_HOT_PARTITION);
> +          bbs_in_hot_partition.safe_push (bb);
> +        }
>      }
>
> +  /* Ensure that hot bbs are included along a hot path from the entry to exit.
> +     Several different possibilities may include cold bbs along all paths
> +     to/from a hot bb. One is that there are edge weight insanities
> +     due to optimization phases that do not properly update basic block profile
> +     counts. The second is that the entry of the function may not be hot,
> +     because it is entered fewer times than the number of profile training
> +     runs, but there
> +     is a loop inside the function that causes blocks within the function to be
> +     above the threshold for hotness. This is fixed by walking up from hot bbs
> +     to the entry block, and then down from hot bbs to the exit, performing
> +     partitioning fixups as necessary.  */
> +  if (cold_bb_count)
> +    {
> +      mark_dfs_back_edges ();
> +      cold_bb_count = sanitize_hot_paths (true, cold_bb_count,
> +                                          &bbs_in_hot_partition);
> +      if (cold_bb_count)
> +        sanitize_hot_paths (false, cold_bb_count, &bbs_in_hot_partition);
> +    }
> +
>    /* The format of .gcc_except_table does not allow landing pads to
>       be in a different partition as the throw.  Fix this by either
>       moving or duplicating the landing pads.  */
>
>
> --
> Teresa Johnson | Software Engineer | tejohnson@google.com | 408-460-2413
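[Editor's aside: as a reading aid for the quoted sanitize_hot_paths above, a hedged Python sketch of the walk-up direction only — a toy model of the algorithm, not GCC code. A hot block with no hot predecessor gets its most frequent predecessor(s) re-marked hot, and each newly-hot block is then re-examined in turn:]

```python
ENTRY = 0  # the entry block counts as hot/unpartitioned

def sanitize_hot_paths_up(hot, preds):
    """Toy model of sanitize_hot_paths with walk_up == true.

    hot   -- set of blocks in the hot partition (mutated in place)
    preds -- block -> list of (pred_block, edge_probability),
             with back edges already filtered out
    Ensures every hot block lies on a hot path from the entry.
    """
    worklist = [bb for bb in hot if bb != ENTRY]
    while worklist:
        bb = worklist.pop()
        edges = preds.get(bb, [])
        if any(src in hot or src == ENTRY for src, _ in edges):
            continue  # already reached along a hot path
        best = max((p for _, p in edges), default=0)
        for src, p in edges:
            if p == best:  # on a tie, adjust every most-frequent pred
                hot.add(src)
                worklist.append(src)  # newly-hot pred must be checked too
    return hot
```

[With preds 3←{1 (p=9000), 2 (p=1000)}, 1←{0}, 2←{0} and only block 3 hot, the walk re-marks block 1 hot and leaves block 2 cold.]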

[-- Attachment #2: ld.script --]
[-- Type: application/octet-stream, Size: 8644 bytes --]

/* Script for -z combreloc: combine and sort reloc sections */
OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64",
	      "elf64-x86-64")
OUTPUT_ARCH(i386:x86-64)
ENTRY(_start)
SEARCH_DIR("/home/marxin/binutils-bin/x86_64-unknown-linux-gnu/lib64"); SEARCH_DIR("/home/marxin/binutils-bin/lib64"); SEARCH_DIR("/usr/local/lib64"); SEARCH_DIR("/lib64"); SEARCH_DIR("/usr/lib64"); SEARCH_DIR("/home/marxin/binutils-bin/x86_64-unknown-linux-gnu/lib"); SEARCH_DIR("/home/marxin/binutils-bin/lib"); SEARCH_DIR("/usr/local/lib"); SEARCH_DIR("/lib"); SEARCH_DIR("/usr/lib");
SECTIONS
{
  /* Read-only sections, merged into text segment: */
  PROVIDE (__executable_start = SEGMENT_START("text-segment", 0x400000)); . = SEGMENT_START("text-segment", 0x400000) + SIZEOF_HEADERS;
  .interp         : { *(.interp) }
  .note.gnu.build-id : { *(.note.gnu.build-id) }
  .hash           : { *(.hash) }
  .gnu.hash       : { *(.gnu.hash) }
  .dynsym         : { *(.dynsym) }
  .dynstr         : { *(.dynstr) }
  .gnu.version    : { *(.gnu.version) }
  .gnu.version_d  : { *(.gnu.version_d) }
  .gnu.version_r  : { *(.gnu.version_r) }
  .rela.dyn       :
    {
      *(.rela.init)
      *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*)
      *(.rela.fini)
      *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*)
      *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*)
      *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*)
      *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*)
      *(.rela.ctors)
      *(.rela.dtors)
      *(.rela.got)
      *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*)
      *(.rela.ldata .rela.ldata.* .rela.gnu.linkonce.l.*)
      *(.rela.lbss .rela.lbss.* .rela.gnu.linkonce.lb.*)
      *(.rela.lrodata .rela.lrodata.* .rela.gnu.linkonce.lr.*)
      *(.rela.ifunc)
    }
  .rela.plt       :
    {
      *(.rela.plt)
      PROVIDE_HIDDEN (__rela_iplt_start = .);
      *(.rela.iplt)
      PROVIDE_HIDDEN (__rela_iplt_end = .);
    }
  .init           :
  {
    KEEP (*(SORT_NONE(.init)))
  }
  .plt            : { *(.plt) *(.iplt) }
  .text.unlikely  : { *(.text.unlikely .text.*_unlikely .text.unlikely.*) }
  .text.exit      : { *(.text.exit .text.exit.*) }
  .text.startup   : { *(.text.startup .text.startup.*) }
  .text.hot       : { *(.text.hot .text.hot.*) }
  .text           :
  {
    *(.text .stub .text.* .gnu.linkonce.t.*)
    /* .gnu.warning sections are handled specially by elf32.em.  */
    *(.gnu.warning)
  }
  .fini           :
  {
    KEEP (*(SORT_NONE(.fini)))
  }
  PROVIDE (__etext = .);
  PROVIDE (_etext = .);
  PROVIDE (etext = .);
  .rodata         : { *(.rodata .rodata.* .gnu.linkonce.r.*) }
  .rodata1        : { *(.rodata1) }
  .eh_frame_hdr : { *(.eh_frame_hdr) }
  .eh_frame       : ONLY_IF_RO { KEEP (*(.eh_frame)) }
  .gcc_except_table   : ONLY_IF_RO { *(.gcc_except_table
  .gcc_except_table.*) }
  /* These sections are generated by the Sun/Oracle C++ compiler.  */
  .exception_ranges   : ONLY_IF_RO { *(.exception_ranges
  .exception_ranges*) }
  /* Adjust the address for the data segment.  We want to adjust up to
     the same address within the page on the next page up.  */
  . = ALIGN (CONSTANT (MAXPAGESIZE)) - ((CONSTANT (MAXPAGESIZE) - .) & (CONSTANT (MAXPAGESIZE) - 1)); . = DATA_SEGMENT_ALIGN (CONSTANT (MAXPAGESIZE), CONSTANT (COMMONPAGESIZE));
  /* Exception handling  */
  .eh_frame       : ONLY_IF_RW { KEEP (*(.eh_frame)) }
  .gcc_except_table   : ONLY_IF_RW { *(.gcc_except_table .gcc_except_table.*) }
  .exception_ranges   : ONLY_IF_RW { *(.exception_ranges .exception_ranges*) }
  /* Thread Local Storage sections  */
  .tdata	  : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
  .tbss		  : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) }
  .preinit_array     :
  {
    PROVIDE_HIDDEN (__preinit_array_start = .);
    KEEP (*(.preinit_array))
    PROVIDE_HIDDEN (__preinit_array_end = .);
  }
  .init_array     :
  {
    PROVIDE_HIDDEN (__init_array_start = .);
    KEEP (*(SORT_BY_INIT_PRIORITY(.init_array.*) SORT_BY_INIT_PRIORITY(.ctors.*)))
    KEEP (*(.init_array EXCLUDE_FILE (*crtbegin.o *crtbegin?.o *crtend.o *crtend?.o ) .ctors))
    PROVIDE_HIDDEN (__init_array_end = .);
  }
  .fini_array     :
  {
    PROVIDE_HIDDEN (__fini_array_start = .);
    KEEP (*(SORT_BY_INIT_PRIORITY(.fini_array.*) SORT_BY_INIT_PRIORITY(.dtors.*)))
    KEEP (*(.fini_array EXCLUDE_FILE (*crtbegin.o *crtbegin?.o *crtend.o *crtend?.o ) .dtors))
    PROVIDE_HIDDEN (__fini_array_end = .);
  }
  .ctors          :
  {
    /* gcc uses crtbegin.o to find the start of
       the constructors, so we make sure it is
       first.  Because this is a wildcard, it
       doesn't matter if the user does not
       actually link against crtbegin.o; the
       linker won't look for a file to match a
       wildcard.  The wildcard also means that it
       doesn't matter which directory crtbegin.o
       is in.  */
    KEEP (*crtbegin.o(.ctors))
    KEEP (*crtbegin?.o(.ctors))
    /* We don't want to include the .ctor section from
       the crtend.o file until after the sorted ctors.
       The .ctor section from the crtend file contains the
       end of ctors marker and it must be last */
    KEEP (*(EXCLUDE_FILE (*crtend.o *crtend?.o ) .ctors))
    KEEP (*(SORT(.ctors.*)))
    KEEP (*(.ctors))
  }
  .dtors          :
  {
    KEEP (*crtbegin.o(.dtors))
    KEEP (*crtbegin?.o(.dtors))
    KEEP (*(EXCLUDE_FILE (*crtend.o *crtend?.o ) .dtors))
    KEEP (*(SORT(.dtors.*)))
    KEEP (*(.dtors))
  }
  .jcr            : { KEEP (*(.jcr)) }
  .data.rel.ro : { *(.data.rel.ro.local* .gnu.linkonce.d.rel.ro.local.*) *(.data.rel.ro .data.rel.ro.* .gnu.linkonce.d.rel.ro.*) }
  .dynamic        : { *(.dynamic) }
  .got            : { *(.got) *(.igot) }
  . = DATA_SEGMENT_RELRO_END (SIZEOF (.got.plt) >= 24 ? 24 : 0, .);
  .got.plt        : { *(.got.plt)  *(.igot.plt) }
  .data           :
  {
    *(.data .data.* .gnu.linkonce.d.*)
    SORT(CONSTRUCTORS)
  }
  .data1          : { *(.data1) }
  _edata = .; PROVIDE (edata = .);
  . = .;
  __bss_start = .;
  .bss            :
  {
   *(.dynbss)
   *(.bss .bss.* .gnu.linkonce.b.*)
   *(COMMON)
   /* Align here to ensure that the .bss section occupies space up to
      _end.  Align after .bss to ensure correct alignment even if the
      .bss section disappears because there are no input sections.
      FIXME: Why do we need it? When there is no .bss section, we don't
      pad the .data section.  */
   . = ALIGN(. != 0 ? 64 / 8 : 1);
  }
  .lbss   :
  {
    *(.dynlbss)
    *(.lbss .lbss.* .gnu.linkonce.lb.*)
    *(LARGE_COMMON)
  }
  . = ALIGN(64 / 8);
  . = SEGMENT_START("ldata-segment", .);
  .lrodata   ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)) :
  {
    *(.lrodata .lrodata.* .gnu.linkonce.lr.*)
  }
  .ldata   ALIGN(CONSTANT (MAXPAGESIZE)) + (. & (CONSTANT (MAXPAGESIZE) - 1)) :
  {
    *(.ldata .ldata.* .gnu.linkonce.l.*)
    . = ALIGN(. != 0 ? 64 / 8 : 1);
  }
  . = ALIGN(64 / 8);
  _end = .; PROVIDE (end = .);
  . = DATA_SEGMENT_END (.);
  /* Stabs debugging sections.  */
  .stab          0 : { *(.stab) }
  .stabstr       0 : { *(.stabstr) }
  .stab.excl     0 : { *(.stab.excl) }
  .stab.exclstr  0 : { *(.stab.exclstr) }
  .stab.index    0 : { *(.stab.index) }
  .stab.indexstr 0 : { *(.stab.indexstr) }
  .comment       0 : { *(.comment) }
  /* DWARF debug sections.
     Symbols in the DWARF debugging sections are relative to the beginning
     of the section so we begin them at 0.  */
  /* DWARF 1 */
  .debug          0 : { *(.debug) }
  .line           0 : { *(.line) }
  /* GNU DWARF 1 extensions */
  .debug_srcinfo  0 : { *(.debug_srcinfo) }
  .debug_sfnames  0 : { *(.debug_sfnames) }
  /* DWARF 1.1 and DWARF 2 */
  .debug_aranges  0 : { *(.debug_aranges) }
  .debug_pubnames 0 : { *(.debug_pubnames) }
  /* DWARF 2 */
  .debug_info     0 : { *(.debug_info .gnu.linkonce.wi.*) }
  .debug_abbrev   0 : { *(.debug_abbrev) }
  .debug_line     0 : { *(.debug_line .debug_line.* .debug_line_end ) }
  .debug_frame    0 : { *(.debug_frame) }
  .debug_str      0 : { *(.debug_str) }
  .debug_loc      0 : { *(.debug_loc) }
  .debug_macinfo  0 : { *(.debug_macinfo) }
  /* SGI/MIPS DWARF 2 extensions */
  .debug_weaknames 0 : { *(.debug_weaknames) }
  .debug_funcnames 0 : { *(.debug_funcnames) }
  .debug_typenames 0 : { *(.debug_typenames) }
  .debug_varnames  0 : { *(.debug_varnames) }
  /* DWARF 3 */
  .debug_pubtypes 0 : { *(.debug_pubtypes) }
  .debug_ranges   0 : { *(.debug_ranges) }
  /* DWARF Extension.  */
  .debug_macro    0 : { *(.debug_macro) }
  .gnu.attributes 0 : { KEEP (*(.gnu.attributes)) }
  /DISCARD/ : { *(.note.GNU-stack) *(.gnu_debuglink) *(.gnu.lto_*) }
}


Thread overview: 62+ messages
2013-08-01 16:32 Teresa Johnson
2013-08-02 11:22 ` Bernhard Reutner-Fischer
2013-08-02 14:51   ` Teresa Johnson
2013-08-02 15:05     ` Jan Hubicka
2013-08-02 23:05       ` Steven Bosscher
2013-08-03  4:53         ` Teresa Johnson
2013-08-03  4:48       ` Teresa Johnson
2013-08-05 13:36         ` Teresa Johnson
2013-08-05 14:11           ` Jan Hubicka
2013-08-05 14:57             ` Teresa Johnson
2013-08-06  3:01               ` Teresa Johnson
     [not found]           ` <20130808222332.GA31755@kam.mff.cuni.cz>
2013-08-08 23:04             ` Teresa Johnson
2013-08-09  9:58               ` Jan Hubicka
2013-08-09 14:38                 ` Teresa Johnson
2013-08-09 15:28                   ` Jan Hubicka
2013-08-09 15:54                     ` Martin Liška
2013-08-09 21:03                       ` Teresa Johnson
2013-08-09 21:02                     ` Teresa Johnson
2013-08-09 22:43                       ` Jan Hubicka
2013-08-11 12:21                       ` Jan Hubicka
2013-08-11 13:25                         ` Teresa Johnson
2013-08-11 15:55                           ` Martin Liška
2013-08-11 17:55                             ` Jan Hubicka
2013-08-11 21:05                           ` Jan Hubicka
2013-08-17 15:54                       ` Teresa Johnson
2013-08-17 21:02                         ` Jan Hubicka
2013-08-19 13:51                           ` Teresa Johnson
2013-08-19 15:16                             ` Jan Hubicka
2013-08-19 17:48                               ` Teresa Johnson
2013-08-19 19:56                                 ` Martin Liška [this message]
2013-08-27 18:12                                 ` Teresa Johnson
2013-08-28 16:59                                   ` Jan Hubicka
2013-08-28 18:35                                     ` Teresa Johnson
2013-08-30  7:17                                       ` Teresa Johnson
2013-08-30  9:16                                         ` Jan Hubicka
2013-08-30 15:13                                           ` Teresa Johnson
2013-08-30 15:28                                             ` Jan Hubicka
2013-08-30 15:54                                               ` Teresa Johnson
2013-08-30 21:56                                             ` Rong Xu
2013-08-31 16:20                                   ` Jan Hubicka
2013-08-31 23:40                                     ` Jan Hubicka
2013-09-24  8:07                                       ` Teresa Johnson
2013-09-24 13:44                                         ` Jan Hubicka
2013-09-24 19:06                                           ` Teresa Johnson
2013-09-26 20:55                                           ` Rong Xu
2013-09-26 22:23                                             ` Jan Hubicka
2013-09-26 22:54                                               ` Rong Xu
2013-09-24 18:28                                         ` Jan Hubicka
2013-09-24 18:51                                           ` Teresa Johnson
2013-09-25 23:10                                             ` Teresa Johnson
2013-09-26  8:44                                               ` Teresa Johnson
2013-09-26 22:26                                             ` Jan Hubicka
2013-09-27 14:50                                               ` Teresa Johnson
2013-09-29 17:34                                                 ` Teresa Johnson
2013-10-02 16:19                                                   ` Jan Hubicka
2013-10-02 17:55                                                     ` Teresa Johnson
2013-10-02 18:10                                                       ` Jan Hubicka
2013-10-03 13:42                                                     ` Teresa Johnson
2013-10-03 23:37                                                       ` Teresa Johnson
2013-10-01 17:36                                             ` Teresa Johnson
2013-08-19 15:34                           ` Teresa Johnson
2013-08-21 15:31                             ` Jan Hubicka
