public inbox for gdb-patches@sourceware.org
* [RFC] Monster testcase generator for performance testsuite
@ 2015-01-02 10:07 Doug Evans
  2015-01-05 13:32 ` Yao Qi
From: Doug Evans @ 2015-01-02 10:07 UTC
  To: gdb-patches

Hi.

This patch adds preliminary support for generating large programs.
"Large" as in 10000 compunits or 5000 shared libraries or 3M ELF symbols.

There's still a bit more I want to add to this, but it's at a point
where I can use it, and thus now's a good time to get some feedback.

One difference between these tests and current perf tests is that
one .exp is used to build the program and another .exp is used to
run the test.  These programs take a while to compile and link.
Generating the sources for these monster testcases takes hardly any time
at all relative to the amount of time to compile them.  I measured 13.5
minutes to compile the included gmonster1 benchmark (with -j6!), and about
an equivalent amount of time to run the benchmark.  Therefore it makes
sense to be able to use one program in multiple performance tests, and
therefore it makes sense to separate the build from the test run.

These tests currently require separate build-perf and check-perf steps,
which is different from normal perf tests.  However, due to the time
it takes to build the program I've added support for building the pieces
of the test in parallel, and hooking this parallel build support into
the existing framework required some pragmatic compromise.

Running the gmonster1-ptype benchmark requires about 8G to link the program,
and 11G to run it under gdb.  I still need to add the ability to
have a small version enabled by default, and turn on the bigger version
from the command line.  I don't expect everyone to have a big enough
machine to run the test configuration that I do.

I don't expect the gmonster1-ptype test to remain as is.
I'm still playing with it.

I wanted the generated files from the parallel build to appear in the
gdb.perf directory, so I enhanced GDB_PARALLEL support to let one specify
the location of the outputs/cache/temp directories.
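
For example, with GDB_PARALLEL=gdb.perf (the value the new Makefile rules
pass), the testsuite directories become:

    gdb.perf/outputs/   per-worker log/sum files and generated binaries
    gdb.perf/temp/      standard_temp_file output
    gdb.perf/cache/     gdb_do_cache output
    gdb.perf/pieces/    the machine-generated build .exp files

(Illustrative; the names come from the Makefile and lib/gdb.exp changes
below.  Previously these directories lived directly in $objdir.)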

In order to add parallel build support I needed a way to step
through the phases of the build:

1) To generate the build .exp files
   GDB_PERFTEST_MODE=gen-build-exps
   This step allows for parallel builds of the majority of pieces of the
   test binary and shlibs.
2) To compile the "pieces" of the binary and shlibs.
   "Pieces" are the bulk of the machine-generated sources of the test.
   This step is driven by lib/build-piece.exp.
   GDB_PERFTEST_MODE=build-pieces
3) To perform the final link of the binary.
   GDB_PERFTEST_MODE=compile
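
Concretely, each phase boils down to a runtest invocation roughly like the
following (paraphrased from the Makefile rules below; gmonster1 and worker
0 are illustrative):

bash$ runtest --directory=gdb.perf --outdir gdb.perf/pieces \
    GDB_PARALLEL=gdb.perf gmonster1.exp GDB_PERFTEST_MODE=gen-build-exps
bash$ runtest --outdir gdb.perf/outputs/gmonster1/gmonster1-0 \
    lib/build-piece.exp PIECE=gdb.perf/pieces/gmonster1/gmonster1-0.exp \
    WORKER=gmonster1/gmonster1-0 GDB_PARALLEL=gdb.perf \
    GDB_PERFTEST_MODE=build-pieces
bash$ runtest --directory=gdb.perf --outdir gdb.perf \
    GDB_PARALLEL=gdb.perf gmonster1.exp GDB_PERFTEST_MODE=compile

Normally make drives all of this, running one build-piece invocation per
generated worker .exp file (see the build-perf rule below).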

Going this route makes the "both" value of GDB_PERFTEST_MODE
(which means compile+run) a bit confusing.  I'm open to suggestions
for how one would want this done differently.  I'm used to the meaning
of "both" now so I don't mind this, but I think the thing to do is rename
"both".  Another possibility is using a different variable than
GDB_PERFTEST_MODE to step through the three phases of the parallel build.
Given the size of these programs and the time it takes to compile them,
I think having parallel build support up front is important.

Also, I still need to do some regression tests to make sure I haven't
broken anything. :-)

Example:

bash$ cd testsuite ; make site.exp
bash$ make -j6 build-perf RUNTESTFLAGS=gmonster1.exp
... wait awhile ...
bash$ make check-perf GDB_PERFTEST_MODE=run RUNTESTFLAGS=gmonster1-ptype.exp
... wait awhile ...


diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
index 07d3942..5a32d02 100644
--- a/gdb/testsuite/Makefile.in
+++ b/gdb/testsuite/Makefile.in
@@ -227,13 +227,30 @@ do-check-parallel: $(TEST_TARGETS)
 
 @GMAKE_TRUE@check/%.exp:
 @GMAKE_TRUE@	-mkdir -p outputs/$*
-@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=yes --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)
+@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=. --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)
 
 check/no-matching-tests-found:
 	@echo ""
 	@echo "No matching tests found."
 	@echo ""
 
+@GMAKE_TRUE@pieces/%.exp:
+@GMAKE_TRUE@	mkdir -p gdb.perf/outputs/$*
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --outdir=gdb.perf/outputs/$* lib/build-piece.exp PIECE=gdb.perf/pieces/$*.exp WORKER=$* GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=build-pieces
+
+# GDB_PERFTEST_MODE appears *after* RUNTESTFLAGS here because we don't want
+# anything in RUNTESTFLAGS to override it.
+@GMAKE_TRUE@build-perf: $(abs_builddir)/site.exp
+@GMAKE_TRUE@	rm -rf gdb.perf/pieces
+@GMAKE_TRUE@	rm -rf gdb.perf/cache gdb.perf/outputs gdb.perf/temp
+@GMAKE_TRUE@	mkdir -p gdb.perf/pieces
+@GMAKE_TRUE@	@: Step 1: Generate the build .exp files.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf/pieces GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=gen-build-exps
+@GMAKE_TRUE@	@: Step 2: Compile the pieces.
+@GMAKE_TRUE@	$(MAKE) $$(cd gdb.perf && echo pieces/*/*.exp)
+@GMAKE_TRUE@	@: Step 3: Do the final link.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=compile
+
 check-perf: all $(abs_builddir)/site.exp
 	@if test ! -d gdb.perf; then mkdir gdb.perf; fi
 	$(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf GDB_PERFTEST_MODE=both $(RUNTESTFLAGS)
diff --git a/gdb/testsuite/gdb.perf/gmonster1-ptype.exp b/gdb/testsuite/gdb.perf/gmonster1-ptype.exp
new file mode 100644
index 0000000..e4fde74
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-ptype.exp
@@ -0,0 +1,42 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype on a simple class.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::load_test_description gmonster1.exp
+
+# This variable is required by perftest.exp.
+# This isn't the name of the test program, it's the name of the test.
+# The harness assumes they are the same, which is not the case here.
+set testfile "gmonster1-ptype"
+
+array set testcase [make_testcase_config]
+
+PerfTest::assemble {
+    # Compilation is handled by gmonster1.exp.
+    return 0
+} {
+    clean_restart
+} {
+    global testcase
+    gdb_test "python Gmonster1Ptype('$testfile', [tcl_string_list_to_python_list $testcase(run_names)], '$testcase(binfile)').run()"
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster1-ptype.py b/gdb/testsuite/gdb.perf/gmonster1-ptype.py
new file mode 100644
index 0000000..041d7e4
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-ptype.py
@@ -0,0 +1,72 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype on a simple class.
+
+import gdb
+
+from perftest import perftest
+from perftest import measure
+
+class Gmonster1Ptype(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(Gmonster1Ptype, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    @staticmethod
+    def _safe_execute(command):
+        try:
+            gdb.execute(command)
+        except gdb.error:
+            pass
+
+    @staticmethod
+    def _convert_spaces(file_name):
+        return file_name.replace(" ", "-")
+
+    @staticmethod
+    def _select_file(file_name):
+        gdb.execute("file %s" % (file_name))
+
+    @staticmethod
+    def _runto_main():
+        gdb.execute("tbreak main")
+        gdb.execute("run")
+
+    def execute_test(self):
+        self._safe_execute("set confirm off")
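+        # Map each run name to a class defined in that run's last
+        # compunit; the generator names classes class_<cu_nr>_<n>.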
+        class_to_print = { "1-cu": "class_0_0",
+                           "10-cus": "class_9_0",
+                           "100-cus": "class_99_0",
+                           "1000-cus": "class_999_0",
+                           "10000-cus": "class_9999_0" }
+        for run in self.run_names:
+            class_name = "ns_0::ns_1::%s" % (class_to_print[run])
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          self._convert_spaces(run))
+            self._select_file(this_run_binfile)
+            self._runto_main()
+            self._safe_execute("mt expand-symtabs")
+            self._safe_execute("set $this = (%s*) 0" % (class_name))
+            self._safe_execute("break %s::method_0" % (class_name))
+            self._safe_execute("call $this->method_0()")
+            iteration = 5
+            while iteration > 0:
+                func = lambda: self._safe_execute("ptype %s" % (class_name))
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster1.exp b/gdb/testsuite/gdb.perf/gmonster1.exp
new file mode 100644
index 0000000..f1e6c2a
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1.exp
@@ -0,0 +1,84 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Perftest description file for building the "gmonster1" benchmark.
+#
+# Perftest descriptions are loaded thrice:
+# 1) To generate the build .exp files
+#    GDB_PERFTEST_MODE=gen-build-exps
+#    This step allows for parallel builds of the majority of pieces of the
+#    test binary and shlibs.
+# 2) To compile the "pieces" of the binary and shlibs.
+#    "Pieces" are the bulk of the machine-generated sources of the test.
+#    This step is driven by lib/build-piece.exp.
+#    GDB_PERFTEST_MODE=build-pieces
+# 3) To perform the final link of the binary and shlibs.
+#    GDB_PERFTEST_MODE=compile
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+proc make_testcase_config { } {
+    set program_name "gmonster1"
+    array set testcase [GenPerfTest::init_testcase $program_name]
+
+    set testcase(language) cplus
+    set testcase(run_names) { 1-cu 10-cus 100-cus 1000-cus 10000-cus }
+    set testcase(nr_shlibs) { 0 }
+    set testcase(nr_compunits) { 1 10 100 1000 10000 }
+    set testcase(nr_extern_functions) 10
+    set testcase(nr_static_functions) 10
+    # class_specs needs to be embedded in an outer list because each
+    # element of the outer list corresponds to one run, and here we want
+    # to use the same value for all runs.
+    set testcase(class_specs) { { { 0 10 } { 1 10 } { 2 10 } } }
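+    # I.e., each compunit defines ten classes at namespace depth 0,
+    # ten at depth 1, and ten at depth 2.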
+    set testcase(nr_members) 10
+    set testcase(nr_static_members) 10
+    set testcase(nr_methods) 10
+    set testcase(nr_static_methods) 10
+
+    return [array get testcase]
+}
+
+verbose -log "gmonster1: $GDB_PERFTEST_MODE"
+
+switch $GDB_PERFTEST_MODE {
+    gen-build-exps {
+	if { [GenPerfTest::gen_build_exp_files gmonster1.exp make_testcase_config] < 0 } {
+	    return -1
+	}
+    }
+    build-pieces {
+	;# Nothing to do.
+    }
+    compile {
+	array set testcase [make_testcase_config]
+	if { [GenPerfTest::compile testcase] < 0 } {
+	    return -1
+	}
+    }
+    run {
+	;# Nothing to do.
+    }
+    both {
+	;# Don't do anything here.  Tests that use us must have explicitly
+	;# separate compile/run steps.
+    }
+}
+
+return 0
diff --git a/gdb/testsuite/lib/build-piece.exp b/gdb/testsuite/lib/build-piece.exp
new file mode 100644
index 0000000..c48774c
--- /dev/null
+++ b/gdb/testsuite/lib/build-piece.exp
@@ -0,0 +1,36 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Utility to bootstrap building a piece of a performance test in a
+# parallel build.
+# See testsuite/Makefile.in:pieces/%.exp.
+
+# Dejagnu presents a kind of API to .exp files, but using this file to
+# bootstrap the parallel build process breaks that.  Before invoking $PIECE
+# set various globals to their expected values.  The tests may not use these
+# today, but if/when they do the error modes are confusing, so fix it now.
+
+# $subdir is set to "lib", because that is where this file lives,
+# which is not what tests expect.  The makefile sets WORKER for us.
+# Its value is <name>/<name>-<number>.
+set subdir [file dirname $WORKER]
+
+# $gdb_test_file_name is set to this file, build-piece, which is not what
+# tests expect.  This assumes each piece's build .exp file lives in
+# $objdir/gdb.perf/pieces/<name>.
+# See perftest.exp:GenPerfTest::gen_build_exp_files.
+set gdb_test_file_name [file tail [file dirname $PIECE]]
+
+source $PIECE
diff --git a/gdb/testsuite/lib/cache.exp b/gdb/testsuite/lib/cache.exp
index 2f4a34e..d33d1cb 100644
--- a/gdb/testsuite/lib/cache.exp
+++ b/gdb/testsuite/lib/cache.exp
@@ -35,7 +35,7 @@ proc gdb_do_cache {name} {
     }
 
     if {[info exists GDB_PARALLEL]} {
-	set cache_filename [file join $objdir cache $cache_name]
+	set cache_filename [file join $objdir $GDB_PARALLEL cache $cache_name]
 	if {[file exists $cache_filename]} {
 	    set fd [open $cache_filename]
 	    set gdb_data_cache($cache_name) [read -nonewline $fd]
diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index 08087f2..7f5dd81 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -3729,7 +3729,7 @@ proc standard_output_file {basename} {
     global objdir subdir gdb_test_file_name GDB_PARALLEL
 
     if {[info exists GDB_PARALLEL]} {
-	set dir [file join $objdir outputs $subdir $gdb_test_file_name]
+	set dir [file join $objdir $GDB_PARALLEL outputs $subdir $gdb_test_file_name]
 	file mkdir $dir
 	return [file join $dir $basename]
     } else {
@@ -3743,7 +3743,7 @@ proc standard_temp_file {basename} {
     global objdir GDB_PARALLEL
 
     if {[info exists GDB_PARALLEL]} {
-	return [file join $objdir temp $basename]
+	return [file join $objdir $GDB_PARALLEL temp $basename]
     } else {
 	return $basename
     }
@@ -4645,17 +4645,27 @@ proc build_executable { testname executable {sources ""} {options {debug}} } {
     return [eval build_executable_from_specs $arglist]
 }
 
-# Starts fresh GDB binary and loads EXECUTABLE into GDB. EXECUTABLE is
-# the basename of the binary.
-proc clean_restart { executable } {
+# Starts fresh GDB binary and loads an optional executable into GDB.
+# Usage: clean_restart [executable]
+# EXECUTABLE is the basename of the binary.
+
+proc clean_restart { args } {
     global srcdir
     global subdir
-    set binfile [standard_output_file ${executable}]
+
+    if { [llength $args] > 1 } {
+	error "bad number of args: [llength $args]"
+    }
 
     gdb_exit
     gdb_start
     gdb_reinitialize_dir $srcdir/$subdir
-    gdb_load ${binfile}
+
+    if { [llength $args] >= 1 } {
+	set executable [lindex $args 0]
+	set binfile [standard_output_file ${executable}]
+	gdb_load ${binfile}
+    }
 }
 
 # Prepares for testing by calling build_executable_full, then
@@ -4859,7 +4869,10 @@ if {[info exists GDB_PARALLEL]} {
     if {[is_remote host]} {
 	unset GDB_PARALLEL
     } else {
-	file mkdir outputs temp cache
+	file mkdir \
+	    [file join $GDB_PARALLEL outputs] \
+	    [file join $GDB_PARALLEL temp] \
+	    [file join $GDB_PARALLEL cache]
     }
 }
 
diff --git a/gdb/testsuite/lib/perftest.exp b/gdb/testsuite/lib/perftest.exp
index 6b1cab4..f9c9e11 100644
--- a/gdb/testsuite/lib/perftest.exp
+++ b/gdb/testsuite/lib/perftest.exp
@@ -12,6 +12,10 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+# Notes:
+# 1) This follows a Python convention for marking internal vs public functions.
+# Internal functions are prefixed with "_".
 
 namespace eval PerfTest {
     # The name of python file on build.
@@ -42,14 +46,7 @@ namespace eval PerfTest {
     # actual compilation.  Return zero if compilation is successful,
     # otherwise return non-zero.
     proc compile {body} {
-	global GDB_PERFTEST_MODE
-
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "run"] } {
-	    return [uplevel 2 $body]
-	}
-
-	return 0
+	return [uplevel 2 $body]
     }
 
     # Start up GDB.
@@ -82,14 +79,24 @@ namespace eval PerfTest {
     proc assemble {compile startup run} {
 	global GDB_PERFTEST_MODE
 
-	if { [eval compile {$compile}] } {
-	    untested "Could not compile source files."
+	if ![info exists GDB_PERFTEST_MODE] {
 	    return
 	}
 
+	if { "$GDB_PERFTEST_MODE" == "gen-build-exps"
+	     || "$GDB_PERFTEST_MODE" == "build-pieces" } {
+	    return
+	}
+
+	if { [string compare $GDB_PERFTEST_MODE "run"] } {
+	    if { [eval compile {$compile}] } {
+		untested "Could not compile source files."
+		return
+	    }
+	}
+
 	# Don't execute the run if GDB_PERFTEST_MODE=compile.
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
+	if { [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
 	    return
 	}
 
@@ -110,10 +117,11 @@ proc skip_perf_tests { } {
 
     if [info exists GDB_PERFTEST_MODE] {
 
-	if { "$GDB_PERFTEST_MODE" != "compile"
+	if { "$GDB_PERFTEST_MODE" != "gen-build-exps"
+	     && "$GDB_PERFTEST_MODE" != "build-pieces"
+	     && "$GDB_PERFTEST_MODE" != "compile"
 	     && "$GDB_PERFTEST_MODE" != "run"
 	     && "$GDB_PERFTEST_MODE" != "both" } {
-	    # GDB_PERFTEST_MODE=compile|run|both is allowed.
 	    error "Unknown value of GDB_PERFTEST_MODE."
 	    return 1
 	}
@@ -123,3 +131,771 @@ proc skip_perf_tests { } {
 
     return 1
 }
+
+# Given a list of tcl strings, return the same list as the text form of a
+# python list.
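+# E.g., the tcl list {1-cu 10-cus} yields the text ("1-cu", "10-cus").
+# N.B. A single-element list yields ("elem"), which Python parses as a
+# plain string rather than a 1-tuple.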
+
+proc tcl_string_list_to_python_list { l } {
+    proc quote { text } {
+	return "\"$text\""
+    }
+    set quoted_list ""
+    foreach elm $l {
+	lappend quoted_list [quote $elm]
+    }
+    return "([join $quoted_list {, }])"
+}
+
+# A simple testcase generator.
+#
+# Usage Notes:
+#
+# 1) The length of each parameter list must either be one, in which case the
+# same value is used for each run, or the length must match all other
+# parameters of length greater than one.
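+# E.g., in gmonster1.exp, run_names and nr_compunits each have five
+# values, one per run, while scalars like nr_members apply to every run.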
+#
+# 2) Values for parameters that vary across runs must appear in increasing
+# order.  E.g. nr_shlibs = { 0 1 10 } is good, { 1 0 10 } is bad.
+# This rule simplifies the code a bit, without being onerous on the user:
+#  a) Report generation doesn't have to sort the output by run, it'll already
+#  be sorted.
+#  b) In the static object file case, the last run can be used to generate
+#  all the source files.
+#
+# TODO:
+# 1) Lots.  E.g., having functions call each other within an objfile and across
+# objfiles to measure things like backtrace times.
+# 2) Lots.  E.g., inline methods.
+#
+# Implementation Notes:
+#
+# 1) The implementation would be a bit simpler if we could assume Tcl 8.5.
+# Then we could use a dictionary to record the testcase instead of an array.
+# With the array we use here, there is only one copy of it and instead of
+# passing its value we pass its name.  Yay Tcl.
+#
+# 2) Array members cannot (apparently) be referenced in the conditional
+# expression of a for loop (-> variable not found error).  That is why
+# they're all extracted before the for loop.
+
+namespace eval GenPerfTest {
+
+    # The default level of compilation parallelism we support.
+    set DEFAULT_PERF_TEST_COMPILE_PARALLELISM 10
+
+    # The language of the test.
+    set DEFAULT_LANGUAGE "c"
+
+    # The number of shared libraries to create.
+    set DEFAULT_NR_SHLIBS 0
+
+    # The number of compunits in each objfile.
+    set DEFAULT_NR_COMPUNITS 1
+
+    # The number of public globals in each compunit.
+    set DEFAULT_NR_EXTERN_GLOBALS 1
+
+    # The number of static globals in each compunit.
+    set DEFAULT_NR_STATIC_GLOBALS 1
+
+    # The number of public functions in each compunit.
+    set DEFAULT_NR_EXTERN_FUNCTIONS 1
+
+    # The number of static functions in each compunit.
+    set DEFAULT_NR_STATIC_FUNCTIONS 1
+
+    # List of pairs of class depth and number of classes at that depth.
+    # By "depth" here we mean nesting within a namespace.
+    # E.g.,
+    # class foo {};
+    # namespace n { class foo {}; class bar {}; }
+    # would be represented as { { 0 1 } { 1 2 } }.
+    # This is only used if the selected language permits it.
+    set DEFAULT_CLASS_SPECS {}
+
+    # Number of members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_MEMBERS 0
+
+    # Number of static members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_MEMBERS 0
+
+    # Number of methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_METHODS 0
+
+    # Number of static methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_METHODS 0
+
+    set suffixes(c) "c"
+    set suffixes(cplus) "cc"
+
+    # Helper function to generate .exp build files.
+
+    proc _gen_build_exp_files { program_name nr_workers output_dir code } {
+	verbose -log "_gen_build_exp_files: $nr_workers workers"
+	for { set i 0 } { $i < $nr_workers } { incr i } {
+	    set file_name "$output_dir/${program_name}-${i}.exp"
+	    verbose -log "_gen_build_exp_files: Generating $file_name"
+	    set f [open $file_name "w"]
+	    puts $f "# DO NOT EDIT, machine generated file."
+	    puts $f "# See perftest.exp:GenPerfTest::gen_build_exp_files."
+	    puts $f ""
+	    puts $f "set worker_nr $i"
+	    puts $f ""
+	    puts $f "# The body of the file is supplied by the test."
+	    puts $f ""
+	    puts $f $code
+	    close $f
+	}
+	return 0
+    }
+
+    # Generate .exp files to build all the "pieces" of the testcase.
+    # This doesn't include "main" or any test-specific stuff.
+    # This mostly consists of the "bulk" (aka "crap" :-)) of the testcase to
+    # give gdb something meaty to chew on.
+    # The result is 0 for success, -1 for failure.
+    #
+    # Benchmarks generated by some of the tests are big.  I mean really big.
+    # And it's a pain to build one piece at a time, we need a parallel build.
+    # To achieve this, given the framework we're working with, we generate
+    # several .exp files, and then let testsuite/Makefile.in's support for
+    # parallel runs of the testsuite do its thing.
+
+    proc gen_build_exp_files { test_description_exp make_config_thunk_name } {
+	global objdir PERF_TEST_COMPILE_PARALLELISM
+
+	if { [file tail $test_description_exp] != $test_description_exp } {
+	    error "test description file contains directory name"
+	}
+
+	set program_name [file rootname $test_description_exp]
+
+	set output_dir "$objdir/gdb.perf/pieces/$program_name"
+	file mkdir $output_dir
+
+	# N.B. The generation code below cannot reference anything that exists
+	# here, the code isn't run until later, in another process.  That is
+	# why we split up the assignment to $code.
+	# TODO(dje): Not the cleanest way, but simple enough for now.
+	set code {
+	    # This code is put in each copy of the generated .exp file.
+
+	    load_lib perftest.exp
+
+	    GenPerfTest::load_test_description}
+	append code " $test_description_exp"
+	append code {
+
+	    array set testcase [}
+	append code "$make_config_thunk_name"
+	append code {]
+
+	    if { [GenPerfTest::compile_pieces testcase $worker_nr] < 0 } {
+		return -1
+	    }
+
+	    return 0
+	}
+
+	return [_gen_build_exp_files $program_name $PERF_TEST_COMPILE_PARALLELISM $output_dir $code]
+    }
+
+    # Load a perftest description.
+    # Test descriptions are used to build the input files (binary + shlibs)
+    # of one or more performance tests.
+
+    proc load_test_description { basename } {
+	global srcdir
+
+	if { [file tail $basename] != $basename } {
+	    error "test description file contains directory name"
+	}
+
+	verbose -log "load_file $srcdir/gdb.perf/$basename"
+	if { [load_file $srcdir/gdb.perf/$basename] == 0 } {
+	    error "Unable to load test description $basename"
+	}
+    }
+
+    # Create a testcase object for test NAME.
+    # The caller must call this as:
+    # array set my_test [GenPerfTest::init_testcase $name]
+
+    proc init_testcase { name } {
+	set testcase(name) $name
+	set testcase(language) $GenPerfTest::DEFAULT_LANGUAGE
+	set testcase(run_names) [list $name]
+	set testcase(nr_shlibs) $GenPerfTest::DEFAULT_NR_SHLIBS
+	set testcase(nr_compunits) $GenPerfTest::DEFAULT_NR_COMPUNITS
+
+	set testcase(nr_extern_globals) $GenPerfTest::DEFAULT_NR_EXTERN_GLOBALS
+	set testcase(nr_static_globals) $GenPerfTest::DEFAULT_NR_STATIC_GLOBALS
+	set testcase(nr_extern_functions) $GenPerfTest::DEFAULT_NR_EXTERN_FUNCTIONS
+	set testcase(nr_static_functions) $GenPerfTest::DEFAULT_NR_STATIC_FUNCTIONS
+
+	set testcase(class_specs) $GenPerfTest::DEFAULT_CLASS_SPECS
+	set testcase(nr_members) $GenPerfTest::DEFAULT_NR_MEMBERS
+	set testcase(nr_static_members) $GenPerfTest::DEFAULT_NR_STATIC_MEMBERS
+	set testcase(nr_methods) $GenPerfTest::DEFAULT_NR_METHODS
+	set testcase(nr_static_methods) $GenPerfTest::DEFAULT_NR_STATIC_METHODS
+
+	# The location of this file drives the location of all other files.
+	# The choice is derived from standard_output_file.  We don't use it
+	# because of the parallel build support: each worker's log/sum files
+	# must go in different directories, but the files the workers
+	# generate must all go in the same place.
+	# N.B. The value here must be kept in sync with Makefile.in.
+	global objdir
+	set name_no_spaces [_convert_spaces $name]
+	set testcase(binfile) "$objdir/gdb.perf/outputs/$name_no_spaces/$name_no_spaces"
+
+	return [array get testcase]
+    }
+
+    proc _verify_parameter_lengths { self_var } {
+	upvar 1 $self_var self
+	set params {
+	    nr_shlibs nr_compunits
+	    nr_extern_globals nr_static_globals
+	    nr_extern_functions nr_static_functions
+	    class_specs
+	    nr_members nr_static_members
+	    nr_methods nr_static_methods
+	}
+	set nr_runs [llength $self(run_names)]
+	foreach p $params {
+	    set n [llength $self($p)]
+	    if { $n > 1 } {
+		if { $n != $nr_runs } {
+		    error "Bad number of values for parameter $p"
+		}
+		set values $self($p)
+		for { set i 0 } { $i < $n - 1 } { incr i } {
+		    if { [lindex $values $i] > [lindex $values [expr $i + 1]] } {
+			error "Values of parameter $p are not increasing"
+		    }
+		}
+	    }
+	}
+    }
+
+    # Verify the testcase is valid (as best we can, this isn't exhaustive).
+
+    proc _verify_testcase { self_var } {
+	upvar 1 $self_var self
+	_verify_parameter_lengths self
+    }
+
+    # Return the value of parameter PARAM for run RUN_NR.
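+    # E.g., [_get_param {1 10 100} 2] returns 100, while [_get_param 10 2]
+    # returns 10: a scalar is a one-element list, reused for every run.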
+
+    proc _get_param { param run_nr } {
+	if { [llength $param] == 1 } {
+	    # Since PARAM may be a list of lists we need to use lindex.  This
+	    # also works for scalars (scalars are degenerate lists).
+	    return [lindex $param 0]
+	}
+	return [lindex $param $run_nr]
+    }
+
+    # Return non-zero if all files (binaries + shlibs) can be compiled from
+    # one set of object files.  This is a simple optimization to speed up
+    # test build times.  This happens if the only variation among runs is
+    # nr_shlibs or nr_compunits.
+
+    proc _static_object_files_p { self_var } {
+	upvar 1 $self_var self
+	set object_file_params {
+	    nr_extern_globals nr_static_globals
+	    nr_extern_functions nr_static_functions
+	}
+	set static 1
+	foreach p $object_file_params {
+	    if { [llength $self($p)] > 1 } {
+		set static 0
+	    }
+	}
+	return $static
+    }
+
+    # Return non-zero if classes are enabled.
+
+    proc _classes_enabled_p { self_var run_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	foreach elm $class_specs {
+	    if { [llength $elm] != 2 } {
+		error "Bad class spec: $elm"
+	    }
+	    if { [lindex $elm 1] > 0 } {
+		return 1
+	    }
+	}
+	return 0
+    }
+
+    # Spaces in file names are a pain, remove them.
+    # They appear if the user puts spaces in the test name or run name.
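+    # E.g., "10 cus" becomes "10-cus".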
+
+    proc _convert_spaces { file_name } {
+	return [regsub -all " " $file_name "-"]
+    }
+
+    # Return the path to put source/object files in for run number RUN_NR.
+
+    proc _make_object_dir_name { self_var static run_nr } {
+	upvar 1 $self_var self
+	# Note: The output directory already includes the name of the test
+	# description file.
+	set bindir [file dirname $self(binfile)]
+	# Put the pieces in a subdirectory, there are a lot of them.
+	if $static {
+	    return "$bindir/pieces"
+	} else {
+	    set run_name [_convert_spaces [lindex $self(run_names) $run_nr]]
+	    return "$bindir/pieces/$run_name"
+	}
+    }
+
+    # CU_NR is either the compilation unit number or "main".
+    # RUN_NR is ignored if STATIC is non-zero.
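+    # E.g., for gmonster1, run "10-cus", cu_nr 3, and STATIC 0, the result
+    # is $objdir/gdb.perf/outputs/gmonster1/pieces/10-cus/10-cus-3.cc.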
+
+    proc _make_binary_source_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "${run_name}-${cu_nr}.$source_suffix"
+	} else {
+	    set source_name "$self(name)-${cu_nr}.$source_suffix"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $source_name]"
+    }
+
+    proc _make_binary_main_source_name { self_var static run_nr } {
+	upvar 1 $self_var self
+	return [_make_binary_source_name self $static $run_nr "main"]
+    }
+
+    # Generated object files get put in the same directory as their source.
+
+    proc _make_binary_object_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_name [_make_binary_source_name self $static $run_nr $cu_nr]
+	return [file rootname $source_name].o
+    }
+
+    proc _make_shlib_source_name { self_var static run_nr so_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "$self(name)-${run_name}-lib${so_nr}-${cu_nr}.$source_suffix"
+	} else {
+	    set source_name "$self(name)-lib${so_nr}-${cu_nr}.$source_suffix"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $source_name]"
+    }
+
+    # Return the list of source/object files for the binary.
+    # The source file for main() is returned, as well as the names of all the
+    # object file "pieces".
+    # If STATIC is non-zero the source files are unchanged for each run.
+
+    proc _make_binary_input_file_names { self_var static run_nr } {
+	upvar 1 $self_var self
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	set result [_make_binary_main_source_name self $static $run_nr]
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_make_binary_object_name self $static $run_nr $cu_nr]
+	}
+	return $result
+    }
+
+    proc _make_binary_name { self_var run_nr } {
+	upvar 1 $self_var self
+	set run_name [_get_param $self(run_names) $run_nr]
+	set exe_name "$self(binfile)-[_convert_spaces ${run_name}]"
+	return $exe_name
+    }
+
+    proc _make_shlib_name { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set lib_name "$self(name)-${run_name}-lib${so_nr}"
+	} else {
+	    set lib_name "$self(name)-lib${so_nr}"
+	}
+	set output_dir [file dirname $self(binfile)]
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $lib_name]"
+    }
+
+    proc _create_file { self_var path } {
+	upvar 1 $self_var self
+	verbose -log "Creating file: $path"
+	set f [open $path "w"]
+	return $f
+    }
+
+    proc _write_header { self_var f } {
+	upvar 1 $self_var self
+	puts $f "// DO NOT EDIT, machine generated file."
+	puts $f "// See perftest.exp:GenPerfTest."
+    }
+
+    proc _write_static_globals { self_var f run_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_globals [_get_param $self(nr_static_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Varying the number of
+	# bss/non-bss globals may also be useful; add that later, if warranted.
+	for { set i 0 } { $i < $nr_static_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "static ${const}int static_global_$i = $i;"
+	}
+    }
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_globals { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_extern_globals [_get_param $self(nr_extern_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Varying the number of
+	# bss/non-bss globals may also be useful; add that later, if warranted.
+	for { set i 0 } { $i < $nr_extern_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "${const}int ${id}global_${cu_nr}_$i = $cu_nr * 1000 + $i;"
+	}
+    }
+
+    proc _write_static_functions { self_var f run_nr } {
+	upvar 1 $self_var self
+	set nr_static_functions [_get_param $self(nr_static_functions) $run_nr]
+	for { set i 0 } { $i < $nr_static_functions } { incr i } {
+	    puts $f ""
+	    puts $f "static void"
+	    puts $f "static_function_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_functions { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	set nr_extern_functions [_get_param $self(nr_extern_functions) $run_nr]
+	for { set i 0 } { $i < $nr_extern_functions } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${id}function_${cu_nr}_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _write_classes { self_var f run_nr cu_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	set nr_members [_get_param $self(nr_members) $run_nr]
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	foreach spec $class_specs {
+	    set depth [lindex $spec 0]
+	    set nr_classes [lindex $spec 1]
+	    puts $f ""
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "namespace ns_${i}"
+		puts $f "\{"
+	    }
+	    for { set c 0 } { $c < $nr_classes } { incr c } {
+		set class_name "class_${cu_nr}_${c}"
+		puts $f "class $class_name"
+		puts $f "\{"
+		puts $f " public:"
+		for { set i 0 } { $i < $nr_members } { incr i } {
+		    puts $f "  int member_$i;"
+		}
+		for { set i 0 } { $i < $nr_static_members } { incr i } {
+		    # Rather than parameterize the number of const/non-const
+		    # members, and their types, we keep it simple for now.
+		    if { $i % 2 == 0 } {
+			puts $f "  static const int static_member_$i = $i;"
+		    } else {
+			puts $f "  static int static_member_$i;"
+		    }
+		}
+		for { set i 0 } { $i < $nr_methods } { incr i } {
+		    puts $f "  void method_$i (void);"
+		}
+		for { set i 0 } { $i < $nr_static_methods } { incr i } {
+		    puts $f "  static void static_method_$i (void);"
+		}
+		puts $f "\};"
+		_write_static_members self $f $run_nr $class_name
+		_write_methods self $f $run_nr $class_name
+		_write_static_methods self $f $run_nr $class_name
+	    }
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "\}"
+	    }
+	}
+    }
+
+    proc _write_static_members { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	# Rather than parameterize the number of const/non-const
+	# members, and their types, we keep it simple for now.
+	for { set i 0 } { $i < $nr_static_members } { incr i } {
+	    if { $i % 2 == 0 } {
+		# Static const members are initialized inline.
+	    } else {
+		puts $f "int ${class_name}::static_member_$i = $i;"
+	    }
+	}
+    }
+
+    proc _write_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	for { set i 0 } { $i < $nr_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _write_static_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	for { set i 0 } { $i < $nr_static_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::static_method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _gen_binary_compunit_source { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_file [_make_binary_source_name self $static $run_nr $cu_nr]
+	set f [_create_file self $source_file]
+	_write_header self $f
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    proc _gen_shlib_compunit_source { self_var static run_nr so_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_file [_make_shlib_source_name self $static $run_nr $so_nr $cu_nr]
+	set f [_create_file self $source_file]
+	_write_header self $f
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "shlib${so_nr}_" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "shlib${so_nr}_" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    proc _gen_shlib_source { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	set result ""
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_gen_shlib_compunit_source self $static $run_nr $so_nr $cu_nr]
+	}
+	return $result
+    }
+
+    proc _compile_binary_pieces { self_var worker_nr static run_nr } {
+	upvar 1 $self_var self
+	set object_dir [_make_object_dir_name self $static $run_nr]
+	file mkdir $object_dir
+	set compile_flags {debug}
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	for { set cu_nr $worker_nr } { $cu_nr < $nr_compunits } { incr cu_nr $nr_workers } {
+	    set source_file [_gen_binary_compunit_source self $static $run_nr $cu_nr]
+	    set object_file [_make_binary_object_name self $static $run_nr $cu_nr]
+	    if { [gdb_compile $source_file $object_file object $compile_flags] != "" } {
+		return -1
+	    }
+	}
+	return 0
+    }
+
+    # Helper function to compile the pieces of a shlib.
+    # Note: gdb_compile_shlib{,_pthreads} don't support first building object
+    # files and then building the shlib.  Therefore our hands are tied, and we
+    # just build the shlib in one step.  This is less of a parallelization
+    # problem if there are multiple shlibs: Each worker can build a different
+    # shlib.
+
+    proc _compile_shlib { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	set source_files [_gen_shlib_source self $static $run_nr $so_nr]
+	set shlib_file [_make_shlib_name self $static $run_nr $so_nr]
+	set compile_flags {debug}
+	if { [gdb_compile_shlib $source_files $shlib_file $compile_flags] != "" } {
+	    return -1
+	}
+	return 0
+    }
+
+    # Compile the pieces of the binary and possible shlibs for the test.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	file mkdir "[file dirname $self(binfile)]/pieces"
+	if $static {
+	    # All the pieces look the same (run over run) so just build all the
+	    # shlibs of the last run (which is the largest).
+	    set last_run [expr $nr_runs - 1]
+	    set nr_shlibs [_get_param $self(nr_shlibs) $last_run]
+	    for { set so_nr $worker_nr } { $so_nr < $nr_shlibs } { incr so_nr $nr_workers } {
+		if { [_compile_shlib self $static $last_run $so_nr] < 0 } {
+		    return -1
+		}
+	    }
+	    if { [_compile_binary_pieces self $worker_nr $static $last_run] < 0 } {
+		return -1
+	    }
+	} else {
+	    for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+		set nr_shlibs [_get_param $self(nr_shlibs) $run_nr]
+		for { set so_nr $worker_nr } { $so_nr < $nr_shlibs } { incr so_nr $nr_workers } {
+		    if { [_compile_shlib self $static $run_nr $so_nr] < 0 } {
+			return -1
+		    }
+		}
+		if { [_compile_binary_pieces self $worker_nr $static $run_nr] < 0 } {
+		    return -1
+		}
+	    }
+	}
+	return 0
+    }
+
+    proc compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile_pieces, started worker $worker_nr [timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile_pieces self $worker_nr] < 0 } {
+	    verbose -log "GenPerfTest::compile_pieces, worker $worker_nr failed [timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile_pieces, worker $worker_nr done [timestamp -format %c]"
+	return 0
+    }
+
+    proc _generate_main_source { self_var static run_nr } {
+	upvar 1 $self_var self
+	set main_source_file [_make_binary_main_source_name self $static $run_nr]
+	set f [_create_file self $main_source_file]
+	_write_header self $f
+	puts $f ""
+	puts $f "int"
+	puts $f "main (void)"
+	puts $f "{"
+	puts $f "  return 0;"
+	puts $f "}"
+	close $f
+    }
+
+    proc _make_shlib_flags { self_var static run_nr } {
+	upvar 1 $self_var self
+	set nr_shlibs [_get_param $self(nr_shlibs) $run_nr]
+	set result ""
+	for { set i 0 } { $i < $nr_shlibs } { incr i } {
+	    lappend result "shlib=[_make_shlib_name self $static $run_nr $i]"
+	}
+	return $result
+    }
+
+    proc _compile_binary { self_var static run_nr } {
+	upvar 1 $self_var self
+	set input_files [_make_binary_input_file_names self $static $run_nr]
+	set binary_file [_make_binary_name self $run_nr]
+	set compile_flags "debug [_make_shlib_flags self $static $run_nr]"
+	if { [gdb_compile $input_files $binary_file executable $compile_flags] != "" } {
+	    return -1
+	}
+	return 0
+    }
+
+    # Compile the binary for the test.
+    # This assumes the pieces of the binary (all the .o's, except for main())
+    # have already been built with compile_pieces.
+    # There's no need to compile any shlibs here, as compile_pieces will have
+    # already built them too.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile { self_var } {
+	upvar 1 $self_var self
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+	    _generate_main_source self $static $run_nr
+	    if { [_compile_binary self $static $run_nr] < 0 } {
+		return -1
+	    }
+	}
+	return 0
+    }
+
+    proc compile { self_var } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile, started [timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile self] < 0 } {
+	    verbose -log "GenPerfTest::compile, failed [timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile, done [timestamp -format %c]"
+	return 0
+    }
+}
+
+if ![info exists PERF_TEST_COMPILE_PARALLELISM] {
+    set PERF_TEST_COMPILE_PARALLELISM $GenPerfTest::DEFAULT_PERF_TEST_COMPILE_PARALLELISM
+}


* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-02 10:07 [RFC] Monster testcase generator for performance testsuite Doug Evans
@ 2015-01-05 13:32 ` Yao Qi
  2015-01-06  0:54   ` Doug Evans
From: Yao Qi @ 2015-01-05 13:32 UTC
  To: Doug Evans; +Cc: gdb-patches

Doug Evans <xdje42@gmail.com> writes:

Doug,
First of all, it is great to have such a generator for performance
testing, but it doesn't have to be a monster, and we don't need a parallel
build so far.  The parallel build will make the generator
over-complicated.  See more below:

> This patch adds preliminary support for generating large programs.
> "Large" as in 10000 compunits or 5000 shared libraries or 3M ELF symbols.
>

Is there any reason we define the workload like this?  Can they
represent typical, practical super-large programs?  I feel that the
workload you defined is too heavy to be practical, and that excess weight
causes the long compilation time you mentioned below.

> There's still a bit more I want to add to this, but it's at a point
> where I can use it, and thus now's a good time to get some feedback.
>
> One difference between these tests and current perf tests is that
> one .exp is used to build the program and another .exp is used to
> run the test.  These programs take a while to compile and link.
> Generating the sources for these monster testcases takes hardly any time
> at all relative to the amount of time to compile them.  I measured 13.5
> minutes to compile the included gmonster1 benchmark (with -j6!), and about
> an equivalent amount of time to run the benchmark.  Therefore it makes
> sense to be able to use one program in multiple performance tests, and
> therefore it makes sense to separate the build from the test run.

Compilation and running each take about 10 minutes.  However, I don't
understand the importance of making tests run for 10 minutes, which is
too long for a perf test case.  IMO, a two-minute run should be
representative enough...

>
> These tests currently require separate build-perf and check-perf steps,
> which is different from normal perf tests.  However, due to the time
> it takes to build the program I've added support for building the pieces
> of the test in parallel, and hooking this parallel build support into
> the existing framework required some pragmatic compromise.

... so the parallel build part may not be needed.

> Running the gmonster1-ptype benchmark requires about 8G to link the program,
> and 11G to run it under gdb.  I still need to add the ability to
> have a small version enabled by default, and turn on the bigger version
> from the command line.  I don't expect everyone to have a big enough
> machine to run the test configuration that I do.

It looks like a monster rather than a perf test case :)  It would be good
to have a small version enabled by default, one that requires less than
1G, for example, to run under GDB.  How much time does it take to compile
(sequential build) and run the small version?

-- 
Yao (齐尧)


* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-05 13:32 ` Yao Qi
@ 2015-01-06  0:54   ` Doug Evans
  2015-01-07  9:39     ` Yao Qi
From: Doug Evans @ 2015-01-06  0:54 UTC
  To: Yao Qi; +Cc: gdb-patches

On Mon, Jan 5, 2015 at 5:32 AM, Yao Qi <yao@codesourcery.com> wrote:
> Doug Evans <xdje42@gmail.com> writes:
>
> Doug,
> First of all, it is great to have such a generator for performance
> testing, but it doesn't have to be a monster, and we don't need a parallel
> build so far.  The parallel build will make the generator
> over-complicated.  See more below.
>
>> This patch adds preliminary support for generating large programs.
>> "Large" as in 10000 compunits or 5000 shared libraries or 3M ELF symbols.
>>
>
> Is there any reason we define the workload like this?  Can they
> represent typical, practical super-large programs?  I feel that the
> workload you defined is too heavy to be practical, and that excess weight
> causes the long compilation time you mentioned below.

Those are just loose (i.e., informal) characterizations of real programs
my users run gdb on.
And that's an incomplete list btw.
So, yes, they do represent practical super large programs.
The programs these benchmarks will be based on are as real as it gets.
As for whether they're typical ... depends on what you're used to I guess. :-)

>> There's still a bit more I want to add to this, but it's at a point
>> where I can use it, and thus now's a good time to get some feedback.
>>
>> One difference between these tests and current perf tests is that
>> one .exp is used to build the program and another .exp is used to
>> run the test.  These programs take a while to compile and link.
>> Generating the sources for these monster testcases takes hardly any time
>> at all relative to the amount of time to compile them.  I measured 13.5
>> minutes to compile the included gmonster1 benchmark (with -j6!), and about
>> an equivalent amount of time to run the benchmark.  Therefore it makes
>> sense to be able to use one program in multiple performance tests, and
>> therefore it makes sense to separate the build from the test run.
>
> Compilation and running each take about 10 minutes.  However, I don't
> understand the importance of making tests run for 10 minutes, which is
> too long for a perf test case.  IMO, a two-minute run should be
> representative enough...

Depends.
I'm not suggesting compile/run time is the defining characteristic
that makes them useful. gmonster1 (and others) are intended to be
representative of real programs (gmonster1 isn't there yet, but it's
not because it's too big ..., I still have to tweak the kind of bigness
it has, as well as add more specially crafted code to exercise real issues).
Its compile time is what it is. The program is that big.
As for test run time, that depends on the test.
At the moment it's still early, and I'm still writing tests and
calibrating them.

As for general importance,

If a change to gdb increases the time it takes to run a particular command
by one second is that ok? Maybe. And if my users see the increase
become ten seconds is that still ok? Also maybe, but I'd like to make the
case that it'd be preferable to have mechanisms in place to find out sooner
than later.

Similarly, if a change to gdb increases memory usage by 40MB is that ok?
Maybe. And if my users see that increase become 400MB is that still ok?
Possibly (depending on the nature of the change). But, again, one of my
goals here is to have in place mechanisms to find out sooner than later.

Note that, as I said, there's more I wish to add here.
For example, it's not enough to just machine generate a bunch of generic
code. We also need the ability to add specific cases that trip gdb up,
and thus I also plan to add the ability to add hand-written code to
these benchmarks.
Plus, my plan is to make gmonster1 contain a variety of such cases
and use it in multiple benchmarks. Otherwise we're compiling/linking
multiple programs and I *am* trying to cut down on build times here! :-)

>> These tests currently require separate build-perf and check-perf steps,
>> which is different from normal perf tests.  However, due to the time
>> it takes to build the program I've added support for building the pieces
>> of the test in parallel, and hooking this parallel build support into
>> the existing framework required some pragmatic compromise.
>
> ... so the parallel build part may not be needed.

I'm not sure what the hangup is on supporting parallel builds here.
Can you elaborate? It's really not that much code, and while I could
have done things differently, I'm just using mechanisms that are
already in place. The only real "complexity" is that the existing
mechanism is per-.exp-file based, so I needed one .exp file per worker.
I think we could simplify this with some cleverness, but this isn't
what I want to focus on right now. Any change will just be to the
infrastructure, not to the tests. If someone wants to propose a different
mechanism to achieve the parallelism go for it. OTOH, there is value
in using existing mechanisms. Another way to go (and I'm not suggesting
this is a better or worse way, it's just an example) would be to have
hand-written worker .exp files and check those in. I don't have a
strong opinion on that, machine generating them is easy enough and
gives me some flexibility (which is nice) in these early stages.

>> Running the gmonster1-ptype benchmark requires about 8G to link the program,
>> and 11G to run it under gdb.  I still need to add the ability to
>> have a small version enabled by default, and turn on the bigger version
>> from the command line.  I don't expect everyone to have a big enough
>> machine to run the test configuration that I do.
>
> It looks like a monster rather than a perf test case :)

Depends.  How long do your users still wait for gdb to do something?
My users are still waiting too long for several things (e.g., startup time).
And I want to be able to measure what my users see.
And I want to be able to provide upstream with demonstrations of that.

> It would be good to
> have a small version enabled by default, one that requires less than 1G,
> for example, to run under GDB.  How much time does it take to compile
> (sequential build) and run the small version?

There are mechanisms in place to control the amount of parallelism.
One could make it part of the test spec, but I'm not sure it'd be useful
enough.  Thus I think there's no need to compile small testcases
serially.
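
For example (hypothetical invocation; PERF_TEST_COMPILE_PARALLELISM is the
knob added in the patch, and like other testsuite globals it can be set
via RUNTESTFLAGS):

bash$ make -j2 build-perf \
    RUNTESTFLAGS='gmonster1.exp PERF_TEST_COMPILE_PARALLELISM=2'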

As for what upstream wants the "default" to be, I don't have
a strong opinion, beyond it being minimally useful.  If the default isn't
useful to me, it's easy enough to tweak the test with a local change
to cover what I need.

Note that I'm not expecting the default to be these
super long times, which I noted in my original email. OTOH, I do want
the harness to be able to usefully handle (as in not wait an hour for the
testcase to be built) the kind of large programs that I need to run the
tests on.  Thus my plan is to have a harness that can handle what
I need, but have defaults that don't impose that on everyone.
Given appropriate knobs it will be easy enough to have useful
defaults and still be able to run the tests with larger programs.
And then if my runs find a problem, it will be straightforward for
me to provide a demonstration of what I'm seeing (which is part
of what I want to accomplish here).

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-06  0:54   ` Doug Evans
@ 2015-01-07  9:39     ` Yao Qi
  2015-01-07 22:33       ` Doug Evans
  0 siblings, 1 reply; 8+ messages in thread
From: Yao Qi @ 2015-01-07  9:39 UTC (permalink / raw)
  To: Doug Evans; +Cc: gdb-patches

Doug Evans <dje@google.com> writes:

> If a change to gdb increases the time it takes to run a particular command
> by one second is that ok? Maybe. And if my users see the increase
> become ten seconds is that still ok? Also maybe, but I'd like to make the
> case that it'd be preferable to have mechanisms in place to find out sooner
> than later.
>

Yeah, I agree that it is better to find out about problems sooner rather
than later.  That is why we create perf test cases.  If a one-second time
increase is sufficient to reveal the performance problem, isn't that good
enough?  Why do we still need to run a bigger version that demonstrates a
ten-second increase?

> Similarly, if a change to gdb increases memory usage by 40MB is that ok?
> Maybe. And if my users see that increase become 400MB is that still ok?
> Possibly (depending on the nature of the change). But, again, one of my
> goals here is to have in place mechanisms to find out sooner than later.
>

Similarly, if a 40MB memory usage increase is sufficient to show the
performance problem, why do we still have to use a bigger one?

A perf test case is used to demonstrate the real performance problems seen
in some super large programs, but that doesn't mean the perf test case
should be as big as those programs.

> Note that, as I said, there's more I wish to add here.
> For example, it's not enough to just machine generate a bunch of generic
> code. We also need the ability to add specific cases that trip gdb up,
> and thus I also plan to add the ability to add hand-written code to
> these benchmarks.
> Plus, my plan is to make gmonster1 contain a variety of such cases
> and use it in multiple benchmarks. Otherwise we're compiling/linking
> multiple programs and I *am* trying to cut down on build times here! :-)
>

That sounds interesting...

>>> These tests currently require separate build-perf and check-perf steps,
>>> which is different from normal perf tests.  However, due to the time
>>> it takes to build the program I've added support for building the pieces
>>> of the test in parallel, and hooking this parallel build support into
>>> the existing framework required some pragmatic compromise.
>>
>> ... so the parallel build part may not be needed.
>
> I'm not sure what the hangup is on supporting parallel builds here.
> Can you elaborate? It's really not that much code, and while I could

I'd like to keep the gdb perf tests simple.

> have done things differently, I'm just using mechanisms that are
> already in place. The only real "complexity" is that the existing
> mechanism is per-.exp-file based, so I needed one .exp file per worker.
> I think we could simplify this with some cleverness, but this isn't
> what I want to focus on right now. Any change will just be to the
> infrastructure, not to the tests. If someone wants to propose a different
> mechanism to achieve the parallelism, go for it. OTOH, there is value
> in using existing mechanisms. Another way to go (and I'm not suggesting
> this is a better or worse way, it's just an example) would be to have
> hand-written worker .exp files and check those in. I don't have a
> strong opinion on that; machine-generating them is easy enough and
> gives me some flexibility (which is nice) in these early stages.
>
>>> Running the gmonster1-ptype benchmark requires about 8G to link the program,
>>> and 11G to run it under gdb.  I still need to add the ability to
>>> have a small version enabled by default, and turn on the bigger version
>>> from the command line.  I don't expect everyone to have a big enough
>>> machine to run the test configuration that I do.
>>
>> It looks like a monster rather than a perf test case :)
>
> Depends.  How long do your users still wait for gdb to do something?
> My users are still waiting too long for several things (e.g., startup time).
> And I want to be able to measure what my users see.
> And I want to be able to provide upstream with demonstrations of that.
>

IMO, your expectation is beyond the scope and purpose of a perf test
case.  The purpose of each perf test case is to make sure there is no
performance regression and to expose performance problems as the code
evolves.  It is not reasonable to me that we measure what users see by
running our perf test cases.  Each perf test case measures the
performance of gdb on a certain path, so it doesn't have to behave
exactly the same as the application users are debugging.

>> It is good to
>> have a small version enabled by default, which requires less than 1 G,
>> for example, to run it under GDB.  How much time does it take to compile
>> (sequential build) and run the small version?
>
> There are mechanisms in place to control the amount of parallelism.
> One could make it part of the test spec, but I'm not sure it'd be useful
> enough.  Thus I think there's no need to compile small testcases
> serially.
>

Is it possible (or necessary) to divide this into two parts: 1) the perf
test case generator and 2) the parallel build?  As we increase the size of
the generated perf test cases, the long compilation time can justify
having a parallel build.

> As for what upstream wants the "default" to be, I don't have
> a strong opinion, beyond it being minimally useful.  If the default isn't
> useful to me, it's easy enough to tweak the test with a local change
> to cover what I need.
>
> Note that I'm not expecting the default to be these
> super long times, which I noted in my original email. OTOH, I do want
> the harness to be able to usefully handle (as in not wait an hour for the
> testcase to be built) the kind of large programs that I need to run the
> tests on.  Thus my plan is to have a harness that can handle what
> I need, but have defaults that don't impose that on everyone.
> Given appropriate knobs it will be easy enough to have useful
> defaults and still be able to run the tests with larger programs.
> And then if my runs find a problem, it will be straightforward for
> me to provide a demonstration of what I'm seeing (which is part
> of what I want to accomplish here).

Yeah, I agree.

-- 
Yao (齐尧)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-07  9:39     ` Yao Qi
@ 2015-01-07 22:33       ` Doug Evans
  2015-01-08  1:55         ` Yao Qi
  0 siblings, 1 reply; 8+ messages in thread
From: Doug Evans @ 2015-01-07 22:33 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

On Wed, Jan 7, 2015 at 1:39 AM, Yao Qi <yao@codesourcery.com> wrote:
> Doug Evans <dje@google.com> writes:
>
>> If a change to gdb increases the time it takes to run a particular command
>> by one second is that ok? Maybe. And if my users see the increase
>> become ten seconds is that still ok? Also maybe, but I'd like to make the
>> case that it'd be preferable to have mechanisms in place to find out sooner
>> than later.
>>
>
> Yeah, I agree that it is better to find out about problems sooner rather
> than later.  That is why we create perf test cases.  If a one-second time
> increase is sufficient to reveal the performance problem, isn't that good
> enough?  Why do we still need to run a bigger version that demonstrates a
> ten-second increase?

Some performance problems only present themselves at scale.
We need a perf test framework that lets us explore such things.

The point of the 1 second vs 10 second scenario is that the community
may find that 1 second is acceptable (IOW *not* a performance problem
significant enough to address).  It'll depend on the situation.
But at scale the performance may be untenable, causing one to want
to rethink one's algorithm or data structure or whatever.

Similar issues arise elsewhere btw.
E.g., gdb may handle 10 or 100 threads ok, but how about 1000 threads?

>> Similarly, if a change to gdb increases memory usage by 40MB is that ok?
>> Maybe. And if my users see that increase become 400MB is that still ok?
>> Possibly (depending on the nature of the change). But, again, one of my
>> goals here is to have in place mechanisms to find out sooner than later.
>>
>
> Similarly, if a 40MB memory usage increase is sufficient to show the
> performance problem, why do we still have to use a bigger one?
>
> A perf test case is used to demonstrate the real performance problems seen
> in some super large programs, but that doesn't mean the perf test case
> should be as big as those programs.

One may think 40MB is a reasonable price to pay for some change
or some new feature.  But at scale that price may become unbearable.
So, yes, we do need perf testcases that let one exercise gdb at scale.

>>>> These tests currently require separate build-perf and check-perf steps,
>>>> which is different from normal perf tests.  However, due to the time
>>>> it takes to build the program I've added support for building the pieces
>>>> of the test in parallel, and hooking this parallel build support into
>>>> the existing framework required some pragmatic compromise.
>>>
>>> ... so the parallel build part may not be needed.
>>
>> I'm not sure what the hangup is on supporting parallel builds here.
>> Can you elaborate? It's really not that much code, and while I could
>
> I'd like to keep the gdb perf tests simple.

How simple?  What about parallel builds adds too much complexity?
make check-parallel adds complexity, but I'm guessing no one is
advocating removing it, or was advocating against checking it in.

>>> It looks like a monster rather than a perf test case :)
>>
>> Depends.  How long do your users still wait for gdb to do something?
>> My users are still waiting too long for several things (e.g., startup time).
>> And I want to be able to measure what my users see.
>> And I want to be able to provide upstream with demonstrations of that.
>>
>
> IMO, your expectation is beyond the scope and purpose of a perf test
> case.  The purpose of each perf test case is to make sure there is no
> performance regression and to expose performance problems as the code
> evolves.

It's precisely within the scope and purpose of the perf testsuite!
We need to measure how well gdb will work on real programs,
and make sure changes introduced don't adversely affect such programs.
How do you know a feature/change/improvement will work at scale unless
you test it at scale?

> It is not reasonable to me that we measure what users see by
> running our perf test cases.

Perf test cases aren't an end unto themselves.
They exist to help serve our users.  If we're not able to measure
what our users see, how do we know what their gdb experience is?

> Each perf test case measures the
> performance of gdb on a certain path, so it doesn't have to behave
> exactly the same as the application users are debugging.
>
>>> It is good to
>>> have a small version enabled by default, which requires less than 1 G,
>>> for example, to run it under GDB.  How much time does it take to compile
>>> (sequential build) and run the small version?
>>
>> There are mechanisms in place to control the amount of parallelism.
>> One could make it part of the test spec, but I'm not sure it'd be useful
>> enough.  Thus I think there's no need to compile small testcases
>> serially.
>>
>
> Is it possible (or necessary) to divide this into two parts: 1) the perf
> test case generator and 2) the parallel build?  As we increase the size of
> the generated perf test cases, the long compilation time can justify
> having a parallel build.

I'm not sure what you're advocating for here.
Can you rephrase/elaborate?

>> As for what upstream wants the "default" to be, I don't have
>> a strong opinion, beyond it being minimally useful.  If the default isn't
>> useful to me, it's easy enough to tweak the test with a local change
>> to cover what I need.
>>
>> Note that I'm not expecting the default to be these
>> super long times, which I noted in my original email. OTOH, I do want
>> the harness to be able to usefully handle (as in not wait an hour for the
>> testcase to be built) the kind of large programs that I need to run the
>> tests on.  Thus my plan is to have a harness that can handle what
>> I need, but have defaults that don't impose that on everyone.
>> Given appropriate knobs it will be easy enough to have useful
>> defaults and still be able to run the tests with larger programs.
>> And then if my runs find a problem, it will be straightforward for
>> me to provide a demonstration of what I'm seeing (which is part
>> of what I want to accomplish here).
>
> Yeah, I agree.
>
> --
> Yao (齐尧)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-07 22:33       ` Doug Evans
@ 2015-01-08  1:55         ` Yao Qi
  2015-01-23  7:45           ` Doug Evans
  0 siblings, 1 reply; 8+ messages in thread
From: Yao Qi @ 2015-01-08  1:55 UTC (permalink / raw)
  To: Doug Evans; +Cc: gdb-patches

Doug Evans <dje@google.com> writes:

> The point of the 1 second vs 10 second scenario is that the community
> may find that 1 second is acceptable (IOW *not* a performance problem
> significant enough to address).  It'll depend on the situation.
> But at scale the performance may be untenable, causing one to want
> to rethink one's algorithm or data structure or whatever.

Right, the algorithm may need to be reconsidered when the program grows
to a large scale.
>
> Similar issues arise elsewhere btw.
> E.g., gdb may handle 10 or 100 threads ok, but how about 1000 threads?

Then, I have to run the program with 1000 threads.

>>> Similarly, if a change to gdb increases memory usage by 40MB is that ok?
>>> Maybe. And if my users see that increase become 400MB is that still ok?
>>> Possibly (depending on the nature of the change). But, again, one of my
>>> goals here is to have in place mechanisms to find out sooner than later.
>>>
>>
>> Similarly, if a 40MB memory usage increase is sufficient to show the
>> performance problem, why do we still have to use a bigger one?
>>
>> A perf test case is used to demonstrate the real performance problems seen
>> in some super large programs, but that doesn't mean the perf test case
>> should be as big as those programs.
>
> One may think 40MB is a reasonable price to pay for some change
> or some new feature.  But at scale that price may become unbearable.
> So, yes, we do need perf testcases that let one exercise gdb at scale.

Hmmm, that makes sense to me.

>
>>>>> These tests currently require separate build-perf and check-perf steps,
>>>>> which is different from normal perf tests.  However, due to the time
>>>>> it takes to build the program I've added support for building the pieces
>>>>> of the test in parallel, and hooking this parallel build support into
>>>>> the existing framework required some pragmatic compromise.
>>>>
>>>> ... so the parallel build part may not be needed.
>>>
>>> I'm not sure what the hangup is on supporting parallel builds here.
>>> Can you elaborate? It's really not that much code, and while I could
>>
>> I'd like to keep the gdb perf tests simple.
>
> How simple?  What about parallel builds adds too much complexity?
> make check-parallel adds complexity, but I'm guessing no one is
> advocating removing it, or was advocating against checking it in.
>

Well, 'make check-parallel' is useful, and parallel build in the perf test
case generator is useful too.  However, my initial feeling was that
parallel build in the generator is a plus, not a must.  I thought we could
have a perf test case generator without parallel build.

>>>> It looks like a monster rather than a perf test case :)
>>>
>>> Depends.  How long do your users still wait for gdb to do something?
>>> My users are still waiting too long for several things (e.g., startup time).
>>> And I want to be able to measure what my users see.
>>> And I want to be able to provide upstream with demonstrations of that.
>>>
>>
>> IMO, your expectation is beyond the scope and purpose of a perf test
>> case.  The purpose of each perf test case is to make sure there is no
>> performance regression and to expose performance problems as the code
>> evolves.
>
> It's precisely within the scope and purpose of the perf testsuite!
> We need to measure how well gdb will work on real programs,
> and make sure changes introduced don't adversely affect such programs.
> How do you know a feature/change/improvement will work at scale unless
> you test it at scale?
>

We should test it at scale.

>> Each perf test case measures the
>> performance of gdb on a certain path, so it doesn't have to behave
>> exactly the same as the application users are debugging.
>>
>>>> It is good to
>>>> have a small version enabled by default, which requires less than 1 G,
>>>> for example, to run it under GDB.  How much time does it take to compile
>>>> (sequential build) and run the small version?
>>>
>>> There are mechanisms in place to control the amount of parallelism.
>>> One could make it part of the test spec, but I'm not sure it'd be useful
>>> enough.  Thus I think there's no need to compile small testcases
>>> serially.
>>>
>>
>> Is it possible (or necessary) to divide this into two parts: 1) the perf
>> test case generator and 2) the parallel build?  As we increase the size of
>> the generated perf test cases, the long compilation time can justify
>> having a parallel build.
>
> I'm not sure what you're advocating for here.
> Can you rephrase/elaborate?

Can we have a perf test case generator without using parallel build, and
then add building perf test cases in parallel as the next step?  I'd like
to add new things gradually.

If you think it isn't necessary to do things in these two steps, I am
OK too.  I don't have a strong opinion on this now.  I'll take a look at
your patch in detail.

-- 
Yao (齐尧)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC] Monster testcase generator for performance testsuite
  2015-01-08  1:55         ` Yao Qi
@ 2015-01-23  7:45           ` Doug Evans
  0 siblings, 0 replies; 8+ messages in thread
From: Doug Evans @ 2015-01-23  7:45 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

Yao Qi <yao@codesourcery.com> writes:
> Can we have a perf test case generator without using parallel build, and
> then add building perf test cases in parallel as the next step?  I'd like
> to add new things gradually.
>
> If you think it isn't necessary to do things in these two steps, I am
> OK too.  I don't have a strong opinion on this now.  I'll take a look at
> your patch in detail.

Cool, thanks.

With testcases this big, a parallel build is really a must.
I've also added the beginnings of sha1sum tracking so that
incremental builds are even faster.
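
The gist of the sha1sum tracking is to skip recompiling a piece whose
generated source hasn't changed since the last build.  A sketch only:
the helper names below are illustrative, not necessarily what the patch
ends up using (SHA1SUM_PROGRAM is real, see lib/perftest.exp below):

  # Sketch only: illustrative helpers, not the actual patch code.
  # Return 1 if generated source SRC needs recompiling into OBJ.
  proc _need_to_compile { src obj } {
      global SHA1SUM_PROGRAM
      set stamp "$obj.sha1"
      if { ![file exists $obj] || ![file exists $stamp] } {
          return 1
      }
      set fd [open $stamp]
      set old_sum [read -nonewline $fd]
      close $fd
      # sha1sum prints "<hash>  <file>"; keep just the hash.
      set new_sum [lindex [exec $SHA1SUM_PROGRAM $src] 0]
      return [expr {![string equal $new_sum $old_sum]}]
  }

  # After a successful compile, record the hash for next time.
  proc _record_sha1 { src obj } {
      global SHA1SUM_PROGRAM
      set fd [open "$obj.sha1" "w"]
      puts $fd [lindex [exec $SHA1SUM_PROGRAM $src] 0]
      close $fd
  }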

This is still a work in progress, but it's at a point where
I'm getting good data from it.

Still need to add, e.g., an output verifier (the data isn't valid unless
the correct answer was printed, and manual verification is a pain and
error prone).

To use, e.g.,

bash$ make -j5 build-perf RUNTESTFLAGS="gmonster1.exp gmonster2.exp"
bash$ make check-perf RUNTESTFLAGS="gdb.perf/gm*-*.exp GDB=/path/to/gdb"


diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
index 53cb754..c350162 100644
--- a/gdb/testsuite/Makefile.in
+++ b/gdb/testsuite/Makefile.in
@@ -227,13 +227,31 @@ do-check-parallel: $(TEST_TARGETS)
 
 @GMAKE_TRUE@check/%.exp:
 @GMAKE_TRUE@	-mkdir -p outputs/$*
-@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=yes --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)
+@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=. --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)
 
 check/no-matching-tests-found:
 	@echo ""
 	@echo "No matching tests found."
 	@echo ""
 
+@GMAKE_TRUE@pieces/%.exp:
+@GMAKE_TRUE@	mkdir -p gdb.perf/outputs/$*
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --outdir=gdb.perf/outputs/$* lib/build-piece.exp PIECE=gdb.perf/pieces/$*.exp WORKER=$* GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=build-pieces
+
+# GDB_PERFTEST_MODE appears *after* RUNTESTFLAGS here because we don't want
+# anything in RUNTESTFLAGS to override it.
+# We don't delete previous directories here as these programs can take
+# awhile to build, and perftest.exp has support for deciding whether to
+# recompile them.  If you want to remove these directories, make clean.
+@GMAKE_TRUE@build-perf: $(abs_builddir)/site.exp
+@GMAKE_TRUE@	mkdir -p gdb.perf/pieces
+@GMAKE_TRUE@	@: Step 1: Generate the build .exp files.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf/pieces GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=gen-build-exps
+@GMAKE_TRUE@	@: Step 2: Compile the pieces.
+@GMAKE_TRUE@	$(MAKE) $$(cd gdb.perf && echo pieces/*/*.exp)
+@GMAKE_TRUE@	@: Step 3: Do the final link.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=compile
+
 check-perf: all $(abs_builddir)/site.exp
 	@if test ! -d gdb.perf; then mkdir gdb.perf; fi
 	$(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf GDB_PERFTEST_MODE=both $(RUNTESTFLAGS)
@@ -245,6 +263,7 @@ clean mostlyclean:
 	-rm -f core.* *.tf *.cl tracecommandsscript copy1.txt zzz-gdbscript
 	-rm -f *.dwo *.dwp
 	-rm -rf outputs temp cache
+	-rm -rf gdb.perf/pieces gdb.perf/outputs gdb.perf/temp gdb.perf/cache
 	-rm -f read1.so expect-read1
 	if [ x"${ALL_SUBDIRS}" != x ] ; then \
 	    for dir in ${ALL_SUBDIRS}; \
diff --git a/gdb/testsuite/gdb.perf/gm-hello.cc b/gdb/testsuite/gdb.perf/gm-hello.cc
new file mode 100644
index 0000000..05b36e8
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-hello.cc
@@ -0,0 +1,18 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <string>
+
+std::string hello ("Hello.");
diff --git a/gdb/testsuite/gdb.perf/gmonster-null-lookup.py b/gdb/testsuite/gdb.perf/gmonster-null-lookup.py
new file mode 100644
index 0000000..9bb839e
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-null-lookup.py
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Test handling of lookup of a symbol that doesn't exist.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class NullLookup(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(NullLookup, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            utils.select_file(this_run_binfile)
+            utils.runto_main()
+            utils.safe_execute("mt expand-symtabs")
+            iteration = 5
+            while iteration > 0:
+                utils.safe_execute("mt flush-symbol-cache")
+                func = lambda: utils.safe_execute("p symbol_not_found")
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-ptype-string.py b/gdb/testsuite/gdb.perf/gmonster-ptype-string.py
new file mode 100644
index 0000000..d39f4ce
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-ptype-string.py
@@ -0,0 +1,45 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype of a std::string object.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class GmonsterPtypeString(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(GmonsterPtypeString, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            utils.select_file(this_run_binfile)
+            utils.runto_main()
+            utils.safe_execute("mt expand-symtabs")
+            iteration = 5
+            while iteration > 0:
+                utils.safe_execute("mt flush-symbol-cache")
+                func1 = lambda: utils.safe_execute("ptype hello")
+                func = lambda: utils.run_n_times(2, func1)
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp b/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp
new file mode 100644
index 0000000..4600b95
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of lookup of a symbol that doesn't exist.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+set testprog "gmonster1"
+
+GenPerfTest::load_test_description ${testprog}.exp
+
+# This variable is required by perftest.exp.
+# This isn't the name of the test program, it's the name of the .py test.
+# The harness assumes they are the same, which is not the case here.
+set testfile "gmonster-null-lookup"
+
+array set testcase [make_testcase_config]
+
+PerfTest::assemble {
+    # Compilation is handled by ${testprog}.exp.
+    return 0
+} {
+    clean_restart
+} {
+    global testcase
+    gdb_test "python NullLookup('$testprog:$testfile', [tcl_string_list_to_python_list $testcase(run_names)], '$testcase(binfile)').run()"
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp b/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp
new file mode 100644
index 0000000..26327aa
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype of a std::string object.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+set testprog "gmonster1"
+
+GenPerfTest::load_test_description ${testprog}.exp
+
+# This variable is required by perftest.exp.
+# This isn't the name of the test program, it's the name of the .py test.
+# The harness assumes they are the same, which is not the case here.
+set testfile "gmonster-ptype-string"
+
+array set testcase [make_testcase_config]
+
+PerfTest::assemble {
+    # Compilation is handled by ${testprog}.exp.
+    return 0
+} {
+    clean_restart
+} {
+    global testcase
+    gdb_test "python GmonsterPtypeString('$testprog:$testfile', [tcl_string_list_to_python_list $testcase(run_names)], '$testcase(binfile)').run()"
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster1.cc b/gdb/testsuite/gdb.perf/gmonster1.cc
new file mode 100644
index 0000000..0627a09
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1.cc
@@ -0,0 +1,20 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+int
+main ()
+{
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster1.exp b/gdb/testsuite/gdb.perf/gmonster1.exp
new file mode 100644
index 0000000..fdaa191
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1.exp
@@ -0,0 +1,79 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Perftest description file for building the "gmonster1" benchmark.
+# Where does the name come from?  The benchmark is derived from one of the
+# monster programs at Google.
+#
+# Perftest descriptions are loaded thrice:
+# 1) To generate the build .exp files
+#    GDB_PERFTEST_MODE=gen-build-exps
+#    This step allows for parallel builds of the majority of pieces of the
+#    test binary and shlibs.
+# 2) To compile the "pieces" of the binary and shlibs.
+#    "Pieces" are the bulk of the machine-generated sources of the test.
+#    This step is driven by lib/build-piece.exp.
+#    GDB_PERFTEST_MODE=build-pieces
+# 3) To perform the final link of the binary and shlibs.
+#    GDB_PERFTEST_MODE=compile
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+if ![info exists MONSTER] {
+    set MONSTER "n"
+}
+
+proc make_testcase_config { } {
+    global MONSTER
+
+    set program_name "gmonster1"
+    array set testcase [GenPerfTest::init_testcase $program_name]
+
+    set testcase(language) c++
+
+    # binary_sources needs to be embedded in an outer list because, remember,
+    # each element of the outer list applies to one run, and here we want to
+    # use the same value for all runs.
+    set testcase(binary_sources) { { gmonster1.cc gm-hello.cc } }
+
+    if { $MONSTER == "y" } {
+	set testcase(run_names) { 10-cus 100-cus 1000-cus 10000-cus }
+	set testcase(nr_compunits) { 10 100 1000 10000 }
+    } else {
+	set testcase(run_names) { 1-cu 10-cus 100-cus }
+	set testcase(nr_compunits) { 1 10 100 }
+    }
+    set testcase(nr_shlibs) { 0 }
+
+    set testcase(nr_extern_functions) 10
+    set testcase(nr_static_functions) 10
+
+    # class_specs needs to be embedded in an outer list because, remember,
+    # each element of the outer list applies to one run, and here we want to
+    # use the same value for all runs.
+    set testcase(class_specs) { { { 0 10 } { 1 10 } { 2 10 } } }
+    set testcase(nr_members) 10
+    set testcase(nr_static_members) 10
+    set testcase(nr_methods) 10
+    set testcase(nr_static_methods) 10
+
+    return [array get testcase]
+}
+
+GenPerfTest::standard_driver gmonster1.exp make_testcase_config
diff --git a/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp b/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp
new file mode 100644
index 0000000..c6d3d91
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of lookup of a symbol that doesn't exist.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+set testprog "gmonster2"
+
+GenPerfTest::load_test_description ${testprog}.exp
+
+# This variable is required by perftest.exp.
+# This isn't the name of the test program, it's the name of the .py test.
+# The harness assumes they are the same, which is not the case here.
+set testfile "gmonster-null-lookup"
+
+array set testcase [make_testcase_config]
+
+PerfTest::assemble {
+    # Compilation is handled by ${testprog}.exp.
+    return 0
+} {
+    clean_restart
+} {
+    global testcase
+    gdb_test "python NullLookup('$testprog:$testfile', [tcl_string_list_to_python_list $testcase(run_names)], '$testcase(binfile)').run()"
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp b/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp
new file mode 100644
index 0000000..23fa38d
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype of a std::string object with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+set testprog "gmonster2"
+
+GenPerfTest::load_test_description ${testprog}.exp
+
+# This variable is required by perftest.exp.
+# This isn't the name of the test program, it's the name of the .py test.
+# The harness assumes they are the same, which is not the case here.
+set testfile "gmonster-ptype-string"
+
+array set testcase [make_testcase_config]
+
+PerfTest::assemble {
+    # Compilation is handled by ${testprog}.exp.
+    return 0
+} {
+    clean_restart
+} {
+    global testcase
+    gdb_test "python GmonsterPtypeString('$testprog:$testfile', [tcl_string_list_to_python_list $testcase(run_names)], '$testcase(binfile)').run()"
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster2.cc b/gdb/testsuite/gdb.perf/gmonster2.cc
new file mode 100644
index 0000000..0627a09
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2.cc
@@ -0,0 +1,20 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+int
+main ()
+{
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster2.exp b/gdb/testsuite/gdb.perf/gmonster2.exp
new file mode 100644
index 0000000..6d62876
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2.exp
@@ -0,0 +1,79 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Perftest description file for building the "gmonster2" benchmark.
+# Where does the name come from?  The benchmark is derived from one of the
+# monster programs at Google.
+#
+# Perftest descriptions are loaded thrice:
+# 1) To generate the build .exp files
+#    GDB_PERFTEST_MODE=gen-build-exps
+#    This step allows for parallel builds of the majority of pieces of the
+#    test binary and shlibs.
+# 2) To compile the "pieces" of the binary and shlibs.
+#    "Pieces" are the bulk of the machine-generated sources of the test.
+#    This step is driven by lib/build-piece.exp.
+#    GDB_PERFTEST_MODE=build-pieces
+# 3) To perform the final link of the binary and shlibs.
+#    GDB_PERFTEST_MODE=compile
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+if ![info exists MONSTER] {
+    set MONSTER "n"
+}
+
+proc make_testcase_config { } {
+    global MONSTER
+
+    set program_name "gmonster2"
+    array set testcase [GenPerfTest::init_testcase $program_name]
+
+    set testcase(language) c++
+
+    # binary_sources needs to be embedded in an outer list because, remember,
+    # each element of the outer list applies to one run, and here we want to
+    # use the same value for all runs.
+    set testcase(binary_sources) { { gmonster2.cc gm-hello.cc } }
+
+    if { $MONSTER == "y" } {
+	set testcase(run_names) { 10-sos 100-sos 1000-sos }
+	set testcase(nr_shlibs) { 10 100 1000 }
+    } else {
+	set testcase(run_names) { 1-so 10-sos 100-sos }
+	set testcase(nr_shlibs) { 1 10 100 }
+    }
+    set testcase(nr_compunits) 10
+
+    set testcase(nr_extern_functions) 10
+    set testcase(nr_static_functions) 10
+
+    # class_specs needs to be embedded in an outer list because, remember,
+    # each element of the outer list applies to one run, and here we want to
+    # use the same value for all runs.
+    set testcase(class_specs) { { { 0 10 } { 1 10 } { 2 10 } } }
+    set testcase(nr_members) 10
+    set testcase(nr_static_members) 10
+    set testcase(nr_methods) 10
+    set testcase(nr_static_methods) 10
+
+    return [array get testcase]
+}
+
+GenPerfTest::standard_driver gmonster2.exp make_testcase_config
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/utils.py b/gdb/testsuite/gdb.perf/lib/perftest/utils.py
new file mode 100644
index 0000000..ed44500
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/utils.py
@@ -0,0 +1,56 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import gdb
+
+def safe_execute(command):
+    """Execute command, ignoring any gdb errors."""
+    result = None
+    try:
+        result = gdb.execute(command, to_string=True)
+    except gdb.error:
+        pass
+    return result
+
+
+def convert_spaces(file_name):
+    """Return file_name with all spaces replaced with "-"."""
+    return file_name.replace(" ", "-")
+
+
+def select_file(file_name):
+    """Select a file for debugging.
+
+    N.B. This turns confirmation off.
+    """
+    safe_execute("set confirm off")
+    gdb.execute("file %s" % (file_name))
+
+
+def runto_main():
+    """Run the program to "main".
+
+    N.B. This turns confirmation off.
+    """
+    safe_execute("set confirm off")
+    gdb.execute("tbreak main")
+    gdb.execute("run")
+
+
+def run_n_times(count, func):
+    """Execute func count times."""
+    while count > 0:
+        func()
+        count -= 1
diff --git a/gdb/testsuite/lib/build-piece.exp b/gdb/testsuite/lib/build-piece.exp
new file mode 100644
index 0000000..c48774c
--- /dev/null
+++ b/gdb/testsuite/lib/build-piece.exp
@@ -0,0 +1,36 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Utility to bootstrap building a piece of a performance test in a
+# parallel build.
+# See testsuite/Makefile.in:pieces/%.exp.
+
+# Dejagnu presents a kind of API to .exp files, but using this file to
+# bootstrap the parallel build process breaks that.  Before invoking $PIECE
+# set various globals to their expected values.  The tests may not use these
+# today, but if/when they do the error modes are confusing, so fix it now.
+
+# $subdir is set to "lib", because that is where this file lives,
+# which is not what tests expect.  The makefile sets WORKER for us.
+# Its value is <name>/<name>-<number>.
+set subdir [file dirname $WORKER]
+
+# $gdb_test_file_name is set to this file, build-piece, which is not what
+# tests expect.  This assumes each piece's build .exp file lives in
+# $objdir/gdb.perf/pieces/<name>.
+# See perftest.exp:GenPerfTest::gen_build_exp_files.
+set gdb_test_file_name [file tail [file dirname $PIECE]]
+
+source $PIECE
diff --git a/gdb/testsuite/lib/cache.exp b/gdb/testsuite/lib/cache.exp
index 8df04b9..9565b39 100644
--- a/gdb/testsuite/lib/cache.exp
+++ b/gdb/testsuite/lib/cache.exp
@@ -35,7 +35,7 @@ proc gdb_do_cache {name} {
     }
 
     if {[info exists GDB_PARALLEL]} {
-	set cache_filename [file join $objdir cache $cache_name]
+	set cache_filename [file join $objdir $GDB_PARALLEL cache $cache_name]
 	if {[file exists $cache_filename]} {
 	    set fd [open $cache_filename]
 	    set gdb_data_cache($cache_name) [read -nonewline $fd]
diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index a6f200f..4415801 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -3777,7 +3777,7 @@ proc standard_output_file {basename} {
     global objdir subdir gdb_test_file_name GDB_PARALLEL
 
     if {[info exists GDB_PARALLEL]} {
-	set dir [file join $objdir outputs $subdir $gdb_test_file_name]
+	set dir [file join $objdir $GDB_PARALLEL outputs $subdir $gdb_test_file_name]
 	file mkdir $dir
 	return [file join $dir $basename]
     } else {
@@ -3791,7 +3791,7 @@ proc standard_temp_file {basename} {
     global objdir GDB_PARALLEL
 
     if {[info exists GDB_PARALLEL]} {
-	return [file join $objdir temp $basename]
+	return [file join $objdir $GDB_PARALLEL temp $basename]
     } else {
 	return $basename
     }
@@ -4693,17 +4693,27 @@ proc build_executable { testname executable {sources ""} {options {debug}} } {
     return [eval build_executable_from_specs $arglist]
 }
 
-# Starts fresh GDB binary and loads EXECUTABLE into GDB. EXECUTABLE is
-# the basename of the binary.
-proc clean_restart { executable } {
+# Starts fresh GDB binary and loads an optional executable into GDB.
+# Usage: clean_restart [executable]
+# EXECUTABLE is the basename of the binary.
+
+proc clean_restart { args } {
     global srcdir
     global subdir
-    set binfile [standard_output_file ${executable}]
+
+    if { [llength $args] > 1 } {
+	error "bad number of args: [llength $args]"
+    }
 
     gdb_exit
     gdb_start
     gdb_reinitialize_dir $srcdir/$subdir
-    gdb_load ${binfile}
+
+    if { [llength $args] >= 1 } {
+	set executable [lindex $args 0]
+	set binfile [standard_output_file ${executable}]
+	gdb_load ${binfile}
+    }
 }
 
 # Prepares for testing by calling build_executable_full, then
@@ -4907,7 +4917,10 @@ if {[info exists GDB_PARALLEL]} {
     if {[is_remote host]} {
 	unset GDB_PARALLEL
     } else {
-	file mkdir outputs temp cache
+	file mkdir \
+	    [file join $GDB_PARALLEL outputs] \
+	    [file join $GDB_PARALLEL temp] \
+	    [file join $GDB_PARALLEL cache]
     }
 }
 
diff --git a/gdb/testsuite/lib/perftest.exp b/gdb/testsuite/lib/perftest.exp
index 7c334ac..1aca310 100644
--- a/gdb/testsuite/lib/perftest.exp
+++ b/gdb/testsuite/lib/perftest.exp
@@ -12,6 +12,10 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+# Notes:
+# 1) This follows a Python convention for marking internal vs public functions.
+# Internal functions are prefixed with "_".
 
 namespace eval PerfTest {
     # The name of python file on build.
@@ -42,14 +46,7 @@ namespace eval PerfTest {
     # actual compilation.  Return zero if compilation is successful,
     # otherwise return non-zero.
     proc compile {body} {
-	global GDB_PERFTEST_MODE
-
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "run"] } {
-	    return [uplevel 2 $body]
-	}
-
-	return 0
+	return [uplevel 2 $body]
     }
 
     # Start up GDB.
@@ -82,14 +79,24 @@ namespace eval PerfTest {
     proc assemble {compile startup run} {
 	global GDB_PERFTEST_MODE
 
-	if { [eval compile {$compile}] } {
-	    untested "Could not compile source files."
+	if ![info exists GDB_PERFTEST_MODE] {
+	    return
+	}
+
+	if { "$GDB_PERFTEST_MODE" == "gen-build-exps"
+	     || "$GDB_PERFTEST_MODE" == "build-pieces" } {
 	    return
 	}
 
+	if { [string compare $GDB_PERFTEST_MODE "run"] } {
+	    if { [eval compile {$compile}] } {
+		untested "Could not compile source files."
+		return
+	    }
+	}
+
 	# Don't execute the run if GDB_PERFTEST_MODE=compile.
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
+	if { [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
 	    return
 	}
 
@@ -110,10 +117,11 @@ proc skip_perf_tests { } {
 
     if [info exists GDB_PERFTEST_MODE] {
 
-	if { "$GDB_PERFTEST_MODE" != "compile"
+	if { "$GDB_PERFTEST_MODE" != "gen-build-exps"
+	     && "$GDB_PERFTEST_MODE" != "build-pieces"
+	     && "$GDB_PERFTEST_MODE" != "compile"
 	     && "$GDB_PERFTEST_MODE" != "run"
 	     && "$GDB_PERFTEST_MODE" != "both" } {
-	    # GDB_PERFTEST_MODE=compile|run|both is allowed.
 	    error "Unknown value of GDB_PERFTEST_MODE."
 	    return 1
 	}
@@ -123,3 +131,958 @@ proc skip_perf_tests { } {
 
     return 1
 }
+
+# Given a list of tcl strings, return the same list as the text form of a
+# python list.
+
+proc tcl_string_list_to_python_list { l } {
+    proc quote { text } {
+	return "\"$text\""
+    }
+    set quoted_list ""
+    foreach elm $l {
+	lappend quoted_list [quote $elm]
+    }
+    return "([join $quoted_list {, }])"
+}
+
+# A simple testcase generator.
+#
+# Usage Notes:
+#
+# 1) The length of each parameter list must either be one, in which case the
+# same value is used for each run, or the length must match all other
+# parameters of length greater than one.
+#
+# 2) Values for parameters that vary across runs must appear in increasing
+# order.  E.g. nr_shlibs = { 0 1 10 } is good, { 1 0 10 } is bad.
+# This rule simplifies the code a bit, without being onerous on the user:
+#  a) Report generation doesn't have to sort the output by run; it'll already
+#  be sorted.
+#  b) In the static object file case, the last run can be used to generate
+#  all the source files.
+#
+# TODO:
+# 1) Lots.  E.g., having functions call each other within an objfile and across
+# objfiles to measure things like backtrace times.
+# 2) Lots.  E.g., inline methods.
+#
+# Implementation Notes:
+#
+# 1) The implementation would be a bit simpler if we could assume Tcl 8.5.
+# Then we could use a dictionary to record the testcase instead of an array.
+# With the array we use here, there is only one copy of it and instead of
+# passing its value we pass its name.  Yay Tcl.
+#
+# 2) Array members cannot (apparently) be referenced in the conditional
+# expression of a for loop (-> variable not found error).  That is why they're
+# all extracted before the for loop.
+
+if ![info exists CAT_PROGRAM] {
+    set CAT_PROGRAM "/bin/cat"
+}
+
+if ![info exists SHA1SUM_PROGRAM] {
+    set SHA1SUM_PROGRAM "/usr/bin/sha1sum"
+}
+
+namespace eval GenPerfTest {
+
+    # The default level of compilation parallelism we support.
+    set DEFAULT_PERF_TEST_COMPILE_PARALLELISM 10
+
+    # The language of the test.
+    set DEFAULT_LANGUAGE "c"
+
+    # Extra source files for the binary.
+    # This must at least include the file with main(),
+    # each test must supply its own.
+    set DEFAULT_BINARY_SOURCES {}
+
+    # The number of shared libraries to create.
+    set DEFAULT_NR_SHLIBS 0
+
+    # The number of compunits in each objfile.
+    set DEFAULT_NR_COMPUNITS 1
+
+    # The number of public globals in each compunit.
+    set DEFAULT_NR_EXTERN_GLOBALS 1
+
+    # The number of static globals in each compunit.
+    set DEFAULT_NR_STATIC_GLOBALS 1
+
+    # The number of public functions in each compunit.
+    set DEFAULT_NR_EXTERN_FUNCTIONS 1
+
+    # The number of static functions in each compunit.
+    set DEFAULT_NR_STATIC_FUNCTIONS 1
+
+    # List of pairs of class depth and number of classes at that depth.
+    # By "depth" here we mean nesting within a namespace.
+    # E.g.,
+    # class foo {};
+    # namespace n { class foo {}; class bar {}; }
+    # would be represented as { { 0 1 } { 1 2 } }.
+    # This is only used if the selected language permits it.
+    set DEFAULT_CLASS_SPECS {}
+
+    # Number of members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_MEMBERS 0
+
+    # Number of static members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_MEMBERS 0
+
+    # Number of methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_METHODS 0
+
+    # Number of static methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_METHODS 0
+
+    set suffixes(c) "c"
+    set suffixes(c++) "cc"
+
+    # Helper function to generate .exp build files.
+
+    proc _gen_build_exp_files { program_name nr_workers output_dir code } {
+	verbose -log "_gen_build_exp_files: $nr_workers workers"
+	for { set i 0 } { $i < $nr_workers } { incr i } {
+	    set file_name "$output_dir/${program_name}-${i}.exp"
+	    verbose -log "_gen_build_exp_files: Generating $file_name"
+	    set f [open $file_name "w"]
+	    puts $f "# DO NOT EDIT, machine generated file."
+	    puts $f "# See perftest.exp:GenPerfTest::gen_build_exp_files."
+	    puts $f ""
+	    puts $f "set worker_nr $i"
+	    puts $f ""
+	    puts $f "# The body of the file is supplied by the test."
+	    puts $f ""
+	    puts $f $code
+	    close $f
+	}
+	return 0
+    }
+
+    # Generate .exp files to build all the "pieces" of the testcase.
+    # This doesn't include "main" or any test-specific stuff.
+    # This mostly consists of the "bulk" (aka "crap" :-)) of the testcase to
+    # give gdb something meaty to chew on.
+    # The result is 0 for success, -1 for failure.
+    #
+    # Benchmarks generated by some of the tests are big.  I mean really big.
+    # And it's a pain to build them one piece at a time; we need a parallel build.
+    # To achieve this, given the framework we're working with, we generate
+    # several .exp files, and then let testsuite/Makefile.in's support for
+    # parallel runs of the testsuite do its thing.
+
+    proc gen_build_exp_files { test_description_exp make_config_thunk_name } {
+	global objdir PERF_TEST_COMPILE_PARALLELISM
+
+	if { [file tail $test_description_exp] != $test_description_exp } {
+	    error "test description file contains directory name"
+	}
+
+	set program_name [file rootname $test_description_exp]
+
+	set output_dir "$objdir/gdb.perf/pieces/$program_name"
+	file mkdir $output_dir
+
+	# N.B. The generation code below cannot reference anything that exists
+	# here; the code isn't run until later, in another process.  That is
+	# why we split up the assignment to $code.
+	# TODO(dje): Not the cleanest way, but simple enough for now.
+	set code {
+	    # This code is put in each copy of the generated .exp file.
+
+	    load_lib perftest.exp
+
+	    GenPerfTest::load_test_description}
+	append code " $test_description_exp"
+	append code {
+
+	    array set testcase [}
+	append code "$make_config_thunk_name"
+	append code {]
+
+	    if { [GenPerfTest::compile_pieces testcase $worker_nr] < 0 } {
+		return -1
+	    }
+
+	    return 0
+	}
+
+	return [_gen_build_exp_files $program_name $PERF_TEST_COMPILE_PARALLELISM $output_dir $code]
+    }
+
+    # Load a perftest description.
+    # Test descriptions are used to build the input files (binary + shlibs)
+    # of one or more performance tests.
+
+    proc load_test_description { basename } {
+	global srcdir
+
+	if { [file tail $basename] != $basename } {
+	    error "test description file contains directory name"
+	}
+
+	verbose -log "load_file $srcdir/gdb.perf/$basename"
+	if { [load_file $srcdir/gdb.perf/$basename] == 0 } {
+	    error "Unable to load test description $basename"
+	}
+    }
+
+    # Create a testcase object for test NAME.
+    # The caller must call this as:
+    # array set my_test [GenPerfTest::init_testcase $name]
+
+    proc init_testcase { name } {
+	set testcase(name) $name
+	set testcase(language) $GenPerfTest::DEFAULT_LANGUAGE
+	set testcase(run_names) [list $name]
+	set testcase(binary_sources) $GenPerfTest::DEFAULT_BINARY_SOURCES
+	set testcase(nr_shlibs) $GenPerfTest::DEFAULT_NR_SHLIBS
+	set testcase(nr_compunits) $GenPerfTest::DEFAULT_NR_COMPUNITS
+
+	set testcase(nr_extern_globals) $GenPerfTest::DEFAULT_NR_EXTERN_GLOBALS
+	set testcase(nr_static_globals) $GenPerfTest::DEFAULT_NR_STATIC_GLOBALS
+	set testcase(nr_extern_functions) $GenPerfTest::DEFAULT_NR_EXTERN_FUNCTIONS
+	set testcase(nr_static_functions) $GenPerfTest::DEFAULT_NR_STATIC_FUNCTIONS
+
+	set testcase(class_specs) $GenPerfTest::DEFAULT_CLASS_SPECS
+	set testcase(nr_members) $GenPerfTest::DEFAULT_NR_MEMBERS
+	set testcase(nr_static_members) $GenPerfTest::DEFAULT_NR_STATIC_MEMBERS
+	set testcase(nr_methods) $GenPerfTest::DEFAULT_NR_METHODS
+	set testcase(nr_static_methods) $GenPerfTest::DEFAULT_NR_STATIC_METHODS
+
+	# The location of this file drives the location of all other files.
+	# The choice is derived from standard_output_file.  We don't use it
+	# directly because of the parallel build support: we want each worker's
+	# log/sum files to go in different directories, but we don't want their
+	# output to go in different directories.
+	# N.B. The value here must be kept in sync with Makefile.in.
+	global objdir
+	set name_no_spaces [_convert_spaces $name]
+	set testcase(binfile) "$objdir/gdb.perf/outputs/$name_no_spaces/$name_no_spaces"
+
+	return [array get testcase]
+    }
+
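+    # Verify that each parameter given as a list of per-run values has
+    # exactly one value per run, and that the values never decrease from
+    # one run to the next.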
+    proc _verify_parameter_lengths { self_var } {
+	upvar 1 $self_var self
+	set params {
+	    binary_sources
+	    nr_shlibs nr_compunits
+	    nr_extern_globals nr_static_globals
+	    nr_extern_functions nr_static_functions
+	    class_specs
+	    nr_members nr_static_members
+	    nr_methods nr_static_methods
+	}
+	set nr_runs [llength $self(run_names)]
+	foreach p $params {
+	    set n [llength $self($p)]
+	    if { $n > 1 } {
+		if { $n != $nr_runs } {
+		    error "Bad number of values for parameter $p"
+		}
+		set values $self($p)
+		for { set i 0 } { $i < $n - 1 } { incr i } {
+		    if { [lindex $values $i] > [lindex $values [expr $i + 1]] } {
+			error "Values of parameter $p are not increasing"
+		    }
+		}
+	    }
+	}
+    }
+
+    # Verify the testcase is valid (as best we can, this isn't exhaustive).
+
+    proc _verify_testcase { self_var } {
+	upvar 1 $self_var self
+	_verify_parameter_lengths self
+
+	# Each test must supply its own main().  We don't check for main here,
+	# but we do verify the test supplied something.
+	if { [llength $self(binary_sources)] == 0 } {
+	    error "Missing value for binary_sources"
+	}
+    }
+
+    # Return the value of parameter PARAM for run RUN_NR.
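+    # E.g., [_get_param {10 20 30} 1] returns 20, while [_get_param 10 1]
+    # returns 10: a scalar applies to every run.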
+
+    proc _get_param { param run_nr } {
+	if { [llength $param] == 1 } {
+	    # Since PARAM may be a list of lists we need to use lindex.  This
+	    # also works for scalars (scalars are degenerate lists).
+	    return [lindex $param 0]
+	}
+	return [lindex $param $run_nr]
+    }
+
+    # Return non-zero if all files (binaries + shlibs) can be compiled from
+    # one set of object files.  This is a simple optimization to speed up
+    # test build times.  This happens if the only variation among runs is
+    # nr_shlibs or nr_compunits.
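+    # E.g., if runs "small" and "big" differ only in nr_compunits (say
+    # 10 vs 100), one set of object files suffices and this returns 1.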
+
+    proc _static_object_files_p { self_var } {
+	upvar 1 $self_var self
+	# These values are either scalars, or can vary across runs but don't
+	# affect whether we can share the generated object files between
+	# runs.
+	set static_object_file_params {
+	    name language run_names nr_shlibs nr_compunits binary_sources
+	}
+	foreach name [array names self] {
+	    if { [lsearch $static_object_file_params $name] < 0 } {
+		# name is not in static_object_file_params.
+		if { [llength $self($name)] > 1 } {
+		    # The user could provide a list that is all the same value,
+		    # so check for that.
+		    set first_value [lindex $self($name) 0]
+		    foreach elm [lrange $self($name) 1 end] {
+			if { $elm != $first_value } {
+			    return 0
+			}
+		    }
+		}
+	    }
+	}
+	return 1
+    }
+
+    # Return non-zero if classes are enabled.
+
+    proc _classes_enabled_p { self_var run_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	foreach elm $class_specs {
+	    if { [llength $elm] != 2 } {
+		error "Bad class spec: $elm"
+	    }
+	    if { [lindex $elm 1] > 0 } {
+		return 1
+	    }
+	}
+	return 0
+    }
+
+    # Spaces in file names are a pain, so remove them.
+    # They appear if the user puts spaces in the test name or run name.
+
+    proc _convert_spaces { file_name } {
+	return [regsub -all " " $file_name "-"]
+    }
+
+    # Return the compilation flags for the test.
+
+    proc _compile_flags { self_var } {
+	upvar 1 $self_var self
+	set result {debug}
+	switch $self(language) {
+	    c++ {
+		lappend result "c++"
+	    }
+	}
+	return $result
+    }
+
+    # Return the path to put source/object files in for run number RUN_NR.
+
+    proc _make_object_dir_name { self_var static run_nr } {
+	upvar 1 $self_var self
+	# Note: The output directory already includes the name of the test
+	# description file.
+	set bindir [file dirname $self(binfile)]
+	# Put the pieces in a subdirectory; there are a lot of them.
+	if $static {
+	    return "$bindir/pieces"
+	} else {
+	    set run_name [_convert_spaces [lindex $self(run_names) $run_nr]]
+	    return "$bindir/pieces/$run_name"
+	}
+    }
+
+    # CU_NR is either the compilation unit number or "main".
+    # RUN_NR is ignored if STATIC is non-zero.
+
+    proc _make_binary_source_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "${run_name}-${cu_nr}.$source_suffix"
+	} else {
+	    set source_name "$self(name)-${cu_nr}.$source_suffix"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $source_name]"
+    }
+
+    # Generated object files get put in the same directory as their source.
+
+    proc _make_binary_object_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_name [_make_binary_source_name self $static $run_nr $cu_nr]
+	return [file rootname $source_name].o
+    }
+
+    proc _make_shlib_source_name { self_var static run_nr so_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "$self(name)-${run_name}-lib${so_nr}-${cu_nr}.$source_suffix"
+	} else {
+	    set source_name "$self(name)-lib${so_nr}-${cu_nr}.$source_suffix"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $source_name]"
+    }
+
+    # Return the list of source/object files for the binary.
+    # This is the source files specified in test param binary_sources as well
+    # as the names of all the object file "pieces".
+    # STATIC is the value of _static_object_files_p for the test.
+
+    proc _make_binary_input_file_names { self_var static run_nr } {
+	upvar 1 $self_var self
+	global srcdir subdir
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	set result {}
+	foreach source [_get_param $self(binary_sources) $run_nr] {
+	    lappend result "$srcdir/$subdir/$source"
+	}
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_make_binary_object_name self $static $run_nr $cu_nr]
+	}
+	return $result
+    }
+
+    proc _make_binary_name { self_var run_nr } {
+	upvar 1 $self_var self
+	set run_name [_get_param $self(run_names) $run_nr]
+	set exe_name "$self(binfile)-[_convert_spaces ${run_name}]"
+	return $exe_name
+    }
+
+    proc _make_shlib_name { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set lib_name "$self(name)-${run_name}-lib${so_nr}"
+	} else {
+	    set lib_name "$self(name)-lib${so_nr}"
+	}
+	set output_dir [file dirname $self(binfile)]
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $lib_name]"
+    }
+
+    proc _create_file { self_var path } {
+	upvar 1 $self_var self
+	verbose -log "Creating file: $path"
+	set f [open $path "w"]
+	return $f
+    }
+
+    proc _write_header { self_var f } {
+	upvar 1 $self_var self
+	puts $f "// DO NOT EDIT, machine generated file."
+	puts $f "// See perftest.exp:GenPerfTest."
+    }
+
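+    # Write the static globals for one compilation unit.  E.g., with
+    # nr_static_globals >= 2, the first two generated declarations are:
+    #   static const int static_global_0 = 0;
+    #   static int static_global_1 = 1;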
+    proc _write_static_globals { self_var f run_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_globals [_get_param $self(nr_static_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Parameterizing the
+	# number of bss/non-bss globals may also be useful; that can be
+	# added later, if warranted.
+	for { set i 0 } { $i < $nr_static_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "static ${const}int static_global_$i = $i;"
+	}
+    }
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_globals { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_extern_globals [_get_param $self(nr_extern_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Parameterizing the
+	# number of bss/non-bss globals may also be useful; that can be
+	# added later, if warranted.
+	for { set i 0 } { $i < $nr_extern_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "${const}int ${id}global_${cu_nr}_$i = $cu_nr * 1000 + $i;"
+	}
+    }
+
+    proc _write_static_functions { self_var f run_nr } {
+	upvar 1 $self_var self
+	set nr_static_functions [_get_param $self(nr_static_functions) $run_nr]
+	for { set i 0 } { $i < $nr_static_functions } { incr i } {
+	    puts $f ""
+	    puts $f "static void"
+	    puts $f "static_function_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_functions { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	set nr_extern_functions [_get_param $self(nr_extern_functions) $run_nr]
+	for { set i 0 } { $i < $nr_extern_functions } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${id}function_${cu_nr}_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
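+    # Write the classes for one compilation unit, as directed by
+    # class_specs.  E.g., a spec of { 1 1 } with nr_members = 1 and the
+    # other class knobs zero produces, for compunit 0:
+    #   namespace ns_0
+    #   {
+    #   class class_0_0
+    #   {
+    #    public:
+    #     int member_0;
+    #   };
+    #   }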
+    proc _write_classes { self_var f run_nr cu_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	set nr_members [_get_param $self(nr_members) $run_nr]
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	foreach spec $class_specs {
+	    set depth [lindex $spec 0]
+	    set nr_classes [lindex $spec 1]
+	    puts $f ""
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "namespace ns_${i}"
+		puts $f "\{"
+	    }
+	    for { set c 0 } { $c < $nr_classes } { incr c } {
+		set class_name "class_${cu_nr}_${c}"
+		puts $f "class $class_name"
+		puts $f "\{"
+		puts $f " public:"
+		for { set i 0 } { $i < $nr_members } { incr i } {
+		    puts $f "  int member_$i;"
+		}
+		for { set i 0 } { $i < $nr_static_members } { incr i } {
+		    # Rather than parameterize the number of const/non-const
+		    # members, and their types, we keep it simple for now.
+		    if { $i % 2 == 0 } {
+			puts $f "  static const int static_member_$i = $i;"
+		    } else {
+			puts $f "  static int static_member_$i;"
+		    }
+		}
+		for { set i 0 } { $i < $nr_methods } { incr i } {
+		    puts $f "  void method_$i (void);"
+		}
+		for { set i 0 } { $i < $nr_static_methods } { incr i } {
+		    puts $f "  static void static_method_$i (void);"
+		}
+		puts $f "\};"
+		_write_static_members self $f $run_nr $class_name
+		_write_methods self $f $run_nr $class_name
+		_write_static_methods self $f $run_nr $class_name
+	    }
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "\}"
+	    }
+	}
+    }
+
+    proc _write_static_members { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	# Rather than parameterize the number of const/non-const
+	# members, and their types, we keep it simple for now.
+	for { set i 0 } { $i < $nr_static_members } { incr i } {
+	    if { $i % 2 == 0 } {
+		# Static const members are initialized inline.
+	    } else {
+		puts $f "int ${class_name}::static_member_$i = $i;"
+	    }
+	}
+    }
+
+    proc _write_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	for { set i 0 } { $i < $nr_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _write_static_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	for { set i 0 } { $i < $nr_static_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::static_method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _gen_binary_compunit_source { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_file [_make_binary_source_name self $static $run_nr $cu_nr]
+	set f [_create_file self $source_file]
+	_write_header self $f
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    # Generate the sources for the pieces of the binary.
+    # The result is a list of source file names and accompanying object file
+    # names.  The pieces are split across workers.
+    # E.g., with 10 workers the result for worker 0 is
+    # { { source0 object0 } { source10 object10 } ... }
+
+    proc _gen_binary_source { self_var worker_nr static run_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::_gen_binary_source worker $worker_nr run $run_nr, started [timestamp -format %c]"
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	set result {}
+	for { set cu_nr $worker_nr } { $cu_nr < $nr_compunits } { incr cu_nr $nr_workers } {
+	    set source_file [_gen_binary_compunit_source self $static $run_nr $cu_nr]
+	    set object_file [_make_binary_object_name self $static $run_nr $cu_nr]
+	    lappend result [list $source_file $object_file]
+	}
+	verbose -log "GenPerfTest::_gen_binary_source worker $worker_nr run $run_nr, done [timestamp -format %c]"
+	return $result
+    }
+
+    proc _gen_shlib_compunit_source { self_var static run_nr so_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_file [_make_shlib_source_name self $static $run_nr $so_nr $cu_nr]
+	set f [_create_file self $source_file]
+	_write_header self $f
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "shlib${so_nr}_" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "shlib${so_nr}_" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    proc _gen_shlib_source { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::_gen_shlib_source run $run_nr so $so_nr, started [timestamp -format %c]"
+	set result ""
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_gen_shlib_compunit_source self $static $run_nr $so_nr $cu_nr]
+	}
+	verbose -log "GenPerfTest::_gen_shlib_source run $run_nr so $so_nr, done [timestamp -format %c]"
+	return $result
+    }
+
+    # Write all sideband non-file inputs, as well as OPTIONS, to INPUTS_FILE.
+
+    proc _write_inputs_file { inputs_file options } {
+	global env
+	set f [open $inputs_file "w"]
+	puts $f "options: $options"
+	puts $f "PATH: [getenv PATH]"
+	close $f
+    }
+
+    # Generate the sha1sum of all the inputs.
+    # The result is a list of { error_code text }.
+    # Upon success error_code is zero and text is the sha1sum.
+    # Otherwise, error_code is non-zero and text is the error message.
+
+    proc _gen_sha1sum_for_inputs { source inputs } {
+	global CAT_PROGRAM SHA1SUM_PROGRAM
+	set catch_result [catch "exec $CAT_PROGRAM $source $inputs | $SHA1SUM_PROGRAM" output]
+	return [list $catch_result $output]
+    }
+
+    # Return the contents of TEXT_FILE.
+    # It is assumed TEXT_FILE exists and is readable.
+    # This is used for reading files containing sha1sums; the
+    # last newline is removed.
+
+    proc _read_file { text_file } {
+	set f [open $text_file "r"]
+	set result [read -nonewline $f]
+	close $f
+	return $result
+    }
+
+    # Write TEXT to TEXT_FILE.
+    # It is assumed TEXT_FILE can be opened/created and written to.
+
+    proc _write_file { text_file text } {
+	set f [open $text_file "w"]
+	puts $f $text
+	close $f
+    }
+
+    # Wrapper on gdb_compile* that computes sha1sums of inputs to decide
+    # whether the compile is needed.
+    # The result is the result of gdb_compile*: "" == success, otherwise
+    # a compilation error occurred and the output is an error message.
+    # This doesn't take all inputs into account, just the useful ones.
+    # As an extension of (or simplification over) gdb_compile*: if TYPE is
+    # "shlib" we call gdb_compile_shlib, otherwise we call gdb_compile.
+    # Other possibilities *could* be handled this way, e.g., pthreads.  TBD.
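+    # E.g., compiling foo.cc to foo.o also writes foo.o.inputs (the
+    # sideband inputs) and, on success, foo.o.sha1sum; a later call whose
+    # combined sha1sum matches the saved one skips the recompile.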
+
+    proc _perftest_compile { source dest type options } {
+	verbose -log "_perftest_compile $source $dest $type $options"
+	# To keep things simple, we put all non-file inputs into a file and
+	# then cat all input files through sha1sum.
+	set sha1sum_file ${dest}.sha1sum
+	set sha1new_file ${dest}.sha1new
+	set inputs_file ${dest}.inputs
+	_write_inputs_file $inputs_file $options
+	set sha1sum [_gen_sha1sum_for_inputs $source $inputs_file]
+	if { [lindex $sha1sum 0] != 0 } {
+	    return "sha1sum generation error: [lindex $sha1sum 1]"
+	}
+	set sha1sum [lindex $sha1sum 1]
+	if [file exists $sha1sum_file] {
+	    set last_sha1sum [_read_file $sha1sum_file]
+	    verbose -log "last: $last_sha1sum, new: $sha1sum"
+	    if { $sha1sum == $last_sha1sum } {
+		verbose -log "using existing build for $dest"
+		return ""
+	    }
+	}
+	# No such luck, we need to compile.
+	file delete $sha1sum_file
+	if { $type == "shlib" } {
+	    set result [gdb_compile_shlib $source $dest $options]
+	} else {
+	    set result [gdb_compile $source $dest $type $options]
+	}
+	if { $result == "" } {
+	    verbose -log "wrote sha1sum: $sha1sum"
+	    _write_file $sha1sum_file $sha1sum
+	}
+	return $result
+    }
+
+    proc _compile_binary_pieces { self_var worker_nr static run_nr } {
+	upvar 1 $self_var self
+	set compile_flags [_compile_flags self]
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	# Generate the source first so we can more easily measure how long that
+	# takes.  [It takes hardly any time at all, relative to the time it
+	# takes to compile, but this will provide numbers to show that.]
+	set todo_list [_gen_binary_source self $worker_nr $static $run_nr]
+	verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run $run_nr, started [timestamp -format %c]"
+	foreach elm $todo_list {
+	    set source_file [lindex $elm 0]
+	    set object_file [lindex $elm 1]
+	    if { [_perftest_compile $source_file $object_file object $compile_flags] != "" } {
+		verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run $run_nr, failed [timestamp -format %c]"
+		return -1
+	    }
+	}
+	verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run $run_nr, done [timestamp -format %c]"
+	return 0
+    }
+
+    # Helper function to compile the pieces of a shlib.
+    # Note: gdb_compile_shlib{,_pthreads} don't support first building object
+    # files and then building the shlib.  Therefore our hands are tied, and we
+    # just build the shlib in one step.  This is less of a parallelization
+    # problem if there are multiple shlibs: Each worker can build a different
+    # shlib.  If this proves to be a problem in practice we can enhance
+    # gdb_compile_shlib* then.
+
+    proc _compile_shlib { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	set source_files [_gen_shlib_source self $static $run_nr $so_nr]
+	set shlib_file [_make_shlib_name self $static $run_nr $so_nr]
+	set compile_flags [_compile_flags self]
+	if { [_perftest_compile $source_files $shlib_file shlib $compile_flags] != "" } {
+	    return -1
+	}
+	return 0
+    }
+
+    # Compile the pieces of the binary and possible shlibs for the test.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	verbose -log "_compile_pieces: static flag: $static"
+	file mkdir "[file dirname $self(binfile)]/pieces"
+	if $static {
+	    # All the pieces look the same (run over run), so just build all the
+	    # shlibs of the last run (which is the largest).
+	    set last_run [expr $nr_runs - 1]
+	    set nr_shlibs [_get_param $self(nr_shlibs) $last_run]
+	    set object_dir [_make_object_dir_name self $static ignored]
+	    file mkdir $object_dir
+	    for { set so_nr $worker_nr } { $so_nr < $nr_shlibs } { incr so_nr $nr_workers } {
+		if { [_compile_shlib self $static $last_run $so_nr] < 0 } {
+		    return -1
+		}
+	    }
+	    if { [_compile_binary_pieces self $worker_nr $static $last_run] < 0 } {
+		return -1
+	    }
+	} else {
+	    for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+		set nr_shlibs [_get_param $self(nr_shlibs) $run_nr]
+		set object_dir [_make_object_dir_name self $static $run_nr]
+		file mkdir $object_dir
+		for { set so_nr $worker_nr } { $so_nr < $nr_shlibs } { incr so_nr $nr_workers } {
+		    if { [_compile_shlib self $static $run_nr $so_nr] < 0 } {
+			return -1
+		    }
+		}
+		if { [_compile_binary_pieces self $worker_nr $static $run_nr] < 0 } {
+		    return -1
+		}
+	    }
+	}
+	return 0
+    }
+
+    proc compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile_pieces worker $worker_nr, started [timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile_pieces self $worker_nr] < 0 } {
+	    verbose -log "GenPerfTest::compile_pieces worker $worker_nr, failed [timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile_pieces worker $worker_nr, done [timestamp -format %c]"
+	return 0
+    }
+
+    proc _make_shlib_flags { self_var static run_nr } {
+	upvar 1 $self_var self
+	set nr_shlibs [_get_param $self(nr_shlibs) $run_nr]
+	set result ""
+	for { set i 0 } { $i < $nr_shlibs } { incr i } {
+	    lappend result "shlib=[_make_shlib_name self $static $run_nr $i]"
+	}
+	return $result
+    }
+
+    proc _compile_binary { self_var static run_nr } {
+	upvar 1 $self_var self
+	set input_files [_make_binary_input_file_names self $static $run_nr]
+	set binary_file [_make_binary_name self $run_nr]
+	set compile_flags [_compile_flags self]
+	set shlib_flags [_make_shlib_flags self $static $run_nr]
+	if { $shlib_flags != "" } {
+	    set compile_flags "$compile_flags $shlib_flags"
+	}
+	if { [_perftest_compile $input_files $binary_file executable $compile_flags] != "" } {
+	    return -1
+	}
+	return 0
+    }
+
+    # Helper function for compile.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile { self_var } {
+	upvar 1 $self_var self
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	verbose -log "_compile: static flag: $static"
+	for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+	    if { [_compile_binary self $static $run_nr] < 0 } {
+		return -1
+	    }
+	}
+	return 0
+    }
+
+    # Main entry point to compile the test program.
+    # It is assumed all the pieces of the binary (all the .o's, except those
+    # from test-supplied sources) have already been built with compile_pieces.
+    # There's no need to compile any shlibs here, as compile_pieces will have
+    # already built them too.
+    # The result is 0 for success, -1 for failure.
+
+    proc compile { self_var } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile, started [timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile self] < 0 } {
+	    verbose -log "GenPerfTest::compile, failed [timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile, done [timestamp -format %c]"
+	return 0
+    }
+
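+    # Standard driver for the build steps of generated tests.
+    # A test's .exp file can invoke it as, e.g. (a sketch; the config
+    # thunk name is hypothetical):
+    #   GenPerfTest::standard_driver gmonster1.exp make_gmonster1_config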
+    proc standard_driver { exp_file_name make_config_thunk_name } {
+	global GDB_PERFTEST_MODE
+	switch $GDB_PERFTEST_MODE {
+	    gen-build-exps {
+		if { [GenPerfTest::gen_build_exp_files $exp_file_name \
+			  $make_config_thunk_name] < 0 } {
+		    return -1
+		}
+	    }
+	    build-pieces {
+		;# Nothing to do.
+	    }
+	    compile {
+		array set testcase [$make_config_thunk_name]
+		if { [GenPerfTest::compile testcase] < 0 } {
+		    return -1
+		}
+	    }
+	    run {
+		;# Nothing to do.
+	    }
+	    both {
+		;# Don't do anything here.  Tests that use us must have
+		;# explicitly separate compile/run steps.
+	    }
+	}
+	return 0
+    }
+}
+
+if ![info exists PERF_TEST_COMPILE_PARALLELISM] {
+    set PERF_TEST_COMPILE_PARALLELISM $GenPerfTest::DEFAULT_PERF_TEST_COMPILE_PARALLELISM
+}


* [RFC] Monster testcase generator for performance testsuite
@ 2015-06-11  1:48 Doug Evans
  0 siblings, 0 replies; 8+ messages in thread
From: Doug Evans @ 2015-06-11  1:48 UTC (permalink / raw)
  To: gdb-patches; +Cc: keiths, yao.qi

Hi.

fyi, here's the latest version of my monster testcase generator.

I added a README so that there's something to read in the patch.
However, I think a better place for most of it is the wiki.

I hope to submit a real patch soon.
I'm sending this now because it's a good time to give another heads up.

Last discussion of the patch was here.
https://sourceware.org/ml/gdb-patches/2015-01/msg00013.html

diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
index c064f06..ffda7b9 100644
--- a/gdb/testsuite/Makefile.in
+++ b/gdb/testsuite/Makefile.in
@@ -227,16 +227,48 @@ do-check-parallel: $(TEST_TARGETS)

  @GMAKE_TRUE@check/%.exp:
  @GMAKE_TRUE@	-mkdir -p outputs/$*
-@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=yes --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)
+@GMAKE_TRUE@	@$(DO_RUNTEST) GDB_PARALLEL=. --outdir=outputs/$* $*.exp $(RUNTESTFLAGS)

  check/no-matching-tests-found:
  	@echo ""
  	@echo "No matching tests found."
  	@echo ""

+# Utility rule invoked by step 2 of the build-perf rule.
+@GMAKE_TRUE@workers/%.worker:
+@GMAKE_TRUE@	mkdir -p gdb.perf/outputs/$*
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --outdir=gdb.perf/outputs/$* lib/build-piece.exp WORKER=$* GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=compile GDB_PERFTEST_SUBMODE=build-pieces
+
+# Utility rule to build tests that support it in parallel.
+# The build is broken into 3 steps distinguished by GDB_PERFTEST_SUBMODE:
+# gen-workers, build-pieces, final.
+#
+# GDB_PERFTEST_MODE appears *after* RUNTESTFLAGS here because we don't want
+# anything in RUNTESTFLAGS to override it.
+#
+# We don't delete the outputs directory here as these programs can take
+# awhile to build, and perftest.exp has support for deciding whether to
+# recompile them.  If you want to remove these directories, make clean.
+#
+# The point of step 1 is to construct the set of worker tasks for step 2.
+# All of the information needed by build-piece.exp is contained in the name
+# of the generated .worker file.
+@GMAKE_TRUE@build-perf: $(abs_builddir)/site.exp
+@GMAKE_TRUE@	rm -rf gdb.perf/workers
+@GMAKE_TRUE@	mkdir -p gdb.perf/workers
+@GMAKE_TRUE@	@: Step 1: Generate the build .worker files.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf/workers GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=compile GDB_PERFTEST_SUBMODE=gen-workers
+@GMAKE_TRUE@	@: Step 2: Compile the pieces.  Here is the build parallelism.
+@GMAKE_TRUE@	$(MAKE) $$(cd gdb.perf && echo workers/*/*.worker)
+@GMAKE_TRUE@	@: Step 3: Do the final link.
+@GMAKE_TRUE@	$(DO_RUNTEST) --status --directory=gdb.perf --outdir gdb.perf GDB_PARALLEL=gdb.perf $(RUNTESTFLAGS) GDB_PERFTEST_MODE=compile GDB_PERFTEST_SUBMODE=final
+
+# The default is to both compile and run the tests.
+GDB_PERFTEST_MODE = both
+
  check-perf: all $(abs_builddir)/site.exp
  	@if test ! -d gdb.perf; then mkdir gdb.perf; fi
-	$(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf GDB_PERFTEST_MODE=both $(RUNTESTFLAGS)
+	$(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf GDB_PERFTEST_MODE=$(GDB_PERFTEST_MODE) $(RUNTESTFLAGS)

  force:;

@@ -245,6 +277,7 @@ clean mostlyclean:
  	-rm -f core.* *.tf *.cl tracecommandsscript copy1.txt zzz-gdbscript
  	-rm -f *.dwo *.dwp
  	-rm -rf outputs temp cache
+	-rm -rf gdb.perf/workers gdb.perf/outputs gdb.perf/temp gdb.perf/cache
  	-rm -f read1.so expect-read1
  	if [ x"${ALL_SUBDIRS}" != x ] ; then \
  	    for dir in ${ALL_SUBDIRS}; \
diff --git a/gdb/testsuite/gdb.base/watchpoint.exp b/gdb/testsuite/gdb.base/watchpoint.exp
index b2924d7..fcc9a8d 100644
--- a/gdb/testsuite/gdb.base/watchpoint.exp
+++ b/gdb/testsuite/gdb.base/watchpoint.exp
@@ -464,12 +464,11 @@ proc test_complex_watchpoint {} {
  		pass $test
  	    }
  	    -re "can't compute CFA for this frame.*\r\n$gdb_prompt $" {
-		global compiler_info no_hw
+		global no_hw

  		# GCC < 4.5.0 does not get LOCATIONS_VALID set by dwarf2read.c.
  		# Therefore epilogue unwinder gets applied which is
  		# incompatible with dwarf2_frame_cfa.
-		verbose -log "compiler_info: $compiler_info"
  		if {$no_hw && ([test_compiler_info {gcc-[0-3]-*}]
  			       || [test_compiler_info {gcc-4-[0-4]-*}])} {
  		    xfail "$test (old GCC has broken watchpoints in epilogues)"
diff --git a/gdb/testsuite/gdb.cp/temargs.exp b/gdb/testsuite/gdb.cp/temargs.exp
index e5aff51..f086a63 100644
--- a/gdb/testsuite/gdb.cp/temargs.exp
+++ b/gdb/testsuite/gdb.cp/temargs.exp
@@ -34,7 +34,6 @@ if {![runto_main]} {
  # NOTE: prepare_for_testing calls get_compiler_info, which we need
  # for the test_compiler_info calls.
  # gcc 4.4 and earlier don't emit enough info for some of our template tests.
-verbose -log "compiler_info: $compiler_info"
  set have_older_template_gcc 0
  set have_pr_41736_fixed 1
  set have_pr_45024_fixed 1
diff --git a/gdb/testsuite/gdb.perf/README b/gdb/testsuite/gdb.perf/README
new file mode 100644
index 0000000..86fb9bd
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/README
@@ -0,0 +1,192 @@
+The GDB Performance Testsuite
+=============================
+
+This README contains notes on hacking on GDB's performance testsuite.
+For notes on GDB's regular testsuite or how to run the performance testsuite,
+see ../README.
+
+Generated tests
+***************
+
+The testcase generator lets us easily test GDB on large programs.
+The "monster" tests are mocks of real programs where GDB's
+performance has been a problem.  Often it is difficult to build
+these monster programs, but when measuring performance one doesn't
+need the "real" program, all one needs is something that looks like
+the real program along the axis one is measuring; for example, the
+number of CUs (compilation units).
+
+Structure of generated tests
+****************************
+
+Generated tests consist of a binary and potentially any number of
+shared libraries.  One of these shared libraries, called "tail", is
+special.  It is used to provide mocks of system provided code, and
+contains no generated code.  Typically system-provided libraries
+are searched last, which can have significant performance consequences,
+so we provide a means to exercise that.
+
+The binary and the generated shared libraries can have a mix of
+manually written and generated code.  Manually written code is
+specified with the {binary,gen_shlib}_extra_sources config parameters,
+which are lists of source files in testsuite/gdb.perf.  Generated
+files are controlled with various configuration knobs.
+
+Once a large test program is built, it makes sense to use it as much
+as possible (i.e., with multiple tests).  Therefore perf data collection
+for generated tests is split into two passes: the first pass builds
+all the generated tests, and the second pass runs all the performance
+tests.  The first pass is called "build-perf" and the second pass is
+called "check-perf".  See ../README for instructions on running the tests.
+
+Generated test directory layout
+*******************************
+
+All output lives under testsuite/gdb.perf in the build directory.
+
+Because some of the tests can get really large (and take potentially
+minutes to compile), parallelism is built into their compilation.
+Note however that we don't run the tests in parallel as it can skew
+the results.
+
+To keep things simple and stay consistent, we use the same
+mechanism used by "make check-parallel".  There is one catch: we need
+one .exp for each "worker" but the .exp file must come from the source
+tree.  To avoid generating .exp files for each worker we invoke
+lib/build-piece.exp for each worker with different arguments.
+The file build-piece.exp lives in "lib" to prevent dejagnu from finding
+it when it goes to look for .exp scripts to run.
+
+Another catch is that each parallel build worker needs its own directory
+so that their gdb.{log,sum} files don't collide.  On the other hand,
+it's easier if their output (all the object files and shared libraries)
+is in the same directory.
+
+The above considerations yield the resulting layout:
+
+$objdir/testsuite/gdb.perf/
+
+	gdb.log, gdb.sum: result of doing final link and running tests
+
+	workers/
+
+		gdb.log, gdb.sum: result of gen-workers step
+
+		$program_name/
+
+			${program_name}-0.worker
+			...
+			${program_name}-N.worker: input to build-pieces step
+
+	outputs/
+
+		${program_name}/
+
+			${program_name}-0/
+			...
+			${program_name}-N/
+
+				gdb.log, gdb.sum: for each build-piece worker
+
+			pieces/
+
+				generated sources, object files, shlibs
+
+			${run_name_1}: binary for test config #1
+			...
+			${run_name_N}: binary for test config #N
+
+Generated test configuration knobs
+**********************************
+
+The monster program generator provides various knobs for building different
+kinds of monster programs.  For a list of the knobs see function
+GenPerfTest::init_testcase in testsuite/lib/perftest.exp.
+Most knobs are self-explanatory.
+Here is a description of the less obvious ones.
+
+binary_extra_sources
+
+	This is the list of non-machine generated sources that go
+	into the test binary.  There must be at least one: the one
+	with main.
+
+class_specs
+
+	List of pairs of class depth and number of classes at that depth.
+	By "depth" here we mean nesting within a namespace.
+
+	E.g.,
+	class foo {};
+	namespace n { class foo {}; class bar {}; }
+	would be represented as { { 0 1 } { 1 2 } }.
+
+	The naming of each namespace is "ns_<depth>".
+	The naming of each class is "class_<cu_nr>_<class_nr>",
+	where <cu_nr> is the number of the compilation unit the
+	class is defined in.
+	There is currently no support for defining classes in headers
+	(something to be added when needed).
+
+	There's currently no support for nesting classes in classes.
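+
+	As an illustrative sketch (the test and proc names are
+	hypothetical), a test description's config proc could request one
+	top-level class plus two classes nested one namespace deep in
+	each compunit with:
+
+		array set testcase [GenPerfTest::init_testcase gmonster1]
+		set testcase(class_specs) { { 0 1 } { 1 2 } }
+		return [array get testcase]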
+
+Misc. configuration knobs
+*************************
+
+These knobs control building or running of the test and are specified
+like any global Tcl variable.
+
+CAT_PROGRAM
+
+	Default is /bin/cat; you shouldn't need to change this.
+
+SHA1SUM_PROGRAM
+
+	Default is /usr/bin/sha1sum.
+
+PERF_TEST_COMPILE_PARALLELISM
+
+	An integer, specifies the amount of parallelism in the builds.
+	Akin to make's -j flag.  The default is 10.
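+
+	These knobs can be overridden when invoking the tests, relying on
+	DejaGnu's standard name=value argument handling, e.g. (a sketch):
+
+	bash$ make -j10 build-perf \
+	    RUNTESTFLAGS="gmonster1.exp PERF_TEST_COMPILE_PARALLELISM=20"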
+
+Writing a generated test program
+********************************
+
+The best way to write a generated test program is to take an existing
+one as boilerplate.  Two good examples are gmonster1.exp and gmonster2.exp.
+gmonster1.exp builds a big binary with various custom manually written
+code, and gmonster2 is (essentially) the equivalent binary split up over
+several shared libraries.
+
+Writing a performance test that uses a generated program
+********************************************************
+
+The best way to write a test is to take an existing one as boilerplate.
+Good examples are gmonster1-*.exp and gmonster2-*.exp.
+
+The naming used thus far is that "foo.exp" builds the test program
+and there is one "foo-bar.exp" file for each performance test
+that uses test program "foo".
+
+In addition to writing the test driver .exp script, one must also
+write a Python script that is used to run the test.
+The contents of this script are defined by the performance testsuite
+harness.  It defines a class, which is a subclass of one of the
+classes in gdb.perf/lib/perftest/perftest.py.
+See gmonster-null-lookup.py for an example.
+
+Note: Since gmonster1 and gmonster2 are treated as being variations of
+the same program, each test shares the same python script.
+E.g., gmonster1-null-lookup.exp and gmonster2-null-lookup.exp
+both use gmonster-null-lookup.py.
+
+Running performance tests for generated programs
+************************************************
+
+There are two steps: build and run.
+
+Example:
+
+bash$ make -j10 build-perf RUNTESTFLAGS="gmonster1.exp"
+bash$ make -j10 check-perf RUNTESTFLAGS="gmonster1-null-lookup.exp" \
+    GDB_PERFTEST_MODE=run
diff --git a/gdb/testsuite/gdb.perf/backtrace.exp b/gdb/testsuite/gdb.perf/backtrace.exp
index a88064b..0ae4b5b 100644
--- a/gdb/testsuite/gdb.perf/backtrace.exp
+++ b/gdb/testsuite/gdb.perf/backtrace.exp
@@ -58,9 +58,12 @@ PerfTest::assemble {

      gdb_breakpoint "fun2"
      gdb_continue_to_breakpoint "fun2"
+
+    return 0
  } {
      global BACKTRACE_DEPTH

      gdb_test "python BackTrace\($BACKTRACE_DEPTH\).run()"

+    return 0
  }
diff --git a/gdb/testsuite/gdb.perf/disassemble.exp b/gdb/testsuite/gdb.perf/disassemble.exp
index fe943d8..67e9815 100644
--- a/gdb/testsuite/gdb.perf/disassemble.exp
+++ b/gdb/testsuite/gdb.perf/disassemble.exp
@@ -52,6 +52,9 @@ PerfTest::assemble {
      if ![runto_main] {
  	return -1
      }
+
+    return 0
  } {
      gdb_test "python Disassemble\(\).run()"
+    return 0
  }
diff --git a/gdb/testsuite/gdb.perf/gm-hello.cc b/gdb/testsuite/gdb.perf/gm-hello.cc
new file mode 100644
index 0000000..80a7dbc
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-hello.cc
@@ -0,0 +1,25 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <string>
+#include "gm-utils.h"
+
+#ifdef SHLIB
+#define HELLO CONCAT2 (hello_, SHLIB)
+#else
+#define HELLO hello
+#endif
+
+std::string HELLO ("Hello.");
diff --git a/gdb/testsuite/gdb.perf/gm-pervasive-typedef.cc b/gdb/testsuite/gdb.perf/gm-pervasive-typedef.cc
new file mode 100644
index 0000000..e7712dc
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-pervasive-typedef.cc
@@ -0,0 +1,30 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "gm-pervasive-typedef.h"
+
+my_int use_of_my_int;
+
+void
+call_use_my_int_1 (my_int x)
+{
+  use_of_my_int = use_my_int (x);
+}
+
+void
+call_use_my_int ()
+{
+  call_use_my_int_1 (42);
+}
diff --git a/gdb/testsuite/gdb.perf/gm-pervasive-typedef.h b/gdb/testsuite/gdb.perf/gm-pervasive-typedef.h
new file mode 100644
index 0000000..6b65c27
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-pervasive-typedef.h
@@ -0,0 +1,30 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+/* This file is used to create the conditions for the perf regression
+   in pr 16253.  */
+
+#ifndef GM_PERVASIVE_TYPEDEF_H
+#define GM_PERVASIVE_TYPEDEF_H
+
+typedef int my_int;
+
+static my_int
+use_my_int (my_int x)
+{
+  return x + 1;
+}
+
+#endif /* GM_PERVASIVE_TYPEDEF_H */
diff --git a/gdb/testsuite/gdb.perf/gm-std.cc b/gdb/testsuite/gdb.perf/gm-std.cc
new file mode 100644
index 0000000..89ac500
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-std.cc
@@ -0,0 +1,36 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <iostream>
+#include "gm-std.h"
+
+namespace gm_std
+{
+
+ostream cerr;
+
+void
+init ()
+{
+  cerr.stream = &std::cerr;
+}
+
+template class basic_ostream<char>;
+
+template
+ostream&
+operator<< (ostream& out, const char* s);
+
+}
diff --git a/gdb/testsuite/gdb.perf/gm-std.h b/gdb/testsuite/gdb.perf/gm-std.h
new file mode 100644
index 0000000..8bda713
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-std.h
@@ -0,0 +1,57 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef GM_STD_H
+#define GM_STD_H
+
+#include <iostream>
+
+namespace gm_std
+{
+
+// Mock std::cerr, so we don't have to worry about the vagaries of the
+// system-provided one.  E.g., gcc pr 65669.
+// This contains just enough to exercise what we want to.
+template<typename T>
+  class basic_ostream
+{
+ public:
+  std::ostream *stream;
+};
+
+template<typename T>
+  basic_ostream<T>&
+operator<< (basic_ostream<T>& out, const char* s)
+{
+  (*out.stream) << s;
+  return out;
+}
+
+typedef basic_ostream<char> ostream;
+
+// Inhibit implicit instantiations for required instantiations,
+// which are defined via explicit instantiations elsewhere.
+extern template class basic_ostream<char>;
+extern template ostream& operator<< (ostream&, const char*);
+
+extern ostream cerr;
+
+// Call this from main so we don't have to do the same tricks that
+// libstcd++ does with ios init'n.
+extern void init ();
+
+}
+
+#endif /* GM_STD_H */
diff --git a/gdb/testsuite/gdb.perf/gm-use-cerr.cc b/gdb/testsuite/gdb.perf/gm-use-cerr.cc
new file mode 100644
index 0000000..5ef453b
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-use-cerr.cc
@@ -0,0 +1,29 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "gm-std.h"
+#include "gm-utils.h"
+
+#ifdef SHLIB
+#define WRITE_CERR XCONCAT2 (write_cerr_, SHLIB)
+#else
+#define WRITE_CERR write_cerr
+#endif
+
+void
+WRITE_CERR ()
+{
+  gm_std::cerr << "Yikes!\n";
+}
diff --git a/gdb/testsuite/gdb.perf/gm-utils.h b/gdb/testsuite/gdb.perf/gm-utils.h
new file mode 100644
index 0000000..f95ae06
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gm-utils.h
@@ -0,0 +1,25 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef GM_UTILS_H
+#define GM_UTILS_H
+
+/* Names borrowed from include/symcat.h.  */
+#define CONCAT2(a,b) a ## b
+#define XCONCAT2(a,b) CONCAT2 (a, b)
+#define STRINGX(s) #s
+#define XSTRING(s) STRINGX (s)
+
+#endif /* GM_UTILS_H */
diff --git a/gdb/testsuite/gdb.perf/gmonster-backtrace.py b/gdb/testsuite/gdb.perf/gmonster-backtrace.py
new file mode 100644
index 0000000..b64df0b
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-backtrace.py
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of selecting a file to debug and then running to main.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class GmonsterRuntoMain(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(GmonsterRuntoMain, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def _doit(self, binfile):
+        utils.select_file(binfile)
+        utils.runto_main()
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            iteration = 5
+            while iteration > 0:
+                func = lambda: self._doit(this_run_binfile)
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-null-lookup.py b/gdb/testsuite/gdb.perf/gmonster-null-lookup.py
new file mode 100644
index 0000000..9bb839e
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-null-lookup.py
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Test handling of lookup of a symbol that doesn't exist.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class NullLookup(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(NullLookup, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            utils.select_file(this_run_binfile)
+            utils.runto_main()
+            utils.safe_execute("mt expand-symtabs")
+            iteration = 5
+            while iteration > 0:
+                utils.safe_execute("mt flush-symbol-cache")
+                func = lambda: utils.safe_execute("p symbol_not_found")
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-pervasive-typedef.py b/gdb/testsuite/gdb.perf/gmonster-pervasive-typedef.py
new file mode 100644
index 0000000..e22562a
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-pervasive-typedef.py
@@ -0,0 +1,43 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Exercise the perf issue from pr 16253.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class PervasiveTypedef(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(PervasiveTypedef, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def func(self):
+        utils.select_file(self.this_run_binfile)
+        utils.safe_execute("ptype call_use_my_int_1")
+
+    def execute_test(self):
+        for run in self.run_names:
+            self.this_run_binfile = "%s-%s" % (self.binfile,
+                                               utils.convert_spaces(run))
+            iteration = 5
+            while iteration > 0:
+                self.measure.measure(self.func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-print-cerr.py b/gdb/testsuite/gdb.perf/gmonster-print-cerr.py
new file mode 100644
index 0000000..30e86e5
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-print-cerr.py
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Test printing of std::cerr.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class PrintCerr(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(PrintCerr, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            utils.select_file(this_run_binfile)
+            utils.runto_main()
+            #utils.safe_execute("mt expand-symtabs")
+            iteration = 5
+            while iteration > 0:
+                utils.safe_execute("mt flush-symbol-cache")
+                func = lambda: utils.safe_execute("print gm_std::cerr")
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-ptype-string.py b/gdb/testsuite/gdb.perf/gmonster-ptype-string.py
new file mode 100644
index 0000000..d39f4ce
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-ptype-string.py
@@ -0,0 +1,45 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype of a std::string object.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class GmonsterPtypeString(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(GmonsterPtypeString, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            utils.select_file(this_run_binfile)
+            utils.runto_main()
+            utils.safe_execute("mt expand-symtabs")
+            iteration = 5
+            while iteration > 0:
+                utils.safe_execute("mt flush-symbol-cache")
+                func1 = lambda: utils.safe_execute("ptype hello")
+                func = lambda: utils.run_n_times(2, func1)
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster-runto-main.py b/gdb/testsuite/gdb.perf/gmonster-runto-main.py
new file mode 100644
index 0000000..b64df0b
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster-runto-main.py
@@ -0,0 +1,44 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of selecting a file to debug and then running to main.
+
+from perftest import perftest
+from perftest import measure
+from perftest import utils
+
+class GmonsterRuntoMain(perftest.TestCaseWithBasicMeasurements):
+    def __init__(self, name, run_names, binfile):
+        # We want to measure time in this test.
+        super(GmonsterRuntoMain, self).__init__(name)
+        self.run_names = run_names
+        self.binfile = binfile
+
+    def warm_up(self):
+        pass
+
+    def _doit(self, binfile):
+        utils.select_file(binfile)
+        utils.runto_main()
+
+    def execute_test(self):
+        for run in self.run_names:
+            this_run_binfile = "%s-%s" % (self.binfile,
+                                          utils.convert_spaces(run))
+            iteration = 5
+            while iteration > 0:
+                func = lambda: self._doit(this_run_binfile)
+                self.measure.measure(func, run)
+                iteration -= 1
diff --git a/gdb/testsuite/gdb.perf/gmonster1-backtrace.exp b/gdb/testsuite/gdb.perf/gmonster1-backtrace.exp
new file mode 100644
index 0000000..d7df396
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-backtrace.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of a backtrace in a large executable.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-backtrace.py GmonsterBacktrace
diff --git a/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp b/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp
new file mode 100644
index 0000000..5f48c79
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-null-lookup.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of lookup of a symbol that doesn't exist.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-null-lookup.py NullLookup
diff --git a/gdb/testsuite/gdb.perf/gmonster1-pervasive-typedef.exp b/gdb/testsuite/gdb.perf/gmonster1-pervasive-typedef.exp
new file mode 100644
index 0000000..ab68a3e
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-pervasive-typedef.exp
@@ -0,0 +1,27 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure the speed of "ptype func" where a parameter of the function is a
+# typedef used pervasively.  This exercises the perf regression introduced by
+# the original patch to PR 16253.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-pervasive-typedef.py PervasiveTypedef
diff --git a/gdb/testsuite/gdb.perf/gmonster1-print-cerr.exp b/gdb/testsuite/gdb.perf/gmonster1-print-cerr.exp
new file mode 100644
index 0000000..20d2716
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-print-cerr.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of printing std::cerr.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-print-cerr.py PrintCerr
diff --git a/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp b/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp
new file mode 100644
index 0000000..8eadbe7
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-ptype-string.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype on a simple class in a library that is searched late.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-ptype-string.py GmonsterPtypeString
diff --git a/gdb/testsuite/gdb.perf/gmonster1-runto-main.exp b/gdb/testsuite/gdb.perf/gmonster1-runto-main.exp
new file mode 100644
index 0000000..665f94c
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1-runto-main.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of selecting a file to debug and then running to main.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster1.exp make_testcase_config gmonster-runto-main.py GmonsterRuntoMain
diff --git a/gdb/testsuite/gdb.perf/gmonster1.cc b/gdb/testsuite/gdb.perf/gmonster1.cc
new file mode 100644
index 0000000..4ae9837
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1.cc
@@ -0,0 +1,24 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "gm-std.h"
+
+int
+main ()
+{
+  gm_std::init ();
+
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster1.exp b/gdb/testsuite/gdb.perf/gmonster1.exp
new file mode 100644
index 0000000..e951c74
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster1.exp
@@ -0,0 +1,86 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Perftest description file for building the "gmonster1" benchmark.
+# Where does the name come from?  The benchmark is derived from one of the
+# monster programs at Google.
+#
+# Perftest descriptions are loaded thrice:
+# 1) To generate the build .exp files
+#    GDB_PERFTEST_MODE=gen-build-exps
+#    This step allows for parallel builds of the majority of pieces of the
+#    test binary and shlibs.
+# 2) To compile the "pieces" of the binary and shlibs.
+#    "Pieces" are the bulk of the machine-generated sources of the test.
+#    This step is driven by lib/build-piece.exp.
+#    GDB_PERFTEST_MODE=build-pieces
+# 3) To perform the final link of the binary and shlibs.
+#    GDB_PERFTEST_MODE=compile
+#
+# Example usage:
+# bash$ make -j5 build-perf RUNTESTFLAGS="gmonster1.exp gmonster2.exp"
+# bash$ make check-perf RUNTESTFLAGS="gdb.perf/gm*-*.exp GDB=/path/to/gdb"
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+if ![info exists MONSTER] {
+    set MONSTER "n"
+}
+
+proc make_testcase_config { } {
+    global MONSTER
+
+    set program_name "gmonster1"
+    array set testcase [GenPerfTest::init_testcase $program_name]
+
+    set testcase(language) c++
+
+    # *_{sources,headers} need to be embedded in an outer list because
+    # each element of the outer list applies to one run, and here we want
+    # to use the same value for all runs.
+    set testcase(binary_extra_sources) { { gmonster1.cc gm-hello.cc gm-use-cerr.cc gm-pervasive-typedef.cc } }
+    set testcase(binary_extra_headers) { { <stdint.h> gm-utils.h gm-std.h gm-pervasive-typedef.h } }
+    set testcase(tail_shlib_sources) { { gm-std.cc } }
+    set testcase(tail_shlib_headers) { { gm-std.h } }
+
+    if { $MONSTER == "y" } {
+	set testcase(run_names) { 10-cus 100-cus 1000-cus 10000-cus }
+	set testcase(nr_compunits) { 10 100 1000 10000 }
+    } else {
+	set testcase(run_names) { 1-cu 10-cus 100-cus }
+	set testcase(nr_compunits) { 1 10 100 }
+    }
+    set testcase(nr_gen_shlibs) { 0 }
+
+    set testcase(nr_extern_functions) 10
+    set testcase(nr_static_functions) 10
+
+    # class_specs needs to be embedded in an outer list because each
+    # element of the outer list applies to one run, and here we want to use
+    # the same value for all runs.
+    set testcase(class_specs) { { { 0 10 } { 1 10 } { 2 10 } } }
+    set testcase(nr_members) 10
+    set testcase(nr_static_members) 10
+    set testcase(nr_methods) 10
+    set testcase(nr_static_methods) 10
+
+    return [array get testcase]
+}
+
+GenPerfTest::standard_compile_driver gmonster1.exp make_testcase_config
diff --git a/gdb/testsuite/gdb.perf/gmonster2-backtrace.exp b/gdb/testsuite/gdb.perf/gmonster2-backtrace.exp
new file mode 100644
index 0000000..a8073a5
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-backtrace.exp
@@ -0,0 +1,26 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of a backtrace in a large executable
+# with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-backtrace.py GmonsterBacktrace
diff --git a/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp b/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp
new file mode 100644
index 0000000..10c8e20
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-null-lookup.exp
@@ -0,0 +1,26 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of lookup of a symbol that doesn't exist
+# with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-null-lookup.py NullLookup
diff --git a/gdb/testsuite/gdb.perf/gmonster2-pervasive-typedef.exp b/gdb/testsuite/gdb.perf/gmonster2-pervasive-typedef.exp
new file mode 100644
index 0000000..cc5d428
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-pervasive-typedef.exp
@@ -0,0 +1,27 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure the speed of "ptype func" where a parameter of the function is a
+# typedef used pervasively.  This exercises the perf regression introduced by
+# the original patch to PR 16253.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-pervasive-typedef.py PervasiveTypedef
diff --git a/gdb/testsuite/gdb.perf/gmonster2-print-cerr.exp b/gdb/testsuite/gdb.perf/gmonster2-print-cerr.exp
new file mode 100644
index 0000000..a0c7d8b
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-print-cerr.exp
@@ -0,0 +1,25 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of printing std::cerr with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-print-cerr.py PrintCerr
diff --git a/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp b/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp
new file mode 100644
index 0000000..2c53435
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-ptype-string.exp
@@ -0,0 +1,26 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure speed of ptype on a simple class in a library that is searched late,
+# with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-ptype-string.py GmonsterPtypeString
diff --git a/gdb/testsuite/gdb.perf/gmonster2-runto-main.exp b/gdb/testsuite/gdb.perf/gmonster2-runto-main.exp
new file mode 100644
index 0000000..45a3d85
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2-runto-main.exp
@@ -0,0 +1,26 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Measure performance of selecting a file to debug and then running to main,
+# with lots of shared libraries.
+# Test parameters are the standard GenPerfTest parameters.
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+GenPerfTest::standard_run_driver gmonster2.exp make_testcase_config gmonster-runto-main.py GmonsterRuntoMain
diff --git a/gdb/testsuite/gdb.perf/gmonster2.cc b/gdb/testsuite/gdb.perf/gmonster2.cc
new file mode 100644
index 0000000..4ae9837
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2.cc
@@ -0,0 +1,24 @@
+/* Copyright (C) 2015 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "gm-std.h"
+
+int
+main ()
+{
+  gm_std::init ();
+
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/gmonster2.exp b/gdb/testsuite/gdb.perf/gmonster2.exp
new file mode 100644
index 0000000..db7d91c
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/gmonster2.exp
@@ -0,0 +1,88 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Perftest description file for building the "gmonster2" benchmark.
+# Where does the name come from?  The benchmark is derived from one of the
+# monster programs at Google.
+#
+# Perftest descriptions are loaded thrice:
+# 1) To generate the build .exp files
+#    GDB_PERFTEST_MODE=gen-build-exps
+#    This step allows for parallel builds of the majority of pieces of the
+#    test binary and shlibs.
+# 2) To compile the "pieces" of the binary and shlibs.
+#    "Pieces" are the bulk of the machine-generated sources of the test.
+#    This step is driven by lib/build-piece.exp.
+#    GDB_PERFTEST_MODE=build-pieces
+# 3) To perform the final link of the binary and shlibs.
+#    GDB_PERFTEST_MODE=compile
+#
+# Example usage:
+# bash$ make -j5 build-perf RUNTESTFLAGS="gmonster1.exp gmonster2.exp"
+# bash$ make check-perf RUNTESTFLAGS="gdb.perf/gm*-*.exp GDB=/path/to/gdb"
+
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+if ![info exists MONSTER] {
+    set MONSTER "n"
+}
+
+proc make_testcase_config { } {
+    global MONSTER
+
+    set program_name "gmonster2"
+    array set testcase [GenPerfTest::init_testcase $program_name]
+
+    set testcase(language) c++
+
+    # *_{sources,headers} need to be embedded in an outer list because
+    # each element of the outer list applies to one run, and here we want
+    # to use the same value for all runs.
+    set testcase(binary_extra_sources) { { gmonster2.cc gm-hello.cc gm-use-cerr.cc } }
+    set testcase(binary_extra_headers) { { gm-utils.h gm-std.h } }
+    set testcase(gen_shlib_extra_sources) { { gm-hello.cc gm-use-cerr.cc } }
+    set testcase(gen_shlib_extra_headers) { { gm-utils.h gm-std.h } }
+    set testcase(tail_shlib_sources) { { gm-std.cc } }
+    set testcase(tail_shlib_headers) { { gm-std.h } }
+
+    if { $MONSTER == "y" } {
+	set testcase(run_names) { 10-sos 100-sos 1000-sos }
+	set testcase(nr_gen_shlibs) { 10 100 1000 }
+    } else {
+	set testcase(run_names) { 1-so 10-sos 100-sos }
+	set testcase(nr_gen_shlibs) { 1 10 100 }
+    }
+    set testcase(nr_compunits) 10
+
+    set testcase(nr_extern_functions) 10
+    set testcase(nr_static_functions) 10
+
+    # class_specs needs to be embedded in an outer list because each
+    # element of the outer list applies to one run, and here we want to use
+    # the same value for all runs.
+    set testcase(class_specs) { { { 0 10 } { 1 10 } { 2 10 } } }
+    set testcase(nr_members) 10
+    set testcase(nr_static_members) 10
+    set testcase(nr_methods) 10
+    set testcase(nr_static_methods) 10
+
+    return [array get testcase]
+}
+
+GenPerfTest::standard_compile_driver gmonster2.exp make_testcase_config
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/measure.py b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
index f0ecd48..fc767b6 100644
--- a/gdb/testsuite/gdb.perf/lib/perftest/measure.py
+++ b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
@@ -103,6 +103,7 @@ class MeasurementCpuTime(Measurement):
          else:
              cpu_time = time.clock() - self.start_time
          self.result.record (id, cpu_time)
+        print ("elapsed cpu time %s" % (cpu_time))

  class MeasurementWallTime(Measurement):
      """Measurement on Wall time."""
@@ -117,6 +118,7 @@ class MeasurementWallTime(Measurement):
      def stop(self, id):
          wall_time = time.time() - self.start_time
          self.result.record (id, wall_time)
+        print ("elapsed wall time %s" % (wall_time))

  class MeasurementVmSize(Measurement):
      """Measurement on memory usage represented by VmSize."""
@@ -144,3 +146,4 @@ class MeasurementVmSize(Measurement):
      def stop(self, id):
          memory_used = self._compute_process_memory_usage("VmSize:")
          self.result.record (id, memory_used)
+        print ("vm used %s" % (memory_used))
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/utils.py b/gdb/testsuite/gdb.perf/lib/perftest/utils.py
new file mode 100644
index 0000000..66df8cf
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/utils.py
@@ -0,0 +1,65 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import gdb
+
+def safe_execute(command):
+    """Execute command, ignoring any gdb errors."""
+    result = None
+    try:
+        result = gdb.execute(command, to_string=True)
+    except gdb.error:
+        pass
+    return result
+
+
+def convert_spaces(file_name):
+    """Return file_name with all spaces replaced with "-"."""
+    return file_name.replace(" ", "-")
+
+
+def select_file(file_name):
+    """Select a file for debugging.
+
+    N.B. This turns confirmation off.
+    """
+    safe_execute("set confirm off")
+    print ("Selecting file %s" % (file_name))
+    gdb.execute("file %s" % (file_name))
+
+
+def runto(location):
+    """Run the program to location.
+
+    N.B. This turns confirmation off.
+    """
+    safe_execute("set confirm off")
+    gdb.execute("tbreak %s" % (location))
+    gdb.execute("run")
+
+
+def runto_main():
+    """Run the program to "main".
+
+    N.B. This turns confirmation off.
+    """
+    runto("main")
+
+
+def run_n_times(count, func):
+    """Execute func count times."""
+    while count > 0:
+        func()
+        count -= 1
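
To illustrate how these helpers compose, here is a minimal sketch of a
measurement function built on them (not part of the patch; the binary name
and the "ptype foo" command are hypothetical):

    from perftest import utils

    def measure_ptype(measure, run_name, binfile):
        # Load the binary and get to a stopped, symbols-loaded state.
        utils.select_file(binfile)
        utils.runto_main()
        # Run the command twice per sample to amortize one-shot overhead.
        func = lambda: utils.run_n_times(
            2, lambda: utils.safe_execute("ptype foo"))
        for _ in range(5):
            measure.measure(func, run_name)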
diff --git a/gdb/testsuite/gdb.perf/single-step.exp b/gdb/testsuite/gdb.perf/single-step.exp
index 74c6de0..d5aa7e2 100644
--- a/gdb/testsuite/gdb.perf/single-step.exp
+++ b/gdb/testsuite/gdb.perf/single-step.exp
@@ -47,10 +47,12 @@ PerfTest::assemble {
  	fail "Can't run to main"
  	return -1
      }
+    return 0
  } {
      global SINGLE_STEP_COUNT

      gdb_test_no_output "python SingleStep\(${SINGLE_STEP_COUNT}\).run()"
      # Terminate the loop.
      gdb_test "set variable flag = 0"
+    return 0
  }
diff --git a/gdb/testsuite/gdb.perf/skip-prologue.exp b/gdb/testsuite/gdb.perf/skip-prologue.exp
index 35db047..03d666b 100644
--- a/gdb/testsuite/gdb.perf/skip-prologue.exp
+++ b/gdb/testsuite/gdb.perf/skip-prologue.exp
@@ -52,6 +52,7 @@ PerfTest::assemble {
  	fail "Can't run to main"
  	return -1
      }
+    return 0
  } {
      global SKIP_PROLOGUE_COUNT

@@ -66,4 +67,5 @@ PerfTest::assemble {
  	    pass $test
  	}
      }
+    return 0
  }
diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
index 4edc2ea..078a372 100644
--- a/gdb/testsuite/gdb.perf/solib.exp
+++ b/gdb/testsuite/gdb.perf/solib.exp
@@ -80,8 +80,10 @@ PerfTest::assemble {
  	fail "Can't run to main"
  	return -1
      }
+    return 0
  } {
      global SOLIB_COUNT

      gdb_test_no_output "python SolibLoadUnload\($SOLIB_COUNT\).run()"
+    return 0
  }
diff --git a/gdb/testsuite/lib/build-piece.exp b/gdb/testsuite/lib/build-piece.exp
new file mode 100644
index 0000000..a81530c
--- /dev/null
+++ b/gdb/testsuite/lib/build-piece.exp
@@ -0,0 +1,39 @@
+# Copyright (C) 2014 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# Utility to bootstrap building a piece of a performance test in a
+# parallel build.
+# See testsuite/Makefile.in:workers/%.worker.
+# WORKER is set by the makefile and is
+# "{program_name}/{program_name}-{worker_nr}".
+
+regexp "^\(.+\)/\(.+\)-\(\[0-9\]+\)$" $WORKER entire_match PROGRAM_NAME pname2 WORKER_NR
+
+if { ![info exists entire_match] || $entire_match != $WORKER } {
+    error "Bad value for WORKER: $WORKER"
+}
+if { $PROGRAM_NAME != $pname2 } {
+    error "Bad value for WORKER: $WORKER"
+}
+
+# $subdir is set to "lib", because that is where this file lives,
+# which is not what tests expect.
+set subdir "gdb.perf"
+
+# $gdb_test_file_name is set to this file, build-piece, which is not what
+# tests expect.
+set gdb_test_file_name $PROGRAM_NAME
+
+source $srcdir/$subdir/${gdb_test_file_name}.exp
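
The WORKER check above is equivalent to this Python sketch (illustration
only; build-piece.exp itself is Tcl):

    import re

    def parse_worker(worker):
        # WORKER is "{program_name}/{program_name}-{worker_nr}".
        m = re.match(r"^(.+)/(.+)-([0-9]+)$", worker)
        if m is None or m.group(1) != m.group(2):
            raise ValueError("Bad value for WORKER: %s" % worker)
        return m.group(1), int(m.group(3))

    # parse_worker("gmonster1/gmonster1-3") -> ("gmonster1", 3)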
diff --git a/gdb/testsuite/lib/cache.exp b/gdb/testsuite/lib/cache.exp
index 8df04b9..9565b39 100644
--- a/gdb/testsuite/lib/cache.exp
+++ b/gdb/testsuite/lib/cache.exp
@@ -35,7 +35,7 @@ proc gdb_do_cache {name} {
      }

      if {[info exists GDB_PARALLEL]} {
-	set cache_filename [file join $objdir cache $cache_name]
+	set cache_filename [file join $objdir $GDB_PARALLEL cache $cache_name]
  	if {[file exists $cache_filename]} {
  	    set fd [open $cache_filename]
  	    set gdb_data_cache($cache_name) [read -nonewline $fd]
diff --git a/gdb/testsuite/lib/future.exp b/gdb/testsuite/lib/future.exp
index 2fb635b..485da63 100644
--- a/gdb/testsuite/lib/future.exp
+++ b/gdb/testsuite/lib/future.exp
@@ -124,14 +124,15 @@ proc gdb_default_target_compile {source destfile type options} {
  	error "Must supply an output filename for the compile to  
default_target_compile"
      }

+    set early_flags ""
      set add_flags ""
      set libs ""
      set compiler_type "c"
      set compiler ""
      set linker ""
    # linker_opts_order is one of "sources-then-flags", "flags-then-sources".
-    # The order shouldn't matter.  It's done this way to preserve
-    # existing behavior.
+    # The order matters for things like -Wl,--as-needed.  The default is to
+    # preserve existing behavior.
      set linker_opts_order "sources-then-flags"
      set ldflags ""
      set dest [target_info name]
@@ -229,6 +230,10 @@ proc gdb_default_target_compile {source destfile type options} {
  	    regsub "^compiler=" $i "" tmp
  	    set compiler $tmp
  	}
+	if {[regexp "^early_flags=" $i]} {
+	    regsub "^early_flags=" $i "" tmp
+	    append early_flags " $tmp"
+	}
  	if {[regexp "^additional_flags=" $i]} {
  	    regsub "^additional_flags=" $i "" tmp
  	    append add_flags " $tmp"
@@ -462,15 +467,15 @@ proc gdb_default_target_compile {source destfile type options} {
      # become confused about the name of the actual source file.
      switch $type {
  	"object" {
-	    set opts "$add_flags $sources"
+	    set opts "$early_flags $add_flags $sources"
  	}
  	"executable" {
  	    switch $linker_opts_order {
  		"flags-then-sources" {
-		    set opts "$add_flags $sources"
+		    set opts "$early_flags $add_flags $sources"
  		}
  		"sources-then-flags" {
-		    set opts "$sources $add_flags"
+		    set opts "$early_flags $sources $add_flags"
  		}
  		default {
  		    error "Invalid value for board_info linker_opts_order"
@@ -478,7 +483,7 @@ proc gdb_default_target_compile {source destfile type options} {
  	    }
  	}
  	default {
-	    set opts "$sources $add_flags"
+	    set opts "$early_flags $sources $add_flags"
  	}
      }

diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index 41797e7..33a8127 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -2728,12 +2728,20 @@ gdb_caching_proc target_is_gdbserver {
      return $is_gdbserver
  }

-set compiler_info		"unknown"
+# N.B. compiler_info is intended to be local to this file.
+# Call test_compiler_info with no arguments to fetch its value.
+# Yes, this is counterintuitive when there's get_compiler_info,
+# but that's the current API.
+if [info exists compiler_info] {
+    unset compiler_info
+}
+
  set gcc_compiled		0
  set hp_cc_compiler		0
  set hp_aCC_compiler		0

  # Figure out what compiler I am using.
+# The result is cached so only the first invocation runs the compiler.
  #
  # ARG can be empty or "C++".  If empty, "C" is assumed.
  #
@@ -2800,6 +2808,11 @@ proc get_compiler_info {{arg ""}} {
      global hp_cc_compiler
      global hp_aCC_compiler

+    if [info exists compiler_info] {
+	# Already computed.
+	return 0
+    }
+
      # Choose which file to preprocess.
      set ifile "${srcdir}/lib/compiler.c"
      if { $arg == "c++" } {
@@ -2841,8 +2854,14 @@ proc get_compiler_info {{arg ""}} {
  	}
      }

-    # Reset to unknown compiler if any diagnostics happened.
+    # Set to unknown if for some reason compiler_info didn't get defined.
+    if ![info exists compiler_info] {
+	verbose -log "get_compiler_info: compiler_info not provided"
+	set compiler_info "unknown"
+    }
+    # Also set to unknown compiler if any diagnostics happened.
      if { $unknown } {
+	verbose -log "get_compiler_info: got unexpected diagnostics"
  	set compiler_info "unknown"
      }

@@ -2876,18 +2895,18 @@ proc get_compiler_info {{arg ""}} {
      return 0
  }

+# Return the compiler_info string if no arg is provided.
+# Otherwise the argument is a glob-style expression to match against
+# compiler_info.
+
  proc test_compiler_info { {compiler ""} } {
      global compiler_info
+    get_compiler_info

-     # if no arg, return the compiler_info string
-
-     if [string match "" $compiler] {
-         if [info exists compiler_info] {
-             return $compiler_info
-         } else {
-             perror "No compiler info found."
-         }
-     }
+    # If no arg, return the compiler_info string.
+    if [string match "" $compiler] {
+	return $compiler_info
+    }

      return [string match $compiler $compiler_info]
  }
@@ -2966,6 +2985,13 @@ proc gdb_compile {source dest type options} {
  		      || [istarget *-*-cygwin*]) } {
  		    lappend new_options "additional_flags=-Wl,--enable-auto-import"
  		}
+		if { [test_compiler_info "gcc-*"] || [test_compiler_info "clang-*"] } {
+		    # Undo debian's change in the default.
+		    # Put it at the front to not override any user-provided
+		    # value, and to make sure it appears in front of all the
+		    # shlibs!
+		    lappend new_options "early_flags=-Wl,--no-as-needed"
+		}
              }
  	} elseif { $opt == "shlib_load" } {
  	    set shlib_load 1
@@ -3894,7 +3920,7 @@ proc standard_output_file {basename} {
      global objdir subdir gdb_test_file_name GDB_PARALLEL

      if {[info exists GDB_PARALLEL]} {
-	set dir [file join $objdir outputs $subdir $gdb_test_file_name]
+	set dir [file join $objdir $GDB_PARALLEL outputs $subdir $gdb_test_file_name]
  	file mkdir $dir
  	return [file join $dir $basename]
      } else {
@@ -3908,7 +3934,7 @@ proc standard_temp_file {basename} {
      global objdir GDB_PARALLEL

      if {[info exists GDB_PARALLEL]} {
-	return [file join $objdir temp $basename]
+	return [file join $objdir $GDB_PARALLEL temp $basename]
      } else {
  	return $basename
      }
@@ -4796,18 +4822,27 @@ proc build_executable { testname executable {sources ""} {options {debug}} } {
      return [eval build_executable_from_specs $arglist]
  }

-# Starts fresh GDB binary and loads EXECUTABLE into GDB. EXECUTABLE is
-# the basename of the binary.
-# The return value is 0 for success, -1 for failure.
-proc clean_restart { executable } {
+# Start a fresh GDB binary and load an optional executable into it.
+# Usage: clean_restart [executable]
+# EXECUTABLE is the basename of the binary.
+
+proc clean_restart { args } {
      global srcdir
      global subdir
-    set binfile [standard_output_file ${executable}]
+
+    if { [llength $args] > 1 } {
+	error "bad number of args: [llength $args]"
+    }

      gdb_exit
      gdb_start
      gdb_reinitialize_dir $srcdir/$subdir
-    return [gdb_load ${binfile}]
+
+    if { [llength $args] >= 1 } {
+	set executable [lindex $args 0]
+	set binfile [standard_output_file ${executable}]
+	gdb_load ${binfile}
+    }
  }

  # Prepares for testing by calling build_executable_full, then
@@ -5011,7 +5046,10 @@ if {[info exists GDB_PARALLEL]} {
      if {[is_remote host]} {
  	unset GDB_PARALLEL
      } else {
-	file mkdir outputs temp cache
+	file mkdir \
+	    [file join $GDB_PARALLEL outputs] \
+	    [file join $GDB_PARALLEL temp] \
+	    [file join $GDB_PARALLEL cache]
      }
  }

diff --git a/gdb/testsuite/lib/perftest.exp b/gdb/testsuite/lib/perftest.exp
index 7c334ac..88c92e4 100644
--- a/gdb/testsuite/lib/perftest.exp
+++ b/gdb/testsuite/lib/perftest.exp
@@ -12,6 +12,10 @@
  #
  # You should have received a copy of the GNU General Public License
  # along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+# Notes:
+# 1) This follows a Python convention for marking internal vs public functions.
+# Internal functions are prefixed with "_".

  namespace eval PerfTest {
      # The name of python file on build.
@@ -42,14 +46,13 @@ namespace eval PerfTest {
      # actual compilation.  Return zero if compilation is successful,
      # otherwise return non-zero.
      proc compile {body} {
-	global GDB_PERFTEST_MODE
-
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "run"] } {
-	    return [uplevel 2 $body]
-	}
+	return [uplevel 2 $body]
+    }

-	return 0
+    # Run the startup code.  Return zero if startup is successful,
+    # otherwise return non-zero.
+    proc startup {body} {
+	return [uplevel 2 $body]
      }

      # Start up GDB.
@@ -57,7 +60,8 @@ namespace eval PerfTest {
  	uplevel 2 $body
      }

-    # Run the performance test.
+    # Run the performance test.  Return zero if the run is successful,
+    # otherwise return non-zero.
      proc run {body} {
  	global timeout
  	global GDB_PERFTEST_TIMEOUT
@@ -68,36 +72,56 @@ namespace eval PerfTest {
  	} else {
  	    set timeout 3000
  	}
-	uplevel 2 $body
+	set result [uplevel 2 $body]

  	set timeout $oldtimeout
+	return $result
      }

      # The top-level interface to PerfTest.
      # COMPILE is the tcl code to generate and compile source files.
-    # Return zero if compilation is successful,  otherwise return
-    # non-zero.
      # STARTUP is the tcl code to start up GDB.
      # RUN is the tcl code to drive GDB to do some operations.
+    # Each of COMPILE, STARTUP, and RUN return zero if successful, and
+    # non-zero if there's a failure.
+
      proc assemble {compile startup run} {
  	global GDB_PERFTEST_MODE

-	if { [eval compile {$compile}] } {
-	    untested "Could not compile source files."
+	if ![info exists GDB_PERFTEST_MODE] {
  	    return
  	}

+	if { [string compare $GDB_PERFTEST_MODE "run"] != 0 } {
+	    if { [eval compile {$compile}] } {
+		untested "Could not compile source files."
+		return
+	    }
+	}
+
  	# Don't execute the run if GDB_PERFTEST_MODE=compile.
-	if { [info exists GDB_PERFTEST_MODE]
-	     && [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
+	if { [string compare $GDB_PERFTEST_MODE "compile"] == 0} {
+	    return
+	}
+
+	verbose -log "PerfTest::assemble, startup ..."
+
+	if [eval startup {$startup}] {
+	    fail "startup"
  	    return
  	}

-	eval $startup
+	verbose -log "PerfTest::assemble, done startup"

  	_setup_perftest

-	eval run {$run}
+	verbose -log "PerfTest::assemble, run ..."
+
+	if [eval run {$run}] {
+	    fail "run"
+	}
+
+	verbose -log "PerfTest::assemble, run complete."

  	_teardown_perftest
      }
@@ -109,11 +133,9 @@ proc skip_perf_tests { } {
      global GDB_PERFTEST_MODE

      if [info exists GDB_PERFTEST_MODE] {
-
  	if { "$GDB_PERFTEST_MODE" != "compile"
  	     && "$GDB_PERFTEST_MODE" != "run"
  	     && "$GDB_PERFTEST_MODE" != "both" } {
-	    # GDB_PERFTEST_MODE=compile|run|both is allowed.
  	    error "Unknown value of GDB_PERFTEST_MODE."
  	    return 1
  	}
@@ -123,3 +145,1248 @@ proc skip_perf_tests { } {

      return 1
  }
+
+# Given a list of tcl strings, return the same list as the text form of a
+# python list.
+
+proc tcl_string_list_to_python_list { l } {
+    proc quote { text } {
+	return "\"$text\""
+    }
+    set quoted_list ""
+    foreach elm $l {
+	lappend quoted_list [quote $elm]
+    }
+    return "([join $quoted_list {, }])"
+}
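
For reference, the text this produces is a Python tuple literal; an
equivalent sketch in Python (illustration only):

    def tcl_strings_to_python_text(strings):
        # ["1-cu", "10-cus"] -> '("1-cu", "10-cus")'
        return "(%s)" % ", ".join('"%s"' % s for s in strings)

Note that a single-element list renders as ("x"), which Python parses as a
plain string rather than a 1-tuple; callers iterating over the result may
want to keep that in mind.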
+
+# A simple testcase generator.
+#
+# Usage Notes:
+#
+# 1) The length of each parameter list must either be one, in which case the
+# same value is used for each run, or the length must match all other
+# parameters of length greater than one.
+#
+# 2) Values for parameters that vary across runs must appear in increasing
+# order.  E.g. nr_gen_shlibs = { 0 1 10 } is good, { 1 0 10 } is bad.
+# This rule simplifies the code a bit, without being onerous on the user:
+#  a) Report generation doesn't have to sort the output by run, it'll already
+#  be sorted.
+#  b) In the static object file case, the last run can be used to generate
+#  all the source files.
+#
+# TODO:
+# 1) have functions call each other within an objfile and across
+#    objfiles to measure things like backtrace times
+# 2) inline methods
+# 3) anonymous namespaces
+#
+# Implementation Notes:
+#
+# 1) The implementation would be a bit simpler if we could assume Tcl 8.5.
+#    Then we could use a dictionary to record the testcase instead of an array.
+#    With the array we use here, there is only one copy of it and instead of
+#    passing its value we pass its name.  Yay Tcl.  An alternative is to just
+#    use a global variable.
+#
+# 2) Because these programs can be rather large, we try to avoid recompilation
+#    where we can.  We don't have a makefile: we could generate one, but it's
+#    not clear that's simpler than our chosen mechanism, which is to record
+#    checksums of all the inputs and detect if an input has changed that way.
+
+if ![info exists CAT_PROGRAM] {
+    set CAT_PROGRAM "/bin/cat"
+}
+
+# TODO(dje): Time md5sum vs sha1sum with our testcases.
+if ![info exists SHA1SUM_PROGRAM] {
+    set SHA1SUM_PROGRAM "/usr/bin/sha1sum"
+}
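
As a sketch of the mechanism from implementation note 2 (illustration only,
in Python; the Tcl code uses $SHA1SUM_PROGRAM for the hashing):

    import hashlib, os

    def inputs_digest(paths):
        # Combined digest over all input files, in a stable order.
        h = hashlib.sha1()
        for path in sorted(paths):
            with open(path, "rb") as f:
                h.update(f.read())
        return h.hexdigest()

    def needs_rebuild(paths, stamp_file):
        digest = inputs_digest(paths)
        if os.path.exists(stamp_file):
            with open(stamp_file) as f:
                if f.read().strip() == digest:
                    return False  # no input changed, skip recompilation
        with open(stamp_file, "w") as f:
            f.write(digest + "\n")
        return True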
+
+namespace eval GenPerfTest {
+
+    # The default level of compilation parallelism we support.
+    set DEFAULT_PERF_TEST_COMPILE_PARALLELISM 10
+
+    # The language of the test.
+    set DEFAULT_LANGUAGE "c"
+
+    # Extra source files for the binary.
+    # This must at least include the file with main(),
+    # and each test must supply its own.
+    set DEFAULT_BINARY_EXTRA_SOURCES {}
+
+    # Header files used by generated files and extra sources.
+    set DEFAULT_BINARY_EXTRA_HEADERS {}
+
+    # Extra source files for each generated shlib.
+    # The compiler passes -DSHLIB=NNN which the source can use, for example,
+    # to define unique symbols for each shlib.
+    set DEFAULT_GEN_SHLIB_EXTRA_SOURCES {}
+
+    # Header files used by generated files and extra sources.
+    set DEFAULT_GEN_SHLIB_EXTRA_HEADERS {}
+
+    # Source files for a tail shlib, or empty if none.
+    # This library is loaded after all other shlibs (except any system shlibs
+    # like libstdc++).  It is useful for exercising issues that can appear
+    # with system shlibs, without having to cope with implementation details
+    # and bugs in system shlibs.  E.g., GCC PR 65669.
+    set DEFAULT_TAIL_SHLIB_SOURCES {}
+
+    # Header files for the tail shlib.
+    set DEFAULT_TAIL_SHLIB_HEADERS {}
+
+    # The number of shared libraries to create.
+    set DEFAULT_NR_GEN_SHLIBS 0
+
+    # The number of compunits in each objfile.
+    set DEFAULT_NR_COMPUNITS 1
+
+    # The number of public globals in each compunit.
+    set DEFAULT_NR_EXTERN_GLOBALS 1
+
+    # The number of static globals in each compunit.
+    set DEFAULT_NR_STATIC_GLOBALS 1
+
+    # The number of public functions in each compunit.
+    set DEFAULT_NR_EXTERN_FUNCTIONS 1
+
+    # The number of static functions in each compunit.
+    set DEFAULT_NR_STATIC_FUNCTIONS 1
+
+    # List of pairs of class depth and number of classes at that depth.
+    # By "depth" here we mean nesting within a namespace.
+    # E.g.,
+    # class foo {};
+    # namespace n { class foo {}; class bar {}; }
+    # would be represented as { { 0 1 } { 1 2 } }.
+    # This is only used if the selected language permits it.
+    set DEFAULT_CLASS_SPECS {}
+
+    # Number of members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_MEMBERS 0
+
+    # Number of static members in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_MEMBERS 0
+
+    # Number of methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_METHODS 0
+
+    # Number of static methods in each class.
+    # This is only used if classes are enabled.
+    set DEFAULT_NR_STATIC_METHODS 0
+
+    set suffixes(c) "c"
+    set suffixes(c++) "cc"
+
+    # Generate .worker files that control building all the "pieces" of the
+    # testcase.  This doesn't include "main" or any test-specific stuff.
+    # This mostly consists of the "bulk" (aka "crap" :-)) of the testcase to
+    # give gdb something meaty to chew on.
+    # The result is 0 for success, -1 for failure.
+    #
+    # Benchmarks generated by some of the tests are big.  I mean really big.
+    # And it's a pain to build one piece at a time; we need a parallel build.
+    # To achieve this, given the framework we're working with, we need to
+    # generate arguments to pass to a parallel make.  This is done by
+    # generating several files and then passing the file names to the parallel
+    # make.  All of the needed info is contained in the file name, so we could
+    # do this differently, but this is pretty simple and flexible.
+
+    proc gen_worker_files { test_description_exp } {
+	global objdir PERF_TEST_COMPILE_PARALLELISM
+
+	if { [file tail $test_description_exp] != $test_description_exp } {
+	    error "test description file contains directory name"
+	}
+
+	set program_name [file rootname $test_description_exp]
+	set workers_dir "$objdir/gdb.perf/workers/$program_name"
+	file mkdir $workers_dir
+
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	verbose -log "gen_worker_files: $test_description_exp $nr_workers workers"
+
+	for { set i 0 } { $i < $nr_workers } { incr i } {
+	    set file_name "${workers_dir}/${program_name}-${i}.worker"
+	    verbose -log "gen_worker_files: Generating $file_name"
+	    set f [open $file_name "w"]
+	    puts $f "# DO NOT EDIT, machine generated file."
+	    puts $f "# See perftest.exp:GenPerfTest::gen_worker_files."
+	    close $f
+	}
+
+	return 0
+    }
+
+    # Load a perftest description.
+    # Test descriptions are used to build the input files (binary + shlibs)
+    # of one or more performance tests.
+
+    proc load_test_description { basename } {
+	global srcdir
+
+	if { [file tail $basename] != $basename } {
+	    error "test description file contains directory name"
+	}
+
+	verbose -log "load_file $srcdir/gdb.perf/$basename"
+	if { [load_file $srcdir/gdb.perf/$basename] == 0 } {
+	    error "Unable to load test description $basename"
+	}
+    }
+
+    # Create a testcase object for test NAME.
+    # The caller must call this as:
+    # array set my_test [GenPerfTest::init_testcase $name]
+
+    proc init_testcase { name } {
+	set testcase(name) $name
+	set testcase(language) $GenPerfTest::DEFAULT_LANGUAGE
+	set testcase(run_names) [list $name]
+	set testcase(binary_extra_sources) $GenPerfTest::DEFAULT_BINARY_EXTRA_SOURCES
+	set testcase(binary_extra_headers) $GenPerfTest::DEFAULT_BINARY_EXTRA_HEADERS
+	set testcase(gen_shlib_extra_sources) $GenPerfTest::DEFAULT_GEN_SHLIB_EXTRA_SOURCES
+	set testcase(gen_shlib_extra_headers) $GenPerfTest::DEFAULT_GEN_SHLIB_EXTRA_HEADERS
+	set testcase(tail_shlib_sources) $GenPerfTest::DEFAULT_TAIL_SHLIB_SOURCES
+	set testcase(tail_shlib_headers) $GenPerfTest::DEFAULT_TAIL_SHLIB_HEADERS
+	set testcase(nr_gen_shlibs) $GenPerfTest::DEFAULT_NR_GEN_SHLIBS
+	set testcase(nr_compunits) $GenPerfTest::DEFAULT_NR_COMPUNITS
+
+	set testcase(nr_extern_globals) $GenPerfTest::DEFAULT_NR_EXTERN_GLOBALS
+	set testcase(nr_static_globals) $GenPerfTest::DEFAULT_NR_STATIC_GLOBALS
+	set testcase(nr_extern_functions) $GenPerfTest::DEFAULT_NR_EXTERN_FUNCTIONS
+	set testcase(nr_static_functions) $GenPerfTest::DEFAULT_NR_STATIC_FUNCTIONS
+
+	set testcase(class_specs) $GenPerfTest::DEFAULT_CLASS_SPECS
+	set testcase(nr_members) $GenPerfTest::DEFAULT_NR_MEMBERS
+	set testcase(nr_static_members) $GenPerfTest::DEFAULT_NR_STATIC_MEMBERS
+	set testcase(nr_methods) $GenPerfTest::DEFAULT_NR_METHODS
+	set testcase(nr_static_methods) $GenPerfTest::DEFAULT_NR_STATIC_METHODS
+
+	# The location of this file drives the location of all other files.
+	# The choice is derived from standard_output_file.  We don't use it
+	# because of the parallel build support, we want each worker's log/sum
+	# files to go in different directories, but we don't want their output
+	# to go in different directories.
+	# N.B. The value here must be kept in sync with Makefile.in.
+	global objdir
+	set name_no_spaces [_convert_spaces $name]
+	set testcase(binfile) "$objdir/gdb.perf/outputs/$name_no_spaces/$name_no_spaces"
+
+	return [array get testcase]
+    }
+
+    proc _verify_parameter_lengths { self_var } {
+	upvar 1 $self_var self
+	# TODO(dje): Do we want *_headers here?
+	set params {
+	    binary_extra_sources binary_extra_headers
+	    gen_shlib_extra_sources gen_shlib_extra_headers
+	    tail_shlib_sources tail_shlib_headers
+	    nr_gen_shlibs nr_compunits
+	    nr_extern_globals nr_static_globals
+	    nr_extern_functions nr_static_functions
+	    class_specs
+	    nr_members nr_static_members
+	    nr_methods nr_static_methods
+	}
+	set nr_runs [llength $self(run_names)]
+	foreach p $params {
+	    set n [llength $self($p)]
+	    if { $n > 1 } {
+		if { $n != $nr_runs } {
+		    error "Bad number of values for parameter $p"
+		}
+		set values $self($p)
+		for { set i 0 } { $i < $n - 1 } { incr i } {
+		    if { [lindex $values $i] > [lindex $values [expr $i + 1]] } {
+			error "Values of parameter $p are not increasing"
+		    }
+		}
+	    }
+	}
+    }
+
+    # Verify the testcase is valid (as best we can, this isn't exhaustive).
+
+    proc _verify_testcase { self_var } {
+	upvar 1 $self_var self
+	_verify_parameter_lengths self
+
+	# Each test must supply its own main().  We don't check for main here,
+	# but we do verify the test supplied something.
+	if { [llength $self(binary_extra_sources)] == 0 } {
+	    error "Missing value for binary_extra_sources"
+	}
+    }
+
+    # Return the value of parameter PARAM for run RUN_NR.
+
+    proc _get_param { param run_nr } {
+	if { [llength $param] == 1 } {
+	    # Since PARAM may be a list of lists we need to use lindex.  This
+	    # also works for scalars (scalars are degenerate lists).
+	    return [lindex $param 0]
+	}
+	return [lindex $param $run_nr]
+    }
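
In Python terms, _get_param implements (illustration only):

    def get_param(values, run_nr):
        # A length-1 parameter applies to every run; otherwise index by run.
        if len(values) == 1:
            return values[0]
        return values[run_nr]

    # get_param([10], 2)         -> 10   (same value for all runs)
    # get_param([1, 10, 100], 2) -> 100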
+
+    # Return non-zero if all files (binaries + shlibs) can be compiled from
+    # one set of object files.  This is a simple optimization to speed up
+    # test build times.  This happens if the only variation among runs is
+    # nr_gen_shlibs or nr_compunits.
+
+    proc _static_object_files_p { self_var } {
+	upvar 1 $self_var self
+	# These values are either scalars, or can vary across runs but don't
+	# affect whether we can share the generated object files between
+	# runs.
+	set static_object_file_params {
+	    name language run_names nr_gen_shlibs nr_compunits
+	    binary_extra_sources gen_shlib_extra_sources tail_shlib_sources
+	}
+	foreach name [array names self] {
+	    if { [lsearch $static_object_file_params $name] < 0 } {
+		# name is not in static_object_file_params.
+		if { [llength $self($name)] > 1 } {
+		    # The user could provide a list that is all the same value,
+		    # so check for that.
+		    set first_value [lindex $self($name) 0]
+		    foreach elm [lrange $self($name) 1 end] {
+			if { $elm != $first_value } {
+			    return 0
+			}
+		    }
+		}
+	    }
+	}
+	return 1
+    }
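+
+    # Illustration (hypothetical configuration): with run_names { small big },
+    # nr_gen_shlibs { 1 100 }, and every other parameter scalar, the runs
+    # differ only in nr_gen_shlibs, so the result is 1 and one set of
+    # object files is shared by both runs.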
+
+    # Return non-zero if classes are enabled.
+
+    proc _classes_enabled_p { self_var run_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	foreach elm $class_specs {
+	    if { [llength $elm] != 2 } {
+		error "Bad class spec: $elm"
+	    }
+	    if { [lindex $elm 1] > 0 } {
+		return 1
+	    }
+	}
+	return 0
+    }
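+
+    # A class spec is a two-element list: { namespace-depth nr-classes }.
+    # E.g. (hypothetical value), class_specs { { 0 1 } { 2 3 } } requests
+    # one class at file scope plus three classes nested within two
+    # namespaces, so classes are enabled here.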
+
+    # Spaces in file names are a pain; remove them.
+    # They appear if the user puts spaces in the test name or run name.
+
+    proc _convert_spaces { file_name } {
+	return [regsub -all " " $file_name "-"]
+    }
+
+    # Return the compilation flags for the test.
+
+    proc _compile_options { self_var } {
+	upvar 1 $self_var self
+	set result {debug}
+	switch $self(language) {
+	    c++ {
+		lappend result "c++"
+	    }
+	}
+	return $result
+    }
+
+    # Return the path to put source/object files in for run number RUN_NR.
+
+    proc _make_object_dir_name { self_var static run_nr } {
+	upvar 1 $self_var self
+	# Note: The output directory already includes the name of the test
+	# description file.
+	set bindir [file dirname $self(binfile)]
+	# Put the pieces in a subdirectory; there are a lot of them.
+	if $static {
+	    return "$bindir/pieces"
+	} else {
+	    set run_name [_convert_spaces [lindex $self(run_names) $run_nr]]
+	    return "$bindir/pieces/$run_name"
+	}
+    }
+
+    # CU_NR is either the compilation unit number or "main".
+    # RUN_NR is ignored if STATIC is non-zero.
+
+    proc _make_binary_source_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "${run_name}-${cu_nr}.$source_suffix"
+	} else {
+	    set source_name "$self(name)-${cu_nr}.$source_suffix"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces  
$source_name]"
+    }
+
+    # Generated object files get put in the same directory as their source.
+    # WARNING: This means that we can't do parallel compiles from the same
+    # source file; the sources have to have different names.
+
+    proc _make_binary_object_name { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_name [_make_binary_source_name self $static $run_nr $cu_nr]
+	return [file rootname $source_name].o
+    }
+
+    # CU_NAME is either "CU-number.cc" (with suffix) or a name from
+    # gen_shlib_extra_sources or tail_shlib_sources.
+
+    proc _make_shlib_source_name { self_var static run_nr so_nr cu_name } {
+	upvar 1 $self_var self
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set source_name "$self(name)-${run_name}-lib${so_nr}-${cu_name}"
+	} else {
+	    set source_name "$self(name)-lib${so_nr}-${cu_name}"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces  
$source_name]"
+    }
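+
+    # E.g. (hypothetical test "gmonster1", run "10-cus", shlib 3,
+    # compunit 2), the non-static name is
+    # .../pieces/10-cus/gmonster1-10-cus-lib3-2.cc, and the static name
+    # drops the run name: .../pieces/gmonster1-lib3-2.cc.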
+
+    # Return the list of source/object files for the binary.
+    # These are the source files specified in test param binary_extra_sources
+    # as well as the names of all the object file "pieces".
+    # STATIC is the value of _static_object_files_p for the test.
+
+    proc _make_binary_input_file_names { self_var static run_nr } {
+	upvar 1 $self_var self
+	global srcdir subdir
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	set result {}
+	foreach source [_get_param $self(binary_extra_sources) $run_nr] {
+	    lappend result "$srcdir/$subdir/$source"
+	}
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_make_binary_object_name self $static $run_nr $cu_nr]
+	}
+	return $result
+    }
+
+    proc _make_binary_name { self_var run_nr } {
+	upvar 1 $self_var self
+	set run_name [_get_param $self(run_names) $run_nr]
+	set exe_name "$self(binfile)-[_convert_spaces ${run_name}]"
+	return $exe_name
+    }
+
+    # SO_NAME is either a shlib number or "tail".
+
+    proc _make_shlib_name { self_var static run_nr so_name } {
+	upvar 1 $self_var self
+	if { !$static } {
+	    set run_name [_get_param $self(run_names) $run_nr]
+	    set lib_name "$self(name)-${run_name}-lib${so_name}.so"
+	} else {
+	    set lib_name "$self(name)-lib${so_name}.so"
+	}
+	return "[_make_object_dir_name self $static $run_nr]/[_convert_spaces $lib_name]"
+    }
+
+    proc _create_file { self_var path } {
+	upvar 1 $self_var self
+	verbose -log "Creating file: $path"
+	set f [open $path "w"]
+	return $f
+    }
+
+    proc _write_intro { self_var f } {
+	upvar 1 $self_var self
+	puts $f "// DO NOT EDIT, machine generated file."
+	puts $f "// See perftest.exp:GenPerfTest."
+    }
+
+    proc _write_includes { self_var f includes } {
+	upvar 1 $self_var self
+	if { [llength $includes] > 0 } {
+	    puts $f ""
+	}
+	foreach i $includes {
+	    switch -glob -- $i {
+		"<*>" {
+		    puts $f "#include $i"
+		}
+		default {
+		    puts $f "#include \"$i\""
+		}
+	    }
+	}
+    }
+
+    proc _write_static_globals { self_var f run_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_globals [_get_param $self(nr_static_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Parameterizing the
+	# number of bss/non-bss globals may also be useful; we can add that
+	# later, if warranted.
+	for { set i 0 } { $i < $nr_static_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "static ${const}int static_global_$i = $i;"
+	}
+    }
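+
+    # E.g., nr_static_globals 4 (hypothetical value) emits:
+    #   static const int static_global_0 = 0;
+    #   static int static_global_1 = 1;
+    #   static const int static_global_2 = 2;
+    #   static int static_global_3 = 3;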
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_globals { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_extern_globals [_get_param $self(nr_extern_globals) $run_nr]
+	# Rather than parameterize the number of const/non-const globals,
+	# and their types, we keep it simple for now.  Parameterizing the
+	# number of bss/non-bss globals may also be useful; we can add that
+	# later, if warranted.
+	for { set i 0 } { $i < $nr_extern_globals } { incr i } {
+	    if { $i % 2 == 0 } {
+		set const "const "
+	    } else {
+		set const ""
+	    }
+	    puts $f "${const}int ${id}global_${cu_nr}_$i = $cu_nr * 1000 + $i;"
+	}
+    }
+
+    proc _write_static_functions { self_var f run_nr } {
+	upvar 1 $self_var self
+	set nr_static_functions [_get_param $self(nr_static_functions) $run_nr]
+	for { set i 0 } { $i < $nr_static_functions } { incr i } {
+	    puts $f ""
+	    puts $f "static void"
+	    puts $f "static_function_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    # ID is "" for the binary, and a unique symbol prefix for each SO.
+
+    proc _write_extern_functions { self_var f run_nr id cu_nr } {
+	upvar 1 $self_var self
+	set nr_extern_functions [_get_param $self(nr_extern_functions) $run_nr]
+	for { set i 0 } { $i < $nr_extern_functions } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${id}function_${cu_nr}_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _write_classes { self_var f run_nr cu_nr } {
+	upvar 1 $self_var self
+	set class_specs [_get_param $self(class_specs) $run_nr]
+	set nr_members [_get_param $self(nr_members) $run_nr]
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	foreach spec $class_specs {
+	    set depth [lindex $spec 0]
+	    set nr_classes [lindex $spec 1]
+	    puts $f ""
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "namespace ns_${i}"
+		puts $f "\{"
+	    }
+	    for { set c 0 } { $c < $nr_classes } { incr c } {
+		set class_name "class_${cu_nr}_${c}"
+		puts $f "class $class_name"
+		puts $f "\{"
+		puts $f " public:"
+		for { set i 0 } { $i < $nr_members } { incr i } {
+		    puts $f "  int member_$i;"
+		}
+		for { set i 0 } { $i < $nr_static_members } { incr i } {
+		    # Rather than parameterize the number of const/non-const
+		    # members, and their types, we keep it simple for now.
+		    if { $i % 2 == 0 } {
+			puts $f "  static const int static_member_$i = $i;"
+		    } else {
+			puts $f "  static int static_member_$i;"
+		    }
+		}
+		for { set i 0 } { $i < $nr_methods } { incr i } {
+		    puts $f "  void method_$i (void);"
+		}
+		for { set i 0 } { $i < $nr_static_methods } { incr i } {
+		    puts $f "  static void static_method_$i (void);"
+		}
+		puts $f "\};"
+		_write_static_members self $f $run_nr $class_name
+		_write_methods self $f $run_nr $class_name
+		_write_static_methods self $f $run_nr $class_name
+	    }
+	    for { set i 0 } { $i < $depth } { incr i } {
+		puts $f "\}"
+	    }
+	}
+    }
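+
+    # E.g. (hypothetical values), a spec of { 1 1 } in compunit 0 with
+    # nr_members 1, nr_static_members 1, nr_methods 1, nr_static_methods 1
+    # emits roughly:
+    #   namespace ns_0
+    #   {
+    #   class class_0_0
+    #   {
+    #    public:
+    #     int member_0;
+    #     static const int static_member_0 = 0;
+    #     void method_0 (void);
+    #     static void static_method_0 (void);
+    #   };
+    #   ... out-of-line definitions of method_0 and static_method_0 ...
+    #   }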
+
+    proc _write_static_members { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	puts $f ""
+	set nr_static_members [_get_param $self(nr_static_members) $run_nr]
+	# Rather than parameterize the number of const/non-const
+	# members, and their types, we keep it simple for now.
+	for { set i 0 } { $i < $nr_static_members } { incr i } {
+	    if { $i % 2 == 0 } {
+		# Static const members are initialized inline.
+	    } else {
+		puts $f "int ${class_name}::static_member_$i = $i;"
+	    }
+	}
+    }
+
+    proc _write_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_methods [_get_param $self(nr_methods) $run_nr]
+	for { set i 0 } { $i < $nr_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _write_static_methods { self_var f run_nr class_name } {
+	upvar 1 $self_var self
+	set nr_static_methods [_get_param $self(nr_static_methods) $run_nr]
+	for { set i 0 } { $i < $nr_static_methods } { incr i } {
+	    puts $f ""
+	    puts $f "void"
+	    puts $f "${class_name}::static_method_$i (void)"
+	    puts $f "{"
+	    puts $f "}"
+	}
+    }
+
+    proc _gen_binary_compunit_source { self_var static run_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_file [_make_binary_source_name self $static $run_nr $cu_nr]
+	set f [_create_file self $source_file]
+	_write_intro self $f
+	_write_includes self $f [_get_param $self(binary_extra_headers) $run_nr]
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    # Generate the sources for the pieces of the binary.
+    # The result is a list of source file names and accompanying object file
+    # names.  The pieces are split across workers.
+    # E.g., with 10 workers the result for worker 0 is
+    # { { source0 object0 } { source10 object10 } ... }
+
+    proc _gen_binary_source { self_var worker_nr static run_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::_gen_binary_source worker $worker_nr run  
$run_nr, started [timestamp -format %c]"
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	set result {}
+	for { set cu_nr $worker_nr } { $cu_nr < $nr_compunits } { incr cu_nr $nr_workers } {
+	    set source_file [_gen_binary_compunit_source self $static $run_nr $cu_nr]
+	    set object_file [_make_binary_object_name self $static $run_nr $cu_nr]
+	    lappend result [list $source_file $object_file]
+	}
+	verbose -log "GenPerfTest::_gen_binary_source worker $worker_nr run  
$run_nr, done [timestamp -format %c]"
+	return $result
+    }
+
+    proc _gen_shlib_compunit_source { self_var static run_nr so_nr cu_nr } {
+	upvar 1 $self_var self
+	set source_suffix $GenPerfTest::suffixes($self(language))
+	set source_file [_make_shlib_source_name self $static $run_nr $so_nr "${cu_nr}.${source_suffix}"]
+	set f [_create_file self $source_file]
+	_write_intro self $f
+	_write_includes self $f [_get_param $self(gen_shlib_extra_headers) $run_nr]
+	_write_static_globals self $f $run_nr
+	_write_extern_globals self $f $run_nr "shlib${so_nr}_" $cu_nr
+	_write_static_functions self $f $run_nr
+	_write_extern_functions self $f $run_nr "shlib${so_nr}_" $cu_nr
+	if [_classes_enabled_p self $run_nr] {
+	    _write_classes self $f $run_nr $cu_nr
+	}
+	close $f
+	return $source_file
+    }
+
+    # gdb_compile_shlib doesn't support parallel builds of shlibs from
+    # common sources: the .o file path will be the same across all shlibs.
+    # gen_shlib_extra_sources may be common across all shlibs, but each of
+    # them is compiled with -DSHLIB=$SHLIB, so we need different .o files,
+    # and therefore different source files, for each shlib.
+    # If this turns out to be too cumbersome we can augment gdb_compile_shlib.
+
+    proc _gen_shlib_common_source { self_var static run_nr so_nr source_name } {
+	upvar 1 $self_var self
+	global srcdir
+	set source_file [_make_shlib_source_name self $static $run_nr $so_nr $source_name]
+	file copy -force "$srcdir/gdb.perf/$source_name" ${source_file}
+	return $source_file
+    }
+
+    proc _gen_shlib_source { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::_gen_shlib_source run $run_nr so $so_nr,  
started [timestamp -format %c]"
+	set result ""
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	for { set cu_nr 0 } { $cu_nr < $nr_compunits } { incr cu_nr } {
+	    lappend result [_gen_shlib_compunit_source self $static $run_nr $so_nr $cu_nr]
+	}
+	foreach source_name [_get_param $self(gen_shlib_extra_sources) $run_nr] {
+	    lappend result [_gen_shlib_common_source self $static $run_nr $so_nr $source_name]
+	}
+	verbose -log "GenPerfTest::_gen_shlib_source run $run_nr so $so_nr, done  
[timestamp -format %c]"
+	return $result
+    }
+
+    # Write Tcl array ARRAY_NAME to F.
+
+    proc _write_tcl_array { self_var f array_name } {
+	upvar 1 $self_var self
+	if { "$array_name" != "$self_var" } {
+	    global $array_name
+	}
+	puts $f "== $array_name =="
+	foreach { name value } [array get $array_name] {
+	    puts $f "$name: $value"
+	}
+    }
+
+    # Write global Tcl state used for compilation to F.
+    # If anything changes we want to recompile.
+
+    proc _write_tcl_state { self_var f dest } {
+	upvar 1 $self_var self
+
+	# TODO(dje): gdb_default_target_compile references a lot of global
+	# state.  Can we capture it all?  For now these are the important ones.
+
+	set vars { CC_FOR_TARGET CXX_FOR_TARGET CFLAGS_FOR_TARGET }
+	foreach v $vars {
+	    global $v
+	    if [info exists $v] {
+		# Indirect read: $v holds the variable's name.
+		set value [set $v]
+		puts $f "$v: $value"
+	    }
+	}
+
+	puts $f ""
+	_write_tcl_array self $f target_info
+	puts $f ""
+	_write_tcl_array self $f board_info
+    }
+
+    # Write all sideband non-file inputs, as well as OPTIONS, to INPUTS_FILE.
+    # If anything changes we want to recompile.
+
+    proc _write_inputs_file { self_var dest inputs_file options } {
+	upvar 1 $self_var self
+	global env
+	set f [open $inputs_file "w"]
+	_write_tcl_array self $f self
+	puts $f ""
+	puts $f "options: $options"
+	puts $f "PATH: [getenv PATH]"
+	puts $f ""
+	_write_tcl_state self $f $dest
+	close $f
+    }
+
+    # Generate the sha1sum of all the inputs.
+    # The result is a list of { error_code text }.
+    # Upon success error_code is zero and text is the sha1sum.
+    # Otherwise, error_code is non_zero and text is the error message.
+
+    proc _gen_sha1sum_for_inputs { source_files header_files inputs } {
+	global srcdir subdir CAT_PROGRAM SHA1SUM_PROGRAM
+	set header_paths ""
+	foreach f $header_files {
+	    switch -glob -- $f {
+		"<*>" {
+		    # skip
+		}
+		default {
+		    append header_paths " $srcdir/$subdir/$f"
+		}
+	    }
+	}
+	verbose -log "_gen_sha1sum_for_inputs: summing $source_files  
$header_paths $inputs"
+	set catch_result [catch "exec $CAT_PROGRAM $source_files $header_paths  
$inputs | $SHA1SUM_PROGRAM" output]
+        return [list $catch_result $output]
+    }
+
+    # Return the contents of TEXT_FILE.
+    # It is assumed TEXT_FILE exists and is readable.
+    # This is used for reading files containing sha1sums; the
+    # last newline is removed.
+
+    proc _read_file { text_file } {
+	set f [open $text_file "r"]
+	set result [read -nonewline $f]
+	close $f
+	return $result
+    }
+
+    # Write TEXT to TEXT_FILE.
+    # It is assumed TEXT_FILE can be opened/created and written to.
+
+    proc _write_file { text_file text } {
+	set f [open $text_file "w"]
+	puts $f $text
+	close $f
+    }
+
+    # Wrapper on gdb_compile* that computes sha1sums of inputs to decide
+    # whether the compile is needed.
+    # The result is the result of gdb_compile*: "" == success, otherwise
+    # a compilation error occurred and the output is an error message.
+    # This doesn't take all inputs into account, just the useful ones.
+    # As an extension (or simplification) on gdb_compile*, if TYPE is
+    # shlib then call gdb_compile_shlib, otherwise call gdb_compile.
+    # Other possibilities *could* be handled this way, e.g., pthreads.  TBD.
+
+    proc _perftest_compile { self_var source_files header_files dest type options } {
+	upvar 1 $self_var self
+	verbose -log "_perftest_compile $source_files $header_files $dest $type  
$options"
+	# To keep things simple, we put all non-file inputs into a file and
+	# then cat all input files through sha1sum.
+	set sha1sum_file ${dest}.sha1sum
+	set inputs_file ${dest}.inputs
+	global srcdir subdir
+	set all_options $options
+	lappend all_options "incdir=$srcdir/$subdir"
+	_write_inputs_file self $dest $inputs_file $all_options
+	set sha1sum [_gen_sha1sum_for_inputs $source_files $header_files $inputs_file]
+	if { [lindex $sha1sum 0] != 0 } {
+	    return "sha1sum generation error: [lindex $sha1sum 1]"
+	}
+	set sha1sum [lindex $sha1sum 1]
+	if ![file exists $dest] {
+	    file delete $sha1sum_file
+	}
+	if [file exists $sha1sum_file] {
+	    set last_sha1sum [_read_file $sha1sum_file]
+	    verbose -log "last: $last_sha1sum, new: $sha1sum"
+	    if { $sha1sum == $last_sha1sum } {
+		verbose -log "using existing build for $dest"
+		return ""
+	    }
+	}
+	# No such luck, we need to compile.
+	file delete $sha1sum_file
+	if { $type == "shlib" } {
+	    set result [gdb_compile_shlib $source_files $dest $all_options]
+	} else {
+	    set result [gdb_compile $source_files $dest $type $all_options]
+	}
+	if { $result == "" } {
+	    _write_file $sha1sum_file $sha1sum
+	    verbose -log "wrote sha1sum: $sha1sum"
+	}
+	return $result
+    }
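+
+    # E.g., compiling foo.o this way (hypothetical name) also produces:
+    #   foo.o.inputs  - the non-file inputs (options, PATH, board info)
+    #   foo.o.sha1sum - sha1 of the sources, headers and foo.o.inputs
+    # On a rebuild, if the stored sha1sum still matches, the compile is
+    # skipped and the existing foo.o is reused.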
+
+    proc _compile_binary_pieces { self_var worker_nr static run_nr } {
+	upvar 1 $self_var self
+	set compile_options [_compile_options self]
+	set nr_compunits [_get_param $self(nr_compunits) $run_nr]
+	set header_files [_get_param $self(binary_extra_headers) $run_nr]
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	# Generate the source first so we can more easily measure how long that
+	# takes.  [It takes hardly any time at all, relative to the time it
+	# takes to compile, but this will provide numbers to show that.]
+	set todo_list [_gen_binary_source self $worker_nr $static $run_nr]
+	verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run  
$run_nr, started [timestamp -format %c]"
+	foreach elm $todo_list {
+	    set source_file [lindex $elm 0]
+	    set object_file [lindex $elm 1]
+	    set compile_result [_perftest_compile self $source_file $header_files $object_file object $compile_options]
+	    if { $compile_result != "" } {
+		verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run  
$run_nr, failed [timestamp -format %c]"
+		verbose -log $compile_result
+		return -1
+	    }
+	}
+	verbose -log "GenPerfTest::_compile_binary_pieces worker $worker_nr run  
$run_nr, done [timestamp -format %c]"
+	return 0
+    }
+
+    # Helper function to compile the pieces of a shlib.
+    # Note: gdb_compile_shlib{,_pthreads} don't support first building object
+    # files and then building the shlib.  Therefore our hands are tied, and we
+    # just build the shlib in one step.  This is less of a parallelization
+    # problem if there are multiple shlibs: each worker can build a different
+    # shlib.  If this proves to be a problem in practice we can enhance
+    # gdb_compile_shlib* then.
+
+    proc _compile_shlib { self_var static run_nr so_nr } {
+	upvar 1 $self_var self
+	set source_files [_gen_shlib_source self $static $run_nr $so_nr]
+	set header_files [_get_param $self(gen_shlib_extra_headers) $run_nr]
+	set shlib_file [_make_shlib_name self $static $run_nr $so_nr]
+	set compile_options "[_compile_options self]  
additional_flags=-DSHLIB=$so_nr"
+	set compile_result [_perftest_compile self $source_files $header_files  
$shlib_file shlib $compile_options]
+	if { $compile_result != "" } {
+	    verbose -log "_compile_shlib failed: $compile_result"
+	    return -1
+	}
+	return 0
+    }
+
+    proc _gen_tail_shlib_source { self_var static run_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::_gen_tail_shlib_source run $run_nr"
+	set source_files [_get_param $self(tail_shlib_sources) $run_nr]
+	if { [llength $source_files] == 0 } {
+	    return ""
+	}
+	set result ""
+	foreach source_name $source_files {
+	    lappend result [_gen_shlib_common_source self $static $run_nr tail $source_name]
+	}
+	return $result
+    }
+
+    proc _make_tail_shlib_name { self_var static run_nr } {
+	upvar 1 $self_var self
+	set source_files [_get_param $self(tail_shlib_sources) $run_nr]
+	if { [llength $source_files] == 0 } {
+	    return ""
+	}
+	return [_make_shlib_name self $static $run_nr "tail"]
+    }
+
+    # Helper function to compile the tail shlib, if it's specified.
+
+    proc _compile_tail_shlib { self_var static run_nr } {
+	upvar 1 $self_var self
+	set source_files [_gen_tail_shlib_source self $static $run_nr]
+	if { [llength $source_files] == 0 } {
+	    return 0
+	}
+	set header_files [_get_param $self(tail_shlib_headers) $run_nr]
+	set shlib_file [_make_tail_shlib_name self $static $run_nr]
+	set compile_options [_compile_options self]
+	set compile_result [_perftest_compile self $source_files $header_files $shlib_file shlib $compile_options]
+	if { $compile_result != "" } {
+	    verbose -log "_compile_tail_shlib failed: $compile_result"
+	    return -1
+	}
+	verbose -log "_compile_tail_shlib failed: succeeded"
+	return 0
+    }
+
+    # Compile the pieces of the binary, and possibly shlibs, for the test.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	global PERF_TEST_COMPILE_PARALLELISM
+	set nr_workers $PERF_TEST_COMPILE_PARALLELISM
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	verbose -log "_compile_pieces: static flag: $static"
+	file mkdir "[file dirname $self(binfile)]/pieces"
+	if $static {
+	    # All the generated pieces look the same (run over run) so just
+	    # build all the shlibs of the last run (which is the largest).
+	    set last_run [expr $nr_runs - 1]
+	    set nr_gen_shlibs [_get_param $self(nr_gen_shlibs) $last_run]
+	    set object_dir [_make_object_dir_name self $static ignored]
+	    file mkdir $object_dir
+	    for { set so_nr $worker_nr } { $so_nr < $nr_gen_shlibs } { incr so_nr $nr_workers } {
+		if { [_compile_shlib self $static $last_run $so_nr] < 0 } {
+		    return -1
+		}
+	    }
+	    # We don't shard building of tail-shlib, so only build it once.
+	    if { $worker_nr == 0 } {
+		if { [_compile_tail_shlib self $static $last_run] < 0 } {
+		    return -1
+		}
+	    }
+	    if { [_compile_binary_pieces self $worker_nr $static $last_run] < 0 } {
+		return -1
+	    }
+	} else {
+	    for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+		set nr_gen_shlibs [_get_param $self(nr_gen_shlibs) $run_nr]
+		set object_dir [_make_object_dir_name self $static $run_nr]
+		file mkdir $object_dir
+		for { set so_nr $worker_nr } { $so_nr < $nr_gen_shlibs } { incr so_nr $nr_workers } {
+		    if { [_compile_shlib self $static $run_nr $so_nr] < 0 } {
+			return -1
+		    }
+		}
+		# We don't shard building of tail-shlib, so only build it once.
+		if { $worker_nr == 0 } {
+		    if { [_compile_tail_shlib self $static $run_nr] < 0 } {
+			return -1
+		    }
+		}
+		if { [_compile_binary_pieces self $worker_nr $static $run_nr] < 0 } {
+		    return -1
+		}
+	    }
+	}
+	return 0
+    }
+
+    # Main function invoked by each worker.
+    # This builds all the things that are possible to build in parallel,
+    # sharded up among all the workers.
+
+    proc compile_pieces { self_var worker_nr } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile_pieces worker $worker_nr, started  
[timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile_pieces self $worker_nr] < 0 } {
+	    verbose -log "GenPerfTest::compile_pieces worker $worker_nr, failed  
[timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile_pieces worker $worker_nr, done  
[timestamp -format %c]"
+	return 0
+    }
+
+    proc _make_shlib_options { self_var static run_nr } {
+	upvar 1 $self_var self
+	set nr_gen_shlibs [_get_param $self(nr_gen_shlibs) $run_nr]
+	set result ""
+	for { set i 0 } { $i < $nr_gen_shlibs } { incr i } {
+	    lappend result "shlib=[_make_shlib_name self $static $run_nr $i]"
+	}
+	set tail_shlib_name [_make_tail_shlib_name self $static $run_nr]
+	if { "$tail_shlib_name" != "" } {
+	    lappend result "shlib=$tail_shlib_name"
+	}
+	return $result
+    }
+
+    proc _compile_binary { self_var static run_nr } {
+	upvar 1 $self_var self
+	set input_files [_make_binary_input_file_names self $static $run_nr]
+	set header_files [_get_param $self(binary_extra_headers) $run_nr]
+	set binary_file [_make_binary_name self $run_nr]
+	set compile_options [_compile_options self]
+	set shlib_options [_make_shlib_options self $static $run_nr]
+	if { [llength $shlib_options] > 0 } {
+	    append compile_options " " $shlib_options
+	}
+	set compile_result [_perftest_compile self $input_files $header_files $binary_file executable $compile_options]
+	if { $compile_result != "" } {
+	    verbose -log "_compile_binary failed: $compile_result"
+	    return -1
+	}
+	return 0
+    }
+
+    # Helper function for compile.
+    # The result is 0 for success, -1 for failure.
+
+    proc _compile { self_var } {
+	upvar 1 $self_var self
+	set nr_runs [llength $self(run_names)]
+	set static [_static_object_files_p self]
+	verbose -log "_compile: static flag: $static"
+	for { set run_nr 0 } { $run_nr < $nr_runs } { incr run_nr } {
+	    if { [_compile_binary self $static $run_nr] < 0 } {
+		return -1
+	    }
+	}
+	return 0
+    }
+
+    # Main function to compile the test program.
+    # It is assumed all the pieces of the binary (all the .o's, except those
+    # from test-supplied sources) have already been built with compile_pieces.
+    # There's no need to compile any shlibs here, as compile_pieces will have
+    # already built them too.
+    # The result is 0 for success, -1 for failure.
+
+    proc compile { self_var } {
+	upvar 1 $self_var self
+	verbose -log "GenPerfTest::compile, started [timestamp -format %c]"
+	verbose -log "self: [array get self]"
+	_verify_testcase self
+	if { [_compile self] < 0 } {
+	    verbose -log "GenPerfTest::compile, failed [timestamp -format %c]"
+	    return -1
+	}
+	verbose -log "GenPerfTest::compile, done [timestamp -format %c]"
+	return 0
+    }
+
+    # Main function for running a test.
+    # It is assumed that the test program has already been built.
+
+    proc run { builder_exp_file_name make_config_thunk_name py_file_name test_class_name } {
+	verbose -log "GenPerfTest::run, started [timestamp -format %c]"
+	verbose -log "GenPerfTest::run, $builder_exp_file_name  
$make_config_thunk_name $py_file_name $test_class_name"
+
+	set testprog [file rootname $builder_exp_file_name]
+
+	# This variable is required by perftest.exp.
+	# This isn't the name of the test program; it's the name of the .py
+	# test.  The harness assumes they are the same, which is not the case
+	# here.
+	global testfile
+	set testfile [file rootname $py_file_name]
+
+	GenPerfTest::load_test_description $builder_exp_file_name
+
+	array set testcase [$make_config_thunk_name]
+
+	PerfTest::assemble {
+	    # Compilation is handled elsewhere.
+	    return 0
+	} {
+	    clean_restart
+	    return 0
+	} {
+	    global gdb_prompt
+	    gdb_test_multiple "python ${test_class_name}('$testprog:$testfile',  
[tcl_string_list_to_python_list  
$testcase(run_names)], '$testcase(binfile)').run()" "run test" {
+		-re "Error while executing Python code.\[\r\n\]+$gdb_prompt $" {
+		    return -1
+		}
+		-re "\[\r\n\]+$gdb_prompt $" {
+		}
+	    }
+	    return 0
+	}
+	verbose -log "GenPerfTest::run, done [timestamp -format %c]"
+	return 0
+    }
+
+    # This function is invoked by the testcase builder scripts
+    # (e.g., gmonster[12].exp).
+    # It is not invoked by the testcase runner scripts
+    # (e.g., gmonster[12]-*.exp).
+
+    proc standard_compile_driver { exp_file_name make_config_thunk_name } {
+	global GDB_PERFTEST_MODE GDB_PERFTEST_SUBMODE
+	if ![info exists GDB_PERFTEST_SUBMODE] {
+	    # Probably a plain "make check-perf", nothing to do.
+	    # Give the user a reason why we're not running this test.
+	    verbose -log "Test must be compiled/run in separate steps."
+	    return 0
+	}
+	switch -glob -- "$GDB_PERFTEST_MODE/$GDB_PERFTEST_SUBMODE" {
+	    compile/gen-workers {
+		if { [GenPerfTest::gen_worker_files $exp_file_name] < 0 } {
+		    fail $GDB_PERFTEST_MODE
+		    return -1
+		}
+		pass $GDB_PERFTEST_MODE
+	    }
+	    compile/build-pieces {
+		array set testcase [$make_config_thunk_name]
+		global PROGRAM_NAME WORKER_NR
+		set output_dir "gdb.perf/pieces"
+		if { [GenPerfTest::compile_pieces testcase $WORKER_NR] < 0 } {
+		    fail $GDB_PERFTEST_MODE
+		    # This gdb.log lives in a different place; help the user
+		    # find it.
+		    send_user "check ${output_dir}/${PROGRAM_NAME}-${WORKER_NR}/gdb.log\n"
+		    return -1
+		}
+		pass $GDB_PERFTEST_MODE
+	    }
+	    compile/final {
+		array set testcase [$make_config_thunk_name]
+		if { [GenPerfTest::compile testcase] < 0 } {
+		    fail $GDB_PERFTEST_MODE
+		    return -1
+		}
+		pass $GDB_PERFTEST_MODE
+	    }
+	    run/* - both/* {
+		# Since the builder script is a .exp file living in gdb.perf
+		# we can get here (dejagnu will find this file for a default
+		# "make check-perf").  We can also get here when
+		# standard_run_driver loads the builder .exp file.
+	    }
+	    default {
+		error "Bad value for GDB_PERFTEST_MODE/GDB_PERFTEST_SUBMODE:  
$GDB_PERFTEST_MODE/$GDB_PERFTEST_SUBMODE"
+	    }
+	}
+	return 0
+    }
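+
+    # A minimal builder script might look like this (hypothetical
+    # gmonster1.exp sketch; the thunk must return the testcase array
+    # via [array get]):
+    #   load_lib perftest.exp
+    #   proc make_gmonster1_config { } {
+    #       ... fill in a testcase array, return [array get testcase] ...
+    #   }
+    #   GenPerfTest::standard_compile_driver gmonster1.exp make_gmonster1_config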
+
+    # This function is invoked by the testcase runner scripts
+    # (e.g., gmonster[12]-*.exp).
+    # It is not invoked by the testcase builder scripts
+    # (e.g., gmonster[12].exp).
+    #
+    # These tests are built separately with
+    # "make build-perf" and run with
+    # "make check-perf GDB_PERFTEST_MODE=run".
+    # Eventually we can support GDB_PERFTEST_MODE=both, but for now we don't.
+
+    proc standard_run_driver { builder_exp_file_name make_config_thunk_name py_file_name test_class_name } {
+	global GDB_PERFTEST_MODE
+	# First step is to compile the test.
+	switch $GDB_PERFTEST_MODE {
+	    compile - both {
+		# Here is where we'd add code to support a plain
+		# "make check-perf".
+	    }
+	    run {
+	    }
+	    default {
+		error "Bad value for GDB_PERFTEST_MODE: $GDB_PERFTEST_MODE"
+	    }
+	}
+	# Now run the test.
+	switch $GDB_PERFTEST_MODE {
+	    compile {
+	    }
+	    both {
+		# Give the user a reason why we're not running this test.
+		verbose -log "Test must be compiled/run in separate steps."
+	    }
+	    run {
+		if { [GenPerfTest::run $builder_exp_file_name $make_config_thunk_name $py_file_name $test_class_name] < 0 } {
+		    fail $GDB_PERFTEST_MODE
+		    return -1
+		}
+		pass $GDB_PERFTEST_MODE
+	    }
+	}
+	return 0
+    }
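+
+    # A runner script might look like this (hypothetical gmonster1-ptype.exp
+    # sketch, where gmonster1-ptype.py defines the perf test class):
+    #   load_lib perftest.exp
+    #   GenPerfTest::standard_run_driver gmonster1.exp make_gmonster1_config \
+    #       gmonster1-ptype.py GMonster1Ptype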
+}
+
+if ![info exists PERF_TEST_COMPILE_PARALLELISM] {
+    set PERF_TEST_COMPILE_PARALLELISM $GenPerfTest::DEFAULT_PERF_TEST_COMPILE_PARALLELISM
+}
