public inbox for gdb-patches@sourceware.org
* [PATCH 2/3] Perf test framework
  2013-09-25 14:27 [PATCH 0/3 V2] GDB Performance testing Yao Qi
@ 2013-09-25 14:27 ` Yao Qi
  2013-10-09  6:37   ` Doug Evans
  2013-09-25 14:27 ` [PATCH 3/3] Test on solib load and unload Yao Qi
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-09-25 14:27 UTC (permalink / raw)
  To: gdb-patches

This patch adds a basic framework to do performance testing for GDB.
perftest.py is about the test case, testresult.py is about test
results and how they are saved.  reporter.py is about how results
are reported (in what format).  measure.py is about measuring the
execution of tests by a collection of measurements.
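
For illustration, a minimal test case built on this framework could look
like the sketch below (the class name and the measured GDB command are
made up here; the real example is solib.py in patch 3/3):

    from perftest import perftest
    from perftest import measure

    class BacktraceTime(perftest.SingleStatisticTestCase):
        # Hypothetical test: time the 'bt' command.
        def __init__(self):
            # Collect elapsed-time measurements, reported under the
            # name "backtrace".
            super (BacktraceTime, self).__init__ ([measure.Measurement.TIME],
                                                  "backtrace")

        def execute_test(self):
            # Every measurement is started and stopped around the
            # function passed to measure; '1' is the parameter recorded
            # with the result.
            self.measure.measure(lambda: gdb.execute("bt"), 1)

    # Run inside GDB, where the gdb module is available.
    BacktraceTime().run()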

In V2, there are several changes to address Doug and Sanimir's
comments.

 - Add copyright header and docstring in perftest/__init__.py
 - Remove config.py.
 - Fix docstring format.
 - Rename classes "SingleVariable" to "SingleStatistic".
 - Don't extend gdb.Function in class TestCase.  Add a new method run
   to run the test case so that we can pass parameters to the test.
 - Allow customizing whether to warm up and whether to append to the
   test log.
 - Move time measurement into test harness.  Add a new class
   Measurement for a specific measurement and a new class Measure to
   measure them for a given test case.
 - A new class ResultFactory to create instances of TestResult.
 - New file lib/perftest.exp, which does some preparation and cleanup
   to simplify each *.exp file.
 - Skip compilation step if GDB_PERFORMANCE_SKIP_COMPILE exists.

gdb/testsuite/

	* lib/perftest.exp: New.
	* gdb.perf/lib/perftest/__init__.py: New.
	* gdb.perf/lib/perftest/measure.py: New.
	* gdb.perf/lib/perftest/perftest.py: New.
	* gdb.perf/lib/perftest/reporter.py: New.
	* gdb.perf/lib/perftest/testresult.py: New.
---
 gdb/testsuite/gdb.perf/lib/perftest/__init__.py   |   17 +++
 gdb/testsuite/gdb.perf/lib/perftest/measure.py    |  113 +++++++++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/perftest.py   |   71 ++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/reporter.py   |   68 ++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/testresult.py |   59 ++++++++++
 gdb/testsuite/lib/perftest.exp                    |  121 +++++++++++++++++++++
 6 files changed, 449 insertions(+), 0 deletions(-)
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/__init__.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/measure.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/perftest.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/reporter.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/testresult.py
 create mode 100644 gdb/testsuite/lib/perftest.exp

diff --git a/gdb/testsuite/gdb.perf/lib/perftest/__init__.py b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
new file mode 100644
index 0000000..7739a3e
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
@@ -0,0 +1,17 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+"""
+GDB performance testing framework.
+"""
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/measure.py b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
new file mode 100644
index 0000000..8bede59
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
@@ -0,0 +1,113 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import time
+
+class Measure(object):
+    """
+    A class that measure and collect the interesting data for a given
+    testcase.
+
+    An instance of Measure has a collection of measurements, and each
+    of them is to measure a given aspect, such as time and memory.
+
+    """
+
+    def __init__(self, measurements, result_factory):
+        """Constructor of measure.
+
+        measurements is a collection of 'enum's, which are about the
+        id of measurements.  A collection of measurements are created
+        according to the ids in argument measurements.
+
+        """
+        self.ms = []
+        for m in measurements:
+            self.ms.append(Measurement.create_instance(m, result_factory))
+
+    def measure(self, func, id):
+        """Measure the operations done by func with a collection of measurements."""
+        for m in self.ms:
+            m.start(id)
+
+        func()
+
+        for m in self.ms:
+            m.stop(id)
+
+    def report(self, reporter, name):
+        """Report the measured results."""
+        for m in self.ms:
+            m.report(reporter, name)
+
+class Measurement(object):
+    """A measurement for a certain aspect."""
+
+    # enum of each measurement.
+    TIME = 1
+
+    @classmethod
+    def create_instance(cls, id, result_factory):
+        """A factory method to create an instance of Measurement subclass
+        according to id."""
+        if id == Measurement.TIME:
+            return MeasurementTime(result_factory.create_result())
+        else:
+            raise RuntimeError("Unknown type of measurement.")
+
+    def __init__(self, result):
+        """Constructor of Measurement.
+
+        Attribute result is the TestResult associated with measurement.
+
+        """
+        self.result = result
+
+    def start(self, id):
+        """Abstract method to start the measurement."""
+        raise NotImplementedError("Abstract Method:start")
+
+    def stop(self, id):
+        """Abstract method to stop the measurement.
+
+        When the measurement is stopped, we've got something, and
+        record them in result.
+
+        """
+        raise NotImplementedError("Abstract Method:stop.")
+
+    def decorate_name(self, name):
+        """Decorate the name when report."""
+        return name
+
+    def report(self, reporter, name):
+        """Report the measured data by argument reporter."""
+        self.result.report(reporter, self.decorate_name(name))
+
+class MeasurementTime(Measurement):
+    """Measurement on time."""
+    def __init__(self, result):
+        super(MeasurementTime, self).__init__(result)
+        self.start_time = 0
+
+    def start(self, id):
+        self.start_time = time.clock()
+
+    def stop(self, id):
+        elapsed_time = time.clock() - self.start_time
+        self.result.record (id, elapsed_time)
+
+    def decorate_name(self, name):
+        return name + " time"
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
new file mode 100644
index 0000000..bd81cf0
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
@@ -0,0 +1,71 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import testresult
+import reporter
+from measure import Measure
+
+class TestCase(object):
+    """
+    Base class of all performance testing cases.
+
+    Each sub-class should override methods execute_test, in which
+    several GDB operations are performed and measured by attribute
+    measure.  Sub-class can also override method warm_up optionally
+    if the test case needs warm up.
+
+    """
+
+    def __init__(self, name, measure):
+        """
+        Constructor of TestCase.
+
+        Construct an instance of TestCase with a name and a measure
+        which is to measure the test by several different measurements.
+
+        """
+        self.name = name
+        self.measure = measure
+
+    def execute_test(self):
+        """Abstract method to do the actual tests."""
+        raise NotImplementedError("Abstract Method.")
+
+    def warm_up(self):
+        """Do some operations to warm up the environment."""
+
+    def run(self, warm_up=True, append=True):
+        """
+        Run this test case.
+
+        It is a template method to invoke method warm_up,
+        execute_test, and finally report the measured results.
+        If parameter warm_up is True, run method warm_up.  If parameter
+        append is True, the test result will be appended instead of
+        overwritten.
+
+        """
+        if warm_up:
+            self.warm_up()
+
+        self.execute_test()
+        self.measure.report(reporter.TextReporter(append), self.name)
+
+class SingleStatisticTestCase(TestCase):
+    """Test case with a single statistic."""
+
+    def __init__(self, measures, name):
+        super (SingleStatisticTestCase, self).__init__ (name, Measure(measures,
+                                                                      testresult.SingleStatisticResultFactory()))
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/reporter.py b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
new file mode 100644
index 0000000..7da86cf
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
@@ -0,0 +1,68 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+class Reporter(object):
+    """
+    Base class of reporter to report test results in a certain format.
+
+    A subclass, which is specific to a report format, should override
+    methods report, start and end.
+
+    """
+
+    def __init__(self, append):
+        """Constructor of Reporter.
+
+        attribute append is used to determine whether to append or
+        overwrite log file.
+
+        """
+        self.append = append
+
+    def report(self, *args):
+        raise NotImplementedError("Abstract Method:report.")
+
+    def start(self):
+        """Invoked when reporting is started."""
+        raise NotImplementedError("Abstract Method:start.")
+
+    def end(self):
+        """Invoked when reporting is done.
+
+        It must be overridden to do some cleanups, such as closing file
+        descriptors.
+
+        """
+        raise NotImplementedError("Abstract Method:end.")
+
+class TextReporter(Reporter):
+    """Report results in a plain text file 'perftest.log'."""
+
+    def __init__(self, append):
+        super (TextReporter, self).__init__(append)
+        self.txt_log = None
+
+    def report(self, *args):
+        self.txt_log.write(' '.join(str(arg) for arg in args))
+        self.txt_log.write('\n')
+
+    def start(self):
+        if self.append:
+            self.txt_log = open ("perftest.log", 'a+')
+        else:
+            self.txt_log = open ("perftest.log", 'w')
+
+    def end(self):
+        self.txt_log.close ()
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/testresult.py b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
new file mode 100644
index 0000000..51a066f
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
@@ -0,0 +1,59 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+class TestResult(object):
+    """
+    Base class to record and report test results.
+
+    The record method records the results of a test case, and the
+    report method reports the recorded results through a given reporter.
+
+    """
+
+    def record(self, parameter, result):
+        raise NotImplementedError("Abstract Method:record.")
+
+    def report(self, reporter, name):
+        """Report the test results by reporter."""
+        raise NotImplementedError("Abstract Method:report.")
+
+class SingleStatisticTestResult(TestResult):
+    """Test results for the test case with a single statistic."""
+
+    def __init__(self):
+        super (SingleStatisticTestResult, self).__init__ ()
+        self.results = dict ()
+
+    def record(self, parameter, result):
+        self.results[parameter] = result
+
+    def report(self, reporter, name):
+        reporter.start()
+        for key in sorted(self.results.iterkeys()):
+            reporter.report(name, key, self.results[key])
+        reporter.end()
+
+class ResultFactory(object):
+    """A factory to create an instance of TestResult."""
+
+    def create_result(self):
+        """Create an instance of TestResult."""
+        raise NotImplementedError("Abstract Method:create_result.")
+
+class SingleStatisticResultFactory(ResultFactory):
+    """A factory to create an instance of SingleStatisticTestResult."""
+
+    def create_result(self):
+        return SingleStatisticTestResult()
diff --git a/gdb/testsuite/lib/perftest.exp b/gdb/testsuite/lib/perftest.exp
new file mode 100644
index 0000000..3c04989
--- /dev/null
+++ b/gdb/testsuite/lib/perftest.exp
@@ -0,0 +1,121 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+namespace eval PerfTest {
+    # The name of the python file on the host.
+    variable remote_python_file
+
+    # Whether the source files were compiled successfully.
+    variable compiled_ok
+
+    # A private method to set up GDB for performance testing.
+    proc _setup_gdb {} {
+	variable remote_python_file
+	global srcdir subdir testfile
+
+	set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
+
+	# Set sys.path for module perftest.
+	gdb_test_no_output "python import os, sys"
+	gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
+	gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
+    }
+
+    # A private method to do some cleanup when the performance test
+    # is finished.
+    proc _teardown {} {
+	variable remote_python_file
+
+	remote_file host delete $remote_python_file
+    }
+
+    # Compile the source files of the test case.  BODY is the tcl code
+    # that does the actual compilation; it should invoke 'PerfTest::compiled'
+    # if compilation is successful.
+    proc compile {body} {
+	global GDB_PERFORMANCE_SKIP_COMPILE
+
+	if ![info exists GDB_PERFORMANCE_SKIP_COMPILE] {
+	    variable compiled_ok
+
+	    set compiled_ok 0
+	    uplevel 2 $body
+
+	    if {!$compiled_ok} {
+		untested "Could not compile source files."
+		return -1
+	    }
+	}
+    }
+
+    # Mark that the compilation finished successfully.
+    proc compiled {} {
+	variable compiled_ok
+
+	set compiled_ok 1
+    }
+
+    # Start GDB.
+    proc setup_gdb {body} {
+	variable compiled_ok
+
+	if {!$compiled_ok} {
+	    return
+	}
+
+	uplevel 2 $body
+    }
+
+    # Run the performance test.
+    proc run {body} {
+	global timeout
+
+	set timeout 3000
+	uplevel 2 $body
+    }
+
+    # The top-level interface to PerfTest.
+    # PREPARE is the tcl code to prepare for the performance testing,
+    # including generating and compiling source files and starting up GDB.
+    # RUN is the tcl code to drive GDB to do some operations.
+    proc assemble {prepare run} {
+	variable compiled_ok
+
+	set compiled_ok 1
+	eval $prepare
+
+	if {!$compiled_ok} {
+	    return
+	}
+
+	_setup_gdb
+
+	eval $run
+
+	_teardown
+    }
+}
+
+# Return true if performance tests should be skipped.
+
+proc skip_perf_tests { } {
+    global GDB_PERFORMANCE
+
+    if [info exists GDB_PERFORMANCE] {
+	return 0
+    }
+
+    return 1
+}
-- 
1.7.7.6


* [PATCH 3/3] Test on solib load and unload
  2013-09-25 14:27 [PATCH 0/3 V2] GDB Performance testing Yao Qi
  2013-09-25 14:27 ` [PATCH 2/3] Perf test framework Yao Qi
@ 2013-09-25 14:27 ` Yao Qi
  2013-09-27 14:09   ` Gary Benson
  2013-09-25 14:27 ` [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
  2013-10-06  1:44 ` [PATCH 0/3 V2] GDB Performance testing Yao Qi
  3 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-09-25 14:27 UTC (permalink / raw)
  To: gdb-patches

This patch adds a test case on the performance of GDB handling
load and unload of shared libraries.

In V2, there are some changes,

 - A new proc gdb_produce_source to produce source files.  I tried to
   move all source file generation code out of solib.exp, but the
   compilation step still needs to know the generated file names.  I
   would have to hard-code the file names in the compilation step,
   which I don't like, so I gave up on this move.
 - SOLIB_NUMBER -> SOLIB_COUNT
 - New variable SOLIB_DLCLOSE_REVERSED_ORDER to control the order of
   iterating a list of shared libs to dlclose them.
 - New variable GDB_PERFORMANCE to enable these perf test cases.
 - Remove dlsym call in solib.c.
 - Update solib.py for the updated framework.

gdb/testsuite/

	* lib/gdb.exp (gdb_produce_source): New procedure.
	* gdb.perf/solib.c: New.
	* gdb.perf/solib.exp: New.
	* gdb.perf/solib.py: New.
---
 gdb/testsuite/gdb.perf/solib.c   |   71 ++++++++++++++++++++++++++++++++++
 gdb/testsuite/gdb.perf/solib.exp |   78 ++++++++++++++++++++++++++++++++++++++
 gdb/testsuite/gdb.perf/solib.py  |   45 ++++++++++++++++++++++
 gdb/testsuite/lib/gdb.exp        |   16 ++++++++
 4 files changed, 210 insertions(+), 0 deletions(-)
 create mode 100644 gdb/testsuite/gdb.perf/solib.c
 create mode 100644 gdb/testsuite/gdb.perf/solib.exp
 create mode 100644 gdb/testsuite/gdb.perf/solib.py

diff --git a/gdb/testsuite/gdb.perf/solib.c b/gdb/testsuite/gdb.perf/solib.c
new file mode 100644
index 0000000..82b1efa
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.c
@@ -0,0 +1,71 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright (C) 2013 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifdef __WIN32__
+#include <windows.h>
+#define dlopen(name, mode) LoadLibrary (TEXT (name))
+#define dlsym(handle, func) GetProcAddress (handle, func)
+#define dlclose(handle) FreeLibrary (handle)
+#else
+#include <dlfcn.h>
+#endif
+
+void
+do_test (int number)
+{
+  void **handles;
+  char libname[40];
+  int i;
+
+  handles = malloc (sizeof (void *) * number);
+
+  for (i = 0; i < number; i++)
+    {
+      sprintf (libname, "solib-lib%d", i);
+      handles[i] = dlopen (libname, RTLD_LAZY);
+      if (handles[i] == NULL)
+	{
+	  printf ("ERROR on dlopen %s\n", libname);
+	  return;
+	}
+    }
+
+  /* Unload shared libraries in different orders.  */
+#ifndef SOLIB_DLCLOSE_REVERSED_ORDER
+  for (i = 0; i < number; i++)
+#else
+  for (i = number - 1; i >= 0; i--)
+#endif
+    dlclose (handles[i]);
+
+  free (handles);
+}
+
+static void
+end (void)
+{}
+
+int
+main (void)
+{
+  end ();
+
+  return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
new file mode 100644
index 0000000..ab41b4e
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.exp
@@ -0,0 +1,78 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# This test case tests the performance of GDB when the shared
+# libraries of the inferior are loaded and unloaded.
+load_lib perftest.exp
+
+if [skip_perf_tests] {
+    return 0
+}
+
+standard_testfile .c
+set executable $testfile
+set expfile $testfile.exp
+
+# make check-perf RUNTESTFLAGS='solib.exp SOLIB_COUNT=1024'
+if ![info exists SOLIB_COUNT] {
+    set SOLIB_COUNT 128
+}
+
+PerfTest::assemble {
+    compile {
+	for {set i 0} {$i < $SOLIB_COUNT} {incr i} {
+
+	    # Produce source files.
+	    set libname "solib-lib$i"
+	    set src [standard_output_file $libname.c]
+	    set exe [standard_output_file $libname]
+
+	    gdb_produce_source $src { "int shr$i (void) {return 0;}" }
+
+	    # Compile.
+	    if { [gdb_compile_shlib $src $exe {debug}] != "" } {
+		untested "Couldn't compile $src."
+		return -1
+	    }
+
+	    # Delete object files to save some space.
+	    file delete [standard_output_file  "solib-lib$i.c.o"]
+	}
+
+	set compile_flags {debug shlib_load}
+	global SOLIB_DLCLOSE_REVERSED_ORDER
+
+	if [info exists SOLIB_DLCLOSE_REVERSED_ORDER] {
+	    lappend compile_flags "additional_flags=-DSOLIB_DLCLOSE_REVERSED_ORDER"
+	}
+
+	if { [gdb_compile "$srcdir/$subdir/$srcfile" ${binfile} executable  $compile_flags] == "" } {
+	    PerfTest::compiled
+	}
+    }
+
+    setup_gdb {
+	clean_restart $binfile
+
+	if ![runto_main] {
+	    fail "Can't run to main"
+	    return -1
+	}
+    }
+} {
+    run {
+	gdb_test_no_output "python SolibLoadUnload\($SOLIB_COUNT\).run()"
+    }
+}
diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
new file mode 100644
index 0000000..9376814
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.py
@@ -0,0 +1,45 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# This test case tests the speed of GDB when the shared libraries
+# of the inferior are loaded and unloaded.
+
+from perftest import perftest
+from perftest import measure
+
+class SolibLoadUnload(perftest.SingleStatisticTestCase):
+    def __init__(self, solib_number):
+        # We want to measure time in this test.
+        super (SolibLoadUnload, self).__init__ ([measure.Measurement.TIME],
+                                                "solib")
+        self.solib_number = solib_number
+
+    def warm_up(self):
+        do_test_command = "call do_test (%d)" % self.solib_number
+        gdb.execute(do_test_command)
+        gdb.execute(do_test_command)
+
+    def execute_test(self):
+        num = self.solib_number
+        iteration = 5
+
+        while num > 0 and iteration > 0:
+            do_test_command = "call do_test (%d)" % num
+            func = lambda: gdb.execute (do_test_command)
+
+            self.measure.measure(func, num)
+
+            num = num / 2
+            iteration -= 1
diff --git a/gdb/testsuite/lib/gdb.exp b/gdb/testsuite/lib/gdb.exp
index 5e3331a..22349f7 100644
--- a/gdb/testsuite/lib/gdb.exp
+++ b/gdb/testsuite/lib/gdb.exp
@@ -1796,6 +1796,22 @@ proc supports_reverse {} {
     return 0
 }
 
+# Produce source file NAME and write SOURCES into it.
+
+proc gdb_produce_source { name sources } {
+    set index 0
+    set f [open $name "w"]
+
+    while { ${index} < [llength ${sources}] } {
+	set line [lindex ${sources} ${index}]
+	set index [expr ${index} + 1]
+
+	set line [uplevel list $line]
+	puts $f $line
+    }
+    close $f
+}
+
 # Return 1 if target is ILP32.
 # This cannot be decided simply from looking at the target string,
 # as it might depend on externally passed compiler options like -m64.
-- 
1.7.7.6


* [PATCH 0/3 V2] GDB Performance testing
@ 2013-09-25 14:27 Yao Qi
  2013-09-25 14:27 ` [PATCH 2/3] Perf test framework Yao Qi
                   ` (3 more replies)
  0 siblings, 4 replies; 14+ messages in thread
From: Yao Qi @ 2013-09-25 14:27 UTC (permalink / raw)
  To: gdb-patches

Hello,
Here is the V2 of GDB performance testing.  The changes in V2 can be
found in each patch, and they address most of the review comments.

Here are some points V2 doesn't address, and I'd like to revisit them
and consider them in the next iteration.

 - Utilities to generate source files.  We need them to generate source
   files to compose a reasonably sized program for perf testing.  However,
   current test case doesn't require these utilities.
 - Generated files, including source files, object files and executables
   are not removed when the test is done, because the next run may skip
   compilation step.

The basic usages of performance testing are unchanged, like

$ make check-perf
$ make check-perf RUNTESTFLAGS="--target_board=native-gdbserver solib.exp"
$ make check-perf RUNTESTFLAGS="solib.exp SOLIB_COUNT=1024"

We can skip compilation step like this,

$ make check-perf RUNTESTFLAGS='solib.exp GDB_PERFORMANCE_SKIP_COMPILE=yes'

By default, compilation is not skipped.

The skeleton of .exp is like this:

load_lib perftest.exp

if [skip_perf_tests] {
    return 0
}

PerfTest::assemble {
    compile {

      PerfTest::compiled
    }
    setup_gdb {
    }
} {
    run {
    }
}

PerfTest::assemble is inspired by Dwarf::assemble.  It has three
procedures, compile, setup_gdb and run.  Each test should fill in
the body of these three procedures for its purpose.  If variable
GDB_PERFORMANCE_SKIP_COMPILE exists, the procedure compile is not
invoked.  If the test case uses pre-compiled program, it can be
written like this:

PerfTest::assemble {
    setup_gdb {
    }
} {
    run {
    }
}

I'll update gdb/testsuite/README later.


Yao Qi (3):
  New make target 'check-perf' and new dir gdb.perf
  Perf test framework
  Test on solib load and unload

 gdb/Makefile.in                                   |    8 ++
 gdb/testsuite/Makefile.in                         |    4 +
 gdb/testsuite/configure                           |    3 +-
 gdb/testsuite/configure.ac                        |    2 +-
 gdb/testsuite/gdb.perf/Makefile.in                |   15 +++
 gdb/testsuite/gdb.perf/lib/perftest/__init__.py   |   17 +++
 gdb/testsuite/gdb.perf/lib/perftest/measure.py    |  114 +++++++++++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/perftest.py   |   71 +++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/reporter.py   |   68 ++++++++++++
 gdb/testsuite/gdb.perf/lib/perftest/testresult.py |   59 +++++++++++
 gdb/testsuite/gdb.perf/solib.c                    |   71 +++++++++++++
 gdb/testsuite/gdb.perf/solib.exp                  |   77 ++++++++++++++
 gdb/testsuite/gdb.perf/solib.py                   |   45 ++++++++
 gdb/testsuite/lib/gdb.exp                         |   16 +++
 gdb/testsuite/lib/perftest.exp                    |   67 ++++++++++++
 15 files changed, 635 insertions(+), 2 deletions(-)
 create mode 100644 gdb/testsuite/gdb.perf/Makefile.in
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/__init__.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/measure.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/perftest.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/reporter.py
 create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/testresult.py
 create mode 100644 gdb/testsuite/gdb.perf/solib.c
 create mode 100644 gdb/testsuite/gdb.perf/solib.exp
 create mode 100644 gdb/testsuite/gdb.perf/solib.py
 create mode 100644 gdb/testsuite/lib/perftest.exp

-- 
1.7.7.6


* [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf
  2013-09-25 14:27 [PATCH 0/3 V2] GDB Performance testing Yao Qi
  2013-09-25 14:27 ` [PATCH 2/3] Perf test framework Yao Qi
  2013-09-25 14:27 ` [PATCH 3/3] Test on solib load and unload Yao Qi
@ 2013-09-25 14:27 ` Yao Qi
  2013-10-09  5:13   ` Doug Evans
  2013-10-06  1:44 ` [PATCH 0/3 V2] GDB Performance testing Yao Qi
  3 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-09-25 14:27 UTC (permalink / raw)
  To: gdb-patches

We add a new dir gdb.perf in testsuite for all performance tests.
However, current 'make check' logic will either run dejagnu in
directory testsuite or iterate all gdb.* directories which have *.exp
files.  Both of them will run tests in gdb.perf.  We want to achieve:

 1) typical 'make check' should not run performance tests.  In each perf
    test case, GDB_PERFORMANCE is checked.  If it doesn't exist, return.
 2) run perf tests easily.  We add a new makefile target 'check-perf'.

V2 is simpler than V1, since we don't have to filter out gdb.perf
directory.

gdb:

2013-09-25  Yao Qi  <yao@codesourcery.com>

	* Makefile.in (check-perf): New target.

gdb/testsuite:

2013-09-25  Yao Qi  <yao@codesourcery.com>

	* Makefile.in (check-perf): New target.
	* configure.ac (AC_OUTPUT): Output Makefile in gdb.perf.
	* configure: Re-generated.
	* gdb.perf/Makefile.in: New.
---
 gdb/Makefile.in                    |    8 ++++++++
 gdb/testsuite/Makefile.in          |    4 ++++
 gdb/testsuite/configure            |    3 ++-
 gdb/testsuite/configure.ac         |    2 +-
 gdb/testsuite/gdb.perf/Makefile.in |   15 +++++++++++++++
 5 files changed, 30 insertions(+), 2 deletions(-)
 create mode 100644 gdb/testsuite/gdb.perf/Makefile.in

diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index 3b0b5c7..8bdda9e 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -1003,6 +1003,14 @@ check: force
 	  $(MAKE) $(TARGET_FLAGS_TO_PASS) check; \
 	else true; fi
 
+check-perf: force
+	@if [ -f testsuite/Makefile ]; then \
+	  rootme=`pwd`; export rootme; \
+	  rootsrc=`cd $(srcdir); pwd`; export rootsrc; \
+	  cd testsuite; \
+	  $(MAKE) $(TARGET_FLAGS_TO_PASS) check-perf; \
+	else true; fi
+
 # The idea is to parallelize testing of multilibs, for example:
 #   make -j3 check//sh-hms-sim/{-m1,-m2,-m3,-m3e,-m4}/{,-nofpu}
 # will run 3 concurrent sessions of check, eventually testing all 10
diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
index a7b3d5c..287f445 100644
--- a/gdb/testsuite/Makefile.in
+++ b/gdb/testsuite/Makefile.in
@@ -187,6 +187,10 @@ check-gdb.base%: all $(abs_builddir)/site.exp
 	@if test ! -d gdb.base$*; then mkdir gdb.base$*; fi
 	$(DO_RUNTEST) $(BASE$*_FILES) --outdir gdb.base$* $(RUNTESTFLAGS)
 
+check-perf: all $(abs_builddir)/site.exp
+	@if test ! -d gdb.perf; then mkdir gdb.perf; fi
+	$(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf $(RUNTESTFLAGS) GDB_PERFORMANCE=yes
+
 subdir_do: force
 	@for i in $(DODIRS); do \
 		if [ -d ./$$i ] ; then \
diff --git a/gdb/testsuite/configure b/gdb/testsuite/configure
index a40c144..da590f3 100755
--- a/gdb/testsuite/configure
+++ b/gdb/testsuite/configure
@@ -3448,7 +3448,7 @@ done
 
 
 
-ac_config_files="$ac_config_files Makefile gdb.ada/Makefile gdb.arch/Makefile gdb.asm/Makefile gdb.base/Makefile gdb.btrace/Makefile gdb.cell/Makefile gdb.cp/Makefile gdb.disasm/Makefile gdb.dwarf2/Makefile gdb.fortran/Makefile gdb.go/Makefile gdb.server/Makefile gdb.java/Makefile gdb.hp/Makefile gdb.hp/gdb.objdbg/Makefile gdb.hp/gdb.base-hp/Makefile gdb.hp/gdb.aCC/Makefile gdb.hp/gdb.compat/Makefile gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile"
+ac_config_files="$ac_config_files Makefile gdb.ada/Makefile gdb.arch/Makefile gdb.asm/Makefile gdb.base/Makefile gdb.btrace/Makefile gdb.cell/Makefile gdb.cp/Makefile gdb.disasm/Makefile gdb.dwarf2/Makefile gdb.fortran/Makefile gdb.go/Makefile gdb.server/Makefile gdb.java/Makefile gdb.hp/Makefile gdb.hp/gdb.objdbg/Makefile gdb.hp/gdb.base-hp/Makefile gdb.hp/gdb.aCC/Makefile gdb.hp/gdb.compat/Makefile gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile gdb.perf/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile"
 
 cat >confcache <<\_ACEOF
 # This file is a shell script that caches the results of configure
@@ -4176,6 +4176,7 @@ do
     "gdb.opencl/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.opencl/Makefile" ;;
     "gdb.opt/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.opt/Makefile" ;;
     "gdb.pascal/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.pascal/Makefile" ;;
+    "gdb.perf/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.perf/Makefile" ;;
     "gdb.python/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.python/Makefile" ;;
     "gdb.reverse/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.reverse/Makefile" ;;
     "gdb.stabs/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.stabs/Makefile" ;;
diff --git a/gdb/testsuite/configure.ac b/gdb/testsuite/configure.ac
index 9e07021..94f96cc 100644
--- a/gdb/testsuite/configure.ac
+++ b/gdb/testsuite/configure.ac
@@ -97,5 +97,5 @@ AC_OUTPUT([Makefile \
   gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile \
   gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile \
   gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile \
-  gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
+  gdb.perf/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
   gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile])
diff --git a/gdb/testsuite/gdb.perf/Makefile.in b/gdb/testsuite/gdb.perf/Makefile.in
new file mode 100644
index 0000000..2071d12
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/Makefile.in
@@ -0,0 +1,15 @@
+VPATH = @srcdir@
+srcdir = @srcdir@
+
+.PHONY: all clean mostlyclean distclean realclean
+
+PROGS = 
+
+all info install-info dvi install uninstall installcheck check:
+	@echo "Nothing to be done for $@..."
+
+clean mostlyclean:
+	-rm -f *.o *.diff *~ core $(PROGS)
+
+distclean maintainer-clean realclean: clean
+	-rm -f Makefile config.status config.log gdb.log gdb.sum
-- 
1.7.7.6


* Re: [PATCH 3/3] Test on solib load and unload
  2013-09-25 14:27 ` [PATCH 3/3] Test on solib load and unload Yao Qi
@ 2013-09-27 14:09   ` Gary Benson
  2013-09-27 15:16     ` Yao Qi
  0 siblings, 1 reply; 14+ messages in thread
From: Gary Benson @ 2013-09-27 14:09 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

Hi Yao,

Yao Qi wrote:

> This patch adds a test case on the performance of GDB
> handling load and unload of shared libraries.

Thanks for writing this.  I ran it on the setup I had for testing the
probes code, and it does pick up on the differences between probes
and non-probes.

> +void
> +do_test (int number)
> +{
> +  void **handles;
> +  char libname[40];
> +  int i;
> +
> +  handles = malloc (sizeof (void *) * number);
> +
> +  for (i = 0; i < number; i++)
> +    {
> +      sprintf (libname, "solib-lib%d", i);
> +      handles[i] = dlopen (libname, RTLD_LAZY);
> +      if (handles[i] == NULL)
> +	{
> +	  printf ("ERROR on dlopen %s\n", libname);
> +	  return;
> +	}
> +    }
> +
> +  /* Unload shared libraries in different orders.  */
> +#ifndef SOLIB_DLCLOSE_REVERSED_ORDER
> +  for (i = 0; i < number; i++)
> +#else
> +  for (i = number - 1; i >= 0; i--)
> +#endif
> +    dlclose (handles[i]);
> +
> +  free (handles);
> +}

I'm not sure how important this is, but this test is profiling both
load and unload.  It's theoretically possible that a change could be
made that improved one phase while degrading the other, so it would
be nice to see separate timings if that's not too hard to implement.
For the avoidance of doubt I don't consider the absence of separate
timings a blocker.

Thanks,
Gary

-- 
http://gbenson.net/


* Re: [PATCH 3/3] Test on solib load and unload
  2013-09-27 14:09   ` Gary Benson
@ 2013-09-27 15:16     ` Yao Qi
  2013-09-27 15:50       ` Gary Benson
  0 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-09-27 15:16 UTC (permalink / raw)
  To: gdb-patches

On 09/27/2013 09:59 PM, Gary Benson wrote:
> I'm not sure how important this is, but this test is profiling both
> load and unload.  It's theoretically possible that a change could be
> made that improved one phase while degrading the other, so it would
> be nice to see separate timings if that's not too hard to implement.
> For the avoidance of doubt I don't consider the absence of separate
> timings a blocker.

Gary,
It shouldn't be hard to implement.  Probably we can split class
SolibLoadUnload into two classes, one to measure load and the other
to measure unload.
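
For instance (a rough sketch only, assuming do_test in solib.c were
split into separate load and unload helpers; the names do_test_load
and do_test_unload are illustrative):

    class SolibLoad(perftest.SingleStatisticTestCase):
        # __init__ as in SolibLoadUnload, storing solib_number.
        def execute_test(self):
            num = self.solib_number
            while num > 0:
                # Hypothetical helper that only dlopens the libraries.
                do_test_command = "call do_test_load (%d)" % num
                self.measure.measure(lambda: gdb.execute(do_test_command),
                                     num)
                num = num / 2

A SolibUnload class would do the same around a do_test_unload helper.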

-- 
Yao (齐尧)


* Re: [PATCH 3/3] Test on solib load and unload
  2013-09-27 15:16     ` Yao Qi
@ 2013-09-27 15:50       ` Gary Benson
  0 siblings, 0 replies; 14+ messages in thread
From: Gary Benson @ 2013-09-27 15:50 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

Yao Qi wrote:
> On 09/27/2013 09:59 PM, Gary Benson wrote:
> > I'm not sure how important this is, but this test is profiling both
> > load and unload.  It's theoretically possible that a change could be
> > made that improved one phase while degrading the other, so it would
> > be nice to see separate timings if that's not too hard to implement.
> > For the avoidance of doubt I don't consider the absence of separate
> > timings a blocker.
> 
> It shouldn't be hard to implement.  Probably we can split class
> SolibLoadUnload into two classes, one to measure load and the other
> to measure unload.

Awesome :)

Cheers,
Gary

-- 
http://gbenson.net/


* Re: [PATCH 0/3 V2] GDB Performance testing
  2013-09-25 14:27 [PATCH 0/3 V2] GDB Performance testing Yao Qi
                   ` (2 preceding siblings ...)
  2013-09-25 14:27 ` [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
@ 2013-10-06  1:44 ` Yao Qi
  3 siblings, 0 replies; 14+ messages in thread
From: Yao Qi @ 2013-10-06  1:44 UTC (permalink / raw)
  To: gdb-patches

On 09/25/2013 10:26 PM, Yao Qi wrote:
> Here are some points V2 doesn't address, and I'd like to revisit them
> and consider them in the next iteration.
>
>   - Utilities to generate source files.  We need them to generate source
>     files to compose a reasonably sized program for perf testing.  However,
>     current test case doesn't require these utilities.
>   - Generated files, including source files, object files and executables
>     are not removed when the test is done, because the next run may skip
>     compilation step.
>
> The basic usages of performance testing are unchanged, like
>
> $ make check-perf
> $ make check-perf RUNTESTFLAGS="--target_board=native-gdbserver solib.exp"
> $ make check-perf RUNTESTFLAGS="solib.exp SOLIB_COUNT=1024"
>
> We can skip compilation step like this,
>
> $ make check-perf RUNTESTFLAGS='solib.exp GDB_PERFORMANCE_SKIP_COMPILE=yes'
>
> By default, compilation is not skipped.
>
> The skeleton of .exp is like this:
>
> load_lib perftest.exp
>
> if [skip_perf_tests] {
>      return 0
> }
>
> PerfTest::assemble {
>      compile {
>
>        PerfTest::compiled
>      }
>      setup_gdb {
>      }
> } {
>      run {
>      }
> }
>
> PerfTest::assemble is inspired by Dwarf::assemble.  It has three
> procedures, compile, setup_gdb and run.  Each test should fill in
> the body of these three procedures for its purpose.  If variable
> GDB_PERFORMANCE_SKIP_COMPILE exists, the procedure compile is not
> invoked.  If the test case uses pre-compiled program, it can be
> written like this:
>
> PerfTest::assemble {
>      setup_gdb {
>      }
> } {
>      run {
>      }
> }

Ping.  https://sourceware.org/ml/gdb-patches/2013-09/msg00894.html

-- 
Yao (齐尧)


* Re: [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf
  2013-09-25 14:27 ` [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
@ 2013-10-09  5:13   ` Doug Evans
  2013-10-10  0:29     ` Yao Qi
  0 siblings, 1 reply; 14+ messages in thread
From: Doug Evans @ 2013-10-09  5:13 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

On Wed, Sep 25, 2013 at 7:26 AM, Yao Qi <yao@codesourcery.com> wrote:
> We add a new dir gdb.perf in testsuite for all performance tests.
> However, current 'make check' logic will either run dejagnu in
> directory testsuite or iterate all gdb.* directories which have *.exp
> files.  Both of them will run tests in gdb.perf.  We want to achieve:
>
>  1) typical 'make check' should not run performance tests.  In each perf
>     test case, GDB_PERFORMANCE is checked.  If it doesn't exist, return.
>  2) run perf tests easily.  We add a new makefile target 'check-perf'.
>
> V2 is simpler than V1, since we don't have to filter out gdb.perf
> directory.
>
> gdb:
>
> 2013-09-25  Yao Qi  <yao@codesourcery.com>
>
>         * Makefile.in (check-perf): New target.
>
> gdb/testsuite:
>
> 2013-09-25  Yao Qi  <yao@codesourcery.com>
>
>         * Makefile.in (check-perf): New target.
>         * configure.ac (AC_OUTPUT): Output Makefile in gdb.perf.
>         * configure: Re-generated.
>         * gdb.perf/Makefile.in: New.
> ---
>  gdb/Makefile.in                    |    8 ++++++++
>  gdb/testsuite/Makefile.in          |    4 ++++
>  gdb/testsuite/configure            |    3 ++-
>  gdb/testsuite/configure.ac         |    2 +-
>  gdb/testsuite/gdb.perf/Makefile.in |   15 +++++++++++++++
>  5 files changed, 30 insertions(+), 2 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.perf/Makefile.in
>
> diff --git a/gdb/Makefile.in b/gdb/Makefile.in
> index 3b0b5c7..8bdda9e 100644
> --- a/gdb/Makefile.in
> +++ b/gdb/Makefile.in
> @@ -1003,6 +1003,14 @@ check: force
>           $(MAKE) $(TARGET_FLAGS_TO_PASS) check; \
>         else true; fi
>
> +check-perf: force
> +       @if [ -f testsuite/Makefile ]; then \
> +         rootme=`pwd`; export rootme; \
> +         rootsrc=`cd $(srcdir); pwd`; export rootsrc; \
> +         cd testsuite; \
> +         $(MAKE) $(TARGET_FLAGS_TO_PASS) check-perf; \
> +       else true; fi
> +
>  # The idea is to parallelize testing of multilibs, for example:
>  #   make -j3 check//sh-hms-sim/{-m1,-m2,-m3,-m3e,-m4}/{,-nofpu}
>  # will run 3 concurrent sessions of check, eventually testing all 10
> diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
> index a7b3d5c..287f445 100644
> --- a/gdb/testsuite/Makefile.in
> +++ b/gdb/testsuite/Makefile.in
> @@ -187,6 +187,10 @@ check-gdb.base%: all $(abs_builddir)/site.exp
>         @if test ! -d gdb.base$*; then mkdir gdb.base$*; fi
>         $(DO_RUNTEST) $(BASE$*_FILES) --outdir gdb.base$* $(RUNTESTFLAGS)
>
> +check-perf: all $(abs_builddir)/site.exp
> +       @if test ! -d gdb.perf; then mkdir gdb.perf; fi

I realize this is cut-n-paste-n-tweaking of existing code;
I'm just curious under what scenario the mkdir is needed (except for a
broken build tree).

> +       $(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf $(RUNTESTFLAGS) GDB_PERFORMANCE=yes

I don't know if it'll ever be useful, but if GDB_PERFORMANCE=yes
appears before RUNTESTFLAGS, then it can be overridden on the command
line.

> +
>  subdir_do: force
>         @for i in $(DODIRS); do \
>                 if [ -d ./$$i ] ; then \
> diff --git a/gdb/testsuite/configure.ac b/gdb/testsuite/configure.ac
> index 9e07021..94f96cc 100644
> --- a/gdb/testsuite/configure.ac
> +++ b/gdb/testsuite/configure.ac
> @@ -97,5 +97,5 @@ AC_OUTPUT([Makefile \
>    gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile \
>    gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile \
>    gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile \
> -  gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
> +  gdb.perf/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
>    gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile])
> diff --git a/gdb/testsuite/gdb.perf/Makefile.in b/gdb/testsuite/gdb.perf/Makefile.in
> new file mode 100644
> index 0000000..2071d12
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/Makefile.in
> @@ -0,0 +1,15 @@
> +VPATH = @srcdir@
> +srcdir = @srcdir@
> +
> +.PHONY: all clean mostlyclean distclean realclean
> +
> +PROGS =
> +
> +all info install-info dvi install uninstall installcheck check:
> +       @echo "Nothing to be done for $@..."
> +
> +clean mostlyclean:
> +       -rm -f *.o *.diff *~ core $(PROGS)
> +
> +distclean maintainer-clean realclean: clean
> +       -rm -f Makefile config.status config.log gdb.log gdb.sum
> --
> 1.7.7.6

Hi.
This part is ok with me.

I was thinking there is one time when IWBN to run the tests in parallel:
If I make a change to the test harness, I may want to run all the
tests in some reduced-size mode to verify I haven't broken anything.
I won't care what the perf results are - I'll just want to know that
things still work.  :-)  This can be left for later though.


* Re: [PATCH 2/3] Perf test framework
  2013-09-25 14:27 ` [PATCH 2/3] Perf test framework Yao Qi
@ 2013-10-09  6:37   ` Doug Evans
  2013-10-10  3:04     ` Yao Qi
  0 siblings, 1 reply; 14+ messages in thread
From: Doug Evans @ 2013-10-09  6:37 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

On Wed, Sep 25, 2013 at 7:26 AM, Yao Qi <yao@codesourcery.com> wrote:
> This patch adds a basic framework to do performance testing for GDB.
> perftest.py is about the test case, testresult.py is about test
> results and how they are saved.  reporter.py is about how results
> are reported (in what format).  measure.py is about measuring the
> execution of tests by a collection of measurements.

Hi.
Some comments inline.

> In V2, there are several changes to address Doug and Sanimir's
> comments.
>
>  - Add copyright header and docstring in perftest/__init__.py
>  - Remove config.py.
>  - Fix docstring format.
>  - Rename classes "SingleVariable" to "SingleStatistic".
>  - Don't extend gdb.Function in class TestCase.  Add a new method run
>    to run the test case so that we can pass parameters to the test.
>  - Allow customizing whether to warm up and whether to append to the
>    test log.
>  - Move time measurement into test harness.  Add a new class
>    Measurement for a specific measurement and a new class Measure to
>    measure them for a given test case.

I'm not sure what your plans are, but another thing we need to measure
is memory usage.
There are other things too, but time (both cpu and wall) and memory
usage are a good start.
Most basic performance tests might want all three (e.g., why run a
test three times, it's all good data).
How will MultiStatistic tests work?
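
(Purely as an illustration of how another measurement could plug into
the same Measurement interface, not something this patch provides: a
memory measurement might sample GDB's resident set size on Linux, e.g.

    class MeasurementMemory(Measurement):
        # Hypothetical: measure the growth of GDB's RSS, Linux only.
        # Requires 'import os' at the top of measure.py.
        def __init__(self, result):
            super(MeasurementMemory, self).__init__(result)
            self.start_rss = 0

        def _rss_bytes(self):
            # The second field of /proc/self/statm is the resident
            # size in pages.
            with open("/proc/self/statm") as f:
                pages = int(f.read().split()[1])
            return pages * os.sysconf("SC_PAGE_SIZE")

        def start(self, id):
            self.start_rss = self._rss_bytes()

        def stop(self, id):
            self.result.record(id, self._rss_bytes() - self.start_rss)

        def decorate_name(self, name):
            return name + " memory"

plus a corresponding id and create_instance entry.)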

>  - A new class ResultFactory to create instances of TestResult.
>  - New file lib/perftest.exp, which does some preparation and cleanup
>    to simplify each *.exp file.
>  - Skip compilation step if GDB_PERFORMANCE_SKIP_COMPILE exists.

For my uses, I need to be able to do the compilation and skip running
the tests (the tests will be run in a separate step).
Just a suggestion, but how about GDB_PERFORMANCE=yes|compile|run
[or whatever spellings work - just trying to reduce the number of variables]

> gdb/testsuite/
>
>         * lib/perftest.exp: New.
>         * gdb.perf/lib/perftest/__init__.py: New.
>         * gdb.perf/lib/perftest/measure.py: New.
>         * gdb.perf/lib/perftest/perftest.py: New.
>         * gdb.perf/lib/perftest/reporter.py: New.
>         * gdb.perf/lib/perftest/testresult.py: New.
> ---
>  gdb/testsuite/gdb.perf/lib/perftest/__init__.py   |   17 +++
>  gdb/testsuite/gdb.perf/lib/perftest/measure.py    |  113 +++++++++++++++++++
>  gdb/testsuite/gdb.perf/lib/perftest/perftest.py   |   71 ++++++++++++
>  gdb/testsuite/gdb.perf/lib/perftest/reporter.py   |   68 ++++++++++++
>  gdb/testsuite/gdb.perf/lib/perftest/testresult.py |   59 ++++++++++
>  gdb/testsuite/lib/perftest.exp                    |  121 +++++++++++++++++++++
>  6 files changed, 449 insertions(+), 0 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/__init__.py
>  create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/measure.py
>  create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/perftest.py
>  create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/reporter.py
>  create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/testresult.py
>  create mode 100644 gdb/testsuite/lib/perftest.exp
>
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/__init__.py b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
> new file mode 100644
> index 0000000..7739a3e
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
> @@ -0,0 +1,17 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +"""
> +GDB performance testing framework.
> +"""
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/measure.py b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
> new file mode 100644
> index 0000000..8bede59
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/measure.py
> @@ -0,0 +1,113 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +import time
> +
> +class Measure(object):
> +    """
> +    A class that measure and collect the interesting data for a given
> +    testcase.

style nit: the first line of a doc string is on one line.
ref: http://www.python.org/dev/peps/pep-0257/

> +
> +    An instance of Measure has a collection of measurements, and each
> +    of them is to measure a given aspect, such as time and memory.
> +
> +    """
> +
> +    def __init__(self, measurements, result_factory):
> +        """Constructor of measure.
> +
> +        measurements is a collection of 'enum's, which are about the
> +        id of measurements.  A collection of measurements are created
> +        according to the ids in argument measurements.
> +
> +        """
> +        self.ms = []
> +        for m in measurements:
> +            self.ms.append(Measurement.create_instance(m, result_factory))
> +
> +    def measure(self, func, id):
> +        """Measure the operations done by func with a collection of measurements."""
> +        for m in self.ms:
> +            m.start(id)
> +
> +        func()
> +
> +        for m in self.ms:
> +            m.stop(id)
> +
> +    def report(self, reporter, name):
> +        """Report the measured results."""
> +        for m in self.ms:
> +            m.report(reporter, name)
> +
> +class Measurement(object):
> +    """A measurement for a certain aspect."""
> +
> +    # enum of each measurement.
> +    TIME = 1
> +
> +    @classmethod
> +    def create_instance(cls, id, result_factory):
> +        """A factory method to create an instance of Measurement subclass
> +        according to id."""
> +        if id == Measurement.TIME:
> +            return MeasurementTime(result_factory.create_result())
> +        else:
> +            raise RuntimeError("Unknown type of measurement.")

I don't know factories that well, but IWBN if every measurement kind
didn't need an entry here.
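
(One possible shape, only a sketch: keep a class-level registry that
each Measurement subclass is added to once, so create_instance needs
no per-kind branch, e.g.

    _registry = {}

    @classmethod
    def register(cls, id, subclass):
        cls._registry[id] = subclass

    @classmethod
    def create_instance(cls, id, result_factory):
        if id not in cls._registry:
            raise RuntimeError("Unknown type of measurement.")
        return cls._registry[id](result_factory.create_result())

with each concrete measurement registered once at module level, e.g.
Measurement.register(Measurement.TIME, MeasurementTime).)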

> +
> +    def __init__(self, result):
> +        """Constructor of Measurement.
> +
> +        Attribute result is the TestResult associated with measurement.
> +
> +        """
> +        self.result = result
> +
> +    def start(self, id):
> +        """Abstract method to start the measurement."""
> +        raise NotImplementedError("Abstract Method:start")
> +
> +    def stop(self, id):
> +        """Abstract method to stop the measurement.
> +
> +        When the measurement is stopped, we've got something, and
> +        record them in result.
> +
> +        """
> +        raise NotImplementedError("Abstract Method:stop.")
> +
> +    def decorate_name(self, name):
> +        """Decorate the name when report."""
> +        return name
> +
> +    def report(self, reporter, name):
> +        """Report the measured data by argument reporter."""
> +        self.result.report(reporter, self.decorate_name(name))
> +
> +class MeasurementTime(Measurement):

"Time" is ambiguous.  wall time or cpu time?
I'd rename this to make it explicit what kind is being measured (in
the class name and its output).
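Something like this rough sketch, say (the class names are purely
illustrative, and note that on POSIX time.clock() is processor time while
time.time() is wall time):

class MeasurementWallTime(Measurement):
    """Measurement of wall time."""
    def start(self, id):
        self.start_time = time.time()

    def stop(self, id):
        self.result.record(id, time.time() - self.start_time)

    def decorate_name(self, name):
        return name + " wall time"

class MeasurementCpuTime(Measurement):
    """Measurement of cpu time."""
    def start(self, id):
        self.start_time = time.clock()

    def stop(self, id):
        self.result.record(id, time.clock() - self.start_time)

    def decorate_name(self, name):
        return name + " cpu time"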

> +    """Measurement on time."""
> +    def __init__(self, result):
> +        super(MeasurementTime, self).__init__(result)
> +        self.start_time = 0
> +
> +    def start(self, id):
> +        self.start_time = time.clock()
> +
> +    def stop(self, id):
> +        elapsed_time = time.clock() - self.start_time
> +        self.result.record (id, elapsed_time)
> +
> +    def decorate_name(self, name):
> +        return name + " time"
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> new file mode 100644
> index 0000000..bd81cf0
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> @@ -0,0 +1,71 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +import testresult
> +import reporter
> +from measure import Measure
> +
> +class TestCase(object):

PerfTestCase?

> +    """
> +    Base class of all performance testing cases.
> +
> +    Each sub-class should override methods execute_test, in which
> +    several GDB operations are performed and measured by attribute
> +    measure.  Sub-class can also override method warm_up optionally
> +    if the test case needs warm up.
> +
> +    """
> +
> +    def __init__(self, name, measure):
> +        """
> +        Constructor of TestCase.
> +
> +        Construct an instance of TestCase with a name and a measure
> +        which is to measure the test by several different measurements.
> +

It's odd to have a blank line here.  I couldn't find anything
definitive in pep257 though.

> +        """

IIUC, pep257 says have a blank line here.

> +        self.name = name
> +        self.measure = measure
> +
> +    def execute_test(self):
> +        """Abstract method to do the actual tests."""
> +        raise NotImplementedError("Abstract Method.")
> +
> +    def warm_up(self):
> +        """Do some operations to warm up the environment."""
> +
> +    def run(self, warm_up=True, append=True):
> +        """
> +        Run this test case.
> +
> +        It is a template method that invokes methods warm_up and
> +        execute_test, and finally reports the measured results.
> +        If parameter warm_up is True, run method warm_up.  If parameter
> +        append is True, the test result will be appended instead of
> +        overwritten.
> +
> +        """
> +        if warm_up:
> +            self.warm_up()
> +
> +        self.execute_test()
> +        self.measure.report(reporter.TextReporter(append), self.name)
> +
> +class SingleStatisticTestCase(TestCase):
> +    """Test case with a single statistic."""
> +
> +    def __init__(self, measures, name):
> +        super (SingleStatisticTestCase, self).__init__ (name, Measure(measures,
> +                                                                      testresult.SingleStatisticResultFactory()))
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/reporter.py b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> new file mode 100644
> index 0000000..7da86cf
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> @@ -0,0 +1,68 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +class Reporter(object):
> +    """
> +    Base class of reporters that report test results in a certain format.
> +
> +    Each subclass, which is specific to a report format, should override
> +    methods report, start and end.
> +
> +    """
> +
> +    def __init__(self, append):
> +        """Constructor of Reporter.
> +
> +        Attribute append determines whether to append to or overwrite
> +        the log file.
> +
> +        """
> +        self.append = append
> +
> +    def report(self, *args):
> +        raise NotImplementedError("Abstract Method:report.")
> +
> +    def start(self):
> +        """Invoked when reporting is started."""
> +        raise NotImplementedError("Abstract Method:start.")
> +
> +    def end(self):
> +        """Invoked when reporting is done.
> +
> +        It must be overridden to do some cleanups, such as closing file
> +        descriptors.
> +
> +        """
> +        raise NotImplementedError("Abstract Method:end.")
> +
> +class TextReporter(Reporter):
> +    """Report results in a plain text file 'perftest.log'."""
> +
> +    def __init__(self, append):
> +        super (TextReporter, self).__init__(append)
> +        self.txt_log = None
> +
> +    def report(self, *args):
> +        self.txt_log.write(' '.join(str(arg) for arg in args))
> +        self.txt_log.write('\n')
> +
> +    def start(self):
> +        if self.append:
> +            self.txt_log = open ("perftest.log", 'a+')
> +        else:
> +            self.txt_log = open ("perftest.log", 'w')
> +
> +    def end(self):
> +        self.txt_log.close ()
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/testresult.py b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> new file mode 100644
> index 0000000..51a066f
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> @@ -0,0 +1,59 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +class TestResult(object):
> +    """
> +    Base class to record and report test results.
> +
> +    Method record records the results of a test case, and method
> +    report reports the recorded results via a given reporter.
> +
> +    """
> +
> +    def record(self, parameter, result):
> +        raise NotImplementedError("Abstract Method:record.")
> +
> +    def report(self, reporter, name):
> +        """Report the test results by reporter."""
> +        raise NotImplementedError("Abstract Method:report.")
> +
> +class SingleStatisticTestResult(TestResult):
> +    """Test results for the test case with a single statistic."""
> +
> +    def __init__(self):
> +        super (SingleStatisticTestResult, self).__init__ ()
> +        self.results = dict ()
> +
> +    def record(self, parameter, result):
> +        self.results[parameter] = result
> +
> +    def report(self, reporter, name):
> +        reporter.start()
> +        for key in sorted(self.results.iterkeys()):
> +            reporter.report(name, key, self.results[key])
> +        reporter.end()
> +
> +class ResultFactory(object):
> +    """A factory to create an instance of TestResult."""
> +
> +    def create_result(self):
> +        """Create an instance of TestResult."""
> +        raise NotImplementedError("Abstract Method:create_result.")
> +
> +class SingleStatisticResultFactory(ResultFactory):
> +    """A factory to create an instance of SingleStatisticTestResult."""
> +
> +    def create_result(self):
> +        return SingleStatisticTestResult()
> diff --git a/gdb/testsuite/lib/perftest.exp b/gdb/testsuite/lib/perftest.exp
> new file mode 100644
> index 0000000..3c04989
> --- /dev/null
> +++ b/gdb/testsuite/lib/perftest.exp
> @@ -0,0 +1,121 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +namespace eval PerfTest {
> +    # The name of python file on build.
> +    variable remote_python_file
> +
> +    # Whether the source files compiled successfully.
> +    variable compiled_ok
> +
> +    # A private method to set up GDB for performance testing.
> +    proc _setup_gdb {} {
> +       variable remote_python_file
> +       global srcdir subdir testfile
> +
> +       set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
> +
> +       # Set sys.path for module perftest.
> +       gdb_test_no_output "python import os, sys"
> +       gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
> +       gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
> +    }
> +
> +    # A private method to do some cleanups when the performance test
> +    # is finished.
> +    proc _teardown {} {
> +       variable remote_python_file
> +
> +       remote_file host delete $remote_python_file
> +    }
> +
> +    # Compile the source files of the test case.  BODY is the tcl code
> +    # that does the actual compilation; it should invoke 'PerfTest::compiled'
> +    # if compilation is successful.
> +    proc compile {body} {
> +       global GDB_PERFORMANCE_SKIP_COMPILE
> +
> +       if ![info exists GDB_PERFORMANCE_SKIP_COMPILE] {
> +           variable compiled_ok
> +
> +           set compiled_ok 0
> +           uplevel 2 $body
> +
> +           if {!$compiled_ok} {
> +               untested "Could not compile source files."
> +               return -1
> +           }
> +       }
> +    }
> +
> +    # Mark that the compilation finished successfully.
> +    proc compiled {} {
> +       variable compiled_ok
> +
> +       set compiled_ok 1
> +    }
> +
> +    # Start GDB.
> +    proc setup_gdb {body} {
> +       variable compiled_ok
> +
> +       if {!$compiled_ok} {
> +           return
> +       }
> +
> +       uplevel 2 $body
> +    }
> +
> +    # Run the performance test.
> +    proc run {body} {
> +       global timeout
> +
> +       set timeout 3000

Probably want to parameterize the timeout.

> +       uplevel 2 $body
> +    }
> +
> +    # The top-level interface to PerfTest.
> +    # PREPARE is the tcl code to prepare for the performance testing,
> +    # including generating and compiling source files, starting up GDB.
> +    # RUN is the tcl code to drive GDB to do some operations.
> +    proc assemble {prepare run} {

According to this comment "prepare" combines compiling source files
with starting up gdb.
Can we separate them?
OTOH, there's the call to _setup_gdb after the "eval $prepare".
It's a bit confusing.

> +       variable compiled_ok
> +
> +       set compiled_ok 1
> +       eval $prepare
> +
> +       if {!$compiled_ok} {
> +           return
> +       }
> +
> +       _setup_gdb
> +
> +       eval $run
> +
> +       _teardown
> +    }
> +}
> +
> +# Return true if performance tests should be skipped.
> +
> +proc skip_perf_tests { } {
> +    global GDB_PERFORMANCE
> +
> +    if [info exists GDB_PERFORMANCE] {
> +       return 0
> +    }
> +
> +    return 1
> +}
> --
> 1.7.7.6
>

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf
  2013-10-09  5:13   ` Doug Evans
@ 2013-10-10  0:29     ` Yao Qi
  2013-10-15 17:14       ` Doug Evans
  0 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-10-10  0:29 UTC (permalink / raw)
  To: Doug Evans; +Cc: gdb-patches

On 10/09/2013 01:13 PM, Doug Evans wrote:
> I realize this is cut-n-paste-n-tweaking existing code,
> I'm just curious under what scenario the mkdir is needed (except for a
> broken build tree).
>

These dirs (testsuite/gdb.*) are there after configure and make.

>> >+       $(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf $(RUNTESTFLAGS) GDB_PERFORMANCE=yes
> I don't know if it'll ever be useful, but if GDB_PERFORMANCE=yes
> appears before RUNTESTFLAGS, then it can be overridden on the command
> line.
>

We should have the flexibility to set GDB_PERFORMANCE=yes|compile|run in 
the command line.

>
> I was thinking there is one time when IWBN to run the tests in parallel:
> If I make a change to the test harness, I may want to run all the
> tests in some reduced-size mode to verify I haven't broken anything.
> I won't care what the perf results are - I'll just want to know that
> things still work.:-)   This can be left for later though.

Or we can add some test cases to perf test harness, and they can be 
started by 'make check' too.

-- 
Yao (齐尧)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/3] Perf test framework
  2013-10-09  6:37   ` Doug Evans
@ 2013-10-10  3:04     ` Yao Qi
  2013-10-15 20:14       ` Doug Evans
  0 siblings, 1 reply; 14+ messages in thread
From: Yao Qi @ 2013-10-10  3:04 UTC (permalink / raw)
  To: Doug Evans; +Cc: gdb-patches

On 10/09/2013 02:37 PM, Doug Evans wrote:
> I'm not sure what your plans are, but another thing we need to measure
> is memory usage.

I plan to add memory measurement later.  Current framework supports
multiple measurements in a single test case.

class SolibLoadUnload(perftest.SingleStatisticTestCase):
    def __init__(self, solib_number):
        # We want to measure time in this test.
        super (SolibLoadUnload, self).__init__ ([measure.Measurement.TIME],
                                                "solib")


Once we want to measure memory usage, we can modify it like:

class SolibLoadUnload(perftest.SingleStatisticTestCase):
    def __init__(self, solib_number):
        # We want to measure time in this test.
        super (SolibLoadUnload, self).__init__ ([measure.Measurement.TIME,
                                                 measure.Measurement.MEMORY],
                                                "solib")
> There are other things too, but time (both cpu and wall) and memory
> usage are a good start.
> Most basic performance tests might want all three (e.g., why run a
> test three times, it's all good data).

All data needed by each measurement can be collected in one run.

> How will MultiStatistic tests work?

All our test cases are SingleStatistic because they are quite simple,
so we haven't thought carefully about how MultiStatistic tests would work.
I'd like to consider this when we really need a MultiStatistic test case.

> 
>> >  - A new class ResultFactory to create instances of TestResult.
>> >  - New file lib/perftest.exp, which is to do some preparations and
>> >    cleanups to simplify each *.exp file.
>> >  - Skip compilation step if GDB_PERFORMANCE_SKIP_COMPILE exists.
> For my uses, I need to be able to do the compilation and skip running
> the tests (the tests will be run in a separate step).
> Just a suggestion, but how about GDB_PERFORMANCE=yes|compile|run
> [or whatever spellings work - just trying to reduce the number of variables]
> 

That makes sense to me.

>> >+
>> >+class Measurement(object):
>> >+    """A measurement for a certain aspect."""
>> >+
>> >+    # enum of each measurement.
>> >+    TIME = 1
>> >+
>> >+    @classmethod
>> >+    def create_instance(cls, id, result_factory):
>> >+        """A factory method to create an instance of Measurement subclass
>> >+        according to id."""
>> >+        if id == Measurement.TIME:
>> >+            return MeasurementTime(result_factory.create_result())
>> >+        else:
>> >+            raise RuntimeError("Unknown type of measurement.")
> I don't know factories that well, but IWBN if every measurement kind
> didn't need an entry here.
> 

In each test case, the Measurement enum values are specified in a
collection, like:

[measure.Measurement.CPU_TIME, measure.Measurement.WALL_TIME,
measure.Measurement.MEMORY]

The factory method create_instance then creates the corresponding
Measurement subclass instances, which looks reasonable to me.

Without this factory method, each test case would have to create a
collection of Measurement instances on its own.

>> >+
>> >+class MeasurementTime(Measurement):
> "Time" is ambiguous.  wall time or cpu time?
> I'd rename this to make it explicit what kind is being measured (in
> the class name and its output).
> 

Agreed.

>> >+
>> >+import testresult
>> >+import reporter
>> >+from measure import Measure
>> >+
>> >+class TestCase(object):
> PerfTestCase?
> 

I'd like to leave "TestCase" as-is because it is already in package
perftest.

>> >+    """
>> >+    Base class of all performance testing cases.
>> >+
>> >+    Each sub-class should override methods execute_test, in which
>> >+    several GDB operations are performed and measured by attribute
>> >+    measure.  Sub-class can also override method warm_up optionally
>> >+    if the test case needs warm up.
>> >+
>> >+    """
>> >+
>> >+    def __init__(self, name, measure):
>> >+        """
>> >+        Constructor of TestCase.
>> >+
>> >+        Construct an instance of TestCase with a name and a measure
>> >+        which is to measure the test by several different measurements.
>> >+
> It's odd to have a blank line here.  I couldn't find anything
> definitive in pep257 though.
> 
>> >+        """
> IIUC, pep257 says have a blank line here.
> 

I was confused by this sentence "Docstrings documenting functions or
methods generally don't have this requirement, unless the function or
method's body is written as a number of blank-line separated sections
-- in this case, treat the docstring as another section, and precede it
with a blank line".

and follow the example in pep-0257

def complex(real=0.0, imag=0.0):
    """Form a complex number.

    Keyword arguments:
    real -- the real part (default 0.0)
    imag -- the imaginary part (default 0.0)

    """
    if imag == 0.0 and real == 0.0: return complex_zero
    ...

>> >+
>> >+    # Run the performance test.
>> >+    proc run {body} {
>> >+       global timeout
>> >+
>> >+       set timeout 3000
> Probably want to parameterize the timeout.
> 

OK.

>> >+       uplevel 2 $body
>> >+    }
>> >+
>> >+    # The top-level interface to PerfTest.
>> >+    # PREPARE is the tcl code to prepare for the performance testing,
>> >+    # including generating and compiling source files, starting up GDB.
>> >+    # RUN is the tcl code to drive GDB to do some operations.
>> >+    proc assemble {prepare run} {
> According to this comment "prepare" combines compiling source files
> with starting up gdb.
> Can we separate them?

OK.

> OTOH, there's the call to _setup_gdb after the "eval $prepare".
> It's a bit confusing.
> 

What is confusing?  The name "_setup_gdb"?  $prepare is to compile the
sources and start up gdb, while proc _setup_gdb is to set up the perf test
in a running gdb.  How about renaming "_setup_gdb" to "_setup" or
"_setup_perftest"?

-- 
Yao (齐尧)

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf
  2013-10-10  0:29     ` Yao Qi
@ 2013-10-15 17:14       ` Doug Evans
  0 siblings, 0 replies; 14+ messages in thread
From: Doug Evans @ 2013-10-15 17:14 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

Yao Qi writes:
 > > I was thinking there is one time when IWBN to run the tests in parallel:
 > > If I make a change to the test harness, I may want to run all the
 > > tests in some reduced-size mode to verify I haven't broken anything.
 > > I won't care what the perf results are - I'll just want to know that
 > > things still work.:-)   This can be left for later though.
 > 
 > Or we can add some test cases to perf test harness, and they can be 
 > started by 'make check' too.

Sure.
Still, IWBN to be able to run the real tests to do regression checks.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/3] Perf test framework
  2013-10-10  3:04     ` Yao Qi
@ 2013-10-15 20:14       ` Doug Evans
  0 siblings, 0 replies; 14+ messages in thread
From: Doug Evans @ 2013-10-15 20:14 UTC (permalink / raw)
  To: Yao Qi; +Cc: gdb-patches

Yao Qi writes:
 > On 10/09/2013 02:37 PM, Doug Evans wrote:
 > > I'm not sure what your plans are, but another thing we need to measure
 > > is memory usage.
 > 
 > I plan to add memory measurement later.  Current framework supports
 > multiple measurements in a single test case.
 > 
 > class SolibLoadUnload(perftest.SingleStatisticTestCase):
 >     def __init__(self, solib_number):
 >         # We want to measure time in this test.
 >         super (SolibLoadUnload, self).__init__ ([measure.Measurement.TIME],
 >                                                 "solib")
 > 
 > 
 > Once we want to measure memory usage, we can modify it like:
 > 
 > class SolibLoadUnload(perftest.SingleStatisticTestCase):
 >     def __init__(self, solib_number):
 >         # We want to measure time in this test.
 >         super (SolibLoadUnload, self).__init__ ([measure.Measurement.TIME,
 >                                                  measure.Measurement.MEMORY],
 >                                                 "solib")

SingleStatisticTestCase seems out of place now.
MultiStatisticTestCase?
OTOH, if the set of statistics is passed as a list, do
we need to distinguish Single vs Multi here?
[We may need to distinguish them elsewhere still.]
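E.g., a rough sketch, reusing the constructor from the posted patch (the
name TestCaseWithMeasurements is only a placeholder):

class TestCaseWithMeasurements(TestCase):
    """One test case class, however many statistics are measured."""

    def __init__(self, measurements, name):
        # The result factory may still need a Single/Multi distinction.
        super(TestCaseWithMeasurements, self).__init__(
            name, Measure(measurements,
                          testresult.SingleStatisticResultFactory()))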

 > > There are other things too, but time (both cpu and wall) and memory
 > > usage are a good start.
 > > Most basic performance tests might want all three (e.g., why run a
 > > test three times, it's all good data).
 > 
 > All data needed by each measurement can be collected in one run.

Awesome.

 > > How will MultiStatistic tests work?
 > 
 > All our test cases are SingleStatistic because they are quite simple,
 > so we haven't thought carefully about how MultiStatistic tests would work.
 > I'd like to consider this when we really need a MultiStatistic test case.

We don't need to implement the Multi case in the first pass,
but we do need it today IMO.
In my perf testing/monitoring, anytime I've wanted any of cpu/wall time
and memory usage I've found it very useful to have all the others as well.
And since we're only going to run the tests once (instead of, e.g., once
for time and once for memory), I'd say let's just plan for collecting
all three anytime we want any one of them.
[Well, if someone *really* only wants to collect one statistic, ok,
but I wouldn't be surprised if I submit patches to add the others.]
Depending on the test case there are other statistics as well
that I've found useful, but they're more test-specific.

 > >> >+
 > >> >+class Measurement(object):
 > >> >+    """A measurement for a certain aspect."""
 > >> >+
 > >> >+    # enum of each measurement.
 > >> >+    TIME = 1
 > >> >+
 > >> >+    @classmethod
 > >> >+    def create_instance(cls, id, result_factory):
 > >> >+        """A factory method to create an instance of Measurement subclass
 > >> >+        according to id."""
 > >> >+        if id == Measurement.TIME:
 > >> >+            return MeasurementTime(result_factory.create_result())
 > >> >+        else:
 > >> >+            raise RuntimeError("Unknown type of measurement.")
 > > I don't know factories that well, but IWBN if every measurement kind
 > > didn't need an entry here.
 > > 
 > 
 > In each test case, the Measurement enum values are specified in a
 > collection, like:
 > 
 > [measure.Measurement.CPU_TIME, measure.Measurement.WALL_TIME,
 > measure.Measurement.MEMORY]
 > 
 > The factory method create_instance then creates the corresponding
 > Measurement subclass instances, which looks reasonable to me.
 > 
 > Without this factory method, each test case would have to create a
 > collection of Measurement instances on its own.

measure.Measurement.CPU_TIME is already a lot of characters.
If one replaced that with, say, a call to construct a measurement object,
what is lost?
[That's an honest question ... I don't know the answer.]
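
Concretely, something like this sketch (the measurement class names are
hypothetical):

factory = testresult.SingleStatisticResultFactory()
measurements = [MeasurementCpuTime(factory.create_result()),
                MeasurementWallTime(factory.create_result())]

The main difference I can see is that the test case then has to pair each
measurement with its result object, which Measurement.create_instance
currently does for it.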

Or, if using a factory is the way to go, can new measurements register
themselves with the factory, and then have them be requested by name
(as a string)?
[The problem I'm wondering about, and maybe it's not a problem, is
having to edit a central location every time a new measurement is added.]
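
Roughly along these lines, as a sketch only (the register method and the
"wall_time" string are made up for illustration):

import time

class Measurement(object):
    """Base class; subclasses register themselves by name."""

    _registry = {}

    def __init__(self, result):
        self.result = result

    @classmethod
    def register(cls, name, subclass):
        cls._registry[name] = subclass

    @classmethod
    def create_instance(cls, name, result_factory):
        # Look the measurement up by name; no central if/else to edit.
        try:
            return cls._registry[name](result_factory.create_result())
        except KeyError:
            raise RuntimeError("Unknown type of measurement: %s" % name)

class MeasurementWallTime(Measurement):
    def start(self, id):
        self.start_time = time.time()

    def stop(self, id):
        self.result.record(id, time.time() - self.start_time)

Measurement.register("wall_time", MeasurementWallTime)

A test case would then ask for measurements by name, e.g. ["wall_time"],
instead of by enum value.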

 > >> >+
 > >> >+import testresult
 > >> >+import reporter
 > >> >+from measure import Measure
 > >> >+
 > >> >+class TestCase(object):
 > > PerfTestCase?
 > > 
 > 
 > I'd like to leave "TestCase" as-is because it is already in package
 > perftest.

Ah.

 > >> >+    """
 > >> >+    Base class of all performance testing cases.
 > >> >+
 > >> >+    Each sub-class should override methods execute_test, in which
 > >> >+    several GDB operations are performed and measured by attribute
 > >> >+    measure.  Sub-class can also override method warm_up optionally
 > >> >+    if the test case needs warm up.
 > >> >+
 > >> >+    """
 > >> >+
 > >> >+    def __init__(self, name, measure):
 > >> >+        """
 > >> >+        Constructor of TestCase.
 > >> >+
 > >> >+        Construct an instance of TestCase with a name and a measure
 > >> >+        which is to measure the test by several different measurements.
 > >> >+
 > > It's odd to have a blank line here.  I couldn't find anything
 > > definitive in pep257 though.
 > > 
 > >> >+        """
 > > IIUC, pep257 says have a blank line here.
 > > 
 > 
 > I was confused by this sentence "Docstrings documenting functions or
 > methods generally don't have this requirement, unless the function or
 > method's body is written as a number of blank-line separated sections
 > -- in this case, treat the docstring as another section, and precede it
 > with a blank line".
 > 
 > and follow the example in pep-0257
 > 
 > def complex(real=0.0, imag=0.0):
 >     """Form a complex number.
 > 
 >     Keyword arguments:
 >     real -- the real part (default 0.0)
 >     imag -- the imaginary part (default 0.0)
 > 
 >     """
 >     if imag == 0.0 and real == 0.0: return complex_zero
 >     ...
 > 

That example is odd IMO.
[The stated reasons are so that emacs's fill-paragraph works,
but I'm not sure it's a convention one is expected to always follow.]

 > >> >+       uplevel 2 $body
 > >> >+    }
 > >> >+
 > >> >+    # The top-level interface to PerfTest.
 > >> >+    # PREPARE is the tcl code to prepare for the performance testing,
 > >> >+    # including generating and compiling source files, starting up GDB.
 > >> >+    # RUN is the tcl code to drive GDB to do some operations.
 > >> >+    proc assemble {prepare run} {
 > > According to this comment "prepare" combines compiling source files
 > > with starting up gdb.
 > > Can we separate them?
 > 
 > OK.
 > 
 > > OTOH, there's the call to _setup_gdb after the "eval $prepare".
 > > It's a bit confusing.
 > > 
 > 
 > What is confusing?  The name "_setup_gdb"?  $prepare is to compile the
 > sources and start up gdb, while proc _setup_gdb is to set up the perf test
 > in a running gdb.  How about renaming "_setup_gdb" to "_setup" or
 > "_setup_perftest"?

"setup" and "prepare" are essentially synonymous,
thus it's confusing to appear to be doing the same thing twice.
The names used need to better distinguish what each is doing.

_setup_perftest is a definite improvement.

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2013-10-15 20:14 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-09-25 14:27 [PATCH 0/3 V2] GDB Performance testing Yao Qi
2013-09-25 14:27 ` [PATCH 2/3] Perf test framework Yao Qi
2013-10-09  6:37   ` Doug Evans
2013-10-10  3:04     ` Yao Qi
2013-10-15 20:14       ` Doug Evans
2013-09-25 14:27 ` [PATCH 3/3] Test on solib load and unload Yao Qi
2013-09-27 14:09   ` Gary Benson
2013-09-27 15:16     ` Yao Qi
2013-09-27 15:50       ` Gary Benson
2013-09-25 14:27 ` [PATCH 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-10-09  5:13   ` Doug Evans
2013-10-10  0:29     ` Yao Qi
2013-10-15 17:14       ` Doug Evans
2013-10-06  1:44 ` [PATCH 0/3 V2] GDB Performance testing Yao Qi
