* [RFC] GDB performance testing infrastructure
@ 2013-08-14 13:01 Yao Qi
2013-08-21 20:39 ` Tom Tromey
` (2 more replies)
0 siblings, 3 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-14 13:01 UTC (permalink / raw)
To: gdb-patches
Hi,
Here is a proposal of GDB performance testing infrastructure.
We'd like to know what people think about this, especially:
1) What performance issues this infrastructure can test or
handle,
2) What does this infrastructure look like? What can it do
and what can't it do?
I've written some micro-benchmarks, and run them in this
infrastructure prototype. The results look reasonable and
interesting.
Table of Contents
_________________
1 Motivation and Goals
.. 1.1 Goals
2 Known works
3 Design
.. 3.1 Requirements
.. 3.2 Design
4 Example
.. 4.1 single step
.. 4.2 shared library
1 Motivation and Goals
======================
The GDB development process has no standard mechanism to show whether
the performance of a GDB snapshot or release has improved or worsened.
We run regression tests, which address only questions of
functionality, yet performance regressions do show up periodically.
We really need performance testing in GDB development, especially in
the following areas, to make sure no performance regression is
introduced during development.
* Remote debugging.  It is slower to read from the remote target, and
worse, GDB reads the same memory regions multiple times, or reads
consecutive memory with multiple packets.
* Symbols.  Some of the performance problems in GDB are related to
symbols.  When GDB is used to debug large real-life programs such as
LibreOffice, which have a huge number of symbols, it is a challenge
for GDB to organize them in an efficient way.  Some bugs are reported
in bugzilla, such as [PR15412] and [PR14125], and the issues are
documented on the [wiki].
* Shared library.  When a program needs a large number of shared
libraries, GDB is slow.  Gary improved the performance in this area,
but there is still an open bug on scalability ([PR15590]).
* Tracepoint.  Tracepoints are designed to collect data in the
inferior efficiently, so we need performance tests to guarantee that
tracepoints stay efficient enough.  Note that we have a test,
`gdb.trace/tspeed.exp', but there is still some room to improve.
[PR15412] http://sourceware.org/bugzilla/show_bug.cgi?id=15412
[PR14125] http://sourceware.org/bugzilla/show_bug.cgi?id=14125
[wiki] http://sourceware.org/gdb/wiki/SymbolHandling
[PR15590] http://sourceware.org/bugzilla/show_bug.cgi?id=15590
1.1 Goals
~~~~~~~~~
The goals in this project are:
1. Collect performance data of GDB in various areas under different
supported configurations.  These areas include single stepping,
thread-specific breakpoints, stack backtraces, symbol lookup, shared
library load/unload, etc.  Configurations include native debugging
and remote debugging with GDBserver.  This framework includes some
micro-benchmarks and utilities to record the performance data, such
as the execution time and memory usage of each micro-benchmark.
2. Detect performance regressions.  We collect the performance data
of each micro-benchmark, and we need to detect or identify
performance regressions by comparing with the previous run.  It is
more powerful when associated with continuous testing.
2 Known works
=============
* [LNT] It was written for LLVM, but is *designed* to be usable for
the performance testing of any software.  It is written in Python,
well-documented and easy to set up.  LNT spawns the compiler first
and then the target program, and records the time usage of the
compiler and the target program in JSON format.  No interaction is
involved.  The performance data collection in LNT is relatively
simple, because it targets a compiler.  The performance testing part
is done, and the next step is to show the data and detect
performance regressions.  LNT does a lot of work here.  The
performance data in JSON format can be imported into a database and
shown through the [web] interface.  Performance regressions are
highlighted in red.
* [lldb] LLDB has a [performance.py] to measure the speed and memory
usage of LLDB.  It captures internal events, feeds in some events,
and records the time usage.  It handles interactions by consuming
debugging events and taking actions accordingly.  It only collects
performance data; it doesn't detect performance regressions.
* libstdc++-v3.  There is a directory `performance' in
libstdc++-v3/testsuite/ and a header testsuite_performance.h in
testsuite/util/.  Test cases are compiled with the header and run
with some large data set, to calculate the time usage.  It is
suitable for the performance testing of a library.
[LNT] http://llvm.org/docs/lnt/index.html
[web] http://llvm.org/perf/db_default/v4/nts/recent_activity
[lldb] http://lldb.llvm.org/
[performance.py]
http://llvm.org/viewvc/llvm-project/lldb/trunk/examples/python/performance.py
3 Design
========
3.1 Requirements
~~~~~~~~~~~~~~~~
+ Drive GDB to do some operations and record the performance data,
especially for these cases:
* Libraries are loaded or unloaded in a program that has a large
number of shared libraries (4096 libraries, for example),
* Look up a symbol in a program that has a large number of symbols
(1 million, for example),
* Do single step, disassembly or other operations in remote
debugging.
+ Both native debugging and remote debugging are supported.
+ Display the performance data in some format, plain text or html.
+ Detect performance regressions.  In functional regression testing,
we can simply diff two `gdb.sum' files to find the regressions or
progressions.  In performance testing, we need to analyze the
performance data of two runs to find regressions, instead of simply
comparing them with diff.
+ Highlight regressions.  It makes sense to show only regressions or
progressions greater than a certain threshold (5%, for example).
The first three requirements are the minimum set, and can be met in
the short term.  Our ultimate goal is to keep track of the
performance of GDB and improve its performance in some areas, rather
than to develop a full-featured performance testing framework.  In
the long term, we can improve the framework gradually and meet the
last two requirements.
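To make the last two requirements concrete, here is a minimal sketch
of what regression detection with a 5% threshold could look like.
The data format (a mapping from test name to elapsed seconds) and the
function name are hypothetical simplifications, not part of the
proposal:

```python
# Sketch: compare two performance runs and flag relative changes
# above a threshold.  The name -> seconds format is hypothetical.

def compare_runs(baseline, current, threshold=0.05):
    """Return (name, relative-change) pairs whose change exceeds
    THRESHOLD.  A positive change means the current run is slower."""
    findings = []
    for name, base_time in sorted(baseline.items()):
        if name not in current or base_time == 0:
            continue
        change = (current[name] - base_time) / base_time
        if abs(change) > threshold:
            findings.append((name, change))
    return findings

baseline = {"single-step-300": 0.19, "solib-128": 0.53}
current  = {"single-step-300": 0.25, "solib-128": 0.52}
for name, change in compare_runs(baseline, current):
    print("%s: %+.1f%%" % (name, change * 100))
```

With continuous testing, the baseline would be the previous run and
any flagged entry would be highlighted in the report.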
3.2 Design
~~~~~~~~~~
+ Use `dejagnu' to invoke the compiler to compile the test case and
start GDB (and/or GDBserver).  This is the same as the functional
regression testing we do nowadays.  We choose `dejagnu' here because
it handles GDB testing, especially when GDBserver is used, very
well.  We don't have to re-invent the wheel in Python.
+ GDB loads a Python script, in which some operations are performed
and performance data (time and memory usage) is collected into a
file.  The performance test is driven by Python, because GDB has a
good Python binding now.  We can also use Python to collect the
performance data, process it, and draw graphs, which is very
convenient.
+ Emulate the effect of a large program, instead of using a real
large program.  Performance problems show up when the program is
*large* enough, in terms of the number of symbols or shared
libraries.  Using a real large program can trigger the problem, but
it is hard for other people to reproduce.  Tests of this kind can be
run regularly.
1. When we test the performance of GDB handling shared libraries, we
can use an .exp script to generate a large number of C files,
compile them into shared libraries, and let the main executable load
these libraries in order to measure the performance.
2. When we test the performance of GDB reading in symbols and
looking up symbols, we can either fake a lot of debug information in
the executable or fake a lot of `objfile', `symtab' and `symbol'
entries in GDB.  We may extend `jit.c' to add symbols on the fly:
`jit.c' is able to add `objfile' and `symtab' to GDB from an
external reader, and we can factor this part out to add `objfile',
`symtab' and `symbol' to GDB for performance testing purposes.
However, I may be wrong.
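To show the shape the second design point could take, here is a
plain-Python sketch of factoring the common timing and logging code
into a base class.  All class and method names are illustrative, and
the gdb-specific parts (registering the convenience function,
gdb.execute) are replaced by stand-ins so the sketch runs outside
GDB:

```python
import time

class TestCase(object):
    """Illustrative base class: times named operations and keeps
    the results as data rather than printing them immediately."""
    def __init__(self, name):
        self.name = name
        self.results = []          # list of (label, elapsed-seconds)

    def measure(self, label, func):
        start = time.time()
        func()
        self.results.append((label, time.time() - start))

    def execute_test(self):
        raise NotImplementedError

    def run(self):
        self.execute_test()
        return self.results

class SingleStep(TestCase):
    """In the real test each batch would issue N 'stepi' commands."""
    def __init__(self, batches):
        super(SingleStep, self).__init__("single-step")
        self.batches = batches

    def execute_test(self):
        for n in self.batches:
            # Stand-in for: for _ in range(n): gdb.execute("stepi")
            self.measure("single step %d" % n, lambda: None)

print(SingleStep([300, 600]).run())
```

Individual benchmarks then only override execute_test, and the
framework owns measurement and reporting.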
4 Example
=========
4.1 single step
~~~~~~~~~~~~~~~
For the micro-benchmark `single-step', there are three source files,
`single-step.c', `single-step.py' and `single-step.exp'.
`single-step.exp' is similar to our regression tests in the
`gdb.python' directory:
,----
| if ![runto_main] {
| return -1
| }
|
| set remote_python_file [remote_download host ${srcdir}/${subdir}/${testfile}.py]
|
| gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
|
| send_gdb "call \$perftest()\n"
| set timeout 300
| gdb_expect {
| -re "\"Done\".*${gdb_prompt} $" {
| }
| timeout {}
| }
|
| remote_file host delete ${remote_python_file}
`----
`single-step.py' drives GDB to execute the `stepi' command repeatedly
and records the time usage.  Note that class `SingleStep' could be
abstracted better, for example by moving common code to a class
`TestCase' and extending it in class `SingleStep'.
,----
| import gdb
| import time
|
| class SingleStep (gdb.Function):
| def __init__(self):
| # Each test has to register a convenience function 'perftest'.
| super (SingleStep, self).__init__ ("perftest")
|
| def execute_test(self):
| test_log = open ("perftest.log", 'a+')
|
| # Execute command 'stepi' a number of times, and record the
| # time usage.
| for i in range(1, 5):
| start_time = time.clock()
| for j in range(0, i * 300):
| gdb.execute ("stepi")
| elapsed_time = time.clock() - start_time
| print >>test_log, 'single step %d in %s' % (i * 300, elapsed_time)
|
| test_log.close ()
| def invoke(self):
| self.execute_test()
| return "Done"
|
| SingleStep ()
`----
* Run `single-step' with GDBserver
,----
| $ make check RUNTESTFLAGS='--target_board=native-gdbserver single-step.exp'
`----
and the resulting `perftest.log' looks like this; each row is the
time usage for doing a certain number of `stepi' commands:
,----
| single step 300 in 0.19
| single step 600 in 0.35
| single step 900 in 0.57
| single step 1200 in 0.75
`----
* Run `single-step' without GDBserver
,----
| $ make check RUNTESTFLAGS='--target_board=unix single-step.exp'
`----
and the resulting `perftest.log' looks like this:
,----
| single step 300 in 0.06
| single step 600 in 0.08
| single step 900 in 0.14
| single step 1200 in 0.18
`----
4.2 shared library
~~~~~~~~~~~~~~~~~~
For the micro-benchmark `solib', which tests the performance of GDB
handling shared library load and unload, there are three source
files, `solib.c', `solib.py' and `solib.exp'.
`solib.exp' generates many C files and compiles them into shared
libraries.  `solib.c' is the main program, which loads these
libraries dynamically.  `solib.py' is a Python script that calls
some inferior functions to load the libraries and measures the time
usage.
Here is the performance data; each row is the time usage of handling
the load and unload of a certain number of shared libraries.  We can
use this data to track the performance of GDB on handling shared
libraries.
,----
| solib 128 in 0.53
| solib 256 in 1.94
| solib 512 in 8.31
| solib 1024 in 47.34
| solib 2048 in 384.75
`----
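The growth in this table is clearly superlinear: each doubling of the
library count multiplies the time by roughly 3.7x to 8x rather than
2x, which is exactly the kind of scalability issue [PR15590]
describes.  Checking the ratios from the data above:

```python
# Growth ratios computed from the solib figures quoted above.
data = [(128, 0.53), (256, 1.94), (512, 8.31),
        (1024, 47.34), (2048, 384.75)]

for (n1, t1), (n2, t2) in zip(data, data[1:]):
    print("%4d -> %4d libraries: %.1fx slower" % (n1, n2, t2 / t1))
```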
--
Yao (齐尧)
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC] GDB performance testing infrastructure
2013-08-14 13:01 [RFC] GDB performance testing infrastructure Yao Qi
@ 2013-08-21 20:39 ` Tom Tromey
2013-08-27 6:21 ` Yao Qi
2013-08-27 13:49 ` Agovic, Sanimir
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2 siblings, 1 reply; 40+ messages in thread
From: Tom Tromey @ 2013-08-21 20:39 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
>>>>> "Yao" == Yao Qi <yao@codesourcery.com> writes:
Yao> Here is a proposal of GDB performance testing infrastructure.
Yao> We'd like to know how people think about this, especially on,
Yao> 1) What performance issues this infrastructure can test or
Yao> handle,
Yao> 2) What does this infrastructure look like? What it can do
Yao> and what it can't do.
I think this looks good. I have a few questions and whatnot, nothing
serious.
Yao> + GDB loads a python script, in which some operations are performed and
Yao> performance data (time and memory usage) is collected into a file.
Yao> The performance test is driven by python, because GDB has a good
Yao> python binding now. We can use python too to collect performance
Yao> data, process them and draw graph, which is very convenient.
I wonder whether there are cases where the needed API isn't readily
exposed to Python.
I suppose that is motivation to add them though :-)
Yao> 2. When we test the performance of GDB reading symbols in and
Yao> looking for symbols, we can either fake a lot of debug
Yao> information in the executable or fake a lot of `objfile',
Yao> `symtab' and `symbol' in GDB. we may extend `jit.c' to add
Yao> symbols on the fly. `jit.c' is able to add `objfile' and
Yao> `symtab' to GDB from external reader. We can factor this part to
Yao> add `objfile', `symtab', and `symbol' to GDB for the performance
Yao> testing purpose. However, I may be wrong.
I tend to think it is better to go through the normal symbol reading
paths. The JIT code does things specially; and performance testing that
may not show improvements or regressions in "ordinary" uses.
Yao> * Run `single-step' with GDBserver
Yao> ,----
Yao> | $ make check RUNTESTFLAGS='--target_board=native-gdbserver single-step.exp'
Do you anticipate that these tests will be run by default?
One concern I have is that if we generate truly large test cases, then
running the test suite could become quite painful. Also, it seems that
performance tests are best run on a quiet system -- so running them by
default may in general not yield worthwhile data.
Yao> Here is the performance data, and each row is about the time usage of
Yao> handling loading and unloading a certain number of shared libraries.
Yao> We can use this data to track the performance of GDB on handling
Yao> shared libraries.
Yao> ,----
Yao> | solib 128 in 0.53
Yao> | solib 256 in 1.94
Yao> | solib 512 in 8.31
Yao> | solib 1024 in 47.34
Yao> | solib 2048 in 384.75
Yao> `----
Perhaps the .py code can deliver Python objects to some test harness
rather than just printing data free-form? Then we can emit the data in
more easily manipulated forms.
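One possible shape for that, sketched below with hypothetical names:
the test records measurements into an object, and separate reporters
decide whether to emit the current free-form text or something
structured like JSON:

```python
import json

class TestResult(object):
    """Hypothetical container: measurements are kept as data and
    formatted only when a report is requested."""
    def __init__(self, name):
        self.name = name
        self.measurements = []        # list of (label, seconds)

    def record(self, label, seconds):
        self.measurements.append((label, seconds))

    def report_text(self):
        # Matches the free-form lines in perftest.log above.
        return "\n".join("%s %s in %s" % (self.name, label, seconds)
                         for label, seconds in self.measurements)

    def report_json(self):
        return json.dumps({"name": self.name,
                           "measurements": self.measurements})

result = TestResult("solib")
result.record(128, 0.53)
result.record(256, 1.94)
print(result.report_text())
```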
Tom
* Re: [RFC] GDB performance testing infrastructure
2013-08-21 20:39 ` Tom Tromey
@ 2013-08-27 6:21 ` Yao Qi
0 siblings, 0 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-27 6:21 UTC (permalink / raw)
To: Tom Tromey; +Cc: gdb-patches
On 08/22/2013 04:38 AM, Tom Tromey wrote:
> Yao> + GDB loads a python script, in which some operations are performed and
> Yao> performance data (time and memory usage) is collected into a file.
> Yao> The performance test is driven by python, because GDB has a good
> Yao> python binding now. We can use python too to collect performance
> Yao> data, process them and draw graph, which is very convenient.
>
> I wonder whether there are cases where the needed API isn't readily
> exposed to Python.
>
> I suppose that is motivation to add them though :-)
Right, as we write more and more test cases, we do need more python
APIs for different components in GDB.
>
> Yao> 2. When we test the performance of GDB reading symbols in and
> Yao> looking for symbols, we can either fake a lot of debug
> Yao> information in the executable or fake a lot of `objfile',
> Yao> `symtab' and `symbol' in GDB. we may extend `jit.c' to add
> Yao> symbols on the fly. `jit.c' is able to add `objfile' and
> Yao> `symtab' to GDB from external reader. We can factor this part to
> Yao> add `objfile', `symtab', and `symbol' to GDB for the performance
> Yao> testing purpose. However, I may be wrong.
>
> I tend to think it is better to go through the normal symbol reading
> paths. The JIT code does things specially; and performance testing that
> may not show improvements or regressions in "ordinary" uses.
>
I am OK with this approach.  On each machine where the performance
testing is deployed, people have to find some large executables with
debug info, and track the performance of GDB loading them and
searching for some symbols.
> Yao> * Run `single-step' with GDBserver
> Yao> ,----
> Yao> | $ make check RUNTESTFLAGS='--target_board=native-gdbserver single-step.exp'
>
> Do you anticipate that these tests will be run by default?
>
No.
> One concern I have is that if we generate truly large test cases, then
> running the test suite could become quite painful. Also, it seems that
> performance tests are best run on a quiet system -- so running them by
> default may in general not yield worthwhile data.
I plan to add a new makefile target 'check-perf' to run all performance
testing cases.
>
> Yao> Here is the performance data, and each row is about the time usage of
> Yao> handling loading and unloading a certain number of shared libraries.
> Yao> We can use this data to track the performance of GDB on handling
> Yao> shared libraries.
>
> Yao> ,----
> Yao> | solib 128 in 0.53
> Yao> | solib 256 in 1.94
> Yao> | solib 512 in 8.31
> Yao> | solib 1024 in 47.34
> Yao> | solib 2048 in 384.75
> Yao> `----
>
> Perhaps the .py code can deliver Python objects to some test harness
> rather than just printing data free-form? Then we can emit the data in
> more easily manipulated forms.
Agreed.  In my experiments, I save test results in a Python object,
and print them as plain text or in JSON format later.
I'll post patches soon...
--
Yao (齐尧)
* RE: [RFC] GDB performance testing infrastructure
2013-08-14 13:01 [RFC] GDB performance testing infrastructure Yao Qi
2013-08-21 20:39 ` Tom Tromey
@ 2013-08-27 13:49 ` Agovic, Sanimir
2013-08-28 3:04 ` Yao Qi
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2 siblings, 1 reply; 40+ messages in thread
From: Agovic, Sanimir @ 2013-08-27 13:49 UTC (permalink / raw)
To: 'Yao Qi'; +Cc: gdb-patches
Hello Yao,
I like the overall proposal for a "micro" benchmark suite. Some comments below.
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Wednesday, August 14, 2013 03:01 PM
> To: gdb-patches@sourceware.org
> Subject: [RFC] GDB performance testing infrastructure
>
> * Remote debugging. It is slower to read from the remote target, and
> worse, GDB reads the same memory regions multiple times, or reads
> the consecutive memory by multiple packets.
>
Once gdb and gdbserver share most of the target code, the overhead will be
caused by the serial protocol roundtrips. But this will take a while...
> * Tracepoint. Tracepoint is designed to be efficient on collecting
> data in the inferior, so we need performance tests to guarantee that
> tracepoint is still efficient enough.  Note that we have a test,
> `gdb.trace/tspeed.exp', but there is still some room to improve.
>
Afaik the tracepoint functionality is quite separate from gdb and
may be tested in isolation.  Having a generic benchmark framework
covering most parts of gdb is probably _the_ way to go, but I see
some room for specialized benchmarks, e.g. for tracepoints with a
custom driver.  But my knowledge is too vague on the topic.
> 2. Detect performance regressions. We collected the performance data
> of each micro-benchmark, and we need to detect or identify the
> performance regression by comparing with the previous run. It is
> more powerful to associate it with continuous testing.
>
Something really simple, so simple that one could run it silently
with every make invocation.  As a newcomer, it took me some time to
get used to make check, e.g. to set up, run, and interpret the tests
with various settings.  Something simpler would help to run it more
often.
>
> 2 Known works
> =============
>
> * [LNT] It was written for LLVM, but is *designed* to be usable for
> the performance testing of any software. It is written in python,
> well-documented and easy to set up. LNT spawn the compiler first
> and then target program, record the time usages of compiler and
> target program in json format. No interaction is involved. The
> performance data collection in LNT is relatively simple, because it
> is targeted to compiler. The performance testing part is done, and
> the next step is to show the data and detect performance
> regressions. LNT does a lot work here. The performance data in
> json format can be imported to a database, and shown through [web].
> The performance regression will be highlighted in red.
>
> * [lldb] LLDB has a [performance.py] to measure the speed and memory
> usage of LLDB. It captures the internal events, feeds some events
> and record the time usages. It handles interactions by consuming
> debugging events, and take some actions accordingly. It only
> collects performance data, doesn't detect performance regressions.
>
> * libstdc++-v3 There is directory performance in
> libstdc++-v3/testsuite/ and a header testsuite_performance.h in
> testsuite/util/. Test cases are compiled with the header, and run
> with some large data set, to calculate the time usage. It is
> suitable for performance testing for a library.
>
I'd like to add the Machine Interface (MI) to the list, but it is quite rudimentary:
$ gdb -interpreter mi -q debugee
[...]
-enable-timings
^done
(gdb)
-break-insert -f main
^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
[...]
(gdb)
-exec-step
^running
*running,thread-id="1"
(gdb)
*stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
(gdb)
With -enable-timings[1] enabled, every result record has a time
triple appended, even for async[2] ones.  If we come up with a full
MI parser, one could run tests w/o timings.  An MI result is quite
JSON-ish.
(To be honest, I do not know what the timings are composed of =D)
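Since an MI result is indeed quite JSON-ish, pulling the time triple
out of a record takes only a regular expression.  A sketch, matching
the record format shown in the transcript above (the function name is
made up for illustration):

```python
import re

# Extract the time triple that -enable-timings appends to an MI
# result record, in the format shown in the transcript above.
TIME_RE = re.compile(
    r'time=\{wallclock="([0-9.]+)",user="([0-9.]+)",system="([0-9.]+)"\}')

def parse_mi_time(record):
    match = TIME_RE.search(record)
    if match is None:
        return None
    return dict(zip(("wallclock", "user", "system"),
                    (float(g) for g in match.groups())))

record = ('*stopped,reason="end-stepping-range",'
          'time={wallclock="0.19425",user="0.09700",system="0.04200"}')
print(parse_mi_time(record))
```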
In addition there are some tools for plotting benchmark results[3].
[1] http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
[2] https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
[3] http://speed.pypy.org/
-Sanimir
* Re: [RFC] GDB performance testing infrastructure
2013-08-27 13:49 ` Agovic, Sanimir
@ 2013-08-28 3:04 ` Yao Qi
2013-09-19 0:36 ` Doug Evans
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-08-28 3:04 UTC (permalink / raw)
To: Agovic, Sanimir; +Cc: gdb-patches
On 08/27/2013 09:49 PM, Agovic, Sanimir wrote:
>> * Remote debugging. It is slower to read from the remote target, and
>> > worse, GDB reads the same memory regions multiple times, or reads
>> > the consecutive memory by multiple packets.
>> >
> Once gdb and gdbserver share most of the target code, the overhead will be
> caused by the serial protocol roundtrips. But this will take a while...
Sanimir, thanks for your comments!
One of the motivations of the performance testing is to measure the
overhead of the RSP in some scenarios, and to look for opportunities
to improve it, or to add a completely new protocol, which is the
extreme case.  Once the infrastructure is ready, we can write some
tests to see how efficient or inefficient the RSP is.
>
>> > * Tracepoint. Tracepoint is designed to be efficient on collecting
>> > data in the inferior, so we need performance tests to guarantee that
>> > tracepoint is still efficient enough.  Note that we have a test,
>> > `gdb.trace/tspeed.exp', but there is still some room to improve.
>> >
> Afaik the tracepoint functionality is quite separate from gdb and may be tested
> in isolation. Having a generic benchmark framework covering the most parts of
> gdb is probably _the_ way to go but I see some room for specialized benchmarks
> e.g. for tracepoints with a custom driver. But my knowledge is too vague on
> the topic.
>
Well, it is a sort of design trade-off.  We need a framework generic
enough to handle most of the testing requirements for different GDB
modules (such as solib, symbols, backtrace, disassemble, etc.); on
the other hand, we want each test to be specialized for the
corresponding GDB module, so that we can find more details.
I am inclined to handle testing of _all_ modules under this generic
framework.
>> > 2. Detect performance regressions. We collected the performance data
>> > of each micro-benchmark, and we need to detect or identify the
>> > performance regression by comparing with the previous run. It is
>> > more powerful to associate it with continuous testing.
>> >
> Something really simple, so simple that one could run it silently with every
> make invocation. For a newcomer, it took me some time to get used to make
> check e.g. setup, run, and interpret the tests with various settings. Something
> simpler would help to run it more often.
>
Yes, I agree, everything should be simple.  I assume that people
running performance testing are familiar with the regular GDB
regression test, like 'make check'.  We'll provide 'make check-perf'
to run the performance testing, and it doesn't add extra
difficulties on top of 'make check' from the user's point of view,
IMO.
> I like to add the Machine Interface (MI) to the list, but it is quite rudimentary:
>
> $ gdb -interpreter mi -q debugee
> [...]
> -enable-timings
> ^done
> (gdb)
> -break-insert -f main
> ^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
> [...]
> (gdb)
> -exec-step
> ^running
> *running,thread-id="1"
> (gdb)
> *stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
> (gdb)
>
> With -enable-timings[1] enabled, every result record has a time triple
> appended, even for async[2] ones. If we come up with a full mi parser
> one could run tests w/o timings. A mi result is quite json-ish.
Thanks for the input.
>
> (To be honest, I do not know what the timings are composed of =D)
>
> In addition there are some tools for plotting benchmark results[3].
>
> [1]http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
> [2]https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
> [3]http://speed.pypy.org/
I am using speed to track and show the performance data I got from
the GDB performance tests.  It is able to associate performance data
with a commit, so it is easy to find which commit causes a
regression.  However, my impression is that speed and its dependent
packages are not well-maintained nowadays.
After some searching online, I personally like the Chromium
performance tests and their plots.  They are integrated with
buildbot (a customized version).
http://build.chromium.org/f/chromium/perf/dashboard/overview.html
However, as I said in this proposal, let us focus on goal #1 first, get
the framework ready and collect performance data.
--
Yao (齐尧)
* [RFC 3/3] Test on solib load and unload
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
@ 2013-08-28 4:17 ` Yao Qi
2013-08-28 4:27 ` Yao Qi
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-09-19 17:25 ` [RFC 0/3] GDB Performance testing Doug Evans
3 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-08-28 4:17 UTC (permalink / raw)
To: gdb-patches
This patch adds a test case on the performance of GDB handling the
load and unload of shared libraries.
gdb/testsuite/
* gdb.perf/solib.c: New.
* gdb.perf/solib.exp: New.
* gdb.perf/solib.py: New.
---
gdb/testsuite/gdb.perf/solib.c | 79 ++++++++++++++++++++++++++++++++++
gdb/testsuite/gdb.perf/solib.exp | 86 ++++++++++++++++++++++++++++++++++++++
gdb/testsuite/gdb.perf/solib.py | 48 +++++++++++++++++++++
3 files changed, 213 insertions(+), 0 deletions(-)
create mode 100644 gdb/testsuite/gdb.perf/solib.c
create mode 100644 gdb/testsuite/gdb.perf/solib.exp
create mode 100644 gdb/testsuite/gdb.perf/solib.py
diff --git a/gdb/testsuite/gdb.perf/solib.c b/gdb/testsuite/gdb.perf/solib.c
new file mode 100644
index 0000000..948b286
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.c
@@ -0,0 +1,79 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+ Copyright (C) 2013 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>. */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifdef __WIN32__
+#include <windows.h>
+#define dlopen(name, mode) LoadLibrary (TEXT (name))
+#ifdef _WIN32_WCE
+# define dlsym(handle, func) GetProcAddress (handle, TEXT (func))
+#else
+# define dlsym(handle, func) GetProcAddress (handle, func)
+#endif
+#define dlclose(handle) FreeLibrary (handle)
+#else
+#include <dlfcn.h>
+#endif
+
+void
+do_test (int number)
+{
+ void **handles;
+ char libname[40];
+ int i;
+
+ handles = malloc (sizeof (void *) * number);
+
+ for (i = 0; i < number; i++)
+ {
+ sprintf (libname, "solib-lib%d", i);
+ handles[i] = dlopen (libname, RTLD_LAZY);
+ if (handles[i] == NULL)
+ {
+ printf ("ERROR on dlopen %s\n", libname);
+ return;
+ }
+ }
+
+ for (i = 0; i < number; i++)
+ {
+ char funname[20];
+ void *p;
+
+ sprintf (funname, "shr%d", i);
+ p = dlsym (handles[i], funname);
+ }
+
+ for (i = 0; i < number; i++)
+ dlclose (handles[i]);
+
+ free (handles);
+}
+
+static void
+end (void)
+{}
+
+int
+main (void)
+{
+ end ();
+
+ return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
new file mode 100644
index 0000000..8b50968
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.exp
@@ -0,0 +1,86 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# This test case tests the speed of GDB when the shared libraries
+# of the inferior are loaded and unloaded.
+
+standard_testfile .c
+set executable $testfile
+set expfile $testfile.exp
+
+# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'
+if ![info exists SOLIB_NUMBER] {
+ set SOLIB_NUMBER 128
+}
+
+for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
+
+ # Produce source files.
+ set libname "solib-lib$i"
+ set src [standard_temp_file $libname.c]
+ set exe [standard_temp_file $libname]
+
+ set code "int shr$i (void) {return $i;}"
+ set f [open $src "w"]
+ puts $f $code
+ close $f
+
+ # Compile.
+ if { [gdb_compile_shlib $src $exe {debug}] != "" } {
+ untested "Couldn't compile $src."
+ return -1
+ }
+
+ # Delete object files to save some space.
+ file delete [standard_temp_file "solib-lib$i.c.o"]
+}
+
+if { [prepare_for_testing ${testfile}.exp ${binfile} ${srcfile} {debug shlib_load} ] } {
+ return -1
+}
+
+clean_restart $binfile
+
+if ![runto_main] {
+ fail "Can't run to main"
+ return -1
+}
+
+set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
+
+# Set sys.path for module perftest.
+gdb_test_no_output "python import os, sys"
+gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
+
+gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
+
+gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
+
+# Call the convenience function registered by python script.
+send_gdb "call \$perftest()\n"
+gdb_expect 3000 {
+ -re "\"Done\".*${gdb_prompt} $" {
+ }
+ timeout {}
+}
+
+remote_file host delete ${remote_python_file}
+
+# Remove these libraries and source files.
+
+for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
+ file delete "solib-lib$i"
+ file delete "solib-lib$i.c"
+}
diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
new file mode 100644
index 0000000..7cc9c4a
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.py
@@ -0,0 +1,48 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# This test case measures the speed of GDB when the shared libraries
+# of the inferior are loaded and unloaded.
+
+import gdb
+import time
+
+from perftest import perftest
+
+class SolibLoadUnload(perftest.SingleVariableTestCase):
+ def __init__(self, solib_number):
+ super (SolibLoadUnload, self).__init__ ("solib")
+ self.solib_number = solib_number
+
+ def execute_test(self):
+ num = self.solib_number
+ iteration = 5
+
+ # Warm up.
+ do_test_command = "call do_test (%d)" % num
+ gdb.execute (do_test_command)
+ gdb.execute (do_test_command)
+
+ while num > 0 and iteration > 0:
+ do_test_command = "call do_test (%d)" % num
+
+ start_time = time.clock()
+ gdb.execute (do_test_command)
+ elapsed_time = time.clock() - start_time
+
+ self.result.record (num, elapsed_time)
+
+ num = num / 2
+ iteration -= 1
--
1.7.7.6
^ permalink raw reply [flat|nested] 40+ messages in thread
* [RFC 2/3] Perf test framework
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
@ 2013-08-28 4:17 ` Yao Qi
2013-08-28 9:57 ` Agovic, Sanimir
2013-09-19 19:09 ` Doug Evans
2013-08-28 4:17 ` [RFC 3/3] Test on solib load and unload Yao Qi
` (2 subsequent siblings)
3 siblings, 2 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-28 4:17 UTC (permalink / raw)
To: gdb-patches
This patch adds a basic framework for performance testing of GDB.
perftest.py defines the test cases, testresult.py defines the test
results and how they are saved, and reporter.py defines how results
are reported (in what format).
gdb/testsuite/
* gdb.perf/lib/perftest/__init__.py: New.
* gdb.perf/lib/perftest/perftest.py: New.
* gdb.perf/lib/perftest/reporter.py: New.
* gdb.perf/lib/perftest/testresult.py: New.
* gdb.perf/lib/perftest/config.py: New.
---
gdb/testsuite/gdb.perf/lib/perftest/config.py | 40 +++++++++++++++++
gdb/testsuite/gdb.perf/lib/perftest/perftest.py | 49 +++++++++++++++++++++
gdb/testsuite/gdb.perf/lib/perftest/reporter.py | 38 ++++++++++++++++
gdb/testsuite/gdb.perf/lib/perftest/testresult.py | 42 ++++++++++++++++++
4 files changed, 169 insertions(+), 0 deletions(-)
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/__init__.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/config.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/perftest.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/reporter.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/testresult.py
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/__init__.py b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/config.py b/gdb/testsuite/gdb.perf/lib/perftest/config.py
new file mode 100644
index 0000000..db24b16
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/config.py
@@ -0,0 +1,40 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import ConfigParser
+import reporter
+
+class PerfTestConfig(object):
+ """
+ Create the right objects according to file perftest.ini.
+ """
+
+ def __init__(self):
+ self.config = ConfigParser.ConfigParser()
+ self.config.read("perftest.ini")
+
+ def get_reporter(self):
+ """Create an instance of class Reporter which is determined by
+ the option 'type' in section '[Reporter]'."""
+ if not self.config.has_section('Reporter'):
+ return reporter.TextReporter()
+ if not self.config.has_option('Reporter', 'type'):
+ return reporter.TextReporter()
+
+ name = self.config.get('Reporter', 'type')
+ cls = getattr(reporter, name)
+ return cls()
+
+perftestconfig = PerfTestConfig()
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
new file mode 100644
index 0000000..b15fd39
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
@@ -0,0 +1,49 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import gdb
+import testresult
+from config import perftestconfig
+
+class TestCase(gdb.Function):
+ """Base class of all performance testing cases. It registers a GDB
+ convenience function 'perftest'. Invoke this convenience function
+ in GDB will call method 'invoke'."""
+
+ def __init__(self, result):
+ # Each test case registers a convenience function 'perftest'.
+ super (TestCase, self).__init__ ("perftest")
+ self.result = result
+
+ def execute_test(self):
+ """Abstract method to do the actual tests."""
+ raise RuntimeError("Abstract Method.")
+
+ def __report(self, reporter):
+ # Private method to report the testing result by 'reporter'.
+ self.result.report (reporter)
+
+ def invoke(self):
+ """Call method 'execute_test' and '__report'."""
+
+ self.execute_test()
+ self.__report(perftestconfig.get_reporter())
+ return "Done"
+
+class SingleVariableTestCase(TestCase):
+ """Test case with a single variable."""
+
+ def __init__(self, name):
+ super (SingleVariableTestCase, self).__init__ (testresult.SingleVariableTestResult (name))
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/reporter.py b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
new file mode 100644
index 0000000..e27b2ae
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
@@ -0,0 +1,38 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+class Reporter(object):
+ """Base class of reporter, which is about reporting test results in
+ different formatss."""
+
+ def report(self, arg1, arg2, arg3):
+ raise RuntimeError("Abstract Method.")
+
+ def end(self):
+ """Invoked when reporting is done. Usually it can be overridden
+ to do some cleanups, such as closing file descriptors."""
+ raise RuntimeError("Abstract Method:end.")
+
+class TextReporter(Reporter):
+ """Report results in plain text 'perftest.log'."""
+
+ def __init__(self):
+ self.txt_log = open ("perftest.log", 'a+')
+
+ def report(self, arg1, arg2, arg3):
+ print >>self.txt_log, '%s %s %s' % (arg1, arg2, arg3)
+
+ def end(self):
+ self.txt_log.close ()
diff --git a/gdb/testsuite/gdb.perf/lib/perftest/testresult.py b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
new file mode 100644
index 0000000..9912326
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
@@ -0,0 +1,42 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+class TestResult(object):
+ """Base class to record or save test results."""
+
+ def __init__(self, name):
+ self.name = name
+
+ def record (self, variable, result):
+ raise RuntimeError("Abstract Method:record.")
+
+ def report (self, reporter):
+ """Report the test results by reporter."""
+ raise RuntimeError("Abstract Method:report.")
+
+class SingleVariableTestResult(TestResult):
+ """Test results for the test case with a single variable. """
+
+ def __init__(self, name):
+ super (SingleVariableTestResult, self).__init__ (name)
+ self.results = dict ()
+
+ def record(self, variable, result):
+ self.results[variable] = result
+
+ def report(self, reporter):
+ for key in sorted(self.results.iterkeys()):
+ reporter.report (self.name, key, self.results[key])
+ reporter.end ()
--
1.7.7.6
* [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
2013-08-28 4:17 ` [RFC 3/3] Test on solib load and unload Yao Qi
@ 2013-08-28 4:17 ` Yao Qi
2013-08-28 9:40 ` Agovic, Sanimir
` (2 more replies)
2013-09-19 17:25 ` [RFC 0/3] GDB Performance testing Doug Evans
3 siblings, 3 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-28 4:17 UTC (permalink / raw)
To: gdb-patches
When we add performance tests, a typical 'make check' should not run
them. We add a new makefile target 'check-perf' to run the
performance tests only.
We also add a new dir gdb.perf in testsuite for all performance tests.
However, the current 'make check' logic will either run dejagnu in
directory testsuite or iterate over all gdb.* directories which have
*.exp files. Both of them would run the tests in gdb.perf, so we have
to filter gdb.perf out. In makefile target 'check-single', we pass a
list of gdb.* directories except gdb.perf. We also update
$(TEST_DIRS) to filter out gdb.perf, so that tests in gdb.perf are
not run by target check-parallel.
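The filtering described above can be sketched with a self-contained shell demo; the directory layout here is a toy invention, and the real Makefile expresses the same idea with $(wildcard) and $(filter-out):

```shell
# Create a toy layout: three gdb.* dirs, each holding a .exp file.
tmp=$(mktemp -d)
mkdir -p "$tmp/gdb.base" "$tmp/gdb.perf" "$tmp/gdb.python"
touch "$tmp/gdb.base/t.exp" "$tmp/gdb.perf/p.exp" "$tmp/gdb.python/q.exp"

# List gdb.* dirs containing .exp files, then drop gdb.perf,
# mimicking the TEST_SRC_DIRS computation.
(cd "$tmp" && for d in gdb.*/; do
   ls "$d"*.exp >/dev/null 2>&1 && echo "${d%/}"
 done) | grep -v '^gdb\.perf$'

rm -rf "$tmp"
```

Only gdb.base and gdb.python survive the filter, so an ordinary test run never descends into gdb.perf.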
gdb:
2013-08-27 Yao Qi <yao@codesourcery.com>
* Makefile.in (check-perf): New target.
gdb/testsuite:
2013-08-27 Yao Qi <yao@codesourcery.com>
* Makefile.in (TEST_SRC_DIRS): New variable.
(check-single): Pass directories $(TEST_SRC_DIRS) to runtest.
(TEST_DIRS): Use $(TEST_SRC_DIRS).
(check-perf): New target.
* configure.ac (AC_OUTPUT): Output Makefile in gdb.perf.
* configure: Re-generated.
* gdb.perf/Makefile.in: New.
---
gdb/Makefile.in | 8 ++++++++
gdb/testsuite/Makefile.in | 15 ++++++++++++---
gdb/testsuite/configure | 3 ++-
gdb/testsuite/configure.ac | 2 +-
gdb/testsuite/gdb.perf/Makefile.in | 15 +++++++++++++++
5 files changed, 38 insertions(+), 5 deletions(-)
create mode 100644 gdb/testsuite/gdb.perf/Makefile.in
diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index c75ec38..98bcc1a 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -1003,6 +1003,14 @@ check: force
$(MAKE) $(TARGET_FLAGS_TO_PASS) check; \
else true; fi
+check-perf: force
+ @if [ -f testsuite/Makefile ]; then \
+ rootme=`pwd`; export rootme; \
+ rootsrc=`cd $(srcdir); pwd`; export rootsrc; \
+ cd testsuite; \
+ $(MAKE) $(TARGET_FLAGS_TO_PASS) check-perf; \
+ else true; fi
+
# The idea is to parallelize testing of multilibs, for example:
# make -j3 check//sh-hms-sim/{-m1,-m2,-m3,-m3e,-m4}/{,-nofpu}
# will run 3 concurrent sessions of check, eventually testing all 10
diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
index a7b3d5c..34590de 100644
--- a/gdb/testsuite/Makefile.in
+++ b/gdb/testsuite/Makefile.in
@@ -151,13 +151,18 @@ DO_RUNTEST = \
export TCL_LIBRARY ; fi ; \
$(RUNTEST)
+# A list of all directories named "gdb.*" which also hold a .exp file.
+# We filter out gdb.perf because it contains performance testing cases,
+# and we don't want to run them together with other regression tests.
+# They should be run separately by 'make check-perf'.
+TEST_SRC_DIRS = $(filter-out gdb.perf,$(sort $(notdir $(patsubst %/,%,$(dir $(wildcard $(srcdir)/gdb.*/*.exp))))))
+
check-single: all $(abs_builddir)/site.exp
- $(DO_RUNTEST) $(RUNTESTFLAGS)
+ $(DO_RUNTEST) --directory="$(TEST_SRC_DIRS)" $(RUNTESTFLAGS)
-# A list of all directories named "gdb.*" which also hold a .exp file.
# We filter out gdb.base and add fake entries, because that directory
# takes the longest to process, and so we split it in half.
-TEST_DIRS = gdb.base1 gdb.base2 $(filter-out gdb.base,$(sort $(notdir $(patsubst %/,%,$(dir $(wildcard $(srcdir)/gdb.*/*.exp))))))
+TEST_DIRS = gdb.base1 gdb.base2 $(filter-out gdb.base ,$(TEST_SRC_DIRS))
TEST_TARGETS = $(addprefix check-,$(TEST_DIRS))
@@ -187,6 +192,10 @@ check-gdb.base%: all $(abs_builddir)/site.exp
@if test ! -d gdb.base$*; then mkdir gdb.base$*; fi
$(DO_RUNTEST) $(BASE$*_FILES) --outdir gdb.base$* $(RUNTESTFLAGS)
+check-perf: all $(abs_builddir)/site.exp
+ @if test ! -d gdb.perf; then mkdir gdb.perf; fi
+ $(DO_RUNTEST) --directory=gdb.perf --outdir gdb.perf $(RUNTESTFLAGS)
+
subdir_do: force
@for i in $(DODIRS); do \
if [ -d ./$$i ] ; then \
diff --git a/gdb/testsuite/configure b/gdb/testsuite/configure
index a40c144..da590f3 100755
--- a/gdb/testsuite/configure
+++ b/gdb/testsuite/configure
@@ -3448,7 +3448,7 @@ done
-ac_config_files="$ac_config_files Makefile gdb.ada/Makefile gdb.arch/Makefile gdb.asm/Makefile gdb.base/Makefile gdb.btrace/Makefile gdb.cell/Makefile gdb.cp/Makefile gdb.disasm/Makefile gdb.dwarf2/Makefile gdb.fortran/Makefile gdb.go/Makefile gdb.server/Makefile gdb.java/Makefile gdb.hp/Makefile gdb.hp/gdb.objdbg/Makefile gdb.hp/gdb.base-hp/Makefile gdb.hp/gdb.aCC/Makefile gdb.hp/gdb.compat/Makefile gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile"
+ac_config_files="$ac_config_files Makefile gdb.ada/Makefile gdb.arch/Makefile gdb.asm/Makefile gdb.base/Makefile gdb.btrace/Makefile gdb.cell/Makefile gdb.cp/Makefile gdb.disasm/Makefile gdb.dwarf2/Makefile gdb.fortran/Makefile gdb.go/Makefile gdb.server/Makefile gdb.java/Makefile gdb.hp/Makefile gdb.hp/gdb.objdbg/Makefile gdb.hp/gdb.base-hp/Makefile gdb.hp/gdb.aCC/Makefile gdb.hp/gdb.compat/Makefile gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile gdb.perf/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile"
cat >confcache <<\_ACEOF
# This file is a shell script that caches the results of configure
@@ -4176,6 +4176,7 @@ do
"gdb.opencl/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.opencl/Makefile" ;;
"gdb.opt/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.opt/Makefile" ;;
"gdb.pascal/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.pascal/Makefile" ;;
+ "gdb.perf/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.perf/Makefile" ;;
"gdb.python/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.python/Makefile" ;;
"gdb.reverse/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.reverse/Makefile" ;;
"gdb.stabs/Makefile") CONFIG_FILES="$CONFIG_FILES gdb.stabs/Makefile" ;;
diff --git a/gdb/testsuite/configure.ac b/gdb/testsuite/configure.ac
index 9e07021..94f96cc 100644
--- a/gdb/testsuite/configure.ac
+++ b/gdb/testsuite/configure.ac
@@ -97,5 +97,5 @@ AC_OUTPUT([Makefile \
gdb.hp/gdb.defects/Makefile gdb.linespec/Makefile \
gdb.mi/Makefile gdb.modula2/Makefile gdb.multi/Makefile \
gdb.objc/Makefile gdb.opencl/Makefile gdb.opt/Makefile gdb.pascal/Makefile \
- gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
+ gdb.perf/Makefile gdb.python/Makefile gdb.reverse/Makefile gdb.stabs/Makefile \
gdb.threads/Makefile gdb.trace/Makefile gdb.xml/Makefile])
diff --git a/gdb/testsuite/gdb.perf/Makefile.in b/gdb/testsuite/gdb.perf/Makefile.in
new file mode 100644
index 0000000..2071d12
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/Makefile.in
@@ -0,0 +1,15 @@
+VPATH = @srcdir@
+srcdir = @srcdir@
+
+.PHONY: all clean mostlyclean distclean realclean
+
+PROGS =
+
+all info install-info dvi install uninstall installcheck check:
+ @echo "Nothing to be done for $@..."
+
+clean mostlyclean:
+ -rm -f *.o *.diff *~ core $(PROGS)
+
+distclean maintainer-clean realclean: clean
+ -rm -f Makefile config.status config.log gdb.log gdb.sum
--
1.7.7.6
* [RFC 0/3] GDB Performance testing
2013-08-14 13:01 [RFC] GDB performance testing infrastructure Yao Qi
2013-08-21 20:39 ` Tom Tromey
2013-08-27 13:49 ` Agovic, Sanimir
@ 2013-08-28 4:17 ` Yao Qi
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
` (3 more replies)
2 siblings, 4 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-28 4:17 UTC (permalink / raw)
To: gdb-patches
This patch series implements the GDB performance testing infrastructure
according to the design I posted here
<https://sourceware.org/ml/gdb-patches/2013-08/msg00380.html>
Here are some highlights:
- Performance testing can be run via 'make check-perf'
- GDB and GDBserver are started by dejagnu, so the usage of
'make check-perf' is the same as that of the existing 'make check'.
- Performance test results are saved in testsuite/perftest.log, which
is appended to across multiple runs.
- Workload of each test can be customized by passing parameters to
'make check-perf'.
The basic usages and the outputs are as follows:
$ make check-perf
$ cat ./testsuite/perftest.log
solib 8 0.01
solib 16 0.03
solib 32 0.07
solib 64 0.19
solib 128 0.54
$ make check-perf RUNTESTFLAGS="--target_board=native-gdbserver solib.exp"
$ cat ./testsuite/perftest.log
solib 8 0.03
solib 16 0.05
solib 32 0.11
solib 64 0.26
solib 128 0.78
Specify the number of solibs in the test.
$ make check-perf RUNTESTFLAGS="--target_board=native-gdbserver solib.exp SOLIB_NUMBER=1024"
$ cat ./testsuite/perftest.log
solib 64 0.25
solib 128 0.7
solib 256 2.38
solib 512 9.67
solib 1024 53.0
GDB's Python doesn't know about the perftest package located in
testsuite/gdb.perf/lib, so in every test we need the following two
statements to add that path to sys.path.
gdb_test_no_output "python import os, sys"
gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
I'll add other test cases once the basic form of a test case is
settled.
Yao Qi (3):
New make target 'check-perf' and new dir gdb.perf
Perf test framework
Test on solib load and unload
gdb/Makefile.in | 8 ++
gdb/testsuite/Makefile.in | 15 +++-
gdb/testsuite/configure | 3 +-
gdb/testsuite/configure.ac | 2 +-
gdb/testsuite/gdb.perf/Makefile.in | 15 ++++
gdb/testsuite/gdb.perf/lib/perftest/config.py | 40 ++++++++++
gdb/testsuite/gdb.perf/lib/perftest/perftest.py | 49 ++++++++++++
gdb/testsuite/gdb.perf/lib/perftest/reporter.py | 38 +++++++++
gdb/testsuite/gdb.perf/lib/perftest/testresult.py | 42 ++++++++++
gdb/testsuite/gdb.perf/solib.c | 79 +++++++++++++++++++
gdb/testsuite/gdb.perf/solib.exp | 86 +++++++++++++++++++++
gdb/testsuite/gdb.perf/solib.py | 48 ++++++++++++
12 files changed, 420 insertions(+), 5 deletions(-)
create mode 100644 gdb/testsuite/gdb.perf/Makefile.in
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/__init__.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/config.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/perftest.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/reporter.py
create mode 100644 gdb/testsuite/gdb.perf/lib/perftest/testresult.py
create mode 100644 gdb/testsuite/gdb.perf/solib.c
create mode 100644 gdb/testsuite/gdb.perf/solib.exp
create mode 100644 gdb/testsuite/gdb.perf/solib.py
--
1.7.7.6
* Re: [RFC 3/3] Test on solib load and unload
2013-08-28 4:17 ` [RFC 3/3] Test on solib load and unload Yao Qi
@ 2013-08-28 4:27 ` Yao Qi
2013-08-28 11:31 ` Agovic, Sanimir
` (3 more replies)
0 siblings, 4 replies; 40+ messages in thread
From: Yao Qi @ 2013-08-28 4:27 UTC (permalink / raw)
To: gdb-patches
On 08/28/2013 12:16 PM, Yao Qi wrote:
> +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
> + file delete "solib-lib$i"
> + file delete "solib-lib$i.c"
> +}
I should use proc standard_temp_file here. Here is the updated version.
--
Yao (齐尧)
gdb/testsuite/
* gdb.perf/solib.c: New.
* gdb.perf/solib.exp: New.
* gdb.perf/solib.py: New.
---
gdb/testsuite/gdb.perf/solib.c | 79 ++++++++++++++++++++++++++++++++++
gdb/testsuite/gdb.perf/solib.exp | 86 ++++++++++++++++++++++++++++++++++++++
gdb/testsuite/gdb.perf/solib.py | 48 +++++++++++++++++++++
3 files changed, 213 insertions(+), 0 deletions(-)
create mode 100644 gdb/testsuite/gdb.perf/solib.c
create mode 100644 gdb/testsuite/gdb.perf/solib.exp
create mode 100644 gdb/testsuite/gdb.perf/solib.py
diff --git a/gdb/testsuite/gdb.perf/solib.c b/gdb/testsuite/gdb.perf/solib.c
new file mode 100644
index 0000000..948b286
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.c
@@ -0,0 +1,79 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+ Copyright (C) 2013 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>. */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#ifdef __WIN32__
+#include <windows.h>
+#define dlopen(name, mode) LoadLibrary (TEXT (name))
+#ifdef _WIN32_WCE
+# define dlsym(handle, func) GetProcAddress (handle, TEXT (func))
+#else
+# define dlsym(handle, func) GetProcAddress (handle, func)
+#endif
+#define dlclose(handle) FreeLibrary (handle)
+#else
+#include <dlfcn.h>
+#endif
+
+void
+do_test (int number)
+{
+ void **handles;
+ char libname[40];
+ int i;
+
+ handles = malloc (sizeof (void *) * number);
+
+ for (i = 0; i < number; i++)
+ {
+ sprintf (libname, "solib-lib%d", i);
+ handles[i] = dlopen (libname, RTLD_LAZY);
+ if (handles[i] == NULL)
+ {
+ printf ("ERROR on dlopen %s\n", libname);
+ return;
+ }
+ }
+
+ for (i = 0; i < number; i++)
+ {
+ char funname[20];
+ void *p;
+
+ sprintf (funname, "shr%d", i);
+ p = dlsym (handles[i], funname);
+ }
+
+ for (i = 0; i < number; i++)
+ dlclose (handles[i]);
+
+ free (handles);
+}
+
+static void
+end (void)
+{}
+
+int
+main (void)
+{
+ end ();
+
+ return 0;
+}
diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
new file mode 100644
index 0000000..8e7eaf8
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.exp
@@ -0,0 +1,86 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# This test case measures the speed of GDB when the shared libraries
+# of the inferior are loaded and unloaded.
+
+standard_testfile .c
+set executable $testfile
+set expfile $testfile.exp
+
+# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'
+if ![info exists SOLIB_NUMBER] {
+ set SOLIB_NUMBER 128
+}
+
+for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
+
+ # Produce source files.
+ set libname "solib-lib$i"
+ set src [standard_temp_file $libname.c]
+ set exe [standard_temp_file $libname]
+
+ set code "int shr$i (void) {return $i;}"
+ set f [open $src "w"]
+ puts $f $code
+ close $f
+
+ # Compile.
+ if { [gdb_compile_shlib $src $exe {debug}] != "" } {
+ untested "Couldn't compile $src."
+ return -1
+ }
+
+ # Delete object files to save some space.
+ file delete [standard_temp_file "solib-lib$i.c.o"]
+}
+
+if { [prepare_for_testing ${testfile}.exp ${binfile} ${srcfile} {debug shlib_load} ] } {
+ return -1
+}
+
+clean_restart $binfile
+
+if ![runto_main] {
+ fail "Can't run to main"
+ return -1
+}
+
+set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
+
+# Set sys.path for module perftest.
+gdb_test_no_output "python import os, sys"
+gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
+
+gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
+
+gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
+
+# Call the convenience function registered by python script.
+send_gdb "call \$perftest()\n"
+gdb_expect 3000 {
+ -re "\"Done\".*${gdb_prompt} $" {
+ }
+ timeout {}
+}
+
+remote_file host delete ${remote_python_file}
+
+# Remove these libraries and source files.
+
+for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
+ file delete [standard_temp_file "solib-lib$i"]
+ file delete [standard_temp_file "solib-lib$i.c"]
+}
diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
new file mode 100644
index 0000000..7cc9c4a
--- /dev/null
+++ b/gdb/testsuite/gdb.perf/solib.py
@@ -0,0 +1,48 @@
+# Copyright (C) 2013 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# This test case measures the speed of GDB when the shared libraries
+# of the inferior are loaded and unloaded.
+
+import gdb
+import time
+
+from perftest import perftest
+
+class SolibLoadUnload(perftest.SingleVariableTestCase):
+ def __init__(self, solib_number):
+ super (SolibLoadUnload, self).__init__ ("solib")
+ self.solib_number = solib_number
+
+ def execute_test(self):
+ num = self.solib_number
+ iteration = 5
+
+ # Warm up.
+ do_test_command = "call do_test (%d)" % num
+ gdb.execute (do_test_command)
+ gdb.execute (do_test_command)
+
+ while num > 0 and iteration > 0:
+ do_test_command = "call do_test (%d)" % num
+
+ start_time = time.clock()
+ gdb.execute (do_test_command)
+ elapsed_time = time.clock() - start_time
+
+ self.result.record (num, elapsed_time)
+
+ num = num / 2
+ iteration -= 1
--
1.7.7.6
* RE: [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
@ 2013-08-28 9:40 ` Agovic, Sanimir
2013-09-19 17:47 ` Doug Evans
2013-09-20 18:59 ` Tom Tromey
2 siblings, 0 replies; 40+ messages in thread
From: Agovic, Sanimir @ 2013-08-28 9:40 UTC (permalink / raw)
To: 'Yao Qi', gdb-patches
lgtm, however I cannot approve your patch.
Minor comment below.
-Sanimir
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Wednesday, August 28, 2013 06:17 AM
> To: gdb-patches@sourceware.org
> Subject: [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
>
> diff --git a/gdb/testsuite/Makefile.in b/gdb/testsuite/Makefile.in
> index a7b3d5c..34590de 100644
> --- a/gdb/testsuite/Makefile.in
> +++ b/gdb/testsuite/Makefile.in
> @@ -151,13 +151,18 @@ DO_RUNTEST = \
> export TCL_LIBRARY ; fi ; \
> $(RUNTEST)
>
> +# A list of all directories named "gdb.*" which also hold a .exp file.
> +# We filter out gdb.perf because it contains performance testing cases,
> +# and we don't want to run them together with other regression tests.
> +# They should be run separately by 'make check-perf'.
> +TEST_SRC_DIRS = $(filter-out gdb.perf,$(sort $(notdir $(patsubst %/,%,$(dir $(wildcard
> $(srcdir)/gdb.*/*.exp))))))
> +
> check-single: all $(abs_builddir)/site.exp
> - $(DO_RUNTEST) $(RUNTESTFLAGS)
> + $(DO_RUNTEST) --directory="$(TEST_SRC_DIRS)" $(RUNTESTFLAGS)
>
> -# A list of all directories named "gdb.*" which also hold a .exp file.
> # We filter out gdb.base and add fake entries, because that directory
> # takes the longest to process, and so we split it in half.
> -TEST_DIRS = gdb.base1 gdb.base2 $(filter-out gdb.base,$(sort $(notdir $(patsubst
> %/,%,$(dir $(wildcard $(srcdir)/gdb.*/*.exp))))))
> +TEST_DIRS = gdb.base1 gdb.base2 $(filter-out gdb.base ,$(TEST_SRC_DIRS))
>
You added whitespace after gdb.base.....................^
-Sanimir
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052
^ permalink raw reply [flat|nested] 40+ messages in thread
* RE: [RFC 2/3] Perf test framework
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
@ 2013-08-28 9:57 ` Agovic, Sanimir
2013-09-03 1:45 ` Yao Qi
2013-09-19 19:09 ` Doug Evans
1 sibling, 1 reply; 40+ messages in thread
From: Agovic, Sanimir @ 2013-08-28 9:57 UTC (permalink / raw)
To: 'Yao Qi', gdb-patches
lgtm, however I cannot approve your patch.
Some optional comments below.
-Sanimir
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Wednesday, August 28, 2013 06:17 AM
> To: gdb-patches@sourceware.org
> Subject: [RFC 2/3] Perf test framework
>
> ---
> gdb/testsuite/gdb.perf/lib/perftest/config.py | 40 +++++++++++++++++
> gdb/testsuite/gdb.perf/lib/perftest/perftest.py | 49 +++++++++++++++++++++
> gdb/testsuite/gdb.perf/lib/perftest/reporter.py | 38 ++++++++++++++++
> gdb/testsuite/gdb.perf/lib/perftest/testresult.py | 42 ++++++++++++++++++
>
I would put a copyright header and a module __doc__ into __init__.py
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> new file mode 100644
> index 0000000..b15fd39
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> @@ -0,0 +1,49 @@
[...]
> +
> +class TestCase(gdb.Function):
> + """Base class of all performance testing cases. It registers a GDB
> + convenience function 'perftest'. Invoking this convenience function
> + in GDB will call the method 'invoke'."""
> +
> + def __init__(self, result):
> + # Each test case registers a convenience function 'perftest'.
> + super (TestCase, self).__init__ ("perftest")
> + self.result = result
> +
> + def execute_test(self):
> + """Abstract method to do the actual tests."""
> + raise RuntimeError("Abstract Method.")
>
You may use module abc instead:
| class TestCase(gdb.Function):
| __metaclass__ = abc.ABCMeta
| @abc.abstractmethod
| def execute_test(self): pass
Instantiating TestCase and subclasses without overriding execute_test will fail.
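For reference, here is a runnable version of that idea, written in modern Python 3 syntax for illustration (the 2.x spelling uses __metaclass__ as above); the Function stand-in replaces gdb.Function so the sketch works outside GDB:

```python
import abc

# Stand-in for gdb.Function so the sketch runs outside of GDB;
# in the real patch the base class comes from the gdb module.
class Function:
    def __init__(self, name):
        self.name = name

class TestCase(Function, metaclass=abc.ABCMeta):
    """Base class of all performance test cases."""

    def __init__(self, result):
        # Each test case registers a convenience function 'perftest'.
        super().__init__("perftest")
        self.result = result

    @abc.abstractmethod
    def execute_test(self):
        """Do the actual measurements; subclasses must override."""

# A hypothetical concrete subclass; only classes that override
# execute_test can be instantiated.
class SolibTest(TestCase):
    def execute_test(self):
        return "ran"

case = SolibTest(result=None)
outcome = case.execute_test()
```

Instantiating TestCase itself raises TypeError, which is exactly the "fail early" behaviour suggested above.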
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> new file mode 100644
> index 0000000..e27b2ae
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> @@ -0,0 +1,38 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +class Reporter(object):
> + """Base class of reporters, which report test results in
> + different formats."""
> +
> + def report(self, arg1, arg2, arg3):
>
Using *args should handle most cases well.
> + raise RuntimeError("Abstract Method.")
> +
> + def end(self):
> + """Invoked when reporting is done. Usually it can be overridden
> + to do some cleanups, such as closing file descriptors."""
> + raise RuntimeError("Abstract Method:end.")
>
The doc states _can be overridden_, thus you should either provide a default
implementation instead of raising an exception, or write _must be overridden_.
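Following that comment, the base class might look like this (a sketch, not the patch's actual code): report still raises, since subclasses must provide it, while end() gets a default no-op implementation to match its docstring:

```python
class Reporter:
    """Base class of reporters, which report test results in
    different formats."""

    def report(self, *args):
        # Subclasses must override this.
        raise RuntimeError("Abstract Method.")

    def end(self):
        """Invoked when reporting is done.  May be overridden to do
        cleanups, such as closing file descriptors; the default does
        nothing."""
        pass

reporter = Reporter()
reporter.end()  # default no-op, safe to call on the base class
```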
> +
> +class TextReporter(Reporter):
> + """Report results in plain text 'perftest.log'."""
> +
> + def __init__(self):
> + self.txt_log = open ("perftest.log", 'a+');
> +
> + def report(self, arg1, arg2, arg3):
> + print >>self.txt_log, '%s %s %s' % (arg1, arg2, arg3)
>
Afaik >> is deprecated, self.txt_log.write(' '.join(args))
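A write()-based variant that avoids the Python-2-only `print >>` syntax might look like this sketch; taking the stream as a constructor parameter is my addition, to keep the class testable (the real class opens "perftest.log" itself):

```python
import io

class TextReporter:
    """Report results in plain text, one line per result."""

    def __init__(self, stream):
        # The patch opens open("perftest.log", "a+") here instead.
        self.txt_log = stream

    def report(self, *args):
        # Works on both Python 2 and 3, unlike "print >>file, ...".
        self.txt_log.write(' '.join(str(a) for a in args) + '\n')

    def end(self):
        self.txt_log.close()

# Hypothetical usage with an in-memory stream.
buf = io.StringIO()
reporter = TextReporter(buf)
reporter.report("solib", 128, 0.25)
logged = buf.getvalue()
```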
> +
> + def end(self):
> + self.txt_log.close ()
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> new file mode 100644
> index 0000000..9912326
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> @@ -0,0 +1,42 @@
[...]
> +
> +class SingleVariableTestResult(TestResult):
> + """Test results for the test case with a single variable. """
> +
> + def __init__(self, name):
> + super (SingleVariableTestResult, self).__init__ (name)
> + self.results = dict ()
> +
> + def record(self, variable, result):
> + self.results[variable] = result
> +
> + def report(self, reporter):
> + for key in sorted(self.results.iterkeys()):
> + reporter.report (self.name, key, self.results[key])
> + reporter.end ()
>
You may use: for key,value in sorted(self.results.iteritems(), key=itemgetter(0))
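A runnable sketch of that suggestion, in Python 3 spelling where iteritems() becomes items() (the sample result values are made up):

```python
from operator import itemgetter

# Hypothetical recorded results: variable value -> elapsed time.
results = {128: 0.91, 32: 0.27, 64: 0.48}

# Iterate key/value pairs sorted by key, instead of indexing the
# dict again inside the loop.
report_lines = ["solib %s %s" % (key, value)
                for key, value in sorted(results.items(), key=itemgetter(0))]
```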
-Sanimir
* RE: [RFC 3/3] Test on solib load and unload
2013-08-28 4:27 ` Yao Qi
@ 2013-08-28 11:31 ` Agovic, Sanimir
2013-09-03 1:59 ` Yao Qi
2013-09-02 15:24 ` Blanc, Nicolas
` (2 subsequent siblings)
3 siblings, 1 reply; 40+ messages in thread
From: Agovic, Sanimir @ 2013-08-28 11:31 UTC (permalink / raw)
To: 'Yao Qi', gdb-patches
lgtm for the python part. I lack some tcl/dejagnu skills to review the other part.
See comments below.
-Sanimir
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Wednesday, August 28, 2013 06:26 AM
> To: gdb-patches@sourceware.org
> Subject: Re: [RFC 3/3] Test on solib load and unload
>
> diff --git a/gdb/testsuite/gdb.perf/solib.c b/gdb/testsuite/gdb.perf/solib.c
> new file mode 100644
> index 0000000..948b286
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/solib.c
> @@ -0,0 +1,79 @@
[...]
> +void
> +do_test (int number)
> +{
> + void **handles;
> + char libname[40];
> + int i;
> +
> + handles = malloc (sizeof (void *) * number);
> +
> + for (i = 0; i < number; i++)
> + {
> + sprintf (libname, "solib-lib%d", i);
> + handles[i] = dlopen (libname, RTLD_LAZY);
> + if (handles[i] == NULL)
> + {
> + printf ("ERROR on dlopen %s\n", libname);
> + return;
> + }
> + }
> +
> + for (i = 0; i < number; i++)
> + {
> + char funname[20];
> + void *p;
> +
> + sprintf (funname, "shr%d", i);
> + p = dlsym (handles[i], funname);
>
Does dlsym have any perf impact on the debugger?
> diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
> new file mode 100644
> index 0000000..8e7eaf8
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/solib.exp
> @@ -0,0 +1,86 @@
[...]
> +
> +set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
> +
> +# Set sys.path for module perftest.
> +gdb_test_no_output "python import os, sys"
> +gdb_test_no_output "python sys.path.insert\(0,
> os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
> +
> +gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
>
The lines above seem pretty generic and could move into a proc of their own.
> +
> +gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
> +
> +# Call the convenience function registered by python script.
> +send_gdb "call \$perftest()\n"
>
Not sure if a convenience function is necessary:
python SolibLoadUnload().execute_test()
could do the job as well.
> diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
> new file mode 100644
> index 0000000..7cc9c4a
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/solib.py
> @@ -0,0 +1,48 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# This test case tests the speed of GDB when shared libraries
> +# of the inferior are loaded and unloaded.
> +
> +import gdb
> +import time
> +
> +from perftest import perftest
> +
> +class SolibLoadUnload(perftest.SingleVariableTestCase):
> + def __init__(self, solib_number):
> + super (SolibLoadUnload, self).__init__ ("solib")
> + self.solib_number = solib_number
> +
> + def execute_test(self):
> + num = self.solib_number
> + iteration = 5;
> +
> + # Warm up.
> + do_test_command = "call do_test (%d)" % num
> + gdb.execute (do_test_command)
> + gdb.execute (do_test_command)
> +
> + while num > 0 and iteration > 0:
> + do_test_command = "call do_test (%d)" % num
>
This may raise a TypeError if num % 2 != 0
> +
> + start_time = time.clock()
> + gdb.execute (do_test_command)
> + elapsed_time = time.clock() - start_time
> +
> + self.result.record (num, elapsed_time)
> +
> + num = num / 2
> + iteration -= 1
>
You may consider observing solib loads/unloads to compute the time
between the events.
Can you re-run the sample with turned off garbage collector? It may
cause some jitter if turned on.
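The gc suggestion might look like this in the test harness: a small helper that disables the cyclic collector around the timed region. This is a sketch in Python 3 (time.perf_counter rather than the time.clock the patch uses), and the function name is mine:

```python
import gc
import time

def timed(func):
    """Run func once with the cyclic garbage collector disabled,
    so GC pauses do not show up as jitter in the measurement."""
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        start = time.perf_counter()
        result = func()
        elapsed = time.perf_counter() - start
    finally:
        # Restore the collector to its previous state.
        if was_enabled:
            gc.enable()
    return result, elapsed

# Hypothetical usage; in the test this would wrap gdb.execute(...).
result, elapsed = timed(lambda: sum(range(1000)))
```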
-Sanimir
* RE: [RFC 3/3] Test on solib load and unload
2013-08-28 4:27 ` Yao Qi
2013-08-28 11:31 ` Agovic, Sanimir
@ 2013-09-02 15:24 ` Blanc, Nicolas
2013-09-03 2:04 ` Yao Qi
2013-09-19 22:45 ` Doug Evans
2013-09-20 19:14 ` Tom Tromey
3 siblings, 1 reply; 40+ messages in thread
From: Blanc, Nicolas @ 2013-09-02 15:24 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Hi Yao,
I found your test useful for confirming that the extra logic of my patch [1] has no
measurable impact on SO unloading.
> + for (i = 0; i < number; i++)
> + dlclose (handles[i]);
The loop above closes SOs in FIFO style, which might be optimal for GDB.
You could alternate closing from the front and closing from the back.
I found it a bit odd that "make check-perf" is not recognized in the top gdb folder
the way "make check" is. But again, it's a minor point to me.
Regards,
Nicolas
[1] http://sourceware.org/ml/gdb-patches/2013-07/msg00684.html
* Re: [RFC 2/3] Perf test framework
2013-08-28 9:57 ` Agovic, Sanimir
@ 2013-09-03 1:45 ` Yao Qi
2013-09-03 6:38 ` Agovic, Sanimir
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-03 1:45 UTC (permalink / raw)
To: Agovic, Sanimir; +Cc: gdb-patches
Sanimir,
Thanks for the review.
On 08/28/2013 05:57 PM, Agovic, Sanimir wrote:
> I would put a copyright header and a module __doc__ into __init__.py
>
What do you mean by "put a module __doc__ into __init__.py"?
>> >diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
>> >b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
>> >new file mode 100644
>> >index 0000000..b15fd39
>> >--- /dev/null
>> >+++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
>> >@@ -0,0 +1,49 @@
> [...]
>> >+
>> >+class TestCase(gdb.Function):
>> >+ """Base class of all performance testing cases. It registers a GDB
>> >+ convenience function 'perftest'. Invoking this convenience function
>> >+ in GDB will call the method 'invoke'."""
>> >+
>> >+ def __init__(self, result):
>> >+ # Each test case registers a convenience function 'perftest'.
>> >+ super (TestCase, self).__init__ ("perftest")
>> >+ self.result = result
>> >+
>> >+ def execute_test(self):
>> >+ """Abstract method to do the actual tests."""
>> >+ raise RuntimeError("Abstract Method.")
>> >
> You may use module abc instead:
> | class TestCase(gdb.Function):
> | __metaclass__ = abc.ABCMeta
> | @abc.abstractmethod
> | def execute_test(self): pass
>
> Instantiating TestCase and subclasses without overriding execute_test will fail.
>
Abstract Base Classes would be useful here. They were introduced in Python
2.6, but GDB should support some older Pythons. I don't know the exact
range of Python versions GDB supports (2.4 ~ 3.0?), but some code in
gdb/python/ is written with 2.4 in mind.
The other comments are addressed, and I'll post the updated patch in the
next round.
--
Yao (齐尧)
* Re: [RFC 3/3] Test on solib load and unload
2013-08-28 11:31 ` Agovic, Sanimir
@ 2013-09-03 1:59 ` Yao Qi
2013-09-03 6:33 ` Agovic, Sanimir
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-03 1:59 UTC (permalink / raw)
To: Agovic, Sanimir; +Cc: gdb-patches
On 08/28/2013 07:31 PM, Agovic, Sanimir wrote:
>> + for (i = 0; i < number; i++)
>> >+ {
>> >+ char funname[20];
>> >+ void *p;
>> >+
>> >+ sprintf (funname, "shr%d", i);
>> >+ p = dlsym (handles[i], funname);
>> >
> Does dlsym have any perf impact on the debugger?
>
Probably not much performance impact on the debugger, IMO. dlsym
resolves symbols at run time; the debugger is not much involved.
>> >+
>> >+gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
>> >+
>> >+# Call the convenience function registered by python script.
>> >+send_gdb "call \$perftest()\n"
>> >
> Not sure if a convenience function is necessary:
> python SolibLoadUnload().execute_test()
> could do the job as well.
>
The convenience function is useful to decouple solib.py from solib.exp:
solib.py registers the convenience function, while solib.exp calls it.
>> >+
>> >+ start_time = time.clock()
>> >+ gdb.execute (do_test_command)
>> >+ elapsed_time = time.clock() - start_time
>> >+
>> >+ self.result.record (num, elapsed_time)
>> >+
>> >+ num = num / 2
>> >+ iteration -= 1
>> >
> You may consider observing solibs loads/unloads to compute the time
> between the events.
> Can you re-run the sample with turned off garbage collector? It may
> cause some jitter if turned on.
I don't know how much jitter it contributes, but the Python code is
simple and most of the time should be spent in GDB, which is what we
want. Thanks for your suggestion. I'll re-run it with the gc turned
off, to see if I get different results.
--
Yao (齐尧)
* Re: [RFC 3/3] Test on solib load and unload
2013-09-02 15:24 ` Blanc, Nicolas
@ 2013-09-03 2:04 ` Yao Qi
2013-09-03 7:50 ` Blanc, Nicolas
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-03 2:04 UTC (permalink / raw)
To: Blanc, Nicolas; +Cc: gdb-patches
Hi, Nicolas,
On 09/02/2013 11:24 PM, Blanc, Nicolas wrote:
>> >+ for (i = 0; i < number; i++)
>> >+ dlclose (handles[i]);
> The loop above closes SOs in FIFO style, which might be optimal for GDB.
> You could alternate closing from the front and closing from the back.
That is a good idea. I'll rewrite the code, and see how much the
performance is impacted by the order.
>
> I found a bit odd that "make check-perf" is not recognized in the top gdb folder,
> as "make check" is. But again, it's a minor point to me.
Can you elaborate? "make check-perf" should be equivalent to "make
check", but running different sets of test cases.
--
Yao (齐尧)
* RE: [RFC 3/3] Test on solib load and unload
2013-09-03 1:59 ` Yao Qi
@ 2013-09-03 6:33 ` Agovic, Sanimir
0 siblings, 0 replies; 40+ messages in thread
From: Agovic, Sanimir @ 2013-09-03 6:33 UTC (permalink / raw)
To: 'Yao Qi'; +Cc: gdb-patches
> -----Original Message-----
> From: Yao Qi [mailto:yao@codesourcery.com]
> Sent: Tuesday, September 03, 2013 03:59 AM
> To: Agovic, Sanimir
> Cc: gdb-patches@sourceware.org
> Subject: Re: [RFC 3/3] Test on solib load and unload
>
> On 08/28/2013 07:31 PM, Agovic, Sanimir wrote:
> >> + for (i = 0; i < number; i++)
> >> >+ {
> >> >+ char funname[20];
> >> >+ void *p;
> >> >+
> >> >+ sprintf (funname, "shr%d", i);
> >> >+ p = dlsym (handles[i], funname);
> >> >
> > Does dlsym have any perf impact on the debugger?
> >
>
> Probably not much performance impact on the debugger, IMO. dlsym
> resolves symbols at run time; the debugger is not much involved.
>
Without impact/side-effect I'd rather remove it, given that this is a
NOP for the debugger. But this is nothing I'm worried about.
> >> >+
> >> >+gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
> >> >+
> >> >+# Call the convenience function registered by python script.
> >> >+send_gdb "call \$perftest()\n"
> >> >
> > Not sure if a convenience function is necessary:
> > python SolibLoadUnload().execute_test()
> > could do the job as well.
> >
>
> The convenience function is useful to decouple solib.py from solib.exp:
> solib.py registers the convenience function, while solib.exp calls it.
>
Got it.
> >> >+
> >> >+ start_time = time.clock()
> >> >+ gdb.execute (do_test_command)
> >> >+ elapsed_time = time.clock() - start_time
> >> >+
> >> >+ self.result.record (num, elapsed_time)
> >> >+
> >> >+ num = num / 2
> >> >+ iteration -= 1
> >> >
> > You may consider observing solibs loads/unloads to compute the time
> > between the events.
> > Can you re-run the sample with turned off garbage collector? It may
> > cause some jitter if turned on.
>
> I don't know how much time is spent on jitter, but python code is
> simple and most of the time should be spent on GDB, which is what we
> want. Thanks for your suggestion. I'll re-run it with gc turned off,
> to see if I can get something different.
>
Thanks.
-Sanimir
* RE: [RFC 2/3] Perf test framework
2013-09-03 1:45 ` Yao Qi
@ 2013-09-03 6:38 ` Agovic, Sanimir
0 siblings, 0 replies; 40+ messages in thread
From: Agovic, Sanimir @ 2013-09-03 6:38 UTC (permalink / raw)
To: 'Yao Qi'; +Cc: gdb-patches
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Tuesday, September 03, 2013 03:44 AM
> To: Agovic, Sanimir
> Cc: gdb-patches@sourceware.org
> Subject: Re: [RFC 2/3] Perf test framework
>
> On 08/28/2013 05:57 PM, Agovic, Sanimir wrote:
> > I would put a copyright header and a module __doc__ into __init__.py
> >
>
> What do you mean by "put a module __doc__ into __init__.py"?
>
Add a module docstring to __init__.py briefly describing the module;
it will show up in help(perftest)/pydoc.
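For illustration, the suggestion amounts to starting __init__.py with something like this (the wording of the docstring is hypothetical):

```python
# Hypothetical first lines of gdb.perf/lib/perftest/__init__.py,
# after the usual copyright header.
"""Support library for GDB performance testing.

This package provides base classes for performance test cases,
test results, and reporters; see perftest.py, testresult.py and
reporter.py.  This text shows up in help(perftest) and pydoc."""
```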
> Abstract Base Class should be useful here. It was introduced in python
> 2.6, but GDB should support some older pythons. I don't know what are
> the versions of python GDB support, 2.4 ~ 3.0?, but some code in
> gdb/python/ is written with 2.4 considered.
>
I see. As an alternative you may consider raising NotImplementedError instead.
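A minimal sketch of that alternative, which works even on Python 2.4 (the gdb.Function base class is omitted so the snippet runs outside GDB):

```python
class TestCase:
    """Base class of all performance test cases."""

    def execute_test(self):
        """Abstract method to do the actual tests; must be overridden."""
        # NotImplementedError is a subclass of RuntimeError, so callers
        # catching the original RuntimeError keep working.
        raise NotImplementedError("TestCase.execute_test")
```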
> Other comments are addressed, and I'll post the updated patch in next
> round.
>
Thanks.
-Sanimir
* RE: [RFC 3/3] Test on solib load and unload
2013-09-03 2:04 ` Yao Qi
@ 2013-09-03 7:50 ` Blanc, Nicolas
0 siblings, 0 replies; 40+ messages in thread
From: Blanc, Nicolas @ 2013-09-03 7:50 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
>> I found a bit odd that "make check-perf" is not recognized in the top
>> gdb folder, as "make check" is. But again, it's a minor point to me.
>
> Can you elaborate? "make check-perf" should be equivalent to "make check", but running different sets of test cases.
Sure, I can run "make check" from the top build directory but not "make check-perf":
~/build$ make check
make[1]: Entering directory `/home/users/nblanc/build`
~/build$ make check-perf
make: *** No rule to make target `check-perf'. Stop.
The command works fine inside build/gdb:
~/build/gdb$ make check-perf
make[1]: Entering directory `/home/users/nblanc/build/gdb/testsuite'
I hope this is helpful.
Regards,
Nicolas
* Re: [RFC] GDB performance testing infrastructure
2013-08-28 3:04 ` Yao Qi
@ 2013-09-19 0:36 ` Doug Evans
0 siblings, 0 replies; 40+ messages in thread
From: Doug Evans @ 2013-09-19 0:36 UTC (permalink / raw)
To: Yao Qi; +Cc: Agovic, Sanimir, gdb-patches
Yao Qi writes:
> On 08/27/2013 09:49 PM, Agovic, Sanimir wrote:
> >> * Remote debugging. It is slower to read from the remote target, and
> >> > worse, GDB reads the same memory regions in multiple times, or reads
> >> > the consecutive memory by multiple packets.
> >> >
> > Once gdb and gdbserver share most of the target code, the overhead will be
> > caused by the serial protocol roundtrips. But this will take a while...
>
> Sanimir, thanks for your comments!
>
> One of the motivations of the performance testing is to measure the
> overhead of RSP in some scenarios, and look for the opportunities to
> improve it, or add a completely new protocol, which is an extreme case.
For reference sake,
a big part of the "reading same memory region multiple times"
and "consecutive memory by multiple packets" is gdb's inability to use
its dcache (apropos dcache) for text segments. Blech.
> Once the infrastructure is ready, we can write some tests to see how
> efficient or inefficient RSP is.
"set debug remote 1" and you're there. 1/2 :-)
"But seriously ..."
Latency can be a huge problem with any remote protocol.
Running gdb+gdbserver on the same machine can hide issues
without tracking, e.g., packet counts in addition to cpu/wall time.
[*both* cpu and wall time are useful]
I hope the test harness will incorporate this.
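The point about reporting both kinds of time can be sketched as a small helper (illustrative only; function and variable names are mine, not the harness's):

```python
import time

def measure(func):
    """Return (result, wall_seconds, cpu_seconds) for one call of func.

    Wall time includes latency such as remote-protocol roundtrips;
    CPU time does not, so reporting both helps tell them apart."""
    wall0 = time.perf_counter()
    cpu0 = time.process_time()
    result = func()
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return result, wall, cpu

# Hypothetical usage; in a real test func would drive GDB.
value, wall, cpu = measure(lambda: sum(range(10000)))
```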
> >> > * Tracepoint. Tracepoint is designed to be efficient on collecting
> >> > data in the inferior, so we need performance tests to guarantee that
> >> > tracepoint is still efficient enough. Note that we a test
> >> > `gdb.trace/tspeed.exp', but there are still some rooms to improve.
> >> >
> > Afaik the tracepoint functionality is quite separated from gdb may be tested
> > in isolation. Having a generic benchmark framework covering the most parts of
> > gdb is probably_the_ way to go but I see some room for specialized benchmarks
> > e.g. for tracepoints with a custom driver. But my knowledge is too vague on
> > the topic.
> >
>
> Well, it is sort of a design trade-off. We need a framework generic
> enough to handle most of the testing requirements for different GDB
> modules (such as solib, symbols, backtrace, disassembly, etc.); on the
> other hand, we want each test to be specialized for the corresponding
> GDB module, so that we can find more details.
>
> I am inclined to handle testing of _all_ modules under this generic
> framework.
Agreed.
> >> > 2. Detect performance regressions. We collected the performance data
> >> > of each micro-benchmark, and we need to detect or identify the
> >> > performance regression by comparing with the previous run. It is
> >> > more powerful to associate it with continuous testing.
> >> >
> > Something really simple, so simple that one could run it silently with every
> > make invocation. For a newcomer, it took me some time to get used to 'make
> > check', e.g. setting up, running, and interpreting the tests with various
> > settings. Something simpler would help to run it more often.
> >
>
> Yes, I agree, everything should be simple. I assume that people
> running performance testing are familiar with GDB's regular
> regression tests, i.e. 'make check'. We'll provide 'make check-perf' to
> run performance testing, and it doesn't add extra difficulty on top of
> 'make check' from the user's point of view, IMO.
>
> > I like to add the Machine Interface (MI) to the list, but it is quite rudimentary:
> >
> > $ gdb -interpreter mi -q debugee
> > [...]
> > -enable-timings
> > ^done
> > (gdb)
> > -break-insert -f main
> > ^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
> > [...]
> > (gdb)
> > -exec-step
> > ^running
> > *running,thread-id="1"
> > (gdb)
> > *stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
> > (gdb)
> >
> > With -enable-timings[1] enabled, every result record has a time triple
> > appended, even for async[2] ones. If we come up with a full mi parser
> > one could run tests w/o timings. A mi result is quite json-ish.
>
> Thanks for the input.
>
> >
> > (To be honest I do not know how timings are composed of =D)
> >
> > In addition there are some tools for plotting benchmark results[3].
> >
> > [1]http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
> > [2]https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
> > [3]http://speed.pypy.org/
>
> I am using speed to track and show the performance data I got from the
> GDB performance tests. It is able to associate the performance data
> with the commit, making it easy to find which commit causes a
> regression. However, my impression is that speed and its dependent
> packages are not well-maintained nowadays.
>
> After some search online, I like the chromium performance test and its
> plot, personally. It is integrated with buildbot (a customized version).
>
> http://build.chromium.org/f/chromium/perf/dashboard/overview.html
>
> However, as I said in this proposal, let us focus on goal #1 first, get
> the framework ready and collect performance data.
Agreed.
Let's get a good framework in place.
* Re: [RFC 0/3] GDB Performance testing
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
` (2 preceding siblings ...)
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
@ 2013-09-19 17:25 ` Doug Evans
3 siblings, 0 replies; 40+ messages in thread
From: Doug Evans @ 2013-09-19 17:25 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Yao Qi writes:
> This patch series implement GDB performance testing infrastructure
> according to the design I posted here
> <https://sourceware.org/ml/gdb-patches/2013-08/msg00380.html>
>
> Here are some highlights:
>
> - Performance testing can be run via 'make check-perf'
> - GDB and GDBServer are started by dejagnu, so the usage of
> 'make check-perf' is the same as the usage of the existing 'make check'.
> - Performance test results are saved in testsuite/perftest.log, which
> is appended to across multiple runs.
> - Workload of each test can be customized by passing parameters to
> 'make check-perf'.
These are all great, modulo "Consistency Is Good" tells me performance
test results should overwrite perftest.log. That's what "make check" does.
One could make that configurable, but IWBN if the default behaviour
was consistent.
* Re: [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-08-28 9:40 ` Agovic, Sanimir
@ 2013-09-19 17:47 ` Doug Evans
2013-09-20 19:00 ` Tom Tromey
2013-09-20 18:59 ` Tom Tromey
2 siblings, 1 reply; 40+ messages in thread
From: Doug Evans @ 2013-09-19 17:47 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Yao Qi writes:
> When we add performance tests, we think typical 'make check' should
> not run performance tests. We add a new makefile target 'check-perf'
> to run performance test only.
>
> We also add a new dir gdb.perf in testsuite for all performance tests.
> However, current 'make check' logic will either run dejagnu in
> directory testsuite or iterate all gdb.* directories which has *.exp
> files. Both of them will run tests in gdb.perf, so we have to filter
> gdb.perf out. In makefile target 'check-single', we pass a list of
> gdb.* directories except gdb.perf. We also update $(TEST_DIRS) to
> filter out gdb.perf too, so that tests in gdb.perf can't be run in
> target check-parallel.
An alternative is, as Tom suggests, to do something like
"if [skip_perf_tests] ..." at the top of each perf.exp file.
That has the flavor of consistency, but I'm not sure which I like better,
though I do like consistency.
Thoughts?
* Re: [RFC 2/3] Perf test framework
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
2013-08-28 9:57 ` Agovic, Sanimir
@ 2013-09-19 19:09 ` Doug Evans
2013-09-20 8:04 ` Yao Qi
1 sibling, 1 reply; 40+ messages in thread
From: Doug Evans @ 2013-09-19 19:09 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Hi. Various comments inline.
[Others have commented as well, I'm leaving those alone where I don't
have anything to add.]
Yao Qi writes:
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/__init__.py b/gdb/testsuite/gdb.perf/lib/perftest/__init__.py
> new file mode 100644
> index 0000000..e69de29
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/config.py b/gdb/testsuite/gdb.perf/lib/perftest/config.py
> new file mode 100644
> index 0000000..db24b16
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/config.py
> @@ -0,0 +1,40 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +import ConfigParser
> +import reporter
> +
> +class PerfTestConfig(object):
> + """
> + Create the right objects according to file perftest.ini.
> + """
> +
> + def __init__(self):
> + self.config = ConfigParser.ConfigParser()
> + self.config.read("perftest.ini")
> +
> + def get_reporter(self):
> + """Create an instance of class Reporter which is determined by
> + the option 'type' in section '[Reporter]'."""
> + if not self.config.has_section('Reporter'):
> + return reporter.TextReporter()
> + if not self.config.has_option('Reporter', 'type'):
> + return reporter.TextReporter()
> +
> + name = self.config.get('Reporter', 'type')
> + cls = getattr(reporter, name)
> + return cls()
> +
> +perftestconfig = PerfTestConfig()
What do you see perftest.ini containing over time?
While the file format is pretty trivial, it is another file format.
Do we need it?
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/perftest.py b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> new file mode 100644
> index 0000000..b15fd39
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/perftest.py
> @@ -0,0 +1,49 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +import gdb
> +import testresult
> +from config import perftestconfig
> +
> +class TestCase(gdb.Function):
> + """Base class of all performance testing cases. It registers a GDB
> + convenience function 'perftest'. Invoke this convenience function
> + in GDB will call method 'invoke'."""
Coding conventions for doc strings are here:
http://www.python.org/dev/peps/pep-0257
> +
> + def __init__(self, result):
> + # Each test case registers a convenience function 'perftest'.
> + super (TestCase, self).__init__ ("perftest")
> + self.result = result
> +
> + def execute_test(self):
> + """Abstract method to do the actual tests."""
> + raise RuntimeError("Abstract Method.")
> +
> + def __report(self, reporter):
> + # Private method to report the testing result by 'reporter'.
> + self.result.report (reporter)
Private methods should be _foo.
-> _report?
ref: http://www.python.org/dev/peps/pep-0008/#method-names-and-instance-variables
> +
> + def invoke(self):
> + """Call method 'execute_test' and '__report'."""
As I read this, this comment just says what this function is doing.
I'm guessing the point is to say that all such methods must, at the least,
do these two things. This should be spelled out.
It would also be good to document that "invoke" is what GDB calls
to perform this function.
Also, I'm wondering why execute_test is public and
__report(-> _report) is private?
> +
> + self.execute_test()
> + self.__report(perftestconfig.get_reporter())
> + return "Done"
> +
> +class SingleVariableTestCase(TestCase):
> + """Test case with a single variable."""
I think this needs more documentation.
What does "single variable" refer to? A single statistic, like wall time?
> +
> + def __init__(self, name):
> + super (SingleVariableTestCase, self).__init__ (testresult.SingleVariableTestResult (name))
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/reporter.py b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> new file mode 100644
> index 0000000..e27b2ae
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/reporter.py
> @@ -0,0 +1,38 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +class Reporter(object):
> + """Base class of reporter, which is about reporting test results in
> + different formatss."""
See pep 257 for doc string conventions.
> +
> + def report(self, arg1, arg2, arg3):
> + raise RuntimeError("Abstract Method.")
> +
> + def end(self):
> + """Invoked when reporting is done. Usually it can be overridden
> + to do some cleanups, such as closing file descriptors."""
> + raise RuntimeError("Abstract Method:end.")
> +
> +class TextReporter(Reporter):
> + """Report results in plain text 'perftest.log'."""
> +
> + def __init__(self):
> + self.txt_log = open ("perftest.log", 'a+');
While I can appreciate the potential user friendliness of appending
to perftest.log, I'm not comfortable with it as a default given all
the ways I envision myself using this. At least not yet.
I'd rather have the default be to overwrite.
An option to specify which would be ok.
> +
> + def report(self, arg1, arg2, arg3):
> + print >>self.txt_log, '%s %s %s' % (arg1, arg2, arg3)
It would be good to rename arg1,2,3 and/or document their intended contents.
> +
> + def end(self):
> + self.txt_log.close ()
> diff --git a/gdb/testsuite/gdb.perf/lib/perftest/testresult.py b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> new file mode 100644
> index 0000000..9912326
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/lib/perftest/testresult.py
> @@ -0,0 +1,42 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +class TestResult(object):
> + """Base class to record or save test results."""
> +
> + def __init__(self, name):
> + self.name = name
> +
> + def record (self, variable, result):
> + raise RuntimeError("Abstract Method:record.")
> +
> + def report (self, reporter):
> + """Report the test results by reporter."""
> + raise RuntimeError("Abstract Method:report.")
> +
> +class SingleVariableTestResult(TestResult):
> + """Test results for the test case with a single variable. """
> +
> + def __init__(self, name):
> + super (SingleVariableTestResult, self).__init__ (name)
> + self.results = dict ()
> +
> + def record(self, variable, result):
> + self.results[variable] = result
As things read (to me anyway), the class only handles a single variable,
but the "record" method makes the variable a parameter.
There's a disconnect here.
Maybe the "variable" parameter to "record" is misnamed.
E.g., if testing the wall time of performing something over a range of values,
e.g., 1 solib, 8 solibs, 256 solibs, "variable" would be 1, 8, 256?
If that's the case, please rename "variable".
I realize it's what is being varied run after run, but
it just doesn't read well.
There are two "variables" (so to speak) here:
1) What one is changing run after run. E.g. # solibs
2) What one is measuring. E.g. wall time, cpu time, memory used.
The name "variable" feels too ambiguous.
OTOH, if the performance testing world has a well established convention
for what the word "variable" means, maybe I could live with it. :-)
> +
> + def report(self, reporter):
> + for key in sorted(self.results.iterkeys()):
> + reporter.report (self.name, key, self.results[key])
> + reporter.end ()
IIUC, calling end() here closes the file.
But this function didn't open the file.
It would be cleaner to either open+close the file here or do neither.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 3/3] Test on solib load and unload
2013-08-28 4:27 ` Yao Qi
2013-08-28 11:31 ` Agovic, Sanimir
2013-09-02 15:24 ` Blanc, Nicolas
@ 2013-09-19 22:45 ` Doug Evans
2013-09-20 19:19 ` Tom Tromey
2013-09-22 6:25 ` Yao Qi
2013-09-20 19:14 ` Tom Tromey
3 siblings, 2 replies; 40+ messages in thread
From: Doug Evans @ 2013-09-19 22:45 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Hi. Comments inline.
Yao Qi writes:
> diff --git a/gdb/testsuite/gdb.perf/solib.exp b/gdb/testsuite/gdb.perf/solib.exp
> new file mode 100644
> index 0000000..8e7eaf8
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/solib.exp
> @@ -0,0 +1,86 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# This test case is to test the speed of GDB when it is handling the
> +# shared libraries of inferior are loaded and unloaded.
> +
> +standard_testfile .c
> +set executable $testfile
> +set expfile $testfile.exp
> +
> +# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'
SOLIB_NUMBER doesn't read very well.
How about NUM_SOLIBS?
> +if ![info exists SOLIB_NUMBER] {
> + set SOLIB_NUMBER 128
> +}
> +
> +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
> +
> + # Produce source files.
> + set libname "solib-lib$i"
> + set src [standard_temp_file $libname.c]
> + set exe [standard_temp_file $libname]
> +
> + set code "int shr$i (void) {return $i;}"
> + set f [open $src "w"]
> + puts $f $code
> + close $f
IWBN if the test harness provided utilities for generating source
files instead of hardcoding the generating of them in the test.
Parameters to such a set of functions would include things like the name
of a high level entry point (what one might pass to dlsym), the number
of functions in the file, the number of classes, etc.
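As a sketch of what such a utility could generate (the function name, its parameters, and the exact shape of the emitted C are all invented for illustration):

```python
def generate_solib_source(entry_point, num_functions):
    """Return C source text for one synthetic shared library.

    entry_point is the name of the high-level entry point (what one
    might pass to dlsym); num_functions is how many helper functions
    to emit besides it.
    """
    lines = ["static int %s_helper%d (void) { return %d; }"
             % (entry_point, i, i) for i in range(num_functions)]
    # The entry point returns the helper count, so a test can sanity
    # check which library it called into.
    lines.append("int %s (void) { return %d; }"
                 % (entry_point, num_functions))
    return "\n".join(lines) + "\n"
```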
> +
> + # Compile.
> + if { [gdb_compile_shlib $src $exe {debug}] != "" } {
> + untested "Couldn't compile $src."
> + return -1
> + }
> +
> + # Delete object files to save some space.
> + file delete [standard_temp_file "solib-lib$i.c.o"]
> +}
> +
> +if { [prepare_for_testing ${testfile}.exp ${binfile} ${srcfile} {debug shlib_load} ] } {
> + return -1
> +}
> +
> +clean_restart $binfile
> +
> +if ![runto_main] {
> + fail "Can't run to main"
> + return -1
> +}
> +
> +set remote_python_file [gdb_remote_download host ${srcdir}/${subdir}/${testfile}.py]
> +
> +# Set sys.path for module perftest.
> +gdb_test_no_output "python import os, sys"
> +gdb_test_no_output "python sys.path.insert\(0, os.path.abspath\(\"${srcdir}/${subdir}/lib\"\)\)"
> +
> +gdb_test_no_output "python exec (open ('${remote_python_file}').read ())"
> +
> +gdb_test_no_output "python SolibLoadUnload\($SOLIB_NUMBER\)"
> +
> +# Call the convenience function registered by python script.
> +send_gdb "call \$perftest()\n"
> +gdb_expect 3000 {
> + -re "\"Done\".*${gdb_prompt} $" {
> + }
> + timeout {}
> +}
> +
> +remote_file host delete ${remote_python_file}
> +
> +# Remove these libraries and source files.
> +
> +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
> + file delete [standard_temp_file "solib-lib$i"]
> + file delete [standard_temp_file "solib-lib$i.c"]
> +}
I like tests that leave things behind afterwards so that if I want to
run things by hand afterwards I can easily do so.
Let "make clean" clean up build artifacts.
[Our testsuite "make clean" rules are always lagging behind, but with some
conventions in the perf testsuite we can make this a tractable problem.
E.g., It's mostly (though not completely) executables that "make clean" lags
behind in cleaning up, but if they all ended with the same suffix, then they
would get cleaned up as easily as "rm -f *.o" cleans up object files.
If one went this route, one would want to do the same with foo.so of course.
That's not the only way to make this a tractable problem, just a possibility.]
Separately,
We were discussing perf testsuite usage here, and IWBN if there was a mode
where compilation was separated from perf testing.
E.g., and this wouldn't be the default of course,
one could do an initial "make check-perf" that just built the binaries,
and then a second "make check-perf" that used the prebuilt binaries to
collect performance data.
[In between could be various things, like shipping the tests out to
other machines.]
I'm just offering this as an idea. I can imagine implementing this
in various ways. Whether we can agree on one ... dunno.
One thought was to reduce the actual perf collection part of .exp scripts
to one line that invokes some function, passing it the name of the
Python script or some such.
For example, we want to be able to run the perf tests in parallel, but we don't want
test data polluted because, for example, several copies of gcc were also
running compiling other tests or other tests were running.
> diff --git a/gdb/testsuite/gdb.perf/solib.py b/gdb/testsuite/gdb.perf/solib.py
> new file mode 100644
> index 0000000..7cc9c4a
> --- /dev/null
> +++ b/gdb/testsuite/gdb.perf/solib.py
> @@ -0,0 +1,48 @@
> +# Copyright (C) 2013 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# This test case is to test the speed of GDB when it is handling the
> +# shared libraries of inferior are loaded and unloaded.
> +
> +import gdb
> +import time
> +
> +from perftest import perftest
> +
> +class SolibLoadUnload(perftest.SingleVariableTestCase):
> + def __init__(self, solib_number):
> + super (SolibLoadUnload, self).__init__ ("solib")
> + self.solib_number = solib_number
> +
> + def execute_test(self):
> + num = self.solib_number
> + iteration = 5;
> +
> + # Warm up.
> + do_test_command = "call do_test (%d)" % num
> + gdb.execute (do_test_command)
> + gdb.execute (do_test_command)
I often collect data for both cold and hot caches.
It's important to have both sets of data.
[Cold caches is important because that's what users see after a first build
(in a distributed build the files aren't necessarily on one's machine yet).
Hot caches are important because it helps remove one source of variability
from the results.]
Getting cold caches involves doing things like (effectively)
sudo /bin/sh -c "echo 3 >/proc/sys/vm/drop_caches"
but it also involves doing other things that aren't necessarily
relevant elsewhere. [Obviously doing things like sudo adds wrinkles
to running the test. With appropriate hooks it's handled in a way that
doesn't affect normal runs.]
Getting hot caches is relatively easy (to a first approximation), but
to also test with cold caches we don't want to hard code warmups in the test.
Thus we want these lines to be moved elsewhere,
and have test harness provide hooks to control this.
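One possible shape for such a hook, with invented names (nothing here is in the posted patch):

```python
class TestCaseSketch(object):
    """Sketch of a TestCase whose warm-up is owned by the harness,
    so a cold-cache run can simply skip it."""

    def warm_up(self):
        # Default warm-up: run the workload once without measuring.
        self.execute_test()

    def execute_test(self):
        raise NotImplementedError("Abstract method: execute_test.")

    def run(self, warm_cache=True):
        # The harness, not the test, decides whether caches are
        # warmed before measurement.
        if warm_cache:
            self.warm_up()
        self.execute_test()
```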
> +
> + while num > 0 and iteration > 0:
> + do_test_command = "call do_test (%d)" % num
> +
> + start_time = time.clock()
> + gdb.execute (do_test_command)
> + elapsed_time = time.clock() - start_time
IWBN (IMO) if the test harness provided utilities to measure things like
wall time, cpu time, memory usage, and whatever other data we want to collect.
[These utilities, could e.g., just farm out to time.clock(),
if that was the appropriate thing to do,
but the tests themselves stick to the test harness API.]
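A minimal sketch of such a utility, assuming wall time via time.time() (the patch itself calls time.clock() directly; the class name is invented):

```python
import time

class WallTimeMeasurement(object):
    """Context manager the harness could provide.  Tests would write
    'with harness.measure() as m:' instead of calling time.clock()
    themselves, so the measured quantity (cpu time, memory, ...) can
    change without editing every test."""

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Record elapsed wall time; do not suppress exceptions.
        self.elapsed = time.time() - self.start
        return False
```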
> +
> + self.result.record (num, elapsed_time)
> +
> + num = num / 2
> + iteration -= 1
> --
> 1.7.7.6
Thoughts?
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 2/3] Perf test framework
2013-09-19 19:09 ` Doug Evans
@ 2013-09-20 8:04 ` Yao Qi
2013-09-20 16:51 ` Doug Evans
2013-09-20 17:12 ` Doug Evans
0 siblings, 2 replies; 40+ messages in thread
From: Yao Qi @ 2013-09-20 8:04 UTC (permalink / raw)
To: Doug Evans; +Cc: gdb-patches
Doug, thanks for the review.
On 09/20/2013 03:09 AM, Doug Evans wrote:
> > +class PerfTestConfig(object):
> > + """
> > + Create the right objects according to file perftest.ini.
> > + """
> > +
> > + def __init__(self):
> > + self.config = ConfigParser.ConfigParser()
> > + self.config.read("perftest.ini")
> > +
> > + def get_reporter(self):
> > + """Create an instance of class Reporter which is determined by
> > + the option 'type' in section '[Reporter]'."""
> > + if not self.config.has_section('Reporter'):
> > + return reporter.TextReporter()
> > + if not self.config.has_option('Reporter', 'type'):
> > + return reporter.TextReporter()
> > +
> > + name = self.config.get('Reporter', 'type')
> > + cls = getattr(reporter, name)
> > + return cls()
> > +
> > +perftestconfig = PerfTestConfig()
>
> What do you see perftest.ini containing over time?
> While the file format is pretty trivial, it is another file format.
> Do we need it?
I was wondering whether we could support a JSON format, so I created
class PerfTestConfig, and perftest.ini is used to determine which
format to use.  I agree that we can remove PerfTestConfig, since we
currently support only one format (plain text).
> > +
> > + def invoke(self):
> > + """Call method 'execute_test' and '__report'."""
>
> As I read this, this comment just says what this function is doing.
> I'm guessing the point is to say that all such methods must, at the least,
> do these two things. This should be spelled out.
> It would also be good to document that "invoke" is what GDB calls
> to perform this function.
>
> Also, I'm wondering why execute_test is public and
> __report(-> _report) is private?
_report is private, to be used only in class TestCase.  I double-checked
that a private method can be overridden in Python, so execute_test can
be private too.  I'll do it in V2.
>
> > +
> > + self.execute_test()
> > + self.__report(perftestconfig.get_reporter())
> > + return "Done"
> > +
> > +class SingleVariableTestCase(TestCase):
> > + """Test case with a single variable."""
>
> I think this needs more documentation.
> What does "single variable" refer to? A single statistic, like wall time?
>
Yes, like wall time. How about "SingleStatisticTestCase" or
"SingleParameterTestCase"?
> > +
> > +class TestResult(object):
> > + """Base class to record or save test results."""
> > +
> > + def __init__(self, name):
> > + self.name = name
> > +
> > + def record (self, variable, result):
> > + raise RuntimeError("Abstract Method:record.")
> > +
> > + def report (self, reporter):
> > + """Report the test results by reporter."""
> > + raise RuntimeError("Abstract Method:report.")
> > +
> > +class SingleVariableTestResult(TestResult):
> > + """Test results for the test case with a single variable. """
> > +
> > + def __init__(self, name):
> > + super (SingleVariableTestResult, self).__init__ (name)
> > + self.results = dict ()
> > +
> > + def record(self, variable, result):
> > + self.results[variable] = result
>
> As things read (to me anyway), the class only handles a single variable,
> but the "record" method makes the variable a parameter.
> There's a disconnect here.
>
> Maybe the "variable" parameter to "record" is misnamed.
> E.g., if testing the wall time of performing something over a range of values,
> e.g., 1 solib, 8 solibs, 256 solibs, "variable" would be 1, 8, 256?
Yes, for the solib test, "variable" is the number of shared libraries,
and "result" is the time taken to load or unload that many shared
libraries.
> If that's the case, please rename "variable".
> I realize it's what is being varied run after run, but
> it just doesn't read well.
>
> There are two "variables" (so to speak) here:
> 1) What one is changing run after run. E.g. # solibs
> 2) What one is measuring. E.g. wall time, cpu time, memory used.
>
> The name "variable" feels too ambiguous.
> OTOH, if the performance testing world has a well established convention
> for what the word "variable" means, maybe I could live with it.:-)
>
How about renaming "variable" to "parameter"?
--
Yao (齐尧)
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 2/3] Perf test framework
2013-09-20 8:04 ` Yao Qi
@ 2013-09-20 16:51 ` Doug Evans
2013-09-22 2:54 ` Yao Qi
2013-09-20 17:12 ` Doug Evans
1 sibling, 1 reply; 40+ messages in thread
From: Doug Evans @ 2013-09-20 16:51 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Fri, Sep 20, 2013 at 1:03 AM, Yao Qi <yao@codesourcery.com> wrote:
> Doug, thanks for the review.
>
>
> On 09/20/2013 03:09 AM, Doug Evans wrote:
>>
>> > +class PerfTestConfig(object):
>> > + """
>> > + Create the right objects according to file perftest.ini.
>> > + """
>> > +
>> > + def __init__(self):
>> > + self.config = ConfigParser.ConfigParser()
>> > + self.config.read("perftest.ini")
>> > +
>> > + def get_reporter(self):
>> > + """Create an instance of class Reporter which is determined
>> by
>> > + the option 'type' in section '[Reporter]'."""
>> > + if not self.config.has_section('Reporter'):
>> > + return reporter.TextReporter()
>> > + if not self.config.has_option('Reporter', 'type'):
>> > + return reporter.TextReporter()
>> > +
>> > + name = self.config.get('Reporter', 'type')
>> > + cls = getattr(reporter, name)
>> > + return cls()
>> > +
>> > +perftestconfig = PerfTestConfig()
>>
>> What do you see perftest.ini containing over time?
>> While the file format is pretty trivial, it is another file format.
>> Do we need it?
>
>
> I was wondering whether we could support a JSON format, so I created
> class PerfTestConfig, and perftest.ini is used to determine which
> format to use.  I agree that we can remove PerfTestConfig, since we
> currently support only one format (plain text).
Hi. I wasn't suggesting removing support for more reporting formats.
We'll be adding our own. :-)
I'm just wondering: if all that perftest.ini will contain is the report
format, do we need it?
Or, can we specify the format (and whatever else is desired/needed)
via some other means?
How will the user specify the desired report format?
>> > +
>> > + def invoke(self):
>> > + """Call method 'execute_test' and '__report'."""
>>
>> As I read this, this comment just says what this function is doing.
>> I'm guessing the point is to say that all such methods must, at the least,
>> do these two things. This should be spelled out.
>> It would also be good to document that "invoke" is what GDB calls
>> to perform this function.
>>
>> Also, I'm wondering why execute_test is public and
>> __report(-> _report) is private?
>
>
> _report is private, to be used only in class TestCase.  I double-checked
> that a private method can be overridden in Python, so execute_test can
> be private too.  I'll do it in V2.
Yeah, "private" is a convention expressed by coding style, not an enforced rule.
Another thought I had was: what if a test tried to have two perf tests?
As things stand (and IIUC!), the user is intended to instantiate the
subclass of TestCase.
But the gdb.Function name is not a parameter, it's always "perftest".
It's reasonable to only require one such test
(and thus having two gdb.Functions with name "perftest" is "pilot error"),
but we should detect and report it somehow.
[It may still work, depending on the ordering of how the test does
things, but I wouldn't want to depend on it.]
OTOH, what's the cost of relaxing this restriction and making the
gdb.Function name a parameter?
[it could have a default value so the common case is unchanged]
Could be missing something of course. :-)
>> > +
>> > + self.execute_test()
>> > + self.__report(perftestconfig.get_reporter())
>> > + return "Done"
>> > +
>> > +class SingleVariableTestCase(TestCase):
>> > + """Test case with a single variable."""
>>
>> I think this needs more documentation.
>> What does "single variable" refer to? A single statistic, like wall time?
>>
>
> Yes, like wall time. How about "SingleStatisticTestCase" or
> "SingleParameterTestCase"?
"Statistic" is nice and clear.
>> > +
>> > +class TestResult(object):
>> > + """Base class to record or save test results."""
>> > +
>> > + def __init__(self, name):
>> > + self.name = name
>> > +
>> > + def record (self, variable, result):
>> > + raise RuntimeError("Abstract Method:record.")
>> > +
>> > + def report (self, reporter):
>> > + """Report the test results by reporter."""
>> > + raise RuntimeError("Abstract Method:report.")
>> > +
>> > +class SingleVariableTestResult(TestResult):
>> > + """Test results for the test case with a single variable. """
>> > +
>> > + def __init__(self, name):
>> > + super (SingleVariableTestResult, self).__init__ (name)
>> > + self.results = dict ()
>> > +
>> > + def record(self, variable, result):
>> > + self.results[variable] = result
>>
>> As things read (to me anyway), the class only handles a single variable,
>> but the "record" method makes the variable a parameter.
>> There's a disconnect here.
>>
>> Maybe the "variable" parameter to "record" is misnamed.
>> E.g., if testing the wall time of performing something over a range of
>> values,
>> e.g., 1 solib, 8 solibs, 256 solibs, "variable" would be 1, 8, 256?
>
>
> Yes, for the solib test, "variable" is the number of shared libraries,
> and "result" is the time taken to load or unload that many shared
> libraries.
>
>
>> If that's the case, please rename "variable".
>> I realize it's what is being varied run after run, but
>> it just doesn't read well.
>>
>> There are two "variables" (so to speak) here:
>> 1) What one is changing run after run. E.g. # solibs
>> 2) What one is measuring. E.g. wall time, cpu time, memory used.
>>
>> The name "variable" feels too ambiguous.
>> OTOH, if the performance testing world has a well established convention
>> for what the word "variable" means, maybe I could live with it.:-)
>>
>
> How about renaming "variable" to "parameter"?
Assuming we're talking about the "variable" argument to the "record" method,
yeah, "parameter" sounds ok to me. "test_parameter" may be even better.
[
Though, for grin's sake, one can then have a discussion about the
difference between function arguments and function parameters, and
whether someone will be confused by the fact that we now have a
parameter named "parameter". 1/2 :-)
ref: http://stackoverflow.com/questions/156767/whats-the-difference-between-an-argument-and-a-parameter
]
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 2/3] Perf test framework
2013-09-20 8:04 ` Yao Qi
2013-09-20 16:51 ` Doug Evans
@ 2013-09-20 17:12 ` Doug Evans
1 sibling, 0 replies; 40+ messages in thread
From: Doug Evans @ 2013-09-20 17:12 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Fri, Sep 20, 2013 at 1:03 AM, Yao Qi <yao@codesourcery.com> wrote:
> _report is private, to be used only in class TestCase.  I double-checked
> that a private method can be overridden in Python, so execute_test can
> be private too.  I'll do it in V2.
Sorry for the followup.
If execute_test is intended to be overridden in subclasses I wouldn't
make it private.
Leaving it as public is fine.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-08-28 9:40 ` Agovic, Sanimir
2013-09-19 17:47 ` Doug Evans
@ 2013-09-20 18:59 ` Tom Tromey
2 siblings, 0 replies; 40+ messages in thread
From: Tom Tromey @ 2013-09-20 18:59 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
Yao> +check-perf: all $(abs_builddir)/site.exp
Yao> + @if test ! -d gdb.perf; then mkdir gdb.perf; fi
Is this line really needed?
Just curious. I was thinking perhaps the parallel-mode approach to
outputs is preferable, but when running the perf tests, parallelism
doesn't make a whole lot of sense.
Yao> + $(DO_RUNTEST) --direcotry=gdb.perf --outdir gdb.perf $(RUNTESTFLAGS)
Typo, "--directory".
Tom
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 1/3] New make target 'check-perf' and new dir gdb.perf
2013-09-19 17:47 ` Doug Evans
@ 2013-09-20 19:00 ` Tom Tromey
0 siblings, 0 replies; 40+ messages in thread
From: Tom Tromey @ 2013-09-20 19:00 UTC (permalink / raw)
To: Doug Evans; +Cc: Yao Qi, gdb-patches
Doug> An alternative is as Tom suggests, do something like
Doug> "if [skip_perf_tests] ..." at the top of each perf.exp file.
The specific reason I wanted this was so that a plain "runtest" would do
the "right" thing. To ask for performance tests you'd have to write
something like "runtest GDB_PERFORMANCE=yes"; which the Makefile can
easily do.
Tom
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 3/3] Test on solib load and unload
2013-08-28 4:27 ` Yao Qi
` (2 preceding siblings ...)
2013-09-19 22:45 ` Doug Evans
@ 2013-09-20 19:14 ` Tom Tromey
3 siblings, 0 replies; 40+ messages in thread
From: Tom Tromey @ 2013-09-20 19:14 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
>>>>> "Yao" == Yao Qi <yao@codesourcery.com> writes:
Yao> I should use proc standard_temp_file here.
I think instead it is better to use standard_output_file.
These are outputs for a particular test.
standard_temp_file has a vague contract. Maybe we ought to get rid of
it. I introduced it to have a place to put temporary files which aren't
strictly associated with a particular test but which might be created by
one of many tests ... mostly, the various feature-querying procs create
files like this.
Tom
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 3/3] Test on solib load and unload
2013-09-19 22:45 ` Doug Evans
@ 2013-09-20 19:19 ` Tom Tromey
2013-10-05 0:34 ` Doug Evans
2013-09-22 6:25 ` Yao Qi
1 sibling, 1 reply; 40+ messages in thread
From: Tom Tromey @ 2013-09-20 19:19 UTC (permalink / raw)
To: Doug Evans; +Cc: Yao Qi, gdb-patches
Doug> I like tests that leave things behind afterwards so that if I want to
Doug> run things by hand afterwards I can easily do so.
Yes, that's definitely good.
Doug> Let "make clean" clean up build artifacts.
Doug> [Our testsuite "make clean" rules are always lagging behind, but with some
Doug> conventions in the perf testsuite we can make this a tractable problem.
I really dislike the "make clean" rules, mostly because they mean
maintaining a huge number of Makefiles just for this one purpose.
In GDB_PARALLEL mode, "make clean" works by zapping a few
directories... much nicer :-). On my branch I removed all those
Makefiles too. I'm curious whether I ought to try to upstream this.
Tom
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [RFC 2/3] Perf test framework
2013-09-20 16:51 ` Doug Evans
@ 2013-09-22 2:54 ` Yao Qi
2013-09-22 23:14 ` Doug Evans
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-22 2:54 UTC (permalink / raw)
To: Doug Evans; +Cc: gdb-patches
On 09/21/2013 12:51 AM, Doug Evans wrote:
>> I was wondering whether we could support a JSON format, so I created
>> class PerfTestConfig, and perftest.ini is used to determine which
>> format to use.  I agree that we can remove PerfTestConfig, since we
>> currently support only one format (plain text).
> Hi. I wasn't suggesting removing support for more reporting formats.
> We'll be adding our own. :-)
>
> I'm just wondering: if all that perftest.ini will contain is the report
> format, do we need it?
perftest.ini only contains the report format so far, but the perf test
framework will need more and more customization, so perftest.ini will
come to contain more settings.
> Or, can we specify the format (and whatever else is desired/needed)
> via some other means?
Via an env var? I thought of that, but it doesn't scale well, in my
view, if we have more things to set.
> How will the user specify the desired report format?
>
in testsuite/perftest.ini
[Reporter]
type = TextReporter
In short, we don't have anything to customize in the perf tests yet, so
I am OK with removing PerfTestConfig. Once we want customization, I
would still prefer to do it through a config file like perftest.ini.
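[As a concrete illustration of the config-file approach, here is a
minimal Python sketch of how a framework might map the [Reporter]
section of perftest.ini to a reporter class. The load_reporter helper,
its fallback behavior, and the TextReporter method shown are
assumptions for illustration, not part of the posted patches.]

```python
import configparser

class TextReporter:
    # Hypothetical minimal reporter: print each measurement as text.
    def report(self, name, value):
        print("%s: %s" % (name, value))

def load_reporter(ini_path):
    # Read perftest.ini and instantiate the reporter named by
    # [Reporter] type = ...; default to TextReporter if absent.
    config = configparser.ConfigParser()
    config.read(ini_path)
    type_name = config.get("Reporter", "type", fallback="TextReporter")
    reporters = {"TextReporter": TextReporter}
    return reporters[type_name]()
```

Adding a new report format would then only mean registering another
class in the table, with no change to the tests themselves.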
--
Yao (齐尧)
* Re: [RFC 3/3] Test on solib load and unload
2013-09-19 22:45 ` Doug Evans
2013-09-20 19:19 ` Tom Tromey
@ 2013-09-22 6:25 ` Yao Qi
2013-09-23 0:14 ` Doug Evans
1 sibling, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-22 6:25 UTC (permalink / raw)
To: Doug Evans; +Cc: gdb-patches
On 09/20/2013 06:44 AM, Doug Evans wrote:
> > +standard_testfile .c
> > +set executable $testfile
> > +set expfile $testfile.exp
> > +
> > +# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'
>
> SOLIB_NUMBER doesn't read very well.
> How about NUM_SOLIBS?
>
I should have mentioned the naming convention I used here earlier. It is
"TEST_PARAMETER": "SOLIB" acts like a namespace, and all variables used
in this test should be prefixed with "SOLIB_". I tried "." in place of
"_", but "." is not allowed.
If we write a test case for backtrace, and we need a variable to control
the depth of the backtrace, we can name it "BACKTRACE_DEPTH".
> > +if ![info exists SOLIB_NUMBER] {
> > + set SOLIB_NUMBER 128
> > +}
> > +
> > +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
> > +
> > + # Produce source files.
> > + set libname "solib-lib$i"
> > + set src [standard_temp_file $libname.c]
> > + set exe [standard_temp_file $libname]
> > +
> > + set code "int shr$i (void) {return $i;}"
> > + set f [open $src "w"]
> > + puts $f $code
> > + close $f
>
> IWBN if the test harness provided utilities for generating source
> files instead of hardcoding the generating of them in the test.
> Parameters to such a set of functions would include things like the name
> of a high level entry point (what one might pass to dlsym), the number
> of functions in the file, the number of classes, etc.
>
IMO, it is not the perf test framework's responsibility to generate
source files, and I am not sure utilities like these can be reused by
other tests.
We can add a new proc gdb_produce_source with two parameters, NAME and
SOURCES. NAME is the file name and SOURCES is a list of lines of source
code we want to write to file NAME. For instance,
gdb_produce_source $src { "int shr$i (void) {return 0;}" }
It can be used here and replace some code in gdb.exp.
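[For illustration, a Python sketch of the idea behind the proposed
proc. The real gdb_produce_source would be written in Tcl and live in
gdb.exp; this produce_source helper is just an assumed analogue, with
the usage mirroring the solib test above.]

```python
import os
import tempfile

def produce_source(name, lines):
    # Write each line of source code to the file NAME, one per line,
    # mirroring the proposed Tcl proc gdb_produce_source.
    with open(name, "w") as f:
        for line in lines:
            f.write(line + "\n")

# Usage mirroring the solib test: one tiny function per shared library.
workdir = tempfile.mkdtemp()
for i in range(4):
    produce_source(os.path.join(workdir, "solib-lib%d.c" % i),
                   ["int shr%d (void) {return %d;}" % (i, i)])
```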
--
Yao (齐尧)
* Re: [RFC 2/3] Perf test framework
2013-09-22 2:54 ` Yao Qi
@ 2013-09-22 23:14 ` Doug Evans
0 siblings, 0 replies; 40+ messages in thread
From: Doug Evans @ 2013-09-22 23:14 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Sat, Sep 21, 2013 at 7:53 PM, Yao Qi <yao@codesourcery.com> wrote:
> On 09/21/2013 12:51 AM, Doug Evans wrote:
>>>
>>> I was wondering whether we could support a JSON format, so I created
>>> the class PerfTestConfig, and perftest.ini is used to determine which
>>> format to use. I agree that we can remove PerfTestConfig, since we
>>> support only one format (plain text) nowadays.
>>
>> Hi. I wasn't suggesting removing support for more reporting formats.
>> We'll be adding our own. :-)
>>
>> I'm just wondering: if all that perftest.ini will contain is the report
>> format, do we need it?
>
>
> perftest.ini only contains the report format so far, but the perf test
> framework will need more and more customization, so perftest.ini will
> come to contain more settings.
>
>
>> Or, can we specify the format (and whatever else is desired/needed)
>> via some other means?
>
>
> Via an env var? I thought of that, but it doesn't scale well, in my
> view, if we have more things to set.
No, I wasn't suggesting using env vars ... :-)
[Yikes!]
I was thinking of just python.
>> How will the user specify the desired report format?
>>
>
> in testsuite/perftest.ini
> [Reporter]
> type = TextReporter
>
> In short, we don't have anything to customize in the perf tests yet, so
> I am OK with removing PerfTestConfig. Once we want customization, I
> would still prefer to do it through a config file like perftest.ini.
I'm all for incremental complication.
* Re: [RFC 3/3] Test on solib load and unload
2013-09-22 6:25 ` Yao Qi
@ 2013-09-23 0:14 ` Doug Evans
2013-09-24 2:31 ` Yao Qi
0 siblings, 1 reply; 40+ messages in thread
From: Doug Evans @ 2013-09-23 0:14 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Sat, Sep 21, 2013 at 11:24 PM, Yao Qi <yao@codesourcery.com> wrote:
> On 09/20/2013 06:44 AM, Doug Evans wrote:
>>
>> > +standard_testfile .c
>> > +set executable $testfile
>> > +set expfile $testfile.exp
>> > +
>> > +# make check RUNTESTFLAGS='solib.exp SOLIB_NUMBER=1024'
>>
>> SOLIB_NUMBER doesn't read very well.
>> How about NUM_SOLIBS?
>>
>
> I should have mentioned the naming convention I used here earlier. It is
> "TEST_PARAMETER": "SOLIB" acts like a namespace, and all variables used
> in this test should be prefixed with "SOLIB_". I tried "." in place of
> "_", but "." is not allowed.
Ah.
SOLIB_NUMBER still reads badly enough to me that I'm hoping we can agree
on a different name.
SOLIB_COUNT?
>> > +if ![info exists SOLIB_NUMBER] {
>> > + set SOLIB_NUMBER 128
>> > +}
>> > +
>> > +for {set i 0} {$i < $SOLIB_NUMBER} {incr i} {
>> > +
>> > + # Produce source files.
>> > + set libname "solib-lib$i"
>> > + set src [standard_temp_file $libname.c]
>> > + set exe [standard_temp_file $libname]
>> > +
>> > + set code "int shr$i (void) {return $i;}"
>> > + set f [open $src "w"]
>> > + puts $f $code
>> > + close $f
>>
>> IWBN if the test harness provided utilities for generating source
>> files instead of hardcoding the generating of them in the test.
>> Parameters to such a set of functions would include things like the name
>> of a high level entry point (what one might pass to dlsym), the number
>> of functions in the file, the number of classes, etc.
>>
>
> IMO, it is not the perf test framework's responsibility to generate
> source files, and I am not sure utilities like these can be reused
> by other tests.
I think it is the test framework's responsibility to provide such
utilities, too.
Large tests (of the size we need to collect data for) are best not
written by hand, and if we're going to machine generate source, I
would rather such generators come from the framework than always be
hardcoded into every such test. [Obviously some tests may have unique
needs though.]
> We can add a new proc gdb_produce_source with two parameters, NAME and
> SOURCES. NAME is the file name and SOURCES is a list of lines of source
> code we want to write to file NAME. For instance,
>
> gdb_produce_source $src { "int shr$i (void) {return 0;}" }
>
> It can be used here and replace some code in gdb.exp.
Here's an incomplete list of some of the axes we need to test (in random order):
- # threads
- # shared libs
- # ELF symbols
- # object files
- # types (e.g., # DWARF type units)
- stack depth
- # pretty-printers?
We need more than just gdb_produce_source.
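[To illustrate the kind of parameterized generator described above, a
sketch for one of those axes, stack depth. The gen_deep_stack name and
the shape of the emitted C code are assumptions for illustration, not
from the posted patches.]

```python
def gen_deep_stack(path, depth):
    # Emit a C file with the call chain main -> f0 -> f1 -> ... ->
    # f<depth-1>, so a backtrace in main's callee chain reaches the
    # requested stack depth.
    with open(path, "w") as f:
        f.write("int f%d (void) { return %d; }\n" % (depth - 1, depth - 1))
        for i in range(depth - 2, -1, -1):
            f.write("int f%d (void) { return f%d (); }\n" % (i, i + 1))
        f.write("int main (void) { return f0 (); }\n")
```

A perf test could then take the depth from a test parameter (such as
the BACKTRACE_DEPTH convention mentioned earlier in the thread) instead
of hardcoding a source file.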
* Re: [RFC 3/3] Test on solib load and unload
2013-09-23 0:14 ` Doug Evans
@ 2013-09-24 2:31 ` Yao Qi
2013-10-05 0:37 ` Doug Evans
0 siblings, 1 reply; 40+ messages in thread
From: Yao Qi @ 2013-09-24 2:31 UTC (permalink / raw)
To: Doug Evans; +Cc: gdb-patches
On 09/23/2013 08:14 AM, Doug Evans wrote:
> I think it is the test framework's responsibility to provide such
> utilities, too.
> Large tests (of the size we need to collect data for) are best not
> written by hand, and if we're going to machine generate source, I
> would rather such generators come from the framework than always be
> hardcoded into every such test. [Obviously some tests may have unique
> needs though.]
>
Doug,
Generating source is easy in this test case. However, I am not sure it
is easy to generate source for other perf test cases, such as symbols
and types. Supposing we want to generate source files that have 1
million classes, with some class hierarchies, the generation script
can't be simple, IMO. On the other hand, I don't know how representative
the generated program would be compared with real large applications,
such as OpenOffice, Clang, etc.
>> >We can add a new proc gdb_produce_source with two parameters, NAME and
>> >SOURCES. NAME is the file name and SOURCES is a list of lines of source
>> >code we want to write to file NAME. For instance,
>> >
>> > gdb_produce_source $src { "int shr$i (void) {return 0;}" }
>> >
>> >It can be used here and replace some code in gdb.exp.
> Here's an incomplete list of some of the axes we need to test (in random order):
> - # threads
> - # shared libs
They are not hard to generate.
> - # ELF symbols
> - # object files
> - # types (e.g., # DWARF type units)
I am not familiar with types and symbols, but I assume we need some
scripts to generate source files having a large number of different
symbols and types, which looks hard.
> - stack depth
It is not hard to generate either.
> - # pretty-printers?
>
I am OK with adding utilities to generate sources for shared libs and
stack depth, but I am still unable to find an approach to generating
source files for perf tests on symbols and types.
--
Yao (齐尧)
* Re: [RFC 3/3] Test on solib load and unload
2013-09-20 19:19 ` Tom Tromey
@ 2013-10-05 0:34 ` Doug Evans
2013-10-07 16:31 ` Tom Tromey
0 siblings, 1 reply; 40+ messages in thread
From: Doug Evans @ 2013-10-05 0:34 UTC (permalink / raw)
To: Tom Tromey; +Cc: Yao Qi, gdb-patches
On Fri, Sep 20, 2013 at 12:19 PM, Tom Tromey <tromey@redhat.com> wrote:
> Doug> I like tests that leave things behind afterwards so that if I want to
> Doug> run things by hand afterwards I can easily do so.
>
> Yes, that's definitely good.
>
> Doug> Let "make clean" clean up build artifacts.
> Doug> [Our testsuite "make clean" rules are always lagging behind, but with some
> Doug> conventions in the perf testsuite we can make this a tractable problem.
>
> I really dislike the "make clean" rules, mostly because they mean
> maintaining a huge number of Makefiles just for this one purpose.
>
> In GDB_PARALLEL mode, "make clean" works by zapping a few
> directories... much nicer :-). On my branch I removed all those
> Makefiles too. I'm curious whether I ought to try to upstream this.
Zapping directories is another way to go; I know it's been discussed
on and off for years.
I hesitate to mention it because I too am not sure whether upstream
would accept it.
It is nicer ...
* Re: [RFC 3/3] Test on solib load and unload
2013-09-24 2:31 ` Yao Qi
@ 2013-10-05 0:37 ` Doug Evans
0 siblings, 0 replies; 40+ messages in thread
From: Doug Evans @ 2013-10-05 0:37 UTC (permalink / raw)
To: Yao Qi; +Cc: gdb-patches
On Mon, Sep 23, 2013 at 7:30 PM, Yao Qi <yao@codesourcery.com> wrote:
> On 09/23/2013 08:14 AM, Doug Evans wrote:
>>
>> I think it is the test framework's responsibility to provide such
>> utilities, too.
>> Large tests (of the size we need to collect data for) are best not
>> written by hand, and if we're going to machine generate source, I
>> would rather such generators come from the framework than always be
>> hardcoded into every such test. [Obviously some tests may have unique
>> needs though.]
>>
>
> Doug,
> Generating source is easy in this test case. However, I am not sure it
> is easy to generate source for other perf test cases, such as symbols
> and types. Supposing we want to generate source files that have 1
> million classes, with some class hierarchies, the generation script
> can't be simple, IMO. On the other hand, I don't know how representative
> the generated program would be compared with real large applications,
> such as OpenOffice, Clang, etc.
Hi.
Found this in my inbox and realized I hadn't replied.
[At least I can't find a reply.]
It's easy enough to generate programs with a million symbols.
A quick hack that used bash did it in a reasonable amount of time.
[I'm not suggesting we use bash, I just used it as a quick hack to see
how long it would take.]
As for being representative, gdb doesn't care what the program does,
the program just has to look representative.
E.g., 10K DWARF CUs, 500K DWARF TUs, 4M ELF symbols, 5000 shared libs,
and so on.
I don't envision the scripts to generate this being too complex.
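[In the spirit of that quick hack, a minimal Python sketch that emits a
C file with an arbitrary number of function symbols; once compiled, the
object yields roughly that many ELF symbols. The gen_many_symbols name
and the sym_N naming scheme are hypothetical, not from the thread.]

```python
def gen_many_symbols(path, nsyms):
    # Emit a C file defining NSYMS trivial functions, one per line,
    # plus a main; each function becomes one ELF symbol when compiled.
    with open(path, "w") as f:
        for i in range(nsyms):
            f.write("int sym_%d (void) { return %d; }\n" % (i, i))
        f.write("int main (void) { return 0; }\n")
```

Even at a million symbols, the generation itself is a single linear
pass, which matches the observation that such scripts need not be
complex.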
* Re: [RFC 3/3] Test on solib load and unload
2013-10-05 0:34 ` Doug Evans
@ 2013-10-07 16:31 ` Tom Tromey
0 siblings, 0 replies; 40+ messages in thread
From: Tom Tromey @ 2013-10-07 16:31 UTC (permalink / raw)
To: Doug Evans; +Cc: Yao Qi, gdb-patches
>>>>> "Doug" == Doug Evans <dje@google.com> writes:
Tom> In GDB_PARALLEL mode, "make clean" works by zapping a few
Tom> directories... much nicer :-). On my branch I removed all those
Tom> Makefiles too. I'm curious whether I ought to try to upstream this.
Doug> zapping directories is another way to go, I know it's been
Doug> discussed on and off for years. I hesitate to mention it because
Doug> I too am not sure whether upstream would accept it. It is nicer
Doug> ...
FWIW the reason I haven't prepped & submitted the patches for this is
that we decided that plain "runtest" would not use the same directory
layout as parallel mode. So, the Makefiles are still needed for cleanup
in this mode.
If nobody cares about that -- I certainly don't, I never use "make
clean" in the test suite -- then I'm happy to go ahead.
Tom
Thread overview: 40+ messages
2013-08-14 13:01 [RFC] GDB performance testing infrastructure Yao Qi
2013-08-21 20:39 ` Tom Tromey
2013-08-27 6:21 ` Yao Qi
2013-08-27 13:49 ` Agovic, Sanimir
2013-08-28 3:04 ` Yao Qi
2013-09-19 0:36 ` Doug Evans
2013-08-28 4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2013-08-28 4:17 ` [RFC 2/3] Perf test framework Yao Qi
2013-08-28 9:57 ` Agovic, Sanimir
2013-09-03 1:45 ` Yao Qi
2013-09-03 6:38 ` Agovic, Sanimir
2013-09-19 19:09 ` Doug Evans
2013-09-20 8:04 ` Yao Qi
2013-09-20 16:51 ` Doug Evans
2013-09-22 2:54 ` Yao Qi
2013-09-22 23:14 ` Doug Evans
2013-09-20 17:12 ` Doug Evans
2013-08-28 4:17 ` [RFC 3/3] Test on solib load and unload Yao Qi
2013-08-28 4:27 ` Yao Qi
2013-08-28 11:31 ` Agovic, Sanimir
2013-09-03 1:59 ` Yao Qi
2013-09-03 6:33 ` Agovic, Sanimir
2013-09-02 15:24 ` Blanc, Nicolas
2013-09-03 2:04 ` Yao Qi
2013-09-03 7:50 ` Blanc, Nicolas
2013-09-19 22:45 ` Doug Evans
2013-09-20 19:19 ` Tom Tromey
2013-10-05 0:34 ` Doug Evans
2013-10-07 16:31 ` Tom Tromey
2013-09-22 6:25 ` Yao Qi
2013-09-23 0:14 ` Doug Evans
2013-09-24 2:31 ` Yao Qi
2013-10-05 0:37 ` Doug Evans
2013-09-20 19:14 ` Tom Tromey
2013-08-28 4:17 ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-08-28 9:40 ` Agovic, Sanimir
2013-09-19 17:47 ` Doug Evans
2013-09-20 19:00 ` Tom Tromey
2013-09-20 18:59 ` Tom Tromey
2013-09-19 17:25 ` [RFC 0/3] GDB Performance testing Doug Evans