path: root/lldb/test/benchmarks
* Move lldb/test to lldb/packages/Python/lldbsuite/test. (Zachary Turner, 2015-10-28; 15 files, -1114/+0)

  This is the conclusion of an effort to get LLDB's Python code structured into
  a bona fide Python package. This has a number of benefits, but most notably
  the ability to more easily share Python code between different but related
  pieces of LLDB's Python infrastructure (for example, `scripts` can now share
  code with `test`).

  llvm-svn: 251532

* Rename `lldb_shared` to `use_lldb_suite`. (Zachary Turner, 2015-10-27; 11 files, -11/+11)

  llvm-svn: 251444

* Add from __future__ import print_function everywhere. (Zachary Turner, 2015-10-23; 11 files, -55/+77)

  Apparently there were tons of instances I missed last time; I guess I
  accidentally ran 2to3 non-recursively. This should fix every occurrence of a
  print statement to use a print function, as well as add
  from __future__ import print_function to every file. After this patch, print
  statements will stop working everywhere in the test suite, and the print
  function should be used instead.

  llvm-svn: 251121

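  What the conversion looks like in practice (a hypothetical before/after
  snippet, not code from the patch):

```python
# Before (Python 2 print statement -- a SyntaxError once the future import
# is in effect):
#     print "stop reason:", stop_reason

# After: the future import makes print a function in Python 2, matching
# Python 3 semantics.
from __future__ import print_function

stop_reason = "breakpoint"
print("stop reason:", stop_reason)  # arguments are space-separated
print("partial line", end="")       # keyword arguments such as end= now work
```
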
* Update every test to import `lldb_shared`. (Zachary Turner, 2015-10-22; 11 files, -87/+22)

  This is necessary in order to allow third party modules to be located under
  lldb/third_party rather than under the test folder directly.

  Since we're already touching every test file anyway, we also go ahead and
  delete the unittest2 import and main block wherever possible. The ability to
  run a test as a standalone file has already been broken for some time, and
  if we decide we want this back, we should use unittest instead of unittest2.

  A few places could not have the import of unittest2 removed, because they
  depend on the unittest2.expectedFailure or skip decorators. Removing all
  those was orthogonal in spirit to the purpose of this CL, so the import of
  unittest2 remains in those files that were using it for its test decorators.
  Those can be addressed separately.

  llvm-svn: 251055

* Merge dwarf and dsym tests (Tamas Berghammer, 2015-09-30; 11 files, -13/+14)

  Currently most of the test files have a separate dwarf and a separate dsym
  test with almost identical content (only the build step is different). With
  the addition of dwo symbol file handling to the test suite, this would grow
  into a 3-way duplication. The purpose of this change is to eliminate this
  redundancy by generating 2 test cases (one dwarf and one dsym) for each test
  function specified (dwo handling will be added in a later commit).

  Main design goals:
  * There should be no boilerplate code in each test file to support multiple
    debug info formats in most of the tests (custom scenarios are acceptable
    in special cases), so adding a new test case is easier and we can't miss
    one of the debug info types.
  * In case of a test failure, the debug symbols used during the test run have
    to be clearly visible in the output of dotest.py, to make debugging easier
    both from build bot logs and from local test runs.
  * Each test case should have a unique, fully qualified name, so we can run
    exactly 1 test with the "-f <test-case>.<test-function>" syntax.
  * Test output should be grouped by test file the same way as it happens now
    (displaying dwarf/dsym results separately isn't preferable).

  Proposed solution (main logic in lldbtest.py; the rest are test cases fixed
  up for the new style; a sketch of the method-generation step follows this
  entry):
  * Have only 1 test function in each test file, which will run for each debug
    info format separately; this test function should call just
    "self.build(...)" to build an inferior with the right debug info.
  * When a class is created by Python (the class object, not the class
    instance), we generate a new test method for each debug info format in the
    test class with the name "<test-function>_<debug-info>" and remove the
    original test method. This way unittest2 sees multiple test methods (1 for
    each debug info format, pretty much as now) and handles test selection and
    failure reporting correctly (the debug info format is visible at the end
    of the test name).
  * Add a new annotation @no_debug_info_test to disable the generation of
    multiple tests per debug info format when the test doesn't have an
    inferior.

  Differential revision: http://reviews.llvm.org/D13028
  llvm-svn: 248883

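  A minimal sketch of generating per-format test methods at class-creation
  time via a metaclass (illustrative only; lldbtest.py's actual mechanism and
  names may differ):

```python
class DebugInfoTestMeta(type):
    """Replace each test method with one copy per debug info format."""
    FORMATS = ["dwarf", "dsym"]  # assumed list, for illustration

    def __new__(mcs, name, bases, attrs):
        for attr, value in list(attrs.items()):
            if attr.startswith("test") and callable(value):
                del attrs[attr]  # drop the original method
                for fmt in mcs.FORMATS:
                    # Bind fn/fmt via default args so each copy keeps its own.
                    def test(self, fn=value, debug_info=fmt):
                        self.debug_info = debug_info  # consulted by self.build()
                        return fn(self)
                    attrs["%s_%s" % (attr, fmt)] = test
        return super(DebugInfoTestMeta, mcs).__new__(mcs, name, bases, attrs)
```
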
* Reversed r238363, because the message is inconsistent with all the other
  assertion messages. (Sean Callanan, 2015-07-01; 1 file, -1/+1)

  llvm-svn: 241212

* [TestBase.runCmd] Better error message when runCmd fails. (Siva Chandra, 2015-05-27; 1 file, -1/+1)

  Summary:
  Before:
    AssertionError: False is not True : Process is launched successfully
  After:
    AssertionError: False is not True : Command 'run a.out' failed.
    >>> error: invalid target, create a target using the 'target create' command
    >>> Process could not be launched successfully

  Reviewers: clayborg
  Reviewed By: clayborg
  Subscribers: lldb-commits, vharron
  Differential Revision: http://reviews.llvm.org/D9948
  llvm-svn: 238363

* Refactored lldb executable name discovery (Vince Harron, 2015-05-18; 10 files, -18/+18)

  The lldb executable was referenced through the code by 7 different
  (effectively) global variables:

    global lldbExecutablePath
    global lldbExecutable
    os.environ['LLDB_EXEC']
    os.environ['LLDB_TEST']
    dotest.lldbExec
    dotest.lldbHere
    lldbtest.lldbExec

  This change uses one global variable, lldbtest_config.lldbExec, to replace
  them all.

  Differential Revision: http://reviews.llvm.org/D9817
  llvm-svn: 237600

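  The pattern being adopted, as a minimal sketch (the module and attribute
  layout are illustrative; the real lldbtest_config may carry more state):

```python
# lldbtest_config.py -- a single module attribute as the one source of truth.
lldbExec = None  # full path to the lldb executable under test

# Driver side (sketch): assign once at startup.
#     import lldbtest_config
#     lldbtest_config.lldbExec = resolved_lldb_path
#
# Test side (sketch): every importer reads the same attribute.
#     import lldbtest_config
#     lldb_cmd = lldbtest_config.lldbExec + " --no-lldbinit"
```
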
* un-skipped a bunch of tests on Linux (Vince Harron, 2015-05-04; 1 file, -1/+0)

  Some have been marked as skipIfLinux for years. They seem to be passing, so
  I've enabled them.

  Differential Revision: http://reviews.llvm.org/D9428
  llvm-svn: 236403

* Replace sys.platform skips in tests with @skip decorators which check
  against remote platform. (Robert Flack, 2015-03-30; 1 file, -1/+1)

  Adds @skipIfPlatform and @skipUnlessPlatform decorators which will skip if /
  unless the target platform is in the provided platform list.

  Test Plan: ninja check-lldb shows no regressions. When running cross
  platform, tests which cannot run on the target platform are skipped.

  Differential Revision: http://reviews.llvm.org/D8665
  llvm-svn: 233547

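  A minimal sketch of such decorators (illustrative; the real ones query the
  possibly remote target platform selected in lldb, for which getPlatform()
  here is only a local stand-in):

```python
import platform

def getPlatform():
    # Placeholder: returns the local OS name; the real helper asks lldb for
    # the (possibly remote) target platform.
    return platform.system().lower()

def skipIfPlatform(oslist):
    def decorator(fn):
        def wrapper(self, *args, **kwargs):
            if getPlatform() in oslist:
                self.skipTest("skipped on %s" % getPlatform())
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator

def skipUnlessPlatform(oslist):
    def decorator(fn):
        def wrapper(self, *args, **kwargs):
            if getPlatform() not in oslist:
                self.skipTest("requires one of: %s" % ", ".join(oslist))
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator
```
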
* Add a benchmark test case that shows how much slower repeated 'continue'
  commands are than going through the SB API directly (Enrico Granata, 2015-01-22; 3 files, -0/+120)

  llvm-svn: 226852

* XFAIL pexpect tests on Windows. (Zachary Turner, 2015-01-20; 9 files, -0/+11)

  At some point we will need to either provide a pexpect equivalent on
  Windows, or provide some other method of doing out-of-process tests. Even
  with a pexpect replacement, it may be worth re-evaluating some of these
  tests to see if they would be better served as in-process tests.

  The larger issue of coming up with a pexpect replacement on Windows is
  tracked in http://llvm.org/pr22274.

  llvm-svn: 226614

* Fixes a number of issues related to test portability on Windows. (Zachary Turner, 2014-07-18; 10 files, -10/+12)

  99% of this CL is simply moving calls to "import pexpect" to a narrower
  scope -- i.e. the function that actually runs a particular test. This way
  the test suite can run on Windows, which doesn't have pexpect, and the
  individual tests that use pexpect can be disabled on a platform-specific
  basis.

  Additionally, this CL fixes a few other cases of non-portability. Notably,
  using "ps" to get the command line, and os.uname() to determine the
  architecture, don't work on Windows. Finally, this also adds a stubbed-out
  builder_win32 module.

  The full test suite runs correctly on Windows after this CL, although there
  is still some work remaining on the C++ side to fix one-shot script commands
  from LLDB (e.g. script print "foo"), which currently deadlock.

  Reviewed by: Todd Fiala
  Differential Revision: http://reviews.llvm.org/D4573
  llvm-svn: 213343

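  The narrow-scope import pattern, sketched (a hypothetical test, not code
  from the patch):

```python
import os
import unittest2

class PExpectStyleTest(unittest2.TestCase):
    @unittest2.skipIf(os.name == "nt", "pexpect is not available on Windows")
    def test_interactive_prompt(self):
        # Imported inside the test method, so merely loading this module --
        # and running the rest of the suite -- works without pexpect.
        import pexpect
        child = pexpect.spawn("lldb --no-lldbinit")
        child.expect_exact("(lldb) ")
        child.sendline("quit")
```
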
* Massive test suite cleanup to stop everyone from manually having to compute
  "mydir" inside each test case. (Greg Clayton, 2013-12-10; 10 files, -10/+10)

  This has led to many test suite failures because of copy and paste, where
  new test cases were based off of other test cases and the "mydir" variable
  wasn't updated. Now you can call your superclass's compute_mydir() function
  with "__file__" as the sole argument and the relative path will be computed
  for you.

  llvm-svn: 196985

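  A plausible sketch of such a helper (the real lldbtest.Base.compute_mydir()
  may differ in details; LLDB_TEST is assumed to point at the suite root):

```python
import os

def compute_mydir(test_file):
    """Return the test's directory relative to the test-suite root, given
    the calling test file's __file__."""
    test_root = os.environ["LLDB_TEST"]
    return os.path.relpath(os.path.dirname(os.path.abspath(test_file)),
                           test_root)

# In a test case class:
#     mydir = TestBase.compute_mydir(__file__)
```
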
* Test file renaming. (Johnny Chen, 2012-04-23; 1 file, -2/+2)

  llvm-svn: 155369

* Move some print stmts to the test method, where they get printed only if
  the test is qualified to run under the current test driver run
  configuration. (Johnny Chen, 2011-12-10; 1 file, -4/+8)

  llvm-svn: 146320

* Modified the script to have the flexibility of specifying the gdb
  executable path for use in the benchmark against lldb's disassembly speed.
  (Johnny Chen, 2011-12-07; 1 file, -1/+22)

  Note that the lldb executable path can already be specified using the
  LLDB_EXEC env variable.

  rdar://problem/7511194
  llvm-svn: 146050

* This benchmark is meant to run the locally built 'lldb' binary, not the
  binary on the PATH env variable. (Johnny Chen, 2011-10-28; 1 file, -1/+1)

  llvm-svn: 143169

* Establish a baseline for bench.py score by using a fixed lldb executable as
  the inferior program for the lldb debugger to operate on. (Johnny Chen, 2011-10-26; 3 files, -13/+17)

  The fixed lldb executable corresponds to r142902. Plus some minor
  modifications to the test benchmark to conform to the way bench.py is meant
  to be invoked.

  llvm-svn: 143075

* Add another metric for startup delay -- run to breakpoint, which measures
  the time from issuing the run command till the first breakpoint is hit.
  (Johnny Chen, 2011-10-26; 1 file, -0/+8)

  Example:

    $ ./dotest.py -v +b -n -x '-F Driver::MainLoop()' -p TestStartupDelays.py
    1: test_startup_delay (TestStartupDelays.StartupDelaysBench)
       Test start up delays creating a target and setting a breakpoint. ...
    lldb startup delay (create fresh target) benchmark: Avg: 0.124496 (Laps: 30, Total Elapsed Time: 3.734883)
    lldb startup delay (set first breakpoint) benchmark: Avg: 0.220828 (Laps: 30, Total Elapsed Time: 6.624847)
    lldb startup delay (run to breakpoint) benchmark: Avg: 0.478159 (Laps: 30, Total Elapsed Time: 14.344774)
    ok

  llvm-svn: 142993

* Benchmark the turnaround time starting a debugger and running to the
  breakpoint with lldb vs. gdb. (Johnny Chen, 2011-10-25; 1 file, -0/+122)

  An example (with /Developer/usr/bin/lldb vs. /usr/bin/gdb):

    [13:05:04] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -n -p TestCompileRunToBreakpointTurnaround.py
    1: test_run_lldb_then_gdb (TestCompileRunToBreakpointTurnaround.CompileRunToBreakpointBench)
       Benchmark turnaround time with lldb vs. gdb. ...
    lldb turnaround benchmark: Avg: 4.574600 (Laps: 3, Total Elapsed Time: 13.723799)
    gdb turnaround benchmark: Avg: 7.966713 (Laps: 3, Total Elapsed Time: 23.900139)
    lldb_avg/gdb_avg: 0.574214
    ok

    ----------------------------------------------------------------------
    Ran 1 test in 55.462s

    OK

  llvm-svn: 142949

* Add bench.py as a driver script to run some benchmarks on lldb. (Johnny Chen, 2011-10-22; 3 files, -1/+143)

  Add benchmarks for expression evaluations (TestExpressionCmd.py) and
  disassembly (TestDoAttachThenDisassembly.py). An example:

    [17:45:55] johnny:/Volumes/data/lldb/svn/trunk/test $ ./bench.py 2>&1 | grep -P '^lldb.*benchmark:'
    lldb startup delay (create fresh target) benchmark: Avg: 0.104274 (Laps: 30, Total Elapsed Time: 3.128214)
    lldb startup delay (set first breakpoint) benchmark: Avg: 0.102216 (Laps: 30, Total Elapsed Time: 3.066470)
    lldb frame variable benchmark: Avg: 1.649162 (Laps: 20, Total Elapsed Time: 32.983245)
    lldb stepping benchmark: Avg: 0.104409 (Laps: 50, Total Elapsed Time: 5.220461)
    lldb expr cmd benchmark: Avg: 0.206774 (Laps: 25, Total Elapsed Time: 5.169350)
    lldb disassembly benchmark: Avg: 0.089086 (Laps: 10, Total Elapsed Time: 0.890859)

  llvm-svn: 142708

* Add a benchmark for measuring the response time of the 'frame variable'
  command. (Johnny Chen, 2011-10-21; 1 file, -0/+79)

  Example (start the lldb inferior, break at the Driver::MainLoop() function,
  and issue 'frame variable'):

    $ ./dotest.py -v +b -x '-F Driver::MainLoop()' -n -p TestFrameVariableResponse.py
    ----------------------------------------------------------------------
    Collected 1 test

    1: test_startup_delay (TestFrameVariableResponse.FrameVariableResponseBench)
       Test response time for the 'frame variable' command. ...
    lldb frame variable benchmark: Avg: 1.636897 (Laps: 20, Total Elapsed Time: 32.737944)
    ok

    ----------------------------------------------------------------------
    Ran 1 test in 65.105s

    OK

  llvm-svn: 142678

* Rephrase benchmark output display. (Johnny Chen, 2011-10-21; 1 file, -3/+2)

  llvm-svn: 142676

* Fix wrong directory name. (Johnny Chen, 2011-10-21; 2 files, -2/+2)

  llvm-svn: 142629

* Add a benchmark for measuring start up delays of lldb (Johnny Chen, 2011-10-20; 1 file, -0/+82)

  The delays measured include:
  o create a fresh target; and
  o set the first breakpoint

  Example (using lldb to set a breakpoint on lldb's Driver::MainLoop function):

    ./dotest.py -v +b -x '-F Driver::MainLoop()' -p TestStartupDelays.py
    ...
    1: test_startup_delay (TestStartupDelays.StartupDelaysBench)
       Test start up delays creating a target and setting a breakpoint. ...
    lldb startup delays benchmark:
    create fresh target: Avg: 0.106732 (Laps: 15, Total Elapsed Time: 1.600985)
    set first breakpoint: Avg: 0.102589 (Laps: 15, Total Elapsed Time: 1.538832)
    ok

  llvm-svn: 142628

* Directory renaming: example -> expression. (Johnny Chen, 2011-10-20; 3 files, -0/+189)

  llvm-svn: 142602

* Directory renaming: example -> expression. (Johnny Chen, 2011-10-20; 3 files, -189/+0)

  llvm-svn: 142601

* Parameterize the iteration count used when running benchmarks, instead of
  hard-coding it inside the test case. (Johnny Chen, 2011-10-20; 5 files, -12/+28)

  Add a '-y count' option to the test driver for this purpose. An example:

    $ ./dotest.py -v -y 25 +b -p TestDisassembly.py
    ...
    ----------------------------------------------------------------------
    Collected 2 tests

    1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
       Test disassembly on a large function with lldb vs. gdb. ...
    gdb benchmark: Avg: 0.226305 (Laps: 25, Total Elapsed Time: 5.657614)
    lldb benchmark: Avg: 0.113864 (Laps: 25, Total Elapsed Time: 2.846606)
    lldb_avg/gdb_avg: 0.503146
    ok
    2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
       Test disassembly on a large function with lldb vs. gdb. ...
    lldb benchmark: Avg: 0.113008 (Laps: 25, Total Elapsed Time: 2.825201)
    gdb benchmark: Avg: 0.225240 (Laps: 25, Total Elapsed Time: 5.631001)
    lldb_avg/gdb_avg: 0.501723
    ok

    ----------------------------------------------------------------------
    Ran 2 tests in 41.346s

    OK

  llvm-svn: 142598

* Remove stale code. (Johnny Chen, 2011-10-20; 1 file, -4/+0)

  llvm-svn: 142595

* Remove stale code. (Johnny Chen, 2011-10-20; 1 file, -4/+1)

  llvm-svn: 142594

* Modify lldbtest.Base.runHooks() to now take the following keyword arguments:
  child=None, child_prompt=None, use_cmd_api=False (Johnny Chen, 2011-10-19; 1 file, -1/+1)

  By default, expect a pexpect-spawned child and child prompt to be supplied
  (use_cmd_api=False). If use_cmd_api is true, ignore the child and child
  prompt and use self.runCmd() to run the hooks one by one. Modify the
  existing client to reflect the change.

  llvm-svn: 142532

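  A sketch of the described dispatch (illustrative; the real
  lldbtest.Base.runHooks() may differ, and self.hooks is assumed to be a list
  of command strings):

```python
def runHooks(self, child=None, child_prompt=None, use_cmd_api=False):
    """Run the session-setup hooks either through a pexpect child or
    through the command interpreter API."""
    for hook in self.hooks:
        if use_cmd_api:
            self.runCmd(hook)                 # in-process command interpreter
        else:
            child.sendline(hook)              # pexpect-spawned lldb child...
            child.expect_exact(child_prompt)  # ...wait for the prompt again
```
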
* Extract the run hooks functionality into the base class lldbtest.Base. (Johnny Chen, 2011-10-19; 1 file, -6/+2)

  llvm-svn: 142469

* Add a more generic stepping benchmark, which uses the '-k' option of the
  test driver to specify the runhook(s) that bring the debug session to a
  certain state before running the benchmarking logic. (Johnny Chen, 2011-10-11; 1 file, -0/+71)

  An example:

    ./dotest.py -v -t +b -k 'process attach -n Mail' -k 'thread backtrace all' -p TestRunHooksThenSteppings.py

  spawns lldb, attaches to the 'Mail' application, does a backtrace for all
  threads, and then runs the benchmark to step the inferior multiple times.

  llvm-svn: 141740

* Add '-e' and '-x' options to the test driver to be able to specify an
  executable (full path) and the breakpoint specification for benchmark
  purposes. (Johnny Chen, 2011-10-10; 1 file, -0/+82)

  This is used by TestSteppingSpeed.py to benchmark the lldb stepping speed.
  Without '-e' and '-x' specified, the test defaults to running the built lldb
  against itself, stopping on Driver::MainLoop, then stepping 50 times.

  rdar://problem/7511193
  llvm-svn: 141584

* If we spawn an lldb process for test (via pexpect), do not load the init
  file unless told otherwise. (Johnny Chen, 2011-10-07; 3 files, -5/+5)

  Set up self.lldbOption to be "--no-lldbinit" unless the env variable
  NO_LLDBINIT is defined and equals "NO". Also add "-nx" to the spawned gdb.

  llvm-svn: 141384

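  The described default, as a small sketch (illustrative; assumes the helper
  runs where the test base class sets up its spawn options):

```python
import os

def default_lldb_option():
    """Return the option string used when spawning lldb for tests: suppress
    the init file unless NO_LLDBINIT is defined and equals "NO"."""
    return "" if os.environ.get("NO_LLDBINIT") == "NO" else "--no-lldbinit"

# e.g. pexpect.spawn("%s %s" % (lldb_exec_path, default_lldb_option()))
```
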
* Add a new attribute self.lldbHere, representing the full path to the 'lldb'
  executable built locally from the source tree. (Johnny Chen, 2011-08-26; 2 files, -2/+2)

  This is distinguished from self.lldbExec, which can be used by
  test/benchmarks to measure the performance against other debuggers. You can
  use the environment variable LLDB_EXEC to specify self.lldbExec to the
  dotest.py test driver; otherwise it is going to be populated with
  self.lldbHere.

  Modify the regular tests under the test dir, i.e., not test/benchmarks, to
  use self.lldbHere. Also modify the benchmarks tests to use self.lldbHere
  when they need an 'lldb' executable with debug info to do the performance
  measurements.

  llvm-svn: 138608

* Check in a customized benchmark which compares the Xcode 4.1 vs. Xcode 4.2
  gdb disassembly speed on lldb's Driver::MainLoop function, which is ~1190
  lines of x86 assembly code. (Johnny Chen, 2011-08-09; 1 file, -0/+92)

  This file is not exercised during the normal test suite run, i.e., with no
  +b option specified, so it should be ok.

  The following is the benchmark result on my MBP running OS X Lion:

    [17:38:46] johnny:/Volumes/data/lldb/svn/trunk/test $ ./dotest.py -v +b -p TestFlintVsSlate /Volumes/data/lldb/svn/trunk/build/Debug
    LLDB-71
    Path: /Volumes/data/lldb/svn/trunk
    URL: https://johnny@llvm.org/svn/llvm-project/lldb/trunk
    Repository Root: https://johnny@llvm.org/svn/llvm-project
    Repository UUID: 91177308-0d34-0410-b5e6-96231b3b80d8
    Revision: 137008
    Node Kind: directory
    Schedule: normal
    Last Changed Author: gclayton
    Last Changed Rev: 137008
    Last Changed Date: 2011-08-05 17:50:36 -0700 (Fri, 05 Aug 2011)

    Session logs for test failures/errors/unexpected successes will go into directory '2011-08-08-17_38_52'
    Command invoked: python ./dotest.py -v +b -p TestFlintVsSlate

    ----------------------------------------------------------------------
    Collected 2 tests

    1: test_run_41_then_42 (TestFlintVsSlateGDBDisassembly.FlintVsSlateGDBDisassembly)
       Test disassembly on a large function with 4.1 vs. 4.2's gdb. ...
    4.1 gdb benchmark: Avg: 0.205623 (Laps: 5, Total Elapsed Time: 1.028113)
    4.2 gdb benchmark: Avg: 0.201970 (Laps: 5, Total Elapsed Time: 1.009849)
    gdb_42_avg/gdb_41_avg: 0.982236
    ok
    2: test_run_42_then_41 (TestFlintVsSlateGDBDisassembly.FlintVsSlateGDBDisassembly)
       Test disassembly on a large function with 4.1 vs. 4.2's gdb. ...
    4.2 gdb benchmark: Avg: 0.202602 (Laps: 5, Total Elapsed Time: 1.013012)
    4.1 gdb benchmark: Avg: 0.204418 (Laps: 5, Total Elapsed Time: 1.022089)
    gdb_42_avg/gdb_41_avg: 0.991119
    ok

    ----------------------------------------------------------------------
    Ran 2 tests in 15.688s

    OK

  llvm-svn: 137092

* Print out the stopwatch (which contains laps, total elapsed time, and
  average) instead of just the average. (Johnny Chen, 2011-08-04; 1 file, -2/+2)

  llvm-svn: 136932

* Add a benchmark comparing lldb vs. gdb with disassembly on a large function
  (lldb's Driver::MainLoop()). (Johnny Chen, 2011-08-04; 1 file, -0/+127)

  Sample run on my OS X Lion (MacBook Pro):

    1: test_run_gdb_then_lldb (TestDisassembly.DisassembleDriverMainLoop)
       Test disassembly on a large function with lldb vs. gdb. ...
    gdb benchmark: Avg: 0.201802 (Laps: 5, Total Elapsed Time: 1.009008)
    lldb benchmark: Avg: 0.109569 (Laps: 5, Total Elapsed Time: 0.547843)
    lldb_avg/gdb_avg: 0.542952
    ok
    2: test_run_lldb_then_gdb (TestDisassembly.DisassembleDriverMainLoop)
       Test disassembly on a large function with lldb vs. gdb. ...
    lldb benchmark: Avg: 0.109580 (Laps: 5, Total Elapsed Time: 0.547902)
    gdb benchmark: Avg: 0.201587 (Laps: 5, Total Elapsed Time: 1.007936)
    lldb_avg/gdb_avg: 0.543588
    ok

  llvm-svn: 136931

* Fix typos. (Johnny Chen, 2011-08-03; 1 file, -1/+1)

  llvm-svn: 136809

* Add license header comment. (Johnny Chen, 2011-08-03; 1 file, -0/+8)

  llvm-svn: 136808

* Add the real benchmarks comparing lldb against gdb for repeated expression
  evaluations. (Johnny Chen, 2011-08-02; 2 files, -21/+111)

  Modify lldbbench.py so that the lldbtest.line_number() utility function is
  available to BenchBase clients as just line_number(), and modify lldbtest.py
  so that self.lldbExec (the full path of the 'lldb' executable) is available
  to BenchBase clients as well.

  An example run of the test case on my MacBook Pro running Lion:

    1: test_compare_lldb_to_gdb (TestRepeatedExprs.RepeatedExprsCase)
       Test repeated expressions with lldb vs. gdb. ...
    lldb_avg: 0.204339
    gdb_avg: 0.205721
    lldb_avg/gdb_avg: 0.993284
    ok

  llvm-svn: 136740

* Simple renaming: self.swatch -> self.stopwatch. (Johnny Chen, 2011-08-02; 1 file, -2/+2)

  llvm-svn: 136666

* Add a Stopwatch utility class to the lldbbench.py module and initialize an
  instance of Stopwatch (self.swatch) within BenchBase's setUp() instance
  method, to be available to all the child classes. (Johnny Chen, 2011-08-02; 4 files, -9/+49)

  Use self.swatch to measure elapsed time in TestRepeatedExprs.py, which needs
  to be modified later on to actually measure repeated expression evaluations
  within the context of lldb as well as gdb.

  llvm-svn: 136664

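  A minimal sketch of such a Stopwatch (illustrative; the real class in
  lldbbench.py may differ), with __str__ shaped after the
  "Avg: ... (Laps: ..., Total Elapsed Time: ...)" lines seen in the benchmark
  output quoted throughout this log:

```python
import time

class Stopwatch(object):
    """Accumulate laps and total elapsed time; report the average."""

    def __init__(self):
        self.laps = 0
        self.total_elapsed = 0.0
        self._start = None

    def start(self):
        self._start = time.time()

    def stop(self):
        self.total_elapsed += time.time() - self._start
        self.laps += 1
        self._start = None

    def avg(self):
        return self.total_elapsed / self.laps if self.laps else 0.0

    def __str__(self):
        return "Avg: %f (Laps: %d, Total Elapsed Time: %f)" % (
            self.avg(), self.laps, self.total_elapsed)

# Typical lap usage:
#     sw = Stopwatch()
#     for _ in range(laps):
#         sw.start(); run_one_iteration(); sw.stop()
#     print(sw)
```
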
* Add an abstract base class called BenchBase to be inherited by benchmark
  tests. (Johnny Chen, 2011-08-01; 1 file, -2/+2)

  Modify the example TestRepeatedExprs.py to use BenchBase instead.

  llvm-svn: 136649

* Add a @benchmarks_test decorator for test methods we want to categorize as
  benchmarks tests. (Johnny Chen, 2011-07-30; 3 files, -0/+48)

  The test driver now takes an option "+b" which enables running just the
  benchmarks tests. By default, tests decorated with the @benchmarks_test
  decorator do not get run. Add an example benchmarks test directory which
  contains nothing for the time being, just to demonstrate the
  @benchmarks_test concept.

  For example:

    $ ./dotest.py -v benchmarks
    ...
    ----------------------------------------------------------------------
    Collected 2 tests

    1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
       Test repeated expressions with gdb. ... skipped 'benchmarks tests'
    2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
       Test repeated expressions with lldb. ... skipped 'benchmarks tests'

    ----------------------------------------------------------------------
    Ran 2 tests in 0.047s

    OK (skipped=2)

    $ ./dotest.py -v +b benchmarks
    ...
    ----------------------------------------------------------------------
    Collected 2 tests

    1: test_with_gdb (TestRepeatedExprs.RepeatedExprssCase)
       Test repeated expressions with gdb. ... running test_with_gdb
    benchmarks result for test_with_gdb
    ok
    2: test_with_lldb (TestRepeatedExprs.RepeatedExprssCase)
       Test repeated expressions with lldb. ... running test_with_lldb
    benchmarks result for test_with_lldb
    ok

    ----------------------------------------------------------------------
    Ran 2 tests in 0.270s

    OK

  Also mark some Python API tests which are missing the @python_api_test
  decorator.

  llvm-svn: 136553

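  A plausible sketch of such a decorator (illustrative; the real one lives in
  lldbtest.py, and just_do_benchmarks_test stands in for whatever flag the
  "+b" option sets):

```python
just_do_benchmarks_test = False  # the test driver would set this for "+b"

def benchmarks_test(func):
    """Mark a test as a benchmark; skip it unless benchmarks are enabled."""
    def wrapper(self, *args, **kwargs):
        if not just_do_benchmarks_test:
            self.skipTest("benchmarks tests")  # matches the skip reason above
        return func(self, *args, **kwargs)
    wrapper.__name__ = func.__name__  # preserve the reported test name
    return wrapper
```
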