path: root/lldb/packages/Python/lldbsuite/test/dosep.py
Commit message (Author, Date, Files/Lines changed)
* Fix buildbot regression by rL339929: NameError: global name 'test_directory' is not defined (Jan Kratochvil, 2018-10-03, 1 file: -1/+1)
  With buildbot slave under test, I get after rL339929
  (http://lab.llvm.org:8014/builders/lldb-x86_64-fedora-28-cmake/builds/243/steps/test1/logs/stdio):
    File "/home/buildbot/lldbroot/lldb-x86_64-fedora-28-cmake/scripts/../llvm/tools/lldb/test/dotest.py", line 7, in <module>
      lldbsuite.test.run_suite()
    File "/quad/home/buildbot/lldbroot/lldb-x86_64-fedora-28-cmake/llvm/tools/lldb/packages/Python/lldbsuite/test/dotest.py", line 1177, in run_suite
      configuration.results_formatter_object)
    File "/quad/home/buildbot/lldbroot/lldb-x86_64-fedora-28-cmake/llvm/tools/lldb/packages/Python/lldbsuite/test/dosep.py", line 1692, in main
      dst = core.replace(test_directory, "")[1:]
    NameError: global name 'test_directory' is not defined
  Patch by Vedant Kumar.
  Differential Revision: https://reviews.llvm.org/D51874
  llvm-svn: 343726
* [dotest] Make --test-subdir work with --no-multiprocess (Vedant Kumar, 2018-08-16, 1 file: -11/+2)
  The single-process test runner is invoked in a number of different scenarios, including when multiple test dirs are specified or (afaict) when lit is used to drive the test suite. Unfortunately the --test-subdir option did not work with the single process test runner, breaking an important use case (using lit to run swift-lldb Linux tests).
  Failure URL: https://ci.swift.org/job/swift-PR-Linux/6841
  We won't be able to run lldb tests within swift PR testing without filtering down the set of tests. This change makes --test-subdir work with the single-process runner.
  llvm-svn: 339929
* Two more dosep-parallelization fallout fixes (Pavel Labath, 2018-02-19, 1 file: -32/+14)
  The first issue is about the flaky test rerun logic. This was grouping tests by subdir and passing them into walk_and_invoke in the incorrect form. This part can just be deleted as it's not needed anymore.
  The second problem (which I noticed while investigating the first one) was that the "-p" switch was not working in multiprocessing mode. This happened because we were returning None from process_file instead of a tuple full of empty values for tests that did not match the -p regex.
  Both of these would have been caught earlier if Python were a more strongly typed language. :/
  llvm-svn: 325519
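The second fix above is about the shape of a return value: callers of process_file unpack its result, so a test that does not match the -p regex must still yield a tuple of empty values rather than None. A minimal sketch of that pattern, with hypothetical, simplified names rather than the actual dosep.py code:

    import re

    def process_file(test_file, name_filter=None):
        """Return a (passes, failures, timeouts) tuple for one test file."""
        if name_filter and not re.search(name_filter, test_file):
            # Returning None here breaks callers that unpack the result;
            # hand back a tuple of empty values instead.
            return ([], [], [])
        # ... actually run the tests and collect results (elided) ...
        return ([test_file], [], [])

    # Callers can now unpack unconditionally:
    passes, failures, timeouts = process_file("TestFoo.py", name_filter="Bar")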
* [dosep] Run tests in a more parallel fashion (Pavel Labath, 2018-02-16, 1 file: -55/+42)
  Summary: Due to in-tree builds, we were parallelizing the tests at the directory level. Now that the tests are built out-of-tree, we can remove this limitation and parallelize at the file level instead. This decreases test suite time by about 10% for me, which is not world-shattering, but it makes the code slightly simpler and will also allow us to merge tests which were artificially spread over multiple folders (TestConcurrentEvents...) to work around this limitation.
  To make this work, I've also needed to include the test file name in the build directory name, as just the test method name is not unique enough (plenty of tests have a test method called "test" or similar).
  While doing this, I've found a couple of tests that are taking way longer than they ought to (TestBreakpointCaseSensitivity -- 90 seconds), which I plan to look into in the future.
  Reviewers: aprantl
  Subscribers: lldb-commits
  Differential Revision: https://reviews.llvm.org/D43335
  llvm-svn: 325322
* dotest.py: remove the ability to specify different architectures/compilers in a single invocation (Pavel Labath, 2017-03-15, 1 file: -1/+1)
  Summary: This has been broken at least since the new test result framework was added, which was over a year ago. It looks like nobody has missed it since. Removing this makes the gmodules handling code saner, as it already did not know how to handle the multiple-compilers case.
  My motivation for this is libc++ data formatter support on android -- I am trying to make a central way of determining whether libc++ tests can be run, and without this, I would have to resort to similar hacks as the gmodules code.
  Reviewers: jingham, zturner
  Subscribers: danalbert, tfiala, lldb-commits
  Differential Revision: https://reviews.llvm.org/D30779
  llvm-svn: 297811
* test infra: clear file-charged issues on rerun of file (Todd Fiala, 2016-10-01, 1 file: -3/+14)
  This change addresses the corner case bug in the test infrastructure where a test file times out *outside* of any running test method. In those cases, the issue was charged to the file, not to a test method within the file. When that file is re-run successfully, none of the test-method-level successes would clear the file-level issue.
  This change fixes that: for all test files that are getting rerun (whether by being marked flaky or via the --rerun-all-issues flag), file-level test issues are searched for in each of those files. Each file-level issue found in the rerun file list then gets cleared.
  A test of this feature is added to issue_verification, using the technique there of moving the *.py.park file to *.py to do an end-to-end validation.
  This change also adds a .gitignore entry for pyenv project-level files and fixes up a few minor pep8 formatting violations in files I touched.
  Fixes: llvm.org/pr27423
  llvm-svn: 282990
* added Linux support for test timeout sampling (Todd Fiala, 2016-09-26, 1 file: -4/+17)
  This is the Linux counterpart to the sampling support I added on the macOS side. This change also introduces zip-file compression if the size of the sample output is greater than 10 KB. The Linux side can be quite large, and the textual content is averaging over a 10x compression factor on tests that I force to time out. When compression takes place, the filename becomes:
    {session_dir}/{TestFilename.py}-{pid}.sample.zip
  This support relies on the Linux 'perf' tool. If it isn't present, the behavior is to ignore pre-kill processing of the timed-out test process.
  Note that calling the perf tool under the timeout command appears to nuke the profiled process. This was causing the timeout kill logic to fail due to the process having disappeared. I modified the kill logic to catch the case of the process not existing, and I have it ignore the kill request in that case. Any other exception is still raised.
  Reviewers: labath
  Subscribers: lldb-commits
  Differential Revision: https://reviews.llvm.org/D24890
  llvm-svn: 282436
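A small sketch of the compression step described above (simplified, hypothetical helper; not the actual dosep.py code): write the sample output as-is when it is small, and switch to a zip archive named {session_dir}/{TestFilename.py}-{pid}.sample.zip when it exceeds 10 KB.

    import os
    import zipfile

    def save_sample(session_dir, test_filename, pid, sample_text):
        base = os.path.join(session_dir, "{}-{}.sample".format(test_filename, pid))
        if len(sample_text) <= 10 * 1024:
            # Small output: keep it as plain text.
            with open(base, "w") as f:
                f.write(sample_text)
            return base
        zip_path = base + ".zip"
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            # Store the text under the uncompressed file name inside the archive.
            zf.writestr(os.path.basename(base), sample_text)
        return zip_path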
* add hook for calling platform-dependent pre-kill action on a timed out test (Todd Fiala, 2016-09-23, 1 file: -27/+124)
  Differential review: https://reviews.llvm.org/D24850
  Reviewers: clayborg, labath
  llvm-svn: 282258
* *** This commit represents a complete reformatting of the LLDB source code (Kate Stone, 2016-09-06, 1 file: -10/+25)
  *** to conform to clang-format’s LLVM style. This kind of mass change has
  *** two obvious implications:
  Firstly, merging this particular commit into a downstream fork may be a huge effort. Alternatively, it may be worth merging all changes up to this commit, performing the same reformatting operation locally, and then discarding the merge for this particular commit. The commands used to accomplish this reformatting were as follows (with current working directory as the root of the repository):
    find . \( -iname "*.c" -or -iname "*.cpp" -or -iname "*.h" -or -iname "*.mm" \) -exec clang-format -i {} +
    find . -iname "*.py" -exec autopep8 --in-place --aggressive --aggressive {} + ;
  The version of clang-format used was 3.9.0, and autopep8 was 1.2.4.
  Secondly, “blame” style tools will generally point to this commit instead of a meaningful prior commit. There are alternatives available that will attempt to look through this change and find the appropriate prior commit. YMMV.
  llvm-svn: 280751
* Print a warning if the directory passed to --test-subdir doesn't end up existing (Enrico Granata, 2016-07-25, 1 file: -0/+2)
  llvm-svn: 276709
* Revert "[test] Report error when inferior test processes exit with a ↵Pavel Labath2016-07-181-3/+2
| | | | | | | | | | | | non-zero code" This reverts r275782. The problem with the commit is that it reports an additional "exit (1)" error for every file containing a failing test, which is far more than I had intended to do. I'll need to come up with a more fine-grained way of achieving the result. llvm-svn: 275791
* [test] Report error when inferior test processes exit with a non-zero code (Pavel Labath, 2016-07-18, 1 file: -2/+3)
  Summary: We've run into this problem when the test errored out so early (because it could not connect to the remote device) that the code in D20193 did not catch the error. This resulted in the test suite reporting success with 0 tests being run. This patch makes sure that any non-zero exit code from the inferior process gets reported as an error. Basically, I expand the concept of "exceptional exits", which was previously being used for signals, to cover these cases as well.
  Reviewers: tfiala, zturner
  Subscribers: lldb-commits
  Differential Revision: https://reviews.llvm.org/D22404
  llvm-svn: 275782
* Allow custom formatting of session log file names. (Zachary Turner, 2016-05-17, 1 file: -0/+1)
  Differential Revision: http://reviews.llvm.org/D20306
  llvm-svn: 269793
* test infra: move test event-related handling into its own package (Todd Fiala, 2016-04-20, 1 file: -8/+7)
  This change moves all the test event handling and its related ResultsFormatter classes out of the packages/Python/lldbsuite/test dir into a packages/Python/lldbsuite/test_event package. Formatters are moved into a sub-package under that.
  I am limiting the scope of this change to just the motion and a few minor issues caught by a static Python checker (e.g. removing unused import statements). This is a pre-step for adding package-level tests to the test event system. I also intend to simplify test event results formatter selection after I make sure this doesn't break anybody.
  See: http://reviews.llvm.org/D19288
  Reviewed by: Pavel Labath
  llvm-svn: 266885
* test infra cleanup: convert test_runner lib into package (Todd Fiala, 2016-04-19, 1 file: -7/+3)
  Also does the following:
  * adopts the PEP8 naming convention for the OptionalWith class (now optional_with).
  * moves test_runner/lldb_utils.py to lldbsuite/support/optional_with.py.
  * packages tests in a subpackage of test_runner per the recommendations in http://the-hitchhikers-guide-to-packaging.readthedocs.org/en/latest/creation.html
  Tests can be run from within packages/Python/lldbsuite/test via this command:
    python -m unittest discover test_runner
  The primary cleanup this allows is avoiding the need to muck with the PYTHONPATH variable from within the source files. This also aids some of the static code checkers, as they don't need to run code to determine the proper python path.
  llvm-svn: 266710
* fix a race in the LLDB test suite results collection (Todd Fiala, 2016-04-18, 1 file: -3/+7)
  The race boiled down to this: if a test worker queue is able to run the test inferior and clean up before the dosep.py listener socket is spun up, and the worker queue is the last one (as would be the case when there's only one test rerunning in the rerun queue), then the test suite will exit the main loop before having a chance to process any test events coming from the test inferior or the worker queue job control.
  I found this race to be far more likely on fast hardware. Our Linux CI is one such example. While it will show up primarily during meta test events generated by a worker thread when a test inferior times out or exits with an exceptional exit (e.g. seg fault), it only requires that the OS takes longer to hook up the listener socket than it takes for the final test inferior and worker thread to shut down.
  See: http://reviews.llvm.org/D19214
  Reviewed by: Pavel Labath
  llvm-svn: 266624
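A generic illustration of avoiding this class of race (not necessarily the fix applied in this commit): make sure the listener socket is bound and listening before any worker is launched, so no worker can finish before the listener exists.

    import socket
    import threading

    def start_listener():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", 0))  # pick a free port
        srv.listen(16)              # listener is live before any worker starts
        return srv, srv.getsockname()[1]

    def worker(port, payload):
        conn = socket.create_connection(("127.0.0.1", port))
        conn.sendall(payload)
        conn.close()

    srv, port = start_listener()
    t = threading.Thread(target=worker, args=(port, b"test-event"))
    t.start()
    peer, _ = srv.accept()   # no event can be lost: the socket existed first
    print(peer.recv(1024))
    t.join()
    peer.close()
    srv.close()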
* Fix dotest.py '-p' option for multi-process mode (Stephane Sezer, 2016-04-05, 1 file: -0/+4)
  Summary: The '-p' option for dotest.py was ignored in multiprocess mode, as the -p argument to the inferior would overwrite the -p argument passed on the command line.
  Reviewers: zturner, tfiala
  Subscribers: lldb-commits, sas
  Differential Revision: http://reviews.llvm.org/D18779
  Change by Francis Ricci <fjricci@fb.com>
  llvm-svn: 265422
* fixed test suite crash when --platform-name doesn't start with 'remote-' (Todd Fiala, 2016-01-22, 1 file: -9/+7)
  Also removes Darwin test case files from the expectedTimeout hard-coded file list.
  See: http://reviews.llvm.org/D16423
  llvm-svn: 258542
* Remove last XTIMEOUTs from android tests (Pavel Labath, 2016-01-19, 1 file: -6/+1)
  TestHelloWorld seems to be passing now as far as I can tell. TestExitDuringStep is still hanging. I have marked the relevant tests as flaky, which should handle the timeouts now as well. I'll be monitoring the buildbots for fallout.
  llvm-svn: 258114
* Remove some Windows->Android XTIMEOUTs (Pavel Labath, 2016-01-07, 1 file: -5/+0)
  llvm-svn: 257052
* Remove XTIMEOUT from TestMultithreaded on linux (Pavel Labath, 2016-01-06, 1 file: -5/+1)
  Instead, mark the test as expected flaky, which will trigger a rerun in case the test hangs.
  llvm-svn: 256935
* Remove XTIMEOUT from TestEvents on linux (Pavel Labath, 2016-01-05, 1 file: -1/+0)
  I'm getting rid of the expected timeouts. I'll XFAIL/skip any tests that show up as failing after this (I haven't seen any when running locally, but maybe the buildbot will disagree).
  llvm-svn: 256827
* Remove XTIMEOUT from TestRegisters on linux (Pavel Labath, 2016-01-04, 1 file: -1/+0)
  I suspect the test was hanging due to the attach deadlock. This was fixed and the test has passed the last 200 buildbot runs.
  llvm-svn: 256755
* Remove XTIMEOUT from TestThreadStepOut on linux (Pavel Labath, 2016-01-04, 1 file: -1/+0)
  The whole test is skipped already, so it's not running anyway.
  llvm-svn: 256752
* Remove XTIMEOUT from TestHelloWorld on linux (Pavel Labath, 2016-01-04, 1 file: -3/+0)
  I think it was timing out because of the attach deadlocks, which are now fixed. In any case, it has passed the last 200 buildbot runs, so I am enabling it.
  llvm-svn: 256748
* Remove XTIMEOUT from TestExitDuringStep on linux (Pavel Labath, 2016-01-04, 1 file: -1/+0)
  The test has passed the last 200 buildbot runs, so it's hopefully working now. I'll watch the buildbots for signs of trouble.
  llvm-svn: 256746
* Remove XTIMEOUT from TestCreateAfterAttach on linux (Pavel Labath, 2016-01-04, 1 file: -1/+0)
  I believe the cause for this was the attach lockup fixed in r246756. I will enable this test and observe the buildbots for signs of problems.
  llvm-svn: 256744
* Remove TestConnectRemote from XTIMEOUTs (Pavel Labath, 2016-01-04, 1 file: -1/+0)
  The test in question was removed in r249613.
  llvm-svn: 256741
* test infra: force rerun to use parallel runner (Todd Fiala, 2015-12-17, 1 file: -1/+5)
  We've now seen the rerun test phase hang in a few scenarios. Eliminate the serial test runner (which is not exercised nearly as much as the others) by using a multi-worker test runner strategy with a single worker. This should rule out whether this is related to the serial test runner strategy.
  llvm-svn: 255880
* [test] Add ability to expect timeouts (Pavel Labath, 2015-12-16, 1 file: -1/+0)
  Summary: This adds the ability to mark tests that do not complete due to hangs, crashes, etc., as "expected", to avoid flagging the build red for a known problem. Functionally, this extends the scope of the existing expectedFailureXXX decorators to cover these states as well. Once this is in, I will start replacing the magic list of failing tests in dosep.py with our regular annotations, which should hopefully make the code simpler.
  Reviewers: tfiala
  Subscribers: lldb-commits
  Differential Revision: http://reviews.llvm.org/D15530
  llvm-svn: 255763
* test-infra: refactored new summary results into base ResultsFormatter class (Todd Fiala, 2015-12-15, 1 file: -15/+10)
  This allows more specialized formatters to still reuse the results summarization display from the base class.
  llvm-svn: 255676
* test infra: enable single-worker rerun phase for flakey tests. (Todd Fiala, 2015-12-14, 1 file: -14/+88)
  Use of --rerun-all-issues will enable any test method failure, not just test methods marked with the flakey decorator, to rerun. Currently this does not change the flakey logic's immediate rerun attempt. I want to make sure this doesn't cause any significant issues before changing that part.
  The rerun reporting is only known to work properly with the default (new) BasicResultsFormatter reporting. Once we work out any issues, I'll go back and make sure the curses output handles it properly as well.
  llvm-svn: 255543
* test infra: adds book-keeping for rerunnable tests (Todd Fiala, 2015-12-12, 1 file: -10/+25)
  Also adds full path info for exceptional exits and timeouts when no test method is currently running.
  Adds the --rerun-all-issues command line arg. If specified, all test issues are eligible for rerun. If not specified, only tests marked flakey are eligible for rerun.
  The actual rerunning will occur in an upcoming change. This change just handles the accounting of what should be rerun.
  llvm-svn: 255438
* Decouple test execution and test finder logic in parallel test runner. (Todd Fiala, 2015-12-12, 1 file: -16/+22)
  llvm-svn: 255400
* Add expected timeout support to test event architecture. (Todd Fiala, 2015-12-11, 1 file: -1/+7)
  llvm-svn: 255363
* Remove the --output-on-success command line argument from dotest. (Zachary Turner, 2015-12-10, 1 file: -16/+3)
  llvm-svn: 255277
* Fix new summary to include exceptional exit count in determining exit value (Todd Fiala, 2015-12-09, 1 file: -1/+3)
  The main dotest.py should exit with a system return code of 1 on any issue. This change fixes a place where I omitted counting the exceptional exit value to determine if we should return 1 when using the new summary results.
  This change also puts a banner around the Issue Details section that comes before the Test Result Summary.
  llvm-svn: 255138
* wire timeouts and exceptional inferior process exits through the test event system (Todd Fiala, 2015-12-09, 1 file: -20/+147)
  The results formatter system is now fed timeouts and exceptional process exits (i.e. an inferior dotest.py process that exited by signal on POSIX systems). If a timeout or exceptional exit happens while a test method is running on the worker queue, the timeout or exceptional exit is charged and reported against that test method. Otherwise, if no test method was running at the time of the timeout or exceptional exit, only the test filename will be reported as the TIMEOUT or ERROR.
  Implements:
  https://llvm.org/bugs/show_bug.cgi?id=24830
  https://llvm.org/bugs/show_bug.cgi?id=25703
  In support of:
  https://llvm.org/bugs/show_bug.cgi?id=25450
  llvm-svn: 255097
* Rename test_results.py to result_formatter.py. (Zachary Turner, 2015-12-07, 1 file: -3/+3)
  There is already a class called LLDBTestResults which I would like to move into a separate file, but the most appropriate filename was taken.
  llvm-svn: 254946
* Adds candidate formatter for replacing legacy summary results. (Todd Fiala, 2015-12-02, 1 file: -29/+63)
  Also cleans up some usages of strings where symbolic names were safer and made more sense.
  Try a test run with something like this to check out the new basic results formatter (not used by default):
    time test/dotest.py --executable `pwd`/build/Debug/lldb --results-formatter lldbsuite.test.basic_results_formatter.BasicResultsFormatter --results-file stdout
  This will yield something like:
    Testing: 1 test suites, 8 threads
    1 out of 1 test suites processed - TestHelp.py
    Test Results
    Total Test Methods Run (excluding reruns): 13
    Test Method rerun count: 0
    =================== Test Result Summary ===================
    Success: 13
    Expected Failure: 0
    Failure: 0
    Error: 0
    Unexpected Success: 0
    Skip: 0
  Whereas something with a bit of error will look more like this:
    42 out of 42 test suites processed - TestSymbolTable.py
    Test Results
    Total Test Methods Run (excluding reruns): 166
    Test Method rerun count: 0
    =================== Test Result Summary ===================
    Success: 93
    Expected Failure: 10
    Failure: 2
    Error: 2
    Unexpected Success: 0
    Skip: 59
    Details:
    FAIL: TestModulesInlineFunctions.ModulesInlineFunctionsTestCase.test_expr_dsym (/Users/tfiala/work/lldb-tot/git-svn/lldb/packages/Python/lldbsuite/test/lang/objc/modules-inline-functions/TestModulesInlineFunctions.py)
    FAIL: TestModulesInlineFunctions.ModulesInlineFunctionsTestCase.test_expr_dwarf (/Users/tfiala/work/lldb-tot/git-svn/lldb/packages/Python/lldbsuite/test/lang/objc/modules-inline-functions/TestModulesInlineFunctions.py)
    ERROR: TestObjCCheckers.ObjCCheckerTestCase.test_objc_checker_dsym (/Users/tfiala/work/lldb-tot/git-svn/lldb/packages/Python/lldbsuite/test/lang/objc/objc-checker/TestObjCCheckers.py)
    ERROR: TestObjCCheckers.ObjCCheckerTestCase.test_objc_checker_dwarf (/Users/tfiala/work/lldb-tot/git-svn/lldb/packages/Python/lldbsuite/test/lang/objc/objc-checker/TestObjCCheckers.py)
  The Details header only prints if there are any issues to report. The Details section has tags that should get picked up using the normal issue text scrapers (e.g. buildbot).
  Test numbers reported are strictly test method runs. The rerun bit at the top is in support of the multi-pass test runner code (to run the low-load, single worker test pass for tests that failed the first run), which I'll be able to put up for review after this.
  ResultsFormatters now have the ability to indicate they replace the legacy summary, as this one does. Once we come to agreement on the exact format, I will switch us over to using this by default.
  llvm-svn: 254530
* Bump up test timeout interval on Darwin from 4 to 6 minutes. (Todd Fiala, 2015-11-11, 1 file: -0/+3)
  We have several tests that TIMEOUT under heavy load but just need a bit more time to complete.
  llvm-svn: 252703
* Add --curses shortcut for specifying the curses-based test results formatter. (Todd Fiala, 2015-11-09, 1 file: -0/+4)
  This commit closes the following review: http://reviews.llvm.org/D14488
  llvm-svn: 252498
* Make Windows always use multiprocessing-pool. (Zachary Turner, 2015-11-06, 1 file: -5/+4)
  We still see "Too many file handles" errors on Windows even with lower numbers of cores. It's not clear what the right balance is, and the bar seems to move as more tests get added. So just use the strategy that works until we can investigate more deeply.
  llvm-svn: 252325
* Python 3 - Turn on absolute imports, and fix existing imports. (Zachary Turner, 2015-11-05, 1 file: -8/+9)
  Absolute imports were introduced in Python 2.5 as a feature (e.g. from __future__ import absolute_import), and made the default in Python 3. When absolute imports are enabled, the import system changes in a couple of ways:
  1) The `import foo` syntax will *only* search sys.path. If `foo` isn't in sys.path, it won't be found. Period. Without absolute imports, the import system will also search the same directory that the importing file resides in, so that you can easily import from the same folder.
  2) From inside a package, you can use a dot syntax to refer to higher levels of the current package. For example, if you are in the package lldbsuite.test.utility, then ..foo refers to lldbsuite.test.foo. You can use this notation with the `from X import Y` syntax to write intra-package references. For example, using the previous location as a starting point, writing `from ..support import seven` would import lldbsuite.support.seven.
  Since this is now the default behavior in Python 3, importing from the same directory with `import foo` *no longer works*. As a result, the only way to have portable code is to force absolute imports for all versions of Python. See PEP 0328 [https://www.python.org/dev/peps/pep-0328/] for more information about absolute and relative imports.
  Differential Revision: http://reviews.llvm.org/D14342
  Reviewed By: Todd Fiala
  llvm-svn: 252191
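A small illustration of the two behaviors described above. The names follow the commit message; this is not code from the patch, and the relative import only works when the module lives inside the lldbsuite.test package (e.g. lldbsuite/test/utility.py):

    from __future__ import absolute_import  # Python 2.5+: opt in to the Python 3 behavior

    # 1) `import foo` now searches only sys.path; a sibling foo.py in this
    #    module's own directory is no longer found implicitly.
    # 2) Intra-package references use explicit dots; from lldbsuite.test.utility
    #    this imports lldbsuite.support.seven:
    from ..support import seven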
* Introduce seven.cmp_ and use it instead of cmp (Zachary Turner, 2015-11-03, 1 file: -1/+2)
  llvm-svn: 251982
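Python 3 removed the builtin cmp(), so a compatibility shim is needed. A sketch of the usual pattern (not necessarily the exact definition in lldbsuite.support.seven):

    def cmp_(a, b):
        """Return -1, 0, or 1, like Python 2's cmp()."""
        return (a > b) - (a < b)

    assert cmp_(1, 2) == -1 and cmp_(2, 2) == 0 and cmp_(3, 2) == 1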
* Remove `use_lldb_suite` from the package, and don't import it anymore. (Zachary Turner, 2015-11-03, 1 file: -1/+1)
  This module was originally intended to be imported by top-level scripts to be able to find the LLDB packages and third party libraries. Packages themselves shouldn't need to import it, because by the time it gets into the package, the top-level script should have already done this. Indeed, it was just adding the same values to sys.path multiple times, so this patch is essentially no functional change.
  To make sure it doesn't get re-introduced, we also delete the `use_lldb_suite` module from `lldbsuite/test`, although the original copy still remains in `lldb/test`.
  llvm-svn: 251963
* [dosep] Fix-up callers of process_dir, after it got its argument removed (Pavel Labath, 2015-11-02, 1 file: -2/+2)
  llvm-svn: 251830
* Make dosep correctly invoke the top-level script when forking out (Zachary Turner, 2015-11-02, 1 file: -3/+4)
  packages/Python/lldbsuite is now a Python package, and it relies on its __init__.py being called to do package-level initialization. If you exec packages/Python/lldbsuite/dotest.py directly, you won't get this package-level initialization, and things will fail. But without this patch, this is exactly what dosep itself does. To launch the multiprocessing fork, it was hardcoding a path to dotest.py and exec'ing it from inside the package.
  The fix here is to get the path of the top-level script and exec that instead. A more robust solution would involve refactoring the code so that dosep execs some internal script that imports lldbsuite, but that's a bit more involved.
  Differential Revision: http://reviews.llvm.org/D14157
  Reviewed by: Todd Fiala
  llvm-svn: 251819
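A simplified sketch of the idea described above (hypothetical names; not the actual dosep.py implementation): re-exec whatever top-level script started the process, so the lldbsuite package's __init__.py runs again in the child, instead of hardcoding a path to dotest.py inside the package.

    import subprocess
    import sys

    def launch_inferior(extra_args):
        # sys.argv[0] is the script the user actually invoked, which already
        # performed the package-level initialization.
        top_level_script = sys.argv[0]
        cmd = [sys.executable, top_level_script] + list(extra_args)
        return subprocess.call(cmd)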
* Move lldb/test to lldb/packages/Python/lldbsuite/test. (Zachary Turner, 2015-10-28, 1 file: -0/+1435)
  This is the conclusion of an effort to get LLDB's Python code structured into a bona-fide Python package. This has a number of benefits, but most notably the ability to more easily share Python code between different but related pieces of LLDB's Python infrastructure (for example, `scripts` can now share code with `test`).
  llvm-svn: 251532