author     Matthias Braun <matze@braunis.de>    2018-08-31 21:47:01 +0000
committer  Matthias Braun <matze@braunis.de>    2018-08-31 21:47:01 +0000
commit     4f340e975e4d612203968244c191ba10a816b263 (patch)
tree       4dd1bff5924a27ae95e4edeeb387c9092b9b4e31 /llvm/docs/TestSuiteMakefileGuide.rst
parent     caf66727797061f212e91741b6ff21f0927cffbe (diff)
Revamp test-suite documentation
- Remove duplication: Both TestingGuide and TestSuiteMakefileGuide would give
  a similar overview over the test-suite.
- Present cmake/lit as the default/normal way of running the test-suite:
  - Move information about the cmake/lit test-suite into the new
    TestSuiteGuide.rst file. Mark the remaining information in
    TestSuiteMakefileGuide.rst as deprecated.
- General simplification and shortening of language.
- Remove paragraphs about tests known to fail, as everything should pass
  nowadays.
- Remove paragraph about the zlib requirement; it's not required anymore since
  we copied a zlib source snapshot into the test-suite.
- Remove paragraph about comparison with the "native compiler". Correctness is
  always checked against reference outputs nowadays.
- Change the cmake/lit quickstart section to recommend `pip` for installing
  lit, and use `CMAKE_C_COMPILER` and a cache file in the example, as that is
  what most people will end up doing anyway. Also add a section about
  compare.py to the quickstart.
- Document the `Bitcode` and `MicroBenchmarks` directories.
- Add a section with commonly used cmake configuration options.
- Add a section about showing and comparing result files via compare.py.
- Add a section about using external benchmark suites.
- Add a section about using custom benchmark suites.
- Add a section about profile-guided optimization.
- Add a section about cross-compilation and running on external devices.

Differential Revision: https://reviews.llvm.org/D51465

llvm-svn: 341260
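The revamped quickstart that this commit message describes boils down to roughly the following flow. This is only an illustrative sketch: the compiler path and the ``O3.cmake`` cache-file name are assumptions made for the example, not text taken from this page.

    % pip install lit
    % mkdir test-suite-build && cd test-suite-build
    % cmake -DCMAKE_C_COMPILER=/path/to/clang \
            -C ../test-suite/cmake/caches/O3.cmake \
            ../test-suite
    % make
    % lit -v -o results.json .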
Diffstat (limited to 'llvm/docs/TestSuiteMakefileGuide.rst')
-rw-r--r--    llvm/docs/TestSuiteMakefileGuide.rst    170
1 file changed, 11 insertions, 159 deletions
diff --git a/llvm/docs/TestSuiteMakefileGuide.rst b/llvm/docs/TestSuiteMakefileGuide.rst
index b6f32262b06..dfd12d72f67 100644
--- a/llvm/docs/TestSuiteMakefileGuide.rst
+++ b/llvm/docs/TestSuiteMakefileGuide.rst
@@ -1,161 +1,13 @@
-=====================
-LLVM test-suite Guide
-=====================
+======================================
+test-suite Makefile Guide (deprecated)
+======================================
.. contents::
- :local:
+ :local:
Overview
========
-This document describes the features of the Makefile-based LLVM
-test-suite as well as the cmake-based replacement. This way of interacting
-with the test-suite is deprecated in favor of running the test-suite using LNT,
-but may continue to prove useful for some users. See the Testing
-Guide's :ref:`test-suite Quickstart <test-suite-quickstart>` section for more
-information.
-
-Test suite Structure
-====================
-
-The ``test-suite`` module contains a number of programs that can be
-compiled with LLVM and executed. These programs are compiled using the
-native compiler and various LLVM backends. The output from the program
-compiled with the native compiler is assumed correct; the results from
-the other programs are compared to the native program output and pass if
-they match.
-
-When executing tests, it is usually a good idea to start out with a
-subset of the available tests or programs. This makes test run times
-smaller at first, and later on it is useful for investigating individual
-test failures. To run the tests on only a subset of programs, simply
-change directory to the programs you want tested and run ``gmake``
-there. Alternatively, you can run a different test using the ``TEST``
-variable to change what tests are run on the selected programs (see below
-for more info).
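For illustration, a sketch of both approaches; the subdirectory name below is just an example:

.. code-block:: bash

   % cd MultiSource/Benchmarks   # somewhere inside the configured object tree
   % gmake                       # run only the programs under this directory
   % gmake TEST=nightly          # or pick a different TEST type for the same subset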
-
-In addition to testing correctness, the ``test-suite`` directory also
-performs timing tests of various LLVM optimizations. It also records
-compilation times for the compilers and the JIT. This information can be
-used to compare the effectiveness of LLVM's optimizations and code
-generation.
-
-``test-suite`` tests are divided into three types: MultiSource,
-SingleSource, and External.
-
-- ``test-suite/SingleSource``
-
- The SingleSource directory contains test programs that are only a
- single source file in size. These are usually small benchmark
- programs or small programs that calculate a particular value. Several
- such programs are grouped together in each directory.
-
-- ``test-suite/MultiSource``
-
- The MultiSource directory contains subdirectories which contain
- entire programs with multiple source files. Large benchmarks and
- whole applications go here.
-
-- ``test-suite/External``
-
- The External directory contains Makefiles for building code that is
- external to (i.e., not distributed with) LLVM. The most prominent
- members of this directory are the SPEC 95 and SPEC 2000 benchmark
- suites. The ``External`` directory does not contain these actual
- tests, but only the Makefiles that know how to properly compile these
- programs from somewhere else. The presence and location of these
- external programs is configured by the test-suite ``configure``
- script.
-
-Each tree is then subdivided into several categories, including
-applications, benchmarks, regression tests, code that is strange
-grammatically, etc. These categories should be relatively
-self-explanatory.
-
-Some tests are known to fail. Some are bugs that we have not fixed yet;
-others are features that we haven't added yet (or may never add). In the
-regression tests, the result for such tests will be XFAIL (eXpected
-FAILure). In this way, you can tell the difference between an expected
-and unexpected failure.
-
-The tests in the test suite have no such feature at this time. If the
-test passes, only warnings and other miscellaneous output will be
-generated. If a test fails, a large <program> FAILED message will be
-displayed. This will help you separate benign warnings from actual test
-failures.
-
-Running the test suite via CMake
-================================
-
-To run the test suite, follow these steps:
-
-#. The test suite uses the lit test runner to run the test-suite, so
- you need to have lit installed first. Check out LLVM and install lit:
-
- .. code-block:: bash
-
- % svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm
- % cd llvm/utils/lit
- % sudo python setup.py install # Or without sudo, install in virtual-env.
- running install
- running bdist_egg
- running egg_info
- writing lit.egg-info/PKG-INFO
- ...
- % lit --version
- lit 0.5.0dev
-
-#. Check out the ``test-suite`` module with:
-
- .. code-block:: bash
-
- % svn co http://llvm.org/svn/llvm-project/test-suite/trunk test-suite
-
-#. Use CMake to configure the test suite in a new directory. You cannot build
- the test suite in the source tree.
-
- .. code-block:: bash
-
- % mkdir test-suite-build
- % cd test-suite-build
- % cmake ../test-suite
-
-#. Build the benchmarks using the Makefiles that CMake generated.
-
-.. code-block:: bash
-
- % make
- Scanning dependencies of target timeit-target
- [ 0%] Building C object tools/CMakeFiles/timeit-target.dir/timeit.c.o
- [ 0%] Linking C executable timeit-target
- [ 0%] Built target timeit-target
- Scanning dependencies of target fpcmp-host
- [ 0%] [TEST_SUITE_HOST_CC] Building host executable fpcmp
- [ 0%] Built target fpcmp-host
- Scanning dependencies of target timeit-host
- [ 0%] [TEST_SUITE_HOST_CC] Building host executable timeit
- [ 0%] Built target timeit-host
-
-
-#. Run the tests with lit:
-
-.. code-block:: bash
-
- % lit -v -j 1 . -o results.json
- -- Testing: 474 tests, 1 threads --
- PASS: test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test (1 of 474)
- ********** TEST 'test-suite :: MultiSource/Applications/ALAC/decode/alacconvert-decode.test' RESULTS **********
- compile_time: 0.2192
- exec_time: 0.0462
- hash: "59620e187c6ac38b36382685ccd2b63b"
- size: 83348
- **********
- PASS: test-suite :: MultiSource/Applications/ALAC/encode/alacconvert-encode.test (2 of 474)
-
-
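The ``results.json`` file written by the lit invocation above can be summarized with the test-suite's ``compare.py`` helper; a minimal sketch, assuming a sibling ``test-suite`` checkout as in the earlier steps:

.. code-block:: bash

   % ../test-suite/utils/compare.py results.json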
-Running the test suite via Makefiles (deprecated)
-=================================================
-
First, all tests are executed within the LLVM object directory tree.
They *are not* executed inside of the LLVM source tree. This is because
the test suite creates temporary files during execution.
@@ -208,7 +60,7 @@ you have the suite checked out and configured, you don't need to do it
again (unless the test code or configure script changes).
Configuring External Tests
---------------------------
+==========================
In order to run the External tests in the ``test-suite`` module, you
must specify *--with-externals*. This must be done during the
@@ -237,8 +89,8 @@ names known to LLVM include:
Others are added from time to time, and can be determined from
``configure``.
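As an illustration only (both paths below are placeholders), the flag is passed to the test-suite ``configure`` script:

.. code-block:: bash

   % /path/to/test-suite/configure --with-externals=/path/to/external/benchmark/sources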
-Running different tests
------------------------
+Running Different Tests
+=======================
In addition to the regular "whole program" tests, the ``test-suite``
module also provides a mechanism for compiling the programs in different
@@ -257,8 +109,8 @@ LLVM research group. They may still be valuable, however, as a guide to
writing your own TEST Makefile for any optimization or analysis passes
that you develop with LLVM.
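For example, assuming the convention that a custom test is described by a ``TEST.<name>.Makefile`` in the test-suite root (the name below is made up), it would be selected like this:

.. code-block:: bash

   % gmake TEST=mypass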
-Generating test output
-----------------------
+Generating Test Output
+======================
There are a number of ways to run the tests and generate output. The
simplest is to run ``gmake`` with no arguments. This will
@@ -283,8 +135,8 @@ running with ``TEST=<type>``). The ``report`` target also generates a file
called ``report.<type>.raw.out`` containing the output of the entire
test run.
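As a concrete illustration of the above, using the ``nightly`` test type:

.. code-block:: bash

   % gmake TEST=nightly report   # the raw output ends up in report.nightly.raw.out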
-Writing custom tests for the test suite
----------------------------------------
+Writing Custom Tests for the test-suite
+=======================================
Assuming you can run the test suite (e.g.,
"``gmake TEST=nightly report``" should work), it is easy to run