Diffstat (limited to 'docs/TestingGuide.html')
-rw-r--r-- | docs/TestingGuide.html | 813 |
1 files changed, 813 insertions, 0 deletions
diff --git a/docs/TestingGuide.html b/docs/TestingGuide.html new file mode 100644 index 0000000..cb88037 --- /dev/null +++ b/docs/TestingGuide.html @@ -0,0 +1,813 @@ +<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" + "http://www.w3.org/TR/html4/strict.dtd"> +<html> +<head> + <title>LLVM Test Suite Guide</title> + <link rel="stylesheet" href="llvm.css" type="text/css"> +</head> +<body> + +<div class="doc_title"> + LLVM Test Suite Guide +</div> + +<ol> + <li><a href="#overview">Overview</a></li> + <li><a href="#Requirements">Requirements</a></li> + <li><a href="#quick">Quick Start</a></li> + <li><a href="#org">LLVM Test Suite Organization</a> + <ul> + <li><a href="#codefragments">Code Fragments</a></li> + <li><a href="#wholeprograms">Whole Programs</a></li> + </ul> + </li> + <li><a href="#tree">LLVM Test Suite Tree</a></li> + <li><a href="#dgstructure">DejaGNU Structure</a></li> + <li><a href="#progstructure"><tt>llvm-test</tt> Structure</a></li> + <li><a href="#run">Running the LLVM Tests</a> + <ul> + <li><a href="#customtest">Writing custom tests for llvm-test</a></li> + </ul> + </li> + <li><a href="#nightly">Running the nightly tester</a></li> +</ol> + +<div class="doc_author"> + <p>Written by John T. Criswell, <a + href="http://llvm.x10sys.com/rspencer">Reid Spencer</a>, and Tanya Lattner</p> +</div> + +<!--=========================================================================--> +<div class="doc_section"><a name="overview">Overview</a></div> +<!--=========================================================================--> + +<div class="doc_text"> + +<p>This document is the reference manual for the LLVM test suite. It documents +the structure of the LLVM test suite, the tools needed to use it, and how to add +and run tests.</p> + +</div> + +<!--=========================================================================--> +<div class="doc_section"><a name="Requirements">Requirements</a></div> +<!--=========================================================================--> + +<div class="doc_text"> + +<p>In order to use the LLVM test suite, you will need all of the software +required to build LLVM, plus the following:</p> + +<dl> +<dt><a href="http://www.gnu.org/software/dejagnu/">DejaGNU</a></dt> +<dd>The Feature and Regressions tests are organized and run by DejaGNU.</dd> +<dt><a href="http://expect.nist.gov/">Expect</a></dt> +<dd>Expect is required by DejaGNU.</dd> +<dt><a href="http://www.tcl.tk/software/tcltk/">tcl</a></dt> +<dd>Tcl is required by DejaGNU. </dd> + +<dt><a href="http://www.netlib.org/f2c">F2C</a></dt> +<dd>For now, LLVM does not have a Fortran front-end, but using F2C, we can run +Fortran benchmarks. F2C support must be enabled via <tt>configure</tt> if not +installed in a standard place. F2C requires three items: the <tt>f2c</tt> +executable, <tt>f2c.h</tt> to compile the generated code, and <tt>libf2c.a</tt> +to link generated code. By default, given an F2C directory <tt>$DIR</tt>, the +configure script will search <tt>$DIR/bin</tt> for <tt>f2c</tt>, +<tt>$DIR/include</tt> for <tt>f2c.h</tt>, and <tt>$DIR/lib</tt> for +<tt>libf2c.a</tt>. The default <tt>$DIR</tt> values are: <tt>/usr</tt>, +<tt>/usr/local</tt>, <tt>/sw</tt>, and <tt>/opt</tt>. If you installed F2C in a +different location, you must tell <tt>configure</tt>: + +<ul> +<li><tt>./configure --with-f2c=$DIR</tt><br> +This will specify a new <tt>$DIR</tt> for the above-described search +process. 
This will only work if the binary, header, and library are in their +respective subdirectories of <tt>$DIR</tt>.</li> + +<li><tt>./configure --with-f2c-bin=/binary/path --with-f2c-inc=/include/path +--with-f2c-lib=/lib/path</tt><br> +This allows you to specify the F2C components separately. Note: if you choose +this route, you MUST specify all three components, and you need to only specify +<em>directories</em> where the files are located; do NOT include the +filenames themselves on the <tt>configure</tt> line.</li> +</ul></dd> +</dl> + +<p>Darwin (Mac OS X) developers can simplify the installation of Expect and tcl +by using fink. <tt>fink install expect</tt> will install both. Alternatively, +Darwinports users can use <tt>sudo port install expect</tt> to install Expect +and tcl.</p> + +</div> + +<!--=========================================================================--> +<div class="doc_section"><a name="quick">Quick Start</a></div> +<!--=========================================================================--> + +<div class="doc_text"> + + <p>The tests are located in two separate Subversion modules. The basic feature + and regression tests are in the main "llvm" module under the directory + <tt>llvm/test</tt>. A more comprehensive test suite that includes whole +programs in C and C++ is in the <tt>test-suite</tt> module. This module should +be checked out to the <tt>llvm/projects</tt> directory as llvm-test (for +historical purpose). When you <tt>configure</tt> the <tt>llvm</tt> module, +the <tt>llvm-test</tt> directory will be automatically configured. +Alternatively, you can configure the <tt>test-suite</tt> module manually.</p> +<p>To run all of the simple tests in LLVM using DejaGNU, use the master Makefile + in the <tt>llvm/test</tt> directory:</p> +<pre> +% gmake -C llvm/test +</pre> +or<br> +<pre> +% gmake check +</pre> + +<p>To run only a subdirectory of tests in llvm/test using DejaGNU (ie. +Regression/Transforms), just set the TESTSUITE variable to the path of the +subdirectory (relative to <tt>llvm/test</tt>):</p> +<pre> +% gmake -C llvm/test TESTSUITE=Regression/Transforms +</pre> + +<p><b>Note: If you are running the tests with <tt>objdir != subdir</tt>, you +must have run the complete testsuite before you can specify a +subdirectory.</b></p> + +<p>To run the comprehensive test suite (tests that compile and execute whole +programs), run the <tt>llvm-test</tt> tests:</p> + +<pre> +% cd llvm/projects +% svn co http://llvm.org/svn/llvm-project/test-suite/trunk llvm-test +% cd llvm-test +% ./configure --with-llvmsrc=$LLVM_SRC_ROOT --with-llvmobj=$LLVM_OBJ_ROOT +% gmake +</pre> + +</div> + +<!--=========================================================================--> +<div class="doc_section"><a name="org">LLVM Test Suite Organization</a></div> +<!--=========================================================================--> + +<div class="doc_text"> + +<p>The LLVM test suite contains two major categories of tests: code +fragments and whole programs. Code fragments are in the <tt>llvm</tt> module +under the <tt>llvm/test</tt> directory. 
The whole programs +test suite is in the <tt>llvm-test</tt> module under the main directory.</p> + +</div> + +<!-- _______________________________________________________________________ --> +<div class="doc_subsection"><a name="codefragments">Code Fragments</a></div> +<!-- _______________________________________________________________________ --> + +<div class="doc_text"> + +<p>Code fragments are small pieces of code that test a specific feature of LLVM +or trigger a specific bug in LLVM. They are usually written in LLVM assembly +language, but can be written in other languages if the test targets a particular +language front end.</p> + +<p>Code fragments are not complete programs, and they are never executed to +determine correct behavior.</p> + +<p>These code fragment tests are located in the <tt>llvm/test/Features</tt> and +<tt>llvm/test/Regression</tt> directories.</p> + +</div> + +<!-- _______________________________________________________________________ --> +<div class="doc_subsection"><a name="wholeprograms">Whole Programs</a></div> +<!-- _______________________________________________________________________ --> + +<div class="doc_text"> + +<p>Whole Programs are pieces of code which can be compiled and linked into a +stand-alone program that can be executed. These programs are generally written +in high level languages such as C or C++, but sometimes they are written +straight in LLVM assembly.</p> + +<p>These programs are compiled and then executed using several different +methods (native compiler, LLVM C backend, LLVM JIT, LLVM native code generation, +etc). The output of these programs is compared to ensure that LLVM is compiling +the program correctly.</p> + +<p>In addition to compiling and executing programs, whole program tests serve as +a way of benchmarking LLVM performance, both in terms of the efficiency of the +programs generated as well as the speed with which LLVM compiles, optimizes, and +generates code.</p> + +<p>All "whole program" tests are located in the <tt>test-suite</tt> Subversion +module.</p> + +</div> + +<!--=========================================================================--> +<div class="doc_section"><a name="tree">LLVM Test Suite Tree</a></div> +<!--=========================================================================--> + +<div class="doc_text"> + +<p>Each type of test in the LLVM test suite has its own directory. The major +subtrees of the test suite directory tree are as follows:</p> + +<ul> + <li><tt>llvm/test</tt> + <p>This directory contains a large array of small tests + that exercise various features of LLVM and to ensure that regressions do not + occur. The directory is broken into several sub-directories, each focused on + a particular area of LLVM. 
A few of the important ones are:<ul> + <li><tt>Analysis</tt>: checks Analysis passes.</li> + <li><tt>Archive</tt>: checks the Archive library.</li> + <li><tt>Assembler</tt>: checks Assembly reader/writer functionality.</li> + <li><tt>Bitcode</tt>: checks Bitcode reader/writer functionality.</li> + <li><tt>CodeGen</tt>: checks code generation and each target.</li> + <li><tt>Features</tt>: checks various features of the LLVM language.</li> + <li><tt>Linker</tt>: tests bitcode linking.</li> + <li><tt>Transforms</tt>: tests each of the scalar, IPO, and utility + transforms to ensure they make the right transformations.</li> + <li><tt>Verifier</tt>: tests the IR verifier.</li> + </ul></p> + <p>Typically when a bug is found in LLVM, a regression test containing + just enough code to reproduce the problem should be written and placed + somewhere underneath this directory. In most cases, this will be a small + piece of LLVM assembly language code, often distilled from an actual + application or benchmark.</p></li> + +<li><tt>test-suite</tt> +<p>The <tt>test-suite</tt> module contains programs that can be compiled +with LLVM and executed. These programs are compiled using the native compiler +and various LLVM backends. The output from the program compiled with the +native compiler is assumed correct; the results from the other programs are +compared to the native program output and pass if they match.</p> + +<p>In addition for testing correctness, the <tt>llvm-test</tt> directory also +performs timing tests of various LLVM optimizations. It also records +compilation times for the compilers and the JIT. This information can be +used to compare the effectiveness of LLVM's optimizations and code +generation.</p></li> + +<li><tt>llvm-test/SingleSource</tt> +<p>The SingleSource directory contains test programs that are only a single +source file in size. These are usually small benchmark programs or small +programs that calculate a particular value. Several such programs are grouped +together in each directory.</p></li> + +<li><tt>llvm-test/MultiSource</tt> +<p>The MultiSource directory contains subdirectories which contain entire +programs with multiple source files. Large benchmarks and whole applications +go here.</p></li> + +<li><tt>llvm-test/External</tt> +<p>The External directory contains Makefiles for building code that is external +to (i.e., not distributed with) LLVM. The most prominent members of this +directory are the SPEC 95 and SPEC 2000 benchmark suites. The presence and +location of these external programs is configured by the llvm-test +<tt>configure</tt> script.</p></li> + +</ul> + +</div> +<!--=========================================================================--> +<div class="doc_section"><a name="dgstructure">DejaGNU Structure</a></div> +<!--=========================================================================--> +<div class="doc_text"> + <p>The LLVM test suite is partially driven by DejaGNU and partially driven by + GNU Make. Specifically, the Features and Regression tests are all driven by + DejaGNU. The <tt>llvm-test</tt> module is currently driven by a set of + Makefiles.</p> + + <p>The DejaGNU structure is very simple, but does require some information to + be set. This information is gathered via <tt>configure</tt> and is written + to a file, <tt>site.exp</tt> in <tt>llvm/test</tt>. The <tt>llvm/test</tt> + Makefile does this work for you.</p> + + <p>In order for DejaGNU to work, each directory of tests must have a + <tt>dg.exp</tt> file. 
DejaGNU looks for this file to determine how to run the
  tests. This file is just a Tcl script and it can do anything you want, but
  we've standardized it for the LLVM regression tests. It simply loads a Tcl
  library (<tt>test/lib/llvm.exp</tt>) and calls the <tt>llvm_runtests</tt>
  function defined in that library with a list of file names to run. The names
  are obtained by using Tcl's glob command. Any directory that contains only
  directories does not need the <tt>dg.exp</tt> file.</p>

  <p>The <tt>llvm_runtests</tt> function looks at each file that is passed to
  it and gathers together any lines that match "RUN:". These are the "RUN" lines
  that specify how the test is to be run. So, each test script must contain
  RUN lines if it is to do anything. If there are no RUN lines, the
  <tt>llvm_runtests</tt> function will issue an error and the test will
  fail.</p>

  <p>RUN lines are specified in the comments of the test program using the
  keyword <tt>RUN</tt> followed by a colon, and lastly the command (pipeline)
  to execute. Together, these lines form the "script" that
  <tt>llvm_runtests</tt> executes to run the test case. The syntax of the
  RUN lines is similar to a shell's syntax for pipelines, including I/O
  redirection and variable substitution. However, even though these lines
  may <i>look</i> like a shell script, they are not. RUN lines are interpreted
  directly by the Tcl <tt>exec</tt> command. They are never executed by a
  shell. Consequently, the syntax differs from normal shell script syntax in a
  few ways. You can specify as many RUN lines as needed.</p>

  <p>Each RUN line is executed on its own, distinct from other lines, unless
  its last character is <tt>\</tt>. This continuation character causes the RUN
  line to be concatenated with the next one. In this way you can build up long
  pipelines of commands without creating huge line lengths. The lines ending in
  <tt>\</tt> are concatenated until a RUN line that doesn't end in <tt>\</tt> is
  found. This concatenated set of RUN lines then constitutes one execution.
  Tcl will substitute variables and arrange for the pipeline to be executed. If
  any process in the pipeline fails, the entire line (and test case) fails too.
  </p>

  <p>Below is an example of legal RUN lines in a <tt>.ll</tt> file:</p>
  <pre>
  ; RUN: llvm-as < %s | llvm-dis > %t1
  ; RUN: llvm-dis < %s.bc-13 > %t2
  ; RUN: diff %t1 %t2
  </pre>

  <p>As with a Unix shell, the RUN: lines permit pipelines and I/O redirection
  to be used. However, the usage is slightly different from that of Bash. To
  check what's legal, see the documentation for the
  <a href="http://www.tcl.tk/man/tcl8.5/TclCmd/exec.htm#M2">Tcl exec</a>
  command and the
  <a href="http://www.tcl.tk/man/tcl8.5/tutorial/Tcl26.html">tutorial</a>.
  The major differences are:</p>
  <ul>
    <li>You can't do <tt>2>&1</tt>. That will cause Tcl to write to a
    file named <tt>&1</tt>. Usually this is done to get stderr to go through
    a pipe. You can do that in Tcl with <tt>|&</tt>, so replace the idiom
    <tt>... 2>&1 | grep</tt> with <tt>... |& grep</tt>.</li>
    <li>You can only redirect to a file, not to another descriptor and not from
    a here document.</li>
    <li>Tcl supports redirecting to open files with the @ syntax, but you
    shouldn't use that here.</li>
  </ul>
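  <p>For instance, a test that needs a longer pipeline can use the <tt>\</tt>
  continuation described above. The sketch below is only illustrative: the
  particular passes and the pattern being grepped for are placeholders, not
  part of any real test:</p>
  <pre>
  ; RUN: llvm-as < %s | opt -instcombine -simplifycfg | \
  ; RUN:   llvm-dis | grep ret
  </pre>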
  <p>There are some quoting rules that you must pay attention to when writing
  your RUN lines. In general nothing needs to be quoted. Tcl won't strip off any
  ' or " so they will get passed to the invoked program. For example:</p>
  <pre>
  ... | grep 'find this string'
  </pre>
  <p>This will fail because the ' characters are passed to grep. This would
  instruct grep to look for <tt>'find</tt> in the files <tt>this</tt> and
  <tt>string'</tt>. To avoid this, use curly braces to tell Tcl that it should
  treat everything enclosed as one value. So our example would become:</p>
  <pre>
  ... | grep {find this string}
  </pre>
  <p>Additionally, the characters <tt>[</tt> and <tt>]</tt> are treated
  specially by Tcl. They tell Tcl to interpret the content as a command to
  execute. Since these characters are often used in regular expressions, this
  can have disastrous results and cause the entire test run in a directory to
  fail. For example, a common idiom is to look for some basic block number:</p>
  <pre>
  ... | grep bb[2-8]
  </pre>
  <p>This, however, will cause Tcl to fail because it's going to try to execute
  a program named "2-8". Instead, what you want is this:</p>
  <pre>
  ... | grep {bb\[2-8\]}
  </pre>
  <p>Finally, if you need to pass the <tt>\</tt> character down to a program,
  then it must be doubled. This is another Tcl special character. So, suppose
  you had:</p>
  <pre>
  ... | grep 'i32\*'
  </pre>
  <p>This will fail to match what you want (a pointer to i32). First, the
  <tt>'</tt> characters do not get stripped off. Second, the <tt>\</tt> gets
  stripped off by Tcl, so what grep sees is <tt>'i32*'</tt>. That's not likely
  to match anything. To resolve this you must use <tt>\\</tt> and the
  <tt>{}</tt>, like this:</p>
  <pre>
  ... | grep {i32\\*}
  </pre>

</div>

<!-- _______________________________________________________________________ -->
<div class="doc_subsection"><a name="dgvars">Vars And Substitutions</a></div>
<div class="doc_text">
  <p>Within a RUN line there are a number of substitutions that are permitted.
  In general, any Tcl variable that is available in the <tt>substitute</tt>
  function (in <tt>test/lib/llvm.exp</tt>) can be substituted into a RUN line.
  To make a substitution, just write the variable's name preceded by a
  <tt>$</tt>. Additionally, for compatibility with previous versions of the
  test library, certain names can be accessed with an alternate syntax: a %
  prefix. These alternates are deprecated and may go away in a future
  version.</p>
  <p>Here are the available variable names. The alternate syntax is listed in
  parentheses.</p>
  <dl style="margin-left: 25px">
    <dt><b>$test</b> (%s)</dt>
    <dd>The full path to the test case's source. This is suitable for passing
    on the command line as the input to an LLVM tool.</dd>
    <dt><b>$srcdir</b></dt>
    <dd>The source directory from where the "<tt>make check</tt>" was run.</dd>
    <dt><b>objdir</b></dt>
    <dd>The object directory that corresponds to the <tt>$srcdir</tt>.</dd>
    <dt><b>subdir</b></dt>
    <dd>A partial path from the <tt>test</tt> directory that contains the
    sub-directory that contains the test source being executed.</dd>
    <dt><b>srcroot</b></dt>
    <dd>The root directory of the LLVM src tree.</dd>
    <dt><b>objroot</b></dt>
    <dd>The root directory of the LLVM object tree. This could be the same
    as the srcroot.</dd>
    <dt><b>path</b></dt>
    <dd>The path to the directory that contains the test case source.
    This is
    for locating any supporting files that are not generated by the test, but
    used by the test.</dd>
    <dt><b>tmp</b></dt>
    <dd>The path to a temporary file name that could be used for this test case.
    The file name won't conflict with other test cases. You can append to it if
    you need multiple temporaries. This is useful as the destination of some
    redirected output.</dd>
    <dt><b>llvmlibsdir</b> (%llvmlibsdir)</dt>
    <dd>The directory where the LLVM libraries are located.</dd>
    <dt><b>target_triplet</b> (%target_triplet)</dt>
    <dd>The target triplet that corresponds to the current host machine (the one
    running the test cases). This should probably be called "host".</dd>
    <dt><b>prcontext</b> (%prcontext)</dt>
    <dd>Path to the prcontext Tcl script that prints some context around a
    line that matches a pattern. This isn't strictly necessary, as the test suite
    is run with its PATH altered to include the test/Scripts directory where
    the prcontext script is located. Note that this script is similar to
    <tt>grep -C</tt>, but you should use the <tt>prcontext</tt> script because
    not all platforms support <tt>grep -C</tt>.</dd>
    <dt><b>llvmgcc</b> (%llvmgcc)</dt>
    <dd>The full path to the <tt>llvm-gcc</tt> executable as specified in the
    configured LLVM environment.</dd>
    <dt><b>llvmgxx</b> (%llvmgxx)</dt>
    <dd>The full path to the <tt>llvm-g++</tt> executable as specified in the
    configured LLVM environment.</dd>
    <dt><b>llvmgcc_version</b> (%llvmgcc_version)</dt>
    <dd>The full version number of the <tt>llvm-gcc</tt> executable.</dd>
    <dt><b>llvmgccmajvers</b> (%llvmgccmajvers)</dt>
    <dd>The major version number of the <tt>llvm-gcc</tt> executable.</dd>
    <dt><b>gccpath</b></dt>
    <dd>The full path to the C compiler used to <i>build</i> LLVM. Note that
    this might not be gcc.</dd>
    <dt><b>gxxpath</b></dt>
    <dd>The full path to the C++ compiler used to <i>build</i> LLVM. Note that
    this might not be g++.</dd>
    <dt><b>compile_c</b> (%compile_c)</dt>
    <dd>The full command line used to compile LLVM C source code. This has all
    the configured -I, -D and optimization options.</dd>
    <dt><b>compile_cxx</b> (%compile_cxx)</dt>
    <dd>The full command used to compile LLVM C++ source code. This has
    all the configured -I, -D and optimization options.</dd>
    <dt><b>link</b> (%link)</dt>
    <dd>The full link command used to link LLVM executables. This has all the
    configured -I, -L and -l options.</dd>
    <dt><b>shlibext</b> (%shlibext)</dt>
    <dd>The suffix for the host platform's shared library (dll) files. This
    includes the period as the first character.</dd>
  </dl>
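  <p>For example, a pair of RUN lines using the <tt>$test</tt> and <tt>$tmp</tt>
  substitutions described above might look like the following sketch; the tools
  are real, but the grep pattern is purely illustrative:</p>
  <pre>
  ; RUN: llvm-as < $test > $tmp
  ; RUN: llvm-dis < $tmp | grep {ret i32}
  </pre>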
  <p>To add more variables, two things need to be changed. First, add a line in
  the <tt>test/Makefile</tt> that creates the <tt>site.exp</tt> file. This will
  "set" the variable as a global in the site.exp file. Second, in the
  <tt>test/lib/llvm.exp</tt> file, in the <tt>substitute</tt> proc, add the
  variable name to the list of "global" declarations at the beginning of the
  proc. That's it; the variable can then be used in test scripts.</p>
</div>

<!-- _______________________________________________________________________ -->
<div class="doc_subsection"><a name="dgfeatures">Other Features</a></div>
<div class="doc_text">
  <p>To make RUN line writing easier, there are several shell scripts located
  in the <tt>llvm/test/Scripts</tt> directory. For example:</p>
  <dl>
    <dt><b>ignore</b></dt>
    <dd>This script runs its arguments and then always returns 0. This is useful
    in cases where the test needs to cause a tool to generate an error (e.g. to
    check the error output). However, any program in a pipeline that returns a
    non-zero result will cause the test to fail. This script overcomes that
    issue and nicely documents that the test case is purposefully ignoring the
    result code of the tool.</dd>
    <dt><b>not</b></dt>
    <dd>This script runs its arguments and then inverts the result code returned
    by them. Zero result codes become 1. Non-zero result codes become 0. This is
    useful for inverting the result of a grep. For example, "not grep X" means
    succeed only if you don't find X in the input.</dd>
  </dl>

  <p>Sometimes it is necessary to mark a test case as "expected fail" or XFAIL.
  You can easily mark a test as XFAIL just by including <tt>XFAIL: </tt> on a
  line near the top of the file. This signals that the test case is expected to
  fail. Such test cases are counted separately by DejaGNU. To
  specify an expected fail, use the XFAIL keyword in the comments of the test
  program followed by a colon and one or more regular expressions (separated by
  a comma). The regular expressions allow you to XFAIL the test conditionally
  by host platform. The regular expressions following the : are matched against
  the target triplet or llvmgcc version number for the host machine. If there is
  a match, the test is expected to fail. If not, the test is expected to
  succeed. To XFAIL everywhere, just specify <tt>XFAIL: *</tt>. When matching
  the llvm-gcc version, you can specify the major (e.g. 3) or full version
  (e.g. 3.4) number. Here is an example of an <tt>XFAIL</tt> line:</p>
  <pre>
  ; XFAIL: darwin,sun,llvmgcc4
  </pre>

  <p>To make the output more useful, the <tt>llvm_runtests</tt> function will
  scan the lines of the test case for ones that contain a pattern that matches
  PR[0-9]+. This is the syntax for specifying a PR (Problem Report) number that
  is related to the test case. The number after "PR" specifies the LLVM Bugzilla
  bug number. When a PR number is specified, it will be used in the pass/fail
  reporting. This is useful for quickly getting some context when a test fails.</p>

  <p>Finally, any line that contains "END." will cause the special
  interpretation of lines to terminate. This is generally done right after the
  last RUN: line. This has two side effects: (a) it prevents special
  interpretation of lines that are part of the test program, not the
  instructions to the test case, and (b) it speeds things up for really big test
  cases by avoiding interpretation of the remainder of the file.</p>
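  <p>Putting these features together, the top of a small test case might look
  like the following sketch; the pass, the grep pattern, and the PR number are
  placeholders rather than real values:</p>
  <pre>
  ; RUN: llvm-as < %s | opt -instcombine | llvm-dis | not grep add
  ; PR1234
  ; END.
  </pre>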
</div>

<!--=========================================================================-->
<div class="doc_section"><a name="progstructure"><tt>llvm-test</tt>
Structure</a></div>
<!--=========================================================================-->

<div class="doc_text">

<p>As mentioned previously, the <tt>llvm-test</tt> module provides three types
of tests: MultiSource, SingleSource, and External. Each tree is then subdivided
into several categories, including applications, benchmarks, regression tests,
code that is strange grammatically, etc. These organizations should be
relatively self-explanatory.</p>

<p>In addition to the regular "whole program" tests, the <tt>llvm-test</tt>
module also provides a mechanism for compiling the programs in different ways.
If the variable TEST is defined on the gmake command line, the test system will
include a Makefile named <tt>TEST.<value of TEST variable>.Makefile</tt>.
This Makefile can modify build rules to yield different results.</p>

<p>For example, the LLVM nightly tester uses <tt>TEST.nightly.Makefile</tt> to
create the nightly test reports. To run the nightly tests, run <tt>gmake
TEST=nightly</tt>.</p>

<p>There are several TEST Makefiles available in the tree. Some of them are
designed for internal LLVM research and will not work outside of the LLVM
research group. They may still be valuable, however, as a guide to writing your
own TEST Makefile for any optimization or analysis passes that you develop with
LLVM.</p>

<p>Note that when configuring the <tt>llvm-test</tt> module, you might want to
specify the following configuration options:</p>
<dl>
  <dt><i>--enable-spec2000</i>
  <dt><i>--enable-spec2000=<<tt>directory</tt>></i>
  <dd>
  Enable the use of SPEC2000 when testing LLVM. This is disabled by default
  (unless <tt>configure</tt> finds SPEC2000 installed). By specifying
  <tt>directory</tt>, you can tell configure where to find the SPEC2000
  benchmarks. If <tt>directory</tt> is left unspecified, <tt>configure</tt>
  uses the default value
  <tt>/home/vadve/shared/benchmarks/speccpu2000/benchspec</tt>.
  <p>
  <dt><i>--enable-spec95</i>
  <dt><i>--enable-spec95=<<tt>directory</tt>></i>
  <dd>
  Enable the use of SPEC95 when testing LLVM. It is similar to the
  <i>--enable-spec2000</i> option.
  <p>
  <dt><i>--enable-povray</i>
  <dt><i>--enable-povray=<<tt>directory</tt>></i>
  <dd>
  Enable the use of Povray as an external test. Versions of Povray written
  in C should work. This option is similar to the <i>--enable-spec2000</i>
  option.
</dl>
</div>

<!--=========================================================================-->
<div class="doc_section"><a name="run">Running the LLVM Tests</a></div>
<!--=========================================================================-->

<div class="doc_text">

<p>First, all tests are executed within the LLVM object directory tree. They
<i>are not</i> executed inside of the LLVM source tree. This is because the
test suite creates temporary files during execution.</p>

<p>The master Makefile in <tt>llvm/test</tt> is capable of running only the
DejaGNU driven tests. By default, it will run all of these tests.</p>

<p>To run only the DejaGNU driven tests, run <tt>gmake</tt> at the
command line in <tt>llvm/test</tt>. To run a specific directory of tests, use
the TESTSUITE variable.</p>

<p>For example, to run the Regression tests, type
<tt>gmake TESTSUITE=Regression</tt> in <tt>llvm/test</tt>.</p>

<p>Note that there are no Makefiles in <tt>llvm/test/Features</tt> and
<tt>llvm/test/Regression</tt>. You must use DejaGNU from the <tt>llvm/test</tt>
directory to run them.</p>

<p>To run the <tt>llvm-test</tt> suite, use the following steps:</p>
<ol>
  <li>cd into the llvm/projects directory</li>
  <li>check out the <tt>test-suite</tt> module with:<br/>
  <tt>svn co http://llvm.org/svn/llvm-project/test-suite/trunk llvm-test</tt><br/>
  This will get the test suite into <tt>llvm/projects/llvm-test</tt></li>
  <li>configure the test suite.
You can do this one of two ways: + <ol> + <li>Use the regular llvm configure:<br/> + <tt>cd $LLVM_OBJ_ROOT ; $LLVM_SRC_ROOT/configure</tt><br/> + This will ensure that the <tt>projects/llvm-test</tt> directory is also + properly configured.</li> + <li>Use the <tt>configure</tt> script found in the <tt>llvm-test</tt> source + directory:<br/> + <tt>$LLVM_SRC_ROOT/projects/llvm-test/configure + --with-llvmsrc=$LLVM_SRC_ROOT --with-llvmobj=$LLVM_OBJ_ROOT</tt> + </li> + </ol> + <li>gmake</li> +</ol> +<p>Note that the second and third steps only need to be done once. After you +have the suite checked out and configured, you don't need to do it again (unless +the test code or configure script changes).</p> + +<p>To make a specialized test (use one of the +<tt>llvm-test/TEST.<type>.Makefile</tt>s), just run:<br/> +<tt>gmake TEST=<type> test</tt><br/>For example, you could run the +nightly tester tests using the following commands:</p> + +<pre> + % cd llvm/projects/llvm-test + % gmake TEST=nightly test +</pre> + +<p>Regardless of which test you're running, the results are printed on standard +output and standard error. You can redirect these results to a file if you +choose.</p> + +<p>Some tests are known to fail. Some are bugs that we have not fixed yet; +others are features that we haven't added yet (or may never add). In DejaGNU, +the result for such tests will be XFAIL (eXpected FAILure). In this way, you +can tell the difference between an expected and unexpected failure.</p> + +<p>The tests in <tt>llvm-test</tt> have no such feature at this time. If the +test passes, only warnings and other miscellaneous output will be generated. If +a test fails, a large <program> FAILED message will be displayed. This +will help you separate benign warnings from actual test failures.</p> + +</div> + +<!-- _______________________________________________________________________ --> +<div class="doc_subsection"> +<a name="customtest">Writing custom tests for llvm-test</a></div> +<!-- _______________________________________________________________________ --> + +<div class="doc_text"> + +<p>Assuming you can run llvm-test, (e.g. "<tt>gmake TEST=nightly report</tt>" +should work), it is really easy to run optimizations or code generator +components against every program in the tree, collecting statistics or running +custom checks for correctness. At base, this is how the nightly tester works, +it's just one example of a general framework.</p> + +<p>Lets say that you have an LLVM optimization pass, and you want to see how +many times it triggers. First thing you should do is add an LLVM +<a href="ProgrammersManual.html#Statistic">statistic</a> to your pass, which +will tally counts of things you care about.</p> + +<p>Following this, you can set up a test and a report that collects these and +formats them for easy viewing. This consists of two files, an +"<tt>llvm-test/TEST.XXX.Makefile</tt>" fragment (where XXX is the name of your +test) and an "<tt>llvm-test/TEST.XXX.report</tt>" file that indicates how to +format the output into a table. There are many example reports of various +levels of sophistication included with llvm-test, and the framework is very +general.</p> + +<p>If you are interested in testing an optimization pass, check out the +"libcalls" test as an example. 
It can be run like this:</p>

<div class="doc_code">
<pre>
% cd llvm/projects/llvm-test/MultiSource/Benchmarks  # or some other level
% make TEST=libcalls report
</pre>
</div>

<p>This will do a bunch of stuff, then eventually print a table like this:</p>

<div class="doc_code">
<pre>
Name                                 | total | #exit |
...
FreeBench/analyzer/analyzer          |    51 |     6 |
FreeBench/fourinarow/fourinarow      |     1 |     1 |
FreeBench/neural/neural              |    19 |     9 |
FreeBench/pifft/pifft                |     5 |     3 |
MallocBench/cfrac/cfrac              |     1 |     * |
MallocBench/espresso/espresso        |    52 |    12 |
MallocBench/gs/gs                    |     4 |     * |
Prolangs-C/TimberWolfMC/timberwolfmc |   302 |     * |
Prolangs-C/agrep/agrep               |    33 |    12 |
Prolangs-C/allroots/allroots         |     * |     * |
Prolangs-C/assembler/assembler       |    47 |     * |
Prolangs-C/bison/mybison             |    74 |     * |
...
</pre>
</div>

<p>This is basically grepping the -stats output and displaying it in a table.
You can also use the "TEST=libcalls report.html" target to get the table in HTML
form, and similarly for report.csv and report.tex.</p>

<p>The source for this is in llvm-test/TEST.libcalls.*. The format is pretty
simple: the Makefile indicates how to run the test (in this case,
"<tt>opt -simplify-libcalls -stats</tt>"), and the report contains one line for
each column of the output. The first value is the header for the column and the
second is the regex to grep the output of the command for. There are lots of
example reports that can do fancy stuff.</p>

</div>


<!--=========================================================================-->
<div class="doc_section"><a name="nightly">Running the nightly tester</a></div>
<!--=========================================================================-->

<div class="doc_text">

<p>
The <a href="http://llvm.org/nightlytest/">LLVM Nightly Testers</a>
automatically check out an LLVM tree, build it, run the "nightly"
program test (described above), run all of the feature and regression tests,
delete the checked out tree, and then submit the results to
<a href="http://llvm.org/nightlytest/">http://llvm.org/nightlytest/</a>.
After test results are submitted to
<a href="http://llvm.org/nightlytest/">http://llvm.org/nightlytest/</a>,
they are processed and displayed on the tests page. An email to
<a href="http://lists.cs.uiuc.edu/pipermail/llvm-testresults/">
llvm-testresults@cs.uiuc.edu</a> summarizing the results is also generated.
This testing scheme is designed to ensure that programs don't break and to
keep track of LLVM's progress over time.</p>

<p>If you'd like to set up an instance of the nightly tester to run on your
machine, take a look at the comments at the top of the
<tt>utils/NewNightlyTest.pl</tt> file. If you decide to set up a nightly tester,
please choose a unique nickname and invoke <tt>utils/NewNightlyTest.pl</tt>
with the "-nickname [yournickname]" command line option.</p>

<p>You can create a shell script to encapsulate the running of the script.
The optimized x86 Linux nightly test is run from just such a script:</p>

<div class="doc_code">
<pre>
#!/bin/bash
BASE=/proj/work/llvm/nightlytest
export BUILDDIR=$BASE/build
export WEBDIR=$BASE/testresults
export LLVMGCCDIR=/proj/work/llvm/cfrontend/install
export PATH=/proj/install/bin:$LLVMGCCDIR/bin:$PATH
export LD_LIBRARY_PATH=/proj/install/lib
cd $BASE
cp /proj/work/llvm/llvm/utils/NewNightlyTest.pl .
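# The invocation below can take additional options; for example, adding
#   -submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi
# (the values shown here are the ones discussed after this script) controls
# where the results are submitted.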
nice ./NewNightlyTest.pl -nice -release -verbose -parallel -enable-linscan \
  -nickname NightlyTester -noexternals > output.log 2>&1
</pre>
</div>

<p>It is also possible to specify the location to which your nightly test
results are submitted. You can do this by passing the command line options
"-submit-server [server_address]" and "-submit-script [script_on_server]" to
<tt>utils/NewNightlyTest.pl</tt>. For example, to submit to the llvm.org
nightly test results page, you would invoke the nightly test script with
"-submit-server llvm.org -submit-script /nightlytest/NightlyTestAccept.cgi".
If these options are not specified, the nightly test script sends the results
to the llvm.org nightly test results page.</p>

<p>Take a look at the <tt>NewNightlyTest.pl</tt> file to see what all of the
flags and strings do. If you start running the nightly tests, please let us
know. Thanks!</p>

</div>

<!-- *********************************************************************** -->

<hr>
<address>
  <a href="http://jigsaw.w3.org/css-validator/check/referer"><img
  src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!"></a>
  <a href="http://validator.w3.org/check/referer"><img
  src="http://www.w3.org/Icons/valid-html401" alt="Valid HTML 4.01!"></a>

  John T. Criswell, Reid Spencer, and Tanya Lattner<br>
  <a href="http://llvm.org">The LLVM Compiler Infrastructure</a><br>
  Last modified: $Date$
</address>
</body>
</html>