h1. Testbed

h2. The S/PHI/nX testbed

The S/PHI/nX testbed allows checking the basic functionality of the compiled programs after [[building]]. It can be found in the sphinx/testbed folder. Running the testbed requires GNU make, which is the default make on most Unix systems.
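
To verify that your default make is GNU make (on some systems, e.g. the BSDs, it may instead be installed as gmake), you can check the version string:

<pre>
# GNU make identifies itself as "GNU Make" in the first line of its output
make --version
</pre>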

h2. Setup

<pre>
cd sphinx/testbed
./setup
./configure --with-sxdir=BUILD/PATH/TO/BE/CHECKED
</pre>

The configure line can be omitted if the build to be checked is ../src (the standard S/PHI/nX user build). When both debug and release mode have been compiled, typical choices for BUILD/PATH/TO/BE/CHECKED are ../debug or ../release.
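
For example, to check a debug build in ../debug (one of the typical paths mentioned above; substitute your own build directory):

<pre>
# ../debug is an example; point --with-sxdir at the build directory you want to check
./configure --with-sxdir=../debug
</pre>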

h2. Running the testbed

The testbed can be run with

<pre>
make level1 level2
</pre>

On multi-processor machines, parallel make (make -j N) is possible, where N is the number of processors. Level 1 checks are typically quick to run. Level 2 checks may be slow, in particular in debug mode. The two levels can be run independently.
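
For example, on a machine with four processors (an illustrative value), both levels can be run in parallel with:

<pre>
# -j 4 assumes four available processors; adjust to your machine
make -j 4 level1 level2
</pre>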

It is also possible to run individual tests via

<pre>
make -C TO_BE_TESTED run
</pre>
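
For example, if one of the tests resides in a directory called some-test (a hypothetical name; substitute the directory of the test you want to run):

<pre>
# "some-test" stands for an actual test directory inside the testbed
make -C some-test run
</pre>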

h2. Reports

Reports are generated as a set of HTML pages in the report/ folder via

<pre>
make reports
</pre>

Parallel make is not possible for this target.

The report can then be viewed with your favorite web browser, e.g.

<pre>
firefox report/index.html
</pre>

Individual test reports are generated via

<pre>
make -C TO_BE_TESTED report
</pre>

The test report is appended to the end of the report/index.html file. Note that, for technical reasons, graphs from previous test failures will be included in the new report if the report/ folder has not been cleaned.
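
One way to avoid stale graphs, assuming the report/ folder contains only generated pages, is to remove it before regenerating the full report:

<pre>
# assumption: report/ holds only generated HTML pages and graphs
rm -rf report
make reports
</pre>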

h2. Failures?

p(. If some tests fail, this need not mean that the code is broken. We have not spent particular effort on developing reliable test metrics. Typically, one should look at the detailed analysis. Differences in the number of steps and small differences in energy may result from hardware- and compilation-dependent numerical noise. A dramatic deterioration of the convergence behavior, however, is a strong hint of a serious problem.

p(. Some tests also fail if the necessary utility programs (e.g. perl, ncdump) are not in the search path.
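
p(. A quick way to check for the tools mentioned above (plain shell, not part of the testbed):

<pre>
# prints the path of each helper tool, or a warning if it is missing from PATH
for tool in perl ncdump; do
    command -v "$tool" || echo "$tool not found in PATH"
done
</pre>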

p(. If in doubt, contact the SPHInX user mailing list sxusers at mpie de after registration (sxusers-subscribe@…).

h2. Cleaning

The testbed can be cleaned with

<pre>
make cleanall
</pre>

Individual tests can be cleaned via

<pre>
make -C TO_BE_TESTED clean
</pre>

This does not clean the corresponding report.
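
For example, to rerun a single test from a clean state and regenerate its report, the commands from the sections above can be combined (TO_BE_TESTED is the placeholder used throughout this page):

<pre>
make -C TO_BE_TESTED clean
make -C TO_BE_TESTED run
make -C TO_BE_TESTED report
</pre>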