Automated Testing System {#dev_guides__tests}
======================================
@section testmanual_1 Introduction

This document provides an overview of and practical guidelines for working with the OCCT automated testing system.
Reading this *Introduction* section should be sufficient for OCCT developers to use the test system
to check the modifications they implement in OCCT for non-regression. The other sections provide
a more in-depth description of the test system, required for modifying existing tests and adding new test cases.
@subsection testmanual_1_1 Basic Information

The OCCT automated testing system is organized around DRAW Test Harness [1],
a console application based on the Tcl interpreter (a scripting language) extended by OCCT-related commands.
Standard OCCT tests are included with the OCCT sources and are located in the subdirectory *tests*
of the OCCT root folder. Other test folders can be included in the scope of the test system,
e.g. for testing applications based on OCCT.
Logically the tests are organized on three levels:

* Group: a set of related test grids, usually relating to some part of OCCT functionality (e.g. *blend*);
* Grid: a set of test cases within a group, usually aimed at testing a particular aspect or mode of execution of the relevant functionality (e.g. *buildevol*);
* Test case: a script implementing an individual test (e.g. *K4*).
Some tests involve data files (typically CAD models) which are located separately
and are not included with the OCCT code. The archive with publicly available
test data files should be downloaded and installed independently of the OCCT code from http://dev.opencascade.org.
@subsection testmanual_1_2 Intended Use of Automatic Tests

Each modification made in the OCCT code must be checked for non-regression
by running the whole set of tests. The developer who makes the modification
is responsible for running the tests available to him or her and ensuring their non-regression.
Note that many tests are based on data files that are confidential and thus available only at OPEN CASCADE.
Official certification testing of the changes before their integration to the master branch
of the official OCCT Git repository (and finally to the official release) is therefore in any case performed by OPEN CASCADE.
Each new non-trivial modification (improvement, bug fix, new feature) of OCCT
should be accompanied by a relevant test case suitable for verifying that modification.
This test case is to be added by the developer who provides the modification.
If a modification affects the result of some existing test case(s),
either the modification should be corrected (if it causes a regression)
or the affected test cases should be updated to account for the modification.

The modifications made in the OCCT code and the related test scripts
should be included in the same integration to the master branch.
@subsection testmanual_1_3 Quick Start

@subsubsection testmanual_1_3_1 Setup

Before running tests, make sure to define the environment variable CSF_TestDataPath
pointing to the directory containing the test data files.
(The publicly available data files can be downloaded
from http://dev.opencascade.org separately from the OCCT code.)
The recommended way to do this is to add a file *DrawAppliInit*
in the directory which is current at the moment of starting DRAWEXE (normally it is $CASROOT).
This file is evaluated automatically at DRAW start. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
set env(CSF_TestDataPath) d:/occt/tests_data
return ;# this is to avoid an echo of the last command above in cout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All tests are run from the DRAW command prompt; thus, first run *draw.tcl* or *draw.sh* to start DRAW.
@subsubsection testmanual_1_3_2 Running Tests

To run all tests, type the command *testgrid* followed by the path
to a new directory where the results will be saved.
It is recommended that this directory be new or empty;
use the option -overwrite to allow writing logs in an existing non-empty directory. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[]> testgrid d:/occt/results-2012-02-27
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If an empty string is given as the log directory name, the name will be generated automatically
using the current date and time, prefixed by *results_*. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[]> testgrid ""
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To run only some group or a grid of tests,
give additional arguments indicating the group and (if needed) the grid. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[]> testgrid d:/occt/results-2012-02-28 blend simple
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As the tests progress, the result of each test case is reported.
At the end of the log a summary of the test cases is output,
including the list of detected regressions and improvements, if any. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
CASE 3rdparty export A1: OK
CASE pipe standard B1: BAD (known problem)
CASE pipe standard C1: OK
Total cases: 208 BAD, 31 SKIPPED, 3 IMPROVEMENT, 1791 OK
Elapsed time: 1 Hours 14 Minutes 33.7384512019 Seconds
Detailed logs are saved in D:/occt/results_2012-06-04T0919
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The tests are considered non-regressive if only OK, BAD (i.e. known problem),
and SKIPPED (i.e. not executed, e.g. because of a missing data file) statuses are reported.
See the <a href="#testmanual_3_4">Interpretation of Test Results</a> chapter for details.
The detailed logs of the test runs are saved in the specified directory and its sub-directories.
The cumulative HTML report *summary.html* provides links to the reports on each test case.
An additional report *TESTS-summary.xml* is output in JUnit-style XML format
that can be used for integration with Jenkins or another continuous integration system.
Type *help testgrid* in the DRAW prompt to get help on additional options supported by the *testgrid* command:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[3]> help testgrid
testgrid: Run all tests, or specified group, or one grid
Use: testgrid logdir [group [grid]] [options...]
-verbose {0-2}: verbose level, 0 by default, can be set to 1 or 2
-parallel N: run in parallel with up to N processes (default 0)
-refresh N: save summary logs every N seconds (default 60)
-overwrite: force writing logs in existing non-empty directory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
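
For example, the *-parallel* option listed above can be used to shorten the total run time of the whole suite (a sketch; the results directory name and process count are illustrative):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[4]> testgrid d:/occt/results -parallel 8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A sensible value of N is the number of CPU cores available on the machine.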
@subsubsection testmanual_1_3_3 Running a Single Test

To run a single test, type the command *test* followed by the names of the group, grid, and test case. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[1]> test blend simple A1
CASE blend simple A1: OK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that normally the intermediate output of the script is not shown.
To see intermediate commands and their output, type the command *decho on*
before running the test case. (Type *decho off* to disable echoing when it is not needed.)
The detailed log of the test can also be obtained after the test execution by the command *dlog get*.
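
For instance, a session inspecting what a test actually executes might look as follows (a sketch; the test name is illustrative and the echoed output depends on the test):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[1]> decho on               ;# echo each command of the test script
Draw[2]> test blend simple A1
Draw[3]> decho off              ;# disable echoing again
Draw[4]> dlog get               ;# print the detailed log of the last run
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~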
@section testmanual_2 Organization of Test Scripts

@subsection testmanual_2_1 General Layout

Standard OCCT tests are located in the subdirectory *tests* of the OCCT root folder ($CASROOT).
Additional test folders can be added to the test system
by defining the environment variable CSF_TestScriptsPath.
This should be a list of paths separated by semicolons (*;*) on Windows
or colons (*:*) on Linux and Mac. Upon DRAW launch,
the path to the *tests* sub-folder of OCCT is added at the end of this variable automatically.
Each test folder is expected to contain:
* an optional file *parse.rules* defining patterns for the interpretation of test results, common for all groups in this folder;
* one or several test group directories.

Each group directory contains:

* a file *grids.list* that identifies this test group and defines the list of test grids in it;
* test grids (sub-directories), each containing a set of scripts for test cases and the optional files *cases.list*, *parse.rules*, *begin*, and *end*;
* an optional sub-directory *data*;
* an optional file *parse.rules*;
* optional files *begin* and *end*.
By convention, the names of test groups, grids, and cases should contain no spaces and be lowercase.
The names *begin*, *end*, *data*, *parse.rules*, *grids.list*, and *cases.list* are reserved.
The general layout of test scripts is shown in Figure 1.

@image html /dev_guides/tests/images/tests_image001.png
@image latex /dev_guides/tests/images/tests_image001.png

Figure 1. Layout of the *tests* folder
@subsection testmanual_2_2 Test Groups

@subsubsection testmanual_2_2_1 Group Names

A test folder usually contains several directories representing test groups (Group 1 … Group N).
Each directory contains test grids for a certain OCCT functionality,
and the name of the directory corresponds to that functionality (e.g. *blend*).
@subsubsection testmanual_2_2_2 Group's *grids.list* File

Each test group must contain a file *grids.list*,
which defines an ordered list of the grids in this group.
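
Assuming the usual layout, in which each line gives a grid number followed by a grid name, a *grids.list* file might look like this (the grid names here are illustrative):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
001 basic
002 advanced
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~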
@subsubsection testmanual_2_2_3 Group's *begin* File

The file *begin* is a Tcl script. It is executed before every test in the current group.
Usually it loads the necessary DRAW commands, sets common parameters, and defines
additional Tcl functions used in the test scripts. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
pload TOPTEST    ;# load topological commands
set cpulimit 300 ;# set maximum time allowed for script execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsubsection testmanual_2_2_4 Group's *end* File

The file *end* is a Tcl script. It is executed after every test in the current group.
Usually it checks the results of the script execution, makes a snapshot
of the viewer, and writes *TEST COMPLETED* to the output.
Note: the *TEST COMPLETED* string must be present in the output
to signal that the test has finished without a crash.
See the <a href="#testmanual_3">Creation And Modification Of Tests</a> chapter for more information. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
if { ! [isdraw result] } {
    puts "Error: The result shape can not be built"
}
puts "TEST COMPLETED"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsubsection testmanual_2_2_5 Group's *parse.rules* File

The test group may contain a *parse.rules* file.
This file defines the patterns used for analysing the test execution log
and deciding the status of the test run.
Each line in the file should specify a status (a single word),
followed by a regular expression delimited by slashes (*/*)
that will be matched against the lines of the test output log to check whether they correspond to this status.
The regular expressions support a subset of the Perl regular expression syntax [2].
The rest of the line can contain a comment message,
which will be added to the test report when this status is detected. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
FAILED /\b[Ee]xception\b/ exception
FAILED /\bError\b/ error
SKIPPED /Cannot open file for reading/ data file is missing
SKIPPED /Could not read file .*, abandon/ data file is missing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Lines starting with a *#* character and blank lines are ignored, to allow comments and spacing.
See the <a href="#testmanual_3_4">Interpretation of Test Results</a> chapter for details.

If a line matches several rules, the first one applies.
The rules defined in the grid are checked first, then the rules of the group,
then the rules in the test root directory. This makes it possible to define rules on the grid level
with status IGNORE to ignore messages that would otherwise be treated as errors due to the group-level rules. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
FAILED /\bFaulty\b/ bad shape
IGNORE /^Error [23]d = [\d.-]+/ debug output of blend command
IGNORE /^Tcl Exception: tolerance ang : [\d.-]+/ blend failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsection testmanual_2_3 Test Grids

@subsubsection testmanual_2_3_1 Grid Names

A group folder can have several sub-directories (Grid 1 … Grid N) defining test grids.
Each test grid directory contains a set of related test cases,
and the name of the directory should correspond to its contents.
For example, the *caf* test group contains the grids *basic*, *bugs*, *presentation*, etc.
@subsubsection testmanual_2_3_2 Grid's *begin* File

The file *begin* is a Tcl script. It is executed before every test in the current grid.
Usually it sets variables specific to the current grid. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
set command bopfuse ;# command tested in this grid
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsubsection testmanual_2_3_3 Grid's *end* File

The file *end* is a Tcl script. It is executed after every test in the current grid.
Usually it executes a specific sequence of commands common for all tests in the grid. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
vdump $logdir/${casename}.gif ;# makes a snapshot of the AIS viewer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsubsection testmanual_2_3_4 Grid's *cases.list* File

The grid directory can contain an optional file *cases.list*
defining an alternative location of the test cases.
This file should contain a single line giving the relative path to a collection of test cases.

This option is used to create several grids of tests that use the same data files
and operations but run with different parameters.
The common scripts are usually located in a common
subdirectory of the test group (e.g. *data/simple*).
If the *cases.list* file exists, the grid directory should not contain any test cases.
The specific parameters and the pre- and post-processing commands
for the test execution in this grid should be defined in the *begin* and *end* files.
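
For instance, a grid reusing the shared scripts located in the group's *data/simple* subdirectory would contain a *cases.list* file with the single line:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
../data/simple
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~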
@subsection testmanual_2_4 Test Cases

A test case is a Tcl script which performs some operations using DRAW commands
and produces meaningful messages that can be used to check the validity of the result. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
pcylinder c1 10 20   ;# create first cylinder
pcylinder c2 5 20    ;# create second cylinder
ttranslate c2 5 0 10 ;# translate second cylinder to x,y,z
bsection result c1 c2 ;# create a section of two cylinders
checksection result  ;# will output error message if result is bad
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The test case can have any name except for the reserved names *begin*, *end*, *data*, *cases.list*, and *parse.rules*.
For systematic grids the name is usually a capital English letter followed by a number.
Such naming facilitates a compact representation of the test execution results
in tabular form within the HTML reports.
@subsection testmanual_2_5 Directory *data*

The test group may contain a subdirectory *data*.
Usually it contains the data files used in the tests (BREP, IGES, STEP, etc.)
and/or test scripts shared by different test grids
(in subdirectories, see the <a href="#testmanual_2_3_4">Grid's *cases.list* File</a> chapter).
@section testmanual_3 Creation And Modification Of Tests

This section describes how to add new tests and update existing ones.

@subsection testmanual_3_1 Choosing Group, Grid, and Test Case Name

New tests are usually added in the context of processing bugs.
Such tests should in general be added to the group *bugs*, in the grid
corresponding to the affected OCCT functionality.
New grids can be added as necessary to hold tests on functionality not yet covered by the existing test grids.
The test case name in the *bugs* group should be prefixed by the ID
of the corresponding issue in Mantis (without leading zeroes).
It is recommended to add a suffix providing a hint on the situation being tested.
If more than one test is added for a bug, they should be distinguished by suffixes,
either meaningful ones or just ordinal numbers. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
12345_1
12345_2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the new test corresponds to functionality for which a
specific group of tests exists (e.g. group *mesh* for BRepMesh issues),
this test can be added (or moved later by the OCC team) to that group.
@subsection testmanual_3_2 Adding Data Files Required for a Test

It is advisable to make test scripts self-contained whenever possible,
so that they are usable in environments where data files are not available.
For that purpose, simple geometric objects and shapes can be created using DRAW commands in the test script itself.
If a test requires a data file, it should be put into the subdirectory *data* of the test grid.
Note that when the test is integrated to the master branch,
the OCC team can move the data file to the data files repository,
so as to keep the OCCT sources repository clean from big data files.
When preparing a test script, try to minimize the size of the involved data model.
For instance, if a problem detected on a big shape can be reproduced on a single face
extracted from that shape, use only this face in the test.
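
Such a reduced model can be prepared directly in DRAW; for example (a sketch, where *bigshape.brep* and the face number are hypothetical):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
restore [locate_data_file bigshape.brep] sh ;# load the original shape
explode sh F                                ;# split it into faces sh_1, sh_2, ...
save sh_5 face_for_test.brep                ;# save only the problematic face
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~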
@subsection testmanual_3_3 Implementation of the Script

The test should run the commands necessary to perform the tested operations
in a clean DRAW session. This includes loading the necessary functionality by the *pload* command,
if this is not done by the *begin* script. The messages produced by the commands in the standard output
should include identifiable messages on any discovered problems.
Usually the script represents a set of commands that a person would run interactively
to perform the operation and see its results, with additional comments explaining what happens. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
# Simple test of fusing box and sphere
box b 10 10 10
psphere s 5.
bfuse result b s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Make sure that the file *parse.rules* in the grid or group directory contains
a regular expression to catch the possible messages indicating failure of the test.
For instance, for catching errors reported by the *checkshape* command,
the relevant grids define a rule recognizing its report by the word *Faulty*:

FAILED /\bFaulty\b/ bad shape

For messages generated in the script itself, the most natural way is to use the word *Error* in the message. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
set expected_length 11
if { [expr abs($actual_length - $expected_length)] > 0.001 } {
    puts "Error: The length of the edge should be $expected_length"
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
At the end, the test script should output the *TEST COMPLETED* string
to mark the successful completion of the script.
This is often done by the *end* script of the grid.
When a test script requires a data file, use the Tcl procedure *locate_data_file*
to get the path to the data file, rather than an explicit path.
This allows moving the data file from the OCCT repository
to the data files repository without the need to update the test script. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
stepread [locate_data_file CAROSKI_COUPELLE.step] a *
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a test needs to produce snapshots or other artifacts,
use the Tcl variable *logdir* as the location where such files should be put.
The command *testgrid* sets this variable to the subdirectory of the results folder
corresponding to the grid. The command *test* sets it to $CASROOT/tmp unless it is already defined.
Use the Tcl variable *casename* to prefix all files produced by the test;
this variable is set to the name of the test case. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
xwd $logdir/${casename}.gif
vdisplay result; vfit
vdump $logdir/${casename}-axo.gif
vfront; vfit
vdump $logdir/${casename}-front.gif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@subsection testmanual_3_4 Interpretation of Test Results

The result of a test is evaluated by checking its output against the patterns
defined in the *parse.rules* files of the grid and group.
The OCCT test system recognizes five statuses of test execution:
* SKIPPED: reported if a line matching a SKIPPED pattern is found (prior to any FAILED pattern). This indicates that the test cannot be run in the current environment; the most typical case is the absence of a required data file.
* FAILED: reported if some line matching a pattern with status FAILED is found (unless it is masked by a preceding IGNORE pattern or a TODO statement, see below), or if the message TEST COMPLETED is not found at the end. This indicates that the test produces a bad or unexpected result, and usually highlights a regression.
* BAD: reported if the test script output contains one or several TODO statements and the corresponding number of matching lines in the log. This indicates a known problem (see the <a href="#testmanual_3_5">Marking BAD Cases</a> chapter). The lines matching TODO statements are not checked against other patterns and thus will not cause a FAILED status.
* IMPROVEMENT: reported if the test script output contains a TODO statement for which no corresponding line is found. This is a possible indication of an improvement (a known problem has disappeared).
* OK: reported if none of the above statuses has been assigned. This means the test passed without problems.
Other statuses can be specified in the *parse.rules* files; these will be classified as FAILED.
Before integration of a change to the OCCT repository, all tests should return either the OK or the BAD status.
A new test created for an unsolved problem should return BAD.
A new test created for a fixed problem should return FAILED without the fix, and OK with the fix.
@subsection testmanual_3_5 Marking BAD Cases

If the test produces an invalid result at a certain moment, the corresponding bug
should be created in the OCCT issue tracker [3], and the problem should be marked as TODO in the test script.
The following statement should be added to such a test script:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
puts "TODO BugNumber ListOfPlatforms: RegularExpression"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* BugNumber is the ID of the bug in the tracker, e.g. #12345;
* ListOfPlatforms is a list of platforms on which the bug is reproduced (e.g. Mandriva2008, Windows, or All).

*Note: the platform name is custom for the OCCT test system;*
*it can be consulted as the value of the environment variable os_type defined in DRAW.* Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Draw[2]> puts $env(os_type)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* RegularExpression is a regular expression which should be matched against the line indicating the problem in the script output. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
puts "TODO #22622 Mandriva2008: Abort .* an exception was raised"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The parser checks the output of the test, and if an output line matches
the RegularExpression, the test will be assigned the BAD status instead of FAILED.
For each output line matching an error expression, a separate TODO line
must be added to mark the test as BAD.
If not all the TODO statements are found in the test log,
the test will be considered a possible improvement.
To mark the test as BAD for an incomplete case
(when the final TEST COMPLETED message is missing),
the expression *TEST INCOMPLETE* should be used instead of the regular expression. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
puts "TODO OCC22817 All: exception.+There are no suitable edges"
puts "TODO OCC22817 All: \\*\\* Exception \\*\\*"
puts "TODO OCC22817 All: TEST INCOMPLETE"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@section testmanual_4 Extended Use

@subsection testmanual_4_1 Running Tests on Older Versions of OCCT

Sometimes it might be necessary to run tests on previous versions of OCCT (up to 6.5.3)
that do not include this test system. This can be done by adding the DRAW configuration file *DrawAppliInit*
in the directory which is current at the moment of DRAW startup,
to load the test commands and define the necessary environment. Example
(assuming that d:/occt contains an up-to-date version of the OCCT sources
with tests, and the test data archive is unpacked to d:/test-data):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
set env(CASROOT) d:/occt
set env(CSF_TestScriptsPath) $env(CASROOT)/tests
source $env(CASROOT)/src/DrawResources/TestCommands.tcl
set env(CSF_TestDataPath) d:/test-data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Note that on older versions of OCCT the tests are run in compatibility mode,
and not all the output of the test commands can be captured;
this can lead to the absence of some error messages (which can then be reported as an improvement).
@subsection testmanual_4_2 Adding Custom Tests

You can extend the test system by adding your own tests.
For that, add the paths to the directory where these tests are located,
and to the additional data directory(ies), to the environment variables CSF_TestScriptsPath and CSF_TestDataPath.
The recommended way of doing this is using the DRAW configuration file *DrawAppliInit*
located in the directory which is current at the moment of DRAW startup.

Use the Tcl command *_path_separator* to insert the platform-dependent separator in the path list. Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
set env(CSF_TestScriptsPath) \
  $env(CSF_TestScriptsPath)[_path_separator]d:/MyOCCTProject/tests
set env(CSF_TestDataPath) \
  d:/occt/test-data[_path_separator]d:/MyOCCTProject/tests
return ;# this is to avoid an echo of the last command above in cout
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@section testmanual_5 References

-# DRAW Test Harness User's Guide
-# Perl regular expressions: http://perldoc.perl.org/perlre.html
-# OCCT MantisBT issue tracker: http://tracker.dev.opencascade.org