ba06f8bb 1 Automated Testing System {#occt_dev_guides__tests}
72b7576f 2======================================
3
e5bd0d98 4@tableofcontents
5
72b7576f 6@section testmanual_1 Introduction
7
This document provides OCCT developers and contributors with an overview of the OCCT automatic testing system and practical guidelines for working with it.
9
6d368502 10Reading the Introduction should be sufficient for developers to use the test system to control non-regression of the modifications they implement in OCCT. Other sections provide a more in-depth description of the test system, required for modifying the tests and adding new test cases.
72b7576f 11
12@subsection testmanual_1_1 Basic Information
13
9d99d3c1 14OCCT automatic testing system is organized around @ref occt_user_guides__test_harness "DRAW Test Harness", a console application based on Tcl (a scripting language) interpreter extended by OCCT-related commands.
e5bd0d98 15
504a8968 16Standard OCCT tests are included with OCCT sources and are located in subdirectory *tests* of the OCCT root folder. Other test folders can be included in the test system, e.g. for testing applications based on OCCT.
e5bd0d98 17
504a8968 18The tests are organized in three levels:
72b7576f 19
504a8968 20 * Group: a group of related test grids, usually testing a particular OCCT functionality (e.g. blend);
21 * Grid: a set of test cases within a group, usually aimed at testing some particular aspect or mode of execution of the relevant functionality (e.g. buildevol);
22 * Test case: a script implementing an individual test (e.g. K4).
23
24See <a href="#testmanual_5_1">Test Groups</a> for the current list of available test groups and grids.
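
For example, the script of test case *A1* in grid *simple* of group *blend* is the file <i>$CASROOT/tests/blend/simple/A1</i>, and it can be run in DRAW as follows (see below for details):

~~~~~
Draw[]> test blend simple A1
~~~~~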
72b7576f 25
Some tests involve data files (typically CAD models) which are located separately and are not included with OCCT code. The archive with publicly available test data files should be downloaded and installed independently of OCCT sources (see http://dev.opencascade.org).
72b7576f 27
28@subsection testmanual_1_2 Intended Use of Automatic Tests
29
30Each modification made in OCCT code must be checked for non-regression
by running the whole set of tests. The developer who makes the modification
is responsible for running the tests available to them and ensuring non-regression.
33
6d368502 34Note that many tests are based on data files that are confidential and thus available only at OPEN CASCADE.
35The official certification testing of each change before its integration to master branch of official OCCT Git repository (and finally to the official release) is performed by OPEN CASCADE to ensure non-regression on all existing test cases and supported platforms.
504a8968 36
37Each new non-trivial modification (improvement, bug fix, new feature) in OCCT should be accompanied by a relevant test case suitable for verifying that modification. This test case is to be added by the developer who provides the modification.
72b7576f 38
504a8968 39If a modification affects the result of an existing test case, either the modification should be corrected (if it causes regression) or the affected test cases should be updated to account for the modification.
72b7576f 40
504a8968 41The modifications made in the OCCT code and related test scripts should be included in the same integration to the master branch.
72b7576f 42
43@subsection testmanual_1_3 Quick Start
44
45@subsubsection testmanual_1_3_1 Setup
46
504a8968 47Before running tests, make sure to define environment variable *CSF_TestDataPath* pointing to the directory containing test data files.
504a8968 48
For this it is recommended to add a file *DrawAppliInit* in the directory which is current at the moment of starting DRAWEXE (normally it is the OCCT root directory, <i>$CASROOT</i>). This file is evaluated automatically at DRAW start.
50
51Example (Windows)
72b7576f 52
6d368502 53~~~~~{.tcl}
504a8968 54set env(CSF_TestDataPath) $env(CSF_TestDataPath)\;d:/occt/test-data
6d368502 55~~~~~
504a8968 56
Note that variable *CSF_TestDataPath* is set to its default value at DRAW start, pointing at the folder <i>$CASROOT/data</i>.
In this example, subdirectory <i>d:/occt/test-data</i> is added to this path. Similar code could be used on Linux and Mac OS X, except that on non-Windows platforms the colon ":" should be used as path separator instead of the semicolon ";".
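
For instance, an equivalent line for Linux might look as follows (the path <i>/home/user/occt/test-data</i> is just an illustration):

~~~~~{.tcl}
set env(CSF_TestDataPath) $env(CSF_TestDataPath):/home/user/occt/test-data
~~~~~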
72b7576f 59
936f43da 60All tests are run from DRAW command prompt (run *draw.bat* or *draw.sh* to start it).
72b7576f 61
62@subsubsection testmanual_1_3_2 Running Tests
63
504a8968 64To run all tests, type command *testgrid*
72b7576f 65
66Example:
67
504a8968 68~~~~~
69Draw[]> testgrid
70~~~~~
72b7576f 71
ae3eaf7b 72To run only a subset of test cases, give masks for group, grid, and test case names to be executed.
73Each argument is a list of file masks separated with commas or spaces; by default "*" is assumed.
72b7576f 74
504a8968 75Example:
72b7576f 76
504a8968 77~~~~~
87f42a3f 78Draw[]> testgrid bugs caf,moddata*,xde
504a8968 79~~~~~
72b7576f 80
72b7576f 81As the tests progress, the result of each test case is reported.
504a8968 82At the end of the log a summary of test cases is output,
83including the list of detected regressions and improvements, if any.
72b7576f 84
85
504a8968 86Example:
87
72b7576f 88~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
89 Tests summary
90
91 CASE 3rdparty export A1: OK
92 ...
93 CASE pipe standard B1: BAD (known problem)
94 CASE pipe standard C1: OK
95 No regressions
96 Total cases: 208 BAD, 31 SKIPPED, 3 IMPROVEMENT, 1791 OK
97 Elapsed time: 1 Hours 14 Minutes 33.7384512019 Seconds
98 Detailed logs are saved in D:/occt/results_2012-06-04T0919
99~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
100
504a8968 101The tests are considered as non-regressive if only OK, BAD (i.e. known problem), and SKIPPED (i.e. not executed, typically because of lack of a data file) statuses are reported. See <a href="#testmanual_3_5">Interpretation of test results</a> for details.
72b7576f 102
936f43da 103The results and detailed logs of the tests are saved by default to a new subdirectory of the subdirectory *results* in the current folder, whose name is generated automatically using the current date and time, prefixed by Git branch name (if Git is available and current sources are managed by Git).
If necessary, a non-default output directory can be specified using option <i>-outdir</i> followed by a path to the directory. This directory should be new or empty; use option <i>-overwrite</i> to allow writing results in an existing non-empty directory.
72b7576f 105
504a8968 106Example:
107~~~~~
87f42a3f 108Draw[]> testgrid -outdir d:/occt/last_results -overwrite
504a8968 109~~~~~
ae3eaf7b 110In the output directory, a cumulative HTML report <i>summary.html</i> provides links to reports on each test case. An additional report in JUnit-style XML format can be output for use in Jenkins or other continuous integration system.
504a8968 111
936f43da 112Type <i>help testgrid</i> in DRAW prompt to get help on options supported by *testgrid* command:
72b7576f 113
504a8968 114~~~~~
115Draw[3]> help testgrid
116testgrid: Run all tests, or specified group, or one grid
87f42a3f 117 Use: testgrid [groupmask [gridmask [casemask]]] [options...]
504a8968 118 Allowed options are:
119 -parallel N: run N parallel processes (default is number of CPUs, 0 to disable)
120 -refresh N: save summary logs every N seconds (default 60, minimal 1, 0 to disable)
121 -outdir dirname: set log directory (should be empty or non-existing)
122 -overwrite: force writing logs in existing non-empty directory
123 -xml filename: write XML report for Jenkins (in JUnit-like format)
936f43da 124 -beep: play sound signal at the end of the tests
87f42a3f 125 Groups, grids, and test cases to be executed can be specified by list of file
126 masks, separated by spaces or comma; default is all (*).
504a8968 127~~~~~
72b7576f 128
504a8968 129@subsubsection testmanual_1_3_3 Running a Single Test
130
87f42a3f 131To run a single test, type command *test* followed by names of group, grid, and test case.
72b7576f 132
133Example:
134
135~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
136 Draw[1]> test blend simple A1
137 CASE blend simple A1: OK
138 Draw[2]>
139~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
140
504a8968 141Note that normally an intermediate output of the script is not shown. The detailed log of the test can be obtained after the test execution by running command <i>"dlog get"</i>.
142
143To see intermediate commands and their output during the test execution, add one more argument
144<i>"echo"</i> at the end of the command line. Note that with this option the log is not collected and summary is not produced.
145
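For example, the same test case can be run with immediate echo of commands, or its detailed log can be retrieved after a normal run (a sketch; the test case is the one from the example above, and the output is abbreviated):

~~~~~
Draw[1]> test blend simple A1 -echo
...
Draw[2]> test blend simple A1
CASE blend simple A1: OK
Draw[3]> dlog get
...
~~~~~
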
ae3eaf7b 146Type <i>help test</i> in DRAW prompt to get help on options supported by *test* command:
936f43da 147
148~~~~~
149Draw[3]> help test
150test: Run specified test case
151 Use: test group grid casename [options...]
152 Allowed options are:
153 -echo: all commands and results are echoed immediately,
154 but log is not saved and summary is not produced
155 It is also possible to use "1" instead of "-echo"
156 If echo is OFF, log is stored in memory and only summary
157 is output (the log can be obtained with command "dlog get")
158 -outfile filename: set log file (should be non-existing),
159 it is possible to save log file in text file or
160 in html file(with snapshot), for that "filename"
161 should have ".html" extension
162 -overwrite: force writing log in existing file
163 -beep: play sound signal at the end of the test
164 -errors: show all lines from the log report that are recognized as errors
165 This key will be ignored if the "-echo" key is already set.
166~~~~~
167
504a8968 168@subsubsection testmanual_1_3_4 Creating a New Test
169
170The detailed rules of creation of new tests are given in <a href="#testmanual_3">section 3</a>. The following short description covers the most typical situations:
171
6d368502 172Use prefix <i>bug</i> followed by Mantis issue ID and, if necessary, additional suffixes, for naming the test script, data files, and DRAW commands specific for this test case.
504a8968 173
6d368502 1741. If the test requires C++ code, add it as new DRAW command(s) in one of files in *QABugs* package.
1752. Add script(s) for the test case in the subfolder corresponding to the relevant OCCT module of the group *bugs* <i>($CASROOT/tests/bugs)</i>. See <a href="#testmanual_5_2">the correspondence map</a>.
504a8968 1763. In the test script:
177 * Load all necessary DRAW modules by command *pload*.
 * Use command *locate_data_file* to get a path to the data files used by the test script. (Make sure that this command is not placed inside a catch statement if it is used.)
 * Use DRAW commands to reproduce the situation being tested.
 * Make sure that in case of failure the test produces a message containing the word "Error" or another message recognized by the test system as an error (add new error patterns in file parse.rules if necessary).
 * If the test case reports an error due to an existing problem and the fix is not available, add a @ref testmanual_3_6 "TODO" statement for each error to mark it as a known problem. The TODO statements must be specific enough to match the actually generated messages but not all similar errors.
 * To check the expected output which should be obtained as a result of a test, add a @ref testmanual_3_7 "REQUIRED" statement for each line of output to mark it as required.
 * If the test case produces error messages (matching patterns in parse.rules) which are expected in that test and should not be considered as its failure (e.g. a test for the checkshape command), add a REQUIRED statement for each such error to mark it as required output.
4. If the test uses data file(s) not yet present in the test database, these can be put to a (sub)directory pointed to by the *CSF_TestDataPath* variable for running the test. The files should be attached to the Mantis issue corresponding to the modification being tested.
504a8968 1855. Check that the test case runs as expected (test for fix: OK with the fix, FAILED without the fix; test for existing problem: BAD), and integrate to Git branch created for the issue.
186
187Example:
188
189* Added files:
936f43da 190
504a8968 191~~~~~
git status --short
6d368502 193A tests/bugs/heal/data/bug210_a.brep
194A tests/bugs/heal/data/bug210_b.brep
504a8968 195A tests/bugs/heal/bug210_1
196A tests/bugs/heal/bug210_2
197~~~~~
198
199* Test script
200
6d368502 201~~~~~{.tcl}
504a8968 202puts "OCC210 (case 1): Improve FixShape for touching wires"
203
6d368502 204restore [locate_data_file bug210_a.brep] a
504a8968 205
206fixshape result a 0.01 0.01
207checkshape result
208~~~~~
72b7576f 209
210@section testmanual_2 Organization of Test Scripts
211
212@subsection testmanual_2_1 General Layout
213
214Standard OCCT tests are located in subdirectory tests of the OCCT root folder ($CASROOT).
72b7576f 215
Additional test folders can be added to the test system by defining environment variable *CSF_TestScriptsPath*. This should be a list of paths separated by semicolons (*;*) on Windows
or colons (*:*) on Linux or Mac. Upon DRAW launch, the path to the *tests* subfolder of OCCT is added at the end of this variable automatically.
504a8968 218
219Each test folder is expected to contain:
6d368502 220 * Optional file *parse.rules* defining patterns for interpretation of test results, common for all groups in this folder
72b7576f 221 * One or several test group directories.
222
223Each group directory contains:
224
504a8968 225 * File *grids.list* that identifies this test group and defines list of test grids in it.
226 * Test grids (sub-directories), each containing set of scripts for test cases, and optional files *cases.list*, *parse.rules*, *begin* and *end*.
72b7576f 227 * Optional sub-directory data
72b7576f 228
504a8968 229By convention, names of test groups, grids, and cases should contain no spaces and be lower-case.
230The names *begin, end, data, parse.rules, grids.list* and *cases.list* are reserved.
72b7576f 231
504a8968 232General layout of test scripts is shown in Figure 1.
233
234@image html /dev_guides/tests/images/tests_image001.png "Layout of tests folder"
235@image latex /dev_guides/tests/images/tests_image001.png "Layout of tests folder"
72b7576f 236
72b7576f 237
238@subsection testmanual_2_2 Test Groups
239
240@subsubsection testmanual_2_2_1 Group Names
241
504a8968 242The names of directories of test groups containing systematic test grids correspond to the functionality tested by each group.
243
72b7576f 244Example:
245
504a8968 246~~~~~
72b7576f 247 caf
248 mesh
249 offset
504a8968 250~~~~~
72b7576f 251
504a8968 252Test group *bugs* is used to collect the tests coming from bug reports. Group *demo* collects tests of the test system, DRAW, samples, etc.
72b7576f 253
504a8968 254@subsubsection testmanual_2_2_2 File "grids.list"
255
The test group contains file *grids.list*, which defines an ordered list of grids in this group in the following format:
72b7576f 257
258~~~~~~~~~~~~~~~~~
259001 gridname1
260002 gridname2
261...
262NNN gridnameN
263~~~~~~~~~~~~~~~~~
264
265Example:
266
267~~~~~~~~~~~~~~~~~
268 001 basic
269 002 advanced
270~~~~~~~~~~~~~~~~~
271
504a8968 272@subsubsection testmanual_2_2_3 File "begin"
72b7576f 273
504a8968 274This file is a Tcl script. It is executed before every test in the current group.
72b7576f 275Usually it loads necessary Draw commands, sets common parameters and defines
276additional Tcl functions used in test scripts.
504a8968 277
72b7576f 278Example:
279
280~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
281 pload TOPTEST ;# load topological command
282 set cpulimit 300 ;# set maximum time allowed for script execution
283~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
284
504a8968 285@subsubsection testmanual_2_2_4 File "end"
72b7576f 286
This file is a Tcl script. It is executed after every test in the current group. Usually it checks the results of the script execution, makes a snapshot of the viewer and writes *TEST COMPLETED* to the output.

Note: the *TEST COMPLETED* string should be present in the output to indicate that the test has finished without a crash.
290
291See <a href="#testmanual_3">section 3</a> for more information.
292
293Example:
294~~~~~
72b7576f 295 if { [isdraw result] } {
296 checkshape result
297 } else {
4ee1bdf4 298 puts "Error: The result shape can not be built"
72b7576f 299 }
4ee1bdf4 300 puts "TEST COMPLETED"
504a8968 301~~~~~
302
303@subsubsection testmanual_2_2_5 File "parse.rules"
304
305The test group may contain *parse.rules* file. This file defines patterns used for analysis of the test execution log and deciding the status of the test run.
306
307Each line in the file should specify a status (single word), followed by a regular expression delimited by slashes (*/*) that will be matched against lines in the test output log to check if it corresponds to this status.
308
6d368502 309The regular expressions should follow <a href="http://www.tcl.tk/man/tcl/TclCmd/re_syntax.htm">Tcl syntax</a>, with special exception that "\b" is considered as word limit (Perl-style), in addition to "\y" used in Tcl.
504a8968 310
311The rest of the line can contain a comment message, which will be added to the test report when this status is detected.
72b7576f 312
72b7576f 313Example:
314
504a8968 315~~~~~
6d368502 316 FAILED /\b[Ee]xception\b/ exception
317 FAILED /\bError\b/ error
72b7576f 318 SKIPPED /Cannot open file for reading/ data file is missing
319 SKIPPED /Could not read file .*, abandon/ data file is missing
504a8968 320~~~~~
72b7576f 321
322Lines starting with a *#* character and blank lines are ignored to allow comments and spacing.
72b7576f 323
504a8968 324See <a href="#testmanual_3_5">Interpretation of test results</a> chapter for details.
325
326If a line matches several rules, the first one applies. Rules defined in the grid are checked first, then rules in the group, then rules in the test root directory. This allows defining some rules on the grid level with status *IGNORE* to ignore messages that would otherwise be treated as errors due to the group level rules.
327
72b7576f 328Example:
329
330~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
e5bd0d98 331 FAILED /\\bFaulty\\b/ bad shape
72b7576f 332 IGNORE /^Error [23]d = [\d.-]+/ debug output of blend command
333 IGNORE /^Tcl Exception: tolerance ang : [\d.-]+/ blend failure
334~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
335
504a8968 336@subsubsection testmanual_2_2_6 Directory "data"
337The test group may contain subdirectory *data*, where test scripts shared by different test grids can be put. See also <a href="#testmanual_2_3_5">Directory *data*</a>.
338
72b7576f 339@subsection testmanual_2_3 Test Grids
340
341@subsubsection testmanual_2_3_1 Grid Names
342
504a8968 343The folder of a test group can have several sub-directories (Grid 1… Grid N) defining test grids.
344Each directory contains a set of related test cases. The name of a directory should correspond to its contents.
72b7576f 345
346Example:
347
504a8968 348~~~~~
72b7576f 349caf
350 basic
351 bugs
352 presentation
504a8968 353~~~~~
72b7576f 354
504a8968 355Here *caf* is the name of the test group and *basic*, *bugs*, *presentation*, etc. are the names of grids.
72b7576f 356
504a8968 357@subsubsection testmanual_2_3_2 File "begin"
358
This file is a Tcl script executed before every test in the current grid.

Usually it sets variables specific to the current grid.
504a8968 362
72b7576f 363Example:
364
365~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
366 set command bopfuse ;# command tested in this grid
367~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
368
504a8968 369@subsubsection testmanual_2_3_3 File "end"
370
This file is a Tcl script executed after every test in the current grid.
372
373Usually it executes a specific sequence of commands common for all tests in the grid.
72b7576f 374
72b7576f 375Example:
376
377~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
936f43da 378 vdump $imagedir/${casename}.png ;# makes a snap-shot of AIS viewer
72b7576f 379~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
380
504a8968 381@subsubsection testmanual_2_3_4 File "cases.list"
72b7576f 382
383The grid directory can contain an optional file cases.list
504a8968 384defining an alternative location of the test cases.
385This file should contain a single line defining the relative path to the collection of test cases.
72b7576f 386
387Example:
388
504a8968 389~~~~~
72b7576f 390../data/simple
504a8968 391~~~~~
392
This option is used for creation of several grids of tests with the same data files and operations but performed with differing parameters. The common scripts are usually placed in a common
subdirectory of the test group, <i>data/simple</i> for example.
72b7576f 395
504a8968 396If file *cases.list* exists, the grid directory should not contain any test cases.
72b7576f 397The specific parameters and pre- and post-processing commands
504a8968 398for test execution in this grid should be defined in the files *begin* and *end*.
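
A possible layout of a group using shared test cases could look as follows (all names here are illustrative):

~~~~~
mygroup/
  grid1/
    begin
    end
    cases.list      # contains the single line: ../data/simple
  grid2/
    begin
    end
    cases.list      # contains the single line: ../data/simple
  data/
    simple/
      A1
      A2
~~~~~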
399
400
401@subsubsection testmanual_2_3_5 Directory "data"
402
403The test grid may contain subdirectory *data*, containing data files used in tests (BREP, IGES, STEP, etc.) of this grid.
72b7576f 404
405@subsection testmanual_2_4 Test Cases
406
The test case is a Tcl script, which performs some operations using DRAW commands
and produces meaningful messages that can be used to check the validity of the result.
409
72b7576f 410Example:
411
412~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
413 pcylinder c1 10 20 ;# create first cylinder
414 pcylinder c2 5 20 ;# create second cylinder
415 ttranslate c2 5 0 10 ;# translate second cylinder to x,y,z
416 bsection result c1 c2 ;# create a section of two cylinders
417 checksection result ;# will output error message if result is bad
418~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
419
504a8968 420The test case can have any name (except for the reserved names *begin, end, data, cases.list* and *parse.rules*).
72b7576f 421For systematic grids it is usually a capital English letter followed by a number.
422
423Example:
424
504a8968 425~~~~~
72b7576f 426 A1
427 A2
428 B1
429 B2
504a8968 430~~~~~
72b7576f 431
Such naming facilitates compact representation of test execution results in tabular format within HTML reports.
72b7576f 433
72b7576f 434
435@section testmanual_3 Creation And Modification Of Tests
436
437This section describes how to add new tests and update existing ones.
438
439@subsection testmanual_3_1 Choosing Group, Grid, and Test Case Name
440
New tests are usually added in the course of processing issues in the OCCT Mantis tracker.
72b7576f 442Such tests in general should be added to group bugs, in the grid
504a8968 443corresponding to the affected OCCT functionality. See <a href="#testmanual_5_2">Mapping of OCCT functionality to grid names in group *bugs*</a>.
444
445New grids can be added as necessary to contain tests for the functionality not yet covered by existing test grids.
446The test case name in the bugs group should be prefixed by the ID of the corresponding issue in Mantis (without leading zeroes) with prefix *bug*. It is recommended to add a suffix providing a hint on the tested situation. If more than one test is added for a bug, they should be distinguished by suffixes; either meaningful or just ordinal numbers.
72b7576f 447
448Example:
449
450~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
504a8968 451 bug12345_coaxial
452 bug12345_orthogonal_1
453 bug12345_orthogonal_2
72b7576f 454~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
455
504a8968 456If the new test corresponds to a functionality already covered by the existing systematic test grid (e.g. group *mesh* for *BRepMesh* issues), this test can be added (or moved later by OCC team) to that grid.
72b7576f 457
458@subsection testmanual_3_2 Adding Data Files Required for a Test
459
It is advisable to make test scripts self-contained whenever possible, so that they can be used in environments where data files are not available. For that, simple geometric objects and shapes can be created using DRAW commands in the test script itself.
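
For example, a test that needs only a simple input shape can construct it directly instead of loading a data file (a minimal sketch using standard DRAW commands):

~~~~~{.tcl}
box b 10 10 10   ;# create a simple input shape directly in the script
explode b F      ;# extract its faces as b_1 ... b_6 if only a face is needed
~~~~~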
504a8968 461
ae3eaf7b 462If the test requires a data file, it should be put to the directory listed in environment variable *CSF_TestDataPath*.
936f43da 463Alternatively, it can be put to subdirectory *data* of the test grid.
464It is recommended to prefix the data file with the corresponding issue id prefixed by *bug*, e.g. *bug12345_face1.brep*, to avoid possible conflicts with names of existing data files.
504a8968 465
Note that when the test is integrated to the master branch, the OCC team will move the data file to the data files repository, so as to keep the OCCT sources repository free of data files.
504a8968 467
When you prepare a test script, try to minimize the size of the involved data model. For instance, if a problem detected on a big shape can be reproduced on a single face extracted from that shape, use only that face in the test.
504a8968 469
470
471@subsection testmanual_3_3 Adding new DRAW commands
472
473If the test cannot be implemented using available DRAW commands, consider the following possibilities:
474* If the existing DRAW command can be extended to enable possibility required for a test in a natural way (e.g. by adding an option to activate a specific mode of the algorithm), this way is recommended. This change should be appropriately documented in a relevant Mantis issue.
475* If the new command is needed to access OCCT functionality not exposed to DRAW previously, and this command can be potentially reused (for other tests), it should be added to the package where similar commands are implemented (use *getsource* DRAW command to get the package name). The name and arguments of the new command should be chosen to keep similarity with the existing commands. This change should be documented in a relevant Mantis issue.
476* Otherwise the new command implementing the actions needed for this particular test should be added in *QABugs* package. The command name should be formed by the Mantis issue ID prefixed by *bug*, e.g. *bug12345*.
477
478Note that a DRAW command is expected to return 0 in case of a normal completion, and 1 (Tcl exception) if it is incorrectly used (e.g. a wrong number of input arguments). Thus if the new command needs to report a test error, this should be done by outputting an appropriate error message rather than by returning a non-zero value.
File names must be defined in the test script rather than hard-coded in the DRAW command, and passed to the DRAW command as arguments.
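
For example (a sketch; *mycommand* and the data file name are hypothetical):

~~~~~{.tcl}
# the path is resolved in the script and passed to the command as an argument
mycommand result [locate_data_file bug12345_case.brep]
~~~~~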
504a8968 480
481@subsection testmanual_3_4 Script Implementation
482
483The test should run commands necessary to perform the tested operations, in general assuming a clean DRAW session. The required DRAW modules should be loaded by *pload* command, if it is not done by *begin* script. The messages produced by commands in a standard output should include identifiable messages on the discovered problems if any.
484
485Usually the script represents a set of commands that a person would run interactively to perform the operation and see its results, with additional comments to explain what happens.
486
72b7576f 487Example:
504a8968 488~~~~~
489# Simple test of fusing box and sphere
490box b 10 10 10
491sphere s 5
492bfuse result b s
493checkshape result
494~~~~~
72b7576f 495
504a8968 496Make sure that file *parse.rules* in the grid or group directory contains a regular expression to catch possible messages indicating the failure of the test.
497
498For instance, for catching errors reported by *checkshape* command relevant grids define a rule to recognize its report by the word *Faulty*:
499
500~~~~~
501FAILED /\bFaulty\b/ bad shape
502~~~~~
503
For messages generated in the script itself, it is recommended to include the word 'Error' in the message text.
72b7576f 505
72b7576f 506Example:
507
504a8968 508~~~~~
509set expected_length 11
if { [expr abs($actual_length - $expected_length)] > 0.001 } {
511 puts "Error: The length of the edge should be $expected_length"
512}
513~~~~~
514
515At the end, the test script should output *TEST COMPLETED* string to mark a successful completion of the script. This is often done by the *end* script in the grid.
516
517When the test script requires a data file, use Tcl procedure *locate_data_file* to get a path to it, instead of putting the path explicitly. This will allow easy move of the data file from OCCT sources repository to the data files repository without the need to update the test script.
72b7576f 518
72b7576f 519Example:
520
504a8968 521~~~~~
522stepread [locate_data_file CAROSKI_COUPELLE.step] a *
523~~~~~
524
936f43da 525When the test needs to produce some snapshots or other artefacts, use Tcl variable *imagedir* as the location where such files should be put.
ae3eaf7b 526* Command *testgrid* sets this variable to the subdirectory of the results folder corresponding to the grid.
527* Command *test* by default creates a dedicated temporary directory in the system temporary folder (normally the one specified by environment variable *TempDir*, *TEMP*, or *TMP*) for each execution, and sets *imagedir* to that location.
528
However, if variable *imagedir* is defined on the top level of the Tcl interpreter, command *test* will use it instead of creating a new directory.
936f43da 530
531Use Tcl variable *casename* to prefix all files produced by the test.
532This variable is set to the name of the test case.
ae3eaf7b 533
The test system can recognize an image file (a snapshot) and include it in the HTML log and differences if its name starts with the name of the test case (use variable *casename*), optionally followed by an underscore or dash and an arbitrary suffix.
535
936f43da 536The image format (defined by extension) should be *png*.
72b7576f 537
72b7576f 538Example:
504a8968 539~~~~~
936f43da 540xwd $imagedir/${casename}.png
504a8968 541vdisplay result; vfit
936f43da 542vdump $imagedir/${casename}-axo.png
504a8968 543vfront; vfit
936f43da 544vdump $imagedir/${casename}-front.png
504a8968 545~~~~~
72b7576f 546
504a8968 547would produce:
548~~~~~
549A1.png
550A1-axo.png
551A1-front.png
552~~~~~
72b7576f 553
504a8968 554Note that OCCT must be built with FreeImage support to be able to produce usable images.
72b7576f 555
936f43da 556Other Tcl variables defined during the test execution are:
ae3eaf7b 557- *groupname*: name of the test group;
558- *gridname*: name of the test grid;
559- *dirname*: path to the root directory of the current set of test scripts.
936f43da 560
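These variables can be used, for instance, to output diagnostic messages or to build file names (a minimal illustration):

~~~~~{.tcl}
puts "Running case $casename from grid $gridname of group $groupname (scripts root: $dirname)"
~~~~~
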
504a8968 561In order to ensure that the test works as expected in different environments, observe the following additional rules:
562* Avoid using external commands such as *grep, rm,* etc., as these commands can be absent on another system (e.g. on Windows); use facilities provided by Tcl instead.
* Do not put the call to *locate_data_file* inside a catch statement, as this can prevent correct interpretation of a missing data file by the test system.
6d368502 564* Do not use commands *decho* and *dlog* in the test script, to avoid interference with use of these commands by the test system.
72b7576f 565
504a8968 566@subsection testmanual_3_5 Interpretation of test results
72b7576f 567
504a8968 568The result of the test is evaluated by checking its output against patterns defined in the files *parse.rules* of the grid and group.
72b7576f 569
504a8968 570The OCCT test system recognizes five statuses of the test execution:
571* SKIPPED: reported if a line matching SKIPPED pattern is found (prior to any FAILED pattern). This indicates that the test cannot be run in the current environment; the most typical case is the absence of the required data file.
6d368502 572* FAILED: reported if a line matching pattern with status FAILED is found (unless it is masked by the preceding IGNORE pattern or a TODO or REQUIRED statement), or if message TEST COMPLETED or at least one of REQUIRED patterns is not found. This indicates that the test has produced a bad or unexpected result, and usually means a regression.
573* BAD: reported if the test script output contains one or several TODO statements and the corresponding number of matching lines in the log. This indicates a known problem. The lines matching TODO statements are not checked against other patterns and thus will not cause a FAILED status.
504a8968 574* IMPROVEMENT: reported if the test script output contains a TODO statement for which no corresponding line is found. This is a possible indication of improvement (a known problem has disappeared).
575* OK: reported if none of the above statuses have been assigned. This means that the test has passed without problems.
72b7576f 576
504a8968 577Other statuses can be specified in *parse.rules* files, these will be classified as FAILED.
72b7576f 578
504a8968 579For integration of the change to OCCT repository, all tests should return either OK or BAD status.
580The new test created for an unsolved problem should return BAD. The new test created for a fixed problem should return FAILED without the fix, and OK with the fix.
72b7576f 581
504a8968 582@subsection testmanual_3_6 Marking BAD cases
72b7576f 583
If at some point the test produces an invalid result, the corresponding bug should be created in the OCCT issue tracker located at http://tracker.dev.opencascade.org, and the problem should be marked as TODO in the test script.
72b7576f 585
504a8968 586The following statement should be added to such a test script:
587~~~~~
588puts "TODO BugNumber ListOfPlatforms: RegularExpression"
589~~~~~
590
591Here:
592* *BugNumber* is the bug ID in the tracker. For example: #12345.
6d368502 593* *ListOfPlatforms* is a list of platforms, at which the bug is reproduced (Linux, Windows, MacOS, or All). Note that the platform name is custom for the OCCT test system; it corresponds to the value of environment variable *os_type* defined in DRAW.
72b7576f 594
595Example:
504a8968 596~~~~~
597Draw[2]> puts $env(os_type)
598windows
599~~~~~
72b7576f 600
* *RegularExpression* is a regular expression that should match the line indicating the problem in the script output.
72b7576f 602
72b7576f 603Example:
504a8968 604~~~~~
605puts "TODO #22622 Mandriva2008: Abort .* an exception was raised"
606~~~~~
72b7576f 607
The parser checks the test output, and if an output line matches the *RegularExpression*, then the test will be assigned a BAD status instead of FAILED.
72b7576f 609
A separate TODO line must be added for each output line matching an error expression to mark the test as BAD. If not all TODO messages are found in the test log, the test will be considered as a possible improvement.
611
To mark the test as BAD for an incomplete case (when the final *TEST COMPLETED* message is missing) the expression *TEST INCOMPLETE* should be used instead of the regular expression.
72b7576f 613
614Example:
615
504a8968 616~~~~~
617puts "TODO OCC22817 All: exception.+There are no suitable edges"
618puts "TODO OCC22817 All: \\*\\* Exception \\*\\*"
619puts "TODO OCC22817 All: TEST INCOMPLETE"
620~~~~~
72b7576f 621
6d368502 622@subsection testmanual_3_7 Marking required output
623
To check for expected output that must be present in the test log for the test to be considered correct, add a REQUIRED statement for each specific message.
For that, the following statement should be added to the test script:
626
627~~~~~
628puts "REQUIRED ListOfPlatforms: RegularExpression"
629~~~~~
630
631Here *ListOfPlatforms* and *RegularExpression* have the same meaning as in TODO statements described above.
632
The REQUIRED statement can also be used to mask a message that would normally be interpreted as an error (according to the rules defined in *parse.rules*) but should not be considered as such within the current test.
634
635Example:
636~~~~~
puts "REQUIRED Linux: Faulty shapes in variables faulty_1 to faulty_5"
638~~~~~
639
This statement notifies the test system that errors reported by the *checkshape* command are expected in that test case, and the test should be considered as OK if this message appears, despite the presence of the general rule stating that 'Faulty' signals failure.
504a8968 641
If the output does not contain a line matching a REQUIRED pattern, the test case will be marked as FAILED.
504a8968 643
644@section testmanual_4 Advanced Use
72b7576f 645
646@subsection testmanual_4_1 Running Tests on Older Versions of OCCT
647
Sometimes it might be necessary to run tests on previous versions of OCCT (6.5.4 or earlier) that do not include this test system. This can be done by adding DRAW configuration file *DrawAppliInit* in the directory that is current at the moment of DRAW start-up, to load test commands and to define the necessary environment.

Note: in OCCT 6.5.3, file *DrawAppliInit* already exists in <i>$CASROOT/src/DrawResources</i>; new commands should be added to this file instead of creating a new one in the current directory.
651
For example, let us assume that *d:/occt* contains an up-to-date version of OCCT sources with tests, and the test data archive is unpacked to *d:/test-data*:
653
654~~~~~
655set env(CASROOT) d:/occt
656set env(CSF_TestScriptsPath) $env(CASROOT)/tests
657source $env(CASROOT)/src/DrawResources/TestCommands.tcl
set env(CSF_TestDataPath) $env(CASROOT)/data\;d:/test-data
659return
660~~~~~
661
Note that on older versions of OCCT the tests are run in compatibility mode and thus not all output of the test command can be captured; this can lead to the absence of some error messages (which can be reported as either a failure or an improvement).
504a8968 663
664@subsection testmanual_4_2 Adding custom tests
665
You can extend the test system by adding your own tests. For that it is necessary to add paths to the directory where these tests are located, and one or more additional data directories, to the environment variables *CSF_TestScriptsPath* and *CSF_TestDataPath*. The recommended way for doing this is using DRAW configuration file *DrawAppliInit* located in the directory which is current at the moment of DRAW start-up.
667
Use Tcl command <i>_path_separator</i> to insert a platform-dependent separator into the path list.
504a8968 669
670For example:
671~~~~~
672set env(CSF_TestScriptsPath) \
673 $env(TestScriptsPath)[_path_separator]d:/MyOCCTProject/tests
674set env(CSF_TestDataPath) \
675 d:/occt/test-data[_path_separator]d:/MyOCCTProject/data
676return ;# this is to avoid an echo of the last command above in cout
677~~~~~
678
679@subsection testmanual_4_3 Parallel execution of tests
680
For better efficiency, on computers with multiple CPUs the tests can be run in parallel mode. This is the default behavior for command *testgrid*: the tests are executed in parallel processes (their number is equal to the number of CPUs available on the system). In order to change this behavior, use option <i>-parallel</i> followed by the number of processes to be used (1 or 0 to run sequentially).
72b7576f 682
87f42a3f 683Note that the parallel execution is only possible if Tcl extension package *Thread* is installed.
684If this package is not available, *testgrid* command will output a warning message.
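
For example, to run the tests in 4 parallel processes or to force sequential execution:

~~~~~
Draw[]> testgrid -parallel 4
Draw[]> testgrid -parallel 0
~~~~~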
72b7576f 685
504a8968 686@subsection testmanual_4_4 Checking non-regression of performance, memory, and visualization
72b7576f 687
504a8968 688Some test results are very dependent on the characteristics of the workstation, where they are performed, and thus cannot be checked by comparison with some predefined values. These results can be checked for non-regression (after a change in OCCT code) by comparing them with the results produced by the version without this change. The most typical case is comparing the result obtained in a branch created for integration of a fix (CR***) with the results obtained on the master branch before that change is made.
689
690OCCT test system provides a dedicated command *testdiff* for comparing CPU time of execution, memory usage, and images produced by the tests.
691
692~~~~~
693testdiff dir1 dir2 [groupname [gridname]] [options...]
694~~~~~
695Here *dir1* and *dir2* are directories containing logs of two test runs.
696
697Possible options are:
ba06f8bb 698* <i>-save \<filename\> </i> - saves the resulting log in a specified file (<i>$dir1/diff-$dir2.log</i> by default). HTML log is saved with the same name and extension .html;
504a8968 699* <i>-status {same|ok|all}</i> - allows filtering compared cases by their status:
700 * *same* - only cases with same status are compared (default);
701 * *ok* - only cases with OK status in both logs are compared;
702 * *all* - results are compared regardless of status;
ba06f8bb 703* <i>-verbose \<level\> </i> - defines the scope of output data:
504a8968 704 * 1 - outputs only differences;
705 * 2 - additionally outputs the list of logs and directories present in one of directories only;
706 * 3 - (by default) additionally outputs progress messages;
72b7576f 707
72b7576f 708Example:
504a8968 709
710~~~~~
711Draw[]> testdiff results-CR12345-2012-10-10T08:00 results-master-2012-10-09T21:20
712~~~~~
713
714@section testmanual_5 APPENDIX
715
716@subsection testmanual_5_1 Test groups
717
718@subsubsection testmanual_5_1_1 3rdparty
719
720This group allows testing the interaction of OCCT and 3rdparty products.
721
722DRAW module: VISUALIZATION.
723
724| Grid | Commands | Functionality |
725| :---- | :----- | :------- |
726| export | vexport | export of images to different formats |
727| fonts | vtrihedron, vcolorscale, vdrawtext | display of fonts |
728
729
730@subsubsection testmanual_5_1_2 blend
731
732This group allows testing blends (fillets) and related operations.
733
734DRAW module: MODELING.
735
736| Grid | Commands | Functionality |
737| :---- | :----- | :------- |
738| simple | blend | fillets on simple shapes |
739| complex | blend | fillets on complex shapes, non-trivial geometry |
740| tolblend_simple | tolblend, blend | |
741| buildevol | buildevol | |
742| tolblend_buildvol | tolblend, buildevol | use of additional command tolblend |
743| bfuseblend | bfuseblend | |
744| encoderegularity | encoderegularity | |
745
746@subsubsection testmanual_5_1_3 boolean
747
748This group allows testing Boolean operations.
749
750DRAW module: MODELING (packages *BOPTest* and *BRepTest*).
751
752Grids names are based on name of the command used, with suffixes:
* <i>_2d</i> – for tests operating with 2d objects (edges, wires, etc.);
754* <i>_simple</i> – for tests operating on simple shapes (boxes, cylinders, toruses, etc.);
755* <i>_complex</i> – for tests dealing with complex shapes.
756
757| Grid | Commands | Functionality |
758| :---- | :----- | :------- |
759| bcommon_2d | bcommon | Common operation (old algorithm), 2d |
760| bcommon_complex | bcommon | Common operation (old algorithm), complex shapes |
761| bcommon_simple | bcommon | Common operation (old algorithm), simple shapes |
762| bcut_2d | bcut | Cut operation (old algorithm), 2d |
763| bcut_complex | bcut | Cut operation (old algorithm), complex shapes |
764| bcut_simple | bcut | Cut operation (old algorithm), simple shapes |
765| bcutblend | bcutblend | |
766| bfuse_2d | bfuse | Fuse operation (old algorithm), 2d |
767| bfuse_complex | bfuse | Fuse operation (old algorithm), complex shapes |
768| bfuse_simple | bfuse | Fuse operation (old algorithm), simple shapes |
769| bopcommon_2d | bopcommon | Common operation, 2d |
770| bopcommon_complex | bopcommon | Common operation, complex shapes |
771| bopcommon_simple | bopcommon | Common operation, simple shapes |
772| bopcut_2d | bopcut | Cut operation, 2d |
773| bopcut_complex | bopcut | Cut operation, complex shapes |
774| bopcut_simple | bopcut | Cut operation, simple shapes |
775| bopfuse_2d | bopfuse | Fuse operation, 2d |
776| bopfuse_complex | bopfuse | Fuse operation, complex shapes |
777| bopfuse_simple | bopfuse | Fuse operation, simple shapes |
778| bopsection | bopsection | Section |
779| boptuc_2d | boptuc | |
780| boptuc_complex | boptuc | |
781| boptuc_simple | boptuc | |
782| bsection | bsection | Section (old algorithm) |
783
784@subsubsection testmanual_5_1_4 bugs
785
786This group allows testing cases coming from Mantis issues.
787
788The grids are organized following OCCT module and category set for the issue in the Mantis tracker.
789See <a href="#testmanual_5_2">Mapping of OCCT functionality to grid names in group *bugs*</a> for details.
790
791@subsubsection testmanual_5_1_5 caf
792
793This group allows testing OCAF functionality.
794
795DRAW module: OCAFKERNEL.
796
797| Grid | Commands | Functionality |
798| :---- | :----- | :------- |
799| basic | | Basic attributes |
800| bugs | | Saving and restoring of document |
801| driver | | OCAF drivers |
802| named_shape | | *TNaming_NamedShape* attribute |
803| presentation | | *AISPresentation* attributes |
804| tree | | Tree construction attributes |
805| xlink | | XLink attributes |
806
807@subsubsection testmanual_5_1_6 chamfer
808
809This group allows testing chamfer operations.
810
811DRAW module: MODELING.
812
813The test grid name is constructed depending on the type of the tested chamfers. Additional suffix <i>_complex</i> is used for test cases involving complex geometry (e.g. intersections of edges forming a chamfer); suffix <i>_sequence</i> is used for grids where chamfers are computed sequentially.
814
815| Grid | Commands | Functionality |
816| :---- | :----- | :------- |
817| equal_dist | | Equal distances from edge |
818| equal_dist_complex | | Equal distances from edge, complex shapes |
819| equal_dist_sequence | | Equal distances from edge, sequential operations |
820| dist_dist | | Two distances from edge |
821| dist_dist_complex | | Two distances from edge, complex shapes |
822| dist_dist_sequence | | Two distances from edge, sequential operations |
823| dist_angle | | Distance from edge and given angle |
824| dist_angle_complex | | Distance from edge and given angle |
825| dist_angle_sequence | | Distance from edge and given angle |
826
827@subsubsection testmanual_5_1_7 demo
828
829This group allows demonstrating how testing cases are created, and testing DRAW commands and the test system as a whole.
830
831| Grid | Commands | Functionality |
832| :---- | :----- | :------- |
833| draw | getsource, restore | Basic DRAW commands |
834| testsystem | | Testing system |
835| samples | | OCCT samples |
836
837
838@subsubsection testmanual_5_1_8 draft
839
840This group allows testing draft operations.
841
842DRAW module: MODELING.
843
844| Grid | Commands | Functionality |
845| :---- | :----- | :------- |
846| Angle | depouille | Drafts with angle (inclined walls) |
847
848
849@subsubsection testmanual_5_1_9 feat
850
851This group allows testing creation of features on a shape.
852
853DRAW module: MODELING (package *BRepTest*).
854
855| Grid | Commands | Functionality |
856| :---- | :----- | :------- |
857| featdprism | | |
858| featlf | | |
859| featprism | | |
860| featrevol | | |
861| featrf | | |
862
863@subsubsection testmanual_5_1_10 heal
864
865This group allows testing the functionality provided by *ShapeHealing* toolkit.
866
867DRAW module: XSDRAW
868
869| Grid | Commands | Functionality |
870| :---- | :----- | :------- |
871| fix_shape | fixshape | Shape healing |
872| fix_gaps | fixwgaps | Fixing gaps between edges on a wire |
873| same_parameter | sameparameter | Fixing non-sameparameter edges |
874| fix_face_size | DT_ApplySeq | Removal of small faces |
875| elementary_to_revolution | DT_ApplySeq | Conversion of elementary surfaces to revolution |
876| direct_faces | directfaces | Correction of axis of elementary surfaces |
877| drop_small_edges | fixsmall | Removal of small edges |
878| split_angle | DT_SplitAngle | Splitting periodic surfaces by angle |
879| split_angle_advanced | DT_SplitAngle | Splitting periodic surfaces by angle |
880| split_angle_standard | DT_SplitAngle | Splitting periodic surfaces by angle |
881| split_closed_faces | DT_ClosedSplit | Splitting of closed faces |
882| surface_to_bspline | DT_ToBspl | Conversion of surfaces to b-splines |
883| surface_to_bezier | DT_ShapeConvert | Conversion of surfaces to bezier |
884| split_continuity | DT_ShapeDivide | Split surfaces by continuity criterion |
885| split_continuity_advanced | DT_ShapeDivide | Split surfaces by continuity criterion |
886| split_continuity_standard | DT_ShapeDivide | Split surfaces by continuity criterion |
887| surface_to_revolution_advanced | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, complex cases |
888| surface_to_revolution_standard | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, simple cases |
889
890@subsubsection testmanual_5_1_11 mesh
891
4ee1bdf4 892This group allows testing shape tessellation (*BRepMesh*) and shading.
504a8968 893
894DRAW modules: MODELING (package *MeshTest*), VISUALIZATION (package *ViewerTest*)
895
896| Grid | Commands | Functionality |
897| :---- | :----- | :------- |
898| advanced_shading | vdisplay | Shading, complex shapes |
899| standard_shading | vdisplay | Shading, simple shapes |
900| advanced_mesh | mesh | Meshing of complex shapes |
901| standard_mesh | mesh | Meshing of simple shapes |
902| advanced_incmesh | incmesh | Meshing of complex shapes |
903| standard_incmesh | incmesh | Meshing of simple shapes |
904| advanced_incmesh_parallel | incmesh | Meshing of complex shapes, parallel mode |
905| standard_incmesh_parallel | incmesh | Meshing of simple shapes, parallel mode |
906
907@subsubsection testmanual_5_1_12 mkface
908
909This group allows testing creation of simple surfaces.
910
911DRAW module: MODELING (package *BRepTest*)
912
913| Grid | Commands | Functionality |
914| :---- | :----- | :------- |
915| after_trim | mkface | |
916| after_offset | mkface | |
917| after_extsurf_and_offset | mkface | |
918| after_extsurf_and_trim | mkface | |
919| after_revsurf_and_offset | mkface | |
920| mkplane | mkplane | |
921
922@subsubsection testmanual_5_1_13 nproject
923
924This group allows testing normal projection of edges and wires onto a face.
925
926DRAW module: MODELING (package *BRepTest*)
927
928| Grid | Commands | Functionality |
929| :---- | :----- | :------- |
930| Base | nproject | |
931
932@subsubsection testmanual_5_1_14 offset
933
934This group allows testing offset functionality for curves and surfaces.
935
936DRAW module: MODELING (package *BRepTest*)
937
938| Grid | Commands | Functionality |
939| :---- | :----- | :------- |
940| compshape | offsetcompshape | Offset of shapes with removal of some faces |
941| faces_type_a | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a fillet |
942| faces_type_i | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a sharp edge |
943| shape_type_a | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a fillet |
944| shape_type_i | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a fillet |
945| shape | offsetshape | |
946| wire_closed_outside_0_005, wire_closed_outside_0_025, wire_closed_outside_0_075, wire_closed_inside_0_005, wire_closed_inside_0_025, wire_closed_inside_0_075, wire_unclosed_outside_0_005, wire_unclosed_outside_0_025, wire_unclosed_outside_0_075 | mkoffset | 2d offset of closed and unclosed planar wires with different offset step and directions of offset ( inside / outside ) |
947
948@subsubsection testmanual_5_1_15 pipe
949
950This group allows testing construction of pipes (sweeping of a contour along profile).
951
952DRAW module: MODELING (package *BRepTest*)
953
954| Grid | Commands | Functionality |
955| :---- | :----- | :------- |
956| Standard | pipe | |
957
958@subsubsection testmanual_5_1_16 prism
959
960This group allows testing construction of prisms.
961
962DRAW module: MODELING (package *BRepTest*)
963
964| Grid | Commands | Functionality |
965| :---- | :----- | :------- |
966| seminf | prism | |
967
968@subsubsection testmanual_5_1_17 sewing
969
970This group allows testing sewing of faces by connecting edges.
971
972DRAW module: MODELING (package *BRepTest*)
973
974| Grid | Commands | Functionality |
975| :---- | :----- | :------- |
976| tol_0_01 | sewing | Sewing faces with tolerance 0.01 |
977| tol_1 | sewing | Sewing faces with tolerance 1 |
978| tol_100 | sewing | Sewing faces with tolerance 100 |
979
980@subsubsection testmanual_5_1_18 thrusection
981
982This group allows testing construction of shell or a solid passing through a set of sections in a given sequence (loft).
983
984| Grid | Commands | Functionality |
985| :---- | :----- | :------- |
986| solids | thrusection | Lofting with resulting solid |
987| not_solids | thrusection | Lofting with resulting shell or face |
988
989@subsubsection testmanual_5_1_19 xcaf
990
991This group allows testing extended data exchange packages.
992
993| Grid | Commands | Functionality |
994| :---- | :----- | :------- |
995| dxc, dxc_add_ACL, dxc_add_CL, igs_to_dxc, igs_add_ACL, brep_to_igs_add_CL, stp_to_dxc, stp_add_ACL, brep_to_stp_add_CL, brep_to_dxc, add_ACL_brep, brep_add_CL | | Subgroups are divided by format of source file, by format of result file and by type of document modification. For example, *brep_to_igs* means that the source shape in brep format was added to the document, which was saved into igs format after that. The postfix *add_CL* means that colors and layers were initialized in the document before saving and the postfix *add_ACL* corresponds to the creation of assembly and initialization of colors and layers in a document before saving. |
996
997
998@subsection testmanual_5_2 Mapping of OCCT functionality to grid names in group *bugs*
999
1000| OCCT Module / Mantis category | Toolkits | Test grid in group bugs |
1001| :---------- | :--------- | :---------- |
1002| Application Framework | PTKernel, TKPShape, TKCDF, TKLCAF, TKCAF, TKBinL, TKXmlL, TKShapeSchema, TKPLCAF, TKBin, TKXml, TKPCAF, FWOSPlugin, TKStdLSchema, TKStdSchema, TKTObj, TKBinTObj, TKXmlTObj | caf |
1003| Draw | TKDraw, TKTopTest, TKViewerTest, TKXSDRAW, TKDCAF, TKXDEDRAW, TKTObjDRAW, TKQADraw, DRAWEXE, Problems of testing system | draw |
1004| Shape Healing | TKShHealing | heal |
1005| Mesh | TKMesh, TKXMesh | mesh |
1006| Data Exchange | TKIGES | iges |
1007| Data Exchange | TKSTEPBase, TKSTEPAttr, TKSTEP209, TKSTEP | step |
1008| Data Exchange | TKSTL, TKVRML | stlvrml |
1009| Data Exchange | TKXSBase, TKXCAF, TKXCAFSchema, TKXDEIGES, TKXDESTEP, TKXmlXCAF, TKBinXCAF | xde |
6268cc68 1010| Foundation Classes | TKernel, TKMath | fclasses |
504a8968 1011| Modeling_algorithms | TKGeomAlgo, TKTopAlgo, TKPrim, TKBO, TKBool, TKHLR, TKFillet, TKOffset, TKFeat, TKXMesh | modalg |
1012| Modeling Data | TKG2d, TKG3d, TKGeomBase, TKBRep | moddata |
6ce0df1e 1013| Visualization | TKService, TKV2d, TKV3d, TKOpenGl, TKMeshVS, TKNIS | vis |
504a8968 1014
1015
5ae01c85 1016@subsection testmanual_5_3 Recommended approaches to checking test results
504a8968 1017
1018@subsubsection testmanual_5_3_1 Shape validity
1019
1020Run command *checkshape* on the result (or intermediate) shape and make sure that *parse.rules* of the test grid or group reports bad shapes (usually recognized by word "Faulty") as error.
1021
1022Example
1023~~~~~
1024checkshape result
1025~~~~~
1026
To check the number of faults in the shape, the command *checkfaults* can be used.
1028
1029Use: checkfaults shape source_shape [ref_value=0]
1030
1031The default syntax of *checkfaults* command:
1032~~~~~
1033checkfaults results a_1
1034~~~~~
1035
The command will check the number of faults in the source shape (*a_1*) and compare it
with the number of faults in the resulting shape (*result*). If the shape *result* contains
more faults, you will get an error:
1039~~~~~
1040checkfaults results a_1
1041Error : Number of faults is 5
1042~~~~~
1043It is possible to set the reference value for comparison (reference value is 4):
1044
1045~~~~~
1046checkfaults results a_1 4
1047~~~~~
1048
1049If number of faults in the resulting shape is unstable, reference value should be set to "-1".
1050As a result command *checkfaults* will return the following error:
1051
1052~~~~~
1053checkfaults results a_1 -1
1054Error : Number of faults is UNSTABLE
1055~~~~~
1056
504a8968 1057@subsubsection testmanual_5_3_2 Shape tolerance
1058The maximal tolerance of sub-shapes of each kind of the resulting shape can be extracted from output of tolerance command as follows:
1059
1060~~~~~
1061set tolerance [tolerance result]
1062regexp { *FACE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_face
regexp { *EDGE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_edge
1064regexp { *VERTEX +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_vertex
1065~~~~~
1066
It is possible to use command *checkmaxtol* to check the maximal tolerance of a shape and compare it with a reference value.
1068
fb60057d 1069Use: checkmaxtol shape [options...]
5ae01c85 1070
1071Allowed options are:
fb60057d 1072 * -ref: reference value of maximum tolerance
1073 * -source: list of shapes to compare with
5ae01c85 1074 * -min_tol: minimum tolerance for comparison
1075 * -multi_tol: tolerance multiplier
1076
5ae01c85 1077The default syntax of *checkmaxtol* command for comparison with the reference value:
1078~~~~~
fb60057d 1079checkmaxtol result -ref 0.00001
5ae01c85 1080~~~~~
1081
It is also possible to compare the max tolerance of the resulting shape with the max tolerance of the source shapes.
In the following example command *checkmaxtol* gets the max tolerance among the objects *a_1* and *a_2*.
Then it chooses the maximum between the found tolerance and the value of -min_tol (0.000001)
and multiplies it by the coefficient -multi_tol (i.e. 2):
1086
1087~~~~~
fb60057d 1088checkmaxtol result -source {a_1 a_2} -min_tol 0.000001 -multi_tol 2
5ae01c85 1089~~~~~
1090
If the maximum tolerance of the result is greater than the comparison value obtained this way, the command will return an error.
1092
fb60057d 1093Also, command *checkmaxtol* can be used to get max tolerance of the shape:
1094
1095~~~~~
1096set maxtol [checkmaxtol result]
1097~~~~~
1098
504a8968 1099@subsubsection testmanual_5_3_3 Shape volume, area, or length
1100
1101Use command *vprops, sprops,* or *lprops* to correspondingly measure volume, area, or length of the shape produced by the test. The value can be extracted from the result of the command by *regexp*.
1102
1103Example:
1104~~~~~
1105# check area of shape result with 1% tolerance
1106regexp {Mass +: +([-0-9.+eE]+)} [sprops result] dummy area
1107if { abs($area - $expected) > 0.1 + 0.01 * abs ($area) } {
1108 puts "Error: The area of result shape is $area, while expected $expected"
1109}
1110~~~~~
1111
1112@subsubsection testmanual_5_3_4 Memory leaks
1113
The test system measures the amount of memory used by each test case, and considerable deviations (as well as the overall difference) compared with the reference results will be reported by the *testdiff* command.
1115
The typical approach to checking a memory leak on a particular operation is to run this operation in a cycle, measuring memory consumption at each step and comparing it with some threshold value. Note that file *begin* in group *bugs* defines command *checktrend* that can be used to analyze a sequence of memory measurements to get a statistically based evaluation of the leak presence.
1117
1118Example:
1119~~~~~
1120set listmem {}
1121for {set i 1} {$i < 100} {incr i} {
1122 # run suspect operation
1123
1124 # check memory usage (with tolerance equal to half page size)
1125 lappend listmem [expr [meminfo w] / 1024]
1126 if { [checktrend $listmem 0 256 "Memory leak detected"] } {
1127 puts "No memory leak, $i iterations"
1128 break
1129 }
1130}
1131~~~~~
1132
1133@subsubsection testmanual_5_3_5 Visualization
1134
Take a snapshot of the viewer, give it the name of the test case, and save it in the directory indicated by Tcl variable *imagedir*.
504a8968 1136
1137~~~~~
1138vinit
1139vclear
1140vdisplay result
1141vsetdispmode 1
1142vfit
1143vzfit
1144vdump $imagedir/${casename}_shading.png
1145~~~~~
1146
1147This image will be included in the HTML log produced by *testgrid* command and will be checked for non-regression through comparison of images by command *testdiff*.
5ae01c85 1148
1149@subsubsection testmanual_5_3_6 Number of free edges
1150
1151To check the number of free edges run the command *checkfreebounds*.
1152
It compares the number of free edges with a reference value.
1154
1155Use: checkfreebounds shape ref_value [options...]
1156
1157Allowed options are:
1158 * -tol N: used tolerance (default -0.01)
1159 * -type N: used type, possible values are "closed" and "opened" (default "closed")
1160
1161~~~~~
1162checkfreebounds result 13
1163~~~~~
1164
1165Option -tol N is used to set tolerance for command *freebounds*, which is used within command *checkfreebounds*.
1166
1167Option -type N is used to select the type of counted free edges - closed or opened.
1168
1169If the number of free edges in the resulting shape is unstable, reference value should be set to "-1".
1170As a result command *checkfreebounds* will return the following error:
1171
1172~~~~~
1173checkfreebounds result -1
1174Error : Number of free edges is UNSTABLE
1175~~~~~
1176
1177@subsubsection testmanual_5_3_7 Compare numbers
1178
1179Procedure to check equality of two reals with some tolerance (relative and absolute)
1180
1181Use: checkreal name value expected tol_abs tol_rel
1182
1183~~~~~
1184checkreal "Some important value" $value 5 0.0001 0.01
1185~~~~~
1186
1187@subsubsection testmanual_5_3_8 Check number of sub-shapes
1188
1189Compare number of sub-shapes in "shape" with given reference data
1190
1191Use: checknbshapes shape [options...]
1192Allowed options are:
1193 * -vertex N
1194 * -edge N
1195 * -wire N
1196 * -face N
1197 * -shell N
1198 * -solid N
1199 * -compsolid N
1200 * -compound N
1201 * -shape N
1202 * -t: compare the number of sub-shapes in "shape" counting
1203 the same sub-shapes with different location as different sub-shapes.
1204 * -m msg: print "msg" in case of error
1205
1206~~~~~
1207checknbshapes result -vertex 8 -edge 4
1208~~~~~
1209
1210@subsubsection testmanual_5_3_9 Check pixel color
1211
1212To check pixel color command *checkcolor* can be used.
1213
1214Use: checkcolor x y red green blue
1215
1216 x y - pixel coordinates
1217
1218 red green blue - expected pixel color (values from 0 to 1)
1219
1220This procedure checks color with tolerance (5x5 area)
1221
The next example will compare the color of the point with coordinates x=100 y=100 with the RGB color R=1 G=0 B=0.
If the colors are not equal, the procedure will check the neighboring points (5x5 area).
1224~~~~~
1225checkcolor 100 100 1 0 0
1226~~~~~