ba06f8bb 1 Automated Testing System {#occt_dev_guides__tests}
72b7576f 2======================================
3
e5bd0d98 4@tableofcontents
5
72b7576f 6@section testmanual_1 Introduction
7
This document provides OCCT developers and contributors with an overview and practical guidelines for working with the OCCT automatic testing system.
9
10Reading the Introduction is sufficient for OCCT developers to use the test system to control non-regression of the modifications they implement in OCCT. Other sections provide a more in-depth description of the test system, required for modifying the tests and adding new test cases.
72b7576f 11
12@subsection testmanual_1_1 Basic Information
13
The OCCT automatic testing system is organized around @ref occt_user_guides__test_harness "DRAW Test Harness", a console application based on a Tcl (scripting language) interpreter extended by OCCT-related commands.
e5bd0d98 15
504a8968 16Standard OCCT tests are included with OCCT sources and are located in subdirectory *tests* of the OCCT root folder. Other test folders can be included in the test system, e.g. for testing applications based on OCCT.
e5bd0d98 17
504a8968 18The tests are organized in three levels:
72b7576f 19
504a8968 20 * Group: a group of related test grids, usually testing a particular OCCT functionality (e.g. blend);
21 * Grid: a set of test cases within a group, usually aimed at testing some particular aspect or mode of execution of the relevant functionality (e.g. buildevol);
22 * Test case: a script implementing an individual test (e.g. K4).
23
24See <a href="#testmanual_5_1">Test Groups</a> for the current list of available test groups and grids.
72b7576f 25
Some tests involve data files (typically CAD models) which are located separately and are not included with the OCCT code. The archive with publicly available test data files should be downloaded and installed independently of OCCT sources (see http://dev.opencascade.org).
72b7576f 27
28@subsection testmanual_1_2 Intended Use of Automatic Tests
29
Each modification made in OCCT code must be checked for non-regression
by running the whole set of tests. The developer who makes the modification
is responsible for running the tests available to him and ensuring their non-regression.
33
34Note that many tests are based on data files that are confidential and thus available only at OPEN CASCADE. Thus official certification testing of the changes before integration to master branch of official OCCT Git repository (and finally to the official release) is performed by OPEN CASCADE in any case.
35
36Each new non-trivial modification (improvement, bug fix, new feature) in OCCT should be accompanied by a relevant test case suitable for verifying that modification. This test case is to be added by the developer who provides the modification.
72b7576f 37
504a8968 38If a modification affects the result of an existing test case, either the modification should be corrected (if it causes regression) or the affected test cases should be updated to account for the modification.
72b7576f 39
504a8968 40The modifications made in the OCCT code and related test scripts should be included in the same integration to the master branch.
72b7576f 41
42@subsection testmanual_1_3 Quick Start
43
44@subsubsection testmanual_1_3_1 Setup
45
504a8968 46Before running tests, make sure to define environment variable *CSF_TestDataPath* pointing to the directory containing test data files.
504a8968 47
For this it is recommended to add a file *DrawAppliInit* in the directory which is current at the moment of starting DRAWEXE (normally it is the OCCT root directory, <i>$CASROOT</i>). This file is evaluated automatically at DRAW start.
49
50Example (Windows)
72b7576f 51
52~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
504a8968 53set env(CSF_TestDataPath) $env(CSF_TestDataPath)\;d:/occt/test-data
72b7576f 54return ;# this is to avoid an echo of the last command above in cout
504a8968 55
72b7576f 56~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
504a8968 57Note that variable *CSF_TestDataPath* is set to default value at DRAW start, pointing at the folder <i>$CASROOT/data</i>.
87f42a3f 58In this example, subdirectory <i>d:/occt/test-data</i> is added to this path. Similar code could be used on Linux and Mac OS X except that on non-Windows platforms colon ":" should be used as path separator instead of semicolon ";".
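
For illustration, a minimal *DrawAppliInit* for Linux or Mac OS X might look as follows (the path <i>/home/user/occt-test-data</i> is only an example):

~~~~~
set env(CSF_TestDataPath) $env(CSF_TestDataPath):/home/user/occt-test-data
return ;# this is to avoid an echo of the last command above in cout
~~~~~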
72b7576f 59
936f43da 60All tests are run from DRAW command prompt (run *draw.bat* or *draw.sh* to start it).
72b7576f 61
62@subsubsection testmanual_1_3_2 Running Tests
63
To run all tests, type command *testgrid*.
72b7576f 65
66Example:
67
504a8968 68~~~~~
69Draw[]> testgrid
70~~~~~
72b7576f 71
ae3eaf7b 72To run only a subset of test cases, give masks for group, grid, and test case names to be executed.
73Each argument is a list of file masks separated with commas or spaces; by default "*" is assumed.
72b7576f 74
504a8968 75Example:
72b7576f 76
504a8968 77~~~~~
87f42a3f 78Draw[]> testgrid bugs caf,moddata*,xde
504a8968 79~~~~~
72b7576f 80
72b7576f 81As the tests progress, the result of each test case is reported.
504a8968 82At the end of the log a summary of test cases is output,
83including the list of detected regressions and improvements, if any.
72b7576f 84
85
504a8968 86Example:
87
72b7576f 88~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
89 Tests summary
90
91 CASE 3rdparty export A1: OK
92 ...
93 CASE pipe standard B1: BAD (known problem)
94 CASE pipe standard C1: OK
95 No regressions
96 Total cases: 208 BAD, 31 SKIPPED, 3 IMPROVEMENT, 1791 OK
97 Elapsed time: 1 Hours 14 Minutes 33.7384512019 Seconds
98 Detailed logs are saved in D:/occt/results_2012-06-04T0919
99~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
100
504a8968 101The tests are considered as non-regressive if only OK, BAD (i.e. known problem), and SKIPPED (i.e. not executed, typically because of lack of a data file) statuses are reported. See <a href="#testmanual_3_5">Interpretation of test results</a> for details.
72b7576f 102
936f43da 103The results and detailed logs of the tests are saved by default to a new subdirectory of the subdirectory *results* in the current folder, whose name is generated automatically using the current date and time, prefixed by Git branch name (if Git is available and current sources are managed by Git).
If necessary, a non-default output directory can be specified using option <i>-outdir</i> followed by a path to the directory. This directory should be new or empty; use option <i>-overwrite</i> to allow writing results in an existing non-empty directory.
72b7576f 105
504a8968 106Example:
107~~~~~
87f42a3f 108Draw[]> testgrid -outdir d:/occt/last_results -overwrite
504a8968 109~~~~~
ae3eaf7b 110In the output directory, a cumulative HTML report <i>summary.html</i> provides links to reports on each test case. An additional report in JUnit-style XML format can be output for use in Jenkins or other continuous integration system.
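
For example, the following command (the output path is just an illustration) saves the logs in the specified directory and additionally produces a JUnit-style XML report:

~~~~~
Draw[]> testgrid -outdir d:/occt/last_results -xml summary.xml
~~~~~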
504a8968 111
936f43da 112Type <i>help testgrid</i> in DRAW prompt to get help on options supported by *testgrid* command:
72b7576f 113
504a8968 114~~~~~
115Draw[3]> help testgrid
116testgrid: Run all tests, or specified group, or one grid
87f42a3f 117 Use: testgrid [groupmask [gridmask [casemask]]] [options...]
504a8968 118 Allowed options are:
119 -parallel N: run N parallel processes (default is number of CPUs, 0 to disable)
120 -refresh N: save summary logs every N seconds (default 60, minimal 1, 0 to disable)
121 -outdir dirname: set log directory (should be empty or non-existing)
122 -overwrite: force writing logs in existing non-empty directory
123 -xml filename: write XML report for Jenkins (in JUnit-like format)
936f43da 124 -beep: play sound signal at the end of the tests
87f42a3f 125 Groups, grids, and test cases to be executed can be specified by list of file
126 masks, separated by spaces or comma; default is all (*).
504a8968 127~~~~~
72b7576f 128
504a8968 129@subsubsection testmanual_1_3_3 Running a Single Test
130
87f42a3f 131To run a single test, type command *test* followed by names of group, grid, and test case.
72b7576f 132
133Example:
134
135~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
136 Draw[1]> test blend simple A1
137 CASE blend simple A1: OK
138 Draw[2]>
139~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
140
504a8968 141Note that normally an intermediate output of the script is not shown. The detailed log of the test can be obtained after the test execution by running command <i>"dlog get"</i>.
142
143To see intermediate commands and their output during the test execution, add one more argument
144<i>"echo"</i> at the end of the command line. Note that with this option the log is not collected and summary is not produced.
145
ae3eaf7b 146Type <i>help test</i> in DRAW prompt to get help on options supported by *test* command:
936f43da 147
148~~~~~
149Draw[3]> help test
150test: Run specified test case
151 Use: test group grid casename [options...]
152 Allowed options are:
153 -echo: all commands and results are echoed immediately,
154 but log is not saved and summary is not produced
155 It is also possible to use "1" instead of "-echo"
156 If echo is OFF, log is stored in memory and only summary
157 is output (the log can be obtained with command "dlog get")
158 -outfile filename: set log file (should be non-existing),
159 it is possible to save log file in text file or
160 in html file(with snapshot), for that "filename"
161 should have ".html" extension
162 -overwrite: force writing log in existing file
163 -beep: play sound signal at the end of the test
164 -errors: show all lines from the log report that are recognized as errors
165 This key will be ignored if the "-echo" key is already set.
166~~~~~
167
504a8968 168@subsubsection testmanual_1_3_4 Creating a New Test
169
170The detailed rules of creation of new tests are given in <a href="#testmanual_3">section 3</a>. The following short description covers the most typical situations:
171
Use the prefix <i>bug</i> followed by the Mantis issue ID and, if necessary, additional suffixes, for naming the test script and DRAW commands specific to this test case.
504a8968 173
1. If the test requires C++ code, add it as new DRAW command(s) in one of the files of the *QABugs* package. Note that this package defines macros *QVERIFY* and *QCOMPARE*, thus code created for QTest or GoogleTest frameworks can be used with minimal modifications.
2. Add script(s) for the test case in the subfolder corresponding to the relevant OCCT module of the group bugs <i>($CASROOT/tests/bugs)</i>. See <a href="#testmanual_5_2">the correspondence map</a>.
3. In the test script:
   * Load all necessary DRAW modules by command *pload*.
   * Use command *locate_data_file* to get a path to the data files used by the test script. (Make sure this command is not placed inside a catch statement if it is used.)
   * Use DRAW commands to reproduce the situation being tested.
   * If the test case is added to describe an existing problem and the fix is not available, add a TODO message for each error to mark it as a known problem. The TODO statements must be specific, so as to match the actually generated messages but not all similar errors.
   * Make sure that in case of failure the test produces a message containing the word "Error" or another message recognized by the test system as an error (see files *parse.rules*).
4. If the test case uses data file(s) not yet present in the test database, these can be put to the subfolder *data* of the test grid and integrated to Git along with the test case.
5. Check that the test case runs as expected (test for a fix: OK with the fix, FAILED without the fix; test for an existing problem: BAD), and integrate it to the Git branch created for the issue.
184
185Example:
186
187* Added files:
936f43da 188
504a8968 189~~~~~
git status --short
A tests/bugs/heal/data/OCC210a.brep
A tests/bugs/heal/data/OCC210b.brep
A tests/bugs/heal/bug210_1
A tests/bugs/heal/bug210_2
195~~~~~
196
197* Test script
198
199~~~~~
200puts "OCC210 (case 1): Improve FixShape for touching wires"
201
202restore [locate_data_file OCC210a.brep] a
203
204fixshape result a 0.01 0.01
205checkshape result
206~~~~~
72b7576f 207
208@section testmanual_2 Organization of Test Scripts
209
210@subsection testmanual_2_1 General Layout
211
212Standard OCCT tests are located in subdirectory tests of the OCCT root folder ($CASROOT).
72b7576f 213
Additional test folders can be added to the test system by defining environment variable *CSF_TestScriptsPath*. This should be a list of paths separated by semicolons (*;*) on Windows
or colons (*:*) on Linux or Mac. Upon DRAW launch, the path to the *tests* subfolder of OCCT is added at the end of this variable automatically.
504a8968 216
217Each test folder is expected to contain:
 * Optional file *parse.rules* defining patterns for interpretation of test results, common for all groups in this folder;
 * One or several test group directories.
220
221Each group directory contains:
222
504a8968 223 * File *grids.list* that identifies this test group and defines list of test grids in it.
224 * Test grids (sub-directories), each containing set of scripts for test cases, and optional files *cases.list*, *parse.rules*, *begin* and *end*.
72b7576f 225 * Optional sub-directory data
72b7576f 226
504a8968 227By convention, names of test groups, grids, and cases should contain no spaces and be lower-case.
228The names *begin, end, data, parse.rules, grids.list* and *cases.list* are reserved.
72b7576f 229
504a8968 230General layout of test scripts is shown in Figure 1.
231
232@image html /dev_guides/tests/images/tests_image001.png "Layout of tests folder"
233@image latex /dev_guides/tests/images/tests_image001.png "Layout of tests folder"
72b7576f 234
72b7576f 235
236@subsection testmanual_2_2 Test Groups
237
238@subsubsection testmanual_2_2_1 Group Names
239
504a8968 240The names of directories of test groups containing systematic test grids correspond to the functionality tested by each group.
241
72b7576f 242Example:
243
504a8968 244~~~~~
72b7576f 245 caf
246 mesh
247 offset
504a8968 248~~~~~
72b7576f 249
504a8968 250Test group *bugs* is used to collect the tests coming from bug reports. Group *demo* collects tests of the test system, DRAW, samples, etc.
72b7576f 251
504a8968 252@subsubsection testmanual_2_2_2 File "grids.list"
253
Each test group directory contains a file *grids.list*, which defines an ordered list of grids in this group in the following format:
72b7576f 255
256~~~~~~~~~~~~~~~~~
257001 gridname1
258002 gridname2
259...
260NNN gridnameN
261~~~~~~~~~~~~~~~~~
262
263Example:
264
265~~~~~~~~~~~~~~~~~
266 001 basic
267 002 advanced
268~~~~~~~~~~~~~~~~~
269
504a8968 270@subsubsection testmanual_2_2_3 File "begin"
72b7576f 271
504a8968 272This file is a Tcl script. It is executed before every test in the current group.
72b7576f 273Usually it loads necessary Draw commands, sets common parameters and defines
274additional Tcl functions used in test scripts.
504a8968 275
72b7576f 276Example:
277
278~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
279 pload TOPTEST ;# load topological command
280 set cpulimit 300 ;# set maximum time allowed for script execution
281~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
282
504a8968 283@subsubsection testmanual_2_2_4 File "end"
72b7576f 284
This file is a Tcl script. It is executed after every test in the current group. Usually it checks the results of the script work, makes a snapshot of the viewer and writes *TEST COMPLETED* to the output.

Note: the *TEST COMPLETED* string should be present in the output to indicate that the test has finished without a crash.
288
289See <a href="#testmanual_3">section 3</a> for more information.
290
291Example:
292~~~~~
72b7576f 293 if { [isdraw result] } {
294 checkshape result
295 } else {
4ee1bdf4 296 puts "Error: The result shape can not be built"
72b7576f 297 }
4ee1bdf4 298 puts "TEST COMPLETED"
504a8968 299~~~~~
300
301@subsubsection testmanual_2_2_5 File "parse.rules"
302
303The test group may contain *parse.rules* file. This file defines patterns used for analysis of the test execution log and deciding the status of the test run.
304
305Each line in the file should specify a status (single word), followed by a regular expression delimited by slashes (*/*) that will be matched against lines in the test output log to check if it corresponds to this status.
306
936f43da 307The regular expressions support a subset of the Perl *re* syntax. See also <a href="http://perldoc.perl.org/perlre.html">Perl regular expressions</a>.
504a8968 308
309The rest of the line can contain a comment message, which will be added to the test report when this status is detected.
72b7576f 310
72b7576f 311Example:
312
504a8968 313~~~~~
e5bd0d98 314 FAILED /\\b[Ee]xception\\b/ exception
315 FAILED /\\bError\\b/ error
72b7576f 316 SKIPPED /Cannot open file for reading/ data file is missing
317 SKIPPED /Could not read file .*, abandon/ data file is missing
504a8968 318~~~~~
72b7576f 319
320Lines starting with a *#* character and blank lines are ignored to allow comments and spacing.
72b7576f 321
504a8968 322See <a href="#testmanual_3_5">Interpretation of test results</a> chapter for details.
323
324If a line matches several rules, the first one applies. Rules defined in the grid are checked first, then rules in the group, then rules in the test root directory. This allows defining some rules on the grid level with status *IGNORE* to ignore messages that would otherwise be treated as errors due to the group level rules.
325
72b7576f 326Example:
327
328~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
e5bd0d98 329 FAILED /\\bFaulty\\b/ bad shape
72b7576f 330 IGNORE /^Error [23]d = [\d.-]+/ debug output of blend command
331 IGNORE /^Tcl Exception: tolerance ang : [\d.-]+/ blend failure
332~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
333
504a8968 334@subsubsection testmanual_2_2_6 Directory "data"
335The test group may contain subdirectory *data*, where test scripts shared by different test grids can be put. See also <a href="#testmanual_2_3_5">Directory *data*</a>.
336
72b7576f 337@subsection testmanual_2_3 Test Grids
338
339@subsubsection testmanual_2_3_1 Grid Names
340
504a8968 341The folder of a test group can have several sub-directories (Grid 1… Grid N) defining test grids.
342Each directory contains a set of related test cases. The name of a directory should correspond to its contents.
72b7576f 343
344Example:
345
504a8968 346~~~~~
72b7576f 347caf
348 basic
349 bugs
350 presentation
504a8968 351~~~~~
72b7576f 352
504a8968 353Here *caf* is the name of the test group and *basic*, *bugs*, *presentation*, etc. are the names of grids.
72b7576f 354
504a8968 355@subsubsection testmanual_2_3_2 File "begin"
356
357This file is a TCL script executed before every test in the current grid.
72b7576f 358
72b7576f 359Usually it sets variables specific for the current grid.
504a8968 360
72b7576f 361Example:
362
363~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
364 set command bopfuse ;# command tested in this grid
365~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
366
504a8968 367@subsubsection testmanual_2_3_3 File "end"
368
369This file is a TCL script executed after every test in current grid.
370
371Usually it executes a specific sequence of commands common for all tests in the grid.
72b7576f 372
72b7576f 373Example:
374
375~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
936f43da 376 vdump $imagedir/${casename}.png ;# makes a snap-shot of AIS viewer
72b7576f 377~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
378
504a8968 379@subsubsection testmanual_2_3_4 File "cases.list"
72b7576f 380
381The grid directory can contain an optional file cases.list
504a8968 382defining an alternative location of the test cases.
383This file should contain a single line defining the relative path to the collection of test cases.
72b7576f 384
385Example:
386
504a8968 387~~~~~
72b7576f 388../data/simple
504a8968 389~~~~~
390
This option is used for creation of several grids of tests with the same data files and operations but performed with differing parameters. The common scripts are usually located in the common
subdirectory of the test group, for example <i>data/simple</i>.
72b7576f 393
504a8968 394If file *cases.list* exists, the grid directory should not contain any test cases.
72b7576f 395The specific parameters and pre- and post-processing commands
504a8968 396for test execution in this grid should be defined in the files *begin* and *end*.
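
For instance, two grids sharing the scripts from <i>data/simple</i> could differ only by the command set in their *begin* files; a hypothetical sketch (the grid names follow the *boolean* group listed in the Appendix):

~~~~~
# begin file of grid bopfuse_simple
set command bopfuse ;# the shared scripts use $command, so each grid tests its own operation
~~~~~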
397
398
399@subsubsection testmanual_2_3_5 Directory "data"
400
401The test grid may contain subdirectory *data*, containing data files used in tests (BREP, IGES, STEP, etc.) of this grid.
72b7576f 402
403@subsection testmanual_2_4 Test Cases
404
504a8968 405The test case is a TCL script, which performs some operations using DRAW commands
406and produces meaningful messages that can be used to check the validity of the result.
407
72b7576f 408Example:
409
410~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
411 pcylinder c1 10 20 ;# create first cylinder
412 pcylinder c2 5 20 ;# create second cylinder
413 ttranslate c2 5 0 10 ;# translate second cylinder to x,y,z
414 bsection result c1 c2 ;# create a section of two cylinders
415 checksection result ;# will output error message if result is bad
416~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
417
504a8968 418The test case can have any name (except for the reserved names *begin, end, data, cases.list* and *parse.rules*).
72b7576f 419For systematic grids it is usually a capital English letter followed by a number.
420
421Example:
422
504a8968 423~~~~~
72b7576f 424 A1
425 A2
426 B1
427 B2
504a8968 428~~~~~
72b7576f 429
Such naming facilitates compact representation of test execution results in tabular format within HTML reports.
72b7576f 431
72b7576f 432
433@section testmanual_3 Creation And Modification Of Tests
434
435This section describes how to add new tests and update existing ones.
436
437@subsection testmanual_3_1 Choosing Group, Grid, and Test Case Name
438
The new tests are usually added as part of processing issues in the OCCT Mantis tracker.
Such tests should in general be added to the group *bugs*, in the grid
corresponding to the affected OCCT functionality. See <a href="#testmanual_5_2">Mapping of OCCT functionality to grid names in group *bugs*</a>.
442
443New grids can be added as necessary to contain tests for the functionality not yet covered by existing test grids.
The test case name in the *bugs* group should consist of the prefix *bug* followed by the ID of the corresponding Mantis issue (without leading zeroes). It is recommended to add a suffix providing a hint on the tested situation. If more than one test is added for a bug, they should be distinguished by suffixes, either meaningful or just ordinal numbers.
72b7576f 445
446Example:
447
448~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
504a8968 449 bug12345_coaxial
450 bug12345_orthogonal_1
451 bug12345_orthogonal_2
72b7576f 452~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
453
504a8968 454If the new test corresponds to a functionality already covered by the existing systematic test grid (e.g. group *mesh* for *BRepMesh* issues), this test can be added (or moved later by OCC team) to that grid.
72b7576f 455
456@subsection testmanual_3_2 Adding Data Files Required for a Test
457
It is advisable to make test scripts self-contained whenever possible, so that they can be used in environments where data files are not available. For that purpose, simple geometric objects and shapes can be created using DRAW commands in the test script itself.
504a8968 459
ae3eaf7b 460If the test requires a data file, it should be put to the directory listed in environment variable *CSF_TestDataPath*.
936f43da 461Alternatively, it can be put to subdirectory *data* of the test grid.
It is recommended to name the data file starting with the corresponding issue ID prefixed by *bug*, e.g. *bug12345_face1.brep*, to avoid possible conflicts with the names of existing data files.
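
For example, such a file would then be loaded in the test script as follows (using the illustrative name from above):

~~~~~
restore [locate_data_file bug12345_face1.brep] f1
~~~~~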
504a8968 463
Note that when the test is integrated to the master branch, the OCC team will move the data file to the data files repository, to keep the OCCT sources repository free of data files.
504a8968 465
When you prepare a test script, try to minimize the size of the involved data model. For instance, if a problem detected on a big shape can be reproduced on a single face extracted from that shape, use only that face in the test.
504a8968 467
468
469@subsection testmanual_3_3 Adding new DRAW commands
470
471If the test cannot be implemented using available DRAW commands, consider the following possibilities:
* If the existing DRAW command can be extended to enable the possibility required for a test in a natural way (e.g. by adding an option to activate a specific mode of the algorithm), this way is recommended. This change should be appropriately documented in a relevant Mantis issue.
473* If the new command is needed to access OCCT functionality not exposed to DRAW previously, and this command can be potentially reused (for other tests), it should be added to the package where similar commands are implemented (use *getsource* DRAW command to get the package name). The name and arguments of the new command should be chosen to keep similarity with the existing commands. This change should be documented in a relevant Mantis issue.
474* Otherwise the new command implementing the actions needed for this particular test should be added in *QABugs* package. The command name should be formed by the Mantis issue ID prefixed by *bug*, e.g. *bug12345*.
475
476Note that a DRAW command is expected to return 0 in case of a normal completion, and 1 (Tcl exception) if it is incorrectly used (e.g. a wrong number of input arguments). Thus if the new command needs to report a test error, this should be done by outputting an appropriate error message rather than by returning a non-zero value.
ae3eaf7b 477File names must be encoded in the script rather than in the DRAW command and passed to the DRAW command as an argument.
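
For example, assuming a hypothetical custom command *bug12345* that needs an input file, the file path is resolved in the script and passed to the command as an argument:

~~~~~
# the path is resolved by locate_data_file in the script, not hard-coded in C++
bug12345 [locate_data_file bug12345_face1.brep]
~~~~~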
504a8968 478
479@subsection testmanual_3_4 Script Implementation
480
The test should run commands necessary to perform the tested operations, in general assuming a clean DRAW session. The required DRAW modules should be loaded by the *pload* command, if this is not done by the *begin* script. The messages produced by commands in the standard output should include identifiable messages about any discovered problems.
482
483Usually the script represents a set of commands that a person would run interactively to perform the operation and see its results, with additional comments to explain what happens.
484
72b7576f 485Example:
504a8968 486~~~~~
487# Simple test of fusing box and sphere
488box b 10 10 10
489sphere s 5
490bfuse result b s
491checkshape result
492~~~~~
72b7576f 493
504a8968 494Make sure that file *parse.rules* in the grid or group directory contains a regular expression to catch possible messages indicating the failure of the test.
495
496For instance, for catching errors reported by *checkshape* command relevant grids define a rule to recognize its report by the word *Faulty*:
497
498~~~~~
499FAILED /\bFaulty\b/ bad shape
500~~~~~
501
502For the messages generated in the script it is recommended to use the word 'Error' in the error message.
72b7576f 503
72b7576f 504Example:
505
504a8968 506~~~~~
507set expected_length 11
508if { [expr $actual_length - $expected_length] > 0.001 } {
509 puts "Error: The length of the edge should be $expected_length"
510}
511~~~~~
512
513At the end, the test script should output *TEST COMPLETED* string to mark a successful completion of the script. This is often done by the *end* script in the grid.
514
515When the test script requires a data file, use Tcl procedure *locate_data_file* to get a path to it, instead of putting the path explicitly. This will allow easy move of the data file from OCCT sources repository to the data files repository without the need to update the test script.
72b7576f 516
72b7576f 517Example:
518
504a8968 519~~~~~
520stepread [locate_data_file CAROSKI_COUPELLE.step] a *
521~~~~~
522
936f43da 523When the test needs to produce some snapshots or other artefacts, use Tcl variable *imagedir* as the location where such files should be put.
ae3eaf7b 524* Command *testgrid* sets this variable to the subdirectory of the results folder corresponding to the grid.
525* Command *test* by default creates a dedicated temporary directory in the system temporary folder (normally the one specified by environment variable *TempDir*, *TEMP*, or *TMP*) for each execution, and sets *imagedir* to that location.
526
However, if the variable *imagedir* is defined on the top level of the Tcl interpreter, command *test* will use it instead of creating a new directory.
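
For instance, the following session forces the snapshots of a single test run into a custom folder (the path and the test case are only illustrative):

~~~~~
Draw[1]> set imagedir d:/occt/test-images
Draw[2]> test bugs heal bug210_1
~~~~~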
936f43da 528
529Use Tcl variable *casename* to prefix all files produced by the test.
530This variable is set to the name of the test case.
ae3eaf7b 531
The test system can recognize an image file (a snapshot) and include it in the HTML log and in the comparison of differences if its name starts with the name of the test case (use variable *casename*), optionally followed by an underscore or dash and an arbitrary suffix.
533
936f43da 534The image format (defined by extension) should be *png*.
72b7576f 535
72b7576f 536Example:
504a8968 537~~~~~
936f43da 538xwd $imagedir/${casename}.png
504a8968 539vdisplay result; vfit
936f43da 540vdump $imagedir/${casename}-axo.png
504a8968 541vfront; vfit
936f43da 542vdump $imagedir/${casename}-front.png
504a8968 543~~~~~
72b7576f 544
504a8968 545would produce:
546~~~~~
547A1.png
548A1-axo.png
549A1-front.png
550~~~~~
72b7576f 551
504a8968 552Note that OCCT must be built with FreeImage support to be able to produce usable images.
72b7576f 553
936f43da 554Other Tcl variables defined during the test execution are:
ae3eaf7b 555- *groupname*: name of the test group;
556- *gridname*: name of the test grid;
557- *dirname*: path to the root directory of the current set of test scripts.
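
These variables can be used, for instance, to report the context of the run or to build paths relative to the test scripts; a small sketch:

~~~~~
puts "Running case $casename from grid $gridname of group $groupname (scripts in $dirname)"
~~~~~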
936f43da 558
504a8968 559In order to ensure that the test works as expected in different environments, observe the following additional rules:
* Avoid using external commands such as *grep, rm,* etc., as these commands can be absent on another system (e.g. on Windows); use facilities provided by Tcl instead (see the sketch after this list).
* Do not put a call to *locate_data_file* inside a catch statement, as this can prevent correct interpretation of a missing data file by the test system.
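
A minimal sketch of the first rule; the *report* variable and the temporary file name are hypothetical:

~~~~~
# hypothetical string collected earlier in the script (e.g. read from a file with 'open'/'read')
set report "Faulty shapes : 1"
# instead of external 'grep': search text with the built-in command regexp
if { [regexp {Faulty} $report] } { puts "Error: faulty shape reported" }
# instead of external 'rm': delete temporary files with the built-in command 'file delete'
file delete $imagedir/${casename}_tmp.brep
~~~~~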
72b7576f 562
504a8968 563@subsection testmanual_3_5 Interpretation of test results
72b7576f 564
504a8968 565The result of the test is evaluated by checking its output against patterns defined in the files *parse.rules* of the grid and group.
72b7576f 566
504a8968 567The OCCT test system recognizes five statuses of the test execution:
568* SKIPPED: reported if a line matching SKIPPED pattern is found (prior to any FAILED pattern). This indicates that the test cannot be run in the current environment; the most typical case is the absence of the required data file.
569* FAILED: reported if a line matching pattern with status FAILED is found (unless it is masked by the preceding IGNORE pattern or a TODO statement), or if message TEST COMPLETED is not found at the end. This indicates that the test has produced a bad or unexpected result, and usually means a regression.
* BAD: reported if the test script output contains one or several TODO statements and the corresponding number of matching lines in the log. This indicates a known problem. The lines matching TODO statements are not checked against other patterns and thus will not cause a FAILED status.
571* IMPROVEMENT: reported if the test script output contains a TODO statement for which no corresponding line is found. This is a possible indication of improvement (a known problem has disappeared).
572* OK: reported if none of the above statuses have been assigned. This means that the test has passed without problems.
72b7576f 573
Other statuses can be specified in *parse.rules* files; these will be classified as FAILED.
72b7576f 575
504a8968 576For integration of the change to OCCT repository, all tests should return either OK or BAD status.
577The new test created for an unsolved problem should return BAD. The new test created for a fixed problem should return FAILED without the fix, and OK with the fix.
72b7576f 578
504a8968 579@subsection testmanual_3_6 Marking BAD cases
72b7576f 580
ae3eaf7b 581If the test produces an invalid result at a certain moment then the corresponding bug should be created in the OCCT issue tracker located at http://tracker.dev.opencascade.org, and the problem should be marked as TODO in the test script.
72b7576f 582
504a8968 583The following statement should be added to such a test script:
584~~~~~
585puts "TODO BugNumber ListOfPlatforms: RegularExpression"
586~~~~~
587
588Here:
589* *BugNumber* is the bug ID in the tracker. For example: #12345.
ae3eaf7b 590* *ListOfPlatforms* is a list of platforms, at which the bug is reproduced (e.g. Mandriva2008, Windows or All). Note that the platform name is custom for the OCCT test system; it corresponds to the value of environment variable *os_type* defined in DRAW.
72b7576f 591
592Example:
504a8968 593~~~~~
594Draw[2]> puts $env(os_type)
595windows
596~~~~~
72b7576f 597
* *RegularExpression* is a regular expression which should match the line indicating the problem in the script output.
72b7576f 599
72b7576f 600Example:
504a8968 601~~~~~
602puts "TODO #22622 Mandriva2008: Abort .* an exception was raised"
603~~~~~
72b7576f 604
504a8968 605The parser checks the test output and if an output line matches the *RegularExpression* then it will be assigned a BAD status instead of FAILED.
72b7576f 606
504a8968 607A separate TODO line must be added for each output line matching an error expression to mark the test as BAD. If not all TODO messages are found in the test log, the test will be considered as possible improvement.
608
To mark the test as BAD for an incomplete case (when the final *TEST COMPLETED* message is missing) the expression *TEST INCOMPLETE* should be used instead of the regular expression.
72b7576f 610
611Example:
612
504a8968 613~~~~~
614puts "TODO OCC22817 All: exception.+There are no suitable edges"
615puts "TODO OCC22817 All: \\*\\* Exception \\*\\*"
616puts "TODO OCC22817 All: TEST INCOMPLETE"
617~~~~~
72b7576f 618
504a8968 619
620
621@section testmanual_4 Advanced Use
72b7576f 622
623@subsection testmanual_4_1 Running Tests on Older Versions of OCCT
624
Sometimes it might be necessary to run tests on previous versions of OCCT (6.5.4 or older) that do not include this test system. This can be done by adding the DRAW configuration file *DrawAppliInit* in the directory that is current at the moment of DRAW start-up, to load the test commands and to define the necessary environment.
72b7576f 626
Note: in OCCT 6.5.3, file *DrawAppliInit* already exists in <i>$CASROOT/src/DrawResources</i>; new commands should be added to this file instead of creating a new one in the current directory.
628
For example, let us assume that *d:/occt* contains an up-to-date version of OCCT sources with tests, and the test data archive is unpacked to <i>d:/test-data</i>:
630
631~~~~~
632set env(CASROOT) d:/occt
633set env(CSF_TestScriptsPath) $env(CASROOT)/tests
634source $env(CASROOT)/src/DrawResources/TestCommands.tcl
set env(CSF_TestDataPath) $env(CASROOT)/data\;d:/test-data
636return
637~~~~~
638
Note that on older versions of OCCT the tests are run in compatibility mode, thus not all output of the test command can be captured; this can lead to the absence of some error messages (which can be reported as either a failure or an improvement).
504a8968 640
641@subsection testmanual_4_2 Adding custom tests
642
643You can extend the test system by adding your own tests. For that it is necessary to add paths to the directory where these tests are located, and one or more additional data directories, to the environment variables *CSF_TestScriptsPath* and *CSF_TestDataPath*. The recommended way for doing this is using DRAW configuration file *DrawAppliInit* located in the directory which is current by the moment of DRAW start-up.
644
4ee1bdf4 645Use Tcl command <i>_path_separator</i> to insert a platform-dependent separator to the path list.
504a8968 646
647For example:
648~~~~~
649set env(CSF_TestScriptsPath) \
650 $env(TestScriptsPath)[_path_separator]d:/MyOCCTProject/tests
651set env(CSF_TestDataPath) \
652 d:/occt/test-data[_path_separator]d:/MyOCCTProject/data
653return ;# this is to avoid an echo of the last command above in cout
654~~~~~
655
656@subsection testmanual_4_3 Parallel execution of tests
657
For better efficiency, on computers with multiple CPUs the tests can be run in parallel mode. This is the default behavior for command *testgrid*: the tests are executed in parallel processes (their number is equal to the number of CPUs available on the system). In order to change this behavior, use option <i>-parallel</i> followed by the number of processes to be used (1 or 0 to run sequentially).
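
For example, to run the whole test suite sequentially:

~~~~~
Draw[]> testgrid -parallel 0
~~~~~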
72b7576f 659
87f42a3f 660Note that the parallel execution is only possible if Tcl extension package *Thread* is installed.
661If this package is not available, *testgrid* command will output a warning message.
72b7576f 662
504a8968 663@subsection testmanual_4_4 Checking non-regression of performance, memory, and visualization
72b7576f 664
504a8968 665Some test results are very dependent on the characteristics of the workstation, where they are performed, and thus cannot be checked by comparison with some predefined values. These results can be checked for non-regression (after a change in OCCT code) by comparing them with the results produced by the version without this change. The most typical case is comparing the result obtained in a branch created for integration of a fix (CR***) with the results obtained on the master branch before that change is made.
666
667OCCT test system provides a dedicated command *testdiff* for comparing CPU time of execution, memory usage, and images produced by the tests.
668
669~~~~~
670testdiff dir1 dir2 [groupname [gridname]] [options...]
671~~~~~
672Here *dir1* and *dir2* are directories containing logs of two test runs.
673
674Possible options are:
ba06f8bb 675* <i>-save \<filename\> </i> - saves the resulting log in a specified file (<i>$dir1/diff-$dir2.log</i> by default). HTML log is saved with the same name and extension .html;
504a8968 676* <i>-status {same|ok|all}</i> - allows filtering compared cases by their status:
677 * *same* - only cases with same status are compared (default);
678 * *ok* - only cases with OK status in both logs are compared;
679 * *all* - results are compared regardless of status;
ba06f8bb 680* <i>-verbose \<level\> </i> - defines the scope of output data:
504a8968 681 * 1 - outputs only differences;
682 * 2 - additionally outputs the list of logs and directories present in one of directories only;
683 * 3 - (by default) additionally outputs progress messages;
72b7576f 684
72b7576f 685Example:
504a8968 686
687~~~~~
688Draw[]> testdiff results-CR12345-2012-10-10T08:00 results-master-2012-10-09T21:20
689~~~~~
690
691@section testmanual_5 APPENDIX
692
693@subsection testmanual_5_1 Test groups
694
695@subsubsection testmanual_5_1_1 3rdparty
696
697This group allows testing the interaction of OCCT and 3rdparty products.
698
699DRAW module: VISUALIZATION.
700
701| Grid | Commands | Functionality |
702| :---- | :----- | :------- |
703| export | vexport | export of images to different formats |
704| fonts | vtrihedron, vcolorscale, vdrawtext | display of fonts |
705
706
707@subsubsection testmanual_5_1_2 blend
708
709This group allows testing blends (fillets) and related operations.
710
711DRAW module: MODELING.
712
713| Grid | Commands | Functionality |
714| :---- | :----- | :------- |
715| simple | blend | fillets on simple shapes |
716| complex | blend | fillets on complex shapes, non-trivial geometry |
717| tolblend_simple | tolblend, blend | |
718| buildevol | buildevol | |
719| tolblend_buildvol | tolblend, buildevol | use of additional command tolblend |
720| bfuseblend | bfuseblend | |
721| encoderegularity | encoderegularity | |
722
723@subsubsection testmanual_5_1_3 boolean
724
725This group allows testing Boolean operations.
726
727DRAW module: MODELING (packages *BOPTest* and *BRepTest*).
728
Grid names are based on the name of the command used, with suffixes:
* <i>_2d</i> – for tests operating with 2d objects (e.g. wires);
731* <i>_simple</i> – for tests operating on simple shapes (boxes, cylinders, toruses, etc.);
732* <i>_complex</i> – for tests dealing with complex shapes.
733
734| Grid | Commands | Functionality |
735| :---- | :----- | :------- |
736| bcommon_2d | bcommon | Common operation (old algorithm), 2d |
737| bcommon_complex | bcommon | Common operation (old algorithm), complex shapes |
738| bcommon_simple | bcommon | Common operation (old algorithm), simple shapes |
739| bcut_2d | bcut | Cut operation (old algorithm), 2d |
740| bcut_complex | bcut | Cut operation (old algorithm), complex shapes |
741| bcut_simple | bcut | Cut operation (old algorithm), simple shapes |
742| bcutblend | bcutblend | |
743| bfuse_2d | bfuse | Fuse operation (old algorithm), 2d |
744| bfuse_complex | bfuse | Fuse operation (old algorithm), complex shapes |
745| bfuse_simple | bfuse | Fuse operation (old algorithm), simple shapes |
746| bopcommon_2d | bopcommon | Common operation, 2d |
747| bopcommon_complex | bopcommon | Common operation, complex shapes |
748| bopcommon_simple | bopcommon | Common operation, simple shapes |
749| bopcut_2d | bopcut | Cut operation, 2d |
750| bopcut_complex | bopcut | Cut operation, complex shapes |
751| bopcut_simple | bopcut | Cut operation, simple shapes |
752| bopfuse_2d | bopfuse | Fuse operation, 2d |
753| bopfuse_complex | bopfuse | Fuse operation, complex shapes |
754| bopfuse_simple | bopfuse | Fuse operation, simple shapes |
755| bopsection | bopsection | Section |
756| boptuc_2d | boptuc | |
757| boptuc_complex | boptuc | |
758| boptuc_simple | boptuc | |
759| bsection | bsection | Section (old algorithm) |
760
761@subsubsection testmanual_5_1_4 bugs
762
763This group allows testing cases coming from Mantis issues.
764
765The grids are organized following OCCT module and category set for the issue in the Mantis tracker.
766See <a href="#testmanual_5_2">Mapping of OCCT functionality to grid names in group *bugs*</a> for details.
767
768@subsubsection testmanual_5_1_5 caf
769
770This group allows testing OCAF functionality.
771
772DRAW module: OCAFKERNEL.
773
774| Grid | Commands | Functionality |
775| :---- | :----- | :------- |
776| basic | | Basic attributes |
777| bugs | | Saving and restoring of document |
778| driver | | OCAF drivers |
779| named_shape | | *TNaming_NamedShape* attribute |
780| presentation | | *AISPresentation* attributes |
781| tree | | Tree construction attributes |
782| xlink | | XLink attributes |
783
784@subsubsection testmanual_5_1_6 chamfer
785
786This group allows testing chamfer operations.
787
788DRAW module: MODELING.
789
790The test grid name is constructed depending on the type of the tested chamfers. Additional suffix <i>_complex</i> is used for test cases involving complex geometry (e.g. intersections of edges forming a chamfer); suffix <i>_sequence</i> is used for grids where chamfers are computed sequentially.
791
792| Grid | Commands | Functionality |
793| :---- | :----- | :------- |
794| equal_dist | | Equal distances from edge |
795| equal_dist_complex | | Equal distances from edge, complex shapes |
796| equal_dist_sequence | | Equal distances from edge, sequential operations |
797| dist_dist | | Two distances from edge |
798| dist_dist_complex | | Two distances from edge, complex shapes |
799| dist_dist_sequence | | Two distances from edge, sequential operations |
800| dist_angle | | Distance from edge and given angle |
801| dist_angle_complex | | Distance from edge and given angle |
802| dist_angle_sequence | | Distance from edge and given angle |
803
804@subsubsection testmanual_5_1_7 demo
805
806This group allows demonstrating how testing cases are created, and testing DRAW commands and the test system as a whole.
807
808| Grid | Commands | Functionality |
809| :---- | :----- | :------- |
810| draw | getsource, restore | Basic DRAW commands |
811| testsystem | | Testing system |
812| samples | | OCCT samples |
813
814
815@subsubsection testmanual_5_1_8 draft
816
817This group allows testing draft operations.
818
819DRAW module: MODELING.
820
821| Grid | Commands | Functionality |
822| :---- | :----- | :------- |
823| Angle | depouille | Drafts with angle (inclined walls) |
824
825
826@subsubsection testmanual_5_1_9 feat
827
828This group allows testing creation of features on a shape.
829
830DRAW module: MODELING (package *BRepTest*).
831
832| Grid | Commands | Functionality |
833| :---- | :----- | :------- |
834| featdprism | | |
835| featlf | | |
836| featprism | | |
837| featrevol | | |
838| featrf | | |
839
840@subsubsection testmanual_5_1_10 heal
841
842This group allows testing the functionality provided by *ShapeHealing* toolkit.
843
844DRAW module: XSDRAW
845
846| Grid | Commands | Functionality |
847| :---- | :----- | :------- |
848| fix_shape | fixshape | Shape healing |
849| fix_gaps | fixwgaps | Fixing gaps between edges on a wire |
850| same_parameter | sameparameter | Fixing non-sameparameter edges |
851| fix_face_size | DT_ApplySeq | Removal of small faces |
852| elementary_to_revolution | DT_ApplySeq | Conversion of elementary surfaces to revolution |
853| direct_faces | directfaces | Correction of axis of elementary surfaces |
854| drop_small_edges | fixsmall | Removal of small edges |
855| split_angle | DT_SplitAngle | Splitting periodic surfaces by angle |
856| split_angle_advanced | DT_SplitAngle | Splitting periodic surfaces by angle |
857| split_angle_standard | DT_SplitAngle | Splitting periodic surfaces by angle |
858| split_closed_faces | DT_ClosedSplit | Splitting of closed faces |
859| surface_to_bspline | DT_ToBspl | Conversion of surfaces to b-splines |
860| surface_to_bezier | DT_ShapeConvert | Conversion of surfaces to bezier |
861| split_continuity | DT_ShapeDivide | Split surfaces by continuity criterion |
862| split_continuity_advanced | DT_ShapeDivide | Split surfaces by continuity criterion |
863| split_continuity_standard | DT_ShapeDivide | Split surfaces by continuity criterion |
864| surface_to_revolution_advanced | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, complex cases |
865| surface_to_revolution_standard | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, simple cases |
866
867@subsubsection testmanual_5_1_11 mesh
868
4ee1bdf4 869This group allows testing shape tessellation (*BRepMesh*) and shading.
504a8968 870
871DRAW modules: MODELING (package *MeshTest*), VISUALIZATION (package *ViewerTest*)
872
873| Grid | Commands | Functionality |
874| :---- | :----- | :------- |
875| advanced_shading | vdisplay | Shading, complex shapes |
876| standard_shading | vdisplay | Shading, simple shapes |
877| advanced_mesh | mesh | Meshing of complex shapes |
878| standard_mesh | mesh | Meshing of simple shapes |
879| advanced_incmesh | incmesh | Meshing of complex shapes |
880| standard_incmesh | incmesh | Meshing of simple shapes |
881| advanced_incmesh_parallel | incmesh | Meshing of complex shapes, parallel mode |
882| standard_incmesh_parallel | incmesh | Meshing of simple shapes, parallel mode |
883
884@subsubsection testmanual_5_1_12 mkface
885
886This group allows testing creation of simple surfaces.
887
888DRAW module: MODELING (package *BRepTest*)
889
890| Grid | Commands | Functionality |
891| :---- | :----- | :------- |
892| after_trim | mkface | |
893| after_offset | mkface | |
894| after_extsurf_and_offset | mkface | |
895| after_extsurf_and_trim | mkface | |
896| after_revsurf_and_offset | mkface | |
897| mkplane | mkplane | |
898
899@subsubsection testmanual_5_1_13 nproject
900
901This group allows testing normal projection of edges and wires onto a face.
902
903DRAW module: MODELING (package *BRepTest*)
904
905| Grid | Commands | Functionality |
906| :---- | :----- | :------- |
907| Base | nproject | |
908
909@subsubsection testmanual_5_1_14 offset
910
911This group allows testing offset functionality for curves and surfaces.
912
913DRAW module: MODELING (package *BRepTest*)
914
915| Grid | Commands | Functionality |
916| :---- | :----- | :------- |
917| compshape | offsetcompshape | Offset of shapes with removal of some faces |
918| faces_type_a | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a fillet |
919| faces_type_i | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a sharp edge |
920| shape_type_a | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a fillet |
921| shape_type_i | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a fillet |
922| shape | offsetshape | |
923| wire_closed_outside_0_005, wire_closed_outside_0_025, wire_closed_outside_0_075, wire_closed_inside_0_005, wire_closed_inside_0_025, wire_closed_inside_0_075, wire_unclosed_outside_0_005, wire_unclosed_outside_0_025, wire_unclosed_outside_0_075 | mkoffset | 2d offset of closed and unclosed planar wires with different offset step and directions of offset ( inside / outside ) |
924
925@subsubsection testmanual_5_1_15 pipe
926
927This group allows testing construction of pipes (sweeping of a contour along profile).
928
929DRAW module: MODELING (package *BRepTest*)
930
931| Grid | Commands | Functionality |
932| :---- | :----- | :------- |
933| Standard | pipe | |
934
935@subsubsection testmanual_5_1_16 prism
936
937This group allows testing construction of prisms.
938
939DRAW module: MODELING (package *BRepTest*)
940
941| Grid | Commands | Functionality |
942| :---- | :----- | :------- |
943| seminf | prism | |
944
945@subsubsection testmanual_5_1_17 sewing
946
947This group allows testing sewing of faces by connecting edges.
948
949DRAW module: MODELING (package *BRepTest*)
950
951| Grid | Commands | Functionality |
952| :---- | :----- | :------- |
953| tol_0_01 | sewing | Sewing faces with tolerance 0.01 |
954| tol_1 | sewing | Sewing faces with tolerance 1 |
955| tol_100 | sewing | Sewing faces with tolerance 100 |
956
957@subsubsection testmanual_5_1_18 thrusection
958
959This group allows testing construction of shell or a solid passing through a set of sections in a given sequence (loft).
960
961| Grid | Commands | Functionality |
962| :---- | :----- | :------- |
963| solids | thrusection | Lofting with resulting solid |
964| not_solids | thrusection | Lofting with resulting shell or face |
965
966@subsubsection testmanual_5_1_19 xcaf
967
968This group allows testing extended data exchange packages.
969
970| Grid | Commands | Functionality |
971| :---- | :----- | :------- |
972| dxc, dxc_add_ACL, dxc_add_CL, igs_to_dxc, igs_add_ACL, brep_to_igs_add_CL, stp_to_dxc, stp_add_ACL, brep_to_stp_add_CL, brep_to_dxc, add_ACL_brep, brep_add_CL | | Subgroups are divided by format of source file, by format of result file and by type of document modification. For example, *brep_to_igs* means that the source shape in brep format was added to the document, which was saved into igs format after that. The postfix *add_CL* means that colors and layers were initialized in the document before saving and the postfix *add_ACL* corresponds to the creation of assembly and initialization of colors and layers in a document before saving. |
973
974
975@subsection testmanual_5_2 Mapping of OCCT functionality to grid names in group *bugs*
976
977| OCCT Module / Mantis category | Toolkits | Test grid in group bugs |
978| :---------- | :--------- | :---------- |
979| Application Framework | PTKernel, TKPShape, TKCDF, TKLCAF, TKCAF, TKBinL, TKXmlL, TKShapeSchema, TKPLCAF, TKBin, TKXml, TKPCAF, FWOSPlugin, TKStdLSchema, TKStdSchema, TKTObj, TKBinTObj, TKXmlTObj | caf |
980| Draw | TKDraw, TKTopTest, TKViewerTest, TKXSDRAW, TKDCAF, TKXDEDRAW, TKTObjDRAW, TKQADraw, DRAWEXE, Problems of testing system | draw |
981| Shape Healing | TKShHealing | heal |
982| Mesh | TKMesh, TKXMesh | mesh |
983| Data Exchange | TKIGES | iges |
984| Data Exchange | TKSTEPBase, TKSTEPAttr, TKSTEP209, TKSTEP | step |
985| Data Exchange | TKSTL, TKVRML | stlvrml |
986| Data Exchange | TKXSBase, TKXCAF, TKXCAFSchema, TKXDEIGES, TKXDESTEP, TKXmlXCAF, TKBinXCAF | xde |
6268cc68 987| Foundation Classes | TKernel, TKMath | fclasses |
504a8968 988| Modeling_algorithms | TKGeomAlgo, TKTopAlgo, TKPrim, TKBO, TKBool, TKHLR, TKFillet, TKOffset, TKFeat, TKXMesh | modalg |
989| Modeling Data | TKG2d, TKG3d, TKGeomBase, TKBRep | moddata |
6ce0df1e 990| Visualization | TKService, TKV2d, TKV3d, TKOpenGl, TKMeshVS, TKNIS | vis |
504a8968 991
992
5ae01c85 993@subsection testmanual_5_3 Recommended approaches to checking test results
504a8968 994
995@subsubsection testmanual_5_3_1 Shape validity
996
997Run command *checkshape* on the result (or intermediate) shape and make sure that *parse.rules* of the test grid or group reports bad shapes (usually recognized by word "Faulty") as error.
998
999Example
1000~~~~~
1001checkshape result
1002~~~~~
1003
To check the number of faults in the shape, command *checkfaults* can be used.
1005
1006Use: checkfaults shape source_shape [ref_value=0]
1007
1008The default syntax of *checkfaults* command:
1009~~~~~
1010checkfaults results a_1
1011~~~~~
1012
The command will check the number of faults in the source shape (*a_1*) and compare it
with the number of faults in the resulting shape (*result*). If shape *result* contains
more faults, you will get an error:
1016~~~~~
1017checkfaults results a_1
1018Error : Number of faults is 5
1019~~~~~
1020It is possible to set the reference value for comparison (reference value is 4):
1021
1022~~~~~
1023checkfaults results a_1 4
1024~~~~~
1025
1026If number of faults in the resulting shape is unstable, reference value should be set to "-1".
1027As a result command *checkfaults* will return the following error:
1028
1029~~~~~
1030checkfaults results a_1 -1
1031Error : Number of faults is UNSTABLE
1032~~~~~
1033
504a8968 1034@subsubsection testmanual_5_3_2 Shape tolerance
1035The maximal tolerance of sub-shapes of each kind of the resulting shape can be extracted from output of tolerance command as follows:
1036
1037~~~~~
1038set tolerance [tolerance result]
1039regexp { *FACE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_face
regexp { *EDGE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_edge
1041regexp { *VERTEX +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_vertex
1042~~~~~
1043
5ae01c85 1044It is possible to use command *checkmaxtol* to check maximal tolerance of shape and compare it with reference value.
1045
fb60057d 1046Use: checkmaxtol shape [options...]
5ae01c85 1047
1048Allowed options are:
fb60057d 1049 * -ref: reference value of maximum tolerance
1050 * -source: list of shapes to compare with
5ae01c85 1051 * -min_tol: minimum tolerance for comparison
1052 * -multi_tol: tolerance multiplier
1053
5ae01c85 1054The default syntax of *checkmaxtol* command for comparison with the reference value:
1055~~~~~
fb60057d 1056checkmaxtol result -ref 0.00001
5ae01c85 1057~~~~~
1058
It is also possible to compare the max tolerance of the resulting shape with the max tolerance of the source shapes.
In the following example command *checkmaxtol* gets the max tolerance among the objects *a_1* and *a_2*.
Then it chooses the maximum between the found tolerance and the value of -min_tol (0.000001)
and multiplies it by the coefficient -multi_tol (i.e. 2):
1063
1064~~~~~
fb60057d 1065checkmaxtol result -source {a_1 a_2} -min_tol 0.000001 -multi_tol 2
5ae01c85 1066~~~~~
1067
If the maximum tolerance of the resulting shape is greater than the tolerance computed for comparison, the command will return an error.
1069
fb60057d 1070Also, command *checkmaxtol* can be used to get max tolerance of the shape:
1071
1072~~~~~
1073set maxtol [checkmaxtol result]
1074~~~~~
1075
504a8968 1076@subsubsection testmanual_5_3_3 Shape volume, area, or length
1077
1078Use command *vprops, sprops,* or *lprops* to correspondingly measure volume, area, or length of the shape produced by the test. The value can be extracted from the result of the command by *regexp*.
1079
1080Example:
1081~~~~~
1082# check area of shape result with 1% tolerance
1083regexp {Mass +: +([-0-9.+eE]+)} [sprops result] dummy area
1084if { abs($area - $expected) > 0.1 + 0.01 * abs ($area) } {
1085 puts "Error: The area of result shape is $area, while expected $expected"
1086}
1087~~~~~
1088
1089@subsubsection testmanual_5_3_4 Memory leaks
1090
The test system measures the amount of memory used by each test case, and considerable deviations (as well as the overall difference) compared with the reference results will be reported by the *testdiff* command.
1092
The typical approach to checking a memory leak on a particular operation is to run this operation in a cycle, measuring memory consumption at each step and comparing it with some threshold value. Note that the file *begin* in the group *bugs* defines command *checktrend*, which can be used to analyze a sequence of memory measurements and obtain a statistically based evaluation of the presence of a leak.
1094
1095Example:
1096~~~~~
1097set listmem {}
1098for {set i 1} {$i < 100} {incr i} {
1099 # run suspect operation
1100
1101 # check memory usage (with tolerance equal to half page size)
1102 lappend listmem [expr [meminfo w] / 1024]
1103 if { [checktrend $listmem 0 256 "Memory leak detected"] } {
1104 puts "No memory leak, $i iterations"
1105 break
1106 }
1107}
1108~~~~~
1109
1110@subsubsection testmanual_5_3_5 Visualization
1111
936f43da 1112Take a snapshot of the viewer, give it the name of the test case, and save in the directory indicated by Tcl variable *imagedir*.
504a8968 1113
1114~~~~~
1115vinit
1116vclear
1117vdisplay result
1118vsetdispmode 1
1119vfit
1120vzfit
1121vdump $imagedir/${casename}_shading.png
1122~~~~~
1123
1124This image will be included in the HTML log produced by *testgrid* command and will be checked for non-regression through comparison of images by command *testdiff*.
5ae01c85 1125
1126@subsubsection testmanual_5_3_6 Number of free edges
1127
1128To check the number of free edges run the command *checkfreebounds*.
1129
1130It compares number of free edges with reference value.
1131
1132Use: checkfreebounds shape ref_value [options...]
1133
1134Allowed options are:
1135 * -tol N: used tolerance (default -0.01)
1136 * -type N: used type, possible values are "closed" and "opened" (default "closed")
1137
1138~~~~~
1139checkfreebounds result 13
1140~~~~~
1141
1142Option -tol N is used to set tolerance for command *freebounds*, which is used within command *checkfreebounds*.
1143
1144Option -type N is used to select the type of counted free edges - closed or opened.
1145
1146If the number of free edges in the resulting shape is unstable, reference value should be set to "-1".
1147As a result command *checkfreebounds* will return the following error:
1148
1149~~~~~
1150checkfreebounds result -1
1151Error : Number of free edges is UNSTABLE
1152~~~~~
1153
1154@subsubsection testmanual_5_3_7 Compare numbers
1155
1156Procedure to check equality of two reals with some tolerance (relative and absolute)
1157
1158Use: checkreal name value expected tol_abs tol_rel
1159
1160~~~~~
1161checkreal "Some important value" $value 5 0.0001 0.01
1162~~~~~
1163
1164@subsubsection testmanual_5_3_8 Check number of sub-shapes
1165
1166Compare number of sub-shapes in "shape" with given reference data
1167
1168Use: checknbshapes shape [options...]
1169Allowed options are:
1170 * -vertex N
1171 * -edge N
1172 * -wire N
1173 * -face N
1174 * -shell N
1175 * -solid N
1176 * -compsolid N
1177 * -compound N
1178 * -shape N
1179 * -t: compare the number of sub-shapes in "shape" counting
1180 the same sub-shapes with different location as different sub-shapes.
1181 * -m msg: print "msg" in case of error
1182
1183~~~~~
1184checknbshapes result -vertex 8 -edge 4
1185~~~~~
1186
1187@subsubsection testmanual_5_3_9 Check pixel color
1188
1189To check pixel color command *checkcolor* can be used.
1190
1191Use: checkcolor x y red green blue
1192
1193 x y - pixel coordinates
1194
1195 red green blue - expected pixel color (values from 0 to 1)
1196
1197This procedure checks color with tolerance (5x5 area)
1198
The next example will compare the color of the point with coordinates x=100 y=100 with the RGB color R=1 G=0 B=0.
If the colors are not equal, the procedure will check the neighbouring points (5x5 area).
1201~~~~~
1202checkcolor 100 100 1 0 0
1203~~~~~