Automated Testing System {#occt_dev_guides__tests}
======================================

@tableofcontents

@section testmanual_intro Introduction

This document provides OCCT developers and contributors with an overview and practical guidelines for working with the OCCT automated testing system.

Reading the Introduction should be sufficient for developers to use the test system to control non-regression of the modifications they implement in OCCT. Other sections provide a more in-depth description of the test system, required for modifying the tests and adding new test cases.

@subsection testmanual_intro_basic Basic Information

The OCCT automated testing system is organized around @ref occt_user_guides__test_harness "DRAW Test Harness", a console application based on the Tcl (a scripting language) interpreter extended by OCCT-related commands.

Standard OCCT tests are included with the OCCT sources and are located in the subdirectory *tests* of the OCCT root folder. Other test folders can be included in the test system, e.g. for testing applications based on OCCT.

The tests are organized in three levels:

 * Group: a group of related test grids, usually testing a particular OCCT functionality (e.g. blend);
 * Grid: a set of test cases within a group, usually aimed at testing a particular aspect or mode of execution of the relevant functionality (e.g. buildevol);
 * Test case: a script implementing an individual test (e.g. K4).

See @ref testmanual_5_1 "Test Groups" chapter for the current list of available test groups and grids.

Some tests involve data files (typically CAD models) which are located separately and are not included with the OCCT code. The archive with publicly available test data files should be downloaded and installed independently of the OCCT sources (see http://dev.opencascade.org).

@subsection testmanual_1_2 Intended Use of Automatic Tests

Each modification made in the OCCT code must be checked for non-regression
by running the whole set of tests. The developer who makes the modification
is responsible for running the tests available to them and ensuring their non-regression.

Note that many tests are based on data files that are confidential and thus available only at OPEN CASCADE.
The official certification testing of each change before its integration to the master branch of the official OCCT Git repository (and finally to the official release) is performed by OPEN CASCADE to ensure non-regression on all existing test cases and supported platforms.

Each new non-trivial modification (improvement, bug fix, new feature) in OCCT should be accompanied by a relevant test case suitable for verifying that modification. This test case is to be added by the developer who provides the modification.

If a modification affects the result of an existing test case, either the modification should be corrected (if it causes a regression) or the affected test cases should be updated to account for the modification.

The modifications made in the OCCT code and the related test scripts should be included in the same integration to the master branch.

@subsection testmanual_1_3 Quick Start

@subsubsection testmanual_1_3_1 Setup

Before running tests, make sure to define the environment variable *CSF_TestDataPath* pointing to the directory containing the test data files.

For this it is recommended to add a file *DrawAppliInit* in the directory which is current at the moment of starting DRAWEXE (normally the OCCT root directory, <i>$CASROOT</i>). This file is evaluated automatically at DRAW start.

Example (Windows)

~~~~~{.tcl}
set env(CSF_TestDataPath) $env(CSF_TestDataPath)\;d:/occt/test-data
~~~~~

Note that the variable *CSF_TestDataPath* is set to a default value at DRAW start, pointing at the folder <i>$CASROOT/data</i>.
In this example, the subdirectory <i>d:/occt/test-data</i> is added to this path. Similar code can be used on Linux and Mac OS X, except that on non-Windows platforms the colon ":" should be used as the path separator instead of the semicolon ";".
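
For instance, a *DrawAppliInit* doing the same on Linux could look as follows (a sketch; the path <i>/home/user/occt/test-data</i> is an arbitrary example):

~~~~~{.tcl}
# ":" is the path separator on Linux and Mac OS X
set env(CSF_TestDataPath) $env(CSF_TestDataPath):/home/user/occt/test-data
~~~~~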

All tests are run from the DRAW command prompt (run *draw.bat* or *draw.sh* to start it).

@subsubsection testmanual_1_3_2 Running Tests

To run all tests, type command *testgrid*

Example:

~~~~~
Draw[]> testgrid
~~~~~

To run only a subset of test cases, give masks for group, grid, and test case names to be executed.
Each argument is a list of file masks separated with commas or spaces; by default "*" is assumed.

Example:

~~~~~
Draw[]> testgrid bugs caf,moddata*,xde
~~~~~

As the tests progress, the result of each test case is reported.
At the end of the log a summary of test cases is output,
including the list of detected regressions and improvements, if any.


Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
Tests summary

CASE 3rdparty export A1: OK
...
CASE pipe standard B1: BAD (known problem)
CASE pipe standard C1: OK
No regressions
Total cases: 208 BAD, 31 SKIPPED, 3 IMPROVEMENT, 1791 OK
Elapsed time: 1 Hours 14 Minutes 33.7384512019 Seconds
Detailed logs are saved in D:/occt/results_2012-06-04T0919
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The tests are considered as non-regressive if only OK, BAD (i.e. known problem), and SKIPPED (i.e. not executed, typically because of the lack of a data file) statuses are reported. See @ref testmanual_details_results "Interpretation of test results" for details.

The results and detailed logs of the tests are saved by default to a new subdirectory of the subdirectory *results* in the current folder, whose name is generated automatically using the current date and time, prefixed by the Git branch name (if Git is available and the current sources are managed by Git).
If necessary, a non-default output directory can be specified using option <i>-outdir</i> followed by a path to the directory. This directory should be new or empty; use option <i>-overwrite</i> to allow writing results in an existing non-empty directory.

Example:
~~~~~
Draw[]> testgrid -outdir d:/occt/last_results -overwrite
~~~~~
In the output directory, a cumulative HTML report <i>summary.html</i> provides links to the reports on each test case. An additional report in JUnit-style XML format can be output for use in Jenkins or another continuous integration system.

To re-run the test cases which were detected as regressions on the previous run, use option <i>-regress dirname</i>.
Here <i>dirname</i> is the path to the directory containing the results of the previous run. Only test cases with *FAILED* and *IMPROVEMENT* statuses will be tested.

Example:
~~~~~
Draw[]> testgrid -regress d:/occt/last_results
~~~~~

Type <i>help testgrid</i> in the DRAW prompt to get help on the options supported by the *testgrid* command:

~~~~~
Draw[3]> help testgrid
testgrid: Run all tests, or specified group, or one grid
 Use: testgrid [groupmask [gridmask [casemask]]] [options...]
 Allowed options are:
 -parallel N: run N parallel processes (default is number of CPUs, 0 to disable)
 -refresh N: save summary logs every N seconds (default 60, minimal 1, 0 to disable)
 -outdir dirname: set log directory (should be empty or non-existing)
 -overwrite: force writing logs in existing non-empty directory
 -xml filename: write XML report for Jenkins (in JUnit-like format)
 -beep: play sound signal at the end of the tests
 -regress dirname: re-run only a set of tests that have been detected as regressions on some previous run.
 Here "dirname" is path to directory containing results of previous run.
 Groups, grids, and test cases to be executed can be specified by list of file
 masks, separated by spaces or comma; default is all (*).
~~~~~

@subsubsection testmanual_1_3_3 Running a Single Test

To run a single test, type command *test* followed by the names of the group, grid, and test case.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 Draw[1]> test blend simple A1
 CASE blend simple A1: OK
 Draw[2]>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Note that normally the intermediate output of the script is not shown. The detailed log of the test can be obtained after the test execution by running command <i>"dlog get"</i>.

To see the intermediate commands and their output during the test execution, add one more argument
<i>"echo"</i> at the end of the command line. Note that with this option the log is not collected and the summary is not produced.
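
For example, the same case as above could be run with echoing enabled (the <i>-echo</i> option is described in the *test* command help below):

~~~~~
Draw[1]> test blend simple A1 -echo
~~~~~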

Type <i>help test</i> in the DRAW prompt to get help on the options supported by the *test* command:

~~~~~
Draw[3]> help test
test: Run specified test case
 Use: test group grid casename [options...]
 Allowed options are:
 -echo: all commands and results are echoed immediately,
 but log is not saved and summary is not produced
 It is also possible to use "1" instead of "-echo"
 If echo is OFF, log is stored in memory and only summary
 is output (the log can be obtained with command "dlog get")
 -outfile filename: set log file (should be non-existing),
 it is possible to save log file in text file or
 in html file (with snapshot), for that "filename"
 should have ".html" extension
 -overwrite: force writing log in existing file
 -beep: play sound signal at the end of the test
 -errors: show all lines from the log report that are recognized as errors
 This key will be ignored if the "-echo" key is already set.
~~~~~

@subsubsection testmanual_intro_quick_create Creating a New Test

The detailed rules of creation of new tests are given in @ref testmanual_3 "Creation and modification of tests" chapter. The following short description covers the most typical situations:

Use the prefix <i>bug</i> followed by the Mantis issue ID and, if necessary, additional suffixes, for naming the test script, data files, and DRAW commands specific for this test case.

1. If the test requires C++ code, add it as new DRAW command(s) in one of the files in the *QABugs* package.
2. Add script(s) for the test case in the subfolder corresponding to the relevant OCCT module of the group *bugs* <i>($CASROOT/tests/bugs)</i>. See @ref testmanual_5_2 "the correspondence map".
3. In the test script:
 * Load all necessary DRAW modules by command *pload*.
 * Use command *locate_data_file* to get a path to the data files used by the test script. (Make sure this command is not inside a catch statement if it is used.)
 * Use DRAW commands to reproduce the situation being tested.
 * Make sure that in case of failure the test produces a message containing the word "Error" or another message recognized by the test system as an error (add new error patterns in the file parse.rules if necessary).
 * If the test case reports an error due to an existing problem and the fix is not available, add a @ref testmanual_3_6 "TODO" statement for each error to mark it as a known problem. The TODO statements must be specific, so as to match the actually generated messages but not all similar errors.
 * To check the expected output which should be obtained as a result of the test, add a @ref testmanual_3_7 "REQUIRED" statement for each line of that output to mark it as required.
 * If the test case produces error messages (matched by parse.rules) which are expected in that test and should not be considered as its failure (e.g. a test for the checkshape command), add a REQUIRED statement for each such error to mark it as required output.
4. If the test uses data file(s) not yet present in the test database, these can be put to the (sub)directory pointed out by the *CSF_TestDataPath* variable for running the test. The files should be attached to the Mantis issue corresponding to the modification being tested.
5. Check that the test case runs as expected (test for a fix: OK with the fix, FAILED without the fix; test for an existing problem: BAD), and integrate it to the Git branch created for the issue.

Example:

* Added files:

~~~~~
git status --short
A tests/bugs/heal/data/bug210_a.brep
A tests/bugs/heal/data/bug210_b.brep
A tests/bugs/heal/bug210_1
A tests/bugs/heal/bug210_2
~~~~~

* Test script

~~~~~{.tcl}
puts "OCC210 (case 1): Improve FixShape for touching wires"

restore [locate_data_file bug210_a.brep] a

fixshape result a 0.01 0.01
checkshape result
~~~~~

@section testmanual_2 Organization of Test Scripts

@subsection testmanual_2_1 General Layout

Standard OCCT tests are located in the subdirectory *tests* of the OCCT root folder ($CASROOT).

Additional test folders can be added to the test system by defining the environment variable *CSF_TestScriptsPath*. This should be a list of paths separated by semicolons (*;*) on Windows
or colons (*:*) on Linux or Mac. Upon DRAW launch, the path to the *tests* subfolder of OCCT is added at the end of this variable automatically.

Each test folder is expected to contain:
 * An optional file *parse.rules* defining patterns for interpretation of test results, common for all groups in this folder;
 * One or several test group directories.

Each group directory contains:

 * File *grids.list*, which identifies this test group and defines the list of test grids in it;
 * Test grids (sub-directories), each containing a set of scripts for test cases, and optional files *cases.list*, *parse.rules*, *begin* and *end*;
 * An optional sub-directory *data*.

By convention, the names of test groups, grids, and cases should contain no spaces and be lower-case.
The names *begin, end, data, parse.rules, grids.list* and *cases.list* are reserved.

The general layout of the test scripts is shown in Figure 1.

@image html /dev_guides/tests/images/tests_image001.png "Layout of tests folder"
@image latex /dev_guides/tests/images/tests_image001.png "Layout of tests folder"


@subsection testmanual_2_2 Test Groups

@subsubsection testmanual_2_2_1 Group Names

The names of the directories of test groups containing systematic test grids correspond to the functionality tested by each group.

Example:

~~~~~
 caf
 mesh
 offset
~~~~~

The test group *bugs* is used to collect the tests coming from bug reports. The group *demo* collects tests of the test system, DRAW, samples, etc.

@subsubsection testmanual_2_2_2 File "grids.list"

Each test group contains a file *grids.list*, which defines an ordered list of grids in this group in the following format:

~~~~~~~~~~~~~~~~~
001 gridname1
002 gridname2
...
NNN gridnameN
~~~~~~~~~~~~~~~~~

Example:

~~~~~~~~~~~~~~~~~
 001 basic
 002 advanced
~~~~~~~~~~~~~~~~~

@subsubsection testmanual_2_2_3 File "begin"

This file is a Tcl script. It is executed before every test in the current group.
Usually it loads the necessary Draw commands, sets common parameters and defines
additional Tcl functions used in the test scripts.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 pload TOPTEST ;# load topological commands
 set cpulimit 300 ;# set maximum time allowed for script execution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@subsubsection testmanual_2_2_4 File "end"

This file is a Tcl script. It is executed after every test in the current group. Usually it checks the results of the script execution, makes a snapshot of the viewer and writes *TEST COMPLETED* to the output.

Note: the *TEST COMPLETED* string should be present in the output to indicate that the test has finished without a crash.

See @ref testmanual_3 "Creation and modification of tests" chapter for more information.

Example:
~~~~~
 if { [isdraw result] } {
   checkshape result
 } else {
   puts "Error: The result shape can not be built"
 }
 puts "TEST COMPLETED"
~~~~~

@subsubsection testmanual_2_2_5 File "parse.rules"

The test group may contain a *parse.rules* file. This file defines the patterns used for analysis of the test execution log and deciding the status of the test run.

Each line in the file should specify a status (a single word), followed by a regular expression delimited by slashes (*/*) that will be matched against the lines in the test output log to check whether they correspond to this status.

The regular expressions should follow <a href="http://www.tcl.tk/man/tcl/TclCmd/re_syntax.htm">Tcl syntax</a>, with the special exception that "\b" is considered as a word boundary (Perl-style), in addition to "\y" used in Tcl.
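
As an illustration, the matching performed by the test system can be sketched in plain Tcl as follows (in native Tcl regular expressions "\y" denotes a word boundary, which is what "\b" is translated to; the log line is an arbitrary example):

~~~~~{.tcl}
# check a log line against the pattern of a rule such as: FAILED /\bError\b/ error
set line "Error: The result shape can not be built"
if { [regexp {\yError\y} $line] } {
    puts "status FAILED (error)"
}
~~~~~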

The rest of the line can contain a comment message, which will be added to the test report when this status is detected.

Example:

~~~~~
 FAILED /\b[Ee]xception\b/ exception
 FAILED /\bError\b/ error
 SKIPPED /Cannot open file for reading/ data file is missing
 SKIPPED /Could not read file .*, abandon/ data file is missing
~~~~~

Lines starting with a *#* character and blank lines are ignored to allow comments and spacing.

See @ref testmanual_details_results "Interpretation of test results" chapter for details.

If a line matches several rules, the first one applies. The rules defined in the grid are checked first, then the rules in the group, then the rules in the test root directory. This allows defining some rules on the grid level with status *IGNORE* to ignore messages that would otherwise be treated as errors due to the group-level rules.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 FAILED /\\bFaulty\\b/ bad shape
 IGNORE /^Error [23]d = [\d.-]+/ debug output of blend command
 IGNORE /^Tcl Exception: tolerance ang : [\d.-]+/ blend failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@subsubsection testmanual_2_2_6 Directory "data"
The test group may contain a subdirectory *data*, where test scripts shared by different test grids can be put. See also @ref testmanual_2_3_5 "Directory data".

@subsection testmanual_2_3 Test Grids

@subsubsection testmanual_2_3_1 Grid Names

The folder of a test group can have several sub-directories (Grid 1… Grid N) defining test grids.
Each directory contains a set of related test cases. The name of a directory should correspond to its contents.

Example:

~~~~~
caf
 basic
 bugs
 presentation
~~~~~

Here *caf* is the name of the test group and *basic*, *bugs*, *presentation*, etc. are the names of the grids.

@subsubsection testmanual_2_3_2 File "begin"

This file is a Tcl script executed before every test in the current grid.

Usually it sets variables specific for the current grid.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 set command bopfuse ;# command tested in this grid
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@subsubsection testmanual_2_3_3 File "end"

This file is a Tcl script executed after every test in the current grid.

Usually it executes a specific sequence of commands common for all tests in the grid.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 vdump $imagedir/${casename}.png ;# makes a snapshot of the AIS viewer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@subsubsection testmanual_2_3_4 File "cases.list"

The grid directory can contain an optional file *cases.list*
defining an alternative location of the test cases.
This file should contain a single line defining the relative path to the collection of test cases.

Example:

~~~~~
../data/simple
~~~~~

This option is used for the creation of several grids of tests with the same data files and operations but performed with differing parameters. The common scripts are usually located in a common
subdirectory of the test group, e.g. <i>data/simple</i>.

If the file *cases.list* exists, the grid directory should not contain any test cases.
The specific parameters and the pre- and post-processing commands
for the test execution in this grid should be defined in the files *begin* and *end*.


@subsubsection testmanual_2_3_5 Directory "data"

The test grid may contain a subdirectory *data*, containing the data files (BREP, IGES, STEP, etc.) used in the tests of this grid.

@subsection testmanual_2_4 Test Cases

A test case is a Tcl script, which performs some operations using DRAW commands
and produces meaningful messages that can be used to check the validity of the result.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 pcylinder c1 10 20 ;# create first cylinder
 pcylinder c2 5 20 ;# create second cylinder
 ttranslate c2 5 0 10 ;# translate second cylinder to x,y,z
 bsection result c1 c2 ;# create a section of two cylinders
 checksection result ;# will output error message if result is bad
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The test case can have any name (except for the reserved names *begin, end, data, cases.list* and *parse.rules*).
For systematic grids it is usually a capital English letter followed by a number.

Example:

~~~~~
 A1
 A2
 B1
 B2
~~~~~

Such naming facilitates a compact representation of the test execution results in tabular format within the HTML reports.


@section testmanual_3 Creation And Modification Of Tests

This section describes how to add new tests and update existing ones.

@subsection testmanual_3_1 Choosing Group, Grid, and Test Case Name

New tests are usually added in the course of processing issues in the OCCT Mantis tracker.
Such tests in general should be added to the group *bugs*, in the grid
corresponding to the affected OCCT functionality. See @ref testmanual_5_2 "Mapping of OCCT functionality to grid names in group bugs".

New grids can be added as necessary to contain tests for functionality not yet covered by the existing test grids.
The test case name in the *bugs* group should be prefixed by the ID of the corresponding issue in Mantis (without leading zeroes) with the prefix *bug*. It is recommended to add a suffix providing a hint on the tested situation. If more than one test is added for a bug, they should be distinguished by suffixes, either meaningful or just ordinal numbers.

Example:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl}
 bug12345_coaxial
 bug12345_orthogonal_1
 bug12345_orthogonal_2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the new test corresponds to a functionality already covered by an existing systematic test grid (e.g. group *mesh* for *BRepMesh* issues), this test can be added (or moved later by the OCC team) to that grid.

@subsection testmanual_3_2 Adding Data Files Required for a Test

It is advisable to make test scripts self-contained whenever possible, so that they can be used in environments where data files are not available. For that, simple geometric objects and shapes can be created using DRAW commands in the test script itself.

If the test requires a data file, it should be put to a directory listed in the environment variable *CSF_TestDataPath*.
Alternatively, it can be put to the subdirectory *data* of the test grid.
It is recommended to prefix the data file with the corresponding issue id prefixed by *bug*, e.g. *bug12345_face1.brep*, to avoid possible conflicts with the names of existing data files.

Note that when the test is integrated to the master branch, the OCC team will move the data file to the data files repository, to keep the OCCT sources repository clean of data files.

When you prepare a test script, try to minimize the size of the involved data model. For instance, if a problem detected on a big shape can be reproduced on a single face extracted from that shape, use only that face in the test.


@subsection testmanual_3_3 Adding new DRAW commands

If the test cannot be implemented using the available DRAW commands, consider the following possibilities:
* If an existing DRAW command can be extended to provide the possibility required for the test in a natural way (e.g. by adding an option to activate a specific mode of the algorithm), this way is recommended. This change should be appropriately documented in a relevant Mantis issue.
* If a new command is needed to access OCCT functionality not previously exposed to DRAW, and this command can potentially be reused (for other tests), it should be added to the package where similar commands are implemented (use the *getsource* DRAW command to get the package name). The name and arguments of the new command should be chosen to keep similarity with the existing commands. This change should be documented in a relevant Mantis issue.
* Otherwise the new command implementing the actions needed for this particular test should be added in the *QABugs* package. The command name should be formed by the Mantis issue ID prefixed by *bug*, e.g. *bug12345*.

Note that a DRAW command is expected to return 0 in case of normal completion, and 1 (Tcl exception) if it is used incorrectly (e.g. with a wrong number of input arguments). Thus if the new command needs to report a test error, this should be done by outputting an appropriate error message rather than by returning a non-zero value.
File names must be encoded in the script rather than in the DRAW command and passed to the DRAW command as an argument.
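
For instance, a hypothetical test-specific command *bug12345* (the command and file names here are purely illustrative) would receive the data file path resolved by the script:

~~~~~{.tcl}
# the script resolves the file location and passes it as an argument
bug12345 [locate_data_file bug12345_face1.brep]
~~~~~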

@subsection testmanual_3_4 Script Implementation

The test should run the commands necessary to perform the tested operations, in general assuming a clean DRAW session. The required DRAW modules should be loaded by the *pload* command, if this is not done by the *begin* script. The messages produced by the commands in the standard output should include identifiable messages on the discovered problems, if any.

Usually the script represents a set of commands that a person would run interactively to perform the operation and see its results, with additional comments to explain what happens.

Example:
~~~~~
# Simple test of fusing box and sphere
box b 10 10 10
sphere s 5
bfuse result b s
checkshape result
~~~~~

Make sure that the file *parse.rules* in the grid or group directory contains a regular expression to catch the possible messages indicating the failure of the test.

For instance, for catching the errors reported by the *checkshape* command the relevant grids define a rule to recognize its report by the word *Faulty*:

~~~~~
FAILED /\bFaulty\b/ bad shape
~~~~~

For the messages generated in the script it is recommended to use the word "Error" in the error message.

Example:

~~~~~
set expected_length 11
if { [expr abs($actual_length - $expected_length)] > 0.001 } {
  puts "Error: The length of the edge should be $expected_length"
}
~~~~~

At the end, the test script should output the *TEST COMPLETED* string to mark the successful completion of the script. This is often done by the *end* script in the grid.

When the test script requires a data file, use the Tcl procedure *locate_data_file* to get the path to it, instead of putting the path explicitly. This will allow easy moving of the data file from the OCCT sources repository to the data files repository without the need to update the test script.

Example:

~~~~~
stepread [locate_data_file CAROSKI_COUPELLE.step] a *
~~~~~

When the test needs to produce some snapshots or other artefacts, use the Tcl variable *imagedir* as the location where such files should be put:
* Command *testgrid* sets this variable to the subdirectory of the results folder corresponding to the grid.
* Command *test* by default creates a dedicated temporary directory in the system temporary folder (normally the one specified by the environment variable *TempDir*, *TEMP*, or *TMP*) for each execution, and sets *imagedir* to that location.

However, if the variable *imagedir* is defined on the top level of the Tcl interpreter, command *test* will use it instead of creating a new directory.
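
For instance, to make the *test* command put snapshots into a fixed folder instead of a temporary one, define the variable before running the test (the path is an arbitrary example):

~~~~~{.tcl}
set imagedir d:/occt/test-images
test blend simple A1
~~~~~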

Use the Tcl variable *casename* to prefix all files produced by the test.
This variable is set to the name of the test case.

The test system can recognize an image file (snapshot) and include it in the HTML log and differences if its name starts with the name of the test case (use variable *casename*), optionally followed by an underscore or dash and an arbitrary suffix.

The image format (defined by the extension) should be *png*.

Example:
~~~~~
xwd $imagedir/${casename}.png
vdisplay result; vfit
vdump $imagedir/${casename}-axo.png
vfront; vfit
vdump $imagedir/${casename}-front.png
~~~~~

would produce:
~~~~~
A1.png
A1-axo.png
A1-front.png
~~~~~

Note that OCCT must be built with FreeImage support to be able to produce usable images.

Other Tcl variables defined during the test execution are:
- *groupname*: the name of the test group;
- *gridname*: the name of the test grid;
- *dirname*: the path to the root directory of the current set of test scripts.

In order to ensure that the test works as expected in different environments, observe the following additional rules:
* Avoid using external commands such as *grep, rm,* etc., as these commands can be absent on another system (e.g. on Windows); use the facilities provided by Tcl instead.
* Do not put a call to *locate_data_file* inside a catch statement -- this can prevent correct interpretation of a missing data file by the test system.
* Do not use commands *decho* and *dlog* in the test script, to avoid interference with the use of these commands by the test system.
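
The *locate_data_file* rule can be illustrated as follows (the file name is an arbitrary example):

~~~~~{.tcl}
# Correct: if the data file is missing, the test system detects it
# and assigns the SKIPPED status to the case
restore [locate_data_file bug12345_face1.brep] a

# Wrong: catch masks the missing data file, so the case would be
# reported as FAILED instead of SKIPPED
catch { restore [locate_data_file bug12345_face1.brep] a }
~~~~~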

@subsection testmanual_details_results Interpretation of test results

The result of the test is evaluated by checking its output against the patterns defined in the files *parse.rules* of the grid and group.

The OCCT test system recognizes five statuses of the test execution:
* SKIPPED: reported if a line matching a SKIPPED pattern is found (prior to any FAILED pattern). This indicates that the test cannot be run in the current environment; the most typical case is the absence of a required data file.
* FAILED: reported if a line matching a pattern with status FAILED is found (unless it is masked by a preceding IGNORE pattern or a TODO or REQUIRED statement), or if the message TEST COMPLETED or at least one of the REQUIRED patterns is not found. This indicates that the test has produced a bad or unexpected result, and usually means a regression.
* BAD: reported if the test script output contains one or several TODO statements and the corresponding number of matching lines in the log. This indicates a known problem. The lines matching TODO statements are not checked against other patterns and thus will not cause a FAILED status.
* IMPROVEMENT: reported if the test script output contains a TODO statement for which no corresponding line is found. This is a possible indication of an improvement (a known problem has disappeared).
* OK: reported if none of the above statuses have been assigned. This means that the test has passed without problems.

Other statuses can be specified in the *parse.rules* files; these will be classified as FAILED.

For integration of a change to the OCCT repository, all tests should return either OK or BAD status.
A new test created for an unsolved problem should return BAD. A new test created for a fixed problem should return FAILED without the fix, and OK with the fix.

@subsection testmanual_3_6 Marking BAD cases

If the test produces an invalid result at a certain moment, the corresponding bug should be created in the OCCT issue tracker located at http://tracker.dev.opencascade.org, and the problem should be marked as TODO in the test script.

The following statement should be added to such a test script:
~~~~~
puts "TODO BugNumber ListOfPlatforms: RegularExpression"
~~~~~

Here:
* *BugNumber* is the bug ID in the tracker. For example: #12345.
* *ListOfPlatforms* is a list of platforms on which the bug is reproduced (Linux, Windows, MacOS, or All). Note that the platform name is custom for the OCCT test system; it corresponds to the value of environment variable *os_type* defined in DRAW.

Example:
~~~~~
Draw[2]> puts $env(os_type)
windows
~~~~~

* *RegularExpression* is a regular expression that should be matched against the line indicating the problem in the script output.

Example:
~~~~~
puts "TODO #22622 Mandriva2008: Abort .* an exception was raised"
~~~~~

The parser checks the test output, and if an output line matches the *RegularExpression*, the test will be assigned a BAD status instead of FAILED.

A separate TODO line must be added for each output line matching an error expression to mark the test as BAD. If not all TODO messages are found in the test log, the test will be considered as a possible improvement.

To mark the test as BAD for an incomplete case (when the final *TEST COMPLETED* message is missing) the expression *TEST INCOMPLETE* should be used instead of the regular expression.

Example:

~~~~~
puts "TODO OCC22817 All: exception.+There are no suitable edges"
puts "TODO OCC22817 All: \\*\\* Exception \\*\\*"
puts "TODO OCC22817 All: TEST INCOMPLETE"
~~~~~

@subsection testmanual_3_7 Marking required output

To check for expected output that must be present in the test log for the test to be considered correct, add a REQUIRED statement for each expected message.
For that, the following statement should be added to such a test script:

~~~~~
puts "REQUIRED ListOfPlatforms: RegularExpression"
~~~~~

Here *ListOfPlatforms* and *RegularExpression* have the same meaning as in TODO statements described above.

The REQUIRED statement can also be used to mask a message that would normally be interpreted as an error (according to the rules defined in *parse.rules*) but should not be considered as such within the current test.

Example:
~~~~~
puts "REQUIRED Linux: Faulty shapes in variables faulty_1 to faulty_5"
~~~~~

This statement notifies the test system that errors reported by the *checkshape* command are expected in that test case, and the test should be considered as OK if this message appears, despite the general rule stating that 'Faulty' indicates failure.

If the output does not contain a required message, the test case will be marked as FAILED.
@section testmanual_4 Advanced Use

@subsection testmanual_4_1 Running Tests on Older Versions of OCCT

Sometimes it might be necessary to run tests on previous versions of OCCT (<= 6.5.4) that do not include this test system. This can be done by adding the DRAW configuration file *DrawAppliInit* in the directory that is current at the moment of DRAW start-up, to load test commands and to define the necessary environment.

Note: in OCCT 6.5.3, the file *DrawAppliInit* already exists in <i>$CASROOT/src/DrawResources</i>; new commands should be added to this file instead of creating a new one in the current directory.

For example, let us assume that *d:/occt* contains an up-to-date version of OCCT sources with tests, and the test data archive is unpacked to *d:/test-data*:

~~~~~
set env(CASROOT) d:/occt
set env(CSF_TestScriptsPath) $env(CASROOT)/tests
source $env(CASROOT)/src/DrawResources/TestCommands.tcl
set env(CSF_TestDataPath) "$env(CASROOT)/data;d:/test-data"
return
~~~~~

Note that on older versions of OCCT the tests are run in compatibility mode and thus not all output of the test command can be captured; this can lead to the absence of some error messages (which can be reported as either a failure or an improvement).
@subsection testmanual_4_2 Adding custom tests

You can extend the test system by adding your own tests. For that it is necessary to add the path to the directory where these tests are located, and optionally one or more additional data directories, to the environment variables *CSF_TestScriptsPath* and *CSF_TestDataPath*. The recommended way of doing this is using the DRAW configuration file *DrawAppliInit* located in the directory that is current at the moment of DRAW start-up.

Use the Tcl command <i>_path_separator</i> to insert a platform-dependent separator into the path list.

For example:
~~~~~
set env(CSF_TestScriptsPath) \
  $env(TestScriptsPath)[_path_separator]d:/MyOCCTProject/tests
set env(CSF_TestDataPath) \
  d:/occt/test-data[_path_separator]d:/MyOCCTProject/data
return ;# this is to avoid an echo of the last command above in cout
~~~~~

@subsection testmanual_4_3 Parallel execution of tests

For better efficiency, on computers with multiple CPUs the tests can be run in parallel mode. This is the default behavior for the command *testgrid*: the tests are executed in parallel processes (their number is equal to the number of CPUs available on the system). In order to change this behavior, use the option *parallel* followed by the number of processes to be used (1 or 0 to run sequentially).

Note that parallel execution is only possible if the Tcl extension package *Thread* is installed.
If this package is not available, the *testgrid* command will output a warning message.

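For example, the following DRAW session lines illustrate the option described above; the process counts are illustrative and the <i>-parallel</i> spelling of the option is assumed:

```
Draw[]> testgrid -parallel 4   ;# run tests in 4 parallel processes
Draw[]> testgrid -parallel 0   ;# force sequential execution
```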
@subsection testmanual_4_4 Checking non-regression of performance, memory, and visualization

Some test results are very dependent on the characteristics of the workstation where they are performed, and thus cannot be checked by comparison with some predefined values. These results can be checked for non-regression (after a change in OCCT code) by comparing them with the results produced by the version without this change. The most typical case is comparing the result obtained in a branch created for integration of a fix (CR***) with the results obtained on the master branch before that change is made.

The OCCT test system provides a dedicated command *testdiff* for comparing CPU time of execution, memory usage, and images produced by the tests.

~~~~~
testdiff dir1 dir2 [groupname [gridname]] [options...]
~~~~~
Here *dir1* and *dir2* are directories containing logs of two test runs.

Possible options are:
* <i>-save \<filename\> </i> -- saves the resulting log in the specified file (<i>$dir1/diff-$dir2.log</i> by default). The HTML log is saved with the same name and extension .html;
* <i>-status {same|ok|all}</i> -- allows filtering compared cases by their status:
  * *same* -- only cases with the same status are compared (default);
  * *ok* -- only cases with OK status in both logs are compared;
  * *all* -- results are compared regardless of status;
* <i>-verbose \<level\> </i> -- defines the scope of output data:
  * 1 -- outputs only differences;
  * 2 -- additionally outputs the list of logs and directories present in one of the directories only;
  * 3 -- (by default) additionally outputs progress messages.

Example:

~~~~~
Draw[]> testdiff results-CR12345-2012-10-10T08:00 results-master-2012-10-09T21:20
~~~~~

@section testmanual_5 APPENDIX

@subsection testmanual_5_1 Test groups

@subsubsection testmanual_5_1_1 3rdparty

This group allows testing the interaction of OCCT and 3rdparty products.

DRAW module: VISUALIZATION.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| export | vexport | export of images to different formats |
| fonts | vtrihedron, vcolorscale, vdrawtext | display of fonts |


@subsubsection testmanual_5_1_2 blend

This group allows testing blends (fillets) and related operations.

DRAW module: MODELING.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| simple | blend | fillets on simple shapes |
| complex | blend | fillets on complex shapes, non-trivial geometry |
| tolblend_simple | tolblend, blend | |
| buildevol | buildevol | |
| tolblend_buildevol | tolblend, buildevol | use of additional command tolblend |
| bfuseblend | bfuseblend | |
| encoderegularity | encoderegularity | |

@subsubsection testmanual_5_1_3 boolean

This group allows testing Boolean operations.

DRAW module: MODELING (packages *BOPTest* and *BRepTest*).

Grid names are based on the name of the command used, with suffixes:
* <i>_2d</i> -- for tests operating with 2d objects (edges, wires, etc.);
* <i>_simple</i> -- for tests operating on simple shapes (boxes, cylinders, toruses, etc.);
* <i>_complex</i> -- for tests dealing with complex shapes.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| bcommon_2d | bcommon | Common operation (old algorithm), 2d |
| bcommon_complex | bcommon | Common operation (old algorithm), complex shapes |
| bcommon_simple | bcommon | Common operation (old algorithm), simple shapes |
| bcut_2d | bcut | Cut operation (old algorithm), 2d |
| bcut_complex | bcut | Cut operation (old algorithm), complex shapes |
| bcut_simple | bcut | Cut operation (old algorithm), simple shapes |
| bcutblend | bcutblend | |
| bfuse_2d | bfuse | Fuse operation (old algorithm), 2d |
| bfuse_complex | bfuse | Fuse operation (old algorithm), complex shapes |
| bfuse_simple | bfuse | Fuse operation (old algorithm), simple shapes |
| bopcommon_2d | bopcommon | Common operation, 2d |
| bopcommon_complex | bopcommon | Common operation, complex shapes |
| bopcommon_simple | bopcommon | Common operation, simple shapes |
| bopcut_2d | bopcut | Cut operation, 2d |
| bopcut_complex | bopcut | Cut operation, complex shapes |
| bopcut_simple | bopcut | Cut operation, simple shapes |
| bopfuse_2d | bopfuse | Fuse operation, 2d |
| bopfuse_complex | bopfuse | Fuse operation, complex shapes |
| bopfuse_simple | bopfuse | Fuse operation, simple shapes |
| bopsection | bopsection | Section |
| boptuc_2d | boptuc | |
| boptuc_complex | boptuc | |
| boptuc_simple | boptuc | |
| bsection | bsection | Section (old algorithm) |

@subsubsection testmanual_5_1_4 bugs

This group allows testing cases coming from Mantis issues.

The grids are organized following the OCCT module and the category set for the issue in the Mantis tracker.
See @ref testmanual_5_2 "Mapping of OCCT functionality to grid names in group bugs" chapter for details.

@subsubsection testmanual_5_1_5 caf

This group allows testing OCAF functionality.

DRAW module: OCAFKERNEL.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| basic | | Basic attributes |
| bugs | | Saving and restoring of document |
| driver | | OCAF drivers |
| named_shape | | *TNaming_NamedShape* attribute |
| presentation | | *AISPresentation* attributes |
| tree | | Tree construction attributes |
| xlink | | XLink attributes |

@subsubsection testmanual_5_1_6 chamfer

This group allows testing chamfer operations.

DRAW module: MODELING.

The test grid name is constructed depending on the type of the tested chamfers. The additional suffix <i>_complex</i> is used for test cases involving complex geometry (e.g. intersections of edges forming a chamfer); the suffix <i>_sequence</i> is used for grids where chamfers are computed sequentially.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| equal_dist | | Equal distances from edge |
| equal_dist_complex | | Equal distances from edge, complex shapes |
| equal_dist_sequence | | Equal distances from edge, sequential operations |
| dist_dist | | Two distances from edge |
| dist_dist_complex | | Two distances from edge, complex shapes |
| dist_dist_sequence | | Two distances from edge, sequential operations |
| dist_angle | | Distance from edge and given angle |
| dist_angle_complex | | Distance from edge and given angle, complex shapes |
| dist_angle_sequence | | Distance from edge and given angle, sequential operations |

@subsubsection testmanual_5_1_7 demo

This group allows demonstrating how testing cases are created, and testing DRAW commands and the test system as a whole.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| draw | getsource, restore | Basic DRAW commands |
| testsystem | | Testing system |
| samples | | OCCT samples |


@subsubsection testmanual_5_1_8 draft

This group allows testing draft operations.

DRAW module: MODELING.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| Angle | depouille | Drafts with angle (inclined walls) |


@subsubsection testmanual_5_1_9 feat

This group allows testing creation of features on a shape.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| featdprism | | |
| featlf | | |
| featprism | | |
| featrevol | | |
| featrf | | |

@subsubsection testmanual_5_1_10 heal

This group allows testing the functionality provided by the *ShapeHealing* toolkit.

DRAW module: XSDRAW.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| fix_shape | fixshape | Shape healing |
| fix_gaps | fixwgaps | Fixing gaps between edges on a wire |
| same_parameter | sameparameter | Fixing non-sameparameter edges |
| fix_face_size | DT_ApplySeq | Removal of small faces |
| elementary_to_revolution | DT_ApplySeq | Conversion of elementary surfaces to revolution |
| direct_faces | directfaces | Correction of axis of elementary surfaces |
| drop_small_edges | fixsmall | Removal of small edges |
| split_angle | DT_SplitAngle | Splitting periodic surfaces by angle |
| split_angle_advanced | DT_SplitAngle | Splitting periodic surfaces by angle |
| split_angle_standard | DT_SplitAngle | Splitting periodic surfaces by angle |
| split_closed_faces | DT_ClosedSplit | Splitting of closed faces |
| surface_to_bspline | DT_ToBspl | Conversion of surfaces to b-splines |
| surface_to_bezier | DT_ShapeConvert | Conversion of surfaces to Bezier |
| split_continuity | DT_ShapeDivide | Split surfaces by continuity criterion |
| split_continuity_advanced | DT_ShapeDivide | Split surfaces by continuity criterion |
| split_continuity_standard | DT_ShapeDivide | Split surfaces by continuity criterion |
| surface_to_revolution_advanced | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, complex cases |
| surface_to_revolution_standard | DT_ShapeConvertRev | Convert elementary surfaces to revolutions, simple cases |

@subsubsection testmanual_5_1_11 mesh

This group allows testing shape tessellation (*BRepMesh*) and shading.

DRAW modules: MODELING (package *MeshTest*), VISUALIZATION (package *ViewerTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| advanced_shading | vdisplay | Shading, complex shapes |
| standard_shading | vdisplay | Shading, simple shapes |
| advanced_mesh | mesh | Meshing of complex shapes |
| standard_mesh | mesh | Meshing of simple shapes |
| advanced_incmesh | incmesh | Meshing of complex shapes |
| standard_incmesh | incmesh | Meshing of simple shapes |
| advanced_incmesh_parallel | incmesh | Meshing of complex shapes, parallel mode |
| standard_incmesh_parallel | incmesh | Meshing of simple shapes, parallel mode |

@subsubsection testmanual_5_1_12 mkface

This group allows testing creation of simple surfaces.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| after_trim | mkface | |
| after_offset | mkface | |
| after_extsurf_and_offset | mkface | |
| after_extsurf_and_trim | mkface | |
| after_revsurf_and_offset | mkface | |
| mkplane | mkplane | |

@subsubsection testmanual_5_1_13 nproject

This group allows testing normal projection of edges and wires onto a face.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| Base | nproject | |

@subsubsection testmanual_5_1_14 offset

This group allows testing offset functionality for curves and surfaces.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| compshape | offsetcompshape | Offset of shapes with removal of some faces |
| faces_type_a | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a fillet |
| faces_type_i | offsetparameter, offsetload, offsetperform | Offset on a subset of faces with a sharp edge |
| shape_type_a | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a fillet |
| shape_type_i | offsetparameter, offsetload, offsetperform | Offset on a whole shape with a sharp edge |
| shape | offsetshape | |
| wire_closed_outside_0_005, wire_closed_outside_0_025, wire_closed_outside_0_075, wire_closed_inside_0_005, wire_closed_inside_0_025, wire_closed_inside_0_075, wire_unclosed_outside_0_005, wire_unclosed_outside_0_025, wire_unclosed_outside_0_075 | mkoffset | 2d offset of closed and unclosed planar wires with different offset steps and directions of offset (inside / outside) |

@subsubsection testmanual_5_1_15 pipe

This group allows testing construction of pipes (sweeping of a contour along a profile).

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| Standard | pipe | |

@subsubsection testmanual_5_1_16 prism

This group allows testing construction of prisms.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| seminf | prism | |

@subsubsection testmanual_5_1_17 sewing

This group allows testing sewing of faces by connecting edges.

DRAW module: MODELING (package *BRepTest*).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| tol_0_01 | sewing | Sewing faces with tolerance 0.01 |
| tol_1 | sewing | Sewing faces with tolerance 1 |
| tol_100 | sewing | Sewing faces with tolerance 100 |

@subsubsection testmanual_5_1_18 thrusection

This group allows testing construction of a shell or a solid passing through a set of sections in a given sequence (loft).

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| solids | thrusection | Lofting with resulting solid |
| not_solids | thrusection | Lofting with resulting shell or face |

@subsubsection testmanual_5_1_19 xcaf

This group allows testing extended data exchange packages.

| Grid | Commands | Functionality |
| :---- | :----- | :------- |
| dxc, dxc_add_ACL, dxc_add_CL, igs_to_dxc, igs_add_ACL, brep_to_igs_add_CL, stp_to_dxc, stp_add_ACL, brep_to_stp_add_CL, brep_to_dxc, add_ACL_brep, brep_add_CL | | Subgroups are divided by format of source file, by format of result file and by type of document modification. For example, *brep_to_igs* means that the source shape in brep format was added to the document, which was saved into igs format after that. The postfix *add_CL* means that colors and layers were initialized in the document before saving and the postfix *add_ACL* corresponds to the creation of assembly and initialization of colors and layers in a document before saving. |


@subsection testmanual_5_2 Mapping of OCCT functionality to grid names in group *bugs*

| OCCT Module / Mantis category | Toolkits | Test grid in group bugs |
| :---------- | :--------- | :---------- |
| Application Framework | PTKernel, TKPShape, TKCDF, TKLCAF, TKCAF, TKBinL, TKXmlL, TKShapeSchema, TKPLCAF, TKBin, TKXml, TKPCAF, FWOSPlugin, TKStdLSchema, TKStdSchema, TKTObj, TKBinTObj, TKXmlTObj | caf |
| Draw | TKDraw, TKTopTest, TKViewerTest, TKXSDRAW, TKDCAF, TKXDEDRAW, TKTObjDRAW, TKQADraw, DRAWEXE, Problems of testing system | draw |
| Shape Healing | TKShHealing | heal |
| Mesh | TKMesh, TKXMesh | mesh |
| Data Exchange | TKIGES | iges |
| Data Exchange | TKSTEPBase, TKSTEPAttr, TKSTEP209, TKSTEP | step |
| Data Exchange | TKSTL, TKVRML | stlvrml |
| Data Exchange | TKXSBase, TKXCAF, TKXCAFSchema, TKXDEIGES, TKXDESTEP, TKXmlXCAF, TKBinXCAF | xde |
| Foundation Classes | TKernel, TKMath | fclasses |
| Modeling Algorithms | TKGeomAlgo, TKTopAlgo, TKPrim, TKBO, TKBool, TKHLR, TKFillet, TKOffset, TKFeat, TKXMesh | modalg |
| Modeling Data | TKG2d, TKG3d, TKGeomBase, TKBRep | moddata |
| Visualization | TKService, TKV2d, TKV3d, TKOpenGl, TKMeshVS, TKNIS | vis |


@subsection testmanual_5_3 Recommended approaches to checking test results

@subsubsection testmanual_5_3_1 Shape validity

Run the command *checkshape* on the result (or an intermediate) shape and make sure that *parse.rules* of the test grid or group reports bad shapes (usually recognized by the word "Faulty") as an error.

Example:
~~~~~
checkshape result
~~~~~

To check the number of faults in the shape, the command *checkfaults* can be used.

Use: checkfaults shape source_shape [ref_value=0]

The default syntax of the *checkfaults* command:
~~~~~
checkfaults result a_1
~~~~~

The command checks the number of faults in the source shape (*a_1*) and compares it
with the number of faults in the resulting shape (*result*). If shape *result* contains
more faults, you will get an error:
~~~~~
checkfaults result a_1
Error : Number of faults is 5
~~~~~
It is possible to set the reference value for comparison (here the reference value is 4):

~~~~~
checkfaults result a_1 4
~~~~~

If the number of faults in the resulting shape is unstable, the reference value should be set to "-1".
As a result the command *checkfaults* will return the following error:

~~~~~
checkfaults result a_1 -1
Error : Number of faults is UNSTABLE
~~~~~

@subsubsection testmanual_5_3_2 Shape tolerance
The maximal tolerance of sub-shapes of each kind of the resulting shape can be extracted from the output of the *tolerance* command as follows:

~~~~~
set tolerance [tolerance result]
regexp { *FACE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_face
regexp { *EDGE +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_edge
regexp { *VERTEX +: +MAX=([-0-9.+eE]+)} $tolerance dummy max_vertex
~~~~~

It is possible to use the command *checkmaxtol* to check the maximal tolerance of a shape and compare it with a reference value.

Use: checkmaxtol shape [options...]

Allowed options are:
 * -ref: reference value of maximum tolerance
 * -source: list of shapes to compare with
 * -min_tol: minimum tolerance for comparison
 * -multi_tol: tolerance multiplier

The default syntax of the *checkmaxtol* command for comparison with the reference value:
~~~~~
checkmaxtol result -ref 0.00001
~~~~~

It is also possible to compare the maximum tolerance of the resulting shape with the maximum tolerance of the source shapes.
In the following example the command *checkmaxtol* gets the maximum tolerance among the objects *a_1* and *a_2*.
Then it takes the maximum of the found tolerance and the <i>-min_tol</i> value (0.000001)
and multiplies it by the <i>-multi_tol</i> coefficient (here 2):

~~~~~
checkmaxtol result -source {a_1 a_2} -min_tol 0.000001 -multi_tol 2
~~~~~

If the maximum tolerance of *result* exceeds the tolerance computed for comparison, the command will return an error.

Also, the command *checkmaxtol* can be used to get the maximum tolerance of a shape:

~~~~~
set maxtol [checkmaxtol result]
~~~~~

@subsubsection testmanual_5_3_3 Shape volume, area, or length

Use the command *vprops, sprops,* or *lprops* to measure, respectively, the volume, area, or length of the shape produced by the test. The value can be extracted from the result of the command by *regexp*.

Example:
~~~~~
# check area of shape result with 1% tolerance
regexp {Mass +: +([-0-9.+eE]+)} [sprops result] dummy area
if { abs($area - $expected) > 0.1 + 0.01 * abs ($area) } {
  puts "Error: The area of result shape is $area, while expected $expected"
}
~~~~~

@subsubsection testmanual_5_3_4 Memory leaks

The test system measures the amount of memory used by each test case, and considerable deviations (as well as the overall difference) in comparison with the reference results will be reported by the *testdiff* command.

The typical approach to checking for a memory leak in a particular operation is to run this operation in a cycle, measuring the memory consumption at each step and comparing it with some threshold value. Note that the file *begin* in the group *bugs* defines the command *checktrend* that can be used to analyze a sequence of memory measurements and obtain a statistically based evaluation of the presence of a leak.

Example:
~~~~~
set listmem {}
for {set i 1} {$i < 100} {incr i} {
  # run suspect operation

  # check memory usage (with tolerance equal to half page size)
  lappend listmem [expr [meminfo w] / 1024]
  if { [checktrend $listmem 0 256 "Memory leak detected"] } {
    puts "No memory leak, $i iterations"
    break
  }
}
~~~~~

@subsubsection testmanual_5_3_5 Visualization

Take a snapshot of the viewer, give it the name of the test case, and save it in the directory indicated by the Tcl variable *imagedir*.

~~~~~
vinit
vclear
vdisplay result
vsetdispmode 1
vfit
vzfit
vdump $imagedir/${casename}_shading.png
~~~~~

This image will be included in the HTML log produced by the *testgrid* command and will be checked for non-regression through comparison of images by the command *testdiff*.

@subsubsection testmanual_5_3_6 Number of free edges

To check the number of free edges, run the command *checkfreebounds*.

It compares the number of free edges with a reference value.

Use: checkfreebounds shape ref_value [options...]

Allowed options are:
 * -tol N: used tolerance (default -0.01)
 * -type N: used type, possible values are "closed" and "opened" (default "closed")

~~~~~
checkfreebounds result 13
~~~~~

Option -tol N is used to set the tolerance for the command *freebounds*, which is used within the command *checkfreebounds*.

Option -type N is used to select the type of counted free edges -- closed or opened.

If the number of free edges in the resulting shape is unstable, the reference value should be set to "-1".
As a result the command *checkfreebounds* will return the following error:

~~~~~
checkfreebounds result -1
Error : Number of free edges is UNSTABLE
~~~~~

@subsubsection testmanual_5_3_7 Compare numbers

The procedure *checkreal* checks the equality of two reals with some tolerance (relative and absolute).

Use: checkreal name value expected tol_abs tol_rel

~~~~~
checkreal "Some important value" $value 5 0.0001 0.01
~~~~~

@subsubsection testmanual_5_3_8 Check number of sub-shapes

The command *checknbshapes* compares the number of sub-shapes in "shape" with the given reference data.

Use: checknbshapes shape [options...]
Allowed options are:
 * -vertex N
 * -edge N
 * -wire N
 * -face N
 * -shell N
 * -solid N
 * -compsolid N
 * -compound N
 * -shape N
 * -t: compare the number of sub-shapes in "shape" counting
   the same sub-shapes with different locations as different sub-shapes.
 * -m msg: print "msg" in case of error

~~~~~
checknbshapes result -vertex 8 -edge 4
~~~~~

@subsubsection testmanual_5_3_9 Check pixel color

To check a pixel color, the command *checkcolor* can be used.

Use: checkcolor x y red green blue

 x, y -- pixel coordinates

 red green blue -- expected pixel color (values from 0 to 1)

This procedure checks the color with tolerance (in a 5x5 area).

The next example compares the color of the point with coordinates x=100 y=100 with the RGB color R=1 G=0 B=0.
If the colors are not equal, the procedure checks the neighboring points (5x5 area):
~~~~~
checkcolor 100 100 1 0 0
~~~~~