From: abv Date: Mon, 9 Mar 2015 11:28:12 +0000 (+0300) Subject: 0025907: Optimization of testdiff command X-Git-Tag: V6_9_0_beta~34 X-Git-Url: http://git.dev.opencascade.org/gitweb/?p=occt.git;a=commitdiff_plain;h=936f43da8a72a9bf997db683d9578a3c92fc845b 0025907: Optimization of testdiff command - Work with strings optimized in Tcl procedures used in testdiff command - CPU and memory differences output of testdiff command improved to give relative change in percents - Cumulative CPU and memory differences are output for test grids - In HTML output of testdiff command, deviations of memory and CPU measurements greater than 5% are colored (red or green) - Search of image files in testdiff command corrected to avoid wrong attribution of image file to issues starting with the same first letters; images must start with the test case name, optionally followed by underscore or dash and arbitrary text - Image_Diff tool optimized for the case if images are exactly the same - Perf_Meter class output corrected, destructor made non-virtual - DRAW command diffimage optimized to not save diff files if there is no difference - Tests User Guide updated according to these changes and actual state --- diff --git a/dox/dev_guides/tests/tests.md b/dox/dev_guides/tests/tests.md index d0b13748db..6ca4c1248c 100644 --- a/dox/dev_guides/tests/tests.md +++ b/dox/dev_guides/tests/tests.md @@ -23,7 +23,7 @@ The tests are organized in three levels: See Test Groups for the current list of available test groups and grids. -Some tests involve data files (typically CAD models) which are located separately and are not included with OCCT code. The archive with publicly available test data files should be downloaded and installed independently on OCCT sources (from http://dev.opencascade.org). +Some tests involve data files (typically CAD models) which are located separately and are not included with OCCT code. 
The archive with publicly available test data files should be downloaded and installed independently on OCCT sources (see http://dev.opencascade.org). @subsection testmanual_1_2 Intended Use of Automatic Tests @@ -44,7 +44,6 @@ The modifications made in the OCCT code and related test scripts should be inclu @subsubsection testmanual_1_3_1 Setup Before running tests, make sure to define environment variable *CSF_TestDataPath* pointing to the directory containing test data files. -(Publicly available data files can be downloaded from http://dev.opencascade.org separately from OCCT code.) For this it is recommended to add a file *DrawAppliInit* in the directory which is current at the moment of starting DRAWEXE (normally it is OCCT root directory, $CASROOT ). This file is evaluated automatically at the DRAW start. @@ -58,7 +57,7 @@ return ;# this is to avoid an echo of the last command above in cout Note that variable *CSF_TestDataPath* is set to default value at DRAW start, pointing at the folder $CASROOT/data. In this example, subdirectory d:/occt/test-data is added to this path. Similar code could be used on Linux and Mac OS X except that on non-Windows platforms colon ":" should be used as path separator instead of semicolon ";". -All tests are run from DRAW command prompt (run *draw.tcl* or *draw.sh* to start it). +All tests are run from DRAW command prompt (run *draw.bat* or *draw.sh* to start it). @subsubsection testmanual_1_3_2 Running Tests @@ -102,7 +101,7 @@ Example: The tests are considered as non-regressive if only OK, BAD (i.e. known problem), and SKIPPED (i.e. not executed, typically because of lack of a data file) statuses are reported. See Interpretation of test results for details. 
-The results and detailed logs of the tests are saved by default to a subdirectory of the current folder, whose name is generated automatically using the current date and time, prefixed by word "results_" and Git branch name (if Git is available and current sources are managed by Git). +The results and detailed logs of the tests are saved by default to a new subdirectory of the subdirectory *results* in the current folder, whose name is generated automatically using the current date and time, prefixed by Git branch name (if Git is available and current sources are managed by Git). If necessary, a non-default output directory can be specified using option -outdir followed by a path to the directory. This directory should be new or empty; use option -overwrite to allow writing results in existing non-empty directory. Example: @@ -111,9 +110,7 @@ Draw[]> testgrid -outdir d:/occt/last_results -overwrite ~~~~~ In the output directory, a cumulative HTML report summary.html provides links to reports on each test case. An additional report in JUnit-style XML format can be output for use in Jenkins or other continuous integration system. -Type help testgrid in DRAW prompt to get help on options supported by *testgrid* command. - -For example: +Type help testgrid in DRAW prompt to get help on options supported by *testgrid* command: ~~~~~ Draw[3]> help testgrid @@ -125,6 +122,7 @@ testgrid: Run all tests, or specified group, or one grid -outdir dirname: set log directory (should be empty or non-existing) -overwrite: force writing logs in existing non-empty directory -xml filename: write XML report for Jenkins (in JUnit-like format) + -beep: play sound signal at the end of the tests Groups, grids, and test cases to be executed can be specified by list of file masks, separated by spaces or comma; default is all (*). @@ -146,6 +144,28 @@ Note that normally an intermediate output of the script is not shown.
The detail To see intermediate commands and their output during the test execution, add one more argument "echo" at the end of the command line. Note that with this option the log is not collected and summary is not produced. +Type help test in DRAW prompt to get help on options supported by *test* command: + +~~~~~ +Draw[3]> help test +test: Run specified test case + Use: test group grid casename [options...] + Allowed options are: + -echo: all commands and results are echoed immediately, + but log is not saved and summary is not produced + It is also possible to use "1" instead of "-echo" + If echo is OFF, log is stored in memory and only summary + is output (the log can be obtained with command "dlog get") + -outfile filename: set log file (should be non-existing), + it is possible to save log file in text file or + in html file (with snapshot), for that "filename" + should have ".html" extension + -overwrite: force writing log in existing file + -beep: play sound signal at the end of the test + -errors: show all lines from the log report that are recognized as errors + This key will be ignored if the "-echo" key is already set. +~~~~~ + @subsubsection testmanual_1_3_4 Creating a New Test The detailed rules of creation of new tests are given in section 3. The following short description covers the most typical situations: @@ -166,6 +186,7 @@ Use prefix "bug" followed by Mantis issue ID and, if necessary, additional suffi Example: * Added files: + ~~~~~ git status --short A tests/bugs/heal/data/OCC210a.brep @@ -284,7 +305,7 @@ The test group may contain *parse.rules* file. This file defines patterns used f Each line in the file should specify a status (single word), followed by a regular expression delimited by slashes (*/*) that will be matched against lines in the test output log to check if it corresponds to this status. -The regular expressions support a subset of the Perl *re* syntax. See also Perl regular expressions.
+The regular expressions support a subset of the Perl *re* syntax. See also Perl regular expressions. The rest of the line can contain a comment message, which will be added to the test report when this status is detected. @@ -353,7 +374,7 @@ Usually it executes a specific sequence of commands common for all tests in the Example: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{.tcl} - vdump $logdir/${casename}.gif ;# makes a snap-shot of AIS viewer + vdump $imagedir/${casename}.png ;# makes a snap-shot of AIS viewer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @subsubsection testmanual_2_3_4 File "cases.list" @@ -437,7 +458,9 @@ If the new test corresponds to a functionality already covered by the existing s It is advisable to make self-contained test scripts whenever possible, so as they could be used in environments where data files are not available. For that simple geometric objects and shapes can be created using DRAW commands in the test script itself. -If the test requires a data file, it should be put to subdirectory *data* of the test grid. It is recommended to prefix the data file with the corresponding issue id prefixed by *bug*, e.g. *bug12345_face1.brep*, to avoid possible conflicts with names of existing data files. +If the test requires a data file, it should be put to directory listed in environment variable *CSF_TestDataPath*. +Alternatively, it can be put to subdirectory *data* of the test grid. +It is recommended to prefix the data file with the corresponding issue id prefixed by *bug*, e.g. *bug12345_face1.brep*, to avoid possible conflicts with names of existing data files. Note that when the test is integrated to the master branch, OCC team will move the data file to data files repository, so as to keep OCCT sources repository clean from data files. 
@@ -498,15 +521,23 @@ Example: stepread [locate_data_file CAROSKI_COUPELLE.step] a * ~~~~~ -When the test needs to produce some snapshots or other artefacts, use Tcl variable *logdir* as the location where such files should be put. Command *testgrid* sets this variable to the subdirectory of the results folder corresponding to the grid. Command *test* sets it to $CASROOT/tmp unless it is already defined. Use Tcl variable *casename* to prefix all files produced by the test. This variable is set to the name of the test case. +When the test needs to produce some snapshots or other artefacts, use Tcl variable *imagedir* as the location where such files should be put. +Command *testgrid* sets this variable to the subdirectory of the results folder corresponding to the grid. +Command *test* by default creates a dedicated temporary directory in the system temporary folder (normally the one specified by environment variable *TempDir*, *TEMP*, or *TMP*) for each execution, and sets *imagedir* to that location. +However, if variable *imagedir* is defined at the top level of the Tcl interpreter, command *test* will use it instead of creating a new directory. + +Use Tcl variable *casename* to prefix all files produced by the test. +This variable is set to the name of the test case. +For the image file (snapshot) to be recognized by the test system (for inclusion in HTML log and differences), its name should start with the name of the test case (use variable *casename*), optionally followed by underscore or dash and arbitrary suffix. +The image format (defined by extension) should be *png*. Example: ~~~~~ -xwd $logdir/${casename}.png +xwd $imagedir/${casename}.png vdisplay result; vfit -vdump $logdir/${casename}-axo.png +vdump $imagedir/${casename}-axo.png vfront; vfit -vdump $logdir/${casename}-front.png +vdump $imagedir/${casename}-front.png ~~~~~ would produce: @@ -518,11 +549,15 @@ A1-front.png Note that OCCT must be built with FreeImage support to be able to produce usable images.
+Other Tcl variables defined during the test execution are: +- *groupname*: name of the test group +- *gridname*: name of the test grid +- *dirname*: path to the root directory of the current set of test scripts + In order to ensure that the test works as expected in different environments, observe the following additional rules: * Avoid using external commands such as *grep, rm,* etc., as these commands can be absent on another system (e.g. on Windows); use facilities provided by Tcl instead. * Do not put call to *locate_data_file* in catch statement – this can prevent correct interpretation of the missing data file by the test system. - @subsection testmanual_3_5 Interpretation of test results The result of the test is evaluated by checking its output against patterns defined in the files *parse.rules* of the grid and group. @@ -1010,7 +1045,7 @@ for {set i 1} {$i < 100} {incr i} { @subsubsection testmanual_5_3_5 Visualization -Take a snapshot of the viewer, give it the name of the test case, and save in the directory indicated by Tcl variable *imagedir*. Note that this variable directs to the *log* directory if command *testgrid* is active, or to *tmp* subdirectory of the current folder if the test is run interactively. +Take a snapshot of the viewer, give it the name of the test case, and save in the directory indicated by Tcl variable *imagedir*. 
~~~~~ vinit diff --git a/src/DrawResources/TestCommands.tcl b/src/DrawResources/TestCommands.tcl index 91ade327c3..81a1bd8722 100644 --- a/src/DrawResources/TestCommands.tcl +++ b/src/DrawResources/TestCommands.tcl @@ -981,6 +981,7 @@ proc _run_test {scriptsdir group gridname casefile echo} { } # evaluate test case + set tmp_imagedir 0 if [catch { # set variables identifying test case uplevel set casename [file tail $casefile] @@ -1005,6 +1006,7 @@ proc _run_test {scriptsdir group gridname casefile echo} { } uplevel set imagedir \"$imagedir\" + set tmp_imagedir 1 } # execute test scripts @@ -1049,18 +1051,22 @@ proc _run_test {scriptsdir group gridname casefile echo} { # add memory and timing info set stats "" if { ! [catch {uplevel meminfo h} memuse] } { - set stats "MEMORY DELTA: [expr ($memuse - $membase) / 1024] KiB\n" + append stats "MEMORY DELTA: [expr ($memuse - $membase) / 1024] KiB\n" } uplevel dchrono _timer stop set time [uplevel dchrono _timer show] - if [regexp -nocase {CPU user time:[ \t]*([0-9.e-]+)} $time res cpu] { - set stats "${stats}TOTAL CPU TIME: $cpu sec\n" + if { [regexp -nocase {CPU user time:[ \t]*([0-9.e-]+)} $time res cpu_usr] } { + append stats "TOTAL CPU TIME: $cpu_usr sec\n" } if { $dlog_exists && ! 
$echo } { dlog add $stats } else { puts $stats } + + # unset global vars + uplevel unset casename groupname gridname dirname + if { $tmp_imagedir } { uplevel unset imagedir test_image } } # Internal procedure to check log of test execution and decide if it passed or failed @@ -1090,7 +1096,7 @@ if [catch { continue } set status [string trim $status] - if { $comment != "" } { set status "$status ([string trim $comment])" } + if { $comment != "" } { append status " ([string trim $comment])" } set rexp [regsub -all {\\b} $rexp {\\y}] ;# convert regexp from Perl to Tcl style lappend badwords [list $status $rexp] } @@ -1605,7 +1611,7 @@ proc _log_xml_summary {logdir filename log include_cout} { } else { while { [gets $fdlog logline] >= 0 } { if { $include_cout } { - set testout "$testout$logline\n" + append testout "$logline\n" } if [regexp -nocase {TOTAL CPU TIME:\s*([\d.]+)\s*sec} $logline res cpu] { set add_cpu " time=\"$cpu\"" @@ -1620,21 +1626,21 @@ proc _log_xml_summary {logdir filename log include_cout} { # record test case with its output and status # Mapping is: SKIPPED, BAD, and OK to OK, all other to failure - set testcases "$testcases\n \n" - set testcases "$testcases\n \n$testout " + append testcases "\n \n" + append testcases "\n \n$testout " if { $result != "OK" } { if { [regexp -nocase {^SKIP} $result] } { incr nberr - set testcases "$testcases\n " + append testcases "\n " } elseif { [regexp -nocase {^BAD} $result] } { incr nbskip - set testcases "$testcases\n $message" + append testcases "\n $message" } else { incr nbfail - set testcases "$testcases\n " + append testcases "\n " } } - set testcases "$testcases\n " + append testcases "\n " } # write last test suite @@ -1741,6 +1747,11 @@ proc _diff_img_name {dir1 dir2 casepath imgfile} { return [file join $dir1 $casepath "diff-[file tail $dir2]-$imgfile"] } +# auxiliary procedure to produce string comparing two values +proc _diff_show_ratio {value1 value2} { + return "$value1 / $value2 \[[format 
"%+5.2f%%" [expr 100 * ($value1 - $value2) / double($value2)]]\]" +} + # Procedure to compare results of two runs of test cases proc _test_diff {dir1 dir2 basename status verbose _logvar {_statvar ""}} { upvar $_logvar log @@ -1785,6 +1796,10 @@ proc _test_diff {dir1 dir2 basename status verbose _logvar {_statvar ""}} { if { [llength $in1] > 0 } { _log_and_puts log "Only in $path1: $in1" } if { [llength $in2] > 0 } { _log_and_puts log "Only in $path2: $in2" } } + set gcpu1 0 + set gcpu2 0 + set gmem1 0 + set gmem2 0 foreach logfile $common { # load two logs set log1 [_read_file [file join $dir1 $basename $logfile]] @@ -1816,10 +1831,12 @@ proc _test_diff {dir1 dir2 basename status verbose _logvar {_statvar ""}} { [regexp {TOTAL CPU TIME:\s*([\d.]+)} $log2 res1 cpu2] } { set stat(cpu1) [expr $stat(cpu1) + $cpu1] set stat(cpu2) [expr $stat(cpu2) + $cpu2] + set gcpu1 [expr $gcpu1 + $cpu1] + set gcpu2 [expr $gcpu2 + $cpu2] # compare CPU times with 10% precision (but not less 0.5 sec) if { [expr abs ($cpu1 - $cpu2) > 0.5 + 0.05 * abs ($cpu1 + $cpu2)] } { - _log_and_puts log "CPU [split $basename /] $casename: $cpu1 / $cpu2" + _log_and_puts log "CPU [split $basename /] $casename: [_diff_show_ratio $cpu1 $cpu2]" } } @@ -1830,16 +1847,18 @@ proc _test_diff {dir1 dir2 basename status verbose _logvar {_statvar ""}} { [regexp {MEMORY DELTA:\s*([\d.]+)} $log2 res1 mem2] } { set stat(mem1) [expr $stat(mem1) + $mem1] set stat(mem2) [expr $stat(mem2) + $mem2] + set gmem1 [expr $gmem1 + $mem1] + set gmem2 [expr $gmem2 + $mem2] # compare memory usage with 10% precision (but not less 16 KiB) if { [expr abs ($mem1 - $mem2) > 16 + 0.05 * abs ($mem1 + $mem2)] } { - _log_and_puts log "MEMORY [split $basename /] $casename: $mem1 / $mem2" + _log_and_puts log "MEMORY [split $basename /] $casename: [_diff_show_ratio $mem1 $mem2]" } } # check images - set imglist1 [glob -directory $path1 -types f -tails -nocomplain $casename*.{png,gif}] - set imglist2 [glob -directory $path2 -types f -tails 
-nocomplain $casename*.{png,gif}] + set imglist1 [glob -directory $path1 -types f -tails -nocomplain ${casename}.{png,gif} ${casename}-*.{png,gif} ${casename}_*.{png,gif}] + set imglist2 [glob -directory $path2 -types f -tails -nocomplain ${casename}.{png,gif} ${casename}-*.{png,gif} ${casename}_*.{png,gif}] _list_diff $imglist1 $imglist2 imgin1 imgin2 imgcommon if { "$verbose" > 1 } { if { [llength $imgin1] > 0 } { _log_and_puts log "Only in $path1: $imgin1" } @@ -1860,11 +1879,19 @@ proc _test_diff {dir1 dir2 basename status verbose _logvar {_statvar ""}} { } } } + + # report CPU and memory difference in group if it is greater than 10% + if { [expr abs ($gcpu1 - $gcpu2) > 0.5 + 0.005 * abs ($gcpu1 + $gcpu2)] } { + _log_and_puts log "CPU [split $basename /]: [_diff_show_ratio $gcpu1 $gcpu2]" + } + if { [expr abs ($gmem1 - $gmem2) > 16 + 0.005 * abs ($gmem1 + $gmem2)] } { + _log_and_puts log "MEMORY [split $basename /]: [_diff_show_ratio $gmem1 $gmem2]" + } } if { "$_statvar" == "" } { - _log_and_puts log "Total MEMORY difference: $stat(mem1) / $stat(mem2)" - _log_and_puts log "Total CPU difference: $stat(cpu1) / $stat(cpu2)" + _log_and_puts log "Total MEMORY difference: [_diff_show_ratio $stat(mem1) $stat(mem2)]" + _log_and_puts log "Total CPU difference: [_diff_show_ratio $stat(cpu1) $stat(cpu2)]" } } @@ -1889,8 +1916,15 @@ proc _log_html_diff {file log dir1 dir2} { puts $fd "
"
     set logpath [file split [file normalize $file]]
     foreach line $log {
-        puts $fd $line
+        # put a line; highlight considerable (>5%) deviations of CPU and memory
+        if { [regexp "\[\\\[](\[0-9.e+-]+)%\[\]]" $line res value] && 
+             [expr abs($value)] > 5 } {
+            puts $fd "<table><tr><td bgcolor=\"[expr $value > 0 ? \"red\" : \"lightgreen\"]\">$line</td></tr></table>
" + } else { + puts $fd $line + } + # add images if { [regexp {IMAGE[ \t]+([^:]+):[ \t]+([A-Za-z0-9_.-]+)} $line res case img] } { if { [catch {eval file join "" [lrange $case 0 end-1]} gridpath] } { # note: special handler for the case if test grid directoried are compared directly diff --git a/src/Image/Image_Diff.cxx b/src/Image/Image_Diff.cxx index c3a2ff874a..7659a680b3 100644 --- a/src/Image/Image_Diff.cxx +++ b/src/Image/Image_Diff.cxx @@ -288,6 +288,12 @@ Standard_Integer Image_Diff::Compare() return -1; } + // first check if images are exactly teh same + if (! memcmp (myImageNew->Data(), myImageRef->Data(), myImageRef->SizeBytes())) + { + return 0; + } + // Tolerance of comparison operation for color // Maximum difference between colors (white - black) = 100% Image_ColorXXX24 aDiff = {{255, 255, 255}}; diff --git a/src/OSD/OSD_PerfMeter.cxx b/src/OSD/OSD_PerfMeter.cxx index 3c0fbe6058..d8438ff79f 100644 --- a/src/OSD/OSD_PerfMeter.cxx +++ b/src/OSD/OSD_PerfMeter.cxx @@ -241,7 +241,7 @@ void perf_sprint_all_meters (char *buffer, int length, int reset) for (i=0; inb_enter) { - int n = sprintf (string, " Perf meter results : enters seconds sec/enter\n"); + int n = sprintf (string, " Perf meter results : enters seconds microsec/enter\n"); if (n < length) { memcpy (buffer, string, n); diff --git a/src/OSD/OSD_PerfMeter.hxx b/src/OSD/OSD_PerfMeter.hxx index a2372ad72d..e26b632a27 100644 --- a/src/OSD/OSD_PerfMeter.hxx +++ b/src/OSD/OSD_PerfMeter.hxx @@ -59,7 +59,7 @@ public: void Flush() const { perf_close_imeter(myIMeter); } //! 
Assures stopping upon destruction - virtual ~OSD_PerfMeter() { if (myIMeter >= 0) Stop(); } + ~OSD_PerfMeter() { if (myIMeter >= 0) Stop(); } protected: diff --git a/src/ViewerTest/ViewerTest_ViewerCommands.cxx b/src/ViewerTest/ViewerTest_ViewerCommands.cxx index e1d466e720..8f7c676b5c 100644 --- a/src/ViewerTest/ViewerTest_ViewerCommands.cxx +++ b/src/ViewerTest/ViewerTest_ViewerCommands.cxx @@ -5281,7 +5281,7 @@ static int VDiffImage (Draw_Interpretor& theDI, Standard_Integer theArgNb, const theDI << aDiffColorsNb << "\n"; // save image of difference - if (aDiffImagePath != NULL) + if (aDiffColorsNb >0 && aDiffImagePath != NULL) { aComparer.SaveDiffImage (aDiffImagePath); } diff --git a/tests/demo/samples/cpu b/tests/demo/samples/cpu new file mode 100644 index 0000000000..c5b15000ee --- /dev/null +++ b/tests/demo/samples/cpu @@ -0,0 +1,7 @@ +# test for CPU sample +source $env(CASROOT)/samples/tcl/cpu.tcl + +# make a snapshot +vdump $imagedir/${test_image}.png + +puts "TEST COMPLETED"