Posted almost 11 years ago by Robert Maynard
I am proud to announce the CMake 3.0 fourth release candidate.
Sources and binaries are available at:
http://www.cmake.org/files/v3.0/?C=M;O=D
Documentation is available at:
http://www.cmake.org/cmake/help/v3.0
Release notes appear below and are also published at
http://www.cmake.org/cmake/help/v3.0/release/3.0.0.html
Some of the more significant features of CMake 3.0 are:
Compatibility options supporting code written for CMake versions prior to 2.4 have been removed.
The CMake language has been extended with *Bracket Argument* and *Bracket Comment* syntax inspired by Lua long brackets.
The CMake documentation has been converted to reStructuredText and uses Sphinx for generation.
Generators for Visual Studio 10 (2010) and later were renamed to include the product year like generators for older VS versions:
"Visual Studio 10" -> "Visual Studio 10 2010"
"Visual Studio 11" -> "Visual Studio 11 2012"
"Visual Studio 12" -> "Visual Studio 12 2013"
This clarifies which generator goes with each Visual Studio version. The old names are recognized for compatibility.
A new "CodeLite" extra generator is available for use with the Makefile or Ninja generators.
A new "Kate" extra generator is available for use with the Makefile or Ninja generators.
The "add_library()" command learned a new "INTERFACE" library type. Interface libraries have no build rules but may have properties defining "usage requirements" and may be installed, exported, and imported. This is useful to create header-only libraries that have concrete link dependencies on other libraries.
The "export()" command learned a new "EXPORT" mode that retrieves the list of targets to export from an export set configured by the "install(TARGETS)" command "EXPORT" option. This makes it easy to export from the build tree the same targets that are exported from the install tree.
The "project()" command learned to set some version variables to values specified by the new "VERSION" option or to empty strings. See policy "CMP0048".
Several long-outdated commands that should no longer be called have been disallowed in new code by policies:
Policy "CMP0029" disallows "subdir_depends()"
Policy "CMP0030" disallows "use_mangled_mesa()"
Policy "CMP0031" disallows "load_command()"
Policy "CMP0032" disallows "output_required_files()"
Policy "CMP0033" disallows "export_library_dependencies()"
Policy "CMP0034" disallows "utility_source()"
Policy "CMP0035" disallows "variable_requires()"
Policy "CMP0036" disallows "build_name()"
-----------------------------------------------------------------
Changes made since CMake 3.0.0-rc3:
Brad King (10):
Help: Revise and format policy CMP0025 and CMP0047 docs
Do not warn by default when policy CMP0025 or CMP0047 is not set
CMakeDetermineVSServicePack: Format documentation
CMakeDetermineVSServicePack: Match versions more robustly
CMakeDetermineVSServicePack: Add VS 11 update 4
Fortran: Detect pointer size on Intel archs with PGI (#14870)
CMakeRCInformation: Do not mention 'Fortran' in documentation
CMakeRCInformation: Recognize 'windres' tools with '.' in name (#14865)
Drop /lib32 and /lib64 from link directories and RPATH (#14875)
cmArchiveWrite: Handle NULL error string (#14882)
Nils Gladitz (1):
Policies: omit warnings about unset policies when they are actually set to NEW
Robert Maynard (1):
Qt4Macros: Make QT4_CREATE_MOC_COMMAND a function
Sean McBride (1):
create_test_sourcelist: Initialize variable at declaration
Stephen Kelly (1):
Help: Fix typo in cmake-qt manual.
Posted almost 11 years ago by Robert Maynard
I am proud to announce the CMake 3.0 third release candidate.
Sources and binaries are available at:
http://www.cmake.org/files/v3.0/?C=M;O=D
Documentation is available at:
http://www.cmake.org/cmake/help/v3.0
Release notes appear below and are also published at
http://www.cmake.org/cmake/help/v3.0/release/3.0.0.html
Some of the more significant features of CMake 3.0 are:
Compatibility options supporting code written for CMake versions prior to 2.4 have been removed.
The CMake language has been extended with *Bracket Argument* and *Bracket Comment* syntax inspired by Lua long brackets.
The CMake documentation has been converted to reStructuredText and uses Sphinx for generation.
Generators for Visual Studio 10 (2010) and later were renamed to include the product year like generators for older VS versions:
"Visual Studio 10" -> "Visual Studio 10 2010"
"Visual Studio 11" -> "Visual Studio 11 2012"
"Visual Studio 12" -> "Visual Studio 12 2013"
This clarifies which generator goes with each Visual Studio version. The old names are recognized for compatibility.
A new "CodeLite" extra generator is available for use with the Makefile or Ninja generators.
A new "Kate" extra generator is available for use with the Makefile or Ninja generators.
The "add_library()" command learned a new "INTERFACE" library type. Interface libraries have no build rules but may have properties defining "usage requirements" and may be installed, exported, and imported. This is useful to create header-only libraries that have concrete link dependencies on other libraries.
The "export()" command learned a new "EXPORT" mode that retrieves the list of targets to export from an export set configured by the "install(TARGETS)" command "EXPORT" option. This makes it easy to export from the build tree the same targets that are exported from the install tree.
The "project()" command learned to set some version variables to values specified by the new "VERSION" option or to empty strings. See policy "CMP0048".
Several long-outdated commands that should no longer be called have been disallowed in new code by policies:
Policy "CMP0029" disallows "subdir_depends()"
Policy "CMP0030" disallows "use_mangled_mesa()"
Policy "CMP0031" disallows "load_command()"
Policy "CMP0032" disallows "output_required_files()"
Policy "CMP0033" disallows "export_library_dependencies()"
Policy "CMP0034" disallows "utility_source()"
Policy "CMP0035" disallows "variable_requires()"
Policy "CMP0036" disallows "build_name()"
-----------------------------------------------------------------
Changes made since CMake 3.0.0-rc1:
Aurélien Gâteau (1):
find_dependency: Give more helpful message if VERSION is empty
Bas Couwenberg (1):
FindRuby: Add support for Ruby 2.0 and 2.1
Brad King (12):
Help: Add FindRuby-2 topic release notes
Help: Consolidate FindRuby-2 release notes for 3.0.0
Help: Mention in find_package that cmake-gui step is Windows-only (#14781)
cmake: Fix --check-build-system argument count check (#14784)
Record more policies on targets when created
Tests: Simplify and document policy scopes in RunCMake.CMP* tests
Help: Document variables CMAKE_APPBUNDLE_PATH and CMAKE_FRAMEWORK_PATH
CMakeDetermine*Compiler: Factor out search for compiler in PATH
Xcode: Convert forced CMAKE_<LANG>_COMPILER to full path if possible
CMake*CompilerId: Fix patch level for Intel >= 14.0 (#14806)
CMake 3.0.0-rc2
CMake 3.0.0-rc3
Matt McCormick (1):
FindPython{Interp,Libs}: Search for Python 3.4.
Stephen Kelly (11):
add_definitions: Don't document genex support.
CMP0043: Document old and new interfaces for setting directory property.
find_dependency: Don't propagate EXACT argument.
Qt4: Use correct qdbus executable in macro.
QtAutogen: Fix AUTOGEN depends on custom command output with VS.
find_dependency: Make sure invalid EXACT use can be reported.
cmTarget: Don't create duplicate backtraces in CMP0046 warning
QtDialog: Avoid linking to Qt4 WinMain when using Qt 5.
cmTarget: Restore <CONFIG>_LOCATION to CMP0026 OLD behavior (#14808)
QtDialog: Fix Qt 5 build on non-Windows.
Disallow INTERFACE libraries with add_custom_command(TARGET).
Posted almost 11 years ago by Ben Boeckel
ParaView is not a small project. It includes over 3 million lines of C and C++ code and 250,000 lines of Python, with a smattering of other languages such as FORTRAN, Tcl, and Java (according to the wonderful sloccount tool), and the build has more than 18,000 things to do when Python bindings are enabled. When loading it up to handle data on a supercomputer, its memory footprint seems to be gargantuan.
In a CoProcessing module, just loading up ParaView shoots your virtual memory space up by 500MB. But what is actually taking up all of this memory when the build artifacts only take up a total of 150MB? To find out, we need to dig into how a kernel (specifically, Linux here) does its memory accounting.
First, I'll go over the methods used to find out memory usage and what is using it. On Linux, there is a filesystem called procfs mounted at the /proc directory. This filesystem is implemented with a little bit of kernel knowledge and magic to allow you to introspect what is going on in a process. Here, the two files which are important to us are the status and maps files. The status file contains things like memory usage, historical memory usage, signal masks, capabilities, and more. The maps file contains a list of all of the memory regions mapped to a process, their permissions, and what file they come from. Information on other files may be found in the proc(5) manpage. To find out how much memory ParaView is causing a process to consume, I used a small CoProcessing module and modified it to initialize ParaView and then print out its accounting information from /proc/self/status and /proc/self/maps.
First, looking at the status file, we see something like (trimmed for brevity):
% cat /proc/self/status | head -n 7
Name: cat
State: R (running)
Tgid: 16128
Ngid: 0
Pid: 16128
PPid: 15490
TracerPid: 0
This file is where we can get statistics on our process directly from the kernel, such as its name, whether it is waiting on I/O, sleeping, or running, permission information, signal masks, memory usage, and more. The field we are interested in here is VmPeak, which is the value you usually see in top under VIRT and which always seems so high. This value is the most memory that the process ever had mapped for it over its lifetime. For the example here, I get:
VmPeak: 107916 kB
Now, you may ask, "Really? 100MB for cat?" It may seem like a lot, but we can see where it comes from using the maps file, with a small awk helper [1] to convert address ranges into sizes. Let's see where it's all going:
% cat /proc/self/maps | awk -f range2size.awk
49152 r-xp 00000000 08:13 787901 /usr/bin/cat
4096 r--p 0000b000 08:13 787901 /usr/bin/cat
4096 rw-p 0000c000 08:13 787901 /usr/bin/cat
135168 rw-p 00000000 00:00 0 [heap]
106074112 r--p 00000000 08:13 804951 /usr/lib/locale/locale-archive
1785856 r-xp 00000000 08:13 787887 /usr/lib64/libc-2.18.so
2097152 ---p 001b4000 08:13 787887 /usr/lib64/libc-2.18.so
16384 r--p 001b4000 08:13 787887 /usr/lib64/libc-2.18.so
8192 rw-p 001b8000 08:13 787887 /usr/lib64/libc-2.18.so
20480 rw-p 00000000 00:00 0
131072 r-xp 00000000 08:13 786485 /usr/lib64/ld-2.18.so
12288 rw-p 00000000 00:00 0
4096 rw-p 00000000 00:00 0
4096 r--p 0001f000 08:13 786485 /usr/lib64/ld-2.18.so
4096 rw-p 00020000 08:13 786485 /usr/lib64/ld-2.18.so
4096 rw-p 00000000 00:00 0
139264 rw-p 00000000 00:00 0 [stack]
8192 r-xp 00000000 00:00 0 [vdso]
4096 r-xp 00000000 00:00 0 [vsyscall]
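The range2size.awk helper itself is not listed in the post; a minimal sketch that would produce output like the above, assuming gawk for strtonum (the same function the one-liner later in this post relies on), is:
# range2size.awk: replace the "start-end" address range in field 1 of
# /proc/<pid>/maps with its size in bytes, then print the line.
{
    split($1, range, "-")
    $1 = strtonum("0x" range[2]) - strtonum("0x" range[1])
    print
}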
We can see here that the vast majority is that /usr/lib/locale/locale-archive file (which we can avoid; see below). After that, it's mostly the C library and the linker, which are unavoidable for the most part. Other things to take note of are the special [heap], [stack], [vdso], and [vsyscall] sections. The first two should be fairly obvious, but the other two are created by the kernel and are the pages where the code which actually talks to the kernel lives. The other field which interests us is the second one, which holds the permission bits for that section of memory. The "r-xp", "rw-p", and "r--p" pages are fairly standard (executable code, writeable memory, and read-only data memory). The oddball is that "---p" section, which is pretty large (two megabytes). This memory helps catch buffer overflows from trampling over the code, assists in aligning memory to ease sharing across processes, and is created by the binutils toolchain to separate the code section (r-xp) from the writeable data section (rw-p) in shared libraries. On a 64 bit x86 machine, each library uses 2MB for this buffer, which seems like a lot until you realize that the memory is never actually realized in silicon (since any use causes a segfault) and that 64 bit machines have access to at least 256TB of virtual memory to play with (which can be extended to the full 16EB if the need ever arises). On 32 bit machines, this gap is instead 4 KB since the address space is at a much higher premium.
Now that we have some tools at our disposal to try and figure out where our memory is going, let's look at ParaView itself.
The data here was gathered using a CoProcessing module which loaded up ParaView, dumped the information from the status and maps files, then quit. This gives us a good idea of how much memory using ParaView is costing us without getting any datasets thrown into the mix.
The maps data was then massaged using the following bit of shell code:
awk '{ split($1, range, "-"); $1 = strtonum("0x" range[2]) - strtonum("0x" range[1]); total[$2] += $1 } END { for (perm in total) { print perm ": " total[perm] } }' < maps | sort -k1
This code uses awk to compute the size of the memory range from the first field then accounts that size to the total amount of memory for a given set of permission bits. At the end, it prints out the total amount of memory for each set of permissions and then sorts it. Doing this for a build gives us output like:
---p: 79581184
r--p: 106364928
r-xp: 12341248
rw-p: 15507456
From this, we can see almost 80 MB of that buffer memory, 106 MB of read-only memory, 12 MB of executable code, and 15 MB of writeable memory (this is where static std::string variables end up, since they dynamically allocate their memory). The executable code size isn't going to change much (there's -Os, which optimizes for size, but you usually pay for it in performance, where loop unrolling really boosts performance). The read-only memory has that pesky locale-archive sitting in there inflating our numbers:
7fdd518fc000-7fdd57e25000 r--p 00000000 08:13 804951 /usr/lib/locale/locale-archive
It turns out that this is loaded up only when Python is used. Using other tools to find out why it's loaded [2], we find that its purpose is to aid the C library when working in different locales (specifically, non-C). It can be skipped by running ParaView with the environment variable LC_ALL set to "C". Doing this gives us the following for the r--p sections:
r--p: 290816
Much better. Next on the list is the amount of memory being mapped for those 2MB empty sections. The only ways to actually remove them are to make the sections smaller (which requires patching your toolchain) or to make fewer libraries. The former is out of scope for most users, but the latter could be done by making libraries such as vtkFilters rather than vtkFiltersCore, vtkFiltersGeometry, and so on. However, the easiest way is to use Catalyst builds to reduce the amount of code and libraries in a build to begin with. By using Catalyst builds, ParaView can be made to use only 40% (the Extras edition, which contains data writers) to 50% (for Rendering-Base) of a full build's VIRT memory footprint.
Here is a graph of memory usage for various build types (shared and static) and Catalyst editions (Extras, Rendering-Base, and a full build). All experiments were run with LC_ALL=C to keep the non-Python experiments comparable. Each source tree was built without Python, with Python (+python), and with Python and Unified Bindings enabled (+python+ub). The Python-enabled builds also include experiments (marked with an asterisk) showing usage when Python code was actually used by the CoProcessing pipeline rather than just being built in:
As the graph shows, the Catalyst builds are much smaller than the full builds and get the largest memory usage gains. One thing to note is that if MPI is used, a static build will incur the amount shown here per process. For shared builds, any memory under the "buf", "data", and "code" sections of the detailed graph (attached, but not shown) should be shareable between processes, so each process after the first only costs the amount of "write" memory for the experiment. Also, if Python is used in the experiment, enabling unified bindings makes the build smaller by removing the need for around four thousand targets to build (for a full ParaView build) to wrap all of the VTK classes a second time. The result is that there is less code at the expense of some more "write" memory (9 MB of code removed for 7 MB of write memory).
[1] The range2size.awk file converts the byte ranges into byte sizes to see the sizes more easily.
[2] The ltrace and strace tools, for those interested, but their use is outside the scope of this article.
Posted almost 11 years ago by Robert Maynard
I am proud to announce that CMake 3.0 has entered the release candidate stage.
Sources and binaries are available at:
http://www.cmake.org/files/v3.0/?C=M;O=D
Documentation is available at:
http://www.cmake.org/cmake/help/v3.0
Release notes appear below and are also published at
http://www.cmake.org/cmake/help/v3.0/release/3.0.0.html
Some of the more significant features of CMake 3.0 are:
Compatibility options supporting code written for CMake versions prior to 2.4 have been removed.
The CMake language has been extended with *Bracket Argument* and *Bracket Comment* syntax inspired by Lua long brackets.
The CMake documentation has been converted to reStructuredText and uses Sphinx for generation.
Generators for Visual Studio 10 (2010) and later were renamed to include the product year like generators for older VS versions:
"Visual Studio 10" -> "Visual Studio 10 2010"
"Visual Studio 11" -> "Visual Studio 11 2012"
"Visual Studio 12" -> "Visual Studio 12 2013"
This clarifies which generator goes with each Visual Studio version. The old names are recognized for compatibility.
A new "CodeLite" extra generator is available for use with the Makefile or Ninja generators.
A new "Kate" extra generator is available for use with the Makefile or Ninja generators.
The "add_library()" command learned a new "INTERFACE" library type. Interface libraries have no build rules but may have properties defining "usage requirements" and may be installed, exported, and imported. This is useful to create header-only libraries that have concrete link dependencies on other libraries.
The "export()" command learned a new "EXPORT" mode that retrieves the list of targets to export from an export set configured by the "install(TARGETS)" command "EXPORT" option. This makes it easy to export from the build tree the same targets that are exported from the install tree.
The "project()" command learned to set some version variables to values specified by the new "VERSION" option or to empty strings. See policy "CMP0048".
Several long-outdated commands that should no longer be called have been disallowed in new code by policies:
Policy "CMP0029" disallows "subdir_depends()"
Policy "CMP0030" disallows "use_mangled_mesa()"
Policy "CMP0031" disallows "load_command()"
Policy "CMP0032" disallows "output_required_files()"
Policy "CMP0033" disallows "export_library_dependencies()"
Policy "CMP0034" disallows "utility_source()"
Policy "CMP0035" disallows "variable_requires()"
Policy "CMP0036" disallows "build_name()"
CMake 3.0.0 Release Notes
*************************
Changes made since CMake 2.8.12.2 include the following.
Documentation Changes
=====================
* The CMake documentation has been converted to reStructuredText and
now transforms via Sphinx (http://sphinx-doc.org) into man and html
pages. This allows the documentation to be properly indexed and to
contain cross-references.
Conversion from the old internal documentation format was done by an
automatic process so some documents may still contain artifacts.
They will be updated incrementally over time.
A basic reStructuredText processor has been implemented to support
"cmake --help-command" and similar command-line options.
* New manuals were added:
* "cmake-buildsystem(7)"
* "cmake-commands(7)", replacing "cmakecommands(1)" and
"cmakecompat(1)"
* "cmake-developer(7)"
* "cmake-generator-expressions(7)"
* "cmake-generators(7)"
* "cmake-language(7)"
* "cmake-modules(7)", replacing "cmakemodules(1)"
* "cmake-packages(7)"
* "cmake-policies(7)", replacing "cmakepolicies(1)"
* "cmake-properties(7)", replacing "cmakeprops(1)"
* "cmake-qt(7)"
* "cmake-toolchains(7)"
* "cmake-variables(7)", replacing "cmakevars(1)"
* Release notes for CMake 3.0.0 and above will now be included with
the html documentation.
New Features
============
Syntax
------
* The CMake language has been extended with *Bracket Argument* and
*Bracket Comment* syntax inspired by Lua long brackets:
set(x [===[bracket argument]===] #[[bracket comment]])
Content between equal-length open- and close-brackets is taken
literally with no variable replacements.
Warning: This syntax change could not be made in a fully
compatible way. No policy is possible because syntax parsing
occurs before any chance to set a policy. Existing code using an
unquoted argument that starts with an open bracket will be
interpreted differently without any diagnostic. Fortunately the
syntax is obscure enough that this problem is unlikely in
practice.
Generators
----------
* A new "CodeLite" extra generator is available for use with the
Makefile or Ninja generators.
* A new "Kate" extra generator is available for use with the
Makefile or Ninja generators.
* The "Ninja" generator learned to use "ninja" job pools when
specified by a new "JOB_POOLS" global property.
Commands
--------
* The "add_library()" command learned a new "INTERFACE" library
type. Interface libraries have no build rules but may have
properties defining "usage requirements" and may be installed,
exported, and imported. This is useful to create header-only
libraries that have concrete link dependencies on other libraries.
* The "export()" command learned a new "EXPORT" mode that retrieves
the list of targets to export from an export set configured by the
"install(TARGETS)" command "EXPORT" option. This makes it easy to
export from the build tree the same targets that are exported from
the install tree.
* The "export()" command learned to work with multiple dependent
export sets, thus allowing multiple packages to be built and
exported from a single tree. The feature requires CMake to wait
until the generation step to write the output file. This means one
should not "include()" the generated targets file later during
project configuration because it will not be available. Use *Alias
Targets* instead. See policy "CMP0024".
* The "install(FILES)" command learned to support "generator
expressions" in the list of files.
* The "project()" command learned to set some version variables to
values specified by the new "VERSION" option or to empty strings.
See policy "CMP0048".
* The "string()" command learned a new "CONCAT" mode. It is
particularly useful in combination with the new *Bracket Argument*
syntax.
* The "unset()" command learned a "PARENT_SCOPE" option matching
that of the "set()" command.
* The "include_external_msproject()" command learned to handle
non-C++ projects like ".vbproj" or ".csproj".
* The "ctest_update()" command learned to update work trees managed
by the Perforce (p4) version control tool.
* The "message()" command learned a "DEPRECATION" mode. Such
messages are not issued by default, but may be issued as a warning
if "CMAKE_WARN_DEPRECATED" is enabled, or as an error if
"CMAKE_ERROR_DEPRECATED" is enabled.
* The "target_link_libraries()" command now allows repeated use of
the "LINK_PUBLIC" and "LINK_PRIVATE" keywords.
Variables
---------
* Variable "CMAKE_FIND_NO_INSTALL_PREFIX" has been introduced to
tell CMake not to add the value of "CMAKE_INSTALL_PREFIX" to the
"CMAKE_SYSTEM_PREFIX_PATH" variable by default. This is useful when
building a project that installs some of its own dependencies to
avoid finding files it is about to replace.
* Variable "CMAKE_STAGING_PREFIX" was introduced for use when cross-
compiling to specify an installation prefix on the host system that
differs from a "CMAKE_INSTALL_PREFIX" value meant for the target
system.
* Variable "CMAKE_SYSROOT" was introduced to specify the toolchain
SDK installation prefix, typically for cross-compiling. This is used
to pass a "--sysroot" option to the compiler and as a prefix
searched by "find_*" commands.
* Variable "CMAKE_<LANG>_COMPILER_TARGET" was introduced for use
when cross-compiling to specify the target platform in the
*toolchain file* specified by the "CMAKE_TOOLCHAIN_FILE" variable.
This is used to pass an option such as "--target=<triple>" to some
cross- compiling compiler drivers.
* Variable "CMAKE_MAP_IMPORTED_CONFIG_<CONFIG>" has been introduced
to optionally initialize the "MAP_IMPORTED_CONFIG_<CONFIG>" target
property.
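A hypothetical toolchain file exercising the new cross-compiling
variables (all paths and the target triple are invented for
illustration) might look like:

  # arm-toolchain.cmake, passed via -DCMAKE_TOOLCHAIN_FILE=...
  set(CMAKE_SYSTEM_NAME Linux)
  set(CMAKE_SYSROOT /opt/sdk/arm/sysroot)          # --sysroot and find_* prefix
  set(CMAKE_STAGING_PREFIX /opt/sdk/arm/staging)   # host-side install prefix
  set(CMAKE_C_COMPILER clang)
  set(CMAKE_C_COMPILER_TARGET arm-linux-gnueabihf) # becomes --target=<triple>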
Properties
----------
* The "ADDITIONAL_MAKE_CLEAN_FILES" directory property learned to
support "generator expressions".
* A new directory property "CMAKE_CONFIGURE_DEPENDS" was introduced
to allow projects to specify additional files on which the
configuration process depends. CMake will re-run at build time when
one of these files is modified. Previously this was only possible to
achieve by specifying such files as the input to a
"configure_file()" command.
* A new *AUTORCC* feature replaces the need to invoke
"qt4_add_resources()" by allowing ".qrc" files to be listed as
target sources.
* A new *AUTOUIC* feature replaces the need to invoke
"qt4_wrap_ui()".
* Test properties learned to support "generator expressions". This
is useful to specify per-configuration values for test properties
like "REQUIRED_FILES" and "WORKING_DIRECTORY".
* A new "SKIP_RETURN_CODE" test property was introduced to tell
"ctest(1)" to treat a particular test return code as if the test
were not run. This is useful for test drivers to report that
certain test requirements were not available.
* New types of *Compatible Interface Properties* were introduced,
namely the "COMPATIBLE_INTERFACE_NUMBER_MAX" and
"COMPATIBLE_INTERFACE_NUMBER_MIN" for calculating numeric maximum
and minimum values respectively.
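For example, a test driver that exits with a designated code when
its requirements are unavailable could be wired up like this (the
test name and the choice of code 77 are invented for illustration):

  add_test(NAME needs_gpu COMMAND gpu_test_driver)
  # Tell ctest(1) to report the test as "not run" when the driver
  # exits with code 77 (e.g. because no GPU was found).
  set_tests_properties(needs_gpu PROPERTIES SKIP_RETURN_CODE 77)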
Modules
-------
* The "CheckTypeSize" module "check_type_size" macro and the
"CheckStructHasMember" module "check_struct_has_member" macro
learned a new "LANGUAGE" option to optionally check C++ types.
* The "ExternalData" module learned to work with no URL templates if
a local store is available.
* The "ExternalProject" function "ExternalProject_Add" learned a new
"GIT_SUBMODULES" option to specify a subset of available submodules
to checkout.
* A new "FindBacktrace" module has been added to support
"find_package(Backtrace)" calls.
* A new "FindLua" module has been added to support
"find_package(Lua)" calls.
* The "FindBoost" module learned a new "Boost_NAMESPACE" option to
change the "boost" prefix on library names.
* The "FindBoost" module learned to control search for libraries with
the "g" tag (for MS debug runtime) with a new
"Boost_USE_DEBUG_RUNTIME" option. It is "ON" by default to preserve
existing behavior.
* The "FindJava" and "FindJNI" modules learned to use a "JAVA_HOME"
CMake variable or environment variable, and then try
"/usr/libexec/java_home" on OS X.
* The "UseJava" module "add_jar" function learned a new "MANIFEST"
option to pass the "-m" option to "jar".
* A new "CMakeFindDependencyMacro" module was introduced with a
"find_dependency" macro to find transitive dependencies in a
"package configuration file". Such dependencies are omitted by the
listing of the "FeatureSummary" module.
* The "FindQt4" module learned to create *Imported Targets* for Qt
executables. This helps disambiguate when using multiple "Qt
versions" in the same buildsystem.
Generator Expressions
---------------------
* New "$<PLATFORM_ID>" and "$<PLATFORM_ID:...>" "generator
expressions" have been added.
* The "$<CONFIG>" "generator expression" now has a variant which
takes no argument. This is equivalent to the "$<CONFIGURATION>"
expression.
* New "$<UPPER_CASE:...>" and "$<LOWER_CASE:...>" "generator
expressions" generator expressions have been added.
* A new "$<MAKE_C_IDENTIFIER:...>" "generator expression" has been
added.
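A hypothetical use of several of these expressions together (the
target and definition names are invented):

  add_executable(app main.cpp)
  target_compile_definitions(app PRIVATE
    # defined only when configuring for Windows
    $<$<PLATFORM_ID:Windows>:APP_ON_WINDOWS>
    # expands to e.g. APP_CONFIG_DEBUG or APP_CONFIG_RELEASE
    APP_CONFIG_$<UPPER_CASE:$<CONFIG>>)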
Other
-----
* The "cmake(1)" "-E" option learned a new "sleep" command.
* The "ccmake(1)" dialog learned to honor the "STRINGS" cache entry
property to cycle through the enumerated list of possible values.
* The "cmake-gui(1)" dialog learned to remember window settings
between sessions.
* The "cmake-gui(1)" dialog learned to remember the type of a cache
entry for completion in the "Add Entry" dialog.
New Diagnostics
===============
* Directories named in the "INTERFACE_INCLUDE_DIRECTORIES" target
property of imported targets linked conditionally by a "generator
expression" were not checked for existence. Now they are checked.
See policy "CMP0027".
* Build target names must now match a validity pattern and may no
longer conflict with CMake-defined targets. See policy "CMP0037".
* Build targets that specify themselves as a link dependency were
silently accepted but are now diagnosed. See "CMP0038".
* The "target_link_libraries()" command used to silently ignore
calls specifying as their first argument build targets created by
"add_custom_target()" but now diagnoses this mistake. See policy
"CMP0039".
* The "add_custom_command()" command used to silently ignore calls
specifying the "TARGET" option with a non-existent target but now
diagnoses this mistake. See policy "CMP0040".
* Relative paths in the "INTERFACE_INCLUDE_DIRECTORIES" target
property used to be silently accepted if they contained a "generator
expression" but are now rejected. See policy "CMP0041".
* The "get_target_property()" command learned to reject calls
specifying a non-existent target. See policy "CMP0045".
* The "add_dependencies()" command learned to reject calls
specifying a dependency on a non-existent target. See policy
"CMP0046".
* Link dependency analysis learned to assume names containing "::"
refer to *Alias Targets* or *Imported Targets*. It will now produce
an error if such a linked target is missing. Previously in this
case CMake generated a link line that failed at build time. See
policy "CMP0028".
* When the "project()" or "enable_language()" commands initialize
support for a language, it is now an error if the full path to the
compiler cannot be found and stored in the corresponding
"CMAKE_<LANG>_COMPILER" variable. This produces nicer error
messages up front and stops processing when no working compiler is
known to be available.
* Target sources specified with the "add_library()" or
"add_executable()" command learned to reject items which require an
undocumented extra layer of variable expansion. See policy
"CMP0049".
* Use of "add_custom_command()" undocumented "SOURCE" signatures now
results in an error. See policy "CMP0050".
Deprecated and Removed Features
===============================
* Compatibility options supporting code written for CMake versions
prior to 2.4 have been removed.
* Several long-outdated commands that should no longer be called
have been disallowed in new code by policies:
* Policy "CMP0029" disallows "subdir_depends()"
* Policy "CMP0030" disallows "use_mangled_mesa()"
* Policy "CMP0031" disallows "load_command()"
* Policy "CMP0032" disallows "output_required_files()"
* Policy "CMP0033" disallows "export_library_dependencies()"
* Policy "CMP0034" disallows "utility_source()"
* Policy "CMP0035" disallows "variable_requires()"
* Policy "CMP0036" disallows "build_name()"
* The "cmake(1)" "-i" wizard mode has been removed. Instead use an
interactive dialog such as "ccmake(1)" or use the "-D" option to set
cache values from the command line.
* The builtin documentation formatters that supported command-line
options such as "--help-man" and "--help-html" have been removed in
favor of the above-mentioned new documentation system. These and
other command-line options that used to generate man- and
html-formatted pages no longer work. The "cmake(1)"
"--help-custom-modules" option now produces a warning at runtime
and generates a minimal document that reports the limitation.
* The "COMPILE_DEFINITIONS_<CONFIG>" directory properties and the
"COMPILE_DEFINITIONS_<CONFIG>" target properties have been
deprecated. Instead set the corresponding "COMPILE_DEFINITIONS"
directory property or "COMPILE_DEFINITIONS" target property and use
"generator expressions" like "$<CONFIG:...>" to specify per-
configuration definitions. See policy "CMP0043".
* The "LOCATION" target property should no longer be read from non-
IMPORTED targets. It does not make sense in multi-configuration
generators since the build configuration is not known while
configuring the project. It has been superseded by the
"$<TARGET_FILE>" generator expression. See policy "CMP0026".
* The "COMPILE_FLAGS" target property is now documented as
deprecated, though no warning is issued. Use the "COMPILE_OPTIONS"
target property or the "target_compile_options()" command instead.
* The "GenerateExportHeader" module "add_compiler_export_flags"
function is now deprecated. It has been superseded by the
"<LANG>_VISIBILITY_PRESET" and "VISIBILITY_INLINES_HIDDEN" target
properties.
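A sketch of the "COMPILE_DEFINITIONS" migration described above (the
target and definition names are invented):

  # Deprecated: the per-configuration property
  #   set_property(TARGET app APPEND PROPERTY
  #     COMPILE_DEFINITIONS_DEBUG APP_DEBUG)
  # Preferred: a generator expression in the plain property
  set_property(TARGET app APPEND PROPERTY
    COMPILE_DEFINITIONS $<$<CONFIG:Debug>:APP_DEBUG>)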
Other Changes
=============
* The version scheme was changed to use only two components for the
feature level instead of three. The third component will now be
used for bug-fix releases or the date of development versions. See
the "CMAKE_VERSION" variable documentation for details.
* The default install locations of CMake itself on Windows and OS X
no longer contain the CMake version number. This allows for easy
replacement without re-generating local build trees manually.
* Generators for Visual Studio 10 (2010) and later were renamed to
include the product year like generators for older VS versions:
* "Visual Studio 10" -> "Visual Studio 10 2010"
* "Visual Studio 11" -> "Visual Studio 11 2012"
* "Visual Studio 12" -> "Visual Studio 12 2013"
This clarifies which generator goes with each Visual Studio version.
The old names are recognized for compatibility.
* The "CMAKE_<LANG>_COMPILER_ID" value for Apple-provided Clang is
now "AppleClang". It must be distinct from upstream Clang because
the version numbers differ. See policy "CMP0025".
* The "CMAKE_<LANG>_COMPILER_ID" value for "qcc" on QNX is now
"QCC". It must be distinct from "GNU" because the command-line
options differ. See policy "CMP0047".
* On 64-bit OS X the "CMAKE_HOST_SYSTEM_PROCESSOR" value is now
correctly detected as "x86_64" instead of "i386".
* On OS X, CMake learned to enable behavior specified by the
"MACOSX_RPATH" target property by default. This activates use of
"@rpath" for runtime shared library searches. See policy "CMP0042".
* The "build_command()" command now returns a "cmake(1)" "--build"
command line instead of a direct invocation of the native build
tool. When using "Visual Studio" generators, CMake and CTest no
longer require "CMAKE_MAKE_PROGRAM" to be located up front.
Selection of the proper msbuild or devenv tool is now performed as
late as possible when the solution (".sln") file is available so it
can depend on project content.
* The "cmake(1)" "--build" command now shares its own stdout and
stderr pipes with the native build tool by default. The "--use-
stderr" option that once activated this is now ignored.
* The "$<C_COMPILER_ID:...>" and "$<CXX_COMPILER_ID:...>" "generator
expressions" used to perform case-insensitive comparison but have
now been corrected to perform case-sensitive comparison. See policy
"CMP0044".
* The builtin "edit_cache" target will no longer select "ccmake(1)"
by default when no interactive terminal will be available (e.g. with
"Ninja" or an IDE generator). Instead "cmake-gui(1)" will be
preferred if available.
* The "ExternalProject" download step learned to re-attempt download
in certain cases to be more robust to temporary network failure.
* The "FeatureSummary" no longer lists transitive dependencies since
they were not directly requested by the current project.
* The "cmake-mode.el" major Emacs editing mode has been cleaned up
and enhanced in several ways.
* Include directories specified in the
"INTERFACE_INCLUDE_DIRECTORIES" of *Imported Targets* are treated as
"SYSTEM" includes by default when handled as *usage requirements*. [Less]
Posted almost 11 years ago by Joe Snyder
Testing software with graphical user interfaces (GUIs) can be more challenging than testing command-line software, because GUIs require a mouse or other human interface system to drive them. Testing a GUI involves being able to record human interactions and play them back later. There are a variety of tools available for GUI testing, but many are constrained to a particular platform, GUI toolkit, or development language. There are also many commercial GUI testing tools. In this blog we will look at an open source solution for GUI testing called Sikuli. In particular, we will look at how it can be driven from CTest and used to populate a CDash dashboard. The example project we will present tests a VistA GUI written in Delphi for OSEHRA.
About Sikuli
Sikuli is an open source cross-platform project released under the MIT License, that started as a project in the "User Interface Design Group" at MIT. It is a cross-platform GUI testing/automation tool that uses OpenCV to search the computer's screen for anchor images and Jython to control the mouse and keyboard to interact with what it finds. This allows greater freedom when testing as you can interact with any GUI that can be seen on a computer screen.
For a more detailed look at the workings of Sikuli look at the design documentation.
Sikuli's source code is available on Github:
IDE: https://github.com/RaiMan/SikuliX-IDE
API: https://github.com/RaiMan/SikuliX-API
A typical Sikuli "script" is a folder with ".sikuli" in its name, created by the Sikuli IDE. The folder usually contains a few common pieces:
A Python file containing commands, used when running the script
An HTML file that mirrors the Python file, used for display in the IDE
A series of PNG files used as arguments for the commands in the OpenCV searching.
The Sikuli website has many demonstration videos with example scripts created with the IDE.
The screenshot below is the OSEHRA Sikuli script in the Sikuli IDE. It shows the commands that will be run, dragDrop or doubleClick, and the image arguments which go with those commands. The wait_click is a user-created function to reduce the number of commands needed. The function does what the name describes: it waits for each image to appear on screen before clicking on it.
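The post does not show the body of wait_click, but a plausible sketch using Sikuli's built-in wait, click, and type functions (the default timeout here is an assumption) would be:

def wait_click(image, timeout=10):
    # Wait for the image to appear on screen, then click it.
    wait(image, timeout)
    click(image)

def wait_type(image, text, timeout=10):
    # Wait for the image, click it, then type into the focused widget.
    wait_click(image, timeout)
    type(text)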
To see Sikuli in action watch this video:
The video shows the two run mode options that are available from the IDE. The “slow-motion” mode of Sikuli flashes red circles around the mouse to denote that a matching object was found on the screen. Slow-motion mode is excellent for initial object finding, but quickly becomes a hassle when using right-click: the circle overlay acts as a new window, which closes a right-click menu, removing the found object and stopping the script. At 0:30, the video transitions to show the regular-speed actions of Sikuli.
These steps utilize the x1.0-rc3 version of Sikuli.
The downsides to Sikuli
The use of screen captures as the input for finding components can lead to some trouble. Differences in window styling on different platforms or a change in monitor resolution can lead to objects not being found. Sikuli's keyboard and mouse control during a test run also requires that the computer not be in use while tests are running. This is fine for dedicated test machines and nightly test runs; for testing during development, however, it can be disruptive, since the developer cannot multi-task during the test runs.
Sikuli integration with CTest/CDash
To integrate Sikuli with CTest for automated testing, we will utilize Sikuli's command line signature to run scripts. We will demonstrate the integration of a Sikuli test into a project, with the code needed in the CMakeLists.txt and in the Python file of the Sikuli test. It will cover a few major points:
Necessary steps in a CMakeLists.txt file
Code in Sikuli's Python file
Running and reviewing tests
The following blocks show the required code.
Key:
Commands found in CMakeLists.txt
Commands found in the Sikuli Python script
Capture of the command line
In CMakeLists.txt
First, use a find_program command to locate the sikuli-IDE file to run and, if necessary, find the path to any executables that are going to be tested.
find_program(SIKULI_EXECUTABLE sikuli-IDE.bat sikuli-IDE.sh DOC "Path to the sikuli-IDE file" HINTS "C:/Program Files/Sikuli X/" "C:/Program Files (x86)/Sikuli X/" "/usr/local/")
Next, configure the Python and HTML files to account for local paths to executables or other platform specific information:
configure_file("${sikuli}/${sikuli_name}.py.in" ${CMAKE_CURRENT_BINARY_DIR}/Sikuli/${sikuli_name}.sikuli/${sikuli_name}.py")
configure_file("${sikuli}/${sikuli_name}.html.in" ${CMAKE_CURRENT_BINARY_DIR}/Sikuli/${sikuli_name}.sikuli/${sikuli_name}.html")
The test command is simple to craft and will use two arguments from Sikuli's command line signature
-s
redirects error messages to stderr instead of a pop-up window
-r <path_to_folder>.sikuli
Runs a Sikuli script. The path that is passed should be to the .sikuli folder that contains the Python file.
The OSEHRA Test command, which passes the path to the folder that contains the configured file as the value of the '-r' argument, looks as follows:
add_test(FT_${sikuli_name}Sikuli "${SIKULI_EXECUTABLE}" -s -r "${CMAKE_CURRENT_BINARY_DIR}/Sikuli/${sikuli_name}.sikuli")
By default, Sikuli does not return 0 for success and non-zero for failure as CTest expects. This requires the user to explicitly return 0 for passing and non-zero on failure in the Python script. Alternatively, you can print passing and failing messages from the script and use the CTest PASS_REGULAR_EXPRESSION property to determine the passing and failing states.
set_tests_properties(FT_${sikuli_name}Sikuli PROPERTIES PASS_REGULAR_EXPRESSION "Script Completed")
In the Sikuli Python file
An example of the Python file to be configured looks something like the following excerpt, taken from the OSEHRA testing files. Here we can see the configuration ability being used to enter the path to a local executable and set up the connection arguments that it requires.
import sys
addImagePath("${sikuli}")
wait(30)
if exists("1326480962420.png"):
doubleClick("1326480962420.png")
else:
openApp(r'${VITALS_MANAGER_EXECUTABLE} /port=${VISTA_TCP_PORT} /server=${VISTA_TCP_HOST} /ccow=disable')
wait_type("AccessCcndc.png","fakedoc1")
wait_type("VerifyCcndc.png","1Doc!@#$")
wait_click("1320870115117.png")
wait(Pattern("Templates1Va.png").targetOffset(0,-14),10)
wait_click(Pattern("Templates1Va.png").targetOffset(0,-14))
...
There is one important command to be used in scenarios where the test file needs to be configured. If the test file is configured and placed in a different location from the original file, it is recommended that you use an addImagePath command to specify the folder location of the source, as seen on the second line of the code fragment above. This command adds the supplied path to the set of paths that are used to find the images used in the file's commands.
To ensure that the test's result is captured correctly, the Sikuli script can either print a message to be matched by CTest's PASS_REGULAR_EXPRESSION test property, which is set in the CMakeLists.txt file:
# type("Exit")
print "Script Completed"
Or, an explicit return value can be returned:
import sys
sys.exit(0)
Running the test & examining results
After the CMake configure, generate, and build, the test can now be run like any other CTest test:
$ ctest -R FT_FunctionalTestingSikuli -V
UpdateCTestConfiguration from :C:/Users/joe.snyder/Work/OSEHRA/VistA-build/DartConfiguration.tcl
Parse Config file:C:/Users/joe.snyder/Work/OSEHRA/VistA-build/DartConfiguration.tcl
Add coverage exclude regular expressions.
UpdateCTestConfiguration from :C:/Users/joe.snyder/Work/OSEHRA/VistA-build/DartConfiguration.tcl
Parse Config file:C:/Users/joe.snyder/Work/OSEHRA/VistA-build/DartConfiguration.tcl
Test project C:/Users/joe.snyder/Work/OSEHRA/VistA-build
Constructing a list of tests
Done constructing a list of tests
Checking test dependency graph...
Checking test dependency graph end
test 150
Start 150: FT_FunctionalTestingSikuli
150: Test command: "C:\Program Files (x86)\Sikuli X\Sikuli-IDE.bat" "-s" "-r" "C:/Users/joe.snyder/Work/OSEHRA/VistA-build/Testing/Functional/Sikuli/FunctionalTesting.sikuli"
150: Test timeout computed to be: 1500
150: [info] Sikuli vision engine loaded.
150: [info] Windows utilities loaded.
150: [info] VDictProxy loaded.
150: [log] App.open C:/Program Files (x86)/Vista/Vitals/VitalsManager.exe /port=9210 /server=127.0.0.1 /ccow=disable(9352)
150: [log] CLICK on (964,616)
150: [log] TYPE "fakedoc1"
150: [log] CLICK on (966,656)
150: [log] TYPE "1Doc!@#$"
150: [log] CLICK on (1126,617)
150: [log] CLICK on (720,226)
150: [log] CLICK on (747,247)
<snip>
Sikuli does an excellent job of logging the actions that it performs through the stdout pipe. Typed strings are captured in the log, and the click functions log the screen coordinates where each click was performed. The logging is also captured well in the dashboard testing display:
A successful test as displayed in CDash:
A failed run:
Conclusion
Sikuli is a very powerful open-source tool that allows testing on any GUI that can be seen on a user's screen by searching the screen for a section that matches a supplied screenshot. It integrates well into a CMake/CTest environment thanks to its command-line capabilities. Sikuli records the actions that it performs by keeping the screen coordinates of clicks and the strings sent to the type function during the course of a run. This logging displays the output of the test both on the command line and in the dashboard's output display. Sikuli isn't without its shortcomings. For example, screen resolution changes can cause it to fail, and the anchor images of the components need to be carefully maintained when GUI components change and the tests are updated. Sikuli's control over the mouse and keyboard can prove disruptive when testing while attempting to do other work, but for overnight submissions or dedicated test machines this is not a problem.
Posted almost 11 years ago by Luis Ibanez
The Fifth Hackathon for Rare Diseases will take place on Saturday February 22nd at the offices of Zone 5 in downtown Albany.
This is a follow up of the Fourth Hackathon for Rare Diseases that took place on December 7th.
The goal of the Hackathon is to continue implementing the prototype of a web-based platform for facilitating the information management of members of the Rare Diseases community.
A first pass at the prototype is currently available here in Github, under the Apache 2.0 License.
Why Rare Diseases ?
Rare diseases are defined as those that afflict populations of fewer than 200,000 patients, or about 1 in 1,500 people.
There are, however, about 7,000 rare diseases.
The patients affected by them, and their families, struggle due to the lack of information and general knowledge about the nature and treatment of these afflictions.
It takes on average 7.5 years for a patient to get a correct diagnosis for a rare disease, after having seen an average of eight doctors.
By then, these patients have been treated for a variety of incorrect diagnoses and have missed the proper treatment for their case.
Most rare diseases are genetic, and thus are present throughout the person's entire life, even if symptoms do not immediately appear. Many rare diseases appear early in life, and about 30 percent of children with rare diseases will die before reaching their fifth birthday.
The Hackathon event is coordinated in collaboration with Ed Fennell, who is driving the Forum on Rare Diseases at the Albany Medical Center.
This year's Forum on Rare Diseases will take place on February 26th (four days after the hackathon).
Here is a recent talk by Ed Fennell at the Rensselaer Center for Open Source.
Ed Fennell also delivered a talk raising awareness about Rare Diseases at TEDxAlbany on November 14th.
Logistics
The Hackathon will take place
Saturday February 22nd
From 10:00am to 5:00pm
Zone 5 offices. Map here.
Thanks to Zone 5 for kindly hosting the event.
Refreshments will be included.
Mentors will include: Kitware developers, SUNY Albany Faculty, SUNY Albany Students, SUNY Albany ASIS&T (Association for Information Science & Technology) Student Chapter, RPI Students from the Rensselaer Center for Open Source (RCOS), Skidmore College GIS Center for Interdisciplinary Research staff/students.
Software developed during the Hackathon will be uploaded to the Emily and Haley organization on GitHub.
The event is open to ALL.
If you are in the Albany area, join us to apply Open Source to things that Matter!
To register, go to this link: http://goo.gl/DfkXEw
|
Posted
almost 11 years
ago
by
Dave DeMarle
It has been a busy start to 2014, the twentieth year of VTK's existence. To begin with, we've just released VTK 6.1.0. Besides bug fixes, this release brings cool new capabilities including visualization on the web, fine-grained parallelism in multithreaded and GPU-accelerated contexts, a zero-copy interface for populating VTK arrays, and the return of easy-to-get-started-with binary distributions. This is the first release since the big restructuring that went on with 6.0. For those who aren't ready to jump on board with VTK 6.x just yet, we've begun to merge patches into the release-5.10 branch in the repository that extend the lifetime, if not the functionality, of the old code base a little while longer while we all get comfortable with 6.x.
VTK has thrived for as long as it has in part because of the developers' shared attention to the often mundane software engineering chores that go on behind the scenes of a large software project. Take regression test coverage, for example. Every day, tens of volunteer machines run up to two thousand regression tests on VTK. Two thousand-odd tests sounds like a lot, but when you consider that VTK has roughly 2.1 million lines of code, those tests could mean very little. One good heuristic for judging the quality of the regression suite is to measure the number of lines executed and skipped during a run of the test suite.
We currently measure that on two dashboard machines that compile VTK with "-fprofile-arcs -ftest-coverage" flags. These flags cause the regression tests to make a note of the specific lines in the library that are executed as they run. Afterward we run gcov to analyze the reports and then submit summaries to CDash. A Windows machine with Bullseye coverage will join them soon. From these reports we can tell that our tests exercise a bit better than 60% of VTK, which is quite good considering the size of the toolkit.
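As a rough illustration, such a coverage submission can be driven by a CTest dashboard script along these lines. This is a minimal sketch, not our actual dashboard configuration: the site, build name, and directories are placeholders, and only the instrumentation flags come from the description above.

# Minimal CTest dashboard script for a gcov coverage build.
# The site, build name, and directories below are placeholders.
set(CTEST_SITE             "my.coverage.machine")
set(CTEST_BUILD_NAME       "Linux-gcc-coverage")
set(CTEST_SOURCE_DIRECTORY "$ENV{HOME}/VTK")
set(CTEST_BINARY_DIRECTORY "$ENV{HOME}/VTK-coverage")
set(CTEST_CMAKE_GENERATOR  "Unix Makefiles")
set(CTEST_COVERAGE_COMMAND "gcov")
# Instrument the build with the flags mentioned above.
set(cfg_options
  "-DCMAKE_C_FLAGS=-fprofile-arcs -ftest-coverage"
  "-DCMAKE_CXX_FLAGS=-fprofile-arcs -ftest-coverage")
ctest_start(Experimental)
ctest_configure(OPTIONS "${cfg_options}")
ctest_build()
ctest_test()
ctest_coverage()   # run gcov over the profile data and tally the lines hit
ctest_submit()     # post the coverage summary to CDash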
Still, we want to know that VTK never acts poorly, and we can get closer to knowing that if we hit the missing 40% in our tests. So on January 28th, we gathered as many VTK developers together as we could with an offer of free coffee and doughnuts and attacked exactly this problem. Guided by reports assembled by Bill Lorensen, we each picked different poorly tested parts of VTK and hacked away for the day. In some cases we removed or deprecated underused code; in others we wrote new tests to prove that well-used parts of the library work today and will continue to work as VTK evolves.
Over the day we were able to increase the coverage from 67.94% to 70.47%. That isn't bad, and coincidentally it was just high enough to pass the arbitrarily chosen threshold configured into CDash, giving us the first green coverage report in longer than I can remember. We've been doing well since the hackathon too: the metric is up almost half a point since then, and we hope to keep the momentum going. In particular, we will be more strict about enforcing rule #31 in the VTK software process guide, which says that all new code must include regression tests.
|
Posted
about 11 years
ago
by
Robert Maynard
Some problems were reported with the 2.8.12 release. Thanks to the work of Brad King, Robert Maynard, Rolf Eike Beer, Ruslan Baratov, and Ted Kremenek those problems have been fixed. We've prepared a 2.8.12.2 bug fix release to address those issues.
Some of the notable changes in this patch release are:
Xcode: Fix compiler line matching for Xcode 5.1
Visual Studio: Convert include path to backslashes for Visual Studio 2010 and newer
FindOpenMP: Support compilers that do not need any special flags
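As a hedged illustration of the FindOpenMP item above (this is generic module usage, not code from the release itself):

# Typical FindOpenMP usage. With 2.8.12.2 this also succeeds for
# compilers that need no special flag; the flag variables are then empty.
find_package(OpenMP)
if(OPENMP_FOUND)
  set(CMAKE_C_FLAGS   "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
  set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
endif()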
The complete list of changes in 2.8.12.2 can be found at:
http://www.cmake.org/Wiki/CMake/ChangeLog
|