News

Posted almost 12 years ago by Luis Ibanez
" One million of the best jobs in America may go unfilled...because only 1-in-10 schools teach students how to code." http://www.code.org/ Some memorable quotes: "Everybody in this country should learn how to program a computer... because it ... [More] teaches you how to think."  -Steve Jobs  "You don't have to be a genius to know how to program. You just need to be determined." "To be able to come up with an idea, and see it in your hands, and then be able to press a button and have millions of people to look at it,... I think we are the first generation to have that kind of experience." "It is the closest thing we have to a Superpower."  "Learning how to program doesn't start as wanting to learn all of computer science or trying to master this discipline... it start of because I wanted to something that was fun..." "The whole limit in the system is that there are just not enough people trained in the field today."- Mark Zuckerberg [Less]
Posted almost 12 years ago by Marcus Hanwell
I wrote about Code Review, Topic Branches and VTK in the April issue of the Source last year. Since then, our Gerrit review server has seen over 2,000 topic submissions for review. I haven't crunched the numbers, but have dealt with many corner cases and issues along the way. Many topics have been one commit long, and we have had some absolutely enormous topics representing many months of work too. There have also been several that are somewhere in the middle, even with multiple authors. The high-level view of the software process around Gerrit code review is shown below, and remains largely unchanged since we added it.

The Good Ol' Days

Now seems like a good time to take stock of what has worked and what has not worked so well, and offer some words of advice. First, let's consider the motivation behind using topic branches. In the good old days of CVS (you remember CVS, don't you?), we would work away on a fix, feature, or rewrite and test it out locally. Once we thought it was ready we would commit it; it seems strange to me to think of it, but by commit I mean we would type cvs commit and add a message that would be immediately pushed to the tip of trunk after we pressed enter. We tried to commit before noon so we could watch the continuous dashboard submissions and fix anything that went wrong. If someone else had committed something we would curse under our breath, do a cvs up, and resolve any conflicts that occurred.

Topic Branches

Nowadays, I will work on several topic branches at any given time - often switching between them, occasionally rebasing them, and saving my progress as I go. If it is a simple topic I normally work alone, and every time I want to save my progress, I will use 'git add file/i/altered' and 'git commit --amend' to add any new changes to the existing (and often only) commit in a topic. For larger topics, once I feel that an atomic set of changes implements a given feature or fix, I will add a commit with an appropriate description. I especially work to separate features from bug fixes and style changes, as each is quite different. In general I will try to keep these things in separate topics, but sometimes it makes sense to combine them. We have had several debates on the list about this, but after reviewing many topics I also avoid committing the minutiae of my development, such as the wild goose chase I went on when trying to add a new feature or the class I added and later renamed. I will often squash these away into a single commit, or amend the commit as I go. The primary reasoning for this is that I want someone to review my commits, and I don't expect them to follow every wrong turn I took. As a secondary reason, I am thinking about the poor guy trying to figure out why a line was added in a year or two (probably me) - I would like to give my future self a nice, succinct, high-level overview of what I thought I was doing rather than picking through several messages. I also try to format a very short first line with a clue as to the purpose (like an email subject) and then a more detailed body formatted as paragraphs if that is required. We provide quite a few macros to make things simpler. Once I think my topic is ready for review, I will use 'git prepush' to check I am pushing what I thought I was, and then 'git gerrit-push' to push the topic to the review server for review. I will often then quickly click through the commits and look at the diffs to see if they show anything I had missed before adding some reviewers.
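As a rough sketch of the single-commit, amend-as-you-go workflow just described (the branch and file names are hypothetical, and 'git prepush' and 'git gerrit-push' are the project-provided aliases mentioned above, not built-in Git commands):

# create a topic branch for the fix
git checkout -b fix-widget-crash origin/master

# first version of the change
git add Rendering/vtkSomeWidget.cxx      # hypothetical file I altered
git commit -m "Fix crash when the widget is resized"

# later edits get folded into the same, single commit
git add Rendering/vtkSomeWidget.cxx
git commit --amend --no-edit

# check what is about to be pushed, then send the topic for review
git prepush
git gerrit-push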
I usually try to choose reviewers who I know work on that code (git log with paths often helps here), and I will examine the output from any automated checks (the robots) and the CDash@Home builds. This has allowed me to become amazingly lazy, often only building quite large changes on Linux and then relying on the CDash@Home builds to point out any cross-platform issues that I may have missed. I am not a fan of very large topics, especially if I see no logical justification in the context of code review or later code excavation expeditions.

Code Reviews

When performing a code review, I have to admit to being a little undecided on the best course of action. Initially I went for the Gerrit default of editing commits, but now I tend to favor appending commits to a topic. If a reviewer points out something trivial, I will often just edit the commit; if it is something less trivial, I will append a commit. If the topic is small, or it is the final commit, I usually just edit it, but with larger topics I almost always append a commit to avoid any need to review all of the commits again. When reviewing changes where there are lots of issues, I will also sometimes just make the changes in a follow-up commit, and push that straight to the topic. My motivation for this is that it takes about the same amount of time as asking, and the developer who submitted the code can see what I changed (and point out if I broke anything). If I am less certain, or it is just one or two things, I generally make inline comments and try to sum up any thoughts in the review (or just point them to inline comments). I really like the ability to make collaborative topics where we can fix up one another's commits, and I have learned quite a lot in some code reviews (and avoided merging subtle bugs in some cases too).

What Doesn't Work

We have found that long-lived topics never really work. If you have more than 10 commits in a topic, or more than a few thousand lines of changes, it is usually nearly impossible for a mere human to review your changes. You can still benefit from automated checks and the CDash@Home builds, but the code review aspect tends to be superficial at best. We have found a much better model, even if the branch can't land right away in master, is to push a topic and block it using the "Do not submit" option in code review at the topic level (or marking a commit in the topic as WIP at the start of the first line of the commit message). You can then have others review your code as you go and see automatic check results and CDash@Home builds as you go. This means that mistakes in coding style, memory allocation, API, etc. can all be spotted earlier (saving time later in correcting more mistakes as they pile up), and your code will likely be reviewed. Topics also present the possibility of reviewing both individual commits, where you can express if something is good or bad, as well as the overall topic. Commits can be dropped from topics entirely if they are deemed unsuitable, for example. Sometimes topics can languish and be forgotten, and a quick email to the development list with a link to the topic can certainly help here. The notifications and active topics also need some work in Gerrit, so it is possible that people will miss the requests for review.
Without a green CDash@Home build on all platforms it is also very difficult to know if a topic introduces regressions, and going forward, getting to and keeping a green dashboard must be a priority for any project wishing to use topic reviews in conjunction with CDash@Home. There are also times when reviews are ignored by the submitter, or they choose not to act on them.

Conflicts

As with any project, there can occasionally be conflict... Try to keep reviews technical, and explain why something shouldn't be done in a particular way. Also, try not to take reviews personally, as we all have the goal of producing valid code with a consistent API and style. There are also merge conflicts once you try to submit a topic - these are much easier to handle in Gerrit now, as it features great merge commit review! Simply check out the tip of the topic, do a 'git fetch origin' and then a 'git merge origin/master', resolve any conflicts, and then 'git gerrit-push' (a sketch of this sequence appears at the end of this post). The reviewer can then see the conflicts along with how you chose to resolve them, and approve that commit. This also has the enormous advantage of retaining all previously reviewed commits, with no ambiguity over whether they are the same as what was reviewed.

Conclusions

We know that there are still some rough edges, and Gerrit could certainly do with some improvements to its interface, but I think that Gerrit has had a positive impact on development in the projects I have used it on. It allows me to develop code I think will work, have it tested before it is merged, and get one or two more sets of eyes on the code. I have witnessed some nasty errors that would have been hard to track down later get spotted, although things certainly still slip through from time to time. Hooking up CDash@Home to Gerrit could certainly help us stick to that dream of green, and is something I have been speaking to the CDash developers about. We are also looking at the best way to maintain topic support in Gerrit, after some problems upstreaming the feature. These are my thoughts after using Gerrit in the VTK and Open Chemistry projects as both a code submitter and reviewer. I would welcome your feedback. We have been tracking feedback since introducing this feature in VTK, and have a list of improvements (if we ever find the time to work on them) to the process. We want to ensure that it is kept as light as possible, and I try to remind reviewers that while we want them to review code, it does not have to take hours, and we can still fix things that slip by with follow-up commits. A lot of work has gone into improving our process, and we continue to think about how things could be made better.
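As a footnote to the Conflicts section above, here is a minimal sketch of resolving a merge conflict on a submitted topic (the topic name is hypothetical, and 'git gerrit-push' is again the project-provided alias):

# update the topic against master and record the merge for review
git checkout my-topic            # the tip of the topic under review
git fetch origin
git merge origin/master          # resolve any conflicts, 'git add' the resolved files, then commit
git gerrit-push                  # the merge commit is pushed and can be reviewed and approved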
Posted almost 12 years ago by Will Schroeder
There are many reasons why Open Science is a good thing. For some it's a moral argument that stresses sharing the results of (usually publicly funded) scientific research with society, preventing fraud through transparency, and benefiting teaching through the use of open materials. Others see the growing complexity and challenges of science as demanding collaboration, so that larger teams with their wider expertise can be brought to bear. Clearly there are personal benefits too; as Steve Lawrence has shown, there is a correlation between sharing the results of research and the number of paper citations. Many innovators and entrepreneurs are also fond of Open Science because sharing technology can accelerate the innovation process and empower small businesses by reducing intellectual rights barriers. And there are a lot of us that just like to have fun--the communities and relationships that form in an open environment make the hard work of science that much more enjoyable. While I agree with all of these sentiments, they miss the critical issue: reproducibility.

It was for good reason that the Royal Society was formed in 1660 with the militant motto "Nullius in verba", rendered in English as "take nobody's word for it." Once the scientific method was formalized and practiced by these and other pioneers, we began to benefit from the power of science. The early scientists (actually Natural Philosophers) realized that an understanding of physical reality was based on the practice of objectively performing "experiments" and repeatedly reproducing the same results. Only then could something be called truth, and incorporated into our foundational knowledge base. I sometimes wonder whether the early scientists were responding to a time of superstition; I can't help but think of the rather humorous comedy skit from Monty Python. In the famous "How to Tell a Witch", the reasoning behind the determination as to whether an accused woman is a witch is something to behold, and ends up testing whether the purported witch weighs the same as a duck. It's easy to see how superstition and emotion can produce a faulty decision about physical reality; yet using erroneous facts produces similar results. While hilarious when Monty Python plays it, unfortunately there is anecdotal evidence that for many people such faulty "reasoning" processes are alive and well. If you think scientists are immune, consider the recent study that found that 90% of published papers in preclinical cancer research describe work that is not reproducible, and therefore wrong. Such people never learned, or forgot, about the importance of reproducibility, with the result that we are developing therapies and pharmaceuticals on shifting ground.

I don't think that many of us in the open source world became involved thinking that we were following in the footsteps of hallowed scientists. For most of us, the reasons articulated in the first paragraph were enough. Yet very quickly we realized that without reproducible results, i.e., testing, we were doomed to build on unstable foundations. So in the end the essential requirement that our software be built on "truth" demanded that we test constantly to ensure that our results were repeatable no matter what happened to the underlying platform, data, and algorithms. It is for this very reason that we are proponents of Open Access journals like the Insight Journal, and use tools like CMake, CTest and CDash at the heart of our software development process.
In this way, when an experiment (test) fails in the forest we hear it, and take the necessary steps to maintain the integrity of our foundational reality (and to the benefit of our users who build on it). Once the imperative of reproducibility is accepted, all of the other open practices follow. Without Open Access publications to describe and explain the experimental process, Open Data to provide controlled input, and Open Source to rerun computations and analysis, it is not possible to reliably reproduce experiments. So to my way of thinking, if you are a technologist then there is no choice but to practice Open Science. Anything else is tantamount to arguing that a witch weighs the same as a duck.
Posted about 12 years ago by Bill Hoffman
CMake: Building with all your cores

As a distance runner, it is important to run using a fully engaged core. This allows for the most efficient means of moving towards my running goals. Software developers are equally motivated to use as many of their "cores" as possible to build software. OK, I admit this is a bit of a lame analogy, but I don't think you would find too many developers who are not interested in building software as fast as possible using all of the horsepower available on the hardware they are using. The CMake build system and its developers have always been aware of how important parallel builds are, and have made sure that CMake could take advantage of them when possible. Since CMake is a meta build tool that does not directly build software, but rather generates build files for other tools, the approaches to parallel building differ from generator to generator and platform to platform. In this blog, I will cover the approaches for parallel builds on the major platforms and tool chains supported by CMake.

First, some terms:

Target Level Parallelism - This is when a build system builds high-level targets at the same time. High-level targets are things like libraries and executables.

Object Level Parallelism - This is when a build system builds individual object files at the same time. Basically, it invokes the compiler command line for independent objects at the same time.

CMake generator - A CMake generator is a target build tool for CMake. It is specified either in the cmake-gui or with the -G command line option to cmake.

I will start with Linux, followed by Apple OSX, and finish up with Windows.

Linux: GNU Make

The traditional gmake tool, which is usually installed as "make" on Linux systems, can run parallel builds. It is used by CMake's "Unix Makefiles" generator. To have parallel builds with gmake, you need to run gmake with the -jN command line option. The flag tells make to build in parallel. The N argument is used to specify how many jobs are run in parallel during the build. For minimum build times, you want to use a value of N that is one more than the number of cores on the machine. So, if you have a quad core Linux machine, you would run make -j5. Here is an example:

# assume your source code is in a directory called src and you are one directory up from there
mkdir build
cd build
cmake -G "Unix Makefiles" ../src
make -j5

ninja

Some developers at Google recently created a new build tool called ninja. This is a replacement for the GNU make tool. ninja was created to run faster than make and, of course, to run parallel builds very well. Fortunately, CMake now has a ninja generator so that your project can take advantage of this new tool. Unfortunately, if you are using CMake to build Fortran 95 or greater code that makes use of Fortran modules, you will have to stick to GNU make. The ninja support for Fortran dependency information is not yet implemented in CMake (if you are interested in this, please send me an email). If your project does not include Fortran code, then ninja might be a good tool for you to try. ninja is very quick to figure out that it has nothing to do, which is important for incremental builds of large projects. To use ninja you will need to first build ninja from source. The source for ninja can be found here: git://github.com/martine/ninja.git. You will need python and a C++ compiler to build ninja. There is a README in the top of the ninja source tree that explains how to build it. Basically, you just run python bootstrap.py.
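As a concrete sketch of those download-and-bootstrap steps (assuming git and python are installed; the checkout location is arbitrary):

# fetch and bootstrap ninja (one-time setup)
cd ~/src
git clone git://github.com/martine/ninja.git
cd ninja
python bootstrap.py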
This will produce a ninja executable. Once it is built, you will need to put ninja in your PATH so CMake can find it. ninja does not require a -j flag like GNU make to perform a parallel build. It defaults to building cores + 2 jobs at once (thanks to Matthew Woehlke for pointing out that it is not simply 10 as I had originally stated). It does, however, accept a -j flag with the same syntax as GNU make, -j N, where N is the number of jobs run in parallel. For more information run ninja --help with the ninja you have built. Once you have ninja built and installed in your PATH, you are ready to run cmake. Here is an example:

# assume your source code is in a directory called src and you are one directory up from there
mkdir build
cd build
cmake -GNinja ../src
ninja

Mac OSX

Mac OSX is almost the same as Linux, and both GNU make and ninja can be used by following the instructions in the Linux section. Apple also provides an IDE build tool called Xcode. Xcode performs parallel builds by default. To use Xcode, you will obviously have to have Xcode installed. You run cmake with the Xcode generator. Here is an example:

# assume your source code is in a directory called src and you are one directory up from there
mkdir build
cd build
cmake -GXcode ../src
# start the Xcode IDE, load the project CMake creates, and build from the IDE
# or you can build from the command line like this:
cmake --build . --config Debug

Note, cmake --build can be used for any of the CMake generators, but is particularly useful when building IDE based generators from the command line. You can add options like -j to cmake --build by putting them after the -- option on the command line. For example, cmake --build . --config Debug -- -j8 will pass -j8 to the make command.

Windows:

The Windows platform actually has the greatest diversity of build options. You can use the Visual Studio IDE, nmake, GNU make, jom, MinGW GNU make, Cygwin's GNU make, or ninja. Each of the options has some merit. It depends on how you develop code and which tools you have installed to decide which tool best fits your needs.

Visual Studio IDE

This is a very popular IDE developed by Microsoft. With no extra options the IDE will perform target level parallelism during the build. This works well if you have many targets of about the same size that do not depend on each other. However, most projects are not constructed in that manner. They are more likely to have many dependencies that will only allow for minimal parallelism. However, it is not time to give up on the IDE. You can tell it to use object file level parallelism by adding an extra flag to the compile line. The flag is the /MP flag, which has the following help: "/MP[N] use up to 'n' processes for compilation". The N is optional, as /MP without an n will use as many cores as it sees on the machine. This flag must be set at CMake configure time instead of at build time like the -j flag of make. To set the flag you will have to edit the CMake cache with the cmake-gui and add it to the CMAKE_CXX_FLAGS and the CMAKE_C_FLAGS. The downside is that the IDE will still perform target level parallelism along with object level parallelism, which can lead to excessive parallelism grinding your machine and GUI to a halt. It has also been known to randomly create bad object files. However, the speed up is significant, so it is usually worth the extra trouble it causes.

GNU Make on Windows

Using GNU Make on Windows is similar to using it on Linux or the Mac.
However, there are several flavors of GNU make that can be found for Windows. Since I am talking about achieving maximum parallelism, you need to make sure that the make you are using supports the job-server. The makefiles that CMake generates are recursive in implementation (see http://www.cmake.org/Wiki/CMake_FAQ#Why_does_CMake_generate_recursive_Makefiles.3F). This means that there will be more than one make process running during the build. The job-server code in gmake allows these different processes to communicate with each other in order to figure out how many jobs to start in parallel. The original port of GNU make to Windows did not have a job-server implementation. This meant that the -j option was basically ignored by Windows GNU make when recursive makefiles were used. The only option was to use the Cygwin version of make. However, at some point the Cygwin make stopped supporting C:/ paths, which meant that it could not be used to run the Microsoft compiler. I have a patched version of Cygwin's make that can be found here: (www.cmake.org/files/cygwin/make.exe). Recently, someone implemented the job-server on Windows gmake, as seen on this mailing list post: http://mingw-users.1079350.n2.nabble.com/Updated-mingw32-make-3-82-90-cvs-20120823-td7578803.html
This means that a sufficiently new version of MinGW gmake will have the job-server code and will build in parallel with CMake makefiles. To build with gmake on Windows, you will first want to make sure the make you are using has job-server support. Once you have done that, the instructions are pretty much the same as on Linux. You will of course have to run cmake from a shell that has the correct environment for the Microsoft command line cl compiler to run. To get that environment you can run the Visual Studio command prompt. That command prompt basically sets a bunch of environment variables that let the compiler find system include files and libraries. Without the correct environment CMake will fail when it tests the compiler. There are three CMake generators supporting three different flavors of GNU make on Windows. They are MSYS Makefiles, Unix Makefiles and MinGW Makefiles. MSYS is set up to find the MSYS tool chain and not the MS compiler. MinGW finds the MinGW toolchain. Unix Makefiles will use the CC and CXX environment variables to find the compiler, which you can set to cl for the MS compiler. If you are using the Visual Studio cl compiler and want to use gmake, the two options are the "Unix Makefiles" or the "MinGW Makefiles" generators with either the patched Cygwin gmake, or a MinGW make new enough to have the job-server support. The MSYS generator will not work with the MS compiler because of path translation issues done by the shell. Once you have the environment set up for the compiler and the correct GNU make installed, you can follow the instructions found in the Linux section: basically cmake, then make -jN.

JOM

The legacy command line make tool that comes with Visual Studio is called nmake. nmake is a makefile processor like GNU make, with a slightly different syntax. However, it does not know how to do parallel builds. If the makefiles are set up to run cl with more than one source file at a time, the /MP flag can be used to run parallel builds with nmake. CMake does not create nmake makefiles that can benefit from /MP. Fortunately, Joerg Bornemann, a Qt developer, created the jom tool. jom is a drop-in replacement for nmake and is able to read and process nmake makefiles created by CMake.
jom will perform object level parallelism, and is a good option for speeding up builds on Windows. jom can be downloaded in binary form from here: http://releases.qt-project.org/jom. There is a jom specific generator called "NMake Makefiles JOM". Here is an example (assumes jom is in the PATH):

# assume your source code is in a directory called src and you are one directory up from there
mkdir build
cd build
cmake -G "NMake Makefiles JOM" ../src
jom

ninja

ninja is used on Windows pretty much the same way it is used on Linux or OSX. You still have to build it, which will require installing python. To obtain and build ninja, see the Linux section on ninja. You will also need to make sure that you have the VS compiler environment set up correctly. Once you have ninja.exe in your PATH and cl ready to be used from your shell, you can run the CMake Ninja generator. Here is an example:

# assume your source code is in a directory called src and you are one directory up from there
mkdir build
cd build
cmake -GNinja ../src
ninja

Conclusion

It is possible, although not entirely obvious (especially on Windows), to build with all the cores of your computer. Multiprocessing is obviously here to stay, and performance gains will be greater if parallel builds are taken advantage of as the number of cores available increases. My laptop has 4 real cores and 4 more with hyperthreading, for a total of 8 cores. Recently, I have been using ninja with good results, as I mostly use emacs and the Visual Studio compiler from the command line. Prior to ninja I used the Cygwin version of gmake. I would be interested to hear what other people are using and if you have performance tests of the various forms of build parallelism available; a rough way to time a build is sketched below.
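For anyone who wants to compare, here is a minimal sketch of timing serial versus parallel builds on Linux (assuming the source is in ../src; the generator and job count are only examples, and nproc comes from GNU coreutils):

# configure once
mkdir build && cd build
cmake -G "Unix Makefiles" ../src

# time a parallel build using one more job than the number of cores
JOBS=$(( $(nproc) + 1 ))
time make -j"$JOBS"

# clean and time a serial build for comparison
make clean
time make -j1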
Posted about 12 years ago by Matt McCormick
JSON is a plain-text data file format that has become popular because it is easy for humans to read and write, easy for machines to parse, and it is the standard data format used in JavaScript. Open source JSON parsers are available in a variety of languages; a popular library in C++ is JsonCpp. In TubeTK, we want to use this standardized format more extensively to configure complex algorithms for reproducible analysis, etc. Since TubeTK is primarily written in C++ and uses CMake for its build system, we created a CMake configuration for JsonCpp. This also makes the project easier to use as a CMake ExternalProject. The newly created jsoncpp-cmake repository can be downloaded with Git: enjoy!
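As a rough sketch of how the repository might be fetched and built with CMake (the repository URL is left as a placeholder since the post does not include one, and the out-of-source build layout simply mirrors the CMake examples elsewhere on this page):

# clone the CMake-ified JsonCpp; substitute the actual jsoncpp-cmake repository URL
git clone <jsoncpp-cmake-repository-url> jsoncpp-cmake
mkdir jsoncpp-build
cd jsoncpp-build
cmake ../jsoncpp-cmake
make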
Posted about 12 years ago by Luis Ibanez
Index To the Series
1. Raspberry Pi likes Open Source
2. Cross-Compiling for Raspberry Pi
3. Cross-Compiling ITK for Raspberry Pi
4. Raspberry Pi likes VTK
5. Raspberry Pi likes Node.js

Cross-Compiling for Raspberry Pi

Using a Dell Precision M6700 (Ubuntu 12.10) to build binaries for the Raspberry Pi.

This is a follow up on our exploration of the Raspberry Pi. Thanks to Andrew Maclean, who generously shared with us his recipe to cross-compile for the Raspberry Pi in the comments of our previous blog. Two of the main challenges with cross-compilation are that there are many ways to do it, and that recipes out there have some missing ingredients. Here we are following Andrew's recipe, and adding comments and small updates as we go about the process. (Please share with us your variations and improvements to this recipe.) We are doing this on a Dell Precision M6700 laptop running Ubuntu 12.10.

Note about the images in this blog post: we captured a good number of screenshots to document the process below. The images may appear in low resolution in the blog and may be hard to read, but if you click on them, you will get the full resolution image of the screenshot and the text should appear very clearly.

Step 1. Build the Toolchain

Since we are going to run on a laptop with an Intel processor, and we want to build object code for the ARM processor at the heart of the Raspberry Pi, we need a cross-compiler and its associated tools, which is usually called a "toolchain". Here we are using "crosstool-ng" to build such a toolchain. Following Andrew's advice, many of the instructions below follow this post by Chris Boot: http://www.bootc.net/archives/2012/05/26/how-to-build-a-cross-compiler-for-your-raspberry-pi/

Step 1.1 Download crosstool-ng

We go to http://crosstool-ng.org/#download_and_usage and download the most recent version, which at the time of writing this blog was 1.17.0. Note that in the download page, the version numbers are sorted alphabetically (not numerically). In my first visit, I went straight to the bottom of the page and erroneously grabbed version 1.9.3, just because it was at the bottom of the page... This link below, with the downloads sorted by date, might be useful to you: http://crosstool-ng.org/download/crosstool-ng/?sort=modtime&order=desc The file 00-LATEST-is-1.17.0 should also have been a hint... if I were paying attention... :-)

We created a directory to host it and then downloaded and extracted the sources by doing:

mkdir -p ~/src/RaspberryPi/toolchain
cd ~/src/RaspberryPi/toolchain
wget http://crosstool-ng.org/download/crosstool-ng/crosstool-ng-1.17.0.tar.bz2
tar xjf crosstool-ng-1.17.0.tar.bz2
cd crosstool-ng-1.17.0

Step 1.2 Configure and Build

Here we continue following Chris Boot's instructions. We chose to configure the tool to be installed in a local directory inside our home directory.
cd ~/src/RaspberryPi/toolchain/crosstool-ng-1.17.0
mkdir -p ~/local/crosstool-ng
./configure --prefix=/home/ibanez/local/crosstool-ng

To get this to work, we had to install the following Ubuntu packages (most of which were listed in Andrew's recipe): bison, cvs, flex, gperf, texinfo, automake, libtool. The whole set is installed with the command:

sudo aptitude install bison cvs flex gperf texinfo automake libtool

Then we can do:

make
make install

and add to the PATH the bin directory where crosstool-ng was installed:

export PATH=$PATH:/home/ibanez/local/crosstool-ng/bin/

In some cases, it might be necessary to unset the LD_LIBRARY_PATH, to prevent the toolchain from grabbing other shared libraries from the host machine:

unset LD_LIBRARY_PATH

Step 1.3 Build the Raspberry Pi Toolchain

Create a staging directory. This is a temporary directory where the toolchain will be configured and built, but it is not its final installation place.

mkdir -p ~/src/RaspberryPi/staging
cd ~/src/RaspberryPi/staging/
ct-ng menuconfig

You will see a menu similar to the one in the screenshot. Then:
Go into the option "Paths and misc options" and enable the option "Try features marked as EXPERIMENTAL". In the option "Prefix Directory (NEW)", one can set the actual destination directory where the toolchain will be installed. In this case we choose to install in ${HOME}/local/x-tools/${CT_TARGET}. Others may prefer /opt/cross/x-tools/${CT_TARGET}, for example. After you select < Ok >, select the < Exit > option to go back to the main menu.
There, select "Target options". Change the Target architecture to arm. Leave Endianness set to Little endian and Bitness set to 32-bit. Use again the < Exit > option to go back to the main menu.
Select "Operating System". There, change the "Target OS" option from (bare-metal) to the option "linux". Take the <Select> option, then use the < Exit > option to get back to the main menu.
Select "Binary utilities", then "binutils version". Take the most recent version that is not marked as EXPERIMENTAL. In our case, that was version 2.21.1a. Go back to the main menu.
Select "C compiler". Enable the Show Linaro versions (EXPERIMENTAL) option. Here we selected "linaro-4.7-2012.10 (EXPERIMENTAL)". This is a bit newer than the version "linaro-4.6-2012.04 (EXPERIMENTAL)" that Chris Boot was using in his blog post, so here we are taking our chances... Select that option.
Exit the configuration and save the changes.

Then, start the build process by typing:

ct-ng build

Since this will take a while, Chris recommends here to go and get coffee (or lunch)... It was nice to see that the build process uses the proper "make -j" options for parallel building and therefore makes use of all the available cores. Not to be a whiner... but the problem with this is that it only gives us 18 minutes and 9 seconds for the coffee break :-)

When the build process finishes, we end up with the toolchain installed in the "prefix" directory. In our case: ${HOME}/local/x-tools/${CT_TARGET}, or more specifically: /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi, where we will find a collection of executables in the "bin" directory. We now add this directory to the PATH:

export PATH=$PATH:/home/ibanez/local/x-tools/arm-unknown-linux-gnueabi/bin

We can then test the toolchain with a small "hello world" C program, compiling it locally as "helloworld" (on "aleph", which is the name of our Ubuntu laptop).
We then copy it to the Raspberry Pi and finally run it there.

Step 1.4 Build the C++ compiler in the toolchain

By default, our process above only built the C compiler. We are now going to build the C++ compiler as well. We go back to the staging directory /home/ibanez/src/RaspberryPi/staging and run the configuration process:

ct-ng menuconfig

We go into the "C compiler" option, enable the option "C++", save and exit, and type "ct-ng build" again to build the toolchain. This time it took 13 minutes and 18 seconds, and we now have the new C++ components in the toolchain binary directory. Time to test the C++ compiler with a Hello World: we build it locally, copy the executable to the Raspberry Pi, log in to the Raspberry Pi, and execute the cross-compiled executable. This completes the set up of the toolchain. We are now ready to use CMake to cross compile bigger projects.

Step 2. One CMake File to Rule Them All!

We now turn our attention to the Cross Compilation instructions of the CMake Wiki: http://www.cmake.org/Wiki/CMake_Cross_Compiling The first step here is to write a .cmake file that points to the toolchain. In our case we choose to call this file Toolchain-RaspberryPi.cmake and put in it the following content:

# this one is important
SET(CMAKE_SYSTEM_NAME Linux)
# this one not so much
SET(CMAKE_SYSTEM_VERSION 1)
# specify the cross compiler
SET(CMAKE_C_COMPILER /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-gcc)
SET(CMAKE_CXX_COMPILER /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-g++)
# where is the target environment
SET(CMAKE_FIND_ROOT_PATH /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi)
# search for programs in the build host directories
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
# for libraries and headers in the target directories
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

Note here that the path /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi is the base directory where we installed the toolchain, and the file names arm-unknown-linux-gnueabi-gcc and arm-unknown-linux-gnueabi-g++ are the names of the generated C and C++ compilers respectively.

We now put the CMake toolchain file in the directory /home/ibanez/bin/RaspberryPi/CMakeToolChain. Then we create a CMake-based Hello World example:

mkdir -p /tmp/hello/src
mkdir -p /tmp/hello/bin
cd /tmp/hello/src

Write here a CMakeLists.txt file with just:

cmake_minimum_required(VERSION 2.8)
project(HelloWorld)
add_executable(HelloWorld HelloWorld.cxx)
target_link_libraries(HelloWorld)

and the associated HelloWorld.cxx file with:

#include <iostream>
int main()
{
  std::cout << "Hello World++ !" << std::endl;
  return 0;
}

Then, we can change directories to the bin directory and configure with CMake, pointing to the toolchain file:

cd /tmp/hello/bin
cmake -DCMAKE_TOOLCHAIN_FILE=/home/ibanez/bin/RaspberryPi/CMakeToolChain/Toolchain-RaspberryPi.cmake ../src

and simply build with "make". Then copy the resulting executable to the Raspberry Pi, and run it.

Step 3. Setting up additional bin and lib files for cross compiling

If you need access to the libraries on the Raspberry Pi for compiling (for example, if you have built the latest Boost libraries on the Raspberry Pi and they are installed in /usr/local), you can copy these to a directory on your host computer using rsync, which will preserve the symlinks.
On your host machine you may also have to install rsync:

sudo aptitude install rsync

On the Raspberry Pi, install rsync:

sudo apt-get install rsync

Create a folder on the cross compiling machine. For example, here we call it ~/bin/RaspberryPi:

mkdir -p ~/bin/RaspberryPi
cd ~/bin/RaspberryPi

and do the following:

rsync -rl [email protected]:/lib .
rsync -rl [email protected]:/usr .

Remember to run these rsync commands whenever new libraries are added to the Raspberry Pi system or when the Raspberry Pi is upgraded. (A sketch of how to point the CMake toolchain file at this copied sysroot appears at the end of this post.)

This concludes our introduction to cross compilation for the Raspberry Pi using CMake. Please share with us your comments and suggestions for improving this process.
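The post copies the Raspberry Pi's /lib and /usr into ~/bin/RaspberryPi but does not show how CMake is told about that copy. One plausible approach (our assumption, not something verified in the post) is to list that directory in CMAKE_FIND_ROOT_PATH in the Toolchain-RaspberryPi.cmake file from Step 2, for example by appending an override:

# sketch only: make the toolchain file also search the rsync'ed Raspberry Pi files (adjust paths to your setup)
cat >> /home/ibanez/bin/RaspberryPi/CMakeToolChain/Toolchain-RaspberryPi.cmake << 'EOF'
# also search the copy of the Raspberry Pi's /lib and /usr made in Step 3
SET(CMAKE_FIND_ROOT_PATH /home/ibanez/local/x-tools/arm-unknown-linux-gnueabi /home/ibanez/bin/RaspberryPi)
EOF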
Posted about 12 years ago by David Cole
And a few more... Here are some important bug fixes to the CMake 2.8.10 release. Thanks go out to Alex Neundorf, Brad King, Rolf Eike Beer (and me); fixes for the following problems are now available in a 2.8.10.2 bug-fix release. The change log page for this bug-fix-only release is here: http://public.kitware.com/Bug/changelog_page.php?version_id=107 Please use the latest release installers from our download page http://cmake.org/cmake/resources/software.html rather than any previous 2.8.10 builds. Thanks for your continued support! -Dave

These are the commits that fixed the problems:

Changes in CMake 2.8.10.2 (since 2.8.10.1)
----------------------------------------------
Alex Neundorf (1):
  Automoc: fix regression #13667, broken build in phonon
Brad King (1):
  Initialize IMPORTED GLOBAL targets on reconfigure (#13702)
David Cole (1):
  CMake: Fix infinite loop untarring corrupt tar file
Rolf Eike Beer (1):
  FindGettext: fix overwriting result with empty variable (#13691)