Posted almost 10 years ago by mollekopf
It's been a while since the last progress report on Akonadi Next. I've since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that implementing a simple resource now only takes a couple of template instantiations, apart from the code that interacts with the datasource (e.g. your IMAP server), which obviously has to be written per resource.
Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; that is what allows the db to write up to 50'000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4'000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40'000 emails would be done within 10s, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster; that command queue then needs to be processed for the data to become available to clients, and all of that together makes up the actual write speed.
On the reading side we're at around 50'000 values per second, with the read time growing linearly with the number of messages read. Again far from ideal (a plain read from a single db can reach around 400'000 values per second, excluding index lookups), but still good enough to load large email folders in a matter of a second.
I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.
With performance in an acceptable state, I will shift my focus to revision handling, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side-effect; simplicity and ease of use are the goal.
Randa
Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you'd like to contribute to that, you can help us with some funding. Much appreciated!
Posted almost 10 years ago by greve
Kolab Now was first launched in January 2013, and we were anxious to find out: if someone offered a public cloud service for people who put their privacy and security first, a service that would not just re-sell someone else's platform with some added marketing but did things right, would there be a demand for it? Would people choose to pay with money instead of with their privacy and data? These past two and a half years have provided a very clear answer. Demand for a secure and private collaboration platform has grown in ways we could only have hoped for.
To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users both today and in the years to come. This is the most significant set of changes we've ever made to the service, and it has been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.
From a revamped and simplified sign-up process to a more robust directory
service design, the improvements will be visible to new and existing users
alike. Everyone can look forward to a significantly more robust and
reliable service, along with faster turnaround times on technical issues. We
have even managed to add some long-sought improvements many of you have been
asking for.
The road travelled
Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.
Our expectation was that a public cloud service oriented on full business collaboration with a focus on privacy and security would primarily attract small and medium enterprises of between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the realm of 30 users, plus a handful of very large ones.
That had implications for the way the directory service was set up.
In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure, instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, it is the difference between row houses and apartments in a large apartment block.
So we expected some moderate growth, for which we planned to deploy some older hardware to provide adequate redundancy and resources, so there would be a steady show-case for how to deploy Kolab for the needs of Application and Internet Service Providers (ASP/ISP).
Literally on the very day we carried that hardware into the data centre, Edward Snowden and his revelations became visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.
After nice, steady growth during the early months, MyKolab.com took us on a wild ride.
Our operations team managed to work miracles with the old hardware, in ways that often made me think this would make interesting learning material for future administrators. But efficiency only gets you so far.
Within a couple of months, however, we ended up replacing it in its entirety, and for the most part this happened without disruption to the production systems. New hardware was installed, services were switched over, old hardware was removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them on MyKolab.com as well.
What we did not manage to make time for was reworking the directory service to adjust some of the underlying assumptions to reality. Especially the number of domains in relation to the number of users ended up dramatically different from what we had initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation, with a complete restart easily taking in the realm of 45 minutes.
In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to the extent that users felt this was the lack of a feature, not a feature in and of itself.
Re-designing the directory service, however, carries implications for the entire service structure, including the user self-administration software and much more. And you want to be able to deploy this within a reasonable time window and ensure the service comes back up better than before for all users.
On the highway to future improvements
So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That's a lot of work, most of which has been done by this point.
The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.
That hardware was installed last week.
So now the roll-out process will go through the stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.
For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.
Stay tuned.
Posted almost 10 years ago by Aaron Seigo
The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as it still does a lot in that UI), and it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar, for instance. That service (as you might already know) is called Akonadi.
In its current design, Akonadi uses an external[1] database server to store much of its data[2]. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.
While looking into some improvements for a corporate installation of Kontact, where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.
How to accomplish this isn't well documented, unfortunately, at least not anywhere I could readily find. Thankfully I can read the source code and work with some of the best Akonadi and Kontact developers currently working on it. I will be improving the documentation around this in the coming weeks, though.[3] Until then, here is how I went about it.
Configuring Akonadi
First, you want Akonadi not to be running. Close Kontact if it is running and then run akonadictl stop. This can take a little while, even though that command returns immediately. To ensure Akonadi actually is stopped, run akonadictl status and make sure it says that it is, indeed, stopped.
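A minimal sketch of that stop-and-verify sequence on the command line:
akonadictl stop
# give it a moment, then confirm it really is stopped:
akonadictl status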
Next, start the Akonadi control panel. The command line approach is kcmshell4 kcm_akonadi_resources, but you can also open the command runner in Plasma (Alt+F2 or Alt+Space, depending on your setup) and type in akonadi to get something like this:
It's the first item listed, at least on my laptop: Akonadi Configuration. You can also go the "slower" route and open System Settings and either search for akonadi or go right into the Personal Information panel. No matter how you go about it, you'll see something like this:
Switch to the Akonadi Server Configuration tab and disable the Use internal MySQL server option. Then you can go about entering a hostname. This would be localhost for MySQL[7] running on the same machine, or an IP address or domain name that is reachable from the system. You will also need to supply a database name[4] (which defaults to akonadi), a username[5] and a password. Clear the Options line of text, and hit the ol' OK button. That's it.
Assuming your MySQL is up and running and the username and password you supplied are correct, Akonadi will now be using a remote MySQL database. Yes, it is that easy.
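If you prefer to verify or edit the result by hand, the settings from that dialog end up in Akonadi's server configuration file (typically ~/.config/akonadi/akonadiserverrc). Here is a rough sketch of what a remote-MySQL setup can look like there; treat the exact file location and key names as assumptions, since they can vary between Akonadi versions:
[%General]
Driver=QMYSQL

[QMYSQL]
Host=db.example.com
Name=akonadi
User=akonadi_user
Password=not-your-login-password
StartServer=false
Options=
Either way, make sure Akonadi is stopped before changing the configuration.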
Caveats
In this configuration, the limitations are twofold:
* network quality
* the local configuration is now tied to that database
Network quality is the biggest factor. Akonadi can send a lot of database queries, and each of those results in a network roundtrip. If your network latency for a roundtrip is 20ms, for instance, then you are pretty well hard-limited to 50 queries per second. Given that Akonadi can issue several queries for an item during initial sync, this can result in quite slow initial synchronization performance on networks with high latency.[6]
Beyond latency, bandwidth is the other important factor. If you have lots of users or just tons of big mails, consider the network traffic incurred in sending all that data around the network.
For a typical, even just semi-modern, network in an office environment, however, the network should not be a big issue in terms of either latency or bandwidth.
The other item to pay attention to is that the local configuration and the file data kept outside the database by Akonadi will now be tied to the contents of that database, and vice versa. So you cannot simply set up a single database on a remote database server and then connect to it simultaneously from multiple Akonadi instances. In fact, I will guarantee you that this will eventually screw up your data in unpleasant ways. So don't do it. ;)
In an office environment where people don't move between machines and/or where the user data is stored centrally as well, this isn't an issue. Otherwise, create one database for each device you expect to connect. Yes, this means multiple copies of the data, but it will work without trashing your data, and that's the more important thing.
How well does it work?
Now for the Big Question: Is this actually practical and safe enough for daily use? I've been using this with my Kolab Now account since last week. To really stretch the realm of reality, I put the MySQL instance on a VM hosted in Germany. In spite of forcing Akonadi to trudge across the public internet (and over wifi), so far, so good. Once past a pretty slow initial synchronization, Kontact generally "feels" on par with, and often even a bit snappier than, most webmail services I've used, though certainly slower than a local database. In an office environment, however, I would hope that the desktop systems have a better network than "my laptop on wifi accessing a VM in Germany".
As for server load, for one power user with a ton of email (my life seems to revolve around email much of the time) it is entirely negligible. MySQL never budged much above 1% CPU usage during my monitoring of it, and after sync was usually just idling.
I won't be using this configuration for daily use. I still have my default-configured Akonadi as well, and that is not only faster but travels with my laptop wherever it is, network or not. Score one for offline access.
Footnotes
1: If you are thinking something along the lines of "the real issue is that it uses a database server at all", I would partially agree with you. For offline usage, good performance, and feature consistency between accounts, a local cache of some sort is absolutely required. So some local storage makes sense. A full RDBMS carries more overhead than truly necessary and SQL is not a 100% perfect fit for the dataset in question. Compared to today, there were far fewer options available to the Akonadi developers a decade ago when the Akonadi core was being written. When the choice is between "not perfect, but good enough" and "nothing", you usually don't get to pick "nothing". ;) In the Akonadi Next development, we've swapped out the external database process and the use of SQL for an embedded key/value store. Interestingly, the advancements in this area in the decade since Akonadi's beginning were driven by a combination of mobile and web application requirements. That last sentence could easily be unpacked into a whole other blog entry.
2: There is a (configurable) limit to the size of payload content (e.g. email body and attachments) that Akonadi will store in the database which defaults to 4KiB. Anything over that limit will get stored as a regular file on the file system with a reference to that file stored in the database.
3: This blog entry is, in part, a way to collect my thoughts for that documentation.
4: If the user is not allowed to create new databases, then you will need to pre-create the database in MySQL.
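A minimal sketch of that pre-creation step in the MySQL client; the database and user names here are placeholders, so substitute whatever you entered in the Akonadi configuration dialog:
-- run as a MySQL admin user:
CREATE DATABASE akonadi;
GRANT ALL PRIVILEGES ON akonadi.* TO 'akonadi_user'@'%' IDENTIFIED BY 'secret';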
5: The user account is a MySQL account, not your regular system user account ... unless MySQL is configured to authenticate against the same user account information that system account login is, e.g. PAM / LDAP.
6: Akonadi appears to batch these queries into transactions that exist per folder being sync'd or per 100 emails, whichever comes first, so if you are watching the database during sync you will see data appear in batches. This can be done pretty easily with an SQL statement like select count(*) from PartTable; Divide this number by 3 to get the number of actual items, time how long it takes for a new batch to arrive, and you'll quickly have your performance numbers for synchronization.
7: That same dialog also offers options for things other than MySQL. There are pros and cons to each of the options in terms of stability and performance. Perhaps I'll write about those in the future, but this blog entry with its stupid number of footnotes is already too long. ;)
Posted almost 10 years ago by greve
Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.
There is now a group of some of the leading experts for messaging and collaboration, in combination with service providers around the world, that has embarked with us on this unique journey:
bHosted
Contargo
cPanel
Fastmail
Sandstorm.io
sys4
Tucows
TransIP
XS4ALL
The second objective for the campaign was to gain enough acceleration to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and work through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that's because we have plenty of imagination when it comes to bells and whistles.
But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.
Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:
Roundcube Next Campaign Amount    $103,541.00   100.00%
Indiegogo Cost                     -$4,141.64     4.00%
PayPal Cost                        -$4,301.17     4.15%
Remaining Amount                   $95,098.19    91.85%
So by the time the money was in our PayPal account, we were down 8.15%.
The reason for that is simple: Instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, Indiegogo transferred it individually per contribution, which means PayPal gets to extract its per-transaction fee each time. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given that our contributions were larger than average for crowdfunding campaigns, the percentage for other campaigns is likely going to be higher. It would seem this can easily go beyond the 5% that you see quoted on a variety of sites about crowdfunding.
But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next Campaign funds were transferred to PayPal, XE.com listed the exchange rate as 0.9464749579 CHF per USD.
                                  USD           CHF              % of total
Roundcube Next Campaign Amount    $103,541.00   SFr. 97,998.96   100.00%
Remaining at PayPal               $95,098.19    SFr. 90,008.06    91.85%
Final at bank in CHF              $92,783.23    SFr. 87,817.00    89.61%
So now we’re at 10.39% in fees, of which 4% go to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.
The $4,141.64 we paid to Indiegogo is not too bad, I guess, although their service was shaky and their support non-existent. I don't think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours, which were particularly annoying during the critical final week of the campaign, where we can be sure to have lost contributions.
PayPal's overhead was $6,616.27, the equivalent of another Advisor to the Roundcube Next Campaign. That's almost 60% more than the cost of Indiegogo, which seems excessive and reminds me of one of Bertolt Brecht's more famous quotes.
But of course you also need to add the effort for the campaign itself, including preparation, running it, and the perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, rather than being focused on innovation.
In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world-class Creative Director and UI/UX expert who is going to join us a month from now.
We also remain open to others to come aboard.
The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue well into next year, so there is time to engage even later. But many decisions will be made in the first months, and you can still be part of that as an Advisor to Roundcube Next.
It’s not too late to be part of the Next. Just drop a message to [email protected].
Posted almost 10 years ago by Aaron Seigo
I try to keep a memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my memory, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander backwards 5, 10, 15, 20 years into the past and reflect.
Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.
In particular, I'm quite pleased with how the "filter groupware folders" feature will actually be implemented quite late in the project, as a very simple and very isolated module that sits on top of general-purpose scaffolding for real-time manipulation of an IMAP stream.
When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprung out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much more fuzzy. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.
If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.
So what point is there to being able to see an end point? That's a good question, and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which, when complete, will not match what is there?
A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is becoming short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.
This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.
The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.
Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.
I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take you around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.
Posted almost 10 years ago by mollekopf
I recently had the dubious pleasure of getting Kontact to work on Windows, and after two weeks of agony it also yielded some results =)
Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones are now working in a way that lets you collaborate with non-Windows users, although that required a patch or two to kdelibs.
To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the Kolab stable branches, and not all the Windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.
If you want to try it yourself you can download an installer here [1], and if you don't (I won't judge you for not using Windows) you can look at the pretty pictures.
[0] https://github.com/cmollekopf/kdepimwindows
[1] http://mirror.kolabsys.com/pub/upload/windows/Kontact-E5-2015-06-30-19-41.exe
Posted almost 10 years ago by Aaron Seigo
The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core, to give it a secure future, has just wrapped up. We managed to raise $103,531 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.
Perks
The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people for shirt sizes, color options, etc. in the coming week.
Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)
Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping for the physical perk items.
Roundcube Backstage
We'll be opening the Roundcube Backstage area in the ~2 weeks after wrap-up is complete next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, contemplating the amazing future that lies ahead of us, ...
The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.
Advisory Committee
The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.
The Actual Project!
The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.
Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence them. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members, as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.
We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.
Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)
Posted almost 10 years ago by greve
Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors, contribution being the single most important currency and differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, members of our community spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.
We create nothing less than a common vision of the future.
By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way, and not everyone has the capability to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.
That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require; they will also ensure you have a future you like.
Developers know all this already, of course, and typically apply it at least subconsciously.
Growing that kind of community has been one of the key motives behind launching Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on VentureBeat.
Last night Sandstorm.io became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.
Together, we will be that community that will build the future.
Posted almost 10 years ago by mollekopf
Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we're unfortunately not yet in a position where we can cover all of the functionality with automated tests. If manual testing is required, being able to bring the test system into a "clean" state after every test is key to reproducibility.
Fortunately, with Linux containers, we now have a lightweight virtualization technology available, and Docker makes them fairly trivial to use.
Docker
Docker allows us to create, start and stop containers very easily, based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content plus a process running in it. Let that process be bash and you have pretty much a fully functional Linux system.
The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I'm doing in the container is not affected by what happens on the host system. So, for instance, upgrading the host system does not affect the container.
Also, starting a container is a matter of a second.
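As a minimal illustration of that workflow (the image tag is just an example):
# fetch a base image and get a throwaway shell in it
docker pull ubuntu:12.04
docker run -ti --rm ubuntu:12.04 bash
# --rm removes the container again as soon as bash exits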
Reproducible builds
There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.
As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.
Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers
For building I chose to use kdesrc-build, so building all the necessary repositories takes the least amount of effort.
Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.
Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.
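A sketch of what such a build invocation can look like; the image name, the mount paths and the exact kdesrc-build call are assumptions for illustration, not my actual setup:
docker run -ti --rm \
  -v $HOME/kde/src:/src \
  -v $HOME/kde/build:/build \
  -v $HOME/kde/install:/install \
  kdepim-build \
  kdesrc-build kdepim
# /src, /build and /install live on the host, so the container itself stays disposable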
Reproducible testing
Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.
After a bit of hackery to reuse the host's X11 socket, it's possible to run graphical applications inside a properly set-up container.
The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).
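The X11 part essentially boils down to handing the host's X socket and DISPLAY into the container, roughly like this (image and binary names are placeholders, and any xhost/Xauthority handling is left out here):
docker run -ti --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $HOME/kde/install:/install \
  kontact-testclient \
  /install/bin/kontact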
..with a server
Because I'm typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, for which the account and credentials are already configured in the client container), and the server is completely fresh on every start.
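Starting that server is then just another container, along these lines (the image and container names are placeholders for whatever your prepared Kolab image is called):
docker run -d --name kolab-testserver kolab-testserver-image
# the client container can then be pointed at it, e.g. via --link kolab-testserver:kolab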
Wrapping it all up
Because a bunch of commands is involved, it's worthwhile writing a couple of scripts to make the usage as easy as possible.
I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”
When starting the environment, the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for testing invitation handling and the like.
Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.
While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.
The development environment
I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.
I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.
While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).
I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.
Overall I'm very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup. At least not without a bunch of dedicated machines just for that. I'm likely to invest more in that setup, and, as mentioned, I'm contemplating dockerizing my development setup too.
In any case, sources can be found here:
https://github.com/cmollekopf/docker.git
Posted almost 10 years ago by Timotheus Pokorra
Just before the Kolab Summit, at the end of April 2015, the Phabricator instance for Kolab went online! Thanks to Jeroen and the team from Kolab Systems who made that happen!
I have to admit it took me a while to understand Phabricator and how to use it. I am still learning, but I now know enough to write an initial post about it.
Phabricator describes itself as an "open source, software engineering platform". It aims to provide all the tools you need to engineer software. In their words: "a collection of open source web applications that help software companies build better software".
To some degree, it replaces solutions like Github or Gitlab, but it has much more than Code Repository, Bug Tracking and Wiki functionality. It also has tools for Code Review, Notifications and Continuous builds, Project and Task Management, and much more. For a full list, see http://phabricator.org/applications/
In this post, I want to focus on how you work with the code, and how to submit patches. I am quite used to the idea of Pull Requests as Github does them. Things are a little bit different with Phabricator, but when you get used to it, they are probably more powerful.
Starting with browsing the code: there is the Diffusion application. You can see all the Kolab projects there.
It also shows the “git clone” command at the top for each project.
Admittedly, that is quite crowded, and if you still want the simple cgit interface, you get it here: https://cgit.kolab.org/
Now imagine you have fixed a bug or want to submit a change to the Kolab documentation (project docs). You clone the repository, edit the files locally, and commit them locally.
You can submit patches online with the Differential application: Go to Differential, and at the top right you find the link "Create Diff". There you can paste your patch or upload it from a file, and specify which project/repository it is for. All the developers of that repository will be notified of your patch. They will review it, and if they accept it, the patch is ready to land. I will explain that below.
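If you are wondering how to get such a patch out of your local commits in the first place, here is a minimal example (assuming your work sits on top of origin/master):
# everything that is not yet on origin/master, as a single patch:
git diff origin/master > my-change.patch
# or one patch file per commit:
git format-patch origin/master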
Alternatively, you can submit a patch from the command line as well!
Let me introduce you to Arcanist: a command line application, part of Phabricator, that helps integrate your git directory with Phabricator. There is a good manual for Arcanist: the Arcanist User Guide.
Arcanist is not part of Fedora yet (I have not checked other distributions), but you can install it from the Kolab Development repository like this, e.g. on Fedora 21:
# import the Kolab key
rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x830C2BCF446D5A45"
curl http://obs.kolabsys.com/repositories/Kolab:/Development/Fedora_21/Kolab:Development.repo -o /etc/yum.repos.d/KolabDevelopment.repo
yum install arcanist
# configure arcanist (file: ~/.arcrc)
arc set-config default https://git.kolab.org
arc install-certificate
# go to https://git.kolab.org/conduit/login/ and copy the token and paste it to arc
Now you can create a clone of the repository, in this example the Kolab Documentation:
git clone https://git.kolab.org/diffusion/D/docs.git
# if you have already an account on git.kolab.org, and uploaded your SSH key to your configuration:
# git clone ssh://[email protected]/diffusion/D/docs.git
cd docs
# do your changes
# vi source/installation-guide/centos.rst
git commit -a
arc diff # Creates a new revision out of ALL unpushed commits on
# this branch.
This will also create a code review item on Differential!
For more options of arc diff, see the Arcanist User Guide on arc diff
By the way, have a look at this example: https://git.kolab.org/D23
Now, after your code change has been reviewed and accepted, it is "ready to land".
What happens next depends on whether you have write permissions on the repository. If you don't have them, ask on IRC (freenode #kolab) or on the Kolab developers' mailing list for someone to merge your change.
If you have push permissions, this is what you do (if D23 is your Differential id):
# assuming you have Arcanist configured as described above...
arc patch --nobranch D23
# if we are dealing with a branch:
# arc land D23
git push origin master
I hope this helps to get started with using Phabricator, and it encourages you to keep or start submitting patches to make Kolab even better!