Posted about 11 years ago
With Bdale Garbee’s casting vote this week, the Debian technical committee finally settled the question of init for both Debian and Ubuntu in favour of systemd.
I’d like to thank the committee for their thoughtful debate under pressure in the fishbowl; it set a high bar for analysis and experience-driven decision making, since most members of the committee clearly took time to familiarise themselves with both options. I know the many people who work on Upstart appreciated the high praise for its code quality, rigorous testing and clarity of purpose expressed even by members who voted against it; from my perspective, it has been a pleasure to support the efforts of people who want to create truly great free software, and do it properly. Upstart has served Ubuntu extremely well – it gave us a great competitive advantage at a time when things became very dynamic in the kernel, it’s been very stable (it is after all the init used in both Ubuntu and RHEL 6), and it has set a high standard for Canonical-led software quality of which I am proud.
Nevertheless, the decision is for systemd, and given that Ubuntu is quite centrally a member of the Debian family, that’s a decision we support. I will ask members of the Ubuntu community to help to implement this decision efficiently, bringing systemd into both Debian and Ubuntu safely and expeditiously. It will no doubt take time to achieve the stability and coverage that we enjoy today and in 14.04 LTS with Upstart, but I will ask the Ubuntu tech board (many of whom do not work for Canonical) to review the position and map out appropriate transition plans. We’ll certainly complete work to make the new logind work without systemd as pid 1. I expect they will want to bring systemd into Ubuntu as an option for developers as soon as it is reliably available in Debian, and as our default as soon as it offers a credible quality of service to match the existing init.
Technologies of choice evolve, and our platform evolves both to lead (today our focus is on the cloud and on mobile, and we are quite clearly leading GNU/Linux on both fronts) and to embrace change imposed elsewhere. Init is contentious because both developers and system administrators are required to understand its quirks and capabilities. No wonder this was a difficult debate: the consequences for hundreds of thousands of people are very high. From my perspective, the fact that good people were clearly split suggests that either option would work perfectly well. I trust the new stewards of pid 1 will take that responsibility as seriously as the Upstart team has done, and be as pleasant to work with. And… onward.
Posted about 11 years ago
As prep for the upcoming 14.04 LTS release of Ubuntu I spent some quality time with each of the main flavours that I track – Kubuntu, Ubuntu GNOME, Xubuntu, and Ubuntu with the default DE, Unity.
They are all in really great shape! Thanks and congratulations to the teams that are racing to deliver Trusty versions of their favourite DEs. I get the impression that all the major environments are settling down from periods of rapid change and stress, and the timing for an LTS release in 14.04 is perfect. Lucky us.
The experience reminded me of something people say about Ubuntu all the time – that it’s a place where great people bring diverse but equally important interests together, and a place where people create options for others of which they are proud. You want options? This is the place to get them. You want to collaborate with amazing people? This is the place to find them. I’m very grateful to the people who create those options – for all of them it’s as much a labour of love as a professional concern, and their attention to detail is what makes the whole thing sing.
Of course, my testing was relatively lightweight. I saw tons of major improvements in shared apps like LibreOffice and Firefox and Chromium, and each of the desktop environments feels true to its values, diverse as those are. What I bet those teams would appreciate is all of you taking 14.04 for a spin yourselves. It’s stable enough for any of us who use Linux heavily as an engineering environment, and of course you can use a live boot image off USB if you just want to test drive the future. Cloud images are also available for server testing on all the major clouds.
Having the whole team, and broader community, focus on processes that support faster development at higher quality has really paid off. I’ve upgraded all my systems to Trusty, and those I support from afar too, without any issues. While that’s mere anecdata, the team has far more real data to support a rigorous assessment of 14.04’s quality than any other open platform on the planet, and it’s that rigour that we can all celebrate as the release date approaches. There’s still time for tweaks and polish; if you are going to be counting on Trusty, give it a spin and let’s make sure it’s perfect.
Posted about 11 years ago
The Hacker Public Radio FOSDEM edition is up now for those wanting a podcast packed full of geek interviews. My interview is 27 minutes in and talks of KDE, Kubuntu and how Baloo will make the KDE world slicker. MP3 file.
And the video of my talk, Do you have to be brain damaged to care about desktop Linux, is up on FOSDEM's website. It mixes the highs and lows of severe head trauma with the highs and lows of developing a KDE Linux distribution.
Posted over 11 years ago
I'm sitting in my hotel in New York City, exhausted and energized from the first day of Real World Crypto, which is being held at City College in Harlem. I'm delighted to be spending a few days on an island full of interesting people, and a little embarrassed to realize the last time I blogged was from a hotel room at the OSEHRA conference last fall.
RWC is a conference that aims to bring cryptography researchers together with the developers implementing crypto in real world applications, and to get them working together more closely. I think it's been very successful at that goal so far. Here are some of the highlights as I remember them from day 1. Please be tolerant of any errors in my reporting, as I am far from an expert in cryptography and I may not do the ideas justice.
In the morning I had the privilege of breakfast with Jacob Kaplan-Moss, who founded the Django project and is the Director of Security for Heroku. We had a fast-paced and wide-ranging discussion about how to reduce the effort for small healthcare companies to become HIPAA compliant, and why that is important for the spread of patient centered care innovations. He's also building a Raspberry Pi-powered submarine which keeps shorting out, and we had some fun chatting about different types of information radiators in office environments. I'm still advocating for someone to connect a smell machine to a code quality metric so that checking in bad code causes the building to stink.
Later in the morning Dr. Hoeteck Wee gave a very interesting report on efforts to formally verify the security of TLS, the protocol deployed to secure almost all communications on the internet and mobile phone apps. Because of the ubiquity of this protocol, this work is very important, even if it is coming a bit after the fact now that TLS is already widely deployed. After all, empirical evidence does not equal proof. The definition of security used was data confidentiality, integrity, and entity authentication. Dr. Wee's conclusions were that TLS is secure even though it violates some basic cryptographic principles, such as strict separation of the key exchange from the record protocol. He also concluded that the TLS protocol is fragile, and that the security it has is accidental (it depends on the client sending a finished message, which was intended for authentication, not for key exchange). After the talk a distinguished audience member commented that modularity is expensive: it costs two round trips, which is 0.5 seconds of latency when communicating via satellite.
Brian Warner gave a fascinating account of crypto in Firefox sync, which has been through three complete redesigns. There was a huge amount to learn from this talk, and the biggest lesson was that bad UI (even when it's an accident) will totally ruin a beautifully designed protocol for all your customers. Firefox sync was originally designed for syncing data between multiple devices while keeping your data secure even against a compromised server, yet over half of the users had only one device and actually wanted backup rather than sync. The server-proof design meant that when a user with only one device lost that device, there was no way to recover their decryption key, and so the data on the server was no good to them. I lost the link to Brian's full slides but his blog post is here with many details about the new SRP-based design. I thought it was especially noteworthy that most of the protocol could actually be run in the clear without needing SSL/TLS, although the initial handshake does need TLS.
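To make that lesson concrete, here's a toy sketch of the general pattern behind server-proof designs. This is my own illustration, not Firefox sync's actual SRP protocol: it just uses the generic PBKDF2-plus-Fernet approach, and all names and parameters are made up. The encryption key is derived on the client from a secret the server never sees, which is exactly why a lost password (or a lost only-device) means unrecoverable data.

```python
# Sketch of a server-proof storage scheme: the server only ever sees
# ciphertext, because the key is derived on the client from the user's
# password. Requires the third-party 'cryptography' package
# (pip install cryptography).
import base64
import hashlib
import os

from cryptography.fernet import Fernet

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Stretch the password into a 32-byte key, client-side only."""
    raw = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects urlsafe base64

# --- client side ---
salt = os.urandom(16)  # stored alongside the ciphertext; not secret
key = derive_key(b"correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b"bookmarks, history, saved passwords")

# --- server side ---
# The server stores (salt, ciphertext) and can decrypt neither.

# --- recovery: only someone who knows the password can rebuild the key ---
recovered = Fernet(derive_key(b"correct horse battery staple", salt))
assert recovered.decrypt(ciphertext) == b"bookmarks, history, saved passwords"
```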
At lunchtime I had the honor of finally meeting Zooko Wilcox-O'Hearn in person, whom I first met online when I was working on Ubuntu One years ago. We've crossed paths on twitter and IRC over the years, but never stood in the same room before. Zooko is now the CEO of https://leastauthority.com/, which provides highly secure and private storage based on Tahoe-LAFS (which he creates and gives away as free and open source software). I respect Zooko a great deal, not only because of the products that he builds, but also because of the integrity with which he reasons about what problems are important for the health of society and thus what problems he should work on. I was also delighted to meet Andrew Isaacson, who was sitting at the same table; he is one of the founders of the Noisebridge hackerspace in San Francisco, and is currently very passionate about building secure multi-party chat (on top of his day job).
After lunch there were three sessions on Multi-Party Computation (MPC), a cryptographic technique by which it is possible to perform computations on data without any of the servers involved in the computation actually knowing the data. This is done by math tricks which separate the data into chunks and apply transforms, then distribute the transformed and unreadable data to a set of servers, who each perform some predefined operations and send the results back. When the results are combined and run through the matching transforms, the answer appears. It's a fascinating technique with many possible applications, and the principal thing limiting its use has been how slow it is to run. Of the three talks, the one that stood out to me was John Launchbury's, which showed how they were able to accelerate shared computation to usable speeds by stepping back and re-thinking what data actually needed to be computed. John gave examples of working implementations of secure mail filtering (the server filters your mail without being able to read it) and secure voice conferencing (built on a modified mumble server). One potential application of these techniques is research against sets of medical data obtained from multiple databases (such as multiple hospitals): you want the researcher to be able to compute an answer that pulls from multiple data sets, but never actually combine the data sets in a way where a breach would compromise all of the hospitals. At one point a very useful definition was given: secure multi-party computation protocols emulate an incorruptible trusted third party in a world without trusted third parties. This is useful because you can design your system to rely on a trusted third party and then swap in MPC.
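To give a feel for those math tricks, here's a toy sketch of additive secret sharing, one of the simplest MPC building blocks. This is my own illustration, not any of the systems presented; it assumes honest servers and handles only addition. Each input is split into random shares that individually reveal nothing, each server adds the shares it holds, and only the recombined result is meaningful.

```python
# Toy additive secret sharing over the integers mod a large prime:
# each value is split into n random shares that sum to the value.
# Any n-1 shares look uniformly random, so no single server learns
# anything, yet the servers can jointly compute the sum of the inputs.
import secrets

PRIME = 2**61 - 1  # field modulus; all arithmetic is mod PRIME

def share(value: int, n_servers: int) -> list[int]:
    """Split value into n_servers additive shares."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % PRIME

# Two hospitals each share a patient count among three servers.
hospital_a = share(1200, 3)
hospital_b = share(3400, 3)

# Each server adds the shares it holds -- it never sees 1200 or 3400.
per_server_sums = [(a + b) % PRIME for a, b in zip(hospital_a, hospital_b)]

# Combining the per-server results reveals only the total.
assert reconstruct(per_server_sums) == 4600
```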
There was also a very interesting talk on mitigating server breaches using MPC. The idea is to split cryptographic keys among multiple servers, perhaps running in multiple hosting providers and on multiple operating systems, with a frequent key refresh happening, so that the only meaningful breach is a breach of all systems simultaneously. This appears to be quite applicable for systems where you are encrypting data to be stored inside a database: by not storing the crypto key on the DB server or in the memory of the web servers, you have a system that is significantly harder to compromise. The comparison was made that MPC systems are a software way to achieve many of the properties of an HSM, which made me realize that I'm very overdue for a deep dive on http://aws.amazon.com/cloudhsm/. This talk mentioned the claim that in 2012 $7 billion was lost in HIPAA-covered security breaches, and I would be very glad to see these techniques made available in open source form where they could be as broadly deployed as Linux. Unfortunately it sounded like this was a crypto core rather than a product, and it was being built as a proprietary project where only richer companies would be able to enjoy better security. It's a good reminder of how important it is to fund infrastructure work like this that underpins our very ability to securely communicate.
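For intuition, here's a toy sketch of the key-splitting idea (again my own illustration, not the system presented): the key lives only as XOR shares on separate servers, and the shares can be re-randomized on a schedule without ever reassembling the key, so shares stolen from different refresh epochs are useless.

```python
# Toy proactive key splitting: the real key is the XOR of the shares,
# one share per server. Refreshing XORs the same random mask into two
# shares, changing every stored value while preserving the key, so an
# attacker must compromise all servers within a single refresh window.
import secrets

KEY_LEN = 32

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(key: bytes, n: int) -> list[bytes]:
    shares = [secrets.token_bytes(KEY_LEN) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = xor(last, s)
    shares.append(last)
    return shares

def refresh(shares: list[bytes]) -> list[bytes]:
    """Re-randomize a pair of shares without reconstructing the key."""
    mask = secrets.token_bytes(KEY_LEN)
    new = list(shares)
    new[0] = xor(new[0], mask)
    new[1] = xor(new[1], mask)
    return new

def reconstruct(shares: list[bytes]) -> bytes:
    # Shown only to check the invariant; a deployed system would use the
    # shares inside an MPC protocol rather than reassembling the key.
    out = bytes(KEY_LEN)
    for s in shares:
        out = xor(out, s)
    return out

key = secrets.token_bytes(KEY_LEN)
shares = split(key, 3)
shares = refresh(shares)           # every epoch, the shares change...
assert reconstruct(shares) == key  # ...but the key they encode does not
```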
Next was a talk on multi-linear maps, which are an extension of bilinear maps. The explanation given was that Diffie-Hellman is such a crypto gold mine because you can take values a_i and hide them in g^(a_i), and some tasks are still easy in this representation (any linear/affine function), while others, such as quadratic functions, are hard. Bilinear maps are similar in that quadratics are easy but cubics are hard; multi-linear maps are the result of asking "why stop at 2?", and so they extend the concept to k-linear. There was definitely a point at which things break down, but I didn't manage to catch what the practical limit to the number of dimensions was.
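As a refresher on the "hide it in the exponent" trick that makes this a gold mine, here's a textbook Diffie-Hellman sketch with toy parameters (my own illustration, nothing from the talk): each party keeps its exponent secret, publishes only the group element, and the shared secret falls out of the algebra.

```python
# Textbook Diffie-Hellman over a prime modulus (toy numbers; real
# deployments use 2048-bit-plus moduli or elliptic curves).
# Secret exponents a and b stay private; only g^a and g^b travel.
import secrets

p = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59; far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes g^a: 'a' is hidden in the exponent
B = pow(g, b, p)  # Bob publishes g^b

# Both sides derive the same value, but an eavesdropper seeing only
# (g, p, A, B) faces the computational Diffie-Hellman problem.
assert pow(B, a, p) == pow(A, b, p)  # both equal g^(a*b) mod p
```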
Finally we heard from Dr. Shai Halevi of IBM Research on some very interesting work on encrypted search. This work has been funded by IARPA in order to facilitate the intelligence community searching very large data sets without giving the searchers unrestricted access to the data sets. It was intriguing to see that these techniques could potentially be used for such things as warrant enforcement, where they could dramatically reduce the amount of data that is disclosed in a warrant-based search. Dr. Halevi made the amusing observation that the effect or significance of statistical leakage is entirely dependent on the application, kind of like saying that the effect of a bullet depends on whether it hits anyone. He also called for a theory of leakage composition so that we might be able to reason more usefully about differential privacy.
It was an amazing first day, and I'm still digesting much of what was presented. I'm particularly grateful to Pat Deegan PhD & Associates for funding my travel to this conference, and for recognizing how important it is to participate in discussions about modern crypto and its application in the real world. It's been worth the money so far: I'm buzzing with new product ideas and have several areas for further research. You can be sure I'll be applying what I've learned to our products at PDA to keep the patient data we are entrusted with as secure as possible.
P.S. I decided today to stop slacking and finally start working on a book about achieving HIPAA compliance in a small healthcare startup. Want to help keep me motivated? Sign up here for occasional updates and preview content from the book.
Posted over 11 years ago
KDE Frameworks 5 tech preview shipped last week and we've been packaging furiously. This is all new packaging from scratch; none of the automated scripts we have for KDE SC apply. Tier 1 is in our experimental PPA for Trusty. Special thanks to Scarlett, our newest Kubuntu Ninja, who is doing a stormer.
Posted over 11 years ago
Frameworks 5 Tech Preview has arrived.
KDE Frameworks is a port of kdelibs to Qt 5, turned into modules so you can install only the bits you need. People are often reluctant to add kdelibs to their applications because it brings in too many dependencies. With KDE Frameworks it has been modularised so that much of it is simply extra Qt libraries. This will bring KDE software to a much wider audience. Many parts of kdelibs have just been moved into Qt itself thanks to the open Qt Project.
Binary packages are available in Kubuntu as part of Neon 5 and I'm starting to package them in our experimental PPA.
Posted over 11 years ago
I'm going to FOSDEM for 2014, are you? FOSDEM is a massive free software meeting with more projects than you knew existed. We need help on the KDE stall. We also need visitors in the devroom. Finally, we need KDE people to come and eat pizza on Saturday evening. Add yourself to the wiki page if you want to help KDE at FOSDEM.
Posted over 11 years ago
2013 started with what felt like a failure, but in the end, I believe that the best decision was made. During 2011 and 2012 I worked on and then managed the Unity desktop team. This was a C++ project that brought me back to my hard-core hacker side after four and a half years on Launchpad. The Unity desktop was a C++ project using glib, nux, and Compiz. After bringing Unity to be the default desktop in 12.04 and ushering in the stability and performance improvements, the decision was made to not use it as the way to bring the Ubuntu convergence story forward. At the time I was very close to the Unity 7 codebase and I had an enthusiastic, capable team working on it. The decision was to move forwards with a QML-based user interface. I can see now that this was the correct decision, and in fact I could see it back in January, but that didn't make it any easier to swallow.
I felt that I was at a juncture and I had to move on. Either I stayed with Canonical and took another position or I found something else to do. I do like the vision that Mark has for Ubuntu and the convergence story, and I wanted to hang around for it even if I wasn't going to actively work on the story itself. For a while I was interested in learning a new programming language, and Go was considered the new hotness, so I looked for a position working on Juju. I was lucky to be able to join the juju-core team.
After a two week break in January to go to a family wedding, I came back to work and started reading around Go. I started with the language specification and then read around and started with the Go playground. Then I started with the Juju source.
Go was a very interesting language to move to from C++ and Python. No inheritance, no exceptions, no generics. I found this quite a change. I even blogged about some of these frustrations.
As much as I love the C++ language, it is a huge and complex language, one where you are extremely lucky if you are working with other really competent developers. C++ is the sort of language where you have a huge amount of power and control, but you pay other costs for that power and control. Most C++ code is pretty terrible.
Go, as a contrast, is a much smaller, more compact language. You can keep the entire language specification in your head relatively easily. Some of this is due to specific decisions to keep the language tight and small, and others I'm sure are due to the language being young and immature. I still hope for generics of some form to make it into the language, because I feel that they are a core building block that is missing.
I cut my teeth in Juju on small things. Refactoring here, tweaking there. Moving on to more substantial changes. The biggest bit that leaps to mind is working with Ian to bring LXC containers and the local provider to the Go version of Juju. Other smaller things were adding much more infrastructure around the help mechanism, adding plugin support, refactoring the provisioner, extending the logging, and recently, adding KVM container support.
Now for the obligatory 2014 predictions...
I will continue working on the core Juju product, bringing new and wonderful features that will only be beneficial to that very small percentage of developers in the world who actually deal with cloud deployments.
Juju will gain more industry support outside just Canonical, and will be seen as the easiest way to deploy OpenStack clouds.
I will become more proficient in Go, but will most likely still be complaining about the lack of generics at the end of 2014.
Ubuntu phone will ship. I'm guessing on more than just one device and with more than one carrier. Now I do have to say that these are just personal predictions and I have no more insight into the Ubuntu phone process than anyone outside Canonical.
The tablet form-factor will become more mature, and all the core applications, both those developed by Canonical and all the community contributed core applications, will support form-factor switching on the fly.
The Unity 8 desktop, which will be based on the same codebase as the phone and tablet, will be available on the desktop, and will become the way that people work with the new very high resolution laptops.
Posted over 11 years ago
I'm done triaging old photos; here are some of my favourite Ubuntu-themed ones from 2005 to 2009.
The first Ubuntu conference I went to in Sydney featuring Andreas, one of the Kubuntu originators
The excitable Jeff Waugh, who provided a lot of the character behind GNOME and early Ubuntu. Last seen writing a lament about Canonical's worsening relationship with GNOME, which I must admit to being too lazy to read.
A Kubuntu group photo in Barcelona from 2009, little did I know Barcelona would become a spiritual and practical home for Kubuntu.
It's important to build community in open source; Kubuntu has always used hot tub parties for this, and you won't get that with any other distro.
A Canonical One flight to Montreal. These days I prefer to take the train, which is more environmentally friendly.
The first LTS release was preceded by a polishing sprint in London where we worked on the first version of Ubiquity, the live CD installer.
Back in the day I had to post out every CD by myself. (I still get requests for CDs but we stopped having any physical media some time ago).
Paul taking his Ubuntu evangelising a little too seriously.
Ubuntu summits were often in fancy hotels, and it always surprised me that nobody used the swimming pool; I feel that if life gives you a rooftop swimming pool, it's just ungrateful not to use it.
I like to go for a canoe if possible when I travel the world.
Have a fun holiday, all, and come and join us at Kubuntu if you want to be part of more world adventures.
Posted over 11 years ago
I've also been triaging old photos to make some room on my server, so here's a Christmas present: a retrospective of old KDE photos.
It all started here: my first KDE talk, about Umbrello, my dissertation project. (And if anyone else needs a dissertation to work on, I recommend a KDE project. Take over Umbrello; academics love that UML stuff, it's a guaranteed A.)
An early Linux expo with Jono Bacon, whatever happened to him?
The 2005 desktop architects meeting in Portland with Gnomers
The KDE 4 release party in 2008! Complete with Konqi and Katie.
KDE at FOSDEM in 2008, we're going again this year, make sure you're there!
KDE beach party in Malaga, my web server logs show this photo is a favourite of bloggers
Another expo, back when there was a market for Linux expos
The first real KDE conference a decade ago, Kastle in Nove Hrady
Happy Christmas! Ubuntu retrospective photo blog still to come.