Posted over 8 years ago
As we can read in recent news, VMware has become a gold member of the
Linux Foundation. That gives me - to say the least - very mixed feelings.
One thing to keep in mind: the Linux Foundation is an industry
association; it exists to act in the joint interest of its paying
members. It is not a charity, and it does not act for the public good.
I know and respect that, even though some people sometimes appear to be
confused about its function.
However, allowing an entity like VMware to join, despite their years-long
disrespect for the most basic principles of the FOSS community (such as
following the GPL and its copyleft principle), is really hard to
understand and accept.
I wouldn't have any issue if VMware had (prior to joining the LF)
said: OK, we had some bad policies in the past, but now we fully comply
with the license of the Linux kernel, and we release all
derivative/collective works in source code. This would be a positive
spin: acknowledge past issues, resolve them, become clean, and then
publicly underline your support of Linux by (among other things)
joining the Linux Foundation. I'm not one to hold grudges against
people who accept their past mistakes, fix the present and then move
on. But no, they haven't fixed any issues.
They have had one of the worst track records in terms of intentional
GPL compliance issues for many years, showing outright disrespect for Linux,
the GPL and ultimately the rights of the Linux developers - and without
resolving those issues they join the Linux Foundation? What kind of
message does that send?
It sends the following messages:

- You can abuse Linux, the GPL and copyleft while still being accepted
  amidst the Linux Foundation members.
- The Linux Foundation has no ethical concerns whatsoever about
  accepting such entities without previously asking them to become
  clean.
- VMware has still not understood that Linux and FOSS are about your
  actions, particularly the choices you make about how to technically
  work with the community, and not against it.
So all in all, I think this move has seriously damaged the image of both
entities involved. I wouldn't have expected anything different from VMware, but I
would have hoped the Linux Foundation had some form of standards as to
which entities they permit amongst their ranks. I guess I was being
overly naive :(
It's a slap in the face of every developer who writes code not because
he gets paid, but because it is rewarding to know that copyleft will
continue to ensure the freedom of related code.
UPDATE (March 8, 2017):
I was mistaken in my original post in that VMware didn't just join,
but was a Linux Foundation member already before; it is "just" their
upgrade from silver to gold that made the news recently. I stand
corrected. Still, it doesn't make it any better that they are involved
inside the LF while stepping over the lines of license
compliance.
UPDATE2 (March 8, 2017):
As some people pointed out, there is no verdict against VMware. Yes,
that's true. But the mere fact that they would rather distribute
derivative works of GPL-licensed software and fight this out in court
with an armada of lawyers (instead of simply complying with the license
like everyone else) is sad enough. By the time there is a final
verdict, the product will be EOL. That's probably their strategy to
begin with :/
Posted over 8 years ago
For those of you who don't know what the tinkerphones/OpenPhoenux GTA04 is: It is a
'professional hobbyist' hardware project (with at least public
schematics, even if not open hardware in the sense that editable
schematics and PCB design files are published) creating updated
mainboards that can be used to upgrade Openmoko phones. They fit into
the same enclosure and can use the same display/speaker/microphone.
What the GTA04 guys have been doing for many years is close to a miracle
anyway: Trying to build a modern-day smartphone in low quantities,
using off-the-shelf components available in those low quantities, and
without a large company with its associated financial backing.
Smartphones are complex because they are highly integrated devices. A
seemingly unlimited number of components is squeezed into the tiniest
form factors. This leads to complex circuit boards with many layers
that take a lot of effort to design, and are expensive to build in low
quantities. The fine-pitch components mandated by the integration
density are another issue.
Building the original GTA01 (Neo1973) and GTA02 (FreeRunner) devices at
Openmoko, Inc. must seem like a piece of cake compared to what the GTA04
guys are up to. We had a team of engineers who were at least familiar
with feature phone design before, and we had the backing of a consumer
electronics company with all its manufacturing resources and expertise.
Nevertheless, a small group of people around Dr. Nikolaus Schaller has
been pushing the limits of what you can do in a small for-fun
project, and they have my utmost respect. Well done!
Unfortunately, there is bad news. Manufacturing of their latest
generation of phones (GTA04A5) has been stopped due to massive soldering
problems with the TI OMAP3 package-on-package (PoP).
Those PoPs are basically "RAM chip soldered onto the CPU, and the stack
of both soldered to the PCB". This is used to save PCB footprint and to
avoid having to route tons of extra (sensitive, matched) traces between
the SDRAM and the CPU.
According to the mailing list posts, it seems to be incredibly difficult
to solder the PoP stack due to the way TI has designed the packaging of
the DM3730. If you want more gory details, see
this post
and yet another post.
It is very sad to see that what appear to be bad design choices at TI
are going to bring the GTA04 project to a halt. The financial hit of
having only 33% yield is already more than the small community can take,
let alone the unused parts that are now in stock, or the cost of further
experiments related to the manufacturability of those chips.
If there's anyone with hands-on manufacturing experience on the DM3730
(or similar) TI PoP reading this: Please reach out to the GTA04 guys and
see if there's anything that can be done to help them.
UPDATE (March 8, 2017):
In an earlier post I asserted that the GTA04 is open hardware
(which I actually believed up to that point), until some readers
pointed out to me that it isn't. It's sad it isn't, but still it has
my sympathies.
Posted over 8 years ago
The recent Amazon S3 outage should make a strong argument that centralized services have severe issues, technically but from a business point of view as well (you don’t own the destiny of your own product!), and I wholeheartedly agree with “There is no cloud, it’s only someone else’s computer”.
Still, from time to time I like to see beyond my own nose (and I prefer the German version of that proverb!), and my current exploration involves ReactJS (which I like), TensorFlow (which I don’t have enough time for) and generally looking at Docker/Mesos/Kubernetes to manage services with zero-downtime rolling updates. I have browsed and read the documentation over the last year, I like the concepts (services, replication controllers, pods, agents, masters) and I have planned how to use it, but because it doesn’t support SCTP I never looked into actually using it.
Microsoft Azure has the Azure Container Service, and since the end of February it has been possible to create Kubernetes clusters with it. This can be done using v2 of the Azure CLI or through the portal. I finally decided to learn some new tricks.
Azure asks for a clientId and password; I entered garbage and hoped the necessary accounts would be created. It turns out that the portal neither creates them nor does a sanity check of these credentials, and as a result the master will not start properly when it boots. Microsoft support was very efficient and quick to point that out, but I wish the portal would do a sanity check. So make sure to create a service principal first and use it correctly. I ended up creating it on the CLI.
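A rough sketch of what that looks like with the v2 CLI (the resource group, cluster and principal names here are made up, and the exact flags may differ between CLI versions):
# create a service principal; note the appId (clientId) and password it prints
az ad sp create-for-rbac --name k8s-demo-sp
# create the Kubernetes cluster using those credentials instead of garbage ones
az acs create --orchestrator-type=kubernetes --resource-group k8s-demo-rg \
    --name k8s-demo --service-principal <appId> --client-secret <password> \
    --generate-ssh-keys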
I re-created the cluster and executed kubectl get nodes. It started to look better, but one agent was missing from the list of nodes. After logging in I noticed that kubelet was not running, and trying to start it by hand showed that docker.service was missing. Why it is missing is probably for Microsoft engineering to figure out, but Microsoft support gave me:
# discard the cached cloud-init state of this instance
sudo rm -rf /var/lib/cloud/instances
# re-run the cloud-init stages (with debug output) to re-provision the node
sudo cloud-init -d init
sudo cloud-init -d modules -m config
sudo cloud-init -d modules -m final
# restart kubelet now that docker.service exists again
sudo systemctl restart kubelet
After these commands my system had a docker.service, kubelet would start and the machine was listed as a node. Commands like kubectl expose are well integrated and use a public IPv4 address that is different from the one used for ssh/management. So all in all it was quite easy to get a cluster up, and I am sure that some of the hiccups will be fixed…
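To illustrate the kubectl expose integration mentioned above, a minimal sketch (the nginx deployment is just a made-up stand-in workload):
# run a throw-away nginx deployment
kubectl run nginx --image=nginx --port=80
# expose it through an Azure load balancer; the EXTERNAL-IP that eventually
# shows up is the public IPv4 address, separate from the ssh/management one
kubectl expose deployment nginx --type=LoadBalancer --port=80
kubectl get service nginx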
Posted over 8 years ago
In May 2016 we got the GTP-U tunnel encapsulation/decapsulation
module developed by Pablo Neira, Andreas Schultz and myself merged into
the 4.8.0 mainline kernel.
During the second half of 2016, the code basically stayed untouched. In
early 2017, several patch series by (at least) three authors have been
published on the netdev mailing list for review and merge.
This poses the very valid question of how we test those (sometimes
quite intrusive) changes. Setting up a complete cellular network with
either GPRS/EGPRS or even UMTS/HSPA is possible using OsmoSGSN and
related Osmocom components. But it's of course a luxury that not many
Linux kernel networking hackers have, as it involves the availability of
a supported GSM BTS or UMTS hNodeB. And even if that is available,
there's still the issue of having a spectrum license, or a wired setup
with coaxial cable.
So as part of the recent discussions on netdev, I tested and described a
minimal test setup using libgtpnl, OpenGGSN and sgsnemu.
This setup will start a mobile station + SGSN emulator inside a Linux
network namespace, which talks GTP-C to OpenGGSN on the host, as well as
GTP-U to the Linux kernel GTP-U implementation.
In case you're interested, feel free to check the following wiki page:
https://osmocom.org/projects/linux-kernel-gtp-u/wiki/Basic_Testing
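The network namespace part is plain iproute2 plumbing; roughly something like the following (interface names and addresses are made up here, and the actual OpenGGSN/sgsnemu invocations are documented on the wiki page):
# namespace for the MS+SGSN emulator, connected to the host via a veth pair
ip netns add sgsn
ip link add veth-host type veth peer name veth-sgsn
ip link set veth-sgsn netns sgsn
ip addr add 192.168.42.1/24 dev veth-host
ip link set veth-host up
ip netns exec sgsn ip addr add 192.168.42.2/24 dev veth-sgsn
ip netns exec sgsn ip link set veth-sgsn up
# then run OpenGGSN (using the kernel GTP-U module via libgtpnl) on the host,
# and sgsnemu inside the namespace:
# ip netns exec sgsn sgsnemu <options as per the wiki page>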
This is of course just for manual testing, and for functional (not
performance) testing only. It would be great if somebody would pick up
on my recent mail containing some suggestions about an automatic
regression testing setup for the kernel GTP-U code. I have way
too many spare-time projects in desperate need of some attention to work
on this myself. And unfortunately, none of the telecom operators (who
are the ones benefiting most from a Free Software accelerated GTP-U
implementation) seems to be interested in at least co-funding or
otherwise contributing to this effort :/
Posted over 8 years ago
Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for
my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.
Project DeepSpeech
So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.
You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.
The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.
Getting Involved
We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need a powerful GPU(s) with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.
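For the impatient, getting a checkout probably boils down to something like the following; the repository URL and the GPU-enabled TensorFlow package name are my assumptions, so treat the project READMEs as authoritative:
# install TensorFlow (GPU build) as described on the TensorFlow site
pip install tensorflow-gpu
# check out the project and follow its README
git clone https://github.com/mozilla/DeepSpeech
cd DeepSpeech
pip install -r requirements.txt    # assuming the repo ships a requirements file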
One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.
Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.
Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.
And Finally
Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.
Posted over 8 years ago
I've recently attended a seminar that (among other topics) also covered
RF interference hunting. The speaker was talking about various
real-world cases of RF interference and illustrating them in detail.
Of course everyone who has any interest in RF or cellular will know
about the fundamental issues of radio frequency interference. For the
biggest part, you have:

- cells of the same operator interfering with each other due to too
  frequent frequency re-use, adjacent channel interference, etc.
- cells of different operators interfering with each other due to
  intermodulation products and the like
- cells interfering with cable TV or terrestrial TV
- DECT interfering with cells
- cells or microwave links interfering with SAT-TV reception
- all types of general EMC problems
But what the speaker of this seminar covered was actually a cellular
base-station being re-broadcast all over Europe via a commercial
satellite (!).
It is a well-known fact that most satellites in the sky are basically
just "bent pipes", i.e. they consist of a RF receiver on one frequency,
a mixer to shift the frequency, and a power amplifier. So basically
whatever is sent up on one frequency to the satellite gets
re-transmitted back down to earth on another frequency. This is abused
for "satellite hijacking" or "transponder hijacking" and has been covered
for decades in various publications.
Ok, but how does cellular relate to this? Well, apparently some people
are running VSAT terminals (bi-directional satellite terminals) with
improperly shielded or broken cables/connectors. In that case, the RF
emitted from a nearby cellular base station leaks into that cable, and
will get amplified + up-converted by the block up-converter of that VSAT
terminal.
The bent-pipe satellite subsequently picks this signal up and
re-transmits it all over its coverage area!
I've tried to find some public documents about this, and there's
surprisingly little public information about this phenomenon.
However, I could find a slide set from SES, presented at a
Satellite Interference Reduction Group: Identifying Rebroadcast (GSM).
It describes a surprisingly manual and low-tech approach to hunting down
the source of the interference by using an old Nokia net-monitor phone
to display the MCC/MNC/LAC/CID of the cell. Even in 2011 there were
already open source projects such as airprobe that could have done the
job based on sampled IF data. And I'm not even starting to consider
proprietary tools.
It should be relatively simple to have an SDR that you can tune to a
given satellite transponder, and which would then look for any
GSM/UMTS/LTE carrier within its spectrum and dump their identities in a
fully automatic way.
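For the GSM part alone, something in the spirit of the following would already get close, using gr-gsm (the successor of airprobe) as a stand-in; the tool names and options are from memory, and the satellite transponder down-conversion is hand-waved away entirely:
# scan a band for GSM carriers and print their ARFCN/frequency and identities
grgsm_scanner -b GSM900
# or lock onto a single carrier and decode its broadcast channel
grgsm_livemon -f 935200000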
But then, maybe it really doesn't happen all that often after all to
justify such a development...
Posted over 8 years ago
Ever since the good old days of the late 1980s - and to a surprising
extent even still today - telecom signaling traffic is carried
over circuit-switched SS7 with its TDM lines as the physical layer, and
not over an IP/Ethernet based transport.
When
Holger first created OsmoBSC, the BSC-only version of OpenBSC some
7-8 years ago, he needed to implement a minimal subset of SCCP wrapped
in TCP called SCCP Lite. This was due to the simple fact that the MSC
against which it had to operate implemented this non-standard protocol
stacking, which was developed + deployed before the IETF SIGTRAN WG's
M3UA or SUA specifications came around. But even after those were specified
in 2004, the 3GPP didn't specify how to carry A over IP in a standard
way until the end of 2008, when a first A interface over IP study
was released.
As time passes, more modern MSCs of course still implement classic
circuit-switched SS7, but appear to have dropped SCCPlite in favor of
real AoIP as meanwhile specified by 3GPP. So it's time to add this to
the Osmocom universe and OsmoBSC.
A couple of years ago (2010-2013) I implemented both classic SS7
(MTP2/MTP3/SCCP) as well as SIGTRAN stackings (M2PA/M2UA/M3UA/SUA) in
Erlang. The result has been used in some production deployments, but
only with a relatively limited feature set. Unfortunately, this code
has not received any contributions in the time since, and I have to say
that as an open source community project, it has failed. Also, while
Erlang might be fine for core network equipment, running it on a BSC
really is overkill. Keep in mind that we often run OpenBSC on
really small ARM926EJ-S based embedded systems, much more resource
constrained than any single smartphone of the last decade.
In the meantime (2015/2016) we also implemented some minimal SUA support
for interfacing with UMTS femto/small cells via Iuh (see OsmoHNBGW).
So in order to proceed to implement the required
SCCP-over-M3UA-over-SCTP stacking, I originally thought: well, take
Holger's old SCCP code, remove it from the IPA multiplex below, and
stack it on top of a new M3UA codebase that is partially copied from SUA.
However, this falls short of the goals in several ways:

- The application shouldn't care whether it runs on top of SUA or SCCP;
  it should use a unified interface towards the SCCP provider.
  OsmoHNBGW and the SUA code already introduce such an interface based on
  the SCCP-User-SAP implemented using Osmocom primitives (osmo_prim).
  However, the old OsmoBSC/SCCPlite code doesn't have such an abstraction.
- The code should be modular and reusable for other SIGTRAN stackings
  as required in the future.
So I found myself sketching out what needs to be done and I ended up
pretty much with a re-implementation of large parts. Not quite fun, but
definitely worth it.
The strategy is:

- Implement the SCCP SCOC state machines for connection-oriented SCCP
  (of which the Iu and A interfaces are probably the only users) using
  Osmocom finite state machines (osmo_fsm).
- Migrate the existing SUA code on top of that, maintaining the existing
  osmo_prim based SCCP User SAP.
- Implement SCCP-to-SUA and vice-versa message transcoding, to make sure
  the bulk of the code has to deal only with one message format
  (parsed SUA).
- Introduce an MTP SAP at the lower boundary of the SCCP code.
- Implement xUA ASP and AS state machines using osmo_fsm, and add
  ASPTM/ASPSM support to SUA (which was missing so far).
- Implement M3UA using the xUA ASP and AS FSMs as well as the general
  xUA message encoder/decoder, offering the MTP SAP towards SCCP.

And then finally stack all those bits on top of each other, rendering a
fairly clean and modern implementation that can be used with the IuCS of
the virtually unmodified OsmoHNBGW, OsmoCSCN and OsmoSGSN for testing.
Next steps in the direction of AoIP are:

- implementation of the MTP-SAP based on the IPA transport
- binding the new SCCP code on top of that
- converting the OsmoBSC code base to use the SCCP-User-SAP for its
  signaling connections

From that point onwards, OsmoBSC doesn't care anymore whether it
transports the BSSAP/BSSMAP messages of the A interface over
SCCP/IPA/TCP/IP (SCCPlite), SCCP/M3UA/SCTP/IP (3GPP AoIP), or even
something like SUA/SCTP/IP.
However, the 3GPP AoIP specs (unlike SCCPlite) actually modify the
BSSAP/BSSMAP payload. Rather than using Circuit Identifier Codes and
then mapping the CICs to UDP ports based on some secret conventions,
they actually encapsulate the IP address and UDP port information for
the RTP streams. This is of course the cleaner and more flexible
approach, but it means we'll have to make some further changes inside the
actual BSC code to accommodate this.
Posted over 8 years ago
When implementing any kind of communication protocol, one always dreams
of some existing test suite that one can simply run against the
implementation to check if it performs correctly in at least those use
cases that matter to the given application.
Of course in the real world, there rarely are protocols where this is
true. If test specifications exist at all, they are often just very
abstract texts for human consumption that you as the reader should
implement yourself.
For some (by far not all) of the protocols found in cellular networks,
every so often I have seen some formal/abstract machine-parseable test
specifications. Sometimes it was TTCN-2, and sometimes TTCN-3.
If you haven't heard about TTCN-3, it is basically a way to create
functional tests in an abstract description (textual + graphical), and
then compile that into an actual executable test suite that you can run
against the implementation under test.
However, when I last did some research into this several years ago, I
couldn't find any Free / Open Source tools to actually use those
formally specified test suites. This is not a big surprise, as even
much more fundamental tools for many telecom protocols are missing, such
as good/complete ASN.1 compilers, or even CSN.1 compilers.
To my big surprise I now discovered that Ericsson had released their
(formerly internal) TITAN TTCN3 Toolset
as Free / Open Source Software under EPL 1.0. The project is even part
of the Eclipse Foundation. Now I'm certainly not a friend of Java or
Eclipse by any means, but well, for running tests I'd certainly not
complain.
The project also doesn't seem like it was a one-time code-drop but seems
very active, with many repositories on GitHub. For example, for the core
module, titan.core shows
plenty of activity on an almost daily basis. Also, binary releases for
a variety of distributions are made available. They
even have a video showing the installation ;)
If you're curious about TTCN-3 and TITAN, Ericsson also have made
available a great 200+ pages slide set about TTCN-3 and TITAN.
I haven't yet had time to play with it, but it definitely is rather high
on my TODO list to try.
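From a quick look at the documentation, running such a test suite with TITAN seems to boil down to roughly the following; the module names are made up and the tool names are quoted from memory, so treat this as a sketch rather than a recipe:
# generate a Makefile from the TTCN-3 modules and test port sources
ttcn3_makefilegen MyTestSuite.ttcn MyTestPort.cc
# translate the TTCN-3 to C++ and build the executable test suite
make
# run it via the main controller with a configuration file
ttcn3_start MyTestSuite MyTestSuite.cfg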
ETSI provides a couple of test suites in TTCN-3 for protocols like
DIAMETER, GTP2-C, DMR, IPv6, S1AP, LTE-NAS, 6LoWPAN, SIP, and others at
http://forge.etsi.org/websvn/ (It's also the first time I've seen that
ETSI has an SVN server. Everyone else is using git these days, but yes,
a revision control system rather than periodic ZIP files is definitely
big progress. They should do that for their reference codecs and ASN.1
files, too.)
I'm not sure when I'll get around to it. Sadly, there is no TTCN-3 for
SCCP, SUA, M3UA or any SIGTRAN related stuff, otherwise I would want to
try it right away. But it definitely seems like a very interesting
technology (and tool).
Posted over 8 years ago
Last weekend I had the pleasure of attending FOSDEM 2017. It has for
many years probably been the most exciting event dedicated exclusively
to Free Software.
My personal highlights (next to meeting plenty of old and new friends)
in terms of the talks were:
- 20 Years of Linux Virtual Memory by MM-Guru Andrea Arcangeli
- GPU-Enabled Polyphase Filterbanks by Jan Kraemer
- Virtual multi-antenna arrays for estimating the bearing of radio transmitters by Francois Quitin
- Secure Microkernel for Deeply Embedded Devices by Jim "jserv" Huang
- A discussion of Fedora's Legal state by Tom Callaway
- Radio Lockdown Directive by Max Mehl
I attended Georg Greve's OpenPOWER talk but was not so excited by it. It was a
great talk, and it is an important topic, but the engineer in me would
have hoped for some actual beefy technical stuff. But well, I was just
not the right audience. I had heard about OpenPOWER quite some time ago
and have been following it from a distance.
The LoRaWAN talk
couldn't have been any less technical, despite mentioning technical,
political and cultural aspects in its title. But then, well, just recently
33C3 had the most exciting LoRa PHY Reverse Engineering Talk by Matt
Knight.
Other talks whose recordings I still want to watch one of these days:
- Smart Card Forwarding
- AF_KTLS - TLS/DTLS Linux kernel module
- Overview of gr-inspector
- Frosted Embedded POSIX OS
Posted over 8 years ago
I'm very happy that in 2017, we will have the first ever technical
conference on the Osmocom cellular infrastructure projects.
For many years, we have had a small, invitation-only event by Osmocom
developers for Osmocom developers, called OsmoDevCon.
This was fine for
the early years of Osmocom, but during the last few years it became
apparent that we also need a public event for our many users. Those
range from commercial cellular operators to community based efforts like
Rhizomatica, and of course include the many
research/lab type users with whom we started.
So now we'll have the public OsmoCon on April 21st, back-to-back with
the invitation-only OsmoDevCon from April 22nd through 23rd.
I'm hoping we can bring together a representative sample of our user
base at OsmoCon 2017 in April. Looking forward to meeting you all. I hope
you're also curious to hear more from other users, and of course the
development team.
Regards,
Harald