
News

Posted over 3 years ago
January 2022 was the sunniest January I’ve ever experienced. So I spent its precious weekends mostly climbing around in the outside world, and the weekdays preparing for the enormous Python 3 migration that one of Codethink’s clients is embarking on. Since I discovered Listenbrainz, I always wanted to integrate it with Calliope, with two main goals. The first, to use an open platform to share and store listen history rather than the proprietary Last.fm. And the second, to have an open, neutral place to share playlists rather than pushing them to a private platform like Spotify or Youtube. Over the last couple of months I found time to start that work, and you can now sync listen history and playlists with two new cpe listenbrainz-history and cpe listenbrainz commands. So far playlists can only be exported *from* Listenbrainz, and the necessary changes to the pylistenbrainz binding are still in review, but it’s a nice start.
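At its core, a listen-history sync is a small transformation step: take a raw listen as the ListenBrainz API returns it and flatten it into a record suitable for a local history or playlist file. This is an illustrative sketch only, not Calliope's actual code: the `track_metadata` field names follow the public ListenBrainz API, while `to_history_item` and its output shape are invented for the example.

```python
# Hedged sketch: flattening a ListenBrainz listen into a simple history
# record. The input shape (track_metadata with artist_name/track_name)
# mirrors the ListenBrainz API; the output shape is invented for this
# example and is NOT Calliope's real playlist format.

def to_history_item(listen):
    """Flatten one ListenBrainz listen into a plain dict."""
    meta = listen["track_metadata"]
    return {
        "creator": meta["artist_name"],
        "title": meta["track_name"],
        "listened_at": listen["listened_at"],  # Unix timestamp
    }

# Example listen, shaped like one entry from the API's listens list.
listen = {
    "listened_at": 1643673600,
    "track_metadata": {
        "artist_name": "Boards of Canada",
        "track_name": "Roygbiv",
    },
}

print(to_history_item(listen))
```

A real sync would fetch the listens with the pylistenbrainz binding mentioned above and write one such record per line to a local file.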
Posted over 3 years ago
It is that time of the year again when we start gathering ideas and mentors for Google Summer of Code. Google Summer of Code 2022 will bring some changes. Our main highlights are: Expanded eligibility: the program will no longer be solely focused on university students or recent graduates. Multiple sizes of projects: medium-sized projects (~175 hours) and large projects (~350 hours). Read Expanding Google Summer of Code in 2022 | Google Open Source Blog for more info. Please submit your project ideas as issues in our GitLab repository by March 1st. Make sure you answer all the questions in the issue template (Project-Proposal template). The GNOME Foundation recognizes that mentoring is a time-consuming effort, and for this reason we will be giving accepted mentors the option to receive the $500 USD stipend that Google pays the organization for each contributor. Mentors can choose to turn the stipend into a donation to the GNOME Foundation. Some payment restrictions may apply (please contact us with questions). Proposals will be reviewed by the GNOME GSoC Admins and posted on our Project Ideas page. If you have any doubts, please don’t hesitate to contact the GNOME GSoC Admins on this very same forum or on Matrix in the channel #soc:gnome.org
Posted over 3 years ago by [email protected] (Peter Hutterer)
After roughly 20 years and counting up to 0.40 in release numbers, I've decided to call the next version of the xf86-input-wacom driver the 1.0 release. [1] This cycle has seen a bulk of development (>180 patches), which is roughly as much as the last 12 releases together. None of these patches actually added user-visible features, so let's talk about technical debt and what turned out to be an interesting way of reducing it. The wacom driver's git history goes back to 2002 and the current batch of maintainers (Ping, Jason and I) have all been working on it for one to two decades. It used to be a Wacom-only driver but with the improvements made to the kernel over the years the driver should work with most tablets that have a kernel driver, albeit some of the more quirky niche features will be more limited (but your non-Wacom devices probably don't have those features anyway). The one constant was always: the driver was extremely difficult to test, something common to all X input drivers. Development is a cycle of restarting the X server a billion times; testing is mostly plugging hardware in and moving things around in the hope that you can spot the bugs. On a driver that doesn't move much, this isn't necessarily a problem. Until a bug comes along that requires some core rework of the event handling - in the kernel, libinput and, yes, the wacom driver. After years of libinput development, I wasn't really in the mood for the whole "plug every tablet in and test it, for every commit". In a rather caffeine-driven development cycle [2], the driver was separated into two logical entities: the core driver and the "frontend". The default frontend is the X11 one, which is now a relatively thin layer around the core driver parts, primarily to translate events into the X Server's API. So, not unlike libinput + xf86-input-libinput in terms of architecture.
In ASCII art:

                   +--------------------+   |
                   |                    |   | big giant
/dev/input/event0->| core driver | x11  |-->| X server
                   +--------------------+   | process
                                            |

Now, that logical separation means we can have another frontend which I implemented as a relatively light GObject wrapper and is now a library creatively called libgwacom:

                   +-----------------------+
                   |                       |
/dev/input/event0->| core driver | gwacom  |--> tools or test suites
                   +-----------------------+

This isn't a public library or API and it's very much focused on the needs of the X driver, so there are some peculiarities in there. What it allows us though is a new wacom-record tool that can hook onto event nodes and print the events as they come out of the driver. So instead of having to restart X and move and click things, you get this:

$ ./builddir/wacom-record
wacom-record:
  version: 0.99.2
  git: xf86-input-wacom-0.99.2-17-g404dfd5a
  device:
    path: /dev/input/event6
    name: "Wacom Intuos Pro M Pen"
  events:
  - source: 0
    event: new-device
    name: "Wacom Intuos Pro M Pen"
    type: stylus
    capabilities:
      keys: true
      is-absolute: true
      is-direct-touch: false
      ntouches: 0
      naxes: 6
      axes:
      - {type: x        , range: [    0, 44800], resolution: 200000}
      - {type: y        , range: [    0, 29600], resolution: 200000}
      - {type: pressure , range: [    0, 65536], resolution: 0}
      - {type: tilt_x   , range: [  -64,    63], resolution: 57}
      - {type: tilt_y   , range: [  -64,    63], resolution: 57}
      - {type: wheel    , range: [ -900,   899], resolution: 0}
  ...
  - source: 0
    mode: absolute
    event: motion
    mask: [ "x", "y", "pressure", "tilt-x", "tilt-y", "wheel" ]
    axes: { x: 28066, y: 17643, pressure: 0, tilt: [ -4, 56], rotation: 0, throttle: 0, wheel: -108, rings: [ 0, 0] }

This is YAML, which means we can process the output for comparison or just to search for things. A tool to quickly analyse data makes for faster development iterations, but it's still a far cry from reliable regression testing (and writing a test suite is a daunting task at best).
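Because the recording is plain text, even trivial scripting is enough to compare runs or pull out event sequences. A minimal sketch, deliberately using naive line matching rather than a real YAML parser; the field names follow the wacom-record sample output, but the recording string here is shortened and made up for the example:

```python
# Hedged sketch: pull the sequence of event types out of a wacom-record
# style recording. Line-based matching only; a real comparison tool would
# parse the YAML properly.

RECORDING = """\
- source: 0
  event: new-device
  name: "Wacom Intuos Pro M Pen"
- source: 0
  mode: absolute
  event: motion
- source: 0
  mode: absolute
  event: button
"""

def event_types(recording):
    """Return the sequence of 'event:' values in a recording, in order."""
    return [line.split("event:", 1)[1].strip()
            for line in recording.splitlines()
            if line.strip().startswith("event:")]

print(event_types(RECORDING))  # ['new-device', 'motion', 'button']
```

Diffing two such sequences from before and after a driver change is already a crude regression check.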
But one nice thing about GObject is that it's accessible from other languages, including Python. So our test suite can be in Python, using pytest and all its capabilities, plus all the advantages Python has over C. Most driver testing comes down to: create a uinput device, set up the driver with some options, push events through that device and verify they come out of the driver in the right sequence and format. I don't need C for that. So there's a pull request sitting out there doing exactly that - adding a pytest test suite for a 20-year-old X driver written in C. That this is a) possible and b) a lot less work than expected got me quite unreasonably excited. If you do have to maintain an old C library, maybe consider whether it's possible to do the same, because there's nothing like the warm fuzzy feeling a green tick on a CI pipeline gives you. [1] As scholars of version numbers know, they make as much sense as your stereotypical uncle's facebook opinion, so why not. [2] The Colombian GDP probably went up a bit
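The push-events-in, check-events-out pattern is easy to picture. The sketch below is runnable on its own, but fake_driver is a stand-in for the real uinput + libgwacom plumbing, so only the shape of the test is real:

```python
# Sketch of the test pattern: feed a known event sequence in, assert the
# sequence that comes out. fake_driver is an invented stand-in for the
# real pipeline and simply passes absolute coordinates through.

def fake_driver(moves):
    """Pretend driver: emits one motion event per (x, y) input."""
    return [{"event": "motion", "x": x, "y": y} for (x, y) in moves]

def test_motion_sequence():
    # pytest would collect this test_* function automatically;
    # it also runs as a plain function.
    for moves in ([(100, 200)], [(0, 0), (44800, 29600)]):
        out = fake_driver(moves)
        assert [e["event"] for e in out] == ["motion"] * len(moves)
        assert [(e["x"], e["y"]) for e in out] == moves

test_motion_sequence()
```

In the real suite, fake_driver's role is played by a uinput device on one end and the driver's output events on the other; the assertions look much the same.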
Posted over 3 years ago by [email protected] (Jakub Steiner)
Since the start of the year I’ve been doing weeklybeats. So far it’s been possible mainly thanks to the Dirtywave M8’s ability to be picked up and instantly turned on to start creating. I have to admit to mostly composing in bed (and only occasionally waking up my better half by shaking to the beat). Fugue on Soundcloud. I’m not going to be spamming the planet with every track, but I do want to share some that I feel worked out. This one is a jam done on the Digitakt (which I actually modified to run on a li-ion battery) with the M8 doing most of the heavy lifting. The track started by setting up a Fugue Machine-like sequence on the M8. A friend of mine suggested spicing up the ambient with a bit of a beat, which I used the Digitakt for. I now map the mixer on the M8 onto the 8 encoders of the Digitakt, but for the jam I was still using the internal keypad. Fugue I really enjoy stepping back into my tracking shoes after two decades, especially since I ran into the old Buzz crew on the Dirtywave Discord server. Shout out to Ilya and Noggin’ who’ve made my re-entry to music super enjoyable.
Posted over 3 years ago
Today, OBS Studio published its 27.2 release. With this release, besides the always-welcome bugfixes and improvements, there’s one change in particular that makes me super excited: this is the first release officially published to Flathub! Flathub joins OBS Studio’s Ubuntu PPA in the list of official builds. On Ubuntu, both can be installed and used without any major annoyance, since Flatpak can easily be installed there – though it would be great if Flatpak was distributed by default on Ubuntu, but oh well, such is life. For other Linux distributions, especially the ones not based on Debian, the Flathub package is probably the easiest one to install, and certainly the most complete.

Official Build

Becoming an official build is not only a badge or a community recommendation, in OBS Studio’s case. It brings features only enabled in official builds. The first and most obvious one is services integration. OBS Studio builds its Flatpak package on its own CI, and enables this feature using private keys that aren’t otherwise available to distribution packages. This build is then published on Flathub directly. ¹ Another benefit is that the OBS Studio community can effectively enable and support a lot more Linux users to use it the way the community envisioned. People on Fedora, Arch, Gentoo, Endless OS, Debian, elementary OS, and dozens of other distros can use a complete, featureful, and supported build of OBS Studio. In many cases, these distros come with Flatpak and even Flathub by default, and OBS Studio can be installed with a simple search in their app centers. It couldn’t possibly be easier than that.

Wild Packaging

In addition to enabling services integration, Flatpak makes it much easier for OBS Studio to package its complicated dependencies.
For example, OBS Studio needs to patch CEF internally for it to be used as the browser source and browser docks, and this makes it pretty difficult to package using traditional packages, since it could conflict with the upstream CEF package. FFmpeg is another case of a patched dependency. Unfortunately, many distro packages don’t ship with browser integration, and don’t use the patched FFmpeg. Perhaps even worse, some unofficial packages add many third-party unofficial plugins by default. This vast array of packaging formats enabling different features, and shipping different plugins, makes it much harder for the support volunteers to support Linux users properly. Speaking for myself, even when people were using the Flathub package before it was made official, the fact that it was a Flatpak made it significantly easier to track down bugs. And, interestingly, it made it much easier to track down bugs in other packages, such as distros mispackaging or not installing all portal dependencies correctly. In some sense, Flatpak may have helped distributions package dependencies better!

Plugin Support

Plugins are the heart and soul of OBS Studio. People use a variety of third-party plugins, and supporting them is important for a complete experience. Flatpak has support for extensions, which are used for audio plugins, for example. OBS Studio is able to detect plugins installed via Flatpak. ² It is easy to install OBS Studio plugins distributed through Flathub using GNOME Software. The number of OBS Studio plugins available on Flathub is still small, but their packaging is robust and they’re well supported. I’m confident in these plugins – many of which are maintained by their own authors! – and I’m hopeful that we’ll see more of them showing up there in the future. If you are a plugin author who wants your plugin to show up on app stores like the screenshot above, I’ve written some documentation on the process.
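Plugin detection like this is typically wired up through Flatpak's extension mechanism: the application's manifest declares an extension point that plugin packages then target. As a rough illustration only - the extension point name matches the plugin IDs published on Flathub (com.obsproject.Studio.Plugin), but the directory value and the surrounding keys are assumptions for this sketch, not copied from OBS Studio's real manifest:

```json
{
  "id": "com.obsproject.Studio",
  "add-extensions": {
    "com.obsproject.Studio.Plugin": {
      "directory": "plugins",
      "subdirectories": true,
      "no-autodownload": true,
      "autodelete": true
    }
  }
}
```

A plugin published on Flathub under an ID beneath that extension point gets mounted into the application's sandbox, which is how a single official build can pick up separately-installed plugins.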
It should be relatively easy to do that, but if you have any questions about it, please let me know. I’d like to make this as well documented and simple as possible, so that we can focus less on technicalities, and more on the quality of the packaging and metadata. The Flatpak community is also welcoming and willing to help everyone, so feel free to join their Matrix room.

Conclusion

It is super exciting to me that OBS Studio is being published on Flathub officially. I think it makes it so much more convenient to install on Linux! I’m also looking forward to the recent work on introducing payments and official badges on Flathub, and will make sure that OBS Studio gains such a badge as soon as it’s available. Other than this Flatpak-related news, there are more PipeWire changes in the pipeline for future OBS Studio releases, and some of these changes can be really exciting too! Stay tuned as we progress on the PipeWire front; I’m sure it will enable so many interesting features. As I write this article, I’m starting to feel like the dream of an actual Linux platform is slowly materializing, right here, in front of us. OBS Studio is spearheading this front, and helping us find what the platform is still missing for wider adoption, but the platform is real now. It’s tangible; apps can target it. I’d like to thank all the contributors that made this possible, and in particular, tytan652 and columbarius. Nothing I wrote about here would have been possible without your contributions. I’d like to also thank the Flathub admins for helping figure out the publishing pipeline. Finally, I’d like to thank the OBS Studio community for accepting these contributions, and for all the patience while we figured things out. ¹ – Massive thanks to tytan652 for implementing secret keys support in flatpak-builder ² – If a plugin is not available on Flathub, you can still install it manually. This is dangerous though, and can make OBS Studio crash. If you want to see a plugin on Flathub, it is always best to politely suggest it to the author!
Posted over 3 years ago
In 2017, I was attending FOSDEM when GNOME announced that I was to become the new Executive Director of the Foundation. Now, nearly 5 years later, I’ve decided the timing is right for me to step back and for GNOME to start looking for its next leader. I’ve been working closely with Rob and the rest of the board to ensure that there’s an extended and smooth transition, and that GNOME can continue to go from strength to strength. GNOME has changed a lot in the last 5 years, and a lot has happened in that time. As a Foundation, we’ve gone from a small team of 3 to employing people to work on marketing, investment in technical frameworks, conference organisation and much more beyond. We’ve become the default desktop on all major Linux distributions. We’ve launched Flathub to help connect application developers directly to their users. We’ve dealt with patent suits, trademarks, and bylaw changes. We’ve moved our entire development platform to GitLab. We’ve shipped 10 GNOME releases, GTK 4 and GNOME 40. We’ve reset our relationships with external community partners and forged our way towards that future we all dream of – where everyone is empowered by technology they can trust. For that future, we now need to build on that work. We need to look beyond the traditional role that desktop Linux has held – and this is something that GNOME has always been able to do. I’ve shown that the Foundation can be more than just a bank account for the project, and I believe that this is vital in our efforts to build a diverse and sustainable free software personal computing ecosystem. For this, we need to establish programs that align not only with the unique community and technology of the project, but also deliver those benefits to the wider world and drive real impact. 5 years is the longest the Foundation has had an ED for, and certainly the longest that I’ve held a single post. I remember my first GUADEC as ED.
As you may know, like many of you, I’m used to giving talks at conferences – and yet I have never been so nervous as when I walked out on that stage. However, the welcome and genuine warmth that I received that day, and the continued support throughout the last 5 years, make me proud of what a welcoming and amazing community GNOME is. Thank you all.
Posted over 3 years ago
Since the early days of working on the macOS backend for GTK 4, I knew eventually we’d have to follow suit with what the major browsers were doing in terms of drawing efficiency. Using OpenGL was (while deprecated, certainly not going anywhere) fine from a rendering-performance standpoint. But it did have a few drawbacks. In particular, OpenGL (and Metal, afaik) layers don’t have ways to damage specific regions of the GPU rendering. That means as we’d flip between front and back buffers, the compositor will re-composite the whole window. That’s rather expensive for areas that didn’t change, even when using a “scissor rect”. If you’re willing to go through the effort of using IOSurface, there does exist another possibility. So this past week I read up on the APIs for CALayer and IOSurface and began strapping things together. As a life-long Linux user, I must say I’m not very impressed with the macOS experience as a user or application author, but hey, it’s a thing, and I guess it matters. The IOSurface is like a CPU-fronted cache on a GPU texture. You can move the buffer between CPU and GPU (which is helpful for software rendering with cairo) or leave it pretty much just in the GPU (unless it gets paged out). It also has a nice property that you can bind it to an OpenGL texture using GL_TEXTURE_RECTANGLE. Once you have a texture, you can back GL_FRAMEBUFFER with it for your rendering. That alone isn’t quite enough: you also need to be able to attach that content to a layer in your NSWindow. We have a base layer which hosts a bunch of tiles (each their own layer) whose layer.contents property can be mapped directly to an IOSurfaceRef, easy peasy. With that in place, all of the transparent regions use tiles limited to the transparent areas only (which will incur alpha blending by the compositor).
The rest of the area is broken up into tiles which are opaque and therefore do not require blending by the compositor and can be updated independently without damaging the rest of the window contents. You can see the opaque regions by using “Quartz Debug” and turning on the “Show opaque regions” checkbox. Sadly, screen capture doesn’t appear to grab the yellow highlights you get when you turn on the “Flash screen updates” checkbox, so you’ll have to imagine that. Opaque regions displayed in green. This is what all the major browsers are doing on macOS, and now we are too. This also managed to simplify a lot of code in the macOS backend, which is always appreciated. https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/4477
Posted over 3 years ago by [email protected] (Jakub Steiner)
Probably only a fraction of you had a chance to see Droneman in a theatre. It’s available on Netflix, so perhaps not universally, but ever so slightly more available to a global audience. Why would I be plugging a movie? Because it’s my only entry in IMDB and features a spectacular performance by my son, that’s why!