Posted over 13 years ago
pulseaudio: Wiki has moved: http://t.co/aVZbRego Help us fill it!! (by Colin)
Posted over 13 years ago
So as many followers may know already, most of the technical infrastructure we use for PulseAudio has been moved over to FreeDesktop.org. We already moved the mailing lists and git hosting some time ago, and one of the main bits left was the wiki.
We had previously used the FreeDesktop wiki for a couple of isolated pages (mainly because the pulseaudio.org wiki was just too frustrating to use), but the vast majority of content was still on the old servers.
So I finally got around to looking at migrating the content. Now a lot of it is out of date (again see the "too frustrating to use" comment above!), but there is still a lot of data and history there that we'd like to preserve.
Fortunately, FreeDesktop uses MoinMoin, which is very easy to manipulate, squeeze and mould into the right shape. No complicated databases, just a relatively isolated file system layout. This very much eased the migration process.
We used Trac over at pulseaudio.org, and I've done a fair bit of hacking on Trac before, which was quite convenient, as the only two scripts I found for migrating wiki content from Trac to Moin were very basic and limited in their features.
So I set about writing a script to do the conversion. trac2moin supports full wiki conversion including history and attachments. It can rename pages (and fix up links) with a simple map file, and also rename users with another map file. It can even fix up some of the syntax differences and translate a few basic macros. All in all, the conversion process was pretty good.
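To give a flavour of the syntax differences involved, here is a tiny, hypothetical sketch of the kind of rewrite rules such a conversion needs (the rule set below is purely illustrative, not trac2moin's actual code):

```python
import re

# Illustrative Trac-to-Moin rewrite rules (hypothetical; the real
# trac2moin also handles history, attachments and the page/user
# rename maps described above).
RULES = [
    # Trac external link [http://url label] -> Moin [[http://url|label]]
    (re.compile(r'\[(https?://\S+) ([^\]]+)\]'), r'[[\1|\2]]'),
    # Trac macro [[PageOutline]] -> the roughly equivalent Moin macro
    (re.compile(r'\[\[PageOutline\]\]'), '<<TableOfContents>>'),
]

def trac_to_moin(text):
    for pattern, repl in RULES:
        text = pattern.sub(repl, text)
    return text
```

A real converter needs many more rules than this, but the shape is the same: a pass of ordered regex substitutions over each page revision.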
There will, of course, still be a requirement for a big refresh of the data and content, but now that the bulk of the heavy lifting is done that task can be planned, organised and undertaken without any barriers!
Many thanks to Arun Raghavan and to Tollef Fog Heen for their help in this conversion process.
So get updating!
Posted
over 13 years
ago
For a long time now, fellow Gentoo'ers have had to edit /etc/asound.conf or ~/.asoundrc to make programs that talk directly to ALSA go through PulseAudio. Most other distributions ship configuration that automatically probes to see if PulseAudio is running and uses it if available, else falls back to the actual hardware. We did that too, but the configuration wasn't used, and when you did try to use it, it broke in mysterious ways.
I finally got around to actually figuring out the problem and fixing it, so if you have custom configuration to do all this, you should now be able to remove it after emerge’ing media-plugins/alsa-plugins-1.0.25-r1 or later with the pulseaudio USE flag. With the next PulseAudio bump, we’ll be depending on this to make the out-of-the-box experience a lot more seamless.
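For reference, the kind of hand-rolled snippet this makes redundant usually looked something like the following (an illustrative ~/.asoundrc; your exact configuration may have differed):

```
# Route all default ALSA I/O through PulseAudio. With
# media-plugins/alsa-plugins-1.0.25-r1[pulseaudio] or later,
# this should no longer be needed and can be deleted.
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}
```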
This took much longer to get done than it should have, but we’ve finally caught up. :)
[Props to Mart Raudsepp (leio) for prodding me into doing this.]
Posted over 13 years ago
Arun put an awesome article up, detailing how PulseAudio compares to Android's AudioFlinger in terms of power consumption and suchlike. Suffice to say, PulseAudio rocks, but go and read the whole thing, it's worth it. Apparently, AudioFlinger is a great choice if you want to shorten your battery life.
Posted over 13 years ago
I’ve been meaning to try this for a while, and we’ve heard a number of requests from the community as well. Recently, I got some time here at Collabora to give it a go — that is, to get PulseAudio running on an Android device and see how it compares with Android’s AudioFlinger.
The Contenders
Let’s introduce our contenders first. For those who don’t know, PulseAudio is pretty much a de-facto standard part of the Linux audio stack. It sits on top of ALSA, which provides a unified way to talk to the audio hardware, and adds a number of handy features that are useful on desktops and embedded devices. I won’t rehash all of these, but they include a nice modular framework, a bunch of power-saving features, flexible routing, and lots more. PulseAudio runs as a daemon, and clients usually use the libpulse library to communicate with it.
In the other corner, we have Android’s native audio system — AudioFlinger. AudioFlinger was written from scratch for Android. It provides an API for playback/recording as well as a control mechanism for implementing policy. It does not depend on ALSA, but instead allows for a sort of HAL that vendors can implement any way they choose. Applications generally play audio via layers built on top of AudioFlinger. Even if you write a native application, it would use the OpenSL ES implementation, which goes through AudioFlinger. The actual service runs as a thread of the mediaserver daemon, but this is merely an implementation detail.
Note: all my comments about AudioFlinger and Android in general are based on documentation and code for Android 4.0 (Ice Cream Sandwich).
The Arena
My test-bed for the tests was the Galaxy Nexus running Android 4.0 which we shall just abbreviate to ICS. I picked ICS since it is the current platform on which Google is building, and hopefully represents the latest and greatest in AudioFlinger development. The Galaxy Nexus runs a Texas Instruments OMAP4 processor, which is also really convenient since this chip has pretty good support for running stock Linux (read on to see how useful this was).
Preparations
The first step in getting PulseAudio on Android was deciding between using the Android NDK like a regular application and integrating it into the base Android system. I chose the latter — even though this was a little more work initially, it made more sense in the long run since PulseAudio really belongs in the base system.
The next task was to get the required dependencies ported to Android. Fortunately, a lot of the ground work for this was already done by some of the awesome folks at Collabora. Derek Foreman’s androgenizer tool is incredibly handy for converting an autotools-based build to Android-friendly makefiles. With Reynaldo Verdejo and Alessandro Decina’s prior work on GStreamer for Android as a reference, things got even easier.
The most painful bit was libltdl, which we use for dynamically loading modules. Once this was done, the other dependencies were quite straightforward to port over. As a bonus, the Android source already ships an optimised version of Speex which we use for resampling, and it was easy to reuse this as well.
As I mentioned earlier, vendors can choose how they implement their audio abstraction layer. On the Galaxy Nexus, this is built on top of standard ALSA drivers, and the HAL talks to the drivers via a minimalist tinyalsa library. My first hope was to use this, but there was a whole bunch of functions missing that PulseAudio needed. The next approach was to use salsa-lib, which is a stripped down version of the ALSA library written for embedded devices. This too had some missing functions, but these were fewer and easy to implement (and are now upstream).
Now if only life were that simple. :) I got PulseAudio running on the Galaxy Nexus with salsa-lib, and even got sound out of the HDMI port. Nothing from the speakers though (they’re driven by a TI twl6040 codec). Just to verify, I decided to port the full alsa-lib and alsa-utils packages to debug what’s happening (by this time, I’m familiar enough with androgenizer for all this to be a breeze). Still no luck. Finally, with some pointers from the kind folks at TI (thanks Liam!), I got current UCM configuration files for OMAP4 boards, and some work-in-progress patches to add UCM support to PulseAudio, and after a couple of minor fixes, wham! We have output. :)
(For those who don’t know about UCM — embedded chips are quite different from desktops and expose a huge amount of functionality via ALSA mixer controls. UCM is an effort to have a standard, meaningful way for applications and users to use these.)
In production, it might be handy to write light-weight UCM support for salsa-lib or just convert the UCM configuration into PulseAudio path/profile configuration (bonus points if it’s an automated tool). For our purposes, though, just using alsa-lib is good enough.
To make the comparison fair, I wrote a simple test program that reads raw PCM S16LE data from a file and plays it via the AudioTrack interface provided by AudioFlinger or the PulseAudio Asynchronous API. Tests were run with the brightness fixed, wifi off, and USB port connected to my laptop (for adb shell access).
All tests were run with the CPU frequency pegged at 350 MHz and with 44.1 and 48 kHz samples. Five readings were recorded, and the median value was finally taken.
Round 1: CPU
First, let’s take a look at how the two compare in terms of CPU usage. The numbers below are the percentage CPU usage taken as the sum of all threads of the audio server process and the audio thread in the client application using top (which is why the granularity is limited to an integer percentage).
              44.1 kHz       48 kHz
              AF    PA       AF    PA
CPU usage     1%    1%       2%    0%
At 44.1 kHz, the two are essentially the same. Both cases are causing resampling to occur (the native sample rate for the device is 48 kHz). Resampling is done using the Speex library, and we’re seeing minuscule amounts of CPU usage even at 350 MHz, so it’s clear that the NEON optimisations are really paying off here.
The astute reader will have noticed that since the device’s native sample rate is 48 kHz, the CPU usage for 48 kHz playback should be less than for 44.1 kHz. This is true with PulseAudio, but not with AudioFlinger! The reason for this little quirk is that AudioFlinger provides 44.1 kHz samples to the HAL (which means the stream is resampled there), and then the HAL needs to resample it again to 48 kHz to bring it to the device’s native rate. From what I can tell, this is a matter of convention with regards to what audio HALs should expect from AudioFlinger (do correct me if I’m mistaken about the rationale).
So round 1 leans slightly in favour of PulseAudio.
Round 2: Memory
Comparing the memory consumption of the server process is a bit meaningless, because the AudioFlinger daemon thread shares an address space with the rest of the mediaserver process. For the curious, the resident set size was: AudioFlinger — 6,796 KB, PulseAudio — 3,024 KB. Again, this doesn’t really mean much.
We can, however, compare the client process’ memory consumption. This is RSS in kilobytes, measured using top.
             44.1 kHz           48 kHz
             AF       PA        AF       PA
Client RSS   2600 kB  3020 kB   2604 kB  3020 kB
The memory consumption is comparable between the two, but leans in favour of AudioFlinger.
Round 3: Power
I didn’t have access to a power monitor, so I decided to use a couple of indirect metrics to compare power utilisation. The first of these is PowerTOP, which is actually a Linux desktop tool for monitoring various power metrics. Happily, someone had already ported PowerTOP to Android. The tool reports, among other things, the number of wakeups-from-idle per second for the processor as a whole, and on a per-process basis. Since there are multiple threads involved, and PowerTOP’s per-process measurements are somewhat cryptic to add up, I used the global wakeups-from-idle per second. The “Idle” value counts the number of wakeups when nothing is happening. The actual value is very likely so high because the device is connected to my laptop in USB debugging mode (lots of wakeups from USB, and the device is prevented from going into a full sleep).
            Idle    44.1 kHz        48 kHz
                    AF      PA      AF      PA
Wakeups/s   79.6    107.8   87.3    108.5   85.7
The second, similar, data point is the number of interrupts per second reported by vmstat. These corroborate the numbers above:
               Idle   44.1 kHz      48 kHz
                      AF     PA     AF     PA
Interrupts/s   190    266    215    284    207
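Subtracting the idle baseline from these interrupt counts makes the gap concrete; a quick back-of-the-envelope calculation:

```python
# Interrupts/s from the vmstat figures above, minus the idle baseline,
# leaves the wakeups attributable to audio playback itself.
idle = 190
audio_only = {
    "AF 44.1": 266 - idle,  # 76
    "PA 44.1": 215 - idle,  # 25
    "AF 48":   284 - idle,  # 94
    "PA 48":   207 - idle,  # 17
}

# At 44.1 kHz, AudioFlinger generates roughly 3x the wakeups.
ratio_44 = audio_only["AF 44.1"] / audio_only["PA 44.1"]
```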
PulseAudio’s power-saving features are clearly highlighted in this comparison. AudioFlinger causes about three times the number of wakeups per second that PulseAudio does. Things might actually be worse on older hardware with less optimised drivers than the Galaxy Nexus (I’d appreciate reports from running similar tests on a Nexus S or any other device with ALSA support to confirm this).
For those of you who aren’t familiar with PulseAudio, the reason we manage to get these savings is our timer-based scheduling mode. In this mode, we fill up the hardware buffer as much as possible and go to sleep (disabling ALSA interrupts while we’re at it, if possible). We only wake up when the buffer is nearing empty, and fill it up again. More details can be found in this old blog post by Lennart.
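The effect of timer-based scheduling on wakeup rate can be sketched with some illustrative numbers (the period and buffer sizes below are assumptions for the sake of the arithmetic, not measurements from the Galaxy Nexus):

```python
def wakeups_per_sec(rate_hz, frames_per_wakeup):
    # One wakeup every time frames_per_wakeup frames must be refilled.
    return rate_hz / frames_per_wakeup

# Hypothetical sizes: a typical ALSA period vs. a large hardware buffer.
alsa_period = 1024   # frames: period-based scheduling wakes once per period
hw_buffer = 16384    # frames: timer-based scheduling refills a near-empty buffer

interrupt_driven = wakeups_per_sec(48000, alsa_period)  # ~46.9 wakeups/s
timer_based = wakeups_per_sec(48000, hw_buffer)         # ~2.9 wakeups/s
```

The bigger the buffer we can fill in one go, the fewer times per second we need to wake the CPU, which is exactly what the PowerTOP and vmstat numbers show.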
Round 4: Latency
I’ve only had the Galaxy Nexus to actually try this out with, but I’m pretty certain I’m not the only person seeing latency issues on Android. On the Galaxy Nexus, for example, the best latency I can get appears to be 176 ms. This is pretty high for certain types of applications, particularly ones that generate tones based on user input.
With PulseAudio, where we dynamically adjust buffering based on what clients request, I was able to drive down the total buffering to approximately 20 ms (too much lower, and we started getting dropouts). There is likely room for improvement here, and it is something on my todo list, but even out-of-the-box, we’re doing quite well.
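For reference, those latency figures map to buffer sizes with a simple frames = rate × seconds calculation (48 kHz being the device’s native rate from earlier; the helper names are mine, for illustration):

```python
def frames_for_latency(latency_ms, rate_hz=48000):
    # Frames buffered = sample rate * seconds of latency.
    return rate_hz * latency_ms // 1000

def bytes_for_latency(latency_ms, rate_hz=48000, channels=2, sample_bytes=2):
    # S16LE stereo: 2 bytes per sample, 2 samples (channels) per frame.
    return frames_for_latency(latency_ms, rate_hz) * channels * sample_bytes

android_floor = frames_for_latency(176)  # 8448 frames buffered
pulse_floor = frames_for_latency(20)     # 960 frames buffered
```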
Round 5: Features
With the hard numbers out of the way, I’d like to talk a little bit about what else PulseAudio brings to the table. In addition to a playback/record API, AudioFlinger provides a mechanism for enforcing various bits of policy, such as setting volumes and the “active” device, amongst others. PulseAudio exposes similar functionality, some as part of the client API and the rest via the core API exposed to modules.
From SoC vendors’ perspective, it is often necessary to support both Android and standard Linux on the same chip. Being able to focus only on good quality ALSA drivers and knowing that this will ensure quality on both these systems would be a definite advantage in this case.
The current Android system leaves power management to the audio HAL. This means that each vendor needs to implement this themselves. Letting PulseAudio manage the hardware based on requested latencies and policy gives us a single point of control, greatly simplifying the task of power-management and avoiding code duplication.
There are a number of features that PulseAudio provides that can be useful in the various scenarios where Android is used. For example, we support transparently streaming audio over the network, which could be a handy way of supporting playing audio from your phone on your TV completely transparently and out-of-the-box. We also support compressed formats (AC3, DTS, etc.) which the ongoing Android-on-your-TV efforts could likely take advantage of.
Edit: As someone pointed out on LWN, I missed one thing — AudioFlinger has an effect API that we do not yet have in PulseAudio. It’s something I’d definitely like to see added to PulseAudio in the future.
Ding! Ding! Ding!
That pretty much concludes the comparison of these two audio daemons. Since the Android-side code is somewhat under-documented, I’d welcome comments from readers who are familiar with the code and history of AudioFlinger.
I’m in the process of pushing all the patches I’ve had to write to the various upstream projects. A number of these are merely build system patches to integrate with the Android build system, and I’m hoping projects are open to these. Instructions on building this code will be available on the PulseAudio Android wiki page.
For future work, it would be interesting to write a wrapper on top of PulseAudio that exposes the AudioFlinger audio and policy APIs — this would basically let us run PulseAudio as a drop-in AudioFlinger replacement. In addition, there are potential performance benefits that can be derived from using Android-specific infrastructure such as Binder (for IPC) and ashmem (for transferring audio blocks as shared memory segments, something we support on desktops using the standard Linux SHM mechanism which is not available on Android).
If you’re an OEM who is interested in this work, you can get in touch with us — details are on the Collabora website.
I hope this is useful to some of you out there!
|
Posted
over 13 years
ago
I’ve been meaning to try this for a while, and we’ve heard a number of requests from the community as well. Recently, I got some time here at Collabora to give it a go — that is, to get PulseAudio running on an Android device and see how it compares
... [More]
with Android’s AudioFlinger.
The Contenders
Let’s introduce our contenders first. For those who don’t know, PulseAudio is pretty much a de-facto standard part of the Linux audio stack. It sits on top of ALSA which provides a unified way to talk to the audio hardware and provides a number of handy features that are useful on desktops and embedded devices. I won’t rehash all of these, but this includes a nice modular framework, a bunch of power saving features, flexible routing, and lots more. PulseAudio runs as a daemon, and clients usually use the libpulse library to communicate with it.
In the other corner, we have Android’s native audio system — AudioFlinger. AudioFlinger was written from scratch for Android. It provides an API for playback/recording as well as a control mechanism for implementing policy. It does not depend on ALSA, but instead allows for a sort of HAL that vendors can implement any way they choose. Applications generally play audio via layers built on top of AudioFlinger. Even if you write a native application, it would use OpenSL ES implementation which goes through AudioFlinger. The actual service runs as a thread of the mediaserver daemon, but this is merely an implementation detail.
Note: all my comments about AudioFlinger and Android in general are based on documentation and code for Android 4.0 (Ice Cream Sandwich).
The Arena
My test-bed for the tests was the Galaxy Nexus running Android 4.0 which we shall just abbreviate to ICS. I picked ICS since it is the current platform on which Google is building, and hopefully represents the latest and greatest in AudioFlinger development. The Galaxy Nexus runs a Texas Instruments OMAP4 processor, which is also really convenient since this chip has pretty good support for running stock Linux (read on to see how useful this was).
Preparations
The first step in getting PulseAudio on Android was deciding between using the Android NDK like a regular application or integrate into the base Android system. I chose the latter — even though this was a little more work initially, it made more sense in the long run since PulseAudio really belongs to the base-system.
The next task was to get the required dependencies ported to Android. Fortunately, a lot of the ground work for this was already done by some of the awesome folks at Collabora. Derek Foreman’s androgenizer tool is incredibly handy for converting an autotools-based build to Android–friendly makefiles. With Reynaldo Verdejo and Alessandro Decina’s prior work on GStreamer for Android as a reference, things got even easier.
The most painful bit was libltdl, which we use for dynamically loading modules. Once this was done, the other dependencies were quite straightforward to port over. As a bonus, the Android source already ships an optimised version of Speex which we use for resampling, and it was easy to reuse this as well.
As I mentioned earlier, vendors can choose how they implement their audio abstraction layer. On the Galaxy Nexus, this is built on top of standard ALSA drivers, and the HAL talks to the drivers via a minimalist tinyalsa library. My first hope was to use this, but there was a whole bunch of functions missing that PulseAudio needed. The next approach was to use salsa-lib, which is a stripped down version of the ALSA library written for embedded devices. This too had some missing functions, but these were fewer and easy to implement (and are now upstream).
Now if only life were that simple. :) I got PulseAudio running on the Galaxy Nexus with salsa-lib, and even got sound out of the HDMI port. Nothing from the speakers though (they’re driven by a TI twl6040 codec). Just to verify, I decided to port the full alsa-lib and alsa-utils packages to debug what’s happening (by this time, I’m familiar enough with androgenizer for all this to be a breeze). Still no luck. Finally, with some pointers from the kind folks at TI (thanks Liam!), I got current UCM configuration files for OMAP4 boards, and some work-in-progress patches to add UCM support to PulseAudio, and after a couple of minor fixes, wham! We have output. :)
(For those who don’t know about UCM — embedded chips are quite different from desktops and expose a huge amount of functionality via ALSA mixer controls. UCM is an effort to have a standard, meaningful way for applications and users to use these.)
In production, it might be handy to write light-weight UCM support for salsa-lib or just convert the UCM configuration into PulseAudio path/profile configuration (bonus points if it’s an automated tool). For our purposes, though, just using alsa-lib is good enough.
To make the comparison fair, I wrote a simple test program that reads raw PCM S16LE data from a file and plays it via the AudioTrack interface provided by AudioFlinger or the PulseAudio Asynchronous API. Tests were run with the brightness fixed, wifi off, and USB port connected to my laptop (for adb shell access).
All tests were run with the CPU frequency pegged at 350 MHz and with 44.1 and 48 kHz samples. Five readings were recorded, and the median value was finally taken.
Round 1: CPU
First, let’s take a look at how the two compare in terms of CPU usage. The numbers below are the percentage CPU usage taken as the sum of all threads of the audio server process and the audio thread in the client application using top (which is why the granularity is limited to an integer percentage).
44.1 kHz 48 kHz
AF PA AF PA
1% 1% 2% 0%
At 44.1 kHz, the two are essentially the same. Both cases are causing resampling to occur (the native sample rate for the device is 48 kHz). Resampling is done using the Speex library, and we’re seeing minuscule amounts of CPU usage even at 350 MHz, so it’s clear that the NEON optimisations are really paying off here.
The astute reader would have noticed that since the device’ native sample rate is 48 kHz, the CPU usage for 48 kHz playback should be less than for 44.1 kHz. This is true with PulseAudio, but not with AudioFlinger! The reason for this little quirk is that AudioFlinger provides 44.1 kHz samples to the HAL (which means the stream is resampled there), and then the HAL needs to resample it again to 48 kHz to bring it to the device’ native rate. From what I can tell, this is a matter of convention with regards to what audio HALs should expect from AudioFlinger (do correct me if I’m mistaken about the rationale).
So round 1 leans slightly in favour of PulseAudio.
Round 2: Memory
Comparing the memory consumption of the server process is a bit meaningless, because the AudioFlinger daemon thread shares an address space with the rest of the mediaserver process. For the curious, the resident set size was: AudioFlinger — 6,796 KB, PulseAudio — 3,024 KB. Again, this doesn’t really mean much.
We can, however, compare the client process’ memory consumption. This is RSS in kilobytes, measured using top.
44.1 kHz 48 kHz
AF PA AF PA
2600 kB 3020 kB 2604 kB 3020 kB
The memory consumption is comparable between the two, but leans in favour of AudioFlinger.
Round 3: Power
I didn’t have access to a power monitor, so I decided to use a couple of indirect metrics to compare power utilisation. The first of these is PowerTOP, which is actually a Linux desktop tool for monitoring various power metrics. Happily, someone had already ported PowerTOP to Android. The tool reports, among other things, the number of wakeups-from-idle per second for the processor as a whole, and on a per-process basis. Since there are multiple threads involved, and PowerTOP’s per-process measurements are somewhat cryptic to add up, I used the global wakeups-from-idle per second. The “Idle” value counts the number of wakeups when nothing is happening. The actual value is very likely so high because the device is connected to my laptop in USB debugging mode (lots of wakeups from USB, and the device is prevented from going into a full sleep).
44.1 kHz 48 kHz
Idle AF PA AF PA
79.6 107.8 87.3 108.5 85.7
The second, similar, data point is the number of interrupts per second reported by vmstat. These corroborate the numbers above:
44.1 kHz 48 kHz
Idle AF PA AF PA
190 266 215 284 207
PulseAudio’s power-saving features are clearly highlighted in this comparison. AudioFlinger causes about three times the number of wakeups per second that PulseAudio does. Things might actually be worse on older hardware with less optimised drivers than the Galaxy Nexus (I’d appreciate reports from running similar tests on a Nexus S or any other device with ALSA support to confirm this).
For those of you who aren’t familiar with PulseAudio, the reason we manage to get these savings is our timer-based scheduling mode. In this mode, we fill up the hardware buffer as much as possible and go to sleep (disabling ALSA interrupts while we’re at it, if possibe). We only wake up when the buffer is nearing empty, and fill it up again. More details can be found in this old blog post by Lennart.
Round 4: Latency
I’ve only had the Galaxy Nexus to actually try this out with, but I’m pretty certain I’m not the only person seeing latency issues on Android. On the Galaxy Nexus, for example, the best latency I can get appears to be 176 ms. This is pretty high for certain types of applications, particularly ones that generate tones based on user input.
With PulseAudio, where we dynamically adjust buffering based on what clients request, I was able to drive down the total buffering to approximately 20 ms (too much lower, and we started getting dropouts). There is likely room for improvement here, and it is something on my todo list, but even out-of-the-box, we’re doing quite well.
Round 5: Features
With the hard numbers out of the way, I’d like to talk a little bit about what else PulseAudio brings to the table. In addition to a playback/record API, AudioFlinger provides mechanism for enforcing various bits of policy such as volumes and setting the “active” device amongst others. PulseAudio exposes similar functionality, some as part of the client API and the rest via the core API exposed to modules.
From SoC vendors’ perspective, it is often necessary to support both Android and standard Linux on the same chip. Being able to focus only on good quality ALSA drivers and knowing that this will ensure quality on both these systems would be a definite advantage in this case.
The current Android system leaves power management to the audio HAL. This means that each vendor needs to implement this themselves. Letting PulseAudio manage the hardware based on requested latencies and policy gives us a single point of control, greatly simplifying the task of power-management and avoiding code duplication.
There are a number of features that PulseAudio provides that can be useful in the various scenarios where Android is used. For example, we support transparently streaming audio over the network, which could be a handy way of supporting playing audio from your phone on your TV completely transparently and out-of-the-box. We also support compressed formats (AC3, DTS, etc.) which the ongoing Android-on-your-TV efforts could likely take advantage of.
Edit: As someone pointed out on LWN, I missed one thing — AudioFlinger has an effect API that we do not yet have in PulseAudio. It’s something I’d definitely like to see added to PulseAudio in the future.
Ding! Ding! Ding!
That pretty much concludes the comparison of these two audio daemons. Since the Android-side code is somewhat under-documented, I’d welcome comments from readers who are familiar with the code and history of AudioFlinger.
I’m in the process of pushing all the patches I’ve had to write to the various upstream projects. A number of these are merely build system patches to integrate with the Android build system, and I’m hoping projects are open to these. Instructions on building this code will be available on the PulseAudio Android wiki page.
For future work, it would be interesting to write a wrapper on top of PulseAudio that exposes the AudioFlinger audio and policy APIs — this would basically let us run PulseAudio as a drop-in AudioFlinger replacement. In addition, there are potential performance benefits that can be derived from using Android-specific infrastructure such as Binder (for IPC) and ashmem (for transferring audio blocks as shared memory segments, something we support on desktops using the standard Linux SHM mechanism which is not available on Android).
If you’re an OEM who is interested in this work, you can get in touch with us — details are on the Collabora website.
I hope this is useful to some of you out there!
|
Posted over 13 years ago
Once you are running a wireless network, you will probably want some statistics. This small howto creates a small statistics setup using Graphite and collectd.
We'll use the nl80211 kernel interface, which is also used by hostapd, to collect all statistics data from the AP.
A collectd plugin on the AP communicates with the kernel via nl80211. collectd then transfers this data over the network to our statistics host, where Graphite runs. On the Graphite host, another collectd instance receives the statistics and pushes them into Graphite.
[Diagram made with Dia: kernel / nl80211 / collectd on the AP, sending over the network to collectd on the Graphite host, which feeds Graphite]
Install statistics host
Install graphite
Let's set up our Graphite host. I will use Debian testing, because it ships a newer collectd version. collectd v5 contains a native plugin for Graphite; collectd v4 can only write to Graphite via the python plugin.
Graphite install:
apt-get install python-pip python-django-tagging python-cairo libapache2-mod-wsgi python-twisted python-memcache python-pysqlite2 python-simplejson python-django
pip install whisper
pip install carbon
pip install graphite-web
cd /opt/graphite/conf
cp carbon.conf.example carbon.conf
cp storage-schemas.conf.example storage-schemas.conf
cd /opt/graphite/webapp/graphite/
python manage.py syncdb
You should change at least storage-schemas.conf: increase the default retentions up to 14 days.
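A minimal storage-schemas.conf entry for this could look like the sketch below (the section name and match pattern are assumptions; adjust them to your metric names):

```
# keep 10-second resolution for 14 days for all collectd metrics
[collectd]
pattern = ^collectd\.
retentions = 10s:14d
```

Rules are matched top to bottom, so place more specific patterns above the catch-all default.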
/opt/graphite/bin/carbon-cache.py start
src: http://linuxracker.com/2012/03/31/setting-up-graphite-server-on-debian-squeeze/
Install collectd
collectd install:
apt-get install collectd-core
Add these lines to types.db (/usr/share/collectd/types.db):
stations value:GAUGE:0:256
nl_station connection_time:GAUGE:0:4294967295 inactive_time:GAUGE:0:4294967295 rx_bytes:GAUGE:0:4294967295 tx_bytes:GAUGE:0:4294967295 rx_packages:GAUGE:0:4294967295 tx_packages:GAUGE:0:4294967295 tx_retried:GAUGE:0:4294967295 tx_failed:GAUGE:0:4294967295 signal:GAUGE:-255:255 nl80211_mask:GAUGE:0:4294967295 nl80211_beacon_loss:GAUGE:0:4294967295 signal_avg:GAUGE:-255:255
nl_survey noise:GAUGE:-255:255 active:GAUGE:0:18446744073709551615 busy:GAUGE:0:18446744073709551615 extbusy:GAUGE:0:18446744073709551615 transmit:GAUGE:0:18446744073709551615 receive:GAUGE:0:18446744073709551615 inuse:GAUGE:0:1
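Note that the byte and packet counters above are stored as raw GAUGE values, so to graph throughput you want their rate of change. In Graphite this can be done with the nonNegativeDerivative function; the metric path below is a hypothetical example and depends on your collectd hostname and how the plugin names its instances:

```
nonNegativeDerivative(collectd.myap.nl80211.nl_station-wlan0.rx_bytes)
```

Use nonNegativeDerivative rather than derivative so counter resets do not show up as huge negative spikes.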
Add the following lines to /etc/collectd.conf:
LoadPlugin network
LoadPlugin write_graphite
<Plugin network>
Listen "0.0.0.0" "25826"
</Plugin>
<Plugin write_graphite>
<Carbon>
Host "localhost"
Port "2003"
</Carbon>
</Plugin>
/etc/init.d/collectd restart
We are done with our Graphite host. Start the Django development server (inside a screen session, for example). For a production Graphite setup, use memcached + WSGI.
cd /opt/graphite/webapp/graphite/
python manage.py runserver
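To verify the pipeline end to end, you can inject a test metric directly into carbon-cache using its plaintext protocol (one line per datapoint: metric path, value, Unix timestamp). The metric name here is just an example:

```shell
# Build a datapoint in Carbon's plaintext format: "<path> <value> <timestamp>"
line="test.graphite.demo 42 $(date +%s)"
echo "$line"
# Send it to carbon-cache (port 2003, as set in carbon.conf); the metric
# should then appear under test/ in the Graphite web UI:
# echo "$line" | nc -q1 localhost 2003
```

If the metric shows up in the web UI, carbon-cache and whisper are working; if collectd metrics still don't appear, the problem is on the collectd side.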
Configuring AP
You can build your own OpenWrt image or use this one.
How to build our OpenWrt image:
mkdir openwrt-collectd
cd openwrt-collectd
export SRCROOT=$PWD
git clone git://nbd.name/openwrt.git
cd openwrt
./scripts/feeds update
./scripts/feeds install collectd
cd ..
git clone git://dev.c-base.org/collect-nl80211/collectd-nl80211.git
cd collectd-nl80211
sh create-openwrt-collectd-patch.sh
cp 999-nl80211.patch ../openwrt/feeds/packages/utils/collectd/patches
cd ../openwrt/feeds/packages/
cat $SRCROOT/collectd-nl80211/openwrt-collectd-makefile.patch | patch -p0
cd $SRCROOT/openwrt
This patches the collectd Makefile; you can now select nl80211 within the OpenWrt build system.
In make menuconfig, select the nl80211 module under collectd. Make sure you compile
collectd + collectd-mod-network + collectd-mod-nl80211 directly into your image and not as loadable packages.
Flash your AP
scp $SRCROOT/openwrt/bin/ar71xx/openwrt-ar71xx-generic-tl-wdr4300-v1-jffs2-sysupgrade.bin [email protected]:/tmp
ssh [email protected]
sysupgrade /tmp/openwrt-ar71xx-generic-tl-wdr4300-v1-jffs2-sysupgrade.bin
Configuring OpenWrt
collectd.conf
# enable default wireless OpenWRT
uci set wireless.radio0.disabled=0
uci commit
# modify network config + wifi interface to match your configuration
scp collectd-example.conf [email protected]:/etc/collectd.conf
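For reference, the AP-side collectd-example.conf looks roughly like this sketch. LoadPlugin nl80211 matches the module built above, and the network plugin's Server directive points the client at our Graphite host (192.168.1.100 is a placeholder; any options the nl80211 plugin itself takes are not shown here, check the collectd-nl80211 README for those):

```
LoadPlugin network
LoadPlugin nl80211

<Plugin network>
        # send values to the collectd instance on the Graphite host
        Server "192.168.1.100" "25826"
</Plugin>
```

The Server line must match the Listen address and port configured on the Graphite host's collectd.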
|