Posted over 8 years ago
A graphic can say more than 1000 words.
This is how mksquashfs 4.3 works.
mksquashfs source
|
Posted almost 9 years ago
This howto will guide you through the LEDE infrastructure to create your own kernel patch. It's based on LEDE reboot-1279-gc769c1b.
LEDE already carries a lot of patches. They are all applied on one tree.
We will create a new patch for lantiq.
To get started, let's see how LEDE organizes the patches.
First of all we take a look at target/linux/*.
All of these folders represent an architecture target, except generic.
The generic target is used by all targets.
To continue, we need to know which kernel version your target architecture is running on.
This is written down in target/linux/lantiq/Makefile.
We're running a 4.4.Y kernel. The Y is written down in include/kernel-version.mk. We will
use .15, i.e. linux 4.4.15.
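You can check both quickly from the LEDE top directory (a sketch; I'm assuming the usual KERNEL_PATCHVER and LINUX_VERSION-4.4 variable names here):
grep KERNEL_PATCHVER target/linux/lantiq/Makefile
# -> KERNEL_PATCHVER:=4.4
grep 'LINUX_VERSION-4.4' include/kernel-version.mk
# -> LINUX_VERSION-4.4 = .15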
Ok. Now let's see. When LEDE prepares the kernel build directory, it searches for a matching patch
directory. The steps are:
download the kernel 4.4.x (x from /include/kernel-version.mk)
unpack the kernel under /build_dir/target-../linux-lantiq/linux-4.4.15
apply generic patches
apply lantiq patches
create .config
But which is the right patches directory? LEDE uses the following make snippet from include/kernel.mk.
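The logic looks roughly like this (a sketch of the idea, not a verbatim copy of include/kernel.mk):
ifeq ($(wildcard $(PLATFORM_DIR)/patches-$(KERNEL_PATCHVER)),)
  PATCH_DIR ?= $(PLATFORM_DIR)/patches
else
  PATCH_DIR ?= $(PLATFORM_DIR)/patches-$(KERNEL_PATCHVER)
endif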
Meaning it will use patches-4.4 if it exists and otherwise fall back to patches.
Now we know how patches are applied to the linux kernel tree.
We could go into the directory, create a new patches directory and use quilt by hand...
Or we use the quilt target for that.
Run make target/linux/clean to clean up the old directory.
Now make target/linux/prepare QUILT=1 will unpack the source, copy all present patches into ./patches
and use quilt to apply them.
With quilt you can move forwards and backwards between patches, as well as modify them.
cd ./build_dir/target-mips_34kc+dsp_uClibc-0.9.33.2/linux-lantiq/linux-4.4.15/
to switch into the linux directory.
quilt push -a applies all patches from LEDE.
quilt new platform/999-mymodification.patch adds a new patch.
quilt add net/l2tp/l2tp_core.c tells quilt to track this file.
Call your editor to modify this file.
quilt refresh then adds your modification to the patch platform/999-mymodification.patch.
Your modification now lives under ./build_dir/../linux-4.4.15/patches/platform/.
Finally, make target/linux/refresh refreshes all patches and copies them to the correct folder under target/linux/*/patches.
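The whole workflow condensed into one shell session (paths as above; adjust the build_dir path to your target):
# from the LEDE top directory
make target/linux/clean
make target/linux/prepare QUILT=1
cd ./build_dir/target-mips_34kc+dsp_uClibc-0.9.33.2/linux-lantiq/linux-4.4.15/
quilt push -a
quilt new platform/999-mymodification.patch
quilt add net/l2tp/l2tp_core.c
$EDITOR net/l2tp/l2tp_core.c
quilt refresh
# back to the LEDE top directory to collect the patches
cd -
make target/linux/refresh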
|
Posted about 9 years ago
The TP-Link CPE510, a nice outdoor device, shows bad rx behaviour when used with LEDE.
I want to give a short overview of how to debug those problems. It could also help you find problems when facing
ath9k pci cards.
Let's get down to the device.
The CPE510 is based on an AR9344 SoC. The integrated wireless part is
supported by the ath9k driver. To get more knowledge about the AR9344 you should take a look into
the publicly available datasheet. (google for ar9344 datasheet ;)
The AR9344 supports using GPIOs for special purposes; this is called a GPIO function. If a function is
enabled, the gpio is internally routed to the special purpose. Now comes the simple part:
if you know which register to look into, just look into it.
After reading pages 52/53 of the datasheet, it's clear that the chip can route every signal to every gpio.
Remember the table, because it tells you which value routes which signal to the gpio.
We suspect the LNAs aren't enabled, because the receiving part of the CPE510 is bad. So the values 46 and 47 are the important ones:
46 is LNA Chain 0, 47 is LNA Chain 1. LNA stands for low noise amplifier.
Now that we know how the GPIOs work, let's find the register controlling the GPIO function. The GPIO section starts at page 130, but the interesting part
is the GPIO IO Function 0 register at address 0x1804002c. It gives you 8 bits to describe each GPIO's function; if it's 0x0 no function is selected and the
GPIO is used as a normal output. So if you write 46 into bits 0-7 you set the GPIO to carry the LNA Chain 0 signal. Every GPIO from
GPIO0 to GPIO19 can be configured using those registers.
We know which registers are interesting (0x1804002c - 0x1804003c).
We know which values are interesting (decimal 46 and decimal 47).
But how can we read out those values on a running system?
The first answer I hear is JTAG, but JTAG isn't easy to get at and is even more difficult to use on ar71xx, because the bootloader
usually deactivates JTAG as one of its first commands.
But we can ask the kernel. /dev/mem is quite useful for that. It's a direct way to the memory, very dangerous, but also handy ;).
The easiest way to interface with /dev/mem is a simple utility called devmem or devmem2.
To compile a compatible devmem2 you should use the GPL sources of the firmware, but you can also
download the binary from here [1].
Copy devmem2 to /tmp via scp and start reading the values.
Because mips is a 32bit architecture, we have to read the registers as 32bit words.
Back to our LNA values, 46 and 47. In hex those are 0x2E and 0x2F. We have to look
for those values at 8bit-aligned positions within each word.
# ./devmem2 0x1804002c
/dev/mem opened.
Memory mapped at address 0x2aaae000.
Value at address 0x1804002C (0x2aaae02c): 0x0
# ./devmem2 0x18040030
/dev/mem opened.
Memory mapped at address 0x2aaae000.
Value at address 0x18040030 (0x2aaae030): 0xB0A0900
# ./devmem2 0x18040034
/dev/mem opened.
Memory mapped at address 0x2aaae000.
Value at address 0x18040034 (0x2aaae034): 0x2D180000
# ./devmem2 0x18040038
/dev/mem opened.
Memory mapped at address 0x2aaae000.
Value at address 0x18040038 (0x2aaae038): 0x2C
# ./devmem2 0x1804003c
/dev/mem opened.
Memory mapped at address 0x2aaae000.
Value at address 0x1804003C (0x2aaae03c): 0x2F2E0000
#
Found it in 0x1804003C. LNA 0 is GPIO 18 and LNA 1 is GPIO 19.
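As a sanity check of the decoding (assuming each function register packs four GPIOs, one byte per GPIO, lowest-numbered GPIO in the lowest byte, so 0x1804003c covers GPIO16-GPIO19):
# value read from 0x1804003c was 0x2F2E0000
printf '%d\n' $(( (0x2F2E0000 >> 16) & 0xff ))   # byte 2 = GPIO18 -> 46 = LNA Chain 0
printf '%d\n' $(( (0x2F2E0000 >> 24) & 0xff ))   # byte 3 = GPIO19 -> 47 = LNA Chain 1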
[1] https://lunarius.fe80.eu/blog/files/lede/devmem2
|
Posted over 9 years ago
For a long time I have been inspired by the features of LAVA (Linaro Automated Validation).
Lava was developed by Linaro to do automated testing on real hardware. It's written in python and based on a lot of small daemons and one django application.
It schedules submitted tests on hardware depending on rights and availability.
Setting up your own instance isn't hard; there is a video howto. But Lava is changing its basic device model to pipeline devices to make it more flexible, because the old device model was quite limited.
Our instance is available under https://lava.coreboot.org. At the moment there is only one device (x60) and we're looking for help to add more devices.
coreboot is under heavy development, around 200 commits a month. Sometimes it breaks, most of the time because somebody refactored code and made it simpler.
There are many devices supported by coreboot, but commits aren't tested on every piece of hardware. Which means things break on some hardware. And here the bisect loop begins.
Lava is the perfect place to do bisecting. You can submit a testjob via commandline, track the job and wait until it's done. Lava itself takes care that a job doesn't take too long.
To break down the task into smaller pieces:
checkout a revision
compile coreboot
copy artefact somewhere where Lava can access it (http-server)
submit a lava testjob
lava deploys your image and does some tests
git-bisect does the binary search for the broken revision and checks out the next commit which needs to be tested.
But somebody has to tell git-bisect if this is a good or bad revision. Or you use git bisect run.
git bisect run runs a small script and uses its return code to decide whether this is a good or bad revision. There is also a third option, skip, to skip the revision if the compilation fails (with git bisect run, exit code 125 means skip).
git-bisect can do the full bisect job, but to use lava, it needs a Lava Test Job.
Under https://github.com/lynxis/coreboot-lava-bisect is my x60 bisect script together with a Lava Test Job for the x60. It only checks if coreboot is booting. But you might want to test something else. Does the cdrom show up? Is the wifi card properly detected? Check out the lava documentation for more information about how to write a Lava Testjob or a Lava Test.
To communicate with Lava from the shell you need lava-tool installed on your workstation. See https://validation.linaro.org/static/docs/overview.html
With lava-tool submit-job $URL job.yml you can submit a job and get the JobId.
And check the status of your job with lava-tool job-status $URL $JOBID. Depending on the job-status,
the script must set its exit code. My bisect script for coreboot is https://github.com/lynxis/coreboot-lava-bisect; it is driven like this:
cd coreboot
# number of parallel jobs for make -j$CPU
export CPU=4
# your login user name for lava.coreboot.org
# you can also use LAVAURL="https://$LAVAUSER@lava.coreboot.org/RPC2"
export LAVAUSER=lynxis
# used by lava to download the coreboot.rom
export COREBOOTURL=https://fe80.eu/bisect/coreboot.rom
# used as a target by *scp* (user/host were redacted here; use your own webserver)
export [email protected]:/var/www/bisect/coreboot.rom
git bisect start
git bisect bad                      # the checked-out revision is broken
git bisect good <last-known-good>   # a commit that still worked
git bisect run /path/to/this/dir/bisect.sh
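For reference, a bisect.sh along those lines could look like this sketch. The job-id parsing and the exact status strings are assumptions about lava-tool's output, so verify them against your lava-tool version:
#!/bin/sh
URL="https://$LAVAUSER@lava.coreboot.org/RPC2"
# build coreboot; exit code 125 tells git bisect run to skip unbuildable revisions
make clean
make -j"$CPU" || exit 125
# publish the image where lava can download it (COREBOOTURL points at this file)
scp build/coreboot.rom "$COREBOOTSCP" || exit 125
# submit the job; assumes the job id can be grepped out of lava-tool's output
JOBID=$(lava-tool submit-job "$URL" job.yml | grep -Eo '[0-9]+' | head -n1) || exit 125
# poll until the job has finished
while lava-tool job-status "$URL" "$JOBID" | grep -qE 'Submitted|Running'; do
        sleep 60
done
# Complete -> exit 0 (good revision), anything else (e.g. Incomplete) -> exit 1 (bad)
lava-tool job-status "$URL" "$JOBID" | grep -q Complete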
|
Posted over 9 years ago
All roads lead to Rome, but PulseAudio is not far behind! In fact, how the PulseAudio client library determines how to try to connect to the PulseAudio server has no less than 13 different steps. Here they are, in priority order:
1) As an application developer, you can specify a server string in your call to pa_context_connect. If you do that, that’s the server string used, nothing else.
2) If the PULSE_SERVER environment variable is set, that’s the server string used, and nothing else.
3) Next, it goes to X to check if there is an x11 property named PULSE_SERVER. If there is, that’s the server string, nothing else. (There is also a PulseAudio module called module-x11-publish that sets this property. It is loaded by the start-pulseaudio-x11 script.)
4) It also checks client.conf, if such a file is found, for the default-server key. If that’s present, that’s the server string.
So, if none of the four methods above gives any result, several items will be merged and tried in order.
First up is trying to connect to a user-level PulseAudio, which means finding the right path where the UNIX socket exists. That in turn has several steps, in priority order:
5) If the PULSE_RUNTIME_PATH environment variable is set, that’s the path.
6) Otherwise, if the XDG_RUNTIME_DIR environment variable is set, the path is the “pulse” subdirectory below the directory specified in XDG_RUNTIME_DIR.
7) If not, and the “.pulse” directory exists in the current user’s home directory, that’s the path. (This is for historical reasons – a few years ago PulseAudio switched from “.pulse” to using XDG compliant directories, but ignoring “.pulse” would throw away some settings on upgrade.)
8) Failing that, if XDG_CONFIG_HOME environment variable is set, the path is the “pulse” subdirectory to the directory specified in XDG_CONFIG_HOME.
9) Still no path? Then fall back to using the “.config/pulse” subdirectory below the current user’s home directory.
Okay, so maybe we can connect to the UNIX socket inside that user-level PulseAudio path. But if it does not work, there are still a few more things to try:
10) Using a path of a system-level PulseAudio server. This directory is /var/run/pulse on Ubuntu (and probably most other distributions), or /usr/local/var/run/pulse in case you compiled PulseAudio from source yourself.
11) By checking client.conf for the key “auto-connect-localhost”. If it’s set, connecting to tcp4:127.0.0.1 is also tried…
12) …and tcp6:[::1], too. Of course we cannot leave IPv6-only systems behind.
13) As the last straw of hope, the library checks client.conf for the key “auto-connect-display”. If it’s set, it checks the DISPLAY environment variable, and if it finds a hostname (i.e., something before the “:”), then that host will be tried too.
To summarise, first the client library checks for a server string in step 1-4, if there is none, it makes a server string – out of one item from steps 5-9, and then up to four more items from steps 10-13.
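Steps 2 and 3 are the easiest ones to observe from a shell. A quick sketch (the server address is made up):
# step 2: force a specific server via the environment, overriding everything below it
PULSE_SERVER=tcp:192.168.0.10 paplay /usr/share/sounds/alsa/Front_Center.wav
# step 3: inspect the x11 root-window property set by module-x11-publish
xprop -root PULSE_SERVER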
And that’s all. If you ever want to customize how you connect to a PulseAudio server, you have a smorgasbord of options to choose from!
|
Posted over 9 years ago
This one’s going to be a bit of a long post. You might want to grab a cup of coffee before you jump in!
Over the last few years, I’ve spent some time getting PulseAudio up and running on a few Android-based phones. There was the initial Galaxy Nexus port, a proof-of-concept port of Firefox OS (git) to use PulseAudio instead of AudioFlinger on a Nexus 4, and most recently, a port of Firefox OS to use PulseAudio on the first gen Moto G and last year’s Sony Xperia Z3 Compact (git).
The process so far has been largely manual and painstaking, and I’ve been trying to make that easier. But before I talk about the how of that, let’s see how all this works in the first place.
The Problem
If you have managed to get by without having to dig into this dark pit, the porting process can be something of an exercise in masochism. More so if you’re in my shoes and don’t have access to any of the documentation for the audio hardware. Hardware vendors and OEMs usually don’t share these specifications unless under NDA, which is hard to set up as someone just hacking on this stuff as an experiment or for fun in their spare time.
Broadly, the task involves looking at how the device is set up on Android, and then replicating that process using the standard ALSA library, which is what PulseAudio uses (this works because both the Android and generic Linux userspace talk to the same ALSA-based kernel audio drivers).
Android’s configuration
First, you look at the Android audio HAL code for the device you’re porting, and the corresponding mixer paths XML configuration. Between the two of these, you get a description of how you can configure the hardware to play back audio in various use cases (music, tones, voice calls), and how to route the audio (headphones, headset, speakers, Bluetooth).
Snippet from mixer paths XML
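Such a file contains entries along these lines (a hand-written stand-in; real path and control names are device-specific):
<path name="deep-buffer-playback">
    <ctl name="RX1 Digital Volume" value="87" />
</path>
<path name="speaker">
    <ctl name="SPK DAC Switch" value="1" />
</path>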
In this example, there is one path that describes how to set up the hardware for “deep buffer playback” (used for music, where you can buffer a bunch of data and let the CPU go to sleep). The next path, “speaker”, tells us how to set up the routing to play audio out of the speaker.
These strings are not well-defined, so different hardware uses different path names and combinations to set up the hardware. The XML configuration also does not tell us a number of things, such as what format the hardware supports or what ALSA device to use. All of this information is embedded in the audio HAL code.
Configuring with ALSA
Next, you need to translate this configuration into something PulseAudio will understand[1]. The preferred method for this is ALSA’s UCM, which describes how to set up the hardware for each use case it supports, and how to configure the routing in each of those use cases.
Snippet from UCM
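In UCM syntax, a device section looks something like this (again a hand-written stand-in, reusing the made-up control name from above):
SectionDevice."Speaker" {
        EnableSequence [
                cdev "hw:apq8064tablasnd"
                cset "name='SPK DAC Switch' 1"
        ]
        DisableSequence [
                cdev "hw:apq8064tablasnd"
                cset "name='SPK DAC Switch' 0"
        ]
}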
This is a snippet from the “hi-fi” use case, which is the UCM use case roughly corresponding to “deep buffer playback” in the previous section. Within that, we’re looking at the “speaker device” and you can see the same mixer controls as in the previous XML file are toggled. This file does have some additional information — for example, this snippet specifies what ALSA device should be used to toggle mixer controls (“hw:apq8064tablasnd”).
Doing the Porting
Typically, I start with the “hi-fi” use case — what you would normally use for music playback (and could likely use for tones and such as well). Getting the “phone” use case working is usually much more painful. In addition to setting up the audio hardware similar to the “hi-fi” use case, it involves talking to the modem, for which there isn’t a standard method across Android devices. To complicate things, the modem firmware can be extremely sensitive to the order/timing of setup, often with no means of debugging (a.k.a. fun times!).
When there is a new Android version, I need to look at all the changes in the HAL and the XML file, redo the translation to UCM, and then test everything again.
This is clearly repetitive work, and I know I’m not the only one having to do it. Hardware vendors often face the same challenge when supporting the same devices on multiple platforms — Android’s HAL usually uses the XML config I showed above, ChromeOS’s CrAS and PulseAudio use ALSA UCM, Intel uses the parameter framework with its own XML format.
Introducing xml2ucm
With this background, when I started looking at the Z3 Compact port last year, I decided to write a tool to make this and future ports easier. That tool is creatively named xml2ucm[2].
As we saw, the ALSA UCM configuration contains more information than the XML file. It contains a description of the playback and mixer devices to use, as well as some information about configuration (channel count, primarily). This information is usually hardcoded in the audio HAL on Android.
To deal with this, I introduced a small configuration file that provides the additional information required to perform the translation. The idea is that you write this configuration once, and can more or less perform the translation automatically. If the HAL or the XML file changes, it should be easy to implement that as a change in the configuration and just regenerate the UCM files.
Example xml2ucm configuration
This example shows how the Android XML like in the snippet above can be converted to the corresponding UCM configuration. Once I had the code done, porting all the hi-fi bits on the Xperia Z3 Compact took about 30 minutes. The results of this are available as a more complete example: the mixer paths XML, the config XML, and the generated UCM.
What’s next
One big missing piece here is voice calls. I spent some time trying to get voice calls working on the two phones I had available to me (the Moto G and the Z3 Compact), but this is quite challenging without access to hardware documentation and I ran out of spare time to devote to the problem. It would be nice to have a complete working example for a device, though.
There are other configuration mechanisms out there — notably Intel’s parameter framework. It would be interesting to add support for that as well. Ideally, the code could be extended to build a complete model of the audio routing/configuration, and generate any of the configuration that is supported.
I’d like this tool to be generally useful, so feel free to post comments and suggestions on Github or just get in touch.
p.s. Thanks go out to Abhinav for all the Haskell help!
[1] Another approach, which the Ubuntu Phone and Jolla SailfishOS folks take, is to just use the Android HAL directly from PulseAudio to set up and use the hardware. This makes sense to quickly enable any arbitrary device (because the HAL provides a hardware-independent interface to do so). In the longer term, I prefer to enable using UCM and alsa-lib directly since it gives us more control, and allows us to use such features as PulseAudio’s dynamic latency adjustment if the hardware allows it.
[2] You might have noticed that the tool is written in Haskell. While this is decidedly not a popular choice of language, it did make for a relatively easy implementation and provides a number of advantages. The unfortunate cost is that most people will find it hard to jump in and start contributing. If you have a feature request or bug fix but are having trouble translating it into code, please do file a bug, and I would be happy to help!
|
Posted over 9 years ago
Happy 2016 everyone!
While I did mention a while back (almost two years ago, wow) that I was taking a break, I realised recently that I hadn’t posted an update from when I started again.
For the last year and a half, I’ve been providing freelance consulting around PulseAudio, GStreamer, and various other directly and tangentially related projects. There’s a brief list of the kind of work I’ve been involved in.
If you’re looking for help with PulseAudio, GStreamer, multimedia middleware or anything else you might’ve come across on this blog, do get in touch!
|
Posted over 9 years ago
2.1 surround sound is (by a very unscientific measure) the third most popular surround speaker setup, after 5.1 and 7.1. Yet, ALSA and PulseAudio have long supported more unusual setups such as 4.0 and 4.1, but not 2.1. It took until 2015 to get all the pieces in the stack ready for 2.1 as well.
The problem
So what made adding 2.1 surround more difficult than other setups? Well, first and foremost, the fact that ALSA used to have a fixed mapping of channels. The first six channels were decided to be:
1. Front Left
2. Front Right
3. Rear Left
4. Rear Right
5. Front Center
6. LFE / Subwoofer
Thus, a four channel stream would default to the first four, which would then be a 4.0 stream, and a three channel stream would default to the first three. The only way to send a 2.1 channel stream would then be to send a six channel stream with three channels being silence.
This was not good enough, because some cards, including laptops with internal subwoofers, would only support streaming four channels maximum.
(To add further confusion, it seemed some cards wanted the subwoofer signal on the third channel of four, and others wanted the same signal on the fourth channel of four instead.)
ALSA channel map API
The first part of the solution was a new alsa-lib API for channel mapping, allowing drivers to advertise what channel maps they support, and alsa-lib to expose this information to programs (see snd_pcm_query_chmaps, snd_pcm_get_chmap and snd_pcm_set_chmap).
The second step was for the alsa-lib route plugin to make use of this information. With that, alsa-lib could itself determine whether the hardware was 5.1 or 2.1, and change the number of channels automatically.
PulseAudio bass / treble filter
With the alsa-lib additions, just adding another channel map was easy.
However, there was another problem to deal with. When listening to stereo material, we would like the low frequencies, and only those, to be played back from the subwoofer. These frequencies should also be removed from the other channels. In some cases, the hardware would have a built-in filter to do this for us, so then it was just a matter of setting enable-lfe-remixing in daemon.conf. In other cases, this needed to be done in software.
Therefore, we’ve integrated a crossover filter into PulseAudio. You can configure it by setting lfe-crossover-freq in daemon.conf.
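In daemon.conf that looks like this (the 120 Hz value is just a common crossover choice, not a recommendation):
; excerpt from /etc/pulse/daemon.conf
enable-lfe-remixing = yes
lfe-crossover-freq = 120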
The hardware
If you have a laptop with an internal subwoofer, chances are that it – with all these changes to the stack – still does not work, because the HDA standard (which is what your laptop very likely uses for analog audio) does not have much of a channel mapping standard either! So vendors might decide to do things differently, which means that every single hardware model might need a patch in the kernel.
If you don’t have an internal subwoofer, but a separate external one, you might be able to use hdajackretask to reconfigure your headphone jack to an “Internal Speaker (LFE)” instead. But the downside of that is that you then can’t use the jack as a headphone jack…
Do I have it?
In Ubuntu, it’s been working since the 15.04 release (vivid). If you’re not running Ubuntu, you need alsa-lib 1.0.28, PulseAudio 7, and a kernel from, say, mid 2014 or later.
Acknowledgements
Takashi Iwai wrote the channel mapping API, and also provided help and fixes for the alsa-lib route plugin work.
The crossover filter code was imported from CRAS (but after refactoring and cleanup, there was not much left of that code).
Hui Wang helped me write and test the PulseAudio implementation.
PulseAudio upstream developers, especially Alexander Patrakov, did a thorough review of the PulseAudio patch set.
|