Posted almost 15 years ago by vokimon
The CLAM project is delighted to announce the long-awaited 1.4.0 release of CLAM, the C++ framework for audio and music, code name: 3D molluscs in the space. In summary, this long-term release includes a lot of new spatialization modules for 3D audio; MIDI, OSC and guitar effects modules; architectural enhancements such as typed […]
|
Posted almost 15 years ago by [email protected] (Xavier Amatriain)
A couple of weeks ago I attended a three-day course on Hadoop from the folks at Cloudera. Although I had heard and read about Hadoop before, this was a great opportunity to learn many of its details and to find out about several tools that make up the Hadoop ecosystem.
If, like me before, you only have a rough idea of what's in Hadoop, this post should interest you. Take what I say with a grain of salt, since I am no expert in Hadoop. However, because I am not an expert, I think I can guarantee a fresher look, and you can trust that I am not trying to sell you the project. And if you are an expert and you read the post, you might want to give feedback in case I got something wrong.

Hadoop is an open-source Java implementation of the MapReduce framework introduced by Google. The main developer and contributor to Hadoop, however, is Yahoo. It might seem odd that one of Google's main competitors would release an open-source version of a framework Google introduced, more so when Google has recently been granted a patent for it. However, it seems unlikely that Google will enforce that patent. One of the main reasons is that Map and Reduce functions have been known and used in functional programming for many years. Another reason is that Hadoop has gained huge popularity as part of the Apache project. Enforcing the patent would not earn Google much love from the many companies that are now making a living off it or using it as an important component of their web architecture.

But, before we go into any more detail, it would be good to understand what Hadoop can be used for and when we should think about adopting it. First, and above all, Hadoop is a framework for data analysis and processing. Therefore, if you have no data, or no need to process it, do not continue with this post. Hadoop is sometimes presented as an alternative to traditional relational databases. However, it is not a database (although it does provide a NoSQL one, HBase, as one of its tools); it is a framework for distributing data processing. OK, so there was the second keyword: distribution. If you think you can do whatever you need to do on a single machine, you don't need Hadoop. However, you might want to look at it anyway, since distributing your data processing can be cheaper and also much more reliable. And finally, related to the previous point, using Hadoop only makes sense if you are processing large datasets, and by large I mean several terabytes.

However, even if your problem fits the three previous conditions (distributed processing of large datasets), you still cannot be completely sure Hadoop is your solution. Distributed relational databases are still an option. I won't go into the details, but you might want to listen to some of the voices that have recently stepped in to defend the scalability of relational databases and their applicability to highly demanding large datasets. These two posts are good reading: "Getting Real about NoSQL and the SQL-Isn't-Scalable Lie" and "SCALE 8x: Relational vs. non-relational". I would also recommend this recent presentation on "What every developer should know about database scalability".

So now that we have some intuition of when Hadoop may be of interest, let me introduce the two main pieces behind Hadoop: MapReduce and HDFS.

MapReduce is a programming model introduced by Google, which is at the core of Hadoop. It is based on the use of two functions taken from functional programming: Map and Reduce. Map processes a (key, value) pair into a list of intermediate (key, value) pairs. Reduce takes an intermediate key and the set of values for that key. Both the mapper and reducer functions are written by the user. The framework groups together the intermediate values associated with the same key in order to pass them to the corresponding Reduce. MapReduce claims to be a generic enough programming model that most data-processing tasks can be decomposed in this way.
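As a rough illustration of the model (this is not Hadoop's actual API, just a self-contained C++ sketch with made-up names): the map step emits a (word, 1) pair per word, a grouping step collects the values per key, and the reduce step sums them.

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Map: turn one input record (a line of text) into intermediate (word, 1) pairs.
std::vector<std::pair<std::string, int>> map_record(const std::string& line) {
    std::vector<std::pair<std::string, int>> pairs;
    std::istringstream words(line);
    std::string word;
    while (words >> word) pairs.push_back({word, 1});
    return pairs;
}

// Reduce: combine all the values seen for one key (here, sum the counts).
int reduce_key(const std::string& /*word*/, const std::vector<int>& counts) {
    int total = 0;
    for (int c : counts) total += c;
    return total;
}

int main() {
    // Stand-in for the framework: map every record, group by key, reduce each group.
    std::vector<std::string> records = {"hello hadoop", "hello mapreduce"};
    std::map<std::string, std::vector<int>> grouped;
    for (const auto& record : records)
        for (const auto& kv : map_record(record))
            grouped[kv.first].push_back(kv.second);
    for (const auto& group : grouped)
        std::cout << group.first << "\t" << reduce_key(group.first, group.second) << "\n";
    return 0;
}
```

In the real framework the grouping (shuffle and sort) and the distribution across machines are what Hadoop provides; the user only supplies the two functions.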
If you are interested in learning more, I recommend you start with Google's paper. You can also take a look at Google's set of videos introducing the framework. If you want a more "academic" presentation, you might want to look at these UC Berkeley classes.

The other important core piece of Hadoop I mentioned before is the Hadoop Distributed File System (HDFS). HDFS is the equivalent of the Google File System (GFS) used in the original MapReduce framework. This filesystem is optimized for streaming reads of large files (from several gigabytes to terabytes). Note that HDFS does not allow you, for instance, to edit a file once it has been written.

OK, so now we have the basics in place: how do we use Hadoop? Since Hadoop is written in Java, the most straightforward way to get started is to use its Java API. If you look at the Hadoop Map/Reduce tutorial, for instance, you will see that the framework is introduced through its Java API.

But if you want to use Hadoop and would rather keep away from Java, there are plenty of other options. First, there is Hadoop Streaming, which allows you to use arbitrary program code with Hadoop. Stdin and stdout are used for data flow, and each mapper and reducer is defined in a separate program. This comes in very handy if you want to use Hadoop from a scripting language. And if you want greater performance in your mapper and reducer functions and would like to call compiled C++ code instead, your solution is called Hadoop Pipes.
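To make the Streaming idea concrete, here is a hedged sketch of a word-count mapper and reducer written as two small C++ programs; they only read stdin and write tab-separated key/value lines to stdout, which is the convention Streaming uses. The file names are made up for the example.

```cpp
// wordcount_mapper.cxx -- reads lines from stdin, emits "word<TAB>1" per word.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        std::istringstream words(line);
        std::string word;
        while (words >> word)
            std::cout << word << "\t" << 1 << "\n";  // key <TAB> value
    }
    return 0;
}
```

The matching reducer receives the mapper output already sorted by key, so it only needs to sum consecutive counts for the same word:

```cpp
// wordcount_reducer.cxx -- reads "word<TAB>count" lines sorted by word,
// prints one "word<TAB>total" line per distinct word.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::string line, current_word;
    long total = 0;
    while (std::getline(std::cin, line)) {
        std::istringstream fields(line);
        std::string word;
        long count = 0;
        fields >> word >> count;
        if (!current_word.empty() && word != current_word) {
            std::cout << current_word << "\t" << total << "\n";  // flush previous key
            total = 0;
        }
        current_word = word;
        total += count;
    }
    if (!current_word.empty())
        std::cout << current_word << "\t" << total << "\n";
    return 0;
}
```

A job would then point the streaming jar at the two compiled binaries through its -input, -output, -mapper and -reducer options; the exact jar location depends on your installation.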
Now, what if you would like to use the Hadoop framework but do not fancy the MapReduce programming model? In other words, is there a higher-level, more programmer-friendly way to interface with Hadoop? The answer is, of course, yes. There are several ways to do this, but I will mention two of them: Hive and Pig. Hive is a tool developed at Facebook that allows SQL-like access to the Hadoop infrastructure. Although the project is not very mature yet, it is a very interesting option to consider and it seems to be giving Facebook very good results. The other option is to use Pig, developed by Yahoo. Pig provides a higher-level language called Pig Latin that increases productivity, especially if you are dealing with non-Java programmers who are closer to the domain (e.g. data analysts). Pig Latin is a dataflow language, and it even has a graphical front-end plugin for Eclipse called PigPen.

I would not like to finish this personal overview of the Hadoop ecosystem without mentioning Mahout, a project for distributed machine learning with Hadoop. Among its examples, Mahout includes an implementation of several collaborative filtering algorithms for recommendation. I would also encourage you to take a look at this list of academic papers about or using Hadoop.

At this point I have to say that I have mixed feelings about Hadoop and about MapReduce itself. Although it is a powerful framework with immediate application to real-life problems that involve very large datasets, the model feels more like a kludge than a paradigm shift. I understand why people turn to tools like Hive and Pig that hide the MapReduce complexity behind friendlier models such as ER and dataflow networks. Providing a framework that is efficient and usable but also conceptually illuminating is definitely an area to work on in the future. And it seems that I am not the only one thinking along these lines: even Yahoo themselves are looking into new ways that go beyond Hadoop and MapReduce.
|
Posted almost 15 years ago by [email protected] (Xavier Amatriain)
Some days ago I was surprised to find (the cult of) Scrum in a list of the Worst Technologies of the Decade. I posted the link on LinkedIn's Agile Alliance group, and that started a mild discussion in which most people more or less defended Scrum. What follows is my personal take on the issue: is Scrum really (that) evil?
(Needless to say, if you know nothing about Scrum you should at least learn something about it before proceeding with the rest of the post.)

OK, so the first thing people will tell you about Scrum is that it is not a method, obviously not a technology, not a methodology... then what is it? Scrum sells itself as a... (drums here) "framework"! For those of you with a less agile background, another well-known example of a framework in the context of development methods is the Rational Unified Process. If you are thinking of proposing a new development "method", you should definitely think about selling it as a framework. This will bring you a number of benefits, the most important being:
- You can say that your framework is useful for any kind of situation
- If anything goes wrong, you can always blame whoever instantiated the framework for not doing it right

Well, that is exactly what happens with Scrum. You cannot say it is evil in itself, not even good or bad: it will be as good or bad as its particular instance. This is, of course, unfair to Scrum users, who may get the feeling that if anything goes right it will be thanks to the framework, but if anything goes wrong it will be their fault.

(UPDATE: The following paragraph has been edited after feedback from comments to the post and from Kent Beck himself. I only had time to skim through the 2nd edition of XP Explained, but I think I got the point.)

Unfortunately, the alternative is not very appealing either. Other methods like eXtreme Programming, in their first incarnations, would tell you that the only way to apply the method was to apply all the practices. There was an important point in that message, since some practices are actually complementary and feed back into others. So you can't, for instance, decide to skip Continuous Integration and still think you can get away with it. In any case, the reality is that you cannot expect all teams to apply all practices: while practices like Unit Testing reach 60% usage, others like Pair Programming do not even reach 25%. Also, as Kent Beck acknowledges in the preface to the 2nd edition of his foundational eXtreme Programming Explained, enforcing all practices is like enforcing a given programming style, and that might not fit every situation. The currently proposed solution in XP is to present some primary practices and some corollary ones. The message now is also that practices cannot be enforced and need to be evaluated in each situation. This brings XP closer to being a toolkit of practices, which is not far from the concept of a framework: the responsibility rests on the particular instance or application.

Maybe the ideal solution would look something like this: propose a generic framework that includes several practices and variations, and then illustrate the framework with several practical instances that can be used depending on the project at hand. Of course, I am not the first one to think about something like this. As a matter of fact, Alistair Cockburn's Crystal methodologies were designed precisely with this idea in mind. Unfortunately, the author did not get very far in proposing the different instances, and the only one fully developed so far is Crystal Clear, valid for smaller and lighter projects.

In any case, Scrum is a framework, and when something goes wrong it is pretty likely that the "user" will be to blame.
However, as I have just explained, there is no clear alternative to putting so much weight on the particular instance of a method. Still, can we say that there are some things that are inherently good or bad about Scrum?

The good things

Of course there are a few things that are good about Scrum regardless of how you apply the framework. To name a few:
- Accountability: Scrum makes it much easier to make teams accountable for what they do. It also makes it easier to hold clients accountable for what they asked for, and even managers for what they did not provide. Forget about eXtreme Programming cards and boards; Scrum artifacts like the burndown charts or the impediments or product backlog are great tools if what you need is more accountability in your life... I mean, projects.
- Credibility of Agile: Let's face it, the first thing that comes to mind to many people when they read the Agile Manifesto is a bunch of long-bearded hipsters coding with one hand and eating chips with the other. And this impression might not improve much when you mention things like the "planning game" or... (my god!) Pair Programming. Scrum brings credibility to Agile because it looks like "serious stuff".
- The agile for non-developers and managers: Believe it or not, those who have to decide whether to promote Agile in a company may have no clue what the life of a developer is like (and many could not care less). If you plan on selling Agile by saying you will make developers' lives easier, think again. But if you talk about shortening cycles, minimizing risk and... improving accountability (that is, offering tools for managers to understand what is going on), you might have a chance. The biggest strength of Scrum is that it makes it much easier for managers and decision makers to buy into Agile. And believe me, this is a major issue that by itself may justify Scrum.

The bad ones
- Too many artifacts: OK, but wasn't one of the goals of becoming agile to offer a lightweight method with as few artifacts as possible? Then what are all these artifacts doing in my life now? Product backlog, sprint backlog, impediments backlog, product burndown chart, sprint burndown chart, daily meeting, retrospective, sprint planning... is this really necessary? The truth is that, if managed right, many of these artifacts are not a huge overhead and are useful, really. And again, they are needed to convince managers that we are doing things right. The problem is that, if we are not very careful, it is very easy to abuse them and end up making them more important than the product or the team itself.
- Too much focus on the process: Related to the previous point, it is easy to fall into a situation where the process is more important than anything else. It is not about your project, your product, or your team... it is about getting the process right. What? This sounds more like CMM than Agile!
- Developers? What developers?: And finally... where does that leave developers? If you implement Scrum and nothing else, you are pretty likely to turn developers' lives into a sweatshop 2.0: they have to work more, faster, and better... and are more accountable than before. Scrum does not care much about developers, so you had better take care of that yourself. That is why a preferred approach is to sprinkle some XP practices on top of a Scrum-driven project organization. However, because XP will not be as popular among managers and decision makers, you are likely to have much less support (if you don't agree with this, just try mentioning pair programming).
Scrum by itself is not developer-friendly, and that is a fact.

Conclusions: is this really agile?

So now do me a favor and take a moment to (re)read the Agile Manifesto. What do you read in the first line? "(We value) Individuals and interactions over processes and tools". The main sin of Scrum is that it makes it really easy to reverse the sign in that equation. It is much easier to sell processes and tools to managers, and because Scrum is a framework it is very easy to apply it in a way that is not even agile anymore.

My advice, however, is not to avoid Scrum. Use Scrum to convince your organization of the benefits of Agile. Use the "processes and tools" view to make your point to managers and decision makers. But:
- Do not forget to build motivated teams
- Value developers and educate them in eXtreme Programming techniques
- Make them feel that they are far more important than any Scrum process or tool by allowing self-organization
- Make sure that Scrum Masters (or project managers and the like) understand their dual role of using the "process" to improve accountability and upward reporting while using people's skills and valuing the individuals in the team.

Of course, this is not the magic bullet of agile implementation. As a matter of fact, the combination of Scrum with eXtreme Programming techniques is already a favored approach in many companies (according to the 2009 State of Agile study, 24% of companies use this hybrid). But again, individuals and interactions may be more valuable than any process or tool. So in the end, it may not be so much about how you combine or instantiate your particular flavor of agile method as about how good and motivated your teams are.

As always, I would love to hear your comments and experiences.
|
Posted almost 15 years ago by [email protected] (Vokimon)
Let me post about some of the latest changes in the CLAM build system.

The CLAM project has been distributing two kinds of shared libraries: the main library modules (core, audioio and processing) and a set of optional plugins (spacialization, osc, guitareffects...). Plugins are very convenient, since programs like the NetworkEditor can be extended without recompiling, and they enable third-party extensions. They are so convenient that we were seriously considering splitting processing and audioio off as plugins.

But the contents of a CLAM plugin can only be used through the abstract Processing interface; no other symbols are visible from other components. That has a serious impact on the flexibility to define plugin boundaries. For example, if a plugin defines a new data token type, any processing managing that token type has to live in that same plugin. Using the symbols of one plugin from another implies installing headers, providing a soname, a linker name and so on; that is, building a module library.

So, if plugins want to be libraries and libraries want to be plugins, let them all be both. Each module will be compiled as a library (libclam_mymodule.so.X.Y.Z), with a soname (libclam_mymodule.so.X.Y), a linker link (libclam_mymodule.so), headers (include/CLAM/mymodule/*), a pc file... and it will also provide a plugin library (libclam_mymodule_plugin.so) which has no symbols by itself but is linked against the module library. When a program loads such a plugin, all the processings in the module library become available via the abstract interface. On the other hand, if a program or a module needs to use the symbols of another module explicitly, it just has to include the headers and link against the module library.
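As a hedged illustration of that last point, consuming a module library directly could look like the sketch below. The header, class and pkg-config file names follow the naming scheme described above but are otherwise invented for the example, not taken from actual CLAM modules.

```cpp
// use_mymodule.cxx -- hypothetical consumer of a CLAM module library.
// The header and class names are made up; what matters is that module symbols
// become reachable by including the installed headers...
#include <CLAM/mymodule/SomeProcessing.hxx>  // headers installed under include/CLAM/mymodule/

int main() {
    CLAM::SomeProcessing processing;  // direct use of a symbol exported by the module
    return 0;
}

// ...and linking against the module's linker name, e.g. (assuming the pc file
// is named after the module):
//   g++ use_mymodule.cxx `pkg-config --cflags --libs clam_mymodule`
// which resolves to something like -lclam_mymodule (libclam_mymodule.so).
// The same processings would also be reachable at run time, but only through
// the abstract Processing interface, by loading libclam_mymodule_plugin.so
// as a plugin.
```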
I just applied the changes to the plugins and it seems to work nicely. Adapting the main modules will be harder because of the old SConstruct file, but it is a matter of days and then we can split them. I moved a lot of common SCons code to the clam.py SCons tool. There is a cute environment method called ClamModule which generates a CLAM module including the module library, soname, linker name, plugin, pkg-config and install targets. I also added some build-system freebies for the modules and applications:
- All the intermediate generated code kept apart in a 'generated' directory
- Non-verbose command line
- Colour command line
- Enhanced scanning methods with blacklists

Well, do not rely too much on the current clam SCons tool API. Changes are still flowing through the Subversion repository. Of course, warn me if I broke something. ;-)
|
Posted about 15 years ago by [email protected] (Vokimon)
After a long refactoring to get typed controls into CLAM without breaking anything, we finally have them. Kudos for this achievement go also to Francisco (who started the whole thing as his GSoC project), Hernan and Nael.

Now controls are defined just like ports, with a type. So, for example, you can define an out-control of type OutControl<MyType>. As a side effect, control callbacks come in a natural way: instead of using a different class (formerly InControlCallback<MyType, HostProcessingType>), you just pass a processing method to the control constructor. Templates do the magic there too, but that is hidden from the processing programmer's API, which is cool. See the example below.
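(The original code example did not survive here, so what follows is only a hedged reconstruction based on the description above: the InControl<T>/OutControl<T> names come from the post, but the header names, constructor signatures and the SendControl call are assumptions, not verified CLAM API.)

```cpp
// Hedged sketch of a processing using typed controls; configuration
// boilerplate that a real CLAM Processing needs is omitted.
#include <CLAM/Processing.hxx>
#include <CLAM/InControl.hxx>
#include <CLAM/OutControl.hxx>

class BoolFollower : public CLAM::Processing
{
    // Typed controls are declared like ports, with a value type.
    CLAM::InControl<bool> _input;   // callback method passed to the constructor
    CLAM::OutControl<bool> _output;
    bool _last;
public:
    BoolFollower()
        : _input("Input", this, &BoolFollower::OnInput)
        , _output("Output", this)
        , _last(false)
    {
    }
    const char* GetClassName() const { return "BoolFollower"; }
    bool Do()
    {
        _output.SendControl(_last);  // forward the last received value downstream
        return true;
    }
private:
    // Called back by the control whenever a new value arrives.
    void OnInput(const bool& value) { _last = value; }
};
```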
So, now that typed controls are working, it is time to have more than just float controls. The main problem is that, in order to make a useful new control type, you have to modify the NetworkEditor and the Prototyper as well.

Although the most demanding needs are for enum and integer controls, I started with simpler bool controls. My goal is that you don't have to modify the NetworkEditor or the Prototyper to introduce a new control type. That is, a plugin could add new control senders, control displays, default connected processings (for double-clicking on a control) and binders for the Prototyper (to locate UI elements to link to processing controls). So I wanted a new, simple, non-float use case to explore a feasible API.

Below you can see a network with a BoolControlSender (widget hxx/cxx, processing hxx/cxx) and a BinaryCounter (hxx/cxx) outputting into two BoolControlDisplay (widget hxx/cxx, processing hxx/cxx). Maybe it is a little late for using CLAM for your Christmas lights, I guess ;-)

Currently, instead of bools we use floats with a threshold deciding true or false. A new ControlGate processing provides a transition by doing that translation.

So now it is time to look at the code I had to add and see what can be enhanced to ease adding new control types:
- One of the things I didn't like during the implementation is having to add an entry to a long list of 'if' statements in ProcessingBoxEmbededWidget.cxx. This is clamming for a refactoring into a Factory.
- Also, the menu-entry filling (to connect to a new processing) and the default create-and-connect action become a list of if clauses with the type as parameter.
- Both the Prototyper binders and the embedded widgets had to duplicate the control-sending code. That could be generalized into a binder object that both use.

So, it is clear that there is room for a lot of enhancement, and it looks like those enhancements could also be applied to ports as well :-)
|
Posted about 15 years ago by [email protected] (Xavier Amatriain)
A couple of weeks ago I attended a two-day course on Survey Design and Evaluation. In my recent research (see, for instance, the Rate It Again publication at the last RecSys conference) I have become more and more interested in how people give their opinions.
The course was taught entirely by Professor Willem Saris, a very well-known researcher in survey design who was able to attract attendees from all over the world for this course. Although the course was fairly advanced, it touched upon the very issues that I wanted to see discussed. In this post I will try to mention some of them very briefly. More than trying to give a thorough explanation, I hope to draw your attention to some of these issues. Even if you are not into Recommender Systems, it is not unusual for Computer Science researchers to be involved in projects that need some sort of survey, and I am sure you will find some of these issues as interesting as I have. I will summarize some initial issues in this first post and dive into others in future posts if you consider it interesting enough.

But, before I start, let me throw in a couple of surprising conclusions just to catch your attention:
- Batteries of agree/disagree questions are evil! Yes, I am sure you have come across them and possibly even designed a survey in which users are asked at the beginning something like "Mark how much you agree/disagree with the following statements". Well, this is to be avoided at any cost. I will clarify why and how you can replace these kinds of questions.
- You cannot compare results among different demographic groups assuming that the same response means the same to any group. It turns out that different countries, for instance, have different rating styles and understand questions differently. British respondents, for example, tend to be much milder in their responses than Spaniards: on a scale from 0 to 5, a British 3 might mean the same as a Spanish 5! In any case, this is not something you can assume in advance; it is something you need to analyze in order to guarantee fair comparisons. More on this later.

OK, so I hope I have caught your attention by now and you agree with me that these issues are very interesting and seldom explained (actually, during the course we saw many examples of professional surveys that were plain wrong).

The method for developing a survey presented in the course was a three-step procedure: (1) distinguish between concepts by postulation and concepts by intuition; (2) develop assertions for concepts by intuition; and (3) develop requests for an answer from assertions. Let's see them in a bit of detail.

1. Concepts by postulation and concepts by intuition

One first important decision when trying to measure a given concept is whether we can measure it by intuition or by postulation. If we think that a concept is straightforward enough, we can directly ask the question we would like to be answered (e.g. How often do you watch sports on television?). However, many times we are trying to measure concepts for which a simple and direct question won't do (e.g. How interested in politics are you?), so we need to measure them by postulation.

When measuring a concept by postulation, we need to decompose the complex concept we want to measure into a series of indicators. These indicators can be either formative or reflective. Formative indicators are variables that define the concept. They should take into account all the necessary components and are not necessarily correlated. On the other hand, reflective indicators are consequences of the concept being measured (e.g. people watch the news because they are interested in politics). These indicators are correlated, since they are all linked by the originating concept.
2. From concept to assertion

There are three forms of assertions for asking about a concept by intuition:
- Subject + LV predicator + subject complement (e.g. Politicians are fair)
- Subject + predicator + direct object (e.g. I like conservatives)
- Subject + predicator (e.g. The importance of world economics has changed)

On the other hand, a concept by intuition might be measuring different kinds of subjective variables, which can be separated into categories such as evaluation, importance, feelings, rights, policies... It turns out that, depending on the kind of subjective variable we are measuring, one kind of structure might or might not be appropriate. For instance, if you are measuring importance, only structure 1 will work (there is a complete table, which I cannot reproduce here, showing the relation between the kind of variable and the structure to use).

3. From assertions to requests for answers

One last step is to decide how to present the request to the survey participant. The following list summarizes the different options available:
- Direct request
  - With WH word
  - Without WH word
  - Direct instruction ("Please indicate...")
  - Direct request ("Will you vote...")
- Indirect request (made of a pre-request and a subordinate clause)
  - With WH word ("Tell me why you think...")
  - Without WH word ("Do you think...?")

Following this three-step approach does not guarantee that you avoid all errors, but it does guarantee that you are looking into all the issues needed to decide what the right question is to measure a given concept.

And, if you cannot avoid all errors, what can you do about it? Well, you can measure them, take them into account and even predict them. In order to do that I would need to introduce the Multitrait-Multimethod approach and concepts such as reliability, validity and quality. But that shall be in a second post, if there is enough interest in this one.

You can read more on these issues in Willem Saris' book "Design, Evaluation, and Analysis of Questionnaires for Survey Research".
|
Posted about 15 years ago by [email protected] (Xavier Amatriain)
This is one of the hot discussions that have been sparked by the Netflix Prize. During the competition, several teams reported trying to use movie metadata, always with discouraging results. This is probably best summarized by a 2008 post by Pragmatic Theory, one of the leading teams.

The issue was re-opened during the last RecSys conference in two ways. First, there was an interesting discussion during one of the panels, which included the leading teams. Second, a paper with a rather provocative title was published: "Recommending new movies: even a few ratings are more valuable than metadata".

After this, I have seen several discussions in which people used these findings to conclude that content-based recommendations are little more than a dead end, and that it is not worth investing in such research. One such discussion happened in the Recommender Systems group on LinkedIn. But it was on the Music-IR list where things heated up the most, turning into a long and interesting thread. Most of what follows is basically an edited version of what I already expressed in those two discussions.

In a few words, my take on this issue is that the results reported in the context of the Netflix competition are (1) algorithm-dependent and (2) dataset-dependent. Although these findings are a valid explanation of why people found no use for metadata in the context of the Netflix Prize, one cannot extrapolate this finding to other contexts. Why?

Results related to the Netflix Prize only refer to how some specific content features help improve the success measure chosen in this case (RMSE). It is a well-known fact that RMSE in a recommender system does not correlate perfectly with user satisfaction. Things like serendipity or novelty are more likely to come out of a content-based system than a CF one, since content-based approaches are better suited to explore the long tail.
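For reference (this definition is standard and not spelled out in the post), the RMSE used as the Netflix Prize metric is the root mean squared error between predicted and actual ratings:

$$\mathrm{RMSE} = \sqrt{\frac{1}{|T|}\sum_{(u,i)\in T}\left(\hat{r}_{ui} - r_{ui}\right)^2}$$

where $T$ is the set of test (user, item) pairs, $r_{ui}$ is the actual rating and $\hat{r}_{ui}$ the predicted one. It only rewards pointwise rating accuracy, which is part of why a lower RMSE does not automatically mean happier users.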
The dataset in Netflix is somewhat representative of many RecSys cases, but not all. For instance, the sparsity of the rating matrix is much greater in the "movie" dimension than in the "user" dimension. That is, for a given movie we are likely to have many ratings, while for a given user we are likely to have very few. As some of the participants in the RecSys panel explained, the Netflix problem is more about how to fill in user "missing values" than movie "missing values". That is one of the reasons why movie content does not help much. Adding content to the user dimension (for instance by adding demographics) would probably have helped. Obviously, this is not easy to do unless Netflix had included the phone number or SSN of users in the dataset.

When people talk about content information in the context of the Netflix Prize, they are referring to a very specific form of content: editorial metadata coming mainly from IMDb. But in different settings there are many other and better sources of content information. For instance, one can try to infer descriptors by automatically analyzing the signal (either video or audio) and use those features for content-based recommendations. We are still far from having automatic algorithms that can, on their own, produce features useful enough to map to user preferences. But that does not mean these features do not exist. Another approach to extracting those features is to have experts manually annotate the content. This is what Pandora does in their music recommendation system. And although I have not seen hard numbers, it seems users are more satisfied than when using CF alone.
I think that we will probably see the use of content (and user demographics) in the second edition of the prize, since the dataset will be very different and will include fewer ratings per movie and more user info.

All that said, in the general case, and with no other information about the problem, I would probably venture to say that collaborative filtering is a more general solution than content-based recommendation. But clearly, the best solution is to combine both, as each solves a part of the problem.

So, let me try to summarize my thinking in a set of simple statements:
- CF is more effective than content-based recommendations in the general case.
- The fact that editorial metadata has not proved useful for improving RMSE accuracy in the Netflix Prize does not mean that content-based recommendations are useless.
- Adding some sort of content description helps recommendations as long as that description effectively describes the content and maps onto user preferences.
- Editorial metadata maps directly neither to the content nor to user preferences, so its usefulness may be very limited.
- Features automatically derived from the content map directly to the content but, in the general case, not to user preferences. Lots of research effort still needs to go into closing this semantic gap.
- Manually annotated content features map to the content and to user preferences, so they should prove useful, as in the case of Pandora. But they might be expensive in the general case.

As always, I am looking forward to your comments.
|
Posted about 15 years ago
CLAM developers Pau Arumí and Natanel Olaiz recently presented some new work at the fantastic Blender Conference in Amsterdam. The talk was about a technology developed at BarcelonaMedia involving an innovative use of Blender for 3D audio, using CLAM for the audible-scene rendering and decoding and Ardour for playing out to any loudspeaker layout.
It was very nice to meet Blender developers and artists, and the overall conference was a great experience!
Our talk was entitled: Remixing of movie soundtracks into immersive 3D audio
The summary:
We present a use of Blender for an innovative purpose: the remastering of traditional movie soundtracks into highly immersive 3D audio soundtracks. To that end we developed a complete workflow making use of Blender with Python extensions, Ardour (the digital audio workstation) and audio plugins for 3D spatialization and room acoustics simulation. The workflow consists of two main stages: the authoring of a simplified scene and the audio rendering. The first stage is done within Blender: taking advantage of the video sequence editor playing next to a 3D view, the operator recreates the animation of the sound sources, mimicking the original video. He then associates the objects in the scene with existing audio tracks of an Ardour session containing the soundtrack mix and, optionally, adds acoustic properties to the scene prop materials (e.g. defining how a wooden room will sound) in order to render an acoustics simulation using ray-tracing algorithms. In the second stage, a specification of the loudspeaker positions used in the exhibition is given, and the Ardour session with the soundtrack is automatically modified to incorporate the whole sound scene edited in Blender, the necessary routing, and the 3D audio decoding plugins such as Ambisonics and other techniques implemented with CLAM.
The slides are available (we hope to add the accompanying videos soon).
|