
News

Posted over 15 years ago by [email protected] (Xavier Amatriain)
If you are participating in the Netflix Prize, don't worry... this post is not about the economic crisis, Netflix filing for bankruptcy, and the prize going unpaid. It is, in fact, about a much "scarier" prospect: what if there was no way to get the error down to the threshold set in the competition? Or, more precisely, what if the only way to reach that threshold was actually overfitting to the existing training and testing dataset?

The possibility cannot be ruled out. A few weeks back I posted on this same blog a discussion of recent work we did analyzing the impact of natural noise on user ratings. That is, when users give us feedback through ratings, they are adding background noise. There are many possible reasons for this. In some cases the user does not really see a difference between rating an item a 3 or a 4. Other times, the user is not careful enough when giving feedback and lets other factors affect the result. Ratings are influenced by things like how long ago the item was used, what the previously rated item was, or even the mood the user is in.

Whatever the reason, the result is that we have data with noise and/or errors. If we take a random rating and ask the user "What was your rating for item X?", we will inevitably get errors. Hmm... so even the user makes errors when recalling her own ratings? Yes! As a matter of fact, we can easily measure this error by asking her to rate the same items several times (see, again, my previous post on this).

Now, here is a rule of thumb: we cannot predict a user's ratings any better than that same user can assess them herself. This natural noise threshold therefore sets a "magic barrier". It makes no sense to try to push our prediction error below it.
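To make the measurement concrete, here is a minimal sketch (with made-up numbers, not the actual study's data) of how the noise floor can be estimated: collect two rating passes from the same users over the same items, some time apart, and compute the RMSE between the two passes.

```python
import math

# Hypothetical re-rating trial: each pair is (rating in pass 1, rating in
# pass 2) by the same user for the same item, some weeks apart. A real
# study would collect thousands of such pairs.
reratings = [(4, 4), (3, 4), (5, 5), (2, 3), (4, 3), (1, 1), (3, 3), (5, 4)]

def rmse(pairs):
    """Root mean squared error between the two rating passes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

# This value estimates the "magic barrier": a predictor whose held-out
# RMSE falls below it is likely fitting noise rather than preferences.
noise_floor = rmse(reratings)
print(round(noise_floor, 3))
```

The same RMSE formula used to score predictions against held-out ratings is applied here to a user's own re-ratings, which is why the two numbers are directly comparable.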
What difference does it make that our system is very good at predicting that some item should "be" a 3 if the user does not really see the difference between (or is not sure about) a 2 and a 3 for that item? Or, the other way around: how can we predict a 3 with no error if the user "randomly" moved between 2, 3, and 4 when giving us feedback?

So, returning to the initial issue, the question is now: is the Netflix Prize threshold below this "magic barrier"? Do they even know? Well, a member of the leading team, Korbell, and I had an informal conversation with Netflix's VP of Personalization. Of course, they cannot say much about the prize in case they give away information vital to winning it. However, when we asked him whether he had any information about what this "magic barrier" was on the Netflix Prize dataset, he answered that they had done a small study to estimate something similar. Their estimate was "around 0.5". That is surely non-negligible, but it is safely below the winning threshold of 0.83. Remember, though, that this was only a small study that gave them a rough estimate. Our measurements on a similar dataset yielded RMSE values between 0.57 and 0.82. Although these values depend on several variables, such as the time between ratings or even how items are presented to the user, we have reasons to believe the Netflix dataset should be on the higher end of this range (if not higher!). Read more in our UMAP article.

As a final appendix, let me throw in two important conclusions that point to future directions. First, it is clear that the RMSE measure should be reconsidered. If, on average, the user does not know the difference between a 2 and a 3, our success measure should not penalize that difference.
Top-N measures seem much more suitable as a measure of success in Recommender Systems: the user might not care about, or even see, a difference between a 2 and a 3, but she will surely be disappointed if we recommend something she values as a 1.

Another strategy is to select only the users that are more consistent and use those to generate recommendations for the target user. If the target user is noisy herself, we will still get lousy recommendations for her, but we will be minimizing errors for the rest. This is the approach we took in our "Wisdom of the Few" work. Finally, although we cannot aim at getting results below the "magic barrier", there is something we can do: lower the barrier itself. In work we have under submission, we devised a "denoising" algorithm that improves accuracy by up to almost 15% by lowering this noise threshold. But I will leave that for a future post, once the paper is hopefully accepted.
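To make the Top-N idea concrete, here is a minimal precision@N sketch (invented data, not from any paper): instead of asking how close predicted ratings are to noisy true ratings, we ask whether the N items we actually recommend are ones the user turned out to like.

```python
# Hypothetical data: predicted scores for unseen items, and the set of
# items the user actually liked (say, later rated a 4 or a 5).
predicted = {"A": 4.8, "B": 4.5, "C": 3.9, "D": 2.1, "E": 4.1}
liked = {"A", "C", "E"}

def precision_at_n(scores, relevant, n):
    """Fraction of the top-n recommended items that the user liked."""
    top_n = sorted(scores, key=scores.get, reverse=True)[:n]
    return sum(1 for item in top_n if item in relevant) / n

print(precision_at_n(predicted, liked, 3))
```

Note that a prediction of 4.5 versus 4.1 barely matters here; what the measure punishes is ranking a disliked item like "D" into the recommendation list, which matches how users actually experience recommendations.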
Posted over 15 years ago by [email protected] (Xavier Amatriain)
In a previous post I talked about the presentation my former boss (and still friend) JoAnn Kuchera-Morin gave at TED about the Allosphere project, for which I was technical director at UCSB. I am now writing this follow-up post because, since then, many more people have found out about the project.

TED put the video of the presentation online a few weeks ago. Shortly after, the video was slashdotted, and of course, once that happens you are likely to get a lot of attention. Some of the most interesting pieces I found:

- "AlloSphere three story virtual environment not available for birthday parties" @ Engadget
- Singularity Hub: "The AlloSphere: Flying through a giant virtual brain?"
- "HCI and the Allosphere"
- "AlloSphere: Interpret Scientific Data in a 3 Story High Metal Sphere"

I particularly enjoyed this one and this other one. Funny: when I was working on the project, people joked about Professor X's Cerebro, with me as the X :-)

Hopefully, now that the project has caught more attention, it will be easier to get the right money in the door.
Posted over 15 years ago by [email protected] (Xavier Amatriain)
A few weeks ago I read the list of the 2008 ACM Fellows. Each year, the ACM recognizes computer scientists for their contributions. The 2008 list includes 44 new fellows from very different backgrounds. I was happy to find out that I knew some of them, and I definitely agreed with them being on the list. I will add my small grain of sand by mentioning them in this blog:

Alan C. Kay - For fundamental contributions to personal computing and object-oriented programming

The surprise here is how in the world Alan Kay was still not an ACM Fellow! I can think of maybe only a handful of people that I would put before him in such a category. If you are doing anything related to computers, you probably already know a lot about Alan: father of object-oriented programming (including Smalltalk), inventor of the laptop, windows-based interfaces, the OLPC project... Still, this is a good excuse to read and learn a bit more about him. Alan was one of my usual citations in my Software Engineering lessons, and I was fortunate enough to meet him during the presentation of the OLPC project at UCLA, where I learned many things, including his love for Spanish ham and his current involvement with several open source projects.

Perry R. Cook - Princeton University - For contributions to computer music, physics-based sound synthesis and voice analysis/synthesis

I have known Perry for many years. He was even a professor in one of my PhD courses at UPF. But anyone who has done research in anything related to computer music knows Perry Cook. He is best known for his work on physical modeling of instruments and voice synthesis, but he has also co-authored very important software packages such as STK, which was a big influence on our CLAM. More recently, he has also co-authored ChucK with Ge Wang. Above all, Perry is a great guy... someone you will want to hang out and have some beers with after the conference.

Joseph A. Konstan - University of Minnesota - For contributions to human-computer interaction

Funny that I am hosting Prof. Konstan this Friday on his visit to Barcelona. I have only met him briefly at previous ACM RecSys conferences and, to be honest, I was not very aware of his earlier work. Joseph was well known to me as one of the founders (together with John Riedl) of the GroupLens research group. Their work on Recommender Systems has been seminal and extremely important for raising awareness of this field in recent years. But apart from that, it turns out that Prof. Konstan has been President of SIGCHI (one of ACM's most important Special Interest Groups, with 4500 members). He is also known for his work on online communities and computer systems for HIV prevention. Read more on his webpage.

William Buxton - Microsoft Research - For contributions to the field of human-computer interaction

Bill Buxton is an amazing guy I had the pleasure to meet in Santa Barbara. That was before he was appointed Microsoft's Chief Scientist, but even then he was the kind of person everybody listened to as soon as he started talking. You only need to read his bio to understand why. He also started out working on computer music, and at that time began working on multi-touch surfaces and composition tools. He then went off to be Chief Scientist at Alias/Wavefront, now part of Autodesk and known worldwide for their Maya package. It is a bit weird to see him now as Microsoft's Chief Scientist. But hey... that's a heck of a job!
Posted over 15 years ago by [email protected] (Xavier Amatriain)
Imagine being able to process audio streams playing in your browser directly, in real time, in an external application. For instance, you could analyze the music of a YouTube video while it's playing.

Well, this is possible in Linux with a little infrastructure: JACK, PulseAudio, and a couple of modules that connect one with the other. This post in the Ubuntu forums gives a good enough explanation of the requirements (scroll down until you see the section on "pulseaudio through jack").

Anyway, I have recorded a two-part screencast where I explain all this and use this setup to process a YouTube video with CLAM and detect its chords in real time. Part 1. Part 2. (The final demo step was recorded with a camera, given the problems of running JACK, CLAM, and the gtk-recordMyDesktop app at the same time. Still... you'll get the idea after watching part 1.)
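For reference, the plumbing boils down to a handful of commands. This is only a sketch: JACK parameters and port names vary by system (the CLAM port names below are illustrative, list the real ones with jack_lsp), and the Ubuntu forums post has the full details.

```shell
# Start the JACK server first (ALSA backend; device and rate are examples).
jackd -d alsa -d hw:0 -r 44100 &

# Load the PulseAudio modules that bridge to JACK. After this, PulseAudio
# applications (e.g. the browser playing a YouTube video) render into a
# JACK sink, so their audio shows up as JACK ports.
pactl load-module module-jack-sink
pactl load-module module-jack-source

# Route the browser's audio (now on the PulseAudio JACK sink) into the
# analysis application's inputs, e.g. a CLAM network doing chord detection.
jack_connect "PulseAudio JACK Sink:front-left"  "clam-network:in_1"
jack_connect "PulseAudio JACK Sink:front-right" "clam-network:in_2"
```

Once the connections are made, anything PulseAudio plays is processed live by the external application, with no files or re-recording involved.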
Posted over 15 years ago by vokimon
Several nice CLAM-related presentations have been given at conferences during the last month. At the Linux Audio Conference in Parma, we presented an article on Blender-CLAM integration for real-time 3D audio (paper, slides, and video available at the link), and we also gave a workshop on CLAM app and plugin prototyping features. At WWW2009 in Madrid, we presented an article on the new web-service-based extractors for the Annotator and on the data source aggregation interface. Some videos of the presentation and demos are also available, featuring data source aggregation and live chord extraction from YouTube videos. The binaural videos shown at LAC will be published soon.
Posted over 15 years ago by [email protected] (Xavier Amatriain)
One of the most common approaches to Recommender Systems is so-called Collaborative Filtering. The main rationale is the following: in order to predict items that you will like, we find the users most similar to you by looking at your previous likes and dislikes. We then recommend items that those users have liked but that you don't know yet.

There are several caveats with this approach. One of them is that we need an effective way of capturing users' likes and dislikes. Most of the time we do this by asking users to explicitly rate items. This is the typical 1-to-5-star rating that you get in many services, from Netflix to Amazon. But we know, as I commented in an earlier post, that users are noisy when giving that feedback. So, because rating feedback is noisy, we are prone to make errors when predicting what a user will like.

Standard Collaborative Filtering has several other problems, too. Because we need to compute neighbors and predictions, we need to transmit all user ratings to a centralized server, and this can compromise user privacy. The number of users and items is likely to be huge, so applying this approach is computationally expensive and has scalability issues. And so on...

We have proposed a new approach called "Expert-based Collaborative Filtering". In this approach, instead of finding neighbors in a general pool of like-minded users similar to the target, we find neighbors in an expert database. The rationale is that these experts will be much more consistent in their ratings (i.e. less noisy) and their data will be less sparse.

We have conducted experiments using movies and experts from Rotten Tomatoes and concluded that users prefer recommendations drawn from like-minded experts over those predicted from (noisy) like-minded peers.

At the upcoming SIGIR 2009 conference in Boston we will be presenting the paper "The Wisdom of the Few: A Collaborative Filtering Approach Based on Expert Opinions from the Web". Here you can access a copy of the paper, where you will find a complete explanation of this new approach.
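The neighbor-based prediction described above can be sketched in a few lines. This is a toy illustration of standard user-based CF (invented data, not the paper's method or dataset): in the expert-based variant, the neighbor pool would simply hold expert ratings (e.g. from Rotten Tomatoes critics) instead of ordinary users, while the prediction formula stays the same.

```python
import math

# Toy ratings: user -> {item: rating on a 1-5 scale}.
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 2, "C": 5, "D": 4},
    "carol": {"A": 1, "B": 5, "D": 2},
}

def cosine_sim(u, v):
    """Cosine similarity over the items both users have rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(target, item):
    """Similarity-weighted average of the neighbors' ratings for `item`."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == target or item not in their:
            continue
        s = cosine_sim(ratings[target], their)
        num += s * their[item]
        den += abs(s)
    return num / den if den else None

print(predict("alice", "D"))
```

The caveats from the post are visible even at this scale: every prediction needs all ratings in one place, and a noisy neighbor like carol pulls the estimate around, which is exactly what a smaller, more consistent expert pool mitigates.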
Posted over 15 years ago by [email protected] (Xavier Amatriain)
I have been meaning to blog about the WWW conference since it happened a few weeks back, but I kept pushing it back because of deadlines. In any case, I did not want to let it go by without at least writing a few lines.

The WWW09 conference was held in Madrid, April 20-24. I have attended many conferences in my life, but this was my first WWW conference, and it was overall a really positive surprise. WWW is a very large conference: most of the time there are up to 8 parallel tracks in the main conference, leaving aside posters and other events. However, the organization was extremely good (a German colleague joked that it did not seem a Spanish-organized event).

Probably one of the highlights in terms of organization was the visit of the Prince of Spain for the opening ceremony. Although he speaks perfect English, he gave his talk in Spanish because of protocol rules... weird. You can see my (very bad) recording of the opening ceremony here (part 1, part 2, part 3).

The conference was really taken over by the Twitter hype. It was amazing to see people tweeting at all times and about everything. You only need to search for #www2009 on Twitter to see it (it was a trending topic for a large part of the conference). Or you can see this amazing picture of people tweeting and blogging during a Flamenco concert at the conference reception.

One of the interesting surprises of the conference was the Developers Track. One of the things I have complained about in the past is that conference talks seldom add anything beyond reading the paper. In the Devel Track, this was not true in any way: there were very good presentations including demos, in-depth explanations with code examples, etc. This is the track where I presented the work resulting from Jun's GSoC, which I already explained in a previous post. Most of the talk was recorded and is now available in three YouTube videos (part 1, part 2, part 3).

Other interesting presentations from the Telefonica Research team included Pablo Rodriguez's keynote and Josep M. Pujol's presentation of the Porqpine search engine.

I am usually not very fond of very large conferences such as WWW, but I have to say that I really liked it. Having so much to choose from guaranteed that there was always something interesting to attend, and there was a great balance between academics, hackers, and people from industry. Definitely one of the conferences I want to be targeting in years to come.