Posted almost 16 years ago by [email protected] (Florian Ragwitz)
For quite some time perl has provided a form of my declarations that
includes a type name, like this:
my Str $x = 'foo';
However, that didn't do anything useful until Vincent Pit came along
and wrote the excellent Lexical::Types module, which allows you to
extend the semantics of typed lexicals and actually make them do
something useful. For that, it simply invokes a callback for every my
declaration with a type in the scopes in which it is loaded. Within
that callback you get the variable that is being declared as well as
the name of the type used in the declaration.
We also have Moose type constraints and the great
MooseX::Types module,
which allows us to define our own type libraries and import the type
constraints into other modules.
Let's glue those modules together. Consider this code:
use MooseX::Types::Moose qw/Int/;
use Lexical::Types;
my Int $x = 42;
The first problem is that the perl compiler expects a package with the
name of the type used in my to exist. If there's no such package,
compilation will fail.
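For example, with no Str package (and no suitable function, more on
that below) in scope, the declaration dies at compile time with an
error along the lines of:

my Str $x = 'foo';
# => No such class Str at ... line 1, near "my Str"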
Creating top-level namespaces for all the types we want to use would
obviously suck. Luckily the compiler will also try to look for a
function with the name of the type in the current scope. If that
exists and is inlineable, it will call that function and use the
return value as a package name.
In the above code snippet an Int function already exists. We
imported that from MooseX::Types::Moose. Unfortunately it isn't
inlineable. Even if it were, compilation would still fail, because it
would return a Moose::Meta::TypeConstraint instead of a valid
package name.
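For contrast, here's a minimal hand-rolled sketch of a function the
compiler would be happy with. All names are made up for illustration
and no Moose is involved: a function with an empty prototype and a
constant body is inlineable, so the compiler can call it at compile
time and use its return value as the package name.

package MyInt::Impl;
sub new { bless {}, shift }    # any definition is enough to create the package

package main;
sub MyInt () { 'MyInt::Impl' } # empty prototype + constant body => inlineable

my MyInt $x = 42;              # compiles: the bareword MyInt resolves to MyInt::Impl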
To fix that, let's rewrite the code to this:
use MooseX::Types::Moose qw/Int/;
use MooseX::Lexical::Types qw/Int/;
my Int $x = 42;
Let's also write a MooseX::Lexical::Types module that replaces
existing imported type exports with something that can be inlined and
returns an existing package name based on the type constraint's name.
package MooseX::Lexical::Types;

use Class::MOP;
use MooseX::Types::Util qw/has_available_type_export/;
use Lexical::Types ();  # loaded here, but only enabled per-caller in import
use namespace::autoclean;

sub import {
    my ($class, @args) = @_;
    my $caller = caller();
    my $meta = Class::MOP::class_of($caller)
            || Class::MOP::Class->initialize($caller);

    for my $type_name (@args) {
        # get the type constraint by introspecting the caller
        my $type_constraint = has_available_type_export($caller, $type_name);
        my $package = 'MooseX::Lexical::Types::TYPE::' . $type_constraint->name;
        Class::MOP::Class->create($package);
        $meta->add_package_symbol('&' . $type_name => sub () { $package });
    }

    Lexical::Types->import; # enable Lexical::Types for the caller
}

1;
With that, the example code now compiles. Unfortunately, it breaks every
other use case of MooseX::Types: the export still needs to return a
Moose::Meta::TypeConstraint at run time so that code like this continues
to work:
has some_attribute => (is => 'ro', isa => Int);
So instead of returning a plain package name from our exported
function we will return an object that delegates all method calls to
the actual type constraint, but evaluates to our special package name
when used as a string:
my $decorator = MooseX::Lexical::Types::TypeDecorator->new($type_constraint);
$meta->add_package_symbol('&'.$type_name => sub () { $decorator });
and:
package MooseX::Lexical::Types::TypeDecorator;

use Moose;
use namespace::autoclean;

# MooseX::Types happens to already have a class that doesn't do much
# more than delegating to a real type constraint!
extends 'MooseX::Types::TypeDecorator';

use overload '""' => sub {
    'MooseX::Lexical::Types::TYPE::' . $_[0]->__type_constraint->name
};

1;
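To see both behaviors side by side, here's a hypothetical snippet, with
$int_constraint standing in for the real Int type constraint object:

my $decorator = MooseX::Lexical::Types::TypeDecorator->new($int_constraint);

# in string context (what the compiler sees when resolving the type name):
print "$decorator";          # MooseX::Lexical::Types::TYPE::Int

# as an object, it still delegates method calls to the real constraint:
print $decorator->check(42); # 1, courtesy of Moose::Meta::TypeConstraint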
Now we're able to use Int as usual and have Lexical::Types invoke
its callback on MooseX::Lexical::Types::TYPE::Int. Within that
callback we will need the real type constraint again, but as the
callback is invoked as a class method with no good way to pass in
additional arguments, we will need to store the type constraint
somewhere. I chose to simply add a method to the type class we create
when constructing our export. After that, all we need to do is
implement our Lexical::Types callback. We will put that in a class all
our type classes will inherit from:
Class::MOP::Class->create(
    $package => (
        superclasses => ['MooseX::Lexical::Types::TypedScalar'],
        methods      => {
            get_type_constraint => sub { $type_constraint },
        },
    ),
);
The Lexical::Types callback will now need to tie things together by
modifying the declared variable so it will automatically validate
values against the type constraint when being assigned to. There are
several ways of doing this. Using tie on the declared variable
would probably be the easiest thing to do. However, I decided to use
Variable::Magic
(also written by Vincent Pit - did I mention he's awesome?), because
it's mostly invisible at the perl level and also performs rather well
(not that it'd matter, given that validation itself is relatively
slow):
package MooseX::Lexical::Types::TypedScalar;

use Carp qw/confess/;
use Variable::Magic qw/wizard cast/;
use namespace::autoclean;

my $wiz = wizard
    # store the type constraint in the data attached to the magic
    data => sub { $_[1]->get_type_constraint },
    # when assigning to the variable, fail if we can't validate the
    # new value ($_[0]) against the type constraint ($_[1])
    set  => sub {
        if (defined(my $msg = $_[1]->validate(${ $_[0] }))) {
            confess $msg;
        }
        ();
    };

sub TYPEDSCALAR {
    # cast $wiz on the variable in $_[1]. pass the type package name
    # in $_[0] to the wizard's data construction callback.
    cast $_[1], $wiz, $_[0];
    ();
}

1;
With this, our example code now works. If someone wants to assign,
say, 'foo' to the variable declared as my Int $x, our magic
callback will be invoked, try to validate the value against the type
constraint, and fail loudly. WIN!
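Putting it all together, this is the behavior we expect (a sketch; the
exact error message is approximate):

use MooseX::Types::Moose qw/Int/;
use MooseX::Lexical::Types qw/Int/;

my Int $x = 42; # fine, 42 validates against Int
$x = 'foo';     # the set magic fires and confesses with something like:
                # Validation failed for 'Int' with value foo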
The code for all this is available on github and should also
be on CPAN shortly.
You might notice warnings about mismatching prototypes. Those are
caused by Class::MOP and have already been fixed in its git version,
so they'll go away with the next release.
There are still a couple of caveats, but please see the documentation
for those.
Posted almost 16 years ago by [email protected] (Florian Ragwitz)
For a long time the Catalyst Framework has been using code attributes
to allow users to declare actions that certain URLs get dispatched to.
That looks something like this:
sub base    : Chained('/')    PathPart('') CaptureArgs(0) { ... }
sub index   : Chained('base') PathPart('') Args(0)        { ... }
sub default : Chained('base') PathPart('') Args           { ... }
It's a nice and clean syntax that keeps all important information
right next to the method it belongs to.
However, attributes in perl have a couple of limitations. For one, the
interface the perl core provides for using them is horrible and doesn't
expose nearly enough information to do a lot of things. Most
importantly, though, attributes are just plain strings. That means you
will need to parse something like "Chained('base')" into
(Chained => 'base') yourself to make proper use of them.
While that's easy for the above example, it can be very hard in the
general case because only perl can parse Perl. It's one of the reasons
you can't use Catalyst::Controller::ActionRole to apply
parameterized roles to your action instances, because parsing
parameters out of things like
Does(SomeRole => { names => [qw/affe tiger/], answer_re => qr/42/ })
would be awful and wrong.
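To make the problem concrete, here's a naive sketch (not actual
Catalyst code) of the kind of string munging involved; a simple regex
handles the easy cases but stands no chance against arbitrary Perl
expressions:

my $attr = "Chained('base')";
if ($attr =~ /^(\w+)\(\s*'([^']*)'\s*\)$/) {
    my ($name, $value) = ($1, $2); # ('Chained', 'base')
}

# but this falls over as soon as the argument is real Perl code:
# Does(SomeRole => { names => [qw/affe tiger/], answer_re => qr/42/ })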
With Catalyst 5.8 most of the attribute-related code has been removed
from the internals. It's now using MooseX::MethodAttributes to do
all the heavy lifting. Also, the internals of how actions are
registered have been refactored to make it easier to implement
alternate ways without changing the Catalyst core.
As a proof of concept for this I implemented a new way of declaring
actions that's very similar to how Moose provides its sugar
functions. You can get it from
github.
With that, the above example looks like this:
action base => (Chained => '/', PathPart => '', CaptureArgs => 0) => sub { ... };
action index => (Chained => 'base', PathPart => '', Args => 0 ) => sub { ... };
action default => (Chained => 'base', PathPart => '', Args => undef) => sub { ... };
It also moves method declaration from compile time to run time, making
this possible:
for my $action (qw/foo bar baz/) {
    action $action => (Chained => 'somewhere', Args => 0) => sub {
        my ($self, $ctx) = @_;
        $ctx->stash->{ $action } = $ctx->model('Foo')->get_stuff($action);
    };
}
Admittedly, that's all very ugly, but it illustrates well what kind of
things we're able to do now. It doesn't need to be ugly, though. With
Devel::Declare we have a great tool to add our own awesome syntax
to perl, similar to how things like MooseX::Method::Signatures,
MooseX::MultiMethods and MooseX::Declare do.
So what would a declarative syntax for Catalyst controllers look like?
I don't know. Ideas include something like this:
under /some/where, action foo ('foo', $id) { ... }
to mean:
sub foo : Chained('/some/where') PathPart('foo') CaptureArgs(1) { ... }
Adding Moose type constraints to this would be interesting, too, and
make validation of captures and arguments a lot easier. Multi dispatch
similar to MooseX::MultiMethods could be handy as well:
under /some/where {
    action ('foo', Int $id) {
        # find and stash an item by id
    }
    action ('foo', Str $name) {
        # search items using $name
    }
    action ('foo', Any $thing) {
        # display error page
    }
}
So you see there are a lot of possibilities that should be
explored. Unfortunately I have no idea what kind of syntax and
features people would like to have, so your feedback on this would be
much appreciated. :-)
Posted almost 16 years ago by [email protected] (Anders Waldenborg)
Slightly delayed, but... after the XMMS2 Team's tussles in Brussels, here we go again! The XMMS2 Team is proud to present a new release, as late as always. This time there have been huge changes "under the hook" with the new "xmmsv". You can obtain XMMS2 here:
Release notes: http://wiki.xmms2.xmms.se/index.php/Release:DrMattDestruction
Source: http://sourceforge.net/projects/xmms2
The XMMS2 Team would like to extend a big THANK YOU to all who have helped out with this release, and an extra thanks especially to those 10 people that made the AUTHORS file grow.
Posted almost 16 years ago
On April 17, I was invited to do a presentation on Git for Purple Scout in Malmö, Sweden. Around 40 people showed up (including many XMMS2 folks) and endured 2 hours on what Git is, why it’s so awesome and all the fancy stuff you can do with it. I think people liked it and although most seemed to be using Git already, they were nice enough to say that they’d learned something anyway.
I’d given talks about Git previously in Switzerland, but for this occasion I reworked and pimped up my slides quite a bit to cover more material and have more cute diagrams. As before, you can get the slides for the Git presentation (PDF), or even fiddle with the source file, under the terms of the Creative Commons Attribution-Share Alike 2.5 License. Reuse, modify or poke fun at them at will!
Sorry it’s still in the proprietary Keynote format, because that’s the only vaguely acceptable software I found to make lots of diagrams easily… Any Free alternative would be welcome, if someone knows of one.
It was great fun preparing and giving this course, and being back in Sweden and seeing friends again, so tusen tack till Purple Scout for making this happen!
Posted almost 16 years ago by [email protected] (Tobias Rundström)
As I have talked about earlier, I am holding an education for company management about Open Source Communities. Since I can't release the slides directly, I am going to blog a bit about what these slides contain.
One of the things I am trying to get across is that getting involved in an Open Source Community as a company is hard work and often counter-intuitive to old business practices. To illustrate this I have created some case studies about companies that have tried to involve themselves in the community and the different outcomes of that. In my examples I use Nokia, Apple, Google and Sun (and some others); all these companies interact with the Open Source community in different ways, and all of them have both succeeded and failed with their interactions. (I won't comment on the individual cases in this blog post, but I am still interested in your feedback: what do you think about the companies listed above, and do you have other examples of companies failing or succeeding with Open Source?)
While I was researching these companies (mostly via google searches like "opensource at X"), I ran across Sun's Open Source webpage, which states that 'Open Source is about Participation'. I think that is one of the most accurate one-liners about Open Source I have ever heard. In order to be accepted by and successful with an Open Source Community, you must show that you can participate, create code and work together with the community within its rules.
I think very few Open Source Communities would accept companies that try to 'buy' their way into gaining influence over a certain project. But companies that can send relevant, well written patches that implement a feature or fix a bug in a project they are using can succeed. Many nerds just care about code, and that is the way it should be.
Interacting with an Open Source Community is not like interacting with a business partner: few communities will implement features they don't like just because your company needs them. Many community volunteers have a lot of pride invested in their projects and will place code style and technical aspects before the needs of their end-users. This is very different from how a company works, because companies need to see to user needs before the technical aspects of the actual code (this might actually explain why most proprietary code is such a mess - "We need this now, or else!"). This means that companies have to care about things like this when they are contributing to Open Source projects, otherwise they might never get their patches merged.
So to sum up: if you want to gain the trust of an Open Source Community, participate and show them the code!
Posted almost 16 years ago by [email protected] (Tobias Rundström)
Purple Scout was contracted to do an Open Source Education a while back. The customer wanted an education that covered the history, business and legal aspects, but they also wanted a section with some "real life" stories from someone who has worked in an Open Source community already. While both my bosses handled the business and legal aspects, I tackled the community section.
After almost a month of preparation, we held the pilot in front of a smaller group today. I was a bit nervous at the start but managed to hold a very engaging talk about the inner workings of a community and a generalization of what drives open source hackers. It was a fun and interactive group that I managed to provoke a couple of times :-)
I would love to share the slides I made, but unfortunately they contain some information that I can't spread, so instead I will try to blog a bit about the conclusions that I managed to draw from all of this.
Also a question: what do you think is the driving factor for participating in open source communities as a company? That is, not for you personally, but what would drive your company to work with open source?
Posted almost 16 years ago
The company I’m working at, Playlouder MSP, is looking for a new Javascript developer to join the team. We’re a young and dynamic London-based team working on an online social platform for listening to and sharing music, including unlimited legal
access to music for a fixed monthly fee.
Excerpt of the job ad:
Media Service Provider has been working with ISPs and the music industry to offer ISP customers groundbreaking levels of access to digital music—and a groundbreaking user interface to match.
Tasked with delivering an innovative, browser-based music application to a large audience, it’s critical that we are able to deliver a reliable, responsive and fun user experience across a range of modern browsers (no IE6!). As a key addition to our development team, you’ll be central to this effort.
The role would suit a Javascript guru who enjoys the challenge of developing a real thick-client application in the browser. It may equally suit a GUI application developer with solid experience on other platforms (Cocoa, C#, Java, Flash, GTK, …) who’s confident in their ability to pick up the necessary Javascript skills.
So if you’re looking for a job in the UK and interested in music and UI development, have a look at the full job ad for all the details!
Posted almost 16 years ago
I have lately been embroiled in a debate about the importance of commenting in code. While I don’t yet believe that comments are completely unnecessary, I tend to think that they are largely unnecessary. Almost two years ago, I wrote ‘Software
development without maps’ in which I extolled the virtues of ‘documentation’. While it can be easy to confuse ‘documentation’ with ‘commenting’ and jump to the conclusion that I’m now contradicting myself, I’d like to draw the distinction between the two. Commenting is only a specific type of documentation - a kind of documentation that is so narrow in scope that it only applies to code. [Sidenote: In general, commenting will end up covering the ‘mechanisms’ expressed by code, and very little of the ‘policies’ and justifications - which would be best covered by higher level documentation anyway.]
In essence, I see programming as an activity where one devises and manipulates abstract models. Writing code is merely a process of expression of such models. What I was trying to get at, in my previous blog post, was that the same models can be viewed at different levels of abstraction. The ‘documents’ encompass knowledge about the models at a higher abstraction level than the code. Documents, being UML diagrams, specifications, or other suitable representations. Comments, on the other hand, are at the same level as code. While it may be useful to sketch out a program using comments before the real code is written, I believe that such ‘scaffolding’ should eventually fade away. Like faint construction lines in a technical drawing, such comments are there to provide an initial framework for laying out the real lines and curves. Like faint and rough sketches on canvas, or a block of stone or wood, the comments become unnecessary as the system is painted, or moulded into shape.
As a software system takes shape and matures, the most reliable indicator of what it is doing is the code itself, not any superfluous comments. Indeed, there are various dangers associated with using and maintaining comments beyond the initial stages of coding:
comment maintenance overhead
As code is modified for various reasons - bug fixes, feature additions, refactoring for reuse - extra time is required to verify not only the proper operation of the code (which can be automated), but also to review existing comments and rewriting the ‘comment narrative’ in a way that fits the reshaped code. I consider this to be a special case of duplication of logic - in the presence of comments that attempt to explain the program logic, one needs to maintain not only the actual program logic expressed in the code, but also those comments that ‘pretend’ to be saying what’s happening. Such a task becomes especially daunting in a team of non-trivial size, as various people end up writing and rewriting chunks of this narrative - which all ends up devolving into an incoherent and unreliable mess. There are ways of managing, verifying and testing program code - and enforcing program coding style conventions - written in a diverse environment, but there is no reliable way of getting people to write a natural language in a consistent, non-monotonous way. The best we can do is introduce the editor model - should we then have a ‘comment editor’ on every programming team, to ensure that all the comments flow clearly and adopt the right style and tone?
matching comments with code
On the other hand, if programmers do not take the required amount of time to fully review and rewrite comments as they implement changes, we end up in a situation where the comments do not accurately reflect the sequence of events expressed in the code. As this compounds, over time, programmers who are introduced to the system later on bear the considerable risk of being led astray while attempting to trace defects. The end result is code of poor quality, at the significant expense of time - time wasted following misleading comments.
Looking at the issue from another perspective, what about the usefulness of comments? The machine does not care about comments in the code. Comments do not affect the compiler. Comments do not affect whatever is going to be interpreting the code, or some processed version of the code. That is, indeed, the point of comments. Comments won’t make programs run faster, or in a more stable manner. Comments won’t eliminate bugs. If guns don’t kill people and people kill people, then comments don’t eliminate bugs - people eliminate bugs.
As far as I can tell, the importance of commenting code is 1) seemingly over-emphasised by Computer Science departments, and 2) an unfounded myth perpetuated in programming shops. OK, after a quick trip through IEEEXplore, I found several papers seemingly extolling the virtues of code commenting. However, they all seem to cite the same paper when doing so. A paper written in 1988. A paper about a study based on PL/I and Pascal. Enough said. Clearly, there has been some research done on the topic, but a lot of it seems outdated, especially in the face of more modern concepts such as Object Oriented languages, etc. I shall delve more into this and possibly post a follow-up to this.
In any case, even if I were to assume that comments are useful and will bring about world peace, I cannot find any qualitative documentation on the subject. It’s all well and good to say things like “Your code should be 20% commented” or “You need to have comments describing each function” or “Comment about why, not necessarily what, the code is doing”, but no one ever seems to have good examples of such. Such vagueness simply serves to exacerbate the problem - anyone can write practically anything they like in comment blocks and get away with it. “Whaddya mean, I wrote comments! See!” A more practical way of posing this question would be: Given that I have a programming task, how do I write good quality comments that will remain useful to future programmers, given what I know about the problem domain, and how I anticipate it to change? Is there an example of this story and how it pans out?
Taking a step back, let’s look at the real problem that commenting pretends to solve.
As I mentioned earlier, programming involves the manipulation of abstract models and expressing them in code. It therefore follows that one gains more by learning how to more effectively express programming code than by decorating code with comments. Programming code is the ultimate middle ground - it is something understandable both by the programmer and by the machine (at some level). The programmer who can write prose in comments does not hold a candle to the programmer who can get the machine to execute exactly what he wants to get done. There are two main reasons for the existence of comments in code: 1) making up for the lack of higher level documentation, and 2) attempting to mask complexity by ‘explaining it in english’.
The first case is symptomatic of a poor development process, where intentions articulated at the business level aren’t properly captured and architected into solutions at various levels of abstraction. In the absence of UML and other such higher level program models, developers are supposed to resort to code comments to explain ‘why’ certain things are being done, even though higher level descriptions would be more accessible to various stakeholders. (This is a variant of the “explain why you’re doing this” style of commenting.) Comments used in such a manner simply mask a lack of traceability between what a particular client wants, what possible solutions are presented and approved, and what ends up getting implemented. The valuable historical record of the back-and-forth discussions detailing what happened and when particular decisions were made is simply lost. Coupled with an environment where developers are added to and removed from a project in a piecemeal fashion, this is simply a recipe for a fragmented disaster, as few of those involved have a complete memory of the sequence of events.
On the other hand, masking complexity by ‘explaining in english’ is plainly disproven by decades of development on programming languages. Concepts such as functions, classes, methods, packages and libraries were developed for the specific purpose of managing complexity by breaking down code into smaller, safer, more tractable and more manageable chunks. Those allow for the implementation of layerable and composable solution patterns, as well as enabling reuse - all techniques well known to help improve the long term reliability and maintainability of software systems. If it weren’t for those, we’d still be writing long epics in FORTRAN.. if we could even get that far.. The fact that classes and methods can be given meaningful names dispels a lot of the reason for using inline comments. Got a function doing lots of things? Break it up into smaller functions that have descriptive names. The code is then easier to follow at a higher abstraction level. Want to zoom in on a specific step? Just go into that function. Easy. Up and down the abstraction ladder.
Despite the various reasons for writing comments, it all boils down to managing complexity: the complexity of stakeholders’ requirements, the complexity of the solution at hand. What I’ve been trying to say is that all of this relates to the management of the development process. By building maps and models of stakeholders’ requirements, a better understanding of the problem domain can be achieved. The process of building such maps and models also helps in the discovery and resolution of conflicting requirements and essential priorities. From there, developers can devise subsystems that - when coupled together - should aim to meet the set of requirements. Such subsystems can then be defined in terms of their interfaces and interactions with each other - all at a higher abstraction level. From there, the team can then zoom in further into each subsystem and attempt to refine the implementation further. This same process can be repeated down to applications, services, packages, classes and methods. The whole ‘stack’ describes the abstraction ladder in a consistent manner, and moving up and down this stack constitutes the ‘zooming’ action. Explaining the process in such a way presents a simple concept that can be applied by various people involved in the process. Stakeholders talk to architects and account managers to produce documents and customer-facing models, while architects present the same documents and models to developers for further refinement and evaluation. The end product of this process is a whole collection of inter-related artifacts that document the history of the project and its various aspects from different perspectives - all of which is much more useful and accessible than comments buried deep in the code.
Another way of looking at the commenting problem is one of situational awareness. Piles of comments (or code, for that matter) are essentially worthless to a programmer until he reads them. (And when he does read them, the code will provide a more accurate picture of what’s happening, rather than the comments.) A programmer (A) who writes some code has implicit knowledge of what the code does and why it’s doing what it does. A programmer (B) who simply reads comments written by someone else has explicit knowledge of what the code is doing, but not necessarily any implicit knowledge. A programmer (C) who reads some code written by someone else internalises a more accurate mental model of what the code is doing. Simply put, programmer A devises a mental model and expresses it in code, while programmer C is doing the reverse process by reading the code and building a mental model. While programmer B has some chance of success at building a model, he might end up doing so ‘faster’ than programmer C, but at the expense of an inaccurate - possibly even out of date - model. This whole construction and deconstruction of mental models is exactly why the ‘abstraction ladder’ development process is powerful - it provides models of varying detail at various levels to enable almost anyone to more easily conceptualise any part of the solution and work with it.
Posted almost 16 years ago by [email protected] (Tobias Rundström)
I got myself a Sony Reader PRS-700 the other day. Imported from the USA, of course, since we can't get fancy things like that here in Europe. Actually, I am evaluating this unit for some friends and co-workers so they know which Reader to buy later.
So far I must say that I am impressed. I have been using it on my weekly trips back and forth to Malmö and have more or less left all my books at home. The built-in light is pretty slick; I can actually read books in bed without disturbing my lovely fiancée.
A word about the display: a lot of people hate it because it "glares"; my guess is that these are the same people that "can't" use a glossy macbook either. I, on the other hand, have never had any problems with it. So I will continue to read my books digitally going forward - no more big books that weigh a ton in my backpack.
Posted almost 16 years ago
Now that we have defined what/whose problem we’re trying to solve, and debated the implementation details, it would be worth asking why a graphical XMMS2 client would be a good fit.
After all, we have a brand new korving CLI (nycli), isn’t that
enough? In a sense it is, but it fills a different niche. GUI applications are good at things that CLI applications aren’t, and vice-versa. So the goal is to exploit the specific advantages of graphical music players.
For instance, even the most hardcore fans of the command-line will admit that the following tasks are easier with a graphical player:
Edit a playlist, using mouse selection and drag-and-drop.
Browse albums by cover.
Organize music manually into playlists or using dynamic collections.
But these are just simple examples that are now expected of any standard graphical music player.
Can’t we get something more exciting?
As Obama taught us, “yes, we can!”
Three main aspects that are usually poorly supported and under-exploited in music players are powerful tools to:
Browse
Organize
Explore/discover
Browsing
Graphical applications provide a rich visual experience that can be exploited to navigate large amounts of information, in particular using the spatial aspect and our ability to recognize images quickly.
Typically, users have become familiar with widgets like iTunes Cover Flow, which exploit the visual clue of album covers to quickly flick through releases. Several XMMS2 developers have expressed their interest in a more “album-centric” client, which essentially means supporting album entities in the interface, as opposed to just tracks.
An extra step would be to also promote the artists, and possibly other properties (e.g. genre, year, label), as premium entities. So rather than “album-centric”, the client would be “entity-centric” (in the sense that most existing music players are track-centric). Each entity could have its own fullscreen pane (or “page”), with corresponding information (more below) and links to browse the related entities. This would lead to a web-like navigation, where for instance each artist would get a page with the list of his releases (plus a photo, bio, etc), and clicking on a release would bring the page for that release, with the list of tracks.
However, browsing doesn’t have to follow a rigid path: we’re used to browsing inside categories (e.g. releases) alphabetically, and to jump between categories using the explicit hierarchy from the data (e.g. from an artist to its releases to their tracks), but that’s just one of many possibilities.
The user may want to browse a subset of her media library, for instance filtering by a range of release years and genre (e.g. “70’s rock”), or a custom collection she assembled herself. She may want to follow connections from an artist page to pages of “related artists”, to use tags to jump from a track to a list of albums, to find all the music she added the same week as a given release.
It’s time to think beyond the simple local data and harvest The Cloud to enrich the user experience; services like Last.Fm already provide an API to retrieve Similar Artists, social Tags, etc. And with our Collections API, it’s just a whole lot of power waiting to be unleashed!
Organizing
While browsing is the passive process of visiting what’s there, organizing is the active process of applying your own order on the content.
Playlists are the most common organizational tool, usually directly tied to playback, and the usual editing features should naturally be supported (insert, enqueue, move, remove). Special playlist behavior (queue, party shuffle, random, etc) should also be configurable easily. (Note: fancy playlist formatting is outside the scope of this post.)
The second main organizational tool is collections. A collection is akin to a “themed-bucket”, i.e. a set of music that the user has put together in order to reuse it later. But rather than focusing on the underlying nature of a collection (a graph of operators), the interface should emphasize the organic process of someone creating a custom group of music. Any search, or essentially any “view” of music, should be recordable as a collection; and it should always be possible to refine or further filter a collection, as well as add custom content to it. It should be as easy as typing a search or dropping content in a folder, rather than as complex as setting up a network of mail filters.
Finally, tags should be at the user’s disposal for applying a minimal description to content. There is a subtle difference between tags and collections (and how they play together), and I don’t have much in mind regarding tags so far, but I think they will definitely be a powerful addition.
Exploring/Discovering
In the browsing section above, different new ways to navigate one’s music have been evoked. The next logical step after that, however, is to help the user discover new music he doesn’t yet know about, by giving pointers to music and information outside of his media library.
Many online services offer to make you discover new music, but this feature remains unusual in desktop music players, except Spotify (with a tab of related artists) and recent versions of Banshee (with custom recommendations, perhaps based off Last.Fm?).
Those illustrate two interesting directions.
First, providing pointers from a certain point in the user’s media library to complementary information and new music. For instance, on an artist pane, show the list of all its releases, including those missing from the user’s music files, show related artists, both present in the user’s media library or not, show the artist’s news feed, etc. Or show reviews for albums, or lyrics for individual tracks. Basically, gather information from external sources about the user’s music, and invite him to discover new music as well.
The second direction is more general: given the user’s music profile and playback history, suggest new artists or genres he might be interested in exploring. For instance, using all the artists present in the media library, infer what will be the user’s next favorite band. Or suggest popular releases, based on the kind of music recently played. Here, suggestions are made spontaneously, based on the behavior of the user.
The goal of these two features is to get the user excited about not just his current music files, but the whole portion of the music world they span. They should invite the user to be curious about his own music as well as new music.
One place to put these suggestions would be the music player’s “home screen”. Without entering into too much detail yet, the player could provide the user with ideas on what he might want to listen to, based on what he played recently, his collections, tags, etc. Rather than a long table of “all the tracks”, the entry screen could be a richer, custom view of different ways he could start playing his music.
Obviously, this post hinted at a lot of potentially complex features, which would take a lot of time and effort to all put together perfectly. The main point, however, was to point at various ways of making the experience of this music player a little special. In particular, most existing players are still nothing more than a fancy dressing over music tracks (i.e. files).
But to make a really rich and complete experience, I believe that one must embrace music as a culture; promoting entities (e.g. artists, genre, releases, etc) to key navigation points and tying it to all the information available on the web would be a good start in that direction.
(Image: Darth Cee-Lo, by Ethan Hein)