[Note – this is part of a distributed article, see previous post for explanation]
The Future of Content
In this section I am going to argue that digital content will move toward being free and widely available because of two complementary arguments: the argument from economics and the argument from quality.
The argument from economics
Where content can be digitised, it is having a profound effect on the economics that underlie the business model of that content, and the way society uses and thinks about it. In this opening section I want to look at two examples of how the digitisation of content has led to significant changes in a number of industries.
Many newspapers ignored the online world for too long, assuming their customers would not want their news that way. Now that they have been forced to shift their content online, they are searching for new business models to cope. The initial hope was that essentially the same model would apply: that people would pay for content. But the pay-per-copy, micropayment and subscription models have all failed. Most recently, the New York Times closed its TimesSelect subscription service and made its content free. Beyond simply not making enough money, the reason seems to be that the subscription model harmed the alternative model of advertising, which depends on a global audience being able to find the content.
Vivian Schiller of the New York Times comments: “What wasn’t anticipated was the explosion in how much of our traffic would be generated by Google, by Yahoo … our projections for growth on that paid subscriber base were low, compared to the growth of online advertising.” In other words, it is better to be free to the whole market than paid for by a small section of it.
Another example of the massive changes wrought in an industry by the digitisation of content is the music industry, which, like newspapers, was slow, and resistant, to change. The wake-up call for the music industry was Napster, when suddenly millions of users were exchanging songs and albums without paying. This was partly about getting something for free, but it was also because Napster facilitated new behaviour: content discovery and the social function of music. Through Napster users could find similar bands, sample different types of music, find other users with similar tastes and, most importantly, do all this from their laptop.
With iTunes, the industry eventually found a model that seemed to suit both parties: users could download individual tracks at relatively low cost, and also engage in content discovery through shared playlists.
Last.fm and Pandora take this social aspect a stage further, data mining users’ actions to build up a network of artists, so that you need only enter an artist’s name to get a personalised radio station playing similar tracks. You can also find users with similar tastes, join groups, see upcoming events, and so on. You can’t own music through these sites and you can’t request a specific album, but when the choice becomes so rich, maybe that matters less. And meanwhile file-sharing software is back, with applications such as LimeWire allowing users to share files again.
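Last.fm’s actual recommendation engine is proprietary, but the general idea – mining co-occurrence in users’ listening histories to find similar artists – can be sketched with a little item-based collaborative filtering. The data, artist names and function names below are all invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical listening histories: user -> set of artists played.
# (Illustrative data only; Last.fm's real data and algorithms are proprietary.)
histories = {
    "alice": {"Radiohead", "Portishead", "Massive Attack"},
    "bob":   {"Radiohead", "Muse", "Portishead"},
    "carol": {"Muse", "Metallica"},
    "dave":  {"Portishead", "Massive Attack", "Tricky"},
}

# Count how often each artist, and each pair of artists, appears in a history.
plays = defaultdict(int)
co_plays = defaultdict(int)
for artists in histories.values():
    for a in artists:
        plays[a] += 1
    for a, b in combinations(sorted(artists), 2):
        co_plays[(a, b)] += 1

def similar_artists(seed, top_n=3):
    """Rank artists by how often they co-occur with the seed artist,
    normalised by Jaccard similarity (co-plays / total plays of either)."""
    scores = {}
    for (a, b), n in co_plays.items():
        if seed in (a, b):
            other = b if a == seed else a
            scores[other] = n / (plays[seed] + plays[other] - n)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(similar_artists("Radiohead"))
```

Running this ranks Portishead as most similar to Radiohead, because two of the four listeners play both; a real system does essentially this over millions of histories, which is why a single artist’s name is enough to seed a whole station.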
The internet has also changed the relationship of the recording artist to the record companies. Increasingly, bands are establishing an online presence, allowing free downloads of their music to build a following, touring, and recording an album, and only then seeking a label. Even this last stage will become redundant once CDs finally disappear. As Chris Anderson puts it in The Long Tail:
“At this point, the artists don’t need the labels any more. The consumers don’t need the labels any more and I think the labels, rather than trying to protect what business they have, need to ask themselves what is their relevance."
Changing our relationship to content
I’ve concentrated on just two of the more obvious examples here, but there are many more. For instance, where audio goes, video will follow once the bandwidth is sufficient, so for traditional television broadcast we are seeing subscription models become increasingly difficult to maintain. This has been partly driven by BitTorrent-style sharing services that allow the download of large audio-visual files. At the moment this still requires some technical skill, but as it becomes easier, DVDs will go the way of CDs.
As well as changing the underlying economics of the industry, it alters the way we relate to the original product. To return to music again, in Everything is Miscellaneous, David Weinberger suggests that digitisation of content has altered our perceptions of what we thought was the basic unit of musical output:
"For decades we’ve been buying albums. We thought it was for artistic reasons, but it was really because the economics of the physical world required it: Bundling songs into long-playing albums lowered the production, marketing, and distribution costs … As soon as music went digital, we learned that the natural unit of music is the track."
Nick Carr disagrees with Weinberger, defending the artistic structure of the album and using Exile on Main Street as an example, but Clay Shirky argues that if the album really were the natural artistic unit, it would have survived digitisation. A look at iTunes suggests it hasn’t – most people just download Tumbling Dice.
Digitisation has made the track the currency, and then users have begun to create their own playlists by mixing tracks together. In addition, attendance at concerts and festivals is on the increase – people will pay for the live event, but less so for the content that supports it. Digitisation has changed our relationship to music, artists and record companies. Forever.
Shirky’s Second Law and the Content Law
In 2003 Clay Shirky wrote an article called Fame vs Fortune: Micropayments and Free Content, in which he argued that micropayments (which he defined as “payments of between a quarter and a fraction of a penny”) would fail. At the time there was considerable interest, and belief, in micropayments as a model for internet enterprise. He said of these failed attempts that “they failed because the trend towards freely offered content is an epochal change, to which micropayments are a pointless response.” Whereas analogue publishing has inherent costs – essentially the cost of the format, storage and transportation – digital publishing doesn’t. The cost involved is then that of the creators, and online the creator can become the publisher. The creator is then faced, Shirky argues, with a dilemma: fame or fortune. In an analogue publishing world you could have both, since in order to achieve fame lots of people needed to have read your book, or bought your album. If people online are resistant to paying, then charging for your content limits your potential fame. As he puts it in a follow-up posting in 2007, creators are “in the position of having to decide between going for audience size (fame) or restricting and charging for access (fortune), and that the desire for fame, no longer tempered by reproduction costs, would generally win out.”
We’ll call this Shirky’s second law (the first is generally given as “Diversity plus freedom of choice creates inequality”, a precursor to the long tail): given the choice between fame and fortune, fame wins out.
This, combined with what we have seen above, led me to propose the content law, which I think embodies what will happen to content in the future:
"Digital content wants to be free, and will seek the path to maximum access."
In their book Blown to Bits, Evans and Wurster argued that the digital marketplace has unbundled the economics of information from those of the physical product. This is most readily seen in retail: in a shop, the information about a product is bound to the physical object itself, whereas online the product information is separated out. David Weinberger explores the implications of this unbundling further: it allows infinite recategorisation, because the information, unlike the physical product, can be in multiple places at once.
Let’s consider a possible future example, that of books. When the web first became popular there were some suggestions that books would disappear since we could download free ones. This did not come to pass for a number of reasons:
1. Transportability – books are easy to carry about, don’t run out of battery, can be used on a train and don’t require a special device.
2. Ease of use – books don’t require special software, can be used by most of the population, have good navigation features and are good at presenting text, compared with the small screens of some handheld devices.
3. Cultural value – we cherish books as social artefacts. Few things cause as much outrage amongst civilised people as seeing books burnt or desecrated. People have a deep affection for the tactile nature of a book.
For these reasons, combined with the advanced content discovery facilitated by Amazon and co., book sales have done very well since the arrival of the net. But let us consider what would happen if digital paper really arrived (despite several proclamations, digital ink and paper have proved stubbornly difficult to realise, but that isn’t the point of my argument). Suppose it felt like real paper, could be bound into books like real paper, and could be written on like real paper – but crucially, the content it displayed could change, you could search it, and it could record all those annotations you made.
Even as something of a bibliophile, I find this beginning to look tempting. I like having books as objects on my shelves, but then I used to like having vinyl albums and CDs too, and now I have only MP3s (and clear shelves). If digital paper were good enough to overcome the three benefits of books above, it would have significant advantages over real paper.
What would the book publishing industry look like then? I would suggest that Shirky’s Second Law and the Content Law would take over. What would be the role of publishers? If you can download a copy of a book into your digital paper book, then a good deal of what the publisher does disappears. They no longer need to provide the printing, binding or distribution. What they can provide is the marketing. But this is where Shirky’s second law becomes relevant – some authors will start to give their content away cheaply, since unless you’re JK Rowling, most of the cost of a book goes to the retailer and publisher to cover the costs associated with the analogue format. As an author you may only get 10% and still have to do most of the work, so why not sell the book for that amount yourself and generate interest online? Then another author decides to give their book away free, because that way people link to it, quote from it, mash it up with Google map overlays, or whatever. Being free and open generates a lot more traffic.
For the industry as a whole the content law is now in operation. Books are now digital content that wants to be free and to have the maximum audience. Publishers need to find a new business model and relevance or they disappear.
As George Siemens puts it "Consumers, like learners will in the future, have a dramatically different relationship with content than they have had in the past. Textbook publishers, journals, and other content-centric industries need to take heed of these lessons and adjust before they become the next statistic.”
The argument from quality
The argument I have given above suggests that economics will be the main driving factor in the liberation of content, and has focused on individuals or small groups creating content. A second factor is that freeing content will improve the quality of much of it, through the distribution of the creative process.
Here we have a powerful analogy in the process of natural selection, which shows us that a vastly distributed process can produce things of great complexity. As Daniel Dennett argues in Darwin’s Dangerous Idea, Darwin’s great contribution was to remove the need for top-down intervention (or ‘skyhooks’, in Dennett’s metaphor) in any explanation of how biological complexity is achieved:
"Cranes can do the lifting work our imaginary skyhooks might do, and they do it in an honest, non-question-begging fashion… Skyhooks are miraculous lifters, unsupported and insupportable. Cranes are no less excellent as lifters, and they have the decided advantage of being real.” (p. 75)
The blind but distributed process of natural selection was sufficient to do all the ‘lifting’ required in biological design space. Natural selection is distributed over many individuals in a species and over a very long time span, which allows small, incremental changes to produce cumulative complexity. The internet allows for similar distribution across individuals, but unlike natural selection, each participant is not dumb, or blind, to the process, so the overall process is sped up considerably and we don’t have to wait millions of years for the results.
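The power of blind, incremental, cumulative selection can be made concrete in a few lines of code. This is my own illustrative version of Dawkins’ well-known ‘weasel’ demonstration (not something from Dennett’s book): each generation, many blind mutations of the current string are produced, and the one closest to a target phrase is kept.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Fitness: number of characters matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Blindly replace each character with a small probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

# Start from random noise; each generation keep the best of 100 blind variants.
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while current != TARGET and generation < 5000:
    generation += 1
    current = max((mutate(current) for _ in range(100)), key=score)

print(f"Reached target in {generation} generations")
```

Pure random search over 27 characters in 28 positions would take longer than the age of the universe; keeping small improvements cumulatively gets there in a few dozen to a few hundred generations. No single mutation is intelligent – the complexity comes entirely from the distributed, selective process.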
What the internet, and web 2.0 in particular, achieves is this massive distribution of the task. Just as, prior to Darwin, there was no way of conceiving of biological complexity without some designer, without top-down input, so many of the web 2.0 critics fail to understand how you can achieve the sophistication required for, say, an encyclopaedia without a heavily controlled, centralised, top-down process. The democratisation of process that web 2.0 has wrought is key to understanding why they are wrong.
Take photography as an example. There are a lot of quite good photographers out there: some with no formal training but a natural talent, some with a little knowledge, and others still who specialise in certain types of photography. Until recently, though, they were largely shut out of the process of becoming professional photographers and of sharing their photographs, because sharing was controlled by the economics of distribution and a top-down authority. With the advent of Flickr all of these people can now share their photographs. The result is an explosion of creativity, and of genuinely high-quality photographs.
What the web 2.0 critics would say is that if you compare a photographer chosen at random from Flickr with a professional photographer chosen at random, the second will win out, because they have been through the filtering process. But this is to fundamentally misunderstand the nature of the distributed process that is now in place. Sure, if you pick any photographer at random on Flickr you will probably find very average family snaps, but the result of the process as a whole is the production of islands of complexity. What is more, because the traditional filtering process in the top-down model tends to make professional opinion converge, what you get from the bottom-up process is a far greater range of inventiveness and style. In evolutionary terms, the first process is like in-breeding, while the second is akin to broadening the gene pool.
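The point about islands of complexity is really one about sampling: a random draw from a huge, varied pool usually loses to a random draw from a small, filtered one, yet the *best* of the huge pool wins. A toy simulation makes this concrete – the distributions and quality numbers below are entirely made up for illustration:

```python
import random
import statistics

random.seed(0)

# Illustrative model only: "quality" as a random number.
# Professionals are filtered: higher average, but a narrow range
# (professional opinion converges).
professionals = [random.gauss(mu=80, sigma=5) for _ in range(1_000)]

# Amateurs on a sharing site: lower average, vast numbers, wide variation.
amateurs = [random.gauss(mu=50, sigma=20) for _ in range(1_000_000)]

# A random amateur usually loses to a random professional...
print(statistics.mean(amateurs) < statistics.mean(professionals))  # True

# ...but the best work in the vast, varied pool exceeds the best of the
# narrow, filtered pool: the 'islands of complexity'.
print(max(amateurs) > max(professionals))
```

Comparing random individuals measures the filter; comparing the extremes measures the process, and the wide, massively sampled distribution wins at the extremes.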
This is what critics such as Andrew Keen fail to appreciate in their criticism of web 2.0 and user generated content. It is not the comparison of any one individual with another, or any one artefact, that is significant, but comparison between the processes. And when it comes to producing complexity, mass distribution wins every time.
So the second reason why content will become free is that only by removing it from behind the confines of payment and strict rights control can certain types of content be improved. We have seen this in software, of course, with open source, and with general knowledge in Wikipedia, but it also occurs on a smaller scale with blog posts, podcasts and the like, where the overall product is improved by making it open and then incorporating user comments and feedback.
[Over to Ray Corrigan for the second part of this]