Annotation & the net conundrum
Annotation tools such as hypothes.is have gathered a lot of interest over the past year, and certainly have a lot of potential in education. It was at Open Ed last year that Jon Becker brought some of the ethical issues to my attention. These tools allow others to overlay annotation and commentary on any site, visible to anyone with that browser extension. It’s not on that site as such, so they don’t need permission to do it. This is great for annotating, say, an article in a newspaper, or a government press release. But as Audrey Watters points out, it is less great if, as an individual, you have been subjected to threats and trolls in the past. Just as you may want to turn off comments, so you may want to turn off annotation. And that’s what Audrey has done by installing a bit of code. As she puts it: “My blog. My rules. No comments.” That seems fair enough – part of the message around helping people develop digital identity is owning your own space, which means having agency and control over it.
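(As an aside: Audrey’s exact script isn’t reproduced here, but blocking a client-side annotation layer generally comes down to detecting and removing the elements it injects into your page. A minimal sketch in JavaScript – the `hypothesis-*` tag names are an assumption based on the markup the hypothes.is client injects, not her actual code:)

```javascript
// Sketch of a site-owner annotation blocker. Assumption: the annotation
// client injects custom elements whose tag names start with "hypothesis"
// (e.g. <hypothesis-sidebar>, <hypothesis-adder>).

// Predicate: does this tag name look like an injected annotation element?
function isAnnotationElement(tagName) {
  return tagName.toLowerCase().startsWith("hypothesis");
}

// Remove any matching nodes from a list of newly added nodes,
// returning the tag names that were stripped.
function stripAnnotationNodes(nodes) {
  const removed = [];
  for (const node of nodes) {
    if (node.tagName && isAnnotationElement(node.tagName)) {
      if (node.remove) node.remove(); // detach from the DOM in a browser
      removed.push(node.tagName);
    }
  }
  return removed;
}

// In a browser, watch the whole document and strip injected
// annotation elements as soon as they appear.
if (typeof MutationObserver !== "undefined") {
  new MutationObserver((mutations) => {
    for (const m of mutations) stripAnnotationNodes(m.addedNodes);
  }).observe(document.documentElement, { childList: true, subtree: true });
}
```

Of course, this is exactly the kind of arms race the rest of this post worries about: a determined annotator can always work around page-level blocking, for example via a proxy.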
However, it comes at a cost. If tools can be blocked, there is no speaking truth to power: you can’t annotate, say, the latest White House environmental policy to set out the scientific evidence for the damage it will cause (of course you can do so elsewhere, but it’s working on the document that others will see where the value lies). This is the essential problem we keep coming back to with the internet: the great thing about the internet is that anyone can say anything. The terrible thing about the internet is that anyone can say anything. It’s why discussion lists, chat rooms and Twitter have all succumbed to dark forces. Annotation is just the latest manifestation of this essential dichotomy between complete openness (but vile people use it to make others’ lives miserable) and control (used by states and corporations to block criticism). There are negatives at either end (and positives too, it should be said).
In his excellent newsletter (sign up if you haven’t already) Mike Caulfield sums it up: “the lessons of the past decade or so of the web have been harsh. The dream of open participation, in the economic and social climate it has been dropped into, has been as much plague as cure. It’s been easily perverted by the hateful, the corporate, and more recently perhaps, the state-sponsored.” Mike goes on to argue that we can address this dichotomy by careful tool design: “the separate question is what should be encouraged by the design of our technology. People want to turn this into a legal debate, but it’s not. It’s a tools debate, and the main product of a builder of social tools is not the tool itself but the culture that it creates. So what sort of society do you want to create?”
I’m not sure the problem can be addressed by tool design; it’s just too intractable. But I don’t know enough about tool design, so maybe it can, and it would be good to see if there are ways we can shape it. In this, hypothes.is seem like a reasonable ally: they’re a non-profit and, from what I know of them, seem genuinely to want to engage with this issue. So if we’re going to make it work, they represent a good case study. Maybe we can’t, though – and simply being able to block is the best solution. Because this issue is central to the future of the internet, and will resurface with the next tool. And given that the internet shapes pretty much most of society now, it is THE FUTURE OF ALL HUMANITY (what’s that? You were with me up until the last bit and then I overdid it?)
Nate Angell (@xolotl)
First, full disclosure: I work at Hypothesis, though what I say here represents only my thinking as an individual.
Thank you for this post, because it raises points I feel are absolutely central to the discussion. I think we are too quick to focus the discussion on tools, and I was a bit confused by Mike’s excellent post, as it seemed he at once called for the discussion to focus on tools but then emphasized that Russian hacking succeeds because it uses both human and technological methods.
Mike is absolutely right, of course, about the pattern of abuse we see in online technologies as they grow in popularity and influence, but so are you to point out that the issues here revolve around a balance along what one might see as a privacy/control to free-speech continuum. In a recent unconference session at I Annotate, participants described the extremes of this continuum as “my website, my rules” on one end and “my browser, my rules” on the other.
We humans live in a complex world where we already negotiate that continuum and have developed many social, legal, cultural, and yes, technical mechanisms to navigate it—sometimes successfully and sometimes not, but with a recognition that neither end of the continuum is the best solution. I think the way forward online is most clear if we think about this whole constellation of mechanisms, and not just tools alone.
As a longtime student of the history of communication technologies, one of the most fundamental lessons I’ve learned is that tools are always developing within and in turn reshaping social and cultural frameworks. Focusing on a pure “tools” debate just blinds us to the powerful social and cultural influences and solutions in which tools are realized and used, as Audrey Watters herself so often reminds us in her valuable work.
I strongly support folks like Audrey doing what they feel necessary to protect themselves and their work from misuse and abuse, but I also strongly support a pervasive annotation infrastructure that empowers everyone to connect across the web without it becoming even more a patchwork of proprietary fiefdoms, especially in those areas where money and power are already working to control open discourse to our disadvantage.
One thing annotation can do is support complex conversations like this across domains, as annotations on Mike’s newsletter demonstrate: https://hyp.is/pXVA2jUfEeegCB_iuWxJ7A/tinyletter.com/michaelcaulfield/letters/traces-4-by-mike-caulfield-the-coming-annotation-wars
Sometimes the old solutions are the best:
Back in the ’90s I used the Crit Mediator to read Japanese-language websites in my English-language web browser. Hypothes.is is fairly easily defeated (as Audrey shows), but I’m not sure how you would defeat this mediator approach. (We used this approach with some of the OER tools we built at COSL back in the ’00s.)
Thanks David, I’ll have a look at that. I’m not sure it’s a case of ‘defeating’ hypothes.is so much as finding a way in which that sort of individual control is built in, so you can turn it off at the individual level but not at a higher one. Erm, that’s not possible, is it? Which is my conundrum, I guess.
Nate Angell (@xolotl)
Would the Crit Mediator really solve the problem Audrey raised? Let’s say something like Crit Mediator became a widespread browser capability. Wouldn’t most users’ primary experience of Audrey’s work then include annotations over which she has no control and which could be abusive? It seems like a “mediated” solution just moves the problem “over” to another space.
As long as we look to technical solutions for an issue that is human and cultural in scope I think we are most likely to just move the problem around rather than solving it. I like what Maha suggests below to take a less universalist approach to what counts as free-speech and control.
I’m guessing David is using it in a purely “technical” sense, but to me, the language of “defeat” does not help us move this conversation forward.
Hi Martin. The main thing I want to agree on (and wrote about on ProfHacker in Open on Whose Terms) is that sort of complex space where it seems to make sense to encourage and support some folks in limiting the freedoms of others in order to protect themselves, and that’s their right (like Audrey), vs what it means when ppl in power do the same (thus curtailing others’ rights and limiting their own accountability, almost). I don’t know exactly why things like freedom of speech, or freedom in general, tend to be universalized and not contextualized by ppl in the West. And yet interpretations that contextualize it in my context are often tyrannical. I’m sure some smart philosopher whose work I misunderstand wrote of this. Foucault probably 😉 Because postmodernism
Hi Maha, yes, it’s exactly that – but on the internet there is no distinction between those two things, so protecting one is giving power to the other. I think. And yes, I need a philosopher. And a techie. And maybe a sociologist.
Nate Angell (@xolotl)
I’m not ready to say that we can draw contextual distinctions around freedom and control offline, but not online. The Internet is already not just a single environment where only one set of rules and practices apply. It disempowers us to think that context isn’t possible—and leads us to misrecognize the ways in which the Internet is already “contextualized” in very specific ways, mostly in support of existing centers of money and power.