
Two (used) comments on Gillespie's new chapter "The Relevance of Algorithms"

I'm in Paris this semester, as a visiting doctoral student at the Center for the Sociology of Innovation (CSI) at Ecole des Mines and at the médialab at Sciences Po. 

Apart from finding myself in the middle of two very lively research communities, I've also been so lucky that a series of cross-institutional seminars on Digital Methods are taking place in Paris this spring.

The last seminar was on "Transformative interaction: web effects on social dynamics", for which I volunteered to prepare a brief comment on one of the selected readings, namely Tarleton Gillespie's chapter "The Relevance of Algorithms", forthcoming in an edited volume on "Media Technologies" to be published by MIT Press. (The full chapter has been uploaded by Gillespie here).

Since I prepared the comments in writing, and since they did in fact spark some discussion, I've decided that it might be appropriate to recycle them as a blog post. Here goes:

The chapter by Gillespie is what he calls “a conceptual map” to “interrogate algorithms as a key feature of our information ecosystems”, the “cultural forms” that follow, and the “political ramifications” of these new knowledge practices (p. 2).

The main theme is what Gillespie calls ’public relevance algorithms’, which signals a specific attention to the role algorithms play for participation in publics and politics. This is perhaps where Gillespie’s general contribution lies.

I’m not entirely convinced about the metaphor of a ‘conceptual map’ – it is not clear to me that Gillespie provides much of a map in the sense of specifying the positions of certain concepts in relation to each other. However, the chapter does cover a lot of ground and reads more like a textbook introduction to thinking about algorithms – and a rather good one, I think. This also means that it contains many different arguments and cannot be summarized in the way that some research papers can.

On page two, Gillespie provides a very pedagogical list of six dimensions of public relevance algorithms. These points summarize the themes covered in the paper, but they do not really stand in for the main text, which surveys a lot of different arguments and references. Instead of going through these, I will just point to a few places that I found especially useful or thought-provoking, and elaborate a bit on those in an attempt to provoke some comments from you.

I’ll jump directly to the sixth and last theme, which is the production of calculated publics. Here, Gillespie thematizes how algorithms feed claims about publics back to themselves in ways that are often intuitive, but opaque in their mechanisms of calculation. He draws a contrast to surveys and polls as other measures of public opinion, where the problem is how to move from sample to population in the best way. With algorithms, however, Gillespie says that the central problem is not so much reaching the population, which is often quite accessible through the web, but that “the intention behind these calculated representations of the public is by no means actuarial”. That is, they are not limited to an interest in neutral accounting, but shaped by all sorts of other factors that Gillespie explores a bit.

I think this problem raises a couple of questions that we could discuss. First, is this ’central problem’ really new? Surveys and polls are also motivated by interests that are more than actuarial. Perhaps it is rather the case, if Gillespie is right, that the problem of intentionality in the devices used to elicit publics comes to the fore with algorithms. If so, then it seems to me a productive thing. To recall an argument of Noortje Marres, who was here last week, the formation of publics is perhaps inherently problematic, and a foregrounding of this problematicness might thus be useful. 

The next question that we could discuss is, as Gillespie also hints at, what kinds of reflexivity, if any, come with this foregrounding of the role technologies play in enacting publics through the widespread use of web algorithms. Does the fact that web users can select from a variety of algorithms, and the quite visible ways in which these rank information in front of our eyes, lead to new reflexivities about how publics are enacted? Perhaps even about how more conventional social science methods operate? One of the things I am thinking about at the moment is how to understand the role played by Facebook’s constant feedback of qualitative and quantitative evaluations in the form of likes and comments. These mechanisms resemble social science methods at our fingertips, and in an imperfect way that highlights their problematicness.

These questions are related to Gillespie’s third theme, the evaluation of relevance, where he points out that the question is not only what choices we make with the help of algorithms, including how our choices are shaped and traced, but also how the notion of choice itself is enacted in new or different ways.

In the paper that Noortje mentioned last time, which I am working on together with a colleague, we try to articulate the ‘world’ that is assumed by Facebook’s EdgeRank algorithm. One vantage point for our arguments, which is a second and related point that I would like to offer up for discussion, is that the impetus to participate in publics comes from a pragmatic need to orient ourselves in uncertain situations. As such, orderings to navigate by are never just something that needs to be revealed or exposed, but always already something that needs to be in place for action to be possible at all. In a sense, we are all big data analysts, because of our need to handle the constant flow of inputs from our environments, and as such, we are on the lookout for invariants – to use a term from James Gibson – that is, immobile objects that make a difference we can navigate by. Perhaps, then, we have already operated according to certain algorithms in order to arrive at some kind of objectivity that allows us to navigate a large stream of input and go on with our business in an efficient way. When algorithms provide ‘algorithmic objectivity’ (which is Gillespie’s fourth theme), they enact a world in which an algorithm becomes an invariant that can be trusted to remain in the same place, and against which other events can be measured – a kind of ‘scopic system’, to use the term Buczowski draws from Knorr-Cetina in his paper. In this way, algorithms are perhaps pragmatically useful, even when their objectivity is far from ‘perfect’ or ‘universal’. How might this change the ways in which they can, or cannot, be critiqued?
