Metadata and Metanarratives, by James Coupe, 2015
In The Order of Things, Michel Foucault outlines four types of similitude, the first of which, convenientia, describes things “which come sufficiently close to one another to be in juxtaposition; their edges touch, their fringes intermingle, the extremity of the one also denotes the beginning of the other” [1]. Such things complement each other in some way, and share a resemblance born of juxtaposition and association. Foucault states that these relationships are not necessarily obvious based on external appearance – in other words they are not purely visible forms of resemblance, but rely on commonalities that are hidden, a situation that “reverses the relation of the visible to the invisible” [2]. He sees a need for objects to include external ‘signatures’ that allow us to uncover their hidden qualities.
Foucault’s separation of the ‘obvious’ and ‘hidden’ properties of objects is analogous to the relationship between content and metadata in digital data. Whereas an image is content, metadata would include the image’s size, resolution, date of creation; for an audio file, it might include the file format, who made it, title, etc. So metadata is information about information, and reveals the hidden elements that can be used to identify similitudes between objects – in this case, where two audio files have something in common. When approached algorithmically, it is thereby possible for, say, iTunes to recommend music to you based on the resemblance of one audio track’s metadata to another, without necessarily needing to know the content of those tracks. The more resolution there is in the metadata, the more accurate the similitudes. Or, perhaps the more accurate the ‘narrative’ relationship that exists between the two audio tracks. Given an ability to access these hidden properties directly, it would be possible to generate a series of similitudes that would work – i.e. be sympathetic to each other – yet that would reside outside of ordinary empirical perception, removing the need for Foucault’s external signatures. In other words, metadata-aware systems can show us that two things have a relationship, in some cases, that we would not be able to identify without them.
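The kind of metadata-driven similitude described above can be sketched in a few lines of code. This is a hedged illustration, not any real recommender: the metadata fields, weights and library below are invented, and actual systems draw on far richer signals.

```python
# A minimal sketch of metadata-based similitude: ranking one audio track
# against others purely by comparing metadata fields, never content.
# Field names and weights are illustrative assumptions.

def similitude(a, b, weights=None):
    """Score how closely two metadata records resemble each other (0.0-1.0)."""
    weights = weights or {"genre": 3, "artist": 2, "year": 1, "format": 1}
    score, total = 0, 0
    for field, w in weights.items():
        total += w
        if a.get(field) is not None and a.get(field) == b.get(field):
            score += w
    return score / total

def recommend(track, library):
    """Return library tracks ranked by metadata resemblance to `track`."""
    return sorted(library, key=lambda t: similitude(track, t), reverse=True)

playing = {"genre": "ambient", "artist": "Eno", "year": 1978, "format": "mp3"}
library = [
    {"genre": "ambient", "artist": "Eno", "year": 1983, "format": "flac"},
    {"genre": "techno", "artist": "Hawtin", "year": 1994, "format": "mp3"},
]
ranked = recommend(playing, library)  # most similar metadata first
```

The point the sketch makes is Foucault's: the two tracks are judged sympathetic without the system ever 'hearing' either one.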
Let’s consider the implications of this for narrative. I’d like to introduce a project that uses metadata to directly generate narrative, and that illustrates this idea of sympathetic relationships between metadata as used for narrative purposes.
My 2010 work Today, too, I experienced something I hope to understand in a few days is a Facebook application, built from three sources, each a form of self-surveillance and reliant upon metadata to connect them together. First is a series of video portraits of people who volunteered to be filmed at specially arranged events organized in Seattle, Barrow and Manchester, using poses and actions loosely based on Danish experimental filmmaker Jørgen Leth’s 1967 film The Perfect Human. The work’s title comes from a line in the film. The videos are uploaded to a database where a program automatically edits them in the style of Leth’s film, using metadata extracted from the original cinematography – duration, type of shot, gender of subject. The second source is text from the status posts of people who have voluntarily signed up to the project’s Facebook application. In so doing, all the status posts they have ever made are put into a database and mined for narrative associations, before being joined together into a story. These narratives, made up of multiple people’s status updates, are then overlaid as subtitles on video portraits of people whose demographics match those of the original post. Lastly, YouTube videos with tags that match keywords in the status posts are automatically downloaded, and displayed next to the video portraits as a split-screen video. The resulting video is then uploaded to YouTube, and also put onto the Facebook page of all subscribed users.
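The matching logic the project relies on – chaining status posts by shared keywords, then pairing each post with a demographically matching portrait – can be sketched roughly as follows. The data model, stopword list and age threshold are assumptions for illustration only, not the project's actual code.

```python
# A hedged sketch of metadata-driven narrative assembly: greedily chain
# status posts that share keywords, then pick a video portrait whose
# subject matches the poster's demographic. All rules here are invented.

def keywords(text, stopwords={"the", "a", "i", "to", "and", "of"}):
    """Reduce a post to a crude keyword set - the post as metadata."""
    return {w.strip(".,!?").lower() for w in text.split()} - stopwords

def chain_posts(posts):
    """Order posts so each shares at least one keyword with the previous."""
    remaining = list(posts)
    story = [remaining.pop(0)]
    while remaining:
        prev = keywords(story[-1]["text"])
        match = next((p for p in remaining if prev & keywords(p["text"])), None)
        if match is None:
            break  # no further narrative association found
        story.append(match)
        remaining.remove(match)
    return story

def portrait_for(post, portraits):
    """Pick a portrait whose subject matches the post's demographic."""
    return next((v for v in portraits
                 if v["gender"] == post["gender"]
                 and abs(v["age"] - post["age"]) <= 5), None)
```

Even at this toy scale, the chains it builds are 'tangential' in exactly the sense described below: legible to the machine via keywords, but not the associations a human editor would have made.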
As you might see from the videos, the relationships between each status update are in some cases tangential, i.e. not the way we might have thought about them ourselves. We can see the logic – keywords, grammars, etc. And this is the interesting thing about the similitude principle – it betrays the logic of the machine, the story as told by the computer, based on its worldview. Because this project generates videos based entirely on metadata, it has been described as “tapping into the sadistic voyeurism behind the benign face of cool” [3]. The passive activity of simply being part of social networks such as Facebook generates commercial associations that are algorithmically determined – health, real estate, fashion, car products based on your age and gender. By taking an individual’s status posts and combining them with those of other people that are thematically or grammatically linked, Today, too, I experienced something I hope to understand in a few days flattens subjectivities – people become data for the system. When combined with YouTube videos based purely on common keywords, the semantics become even further skewed – not totally irrational, after all this is a prime example of Foucault’s similitudes, but into a space where the algorithm reveals its rules. This is the sadistic voyeurism – the voyeur is the network, persistently and with no regard for the individuals finding narrative relationships between products and human beings, the price paid by those humans for wanting to be connected to other people and feel a sense of community. Social media provides us with a kind of transactional observational system, where in order to see oneself as part of a community, one must submit one’s metadata to the commercial gaze that underpins it.
With the digitization of many elements of daily life – communications, reading, music, entertainment, social interactions – almost everything has metadata, including us. Of the classified documents leaked by Edward Snowden, the first to be published by The Guardian revealed that Verizon was required to hand over, in bulk, the telephone records of its customers to the National Security Agency. These records did not include the content of telephone calls, but rather the metadata associated with the calls: phone numbers, GPS coordinates, duration and time of calls, SIM card ID. Senator Dianne Feinstein, chair of the Senate intelligence committee, wrote in USA Today: “The call-records program is not surveillance. It does not collect the content of any communication, nor do the records include names or locations. The NSA only collects the type of information found on a telephone bill” [4].
Here, Feinstein differentiates between metadata and surveillance – a controversial distinction that many would contest. To follow the NSA’s logic, ‘surveillance’ would be limited to the collection and analysis of the content of conversations that, presumably, people deliberately participated in. In contrast, metadata constitutes supplementary information that is inadvertently generated – for instance, the time and duration of a call. The distinction between surveillance of content and metadata points to the expanded scope of observational systems. It also highlights the emergence of a new narrative form. Unlike traditional narrative, metadata-based narratives infer ‘content’; they do not use it directly. They get it wrong, a lot. This is why we are offered things via Facebook and Google ads that we don’t want. We understand why we’re being offered them – demographics, search terms, etc. – and because they are not content-driven they are sometimes wrong, but on other occasions eerily correct. These are the new narratives, and they are becoming increasingly familiar to us. A Facebook front page, for example, is a succession of text, announcements, images and videos from other people, whose only connection to each other is our own metadata. Yet somehow we are not overwhelmed – the simultaneity makes sense to us as we find a way to join them together into something coherent.
The transition from analog communication to digital information systems permits the easy filtering, evaluation and comparison of indexed data, and following Foucault, allows hidden relationships to emerge. It also demonstrates the extent to which social media networks such as Facebook rely entirely upon metadata. The actual content of people’s individual status posts is largely irrelevant, other than in terms of specific keywords (i.e. the post represented as a form of metadata). Demographics, device types, location, mobility, and ‘likes’ are much more valuable in terms of building up a narrative profile of people as potential customers to sell things to. As such, Facebook should be seen as the pre-eminent self-surveillance network of our time, successfully combining a commercial business model with voluntary self-surveillance. So in one context – the Snowden leaks – we fight to protect the privacy of our telephone conversations; in another context – social media – we voluntarily donate intimate personal information to a corporate entity, even now in the knowledge that this data is being siphoned off by the NSA. In this context we can see a considerable interdependence between metadata, narrative and surveillance emerge, with digital information systems enabling metadata collection, which in turn facilitates computer-driven narrative inference, which in turn is itself what we might call surveillance.
Sites like Twitter and YouTube do not acquire the quantity of personal metadata that Facebook does, simply because they do not incorporate as many data points into their system. Where Facebook succeeds is in finding virtual analogies for so many aspects of our real-world lives – work, friends, emotions, events, births, deaths, etc. This provides Facebook with an enormous ability to accurately narrativize our lives based purely on metadata. Interestingly, this becomes mutually beneficial – through such ‘big data’ systems, we can find patterns that may not be apparent in real life. So here the machine takes charge: the scale of the data is beyond our empirical capabilities and can show us things about ourselves that we would not be able to perceive without it.
Michael Curry’s 2003 paper, The Profiler’s Question and the Treacherous Traveler: Narratives of Belonging in Commercial Aviation, articulates the close relationship between data, profiling and narrative. Curry traces the attempts made by the airline industry to figure out if an air traveler was “a known and rooted member of the community” [5]. In the early days of commercial air travel, flying was an exclusive activity because it was expensive, meaning that the reasons for people to fly were relatively limited. These limitations meant a small number of profiles for a ‘normal’ traveler, which made people outside that norm stand out more easily. These profiles could be easily transposed onto a narrative – a well-paid businessman on a business trip to meet with other executives would be the norm; a criminal with a gun in his hand baggage who will try to divert the plane to his homeland and make a getaway, the exception. Security measures were designed based on these narratives – searching hand baggage, filtering out passengers with one-way tickets bought with cash, etc. As air travel became cheaper and more popular, the profiles, and thereby the narratives, also expanded exponentially. At that point the profiles needed greater resolution – they needed more metadata – in order to infer the content of their journeys. Consequently airlines began requiring things like photo IDs, passports, ZIP codes, and credit cards, and these can be considered as ways to obtain that metadata – to access histories which could flesh out the narratives of individual passengers. The ‘content’ of a person’s journey – i.e. asking that person face-to-face for their reasons for travel – was less reliable than the metadata associated with the journey (age, gender, travel companions, method of ticket purchase, seat selection, travel history, etc.).
The strategies that Curry describes are equally applicable to airlines, the NSA, and the advertising and marketing industries. As he says,
If one knows a location – a street address, wired-telephone number, latitude and longitude, or even airline flight and seat number – one can use that datum as a means of associating activities and participants one with another, and creating an image of the whole. The desire may be to find potential deodorant buyers or potential hijackers, but the method can be the same. [6]
Curry’s observations show once again that visible or aural content is not as valuable for surveillance purposes as metadata. In terms of narrative, it also hints at the capability for narrative to be algorithmically generated – i.e. to respond directly to the currently available metadata, where a story is based on the demographics, locations, preferences of a community of people. In this sense, it is dangerous to impose pre-determined narratives because those may not take account of the emergent aspirations, motivations and anxieties of specific profiled groupings. Metadata narratives need to be dynamic and emergent, potentially based on possibilities that we cannot see ourselves. As is the case with Today, too, I experienced something I hope to understand in a few days, narrative is a dynamic thing that can shift in direct response to real world events, and as a result keeps pace with current social and technological paradigms. If something happens in the real world, this is reflected in the Facebook updates, that then are used to generate the videos.
Related to Curry’s idea is another of my works, a video art installation titled Swarm (2013). This is a work that can be understood only in the context of profiling and narrative inference. Swarm takes the logic of social media – demographically organized communities based around common interests, habits and markets – and transposes it onto gallery audiences. Using four rows of monitors, the work generates competing panoramic representations of the gallery space that appear to be exclusively occupied by specific groupings of people – men in their 20s, women in their 50s, people of Asian descent, people dressed in black, men with beards. Each group is shown as what appears to be a live panoramic video image, with people inserted into a ‘crowd’ alongside others who have previously visited the gallery. Some crowds are much larger than others – a large group of middle-aged white women on one panorama, standing around, waiting for something to happen, may juxtapose with a solitary Latino male on another. Different demographic groupings territorialize the gallery’s spaces, their numbers dynamically expanding and contracting.
Swarm, like many of my other works, uses computer vision algorithms to profile people via live video cameras. The cameras identify faces and then analyze the landmark features of those faces – relationships between eyes, nose, mouth, jawline, etc. These features are then compared to those in a large database of pre-tagged faces, and the age, gender, race, facial expression, etc. of the person in the gallery are estimated. The system works with metadata – it is not looking for specific individuals, rather it is looking for characteristics and then comparing them to existing patterns based on the metadata of others. It calculates the locations of people inside the gallery space and uses those to figure out how to build crowds of people, again based on positional metadata. On each row of monitors, there are dynamically generated groupings of people, and these are based on what the metadata can show us – groupings based on how the computer organizes humans, what the majority is, what the outliers are. The various narratives that are inferred from those groupings are essentially the same ones that drive the NSA’s logic, airport profiles and internet commerce.
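The profiling step described here – measuring landmark features and comparing them to a database of pre-tagged faces – can be sketched as a simple nearest-neighbour vote. Real computer vision systems use trained models over many more features; the two-dimensional vectors and tags below are invented for illustration.

```python
# A simplified sketch of estimating a gallery visitor's attributes by
# comparing face-landmark measurements to pre-tagged examples. The
# feature vectors and the k-nearest-neighbour rule are assumptions.

import math
from collections import Counter

def distance(a, b):
    """Euclidean distance between two landmark-feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate(features, database, k=3):
    """Vote over the tags of the k closest known faces."""
    nearest = sorted(database, key=lambda f: distance(features, f["vec"]))[:k]
    votes = Counter(f["gender"] for f in nearest)
    ages = [f["age"] for f in nearest]
    return {"gender": votes.most_common(1)[0][0],
            "age": sum(ages) // len(ages)}
```

The system never identifies anyone: it only places a new face among the metadata of faces it has already tagged, which is exactly what lets Swarm sort visitors into crowds.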
Where does this correlation take us? Swarm relies on our understanding and experience of metadata to make sense to its audiences. Social media, by finding patterns within our personal data, trains us to understand the way metadata works as narrative. So the groupings in Swarm are familiar and recognizable, showing us the extent to which we have become used to algorithmically constructed communities as a way of experiencing the world, and to finding ourselves inserted into groupings of people based upon our metadata. Swarm removes these groupings from their familiar commercial context and as a result is more oppressive, exclusionary and menacing. The algorithm is visible and present, assuming control of our metadata and using it to construct narratives that we can exert very little control over. Arguably the events of the last year in Ferguson, Staten Island and Sanford have made this kind of demographic profiling even more pertinent.
Swarm was inspired by another kind of narrative, J.G. Ballard’s High-Rise, a novel in which people live in close proximity in a one-thousand unit modern apartment building. Eventually, the pressures of isolated yet claustrophobic living cause the residents of the high-rise to form clans, organized around class demographics. The situation rapidly becomes monstrous, as residents begin killing each other in order to regain control of their environments. For Ballard, the residents are cool, unemotional, desensitized, with minimal need for privacy and capable of thriving within the closed environment of this “malevolent zoo”:
[The residents had] no qualms about the invasion of their privacy by government agencies and data-processing organizations, and if anything welcomed these invisible intrusions, using them for their own purposes. These people were the first to master a new kind of late-twentieth century life. They thrived on the rapid turnover of acquaintances, the lack of involvement with others, and the total self-sufficiency of lives which, needing nothing, were never disappointed. [7]
Ballard’s isolation/proximity dialectic is useful in thinking about metadata. Social media isolates individuals by creating a customized, unique experience with narrative content generated specifically for you, based on your demographic, purchasing habits and interests; it also relies upon proximity by calculating your similarity to the profile of other individuals. Inside this framework, we exchange privacy for an identity within a network of algorithmically determined similitudes. No longer is it about protecting our own subjective thoughts, rather it is about managing our external (metadata) identity and ensuring that the system sees us correctly, i.e. with the narrative that we want. We can become virtuosic operators within this claustrophobic environment – as Ballard recognizes, we use the situation “for our own purposes”. An advanced understanding of how a network sees us requires an awareness of the metadata that we generate within it, and for us to perceive ourselves as isolated and connected at the same time.
On the Observing of the Observer of the Observers (2013) is an installation that explores this interdependence of the individual and the masses. All visitors become participants, and everyone observes and is observed. The work incorporates a labyrinthine sequence of rooms, each containing five cameras and five monitors. The cameras are positioned in a ring in the center of each room, capturing a 360-degree panorama that is then displayed on the screens. Each camera runs computer vision algorithms that determine what they record and what they ignore, selectively sending video to the monitors to display a panoramic view of the gallery space that is asynchronous. Each camera will only record video when a single individual is in the shot. When spliced back together to form a panorama, those individuals find themselves paired up with one other person, or four others, or none – each room has a unique set of rules that it follows to recompose the footage that it captures into specific narrative scenarios. Some blend staged footage – the Asch conformity test, a religious sermon on God as voyeur – with gallery visitors.
James Coupe, On the Observing of the Observer of the Observers (2013)
Each room’s latest footage is autonomously distributed to a screening room, where it is spliced into a ten-minute narrative film, using a series of instructions adapted from Friedrich Dürrenmatt’s 1986 novella, The Assignment, as subtitles and voice-over. The instructions (for instance, “Try not to be observed”, “Pay attention to man and lend him meaning”) sound like self-help-style directives, perhaps providing a source of comfort as people find themselves encountering a taxonomy of individuals in the installation, some live, some archived, some previously recorded versions of themselves. Every person’s experience of the work is unique, both isolated and claustrophobic as it constantly reconstructs itself based around their identity and passage through the installation.
James Coupe, On the Observing of the Observer of the Observers (2013)
Dürrenmatt’s novella, The Assignment, which this work uses as its narrative template, revolves around the disappearance of Tina Von Lambert, who leaves behind a diary with a final entry that simply states, “I am being observed.” It is unclear if this refers to the meticulous studies her psychiatrist husband makes of her, or if it is a positive acknowledgement that, at last, someone is paying attention to her. Later in the story, a logician develops a theory of observation that connects war, science, terrorism, marriage and God. According to the logician, people have an inherent need to be seen, without which they would feel insignificant and depressed:
… [he] would have to conclude that other people suffered as much from not being observed as he did, and that they, too, felt meaningless unless they were being observed, and that this was the reason why they all observed and took snapshots and movies of each other…[8]
So here observation is a self-perpetuating loop between content and metadata. Awareness of how we are seen determines how we present ourselves. A desire to be validated as meaningful by metadata-seeking systems encourages us to contribute more personal content. So, potentially inverse to our expectations, we are creating content in order to generate metadata. Social media provides us with the tools to verify that when we ask to be observed, someone is watching. For what is a Facebook post without at least someone ‘liking’ it? Or a tweet with no followers? Or a YouTube video that no one watches? Metadata is much crueler than this, however: while it may give us the impression that we can strategically oscillate between observer and observed, between exhibitionist and voyeur, the reality is that every click, scroll and pause generates data for the networks that provide them. Ken Rudin, head of Facebook analytics, has discussed cursor tracking as a means of generating additional metadata [9], converting users into active observers, tracking our gaze and connecting us with products we didn’t even realize we wanted. Even when we just want to watch, we also perform for the network and contribute even greater narrative resolution for it.
To finish, I want to show you one last work, Sanctum, which incorporates several of the narrative systems that I have laid out today. Sanctum is perhaps unique in that it is a public artwork that uses social media narratives – in other words, a public artwork that asks what it means to be in public today. As a public artwork, it brings into play a number of issues concerning narrative and metadata that are worth discussing.
Screenshot of Sanctum (2013): https://www.youtube.com/watch?v=RBWk__eh2aI
Installed on the façade of the Henry Art Gallery at the University of Washington (UW) in Seattle, Sanctum uses six video cameras to track and profile people as they walk towards the gallery. Once they have been profiled, voices that match their demographic are beamed at them via ultrasonic speakers. The voices read out narratives built from Facebook status posts, again matching their demographic. As they get closer to the gallery’s façade the voices become clearer, eventually resolving into a single voice. Once within 12 feet of the façade, a person’s live image is put onto video monitors that wrap around the gallery, paired up with other people that match their demographic, and with the Facebook narrative as subtitles. Here we see something distinct from Today, too, I experienced something I hope to understand in a few days – this is an individuated, real-time narrative, based upon your demographic, thereby precisely mimicking the experience of commercial internet practices.
James Coupe and Juan Pampin, Sanctum (2013)
One of the core goals with Sanctum was to get as close as possible to these kinds of metadata-aware systems. We didn’t want to simply represent such systems; we wanted to build one. And interestingly once that is attempted in a public space it gets much more complicated. There were legal issues involved in placing surveillance cameras in a public space, profiling people in public space, juxtaposing fictional narratives with live images of people, all things that happen as a matter of routine online. In this sense, public art and public space becomes a really useful vehicle for highlighting the issues involved in the complex relationship between surveillance, metadata and narrative.
Signs for Sanctum (2013) – initial version on the left; amended version on the right.
Interestingly, the initial assumption was that people would object to being profiled and recorded. Gallery staff were instructed not to use the word ‘profiling’ when discussing the work with members of the public. The first version of the signage for the project produced by the gallery did not encourage people to participate in the work, and was only changed at the artists’ insistence. Contrary to these initial fears, the work has been extremely successful and at the time of writing, no complaints have been filed since the project launched. Instead, interesting behaviors have emerged: people using the work to leave messages for others in the system, people uploading images onto Facebook of themselves as seen by the system, people staging performances in front of the façade – new narrative forms perhaps. These are people who understand the system’s logic and are skilled manipulators of metadata. As I discussed earlier, there is perhaps a level of familiarity with the rules of engagement when it comes to social media, metadata-based narratives and surveillance. Arguably Sanctum provides a platform for people to explore these rules and is a testing ground of sorts to measure yourself up against surveillance systems and their narrative logic.
Installed from April 2013 until summer 2015, Sanctum also spans the Snowden revelations of June 2013. Their confirmation that everyone is being monitored, all of the time, via the various metadata-enabled devices that we carry around with us amplified the layered approach to public space that Sanctum employs. As people walk past the installation, on their phones, they broadcast GPS-tagged metadata, reinforcing the underlying menace of what Sanctum is doing with their demographic information. There is a cost to the narratives that these kinds of systems are capable of constructing. Last November, the Henry Art Gallery organized a symposium about Sanctum and the issues of surveillance and privacy that it confronts. Speakers included Cory Doctorow, Marc Rotenberg and Edward Shanken, with a goal to take lessons from Sanctum that will allow other galleries and artists to make work that explores these ethical, legal and institutional grey areas. It is vital that artists can make work that uses the same tools deployed by governments – not painting pictures of these scenarios but operating in the same reality, with the same methods recast. Only then can we attain a critical position capable of meaningful understanding with real-world implications and impact. As Hans Haacke once said, the system is not imagined, it is real [10].
And more broadly, within the context of this symposium – what does it mean for art and artists to operate within environments where metadata increasingly supersedes content? What happens to the art object, what happens to artistic practice? What skills does an artist need to have today? It’s extremely important that artists continue to interpret, reflect, critique and comment, that they have the skills and the platform to irritate the systems we live alongside, and vitally, use the right tools for the job. Traditional art materials are increasingly inadequate for this, hence the need for artists to be hackers, programmers and highly conversant with systems as well as objects. The works that such artists make use technology, but increasingly it is becoming hard to sustain a critical practice that engages with the world as we live in it without using or referencing technology in some way. Geert Lovink wrote recently that the Snowden revelations marked the symbolic closure of the “new media” era. So now we must talk about art in the age of digital media, rather than digital art, and work out how to prepare ourselves accordingly.
Endnotes
[1] Michel Foucault, The Order of Things (Vintage, 1973), 18
[2] Ibid, 26
[3] Maria Walsh, James Coupe: Today, too, I experienced something I hope to understand in a few days, Art Monthly, October 2010, 37
[4] http://www.usatoday.com/story/opinion/2013/10/20/nsa-call-records-program-sen-dianne-feinstein-editorials-debates/3112715/
[5] Michael R. Curry, The Profiler’s Question and the Treacherous Traveler: Narratives of Belonging in Commercial Aviation, Surveillance and Society, Vol. 1, Issue 4, 476.
[6] Ibid, 493.
[7] J.G. Ballard, High-Rise, (Caroll & Graf, 1989), 36
[8] Friedrich Dürrenmatt, The Assignment (University of Chicago Press, 2008), 19
[9] http://blogs.wsj.com/cio/2013/10/30/facebook-considers-vast-increase-in-data-collection/
[10] Jack Burnham, Great Western Saltworks (George Braziller, 1974), 22.