In the age of the ubiquitous internet, 24 hours is already a bit late to be posting a response to anything, but I had to be sure. There is rarely any time for reflection, and much of the content of our electronic media is reflex. These thoughts are on a recent opening and panel discussion at Eyebeam (Center for Art and Technology in Chelsea, NY) concerning the topic of Augmented Reality.
At about 6:30 I arrived at the panel, at which point the moderator, Laetitia Wolff, was finishing her introductory remarks. I caught enough to hear her point out that there are now as many video cameras on earth as there are neurons in the human brain, connecting this with the idea that such a network constitutes, or could constitute, a form of artificial intelligence equivalent to a human brain. Though intriguing, it is admittedly a bit disturbing to dream of the possibilities of an intelligence formed from the interconnection of electronic eyes. With the announcement that the handsomely designed Google Glass will be made available this year (2013), one can’t help but wonder what it all could mean in the context of the emergence of a potentially new medium.
Augmented Reality (AR) serves to visually enhance objects, spaces or people with virtual content. It has the potential to dramatically change the relationship between the physical and digital worlds. (Henchoz)
The above excerpt, from the “Is Augmented Reality the Next Medium” curatorial statement written by Nicolas Henchoz, provides a bit of context. A good part of the discussion was occupied by mentions of graphic overlays (projections and heads-up displays), physical objects with embedded information, and our mobile devices providing windows onto new content. Enough material to start any dreamer’s head spinning.
But it wasn’t that my imagination ran wild with possibilities that made it hard for me to follow the particulars of the conversation. I was left wanting deeper insights, thirsty for critical dialog. I found myself asking questions which were never fully addressed in the discussion. A moment of relief came when Christiane Paul cautioned us to question this desire for further mediation that AR entails, but there was no real follow-up to this call to investigate what is staged, and to unmask theatricality.
It would seem that the most obvious question to address would be our idea of reality and its relationship to the virtual. A mention of Umberto Eco’s essay, “Travels in Hyperreality”, provided some insight. Though not directly quoted by any of the panelists, here’s the paragraph referenced:
Constructing a full-scale model of the Oval Office (using the same materials, the same colors, but with everything obviously more polished, shinier, protected against deterioration) means that for historical information to be absorbed, it has to assume the aspect of a reincarnation. To speak of things that one wants to connote as real, these things must seem real. The “completely real” becomes identified with the “completely fake.” Absolute unreality is offered as real presence. The aim of the reconstructed Oval Office is to supply a “sign” that will then be forgotten as such: The sign aims to be the thing, to abolish the distinction of the reference, the mechanism of replacement. Not the image of the thing, but its plaster cast. Its double, in other words. (Eco)
It was pointed out that this instance of the Oval Office model served to illustrate a possible mode by which a simulation or replica functions. The reproduction, in its pursuit of realism, becomes hyperreal, standing in for the thing itself. Well beyond evoking a connection to the real, this form of realistic simulation becomes its own reality, and as such operates in its own unique way as a modifier of the potential experience of the real thing. Despite this, however, further insight into what additional theoretical framework we might have for approaching the notions of the Real, reality, and the virtual failed to surface.
In building Augmented Reality, there is a dynamic between the physical object or environment, its simulation through electronic media, the mediated experience of an overlay of virtual content, and the ways in which the experience of one spills over into the other. Perhaps I yearned for some connection to the Lacanian theory of the Mirror Stage, but without a clear idea of how we formulate our notion of what we take to be the Real, and of the operation of the virtual within it, we stand little chance of understanding how this new reality will be used to control or influence perception. Granted, not every new technology is evil, but none is without unintended consequences. There will be influence of some kind or another, and we have to know how to look for it.
It’s incredible to imagine just how many computing devices are in the world, currently connected by various wireless networks, and how many of those have cameras of some sort. Taken as a whole, can they possibly exhibit a human equivalent of intelligence? Are we able to formulate criteria by which we can assess the level of intelligence such a system might have? How would it compare to the intelligence of a single human, a small group, or the entire population?
When taken as a whole, the human species may be hardly more intelligent than slime mold. As we currently understand it, intelligence comes from the connectivity between elements and the plasticity of those connections. It’s not so much the structure itself, but the formation and revision of particular configurations. Sadly, the point missed by the panel is that our digitally mediated environment must be programmed, and until it can program itself, we must do it. The information we can put into it will be limited by what we ourselves can input, and then by the sophistication of the algorithms we write to automate that process. Here are clear sources of structural bias and issues of access. Beyond that, there are also the issues of interface and content filtering.
Jonathan Lee of Google UXA rightly lists inputs and outputs as chief technical challenges faced by designers of user interface (UI) frameworks for Augmented Reality. There is no shortage of sensors today, and haptic interfaces allow for a wide variety of user control over content. The problem, it seems, is that there are almost too many inputs. The question then becomes one of managing the inputs, of extracting information from the input streams and storing it in a way that enhances virtual content and a user’s experience of navigating that content. Content- and context-aware algorithms solve this problem, but raise other issues. Our experience of the internet is already highly mediated by content filtering algorithms. It can almost be argued that serendipity has been all but filtered out (they should make an app for that!) as individuals are catered to based on previously gathered information as interpreted by predictive algorithms (call for submissions: creative predictive algorithms). On the broader issue of adaptive algorithms and similar forms of artificial intelligence, one has to ask: what are the models for such algorithms? They must be programmed at some point, based upon some body of data. How do we select or craft the template? Is a possible consequence of further refining the intelligence of our algorithms a normative model for intelligence?
Perhaps it might seem as though I’ve come unhinged, but these questions become important when we begin to approach the task of embedding objects with information. What information or virtual content do we embed in these objects? Who has the ability to do the embedding? What are the possible system architectures that would allow the experience of an environment to actually be enhanced? What is the framework for approaching this issue of enhancement?
While you consider these, here’s some more of the curatorial statement:
The prospects of augmented reality are linked to a fundamental question: What makes the value of an object, its identity, our relationship with it? The answer lies in the physical properties of the object, but also in its immaterial qualities, such as the story it evokes, the references with which it is connected, the questions it brings up. For a long time, physical reality and immaterial values expressed themselves separately. But with digital technology an object can express its story, reveal information, interact with its context and users in real time. (Henchoz)
It’s important not to mistake the map for the terrain. Physical objects are already vessels of their own history, as they are products of a particular sequence of events. Those events, though external and broad in scope, can be decoded, traced, and ultimately placed within a larger context of processes (not only physical ones but those linked to various cultural practices). With digital technology, an object will not express its own story, but always that of someone else. To which we must ask: why that particular story? How did it find its embodiment as embedded data in that particular object? Is it a special case? Why does this story come to us and not others? If we open the system up for anyone to leave their story with any object, what do we do with hundreds of unique stories each told through a single object? What of a world filled with such objects? How do we navigate this terrain of embedded content? The information revealed by an object through media will, on the surface, only be what is placed there by those privileged with the ability to do so; the interactions will be limited to what those same privileged few have programmed, and the awareness equally limited.
The pieces in the exhibition did little to elaborate these deeper questions, or to complicate the view of reality that underwrites the particular form of Augmented Reality put forward by Nicolas Henchoz. The lack of imagination here comes off as almost tongue-in-cheek. A microphone is placed before a drum kit rigged with mallets and drumsticks attached to actuators; by vocalizing into the microphone, guests can use their voices to control the kit. Mediation is dealt with as a translation, or mapping, of one kind of sound through a chain of electronic and mechanical processes into the production of another. Elsewhere in the exhibition space there is a flat, shallow museum display case without protective glass, in which various postcards, photos, notes, and objects have been placed. iPads, locked and tethered to the case, are provided so that guests can view the objects in the display through the camera and reveal additional virtual content in the form of animations or video, suggesting a sort of lived experience beyond the inert relics. In all there were seven pieces in the exhibition, of which two were not working after the panel discussion. Despite the technological foundations of the works presented, the whole exhibition space is filled with wide sheets of paper, gently curved around large cardboard tubes, evoking the sensation one might have of inhabiting a paper factory or newspaper printing facility.
There are two major paradigms within average digital, electronic, and media art: “the funny mirror” and “demo mode”. The exhibition explored variations of these two paradigms to great effect, but with little affect. It’s still unclear whether this was all to be taken seriously, or whether the whole panel discussion and exhibition is actually an intensely subtle critique of current developments in AR. The list of partners and funders for the whole affair doesn’t do much to shed light on the matter, except to indicate that there is a group of respectable people taking this all very seriously, whether as an emerging technology with radical potential as a profoundly transformative medium, or as a nuanced critique thereof.