An artist working with electronics and electronic media, based in Brooklyn, NY

Posts tagged “theory”

Thoughts On “Is Augmented Reality the Next Medium”

In the age of the ubiquitous internet, 24 hours is already a bit late to be posting a response to anything, but I had to be sure.  There is rarely any time for reflection, and much of the content of our electronic media is reflex.  These thoughts are on a recent opening and panel discussion at Eyebeam (Center for Art and Technology in Chelsea, NY) concerning the topic of Augmented Reality.

At about 6:30 I arrived at the panel, at which point the moderator, Laetitia Wolff, was finishing her introductory remarks.  I caught enough to hear her point out that there are as many video cameras on Earth as there are neurons in the human brain, connecting with the idea that this constitutes, or could constitute, a form of artificial intelligence equivalent to a human brain.  Though intriguing, it’s admittedly a bit disturbing to dream of the possibilities of an intelligence formed from the interconnection of electronic eyes.  With the announcement that the handsomely designed Google Glass will be made available this year (2013), one can’t help but wonder what it all could mean in the context of the emergence of a potentially new medium.

Augmented Reality (AR) serves to visually enhance objects, spaces or people with virtual content.  It has the potential to dramatically change the relationship between the physical and digital worlds. (Henchoz)

The above excerpt, from the “Is Augmented Reality the Next Medium” curatorial statement written by Nicolas Henchoz, provides a bit of context.  A good part of the discussion was occupied by mentions of graphic overlays (projections and heads-up displays), physical objects with embedded information, and our mobile devices providing windows onto new content.  Enough material to start any dreamer’s head spinning.

But it wasn’t that my imagination ran wild with possibilities that made it hard for me to follow the particulars of the conversation.  I was left wanting deeper insights, thirsty for critical dialogue.  I found myself asking questions which were never fully addressed in the discussion.  A moment of relief came when Christiane Paul cautioned us to question the desire for further mediation that AR entails, but there was no real follow-up to this call to investigate what is staged, and to unmask theatricality.

It would seem that perhaps the most obvious question to address would be our ideas of reality and its relationship with the virtual.  A mention of Umberto Eco’s essay, “Travels in Hyperreality”, provided some insight.  Though not directly quoted by any of the panelists, here’s the paragraph referenced:

Constructing a full-scale model of the Oval Office (using the same materials, the same colors, but with everything obviously more polished, shinier, protected against deterioration) means that for historical information to be absorbed, it has to assume the aspect of a reincarnation. To speak of things that one wants to connote as real, these things must seem real. The “completely real” becomes identified with the “completely fake.” Absolute unreality is offered as real presence. The aim of the reconstructed Oval Office is to supply a “sign” that will then be forgotten as such: The sign aims to be the thing, to abolish the distinction of the reference, the mechanism of replacement. Not the image of the thing, but its plaster cast. Its double, in other words. (Eco)

It was pointed out that this instance of the Oval Office model served to illustrate a possible mode by which a simulation or replica functions.  The reproduction, in its pursuit of realism, becomes hyperreal, standing in for the thing itself.  Well beyond evoking a connection to the real, this form of realistic simulation becomes its own reality, and as such operates in its own unique way as a modifier of the potential experience of the real thing.  Despite this, however, further insight into what additional theoretical frameworks we have for approaching the notions of the Real, reality, and the virtual failed to surface.

In building Augmented Reality, there is a dynamic between the physical object or environment, its simulation through electronic media, the mediated experience of an overlay of virtual content, and the ways in which the experience of one spills over into the other.  Perhaps I yearned for some connection to the Lacanian theory of the Mirror Stage, but without a clear idea of how we formulate our notion of what we take to be the Real and the operation of the virtual within it, we stand little chance of understanding how this new reality will be used to control or influence perception.  Granted, not every new technology is evil, but none are without their unintended consequences.  There is going to be influence of some kind or another, and we have to be aware of how to look for it.

It’s incredible to imagine just how many computing devices are in the world, currently connected by various wireless networks, and how many of those have cameras of some sort.  But taken as a whole, can they possibly exhibit a human equivalent of intelligence?  Are we able to formulate criteria by which we can assess the level of intelligence such a system might have?  How does this compare to the intelligence of a single human, a small group, or the entire population?

When taken as a whole, the human species may be hardly more intelligent than slime mold.  As we currently understand it, intelligence comes from the connectivity between elements and the plasticity of those connections.  It’s not so much the structure itself, but the formation and revision of particular configurations.  Sadly, the point missed by the panel is that our digitally mediated environment must be programmed, and until it can program itself, we must do it.  The information we can put into it will be limited by what we ourselves can input and by the sophistication of the algorithms we write to automate that process.  Here are clear sources of structural bias and issues of access.  Beyond that there are also the issues of interface and content filtering.

Jonathan Lee of Google UXA rightly lists inputs and outputs as chief technical challenges faced by designers of user interface (UI) frameworks for Augmented Reality.  There is no shortage of sensors today, and haptic interfaces allow for a wide variety of user control over content.  It seems the problem is that there are almost too many inputs.  The question then becomes a matter of managing them: of extracting information from the input streams and storing it in a way that enhances virtual content and a user’s experience of navigating that content.  Content- and context-aware algorithms solve this problem, but bring up other issues.  Our experience of the internet is already highly mediated by content filtering algorithms.  It can almost be argued that serendipity has been all but filtered out (they should make an app for that!) as individuals are catered to based on previously gathered information as interpreted by predictive algorithms (call for submissions: creative predictive algorithms).  On the broader issue of adaptive algorithms and similar forms of artificial intelligence, one has to ask: what are the models for such algorithms?  They must be programmed at some point, based upon some body of data.  How do we select or craft the template?  Is a possible consequence of further refining the intelligence of our algorithms a normative model for intelligence?
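To make the filtering concrete, here is a toy sketch of how predictive content filtering tends to work: new items are ranked by similarity to a profile built from past behavior, so the already-familiar keeps resurfacing.  This is my own illustration, not any platform’s actual system; the “taste” vectors are invented for the example.

```python
import numpy as np

# Toy sketch of predictive content filtering (illustrative only):
# items are ranked by similarity to a profile averaged from what
# the user engaged with before.

def build_profile(clicked_item_vectors: np.ndarray) -> np.ndarray:
    """Average the feature vectors of previously clicked items."""
    return clicked_item_vectors.mean(axis=0)

def rank_items(candidates: np.ndarray, profile: np.ndarray) -> np.ndarray:
    """Return candidate indices ordered by cosine similarity to the profile."""
    scores = candidates @ profile / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(profile) + 1e-9
    )
    return np.argsort(-scores)

# Hypothetical data: 3-dimensional "taste" vectors.
history = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.0]])
candidates = np.array([[0.85, 0.1, 0.05], [0.1, 0.1, 0.8], [0.0, 0.9, 0.1]])
print(rank_items(candidates, build_profile(history)))  # the familiar item ranks first
```

The serendipitous item, by construction, lands at the bottom of the list.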

Perhaps it might seem as though I’ve come unhinged, but these questions become important when we begin to approach the task of embedding objects with information.  What information or virtual content do we embed in these objects?  Who has the ability to do the embedding?  What are the possible system architectures that would allow the experience of an environment to actually be enhanced?  What is the framework for approaching this issue of enhancement?

While you consider these, here’s some more of the curatorial statement:

The prospects of augmented reality are linked to a fundamental question: What makes the value of an object, its identity, our relationship with it?  The answer lies in the physical properties of the object, but also in its immaterial qualities, such as the story it evokes, the references with which it is connected, the questions it brings up.  For a long time, physical reality and immaterial values expressed themselves separately.  But with digital technology an object can express its story, reveal information, interact with its context and users in real time. (Henchoz)

It’s important not to mistake the map for the terrain.  Physical objects are already vessels of their own history, as they are products of a particular sequence of events.  Those events, though external and broad in scope, can be decoded, traced, and ultimately placed within a larger context of processes (not only physical ones but those linked to various cultural practices).  With digital technology, an object will not express its own story, but always that of someone else.  To which we must ask: why that particular story?  How did it find its embodiment as embedded data in that particular object?  Is it a special case?  Why does this story come to us and not others?  If we open the system up for anyone to leave their story with any object, what do we do with hundreds of unique stories each told through a single object?  What of a world filled with such objects?  How do we navigate this terrain of embedded content?  The information revealed by an object through media will, on the surface, only be what is placed there by the one privileged with the ability to do so.  The nature of the interactions will be limited to those programmed by those privileged enough to do so, and the awareness equally limited.

The pieces in the exhibition did little to elaborate these deeper questions, or to complicate the view of reality that values the particular form of Augmented Reality put forward by Nicolas Henchoz.  The lack of imagination here comes off as almost tongue-in-cheek.  A microphone is placed before a drum kit rigged with mallets and drumsticks attached to actuators; by vocalizing into the microphone, guests can use their voices to control the kit.  Mediation is dealt with as a translation or mapping of one kind of sound, through a chain of electronic and mechanical processes, into the production of another.  Elsewhere in the exhibition space there is a flat, shallow museum display case without protective glass, in which various postcards, photos, notes, and objects have been placed.  iPads locked and tethered to the case are provided for guests to view the objects through the camera in order to reveal additional virtual content in the form of animations or video, suggesting a sort of lived experience beyond the inert relics.  In all there were seven pieces in the exhibition, of which two were not working after the panel discussion.  Despite the technological foundations of the works presented, the whole exhibition space is filled with wide sheets of paper, gently curved around large cardboard tubes, evoking the sensation one might have of inhabiting a paper factory or newspaper printing facility.

There are two major paradigms within average digital, electronic and media art: “the funny mirror” and “demo mode”.  The exhibition explored variations of these two paradigms to great effect, but with little affect.  It’s still unclear whether this was all to be taken seriously, or whether the whole panel discussion and exhibition is actually an intensely subtle critique of current developments in AR.  The list of partners and funders for the whole affair doesn’t do much to shed light on the matter, except to indicate that there is a group of respectable people taking this all very seriously, whether as an emerging technology with radical potential as a profoundly transformative medium or as a nuanced critique thereof.


Questions: Metric Expansion of Space, Dark Energy, Probability

My head has been swimming. Perhaps I should have continued studying theoretical physics when I was younger, but there’s nothing that can change that now. I’ve always been fascinated by the sciences of the very large and the very small. Developing theories about the nature of space and time has been a pastime of mine since I could grasp the concept of such things, and in this my cousin was my chief partner in scientific blasphemy. Forgive me for lacking rigor and academic references; I simply wish to bounce ideas off the aether and see where I’ve gone terribly wrong and also what might be of merit.

Today, the LHC at CERN is carefully winding up and pitching beams of highly energetic particles (more accurately wavicles, perhaps better described as multi-dimensional folds) at one another, hoping that in the interactions we can better understand the stuff that makes us what we are. The more elusive Why will probably forever lie beyond the pale of comprehension, and yet we strive to corner it with data, formulate hypotheses, develop and test theories, add to the Katamari Damacy ball of equations used to explain what we find, and then reduce them to elegant truisms to test in further experiments. What has amazed me since youth is that no matter how much we know, and continue to learn, there seems to be a growing number of questions and increasingly perplexing problems concerning our current understanding of things and the gaps in our ability to explain what we think we know.

The expansion of space has puzzled me. We have a clear picture from observational data that the universe is not only expanding, but that the rate of expansion is accelerating. When I was young enough to grasp this concept of expanding space, my first reaction was to ask where the center was and what lay at the edge. We know now that there is essentially no center, that the best idea we have is that all of the universe came into being all at once, in a flash, commonly (and perhaps mistakenly) referred to as the singularity in a “Big Bang”. There is no edge and therefore no center, but perhaps this is completely wrong, misleading and confusing. To stick with what we understand more clearly: we take the speed of light to be constant (and from what we have observed, so far this appears to be the case). Therefore, the more distant an object, the longer light emitted from it has taken to reach us. The result is that the further into deep space we peer through our telescopes, the further into the past we see. What we have found is that the light from these distant objects appears to have a shift in its spectrum. Light emitted by the plasma of an element contains a signature spectrum of frequency gaps and bands (which correspond to the energy jumps made by electrons as they absorb and emit energy in the form of photons). What Slipher, Hubble and other scientists observed is that the light from distant objects contains the spectra of familiar elements, but the gaps and peaks in intensity are all shifted towards the red end of the spectrum, a result consistent with the Doppler effect and leading to the conclusion that nearly everything in space appears to be moving away from us (though some objects, like the Andromeda galaxy, are blue-shifted, and thus moving towards us). Another startling observation is that the further away an object, the more intense the red-shift. One explanation is that not only are distant objects receding from us faster, but that the expansion of the space between us and them can account for that shift.
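For reference, these are the standard textbook relations at work here (nothing new of my own): the redshift z compares the observed wavelength to the emitted one, and Hubble’s law ties recession velocity to distance, with the Hubble constant measured at roughly 70 km/s/Mpc.

z = \frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{emitted}}}{\lambda_{\mathrm{emitted}}}, \qquad v \approx H_0 D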

Contemporary cosmological models accept the Universe as not only expanding, but expanding at an accelerating rate. That is, space itself is expanding, everywhere all at once, the effect of which is that objects are being pushed increasingly further apart. What’s more, the rate at which objects are being distanced is accelerating. Why? One possible explanation is this idea of “negative pressure”. That is, if gravity represents a positive pressure, bringing all matter together, then Dark Energy can be thought of as the source of a negative pressure, which fuels this increasing expansion of space. Is it anti-gravity? Not quite (to explain this exceeds my own grasp of the concept), but regardless, we can still ask, “From where does this pressure come?” If there is no meaningful “outside” to the universe from which to create a negative pressure, how can it be created internally?
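For what it’s worth, in the standard picture the “negative pressure” language comes from the acceleration equation of cosmology (textbook material, not my own conjecture): expansion accelerates only if the pressure term is negative enough, and a dark-energy component with equation of state w close to −1 satisfies that condition.

\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right), \qquad w \equiv \frac{p}{\rho c^{2}} \approx -1

So \ddot{a} > 0 requires p < -\rho c^{2}/3, which ordinary matter and radiation cannot supply.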

To be clear, Dark Matter and Dark Energy have little to do with one another in this line of thought. In my understanding of things, Dark Energy is simply a way to account for the energy budget of the Universe, to explain what we can (or can’t) observe. Dark Matter is the placeholder that accounts for all those gravitational effects we observe in the structures of our universe which cannot be explained by the observed quantities of luminous (radiative) matter.

With my very limited understanding of the math operating behind these deeper theories of the way things are, and of the ways in which we presently explain what we can observe today, I’d like to put forward some very modest ideas and see if someone can help point me to a deeper understanding of the current state of theory and of my own ignorant blunders.

I’d like to posit that space, empty space, contains probabilities. Probabilities of what? Of being different, nothing more. These probabilities taken together total 1. The probability then exists, with 100% certainty, that it will change. How and why? We do not understand enough to conjecture, let alone speak of what that change may entail, but change comes with 100% certainty. Here time cannot have any meaning. Either the probability has already manifested, or it is yet to manifest (the granularity of time, perhaps?).

If space is taken first to be a region with a probability of 0 that anything different could happen, then that region of space effectively does not exist. Who cares what it is, if it cannot be anything otherwise? There is no way of knowing even whether it is, since it is completely inert. In a way, we have found a way to define absolutely “nothing”. If a single quantum of space exists, it must be because the probability of its existence, or of its being something other than what it was, must have been other than 0. If we imagine this singular region as being different and assign probabilities to each way that it could be different, we get a range of possible states coexisting simultaneously within one single quantum of space. That is, until a probability manifests as an actuality; in this case, the states come into existence.

If the probabilities produce the existence of some state, and if there were some underlying symmetry or structure to the probabilities governing the possible existence of states, in which the existence of one necessitates the existence of a counterpart, then there is a distinct possibility that the single quantum of space can exist in actual multiplicities, that is, divide. I realize how sloppy this all is, but from this, I understand time to be an artifact of the concurrence of these probabilities taking place in relation to one another. That is equally sloppily stated, but I think it gets the point across: there exists no meaningful time-frame without relationship to something else; space itself is nothing more than the state of a set of probabilities, and time is a by-product of probabilities manifesting themselves. The granularity of space is then seen as the minimum distance for one state to manifest a different state, which is a probability of the relationship to something other than itself, as dictated by the underlying structure of those probabilities. Can we probe these shapes? In what ways do the Standard Model and Quantum Field Theory allow for this way of thinking? We currently have a minimum distance, the Planck length, and know the energy required to probe at that distance. What happens to time and energy at these scales? Can they be thought of in terms of the evolution of probabilities, and probabilities as the process by which space expands?

To zoom out a bit and assume that there is some mechanism causing space itself to expand, and that it is internal, or inherent to space itself, does that account for the relativistic effects on light due to gravity and the expansion of space? Could it be possible that the speed of light could be explained by the same mechanism behind the expansion of space? Does space expand equally everywhere and all at once?

If the mechanism for the expansion of space can be described mathematically as the existence of probabilities that something will exist where there wasn’t something before, can this be reconciled with the idea of the conservation of energy? Does it require energy for space to expand by these means? What would the model for this look like? Do we have a problem of an exponential demand on energy, or can this be resolved by the existence of other features of space-time? We already know that it is probable for a particle/anti-particle pair to spring into existence, that this is latent within any region of space. I suppose it’s this idea of probabilities at very small scales, and the idea of quantum foam, that have led me down this line of thinking. That and the idea of asymptotic freedom by way of Frank Wilczek.

Is there anything that sounds right? Where did I go wrong? Perhaps probabilities are problematic because the language exists within the framework of mathematics, but it is through mathematics that we have put into language this picture of the world. Is probability a deep enough technology or concept to probe the foundation of all there is? It certainly can speak to what isn’t, and could never be.

Bibliography:

“The Lightness of Being” by Frank Wilczek
“The Quantum World: Quantum Physics for Everyone” by Kenneth W. Ford
“New Theories of Everything” by John D. Barrow
“On Space and Time” by Various Contributors, Edited by Shahn Majid


Glitch Art Resources

I created a resource page for my class, “Doing it Wrong” at 3rd Ward.  The class is a short 3-hour Glitch Art techniques primer:

http://phillipstearns.wordpress.com/glitch-art-resources/

It is by no means complete. In fact, please contact me if you’d like me to add something.  I’ll be updating it relatively frequently.
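As a taste of the kind of technique a primer like this covers, here is a minimal databending sketch: corrupting a handful of bytes in a compressed image file and seeing what the decoder makes of the damage.  The file names are placeholders, and this is only one of many approaches, not a summary of the class.

```python
import random

# Databending sketch: corrupt a few bytes in a JPEG, skipping the start
# of the file so the header survives and the image still (usually) opens.
# Assumes a local file named "input.jpg" larger than the safe zone;
# results vary depending on which bytes get hit.

HEADER_SAFE_ZONE = 512  # leave the first bytes alone

with open("input.jpg", "rb") as f:
    data = bytearray(f.read())

for _ in range(20):  # number of "wrong" bytes to introduce
    i = random.randrange(HEADER_SAFE_ZONE, len(data))
    data[i] = random.randrange(256)

with open("glitched.jpg", "wb") as f:
    f.write(data)
```

Re-running it on the same source produces a different glitch each time, which is half the fun.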


Bernhard Garnicnig’s “Almost White”



Selections from Almost White by Bernhard Garnicnig (source: flickr – images taken between 29 Mar 2006 & 04 Aug 2010)

Photographs taken to set a camera’s white balance.

“Almost all cameras allow the user to set the photographic white point manually. To make this setting on some cameras, you have to shoot a picture of a usually white surface and set it as the white point reference.

These pictures, usually deleted right after confirming the setting, question the concept of subjective realities in the photographic process and document the photographers surroundings from his part unconscious, part mechanic eye. This is one of the last kinds of photography where no post processing is applied by a human while it shows how much the camera is manipulating the image already.

Its one of the last snapshots of photographic truth in the digital imaging age.” — B. Garnicnig
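To make the mechanism Garnicnig describes concrete, here is a minimal sketch of the kind of correction a manual white-balance reference enables: per-channel gains derived from a shot of a nominally white surface, then applied to later frames.  This is my own illustration, not any camera’s actual firmware; the helper load_float_rgb and the file names are hypothetical, and the arrays are assumed to be float RGB in [0, 1].

```python
import numpy as np

# Sketch of white balance from a reference shot (illustrative only):
# the mean color of the reference frame is mapped to neutral gray,
# and the same per-channel gains are applied to subsequent images.

def gains_from_reference(reference_rgb: np.ndarray) -> np.ndarray:
    """Compute per-channel gains that neutralize the reference's mean color."""
    channel_means = reference_rgb.reshape(-1, 3).mean(axis=0)
    return channel_means.mean() / channel_means

def apply_white_balance(image_rgb: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Scale each channel by its gain and clip back to the displayable range."""
    return np.clip(image_rgb * gains, 0.0, 1.0)

# Hypothetical usage:
# reference = load_float_rgb("white_card.jpg")   # the shot that is usually deleted
# photo = load_float_rgb("scene.jpg")
# balanced = apply_white_balance(photo, gains_from_reference(reference))
```

The throwaway frame does all the work; the "real" photographs only inherit its correction.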

So much of the world is left on the cutting room floor. This is a necessary part of relating experience, whether in the transmission of factual information or the telling of a story based on fact. Not every detail is needed to give a general picture or to convey the essence of an idea or experience, nor can every detail be captured, recalled, or communicated. Exactly what is omitted reveals the circumstances (bias) around and through which the material aspects of an experience are transformed by the process of relating and crafted into media.

Where lived subjective experience is often filled or obscured by the mediated experience of information displayed or rendered through electronics, today those scraps are increasingly difficult to find. Our culture of digitally mediated exchanges has been carefully structured to remove the perception of the framework through which we conduct our daily activities. The unwanted bits are tucked away on our hard-drives or tossed in the recycling bin—in some cases, deleted in-camera. Everything is curated, edited, cleaned and polished (even the raw webcam feed is a considered choice to convey honesty).

It’s not so much an issue of noise—the din of cellphone rings, tinny ear buds cranked way too high, the drone of our air handling systems and refrigeration units, the screeching, grinding and rumbling of our transportation machines are certainly things we will not easily rid ourselves of—but that the dust that imposed itself on the grooves of a record, the grain of a piece of paper and the pen-in-hand overcoming it to scrawl a letter, the grit of static and dead air between the stations, are all disappearing.

It is not so much a sense of nostalgia as a reflection: looking back on where we were but a few years ago, while understanding that today digital media strives for ever higher levels of fidelity (which ironically forces television personalities to pursue more extreme methods of altering their physical appearance, in addition to the artificial sharpening and saturation applied to just about every image these days) in an attempt to look forward into the potential media of the future, where everything is so radically fabricated and manipulated that there is no honesty, no substance, no reality left but a simulated phantom of what once was.

What is striking to me about Bernhard’s Almost White series is that it brings to the surface issues of the photographic medium, and how its digitization has been quietly accepted, wholesale, as Photography.  The question of what makes photography different from digital imaging has been unearthed.  By using these artifacts as evidence of the manipulation nearly every digital image undergoes, Bernhard opens the door to questioning the honesty behind every media image, even tempting us to ask exactly how staged these white balance calibration shots are.  Has “fidelity” in the digital (dark) age become a matter of passing off artificially enhanced hyper-realism as reality?  Or has it become something more subtle, staging reality in such a way that the extraordinary seems mundane?


Glitch Theory: “Notes on Glitch”

Jose Irion Neto, Untitled Databent JPEG-LS (2010)

In its 6th edition, titled “Wrong”, the online journal World Picture recently published an article, “Notes on Glitch”, by Hugh S. Manon and Daniel Temkin, with a companion gl1tchw0rks gall3ry curated by Temkin.

“Notes on Glitch” covers an impressive amount of ground, offering perspectives on well-known problematics of the newly emerging form of Glitch art and theorizing about issues of authenticity, effort, aesthetics, methodology, and materialism, as well as presenting some interesting trajectories for further thought.

The article is by no means comprehensive, and makes no claim to be. It does, however, put together a great resource for those interested in learning more about this growing phenomenon within electronic culture. I’m certainly excited about the conversations this piece of Glitch theory is sure to generate within the community and beyond.


Change Blindness


Without thinking about it, I had turned the TV on and settled on the couch. In an attempt to halt my exhausted mind, I was letting the stream of media wash over my brain when a story about a recent fashion show, featuring a brief interview with designer Gareth Pugh, set in motion a cascade of thoughts. He went on about opposites: black/white, male/female, good/evil—binary states. There was no grey area in the way he spoke about the elements he was working with in developing his fashion. My gut feeling is that this obsession with opposites and extremes, although cliché, is perhaps indicative of a general malaise.

Initially I was tempted to ask myself whether extremism is merely a coincidence, concurrent with maturing global capitalism, or a consequence of employing digital technologies in the advancement of free markets, but to make it an issue of economy casts the issue in the wrong light altogether. Digital technologies are symbols of speed, communication, and efficiency, but they also exemplify certain attitudes towards the material nature of reality—attitudes that express little about the spiritual content that defines our connection to it.

Does building a culture upon a technological substrate based upon systems of discrimination, determinism and absolute binary states have subtle consequences for the formation and development of social behavior? Out of a sufficient number of bits (although each bit embodies an equal possibility of being in one of only two states) any quantity can be expressed in discrete terms; 32 bits yield roughly 4.3 billion unique states. But does combinatorics have anything to say about the gray areas of our age? Ethics is replete with gradients; the events resulting from the meeting of cultures whose values and customs precipitate diverse ethics and morals, oftentimes contradictory and incompatible, demand an analysis that can reconcile extreme ideals and beliefs in a position between or outside them. The alternative, it could be reasoned, is to give both sides the means to eradicate the other.
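For scale, the arithmetic behind those figures (standard, nothing exotic): the number of states doubles with each bit, and the finest step an n-bit gradient can take is fixed by the bit depth.

2^{32} = 4\,294\,967\,296, \qquad \text{smallest step of an } n\text{-bit gradient} = \frac{1}{2^{n}-1} \;\left(\approx \tfrac{1}{255} \text{ for 8 bits}\right)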

Is it simply a matter of perspective? We can’t perceive the discrete nature of our digital age (perhaps this is why it slips by undetected), but it reflects a desire in our thinking for absolutism. Deterministic systems can easily be represented in deterministic machines, but what is the necessary fudge factor to introduce indeterminism into these same deterministic systems? Bigger numbers? Better math? Brute force computation? At what point does it matter?
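One concrete way to see the “fudge factor” question (my example, not a claim about any particular system): a pseudorandom generator is itself deterministic, so whatever indeterminism a program exhibits has to be smuggled in from outside, for instance from the operating system’s entropy pool.

```python
import os
import random

# A seeded pseudorandom generator is fully deterministic:
# same seed, same "random" sequence, every time.
rng_a = random.Random(42)
rng_b = random.Random(42)
print([rng_a.random() for _ in range(3)])
print([rng_b.random() for _ in range(3)])  # identical to the line above

# The usual fudge factor: pull entropy from outside the program,
# here from the operating system's entropy pool.
seed = int.from_bytes(os.urandom(8), "big")
rng_c = random.Random(seed)
print(rng_c.random())  # differs from run to run
```

Even then, the "indeterminism" is only imported; the machine itself never stops being deterministic.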

For painting a digital gradient between incompatible color palettes, maybe it is a question of the limitation of our sense organs, but a simulation is a simulation, and a world modeled after a complex system of mathematics lacks a certain spirit. Our ability to express becomes limited to the scope of our mathematical equations. Though we may be writing ourselves into the machines in the form of our programs, code, and algorithms, the necessary reformulation of an indeterministic experience into discrete language to be executed on a deterministic machine robs its fruits of a certain vitality. Perhaps it’s a simple limitation of our present state of technological development that we have no mediated equivalent of a handshake, and that sensual physical encounters are not yet possible over the so-called broadband networks spanning the more developed parts of our globe. Perhaps a supplement should never be taken as a replacement for the real thing.

Objects are born from the mind and realized with the almost exclusive use of automated machines, or of humans guided by routines optimized by machines. If we program the machine, does that necessarily imbue it with a spirit? What is there to be said about spiritless machines overdetermining the actions of spirited ones? Does this situation diminish or enrich the spirits of those machines which possess them? We are hard pressed to turn up well-reasoned answers, and yet we’re removing the hand, which is attached to the spirit, from the making of our world. Curves formed from discrete values, guiding the indeterministic materials of the real world; the mind acting on matter, however mediated—we shape our world, but to what extent? Where do our machines begin to exert their crude reduction of our intentions on our own thoughts, as a form of deterministic, human-enabled machine agency? The relationship between human and machine is dynamic and reciprocal, and we cannot easily formulate a way of quantifying it. It is difficult, if not impossible, to program that which we do not fully understand, let alone that which refuses to be subjected to discrete forms of classification and analysis.

Perhaps there is something that I fear and I can’t quite express it. There’s a sense of loss, but it’s not the loss itself that troubles me; it’s the general attitude towards that loss: indifference, ignorance, or complete obliviousness. And in the middle of this intuition is a sense of helplessness at the fact that nothing can be done to reverse the trend, only to create an isolated pocket of appreciative practitioners. Not Luddites, no. We will die without our technologies; they are outgrowths of our species and we share a common blood. But a world made by hand is quickly becoming a world made by the hand guided by the machine; it’s a pointless paranoia, and the best I can do is make note of this uneasy feeling, reach for my pills and sleep it off.


Photogenesis

Meditations on chemical and digital photographic processes

DCP_0022

In non-digital photography—the “capture” of images through exposure—a moment in time is sublimated into a successive process of chemical mediations.  These translations are obscured in the resulting photographic image except to the skilled who can recognize certain chemical techniques for enhancement or manipulation. The deception of the photographic image lies in the obfuscation of technique—texture is an illusion resulting from light playing off surfaces or through objects.  In painting, the technique, because it always produces a certain texture, becomes integral to the perception of the work and its content.  Perhaps it is because painting must transcend or reconcile with its deception, and that it is not simply an image, that distinguishes it from photographic image making—the subject or referent is simulated through the illusion of light created through the application of paint on a surface, where in photography it is a photo-chemical impression upon physical material, a literal play of light upon surfaces.  The digital images from the DCP Series complicate this issue of texture and technique.  They exhibit a richness in detail, where the technique of manipulating the electronics of the camera asserts itself as simulated texture within the image, not in such a way as to reclaim that domain of texture occupied by painting but to draw attention to the fact that it—the digital image itself—is almost pure simulation, that there are many imperceptible layers of mediation involved in the production of the digital image which remove it from its referent.

DCP_Series - Modified Kodak DC280

The referent in the DCP Series images is the process of digital photography itself, revealed through intervening in the physical hardware during image capture. Here the illusion of texture arises from the play of data through algorithms; light, and therefore exposure, is amputated from the digital photographic process.  Where the mediations separating the real from the simulated within non-digital photography involve photo-chemical transformations of materials via exposure and development, the mediations involved in the creation of the digital images in the DCP Series involve complicated algorithms, made visible through the intervention of wires intersecting processes by connecting points on the circuit boards which were never intended to meet.  Though the specifics of the tools and methods involved in the two practices are radically different, because digital photography evolved from non-digital photography there exists not only an overlap but a discontinuity between the two.  By scrutinizing work produced at the limits of each practice, and attempting to locate the essence of one within the other, the possibility of creating new forms arises.

Locating an analog to the physical process of manipulating the circuits of digital cameras within the photographic process poses an interesting set of problems.  That the image of film-based photography exists in a physical domain, while the image of the digital era exists as a data set corresponding to the charges stored in vast arrays of microscopic capacitors, already complicates any attempt to unite the domains of digital and photo-chemical image making.  The translation of light to a data set makes the digital camera an all-in-one image making machine; you don’t need a photo lab to produce images.  Data acquisition and storage; data read-back and software interpretation of data; and output to the monitor replace the processes of exposing and developing film and then exposing and developing photographic paper.  Algorithms and silicon replace film, paper and chemical baths.
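To make that chain of replacements concrete, here is a toy sketch of the numeric “development” that stands in for chemistry.  It is purely illustrative, not the firmware of any actual camera; the array sizes, the 12-bit sensor range and the gamma value are assumptions.

```python
import numpy as np

# Toy illustration of the digital chain that replaces exposure-and-development:
# sensor readout -> numeric "development" -> quantization -> storage.
# Every image here is just a float array; the point is how many algorithmic
# steps stand between light and the stored file.

def develop(raw: np.ndarray) -> np.ndarray:
    """Stand-in for in-camera processing: normalize and gamma-encode."""
    normalized = raw / raw.max()
    return normalized ** (1 / 2.2)  # gamma curve, a purely numeric "chemistry"

def quantize(image: np.ndarray, bits: int = 8) -> np.ndarray:
    """Reduce continuous values to discrete levels, as the file format demands."""
    levels = 2 ** bits - 1
    return np.round(image * levels).astype(np.uint8)

# "Exposure": simulated 12-bit-ish sensor values standing in for a CCD readout.
raw_readout = np.random.rand(480, 640).astype(np.float32) * 4095
stored = quantize(develop(raw_readout))
# A real camera would also run demosaicing, white balance, sharpening,
# and JPEG compression before anything reaches the memory card.
```

Each of those steps is one of the imperceptible layers of mediation the DCP Series drags into view.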

Parallels to the process of intervening in the electronics of the camera can be found in chemically processing unexposed film.  Created completely in the darkroom through the application of different chemicals directly on the film emulsion, the resulting images circumvent the need to expose film to light.  This raises the question of whether a photographic image requires the exposure of film at all, or whether its development takes precedence in the creation of photographic images.

Man Ray’s photograms alter our perception of the processes that define photography by discarding not only film, but the lens and the camera altogether.  By inserting physical objects between light and photographic paper to create images, the mechanism of the camera—the voyeur’s perspective onto the world—is circumvented.  In the digital domain, instead of adding objects to photographic paper, the addition of objects to the circuitry—alligator clips and wires—circumvents the camera’s inherent image capturing capabilities.  However, because the process of modifying the cameras used in the DCP Series overrides the process of exposure, the Rayogram still falls short as a suitable analogy with which to locate the resulting digital images within the context of traditional photography.

Is it still possible to have a photograph without any of the mediums being exposed to light?

If images produced by developing unexposed, but chemically manipulated positive film or photographic paper (chemigram) can still fall under the umbrella of photography, then we have shifted the emphasis of photography from the subject, light, and exposure, to the chemical process of development which may not even involve light (except in the mediation of the electromagnetic forces responsible for chemical reactions).  To develop a single frame of unexposed (positive) film and/or an unexposed sheet of photographic paper would exemplify this process.  The question is now: where can we locate the notion of development within the practice of digital photography?

Inside the prepared digital camera, the element typically exposed to light in the production of an image, the CCD, is bypassed and the electronics responsible for interpreting its signals and writing them to a digital storage medium are manipulated to produce the image.  The process of data acquisition, processing and storage is akin to exposing film to light, and developing its negative.  When the data is read back, it is interpreted by decompression algorithms and presented on a screen.  With this software, the data set that describes the image can be manipulated using any number of mathematical operations.  This whole process of generating data and interpreting it as an image could of course be emulated within software, but the result would involve neither the mechanisms of exposure nor development in any traditional sense and thus the result could not be considered photographic.  Digital images produced within the camera occupy this interstitial zone between photography and algorithmically generated imagery, because the tools involved are designed to focus light, expose a surface and record the resulting data.  Perhaps by circumventing the process of exposure, the images produced by these prepared cameras cannot be considered photographic in any traditional sense.

DCP_0055

It is still tempting to identify the process of creating these images with photography.  The shutter release is still involved; however, the act of initiating an exposure is abstracted, initiating a Rube Goldbergesque chain of pre-programmed instructions, where photons generate electrical signals which are quantized and stored as data points.  After the intervening processes employed in the production of the DCP Series, the digital camera thinks it’s taking an exposure, but the paths from the CCD to the recording device have been severely compromised.  By bypassing the CCD electronics, we intercept the digital processes of “development”—analog-to-digital conversion, compression algorithms, etc.—and dump our redirected electrons onto what in film photography would have been the exposed and developed negative: the flash memory card.  It’s like taking a picture with the shutter mechanism disabled and afterwards bathing the film in a cocktail of different chemicals; you trigger the mechanics of an exposure, but what happens in the treatment of the “film” is what we’re concerned with.  You could almost discard the camera altogether, except that in the digital camera the translation of the image from CCD to storage medium—what would otherwise be the passage from film to developed negative and then to photographic paper, etc.—is dependent upon a system of components and short-circuits that has no algorithmic equivalent; it escapes the type of emulation that would allow us to forget about the physical object altogether.

No doubt, this whole process is, in the end, digital, but perhaps there is hope that it is actually possible to contextualize it within the domain of photography and not simply relegate it to the domain of digital image production.  It may be that in preparing Polaroid cameras so that the film is physically damaged while it is being pulled through the mechanisms, we find the closest parallel to the images in the DCP Series.

As a final note, this whole exercise of attempting to locate this work within the tradition of photography is necessary because it is not based in emulation.  The act of using a digital camera locates the resulting image within the practice of photography. The question, then, is if altering the electronics of the camera is a photographic process, does it have a precedent in previous photographic traditions, and if so, in which specific stage of the whole process can we find the closest similarities? Of course, I’m also interested in how this obscures the definition of photography—whether digital or film-based—and in whether there are other practices that have touched upon this problem of “what is photography?”. So that the traditionalists may understand the images and the process in terms of what they already know, we can refer back to those artists who chemically manipulate unexposed film and develop the results. Though the analogy is not a perfect match, the form of photography discovered and exploited in the production of the DCP Series is the digital age’s answer to those artists.

See Also

Artists:
Pierre Cordier
Polli Marriner
Francoise André

Reading list:

Luis Nadeau, Encyclopedia of Printing, Photographic and Photomechanical Processes, New Brunswick, NJ (Atelier Luis Nadeau), 1989, and the related website, photoconservation.com

Gordon Baldwin, Looking at Photographs: A Guide to Technical Terms, Los Angeles and London (J. Paul Getty Museum in association with the British Museum Press), 1991

