I was curious to see how well images from the High Voltage Image Making project would transfer into textiles, so I put together a design file from the lead image of the Kickstarter campaign and sent it off to the weavers who make my Glitch Textiles.
I’m super pleased with the results and will be making this design available for a limited time, exclusively through the Kickstarter campaign, as a reward for backing at the $225 level.
High Voltage Image Making is a project exploring the use of electrical discharge as a means of creating images in photographic media.
I’m raising funds to create a body of enlarged archival prints and to continue developing the project: creating new works and exploring new techniques.
Limited Editions, Signed Prints, a Photo Book, Commissioned Originals, and more available as backer rewards!
Get started making glitch art! I’m offering a class covering basic glitch art techniques on Skillshare: Glitch Art – Creating Design from Error: Databending Basics
Learn how to use text editors and hex editors to make glitch art and then turn a series of glitched images into an animated GIF. We begin with a brief introduction to what Glitch Art is, the materials involved, and then dive into hacking the materiality of our digital world.
Sign up using the link above for $10 off class enrollment!
Want to learn how to glitch a digital camera?
I’m teaching a workshop in Los Angeles at the Machine Project on February 23rd.
PS – I’m providing the cameras and you get to take yours home with you after the workshop!
Learn how to make images like this:
Don’t know about you, but this winter has been brutally cold up here in Brooklyn. Grab a glitch blanket at GlitchTextiles.com for 25% off using the promo code 25FEB2014. Stay warm.
*NEW* Infected Blankets – throws designed by translating the complete genomes of well known viruses into pixelated mosaics. Catching, but not contagious.
*NEW* Gradients and Test Patterns – designs made from color test images.
2013/2014 Knit Glitch Blankets
2013/2014 Woven Glitch Blankets
I arranged a swap with Nukeme and UCNV: a couple of glitched throws for a glitched labcoat.
The package arrived today and, to my surprise, they sent me both versions, light and dark. They are mind-blowingly awesome. If you’re long in the arm, like me, see if they’ll tailor one to your measurements.
Happy New Year!
2013 was a busy year and I’m just now getting caught up on posting here.
Year of the Glitch turned two this year and will continue to be a platform for exploring my experiments in digital art.
More recently, I’ve been getting into Processing and have posted a collection of videos documenting that work.
Glitch Textiles has seen a few new designs added to its collections:
Really excited about 2014 and will have some great projects to share in the coming months.
Day 4 @ TextielLab, TextielMuseum, Tilburg, NL
The palette is fixed and I’ve settled on my final design constraints and source material. For the next two working days in the lab, I’ll be weaving fragments from core memory dumps. Raw binary data from my system RAM has been rendered into a 6-bit color space with a total of 64 colors. The data itself is a collection of fragments of files, images, sounds, temporary data and programs, a sketch of my activities assembled according to the obscure logic of my operating system.
Complete documentation of the process and resources will come in the following weeks.
After having my PC laptop, camera, and audio recorder stolen on a train to Amsterdam, I am indebted to my dear friend Jeroen Holthuis for helping me write a program in Processing which performs variable bits-per-channel rendering of raw binary data in a fashion similar to Paul Kerchen’s LoomPreview. He has also been kind enough to loan me his camera and host me for some of my time in the Netherlands. Many thanks!
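For anyone curious, the core of such a renderer is small. Here is a rough Python sketch of the idea (not Jeroen’s actual Processing program): every 6 bits of the raw stream select one of 64 colors, 2 bits per RGB channel:

```python
# Minimal sketch of 6-bit (64-color) rendering of raw binary data.
# Function names are mine, for illustration only.

def bits_from_bytes(data):
    """Yield individual bits from a byte string, MSB first."""
    for byte in data:
        for shift in range(7, -1, -1):
            yield (byte >> shift) & 1

def render_6bit(data, width):
    """Map every 6 bits of raw data to an (r, g, b) pixel, 2 bits per
    channel. Channel values 0-3 scale to 0-255. Returns rows of pixels."""
    bits = list(bits_from_bytes(data))
    pixels = []
    for i in range(0, len(bits) - 5, 6):
        chunk = bits[i:i + 6]
        r = (chunk[0] << 1 | chunk[1]) * 85   # 0, 85, 170, 255
        g = (chunk[2] << 1 | chunk[3]) * 85
        b = (chunk[4] << 1 | chunk[5]) * 85
        pixels.append((r, g, b))
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]
```

Feed it a memory dump and a loom width, and each 6-bit fragment of RAM becomes one colored pick in the weave.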
From May 1 through May 14th, Pete Edwards and Phillip Stearns have been working on developing an open platform for endless musical and electronic invention, exploration, and discovery from the bottom up or the top down. This system is based on minimizing the differences in the input and output “languages” used in various musical electronic formats. This means finding a way to allow free communication between logic, analog and eventually digital electronics. We are working to achieve this by finding a middle ground between these mediums where signal format and amplitude can be shared freely with minimal need for translators and adaptors. Our proof of concept models have shown that unhindered communication between binary logic and variable analog systems renders wildly adventurous possibilities and a unique musical character.
The form-factor ethos is one where our passions for invention and performance are given equal attention. The key to achieving this goal is designing a hardware system with maximal scalability of size, quality and hardware format, allowing the experimenter to quickly and cheaply connect circuit boards with simple jumper wires. Meanwhile, the traveling musician may prefer to adapt their system to a rugged housing with large-format control hardware. This is achieved by adopting a standard layout for a set of core modules, which can be built up to the appropriate scale using a series of shields and pluggable add-ons.
After a series of discussions on what such a system might look like and how to establish a standard that could be as flexible as possible, allowing for the nesting of micro and macro elements, we began prototyping modules and stackable hardware interfaces.
Project documentation is still underway, with schematics for the prototypes still in development. However, after only two weeks we have produced a functional system that fulfills many of our goals, including portability, quick system (re)configuration, an open, patchable interconnection architecture, and a stable, breadboard-compatible form factor with the potential for stackable shields and interfaces.
Future plans for the project include the development of VCO, VCA, and VCF modules that operate on 5 volts; release of schematics and system specifications to the public; and production of low-profile, breadboard-compatible modules in kit and pre-fabricated form, with options for either through-hole or SMD components.
A video demonstrating the 4000 series CMOS logic based modules can be viewed here.
The Module Prototypes:
The Shifter – A dual 4-bit serial-in, parallel-out (SIPO) shift register (CD4015) is connected as a single 8-bit SIPO shift register. Two 1 of 8 digitally addressable analog switches control two feedback taps, which allow any of the shift register’s 8 outputs to be fed back to the register input. Input to the register is the output of four cascaded dual-input XOR gates (CD4070), for a total of 5 possible inputs. The first two inputs are provided by the 1 of 8 switches, the third and fourth are labeled as “mod” inputs for patching of any logic-level signal, and the fifth is connected to a “seed” button located in the lower left corner of the module. A logic-level signal on the clock input will shift, or advance, the register on every positive-going edge transition. Setting the feedback taps to the same state will fill the register with logic 0 on each positive edge of the clock input. The register may need to be jump-started by pressing the “seed” button occasionally in the event that all outputs go low (the lock-up condition). The edge connector and header row provide connections for ground, power (3-18V), address and inhibit control inputs for each of the 1 of 8 switches, the “mod” inputs, the 8 parallel outputs of the register, and the outputs of three of the XOR gates (1 = both feedback taps XORed, 2 = the second tap and “mod” inputs XORed, 3 = “mod” inputs XORed).
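Functionally, the Shifter’s core behaves like a Fibonacci-style linear feedback shift register (LFSR). A minimal software model (hypothetical, with the “mod” and “seed” inputs omitted for clarity):

```python
# Software model of the Shifter's core: an 8-bit register whose input is
# the XOR of two selectable feedback taps. Bits are stored MSB-first.

def shifter_step(register, tap_a, tap_b):
    """Advance the register one clock: shift right-to-left conceptually,
    feeding in the XOR of the two tapped bits."""
    feedback = register[tap_a] ^ register[tap_b]
    return [feedback] + register[:-1]

def run_shifter(seed, tap_a, tap_b, clocks):
    """Clock the register `clocks` times and collect every state."""
    reg = list(seed)
    states = [reg]
    for _ in range(clocks):
        reg = shifter_step(reg, tap_a, tap_b)
        states.append(reg)
    return states
```

The model reproduces the lock-up behavior described above: an all-zero register stays all-zero (hence the “seed” button), and setting both taps to the same position XORs a bit with itself, filling the register with zeros.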
Divide by 2 by 2 by 2… – A single 12-bit binary counter (CD4040) takes a logic-level signal and provides 12 sub-octaves, each available as a patch point on the header on the left side of the module. Additionally, three 1 of 8 digitally addressable analog switches (CD4051) provide independent selection of the first 8 sub-octaves generated by the binary counter. The header row along the bottom provides connections for ground, power (3-18V DC), counter clock input, counter reset, address lines and inhibit control inputs for each of the three 1 of 8 switches, and the final four output stages of the binary counter.
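The counter’s behavior is easy to model: stage n of the binary count toggles at half the rate of stage n−1, so each stage is one octave down from the last. A quick sketch (function names are mine, for illustration):

```python
# Idealized model of a 12-bit ripple counter used as an octave divider:
# bit n of the running count is a square wave at clock_hz / 2^(n+1).

def sub_octave(clock_ticks, stage):
    """State (0 or 1) of counter stage `stage` (0-11) after
    `clock_ticks` clock edges: simply bit `stage` of the count."""
    return (clock_ticks >> stage) & 1

def sub_octave_frequency(clock_hz, stage):
    """Frequency of the square wave on a given stage's output."""
    return clock_hz / (2 ** (stage + 1))
```

So a 440 Hz clock gives 220 Hz on the first stage, 110 Hz on the second, and so on down 12 octaves.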
Divide by 3-10 – This module divides the frequency of a logic-level signal by integers 3 through 10. A 1 of 8 digitally addressable analog switch allows for selection of the division factor. A divide-by-2-through-10 counter (CD4018) operates on feedback to establish the division factor and is used in conjunction with a quad 2-input AND gate (CD4081). The header row and connector provide connections for ground, power (3-18V DC), counter clock input, address lines and inhibit control inputs for the 1 of 8 switch, and the subharmonic output.
Rhythm Brain – Three binary rate multipliers (CD4089) share a common clock input and output pulses at multiples 0-15 of 1/16th the rate of the logic-level signal on the clock input. All chips share a common “set to 15” input, which globally resets the pattern. Each chip has independent 4-bit addressable rate multiplication and inhibit controls. The edge connector and header row provide connections for ground, power (3-18V), 3 independent 4-bit address selections of rate multiplication and inhibit controls, and an individual output for each chip. An additional set of outputs on the header on the right side of the module provides the complement of each individual output.
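The defining property of a binary rate multiplier is that a 4-bit rate word N yields exactly N output pulses per 16 input clocks. A simplified phase-accumulator model (the CD4089’s actual pulse placement within the 16-clock cycle differs, but the pulse count matches):

```python
# Simplified binary rate multiplier: with a 4-bit rate word `rate`,
# exactly `rate` output pulses occur in every 16 input clocks.

def rate_multiply(rate, clocks):
    """Return the output pulse train (list of 0/1) for `clocks` input
    clocks, using a phase-accumulator approximation of the BRM."""
    acc, out = 0, []
    for _ in range(clocks):
        acc += rate
        if acc >= 16:
            acc -= 16
            out.append(1)
        else:
            out.append(0)
    return out
```

Running three of these against a common clock, each with its own rate word, gives the interlocking polyrhythms the module is named for.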
3-bit Digitizer – An incoming analog voltage is digitized and quantized in real time at 3-bit resolution. Two quad op-amps (TL074) are used as comparators connected to a resistor network which sets 8 thresholds at equal intervals from 0 V to the supply voltage. An 8-bit priority encoder (CD4532) converts the comparator outputs to 3 bits. The edge connector and header row provide connections for ground, power (3-18V), the 3-bit output in order from LSB to MSB, enable output, gate select output, and the 8 outputs of the comparators.
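The digitizer’s transfer function amounts to counting how many of the 8 equally spaced thresholds the input voltage has reached. A sketch of that idealized behavior (ignoring comparator offsets and hysteresis; the function name is mine):

```python
# Idealized 3-bit quantizer: thresholds sit at k * v_supply / 8, and the
# priority encoder reports the highest threshold crossed as a code 0-7.

def digitize_3bit(v_in, v_supply):
    """Return the 3-bit code (0-7) for an input voltage."""
    code = int(8 * v_in / v_supply)
    return max(0, min(7, code))   # clamp to the valid 3-bit range
```

With a 5 V supply, for example, a 2.5 V input lands exactly in the middle of the code range.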
A whole new collection of blanket designs is now available!
Binary Blankets is a series of blankets aimed at making visible the hidden data structures that give shape to everyday life. The materiality of our digital age is composed of binary data encoded on electronic devices and transmitted through the airwaves on invisible frequencies of light. As an alternative to the screen, Binary Blankets literally gives you a way to experience the fabric of this otherwise invisible and intangible side of our digital world.
This initial collection of 18 designs features raw binary data sourced from a handful of files and programs such as Microsoft Word, iTunes, Google Chrome, and Mac OS X.
Nicolas Maigret // Daniel Neumann // Melissa F. Clarke & Nat Roe // Phillip Stearns
Hosted by Spectrum
121 Ludlow, Second Floor
New York, New York 10002
Title: SYSTEM INTROSPECTION [2002-2013]
System Introspection can be envisaged as an observation of the machine by itself, proposing a physical experience of numeric data and its different languages and contents. The live version is based on a concrete exploration of the binary code on a local hard drive and its intrinsic qualities (structure, logic, rhythm, redundancy, compression), immediately returned by the computer in the form of visual and sound flows.
Nicolas Maigret has been creating digital and sound art since 2001. In his works, the internal characteristics of media are revealed through their errors, dysfunctions, borderlines or failure thresholds, from which he develops sensory and immersive audiovisual experiences. After studying intermedia arts, he joined the Locus Sonus laboratory in Nice, dedicated to networked art research. He has taught at the Fine Arts School of Bordeaux and is presently involved in an artist-run space named Plateforme in Paris. He also co-founded the Art of Failure collective in 2006.
His works have been presented in various exhibitions and venues such as File (Sao Paulo, BR) – Encountering Data (New York, USA) – Upgrade! (Chicago, USA) – Gli.tc/h (Birmingham, UK) – Gaite Lyrique (Paris, FR) – Leeds Film Festival (UK) – Le Zoo (Genève, CH) – LEAP (Berlin, DE) – DeOrigenBélico, (Caracas, VE) – Sonica (Ljubljana, SI) – Artivistic (Montreal, CA) – ESG (Kosice, SK) – Cimatics (Brussels, BE)
Title: A Corner As A Field
Multi-channel live improvisation using electronic sounds that are run through diverse re-recording processes to create concrete spatial acoustic fields.
Daniel Neumann is a Brooklyn-based sound artist, organizer and audio engineer, originally from Leipzig, Germany.
In his artistic practice he uses conceptual and mostly collaborative strategies to explore sound material and its modulation through space and media. Pieces are developed in different formats and variations as ongoing processes, which can result in concerts, installations, radio shows and other forms. The leitmotif for these processes is the development of a poetry of the fragile and a skepticism towards demonstrations of power; impermanence is understood as temporal fragility. For his collaborative practice he coined the terms ‘modular collaboration’ and ‘sonic exchange’, which describe non-hierarchical and decentralized forms of organization where collaborators interact as equals. Context and site are important parameters and are often used as a starting point.
Melissa F. Clarke & Nat Roe: Private Language
Melissa F. Clarke is an interdisciplinary artist whose work employs data and generative self-programmed compositional environments.
Nat Roe has used his weekly late-night radio program with WFMU since 2008 as a platform for sound-collages that explore a nuanced relationship with popular culture.
Private Language appropriates, collages and processes radio signals using digital and analog means, as well as exploiting sonic qualities inherent to the playback device. Private Language’s arsenal contains inflections of Brion Gysin’s cutups, John Cage’s chance composition, DJ Screw’s codeine tinted outlook, and the chaotic anxiety of no-wave. Visually, kaleidoscopic geometrical solids frame diaristic encounters with culture as surreal, uncanny and sometimes alienating. Video footage is sourced from Youtube using a Max patch which employs similarity algorithms to cycle through visuals in a manner that mirrors the chance-based subject matter of flipping through a radio dial; the software also includes custom algorithms that trigger visuals in response to sound.
Proto-Chiptunes: the hypothetical ancestor of modern-day 8-bit video game music, known as “Chiptunes”. Before there were Arduinos, video game systems, or even microchips capable of producing sound, there was only binary logic. But in order to find the roots of this ancient music, we must go back further, back before the time of logic, far back into the pre-history of electronics. From the primordial ooze of analog circuits arose the first digital logic circuits. Made only from transistors, resistors and diodes, they clawed their way out of the random void to assert their unambiguous binary dominion over the whole world of electronics. When the digital circuits had established themselves as supreme rulers of the electronic world, and mastered the use of fire, they developed a style of music called “0 01 0110 10010011 0101 01 1”, known today as “Proto-Chiptunes”. Now the CMOS 4000 Series Digital Logic Family re-imagines this primitive electronic music under the careful and patient direction of Phillip Stearns.
Bio: The Brooklyn based artist is responsible for the Year of the Glitch and Glitch Textiles projects. His work as an artist involves a lot of tinkering with electronics: taking things apart, short circuiting devices and building things from scratch. A passion for noise is informed by a love of physics. He’s a freelance photographer and audio technician on the side and teaches electronics at 3rd Ward.
I’m teaching my techniques and tricks for Visual Glitch Art in the coming weeks. See you in class!
In the age of the ubiquitous internet, 24 hours is already a bit late to be posting a response to anything, but I had to be sure. There is rarely any time for reflection, and much of the content of our electronic media is reflex. These thoughts concern a recent opening and panel discussion at Eyebeam (the art and technology center in Chelsea, NY) on the topic of Augmented Reality.
At about 6:30 I arrived at the panel, at which point the moderator, Laetitia Wolff, was finishing her introductory remarks. I caught enough to hear her point out that there are as many video cameras on Earth as there are neurons in the human brain, connecting with the idea that this constitutes, or could constitute, a form of artificial intelligence equivalent to a human brain. Though intriguing, it’s admittedly a bit disturbing to dream of the possibilities of an intelligence formed from the interconnection of electronic eyes. With the announcement that the handsomely designed Google Glass will be made available this year (2013), one can’t help but wonder what it all could mean in the context of the emergence of a potentially new medium.
Augmented Reality (AR) serves to visually enhance objects, spaces or people with virtual content. It has the potential to dramatically change the relationship between the physical and digital worlds. (Henchoz)
The above excerpt from the “Is Augmented Reality the Next Medium” curatorial statement, written by Nicolas Henchoz, provides a bit of context. A good part of the discussion was occupied by mentions of graphic overlays (projections and heads-up displays), physical objects with embedded information, and our mobile devices providing windows into new content. Enough material to start any dreamer’s head spinning.
But it wasn’t that my imagination ran wild with possibilities that made it hard for me to follow the particulars of the conversation. I was left wanting deeper insights, thirsty for critical dialog. I found myself asking questions which were never fully addressed in the discussion. A moment of relief came when Christiane Paul cautioned us to question this desire for further mediation that AR entails, but there was no real follow-up to this call to investigate what is staged, and to unmask theatricality.
It would seem that perhaps the most obvious question to address would be our ideas of reality and its relationship with the virtual. A mention of Umberto Eco’s essay, “Travels in Hyperreality”, provided some insight. Though not directly quoted by any of the panelists, here’s the paragraph referenced:
Constructing a full-scale model of the Oval Office (using the same materials, the same colors, but with everything obviously more polished, shinier, protected against deterioration) means that for historical information to be absorbed, it has to assume the aspect of a reincarnation. To speak of things that one wants to connote as real, these things must seem real. The “completely real” becomes identified with the “completely fake.” Absolute unreality is offered as real presence. The aim of the reconstructed Oval Office is to supply a “sign” that will then be forgotten as such: The sign aims to be the thing, to abolish the distinction of the reference, the mechanism of replacement. Not the image of the thing, but its plaster cast. Its double, in other words. (Eco)
It was pointed out that this instance of the Oval Office model served to illustrate a possible mode by which a simulation or replica functions. The reproduction, in its pursuit of realism, becomes hyperreal, standing in for the thing itself. Well beyond evoking a connection to the real, this form of realistic simulation becomes its own reality, and as such operates in its own unique way as a modifier of the potential experience of the real thing. Despite this, however, further insight into what additional theoretical framework we have for approaching the notions of the Real, reality, and the virtual failed to surface.
In building Augmented Reality, there is a dynamic between the physical object or environment, its simulation through electronic media, the mediated experience of an overlay of virtual content, and the ways in which the experience of one spills over into the other. Perhaps I yearned for some connection to the Lacanian theory of the Mirror Stage, but without a clear idea of how we formulate our notion of what we take to be the Real, and of the operation of the virtual within it, we stand little chance of understanding how this new reality will be used to control or influence perception. Granted, not every new technology is evil, but none are without unintended consequences. There will be influence of some kind or another, and we have to know how to look for it.
It’s incredible to imagine just how many computing devices are in the world, currently connected by various wireless networks, and how many of those have cameras of some sort. But taken as a whole, can they possibly exhibit a human equivalent of intelligence? Are we able to formulate criteria by which we can assess the level of intelligence such a system might have? How does this compare to the intelligence of a single human, a small group, or the entire population?
When taken as a whole, the human species may be hardly more intelligent than slime mold. As we currently understand it, intelligence comes from the connectivity between elements and the plasticity of those connections. It’s not so much the structure itself, but the formation and revision of particular configurations. Sadly, the point missed by the panel is that our digitally mediated environment must be programmed, and until it can program itself, we must do it. The information we can put into it will be limited by what we ourselves can input and by the sophistication of the algorithms we write to automate that process. Here are clear sources of structural bias and issues of access. Beyond that there are also issues of interface and content filtering.
Jonathan Lee of Google UXA rightly lists inputs and outputs as chief technical challenges faced by designers of user interface (UI) frameworks for Augmented Reality. There is no shortage of sensors today, and haptic interfaces allow for a wide variety of user control over content. The problem, it seems, is that there are almost too many inputs. The question then becomes a matter of managing them, of extracting information from the input streams and storing it in a way that enhances virtual content and a user’s experience of navigating that content. Content- and context-aware algorithms solve this problem but raise other issues. Our experience of the internet is already highly mediated by content-filtering algorithms. It can almost be argued that serendipity has been all but filtered out (they should make an app for that!) as individuals are catered to based on previously gathered information as interpreted by predictive algorithms (call for submissions: creative predictive algorithms). On the broader issue of adaptive algorithms and similar forms of artificial intelligence, one has to ask: what are the models for such algorithms? They must be programmed at some point, based upon some body of data. How do we select or craft the template? Is a possible consequence of further refining the intelligence of our algorithms a normative model for intelligence?
Perhaps it might seem as though I’ve come unhinged, but these questions become important when we begin to approach the task of embedding objects with information. What information or virtual content do we embed in these objects? Who has the ability to do the embedding? What are the possible system architectures that would allow the experience of an environment to actually be enhanced? What is the framework for approaching this issue of enhancement?
While you consider these, here’s some more of the curatorial statement:
The prospects of augmented reality are linked to a fundamental question: What makes the value of an object, its identity, our relationship with it? The answer lies in the physical properties of the object, but also in its immaterial qualities, such as the story it evokes, the references with which it is connected, the questions it brings up. For a long time, physical reality and immaterial values expressed themselves separately. But with digital technology an object can express its story, reveal information, interact with its context and users in real time. (Henchoz)
It’s important not to mistake the map for the terrain. Physical objects are already vessels of their own history, as they are products of a particular sequence of events. Those events, though external and broad in scope, can be decoded, traced and ultimately placed within a larger context of processes (not only physical ones but those linked to various cultural practices). With digital technology, an object will not express its own story, but always that of someone else. To which we must ask: why that particular story? How did it find its embodiment as embedded data in that particular object? Is it a special case? Why does this story come to us and not others? If we open the system up for anyone to leave their story with any object, what do we do with hundreds of unique stories each told through a single object? What of a world filled with such objects? How do we navigate this terrain of embedded content? The information revealed by an object through media will, on the surface, only be what is placed there by the one privileged with the ability to do so. The nature of the interactions will be limited to those programmed by the similarly privileged, and our awareness will be equally limited.
The pieces in the exhibition did little to elaborate these deeper questions, or to complicate the view of reality that underwrites the particular form of Augmented Reality put forward by Nicolas Henchoz. The lack of imagination here comes off as almost tongue in cheek. A microphone is placed before a drum kit rigged with mallets and drum sticks attached to actuators; by vocalizing into the microphone, guests can use their voices to control the kit. Mediation is dealt with as a translation, or mapping, of one kind of sound through a chain of electronic and mechanical processes into the production of another. Elsewhere in the exhibition space there is a flat, shallow museum display case without protective glass, in which various postcards, photos, notes, and objects have been placed. iPads are locked and tethered to the case, provided for guests to view the objects in the display through the camera in order to reveal additional virtual content in the form of animations or video, suggesting a sort of lived experience beyond the inert relics. In all there were seven pieces in the exhibition, of which two were not working after the panel discussion. Despite the technological foundations of the works presented, the whole exhibition space is filled with wide sheets of paper, gently curved around large cardboard tubes, evoking the sensation one might have inhabiting a paper factory or newspaper printing facility.
There are two major paradigms within average digital, electronic and media art: “the funny mirror” and “demo mode”. The exhibition explored variations of these two paradigms to great effect, but with little affect. It’s still unclear whether this was all to be taken seriously, or whether the whole panel discussion and exhibition was actually an intensely subtle critique of current developments in AR. The partners and funders list for the whole affair doesn’t do much to shed light on the matter, except to indicate that there is a group of respectable people taking this all very seriously, whether as an emerging technology with radical potential as a profoundly transformative medium or as a nuanced critique thereof.
For the next 3 days, Glitch Textiles are available on Fab.com at discounts of over 20%!
“Electronic media artist Phillip Stearns translates technical malfunctions onto blankets. Available through his Brooklyn-based venture, Glitch Textiles, the cotton creations exhibit psychedelic patterns made by corrupting the hardware of a digital camera, resulting in imagery that’s beautiful in its flaws.” -Fab.com
Spread the word!
Studio Visit and Interview on Creators Project
Studio Visit Interview on Periscope
Feature on Rhizome
Year of the Glitch is now accepting submissions
Seeking supporters for a chance to glitch Times Square
Retinal Pigment Epithelium…
Visit my portfolio and click on the support button. You can also like your favorite images. Don’t forget to share with your friends!
3rd Ward, an awesome multi-disciplinary workspace and education center where I teach circuit classes, is hosting their annual Holiday Craft Fair.
12:00 – 6:00pm
195 Morgan Ave Brooklyn, NY 11237
I’ll have a selection of over 20 Glitch Blankets and will be offering prints as well as 5% discounts for members of 3rd Ward and followers of Year of the Glitch (if you’re both, take an additional 5% off the first 5% discount!). You must have proof of your 3rd Ward membership, or the Year of the Glitch code to receive either discount.
Not a follower of Year of the Glitch? Go here!
9 New Designs for the Jacquard Woven Glitch Blankets now available!
These and all other Glitch Textiles are on sale for $200 each +shipping.
Order by the end of the day on Friday, November 30th to receive yours before December 25th.
I’m in the process of updating each and every page, but ALL Glitch Textiles are available on sale for $200 each plus $15 flat rate shipping (FedEx Ground). Just click on the design you like, then click the “Buy Now” button to place your order via PayPal.
My head has been swimming. Perhaps I should have continued studying theoretical physics when I was younger, but nothing can change that now. I’ve always been fascinated by the sciences of the very large and the very small. Developing theories about the nature of space and time has been a pastime of mine since I could grasp such concepts, and in this my cousin was my chief partner in scientific blasphemy. Forgive me for lacking rigor and academic references; I simply wish to bounce ideas off the aether and see where I’ve gone terribly wrong, and also what might be of merit.
Today, the LHC at CERN is carefully winding up and pitching beams of highly energetic particles (more accurately wavicles, perhaps better described as multi-dimensional folds) at one another, hoping that in the interactions, we can better understand the stuff that makes us what we are. The more elusive Why will probably forever lie beyond the pale of comprehension, and yet we strive to corner it with data, formulate hypotheses, develop and test theories, add to the Katamari Damacy ball of equations used to explain what we find, and then reduce them to elegant truisms to test in further experiments. What has amazed me since youth, is that no matter how much we know, and continue to learn, there seems to be a growing number of questions and increasingly perplexing problems concerning our current understanding of things and the gaps in our ability to explain what we think we know.
The expansion of space has puzzled me. We have a clear picture from observational data that the universe is not only expanding, but that the rate of expansion is accelerating. When I was young enough to grasp this concept of expanding space, my first reaction was to ask where the center was and what lay at the edge. We know now that there is essentially no center; the best idea we have is that all of the universe came into being at once, in a flash, commonly (and perhaps mistakenly) referred to as the singularity in a “Big Bang”. There is no edge and therefore no center, though perhaps this picture is wrong, misleading and confusing. To stick with what we understand more clearly: we take the speed of light to be constant (and from what we have observed so far, this appears to be the case). Therefore, the more distant an object, the longer the light emitted from it has taken to reach us. The result is that the further into deep space we peer through our telescopes, the further into the past we see. What we have found is that the light from these distant objects is shifted in its spectrum. Light emitted by a plasma of elements contains a signature spectrum of frequency gaps and bands (which correspond to the energy jumps made by electrons as they absorb and emit energy in the form of photons). What Slipher, Hubble and other scientists observed is that the light from distant objects contains the spectra of familiar elements, but the gaps and peaks in intensity are all shifted towards the red end of the spectrum, the result of the Doppler effect, leading to the conclusion that most everything in space appears to be moving away from us (though some objects, like the Andromeda galaxy, are blue-shifted, and thus moving towards us). Another startling observation is that the farther the object, the greater the red-shift.
One explanation is that distant objects are not simply receding from us faster; the expansion of the space between us and them also stretches the light on its way here, which can account for that shift.
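To make the redshift-to-distance reasoning above concrete, here is a minimal sketch (my own illustration, not from any cited source) of how a measured spectral shift translates into a recession velocity via the Doppler approximation, and into a distance via Hubble’s law. The wavelength figures and the Hubble constant are round illustrative values.

```python
C = 299_792.458      # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s per megaparsec (approximate)

def redshift(lambda_observed, lambda_emitted):
    """Fractional shift z of a known spectral line."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

def recession_velocity(z):
    """Non-relativistic Doppler approximation v ~ c*z, valid for small z."""
    return C * z

def hubble_distance(v):
    """Hubble's law: distance (in Mpc) grows linearly with velocity."""
    return v / H0

# Hydrogen-alpha line emitted at 656.3 nm, observed stretched to 662.9 nm:
z = redshift(662.9, 656.3)      # ~0.01
v = recession_velocity(z)       # ~3000 km/s
d = hubble_distance(v)          # ~43 Mpc
```

The linearity of `hubble_distance` is exactly the “further the object, the greater the redshift” observation: double the distance, double the recession velocity, double the shift.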
Contemporary cosmological models take the Universe to be not only expanding, but expanding at an accelerating rate. That is, space itself is expanding, everywhere all at once, with the effect that objects are pushed increasingly far apart, and the rate at which they are being distanced is accelerating. Why? One possible explanation is the idea of “negative pressure”. That is, if gravity represents a positive pressure, drawing all matter together, then Dark Energy can be thought of as the source of a negative pressure, which fuels this increasing expansion of space. Is it anti-gravity? Not quite (a full explanation exceeds my own grasp of the concept), but regardless, we can still ask, “From where does this pressure come?” If there is no meaningful “outside” to the universe from which to create a negative pressure, how can it be created internally?
To be clear, Dark Matter and Dark Energy have little to do with one another in this line of thought. In my understanding of things, Dark Energy is simply a way to account for the energy budget of the Universe, to explain what we can (or can’t) observe. Dark Matter is the placeholder for all those gravitational forces we observe in the structures of our universe which cannot be accounted for by the observed quantities of luminous (radiative) matter.
With my very limited understanding of the math behind these deeper theories of how things are, and of the ways in which we presently explain what we can observe, I’d like to put forward some very modest ideas and see if someone can point me to a deeper understanding of the current state of theory and of my own ignorant blunders.
I’d like to posit that space, empty space, contains probabilities. Probabilities of what? Of being different, nothing more. Taken together, these probabilities total 1. The probability then exists, with 100% certainty, that it will change. How and why? We do not understand enough to conjecture, let alone to say what that change may entail, but change comes with 100% certainty. Here time cannot have any meaning. Either the probability has manifested, or it has yet to manifest (the granularity of time, perhaps?).
If space is taken first to be a region with a probability of 0 that anything different could happen, then that region of space effectively does not exist. Who cares what it is, if it cannot be anything otherwise? There is no way of knowing even whether it is, since it is completely inert. In a way, we have found a way to define absolutely “nothing”. If a single quantum of space exists, it must be because the probability of its existence, or of its being something other than what it was, must have been other than 0. If we imagine this singular region as being different, and assign probabilities to each way that it could be different, we get a range of possible states coexisting simultaneously within one single quantum of space. That is, until a probability manifests as an actuality, and in this case, they come into existence.
If the probabilities produce the existence of some state, and if there were some underlying symmetry or structure governing the possible states, in which the existence of one necessitates the existence of a counterpart, then there is a distinct possibility that a single quantum of space can exist in actual multiplicities, that is, divide. I realize how sloppy this all is, but from it, I understand time to be an artifact of these probabilities manifesting in relation to one another. That is equally sloppily stated, but I think it gets the point across: there is no meaningful time-frame without relationship to something else. Space itself is nothing more than the state of a set of probabilities, and time is a by-product of probabilities manifesting themselves. The granularity of space is then the minimum distance at which one state can manifest a different state, a probability of relation to something other than itself, as dictated by the underlying structure of those probabilities. Can we probe these shapes? In what ways do the Standard Model and Quantum Field Theory allow for this way of thinking? We currently have a minimum distance, the Planck length, and know the energy required to probe at that distance. What happens to time and energy at these scales? Can they be thought of in terms of the evolution of probabilities, and probabilities as the process by which space expands?
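The picture sketched above can be caricatured in code. This is a playful toy model, entirely my own illustration: a “quantum of space” holds a distribution over possible states that sums to 1, so some state always manifests; “time” here is nothing but the count of manifestation events, with no external clock. The state names and weights are arbitrary assumptions.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

# A distribution over the ways this quantum of space "could be different".
# Because the weights total 1, change (some outcome) is certain.
states = ["unchanged", "divided", "excited"]
probabilities = [0.90, 0.05, 0.05]

def manifest(states, probabilities):
    """Collapse the coexisting possibilities into one actual state."""
    return random.choices(states, weights=probabilities, k=1)[0]

# "Time" is just the ordered record of manifestations, one per event.
history = [manifest(states, probabilities) for _ in range(1000)]
```

Nothing physical is claimed here; the point is only that in such a model, duration has no meaning apart from the sequence of manifestations, echoing the idea that time is a by-product rather than a backdrop.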
To zoom out a bit: if we assume that there is some mechanism causing space itself to expand, and that it is internal, or inherent to space itself, does that account for the relativistic effects on light due to gravity and the expansion of space? Could the speed of light be explained by the same mechanism behind the expansion of space? Does space expand equally everywhere, all at once?
If the mechanism for the expansion of space can be described mathematically as the existence of probabilities that something will exist where there wasn’t something before, can this be reconciled with the conservation of energy? Does it require energy for space to expand by these means? What would the model for this look like? Do we have a problem of an exponential demand on energy, or can this be resolved by other features of space-time? We already know that it is probable for a particle/anti-particle pair to spring into existence, and that this possibility is latent within any region of space. I suppose it’s this idea of probabilities at very small scales, and the idea of quantum foam, which have led me down this line of thinking. That, and the idea of asymptotic freedom by way of Frank Wilczek.
Is there anything here that sounds right? Where did I go wrong? Perhaps probabilities are problematic because their language exists within the framework of mathematics, but it is through mathematics that we have put into language this picture of the world. Is probability a deep enough technology, or concept, to probe the foundation of all there is? It can certainly speak to what isn’t, and could never be.
“The Lightness of Being” by Frank Wilczek
“The Quantum World: Quantum Physics for Everyone” by Kenneth W. Ford
“New Theories of Everything” by John D. Barrow
“On Space and Time” by Various Contributors, Edited by Shahn Majid
Loads of new Glitch Textiles designs just arrived as machine knit Glitch Blankets. These and all other Glitch Textiles are available for purchase again, just in time for winter! $300 for 40×60″ knit blankets, $400 for 53×71″ Jacquard woven blankets, and $250 for 36×24″ wall hangings. Simply head over to the Glitch Textiles project page, click on the design you wish to purchase, and click on the “Buy Now” link.
I’m very excited to announce that a fresh batch of all new machine knit Glitch Blankets for the Glitch Textiles project just arrived! I’ve developed a dozen new designs and am in the process of photographing them all. For the time being, enjoy the slideshow featuring photos of a handful of the new blanket designs offered as Kickstarter rewards (link).
These will become available for purchase starting November 1, 2012. Stay Tuned!
Listening to the Ocean on a Shore of Gypsum Sand is a collaborative project between Gene Kogan, Phillip Stearns, and Dan Tesene. Seashells are 3D printed from algorithmically generated forms for the sole purpose of listening to the “ocean”. The project questions the role of experience in the mediation of the virtual world to the real world, and vice versa.
For those of us who have had the experience of listening to the sound of the ocean in actual seashells, it is a question of lived experience shaping an approach, not only to the object (or world) at hand, but to how it is perceived and acted upon. Are we to trust these shells? Do we seek out natural shells for comparison?
For those whose first experience of listening to the “ocean” comes through the digitally produced shell, the question becomes one of how a first encounter with a virtualized, simulated reality shapes the experience of lived space. This virtual shell is all I know of the real, until I encounter those found in nature. And when I see a natural shell, what then is my experience of it? More broadly, how does mediated reality form our preconceptions of the world?
For some, these questions seem obvious—we may even have convinced ourselves that we have this all figured out. We are aware of the possibility that the virtual world and real world are two interacting entities, distinct ideas that maintain their individuality despite their mutual influence on one another. There is, however, a possibility that this distinction is fading with younger generations, as technologically mediated experiences permeate childhood. I wonder about the effect of this as they grow into the world.
This project will be on view at Soundwalk 2012, a sound art festival in Long Beach, CA on September 1st 6-10pm.