The network is made of cheap plastic, brief protection for young waves as they’re first emitted. Boxes whose only sign of life is a blinking LED, hiding the noise, the speed, the data they channel.
Convolution reverb is a method for recording the sonic character of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it were recorded in a cathedral, or a flute in a cave, and so on.
Convolution, as a process, is a mathematical operation that combines two functions (or signals): one is slid across the other, and the overlapping values are multiplied and summed at each offset.
I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’) I recorded in the Hockney Gallery at RCA. Currently, I’m attempting to pin down the best way to do this. The basic function is simple – essentially multiplying each pixel by every sample in the impulse, offset each time. (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, against the thousands of samples in the audio, is fairly taxing.
The aim is to be able to use it to convolve three-dimensional objects (most likely sampled as point clouds, or the vertices in polygon meshes) with varying spaces, exploring the way that physical location can affect digital objects.
Until then, some iterations:
I used to stare at this place when I was a child and we passed it in the car. Before I knew it was a cement factory, I knew it was abandoned. I always wondered why no one lived there. As I got older, it seemed bleaker.
I never knew there was a Monkey Puzzle tree in there, it seems so much more alive now, the circular courtyard like a garden.
[Image by Richard Chivers, via: Architecture of Doom]
The models in Vignesh’s research are defined by conditions and parameters and therefore have no explicit form. The observations he makes don’t rely on the form of the system or, even, parallels with real world particles but are defined by looking at the effects, the outcomes of each variation.
As much as possible I will mirror this in the installation, keeping the form of the objects as close to their behaviour as possible: they will emit sound, and so should be speakers; they will emit light, and so should be bulbs.
Extending this, the coupling should, perhaps, be explicit. Longer cables signifying more loosely coupled particles.
The Guardian has a story about ‘Optic Nerve’, GCHQ’s operation intercepting and collecting frames from Yahoo webcam feeds. It contains a couple of choice quotes from the agency’s documents. The first, and perhaps most telling, is:
“One of the greatest hindrances to exploiting video data is the fact that the vast majority of videos received have no intelligence value whatsoever, such as pornography, commercials, movie clips and family home movies.”
Bulk collection is, perhaps, leading to wasted effort – and perhaps suggests a counter-strategy: flooding the databases and servers with too much information, in the manner of Hasan Elahi. Putting one’s life online as the ultimate alibi, and excess information as a counter-surveillance measure.
My favourite part of the article, though, is “it noted that current ‘naïve’ pornography detectors assessed the amount of flesh in any given shot, and so attracted lots of false positives by incorrectly tagging shots of people’s faces as pornography.”
The idea of a naïve pornography detector seems hilarious, but it also picks out a further problem with the mass of data collected – it’s not possible (or at least efficient) to trawl it manually so in the absence of truly accurate or intelligent algorithms it’s borderline meaningless. Again this brings to light a method for skirting surveillance – image creation for algorithms, to amplify the expected results. Spoofing data by talking to the processes observing us.
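A toy version of that ‘naive’ detector makes the false-positive problem concrete. The RGB thresholds below are invented for illustration (real skin detection is far more involved), but they show why a face in close-up trips the alarm: it fills the frame with flesh tones just as effectively as anything explicit would.

```python
import numpy as np

def flesh_ratio(frame):
    """Return the fraction of pixels falling in a crude 'skin tone'
    RGB range -- the naive score described in the article."""
    rgb = np.asarray(frame, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return skin.mean()

def is_flagged(frame, threshold=0.3):
    """Flag any frame where 'flesh' pixels exceed the threshold."""
    return flesh_ratio(frame) > threshold

# A frame that is one flat skin-coloured block -- a face filling the
# webcam, say -- gets flagged despite containing nothing explicit.
face_closeup = np.full((48, 48, 3), [200, 140, 110])
print(is_flagged(face_closeup))  # True: a false positive
```

Spoofing such a detector runs in both directions: a frame could be tinted to sit just inside (or just outside) the threshold, which is exactly the kind of ‘talking to the processes observing us’ described above.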
Yes, there were fascinating, expensive pieces of machinery, C. elegans and bacteria, but there was also this very orange bin.