A Shout From the Edge of Satellites

[SDR recording, 2015-08-19 11:15 – FM, 135.964934 MHz, 11.776 kHz bandwidth]

There are many artificial satellites orbiting the Earth: some functional, some long dead, and some that have broken free of their natural lifespan to come back to life and begin transmitting again. Many of these transmit their data (weather images, for example) on the 136–138 MHz band and can be listened to with a cheap USB radio/TV receiver and a software-defined radio application.

I’ve been using these tools just to see what’s in the various parts of the radio spectrum (spoiler: noise, lots of noise) and was hovering just below 136 MHz when I picked up a signal:

This is just one section of it (I recorded around four), each of which begins with a small burst of white noise before playing through a regular square-wave pattern (which, I assume, is used to set/show the data rate) and then a section of what sounds like data. The image at the top of this post shows an attempt to decode it as a weather satellite image with wxtoimg, but this doesn’t seem to follow through – you can see the sections split by the lighter bands (a noisy one, then a regular one).

There’s certainly data there, but working out how to decode it is tricky. I’m going to approach it as if it’s using Manchester encoding to begin with, working from the square wave as the data rate, and see where that takes me.
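As a starting point, here’s a minimal sketch of that approach in Python – the decoding convention and the resync behaviour are my assumptions, not anything confirmed by the signal. The audio is thresholded to a stream of 0/1 levels, the square-wave preamble gives the bit period, and each bit cell is read from the transition at its midpoint:

```python
# Hedged sketch of Manchester decoding. Assumes the thresholded signal is a
# list of 0/1 levels and that a low-to-high transition mid-cell means 1 and
# high-to-low means 0 (the IEEE 802.3 convention; the G.E. Thomas convention
# is simply the inverse).

def manchester_decode(levels, samples_per_bit):
    bits = []
    half = samples_per_bit // 2
    for i in range(0, len(levels) - samples_per_bit + 1, samples_per_bit):
        # Majority vote over each half of the bit cell, to ride out noise.
        first = levels[i:i + half]
        second = levels[i + half:i + samples_per_bit]
        a = 1 if sum(first) * 2 >= len(first) else 0
        b = 1 if sum(second) * 2 >= len(second) else 0
        if (a, b) == (0, 1):
            bits.append(1)   # low -> high: a one
        elif (a, b) == (1, 0):
            bits.append(0)   # high -> low: a zero
        # No transition means a clock slip or noise; a real decoder would
        # resynchronise here rather than silently drop the cell.
    return bits
```

If the square wave really does mark the data rate, `samples_per_bit` falls straight out of it: the sample rate divided by the square wave’s frequency.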

Network Seance

An audio-visual network seance, with Francesco and me channelling the Wi-Fi spirits through the Network Ensemble, possessing keyboards, amps, a glockenspiel and even visual software. To begin, you hear the raw pulses the ensemble outputs, looping through the different categories of data. Then, leaving one in, we bring in the solenoids – playing the glockenspiel – and the MIDI keyboard, controlled by various ports.

Footage from the second live performance with the Network Ensemble, on 01/07/15 (part of An Evening With Gekiyasu and Friends).

More about the Network Ensemble

Shake Visualisation

The vid.stab video stabilisation library takes two passes to stabilise a video. After the first pass is complete, it leaves you with a file containing the frame-by-frame transformations that make up the camera shake. I wrote a simple parser for the file and rendered the individual transforms, as well as an overall global value, over the video’s frames.
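For reference, a hedged sketch of such a parser in Python. It assumes the older plain-text .trf format, where each line reads something like `Frame 12 (Trans 0.53 -1.04 0.002 ...)` with a per-frame x/y shift and rotation; newer vid.stab versions serialise local motion vectors instead, so the regex would need adjusting to your file.

```python
import re

# Matches lines of the assumed form: Frame N (Trans x y alpha ...)
LINE = re.compile(
    r"Frame\s+(\d+)\s+\(Trans\s+([-\d.eE]+)\s+([-\d.eE]+)\s+([-\d.eE]+)"
)

def parse_transforms(path):
    """Return (frame, dx, dy, rotation) tuples from a first-pass file."""
    frames = []
    with open(path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                n, x, y, alpha = m.groups()
                frames.append((int(n), float(x), float(y), float(alpha)))
    return frames

def global_magnitude(frames):
    """One crude overall value per frame: the length of the shake vector."""
    return [(n, (x * x + y * y) ** 0.5) for n, x, y, _ in frames]
```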

In a crowded scene, such as this found first-person footage of someone walking down a busy street, the transforms are chaotic, interfered with by the movement of people within the scene. But the moments when someone enters and moves across the field of view produce quite beautiful, harmonic interludes, with many lines moving in the same direction, and when the camera is swept around by a head turn (around 1:00), the whole field of vectors angles in unison.

The purpose of this tool is to understand how computers and algorithms might see motion in videos, towards breaking down how they might do biometric identification from the characteristic gait of the camera ‘wearer’.

Audio Convolution (Processing Test)


Using Processing and its color datatype (an integer, ARGB-ordered, with 8 bits per channel) for convolution creates a much noisier, more colourful output – but with too little resemblance to the source image to be useful.

Amazing patterns though.
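Roughly why it gets so noisy, sketched in Python rather than Processing (the helper names are mine): arithmetic on the whole packed 32-bit word lets one channel’s overflow carry into its neighbour, which is exactly where the false colour comes from.

```python
# Convolving packed ARGB integers directly, as Processing's color type
# tempts you to do. Each multiply scales all four channels at once, and
# carries cross the 8-bit channel boundaries - hence the noise.

def convolve_packed(pixels, kernel):
    out = []
    k = len(kernel)
    for i in range(len(pixels) - k + 1):
        acc = 0
        for j, w in enumerate(kernel):
            acc += int(pixels[i + j] * w)   # channels bleed together here
        out.append(acc & 0xFFFFFFFF)        # wrap back into 32 bits
    return out

# The 'correct' version unpacks first, convolves each channel separately,
# clamps to 0-255, and repacks:
def unpack_argb(c):
    return (c >> 24) & 0xFF, (c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF

def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b
```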

Audio Convolution → Space (tests)

[Image: a face run through the impulse response]

Convolution reverb is a method for recording the sonic nature of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it was recorded in a cathedral, or a flute in a cave and so on.

Convolution, as a process, is a mathematical operation that combines two functions (or signals), sliding one (reversed) across the other and summing the products at each offset.
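In the discrete case, with $x$ the dry signal and $h$ the impulse response, the output at each point is the sum of the input scaled by a reversed, shifted copy of the impulse:

$$(x * h)[n] = \sum_{k} x[k]\, h[n-k]$$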

I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’) I recorded in the Hockney Gallery at the RCA. Currently, I’m attempting to pin down the best way to do this. The basic function is simple – essentially multiplying each pixel by every sample in the impulse, offset each time (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, with the thousands of samples in the audio, is fairly taxing.
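Here’s a minimal sketch of that basic function, assuming the image has already been flattened to a one-dimensional list of sample values:

```python
# Naive O(N * M) convolution: every pixel multiplied by every impulse
# sample, offset each time - which is why full images are so taxing.

def convolve(signal, impulse):
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out
```

An FFT-based convolution (scipy.signal.fftconvolve, for instance) produces the same result in O(N log N), which brings image-sized inputs back into reach.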

The aim is to be able to use it to convolve three-dimensional objects (most likely sampled as point clouds, or the vertices in polygon meshes) with varying spaces, exploring the way that physical location can affect digital objects.

Until then, some iterations:

[Images: four iterations of the convolved output]

Charles Csuri – Statistics as an Interactive Art Object


From 1975, an essay by Charles Csuri (computer artist, famous for his ‘Random War’ works) on the use of statistics in art. He talks about the impact of computational technologies on art in two ways. Interaction in art is shown to transform the viewer into an active participant in the works, allowing for a shift in their perception:

A case can be made for the idea that art can alter perception, and that since perception is an active organizing process rather than a passive retention-of-image causation, only by actively participating with the art object can one perceive it—and thus, in perceiving it, change one’s reality structure

He uses the example of the AID (Automatic Interaction Detector) program from 1963 to show how the user can affect the view of data, moving it in three dimensions and altering it over time – a precursor to many of today’s visualisation tools.

Csuri discusses the impact of information on art too, expressing many of the arguments for the use of ‘Big Data’ that are put forth today, namely that “we have developed an enormous capacity to create large data-bases and programs that print out mountains of statistical information. While this capacity is a phenomenal one, we generally have difficulty in knowing how to interpret such data”.

Beyond this, though, he elaborates on the potential of this space for artists in a way that is rarely done in the current fervour for representation:

Rather than looking to the visual form or the external appearance of reality, the artist can now deal directly with content. It is a new conceptual landscape with its mountains, valleys, flat spaces, dark and light with gradations of texture and color. With computers, the artist can look at statistics representing real-world data about every facet of society—its problems reflecting tragic, comic and even surrealistic viewpoints. The artist has opportunities to express his perceptions of reality in a new way.

For Csuri, data and statistics are a new, exciting space for artistic expression, a way of expanding and modulating the artist’s perception and expression, a tool to augment, not merely represent, reality.

Read the essay here: http://www.atariarchives.org/artist/sec25.php

This Weekend: Computational Rube Goldberg Transcoder

I’ll be running a workshop with Francesco this Saturday (25 Jan), 1–3pm, in the Work in Progress show.

Part workshop and part performance, this is an exercise in creating (and disrupting) a sensor/signal loop.

Drawing from the ideas of the feedback loop and the Rube Goldberg machine, we will be transcoding data from one platform to another, in a journey from digital signal to physical output and vice versa. There is no beginning or end, but rather different platforms through which data can be input in the form of sound, colour, materials, lights, physical movements and so forth…

Anybody can and should intervene at any point to disrupt the transcoding of the signal and foster new serendipitous outcomes. Feel free to bring images, photos, instruments or just yourself.

Data Visualisation


Two slides from Tom Armitage’s talk, ‘Spreadsheets and Weathervanes’, at the Open Data Institute (thanks Riah).