Shake Visualisation

The vid.stab video stabilisation library takes two passes to stabilise a video. After the first pass is complete, it leaves you with a file containing the frame-by-frame transformations that make up the camera shake. I wrote a simple parser for the file and rendered the individual transforms, along with an overall global value, across the video’s frames.
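As a rough illustration of the parsing step, here is a small Python sketch (not the original code) that pulls the per-frame local motion vectors out of a vid.stab transforms file and averages them into a crude global value. It assumes the plain-text format where each frame line lists its local motions as (LM dx dy …) tuples; the exact field order and file layout vary between vid.stab versions, so the filename and field positions here are assumptions.

```python
import re
from collections import defaultdict

# Hypothetical sketch: collect, per frame, the (dx, dy) of every local motion
# from a vid.stab transforms file, assuming lines roughly like:
#   Frame 12 (List 34 [(LM 3.0 -1.5 ...),(LM ...), ...])
# The first two numbers of each LM tuple are assumed to be the x/y shift.

FRAME_RE = re.compile(r"Frame\s+(\d+)")
LM_RE = re.compile(r"\(LM\s+([^)]+)\)")

def parse_transforms(path):
    motions = defaultdict(list)            # frame -> [(dx, dy), ...]
    with open(path) as f:
        for line in f:
            frame_match = FRAME_RE.search(line)
            if not frame_match:
                continue                    # skip header / comment lines
            frame = int(frame_match.group(1))
            for lm in LM_RE.findall(line):
                values = [float(v) for v in lm.split()]
                if len(values) >= 2:
                    motions[frame].append((values[0], values[1]))
    return motions

def global_motion(motions):
    # crude per-frame "global" value: mean of the local x/y shifts
    out = {}
    for frame, vectors in motions.items():
        if vectors:
            xs, ys = zip(*vectors)
            out[frame] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return out

if __name__ == "__main__":
    m = parse_transforms("transforms.trf")  # assumed filename
    print(global_motion(m))
```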

In a crowded scene, such as this found first-person footage of someone walking down a busy street, the transforms are chaotic, interfered with by the movement of people within the scene. But the moments when someone enters and moves across the field of view produce quite beautiful, harmonic interludes, with many lines moving in the same direction, and when the camera is moved in a sweeping fashion (around 1:00) by a head turn, the whole field of vectors angles in unison.

The purpose of this tool is to understand how computers and algorithms might see motion in videos, towards breaking down how they might perform biometric identification from the characteristic gait of the camera ‘wearer’.

Audio Convolution (Processing Test)

Screen Shot 2014-08-25 at 22.52.22

Using Processing and its color datatype (a 32-bit integer, ARGB ordered, with 8 bits per channel) for convolution. This creates a much noisier, more colourful output – but with too little resemblance to the source image to be useful.

Amazing patterns though.
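For a sense of why the output gets so noisy: each channel only occupies 8 bits of the packed integer, so arithmetic done on the raw color values spills across channel boundaries. The Python snippet below is an illustration of that idea, not the Processing sketch itself.

```python
# Sketch of the ARGB packing Processing's color datatype uses: one 32-bit int,
# alpha in the top byte, then red, green, blue.

def pack_argb(a, r, g, b):
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_argb(c):
    return ((c >> 24) & 0xFF, (c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF)

c1 = pack_argb(255, 200, 10, 10)   # a reddish pixel
c2 = pack_argb(255, 100, 10, 10)   # a darker red

# Naive arithmetic on the packed ints lets one channel carry into the next,
# which is why convolving the raw color values gives noisy, colourful output:
summed = c1 + c2
print(unpack_argb(summed))         # the red sum (300) has bled upwards

# Per-channel arithmetic keeps the channels separate:
per_channel = tuple(min(255, x + y) for x, y in zip(unpack_argb(c1), unpack_argb(c2)))
print(per_channel)
```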

Audio Convolution → Space (tests)

face-thru-impulse

Convolution reverb is a method for recording the sonic nature of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it was recorded in a cathedral, or a flute in a cave and so on.

Convolution, as a process, is a mathematical operation that combines two functions (or signals) by sliding one across the other and summing their products at each offset.

I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’) I recorded in the Hockney Gallery at RCA. Currently, I’m attempting to pin down the best way to do this. The basic function is simple – essentially multiplying each pixel by every sample in the impulse, offset each time (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, with the thousands of samples in the audio, is fairly taxing.
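As a minimal sketch of that basic function (in Python rather than the Processing original): direct convolution of a flattened row of pixel values with the impulse response, y[n] = Σ x[k]·h[n−k]. The nested loop also makes the cost obvious – it is pixels × samples multiply-adds. The toy inputs are assumptions, purely for illustration.

```python
# Brute-force convolution, assuming the image has been flattened to a 1D list
# of per-channel values and the impulse response is a list of audio samples.
# Cost is len(pixels) * len(impulse) multiplies, which is why running it over
# a full image with a long impulse gets taxing.

def convolve(pixels, impulse):
    out = [0.0] * (len(pixels) + len(impulse) - 1)
    for i, p in enumerate(pixels):
        for j, s in enumerate(impulse):
            out[i + j] += p * s   # each pixel multiplied by every sample, offset each time
    return out

if __name__ == "__main__":
    pixels = [0.0, 0.5, 1.0, 0.5, 0.0]   # toy "image" row
    impulse = [1.0, 0.6, 0.3, 0.1]       # toy impulse response
    print(convolve(pixels, impulse))
```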

The aim is to be able to use it to convolve three-dimensional objects (most likely sampled as point clouds, or the vertices in polygon meshes) with varying spaces, exploring the way that physical location can affect digital objects.

Until then, some iterations:

png-21

png-16

png-2

png-3

Granular Synth 01

From a day of Arduino synth hacking/making, we made these noises.

Exhibition Plans

Planning the installation of Resonance, Revenant for ‘Physics Happens in a Dark Place’, I’ve produced an approximate floor plan.

floorplan

The aim is for the cables to be long enough to bundle out of the back of the amplifier and then trail towards the speakers, showing the connections and the distance between the particles, while remaining flexible enough to reflect the changing nature of the underlying system (which will alter the connections and couplings based on self-selecting frequencies).

Speaker Prototypes

speaker

Using the speakers with a simple, bare casing to hang them, adding as little extraneous form as possible, as discussed here.

R0512796-01

I intend to purchase speakers similar to these, which will allow for less casing.