Limbs → Legs

Developed in the 1960s by General Electric, this quadrupedal ‘truck’ had four legs, each controlled by one of the operator’s limbs. The amplification of the operator’s movements could range from strong enough to move a car out of the way to gentle enough to step on a lightbulb without crushing it.

It has excellent little feet, and this prototype is wonderfully horse-like in its pose:

[Image: the GE walking truck prototype, from Popular Science, March 1969]

Audio Convolution → Space (tests)

[Image: a face run through the impulse response]

Convolution reverb is a method for recording the sonic nature of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it was recorded in a cathedral, or a flute in a cave and so on.
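
As a rough sketch of that process (nothing here is from the original post: the filenames are placeholders, and it assumes two mono WAV files at the same sample rate), convolving a dry recording with an impulse response is enough to ‘place’ it in the recorded space:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    # Placeholder inputs: a "dry" recording and the impulse response of a space,
    # both assumed to be mono WAVs at the same sample rate.
    rate, dry = wavfile.read("drum_hit.wav")
    _, impulse = wavfile.read("gallery_impulse.wav")
    dry = dry.astype(np.float64)
    impulse = impulse.astype(np.float64)

    # Convolving the dry sound with the impulse response makes it sound
    # as though it happened in that space; fftconvolve is just a fast way to do it.
    wet = fftconvolve(dry, impulse)
    wet /= np.abs(wet).max()  # normalise to avoid clipping

    wavfile.write("drum_in_gallery.wav", rate, (wet * 32767).astype(np.int16))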

Convolution, as a process, is a mathematical operation that combines two functions (or signals) into a third, describing how the shape of one is modified by the other.
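
For discrete signals this comes down to the standard definition,

    (f * g)[n] = \sum_{m} f[m] \, g[n - m]

where each output value is a sum of one signal weighted by a reversed, shifted copy of the other – exactly the ‘multiply by every sample, offset each time’ process described below.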

I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’ of a space) that I recorded in the Hockney Gallery at the RCA. Currently, I’m attempting to pin down the best way to do this. The basic operation is simple – essentially multiplying each pixel by every sample in the impulse, offset each time (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, against the thousands of samples in the impulse, is fairly taxing.
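
A minimal sketch of that direct approach (the filenames, the greyscale conversion and the decision to flatten the image into one long signal are all assumptions on my part, not necessarily how the images below were made):

    import numpy as np
    from PIL import Image
    from scipy.io import wavfile

    # Placeholder inputs: a greyscale image and a mono impulse response.
    img = np.asarray(Image.open("face.png").convert("L"), dtype=np.float64)
    _, impulse = wavfile.read("gallery_impulse.wav")
    impulse = impulse.astype(np.float64)
    impulse /= np.abs(impulse).max()

    signal = img.flatten()  # treat the image as one long 1-D signal
    out = np.zeros(signal.size + impulse.size - 1)

    # Direct convolution: every pixel multiplied by every impulse sample,
    # offset each time – the O(pixels x samples) cost that makes it taxing.
    for i, s in enumerate(impulse):
        out[i:i + signal.size] += signal * s

    # Crop, rescale to 0-255 and fold back into the original image shape.
    out = out[:signal.size]
    out -= out.min()
    out *= 255.0 / max(out.max(), 1e-12)
    Image.fromarray(out.reshape(img.shape).astype(np.uint8)).save("face_thru_impulse.png")

Swapping the loop for scipy.signal.fftconvolve(signal, impulse) gives the same result via the FFT in a fraction of the time, which is one way around that cost.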

The aim is to be able to use it to convolve three-dimensional objects (most likely sampled as point clouds, or as the vertices of polygon meshes) with different spaces, exploring the way that physical location can affect digital objects.
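
One way that might look (purely speculative: the file format, the per-axis treatment and the normalisation are all assumptions) is to convolve each coordinate axis of a point cloud with the impulse response, so that the space ‘smears’ the geometry the way it smears a sound:

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import fftconvolve

    points = np.loadtxt("scan.xyz")        # N x 3 point cloud, one x y z row per point
    _, impulse = wavfile.read("gallery_impulse.wav")
    impulse = impulse.astype(np.float64)
    impulse /= np.abs(impulse).sum()       # keep the output roughly in range

    # Convolve each axis independently, then crop back to the original point count.
    convolved = np.stack(
        [fftconvolve(points[:, axis], impulse)[: len(points)] for axis in range(3)],
        axis=1,
    )
    np.savetxt("scan_in_gallery.xyz", convolved)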

Until then, some iterations:

[Images: four iterations of the image-through-impulse convolution tests]