Testing basil.js, the scripting library for InDesign, on snippets of my dissertation, manipulating type and layout with maths. Should be writing prose, not code.

Audio Convolution (Processing Test)

[Screen Shot 2014-08-25 at 22.52.22]

Using Processing and its color datatype (a single integer, ARGB ordered, with 8 bits per channel) for convolution. It creates a much noisier, more colourful output, but with too little resemblance to the source image to be useful.

Amazing patterns though.
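To illustrate one likely source of that noise, here's a minimal plain-Java sketch (the class and method names are mine, not Processing's): doing arithmetic on the packed ARGB integer as if it were one number lets a channel overflow carry into the neighbouring byte, which is roughly what naive convolution over raw color values does.

```java
// Sketch of why arithmetic on packed ARGB ints scrambles channels.
public class ArgbDemo {

    // Pack four 8-bit channels into one int, ARGB order,
    // the same layout Processing's color type uses.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    // Extract one channel: shift is 24/16/8/0 for A/R/G/B.
    static int channel(int argb, int shift) {
        return (argb >>> shift) & 0xFF;
    }

    public static void main(String[] args) {
        int c1 = pack(255, 200, 10, 10);
        int c2 = pack(255, 100, 10, 10);

        // Summing the packed ints as plain numbers: red (200 + 100 = 300)
        // overflows its 8 bits and carries into the alpha byte, so the
        // red channel of the result reads 300 & 0xFF = 44.
        int naive = c1 + c2;
        System.out.println(channel(naive, 16)); // 44, not a clamped 255

        // Handling each channel separately (and clamping) keeps them independent.
        int red = Math.min(255, channel(c1, 16) + channel(c2, 16));
        System.out.println(red); // 255
    }
}
```

Convolution piles up thousands of such multiply-adds per pixel, so those cross-channel carries accumulate into the colourful noise above.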

Audio Convolution → Space (tests)


Convolution reverb is a method for recording the sonic nature of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it was recorded in a cathedral, or a flute in a cave, and so on.

Convolution, as a process, is a mathematical method for combining two functions (or signals), producing a third that expresses how the shape of one is modified by the other.

I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’) I recorded in the Hockney Gallery at RCA. Currently, I’m attempting to pin down the best way to do this. The basic function is simple – essentially multiplying each pixel by every sample in the impulse, offset each time. (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, with the thousands of samples in the audio, is fairly taxing.
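The basic function described above can be sketched in plain Java (the class and method names here are mine; for an image you'd treat each colour channel as one long signal):

```java
// Brute-force ("direct") convolution of two signals.
public class DirectConvolution {

    // Each input value is multiplied by every impulse sample, offset each
    // time, and the products accumulate into the output. With n input
    // values and m impulse samples this is O(n * m) multiplications,
    // which is why running an image against a recorded impulse response
    // gets taxing.
    static double[] convolve(double[] signal, double[] impulse) {
        double[] out = new double[signal.length + impulse.length - 1];
        for (int i = 0; i < signal.length; i++) {
            for (int j = 0; j < impulse.length; j++) {
                out[i + j] += signal[i] * impulse[j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[] result = convolve(new double[]{1, 2, 3}, new double[]{1, 1});
        // {1.0, 3.0, 5.0, 3.0}: each input value smeared across its neighbour
        System.out.println(java.util.Arrays.toString(result));
    }
}
```

The usual way around the cost is convolving via FFT instead of this direct loop, but the loop is the honest version of what's described above.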

The aim is to be able to use it to convolve 3 dimensional objects (most likely sampled as point clouds, or the vertices in polygon meshes) with varying spaces, exploring the way that physical location can affect digital objects.

Until then, some iterations:





Simulation 001

[Screen Shot 2014-03-23 at 19.27.41]

Attempting to simulate the simulations of a PhD student who’s spent four or so years studying quantum physics. In Processing. In an afternoon.

Needless to say, I don’t think it’s particularly accurate. It’s a start though.

25/02/14 – 03/03/14 • IED Week Report

I’ve been looking at algorithms, the understanding (or lack of it) we have of them, and the wider effects they may have outside of their prescribed decision-making remit. I’m attempting to come to an understanding of algorithms in a wider sense than just the computational one I currently have. Since I use them in my work for visual means, but also to understand data and to automate tasks, it seems important to understand what meanings they bring and the effects they have.

This article about the nature of Twitter’s trend identification algorithm highlights some of the issues, including the power that the decisions of algorithms can have by appearing impartial or neutral, and the difficulty of assessing that neutrality when not only the algorithm is private and unknowable, but so is the data it operates on – in this case, tweets.

In order to assess algorithms and their impacts, we need not only to understand the steps they go through, but also to know about the data they operate on. But is it even useful to discuss ‘algorithms’ at all?

Governing Algorithms was a conference held in 2013, discussing the nature of algorithms and their potential for control. In preparation for the conference, a provocation piece was produced. It has been particularly useful in questioning the ways algorithms are discussed and analysed and, in fact, the very idea that they are discussed. Questioning what it leans towards calling a fashion for discussing ‘algorithms’ specifically, the authors ask: “would the meaning of the text change if one substituted the word “algorithm” with “computer”, “software”, “machine”, or even “god”? What specifically about algorithms causes people to attribute all kinds of effects to them?”

Kinect with Simple Open NI and Processing on Mavericks

I wasn’t able to get the Kinect working with Processing 2.1.1 and OS X Mavericks, but following this answer to a similar problem sorted it out. Essentially, you run brew install libfreenect in Terminal (you’ll need Homebrew installed) and then move the resulting installed file to inside the SimpleOpenNI library, so move




And all should be well.
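For reference, the general shape of the fix looks roughly like this. Both paths below are placeholders I've supplied, not the exact ones from the original post, so check your own Homebrew prefix and Processing libraries folder:

```shell
# Install libfreenect via Homebrew (you'll need Homebrew itself first)
brew install libfreenect

# Copy the installed dylib into SimpleOpenNI's OS X library folder.
# Both paths are placeholders; adjust to your own install locations.
cp /usr/local/lib/libfreenect.*.dylib \
   ~/Documents/Processing/libraries/SimpleOpenNI/library/osx/
```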

This Weekend: Computational Rube Goldberg Transcoder

I’ll be running a workshop with Francesco this Saturday (25 Jan), 1-3pm, in the Work in Progress show.

Part workshop and part performance, this is an exercise in creating (and disrupting) a sensor/signal loop.

Drawing from the ideas of the feedback loop and the Rube Goldberg machine, we will be transcoding data from one platform to another, in a journey from digital signal to physical output and vice versa. There is no beginning or end, but rather different platforms through which data can be input in the form of sound, colour, materials, lights, physical movements and so forth…

Anybody can and should intervene at any point to disrupt the transcoding of the signal and foster new serendipitous outcomes. Feel free to bring images, photos, instruments or just yourself.