I was going to call a part of my Dissertation ‘Loose Chips…’ but then I couldn’t work out what they might sink, or at least, nothing that rhymed. And then it turned out the US Department of Defense had already done it in 1998 anyway.
Stands up as pretty good advice though.
Using Processing and its color datatype (a 32-bit integer, ARGB ordered, 8 bits per channel) for convolution creates a much noisier, more colourful output – but one with too little resemblance to the source image to be useful.
Amazing patterns though.
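A minimal sketch of why convolving the packed colour ints directly goes noisy. This is plain Python standing in for Processing’s Java int – the `pack`/`unpack` helpers are my own, not Processing API – but the bit layout is the same: arithmetic on one channel spills into a neighbouring channel’s bits.

```python
# Plain-Python stand-in for Processing's packed ARGB colour int
# (0xAARRGGBB, 8 bits per channel). pack/unpack are illustrative
# helpers, not Processing functions.

def pack(r, g, b, a=255):
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack(c):
    return (c >> 16) & 0xFF, (c >> 8) & 0xFF, c & 0xFF  # (r, g, b)

# Averaging two colours as whole packed ints: the odd green sum's
# leftover bit falls into the top bit of the blue field.
c1, c2 = pack(0, 1, 0), pack(0, 0, 3)
whole_int = unpack((c1 + c2) // 2)                          # (0, 0, 129)

# Averaging channel by channel keeps each result inside its own 8 bits.
per_channel = tuple((x + y) // 2
                    for x, y in zip(unpack(c1), unpack(c2)))  # (0, 0, 1)

print(whole_int, per_channel)
```

Blue jumping from 1 to 129 because of a stray green bit is exactly the kind of cross-channel bleed that turns a convolution pass into colourful noise.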
Convolution reverb is a method for recording the sonic nature of a space and applying it to raw sounds so that they sound as though they occurred there. For example, making a drum sound as though it was recorded in a cathedral, or a flute in a cave and so on.
Convolution, as a process, is a mathematical method for combining two functions (or signals) into a third that expresses how the shape of one is modified by the other.
I’ve been running images through convolution algorithms, using an impulse response (the sonic ‘fingerprint’) I recorded in the Hockney Gallery at RCA. Currently, I’m attempting to pin down the best way to do this. The basic function is simple – essentially multiplying each pixel by every sample in the impulse, offset each time (I got a decent understanding of convolution and simple ways of implementing it here) – but running it over the thousands of pixels in an image, against the thousands of samples in the audio, is fairly taxing.
The aim is to be able to use it to convolve 3 dimensional objects (most likely sampled as point clouds, or the vertices in polygon meshes) with varying spaces, exploring the way that physical location can affect digital objects.
Until then, some iterations:
From 1975, an essay by Charles Csuri (computer artist, famous for his ‘Random War’ works) on the use of Statistics in art. He talks about the impact of the use of computational technologies in art in two ways. Interaction in art is shown to transform the viewer into an active participant in the works, allowing for a shift in their perception:
A case can be made for the idea that art can alter perception, and that since perception is an active organizing process rather than a passive retention-of-image causation, only by actively participating with the art object can one perceive it—and thus, in perceiving it, change one’s reality structure
He uses the example of the AID (Automatic Interaction Detector) program from 1963 to show how the user can affect the view of data, moving it in three dimensions, and altering it over time – a precursor to many of today’s visualisation tools.
Csuri discusses the impact of information on art too, expressing many of the arguments for the use of ‘Big Data’ that are put forth today, namely that “we have developed an enormous capacity to create large data-bases and programs that print out mountains of statistical information. While this capacity is a phenomenal one, we generally have difficulty in knowing how to interpret such data”.
Beyond this, though, he elaborates on the potential of this space for artists in a way that is rarely done in the current fervour for representation:
Rather than looking to the visual form or the external appearance of reality, the artist can now deal directly with content. It is a new conceptual landscape with its mountains, valleys, flat spaces, dark and light with gradations of texture and color. With computers, the artist can look at statistics representing real-world data about every facet of society—its problems reflecting tragic, comic and even surrealistic viewpoints. The artist has opportunities to express his perceptions of reality in a new way.
For Csuri, data and statistics are a new, exciting space for artistic expression, a way of expanding and modulating their perception and expression, a tool to augment, not merely represent, reality.
Read the essay here: http://www.atariarchives.org/artist/sec25.php
From a day of Arduino synth hacking/making, we make these noises.
In 1955 the RAND Corporation published a book of one million random numbers. Before it became trivial to generate them using computers, it was the go-to text for random numbers to use in probability research. From the write-up that accompanies the 2001 re-issue:
“Not long after research began at RAND in 1946, the need arose for random numbers that could be used to solve problems of various kinds of experimental probability procedures. These applications, called Monte Carlo methods, required a large supply of random digits and normal deviates of high quality, and the tables presented here were produced to meet those requirements. This book was a product of RAND’s pioneering work in computing, as well as a testament to the patience and persistence of researchers in the early days of RAND. The tables of random numbers in this book have become a standard reference in engineering and econometrics textbooks and have been widely used in gaming and simulations that employ Monte Carlo trials. Still the largest published source of random digits and normal deviates, the work is routinely used by statisticians, physicists, polltakers, market analysts, lottery administrators, and quality control engineers.”
The digits were generated using an electronic, simulated roulette wheel hooked up to a computer, and I can only imagine that generating a million of them took a significant amount of time, especially as, according to Wikipedia, they had to be filtered and tested to ensure their randomness (which seems somewhat paradoxical, but randomness is weird like that).
Although this isn’t making it on to my reading list, Deborah Bennett’s history of chance, Randomness, definitely is.
You can download parts of the book and related materials here
I’ve been reading 10 PRINT CHR$(205.5+RND(1)); : GOTO 10 and revelling in the expansive output of tiny programs. The book mentions demoscene site pouet.net and a little hunt around brought me to the one embedded above. It’s great – turn it up loud.
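A rough Python port (my own, not from the book) of the Commodore 64 one-liner the book is named after: it endlessly prints one of two diagonal characters at random, and a maze-like pattern emerges. Here it emits a fixed-size grid instead of looping forever, with Unicode box-drawing diagonals in place of PETSCII codes 205/206.

```python
import random

random.seed(10)  # fixed seed so the demonstration is repeatable

# CHR$(205.5 + RND(1)) picks PETSCII 205 or 206 at random;
# here we pick between two Unicode diagonals instead.
maze = "".join(random.choice("╱╲") for _ in range(40 * 8))

for row in range(8):
    print(maze[row * 40:(row + 1) * 40])
```

Three reserved words and a bit of arithmetic, and out comes an endless generative maze – the expansive-output-from-tiny-programs idea in sixteen characters of BASIC.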
[Here’s the original page]