LM386 based amplifiers:
Trevor Paglen’s images of US intelligence agency buildings serve to focus security/surveillance discussion on something tangible, as he says in the article:
“The scarcity of images is not surprising. A surveillance apparatus doesn’t really “look” like anything. A satellite built by the National Reconnaissance Office (NRO) reveals nothing of its function except to the best-trained eyes. The NSA’s pervasive domestic effort to collect telephone metadata also lacks easy visual representation; in the Snowden archive, it appears as a four-page classified order from the Foreign Intelligence Surveillance Court. Since June 2013, article after article about the NSA has been illustrated with a single image supplied by the agency, a photograph of its Fort Meade headquarters that appears to date from the 1970s.”
This lack of tangibility makes discussion difficult: we can talk or write in the abstract about effects and intent, but this perhaps only speaks to those already interested. The challenge for art and design (I'm speaking specifically of interaction or experience design here) is to bring the discussion forward and allow people to explore and analyse these effects and intents. What does it mean for our experience of a physical space to know of the surveillance occurring? What does it mean for our experience of an interface to know that it's logging our swipes, taps and clicks?
Often the temptation is to create a replica of a system of control, mirroring the existing systems to highlight or amplify their effects and privacy invasions, but this doesn't bring us closer to an understanding; it only amplifies our assumptions. With Paglen's images we come closer to having the resources to discuss, present and design these things from a point of understanding. As with Jacob Appelbaum's keynote at Transmediale earlier this year, in which he discusses the objects and architectures of surveillance, the aim is to see these structures before we can start to pull them apart and understand or represent their effects.
For the PhysicsSpace project I've been working on turning a theoretical simulation of the interaction of quantum particles into an engaging, communicative experience. It's currently a two-part prototyping process:
Firstly, giving the experience physical form, a way that people can see and manipulate the system. The challenge here has been avoiding arbitrary forms that further obfuscate the process. To deal with this I am prototyping the objects very simply, from only what they need to function, avoiding abstract casings and particle metaphors and mirroring the scientific simulation, whereby each item is described by a set of properties and connections. I've got a delivery coming (hopefully today) with the components and materials I need to test this.
The second part involves simplifying the output of the system to communicate the user's actions and changes. This boils down to a simulation of a simulation: using Processing to mirror the work (in a simpler way) and pare down its output to the most interesting, engaging essentials.
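As a toy illustration of that "simulation of a simulation" idea, here is a minimal sketch, in plain Python rather than Processing for brevity: a pair of coupled oscillators whose full state history is pared down to just its peaks. The model, names and thresholds are my own assumptions for illustration, not the actual PhysicsSpace simulation.

```python
def simulate(steps, dt=0.01, coupling=0.5):
    """Two coupled oscillators; returns the full state history."""
    x = [1.0, 0.0]  # positions of two particles
    v = [0.0, 0.0]  # velocities
    history = []
    for _ in range(steps):
        # each particle is pulled back to rest and towards its neighbour
        a = [-x[0] + coupling * (x[1] - x[0]),
             -x[1] + coupling * (x[0] - x[1])]
        v = [v[i] + a[i] * dt for i in range(2)]
        x = [x[i] + v[i] * dt for i in range(2)]
        history.append(tuple(x))
    return history

def pare(history, threshold=0.9):
    """Reduce the raw history to its 'interesting' moments: local peaks
    above a threshold, as (time step, particle index, value) events."""
    events = []
    for t in range(1, len(history) - 1):
        for i in range(2):
            prev, cur, nxt = history[t - 1][i], history[t][i], history[t + 1][i]
            if cur > prev and cur > nxt and cur > threshold:
                events.append((t, i, cur))
    return events

full = simulate(5000)
events = pare(full)
```

The pared event stream is a tiny fraction of the raw state history, which is the point: the installation's output only needs the moments worth communicating, not the whole simulation.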
“Interdiction”: objects gaining another attribute while being delivered to you. “The point is that we don’t know, but when I look at this (computer) I now see two possibilities”.
“It would have been impossible to see the world in this way and be taken seriously were it not for (Snowden’s) actions”
“potentially this keyboard is an agent of the state, that exists in her house now”
The Chicago police department uses algorithmic prediction to identify a list of people ‘at risk’ of committing violent crime, raising issues of profiling, racism and the possibility of algorithmic impartiality.
The models in Vignesh’s research are defined by conditions and parameters and therefore have no explicit form. The observations he makes don’t rely on the form of the system or, even, parallels with real world particles but are defined by looking at the effects, the outcomes of each variation.
As much as possible I will mirror this in the installation, keeping the form of the objects as close to their behaviour as possible: they will emit sound and so should be speakers; they will emit light and so should be bulbs.
Extending this, the coupling should perhaps be explicit, with longer cables signifying more loosely coupled particles.
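A data-structure sketch of that principle, with hypothetical names and values since the actual model isn't specified here: each object is nothing but a set of behavioural properties, and physical build parameters such as cable length fall straight out of the coupling strength.

```python
# Each particle/object is described only by its behavioural properties,
# echoing the simulation where an item is a set of properties and connections.
particles = [
    {"id": "a", "frequency": 440.0, "brightness": 0.8},
    {"id": "b", "frequency": 330.0, "brightness": 0.4},
]

# Coupling between pairs of particles: 0.0 (loose) to 1.0 (tight).
couplings = {("a", "b"): 0.25}

def cable_length(pair, min_cm=20, max_cm=200):
    """Map coupling strength to cable length: looser coupling, longer cable."""
    strength = couplings[pair]
    return max_cm - strength * (max_cm - min_cm)

# A fairly loose coupling of 0.25 yields a correspondingly long cable.
length = cable_length(("a", "b"))
```

The specific range (20–200 cm) and linear mapping are placeholders; what matters is that the physical form is derived from the behaviour rather than designed around a metaphor.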
I’ve been considering the form of the objects that I’ll be using to represent the experimental model. Coming from the idea of resonance and bell sounds, cones could be used to represent simplified bells, fitting particularly well with the speakers they’ll need to contain; the challenge is balancing hiding those speakers and staying true to the objects’ form.
I have come to the realisation that I’ve taken on too much and this is impacting the work I’m able to produce.
[Image by John Frum Press]
“We’re about to begin a detailed, tightly scripted series of more than 100 actions, all recorded to the minute using the GMT time zone for consistency. These steps are a strange mix of high-security measures lifted straight from a thriller (keycards, safe combinations, secure cages), coupled with more mundane technical details – a bit of trouble setting up a printer – and occasional bouts of farce. In short, much like the internet itself.”
Etching stainless steel, using vinegar, salt and a 9v battery.
This is the first text, an expanded proposal and investigation, for my dissertation:
I had intended to write here about algorithms in computational art, investigating their place in final artworks and the relationship between algorithmic tools and the artist’s agency. Through research, however, two things became clear. Firstly, I was using the word ‘algorithm’ as a stand-in for a number of different processes and technologies and, secondly, I wasn’t really interested in the impact of algorithmic, computational processes on the artist’s agency but, rather, in the impact of these processes on our day-to-day lives and the ability of artists and designers to highlight this.
I’ve been looking at algorithms, the understanding (or lack of it) we have of them, and the wider effects they may have outside of their prescribed decision-making remit. I’m attempting to come to an understanding of algorithms in a wider sense than just the computational one I currently have. Since I use them in my work for visual ends, but also to understand data and to automate tasks, it seems important to understand what meanings they bring and the effects they have.
This article about the nature of Twitter’s trend identification algorithm highlights some of the issues, including the power that the decisions of algorithms can have by appearing impartial or neutral, and the difficulty of assessing that neutrality when not only the algorithm is private and unknowable, but so is the data it operates on – in this case, tweets.
In order to assess algorithms and their impacts we need to understand not only the steps they go through but also the data they operate on. But is it even useful to discuss ‘algorithms’ at all?
Governing Algorithms was a conference held in 2013 to discuss the nature of algorithms and their potential for control; in preparation for the conference, a provocation piece was produced. It has been particularly useful in questioning the ways algorithms are discussed and analysed, and indeed what it leans towards calling a fashion for discussing ‘algorithms’ specifically. The authors ask: “would the meaning of the text change if one substituted the word “algorithm” with “computer”, “software”, “machine”, or even “god”? What specifically about algorithms causes people to attribute all kinds of effects to them?”
“For those who aren’t nerds, hackers or cryptographers and have better things to do than keep up with the pitfalls of digitalization every hour, there are ten simple rules to resist exploitation and surveillance”
From Joël de Rosnay’s book ‘The Macroscope’:
“Today we are confronted with another infinite: the infinitely complex. We are confounded by the number and variety of elements, of relationships, of interactions and combinations on which the functions of large systems depend. We are only the cells, or the cogs; we are put off by the interdependence and the dynamism of the systems, which transform them at the very moment we study them. We must be able to understand them better in order to guide them better. And this time we have no instrument to use. We have only our brain–our intelligence and our reason–to attack the immense complexity of life and society. True, the computer is an indispensable instrument, yet it is only a catalyst, nothing more than a much-needed tool.
We need, then, a new instrument. The microscope and the telescope have been valuable in gathering the scientific knowledge of the universe. Now a new tool is needed by all those who would try to understand and direct effectively their action in this world, whether they are responsible for major decisions in politics, in science, and in industry or are ordinary people as we are.
I shall call this instrument the macroscope (from macro, great, and skopein, to observe).”
[via: Well Formed Data]
“Consequently, any serious visualization of a sufficiently complex topic should always aim exposing the complexity, the inner contradictions, the manifold nature of the underlying phenomenon.”
Moritz Stefaner on using data to tell “worlds not stories”.
New Scientist has an article about a computer-calculated mathematical proof so long it’s unlikely ever to be checked by a human.
“It would take years to check the computer’s working – and extending the method to check for yet higher discrepancies might easily produce proofs that are simply too long to be checked by humans. But that raises an interesting philosophical question, says Lisitsa: can a proof really be accepted if no human reads it?”
The conclusion is essentially that, yes, it can be accepted without direct human confirmation by using alternative computer methods to corroborate it, but the philosophical question still stands for me, and ranges wider than mathematics.
What does it mean to collate more than we can observe, for example in big data or digital surveillance? If we can only parse what we create through a predefined, mediatory program, what can we understand, and to what depth?
And, for art and design, does this separation affect our ability to be reflective and critical, or is this just another tool in the vein of the camera obscura or lucida to enable the creation of artworks?