The vid.stab video stabilisation library takes two passes to stabilise a video. After the first pass is complete it leaves you with a file containing the frame-by-frame transformations that make up the camera shake. I wrote a simple parser for the file and rendered the individual transforms, as well as an overall global value across the video’s frames.
In a crowded scene, such as this found first-person footage of someone walking down a busy street, the transforms are chaotic, interfered with by the movement of people within the scene. But the moments when someone enters and moves across the field of view produce quite beautiful, harmonic interludes, with many lines moving in the same direction, and when the camera is moved in a sweeping fashion (around 1:00) by a head turn, the whole field of vectors angles in unison.
The purpose of this tool is to understand how computers and algorithms might see motion in video, as a step towards breaking down how they might perform biometric identification from the characteristic gait of the camera ‘wearer’.
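A minimal sketch of what such a parser might look like. This assumes a simple plain-text layout for the transforms file, one whitespace-separated line per frame (frame number, x translation, y translation, rotation, extra flag) with `#` comment lines; the exact field layout varies between vid.stab versions, so treat this as an illustration rather than the library’s documented format:

```python
import math

def parse_transforms(text):
    """Parse a vid.stab-style transforms file (assumed plain-text layout:
    'frame dx dy rotation extra' per line, '#' for comments)."""
    frames = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        frame, dx, dy, rot = line.split()[:4]
        frames.append((int(frame), float(dx), float(dy), float(rot)))
    return frames

def global_shake(frames):
    """One overall value per frame: the magnitude of its translation."""
    return [math.hypot(dx, dy) for _, dx, dy, _ in frames]

sample = """# comment line
0 0.0 0.0 0.0 0
1 3.0 4.0 0.01 0
"""
print(global_shake(parse_transforms(sample)))  # [0.0, 5.0]
```

The per-frame tuples drive the individual vector rendering; the magnitude list is one possible “overall global value” to plot across the video’s frames.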
The Intercept has a good analysis of GCHQ’s ‘Regin’ malware, including a breakdown of its likely stealthy, modular installation process.
“The malware, which steals data from infected systems and disguises itself as legitimate Microsoft software, has also been identified on the same European Union computer systems that were targeted for surveillance by the National Security Agency.”
It’s a long-term piece of software, and not just in its slow installation: the article reckons it was in development for over a decade and has spread as widely as “Russia, Saudi Arabia, Mexico, Ireland, Belgium, and Iran”.
Living in a Non-Place, without the burden of his, or any other, history allowed Karimi Nasseri to re-invent his identity:
“Over the years, he has claimed many things about his origins. At one time his mother was Swedish, another time English. Nasseri’s effectively reinvented himself in the Charles de Gaulle airport and denies these days that he’s Iranian, deflecting any conversation about his childhood in Tehran.”
He is now known as ‘Sir, Alfred Mehran’, a name taken, comma and all, from a British Immigration letter. Having no papers, and no official state-based identity is what forced Nasseri to inhabit the airport in the first place. He proved his identity in order to enter, as Augé shows is a necessary part of the Non-Place, but, with his papers stolen, was unable to prove it to leave or enter the next bureaucratic Non-Place in his immigration journey.
Although, for most, Charles de Gaulle airport is a Non-Place, it could be argued that for Nasseri as Sir, Alfred Mehran it was the opposite: a Place. Although Nasseri had no history there, it having been erased by the loss of his papers and his status as an asylum seeker, disowned by his home country, Mehran was known throughout the airport; it was his home, and he built stories and relationships there.
More about Nasseri/Mehran here and here.
However, it is also likely that some people will attack technology directly. Technology is already the primary controlling force in our lives: automated systems run the stock market, algorithms are highly influential forces in deciding Google search results or Netflix recommendations, and sophisticated policing and surveillance techniques keep people from threatening the system without them even knowing it. However, more people are going to realize how much technology influences their lives as they begin to interact with its artificial products on an everyday basis. Consider, for example, how widespread the anti-Facebook sentiment is, or how easily people can attack a company like Google. Before this point in history, technology wasn’t even a cultural topic for discussion. Now it is one of the most common.
Luddites must also constantly ask themselves how their current projects contribute to the overall goal of ending the industrial system. Any projects that do not lead to that goal should be dropped.
– John Jacobi, The Luddite Method
Earthcode is a project by Martin Howse exploring the idea of producing a computer integrated with, and constructed from, the earth. The image above is from the Earthboot part of the project, the earth as OS, from the site:
“earthboot boots from the earth.
earthboot returns vampiric technology to the earth.
earthboot enables almost any computer to boot straight from the earth, sidestepping dirty mining actions, and the expensive refining and doping of raw minerals; thus avoiding environmentally wasteful production techniques for the construction of data bearing devices such as hard drives or USB memory sticks
Instead, earthboot boots straight from the earth itself, exploring the being-substrate of contemporary digital technology; the material basis of 21st century computation.
earthboot revives the use of underground flows of electricity or telluric currents which were first exploited as generators of power within the telegraphic communications apparatus of the 19th century.
earthboot proposes a barely functional telluric operating system (OS), exposing the vampirism of current technology. Telluric or underground currents are translated directly into code for an earthbound operating system.
The laptop, or PC, literally boots up directly from the specially designed, earthboot USB device pushed into the earth, running code which is totally dependent on small fluctuations in electric current within the local terrain.
Quite often the earthboot operating system is not always fully functional as expected. Crashing is the price to pay for booting straight from the earth.
A prototype has been constructed based on the ATMEGA32u4 which emulates a USB mass storage device, sampling earth voltages and converting these directly into instructions for an earthbooting computer. Preliminary tests for earthboot have proved successful using code based on the LUFA mass storage example.”
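The actual ATMEGA32u4 firmware isn’t reproduced on the project page, but the core gesture, “telluric currents translated directly into code”, can be sketched as a toy analogue: quantise a stream of earth-voltage samples into bytes and present them as a bootable sector. Everything here (the 5 V reference, the 512-byte sector, the PC boot signature) is an assumption for illustration, not earthboot’s implementation:

```python
def telluric_bytes(samples, vref=5.0):
    """Quantise earth-voltage samples (volts, 0..vref) into bytes --
    a toy stand-in for the ADC sampling on the ATMEGA32u4."""
    return bytes(min(255, max(0, int(v / vref * 256))) for v in samples)

def fake_boot_sector(samples):
    """Pad or truncate the telluric byte stream to a 512-byte 'boot
    sector' ending in the classic PC boot signature 0x55AA, so a BIOS
    would attempt to execute whatever the earth dictated."""
    data = telluric_bytes(samples)[:510].ljust(510, b"\x00")
    return data + b"\x55\xaa"

sector = fake_boot_sector([0.0, 2.5, 4.9, 1.2])
print(len(sector))  # 512
```

Since the “instructions” are raw voltage fluctuations, almost any sector produced this way is garbage to a CPU, which is exactly the point of the line about crashing being the price of booting straight from the earth.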
More information here: http://www.1010.co.uk/org/earthcode.html
What is the experience of a thermometer? If we take the ‘?’ outlined in ‘Alien Phenomenology, or What It’s Like To Be a Thing’ by Ian Bogost, we might say that it’s aware (in some sense) of the hook it’s hung on, or the wall or table it rests against or on, the temperature acting on its mercury (or other liquid) and the hands that touch it to angle it for reading. Localised experience, as with most ‘things’.
Now, what is the experience of a thermometer that is part of the ‘Internet of Things’, the growing collection of sensors, processors and actuators networked together around the world? (e.g. http://sensorist.com/hardware) This would, assuming it had a similar form factor, have a similar experience, although it’s unlikely to be touched to be read, or to have a human-readable display as part of it – this would be done remotely, over the network.
This network adds another aspect to its experience, it is now able to receive control signals through its connection, and therefore gains experience of another device, for example the user’s smartphone. This is the intended extension of a thing when it is networked, but the internet, the network is much larger than the communication between smartphone and thermometer and, in order to be accessible from different situations, the ‘thing’ must make itself part of that global, public network.
This is not to say that it’s now accessible to anyone, and able to access everything, but even a mistaken visit to its IP address by a search engine crawl bot, or a mis-typed address, gives it awareness beyond the simple intended one of thermometer to companion smartphone app. This simple thermometer develops distributed senses; it shifts from the regular dimensions of the standard thermometer and gains the ability, in some sense, to travel instantly between locations. It broadens its horizons, so to speak.
How does it feel about that? What changes inside it? What can it know/do/feel that we can’t?
I was going to call a part of my Dissertation ‘Loose Chips…’ but then I couldn’t work out what they might sink, or at least, nothing that rhymed. And then the US Department of Defense had done it in 1998 anyway.
Stands up as pretty good advice though.
From 1975, an essay by Charles Csuri (computer artist, famous for his ‘Random War’ works) on the use of Statistics in art. He talks about the impact of the use of computational technologies in art in two ways. Interaction in art is shown to transform the viewer into an active participant in the works, allowing for a shift in their perception:
A case can be made for the idea that art can alter perception, and that since perception is an active organizing process rather than a passive retention-of-image causation, only by actively participating with the art object can one perceive it—and thus, in perceiving it, change one’s reality structure
He uses the example of the AID (Automatic Interaction Detector) program from 1963 to show how the user can affect the view of data, moving it in three dimensions, and altering it over time, a precursor to many visualisation tools now.
Csuri discusses the impact of information on art too, expressing many of the arguments for the use of ‘Big Data’ that are put forth today, namely that “we have developed an enormous capacity to create large data-bases and programs that print out mountains of statistical information. While this capacity is a phenomenal one, we generally have difficulty in knowing how to interpret such data”.
Beyond this, though, he elaborates on the potential of this space for artists in a way that is rarely done in the current fervour for representation:
Rather than looking to the visual form or the external appearance of reality, the artist can now deal directly with content. It is a new conceptual landscape with its mountains, valleys, flat spaces, dark and light with gradations of texture and color. With computers, the artist can look at statistics representing real-world data about every facet of society—its problems reflecting tragic, comic and even surrealistic viewpoints. The artist has opportunities to express his perceptions of reality in a new way.
For Csuri, data and statistics are a new, exciting space for artistic expression, a way of expanding and modulating their perception and expression, a tool to augment, not merely represent, reality.
Read the essay here: http://www.atariarchives.org/artist/sec25.php
Telegeography, a telecommunications market research and consulting firm, provides maps of the submarine cable routes and landing points, both as interactive maps and as Google Fusion tables (and the obligatory Github).
– Cable Routes table
– Cable landings table
– Combined map
– Internet Exchange map
A vision of 1999, from 1967. Some fantastic retro-futurist predictions from the Philco-Ford Corporation.
“Interdiction”: objects gaining another attribute, while being delivered to you. “The point is that we don’t know, but when I look at this (computer) I now see two possibilities”.
“It would have been impossible to see the world in this way and be taken seriously were it not for (Snowden’s) actions”
“potentially this keyboard is an agent of the state, that exists in her house now”
The Chicago police department uses algorithmic prediction to identify a list of people ‘at risk’ of committing violent crime, raising issues of profiling, racism and the question of algorithmic impartiality.
The Guardian has a story about ‘Optic Nerve’, GCHQ’s operation intercepting and collecting frames from Yahoo webcam feeds. It contains a couple of choice quotes from the agency’s documents. The first, and perhaps most telling, is:
“One of the greatest hindrances to exploiting video data is the fact that the vast majority of videos received have no intelligence value whatsoever, such as pornography, commercials, movie clips and family home movies.”
Bulk collection is, perhaps, leading to wasted effort and, perhaps, to a counter-strategy: flooding the databases and servers with too much information, rather like Hasan Elahi does. Putting one’s life online as the ultimate alibi, and extra information as a counter-surveillance measure.
My favourite part of the article, though, is “it noted that current ‘naïve’ pornography detectors assessed the amount of flesh in any given shot, and so attracted lots of false positives by incorrectly tagging shots of people’s faces as pornography.”
The idea of a naïve pornography detector seems hilarious, but it also picks out a further problem with the mass of data collected: it’s not possible (or at least not efficient) to trawl it manually, so in the absence of truly accurate or intelligent algorithms it’s borderline meaningless. Again this brings to light a method for skirting surveillance – creating images for the algorithms, to amplify the expected results. Spoofing data by talking to the processes observing us.
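GCHQ’s actual detector isn’t described beyond “assessed the amount of flesh in any given shot”, but a plausible guess at what ‘naïve’ means is a per-pixel RGB skin heuristic (the rule below is the well-known Peer et al. one, an assumption on my part) plus a flesh-ratio threshold. A face filling the webcam frame is mostly skin tone, so it trips the threshold, which is exactly the false positive the document complains about:

```python
def looks_like_skin(r, g, b):
    """Classic RGB skin heuristic (Peer et al.) -- one guess at what a
    'naive' flesh detector might use; the real method is not public."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - g) > 15

def flesh_ratio(pixels):
    """Fraction of pixels classified as skin."""
    return sum(1 for p in pixels if looks_like_skin(*p)) / len(pixels)

def naive_flag(pixels, threshold=0.4):
    """Flag a frame as pornography if enough of it looks like flesh."""
    return flesh_ratio(pixels) >= threshold

# A tightly framed webcam portrait: 70% skin-toned pixels, 30% background.
# Just a face, but the naive detector flags it anyway.
face = [(200, 150, 120)] * 70 + [(30, 30, 30)] * 30
print(naive_flag(face))  # True
```

It also makes the spoofing strategy concrete: any image engineered to sit just above or below the flesh-ratio threshold talks directly to the classifier rather than to a human viewer.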
Lev Manovich has an article titled ‘Image Processing and Software Epistemology’, in which he states that: “Another important type of software epistemology is fusing data sources to create the new knowledge which is not explicitly contained in any of them. Using the web, it is possible to create a description of an individual by combining pieces of information from his/her various social media profiles and making deductions from them … Strictly speaking, the underlying algorithms do not add any new information to each of the images (their pixels are not changed). But since each image can now become a part of the larger whole, its meanings for a human observer change.”
In talking about this he touches on two things of interest to me. The first is that he talks of images as data that can be acted on – all things within software are data, and this expands the possibilities for analysis and deduction. The second is the “ability to generate additional information from the data years or even decades after it was recorded”, the idea that we may be able to know more from today’s data in the future as a result of improved algorithms or new ways of linking disparate data.
This has potentially deep impacts for our understanding of historical events and also potentially leads to situations where we recontextualise our experience of a place or a moment in light of new algorithmic information about it. This is not necessarily a new idea, Manovich uses the example of the film Blow Up in his essay, but it is likely to happen at a quicker pace, with potentially larger revelations.
Yes, there were fascinating, expensive pieces of machinery, C. elegans & bacteria, but there was also this very orange bin.
How does the physical context of the lab inform and influence tools for scientific data collection?
In his paper ‘Working Memory’, A.D. Baddeley says: “It is suggested that active storage involves rehearsal, a process whereby the system reads out information from the store and then feeds it back, thereby continually refreshing or updating the memory trace.” So in order to store information in our working or sensory memory (which is an active store), the information must be repeated within a feedback loop. He goes on to say, specifically about auditory memory, that “Memory span is a function of both the durability of a trace within the phonological store and the rate at which rehearsal can refresh that trace … if rehearsal can be repeated every 1-2s, forgetting will be prevented”.
Machine architecture influences use and to assume that this would not influence the resulting aesthetics is naïve. The infinitely re-configurable and re-contextualizing nature of the machine is the whole point of why we use these damn things. So an image construction method that would closely match this discrete logic, down to the very 0s and 1s of the machine’s ABCs, was an important step in creating a “plastic” image, capable of reconfiguring itself multiple times per second.
Douglas Edric Stanley – Artifactual Playground