Rehearsals

rehearsal_004

I’ve created these drawings in response to Baddeley’s paper on working memory and rehearsal traces. I’m attempting to match a shape or a line while in a distracting environment, to see the changes that occur and the way the feedback loop and environmental conditions affect the result.

rehearsal_003
rehearsal_002
rehearsal_001

Active Memory & Rehearsal

In his paper ‘Working Memory’, A.D. Baddeley says: “It is suggested that active storage involves rehearsal, a process whereby the system reads out information from the store and then feeds it back, thereby continually refreshing or updating the memory trace.” So in order to store information in our working or sensory memory (which is an active store), the information must be repeated within a feedback loop. He goes on to say, specifically about auditory memory, that “Memory span is a function of both the durability of a trace within the phonological store and the rate at which rehearsal can refresh that trace … if rehearsal can be repeated every 1-2s, forgetting will be prevented”.
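
To get a feel for that loop before drawing, the decay-and-refresh cycle can be roughed out in a few lines of Processing. This is only an illustration (the decay rate is an assumption, not a value from the paper): the trace fades continuously, and a rehearsal inside the 1-2 second window writes it back at full strength.

float trace = 1.0;              // strength of the memory trace, 0..1
float decayPerSecond = 0.4;     // assumed decay rate, not a figure from the paper
int rehearsalInterval = 1500;   // rehearse every 1.5 s, inside the 1-2 s window
int lastRehearsal = 0;

void setup() {
  size(400, 200);
}

void draw() {
  background(255);
  // the trace fades continuously
  trace = constrain(trace - decayPerSecond / frameRate, 0, 1);
  // the feedback loop: read the trace out and feed it back at full strength
  if (millis() - lastRehearsal > rehearsalInterval) {
    trace = 1.0;
    lastRehearsal = millis();
  }
  // draw the trace as a fading bar
  fill(0, trace * 255);
  rect(50, 80, 300 * trace, 40);
}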

Image construction and the Plastic Image

Machine architecture influences use and to assume that this would not influence the resulting aesthetics is naïve. The infinitely re-configurable and re-contextualizing nature of the machine is the whole point of why we use these damn things. So an image construction method that would closely match this discrete logic, down to the very 0s and 1s of the machine’s ABCs, was an important step in creating a “plastic” image, capable of reconfiguring itself multiple times per second.

Douglas Edric Stanley – Artifactual Playground


Sensory, Short-Term & Long-Term Memory

“Sensory memory takes the information provided by the senses and retains it accurately but very briefly. Sensory memory lasts such a short time that it is often considered part of the process of perception.”

“Short-term memory temporarily records the succession of events in our lives. It may register a face that we see in the street, or a telephone number that we overhear someone giving out, but this information will quickly disappear forever unless we make a conscious effort to retain it. ”

“Long-term memory not only stores all the significant events that mark our lives, it lets us retain the meanings of words and the physical skills that we have learned. Its capacity seems unlimited … But it is far from infallible. ”

[SOURCE]

Kinect with Simple Open NI and Processing on Mavericks

I wasn’t able to get the Kinect working with Processing 2.1.1 and OS X Mavericks, but following this answer to a similar problem sorted it out. Essentially, you run brew install libfreenect in Terminal (you’ll need Homebrew installed) and then move the resulting installed file inside the SimpleOpenNI library, so move

/usr/local/Cellar/libfreenect/0.2.0/lib/libfreenect.0.1.2.dylib

to

/Users/yourusername/Documents/Processing/libraries/SimpleOpenNI/library/osx/OpenNI2/Drivers/libfreenect.0.1.2.dylib

And all should be well.
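
With the file in place, a quick way to check that Processing can see the Kinect is a minimal depth-image sketch like the one below (this assumes the SimpleOpenNI library is installed in the usual Processing libraries folder):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("Can't init SimpleOpenNI. Is the Kinect plugged in?");
    exit();
    return;
  }
  context.enableDepth();
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
}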

Images As Spatial Memory

An old article on Bldg Blog points towards the use of images (specifically holographic in this case, but really any image) to impart a familiarity with a space before visiting.

On several occasions, it seems, having recently printed holographs of a certain environment, the users of these holographs will experience a new kind of spatial familiarity with what would otherwise be a new location; they are able to know, for instance, accurately and in advance, what will be found around corners, where objects are located in relation to others, and even how far apart things are placed.

And:

But I’m captivated by the suggestion that new representational technologies—new ways of documenting and sharing spatial information—might come with their own cognitive implications: new memory disorders, new anxieties, new sources of identification or confusion. Put another way, what spatial or topographic disorders already exist—such as vertigo—and do certain representational technologies (like 3D film or even Google Street View) augment these disorders or keep them at bay? To use a somewhat absurd example, simply for the point of illustration, could something like 3D film be used someday as a kind of non-chemical cure for acrophobia? You’re prescribed a certain time of exposure.

[Source]

Sensory Memory

Sensory memory acts, in a way, like a buffer for each sense, briefly retaining the information provided. The amount of time it’s retained varies by sense: for example, echoic memory (the sensory memory for sounds) stores information for roughly 3-4 seconds, while iconic memory (part of the sensory memory for visual perception) stores information for less than one second.

It seems like these memories occur prior to any analysis or processing by the brain and so offer a moment where it’s possible to inject new data into the memory-making and cognition process.

Performed Program

Here I’ve added some adjustable parameters to the OCR Text > Animation program that allow me to adjust it in real time as it’s running. In this case I can adjust the type size, the camera angle and the depth scaling.
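
The program itself isn’t posted here, but a rough Processing sketch of the idea (parameters nudged from the keyboard while the sketch runs) might look something like this; the variable names and key bindings are just illustrative:

float typeSize = 24;      // type size, adjusted with the + and - keys
float cameraAngle = 0;    // camera rotation, adjusted with LEFT / RIGHT
float depthScale = 1.0;   // depth scaling, adjusted with UP / DOWN
String sampleText = "text read from the environment";

void setup() {
  size(800, 600, P3D);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  rotateY(cameraAngle);
  fill(255);
  textSize(typeSize);
  textAlign(CENTER, CENTER);
  // the z coordinate stands in for the depth-scaled placement of the text
  text(sampleText, 0, 0, 100 * depthScale);
}

void keyPressed() {
  if (key == '+') typeSize += 2;
  if (key == '-') typeSize = max(2, typeSize - 2);
  if (keyCode == LEFT)  cameraAngle -= 0.05;
  if (keyCode == RIGHT) cameraAngle += 0.05;
  if (keyCode == UP)    depthScale += 0.1;
  if (keyCode == DOWN)  depthScale = max(0.1, depthScale - 0.1);
}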

OCR Type to Depth to animation

These are some renderings from Processing of the various processes/methods I’ve been through to animate the text generated from an environment using OCR.

I’ve used a depth map of the environment to animate and, in some cases, colour the text or change the font.
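
The actual sketches aren’t included here, but the basic move is something like the following: sample the depth map at each character’s position and use that value to drive the z translation and the colour. The filename and the particular mappings below are assumptions, just to illustrate the approach.

PImage depthMap;
String ocrText = "text generated from the environment by OCR";

void setup() {
  size(800, 600, P3D);
  depthMap = loadImage("depthmap.png");       // hypothetical depth map exported earlier
  if (depthMap == null) {
    depthMap = createImage(100, 100, RGB);    // fall back to a blank map so the sketch still runs
  }
  textSize(18);
}

void draw() {
  background(0);
  int cols = 20;
  for (int i = 0; i < ocrText.length(); i++) {
    int gx = i % cols;
    int gy = i / cols;
    // sample the depth map at this character's grid position
    int px = int(map(gx, 0, cols, 0, depthMap.width - 1));
    int py = int(map(gy, 0, 10, 0, depthMap.height - 1));
    float depth = brightness(depthMap.get(px, py));
    pushMatrix();
    // depth drives the z position...
    translate(60 + gx * 35, 100 + gy * 40, map(depth, 0, 255, -200, 200));
    // ...and, in this illustration, the colour as well
    fill(depth, 255 - depth, 255);
    text(ocrText.charAt(i), 0, 0);
    popMatrix();
  }
}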

Vision

11349469736_ec988fac7f_o

An image of the (now broken) Symbiosis glasses prototype in action. Possibly relevant here to the idea of adjusting/enhancing perception/the senses.

Found in Kevin’s Flickr stream

This Weekend: Computational Rube Goldberg Transcoder

I’ll be running a workshop with Francesco this Saturday (25 Jan), 1-3pm, in the Work in Progress show.

Part workshop and part performance, this is an exercise in creating (and disrupting) a sensor/signal loop.

Drawing from the ideas of the feedback loop and the Rube Goldberg machine, we will be transcoding data from one platform to another, in a journey from digital signal to physical output and vice versa. There is no beginning or end, but rather different platforms through which data can be input in the form of sound, colour, materials, lights, physical movements and so forth…

Anybody can and should intervene at any point to disrupt the transcoding of the signal and foster new serendipitous outcomes. Feel free to bring images, photos, instruments or just yourself.

Haus-Rucker-Co

haus-rucker-co-env-trans
[ Environment Transformer – Haus-Rucker-Co ]

Haus-Rucker-Co were an experimental architecture studio concerned with (among other things) enhancing sensory experience & highlighting the “taken for granted” nature of our senses.

Cognitive Architecture Research

Materials for architecture in the brain converge from various sensory organs
Kenya Hara, Designing Design, p.156

In the book ‘Designing Design’, Kenya Hara discusses the Architecture of Information, positing that in the mind, images are constructed from sensory input and retrieved memories, with memories being the “primary material of the image”.

The limitations of ‘Working Memory’ are outlined in “Cognitive Architecture and Instructional Design” by Sweller et al., with a loose limit of around 7 items that can be held and processed there at any one time. Human beings are said to get around this seeming limitation by making use of larger tree-based structures, known as schemas, held in long-term memory. As Sweller et al. put it:

Although the number of elements is limited, the size, complexity, and sophistication of elements is not. A schema can be anything that has been learned and is treated as a single entity. If the learning process has occurred over a long period of time, the schema may incorporate a huge amount of information. Our schema for a restaurant includes extensive knowledge about food and its functions in human affairs; money and its role in exchanging goods and services; the basic architecture of buildings; furniture and how it is used; plus many other facts, functions, processes, and entities. This huge array of elements has been acquired over many years but can be held in working memory, as a single entity.

This use or aggregation of previous knowledge, which allows us to construct meaning and to analyse or respond to our current situation, links back to what Hara is saying about memory:

Memories not only lead the recipient to voluntarily ruminate on the past, but, called up in succession as the brain receives outside stimuli, also act to flesh out an image for understanding new information.

John Soane's

A broken-journey sound recording around Sir John Soane's Museum. In this memory exercise I have attempted to remember my journey around the museum days later, using only an audio recording as a prompt. The audio gets split partway through, and as a result the chronology is broken and it becomes difficult to comprehend as part of a wider journey. There seems to be a distinct point of reference for me: coming to the stairway and hearing a conversation between staff members about the telephone there. It’s unclear to me, however, whether that occurs at 2:56 or 9:30 in the recording.

A memory of the journey, triggered by the sounds:

  • 0:34

    Sarcophagus. Mother & Daughter in discussion.
    The leg of a statue on the wall.

  • 2:14

    The room with the mirrors around the arches.

  • 2:56

    The hall
    The discussion about the age of the telephone & when it was installed.

  • 3:33

    The stairs up.
    The small room in the corner.

  • 4:32

    Voices. Who? Indistinct and unremembered but melodic.

  • 6:25

    “Here until ….. he died in 1837”
    The model room? A table with architectural models on.

  • 7:33

    “Oh yes ….. No No”
    Amongst people, but where?

  • 8:39

    A door slams

  • 9:30

    A phone rings.
    Is this where I thought I was at 2:56?

  • 9:46

    A cut out.

  • CHRONOLOGY BROKEN

    Audio lost for an unknown time.

  • 11:22

    “Thank you very much”
    The exit.

  • 11:34

    The road. Cars.

Cognitive Architecture: Brief

This brief is about information architecture in mental and physical spaces, and centres on Sir John Soane’s museum, often described as a physical manifestation of his mind. Research and create a work in response to both the museum and cognitive architecture.

Environmental OCR

I’ve been using OCR (Optical Character Recognition) to get a computer to read spaces around the building as text. Each image was read twice: once allowing the algorithm to pick the settings it thought were best for the image, and once requesting that it “Assume a single uniform block of text”. This is the setting that led to the larger amount of text output, but some of the mistakes made by the automatic setting are quite comical (see “fig” above, or the wonky emoticon below).

ocr_env-02b

ocr_env-03

ocr_env-03b

ocr_env-04

ocr_env-04b

ocr_env-05

ocr_env-05b

ocr_env-01

ocr_env-01b

[OCR done using Tesseract, single block of text requested with “-psm 6”]
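
Just to make the process concrete, here is a minimal Processing sketch of roughly how an image could be pushed through Tesseract twice, once on automatic settings and once with -psm 6. The filenames are placeholders, and it assumes the tesseract binary is installed and on the PATH.

void setup() {
  String input = sketchPath("ocr_env-01.jpg");   // placeholder image in the sketch folder
  try {
    // first pass: let Tesseract choose its own page-segmentation settings
    new ProcessBuilder("tesseract", input, sketchPath("out_auto")).start().waitFor();
    // second pass: "-psm 6" asks it to assume a single uniform block of text
    new ProcessBuilder("tesseract", input, sketchPath("out_block"), "-psm", "6").start().waitFor();
    // Tesseract writes out_auto.txt and out_block.txt next to the sketch
    println(join(loadStrings(sketchPath("out_block.txt")), "\n"));
  } catch (Exception e) {
    e.printStackTrace();
  }
  exit();
}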

Processing + Seene

Seene is an iOS application that lets you create photos with added 3D depth. SeeneLib by Ben Van Citters allows you to work with its file format and data in Processing.