The vid.stab video stabilisation library takes two passes to stabilise a video. The first pass leaves you with a file containing the frame-by-frame transformations that describe the camera shake. I wrote a simple parser for the file and rendered the individual transforms, as well as an overall global value across the video’s frames.
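As a rough illustration of what such a parser might look like, here is a minimal sketch. It assumes the older ASCII transforms format that vid.stab’s detection pass can write, with lines like `Frame 12 (Transform 1.25 -0.50 0.0021 0 0)`; the exact format varies between vid.stab versions, so this is a sketch under that assumption, not a definitive reader.

```python
import re

# Matches the assumed ASCII line format:
#   Frame <n> (Transform <dx> <dy> <angle> <zoom> <extra>)
FRAME_RE = re.compile(
    r"Frame\s+(\d+)\s+\(Transform\s+"
    r"([-\d.eE]+)\s+([-\d.eE]+)\s+([-\d.eE]+)\s+([-\d.eE]+)\s+(\d+)\)"
)

def parse_transforms(lines):
    """Yield (frame, dx, dy, angle, zoom) tuples from transform-file lines."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blank lines
            continue
        m = FRAME_RE.match(line)
        if m:
            frame = int(m.group(1))
            dx, dy, angle, zoom = map(float, m.group(2, 3, 4, 5))
            yield frame, dx, dy, angle, zoom

# Hypothetical sample data in the assumed format
sample = [
    "# transforms written by the detection pass",
    "Frame 1 (Transform 0.50 -0.25 0.0010 0 0)",
    "Frame 2 (Transform 1.10 0.05 -0.0004 0 0)",
]
for row in parse_transforms(sample):
    print(row)
```

Each tuple gives the per-frame translation (`dx`, `dy`), rotation (`angle`), and zoom, which is enough to draw the individual motion vectors or to sum them into a global value per frame.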
In a crowded scene, such as this found first-person footage of someone walking down a busy street, the transforms are chaotic, interfered with by the movement of people within the scene. But the moments when someone enters and moves across the field of view produce quite beautiful, harmonic interludes, with many lines moving in the same direction, and when the camera sweeps (around 1:00) with a head turn, the whole field of vectors angles in unison.
The purpose of this tool is to understand how computers and algorithms might see motion in video, towards breaking down how they might perform biometric identification from the characteristic gait of the camera ‘wearer’.