
Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People


The evolution of wearable computers. See http://wearcam.org/steve5.htm for the original JPEG file. (Photo credit: Wikipedia)

 


In the pursuit of pervasive, sensor-based user-generated content (UGC), the aim is to augment visual lifelogs with ‘Web 2.0’ content collected by millions of other individuals.

We present a system that realises the aim of using visual content and sensor readings passively captured by an individual and then augmenting that with web content collected by other individuals. Doherty and Smeaton (2009)

  • Lifelogging, like keeping a diary, is a private and exclusive form of reverse surveillance. Doherty and Smeaton (2009)
  • Using the SenseCam from Microsoft. Zacks (2006)
  • Human memory operates by associating linked items together. Baddeley (2004)
  • Supportive of patients suffering from early-stage memory impairment. Berry et al. (2009)
  • Enhancing SenseCam-gathered images by data mining UGC sites such as Flickr and YouTube (a minimal sketch of this matching step follows this list). Doherty and Smeaton (2009)
  • See also MyLifeBits. Bell and Gemmell (2007)
  • A commercial lifelogging product, Vicon Revue, from OMG.
  • Flickr has over 95 million geo-tagged images (2010).
  • YouTube has 100 million video views per day (2010, YouTube fact sheet).
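
To make the "data mining" idea above concrete, here is a minimal sketch of one way candidate UGC items could be matched to a lifelog event by tag overlap. The field names, example data and scoring are my own assumptions for illustration, not the method described by Doherty and Smeaton (2009), which weighs far more than tags alone.

```python
from typing import Dict, List, Set

def rank_ugc_by_tag_overlap(event_tags: Set[str],
                            ugc_items: List[Dict]) -> List[Dict]:
    """Rank UGC items by how many tags they share with a lifelog event.

    A deliberately naive stand-in for the data-mining step; real systems
    combine tags with time, location and visual similarity.
    """
    def overlap(item: Dict) -> int:
        return len(event_tags & set(item.get("tags", [])))

    return sorted(ugc_items, key=overlap, reverse=True)

# Example: an event tagged at Trafalgar Square, matched against two photos.
event_tags = {"london", "trafalgar square", "fountain"}
candidates = [
    {"id": "flickr-1", "tags": ["london", "trafalgar square", "pigeons"]},
    {"id": "flickr-2", "tags": ["paris", "louvre"]},
]
print([item["id"] for item in rank_ugc_by_tag_overlap(event_tags, candidates)])
```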

The SenseCam has a camera and a range of other sensors for monitoring the wearer’s environment, detecting movement, temperature, light intensity, and the possible presence of other people in front of the device via body heat.
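
As a rough illustration of the kind of record such a device produces, the sketch below defines a simple reading structure and a naive change detector. The field names and thresholds are assumptions for illustration, not the SenseCam's actual data format or its event-segmentation algorithm.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class SensorReading:
    """One hypothetical SenseCam-style sample (field names are assumptions)."""
    timestamp: datetime
    accelerometer: float   # magnitude of movement
    temperature_c: float   # ambient temperature
    light_lux: float       # light intensity
    pir_triggered: bool    # passive infrared: possible person in front of wearer

def likely_event_boundaries(readings: List[SensorReading],
                            light_jump: float = 200.0,
                            motion_jump: float = 1.5) -> List[int]:
    """Return indices where consecutive readings change sharply.

    A crude stand-in for the event segmentation lifelogging systems perform;
    real systems use far more sophisticated models.
    """
    boundaries = []
    for i in range(1, len(readings)):
        prev, curr = readings[i - 1], readings[i]
        if (abs(curr.light_lux - prev.light_lux) > light_jump
                or abs(curr.accelerometer - prev.accelerometer) > motion_jump):
            boundaries.append(i)
    return boundaries
```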

(I’d like the SenseCam to be smaller still, and to include a microchip in a swimmer’s cap, goggles or swimsuit to monitor various other factors, including heart rate, blood sugar levels and carbon dioxide.)

How the mind dissects, stores and correlates the information it gathers is, though, somewhat different to the linear recording or cataloguing of current systems.

After her first stroke, a patient who was otherwise unable to communicate found engagement by looking at family photographs on an iPad. After a second stroke the same patient, deemed incapable of comprehension or communication, responded to hundreds of images of paintings she had known in her lifetime, in particular answering questions posed while looking at one painting. Where is it? The Louvre. What is it? The Mona Lisa. (Vernon, 2012)

As sensing technologies become more ubiquitous and wearable, a new trend of lifelogging and passive image capture is starting to take place, and early clinical studies have shown much promise in aiding human memory. Doherty and Smeaton (2009)

Fig. 1. A game of pairs: our minds are far more interpretive, chaotic and illogical when it comes to visual associations based on what we see around us.

However, it is presumptuous, prescriptive and even manipulative to assume that a person recalls ‘more of the same’ when visually sensing or surveying a place. The foible of the human mind is that noises and smells, the temperature and weather, and the time of day all have a part to play. I visit Trafalgar Square and smell pigeons even though they are long gone. I visit Buckingham Palace and recall finding a woman dead on the pavement one late evening. I see snow and think of the broken leg I got skiing in my teens, not snowmen. I see any ice-cream van and think specifically of Beadnell Bay, Northumberland.

The mind is far, far more complex than a fancy game of ‘pairs’. I have perhaps 30,000 of my own images online, so why support, replace or supplement these with those taken by others? What if, during my lifetime, I tag, link and associate these images? How might these be linked to another personal log, a diary of some 2.5 million words written over a 30-year period?

There are research challenges involved in further improving the quality of the lifelog augmentation process, especially with regard to “event-specific” lifelog events, e.g., football matches, rock concerts, etc. Other research challenges include investigations into selecting initial seed images based on adaptive radii, more sophisticated tag selection techniques, and also considering how interface design and varying methods of visualisation affect users’ acceptance of augmented data.
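
One way to picture the "adaptive radii" challenge is a search that keeps widening its geographic net around an event's location until it has enough candidate seed images. The sketch below is only a reading of that phrase, not the authors' algorithm; the local photo list, thresholds and radius-doubling step are assumptions, and in practice the candidates would come from a geo-tagged collection such as Flickr.

```python
import math
from typing import List, Tuple

# (photo_id, latitude, longitude) triples, assumed fetched in advance.
GeoPhoto = Tuple[str, float, float]

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def seed_images(event_lat: float, event_lon: float, photos: List[GeoPhoto],
                min_results: int = 20, start_km: float = 0.5,
                max_km: float = 50.0) -> List[GeoPhoto]:
    """Grow the search radius until enough nearby candidate images are found."""
    nearby: List[GeoPhoto] = []
    radius = start_km
    while radius <= max_km:
        nearby = [p for p in photos
                  if haversine_km(event_lat, event_lon, p[1], p[2]) <= radius]
        if len(nearby) >= min_results:
            break
        radius *= 2  # adaptive step: double the radius and try again
    return nearby
```

Doubling the radius is just one plausible adaptation policy; a busy city-centre event would stop after the first pass, while a rural event would keep expanding until it found enough material or hit the cap.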

REFERENCES

Baddeley, A., Ed. Your Memory: A User’s Guide; Carlton Books: New York, NY, USA, 2004.

Bell, G.; Gemmell, J. A digital life. Scientific American Magazine, March 2007.

Berry, E.; Hampshire, A.; Rowe, J.; Hodges, S.; Kapur, N.; Watson, P.; Smyth, G.B.G.; Wood, K.;
Owen, A.M. The neural basis of effective memory therapy in a patient with limbic encephalitis. J.
Neurol. Neurosurg. Psychiatry 2009, 80, 582–601.

Doherty, A.R. and Smeaton, A.F. (2009) Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People.

Vernon, J.F. (2012) Use of hundreds of image grabs of contemporary artists, Leonardo da Vinci and Van Gogh to communicate with an elderly patient after a series of catastrophic and ultimately fatal strokes.

Zacks, J.M.; Speer, N.K.; Vettel, J.M.; Jacoby, L.L. Event understanding and memory in healthy aging and dementia of the Alzheimer type. Psychol. Aging 2006, 21, 466–482.

