
Tag Archives: memory

Learning & Memory – my 1,500th post to this blog

Fig. 1. Looks like a good read

I'm starting to read papers on neuroscience that result in my starting to use my hands and fingers as I read, even reading and re-reading phrases and sentences out loud as I try to 'get my head around it'. (A search in the Open University Online Library for 'hippocampus rats memory' brought me to the above.)

This is the kind of thing from the abstract:

 The nucleus accumbens shell (NAC) receives axons containing dopamine-β-hydroxylase that originate from brainstem neurons in the nucleus of the solitary tract (NTS). Recent findings show that memory enhancement produced by stimulating NTS neurons after learning may involve interactions with the NAC. However, it is unclear whether these mnemonic effects are mediated by norepinephrine (NE) release from NTS terminals onto NAC neurons. (From Kerfoot & Williams, 2011, p. 405)

On the other hand, when I read this I think I’ve taken it too far. Like the skier who watches with admiration as someone comes down a gully but would never do it themselves. 

 The A2 neurons are activated during times of heightened arousal by the release of glutamate from vagal nerve fibers that ascend from the periphery to the brainstem (Allchin et al. 1994; King and Williams 2009). Highly arousing events increase epinephrine secretion from the adrenals and facilitate binding to β-adrenergic receptors along the vagus nerve (Lawrence et al. 1995) that in turn, increase impulse flow to brainstem neurons in the NTS (Lawrence et al. 1995; Miyashita and Williams 2006). Epinephrine administration, stimulation of the vagus nerve or direct infusion of glutamate onto A2 NTS neurons are all known to significantly potentiate norepinephrine release within the amygdala and hippocampus (Segal et al. 1991; Liang et al. 1995; Williams et al. 1998; Izumi and Zorumski 1999; Hassert et al. 2004; Miyashita and Williams 2004; Roosevelt et al. 2006). (From Kerfoot & Williams, 2011, p. 405)

Fig. 2. Neuroscience for Dummies (Frank Amthor, 2012), Kindle Location 5704

This is the bold step I've taken: not having to read papers on neuroscience but feeling the need to do so. I've had three years of considering the theory behind learning; now I want to see (where it can be seen) what is happening. Papers rarely illustrate. What I want are papers with photos, charts and video clips, with animations and multiple-choice questions, then a bunch of contactable folk at the bottom to have a conversation with.

Figure 2 will have to do for now, though having got through ‘Neuroscience for Dummies’ I’m ready for the sequel ‘Neuroscience for the Dolterati’.

To understand how the nervous system works, according to Professor Frank Amthor, I need to know how neurons work, how they talk to each other in neural circuits, and how these circuits form a particular set of functional modules in the brain. Figure 2 starts to do this. (Amthor, 2012, Kindle Location 323)

What is going on here?

If I understand it correctly there is, because of the complexity of connections between neurons, a relationship with many parts of the brain simultaneously – some common to us all, some, among the millions of links, unique to us. Each neuron is connected to 10,000 others. To form a memory some 15 parts of the brain are involved.

Learning is situated; much of it we are not aware of.

There is a multi-sensory context. Come to think of it, while I was concentrating I got cramp in my bum and right thigh, perched as I am on a hard kitchen chair, and there is the lingering aftertaste of the cup of coffee I drank 45 minutes ago. I can hear the kitchen clock ticking – though most of the time it is silent (to my mind) – and the dog just sighed.

Does it matter that my fingers are tapping away at a keyboard?

Though touch-typing is second nature, it occupies my arms, hands and fingers, which could otherwise be animated as if I were talking. Would this in some way help capture the thought? I am talking, in my head. The stream of consciousness is almost audible. It was a couple of sentences with a few new acronyms, involving an image I have in my head of what neurons, synapses and axons look like.

What would happen were I to use a voice recorder and speak my thoughts instead?

By engaging my limbs and voice, would my thinking process improve, and would the creation of something to remember be all the stronger?

I’m getting pins and needles/cramp in my right leg. Aaaaaaaaaaagh! Party over.

The question posed is often ‘what’s going on in there?’ referring to the brain. Should the question simply be ‘what’s going on?’

Learning & Memory

My eyesight is shifting. In the space of six months I've moved to reading glasses. Now my normal glasses are no good either for reading or distance. Contacts are no use either. As a consequence I'm getting new glasses for middle distance and driving. The solution with the contact lenses is more intriguing.

To correct for astigmatism and short-sightedness I am going to have one lens in one eye to deal with the astigmatism and a different lens in the other to deal with the short-sightedness. My mind will take the information from both and … eventually, create something that is sharp close up and at a distance. This has me thinking about what it is that we see: NOT a movie or video playing out on our retina, but rather an assemblage of meaning and associations formed in the brain.

I will try these lenses and hang around, wander the shops, then return. I am advised that I may feel and appear drunk. I can understand why. I could well describe being drunk as trying to navigate down a path with a microscope in one hand and a telescope in the other while looking through both. I feel nauseous just thinking about it.

So ‘stuff’ is going on in the brain.

These days the activity resulting in the brain figuring something out can, in some instances and to some degree, be seen. Might I have an fMRI scan before the appointment with the optician? Might I then have a series of further scans to follow this 're-wiring' process?

I need to be careful here: the wrong metaphor, however much it helps with understanding, may also lead to misunderstanding. Our brain is organic; there are electro-chemical processes going on, but if I am correct there is no 're-wiring' as such – the connections have largely existed since birth and are simply activated and reinforced?

Fig. 3. Synaptic transmission

Any neuroscientists out there willing to engage with a lay person?

What would observing this process of unconscious learning tell us about the process of learning? And is it that unconscious if I am aware of the sensations that have to be overcome to set me right?

REFERENCE

Kerfoot, E. & Williams, C. (2011) 'Interactions between brainstem noradrenergic neurons and the nucleus accumbens shell in modulating memory for emotionally arousing events', Learning & Memory, 18(6), pp. 405–413. Science Citation Index, EBSCOhost, viewed 7 March 2013.

Amthor, F. (2012) Neuroscience for Dummies. Cheat Sheet (for the time-challenged).

 

Someone who correctly sensed what was coming in 2004 might be the person to ask what is due in 2013/2014

In this paper Grainne Conole says (writing in 2003, published 2004) that wireless, smart and wifi technologies will have a huge impact … prescient. Can you remember how little of what we now take for granted was around in 2004? I was probably using a Psion and a bog-standard phone.

'Technologies do have great potential to offer education, however this is a complex multifaceted area; we need rigorous research if we are going to unpick the hype and gain a genuine understanding of how technologies can be used effectively'. (Conole, 2004, p. 2)

  • Pedagogical
  • Technical
  • Organisational

‘Academics working in this area need to demonstrate that the research is methodologically rigorous, building appropriately on existing knowledge and theories from feeder disciplines and feeding into policy and practice’. (Conole, 2004)

  • effective models for implementation
  • mechanism for embedding the understanding gained from learning theory into design
  • guidelines and good practice
  • literacy needs of tutors and students
  • the nature and development of online communities
  • different forms of communications and collaboration
  • the impact of gaming
  • cultural differences in the use of online courses

‘much of the current research is criticised for being too anecdotal, lacking theoretical underpinning’ (Mitchell, 2000)

This is what you find in the press: newspapers and magazines always go for the anecdotal and sensationalist view of what technology may do. Has technology yet brought the world to an end? I guess the atomic bomb has always, legitimately, been more scary than other technologies, although I dare say there are those who claim Google will bring about the end of the world.

'A more detailed critique of the methodological issues of e-learning research and its epistemological underpinnings are discussed elsewhere'. (Oliver and Conole, 2004)

  • A better understanding of the benefits and limitations of different methods.
  • More triangulation of results.

What people are looking for:

  • potential efficiency gains and cost effectiveness
  • evidence-based practice with comparison of the benefits of new technologies over existing teaching and learning methods
  • How technologies can be used to improve the student learning experience.

No surprises that in business the use of e-learning is benchmarked, with cost and outcomes closely followed – are we improving and saving at the same time? Typically, travel and accommodation costs are saved where people don't have to be away from work, and learning times can be cut without loss of information retention on the compliance-type material – health and safety, data protection, equity in the workplace and basic induction (or, as American companies call it, 'onboarding', which sounds to me like something you do with guests on a cruise liner – or is that embarking?).

How do we capture experience in a way that we build it back into design and implementation? (Conole, 2004, p. 8, point 8 of 12)

What are the inherent affordances of different technologies? (Conole, 2004, p. 8)

'Only time will tell'. (Conole, 2004, p. 17)

Or as I would say, ‘on verra’.

I am doing the classic 'expand and contract' of problem solving – the problem being to find an area of research I can believe in and sustain for four years. Though for H809 all I need is the title of a research paper, I would still prefer to be narrowing down the areas that interest me:

  • memory
  • virtual worlds
  • blogging
  • spaced education (see memory)
  • lifelogging / sensecam (see memory)
  • Artificial Intelligence (learning companion … see memory?)

Whilst the research question ought to come first, I hope that Activity Theory will have a role to play too.

REFERENCE

Conole, G. (2004) 'E-learning: the hype and the reality'.

Oliver, M. and Conole, G. (2004) Methodology and e-learning. ELRC research paper. No. 4

 

 

Neuroscience Cases: The Man Who Could Not Forget


As learning involves memory and our desire to recall selectively what is required for an assignment, an exam or our work, let alone living our daily lives, what role does forgetting serve? Why is it so important that we struggle to remember and need to forget? What does neuroscience reveal, and why do special cases such as these tell us so much?

 

I cannot through words share with my mother our collective memories, I cannot do a ‘mind transfusion’.



Fig. 1. My parents – and a fraction of the record we have left of them now that they are gone.

My mother had a stroke.

She would die within three months; after a second stroke her already poor comprehension and ability to communicate got very much worse. I could not become an expert in care for a stroke victim overnight, but I read enough and asked questions. We found two ways 'in' – song and images. The images were never of people – various sparks of joyous recognition came when we were seen in the flesh and behaved like children rather than adults in our 40s and 50s. I cannot through words share with my mother our collective memories; I cannot do a 'mind transfusion'. I could not even talk about things we did a year or ten years ago – I sensed the time was irrelevant: she was as likely to recall her first doll as our last visit to the Royal Academy of Arts to enjoy Van Gogh's Letters. A visit where she gently nurtured the interest of her 13-year-old granddaughter, sharing insights between the letters, sketches and paintings from the point of view of an artist, art teacher and art historian, to a bright girl who liked to draw.

A mouthful of the food from the Fortnum and Mason’s restaurant might have triggered her memory – we did treat her to various foods.

What worked, in defiance of the medical reports that essentially said 'there is nothing there', was an iPad loaded with images grabbed from a number of hefty art books – 20th-century art, the Van Gogh exhibition book and pictures from the Louvre. I spoke to that part of her that I thought might work. I challenged her as I showed the pictures: to say when a letter had been written, or why Van Gogh was so keen to tell his brother what he was up to. And what was the name of Van Gogh's brother? I got through Van Gogh and contemporary artists then moved on to the Louvre.

Up comes the Mona Lisa.

‘Where is this painting? We’ve seen it. It was so small?’

And she replied, ‘Louvre’.

‘Where’s that?’ I asked.

‘Paris’ she said.

Perhaps had my mother been in her sixties, we and she could have seen a way to persevere with this.

Would a lifelog have got to this point in under 15 minutes? Might a screen of fast-moving images, offered in a spaced-out way, with eye-tracking, identify that 'glimmer' of recognition that would then prioritise images in the same set? Though who would know why a set was being favoured? We associate images with feelings, and people, and places, not with a set book or date or necessarily a genre of work.
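That 'glimmer of recognition' idea can at least be sketched. A minimal, entirely hypothetical Python sketch follows: it assumes an eye-tracker reports a dwell time per image, and that a long dwell on any image boosts every image in the same set for the next viewing pass. The function name, the threshold and the data shapes are all my invention, not any real assistive product.

```python
from collections import defaultdict

def prioritise_by_dwell(dwell_ms, image_sets, threshold_ms=800):
    """Rank images for the next viewing pass.

    dwell_ms   : {image_name: gaze dwell time in milliseconds}
    image_sets : {image_name: the set/collection the image belongs to}
    A dwell above threshold_ms counts as a 'glimmer' and raises the
    score of the whole set that image belongs to."""
    set_score = defaultdict(float)
    for image, ms in dwell_ms.items():
        if ms >= threshold_ms:
            set_score[image_sets[image]] += ms
    # Favoured sets first, then the viewer's own longest dwells first.
    return sorted(dwell_ms,
                  key=lambda im: (-set_score[image_sets[im]], -dwell_ms[im]))
```

So a long look at the Mona Lisa would pull the other Louvre images up the queue – even though, as above, nobody could say why that set was being favoured.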

Fig. 2. I think in pictures. But have to communicate in words. I wonder if a stream of pictures, as Tumblrs do, is a better record of our thoughts?

I think Bell has shown how we can freeze content from the digital ocean without knowing what value it will bring.

Perhaps from such an iceberg or glacier, at a later date, we can mine the event-sparking artifacts that call up a memory as indicated above. But this artifact is not the memory and never can be. We should applaud Bell and others for going beyond merely thinking about such massive data collections – the 'world brain' of H. G. Wells or the Memex of Vannevar Bush.

 

The reality is that our digital world long ago washed over the concept of an e-memory.

Fig. 1. Tablets are the university in the pocket.

Bell and Gemmell (2009) need to imagine the future beyond the lens of lifelogging and e-memories. What else will be developing at just as fast a rate? Where will Google and Apple be in our lives?

Fundamentally, though, can a recording of what is going on around someone form any kind of memory at all? Of far greater value is how a personalised capture of an event, assisted by technology, becomes additional support to someone as they learn.

A student who hasn’t prepared for an exam is imagined calling upon all kinds of records to get her straight – would someone who had done so little and left it so late have any desire to go to this effort now?

Much of what Bell describes isn't a sound e-memory construct either; it is simply searching, grabbing, downloading, adding links and collecting references that may have personal attributes to them.

It simply doesn't wash that anyone would need, say, to refer to the way they dealt with a problem in the past when they can just as readily call up the solutions of a myriad of others. Anyone can imagine the perfect use of an imaginary service or product – this doesn't validate it. Where are the patterns that show this happening in this way?

The reality is that our digital world long ago washed over the concept of an e-memory.

An e-memory or automatic logging is not reflection – the gathering process as Bell and Gemmell (2009) conceive it requires no control over how information is gathered; the user may not even recognise the events that are played back. How could a sports coach possibly get a better view of, say, a soccer match or a squad of swimmers from a camera snapping images every 22 seconds, or make the choices and pick up the level of detail he gets with his own eyes?

Instead of indulgently and obsessively digitising everything in sight like a 21st-century transporter, Bell should have been constructing research based on the use of e-learning devices and software, giving them out to thousands of users to conduct trials. He wrongly assumes that his family, and its passing on of family heirlooms, is like every other.

He hadn’t foreseen the creation of hundreds of thousands of Apps.

Bell and Gemmell (2009, p. 141) assume that this lifelog will preserve an image of a loved one we would want to keep. But when would we ever see them? They wore the camera. And where would we get – and should we have – access to the lifelogs of others who will have caught our loved one in shot?

The idea of a ‘world brain’ that acts as a perfect memory prosthesis to humans is not new.

Fig. 1 H G Wells 

In the late 1930s, British science fiction writer H. G. Wells wrote about a "world brain" through which "the whole human memory can be [...] made accessible to every individual." Mayer-Schönberger (2011, p. 51)

I think to keep a lifelog is to invite sharing. It’s so Web 2.0.

It may be extreme, but some will do it, just as people keep a blog, or post a picture taken every day for a year or more. The value and fears of such 'exposure' on the web have been discussed since the outset. There are new ways of doing things, new degrees of intimacy.

'Obliterating the traditional distinction between information seekers and information providers, between readers and authors, has been a much-discussed quality since the early days of the Internet'. (Mayer-Schönberger, 2011, p. 83)

'By using digital memory, our thoughts, emotions, and experiences may not be lost once we pass away but remain to be used by posterity. Through them we live on, and escape being forgotten'. (Mayer-Schönberger, 2011, p. 91)

At a faculty level I have twice created blogs for the recently deceased.

Fig. Jack Wilson MM 1938

It was with greater sadness that I did so for my own parents – my father in 2001 and my mother in 2012. By recording interviews with my late grandfather, I moved closer to the conception of a digital expression of a person. It doesn't take much to imagine a life substantially 'lifelogged' and made available in various forms – a great tutor who continues to teach, a much-loved grandparent or partner to whom you may still turn …


 

Fig. 3. Bell and Gemmell imagine lifelogs of thousands of patients used in epidemiological surveys. (Bell and Gemmell, 2009, p. 111)

This has legs. It ties in with a need. It relates to technologies being used to manage patients with chronic illnesses. It ties in to the training of clinicians too.

 

 

Is lifelogging a solution on the lookout for a problem?

Fig. 1. A hundred cards in a hundred days. Away from my fiancee I gave up the diary and posted her one of these every day.

I'm from a generation where we have a record in letters. Does a digital record simply enable more of the same kind of thing?

It is true that their worth grows as the years pass – to know what you were doing a year, three years, ten years or two decades ago at least puts a wry smile on your face.

‘If you have ever tried reading an old diary entry of yours from many years ago, you may have felt this strange mixture of familiarity and foreignness, of sensing that you remember some, perhaps most, but never all of the text’s original meaning’. Mayer-Schönberger (2011. p. 34).

Which is why Bell's approach may diminish the mind, not enhance it.

The mind reworks a memory every time it is relived – it isn't the same memory when it reforms on a shelf in your mind. Whereas Bell's 'memory' sits there unchanging. Crucially it lacks the mental context, connections and connotations of the person. Indeed, it isn't a memory at all; it is simply a digital record snapped by a device. After all, it is a false input – lacking the filter of the person's eyes and senses. The laziness of such a lifelog has serious flaws. Just because it can be done does not mean that it should be. If it is to be done, then it should be research-led, or part of a problem-solving, outcome-driven project: supporting those with dementia or cognitive disabilities, aiding those recovering from a stroke …

Is lifelogging a solution on the lookout for a problem?

Forgetfulness (Bell and Gemmell, 2009, p. 52) doesn't sound like a worthy cause; better to learn to remember, better to enjoy and use those around you – family and friends. Alzheimer's disease is a cause. Parkinson's too. Possibly those with cognitive problems. Could lifelogging be an assistive technology for those prone to forget? Does the lifelog become to such a person what the calculator is to anyone struggling with more than simple arithmetic? A prosthesis to their mind?

What might we learn from diaries and blogs?

Who has benefited from these? What therefore might we gain from a lifelog? It matters who the lifelogger is. However, the lifelog, by the very nature of keeping one, impacts on the life. You don't want to keep a diary and do nothing: it invites you to be adventurous. On the other hand, it may invite you to live within the laws of the land, and moral laws.

Would Pepys have kept a lifelog?

 

 

The diffusion and use of innovations is complex – like people.

Fig. 1. Who's the digital native and who's the immigrant?

There is no evidence to support any suggestion that there was ever such a group as 'digital natives', and it is sensationalist claptrap or lazy journalism to talk of 'millennials' – there aren't any. The research shows the complex and human reality; it is not generational (Kennedy et al., 2009; Jones et al., 2010; Bennett and Maton, 2010). I'm not the only father who knows more and does more online than his kids – we had computers at university in the mid-1980s and in the office within a decade.

Bell and Gemmell fall for the falsehood of the 'millennials' (2009, p. 19).

Fig.2. The devices we use do not split us across generations.

On digital natives or millennials, add that behaviours supposedly attributable only to this younger generation are evident in anyone using these tools and devices – the digitally literate are impatient and easily distracted.

This applies to anyone who spends much time online. It is not age, gender or race related. We all fidget if downloads are slow or we lose a signal. We're just being people. It is not generational. Rather, behaviour with these tools reflects who we are, not what the kit affords.

Fig. 3. Whether you were born before or after this arrival doesn’t make a jot of difference.

Yet you don't hear anyone calling our parents the 'TV generation', or the generation before that the 'Wireless generation'. It is shorthand that is harmless until it is used to define policy.

They refer to those born between 1982 and 2001 as a homogeneous cohort, as if all were born into families where they would have access to gadgets, and later the internet, as a birthright. The figures given by Bell and Gemmell (2009) stick to those in North America – just the US, or Canada too? So what if a few become software millionaires? Others aren't getting jobs at all. And there are plenty of other ways to earn a crust.

Of the 70 million they talk about how many have been interviewed?

When it comes to the use of various online tools and platforms, what actually is their behaviour? It's the same behaviour they'd show out in the real world, at school or in the shopping mall, making and losing friends. And when it comes to blogging, who knows what is going on. The authors assume (2009, p. 20) that there is some kind of truth in what people post – in my experience, blogging for many hours a day since 1999, it is far, far from that. Indeed the honest voice is one in 30,000.

There is a considerable degree of fakery, and blatant fiction.

I am reminded of the entirely fictitious 'Online Caroline' of a decade ago. She posted a sophisticated blog for the era, with photos and video chat. Like Orson Welles fooling an audience over the invasion of Earth, this blog had people calling the police when Caroline's CCTV supposedly logged someone nicking stuff from her flat.

Bell and Gemmell (2009) talk about lifelogging as a panacea.

Fig. 4. The context in which we learn

There are lessons and techniques that have their place. In fact we’re doing a lot of it already. Through several devices or one we are recording, snapping, storing, sharing, loading, compiling, curating, mixing and remembering.

Every example given is a positive, a selected moment on which to build … what about the times of heartache, of parents' arguments and childhood bullying? Do we want those? What about trying a cigarette, getting drunk, being caught in the open with a dodgy stomach, or vomiting?

The authors Gordon Bell and Jim Gemmell (2009), as well as Viktor Mayer-Schönberger (2009), consider four issues in relation to the creation of digital memories:

  1. Record (digitization)
  2. Storage (cheap)
  3. Recall (easy)
  4. Global Access (Mayer-Schönberger, 2009, p. 14)

A fifth should be how this content is managed and manipulated, how selections are made and how it is edited and fed back to the content’s owner, or how it forms another person’s memory when picked up and mashed online.

As Mayer-Schönberger (2009, p. 16) puts it, to cope with the sea of stimuli, our brain uses multiple levels of processing and filtering before committing information to long-term memory.

Could decluttering the hoarder's house be achieved by creating for them a digital archive and putting everything else in the bin?

Human Memory

Fig. 5. How we forget – and where software and tools can play a part in helping us remember: to create more memories and better recall.

We forget (perhaps an implicit result of the second law of thermodynamics) (Mayer-Schönberger, 2009, p. 21). Or is that a fact? A neuroscientist needs to get engaged at this stage. What IS going on in there?
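For what it's worth, the classic textbook picture of 'how we forget' is the Ebbinghaus forgetting curve: retention decays roughly exponentially with time, R = e^(−t/S), where S is a 'strength' that rehearsal increases. A small Python sketch – the strength value of 24 hours and the review multiplier are arbitrary assumptions for illustration, not measured figures:

```python
import math

def retention(t_hours, strength=24.0):
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    t_hours  : time elapsed since learning
    strength : S, memory strength in hours (assumed starting value)"""
    return math.exp(-t_hours / strength)

def after_review(strength, boost=2.0):
    """Toy spaced-repetition step: a successful review is assumed
    to multiply the memory's strength, flattening the next decline."""
    return strength * boost
```

This is the logic behind spaced education: review just before the curve dips too far, and each review leaves a slower-decaying curve behind it.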

Let's say that memory formation could be likened to the aggregation of coral.

This memory has had no opportunity to fix in this way if it is a snapshot of an impression of a moment detached from its context – what was going on, how the person was feeling, what they thought of the events, how these would colour and shape their memory.

We are prone to misattribute.

Language is a relatively recent phenomenon (Mayer-Schönberger, 2009, p. 23). Should we therefore remember in images?

Painting dates back some 30,000 years. Written language is more recent still (some 6,000 years old), as pictographs became cuneiform became an alphabet – so would an oral tradition be of more value?

REFERENCE

Bell, G. and Gemmell, J. (2009) Total Recall: How the E-Memory Revolution Will Change Everything.

Bennett, S. & Maton, K. (2010) 'Beyond the "Digital Natives" Debate: Towards a More Nuanced Understanding of Students' Technology Experiences', Journal of Computer Assisted Learning, 26(5), pp. 321–331. ERIC, EBSCOhost, viewed 13 Dec 2012.

Jones C., Ramanaua R., Cross S. & Healing G. (2010) Net generation or Digital Natives: is there a distinct new generation entering university? Computers and Education 54, 722–732.

Kennedy G., Dalgarno B., Bennett S., Gray K., Waycott J., Judd T., Bishop A., Maton K., Krause K. & Chang R. (2009) Educating the Net Generation – A Handbook of Findings for Practice and Policy. Australian Learning and Teaching Council. Available at: http://www.altc.edu.au/system/files/resources/CG6-25_Melbourne_Kennedy_Handbook_July09.pdf (last accessed 19 October 2009).

Mayer-Schönberger, V (2009) Delete: The Virtue of Forgetting in the Digital Age

 

Why lifelogging as total capture has less value than selective capture and recall.

Beyond Total Capture. Sellen and Whittaker (2010)

Abigail Sellen and Steve Whittaker

Abigail J. Sellen (asellen@microsoft.com) is a principal researcher at Microsoft Research in Cambridge, U.K., and Special Professor of Interaction at the University of Nottingham, U.K.

Steve Whittaker (whittaker@almaden.ibm.com) is a research scientist at IBM Almaden Research Center, San Jose, CA.


You read about people looking at ways to capture everything – to automatically lifelog the lot. They constantly look for new ways to gather and store this data. What’s the point? Because they can? Who are they? Is it a life worth remembering? And do we need it all?!

Far better to understand who and what we are and use the technology to ameliorate our shortcomings, aid those with memory-related challenges, and capture moments of worth tailored to specific needs.

There’s ample literature on the subject. I’m not overly excited about revolutionary change. It is as valuable, possibly more so, to forget, rather than to remember – and when you do remember to have those memories coloured by perspective and context.

My interest is in supporting people with dementia and cognitive difficulties – perhaps those recovering from a stroke, for whom a guided path of stimulation into their past could help 'awaken' damaged memories. If lives are to be stored, then perhaps grab moments as a surgeon operates for training purposes, or recreate a 3:1 tutorial system with AI-managed avatars so that such learning practices can be offered to hundreds of thousands rather than just the privileged, elite or lucky few. Use the device on cars, not people, to monitor and manage poor driving – indeed, wear one of these devices and see your insurance premiums plummet.

What does it mean to support human memory, rather than capturing everything?

Five Rs

  1. Recollect
  2. Reminisce
  3. Retrieve
  4. Reflect
  5. Remember

Despite the device, SenseCam-stimulated memories are as quickly forgotten as any others.
Whittaker et al. (2010) – family archives of photos are rarely accessed.

Of less value than the considerable effort to produce these archives justifies.

  • Can't capture everything
  • Need to prioritise – selectivity (as with SwimTag and swim data)
  • Visual
  • Memory taxonomies
  • Quick and easy to use beats accuracy – good enough
  • Cues, not capture – the data cues memories that are different to the image captured
  • Memory is complex
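The 'cues, not capture' point invites a sketch. Purely illustratively, and in no way Sellen and Whittaker's actual method: instead of storing a frame every few seconds, keep a frame only when the scene has changed enough since the last kept one, so what survives is a handful of evocative cues rather than the whole stream. The frames here are stand-in feature vectors and the change threshold is an arbitrary assumption.

```python
def select_cues(frames, min_change=0.3):
    """Keep only frames that differ enough from the last kept frame.

    frames     : list of feature vectors, one per captured image
    min_change : mean absolute difference needed to keep a frame"""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    kept = [frames[0]]                 # always keep the first frame
    for frame in frames[1:]:
        if distance(frame, kept[-1]) >= min_change:
            kept.append(frame)         # scene changed enough: a new cue
    return kept
```

A near-duplicate stream collapses to its few genuinely different moments – quick and easy, 'good enough' rather than exhaustive, which is the trade-off Sellen and Whittaker argue for.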

Play to the strengths of human memory, help overcome weaknesses.

Incorporating the psychology of memory into the design of novel mnemonic devices opens up exciting possibilities of ways to augment and support human endeavours.

REFERENCE

Sellen, A. J., & Whittaker, S. (2010). Beyond total capture: a constructive critique of lifelogging. Communications of the ACM, 53(5), 70-77.

Whittaker, S., Bergman, O., and Clough, P. Easy on that trigger dad: A study of long-term family photo retrieval. Personal and Ubiquitous Computing 14, 1 (Jan. 2010), 31–43.

 
