
Tag Archives: Semiconductor memory

How much more deeply embedded is a visual memory if you crafted the drawing or painting that is the catalyst for its recall?

Fig.1. Hockney on iPad. Now here’s a lifelog to treasure. How does the master i-paint?

If a moment is to be captured, maybe David Hockney has the answer with an iBrush painting on an iPad. This is closer to the truth of a moment, seen through the artist’s being, their psychological and physiological approach to what they both see and perceive in front of them.

When we do, we form a real memory of the actions required to undertake a task. We build on our initial attempts. The memory to ski, to dance, to swim, to skip, to ride a bicycle, to write, to draw, to play a musical instrument – these cannot be caught by a complex collection of digital recording devices. Perhaps if the player wore a total bodysuit, as actors do to play CGI-generated characters, then we’d have a record of the memory of this experience. It wouldn’t a digital memory make – just a record.

The Semantic Web, which aims to standardize the transmission and translation of information, is an important effort in this area. (Bell and Gemmell, 2009, p. 220)

Is it really necessary, possible or desirable to take the moronic qualities of sports coverage and impose them on a person going about their everyday and far less eventful life? This is the premise for a comedy sketch. (Bell and Gemmell, 2009, p. 224)

Fig. 2.  How we learn. Conceptualised in SimpleMinds while taking the Open University Masters in Open and Distance Education.

All praise to the blogger Mark Stewart, but does such a record need to be entertaining to gain validity, and so a place in Digital Lives at the British Library? (Bell and Gemmell, 2009, p. 225)

Though remembering and talking through a difficult time can also offer its solution. However, there is a difference between hoarding the memory, or keeping it to yourself, and letting it go. The wrong memories permanently on recall will be multiple albatrosses.

Take WW1 veterans, or any war veterans: some want to tell, some want to bury. Why should people in the future feel obliged to record it all, to have more than enough to bring it back?

As written language is such a recent phenomenon it is perhaps not the best or even the most natural way to remember.

Going back, we had the drawn image and the spoken word, but never the absolute of a panoramic or 360° digital picture – rather a moment filtered through the mind and expressed at the fingertips as a painting or drawing.

Fig. 3. Web 1.0 invigorated by Web 2.0 into a water-cycle of, appropriately, clouds.

Weather systems and water courses, urban run–off and transpiration.

Here the flotsam and jetsam of web content, the good, the bad, the ugly and ridiculous, the massive, the moving, the invaluable as well as the ephemera, is agitated, mixed up and circulated, viewed, pinched, reborn, mashed up, bashed up or left to atrophy – but it is in the mix and open for business. Find it and the thought, or image, or sound bite, the message, the idea is yours to dwell upon and utilise. It is education, revelation and knowledge of a new order. Expect more of the same and the unexpected.

 


Digital content, like its liquid equivalent in a digital ocean, has an extraordinary ability to leak out.

Gordon Bell (Photo credit: Wikipedia)

Fig.1. Gordon Bell, ready for action – lifelogging for a decade

The biggest problem with lifelogging as it is conceived by Gordon Bell (2009) is that the camera points away from the protagonist rather than at them.

Far better the record of a person’s facial expressions as they go about their daily business, as an indication of what is going on in their minds – which is otherwise impossible to suggest unless a running commentary is offered. Though of course the contribution of the running commentary, let alone the wearing of the device and its being on, changes the record. This cannot therefore be an objective documentary record, as in a zoological research study. And then, what do you legally do with images you get not just outside, but inside someone’s house?

This content is implicitly for private and singular consumption only, but it would pick up images that others could use in illicit ways.

Fig. 2. The Point, Beadnell. A memory forever of my encounters with nature on this stick of rock pointing into the North Sea.


I don’t believe Bell’s attitudes regarding privacy are headed for extinction, but some people will choose to keep as much as possible private while others will go to great lengths to expose and disclose everything – in both situations, for better and for worse. (Bell and Gemmell, 2009, p. 213)

If 10,000 asthmatics revealed their health-related lifelogs in real time, how soon would researchers be able to act on this? If alcoholics wore a lifelog, would their drinking stop, and would drink-driving be over forever? What a field day psychologists would have, and what they would learn about all kinds of things such as depression, bipolar disorder or ADHD.

Bell introduces us to the Speechome project, where a couple have turned their house into the set of the TV show Big Brother, with cameras everywhere. (Bell and Gemmell, 2009, p. 114)

Their son hasn’t had a choice – there is a ‘total record’ of his development over this period. Is it right to use your own child in this way? And can a record such as this be called a ‘corpus’? It isn’t a scientific study, just a CCTV record. This is where Bell’s language is, throughout, skewed in favour of the system and methodologies he is expounding. He would do far greater justice to his actions if his record were the subject of academic study, published and peer reviewed, and therefore released to academics. Someone will volunteer this if he won’t.

Part of our era is the sharing and connectivity of information and the way it is transformed through collective experience and comment … even trailblazing many others to do the same.

Fig. 3. Stephen Gough, the bloke who refused to put any clothes on – anywhere, ever. A form of obsession.

There is a character from Scotland who insists on living his life naked.

He is consequently arrested repeatedly. It strikes me, I’m afraid, that Gordon Bell might be evangelical about being naked … but will keep his clothes on. Like an omnivore selling the virtues of veganism while eating everything under the sun. Or will Bell’s 10- or 15-year lifelog be released to researchers on his death?

‘Most of us are well along the path to outsourcing our brains to some form of e-memory’, Bell says (2009, p. 119).

Should we scrutinise this for some scientific value? ‘Most of us …’ meaning?

From a study of 1,000, or 2,000 people?

Who are they, where do they live, and what is their educational background?

What is their access to digital kit and networks? Are they representative of the 6 billion on the planet, or just a community of Silicon Valley computer engineers? ‘Most of us … ‘ implies that this could be the self-selecting readership of the book. Who would read it if they couldn’t empathise? ‘Well along the path’ implies that already there is a groundswell, a desired adoption of these kinds of technologies. On what basis is this to be believed?

Are there a number of ‘diffusion of innovation’ studies current in order to measure this? What is the benchmark? What are the parameters of the path?

‘Our brains’ – by what definition either ‘ours’ or even ‘brains’?

A living organ cannot be outsourced, can it? This isn’t like making a donation to a sperm bank. There is no means to store any component of our brains, nor has anything more than a gallery of images or a storage space for documents yet been developed. There is no electronic memory. Even if you want to call a relational database on a hard drive an e-memory, it cannot be one – no amount of juggling the electronic pack of cards will turn an audio file, a still image or a video into the memory. Indeed, the only possible association with a memory is when someone looks at these records and a memory forms in their mind – and what is more, anyone at all looking at, hearing or viewing them will also form memories. That is, they are enablers of memory recall, or thought creation; they are a catalyst, but they can never be the memory.

The idea of a ‘world brain’ that acts as a perfect memory prosthesis to humans is not new.

Fig. 1 H G Wells 

In the late 1930s, British science fiction writer H. G. Wells wrote about a “world brain” through which “the whole human memory can be [ .  .  .  ] made accessible to every individual.” Mayer-Schönberger (2011. p. 51)

I think to keep a lifelog is to invite sharing. It’s so Web 2.0.

It may be extreme, but some will do it, just as people keep a blog, or post a picture taken every day for a year or more. The value and fears of such ‘exposure’ on the web have been discussed since the outset. There are new ways of doing things, new degrees of intimacy.

‘Obliterating the traditional distinction between information seekers and information providers, between readers and authors, has been a much-discussed quality since the early days of the Internet’. Mayer-Schönberger (2011. p. 83)

‘By using digital memory, our thoughts, emotions, and experiences may not be lost once we pass away but remain to be used by posterity. Through them we live on, and escape being forgotten’. Mayer-Schönberger (2011. p. 91)

At a faculty level I have twice created blogs for the recently deceased.

Fig. Jack Wilson MM 1938

It was with greater sadness that I did so for my own parents: my father in 2001 and my mother in 2012. And by recording interviews with my late grandfather, I moved close to the conception of a digital expression of a person. It doesn’t take much to imagine a life substantially ‘lifelogged’ and made available in various forms – a great tutor who continues to teach, a much-loved grandparent or partner to whom you may still turn …


 

Fig. 3. Bell and Gemmell imagine lifelogs of thousands of patients used in an epidemiological survey. (Bell and Gemmell, 2009, p. 111)

This has legs. It ties in with a need. It relates to technologies being used to manage patients with chronic illnesses. It ties in to the training of clinicians too.

 

 

Dreams of technology enhanced learning as a micro-chipped jelly-fish in a digital ocean

Fig. 1. I visualised the biological and digitised memory as a huge, translucent jellyfish.

It is a deliberate exercise to fall asleep with a book or eBook in my hands and in my head. This may even be a mid-evening exercise lasting between 20 and 45 minutes. It works if I remember what goes on and then write it down. There’s no way this can be digitally grabbed.

In the skin of the jellyfish there is a microchip. This microchip represents the digital record, the stamp-like artefact that is a snap of time, a set of images, a sound-bite, a record of the creature’s physiological outputs. The rest of the creature is what isn’t captured – the tendrils of the jellyfish, the synapses that connect the bulk of the creature’s memories, those that defy definition.

It isn’t a digital memory. The visual or sensory capture is a fraction of what forms the memory.

Nor is memory static. It is forming and reforming, diminishing, refreshing and fracturing all the time. To grab a ‘memory’ is to capture and box a set of impossibly complex electro-chemical reactions. It is multi-dimensional too – when I see the Royal Cinema, I feel a sticky ice-lolly stick on my neck, I smell the fusty, cigarette-smoke-embedded chairs and hear the announcement of the serial. And what I see is filtered through my mind’s eye, not a lens.

I don’t see it in high definition.

There are three kinds of memory: (Bell and Gemmell, 2009. p. 53)

  • Procedural (muscle memory)
  • Semantic (facts that you know that aren’t rooted in time and place)
  • Episodic (autobiographical)

Perhaps if I am going to wear a gadget around my neck it should record something I cannot see or sense?

At a different wavelength or spectrum. i.e. telling me something I don’t know.

‘Biological memory is subjective, patchy, emotion-tinged, ego-filtered, impressionistic, and mutable. Digital memory is objective, dispassionate, prosaic, and unforgivingly accurate’. (Bell and Gemmell, 2009, p. 56)

The first is a memory; the second is not. Bell appears to think that ‘memory’ is an artefact capable of a digital record – it is not. Nothing that MyLifeBits has done is a record of a memory – it is simply stuff digitised. For something to be called a ‘digital memory’ it would need to have the attributes of its biological, analogue form.

A memory is a product of our lapses and distractions.

It matters that we daydream, as well as focus. It matters that what we see or experience once needs to be experienced a second, third and fourth time so that meaning aggregates and our minds adapt. What Bell is describing is a massive, relentless, comprehensive attempt at keeping a diary. Would it not be more useful to hire a personal assistant? If you have the wealth to support it, have, like Winston Churchill, a secretary at hand to take dictation. That must be the closest proximity to a record of a memory as it is formed.

With the written word came libraries.

‘If you have ever tried reading an old diary entry of yours from many years ago, you may have felt this strange mixture of familiarity and foreignness, of sensing that you remember some, perhaps most, but never all of the text’s original meaning’. (Mayer-Schönberger, 2009, p. 33)

On Gordon Bell – his goal is nothing short of obliterating forgetting. (Mayer-Schönberger, 2009. p. 50)

I wonder if the quest to make an artificial intelligence like the human mind will be a more fruitful one than trying to turn a human mind into a digital one? That adding AI attributes to a database will achieve more than thinking of the human as nothing more than a bipedal device from which to record external goings-on?

 

The idea of gathering a substantial part of one’s life experience fascinates me, as it has often inspired others.


Fig. 1. Hands by Escher.

The danger is for it to become one’s modus operandi, that the act of gathering is what you become. I recall, many decades ago – possibly when I started to keep a diary at 13 – a documentary, which can no doubt now be found on the Internet, on a number of diarists. These were not the well-known authors or celebrity politicians, but the obscure keepers of the heartbeat, those who would toil for two hours a day writing about what they had done, which was to edit what they’d written about the day before … if this starts to look like a drawing by Escher, then perhaps it illustrates how life-logging could get out of hand, that it turns you inside out, that it causes implosion rather than explosion. It may harm, as well as do good. We are too complex for this to be a panacea or a solution for everybody.

A myriad of book, TV and film expressions of memory – its total recall, false recall, falsehoods and precisions – abound. I think of Leeloo in The Fifth Element learning about humankind by flicking through TV channels.

Fig. 2. Leeloo learns from TV what the human race is doing to itself

Always the shortcut for an alien to get into our collective heads and history. Daryl Hannah does it in Splash too. Digitisation of our existence, in part or in total, implies that such a record can be stored (it can) and retrieved in an objective and viable way (doubtful). Bell (2009) offers his own recollections, sci-fi shorts and novels, and films too, which of course push the extremes of outcomes for the purposes of storytelling rather than seeking the more mundane truth about what digitisation of our life story may do for us.


Fig. 3. Swim Longer, Faster

There are valid and valuable alternatives – we do it anyway when we make a gallery of family photos – that is the selective archiving of digital memory, the choices over what to store, where to put it, how to share then exploit this data. I’m not personally interested in the vital signs of Gordon Bell’s heart-attack prone body, but were I a young athlete, a competitive swimmer, such a record during training and out of the pool is of value both to me and my coach.

I am interested in Gordon Bell’s ideas – the value added, not a pictorial record of the 12–20 events that can be marked during a typical waking day, images grabbed as a digital camera hung around his neck snaps every 20–30 seconds, or more often if it senses ‘change’ – he gets up, moves to another room, talks to someone, browses the web … and, I assume, defecates, eats a meal and lets his eyes linger on … whatever takes his human fancy.

How do we record what the mind’s eye sees?

How do we capture ideas and thoughts? How do we even edit from a digital grab in front of our eyes and pick out what the mind is concentrating on? A simple click of a digital camera doesn’t do this; indeed it does the opposite – it obscures the moment by failing to pick out what matters. Add sound and you add noise that the mind sensibly filters out. So a digital record isn’t even what is being remembered. I hesitate as I write – I hear two clocks. No: the kitchen clock and the clicking of the transformer powering the laptop. And the wind. And the distant rumble of the fridge. This is why I get up at 4.00am. Fewer distractions. I’ve been a sound engineer and directed short films. I understand how and why we have to filter out extraneous noises to control what we understand the mind of the protagonist is registering. If the life-logger is in a trance, hypnotised, daydreaming or simply distracted, the record from the device they are wearing is worse than an irrelevance; it is actually a false cue, a false record.

Fig. 4. Part of the brain and the tiniest essence of what is needed to form a memory

Mind is the product of actions within a biological entity. To capture a memory you’d have to capture an electro-chemical instance across hundreds of millions of synapses.


Fig. 5. Diving off Beadnell Harbour, 1949. My late mother in her teens.

An automatically harvested digital record must often camouflage what might have made the moment a memory. I smell old fish heads and I see the harbour at Beadnell where, when I was a child, fishermen brought in a handful of boats early each morning. What if I smell old fish as I take rubbish to recycle? Or by a bin down the road from a fish and chip shop? What do my eyes see, and what does my mind see?

I love the messiness of the human brain – did evolution see this coming?

In ‘Delete’, Mayer-Schönberger (2009, p. 1) suggests that forgetting, until recently, was the norm, whereas today, courtesy of our digital existences, forgetting has become the exception.

I think we still forget – we don’t try to remember phone numbers and addresses as we think we have them in our phone – until we wipe or lose the thing. In the past we’d write them down, even make the effort to remember the things. It is this need to ‘make an effort’ to construct a memory that I fear could be discombobulated.

I’m disappointed, though, that Mayer-Schönberger stumbles into the misconception of ‘digital natives’ – the mistaken impression that there exists a generation more predisposed and able than any other when it comes to all things digital. Kids aren’t the only ones with time on their hands, or a passion for the new, or even the budget and will to be online. The empirical evidence shows that the concept of a digital native is unsound – there aren’t any. (Jones et al., 2010; Kennedy et al., 2009; Bennett and Maton, 2010; Ituma, 2011)

The internet and digital possibilities have not created the perfect memory. (Mayer-Schönberger 2009. p. 3)

To start with, how do we define ‘memory’?

A digital record is an artefact, it isn’t what is remembered at all. Indeed, the very nature of memory is that it is different every time you recall a fact or an event. It becomes nuanced, and coloured. It cannot help itself.

Fig. 6. Ink drops as ideas in a digital ocean

A memory like drops of ink in a pond touches different molecules every time you drip, drip, drip. When I hear a family story of what I did as a child, then see the film footage I create a false memory – I think I remember that I see, but the perspective might be from my adult father holding a camera, or my mother retelling the story through ‘rose tinted glasses’.


Fig. 7. Not the first attempt at a diary; that was when I was 11½. ‘A day in the life of …’ came to a close with breakfast, after some 500 words.

I kept a diary from March 1973 to 1992 or so. I learnt to write just enough – a few bullet points in a five-year diary in the first years – enough to recall other elements of that day. I don’t need the whole day.
