Tag Archives: science

I LOVE the way the brain will throw you a googly. It’s why we’re human.

And then there’s this – 12 grabs of an Activity System looking like Toblerone.

One per month, one per hour.

This is the point. The thing is

a) a grab in time

b) unstable

c) a construct or model (as well as a theory).

A theory because it can be re-applied (for now).

Fig.1. Its image explains itself.

Engeström and others go to great lengths to remind us that the model/theory of an Activity System is a snapshot in time – that even as we look at it things are moving on, and that the relationships change not simply as a result of the interactions with each other, but because the whole thing shifts.

OK. Take a chocolate triangle of Activity Theory and visualise it in sequence. Better still, drop what you are doing and go and buy some.

Now take a piece and eat it.

The logic remains equally sound when I suggest that by consuming a moment of the Activity System in its last iteration you are enacting what the Internet has done and is doing.

This is what the connectivity of the Web does – the degree and scale of connections is overpowering and consuming.

One step more.

That triangle of chocolate, nougat, almonds and honey that I see as a multi-sensory expression of an Activity System may be digested in the stomach, but its ingredients hit you in the head.

It’s a brain thing.

Which explains my interest in neuroscience.

It happens. It should be visible. It can be measured.

Just reading this, a million Lego characters are kicking a few more million molecular bricks along a dendrite, in part because they must, then again just to see what happens (yes, I have just read ‘Neuroscience for Dummies’). So some stick in odd places. Some will hit the mark (whatever that is), while another will remind you of the very moment you first nibbled on Toblerone.

I LOVE the way the brain will throw you a googly. (As only a fraction of the planet knows cricket, other metaphors are required. I never even played the game as I was deemed rubbish; actually, though no one spotted it in five years of prep school, I needed glasses.)

On the one hand, my interest is to take a knife to all of this, chop it off and put it in the compost bin so that I am left with something that is ‘tickable’, on the other hand I want to indulge the adventure of the composting process.

 


When I think of learning, I think of the minuscule intricacies of the component parts of the brain and, at the same time, the immense vastness of the known universe.

As humans we are eager to understand everything.

It seems appropriate to marry neuroscience with astrophysics, like brackets that enclose everything. From a learning point of view, then, ask as you look at a person or group of people, ‘What is going on?’ and specifically, ‘What is going on in there (the brains), and between them, to foster insight, understanding, innovation and advancement?’

The best interface for this, a confluence for it all, is the Internet and the connectedness of it all.

What has the impact of the Internet been and, based on everything we currently know, where do we presume it is going?

‘If you’re not lost and confused in a MOOC you are probably doing something wrong’


Photo credit: Robin Good

 

‘MOOCs indicate that we are seeing a complexification of wishes and needs’ – so we need a multispectrum view of what universities do in society. George Siemens (18:51, 25th March 2013).

 

A terrific webinar hosted by Martin Weller with George Siemens speaking. Link to the recorded event and my notes to follow.

I took away some key reasons why OER has a future:

 

  1. Hype somewhere between terrifying and absurd.
  2. Reduced state funding will see a private sector rise.
  3. Increased demand from the rest of the world for HE OER.
  4. Certificates growing.
  5. The Gap.
  6. Accelerating time to completion.
  7. Credit and recognition for students who go to the trouble to gain the competencies.
  8. Granular learning competencies, and the gradual learning and badging that stitch competencies together.

 

And a final thought from the host:

‘If you’re not lost and confused in a MOOC you are probably doing something wrong’. Martin Weller (18:45, 25th March 2013)

Which rather means I may be doing something wrong!

I posted to LinkedIn that I am neither confused nor lost. Indeed I have a great sense of where I am and what is going on, have met old online friends, am making new contacts, and enjoy using two of my favourite platforms: Google+ and WordPress. (All the fun’s at H817open)

 

A selection of papers is proving enlightening too:

 

1) Johns Hopkins Bloomberg School of Public Health OpenCourseWare (2009) Kanchanaraksa, Gooding, Klass and Yager.

 

2) The role of CSCL pedagogical patterns as mediating artefacts for repurposing Open Educational Resources (2010) Conole, McAndrew & Dimitriadis

 

3) A review of the open educational resources (OER) movement: Achievements, challenges, and new opportunities. Report to The William and Flora Hewlett Foundation.

 

I’ll post a 500 word review of the above shortly as per H817open Activity 7.

The value lies both in expanding the reasons for OER and in having a handful of objections, negatives and concerns. Like all things in e-learning, there is no substitute for putting in the time and effort.

And a couple of others that look interesting:

 

Disruptive Pedagogies and Technologies in Universities (2012)  Anderson and McGreal

 

Open education resources: education for the world? (2012) Richter and McPherson

 

Traditional Learning vs. the Web

 

Fig.1. Lewes Castle in the snow

Another conception of Open Learning

The traditional, institutionalised, top-down, dictatorial, behaviourist learning of the past.

The trees representing the organic, overwhelming, irresistible drive of the Web.

 

Would you know a digital scholar if you met one? Are we there yet?

Martin Weller, in ‘The Digital Scholar’, looks forward to the time when there will be such people – a decade hence. I suggested, in a review of his book on Amazon, that ’10 months’ was more likely given the pace of change, to which he replied that academia was rather slow to change. That was 18 months ago.

Are there any ‘digital scholars’ out there?

How do we spot them? Is there a field guide for such things?

I can think of a few candidates I have come across, people learning entirely online for a myriad of reasons and developing scholarly skills without, or only rarely, using a library, attending a tutorial or lecture, or sitting an exam. But can they ever be considered ‘scholarly’ without such things? They’ll need to collaborate with colleagues and conduct research.

On verra.

The Educational Impact of Weekly E-Mailed Fast Facts and Concepts

In this study, the authors assessed the educational impact of weekly Fast Facts and Concepts (FFAC) e-mails on residents’ knowledge of palliative care topics, self-reported preparedness in palliative care skills, and satisfaction with palliative care education.

The more papers I read, the more the blur between mystery and comprehension thins – as when learning a foreign language – in terms of judging a paper and its contents. My goal is to be able to conduct such research and write such papers. I feel, understandably, that a first degree in medicine and a second, master’s degree in education would be required at this level. At best I might be able to take on psychology or neuroscience. My preference and hope would be to become part of a team of experts.

Purpose: Educational interventions such as electives, didactics, and Web-based teaching have been shown to improve residents’ knowledge, attitudes, and skills. However, integrating curricular innovations into residency training is difficult due to limited time, faculty, and cost.

What – A clear problem:

Integrating palliative care into residency training can be limited by the number of trained faculty, financial constraints, and the difficulty of adding educational content with limited resident duty hours. (Claxton et al. p. 475 2011)

Who – Participants

Beginning internal medicine interns

Why – Time- and cost-efficient strategies for creating knowledge transfer are increasingly important. Academic detailing, an educational practice based on behavioral theory, uses concise materials to highlight and repeat essential messages. Soumerai  (1990)

How – We designed this study to assess the educational impact of weekly e-mailed FFAC on internal medicine interns in three domains: knowledge of palliative care topics, satisfaction with palliative care education, and self-reported preparedness in palliative care skills.

Methods

This randomized, controlled study of an educational intervention included components of informed consent, pretest, intervention, and posttest.

Fast Facts and Concepts

FFAC are 1-page, practical, peer-reviewed, evidence-based summaries of key palliative care topics, first developed by Eric Warm, M.D., at the University of Cincinnati Internal Medicine Residency Program in 2000.

Intervention

One e-mail containing two FFAC was delivered weekly for 32 weeks to interns in the intervention group.

Pre-test

All participants completed a pretest that assessed knowledge of palliative care topics, self-rated preparedness to perform palliative care skills, and satisfaction with palliative care education.

Method: Internal medicine interns at the University of Pittsburgh and Medical College of Wisconsin were randomized to control and intervention groups in July 2009. Pretests and post-tests assessed medical knowledge through 24 multiple-choice questions, preparedness on 14 skills via a 4-point Likert scale, and satisfaction based on ranking of education quality.

The intervention group received 32 weekly e-mails.

Control Group
No e-mails were sent to the control group.

Post-test
Respondents completed a post-test 1 to 8 weeks after the intervention.

Educational equipoise
All study participants were informed of the content and the online availability of FFAC during recruitment. At the conclusion of the study, both control and intervention groups were given a booklet that contained all the e-mailed FFAC.

Statistical analysis

Descriptive statistics and t tests were used to compare the demographic data between the control and intervention groups. Medical knowledge, preparedness, and satisfaction were compared pretest and post-test within groups by Wilcoxon tests and between groups via Mann-Whitney U tests. The data did not meet assumptions for multivariate analysis due to the small sample size. Only univariate analysis was performed.
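The between-group test named above is easy to demystify with toy numbers. Here is a minimal, stdlib-only sketch of the Mann-Whitney U statistic – the invented scores are mine, not the study’s, and for simplicity ties get average ranks with no tie or continuity correction:

```python
def ranks(values):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # extend j to the end of the run of equal values
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """U statistic for two independent samples (smaller of U1, U2)."""
    pooled = list(a) + list(b)
    r = ranks(pooled)
    r1 = sum(r[:len(a)])                   # rank sum of group a
    u1 = r1 - len(a) * (len(a) + 1) / 2    # U for group a
    u2 = len(a) * len(b) - u1              # U for group b
    return min(u1, u2)

# Hypothetical percentage-point knowledge gains, purely illustrative:
intervention = [22, 18, 25, 15, 20, 17]
control = [8, 12, 5, 10, 9, 7]
print("U =", mann_whitney_u(intervention, control))  # → U = 0.0 (complete separation)
```

In practice you would then compare U against its null distribution (or use a statistics library) to obtain the p-values the paper reports.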

Although traditional academic detailing techniques include educational outreach visits and distribution of printed graphic materials, e-learning techniques such as e-mail delivery of educational content, listservs and Web-based tutorials can also be considered rooted in this behavioral theory given their focus on repeated, concise content.

Pain assessment and management, breaking bad news, communicating about care goals, and providing appropriate medical care for a dying patient are necessary skills for surgery, family medicine, pediatric, obstetrics and gynaecology, physical medicine and rehabilitation, emergency medicine, neurology, radiation oncology, anesthesiology, and psychiatry residents.

Studies that focus on e-mail education interventions have shown that weekly e-mails change the behavior of recipients, improve learner retention of educational content, and that retention improvements increase with the duration over which the e-mails are received (Kerfoot et al. 2007; Matzie et al. 2009).

Results: The study group included 82 interns with a pretest response rate of 100% and post-test response rate of 70%. The intervention group showed greater improvement in knowledge than the control (18% increase compared to 8% in the control group, p = 0.005).

Preparedness in symptom management skills (converting between opioids, differentiating types of pain, treating nausea) improved in the intervention group more than the control group ( p = 0.04, 0.01, and 0.02, respectively).

There were no differences in preparedness in communication skills or satisfaction between the control and intervention groups.

Conclusions: E-mailed FFAC are an educational intervention that increases intern medical knowledge and self-reported preparedness in symptom management skills but not preparedness in communication skills or satisfaction with palliative care education.

REFERENCE

Claxton, R, Marks, S, Buranosky, R, Rosielle, D, & Arnold, R 2011, ‘The Educational Impact of Weekly E-Mailed Fast Facts and Concepts’, Journal Of Palliative Medicine, 14, 4, pp. 475-481, Academic Search Complete, EBSCOhost, viewed 25 February 2013.

Matzie, KA, Price Kerfoot, B, Hafler, JP, & Breen, EM 2009, ‘Spaced education improves the feedback that surgical residents give to medical students: A randomized trial’, Am J Surg, 197, pp. 252–257.

Price Kerfoot, B, DeWolf, WC, Masser, BA, Church, PA, & Federman, DD 2007, ‘Spaced education improves the retention of clinical knowledge by medical students: A randomised controlled trial’, Med Educ, 41, pp. 23–31.

Soumerai, SB, & Avorn, J 1990, ‘Principles of educational outreach (“academic detailing”) to improve clinical decision making’, JAMA, 263, pp. 549–556.

 

Self and Peer Grading on Student Learning – Dr. Daphne Koller

Fig. 1. Slide from Dr Daphne Koller‘s recent TED lecture (Sadler and Good, 2006)

I just watched Daphne Koller’s TED lecture on the necessity and value of students marking their own work (for the fifth time!).

Whilst there will always be one or two who cheat or those who are plagiarists, the results from ‘Big Data’ on open learning courses indicate that it can be a highly effective way forward on many counts.

  1. It permits grading where you have 1,000 or 10,000 students, which would otherwise be very expensive, cumbersome and time consuming.
  2. As a student you learn from the assessment process – from your own work and that of others.
  3. Student assessment of others’ work is close to that of tutors, though it tends to be a little more harsh.
  4. Student assessment of their own work is even closer to the grade their tutor would have given, with exceptions at opposite ends of the scale: poor students give themselves too high a grade and top students mark themselves down.
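Points 3 and 4 can be made concrete with a toy agreement measure: the mean absolute gap between two sets of grades. The numbers here are invented for illustration (they are not Koller’s data), chosen so that self-grades sit closer to the tutor’s than peer grades do:

```python
import statistics

# Invented grades out of 100 for five assignments (illustrative only).
tutor = [72, 85, 60, 90, 78]
peer = [70, 82, 58, 88, 80]    # peers grade slightly more harshly on average
self_ = [73, 84, 61, 89, 77]   # self-grades track the tutor more closely

def mean_abs_diff(a, b):
    """Average absolute gap between two lists of grades."""
    return statistics.mean(abs(x - y) for x, y in zip(a, b))

print("peer vs tutor:", mean_abs_diff(peer, tutor))   # → 2.2
print("self vs tutor:", mean_abs_diff(self_, tutor))  # → 1.0
```

The smaller the gap, the closer that grader sits to the tutor – which is the pattern the ‘Big Data’ from the open courses reportedly shows.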

Conclusions

  •  it works
  •  it’s necessary if learning reach is to be vastly extended
  • isn’t human nature a wonderful thing?! It makes me smile. There’s an expression, is it Cockney? Where one person says to another ‘what are you like?’

Fascinating.

‘What are we like?’ indeed!

REFERENCE

Sadler, P.M. & Good, E. (2006) ‘The Impact of Self- and Peer-Grading on Student Learning’, Educational Assessment, 11:1, pp. 1–31.

 

Even a well-designed quasi-experimental study is inferior to a well-designed randomised controlled trial.

EBSCO Publishing Logo (Photo credit: Wikipedia)

In favour of randomized control trials

Torgerson and Torgerson (2001)

The dominant paradigm in educational research is based on qualitative methodologies (interpretive paradigm).  Torgerson and Torgerson (2001. p. 317)

Despite fairly widespread use of quantitative methods the most rigorous of these, the randomised controlled trial (RCT), is rarely used in British educational research. Torgerson and Torgerson (2001. p. 317)

Even a well-designed quasi-experimental study is inferior to a well-designed randomised controlled trial. Torgerson and Torgerson (2001. p. 318)

Something I’ll need to get my head around – again!

The first problem is the statistical phenomenon of regression to the mean (Cook and Campbell, 1979; Torgerson, C.J., 2000). All groups studied need to have the same regression-to-the-mean chances. Torgerson and Torgerson (2001. p. 318)

Non-randomised quantitative methods are nearly always inferior to the randomised trial. Torgerson and Torgerson (2001. p. 319)

Nearly 40 years ago Schwartz and Lellouch described two types of randomised trial: the ‘explanatory trial’ and the ‘pragmatic trial’ (Schwartz and Lellouch, 1967).

The explanatory trial design is probably the one with which most people are familiar.

This type of study is tightly controlled and, where possible, placebo interventions are used.

Thus, one may take a large group of children all from a similar socio-economic background and attainment and randomly allocate them into two groups. One group receives the intervention under investigation whilst the other receives a dummy or sham intervention.  Torgerson and Torgerson  (2001. p. 320)

The pragmatic trial : the environment in which the trial is conducted is kept as close to normal educational practice as possible.

The children, or schools, are allocated the new intervention at random. A disadvantage of the pragmatic approach is that the trials usually have to be much larger than the explanatory approach but the pragmatic trial approach is probably the most feasible and useful trial design for educational research. Because the trial mimics normal educational practice, there is a greater variation that can make it harder to detect a small effect. To cope with this the sample size needs to be increased accordingly. Torgerson and Torgerson (2001. p. 320)

The underlying idea of a randomised trial is exceedingly simple.

Two or more groups of children, identical in all respects, are assembled. Clearly, the individual children are different but when groups of children are assembled by randomisation, and with a large enough sample size, they will be sufficiently similar at the group level in order to make meaningful comparisons. In other words, the differences are spread equally across both groups, making them essentially the same. Torgerson and Torgerson (2001. p. 321)

The analysis of a randomised trial is actually simpler than other forms of quantitative research because we know the two groups are similar at baseline. Torgerson and Torgerson (2001. p. 321)

Randomisation creates groups with the same proportion of girls, with the same proportion of pupils from various socio-economic and ethnic groups, with the same distribution of ages, heights, weights etc. – this is the simple elegance of randomisation! Torgerson and Torgerson (2001. p. 323)

To avoid observer bias blinded outcome assessment must be undertaken.  Torgerson and Torgerson (2001. p. 324)

Qualitative methodologies are well suited to investigating what happens with individuals; RCTs are appropriate for looking at the larger units relevant to policy makers. Torgerson and Torgerson (2001. p. 326)

REFERENCE

Schwartz, D, & Lellouch, D 1967, ‘Explanatory and pragmatic attitudes in therapeutic trials’, Journal of Chronic Diseases, 20, pp. 637–648.

Torgerson, C, & Torgerson, D 2001, ‘The Need for Randomised Controlled Trials in Educational Research’, British Journal Of Educational Studies, 3, p. 316, JSTOR Arts & Sciences IV, EBSCOhost, viewed 13 February 201

 

Google is making more of us brighter

Nicholas Carr speaking at the 12th Annual Gilder/Forbes Telecosm Conference on May 28, 2008. (Photo credit: Wikipedia)

Is Google Making Us Stupid?  (July/August 2008)

Critique of Nicholas Carr’s piece in the Atlantic.

No.

Google is contributing to putting a university in everyone’s pocket – that is, if you have a smartphone or tablet and Internet access. So remarked Lt. Col. Sean Brady in his MBA blog in the Financial Times in 2011.

In 2008 Nicholas Carr jumped on the Internet / Google bandwagon of scaremongering sensationalism by suggesting that the Internet is doing something to our brains.

He did little better with ‘The Shallows’ (2010), of which the Guardian book review said:

‘Buy it, knife out all the pages, bin a few, shuffle the rest and begin to digest. It may not be what the author intended, but you might learn more, and make some stimulating connections along the way – just like you do on the internet’. (Tucker, 2011)

He holds a B.A. from Dartmouth College and an M.A. in English and American Literature and Language from Harvard University. His BA is presumably in literature, not psychology or engineering or computer science.

He says he’s one of America’s ‘leading internet intellectuals’. Who qualifies this?

Does this make him an authority on neuroscience or digital literacies or even web-science and the Internet? 

Carr opens and closes this article with images and words from the film 2001: A Space Odyssey. Stanley Kubrick isn’t the author; that’s Arthur C. Clarke. At least Clarke had a first-class honours degree in physics and mathematics. But he was writing fiction – rather like Carr. Anyone can quote H. G. Wells, Jules Verne or the scriptwriters of the Star Trek series, but this hardly lends credibility.

This is Journalism for a Sunday Colour supplement.

Not science, not academic research – it is closer to stand-up comedy.

Carr (2008) writes:
‘I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory’. (Carr, 2008)

Carr may feel these things and express them this way, but that does not mean this is what is going on. On the contrary: Carr confuses brain and mind and doesn’t know the difference. What he writes is a personal view, stream of consciousness, spurious, unscientific and easily challenged. Just because Carr thinks it and writes it down doesn’t make it so.

If his concentration drifts then this is an internal cognitive issue, not the propensity of what is going on around him to distract. Some people are more distractible than others, and our ability to focus or drift changes too. When the chips are down and you have to focus for an exam, that is what people generally do.

‘I think I know what’s going on’, Carr writes. ‘For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet’.

Is this put in to support his credentials?

What precisely is he doing here?

Fiddling around in Wikipedia, or writing a blog? His role with Encyclopedia Britannica is pure PR – he had a dig at Wikipedia and is a popular writer, so they asked for a piece from him. Anyone can contribute to Wikipedia – though I hope Carr is keeping his ideas to himself, as the contents of articles like this should, and would, be very quickly shredded as intellectually inaccurate and weak.

‘The Web has been a godsend to me as a writer’, Carr says (2008).

There’s a naive view here that the authors or editors who use hyperlinks know what they are doing, and even that they link to information that has validity. Finding text to support your thesis doesn’t mean that either they or you are correct – where is the counter-argument? The balance?

‘For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind’. (Carr, 2008)

Does ‘flow’ expose Carr’s ignorance of how the mind works? There is no physical flow; rather, there is perception, filtered by various senses, then physically constructed across a multitude of pathways. How much time does the average person actually spend in front of a screen online? Were similar things not also said about TV frying our minds, and before that radio, and before that the book? If you agree with Marshall McLuhan then you must accept that books, radio and TV are still around – we are not being uniquely, exclusively or universally cudgelled by the Internet or Google all our waking lives. Nor does a clever metaphor enhance credibility – this is your feeling and impression, not fact.

Behaviours are very easily changed.

You learn that you cannot have all the candy in the candy store, that you can’t read all the books in the library, and that the paper you pick up you will browse until your eye is caught by something of interest.

Just because ‘what the Net seems to be doing is chipping away my capacity for concentration and contemplation’ does not mean this is the case for everyone else, however much they might nod along in agreement.

‘I’m not the only one’, Carr writes. ‘When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences’. (Carr, 2008)

Which friends? How many? What did you ask them? Where is it written down? Would you describe as mentioning what you say your friends think as evidence, as objective, as anywhere close to the research that is required before such claims as those you make can be substantiated?

‘The more they use the Web, the more they have to fight to stay focused on long pieces of writing’. (Carr, 2008)

Nonsense, of course. Let’s do some research. We may find the opposite: that a little reading excites the curiosity so that a lot more is then done. Short and long can be equally good, or bad; it depends on context, author, presentation – a plethora of things. What has been discovered about the use of blogs? That people read more, or less? That people write more, or less? With 1.5 million words online my 13-year blogging habit may be exceptional, though plenty use a blog to develop books, to share a thesis as it is written, to review books and courses …

“I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.” (Carr, 2008)

So what? This isn’t evidence; it’s hearsay. It’s anecdotal: one person responding to a leading question, and a quote selected for the sole purpose of supporting the hypothesis. These are not so much mental habits as cognitive behaviours. You can’t change ‘mental habits’ the way you buy a new car – how you think and what you remember have been constructed by, and adapted to, your unique experience.

Experiments or research? Does he know the difference? And can the tests he suggests ever be done? Internet use cannot be isolated from everything else we do in our lives: read books and papers, watch TV, listen to the radio, attend a lecture, watch a lecture online, join a webinar or class, read a letter, an email, a blog, listen to a podcast, look at pictures you’ve taken and those that you haven’t.

‘But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think’. (Carr, 2008)

Which ‘scholars’? If they have names, provide them. Just as there is no evidence that readers went back to read the longer article, there is no evidence to say that they didn’t. In all likelihood, having had the opportunity to skim through so many abstracts, they saved what mattered and then did go back and read. In the process they gained through serendipitous inquiry that is usually far more structured than Carr suggests – people click through other papers by the same author, or on the same subject, or from a faculty they are interested in … and may by chance and out of curiosity click on other things too.

How do you think research on how people read microfiche was carried out, and what did it find? What about going through newspapers at the National Newspaper Archive, where invariably, Internet-like, your eye skims over the page and everything it contains? Whilst some aspects of the world around us are changing – a bit – we aren’t changing at all. We are literate, we live longer … but there cannot be some overnight evolutionary change in what we are.

‘It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense’. (Carr, 2008)

Who says? What makes it clear? Who did the research? Where is the research? It may be clear to Carr, but where is the evidence? Where is the data? Where are the peer-reviewed papers that support his view?

What are these ‘new forms of reading’, and how do you define old forms of reading? Have you considered how people read fifty years ago, five hundred, or a thousand years ago? From manuscripts, or in cuneiform on clay tablets? What we are experiencing is an expansion in the ways that we can engage with content, that’s all. We read different things in different ways: from a poster on a hoarding as we drive to a station, to a timetable of train times, the front page of a newspaper, a blog post, business reports, email, text messages …

We read, for better or worse, with greater ease or difficulty partly depending on the way the text is presented and the affordances of the webpage or reading software if we’ve downloaded it. We don’t take handwritten notes because we don’t need to. We may opt to read online or in some cases order a physical book. Far from new forms of reading, a great deal of effort is being put in to retain the best that came out of print – in all its forms, whether a book, leaflet or poster – while trying to avoid clutter, fonts, layouts and links that would disturb how we read; i.e. the technology is designed to suit us, we are not adjusting to suit the technology – far from it. If something doesn’t work we say so and ignore it.

‘Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice’. (Carr, 2008)

Medium of choice over what other choices? So, we watched a lot of TV in the 70s and 80s – and the 90s and 00s, by the way … just as generations in the 40s and 50s sat around the radio. Now we have a multitude of choices – and if our radio, our TV and our paper are delivered to a tablet, what is it that we are doing that is so different to these previous ages?

‘We may well be reading more today …’ – are we, or are we not? And if so, why? And so what? More people are literate. There is easier access to everything. Suggestions based on casual observation or hearsay have it that people who spend too long online in social networks don’t have a life – the evidence suggests that the social life is an extension of what goes on in the real world, and that it encourages people to arrange to meet up.

Maryanne Wolf, Carr tells us,  worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace.

From a review by Douglas (2008)  in the New Scientist I read that ‘Just as writing reduced the need for memory, the proliferation of information and the particular requirements of digital culture may short-circuit some of written language’s unique contributions—with potentially profound consequences for our future’.

The worst error Wolf makes here is to imply that our minds function in any way like a circuit board. A metaphor is a dangerous explanation because it implies something that isn’t the case: the mind is not a circuit board. This is how Marshall McLuhan spoke of what TV and mass media were doing to our brains in the 1950s and 1960s, using the terminology of the time to suggest that some kind of revolution was occurring and that there would be no going back. Not too surprisingly, life goes on very much as it did fifty years ago, even a hundred and fifty or a thousand years ago. We grow up, we live, we may have children, we may raise them, we get older, we die. More of us learn to read, more of us go to university, more of us drive and have TVs – and increasingly, though by no means universally, we have access to a computer and/or the Internet.

When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.

Sometime in 1882, Friedrich Nietzsche bought a typewriter – a Malling-Hansen Writing Ball, to be precise – Carr writes (Carr, 2008). Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page.

So what? Handwriting is different to typing, writing in pencil is different to ink pen or biro, and typing on a mechanical typewriter is different to a laptop, iPad or smartphone. The typewriter here is assistive technology; to the able-bodied, simply an enhancement of what we can do with other kit and tools.

The human brain is almost infinitely malleable, says Carr, using his anecdote about Nietzsche as the clinching evidence. Huh? How scientific is a phrase such as ‘almost infinitely malleable’? Define your parameters. Offer some precision. You don’t because you can’t. And as you’re not after the truth, you’re not about to understate any research.

The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away.

So there were no scientific minds before clocks? Ptolemy? Leonardo da Vinci? Others can add to the list. What has it taken away? Rather, let’s consider what it has brought in terms of access, accessibility and scale. ‘A university in my pocket’ is how an Open University MBA student described learning online, with access to the resources of the course but also to a wealth of other online content.

The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV.

It can be measured, and is – there is a figure, however much it grows, that we are currently below … and what is an ‘intellectual technology’? How do you account for the phenomenon that the Internet, through people connecting, is sending people out of the house and office to meet fellow human beings, to interact away from the keyboard? When you find a like mind, you eventually want to talk to them face to face. In this respect use of the Internet has a counterbalance – it encourages doing other things.

A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration, Carr writes (Carr, 2008).

Distractibility has less to do with the distractions around us, and more to do with our desire or propensity to be distracted. We all know people who can write essays and revise for exams with the radio on or in front of the TV, while others need the hush of a silent library. We are most certainly NOT all the same, which is how Nicholas Carr sees us from the outset.

As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations, Carr writes (Carr, 2008).

In the right place, the mind is very capable of doing this – just as we can learn to play the organ or conduct an orchestra – but that doesn’t mean we don’t also want the clarity of a movie screen or a portrait in a gallery.

Old media have little choice but to play by the new-media rules.

No. The technology permits a better way of doing things, playing to the way the mind is … not what we can turn it into.

Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us.

Nicholas Carr has now fallen into the trap of using a metaphor of choice: the clichéd idea that our minds are somehow like microchips, or like the 100 billion neurones that can be connected in multi-billions of shifting ways.

There are plenty of positive visions of a world brain, from H G Wells to Vannevar Bush.

That’s the essence of Kubrick’s dark prophecy, Carr concludes (2008): as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.

REVIEW OF THE SHALLOWS FROM THE NEW YORK TIMES (Lehrer, 2010)

Carr extends these anecdotal observations by linking them to the plasticity of the brain, which is constantly being shaped by experience. While plasticity is generally seen as a positive feature — it keeps the cortex supple — Carr is interested in its dark side. He argues that our mental malleability has turned us into servants of technology, our circuits reprogrammed by our gadgets. (Lehrer 2010)

There is little doubt that the Internet is changing our brain. Everything changes our brain. What Carr neglects to mention, however, is that the preponderance of scientific evidence suggests that the Internet and related technologies are actually good for the mind.

Carr’s argument also breaks down when it comes to idle Web surfing. A 2009 study by neuroscientists at the University of California, Los Angeles, found that performing Google searches led to increased activity in the dorsolateral prefrontal cortex, at least when compared with reading a “book-like text.” (Lehrer 2010)

LONDON REVIEW OF BOOKS (Holt, 2011)

This is a seductive model, but the empirical support for Carr’s conclusion is both slim and equivocal. To begin with, there is evidence that web surfing can increase the capacity of working memory. And while some studies have indeed shown that ‘hypertexts’ impede retention – in a 2001 Canadian study, for instance, people who read a version of Elizabeth Bowen’s story ‘The Demon Lover’ festooned with clickable links took longer and reported more confusion about the plot than did those who read it in an old-fashioned ‘linear’ text – others have failed to substantiate this claim. No study has shown that internet use degrades the ability to learn from a book, though that doesn’t stop people feeling that this is so – one medical blogger quoted by Carr laments, ‘I can’t read War and Peace any more.’

Hoyt: Plugging In Productively (2012)

Although I have at times experienced the “shallowness” that Carr describes, his views and the sweeping categorization of the Internet as a source of distraction are a simplistic reduction of a larger, more complicated problem. It’s easy to label technology as the danger because extreme assessments are easier to adhere to than calls for using technology in moderation.

I’m not certain how to go about regaining this control and moving myself from my current mode of passive overconsumption, but I think more intentional and purposeful use of the Internet will help me reduce the sense of information saturation I’ve been feeling lately.

REFERENCE

Carr, N (2008) ‘Is Google Making Us Stupid?’, The Atlantic. (Accessed 11 February 2013: http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ )

Carr, N (2010) The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton.

Douglas, K (2007) ‘Review: Proust and the Squid by Maryanne Wolf’, New Scientist, 195 (2623), p. 52. Academic Search Complete, EBSCOhost. (Viewed 8 February 2013)

Holt, J (2011) ‘Smarter, Happier, More Productive’, London Review of Books, 33 (5). (Accessed 11 Feb 2013: http://www.lrb.co.uk/v33/n05/jim-holt/smarter-happier-more-productive )

Hoyt, H (2012) ‘Plugging into productivity’, The Dartmouth, 16 October 2012. (Accessed 11 Feb 2013: http://thedartmouth.com/2012/10/16/opinion/hoyt )

Lehrer, J (2010) ‘Our Cluttered Minds’, The New York Times Book Review. http://www.nytimes.com/2010/06/06/books/review/Lehrer-t.html?_r=1&

Tucker, I (2011) Guardian book review of The Shallows. (Accessed 11 Feb 2013: http://www.guardian.co.uk/books/2011/jul/03/shallows-nicholas-carr-internet-neurology )

The spoken word is crucial to understanding.

Fig.1. Meeting face to face to talk about e-learning – sometimes a webinar won’t do, though more often you have no choice.

‘I don’t know what I mean until I have heard myself say it,’ said Irish author and satirist Jonathan Swift.

Conversation is a crucial element of socialised learning.

Courtesy of a Google Hangout we can record and share such interactions, as in this conversation on and around ‘personal knowledge management’. Here we can both see and hear why the spoken word is so important.

Trying to understand the historical nature of this – how and when the written word, or other symbols, began to impinge on the spoken word – requires investigating the earliest forms of writing and extrapolating from the evidence the importance of the oral tradition, its impact on society and the transition that occurred. After all, it is this transition that fascinates us today as we embrace the Internet.

Humans have been around for between 100,000 and 200,000 years. (Encyclopedia Britannica).

Pigments have been found that are 350,000 years old (Barham, 2013), while there are cave paintings as old as 40,000 years (New Scientist).

Stone Age man’s first forays into art were taking place at the same time as the development of more efficient hunting equipment, including tools that combined wooden handles and stone implements (BBC, 2012). Art and technology therefore go hand in hand – implying that the new tools of the Internet will spawn a flourishing new wave of creation, which I believe to be the case. This era will be as remarkable for the development of the Web into every aspect of our lives as for an epoch-defining renaissance – a new way of seeing things.

We’ve been seeking ways to communicate beyond the transience of the spoken word for millennia.

McLuhan takes us to the spoken word memorised in song and poetry (Lord, 1960, p. 3), while a contemporary writer, Viktor Mayer-Schönberger (2009, p. 25), also talks about how rhyme and meter facilitated remembering. McLuhan draws on 1950s scholarship on Shakespeare and asks us to understand that Lear tells us of shifting political views in the Tudor era as a consequence of a burgeoning mechanical age and the growth of print publishing (Cruttwell, 1955). McLuhan suggests that the left-wing Machiavellianism in Lear, who submits to ‘a darker purpose’ in subdividing his kingdom, is indicative of how society saw itself developing at a time of change in the Tudor period. Was Shakespeare clairvoyant? Did audiences hang on his words as other generations hearkened to the thoughts of H G Wells and Karl Popper, perhaps as we do with the likes of Alain de Botton and Malcolm Gladwell?

‘The Word as spoken or sung, together with a visual image of the speaker or singer, has meanwhile been regaining its hold through electrical engineering,’ wrote Prof. Harry Levin in the preface to The Singer of Tales (p. xii).

Was a revolution caused by the development of and use of the phonetic alphabet?

Or from the use of barter to the use of money?

Was the ‘technological revolution’ of which McLuhan speaks, quoting Peter Drucker, the product of a change in society, or did society change because of the ‘technological revolution’? (Drucker, 1961) Was it ever a revolution?

We need to be careful in our choice of words – a development in the way cave paintings are done may be called a ‘revolution’ but something that took thousands of years to come about is hardly that.

Similarly, periods in modern history are rarely so revolutionary when we stand back and plot the diffusion of an innovation (Rogers, 2003), which Rogers defines as ‘an idea, practice, or object that is perceived as new by an individual or other unit of adoption’ (Rogers, 2003, p. 12). To my thinking, ‘diffusion’ appears to be a better way to consider what has been occurring over the last few decades in relation to ‘technology enhanced communications’, the Internet and the World Wide Web. But to my ears ‘diffusion’ sounds like ‘transfusion’ or ‘infusion’ – something that melts into the fabric of our existence. If we think of society as a complex tapestry of interwoven systems, then the Web is a phenomenon that has been absorbed into what already exists – this sounds like an evolving process rather than any revolution. In context, of course, this is a ‘revolution’ that is only apparent as such to those who have lived through the change; just as baby boomers grew up with television and may not relate to the perspective McLuhan gives it, those born in the last decade or so take mobile phones and the Internet as part of their reality, with no sense of what came before.

Clay tablets, papyri and the printing press evolved. We are often surprised at just how long the transition took.

To use socio-political terms that evoke conflict and battle is a mistake. Neither the printing press, nor radio, nor television, nor the Internet have been ‘revolutions’ with events to spark them akin to the storming of the Bastille in 1789 or the February Revolution in Russia in 1917 – they have been evolutionary.

Are we living in ‘two contrasted forms of society and experience’, as Marshall McLuhan suggested occurred in the Elizabethan Age between the typographical and the mechanical ages, and then again in the 1960s between the industrial and electrical ages, ‘rendering individualism obsolete’? (McLuhan, 1962, p. 1)

Individualism requires definition. Did it come with universal adult suffrage?

Was it bestowed on people, or is it a personality trait? Are we not all at some point alone and individual, as well as part of a family, community or wider culture and society? We are surely both a part and part of humanity at the same time?

Edward Hall (1959) tells us that ‘all man-made material things can be treated as extensions of what man once did with his body or some specialized part of his body’. The Internet can therefore become – and is already – an extension of our minds. A diarist since 1975, I have blogged since 1999 and have put portions of the handwritten diary online too – tagging it so that it can be searched by theme and incident, often charting my progress through subjects as diverse as English Literature, British History, Geography, Anthropology, Remote Sensing from Space and Sports Coaching (swimming, water-polo and sailing). This aide-memoire has a new level of sophistication when I can refer to, and even read, textbooks I had to use in my teens. It is an extension of my mind, as the moments I write about are from my personal experience – there is already a record in my mind.

What is the Internet doing to society? What role has it played in the ‘Arab Spring’? McLuhan considered the work of Karl Popper on the detribalization of Greece in the ancient world. Was an oral tradition manifesting itself in the written word the cause of conflict between Athens and Sparta? McLuhan talks of ‘the Open Society’ in the era of television the way we do with the Internet. We talked about the ‘Global Village’ in the 1980s and 1990s, so what do we have now? Popper (1945) developed the idea that we moved from closed societies, bound together through speech, drum and ear, to open societies functioning by way of abstract relations such as exchange or co-operation – and perhaps on to the entire human family as a single global tribe.

The Global kitchen counter (where I work, on my feet, all day), or the global ‘desk’ if we are sharing from a workspace …

or even the ‘global pocket’ when I think of how an Open University Business School MBA student described doing an MBA using an iPad and a smartphone as a ‘university in my pocket’. You join a webinar or Google Hangout and find yourself in another person’s kitchen, study or even their bed. (Enjoying one such hangout with a group of postgraduate students of the Open University’s Masters in Open and Distance Education – MAODE – we agreed for one session to treat it as a pyjama party. Odd, but representative of the age we live in – fellow students were joining from the UK, Germany, Thailand and the United Arab Emirates). I have been part of such a group with people in New Zealand and California – with people half asleep because it is either very late at night, or very early in the morning.

McLuhan (1965, p. 7) concludes that the ‘open society’ was affected by phonetic literacy …

and is now threatened with eradication by electric media. Written fifty years ago, is it not time we re-appraised McLuhan’s work and put it in context? We need to take his thesis off its pedestal. Whilst it drew attention at the time, it is wrong to suggest that what he had to say in relation to the mass media (radio and TV), even if correct then, offers insight in the era of the Internet. This process of creating an open society has a far broader brief and a far finer grain today – the TV of the sitting room, viewed by a family, is now a smart device in your pocket that goes with you to the lavatory, to bed, and as you commute, in coffee and lunch breaks. It will soon be wearable: not only always on, but always attached as goggles, glasses, ear-piece, strap or badge.

If ‘technology extended senses’ (McLuhan, 1965, p. 8), then the technology we hold, pocket and wear today is a prosthesis to our senses, and to the manner in which the product of those senses is stored, labelled, interpreted, shared, re-lived and reflected upon.

If Mercator’s maps and cartography altered the 16th-century mentality, what do Google Maps and Street View do for ours?

Did the world of sound give way to the world of vision? (McLuhan, 1965, p. 19). What could we learn from anthropologists who compared non-literate peoples with literate ones, the non-literate man with the Western man?

Synchronous conversation online is bringing us back to the power and value of the spoken word – even if it can be recorded, visualised with video and transcribed to form text. The power, nuance and understanding that come from an interchange are clear.

REFERENCE

Barham, L (2013) From Hand to Handle: The First Industrial Revolution. Oxford: Oxford University Press.

Carpenter, E and H M McLuhan (19xx) ‘Explorations in communications’. Acoustic Space

Cruttwell, P (1955) The Shakespearean Moment. New York: Columbia University Press.

Hall, E.T. (1959) The Silent Language.

Lord, A.B. (1960) The Singer of Tales. Cambridge, MA: Harvard University Press.

Drucker, P.F. (1961) ‘The technological revolution: notes on the relationship of technology, science, and culture’, Technology and Culture, 2 (4), pp. 342–351.

McLuhan, M. (1962) The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press.

Mayer-Schönberger, V (2009) Delete: The Virtue of Forgetting in the Digital Age. Princeton, NJ: Princeton University Press.

Popper, K. (1945)  The Open Society and Its Enemies, Volume One. Routledge (1945, reprint 2006)

Rogers, E.M. (2003) Diffusion of Innovations. 5th edn. New York: Free Press.

 
