
Designing for accessibility

H810 Activities : 24:1 – 24:4

There are three main approaches to designing for accessibility, and for accessible e-learning in particular (Seale):

1) Universal design
2) Usability
3) User-centred design.

Form follows Function

‘Software architecture research investigates methods for determining how best to partition a system, how components identify and communicate with each other, how information is communicated, how elements of a system can evolve independently, and how all of the above can be described using formal and informal notations’. Fielding (2000:xvi)

‘All design decisions at the architectural level should be made within the context of the functional, behavioral, and social requirements of the system being designed, which is a principle that applies equally to both software architecture and the traditional field of building architecture. The guideline that “form follows function” comes from hundreds of years of experience with failed building projects, but is often ignored by software practitioners’. Fielding (2000:1)

‘Form follows function is a principle associated with modern architecture and industrial design in the 20th century. The principle is that the shape of a building or object should be primarily based upon its intended function or purpose.’ Wikipedia http://en.wikipedia.org/wiki/Form_follows_function

The American architect Louis Sullivan, Greenough’s much younger compatriot, who admired rationalist thinkers like Greenough, Thoreau, Emerson, Whitman and Melville, coined the phrase in his article The Tall Office Building Artistically Considered in 1896. Sullivan (1896)

‘In 1908 the Austrian architect Adolf Loos famously proclaimed that architectural ornament was criminal, and his essay on that topic would become foundational to Modernism and eventually trigger the careers of Le Corbusier, Walter Gropius, Alvar Aalto, Mies van der Rohe and Gerrit Rietveld. The Modernists adopted both of these equations—form follows function, ornament is a crime—as moral principles, and they celebrated industrial artifacts like steel water towers as brilliant and beautiful examples of plain, simple design integrity’. Wikipedia http://en.wikipedia.org/wiki/Form_follows_function

This rather suggests we ought to be constrained, take a Puritan stance and behave as if we are designing on a shoestring (rather like Cable & Wireless with their LMS, 2012).

Form follows function, ornament is crime

These two principles—form follows function, ornament is crime—are often invoked on the same occasions for the same reasons, but they do not mean the same thing.

If ornament on a building may have social usefulness like aiding wayfinding, announcing the identity of the building, signaling scale, or attracting new customers inside, then ornament can be seen as functional, which puts those two articles of dogma at odds with each other. Wikipedia http://en.wikipedia.org/wiki/Form_follows_function

In relation to e–learning this could be a prerequisite both for a company commissioning the work and for their audience completing it.

‘We don’t do boring’ WB

Even the likelihood that the learning will be taken in, understood and remembered. If the form needs to satisfy various accessibility functions then it will be a different object than had it not. It will be a different form too if these components are an add-on rather than part of the original brief and build. (Car, mobility car – add wing mirrors, headlights, bumpers, wipers and a horn. Does a car satisfy Section 5 when, like the Google self-drive car, independent road travel becomes accessible to the visually and mobility impaired?)

Sustainable Futures Graphic

All of these approaches have their origins outside of e-learning, in disciplines such as:

  • Human-computer interaction
  • Assistive technology
  • Art and design.

Seale’s perspective on design is topsy-turvy (IMHO).

  • First of all we have ‘Design’ universally – in all things including art, but also engineering, architecture, publishing, fashion, advertising and promotion …
  • Then, secondly, human–computer interaction to embrace software, hardware and web design,
  • And finally, most narrowly and with many caveats, ‘assistive technology’.

Two other approaches to design also have an influence on designing for accessibility:

  • Designing for adaptability. JFV – repurposing, enhancing, simplifying …
  • Designing with an awareness of assistive technologies. JFV – adjusting (forewarned is forearmed)

Each of these approaches will be discussed in turn.

Universal design

The universal design approach to designing for accessibility is also known as

  • design for all
  • barrier free design
  • inclusive design.

The underpinning principle of universal design is that in designing with disability in mind a better product will be developed that also better serves the needs of all users, including those who are not disabled.

‘Better’ is a value judgement that is impossible to measure. We can be sure that the product will be different if designed with disability in mind; might it, however, be compromised, even rendered less desirable?

‘Universal design is the process of creating products (devices, environments, systems, and processes), which are usable by people with the widest possible range of abilities, operating within the widest possible range of situations (environments, conditions, and circumstances)’ (Vanderheiden 1996).

User–centered design is ...

Barnum (2002) cites three principles that are central to understanding user–centered design:

(1) “early focus on users and tasks,”

(2) “empirical measurement of product usage,”

(3) “iterative design” (p. 7).

Elaborating, Barnum also suggests that user–centered design encourages product developers to

• gather information from users before product development begins,
• identify tasks that users will want to perform before product development begins,
• include users in the product development process,
• use an iterative product development life cycle.

I’m not convinced that designers or educators, unless they are required to do so, put the needs of the disabled first. The problem has to be tackled bearing in mind a myriad of parameters, controls and history. Take the advances in wheelchairs for Paralympic sport … there is, however, a constant need to return to first principles, the basic needs, drawing afresh upon current best practice, experience, tools and knowledge. Design is an iterative process, where solutions may be stumbled upon through serendipity.

Things have moved on considerably in the last few years (2007 >).

‘The greatest potential of these and other on-demand services is their ability to allow users to call up assistance for only those minutes or even seconds that they need them. This provides maximum independence with minimum cost, yet allows the assistance to be invoked as needed to address the situations that arise’. Vanderheiden (2007)

e.g. Philips Centre for Industrial Design

‘Thus the thermostat can be very large and easy to operate when it is needed and invisible the rest of the time. It can also be large for one user in the house while other users, who do not need a large thermostat, can have a more modest size thermostat projected for them. Someone in a wheelchair could have theirs projected (and operated) low on the wall while a tall person could have it appear at a convenient height for them’. Vanderheiden (2007:151)

‘Finally, research on direct brain control is demonstrating that even individuals who have no physical movement may soon be able to use thought to control devices in their environment directly’. Vanderheiden (2007:152)

BBC 4 reports a man in a vegetative state for 10 years communicating that he is in no pain (BBC, 13 Nov 2012). See also Taylor et al. (2002).

Mr Routley suffered a severe brain injury in a car accident 12 years ago.

None of his physical assessments since then have shown any sign of awareness, or ability to communicate.

But the British neuroscientist Prof Adrian Owen – who led the team at the Brain and Mind Institute, University of Western Ontario – said Mr Routley was clearly not vegetative.

“Scott has been able to show he has a conscious, thinking mind. We have scanned him several times and his pattern of brain activity shows he is clearly choosing to answer our questions. We believe he knows who and where he is.”


‘As we move to such things as the incorporation of alternate interface capabilities and natural language control of mainstream products, we begin to blur the definitions that we have traditionally used to describe devices, strategies and even people. Those of us who wear glasses are not seen to have a disability (even if we cannot read without them), because they are common and everybody uses them. In fact, the definition of legal blindness is based on one’s vision after correction. However, not all types of disability are defined by people’s ability “after they have been outfitted with all appropriate assistive technologies”’. Vanderheiden (2007:154)

‘Whereas remote control of devices by voice would be an assistive technology by someone with a spinal cord injury today, in the future it may just be the way the standard products work. They still are “assistive technologies” in that they allow the individual to do things that they would not be able to do otherwise, but they are not “special” assistive technologies’. Vanderheiden (2007:155)

‘Apple computer, for example, has announced that their next operating system will have a screen reader for individuals who are blind built directly into the operating system’. Vanderheiden (2007:156)

‘We should be careful that our lack of imagination of what is possible does not translate into limitations that we place on the expectations of those we serve, and the preparation we give them for their future’. Vanderheiden (2007:157)

Thompson (2005) offers a number of examples that illustrate how universal web design can benefit a range of users.

For example, text alternatives for visual content (e.g. providing ALT tags for images) benefits anyone who doesn’t have immediate access to graphics. While this group includes people with blindness, it also includes those sighted computer users who surf the web using text-based browsers, users with slow Internet connections who may have disabled the display of graphics, users of handheld computing devices, and users of voice web and web portal systems including car-based systems.
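Thompson’s point about text alternatives can also be checked mechanically. As a minimal sketch using only Python’s standard library (the page fragment here is made up for illustration), the following flags images that lack any text alternative:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            self.missing.append(a.get("src", "(no src)"))

page = '<p><img src="logo.png" alt="Site logo"><img src="chart.png"></p>'
checker = AltTextChecker()
checker.feed(page)
print(checker.missing)  # → ['chart.png']
```

A check like this only proves an alt attribute exists, not that its text is meaningful – that judgement still needs a human reviewer.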

Central to the universal design approach is a commitment that products should not have to be modified or adapted.

Design once.

Responsive design sees content created once and adjusting for multiple devices – if these devices could include assistive technology (hardware, software and websites), then from this single starting point the content would be reversioned for the user’s tastes and needs.

  • Providing a clear, simple design, including a consistent and intuitive navigational mechanism, benefits a variety of users with disabilities, but the result of doing so is a website where users can easily and efficiently find the information they’re looking for. Clearly, this is a benefit to all users. Thompson (2005)

They should be accessible through easily imposed modifications that are ‘right out of the box’ (Jacko and Hanson 2002: 1).

Products should also be compatible with users’ assistive technologies:

‘This practice, when applied to the web, results in web content that is accessible to the broadest possible audience, including people with a wide range of abilities and disabilities who access the web using a wide variety of input and output technologies’ (Thompson 2005).

Whilst some purists argue that universal design is about designing for everyone, the majority of proponents agree that designing for the majority of people is a more realistic approach (Witt and McDermott 2004; Bohman 2003a).

But it is no longer legally permissible to take this stance in the case of access, either in the UK or the US – and presumably the same applies across Europe?

‘This exclusion can affect not only older and disabled people but also socially disadvantaged people affected by financial, educational, geographic and other factors’. Bühler C, Engelen J, Velasco C, et al. (2012)

Vanderheiden (1996), however, argues that ‘there are NO universal designs; there are NO universally designed products. Universal design is a process, which yields products (devices, environments, systems, and processes), which are usable by and useful to the widest possible range of people. It is not possible, however, to create a product, which is usable by everyone or under all circumstances’.

There are some who feel uncomfortable with the principles of universal design because they appear to relieve educators of the responsibility of addressing individual student needs.

Rather, as the Khan Academy have shown, it frees up teachers to provide the individual touch.

‘Digital inclusion policies are drawing attention to the ‘missing 30%’ who are currently excluded and the need to ensure the rights of all citizens to participate in and benefit from these new opportunities. This missing 30% embraces social, economic and political disadvantage as well as issues of ageing and disability. The established principles and practices of Design for All offers an opportunity to ensure that designers and developers working in ICT related fields are able to respond to the challenge of creating new systems, services and technologies that meet these broad demands of digital inclusion’. Bühler C, Engelen J, Velasco C, et al. (2012)

For example Kelly et al. (2004) argue that since accessibility is primarily about people and not about technologies it is inappropriate to seek a universal solution and that rather than aiming to provide an e-learning resource which is accessible to everyone there can be advantages in providing resources which are tailored to the student’s particular needs.

This thinking is becoming redundant, or at least the scale of inclusion in such a category is decreasing – the less impaired (visual, hearing, mobility, cognitive) can be served by the mainstream.

There are also some who warn that no single design is likely to satisfy all different learner needs.

The classic example given to support this argument is the perceived conflict between the needs of those who are blind and those who have cognitive disabilities. For example, the dyslexic’s desire for effective imagery and short text would appear to contradict the blind user’s desire for strong textual narrative with little imagery. However Bohman (2003b) provides a counterbalance to this argument stating that while the visual elements may be unnecessary for those who are blind, they are not harmful to them. As long as alternative text is provided for these visual elements, there is no conflict.


The core is like a Christmas tree; the components that create accessibility – always to a degree and never comprehensive – are the lights and decorations, as is the design richness added for branding and to inspire.

Those with cognitive disabilities will be able to view the visual elements, and those who are blind will be able to access the alternative text.

Brajnik (2000) defines usability as the:

‘effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments’.

Effectiveness, efficiency and satisfaction can be achieved by following four main design principles:

  • make the site’s purpose clear
  • help users find what they need
  • reveal site content
  • use visual design to enhance, not define, interaction design (Nielsen 2002).

Many practitioners talk of usability and accessibility in the same breath, as if they were synonymous. For example, Jeffels and Marston (2003) write:

‘Using legislation as the stick will hopefully cease to be as important as it currently is; the carrot of usability will hopefully be the real driving force in the production of accessible online materials one day soon.’

There may be good reason for assuming synonymy between accessibility and usability.

For example, Arch (2002) points out that many of the checkpoints in the Web Content Accessibility Guidelines (WCAG) are actually general usability requirements and a number of others are equally applicable to sections of the community that are not disabled, such as those who live in rural areas or the technologically disadvantaged.

Many proponents also argue that a usability design approach is essential in accessibility because in many cases current accessibility standards do not address making web pages directly usable by people, but rather address making web pages usable by technologies (browsers, assistive devices). However, usability and accessibility are different, as Powlik and Karshmer (2002: 218) forcefully point out: ‘To assume accessibility equates to usability is the equivalent of saying that broadcasting equates to effective communication.’

What is the difference between usability and accessibility?

One simple way to understand the difference between usability and accessibility is to see usability as focusing on making applications and websites easy for people to use, whilst accessibility focuses on making them equally easy for everyone to use. Seale C7

Others see the difference as one of objectivity and subjectivity, where accessibility has a specific definition and is about technical guidelines and compliance with official norms.

Usability, however, has a broader definition, is resistant to attempts at specification and is about user experience and satisfaction (Larsson and Stahl 2003; Craven 2003b; Richards and Hanson 2004).

Whilst usability and accessibility are different, it is clear that they do complement one another.

However, it does not follow that if a product is accessible, it is also usable.

This is because accessibility is necessary for usability, but not sufficient; whilst for accessibility, usability is not necessary, but it is desirable. In a compelling indictment of the use of automatic accessibility checking tools Sloan and Stratford (2004) argue that ‘Accessible remains worthless without usable – which, depending on the aims of the resource, could imply “valued”, “trustworthy”, “worthwhile”, “enjoyable”, “fun”, or many other desirable adjectives.’

  • Statement 1: I’m keeping it simple, I don’t want to fall foul of the legislation
  • Statement 2: I’d like to have done something better, but I had to make it accessible. Shame really.
  • Statement 3: This resource is fully accessible – Bobby says so.

‘It’s essential to remember that there are two levels to multimedia accessibility. On the one hand multimedia developers should strive to use techniques of best practice, standards, guidelines, and features of multimedia technology to maximise the accessibility of a particular piece of multimedia content.

But on the wider scale, designers should never forget that the use of multimedia itself is an enabler, a way of making information and concepts more accessible to people.

By supplementing textual content with rich media such as animation, video or audio clips, information flow and the learning process can be greatly enhanced. So where it’s just not possible to make a specific piece of multimedia accessible to some people, think in terms of ‘how can I provide an alternative?’, rather than thinking ‘I’d better not use it’. Sloan and Stratford (2004)

Before designing a multimedia resource ask yourself:

  • What are the aims of this piece of multimedia?
  • What are the pedagogic goals?
  • How does the resource fit in with the rest of the learning environment?
  • Is it absolutely necessary that all students use it?
  • What are the potential barriers to using these multimedia? What level of sensory or motor abilities is required?
  • How might specific learning difficulties or other cognitive impairments affect the ability to use the resource?
  • What alternatives already exist and what alternatives can be reasonably created? How was the subject or topic previously taught?
  • What is the best way that the information can be presented or learning objectives reached such that:

a) the majority can still get the most out of the multimedia resource?
b) those affected by accessibility barriers get the same information and experience in a way best suited to them? Sloan and Stratford (2004)

Thatcher (2004) provides a pertinent example of a website of one US Federal Agency to demonstrate how accessibility does not equal usability.

The site passed most automated accessibility checks, but failed on a usability check. Thatcher notes that the designers used the techniques that they were supposed to use, such as using alt-text on images. However, he concluded that they must have blindly used those techniques without understanding why. Otherwise they would not have provided alt-text on invisible images that have been used purely for spacing, requiring a screen reader to read the text over and over (a total 17 times), when in fact it was information a user did not need to know.
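Thatcher’s spacer-image failure can be expressed as a check: decorative images should carry an empty alt="" so that screen readers skip them, while a non-empty alt on a spacer gets read aloud over and over. A sketch of such a check (the ‘spacer’ filename test is an illustrative heuristic for this example, not a general rule):

```python
from html.parser import HTMLParser

class SpacerAltChecker(HTMLParser):
    """Collects decorative/spacer images whose alt text would be read aloud."""
    def __init__(self):
        super().__init__()
        self.noisy = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        # Heuristic assumption: a filename containing "spacer" is decorative.
        looks_decorative = "spacer" in a.get("src", "").lower()
        # Decorative images need alt="" (empty) so screen readers skip them.
        if looks_decorative and a.get("alt", "") != "":
            self.noisy.append(a["src"])

page = ('<img src="spacer.gif" alt="spacer image">' * 3 +
        '<img src="spacer.gif" alt="">')
checker = SpacerAltChecker()
checker.feed(page)
print(len(checker.noisy))  # spacer images a screen reader would announce
```

This is exactly the distinction an automated conformance check misses: both pages ‘have alt-text’, but only one of them is usable with a screen reader.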

Usability design methods

Usability is essentially about trying to see things from the user’s perspective and designing accordingly. Seale, C7 (2006)

Seeing things from the user’s perspective can be difficult, as Regan (2004a) acknowledges when discussing Flash design and screen readers. In order to see things from a user’s perspective more easily, usability designers may employ user profiles, investigate user preferences or conduct user evaluations (testing).

Personas as used at the OU; also see the examples that Clive Shepherd uses in The New Learning Architect.

For example, Kunzinger et al. (2005) outline how IBM has employed user profiles (personas of users with disabilities) in an attempt to make their products more usable. The personas are described as composite descriptions of users of a product that embody their use of the product, their characteristics, needs and requirements.

According to Kunzinger et al. (2005) these personas engage the empathy of developers for their users and provide a basis for making initial design decisions.

Gappa et al. (2004) report how an EU project called IRIS investigated the preferences of people with disabilities with regard to interface design of web applications as well as user requirements of online help and search engines. The results showed that user preferences for information presentation vary a great deal and that solutions would require comprehensive user profiling.

Bryant (2005) documents the results of a usability study in which four participants from each of the seven target audiences for the website were asked to undertake tasks from one of seven ‘scenarios’, one for each target audience. Participants were observed and videotaped while undertaking these tasks.

This sounds similar to the IET computer labs, where there are frequent detailed studies of web usability and accessibility with a generous supply of assistive technology, video, eye-tracking and two-way mirrors.


Paciello (2005) argues that engaging users and applying usability inspection methods are the cornerstones for ensuring universal accessibility.

Enhancing accessibility through usability inspections and usability testing

This paper proposes usability methods including heuristic inspections and controlled task user testing as formulas for ensuring accessible user experience.


‘Unfortunately, the presence of automated accessibility evaluation software often provides limited, unreliable, and difficult to interpret feedback. By its very nature, automated testing is incapable of emulating true user experience’. Paciello (2005:1)

Time after time, vendors present “508 compliant” software only to find out that federal employees and citizens cannot use the application with their assistive technologies.

We suggest that the problem involves incomplete design and testing methodologies. We further suggest that the best solution to ensuring accessible user experience (AUE) involves a combination of automated testing tools, expert usability inspections, and controlled task user testing.

In their book, “Designing Web Sites that Work – Usability for the Web”, Brinck, Gergle, and Wood note the following:

“Many typical guidelines for accessibility stress the use of standard technologies or recommend providing standardized fallbacks when using nonstandard technologies. In particular, most advise following HTML standards strictly. Unfortunately, the real world is rather messy, and most web browsers do not follow HTML standards exactly, making strict adherence to the standards less accessible in practice. The only certain way to ensure accessibility is to follow the recommendations as well as possible and then test the site with users with disabilities. ”


The ideal then is to broaden the scope of user inclusiveness by engaging users with disabilities through a variety of usability testing methods and iterative test cycles, including:

• Participatory design
• Rapid prototyping
• Contextual inquiry
• Heuristic inspections
• Critical incident reporting
• Remote evaluation
• Controlled task user testing

Essentially, there are four basic evaluation methods:

• Automatic (using software tools)
• Formal (exact models/formulas to calculate measures)
• Informal (experienced evaluators conducting usability inspections)
• Empirical (usability testing involving users)

There are a number of different usability inspection methods. Each has its own strengths and purpose.

These methods include (but are not limited to) the following:

• Heuristic evaluations (expert reviews)
• Guideline and Standards reviews (conformance review)
• Walkthroughs (user/developer scenarios)
• Consistency (family of products review)
• Formal usability inspections (moderated & focused inspection)
• Feature inspections (function, utility review)

During the heuristic inspection, the evaluator examines several user interface areas, including:

• Ease of Use
• Error Seriousness
• Efficiency
• UI “pleasantness”
• Error Frequency

The resulting recommendation report should feature the following:

• Identify problems and suggest redesign solutions
• Prioritize list of usability problems based on rate of occurrence, severity level, & market/user impact
• Where possible, estimate implementation cost & cost-benefit analysis

To conduct an expert heuristic inspection, we recommend the following process:

• Form a team of 3-5 specialists to perform individual inspections against list of heuristics
• Studies show that use of 4-5 evaluators result in 80% identification of accessibility problems. More is better!
• Evaluators require guidelines, standards, usability guidelines, and UI design experience
• Perform a comparative analysis of collected data, formulate report
• Implement formal inspection process
• Appoint a moderator to manage the team
• Plan the inspection, conduct kick-off meeting, develop preparation phase for individual reviews, conduct main inspection, end with follow-up phase to assess effectiveness of the inspection
• Plan iterative user testing
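The ‘80% identification with 4–5 evaluators’ figure above matches Nielsen and Landauer’s model, in which the share of problems found by n evaluators is 1 − (1 − λ)^n for a per-evaluator detection rate λ. A small sketch (λ = 0.31 is Nielsen’s reported average across studies, used here as an assumption):

```python
def proportion_found(evaluators: int, detect_rate: float = 0.31) -> float:
    """Nielsen's model: share of usability problems found by n evaluators.

    detect_rate is the average probability that a single evaluator spots
    any given problem; 0.31 is Nielsen and Landauer's reported average,
    used here as an illustrative assumption.
    """
    return 1 - (1 - detect_rate) ** evaluators

for n in (1, 3, 5):
    print(n, round(proportion_found(n), 2))  # 1 → 0.31, 3 → 0.67, 5 → 0.84
```

The curve flattens quickly, which is why adding a sixth or seventh evaluator buys relatively little: five evaluators already catch roughly 84% of problems under this model.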

To ensure that the inspection results in qualitative accessibility results, the following accessibility heuristics should be reviewed within any given user interface:

• Simple & natural dialogue
• Familiarity (speak the user’s language)
• Minimize the user’s memory load
• Consistency
• Feedback
• Clearly marked exits
• Shortcuts
• Good error messages
• Prevent errors
• Help & Documentation

Conducting Usability Testing

Why conduct iterative user tests?

What advantages do they have over other methods of ensuring accessibility to software or web sites? The fundamental premise for testing users is to validate their experience with a given interface.

We know that automated testing rarely results in capturing true user experience and heuristic evaluations cannot address all usability issues. Expert evaluations cannot fully emulate user experience (unless the inspector has a disability) and task analysis is generally not part of a heuristic evaluation.

On the other hand, the benefits of user testing are many:

• No preconceived notions, expectations, or application prejudices
• Validates user requirements
• Confirms heuristic evaluations
• Builds client confidence.
• Assures AUE (accessible user experience)

Setting up to conduct user testing with individuals with disabilities is not a difficult task. We suggest the following process:


• Establish an accessibility center to perform user testing (Hint: collaborate with disability organizations)
• Recruit sample users and ensure:
  • Cross-disability sampling
  • Multi-AT configurations
  • Variable user experience levels (novice, average, expert)

Involve client developers and engineers as observers. Keep in mind that this is not beta testing. You are not looking for programmatic “bugs”. You are looking for user interface design flaws that prevent a user with a disability from using a software application in combination with their assistive technologies.

If permissible, video record sessions. Moderators or test facilitators should also note the following:

• Do not “lead” participants
• Perform iterative studies
• Pay participants and thank them!


To summarize, the key to accessible user interfaces is to ensure an accessible user experience (AUE).

AUE is quantified by combining two usability inspection methodologies:

  • Heuristic evaluations
  • User testing

Engaging users and applying usability inspection methods are the cornerstones for ensuring universal accessibility.

Whilst most of us are now aware of the existence of accessibility statements and logos on websites, Shneiderman and Hochheiser (2001: 367) propose that web designers should also place usability statements on their sites in order to ‘inform users and thereby reduce frustration and confusion’.

User centred design

What is user-centred design?

User-centred design (UCD) is a development process defined as having three core principles (Alexander 2003b; Gould & Lewis, 1985: 300-11).

1) The first is the focus on users and their tasks.

UCD involves a structured approach to the collection of data from and about users and their tasks. The involvement of users begins early in the development lifecycle and continues throughout it.

2) The second core principle is the empirical measurement of usage of the system.

The typical approach is to measure aspects of ease of use on even the earliest system concepts and prototypes, and to continue this measurement throughout development.

3) The third basic principle of UCD is iterative design.

The development process involves repeated cycles of design, test, redesign and retest until the system meets its usability goals and is ready for release.

The goal of user-centred design is to make a system more usable.

Jacko and Hanson (2002: 1) argue that user profiling, or understanding the specific characteristics and interaction needs of a particular user or group of users with unique needs and abilities, is necessary to truly understand the different interaction requirements of disparate user populations. This understanding of diverse users’ needs will help to increase accessibility.

User centred design in the field of disability and accessibility is attractive because it places people with disabilities at the centre of the design process, a place in which they have not traditionally been. As Luke (2002) notes: ‘When developers consider technical and pedagogical accessibility, people with physical and/or learning disabilities are encouraged to become producers of information, not just passive consumers.’

A range of studies have reported to be user or learner centred in their design approach and in doing so have employed a range of techniques:

  • User testing (Smith 2002; Alexander 2003b; Gay and Harrison 2001; Theofanos and Redish 2003);
  • User profiling (Keller et al. 2001);
  • User evaluations (Pearson and Koppi 2001).

‘Networks such as the Web, Intranets or dedicated broadband networks are being used to teach, to conduct research, to hold tutorials, to submit assignments and to act as libraries. The primary users of Web based instruction include universities, professional upgrading, employment training and lifelong learning. Those that can benefit most from this trend are learners with disabilities. Web based instruction is easily adapted to varying learning styles, rates, and communication formats. Issues of distance, transportation and physical access are reduced. Electronic text, unlike printed text, can be read by individuals who are blind, vision impaired, dyslexic and by individuals who cannot hold a book or turn pages.’ Gay and Harrison (2001)

User Training:

The eight participants received, in addition to basic Internet skills, training from ATRC staff on their respective assistive technologies, and training from the CAT staff on basic web based instructional design. Training occurred over a period of five to eight 3-hour sessions depending on the severity and type of disability, the type of assistive technology being used, and the participants’ experience with computers and the Internet. It was expected that most of the participants would be novice users requiring instruction in technologies and the Internet. Gay and Harrison (2001)

Instrument development:

The instrument used to assess the accessibility of courseware tools was based on the WAI Accessibility Guidelines, which outline recommendations for creating accessible Web based documents (Treviranus et al., 1999).

Observer Training: an independent observer was trained to identify and distinguish between difficulties associated with the use of assistive technologies, and those associated with access to courseware tools. Gay and Harrison (2001)

User Testing:

Each of the participants, after being trained on their respective assistive technologies, was observed to record their ease or difficulty in accessing each of the tools outlined by Landon (1998). In five to ten three-hour sessions (depending on the rate at which the participant progressed), the trained observer guided each participant through participation in a Web based course module using each of the courseware tools being assessed. A module is synonymous with a weekly class meeting in the traditional educational setting, providing course notes, discussions, resources, exercises, and testing. Gay and Harrison (2001)

However, Bilotta (2005) warns that if accessibility guidelines are translated too literally, there is a risk that unsuccessful sites will be created that ignore the intersection of UCD and the needs and tasks of people with disabilities.


This timely presentation proposes “Case-Study” based approaches to best practices for creating highly usable accessible web environments and a model for evolution of standards creation. Bilotta (2005)


Section 508 of the Rehabilitation Act along with an abundance of other accessibility laws, policies, standards and guidelines are rapidly heightening public awareness of the possibility of an accessible web. These laws and guidelines, coupled with an increased clarification of potential market share as well as encroaching litigation have led individuals, private companies, industry giants, government agencies and educational institutions alike to voraciously pursue accessibility practices for their online content. Bilotta (2005)

The Interpretation Problem

Though awareness of access laws is at an all time high due to mainstream publicity, success in implementing usable and useful accessible web content still remains low. Section 508 and the W3C’s Web Content Accessibility Guidelines (WCAG) are free and readily available for anyone with time to read them, yet the interpretation and implementation process of these laws and guidelines often produces further access barriers. Industry and academia alike, mandated by either the Federal Government or a Business Executive, are attempting to comply with some sort of accessibility standard. Bilotta (2005)

The (Mis)Informed Decision

Which level to choose? What laws does my product fall under? Which guidelines will be most beneficial? Will any law conflict with our business model? These are some of the questions that decision-makers face. Surprisingly, the decision is often based only on surface investigation of the information available. Often, the person implementing a company-wide access is someone who knows little or nothing about accessibility policy or the technology solutions necessary to comply with Federal Law. Bilotta (2005)

The (Over) Implementation

Once the decision has been made as to which standards, laws or guidelines to conform to, the entire course of action is too often transferred to the development team. The implementation of the accessibility standards is left up to a group of developers being told that they must do this work, but not how and certainly not “why.” Developers faced with the challenge of generating accessible web content attempt to translate these standards into solutions without clearly understanding the goal of their efforts: Improving the everyday experiences of people with disabilities through a combination of accessible products, services, and Assistive Technology (AT). It remains a rare occurrence that a database developer has any experience or knowledge of an Operating System Reader such as JAWS or other alternate web browsing strategies. Bilotta (2005)

The Letter of the Law

It is at this point in the process when the well-intended distributions of laws and guidelines becomes one of the largest access barriers for web content. In a push to conform absolutely to the technical standards of accessibility, web developers too often create unsuccessful sites by ignoring the intersection of User Centered Design and the needs and tasks of people with disabilities. The User has now been replaced by a set of suggestions from a strict and faceless organization. It is now up to the developer to act as a policy interpreter, which, in almost every case, means following each and every guideline precisely as applied to each and every instance possible. The translation of the guidelines can be extremely literal; there is no attempt to consider which standard is the most usable, useful and inclusive for my target audience. Following the WCAG is seen as a bullet-proof accessibility solution. The logic is: “If I comply with the guidelines, My web site is accessible.”  Bilotta (2005)

The relationship between universal, usable and user-centred design approaches.

For many, there are clear overlaps between the three design approaches.

For example, Bilotta (2005) claims that accessible web design is nothing more than the logical extension of the principles of both user-centred design and universal design. The three approaches are connected in that all three embrace users’ needs and preferences.

For example, in discussing universal design, Burzagli et al. (2004: 240) place an importance on user needs: ‘Design for All can constitute a good approach, as in accordance with its principles, it embraces user needs and preferences, devices and contexts of use in a common operative platform.’

As does Smith (2002: 52), when discussing a user-centred design approach to developing a VLE interface for dyslexic students: ‘This design approach places an emphasis on understanding the needs of the user and it seems to have produced an interface that is appropriate for both dyslexic and non-dyslexic users.’

Perhaps the easiest way to see the connection between the three approaches is to view usability and universal design as specific implementations of user-centred design, where:

  • a traditional usability approach employs a quite specific interpretation of who the end-users are;
  • a universal design approach employs a broader interpretation of who the end-users are, an interpretation that focuses on how the user maps onto the target population in terms of functional capability as well as skill and experience (Keates and Clarkson 2003).

Designing for adaptability

Some research studies that claim to design with accessibility in mind have focused their attention on designing for adaptability.

This can mean one of two things:

  • allowing the user to configure the application to meet their needs (Owens and Keller 2000; Arditi 2004);
  • enabling the application to make adaptations to transform seamlessly in order to meet the needs of the user (Stephanidis et al. 1998; Cooper et al. 2000; Hanson and Richards 2004; Alexandraki et al. 2004).

People have different needs when seeking to use a computer to facilitate any activity.

Factors here extend beyond their physical and sensory capabilities and include needs arising from:

  • preferred learning/working styles (activist / reflective)
  • left or right handedness
  • gender
  • the environment they are working in (e.g. office, train, while driving)
  • other cognitive loads they have to deal with simultaneously

Cooper et al. 2000

Increasingly the WWW is being used to host complex applications in fields as diverse as scientific enquiry, education, e-commerce and entertainment. For the purposes of this paper we can define a complex application as one where a user has to receive multiple sources of information from a web server or several servers and is able to interact with the application in a variety of ways. This would normally take the form of an interactive multimedia application similar to that which has been traditionally produced for off-line delivery on CD-ROM. Now the very complexity and multi-modality of such applications presents a particular challenge in seeking to design user interfaces for them that are accessible to people with different disabilities. Cooper et al. 2000

Both approaches are reliant on detailed user profiles and in that sense there is similarity with usability and user-centred design approaches.

Designing applications that make automatic adaptations appears to be a more prevalent design approach at the moment, however. For example, Alexandraki et al. (2004) describe the ‘eAccessibility Engine’, a tool which employs adaptation techniques to automatically render web pages accessible by users with different types of disabilities. Specifically, the ‘eAccessibility Engine’ is capable of automatically transforming web pages to attain conformance to Section 508 standards and ‘AAA’ conformance to Web Content Accessibility Guidelines.

The proposed tool is intended for use as a web-based service and can be applied to any existing website. The researchers explain how users are not necessarily aware of the presence of the eAccessibility Engine. If users do not wish to modify their initial selection of an ‘accessibility profile’, they need to interact directly with the tool only once. Paciello (2000: 21) describes the move towards such personalization as the ‘Third Wave’, arguing that: ‘This is the essence of true personalization – Web design that ensures accessibility for every user by adapting to the user’s preferences’.
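As a rough sketch of what such profile-driven adaptation might look like (the profile keys and rewrite rules below are invented for illustration, not the eAccessibility Engine’s actual mechanism):

```python
import re

def adapt_page(html, profile):
    """Apply crude accessibility adaptations to an HTML page according to
    a stored user 'accessibility profile' (illustrative keys only)."""
    adapted = html
    if profile.get("low_vision"):
        # Inject a CSS override that enlarges text and raises contrast.
        css = "<style>body{font-size:150%;color:#000;background:#fff}</style>"
        adapted = adapted.replace("</head>", css + "</head>")
    if profile.get("text_only"):
        # Strip images entirely, as a text-only rendering would.
        adapted = re.sub(r"<img[^>]*>", "", adapted)
    return adapted

page = ("<html><head><title>Demo</title></head>"
        "<body><img src='x.png'><p>Hello</p></body></html>")
print(adapt_page(page, {"low_vision": True, "text_only": True}))
```

Because the profile is stored server-side, the page is transformed on every request without the user interacting with the tool again, which matches the ‘interact only once’ behaviour described above.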

Designing with an awareness of assistive technology

Some definitions of accessibility stress access by assistive technologies (Caldwell et al. 2004).

WCAG 2.0 Layers of Guidance

The individuals and organizations that use WCAG vary widely and include Web designers and developers, policy makers, purchasing agents, teachers, and students. In order to meet the varying needs of this audience, several layers of guidance are provided including overall principles, general guidelines, testable success criteria and a rich collection of sufficient techniques, advisory techniques, and documented common failures with examples, resource links and code.


At the top are four principles that provide the foundation for Web accessibility:

  1. perceivable
  2. operable
  3. understandable
  4. robust.

See also Understanding the Four Principles of Accessibility.


Under the principles are guidelines. The 12 guidelines provide the basic goals that authors should work toward in order to make content more accessible to users with different disabilities. The guidelines are not testable, but provide the framework and overall objectives to help authors understand the success criteria and better implement the techniques.

Success Criteria

For each guideline, testable success criteria are provided to allow WCAG 2.0 to be used where requirements and conformance testing are necessary such as in design specification, purchasing, regulation, and contractual agreements. In order to meet the needs of different groups and different situations, three levels of conformance are defined: A (lowest), AA, and AAA (highest). Additional information on WCAG levels can be found in Understanding Levels of Conformance.

Sufficient and Advisory Techniques

For each of the guidelines and success criteria in the WCAG 2.0 document itself, the working group has also documented a wide variety of techniques.

The techniques are informative and fall into two categories:

1) those that are sufficient for meeting the success criteria

2) those that are advisory.

The advisory techniques go beyond what is required by the individual success criteria and allow authors to better address the guidelines. Some advisory techniques address accessibility barriers that are not covered by the testable success criteria. Where common failures are known, these are also documented. See also Sufficient and Advisory Techniques in Understanding WCAG 2.0.

Caldwell et al. 2004

Accessibility design approaches such as universal design also stress access by assistive technologies (Thompson 2005).

Furthermore, accessibility guidelines and standards such as the WCAG-1 stress access by assistive technologies. For example, guideline nine advises designers to use features that enable activation of page elements via a variety of input devices.

There is a strong case therefore for learning technologists needing to have an awareness of assistive technology in order to avoid situations where users of such technologies cannot ‘conduct Web transactions because the Web environment does not support access functionality’ (Yu 2003: 12).

Colwell et al. (2002: 75) argue that some guidelines are difficult for developers to apply if they have ‘little knowledge of assistive technology or [of] people with disabilities’.

While Bilotta (2005) notes that developers faced with the challenge of generating accessible web content, attempt to translate these standards into solutions without clearly understanding the goal of their efforts … It remains a rare occurrence that a database developer has any experience or knowledge of an Operating System Reader such as JAWS or other alternate web browsing strategies.

(The above expressed in the H810 Evaluating Accessibility for e-learning SimpleMind)

Accessibility tools

Four types of accessibility tool exist at the moment:

  1. filter and transformation tools;
  2. design and authoring tools;
  3. evaluation tools;
  4. evaluation and repair tools.

Filter and transformation tools assist web users more than developers, and either modify a page or supplement an assistive technology or browser.

Examples include the BBC Education Text to Speech Internet Enhancer (BETSIE), which creates automatic text-only versions of a website, and the Kit for the Accessibility of the Internet (KAI). Gonzalez et al. (2003) describe how KAI can offer a global indicator of accessibility to end users (particularly blind people) at the moment of entering a site, as well as filter, repair and restructure the site according to their needs.
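The filtering idea can be illustrated with a minimal sketch, loosely in the spirit of BETSIE’s text-only transformation (the class and its behaviour here are assumptions for illustration, not BETSIE’s actual code): keep the text, and replace each image with its alt text.

```python
from html.parser import HTMLParser

class TextOnlyFilter(HTMLParser):
    """Crude text-only transformation (illustrative): collects page text
    and substitutes images with their alternative text."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # Fall back to a generic marker if no alt text is present.
            self.parts.append(dict(attrs).get("alt", "[image]"))

    def handle_data(self, data):
        if data.strip():
            self.parts.append(data.strip())

f = TextOnlyFilter()
f.feed("<p>Welcome</p><img src='logo.png' alt='BBC logo'><p>News</p>")
print(" ".join(f.parts))
# -> Welcome BBC logo News
```

Note how the quality of the output depends entirely on authors having supplied meaningful alt text in the first place, a recurring theme in this chapter.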

The BBC have moved away from BETSIE and at the time of writing (May 2010) are developing a new approach, which helps people to set their browser options and BBC website preferences and saves these between sessions.

My Web My Way

The BBC were trialling a new tool called ‘MyDisplay’, which allows users to choose from a range of pre-set themes and to save the chosen theme in a BBC account so that it can be accessed from any computer.

Design and authoring tools that currently exist appear to fall into two categories:

1) Multimedia design tools (Linder and Gunderson 2003; Regan 2005a)

The Power Point HTML Accessibility Plug-in allows Microsoft PowerPoint presentations to be published in an accessible HTML format for WWW. The tool automatically generates text equivalents for common PPT objects and supports the instructor in generating text alternatives for images that are used in educational settings like pie, bar and scatter point charts. The HTML markup generated exceeds current Section 508 requirements and meets W3C Web Content Accessibility Requirements Double-A conformance. (Linder and Gunderson 2003)

Web Accessibility is as much a set of core practices as it is an abstract concept. Understanding the techniques does not necessarily help those that build websites to arrive at a comprehensive understanding of the issue of accessibility. For this reason, it is important that the tools used by professional web designers and web developers take web accessibility into account at every available opportunity. This helps individuals with limited understanding of accessibility to make good decisions at the appropriate points in the design process.

Accessibility Preferences Options

Macromedia Dreamweaver MX 2004 allows you to set preferences that prompt you to provide accessibility information as you’re building the page. By activating options in the Preferences dialog box, you’ll be prompted to provide accessibility-related information for form objects, frames, media, images, and tables as each element is inserted in a page. (Regan 2005a)

2) Tools that simulate disability (Tagaki et al. 2004; Saito et al. 2005).

This session introduces a disability simulator “aDesigner”, which helps Web designers ensure that the Web content they develop is accessible and usable by visually impaired users. This presentation is designed to show the audience the functions of aDesigner and how easily and effectively the Web designers can analyze Web pages and fix the problems to improve accessibility and usability. Tagaki et al. 2004

Web accessibility means access to the World Wide Web for everyone, including people with disabilities and senior citizens. Ensuring Web accessibility improves the quality of the lives of such people by removing barriers that prevent them from taking part in many important life activities. Tagaki et al. 2004

The disability simulator aDesigner helps you make sure that the Web content you develop is accessible and usable by visually impaired users and also checks that it complies with accessibility guidelines. By simulating or visualizing Web pages, aDesigner helps you understand how low vision or blind users experience content on a given Web page, and it also performs a checking function for two types of problems: (a) problems such as redundant text or the appropriateness of alternate text and (b) lack of compliance with accessibility guidelines. Tagaki et al. 2004

The tool aDesigner overcomes the limitations of existing accessibility tools by helping Web authors (many of whom are young and have good vision) to learn about the real accessibility issues. It simulates two types of disabilities, low vision and blindness, and automatically detects accessibility and usability problems on a page. You can specify the guidelines that you want aDesigner to apply in its review (Section 508, IBM Corporate Instruction 162, JIS (Japanese Industrial Standards) X 8341-3, and three priority levels of the Web Content Accessibility Guidelines). Tagaki et al. 2004

The number of design tools that exist, however, is tiny compared to the array of evaluation, validation and repair tools that have been developed.

Evaluation tools conduct a static analysis of web pages and return a report or rating, whilst repair tools identify problems and recommend improvements (Chisholm and Kasday 2003).
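The kind of mechanical check that static evaluation tools perform can be sketched as follows (a toy example; real tools such as Bobby test dozens of checkpoints, and the two checks below are chosen purely for illustration):

```python
from html.parser import HTMLParser

class ToyEvaluator(HTMLParser):
    """Toy static-analysis pass (illustrative only): collects WCAG-style
    problems that can be detected mechanically from the mark-up."""
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "html" and "lang" not in a:
            self.problems.append("html element missing lang attribute")
        if tag == "img" and "alt" not in a:
            self.problems.append("img missing alt attribute")

checker = ToyEvaluator()
checker.feed("<html><body>"
             "<img src='a.png'><img src='b.png' alt='sales chart'>"
             "</body></html>")
print(checker.problems)
# -> ['html element missing lang attribute', 'img missing alt attribute']
```

Note what such a pass cannot do: it can detect that alt text is absent, but not whether alt text that is present is meaningful, which is why tools flag some checkpoints as requiring manual attention.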

Transitioning Web accessibility policies to take advantage of WCAG 2.0

Many governments and organizations around the world have taken measures to ensure that their Web sites are accessible to people with disabilities [3]. In the majority of cases they have either directly adopted or referenced W3C’s Web Content Accessibility Guidelines 1.0 (WCAG 1.0). In some cases they have also referenced the Authoring Tool Accessibility Guidelines 1.0 (ATAG 1.0) and the User Agent Accessibility Guidelines (UAAG 1.0) in order to bring about a more comprehensive Web accessibility solution. Other governments and organizations have developed local standards, regulations, or guidelines based indirectly on WCAG 1.0. (Chisholm and Kasday 2003).

See also

“Web Content Accessibility Guidelines 2.0,” B. Caldwell, W. Chisholm, G. Vanderheiden, and J.White, eds., W3C Working Draft, 30 July 2004, at http://www.w3.org/TR/WCAG20/

Validation tools such as the W3C HTML Validation Service, which check HTML and Cascading Style Sheets (CSS), are often included as evaluation tools because validating to a published grammar is considered by some to be the first step towards accessibility (Smith and Bohman 2004).

Evaluation and repair tools

Evaluation tools can be categorized as either general or focused, where general evaluation tools perform tests for a variety of accessibility issues whilst focused tools test for one or a limited aspect of accessibility.

Evaluation and repair tools generally focus on evaluating content, although some do evaluate user agents (Gunderson and May 2005). Most tools treat accessibility as an absolute, but some look at the degree of accessibility (Hackett et al. 2004). Most tools use WCAG–1 or Section 508 as their benchmark, but as guidelines are continually changing one or two are designed to check against the most recent guidelines (Abascal et al. 2004).

Finally, most of the tools are proprietary.

Standard referenced evaluation tool

There are a range of tools that identify accessibility problems by mostly following the guidelines of Section 508 and WCAG–1.

Examples include

  • AccessEnable
  • LIFT
  • Bobby

Bobby is the best-known automatic accessibility checking tool. It was originally developed by the Center for Applied Special Technology but is now distributed through Watchfire. Bobby tests online documents against either the WCAG-1 or Section 508 standards. A report is generated that highlights accessibility checks triggered by document mark-up. Some checks can be identified directly as errors; others are flagged as requiring manual attention. If tested against WCAG-1, the report lists priority one, two and three accessibility user checks in separate sections. Bobby provides links to detailed information on how to rectify accessibility checkpoints and the reason for each checkpoint, and provides accessibility guideline references for both WAI and Section 508 guidelines as appropriate. If all priority one checkpoints raised in the report are dealt with, the report for the revised page should indicate that the page is granted ‘Bobby approved’ status.

Whilst Bobby is the most well-known and perhaps most used evaluation tool, it has not been without its problems.

For example, some experts have pointed out that Bobby and the Bobby logo can be used inappropriately to indicate that a site is accessible, when in some people’s perception it is not (Witt and McDermott 2002).

While Bobby will detect a missing text description for an image, it is the developer who is responsible for annotating this image with meaningful text. Frequently, an image has a meaningless or misleading text description though the validation tool output states that the page is accessible.

(Witt and McDermott 2002: 48)

Watchfire Bobby is no longer available. A web search will find current automated tools, some of which are add-ons to particular web browsers. One tool (WebAim, 2010) works in a similar way to Bobby. Other tools mentioned in this chapter may also have been replaced, but their replacements raise similar principles and problems.

Tools that identify and prioritize problems that need repairing

Many automatic evaluation tools not only indicate where the accessibility problem is, but also give designers directions for conforming to violated guidelines. Although they can also be called repair tools, almost none perform completely automatic corrections. For this reason, it is probably more appropriate to identify them as tools that assist the author in identifying and prioritizing problems. Although the tools mentioned in the previous section offer designers some help in correcting problems, there are tools that fulfil this goal more efficiently. Examples include A-Prompt and Step 508.

A-Prompt (Accessibility Prompt) was developed by the Adaptive Technology Resource Centre in Canada.

A-Prompt first evaluates an HTML web page to identify accessibility problems; it then provides the web author with a ‘fast and easy way’ to make the necessary repairs. The tool’s evaluation and repair checklist is based on WCAG-1. A-Prompt allows the author to select a file for validation and repair, or select a single HTML element within a file. The tool may be customized to check for different conformance levels. If an accessibility problem is detected, A-Prompt displays the necessary dialogs and guides the user to fix the problem. Many repetitive tasks are automated, such as the addition of ALT-text or the replacement of server-side image maps with client-side image maps. When all potential problems have been resolved, the repaired HTML code is inserted into the document and a new version of the file may be saved to the author’s hard drive. After a web page has been checked and repaired by A-Prompt it will be given a WAI Conformance ranking.
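The division of labour in such repair tools can be caricatured in a short sketch (illustrative only, not A-Prompt’s code): the tool can find and flag a missing ALT attribute automatically, but meaningful replacement text must still come from the author, so the automated ‘repair’ is really a prompt.

```python
import re

def flag_missing_alt(html):
    """Sketch of an A-Prompt-style repair step (illustrative): every
    <img> with no alt attribute receives a placeholder that the author
    must replace by hand, because only a human can supply *meaningful*
    alternative text."""
    def repair(match):
        tag = match.group(0)
        if "alt=" in tag:
            return tag  # already has alternative text; leave it alone
        return tag[:-1] + ' alt="FIXME: describe this image">'
    return re.sub(r"<img[^>]*>", repair, html)

print(flag_missing_alt("<img src='pie.png'><img src='map.png' alt='site map'>"))
```

Here only the first image is altered; the second already carries alt text and passes through unchanged.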

Theofanos et al. (2004) and Kirkpatrick and Thatcher (2005) describe a free tool developed by the National Center for Accessible Media (NCAM), called STEP 508.

A free tool developed by the National Center for Accessible Media (NCAM) at WGBH in Boston, called STEP508, was developed in 2003 to address this need. STEP508 is capable of handling data from Watchfire’s Bobby, Parasoft’s WebKing, and UsableNet’s LiftOnline, but is limited to prioritizations of evaluations where Section 508 standards were the basis for testing.

In 2005 NCAM developed a new version of this tool with funding from the General Services Administration with additional importing, exporting, and reporting capabilities, and with the ability to test against the W3C’s Web Content Accessibility Guidelines. Like its predecessor, the new version allows site administrators to prioritize accessibility errors and quickly generate reports that provide several views of the errors. Errors are prioritized by several factors including ease of repair, page “importance” (determined by page traffic and other criteria), and impact on users. With a sorted list in hand, repair efforts are focused on pages and errors that are the best targets for repair first.

Kirkpatrick and Thatcher (2005)

STEP 508 is a tool for analysing evaluation data from popular web accessibility evaluation tools, such as Watchfire’s Bobby and UsableNet’s LIFT. STEP does not perform the accessibility evaluation, but examines the results of an evaluation and creates a report, which prioritizes the results based on the severity and repairability of the errors found, as well as on the importance of the pages where errors were identified. When it was originally designed in 2003, it was limited to prioritizations of evaluations where Section 508 standards were the basis for testing. However, in 2005 NCAM developed a new version of this tool with additional importing, exporting and reporting capabilities, and with the ability to test against the WCAG–1.

Like its predecessor, the new version allows site administrators to prioritize accessibility errors and quickly generate reports that provide several views of the errors. Errors are prioritized by several factors including ease of repair, page ‘importance’ (determined by page traffic and other criteria) and impact on users. With a sorted list in hand, repair efforts are focused first on pages and errors that are the best targets for repair. In addition to prioritizing errors, STEP allows for easy comparisons between accessibility evaluation tools, to help site administrators understand how the outputs from the various tools differ.
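The kind of prioritization STEP performs can be sketched as a sort over the error list (the field names and the exact weighting order below are invented for illustration; STEP’s real criteria combine ease of repair, page importance and user impact):

```python
def prioritise(errors):
    """STEP-508-style prioritisation sketch (illustrative weighting):
    highest user impact first, then highest page traffic, then the
    cheapest repairs within those groups."""
    return sorted(errors,
                  key=lambda e: (-e["impact"], -e["traffic"], e["repair_cost"]))

errors = [
    {"id": "colour-only-cue", "impact": 1, "traffic": 100, "repair_cost": 2},
    {"id": "missing-alt",     "impact": 3, "traffic": 900, "repair_cost": 1},
    {"id": "layout-table",    "impact": 3, "traffic": 900, "repair_cost": 5},
]
print([e["id"] for e in prioritise(errors)])
# -> ['missing-alt', 'layout-table', 'colour-only-cue']
```

The point of the sorted list is the same as STEP’s: repair effort goes first to the pages and errors that are the best targets, rather than being spread evenly across every reported violation.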

Validity and reliability of evaluation and repair tools

With so many tools to choose from, accessibility designers need to be able to assess their validity and reliability. Brajnik (2001) argues that testing tools have to be validated in order to be truly useful. A crucial property that needs to be assessed is the validity of the rules that the tools apply. A valid rule is correct (never identifies false positives) and complete (never yields false negatives). Brajnik (2001) argues, however, that rule completeness is extremely difficult to assess, and considers that assessing the correctness of a rule is more viable and can be achieved through one of three methods:

  1. comparative experiments;
  2. rule inspection and testing;
  3. page tracking.

Page tracking entails the repeated use of an automatic tool. If a site is analysed two or more times, each evaluation generates a set of problems. If, on subsequent evaluations, a problem disappears (due to a change in the underlying web page/website) it is assumed that this maintenance action on the part of the webmaster was prompted by the rule showing the problem. Brajnik argues that this method provides an indirect way to determine the utility of the rule, a property that is closely related to rule correctness. A rule that is useful (i.e. its advice is followed by a webmaster) is, according to Brajnik, also likely to be correct.
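Page tracking can be sketched as a simple set difference between successive evaluation runs (an illustrative sketch of Brajnik’s idea; the problem identifiers are invented):

```python
def apparently_fixed(first_run, second_run):
    """Page-tracking sketch (after Brajnik 2001): problems reported in an
    earlier evaluation but absent from a later one are assumed to have
    been repaired by the webmaster; this is taken as indirect evidence
    that the rule reporting them is useful, and hence likely correct."""
    return first_run - second_run

run1 = {"img-no-alt:/index", "no-lang:/index", "img-no-alt:/news"}
run2 = {"no-lang:/index"}  # later run: the alt-text problems have gone
print(sorted(apparently_fixed(run1, run2)))
# -> ['img-no-alt:/index', 'img-no-alt:/news']
```

In this example the alt-text rule would be credited as useful (its advice was apparently acted on), while nothing can yet be inferred about the lang-attribute rule.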

Interestingly, in another study that examined developers’ responses to the results of automated tools, Ivory et al. (2003) found that developers who relied on their own expertise to modify sites produced sites that yielded some performance improvements.

This was not the case for sites produced with evaluation tools. Even though the automated tools identified significantly more potential problems than the designers had identified by themselves, designers made more design changes when they did not use an automated tool.

A common method of testing the reliability of tools is to conduct tests where the results of different tools are compared against each other and sometimes also against a manual evaluation.

Studies reveal mixed results. For example, in a comparison between Bobby and LIFT in which tool completeness, correctness and specificity were analysed, LIFT was clearly the better performer (Brajnik 2004). Brajnik was able to conclude that:

  • LIFT generated fewer false positives than Bobby;
  • LIFT generated fewer false negatives than Bobby.

For the six most populated checkpoints, LIFT had a number of (automatic and manual) tests that was equal to or greater than the corresponding number for Bobby.

Other comparative studies have found it harder to identify a clear winner:

  • Faulkner and Arch (2003) reviewed AccVerify 4.9, Bobby 4.0, InFocus 4.2 and PageScreamer 4.1;
  • Lazar et al. (2003) evaluated the reliability of InFocus, A-Prompt and Bobby;
  • Diaper and Worman (2003) report the results of comparing Bobby and A-Prompt.

Lazar et al. concluded from their study that accessibility testing tools are ‘flawed and inconsistent’, whilst Diaper and Worman (2003) warn that any overhead saved using automatic tools will be spent interpreting their results.

There must be a strong temptation for organisations to rely more on accessibility assessment tools than they should because the tools are supposed to encapsulate and apply knowledge about the WCAGs and check points. Accessibility tools have their own knowledge overheads concerning how to use the tools and, vitally, interpret their outputs. Indeed, it may well be that at present the tools’ related knowledge is additional to a sound understanding of the WCAG by the tools’ users, i.e. you need to know more to use the tools, not less.

Choosing an evaluation or repair tool

Decisions about which evaluation or repair tool to choose will depend on a number of factors, including whether:

  • the designer wants to focus on general or specific accessibility issues;
  • the designer wants to test against WCAG–1, Section 508 or both;
  • the tools are considered to be operating valid rules;
  • the tools are considered to be reliable;
  • the websites to be evaluated are small or large (O’Grady and Harrison 2003);
  • the designer is working as an individual or as part of a larger corporate organisation (Brajnik 2001);
  • the designer wants usability testing to be an integral part of the test (Brajnik 2000).

O’Grady and Harrison (2003) reviewed A-Prompt, Bobby, InFocus and AccVerify.

The review process examined functionality, platform availability, demo version availability and cost. The functional issues examined included the installation process, availability of technical support, the standards the product validates against and the software’s response to a violation of the checkpoint items. Each software tool was compared on a series of practical and functional indicators. O’Grady and Harrison concluded that the best tools for small to medium sites were A-Prompt and Bobby, whilst for large or multiple sites InFocus and AccVerify were better choices.

However, Brajnik (2001) argues that comparing these tools can be unfair given their different scope, flexibility, power and price. LIFT is targeted at an enterprise-level quality assurance team and costs from $6,000 upwards, whilst Bobby is available free and is targeted at a single individual wanting to test a relatively limited number of pages (Watchfire initially charged for Bobby when it took over the tool, but it has since announced that it will transition the Bobby online service to WebXACT, a free online tool).

Problems with evaluation and repair tools

In addition to problems of reliability, many researchers and practitioners have identified further potential problems with the use of evaluation and repair tools including:

  • difficulties with judging the severity of the error (Thompson et al. 2003);
  • dangers of inducing a false sense of security (Witt and McDermott 2004);
  • dangers of encouraging over-reliance (SciVisum 2004);
  • difficulties understanding the results and recommendations (Faulkner and Arch 2003; Wattenberg 2004).

These reasons have led many to conclude that some element of manual testing or human judgement is still needed.

For example, Thompson et al. (2003) argue that one shortcoming of the automated tools is their inability to take into account the ‘severity’ of an identified accessibility error. For example, if Site A is missing ALT tags on its spacer images, and Site B is missing ALT tags on its menu buttons, both sites are rated identically, when according to Thompson et al., Site B clearly has the more serious accessibility problem.

They conclude that ‘Some of the web-content accessibility checkpoints cannot be checked successfully by software algorithms alone. There will still be a dependence on the user’s ability to exercise human judgment to determine conformance to the guidelines.’
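Thompson et al.'s point about severity can be sketched as a weighted error count. The roles and weights below are invented for illustration (Thompson et al. do not publish such weights); a plain unweighted count would rate both hypothetical sites identically.

```python
# Hypothetical severity weights: a missing ALT on a navigation image can
# block the perceived function of the page, whereas one on a spacer image
# is merely cosmetic.
SEVERITY = {"spacer": 0, "decorative": 1, "content": 2, "navigation": 3}

def weighted_error_score(images_missing_alt):
    """Sum severity weights for images that lack ALT text, by image role."""
    return sum(SEVERITY.get(role, 2) for role in images_missing_alt)

site_a = ["spacer", "spacer"]          # Site A: missing ALT on spacer images
site_b = ["navigation", "navigation"]  # Site B: missing ALT on menu buttons

print(weighted_error_score(site_a))  # 0 -- cosmetic only
print(weighted_error_score(site_b))  # 6 -- the more serious problem
```

The catch, of course, is that assigning a role such as "navigation" or "decorative" to an image is itself a human judgement, which is exactly Thompson et al.'s argument.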

What do you check and why? What assumptions do you make about the purpose behind a webpage?

The following websites were consistently among the most requested sites for the sample universities. It can be reasonably assumed that these would be sites that would be frequently requested at most universities. The perceived function of each website is included in parentheses.

Perceived function plays an important role in our research method as noted in the Methods section of this paper.

  1. University home page (Read and navigate the entire page without missing important information.);
  2. Search page (Search for any term and read the results.);
  3. List of university colleges, departments, and/or degree programs (Read and navigate the entire page without missing important information.);
  4. Campus directory of faculty, staff and/or students (Look up a name and read the results.);
  5. Admissions home page (Read and navigate the entire page without missing important information.);
  6. Course listings (Find a particular course listing and read all information about that course.);
  7. Academic calendar (Read and navigate the entire calendar without missing important information.);
  8. Employment home page (Read and navigate the entire page without missing important information.);
  9. Job listings (Read and navigate the entire page without missing important information.);
  10. Campus map (Locate a particular location on campus and identify how to get there.);
  11. Library home page (Read and navigate the entire page without missing important information.).

Witt and McDermott (2004: 49) argue that while the tools are useful for issues such as locating missing text alternatives to graphics and checking for untidy HTML coding, they cannot evaluate a site for layout consistency, ease of navigation, provision of contextual and orientation information, or use of clear and easy-to-understand language.


‘developers must also undertake their own audit of the web page and interpret the advice given in order to satisfy themselves that compliance has been achieved’ (Witt and McDermott 2004: 49).
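The mechanical side of this, locating missing text alternatives, is indeed straightforward to automate. A minimal sketch using Python's standard html.parser (not any of the commercial tools discussed above; the class name and test mark-up are invented):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> elements that have no alt attribute -- the kind of
    mechanical check automated tools handle well. Whether an alt value
    that *is* present is appropriate still needs human judgement."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="University logo"><img src="spacer.gif">')
print(checker.missing)  # ['spacer.gif']
```

Note what the sketch cannot do: it accepts `alt="University logo"` without asking whether that text actually conveys the image's function, which is Witt and McDermott's point.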

However, these tools can give web developers a false sense of security. If presented with a Bobby or A-Prompt generated checklist showing that all issues have been addressed, developers may believe they have met the accessibility guidelines. Yet in reality, they may have only adhered to a set of rules and produced an inaccessible website.

SciVisum (2004) tested 111 UK websites that publicly claimed to be compliant with WCAG–1 by displaying the WAI compliance logo on their website. They found that 40 per cent of these failed to meet the checkpoints for which they were claiming compliance.

The report concludes:

The SciVisum study indicates that either self-regulation is not working in practice or that organisations are relying too heavily on automated web-site testing.

Semi-automated accessibility testing on Web sites by experienced engineers needs to become a standard practice for UK Web site owners. Only manual tests can help identify areas of improvement, which are impossible to identify with automated checks alone. Unless sites are tested site-wide in this way failure rates amongst those that are claiming to be compliant will continue.

(SciVisum 2004)

Faulkner and Arch (2003) argue that the interpretation of the results from the automated tools requires assessors trained in accessibility techniques, with an understanding of the technical and usability issues facing people with disabilities. A thorough understanding of accessibility is also required in order to competently assess the checkpoints that the automated tools cannot check, such as consistent navigation and appropriate writing and presentation style.

Wattenberg (2004) acknowledges that while these tools are beneficial, developers do not always have the time or the motivation to understand the complex and often lengthy recommendations that the validation tools produce.

All of the tools tested have advantages over free online testing tools, the main advantage being their ability to check a whole site rather than a limited number of pages. Also, files do not have to be published to the web before they can be tested; the user has greater control over which rules are applied when testing and over the style and formatting of reports; and in some cases the software will automatically correct problems found.

None of the tools evaluated, as expected, were able to identify all the HTML files and associated multimedia files that needed to be tested for accessibility on the site.

All the software tools evaluated will assist the accessibility quality assurance process; however, none of them will replace evaluation by informed humans. The user needs to be aware of their limitations, and needs a strong understanding of accessibility issues and their implications for people with disabilities in order to interpret the reports and the accessibility issues or potential issues flagged by the software tools. Faulkner and Arch (2003)

See Appendix 2

Combining automatic and manual judgement of accessibility

The first step is to evaluate the current site with a combination of automated tools and focused human testing. Such a combination is necessary because neither approach, by itself, can be considered accurate or reliable enough to catch every type of error.
(Bohman 2003c)

With the growing acceptance that accessibility evaluation cannot rely on automated methods and tools alone, a number of studies have used a combination of different methods to evaluate accessibility (Hinn 1999; McCord et al. 2002; Sloan et al. 2002; Borchert and Conkas 2003).

Hinn (1999) reports on how she used Bobby combined with computer-facilitated focus groups and focused individual interviews to evaluate a virtual learning environment for students with disabilities. In an evaluation of selected web-based health information resources, McCord et al. (2002) also used Bobby, but combined this with a manual evaluation of how the resources interacted with screen reader and speech recognition software.

In thinking about the practicalities of using a combined approach a number of issues arise: time and resources required; the number of people who should be involved in manual evaluation; and the level of expertise required of those people involved in manual evaluation.
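One way to picture such a combined approach is as a simple triage: findings a machine can verify outright are fixed directly, while checkpoints needing judgement are routed to a human reviewer. The names below, and the split between the two categories, are invented for illustration and are not drawn from any of the studies cited.

```python
# Hypothetical split between checkpoint types a machine can verify
# outright and those that need human judgement.
MACHINE_CHECKABLE = {"missing-alt", "invalid-html", "missing-label"}
NEEDS_JUDGEMENT = {"reading-level", "navigation-consistency", "alt-quality"}

def triage(automated_findings):
    """Split tool output into direct fixes and a manual review queue."""
    fix_now, review = [], []
    for finding in automated_findings:
        (fix_now if finding in MACHINE_CHECKABLE else review).append(finding)
    return fix_now, review

fix_now, review = triage(["missing-alt", "alt-quality", "invalid-html"])
print(fix_now)  # ['missing-alt', 'invalid-html']
print(review)   # ['alt-quality']
```

The practical questions raised next — time, number of reviewers, expertise — all attach to the manual review queue, which no amount of tooling empties.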


Number of people

In contrast with the team approach adopted by Sloan et al. (2002), many evaluation studies are conducted by just one or two people. However, the Web Accessibility Initiative (Brewer 2004) appears to favour the use of review teams, arguing that this is the optimum approach for evaluating the accessibility of websites because of the advantages that the different perspectives on a review team can bring.

They consider that review teams should have expertise in the following areas:

  • web mark-up languages and validation tools;
  • web content accessibility guidelines and techniques;
  • conformance evaluation process for evaluating websites for accessibility;
  • use of a variety of evaluation tools for website accessibility;
  • use of computer-based assistive technologies and adaptive strategies;
  • web design and development.

(See Appendix 3)

Level of expertise

Many evaluation studies adopt a ‘have-a-go’ approach to accessibility evaluation. For example, Byerley and Chambers (2002) documented the problems encountered in using screen readers with two web-based abstracting and indexing services. In the initial phase of the study, the researchers (two sighted librarians) tested the databases using JAWS and Window-Eyes. They performed simple keyword searches, accessed help screens and manipulated search results by marking citations, printing and emailing full-text articles. Others adopt the view that because evaluating accessibility and understanding the guidelines and tools is so difficult, the services of an expert accessibility auditor should be engaged (Sloan 2000).

Two studies that report the involvement of experts are those of Sloan et al. (2002) and Thompson et al. (2003). Sloan et al. carried out an accessibility study of 11 websites in the UK Higher Education sector in 1999. They describe the methodology used by an evaluation team of research experts to carry out the audits, which included: drawing initial impressions; testing with automatic validation tools; manual evaluation against accessibility guidelines; general inspection; viewing with browsers and assistive technologies; and usability evaluation. Thompson et al. (2003) describe how two web accessibility ‘experts’ manually evaluated the key web pages of 102 public research universities using a five-point rating scale that focused on each site’s ‘functional accessibility’, i.e. whether all users can accomplish the perceived function of the site. The evaluators’ combined results were positively correlated with those obtained by using Bobby on the same sample.

There appears to be an interesting paradox at work here.

For some, there is a perception that tools will help to ease the burden and take a ‘great load’ off the shoulders of designers, developers and authors (Kraithman and Bennett 2004). The reality appears to be different, however, in that many tools place a great deal of responsibility on the shoulders of designers, developers and authors, who need to interpret the results that the tools produce. This has led some to argue that the use of tools should be accompanied by human judgement, in particular expert human judgement.

Thus, it would appear that those considered not to be experts may not have a role in judging the accessibility of online material at all. This feels like a retrograde step in terms of helping novice learning technologists to feel that they have an important responsibility and contribution to make to the accessibility of online material. In order to avoid disempowering learning technologists, we perhaps need to look at developing better tools and/or effective ways of helping learning technologists to gain the skills required to conduct manual judgements of the accessibility of online material.


In order to understand and make the best use of accessibility guidelines, standards and tools, learning technologists need to understand and critique the design approaches that underpin accessibility guidelines and standards, and appreciate and critique the strengths and weaknesses of using automatic accessibility tools. By increasing their understanding of these issues, learning technologists will be in a stronger position to:

  1. choose which guidelines, standards and tools they will adopt and justify their choice;
  2. adopt or adapt the processes and methodologies that have been used to design and evaluate online material.

Furthermore, in exploring in more detail the different approaches to accessible design, learning technologists should start to get a clearer sense of alignment between:

  • definitions of accessibility;
  • approaches to accessible design;
  • accessibility guidelines and standards;
  • accessibility evaluation tools and methods.

For example, definitions of accessibility that incorporate or stress ‘ease of use’ (e.g. Disability Rights Commission, 2004) may be operationalized through design approaches that focus on usability. The products of these approaches may then be judged against standards that emphasize usability (e.g. WCAG–1) and evaluated using tools that use these standards as benchmarks.

There are signs that some areas are attempting to move towards a standardization of methods and tools.

For example, the EuroAccessibility Consortium launched an initiative in 2003 with W3C to foster European co-operation towards a harmonized methodology for evaluating the accessibility of websites.

Whilst moves towards standardization may be helpful, evidence in this chapter suggests that the science of standardization will need to be counterbalanced by the art of judgement and experience.

Witt and McDermott (2002: 49) argue that ‘as a result of the range of standards and guidelines, the levels of interpretation and the subjective judgements required in negotiating these, the creation of an accessible solution is very much an art’.

A major advantage of viewing accessibility as both an art and a science is that learning technologists will be encouraged to take all dimensions and variables into account and not to focus solely on the technology (Stratford and Sloan 2005).

Accessibility needs to be qualified by questions such as:

  • Accessible to whom?
  • In what context?
  • For what purpose?

The notion of reasonable adjustment is also discussed in the legislation; but to what extent is it reasonable or even desirable to add certain accessibility features?

What effects might adding or withholding features have on other aspects of the curriculum, or on other students?

In addition, if, for whatever reasons, the desired outcome cannot be achieved by one route, what other routes might be appropriate?
(Stratford and Sloan 2005).


Abascal, J., Arrue, M., Fajardo, I., Garay, N. and Tomas, J. (2004) The use of guidelines to automatically verify web accessibility. Universal Access in the Information Society, 3, 71–79.

Alexander, D. (2003b) Redesign of the Monash University web Site: a case study in user-centred design methods. Paper presented at AusWeb03. Online. Available HTTP: <http://ausweb.scu.edu.au/ aw03/ papers/ alexander/ paper.html> (last accessed 17 Nov 2012).

Alexandraki, C., Paramythis, A., Maou, N. and Stephanidis, C. (2004) web accessibility through adaptation. In K. Klaus, K. Miesenberger, W. Zagler and D. Burger (eds) Computers Helping People with Special Needs. Proceedings of 9th International Conference. Berlin Heidelberg: Springer-Verlag, pp. 302–309.

Arch, A. (2002) Dispelling the myths – web accessibility is for all. Paper presented at AusWeb ’02. Online. Available HTTP: <http://ausweb.scu.edu.au/ aw02/ papers/ refereed/ arch/ paper.html> (last accessed 23 May 2012).

Arditi, A. (2004) Adjustable typography: an approach to enhancing low vision text accessibility. Ergonomics, 47, 5, 469–482.

BBC (2012) The Mind Reader: Unlocking My Voice – a Panorama Special, presented by Fergus Walsh. BBC One, 13 November, 22:35 GMT.

Bilotta, J. A. (2005) Over-done: when web accessibility techniques replace true inclusive user centred design. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2283.htm> (last accessed 17 Nov 2012).

Bohman, P. (2003a) Introduction to web accessibility. Online. Available HTTP: http://www.webaim.org/ intro/ (last accessed 16 November 2012).

Brajnik, G. (2000) Automatic web usability evaluation: what needs to be done? Paper presented at the 6th conference on Human Factors and the Web. Online. Available HTTP: http://users.dimi.uniud.it/~giorgio.brajnik/papers/hfweb00.html (last accessed 16 November 2012)

Brajnik, G. (2004) Comparing accessibility evaluation tools: a method for tool effectiveness. Universal Access to the Information Society, 3, 252–263.

Brewer, J. (2004) Review Teams for Evaluating web Site Accessibility. Online. Available HTTP: <http://www.w3.org/WAI/eval/reviewteams.html>. (Now available at http://www.w3.org/ WAI/ EO/ Drafts/ review/ reviewteams.html, last accessed 18 Nov 2012.)

Burzagli, L., Emiliani, P. L. and Graziani, P. (2004) Accessibility in the field of education. In C. Stary and C. Stephanidis (eds) Proceedings of UI4All 2004, Lecture Notes in Computer Science, Volume 3196. Berlin Heidelberg: Springer-Verlag, pp. 235–241.

Byerley, S. L. and Chambers, M. B. (2002) Accessibility and usability of web-based library databases for non-visual users. Library Hi Tech, 20, 2, 169–178.

Caldwell, B., Chisholm, W., Vanderheiden, G. and White, J. (2004) web Content Accessibility Guidelines 2.0: W3C Working Draft, 19 November 2004. Online. Available HTTP: <http://www.w3.org/ TR/ WCAG20/> (last accessed 17 Nov 2012).

Chisholm, W. and Brewer, J. (2005) Web Content Accessibility Guidelines 2.0: Transitioning your website. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2355.htm> (last accessed 18 November 2012).

Colwell, C., Scanlon, E. and Cooper, M. (2002) Using remote laboratories to extend access to science and engineering. Computers and Education, 38, 65–76.

Cooper, M., Valencia, L. P. S., Donnelly, A. and Sergeant, P. (2000) User interface approaches for accessibility in complex World-Wide Web applications – an example approach from the PEARL project. Paper presented at the 6th ERCIM Workshop, ‘User Interfaces for All’. Online. Available HTTP: <http://ui4all.ics.forth.gr/ UI4ALL-2000/ files/ Position_Papers/ Cooper.pdf> (last accessed 17 Nov 2012).

Craven, J. (2003a) Access to electronic resources by visually impaired people. Information Research, 8, 4. Online. Available HTTP: <http://informationr.net/ ir/ 8-4/ paper156.html> (last accessed 23 May 2012).

Disability Rights Commission (DRC) (2004) The web: access and inclusion for disabled people. Online. Available HTTP: <http://www.drc-gb.org/publicationsandreports/2.pdf> (accessed 5 October 2005 but no longer available).

Faulkner, S. and Arch, A. (2003) Accessibility testing software compared. Paper presented to AusWeb ’03. Online. Available HTTP: <http://ausweb.scu.edu.au/ aw03/ papers/ arch/ paper.html> (last accessed 18 Nov 2012).

Fielding, R. T. (2000) Architectural Styles and the Design of Network-based Software Architectures. Doctoral dissertation, University of California, Irvine.

Gappa, H., Nordbrock, G., Mohamad, Y. and Velasco, C. A. (2004) Preferences of people with disabilities to improve information presentation and information retrieval inside Internet Services – results of a user study. In K. Klaus, K. Miesenberger, W. Zagler and D. Burger (eds) Computers Helping People with Special Needs. Proceedings of 9th International Conference. Berlin Heidelberg: Springer-Verlag, pp. 296–301.

Gay, G. and Harrison, L. (2001) Inclusion in an electronic classroom – accessible courseware study. Paper presented at CSUN ’01. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2001/ proceedings/ 0188gay.htm> (last accessed 17 Nov 2012).

Gonzales, J., Macias, M., Rodriguez, R. and Sanchez, F. (2003) Accessibility metrics of web pages for blind end-users. In J. M. Cueva Lovelle (ed.) Proceedings of ICWE 2003, Lecture Notes in Computer Science, Volume 2722. Berlin Heidelberg: Springer-Verlag, pp. 374–383.

Gunderson, J. and May, M. (2005) W3C user agent accessibility guidelines test suite version 2.0 and implementation report. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/2363.htm> (last accessed 18 Nov 2012).

Jacko, J. A. and Hanson, V. L. (2002) Universal access and inclusion in design. Universal Access to the Information Society, 2, 1–2.

Keates, S. and Clarkson, P. J. (2003) Countering design exclusion: bridging the gap between usability and accessibility. Universal Access to the Information Society, 2, 215–225.

Kraithman, D. and Bennett, S. (2004) Case study: blending chalk, talk and accessibility in an introductory economics module. Best Practice, 4, 2, 12–15.

Hackett, S., Parmanto, B. and Zeng, X. (2004) Accessibility of Internet Websites through time. Paper presented at ASSETS ’04, Atlanta, Georgia, 18–20 October.

Hanson, V. L. and Richards, J. T. (2004) A web accessibility service: update and findings. Paper presented at ASSETS ’04, 18–20 October, Atlanta, Georgia. Online. Available HTTP: <http://www.research.ibm.com/ people/ v/ vlh/ HansonASSETS04.pdf> (last accessed 17 Nov 2012).

Ivory, M. Y., Mankoff, J. and Le, A. (2003) Using automated tools to improve website usage by users with diverse abilities. IT and Society, 1, 3, 195–236.

Iwarsson, S. and Stahl, A. (2003) Accessibility, usability and universal design – positioning and definition of concepts describing person–environment relationships. Disability and Rehabilitation, 25, 2, 57–66.

Jeffels, P. and Marston, P. (2003) Accessibility of online learning materials. Online. Available HTTP: <http://www.scrolla.hw.ac.uk/ papers/ jeffelsmarston.html> (last accessed 16 November 2012).

Keller, S., Braithwaite, R., Owens, J. and Smith, K. (2001) Towards universal accessibility: including users with a disability in the design process. Paper presented at the 12th Australasian Conference on Information Systems, Southern Cross University, New South Wales.

Kirkpatrick, A. and Thatcher, J. (2005) Web accessibility error prioritisation. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2341.htm> (last accessed 18 November 2012).

Kunzinger, E., Snow-Weaver, A., Bernstein, H., Collins, C., Keates, S., Kwit, P., Laws, C., Querner, K., Murphy, A., Sacco, J. and Soderston, C. (2005) Ease of access – a cultural change in the IT product design process. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2512.htm> (last accessed 17 November 2012).

Lazar, J., Beere, P., Greenidge, K. D. and Nagappa, Y. (2003) Web accessibility in the Mid-Atlantic States: a study of 50 homepages. Universal Access in the Information Society, 2, 331–341.

Linder, D. and Gunderson, J. (2003) Publishing Accessible WWW presentations from PowerPoint slide presentations. Paper presented at CSUN ’03. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2003/ proceedings/ 174.htm> (last accessed 18 November 2012).

Luke, R. (2002) AccessAbility: Enabling technology for lifelong learning inclusion in an electronic classroom-2000. Educational Technology and Society, 5, 1, 148–152.

Nielsen, J. (2000) Designing Web Usability. Indianapolis, IN: New Riders Publishing.

Owens, J. and Keller, S. (2000) MultiWeb: Australian contribution to Web accessibility. Paper presented at the 11th Australasian Conference on Information Systems, Brisbane, Australia. Online. Available HTTP: <http://www.deakin.edu.au/infosys/docs/workingpapers/archive/Working_Papers_2000/2000_18_Owens.pdf> (accessed by Jane Seale 5 October 2005 but no longer available).

Paciello, M. G. (2005) Enhancing accessibility through usability inspections and usability testing. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2509.htm> (last accessed 17 Nov 2012).

Powlik, J. J. and Karshmer, A. I. (2002) When accessibility meets usability. Universal Access in the Information Society, 1, 217–222.

Reece, G. A. (2002) Accessibility Meets Usability: A Plea for a Paramount and Concurrent User-centered Design Approach to Electronic and Information Technology Accessibility for All.

Regan, B. (2005a) Accessible web authoring with Macromedia Dreamweaver MX 2004. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2405.htm> (last accessed 18 November  2012).

Sensory, Inc. (2005) $2–3 Digitized Speech and Speech Recognition Chips. Online. Available HTTP: <http://www.sensoryinc.com> (last accessed 15 August 2005).

Saito, S., Fukuda, K., Takagi, H. and Asakawa, C. (2005) aDesigner: visualizing blind usability and simulating low vision visibility. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2631.htm> (last accessed 18 Nov 2012).

SciVisum (2004) Web Accessibility Report. Kent, UK: SciVisum Ltd.

Sloan, D., Rowan, M., Booth, P. and Gregor, P. (2000) Ensuring the provision of accessible digital resources. Journal of Educational Media, 25, 3, 203–216.

Sloan, D., Gregor, P., Booth, P. and Gibson, L. (2002) Auditing accessibility of UK higher education web sites. Interacting With Computers, 14, 313–325.

Sloan, D. and Stratford, J. (2004) Producing high quality materials on accessible multimedia. Paper presented at the ILTHE Disability Forum, Anglia Polytechnic University, 29 January. Online. Available HTTP: http://www.heacademy.ac.uk/assets/documents/resources/database/id418_producing_high_quality_materials.pdf (accessed 16 November 2012).

Smith, S. (2002) Dyslexia and Virtual Learning Environment Interfaces. In L. Phipps., A. Sutherland and J. Seale (eds) Access All Areas: Disability, Technology and Learning. Oxford: ALT/Jisc/TechDis, pp. 50–53.

Smith, J. and Bohman, P. (2004) Evaluating website accessibility: a seven step process. Paper presented at CSUN ’04. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2004/ proceedings/ 203.htm> (last accessed 18 Nov 2012).

Stephanidis, C., Paramythis, A., Akoumianakis, D. and Sfyrakis, M. (1998). Self-adapting web-based systems: towards universal accessibility. In C. Stephanidis and A. Waern (eds) Proceedings of the 4th ERCIM Workshop ‘User Interfaces for All’ Stockholm: European Research Consortium for Informatics and Mathematics, pp. 19–21.

Stephanidis, C. et al. (2011) Twenty-five years of training and education in ICT Design for All and Assistive Technology.

Stratford, J. and Sloan, D. (2005) Skills for Access: putting the world to rights for accessibility and multimedia. ALT Online Newsletter, 15 July. Online. Available HTTP: <http://newsletter.alt.ac.uk/e_article000428038.cfm?x=b11,0,w>. (Now available at http://newsweaver.co.uk/ alt/ e_article000428038.cfm, last accessed 18 Nov 2012.)

Sullivan, L. H. (1896) The Tall Office Building Artistically Considered. Online. Available HTTP: <http://www.scribd.com/doc/104764188/Louis-Sullivan-The-Tall-Office-Building-Artistically-Considered>

Takagi, H., Asakawa, C., Fukuda, K. and Maeda, J. (2004) Accessibility designer: visualizing usability for the blind. Paper presented at ASSETS ’04, 18–20 October 2004, Atlanta, Georgia.

Taylor, D. M., Tillery, S. I. H. and Schwartz, A. B. (2002) Direct cortical control of 3D neuroprosthetic devices. Science, 296, 5574, 1829–1832.

Thatcher, J. (2004) web accessibility – what not to do. Paper presented at CSUN ’04. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2004/ proceedings/ 80.htm> (last accessed 16 November 2012).

Theofanos, M., Kirkpatrick, A. and Thatcher, J. (2004) Prioritizing web accessibility evaluation data. Paper presented at CSUN ’04. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2004/ proceedings/ 145.htm> (last accessed 18 November 2012).

Thompson, T., Burgstahler, S. and Comden, D. (2003) Research on Web accessibility in higher education. Information Technology and Disabilities, 9, 2. Online. Available HTTP: <http://www.rit.edu/~easi/itd/itdv09n2/thompson.htm>. (Now available at http://people.rit.edu/ easi/ itd/ itdv09n2/ thompson.htm, last accessed 18 Nov 2012.)

Thompson, T. (2005) Universal design and web accessibility: unexpected beneficiaries. Paper presented at CSUN ’05, Los Angeles, 17–19 March 2005. Online. Available HTTP: <http://www.csun.edu/ cod/ conf/ 2005/ proceedings/ 2392.htm> (last accessed 23 May 2012).

Vanderheiden, G.C., Chisholm, W.A. and Ewers, N. (1997) Making screen readers work more effectively on the web. Trace Center, 18 November. Online. Available HTTP: <http://www.trace.wisc.edu/text/guideIns/browser/screen.html>

Vanderheiden, G.C. (2007) Redefining Assistive Technology, Accessibility and Disability Based on Recent Technical Advances. Journal of Technology in Human Services, 25, 1–2, 147–158.

WebAIM (2010) WAVE Online Web Accessibility Evaluation Tool [online]. Available HTTP: <http://wave.webaim.org/> (last accessed 18 Nov 2012).

Witt, N.A.J. and McDermott, A.P. (2002) Achieving SENDA-compliance for Web Sites in further and higher education: an art or a science? In L. Phipps, A. Sutherland and J. Seale (eds) Access All Areas: Disability, Technology and Learning. Oxford: ALT/TechDis, pp. 42–49.

Witt, N. and McDermott, A. (2004) Web site accessibility: what logo will we use today? British Journal of Educational Technology, 35, 1, 45–56.

Yu, H. (2003) Web Accessibility and the law: issues in implementation. In M. Hricko (ed.) Design and Implementation of Web-Enabled Teaching Tools. Ohio: Kent State University.


The Seven-Step Process (Smith and Bohman, 2004)

1. Validate your HTML

This first step is one of the most difficult, especially for people who are not comfortable with HTML. It is, however, a very important step toward Web accessibility. Proper, standards-based HTML lends itself to accessibility: assistive technologies rely on proper HTML more than most Web browsers do. Valid HTML has many other benefits as well, including a decreased likelihood of cross-browser differences or incompatibilities and better support for emerging technologies.

To validate your page’s HTML, use the W3C’s validator at http://validator.w3.org/. The results are often overwhelming at first: most people are not aware of the intricate rules and standards for HTML use, and even pages built with professional Web development software are unlikely to validate on the first attempt. Be sure to read the explanations of why the page does not validate, learn about the common HTML mistakes, and do what you can to fix them.
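One class of error the W3C validator catches is improperly nested or unclosed tags, which can confuse assistive technologies that rely on a clean document tree. The toy script below (mine, not part of the Seven-Step Process, and no substitute for the real validator) illustrates the idea with a simple tag-balance check:

```python
# Toy illustration of unclosed/mismatched tag detection. A real
# validator (http://validator.w3.org/) checks far more than this.
from html.parser import HTMLParser

# Void elements never take a closing tag
VOID = {"img", "br", "hr", "input", "meta", "link", "area", "base", "col"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []    # currently open tags
        self.errors = []   # problems found so far

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if not self.stack or self.stack[-1] != tag:
            self.errors.append(f"unexpected </{tag}>")
        else:
            self.stack.pop()

    def close(self):
        super().close()
        # anything left open at end of document was never closed
        self.errors.extend(f"unclosed <{tag}>" for tag in self.stack)

checker = TagBalanceChecker()
checker.feed("<p>Valid paragraph</p><div><p>Unclosed div")
checker.close()
print(checker.errors)  # ['unclosed <div>', 'unclosed <p>']
```

Running the W3C validator against the same fragment would report these errors alongside many subtler ones.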

Creating proper HTML is a challenge, but one that every Web developer should take on. When you understand the rules of HTML, you are much more likely to design usable and accessible Web content.

2. Validate for accessibility

Once you have created proper HTML within your page, many of your accessibility issues will be gone, because proper HTML requires many accessibility techniques, such as alt text. The next step is to find other accessibility issues that may be present in your page. This is where automated accessibility tools come into play.

The Wave accessibility tool is a great place to start – http://wave.webaim.org/. The Wave provides useful information about accessibility features and errors within your page. It is designed to facilitate the design process by adding icons to a version of your page; the icons represent structural, content, usability, or accessibility features or problems, so you can easily see the exact location within your page where an error is present.

The Wave (or any other software-based validator) cannot check all accessibility issues, but it checks nearly everything that can be checked in an automated process. Once you have fixed the errors and applicable warnings from the Wave, you may want to validate your page using other accessibility validators, such as Bobby (http://bobby.watchfire.com/), for another perspective on accessibility feedback. If you need to validate or audit the accessibility of an entire site, there are many evaluation tools you can use, including HiSoftware’s line of products (http://www.hisoftware.com/) or InFocus (http://www.ssbtechnologies.com/).
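The simplest check these tools automate is finding images with no alt attribute at all. A minimal sketch of that one check (illustrative only; tools such as the Wave test far more):

```python
# Minimal sketch of one automated accessibility check: flagging <img>
# elements that have no alt attribute. Real tools check much more.
from html.parser import HTMLParser

class MissingAltScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # src values of images lacking an alt attribute

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing.append(attrs.get("src", "(no src)"))

scanner = MissingAltScanner()
scanner.feed('<img src="logo.png" alt="Company logo">'
             '<img src="photo.jpg">')
print(scanner.missing)  # ['photo.jpg']
```

Note that a tool can only detect that alt text is absent; whether present alt text is appropriate still needs human judgment, which is why the later steps matter.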

3. Check for keyboard accessibility

This step is very easy: just load the page and make sure that you can move through the entire Web page using the Tab key. Ensure that every link and form item can be reached, and that all forms can be filled out and submitted via the keyboard. If any content is displayed in response to mouse actions, make sure that content is also available from the keyboard.
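Part of this check can be scripted as a smell test: elements with a mouse-only event handler and no keyboard equivalent are likely candidates for the "mouse actions" problem above. The pairings below (onmouseover/onfocus, onmouseout/onblur) are a common device-independence heuristic, not a complete test; tabbing through the page yourself remains the real check.

```python
# Rough keyboard-accessibility smell test: flag elements that have a
# mouse event handler without its usual keyboard counterpart.
# This is a heuristic, not a substitute for tabbing through the page.
from html.parser import HTMLParser

MOUSE_ONLY = {"onmouseover": "onfocus", "onmouseout": "onblur"}

class MouseOnlyScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        present = {name for name, _ in attrs}
        for mouse, keyboard in MOUSE_ONLY.items():
            if mouse in present and keyboard not in present:
                self.warnings.append(f"<{tag}> has {mouse} but no {keyboard}")

scanner = MouseOnlyScanner()
scanner.feed('<div onmouseover="showMenu()">Products</div>')
print(scanner.warnings)  # ['<div> has onmouseover but no onfocus']
```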

4. Test in a screen reader

It is a great idea to have a copy of a screen reader available for testing. I suggest IBM Home Page Reader because it is easy to learn and use, whereas other full-featured screen readers are more expensive and have a very steep learning curve. First, listen to the entire page without stopping. Did everything make sense? Did the screen reader access all of the content? Was the alternative text for images appropriate and equivalent enough to convey the content and meaning of each image? Was the reading order of the content logical?

Now try navigating the page with the screen reader. Are link labels descriptive? Are forms accessible via the keyboard? Are form labels included? If the page includes data tables, are data cells associated with headers? Does the navigation structure make sense? Is there an option to navigate within lengthy pages of content? Is content structure, such as headings and lists, correctly implemented? Is any multimedia accessible (i.e. does video have captions, does audio have transcripts, does Flash have an alternative, etc.)?
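One item on this checklist, form labels, can be partially pre-screened before the screen-reader session: every form control should have an associated <label for="…">. A small sketch of that check (illustrative only, and it ignores other valid labelling techniques such as title attributes):

```python
# Sketch of the form-label check from the screen-reader checklist:
# pair <label for="..."> elements with form-control ids.
# Ignores other labelling techniques (e.g. title attributes).
from html.parser import HTMLParser

class LabelChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.labelled = set()   # ids referenced by <label for="...">
        self.controls = []      # ids of form controls seen

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.labelled.add(attrs["for"])
        elif tag in ("input", "select", "textarea"):
            self.controls.append(attrs.get("id"))

    def unlabelled(self):
        return [i for i in self.controls if i not in self.labelled]

checker = LabelChecker()
checker.feed('<label for="name">Name</label><input id="name">'
             '<input id="email">')
print(checker.unlabelled())  # ['email']
```

A screen reader will typically announce the "name" field by its label but has nothing useful to say for the unlabelled "email" field, which is exactly what the listening test in this step exposes.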

5. Check your pages for WCAG compliance

Become familiar with the Web Content Accessibility Guidelines – http://www.w3.org/TR/WAI-WEBCONTENT/. The WCAG are a very complete set of accessibility guidelines – so complete that it is sometimes nearly impossible to comply with them fully. Though some of the guidelines are rather vague and some are outdated, they will give you a good idea of what it takes to be truly accessible. Manually evaluate your page against these guidelines. The WCAG checkpoints are divided into three priority levels, based upon their importance and impact on accessibility. At the very least, try to meet the Priority 1 and Priority 2 checkpoints. The Priority 3 checkpoints are more of a wish list for accessibility, but contain some very important items that should be included whenever possible. The Wave (http://wave.webaim.org/) can alert you to non-compliance with many of the WCAG guidelines.
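The priority levels map directly onto WCAG 1.0's conformance claims: Level "A" means all Priority 1 checkpoints pass, "Double-A" adds Priority 2, and "Triple-A" adds Priority 3. The mapping can be written down as a tiny function (the function name is mine):

```python
# WCAG 1.0 conformance levels as a function of which checkpoint
# priorities still have failures. (Function name is illustrative;
# the level definitions come from the WCAG 1.0 specification.)
def wcag1_conformance(failed_priorities):
    """failed_priorities: set of priority levels (1-3) that have at
    least one failed checkpoint. Returns the conformance level, or
    None if no conformance claim can be made."""
    if 1 in failed_priorities:
        return None          # Priority 1 failures rule out any claim
    if 2 in failed_priorities:
        return "A"
    if 3 in failed_priorities:
        return "Double-A"
    return "Triple-A"

print(wcag1_conformance({3}))      # 'Double-A'
print(wcag1_conformance({1, 3}))   # None
```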

6. Conduct user testing

One of the best ways to determine the accessibility of your pages is to get feedback from individuals with disabilities. Most are very willing to give you feedback if it will help increase the accessibility of your content to the disability community. Sometimes features of the site that you believed would increase accessibility turn out to be confusing or inaccessible, so be willing to make changes based on user testing. Especially seek feedback on your navigation structure and use of language, as these two things can pose huge accessibility barriers to a large group of individuals. Once their recommended changes have been made, have them test again to see if things have improved. Encourage feedback from all of your site visitors.

7. Repeat this process

Web accessibility is a continual process, one that should be evaluated often. Each time you update or change content, quickly run through the previous six steps. You will soon get very good at them, and eventually you will understand how both to evaluate and to create accessible Web content.

Accessibility Testing Software Compared


How is conformance ascertained?

What is accessibility?

An accessible web is available to people with disabilities, including those with:

  • Vision impairment (e.g. low vision or colour blindness) or vision loss affecting their ability to discern or see the screen
  • Physical impairment affecting their ability to use a mouse or keyboard
  • Hearing impairment or loss affecting their ability to discern or hear online audio
  • Cognitive impairments (e.g. dyslexia, ADD, learning difficulties, memory impairment) affecting their ability to comprehend or understand your site
  • Literacy impairments (e.g. low reading skills, or where English is not their first language) possibly affecting their ability to fully understand your site and its messages

Beneficiaries from an accessible web, however, are a much wider group than just people with disabilities and also include:

  • People with poor communications infrastructure, especially rural Australians
  • Older people and novice users, who are often unfamiliar with computers
  • People with old equipment (not capable of running the latest software)
  • People with “non-standard” equipment (e.g. WAP phones and PDAs)
  • People with restricted access environments (e.g. locked-down corporate desktops)
  • People with temporary impairments or who are coping with environmental distractions

How do we check for an accessible web site?

The Web Accessibility Initiative (WAI) outlines approaches for preliminary and conformance reviews of web sites [Brewer & Letourneau]. Both approaches recommend the use of ‘accessibility evaluation tools’ to identify some of the issues that occur on a web site. The WAI web site includes a large list of software tools to assist with conformance evaluations [Chisholm & Kasday]. These tools range from automated spidering tools such as the infamous Bobby [Watchfire, 2003], to tools that assist manual evaluation such as The WAVE [WebAIM], to tools that assist assessment of specific issues such as colour blindness. Some of the automated accessibility assessment tools also offer options for HTML repair.

What role can automated tools play in assessing the accessibility of a web site?

WCAG 1.0 comprises 65 checkpoints. Some of these are qualified with “Until user agents …”, and with the advances in browsers and assistive technology since 1999 some no longer apply, leaving 61 checkpoints. Of these, only 13 can clearly be tested definitively; another 27 can be tested for the presence of a solution or potential problem, but not for whether it has been resolved satisfactorily. With intelligent algorithms, many of the tools can narrow down the instances of potential issues that need manual checking – e.g. the use of “spacer” as the alt text for a spacer.gif used to position elements on the page.
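The spacer.gif example above can be sketched as a pattern filter: alt text matching known filler words is flagged for a human to decide whether it should become alt="" (decorative) or a real description. The word list is illustrative, not what any particular tool ships with:

```python
# Sketch of the kind of "intelligent algorithm" described above:
# flag filler alt text (e.g. alt="spacer") for manual review.
# The FILLER word list is illustrative only.
import re

FILLER = re.compile(r"^(spacer|image|img|graphic|photo|picture)$",
                    re.IGNORECASE)

def looks_like_filler(alt):
    """True if the alt text looks like filler; a human should then
    decide between alt="" (decorative) and a real description."""
    return bool(FILLER.match(alt.strip()))

print(looks_like_filler("spacer"))           # True
print(looks_like_filler("Sales by region"))  # False
```

This is why such tools reduce, rather than eliminate, manual checking: the filter only narrows the candidates, and the judgment call stays with the assessor.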

These automated tools are very good at identifying pages and lines of code that need to be manually checked for accessibility. Unfortunately, many people misuse them, placing a “passed” (e.g. XYZ Approved) graphic on a site because the tool could not identify any specific accessibility issues, even though the site has not been competently assessed by hand for the issues that software cannot check.

So, automated software tools can:

  • check the syntax of the site’s code
  • identify some actual accessibility problems
  • identify some potential problems
  • identify pages containing elements that may cause problems
  • search for known patterns that humans have listed

However, automated software tools cannot:

  • check for appropriate meaning
  • check for appropriate rendering (auditory, variety of visual)

Interpreting the results from the automated tools requires assessors trained in accessibility techniques, with an understanding of the technical and usability issues facing people with disabilities. A thorough understanding of accessibility is also required to competently assess the checkpoints that automated tools cannot check, such as consistent navigation and appropriate writing and presentation style.


Review Teams for Evaluating Web Site Accessibility


This document describes the optimum composition, training, and operation of teams of reviewers evaluating the accessibility of Web sites, based on the experience of individuals and organizations who have been evaluating Web site accessibility for a number of years. The combination of perspectives afforded by a team, rather than an individual, approach to Web site evaluation can improve the quality of evaluation. The document provides links to potential training resources related to the types of expertise needed in Web site evaluation, and suggests practices for effective coordination and communication during the review process.

The description of Web accessibility review teams in this document is informative, and not associated with any certification of review teams. Operation of Web accessibility review teams according to the description in this document does not guarantee evaluation results consistent with any given law or regulation pertaining to Web accessibility.

This document does not address repair of inaccessible Web sites. The recommended process for evaluation of Web site accessibility, and selection of software for evaluation, is addressed in a separate document, Evaluating Web Sites for Accessibility.

Composition of Review Teams

A review team, rather than individuals, is the optimum approach for evaluating accessibility of Web sites because of the advantages that the different perspectives on a review team can bring. It is possible for individuals to evaluate Web site accessibility effectively, particularly if they have good training and experience in a variety of areas, including expertise with the increasing number of semi-automated evaluation tools for Web accessibility. However, it is less likely that one individual will have all the training and experience that a team approach can bring.

Affiliations among members of review teams may vary. For instance, a team might be a group of colleagues working in the same physical office space, or they might be a team of volunteers communicating consistently by email but located in different countries and working for different organizations.

Likewise the nature of review teams as organizations may vary. For instance, the review team might operate as a for-profit business enterprise; or as an assigned monitoring and oversight group within a corporation or government ministry; or strictly on a voluntary basis.

Expertise of Review Teams

Review teams should have expertise in the following areas. Links to potential training resources are provided, although in areas such as use of assistive technologies and adaptive strategies, the most effective review approach can be inclusion of users with different disabilities.

  • Web mark-up languages and validation tools
    • W3C Technical Reports
    • [point to which online training resources, or describe generically?]
  • Web Content Accessibility Guidelines 1.0 (WCAG 1.0) and Techniques for WCAG 1.0
    • [link to these docs, to curriculum, and to training resource suite?]
  • Conformance evaluation process from Evaluating Web Sites for Accessibility
    • [link to conf eval section; anything to link to on eval curriculum? slide sets from hands-on workshops?]
  • Use of a variety of evaluation tools for Web site accessibility
    • [any training resources available for evaluation tools?]
  • Use of computer-based assistive technologies and adaptive strategies
    • [link to How People with Disabilities Use the Web sections]
    • [write the piece on cautions about sighted people evaluating pages with screen readers, etc.]
  • Web design and development.
    • [describe generic online resources?]

Operation of Review Teams

Communicate review process and expectations in advance

Depending upon the type of review, the review process may start with advance communication about how the review will be conducted, whether the results will be public or private, how the results will be presented, how much follow-up guidance will be given, etc. In instances where review teams are fulfilling a monitoring, oversight, or advocacy function, this advance notice may not be necessary.

Coordinate review process and communication of results

Most review teams will maintain a private e-mail discussion list for internal exchange of preliminary results during the process of reviewing a site, and for coordinating a single summary of the results from the review team.

Reference specific checkpoints when explaining results

Reports from review teams are most effective when they cite, and link to, the specific WCAG 1.0 checkpoints that are not met.

Compare evaluation results with other review teams

Review teams can benefit from periodically comparing summaries of Web site reviews with other teams as a quality check.

Provide feedback on guidelines and implementation support resources

Feedback on implementation support resources contributes to improving the quality of review tools and processes for all. Where possible, provide feedback on accessibility guidelines and implementation support resources, including the following:

  • Evaluating Web Sites for Accessibility
  • Review Teams for Evaluating Web Site Accessibility (this document)
  • Web Content Accessibility Guidelines 2.0 (Working Draft)
  • Accuracy of evaluation tools (provide feedback directly to tool developers as well as to WAI)
