Since its inception in the late 1980s and early 1990s, the World Wide Web has been recognized as a major game changer for people living with one or more disabilities. Tim Berners-Lee, W3C Director and inventor of the World Wide Web, puts it this way: “The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.” (W3C Press Release)
The question then becomes: how do we ensure that the web is accessible and usable by all, despite rapid advances in technology, such as HTML5, and the widespread use of multimedia, such as graphics and video? The answer is to use a variety of methods for testing the accessibility of websites.
For many years, web accessibility testing focused primarily on automated accessibility checkers: tools programmed to flag common accessibility errors on websites, such as missing alternative text for images and unlabeled buttons. However, the recent trend has been toward combining results from automated testing tools with results from human-based testing.
Automated Testing – Positive Aspects
In general, automated web accessibility testing tools can be, and often are, used to obtain an overall view of the accessibility, or inaccessibility, of websites, based on published accessibility standards such as the WCAG 2.0 Guidelines. Automated testing tools are useful in the following ways:
- They can locate common errors, such as missing form field labels, missing alternative text for images, and unlabeled buttons and links (a minimal example of this kind of check appears after this list).
- They can test large websites in a relatively short time, allowing common issues to be identified efficiently, without the labor-intensive and error-prone scrutiny of each page’s HTML source code.
- They can serve as an easy means of formulating an overall plan for addressing the major accessibility issues on a given website.
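To make the first point concrete, here is a minimal sketch, in Python, of the kind of presence check an automated tool applies. The class name MissingAltChecker and the sample markup are illustrative only; real tools such as axe-core or WAVE implement far broader sets of WCAG 2.0 checks.

```python
# A minimal sketch of one rule an automated checker might apply:
# scan HTML for <img> elements that lack an alt attribute entirely.
# Uses only the Python standard library.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.errors = []

    def handle_starttag(self, tag, attrs):
        # Flag any <img> with no alt attribute at all.
        if tag == "img" and "alt" not in dict(attrs):
            line, _ = self.getpos()
            self.errors.append(f"line {line}: <img> is missing an alt attribute")

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png"></p>')
for error in checker.errors:
    print(error)  # -> line 1: <img> is missing an alt attribute
```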
Automated Testing – Negative Aspects
Automated web accessibility testing tools are poor at detecting other common but subtle accessibility issues, such as whether the alternative text supplied for an image is meaningful to users of assistive technologies like screen readers. For instance, the alternative text for an image on a web page could be the word “image,” which may pass an automated accessibility test but would fail a human-based test: the word “image” is not descriptive, and conveys no meaningful information to a user who is blind.
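Continuing the hypothetical MissingAltChecker sketched above, this is exactly the case a presence check misses: both snippets below pass, even though only the second alt text would help a screen reader user.

```python
# Reusing the MissingAltChecker class from the earlier sketch.
checker = MissingAltChecker()
checker.feed('<img src="chart.png" alt="image">')
checker.feed('<img src="chart.png" alt="Bar chart: sales rose 20% in Q3">')
print(checker.errors)  # -> []  (nothing flagged; the tool cannot judge meaning)
```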
Another reason for using human testers is that most automated tools cannot assess the accessibility of interactive multimedia, such as videos; script-based dynamic applications, like online games and interactive street maps; or the usefulness of labels for form fields. All of these can significantly affect the accessibility of our modern content-rich web if steps are not taken to ensure accessibility and usability for all.
In summary, automated accessibility testing tools provide a good starting point. However, no automated test is comprehensive enough to determine real-world accessibility. It is therefore imperative to use human-based testing to fill the gaps left by automated tools.