The most common way organizations test for website accessibility is with automated software. On the surface, automated testing appears to be a fast, easy way to meet accessibility compliance requirements. However, how well automated results match the actual user experience is currently being debated. Some makers of automated testing software have recently released information indicating that their products are only about 40% accurate when compared with human user-experience testing. The primary issue with the accuracy, and credibility, of accessibility testing software is the “false positives” and “false negatives” it produces.
A “false positive” is a report of a detected problem that is not actually an issue for the user. This happens when an automated testing tool indicates that something is wrong when it isn’t. For example, the software might report that a webpage has no marked headings, even though users encounter them as marked.
A “false negative,” on the other hand, is a true problem that never gets flagged. For example, images on a website must carry a descriptive element called an alternative text tag, or alt tag, so that blind users can understand what the images are meant to convey. If an automated test sees that an image has an alt tag, it writes the image off as not a problem. However, the text in that alt tag could be nothing more than a string of keywords placed there so that search engines will direct users to the site. This is a real problem: the user gets no accurate description of the image from the keywords and has no idea what it is meant to tell them. If the web developer assumes nothing is wrong, the result can be extremely frustrating for the end user.
WeCo measures website accessibility by measuring the user’s experience of the website.
Here at WeCo, we use human-based user-experience testing to measure website accessibility. The testers who work with us live with disabilities that accurately represent the four main disability classifications identified by the U.S. Department of Health and Human Services: sight-related, hearing-related, motor-skill-related, and cognitive-related disabilities. This gives us a clear understanding of the accessibility problems a website has.
We then translate our testers’ experiences using criteria that encapsulate the Web Content Accessibility Guidelines (WCAG) 2.0, Section 508 of the Rehabilitation Act of 1973, the Americans with Disabilities Act, and WeCo’s Standards of Accessibility.
One important reason testing with users who live with disabilities is so vital is that many people with disabilities cannot afford the most up-to-date software or hardware. WeCo testers use the software, hardware, and assistive technology from their everyday lives, making the results exceptionally close to what other individuals living with disabilities encounter on websites. Our company recognized that our testers are already experts in the field of accessibility before they arrive at our door, making them ideal accessibility testers.