siteSifter
Accessibility Audits
In an ideal world, a website would be built to be accessible from day one, and remain that way throughout its lifetime. In the less-than-ideal, but far more real, world of the World Wide Web, this is rarely achieved.
This creates the need to perform accessibility audits – that is, to analyse websites and identify any difficulties users may have in accessing the information they contain. Without such analysis, problems can remain undetected.
Testing Methodology
There are only two viable methods for testing the accessibility of a website: either a group of users is set the task of manually reviewing the site, or an automated tool is given the same job.
However, accessibility is a far more complicated issue than it might seem. A site can easily appear accessible to one user and inaccessible to another, even given the same level of ability. The user-centred methodology thus suffers from an inherent difficulty: its baseline is formed by personal preference.
Tool-based analysis instead rests on a procedure in which users cooperate with engineers to establish which requirements must be met for a generic individual to access the information in a document. These requirements are then used to create a set of guidelines, and these guidelines form a baseline against which tests can be made.
Automated, specification-centred analysis cannot, however, be relied upon to test every single aspect of accessibility. Certain points, such as the suitability of language, are - and will likely remain - beyond our ability to determine programmatically.
It is worth noting that user testing can also be applied to the same type of guidelines as outlined above. However, for two or more individuals to arrive at the same conclusion regarding a checkpoint in a guideline, the checkpoint must be objectively measurable. If it is, it is also a prime candidate for automated testing.
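To illustrate (in Perl, the language siteSifter itself is written in): an objectively measurable checkpoint can be expressed as a predicate over a document's markup, so that any two runs of the test - human or machine - must agree. The checkpoint below is a hypothetical sketch of the idea, not taken from any particular baseline:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A hypothetical, objectively measurable checkpoint: the document
    # either contains a non-empty title element, or it does not.
    my $checkpoint = {
        id          => 'doc-has-title',
        description => 'Every document must carry a non-empty title element',
        test        => sub {
            my ($html) = @_;
            return $html =~ m{<title[^>]*>[^<]*\S[^<]*</title>}is
                ? 'pass' : 'fail';
        },
    };

    my $html = do { local $/; <> };    # slurp a page from STDIN or a file
    printf "%s: %s\n", $checkpoint->{id}, $checkpoint->{test}->($html);

Run against a saved page, the same input always yields the same verdict - which is precisely what makes such a checkpoint automatable.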
Choice of Methodology
A number of questions must be answered before an informed choice of testing methodology can be made. However, a few common goals exist which may be of aid to us.
We will start by examining two different viewpoints on accessibility status.
One-Point Accessibility
In this view we regard a web document as something static. When it is completed, it is tested against accessibility guidelines and then left to its own devices. At one point in time, the document was accessible. This view makes a document easy to test, as the test need be performed only once.
Any-Point Accessibility
This view regards web documents as dynamic entities, which are updated and changed. In such a situation the accessibility tests and repairs must be carried out throughout the active life of the document; at any point the document must be accessible. This view makes testing far more difficult, as it must be repeated, and often.
It should be clear that choosing manual, user-centred testing for any-point accessibility requires massive logistical and financial effort, whilst a one-point view is much less arduous.
In addition, accessibility work often centres on public electronic resources, such as those of government departments and agencies. For guidelines to be legally and ethically applied to such resources, they need to be applied objectively and repeatedly to a number of websites from different sources.
To do so with human testers, the same test panel would have to be applied to each source on each occasion. Such a practice is obviously tremendously difficult to implement.
Enter siteSifter
siteSifter is an automated tool for analysing websites. It is written in the well-known Perl programming language, and can test against a number of different baselines due to its modular construction.
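A minimal sketch of what such a modular arrangement might look like in Perl follows. The package and function names are hypothetical and do not describe siteSifter's actual internals; they merely show how one module per baseline can each supply its own set of checkpoints:

    use strict;
    use warnings;

    # Hypothetical plugin scheme: one package per baseline, each
    # supplying its own list of checkpoints (names are illustrative).
    {
        package Baseline::WCAG1;
        sub checkpoints {
            return (
                { id => 'img-has-alt',   description => 'Images carry alt text' },
                { id => 'doc-has-title', description => 'Documents carry a title' },
            );
        }
    }

    sub load_baseline {
        my ($name) = @_;                 # e.g. 'WCAG1' or 'Section508'
        my $module = "Baseline::$name";
        die "Unknown baseline '$name'\n" unless $module->can('checkpoints');
        return $module->checkpoints;     # the baseline's own set of tests
    }

    my @checkpoints = load_baseline($ARGV[0] || 'WCAG1');
    print scalar(@checkpoints), " checkpoints loaded\n";

Adding a new baseline then means adding a new module, leaving the analysis engine itself untouched.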
The system is based on pattern-matching and heuristic algorithms. This means that the program goes through the markup and code making up a website and looks for patterns associated with practices that make accessibility difficult.
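A classic example of such a pattern is an img element that lacks an alt attribute. The following self-contained Perl sketch shows this style of check, using the CPAN module HTML::TokeParser; it illustrates the technique only, and is not siteSifter's actual implementation:

    use strict;
    use warnings;
    use HTML::TokeParser;                # CPAN HTML tokeniser

    my $file = shift or die "usage: $0 page.html\n";
    my $p = HTML::TokeParser->new($file) or die "Cannot open $file: $!";

    # Walk the markup and flag every img element without an alt
    # attribute - a pattern strongly associated with inaccessible pages.
    while (my $tag = $p->get_tag('img')) {
        my $attr = $tag->[1];            # hash of the tag's attributes
        next if defined $attr->{alt};
        print "img missing alt attribute (src: ",
              ($attr->{src} || 'unknown'), ")\n";
    }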
siteSifter 2 runs offline, mirroring a target site before analysing the content and markup. siteSifter can also be run in batch mode, testing a number of sites in quick succession, or in scheduled mode, testing one or more sites at regular intervals.
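Scheduled mode sits naturally alongside a standard Unix cron job. The crontab entry below is purely illustrative: the sitesifter command name and its flags are assumptions made for the example, not the tool's documented interface:

    # Hypothetical crontab entry: re-test www.example.org against a
    # chosen baseline every Monday at 03:00 (command and flags assumed).
    0 3 * * 1  sitesifter --site http://www.example.org/ --baseline WCAG1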
During the fourth quarter of 2004, the siteSifter Journal front-end will become operational. Through it, clients can themselves schedule such automated testing, select their own baselines, and choose from a number of different report formats.
End-Note
No single testing method will ever produce a fully reliable result. It is now, as always, our recommendation that automated testing never be relied upon without manual review, and vice versa.