Favelets for Checking Web Accessibility

The work I like the most as a consultant is the careful manual evaluation of a small number of web pages for a client. The output of the evaluation is a table of the issues that I find, the severity of those issues, and my recommendation for fixing them.

This human evaluation process is fundamentally based on the Web Accessibility Toolbar, which, up until the Target decision last year, I was calling the best thing that ever happened to accessibility. Now maybe it is the second best thing!

In order for this review process to be efficient, I needed to make some changes to those Web Accessibility Toolbar functions, and that is the reason for the favelets I am writing about here. Here are the things I was looking for.

Tell me what you found

I wanted information about what the favelet found presented to me immediately in a JavaScript alert. So for example, the Form Labels favelet might produce the following alert.

screen shot of alert

The alert simply says "3 errors out of 6 form controls with 3 to check out". So we know there are three errors, which need recommendations for correct form labeling, but also three that need verification: is the labeling adequate? These are really two quite different tasks.
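The message pattern is simple enough to sketch. Here is a minimal, hypothetical version of the message builder; the wording matches the alert above, but the function name and shape are my own, not the favelets' actual code:

```javascript
// Hypothetical sketch of the alert text the favelets produce: hard
// errors are reported separately from items needing human review.
function summary(kind, errors, total, toReview) {
  return errors + " errors out of " + total + " " + kind +
         " with " + toReview + " to check out";
}

// The Form Labels favelet would pass a string like this to alert():
var msg = summary("form controls", 3, 6, 3);
```

The two numbers drive two different follow-up tasks: fixing the hard errors, and reviewing the items that merely need a human judgment.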

Tell me about active images separately

The second big difference between these favelets and the WAT functions deals with images. There are different kinds of images, which need to be handled differently. The most important distinction is active vs. inactive images. Alt-text on active images should say where the image link goes or what the button does, whereas inactive images that convey information should have alt-text that conveys the same information.

With the Web Accessibility Toolbar, you have to check each image separately, probably visually checking the mouse pointer as it moves over the image to see if it is a link. My favelet saves you the trouble. You just apply the Active Image favelet and your analysis will just concern images whose alt-text should convey the purpose of the active object.

The Active Images favelet also includes more information than the Web Accessibility Toolbar Show Images function. This favelet checks client-side image map area elements and image buttons (input with type="image"). I think the image buttons may be missed entirely by the Toolbar functions, while the image map area elements can be checked with a separate WAT function, Show Image Maps, which lists the image map areas. The screen shot below of part of the Target.com home page illustrates these extra checks. There is one image button with alt="go" and one image map which happens to have just one area. A JavaScript alert is also present in the screen shot with the message, "0 errors out of 33 active images with 33 to review".

Screen shot of part of Target.com page with active images favelet applied
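The rule the favelet applies can be sketched as follows. This is not the favelet's actual code, and the plain-object shape ({tag, type, inLink, alt}) is an assumption standing in for real DOM elements:

```javascript
// An image is "active" if it is an image button, an image map area,
// or an img inside a link.
function isActive(el) {
  if (el.tag === "input" && el.type === "image") return true; // image button
  if (el.tag === "area") return true;                         // image map area
  return el.tag === "img" && !!el.inLink;                     // image link
}

// A missing alt attribute on an active image is a hard error; an
// active image that has alt-text still needs human review of the text.
function checkActiveImages(items) {
  var active = items.filter(isActive);
  var errors = active.filter(function (el) { return el.alt == null; });
  return { active: active.length,
           errors: errors.length,
           toReview: active.length - errors.length };
}
```

A real favelet would build the list by walking the page's img, area, and input elements and then alert() the counts.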

Separate formatting images from other images

Amongst the images that are not active, there is a further division: those which convey information and those that don't. Of course it is impossible to make that division with a script; human review is necessary for that. But we can help a lot. The other two image favelets are Formatting Images and Larger Images, and the distinction is very useful. Formatting images are defined to be images with at least one dimension less than 10 pixels. It is assumed that these should have alt="", so errors are only highlighted when no alt-text is present. Finally, the Larger Images favelet checks for images with both dimensions 10 pixels or larger; though some may be decorative, these are the ones that need to be checked for alt-text that conveys the same information as the image.
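The 10-pixel rule itself is a one-liner; a sketch (the function name is mine, not the favelets'):

```javascript
// Inactive images split on the 10-pixel rule: either dimension under
// 10px makes a "formatting" image, otherwise it is a "larger" image.
function classifyInactive(img) {
  return (img.width < 10 || img.height < 10) ? "formatting" : "larger";
}
```

So a 1x400 spacer counts as a formatting image and should have alt="", while a 10x10 or bigger image gets its alt-text reviewed against the image content.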

Include title attributes as legitimate form labels

The Form Labels favelet checks for label elements as well as for the presence of title attributes, which are sometimes necessary instead of label elements, especially when the prompting text is not adequate or is dispersed. This favelet also watches out for duplicate id attributes (an error), empty label elements, and label elements coded as containers of both the control and the prompt, a situation that is "compliant" but not well supported. Suzanne Taylor has modified our Form Labels favelet so it now also checks for fieldset and legend. With WAT you need to use both the Forms function and the Titles function to get the same information.
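The core checks can be sketched on plain objects rather than real form controls; the {id, label, title} shape is an assumption, and this is an illustration of the rules rather than the favelet's code:

```javascript
// A control passes if it has a non-empty associated label OR a
// non-empty title attribute. Duplicate ids are counted as errors.
function labelingErrors(controls) {
  var seenIds = {};
  var errors = 0;
  controls.forEach(function (c) {
    if (c.id) {
      if (seenIds[c.id]) errors++;   // duplicate id attribute: an error
      seenIds[c.id] = true;
    }
    var hasLabel = c.label && c.label.trim() !== "";  // empty labels fail
    var hasTitle = c.title && c.title.trim() !== "";  // title is legitimate too
    if (!hasLabel && !hasTitle) errors++;
  });
  return errors;
}
```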

The only tool for checking skip links

I think ours is the only accessibility favelet to check for local links and whether they will "work". I have been writing about the fact that sometimes, even often, skip links will not work. The problem is that after following a skip link and tabbing one more time, you will be back where you started, i.e., at the top of the page. The Skip Links favelet identifies local anchors and also indicates whether they will work by marking up the target of each in-page link. Here is a screen shot of the Web Accessibility Initiative home page, which has a visible skip link at the top.

Screen shot of W A I home page with skip link favelet applied

The screen shot shows the skip link (Skip to Content) highlighted with href="#main" identified. Then there is a thick blue border around the full page, identified as the target for #main. That means that when one tabs after following the skip link, focus will be back at the "Skip to content" link, the first active item in the blue box. Try it in IE on the WAI page; then you will understand. Actually you won't! The WAI skip link now works, and the favelet is not correctly marking the target. If you disable CSS (a WAT function) you'll see that the text for the target is inserted but is not being displayed.
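The first half of the check, pairing each in-page link with its target, can be sketched like this (assumed shapes: links is an array of href strings, targets an array of id/name values on the page; whether tabbing from the target actually works still needs the visual markup and a human eye):

```javascript
// Collect in-page links (hrefs starting with "#") and note whether
// each one has a matching anchor target on the page at all.
function resolveInPageLinks(links, targets) {
  return links
    .filter(function (href) { return href.charAt(0) === "#"; })
    .map(function (href) {
      var name = href.slice(1);
      return { href: href, hasTarget: targets.indexOf(name) !== -1 };
    });
}
```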

The rest of the accessibility checking favelets

The Data Tables favelet annotates the page with table borders and all useful accessible table markup on the page, including th, scope, summary and caption. The Table Header function in the Web Accessibility Toolbar does a much simpler job of highlighting (reverse colors) the th cells but doesn't indicate scope attributes.
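The kind of inventory the favelet surfaces can be sketched as follows; the cell shape ({tag, scope, headers, ...}) is an assumption standing in for DOM nodes:

```javascript
// Report which accessible table attributes are actually present, so
// the reviewer can compare against what the table needs.
function accessibleTableMarkup(cells) {
  var attrs = ["scope", "headers", "axis", "id"];
  var found = {};
  cells.forEach(function (cell) {
    if (cell.tag === "th") found["th"] = true;
    attrs.forEach(function (a) { if (cell[a]) found[a] = true; });
  });
  return Object.keys(found).sort();
}
```

A simple data table might show just ["scope", "th"]; a complex table should also show id and headers.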

I rarely use the Long Description favelet in client reviews. But it was part of the human review process described below that was the motivation for writing these favelets in the first place. It announces the number of images with longdesc attributes and provides links to the long description file in each case. After installing these favelets on your machine, you can try this favelet on the section of the 508 Web Accessibility course dealing with images, https://jimthatcher.com/webcourse2.htm; the page has some longdesc attributes.

Some new favelets

As I mentioned above, Suzanne Taylor has been improving the operation of the favelets and has also added four new ones. In the improvement area, she has adapted them for Firefox and for later versions of IE (we're good with IE9 now). My favorite improvement is what happens when you run a favelet twice. Why would you do that? Well, I dismiss the alert too quickly and want to see it again. So I run the favelet a second time - but the markup on the page does not double up; a mathematician would call the operation of the favelet on the page "idempotent," at least this mathematician would.
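One way to get that idempotence, as a sketch (the flag name is an assumption, not the favelets' actual code): analyze and mark up the page once, and on later runs just repeat the report.

```javascript
// First run: do the analysis and remember the result. Later runs:
// return the same result without touching the page again.
function runFavelet(state, analyze) {
  if (!state.faveletDone) {
    state.result = analyze();        // mark up the page, count errors
    state.faveletDone = true;
  }
  return state.result;               // same alert, no doubled markup
}
```

In a real favelet, `state` would be the page's window object, so the flag survives between clicks on the same page.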

One of the most important favelets is Suzanne's new Aria Markup favelet, which highlights all ARIA code on the page, including roles, tabindex="-1" and all other key information. Speaking of tabindex, we now have a Tabindex favelet which is essentially the same as the WAT Toolbar Tabindex function. The Frames favelet marks up the frames on a page and also produces a list of frames and iframes that can be opened individually.

The last of Suzanne's new favelets is the LargeText favelet. When you are testing for adequate contrast using the very helpful Colour Contrast Analyser, you may be confronted with a report that the contrast ratio is inadequate for normal text but passes for "large text." What's "large text"? The answer from WCAG 2.0 is "18 point or 14 point bold." That really doesn't help very much, does it? But Suzanne's LargeText favelet will help.

Screen shot of large text favelet in use

As shown in the screen shot above, the favelet overlays a box with samples of 14 point bold and 18 point text so that you can check the text you are testing with the Contrast Analyser. The box can be moved with the mouse, with arrow keys, or with u, d, r, and l. In addition, for keyboard use you can adjust the size of the motion using the numeric keys, from 1 for pixel-size steps (tiny) through 4 for steps of about a quarter page (large).
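The sizes behind those samples can be pinned down in CSS units: CSS defines 1pt as 1/72 inch and 1px as 1/96 inch, so a point is 4/3 of a pixel. A sketch of the conversion (the favelet's own implementation may differ):

```javascript
// CSS units: 1pt = 1/72 in, 1px = 1/96 in, therefore 1pt = 4/3 px.
function ptToPx(pt) { return pt * 4 / 3; }

var largeAnyWeight = ptToPx(18);   // 24px: "large text" at any weight
var largeIfBold = ptToPx(14);      // about 18.7px: "large" only when bold
```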

And as of the Fourth of July, 2012, we have another favelet from Suzanne Taylor, the Landmarks favelet. This robust tool will indicate all ARIA and HTML5 landmarks, highlighting the text referenced by aria-labelledby. It will also alert you to the misspelling aria-labeledby.
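That spelling check is worth a sketch of its own, since the one-l misspelling is so easy to type (the function name is mine; the correct attribute is aria-labelledby, with two l's):

```javascript
// Count occurrences of the common one-l misspelling of the
// aria-labelledby attribute in a list of attribute names.
function countMisspelledLabelledby(attributeNames) {
  return attributeNames.filter(function (name) {
    return name.toLowerCase() === "aria-labeledby";
  }).length;
}
```

The misspelled attribute is silently ignored by browsers and assistive technology, which is exactly why a checker has to look for it.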

The human review favelets

You can check the application of any favelet on this page by activating (clicking on) the corresponding link below. To use them on other pages you must install them in your browser's favorites list. These used to work only in IE7, but now that they have been modified by Suzanne Taylor (Pearson Learning) they also work in IE8 and IE9, and are grouped as a toolbar for Firefox. They are listed here so they are convenient to the installation process described in the next section.

Active Images The Active Images favelet counts alt-text errors and highlights alt text on active images to review. Active images include images inside anchor tags (image links), <input> elements with type=image and <area> elements for image maps. This favelet adds "NoALT" markup text for image errors with all image borders highlighted.

Larger Images The Larger Images favelet counts alt-text errors and highlights alt-text on larger images to review. Larger images are <img> elements which are not active and whose height and width are both greater than 9 pixels.

Long Description The Long Description favelet highlights and provides links to view long descriptions when present.

Formatting Images The Formatting Images favelet counts alt-text errors and highlights alt-text on formatting images for review. Formatting images are defined here as images which are not active and which have height or width less than 10 pixels. This favelet does not add markup text for formatting image errors although all image borders are highlighted.

Form Labels The Form Labels favelet highlights form labeling and counts form labeling errors. Errors are also highlighted.

Data Tables The Data Tables favelet simply displays all accessible table markup - th, summary, scope, axis, id, and headers. In this way, the human reviewer can see if necessary accessible table markup is present. The favelet also draws borders around all tables.

Skip Links The Skip Links favelet highlights the location of in-page links and where input focus (not visual focus) will land when each link is activated.

Headings The Headings favelet counts and highlights all HTML headings (h1, h2, ..., h6) on the page.

ARIA Markup The Aria Markup favelet counts and highlights with h6 tags all instances of ARIA markup on the page.

Tabindex The Tabindex favelet counts and highlights with h6 tags all instances of tabindex on the page.

LargeText The LargeText favelet creates a movable box on the page with samples of "large text".

Frames The Frames favelet marks up frames and iframes on the page and also creates a list of the frames and their properties in a new window. From there you can open individual frames by clicking on the src link.

LandMarks The LandMarks favelet marks up all ARIA and HTML5 landmarks (<nav>). It also checks for and marks up the text specified by aria-labelledby. The favelet even checks for the misspelling "aria-labeledby."

Slider The Slider favelet is the first in the "widget group," revealing the ARIA roles slider, spinbutton, progressbar, and scrollbar as well as the HTML5 input types range and number.


Installation of the favelets

Use the following sequence steps to install each of the Favelets in your IE Favorites collection.

  1. In your IE Favorites create a folder, say "My Favelets"
    • Favorites | Organize Favorites | Create Folder
    • screen shot of create folder dialog

    • The screen shot of the Organize Favorites dialog shows a "Create Folder" button (Alt+C).
    • Enter the name, "My Favelets," in the edit field that becomes available. Then close the Organize Favorites dialog (Alt+L).

At this point, you have a "My Favelets" folder created in your Favorites. Below are the steps to install each of the 14 favelets.

  1. Open the favorites side panel so that the new folder is visible (View | Explorer Bar | Favorites or Ctrl+I).
  2. If you are using a mouse, drag each favelet from this page to the My Favelets folder - select "Yes" on each security warning if it appears (screen shot below).

    Security warning. You are adding a favorite that may not be safe.

    This process can be accomplished from the keyboard although the functions of most favelets are purely visual. Give focus to the desired favelet. Open the context menu with Shift+F10 (right click). Choose "Add to Favorites" (F). If the Security warning opens, select "Yes." Then in the Add to Favorites dialog, find the folder My Favelets in the "Create In" list and select OK (Enter).
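For those wondering why a favorite can run code at all: a favelet (bookmarklet) is simply a bookmark whose address uses the javascript: scheme, wrapped in an immediately-invoked function. A toy example of what such a bookmark address looks like (my illustration, not one of the favelets above):

```javascript
// When a bookmark with this address is activated, the browser runs
// the code after "javascript:" in the context of the current page.
var toyFavelet =
  "javascript:(function(){alert(document.images.length+' images');})();";
```

That is also why the security warning appears: the browser is being asked to store runnable script as a favorite.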

Installation on Firefox

The favelets install as a toolbar on Firefox. This link, FaveletToolbar.xpi, is to the xpi file containing the data for that toolbar. To install, open the link (Enter or double click) and approve the installation with "Allow".

Screen shot of Allow dialog

Then choose "Install Now" on the subsequent "Install AddOns" dialog.


An IBM idea for checking accessibility of large web sites

These favelets were originally created for IBM in a project with Matt Bronstadt (MIT), coordinated by Matt King (IBM) - about 5 years after I retired from IBM (after 37 years). The results of the project are written up in the IBM Systems Journal article, "Managing usability for people with disabilities in a large Web presence."

The background is that Matt King had been monitoring the IBM internal and external web sites (about 13 million pages) for a number of years. This monitoring was checking only a very small number of accessibility errors like missing alt-text and missing form labels, errors that could mechanically be checked (by computer) with certainty. Reports were sent to responsible managers and vice presidents. And over the years the number of errors dramatically decreased.

The question became - what about those errors that the accessibility checking tool could not detect, those errors that require human review - and there are many more of those than the ones that can be checked by computer. Matt Bronstadt (the statistician amongst us) developed a sampling methodology whereby relatively small sets of pages would be evaluated by humans for accessibility errors, and the results of those samples would be projected to the larger population, very much like relatively small polling samples are used to predict election results or generally to represent the opinion of large populations.
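The projection step is standard survey arithmetic; a sketch using the usual normal approximation (the formula is textbook statistics, not taken from the paper):

```javascript
// Estimate the site-wide error rate from a human-reviewed sample,
// with a 95% margin of error (normal approximation to the binomial).
function projectErrorRate(pagesWithErrors, sampleSize) {
  var p = pagesWithErrors / sampleSize;
  var margin = 1.96 * Math.sqrt(p * (1 - p) / sampleSize);
  return { rate: p, margin: margin };
}

// For example, 60 failing pages out of a 400-page sample projects to
// a 15% site-wide failure rate, plus or minus about 3.5 points.
var est = projectErrorRate(60, 400);
```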

The problem was that even the sampling process required performing human evaluation on fairly large numbers of pages. We needed a review process that was quick and simple and didn't require an accessibility consultant with a PhD to do the work. The Favelets were the first half of the story - providing an interface that facilitates human review.

The second part of the idea for rapid human review comes from the AIR programs sponsored by Knowbility. In the AIR-Austin event, local tech teams are recruited and trained in accessibility. Knowbility also recruits local non-profits that need a web site. The tech teams are paired with non-profits and in a one day Rally the teams build accessible sites for the non-profits. Then, and most importantly, the sites are judged for accessibility by a panel of local accessibility experts. To make that judging process reasonably short, the scoring includes simple cutoffs - if you have more than 3 errors in many of the categories, the score is zero for that category and the judge doesn't need to keep counting errors for that item. It is that cutoff idea that is used here in the Rapid Human Review Process.

Rapid human review for accessibility

  1. New Page

    1. Open the next page
    2. Record the URL of the page
  2. Active alt-text (Score 10)

    1. Apply the Active Images favelet to the page.
    2. If a new window is opened containing a list of <frame> elements, skip to step number 9.
    3. Record the number of active image errors, the number of active images, and the number of active images to review as reported by the favelet.
    4. Record number [up to 3] of objects (images, image buttons, areas) with inappropriate alt-text, like alt-text is too long, doesn't match the image, is generic or otherwise useless. Comment as appropriate. Make sure the alt-text conveys the function of the image. If the image is text, the alt-text should (usually) be that text.
  3. Inactive alt-text on non-formatting images (Score 8)

    1. Use F5 to refresh page.
    2. Apply the Larger Images favelet to the page.
    3. Record the number of larger image errors, the number of larger images, and the number of larger images to review as reported by the favelet.
    4. Record number [up to 3] of larger images with inappropriate alt-text, like alt-text is too long, doesn't match the image. Comment as appropriate. Watch especially for generic and useless alt-text. Perhaps the image should have alt="" if it is just providing "eye candy".
  4. Long Descriptions (Score 10)

    1. Record the number of large images from the previous step (Step 3) that are charts, graphs, or presentation slides, for which alt-text does not convey the important information in the image and a long description is needed.
    2. If a long description is in-line, no longdesc attribute is necessary.
    3. Apply the Long Description favelet to the page.
    4. Look at long descriptions for the large images (links are made available for those by the favelet).
    5. Record the number [up to 3] of images in Step 4.1 that do not have in-line descriptions and do not have long descriptions or for which the in-line or long description is inadequate.
  5. Alt text on formatting images (Score 4)

    1. Use F5 to refresh page.
    2. Apply the Formatting images favelet to the page.
    3. Record the number of formatting image errors, the number of formatting images, and the number of formatting images to review as reported by the favelet.
    4. Record number [up to 3] of formatting images with inappropriate alt-text. Comment as appropriate. Watch especially for things like "spacer", "1_pix.gif" or "rule".
  6. Form labeling (label elements or title attributes). (Score 10)

    1. Use F5 to refresh page.
    2. Apply the Form labels favelet to the page.
    3. Record the number of form controls, the number of form control errors and the number of form controls to check out as reported by the favelet.
    4. Review the labels or titles, when specified, for adequacy. Record the number (up to 3) of form elements in which the label/title is present and not adequate. Watch for both title and label. If significantly different it is an error. Comment as appropriate.
  7. Data tables (Score 10)

    1. Use F5 to refresh page.
    2. Count and record the number of data tables on the page.
    3. Apply the Data Tables favelet to the page.
    4. Record the number of tables observed.
    5. Record the number [up to 3] of Data Table markup errors. Any missing TH on a heading cell is an error. If the table is complex, any header cell without an id or any data cell without a headers attribute is an error.
  8. Skip links (Score 10)

    1. Use F5 to refresh page.
    2. If there are 5 or more navigation links preceding the main content of the page, then record 1 for human review for skip links.
    3. Apply the Skip Links favelet to the page.
    4. If there is a skip link at the top of the page, check that its target is correct - the top of the blue border should be the top of the main content. If it is not, there is a skip link error.
    5. If there is no skip link, check the headings on the page by applying the Headings favelet to the page. If there is a heading at the top of the main content then there is no skip link error; if there is no such heading then there is a skip link error.
    6. Comment as appropriate.
    7. The review is complete; skip step 9.
    8. The following screen shot shows the results of the Skip Links favelet having been applied to the IBM Web Accessibility Guidelines page, Checkpoint 8.

      screen shot of ibm.com page with skip link favelet showing a good target for the skip link but bad targets for the other three in-page links

      There are 6 in-page links on this IBM page. Four of those are visible in the screen shot. The target of all of them is the object with the navy blue border, which is OK for the skip link but not good for the other in-page links. As it turns out, for each of those in-page links (Rationale, Techniques, Testing) the tab after following the link will return input focus to the "Rationale" link, an annoying fact for the keyboard user.

  9. Frames (Score 10)

    1. This step is to be followed only if a frame list was opened in step 2. The window opened in step 2 above contains a list of <frame> elements with the associated name and title attributes. Each missing or inadequate title attribute is an error.
    2. All other items score 0 errors.

Scoring the human review process

The scoring is based on the system used for judging in the AIR programs hosted by Knowbility, Inc. In the classic format of the AIR event, tech teams are trained in accessible web development and paired with non-profit organizations. In a one-day rally, each tech team builds a web site for its non-profit. Then the web sites are judged by a team of accessibility experts. The judging process is based on a judging form which has been developed over the years for ease of use and accuracy. For us the key ingredient of that form is the idea that you only need to count a few errors, three in particular. If there are three active image errors, the team receives 0 points in the active image category. The idea is that by the time there are three failures, it is clear the page developer has failed that test, and more errors do not significantly change the situation. The other part of the idea is that this eases the burden on judges: there is no need to count more than three errors. Since our goal is to complete this review in a very short time, this is a significant saving.
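The cutoff rule amounts to a small lookup; a sketch using a few of the score rows from the scoring table (the function itself is my own illustration):

```javascript
// Scores for 0, 1, 2, and 3-or-more errors in each category.
var scoreRows = {
  "Active alt-text":   [10, 5, 1, 0],
  "Inactive alt-text": [8, 4, 1, 0],
  "Formatting Images": [4, 2, 1, 0],
  "Skip links":        [10, 0, 0, 0]  // any skip link error scores zero
};

function score(category, errorCount) {
  // Counting stops at three: more errors cannot lower the score further.
  return scoreRows[category][Math.min(errorCount, 3)];
}
```

This is why the judge can stop counting at three: score("Active alt-text", 3) and score("Active alt-text", 30) are both zero.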

The following table shows the way each of the human review steps is scored. The maximum score is 62 for a page with no errors.

                      Number of errors
                      0     1     2     3
Active alt-text      10     5     1     0
Inactive alt-text     8     4     1     0
Long descriptions    10     5     1     0
Formatting Images     4     2     1     0
Form Labels          10     5     1     0
Data Tables          10     5     1     0
Skip links           10     0     0     0
Frames               10     5     1     0

Max Score: 62