We design with viewports in mind, keep track of loading times, and hunt down even the smallest browser bugs, all to create the best possible user experience. But despite all these efforts to constantly improve our products, there's still one aspect that, unfortunately, comes up short quite often: accessibility.
Have you ever tried to navigate your website using only your keyboard? Your mobile application with a screen reader? And do you consider your color choices with accessibility in mind? With the help of this eBook, you will gain a deeper understanding of common accessibility pitfalls and learn to circumvent them to create a better experience for everyone.
The first step towards making informed decisions about accessible design, though, is fully grasping how the underlying technology works. That's why we'll start off this eBook with a closer look at accessibility APIs. Based on that, our authors consider UX principles for accessibility and share best coding practices that guarantee a better and smoother interaction, no matter how a user interacts with your content. Finally, we cover strategies and tools to simulate how someone with visual impairments experiences your website, as well as key lessons from designing for older people. As you will see, with accessibility in mind, we can serve many more people than we already do. It's about time to finally remove the existing barriers and build a more inclusive web; the effort is reasonable, and all our users will benefit from it.
Accessibility APIs: A Key To Web Accessibility
By Léonie Watson & Chaals McCathie Nevile
Web accessibility is about people. Successful web accessibility is about anticipating the different needs of all sorts of people, understanding your fellow web users and the different ways they consume information, and empathizing with them, with their sense of what is convenient, and with the frustratingly unnecessary barriers you could help them avoid.
Armed with this understanding, you can treat accessibility as what it then becomes: a cold, hard technical challenge. A firm grasp of the technology is paramount to making informed decisions about accessible design.
How do assistive technologies present a web application to make it accessible for their users? Where do they get the information they need? One of the keys is a technology known as the accessibility API (or accessibility application programming interface, to use its full formal title).
Reading The Screen
To understand the role of an accessibility API in making web applications accessible, it helps to know a bit about how assistive technologies provide access to applications and how that has evolved over time.
A World of Text
With the text-based DOS operating system, the characters on the screen and the cursor position were held in a screen buffer in the computer's memory. Assistive technologies could obtain this information by reading directly from the screen buffer or by intercepting signals being sent to a monitor. The information could then be manipulated: magnified, for example, or converted into an alternative format such as synthetic speech.
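The screen-buffer idea can be sketched in a few lines of code. The buffer layout and helper names below are purely illustrative, not those of any real DOS-era screen reader:

```javascript
// Illustrative sketch: a text-mode screen as a flat character buffer,
// roughly as a DOS-era assistive technology could have read it.
// (Cell attributes such as color are omitted for brevity.)
const COLS = 80;
const ROWS = 25;
const buffer = new Array(COLS * ROWS).fill(" ");

// The application (or OS) writes characters into the buffer...
function writeAt(row, col, text) {
  for (let i = 0; i < text.length; i++) {
    buffer[row * COLS + col + i] = text[i];
  }
}

// ...and a screen reader can recover a whole line of text directly
// from memory, with no image recognition involved.
function readLine(row) {
  return buffer.slice(row * COLS, (row + 1) * COLS).join("").trimEnd();
}

writeAt(2, 10, "C:\\> dir");
console.log(readLine(2)); // the text is recoverable verbatim
```

Because the characters themselves lived in memory, converting them to speech or large print was a matter of reading and reformatting, not interpretation.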
Getting Graphic
The arrival of graphical interfaces such as OS/2, Mac OS and Windows meant that key information about what was on the screen could no longer be simply read from a buffer. Everything was now drawn on screen as a picture, including pictures of text. So, assistive technologies on those platforms had to find a new way to obtain information from the interface.
They dealt with this by intercepting the drawing calls sent to the graphics engine and using that information to create an alternate off-screen version of the interface. As applications made drawing calls through the graphics engine to draw text, carets, text highlights, drop-down windows and so on, information about the appearance of objects on the screen could be captured and stored in a database called an off-screen model. That model could be read by screen readers or used by screen magnifiers to zoom in on the user's current point of focus within the interface. Rich Schwerdtfeger's seminal 1991 article in Byte describes the then-emerging paradigm in detail.
Off-Screen Models
Recognizing the objects in this off-screen model was done through heuristic analysis. For example, the operating system might issue instructions to draw a rectangle on screen, with a border and some shapes inside it that represent text. A human might look at that object (in the context of other information on screen) and correctly deduce it is a button. The heuristics required for an assistive technology to make the same deduction are actually very complex, which causes some problems.
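A toy version of such a heuristic might look like the sketch below. The drawing-call shapes and the `classify` helper are invented for illustration; real off-screen models were vastly more involved, which is exactly the problem the paragraph describes:

```javascript
// Illustrative heuristic: guess an object's role from low-level drawing
// calls, roughly as an off-screen model had to. Note the result is a
// deduction, not a fact the application stated.
function classify(calls) {
  const hasBorderedRect = calls.some(c => c.op === "rect" && c.border);
  const text = calls
    .filter(c => c.op === "text")
    .map(c => c.value)
    .join(" ");

  if (hasBorderedRect && text) {
    // A bordered rectangle containing text *might* be a button...
    return { role: "button?", label: text };
  }
  if (text) {
    return { role: "static text", label: text };
  }
  // ...and anything else is anyone's guess.
  return { role: "unknown", label: "" };
}

const drawingCalls = [
  { op: "rect", border: true, x: 10, y: 10, w: 80, h: 24 },
  { op: "text", value: "OK", x: 40, y: 16 },
];
console.log(classify(drawingCalls)); // { role: "button?", label: "OK" }
```

Even this toy hedges its answer with a question mark; a real heuristic had to cope with custom-drawn controls, themes, and overlapping windows, with no guarantee of being right.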
To inform a user about an object, an assistive technology would try to determine what the object is by looking for identifying information. For example, in a Windows application, the screen reader might present the Window Class name of an object. The assistive technology would also try to obtain information about the state of an object by the way it is drawn; for example, tracking highlighting might help deduce when an object has been selected. This works when an object's role or state can easily be determined, but in many cases the relevant information is unclear, ambiguous or not available programmatically.
This reverse engineering of information is both fallible and restrictive. An assistive technology could implement support for a new feature only once it had been introduced into the operating system or application. An object might not convey useful information, and in any case it took time to identify a new object type, develop the heuristics needed to support it and then ship a new version of the screen reader. This created a delay between the introduction of new features and assistive technology's ability to support them.
The off-screen model needs to shadow the graphics engine, but the engines don't make this easy. The off-screen model has to independently calculate things like white-space management and alignment coordination, and errors would almost inevitably mount up. These errors could result in anomalies in the information conveyed to assistive technology users, or in garbage buildup and memory leaks that lead to crashes.
Accessibility APIs
From the late 1990s, operating system accessibility APIs were introduced as a more reliable way to pass information to assistive technologies. Instead of applying complex heuristics to determine what an on-screen object might be, assistive technologies could query the accessibility API for specific information about each object. Authors could now provide the necessary information about an application in a form that they knew assistive technology would understand.
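The contrast with heuristic guessing can be sketched in code: instead of inferring a role from pixels, the assistive technology asks each object to describe itself. The property names below are generic stand-ins, not those of any particular platform API:

```javascript
// Illustrative sketch of an accessibility-API-style object: the
// application exposes role, name and state directly, so the assistive
// technology needs no heuristics at all.
const checkbox = {
  role: "checkbox",
  name: "Subscribe to newsletter",
  state: { checked: true, focused: false },
};

// A screen reader can simply query the object and announce the result.
function announce(obj) {
  const states = Object.keys(obj.state).filter(k => obj.state[k]);
  return `${obj.name}, ${obj.role}` +
    (states.length ? `, ${states.join(", ")}` : "");
}

console.log(announce(checkbox));
// "Subscribe to newsletter, checkbox, checked"
```

Compare this with the off-screen model: there is nothing to deduce and nothing to get wrong, because the author supplied the semantics up front.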
An accessibility API represents objects in a user interface, exposing information about each object within the application. Typically, there are several pieces of information for an object, including: