We’ve all heard a great deal of buzz about AJAX in the last few months, and with this talk has come a legion of articles, tips, presentations and practical APIs designed to explore the possibilities and try to arrive at best-practice techniques. But, for all of the excitement and hype, still very little has been said on the subject of AJAX and accessibility.
Google does yield some results, notably the article “AJAX and Accessibility” at standards-schmandards, which talks about ensuring that applications work without JavaScript, and also moots the idea of using an alert dialog to relay information to screen readers; but it’s clear from the tone of the article that the author is only guessing that this approach will work (as we’ll see later, it may not). Simon Willison takes up the subject in the SitePoint blog, but there he speaks of accessibility only in terms of JavaScript support.
More complex and subtle problems arise with devices that do support JavaScript, but which still may not be able to interact with your application. Browser-based screen readers are like this: they are script-capable browsers, but their practical JavaScript support is nowhere near on par with that of the browser on its own.
The article “Accessibility of AJAX Applications (Part 1)” at WebAIM addresses this point, explaining that if dynamic interface changes are to be accessible, the application must actively inform the user that a change has occurred, then allow direct access to the new content. This is as far as the article goes — as yet, it doesn’t say anything about how this might actually be done. It confidently promises solutions in Part 2, but promising is easy!
Wherever we look, from discussions at AccessifyForum, to popular blogs like those of Derek Featherstone and Peter-Paul Koch, the one thing we can all agree on is that we need more information.
And that’s why I’ve written this article: to present some of the data and analysis I’ve compiled, and see if it points to a useful conclusion.
A Little Background…
Over the last few months (and earlier) I’ve been involved in researching how the leading screen readers and other assistive devices respond to JavaScript: what kinds of events they generate or respond to, and under what circumstances. The research is based at Access Matters, and coordinated by Bob Easton, Derek Featherstone, Mike Stenhouse and myself.
In addition to that, I did a great deal of primary research for my recently published book, The JavaScript Anthology. The research was designed to find out how assistive devices respond to scripts that update the DOM periodically or asynchronously, such as the items in a scrolling news-ticker, or responses to an XMLHttpRequest.
What we found is that script support in screen readers is incredibly erratic and fragmentary — yet that isn’t even the biggest problem! There are ways and means by which we can create usable hooks (for example, all the screen readers we tested generate click events on links and form controls), but the real sticking point is this: how does a screen reader user know that the content has changed?
A sighted user has random access to a page, by virtue of the fact that he or she can look at different parts of it; if something changes, we can draw the user’s attention to it visually. But people who are blind have no such access. Their approach to a page is linear, so if part of that page changes before or after their current point of focus, they won’t notice it happening, and may not realize that anything has changed even when they subsequently encounter it.
A screen reader doesn’t announce dynamic changes to the DOM — those changes just happen in the background — so any given change will more than likely go unnoticed, unless we notify the user in some way.
And this is the $64,000 question: how do we do that? To answer that question, we’ll need to try some different tactics, then see (or rather, hear) the results!
The Tests
Before we start, you might like to download an archive of all these tests, so you can refer to them or run the tests yourself.
The First Test
The first test simply updates a paragraph of text directly beneath the trigger element. Here’s the core HTML:
<p>
<a href="./" id="trigger">This link is the trigger.</a>
</p>
<p id="response">
This paragraph will update with the response.
</p>
<p>
This is some text that comes after the response,
to check continuity.
</p>
And here’s the JavaScript:
window.onload = function()
{
  var trigger = document.getElementById('trigger');
  var response = document.getElementById('response');

  trigger.onclick = function()
  {
    var request = null;
    if (typeof window.XMLHttpRequest != 'undefined')
    {
      request = new XMLHttpRequest();
    }
    else if (typeof window.ActiveXObject != 'undefined')
    {
      try { request = new ActiveXObject('Microsoft.XMLHTTP'); }
      catch (err) { request = null; }
    }

    if (request != null)
    {
      request.onreadystatechange = function()
      {
        if (request.readyState == 4
          && /^(200|304)$/.test(request.status.toString()))
        {
          response.innerHTML = request.responseText;
        }
      };

      request.open('GET', 'test.php?msg=Hello+World', true);
      request.send(null);
    }

    return false;
  };
};
The “test.php” script simply outputs a message for the request’s responseText; it could have been anything:
<?php
echo "And here's the response - " . $_GET['msg'];
?>
To perform the test, we navigate to the trigger link with the keyboard, and press Enter to activate that link. All devices are expected to fire the function, but how they respond afterwards will probably vary quite a lot.
Results for the First Test
All devices fire the function and most update the response paragraph, but no device automatically reads it (as expected). This test is simply used to make sure the content update is universally recognized, but unfortunately, it isn’t: Windows Eyes doesn’t update its spoken output until the triggering link has blurred, which won’t occur if we simply let the reader read on. So, depending on the user’s interactions, they may not get to hear the updated message at all.
Still, that’s not a bad start, and maybe our Windows Eyes problem is unique to this example. What we’re looking for here is more than just an update, though: we want a way to have the response spoken automatically, without further user intervention. Let’s press on with that aim.
The Second Test
The second test is almost the same as the first, but this time we’ll take the additional step of setting the document.location to the fragment identifier (ID) of the response paragraph (making it an in-page target). Here’s the addition to the onreadystatechange function (the new line is the document.location assignment):
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.innerHTML = request.responseText;
    document.location = '#response';
  }
};
Results for the Second Test
These results are rather more convoluted:
- In Home Page Reader 3.02 the response text is automatically read out, but the reader doesn’t stop there: it continues reading the rest of the page. This would make it a viable option if the response element is at the very end of the page.
- In Home Page Reader 3.04 (note, a more recent version) the location setting no longer works correctly: the reader jumps back to the top of the page, instead of to the response paragraph. (I also tried it with location.replace, to see if that would make a difference, but it doesn’t; that variation is sketched just after this list.)
- In Hal 6.5 and Connect Outloud 2.0 the reader announces a new page load, but then starts reading from the element after the response, missing the response completely.
- In JAWS 5.0 and 6.2 the code doesn’t work reliably: sometimes nothing happens at all; other times the reader re-reads the trigger link text, or the top-level heading; occasionally it behaves the same way as Hal and Connect Outloud.
- In Windows Eyes 5.0 the content updates! But over and above that, it behaves in a way that seems just like Home Page Reader 3.02: it announces a new page load, then starts reading from (and including) the response element. This behavior is not what it seems, though: it only works that way because Windows Eyes remembers your previous position when loading a page you’ve visited before, and since the response comes directly after the trigger, that’s the next thing you’ll hear. If the response element weren’t positioned directly after the trigger, the reader would simply read whatever content came next instead.
- Windows Eyes 5.5 (beta) behaves exactly the same way as Hal and Connect Outloud.
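For reference, here’s roughly what that location.replace variation looked like: the same onreadystatechange handler as before, with only the final line changed (replace() avoids adding a new entry to the browser’s history, but as noted above, it made no difference to the results):

request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.innerHTML = request.responseText;
    // jump to the in-page target without creating a new history entry
    document.location.replace('#response');
  }
};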
There’s a pattern of ambiguity there, in that several devices all do the same thing, jumping past the response paragraph and starting from the element that appears after it. It occurred to me that the HTML might be a factor, so I changed it to look like this:
<p>
<a name="response" id="response" href="#">
This link will update with the response.</a>
</p>
And, using the same location setting, the results for the second test do indeed change. Even though we’re not using the link’s href, its addition makes the anchor a focusable element (where a paragraph, or an anchor with no href, is not), and that seems to make it work more reliably for some devices.
Results for the Modified Second Test
Both versions of Home Page Reader behave as they did before, and are joined by Connect Outloud, which now behaves like HPR 3.02 (it works, but carries on reading). Both versions of Windows Eyes now behave as 5.5 did before (they start reading from the element after the response). But in JAWS and Hal, the code works perfectly — the response text is spoken, but nothing further occurs (although JAWS may also re-read the page’s top-level heading first, before saying the response text).
The Third Test
In the third test, we’ll replace the location setting with a programmatic focus() call on the response link, once its text has been updated. The new HTML looks like this:
<p>
<a href="./" id="response">
This link will update with the response.</a>
</p>
Again, only a small modification is necessary to the original onreadystatechange function (the changed line is the call to focus()):
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.innerHTML = request.responseText;
    response.focus();
  }
};
Results for the Third Test
This code works only in JAWS 5.0 and Connect Outloud (curiously, it fails in JAWS 6.2, even though it succeeds in the earlier version). In most devices, failure means that nothing happens at all; in JAWS 6.2, however, the trigger link is spoken again, while Windows Eyes continues to behave exactly as it did for the modified second test (it starts reading from the element after the response).
The Fourth Test
The fourth test dispenses with the response element altogether, and presents the response text in an alert dialog instead. The HTML is just the trigger link, while the onreadystatechange function is simplified to this:
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    alert(request.responseText);
  }
};
Results for the Fourth Test
This should be safe for everyone, but astonishingly it isn’t: Windows Eyes 5.0 doesn’t always speak the dialog text. Sometimes, it just announces the dialog, and doesn’t tell you what the dialog says!
The Fifth Test
For the fifth test, we’ll move on to form elements. First, we’ll try updating and focusing a text field:
<form action="">
<div>
<input type="text" id="response" size="50"
value="This field will update with the response">
</div>
</form>
Here’s the applicable onreadystatechange function:
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.value = request.responseText;
    response.focus();
  }
};
Results for the Fifth Test
This test doesn’t work in Home Page Reader or Hal (nothing happens at all, though there’s the usual visual response). It also fails in JAWS 6.2, where, as with the third test, it repeats the trigger link and may re-announce the top-level heading as well.
This code also fails in Windows Eyes, which behaves just as it did for the third test (i.e. it starts reading from the element after the response). The only readers in which this code works are JAWS 5.0 and Connect Outloud, although they do also say “edit” to announce the edit box before speaking its value.
The Sixth Test
In the sixth test, we’ll do almost the same thing. However, this time, instead of focusing the element, we’ll programmatically select its text:
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.value = request.responseText;

    if (typeof response.createTextRange != 'undefined')
    {
      var range = response.createTextRange();
      range.select();
    }
    else if (typeof response.setSelectionRange != 'undefined')
    {
      response.setSelectionRange(0, response.value.length);
    }
  }
};
Results for the Sixth Test
The pattern of success and failure here is identical to the previous test.
The Seventh Test
In the seventh and final test, we’ll use a button for the response element:
<form action="">
<div>
<button type="button"
id="response">This button will update with the response
</button>
</div>
</form>
Then we’ll change the button text and focus it, much as we did for the fifth test:
request.onreadystatechange = function()
{
  if (request.readyState == 4
    && /^(200|304)$/.test(request.status.toString()))
  {
    response.firstChild.nodeValue = request.responseText;
    response.focus();
  }
};
Results for the Seventh Test
This test also produces the same results as the fifth and sixth tests, but with the small and expected variation that JAWS 5.0 and Connect Outloud (in which it works) announce the response widget by saying “button” after the text, rather than “edit” before it.
Conclusion
There doesn’t appear to be any reliable way to notify screen readers of an update in the DOM. There are piecemeal approaches that work for one or more devices, but no overall approach or combination that would cover them all, given that even the humble alert may not work correctly in Windows Eyes.
So what does that mean for us, as developers — does it mean we should stop using AJAX techniques?
Yes?
Let’s face it, a great many AJAX applications (dare I say, “most”?) use this approach for its own sake, and don’t really benefit from it at all — they could just as well use traditional POST and response.
I would even go a step further to call for a fundamental re-assessment of our priorities here. What we’re talking about is making dynamic client interfaces work effectively in screen readers, but maybe that was never the point. Isn’t the real point to make the applications themselves work effectively in screen readers?
Interactions are just details, and perhaps what we’ve really been doing is projecting our own desires and preferences onto users for whom they’re not really relevant. Maybe dynamic client interfaces don’t benefit screen reader users at all; perhaps what would really work for them would be to play to the task for which the reader was originally built: individual page requests, and the interactions of HTTP. These are exactly the kind of interactions that the screen reader was designed to deal with.
No?
Maybe we should just ask people using screen readers to turn JavaScript off, until such time as the technology is up to the task. Or perhaps we should add user-preferences at the start of our applications, so that users can pre-select their choice of interface. If we can feel confident that a screen reader user doesn’t have JavaScript at all, then we can design non-scripted functionality that will work for them, falling back on the POST/response paradigm, as for any non-script user.
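As a very rough sketch of that fallback idea (the form markup, the respond.php URL and the basicui cookie here are all hypothetical, purely to show the shape of it), the page could ship with an ordinary form that POSTs to the server, and only layer the scripted request on top if the user hasn’t asked for the basic interface:

<form action="respond.php" method="post" id="messageForm">
<div>
<input type="text" name="msg" id="msg" value="Hello World">
<input type="submit" value="Send">
</div>
</form>

window.onload = function()
{
  // "basicui=1" is a hypothetical cookie, set from a user-preferences page
  if (/(^|;\s*)basicui=1/.test(document.cookie))
  {
    return; // the user chose the plain interface; leave the form alone
  }
  var form = document.getElementById('messageForm');
  if (form == null) { return; }

  form.onsubmit = function()
  {
    // create and send the XMLHttpRequest here, exactly as in the tests above
    return false; // suppress the normal POST only when the script takes over
  };
};

Users who opt out, and users with no JavaScript at all, simply get the normal POST and a fresh page in response, which is exactly the interaction model their screen readers already understand.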
But there’s no denying that some kinds of AJAX applications can only work that way — in some cases, it’s impossible to provide a truly equivalent alternative that doesn’t rely on JavaScript. (Gmail is a prime example: it offers a no-script version, but it’s nowhere near as well-featured as its scripted equivalent.) Perhaps we should look to screen reader vendors themselves, as they may reasonably be expected to respond to the increasing popularity of remote scripting by providing the necessary hooks and feedback to help make it accessible for their users.
IBM is currently working with GW Micro (the makers of Windows Eyes) and the Mozilla Foundation, to introduce “roles” and “states” (defined by element attributes) that can convey information about the nature and state of an element. In theory, this completely solves the problem, and means that any appropriate element can convey all necessary information: its own meaning, its behavioral role, and its current state.
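Purely as an illustration of the concept (the attribute names shown here are illustrative only; the exact attributes and values supported will depend on the browser and reader combination, and the early prototypes used their own, slightly different syntax), the idea is that elements could declare a behavioral role, and expose their current state, directly in the markup:

<p id="response" role="status">
This paragraph will update with the response.
</p>

<a href="#" id="toggle" role="button" aria-pressed="false">
Show details</a>

A reader that understood the first example could speak the paragraph’s new content as soon as the script wrote into it, without the user having to move focus there; in the second, the reader could report both the element’s role (a button) and its current state (pressed or not) as the script changes them.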
But although these are very exciting developments, this is not something we can really use now, because it’s not backward compatible: it provides no functionality at all to browsers other than Internet Explorer or Firefox, and only very limited functionality to device combinations other than Firefox 1.5 plus Windows Eyes 5.5.
So?
I’m forced to conclude that, unless a way can be found to notify screen readers of updated content, AJAX techniques cannot be considered accessible, and should not be used on a production site without a truly equivalent non-script alternative being offered to users up-front.
However, I freely and happily concede that I’ve analyzed only a limited number of tests — I’ve covered the angles I could think of, but I’m sure there are plenty more ideas out there, and we only need one of them to pan out!
So if you think I’m giving up too easily, please consider this a call-to-arms: let’s find a way to make it work!
Frequently Asked Questions (FAQs) about AJAX and Screen Readers
How does AJAX work with screen readers?
AJAX, which stands for Asynchronous JavaScript and XML, is a set of web development techniques used to create interactive web applications. For screen reader users, the difficulty is that content updated dynamically by AJAX is not announced automatically; the application has to notify the user that a change has occurred and give them access to the new content, otherwise the update can go unnoticed entirely.
How does AJAX affect the performance of screen readers?
AJAX doesn’t inherently enhance or degrade a screen reader’s performance; the question is whether the reader is made aware of dynamically updated content. If updates aren’t exposed properly, AJAX can create accessibility issues, such as causing screen readers to miss updates or read out-of-date information.
Can I use AJAX with all screen readers?
While AJAX is compatible with most modern screen readers, its effectiveness can vary depending on the specific screen reader and how it’s implemented on the website. It’s always best to test your website with multiple screen readers to ensure optimal accessibility.
Why is my screen reader not working with AJAX?
If your screen reader is not working with AJAX, it could be due to several reasons. The AJAX updates might not be properly announced to the screen reader, or the screen reader might not support the specific AJAX implementation. It’s recommended to consult with a web accessibility expert or the screen reader’s support team for assistance.
How can I improve the compatibility of AJAX with screen readers?
To improve the compatibility of AJAX with screen readers, ensure that your website follows the Web Content Accessibility Guidelines (WCAG). This includes providing text alternatives for non-text content, making sure content can be accessed in different ways, and creating content that can be presented in different ways without losing information.
Are there alternatives to AJAX for creating accessible web applications?
While AJAX is a popular choice for creating interactive web applications, there are alternatives that can also provide an accessible user experience. These include HTML5, CSS3, and JavaScript frameworks like React and Vue.js. These technologies offer a range of features for creating accessible web applications, such as semantic elements, ARIA roles, and accessible forms.
James is a freelance web developer based in the UK, specialising in JavaScript application development and building accessible websites. With more than a decade's professional experience, he is a published author, a frequent blogger and speaker, and an outspoken advocate of standards-based development.