Reputation: 1557
I have a web application that is already accessible on iOS and Android devices, but I'm trying to make the same thing work on a device that has no screen reader. I have a solution in the form of an application shell that I can launch my web app within; the shell can navigate through and manipulate the elements just like TalkBack, without problems. Effectively, I've already achieved accessibility for mobility-impaired users via alternative screen-manipulation devices.
I also need to support screen readers, and what I don't know is whether there's a way (in JavaScript/TypeScript) to retrieve what should be read for a given button. For example, given a button titled "Next", I would like to retrieve the text "next, button" or something like it.
Is there a standard-based way to do this? Alternatively, is there a third-party library that others have successfully used to solve a similar problem? Or am I destined to write my own?
To clarify: I have access to JAWS and NVDA and have validated my solution on both of them.
TL;DR: I am looking for a way for JavaScript to query an element and retrieve the content that would be spoken, not for my own edification, but so that I can pass the information being spoken to another tool of my own creation.
Upvotes: 1
Views: 1134
Reputation: 18951
Perhaps you could use Puppeteer's accessibility API to work out what the accessibility tree for a given node looks like.
The Accessibility class provides methods for inspecting Chromium's accessibility tree. The accessibility tree is used by assistive technology such as screen readers or switches.
I'm not sure you can work out the exact output for each screen reader, but it may be worth a shot.
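A minimal sketch of what that could look like, assuming a Node.js script with Puppeteer installed; the URL and the "button" selector are placeholders for your own app:

```ts
import puppeteer from 'puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/your-app'); // placeholder URL

  // Snapshot the accessibility tree rooted at a single element,
  // e.g. the "Next" button from the question.
  const button = await page.$('button');
  if (!button) throw new Error('Button not found');
  const axNode = await page.accessibility.snapshot({ root: button });

  // The snapshot exposes the computed role and accessible name,
  // which is roughly the information a screen reader would announce.
  console.log(`${axNode?.name}, ${axNode?.role}`); // e.g. "Next, button"

  await browser.close();
})();
```

Note that this gives you the browser's accessibility tree, not any particular screen reader's phrasing.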
Upvotes: 1
Reputation: 182
What you're looking for is the accessible name computation, specified in the W3C's "Accessible Name and Description Computation" document. Jump to section 4.3.1, "Terminology", and you'll find the algorithm step by step.
It’s quite a mouthful. I have yet to find a good open-source implementation of it. And even if you follow it correctly, you will find that established screen-reader software doesn’t always.
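To give a sense of its shape, here is a heavily simplified sketch of the first few steps (aria-labelledby, then aria-label, then name from content, then title); the real algorithm covers many more cases, so treat this as illustrative only:

```ts
// Heavily simplified: ignores roles, visibility, CSS-generated content,
// form-control labelling, recursion rules, and much more.
function computeAccessibleName(el: HTMLElement): string {
  // aria-labelledby takes precedence: join the referenced elements' text.
  const labelledBy = el.getAttribute('aria-labelledby');
  if (labelledBy) {
    return labelledBy
      .split(/\s+/)
      .map((id) => document.getElementById(id)?.textContent ?? '')
      .join(' ')
      .trim();
  }

  // Then aria-label, if non-empty.
  const ariaLabel = el.getAttribute('aria-label');
  if (ariaLabel && ariaLabel.trim()) return ariaLabel.trim();

  // Then, for roles that support "name from content", the text content.
  const text = el.textContent?.trim();
  if (text) return text;

  // Finally, fall back to the title attribute.
  return el.getAttribute('title')?.trim() ?? '';
}

// e.g. computeAccessibleName(nextButton) would yield "Next"
```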
You could also wait until the Accessibility API (or the Accessibility Object Model, the AOM project) gains wide support, and just use that, but who knows when that will happen.
However, if you completely control the speech output, why not optimize the experience in collaboration with visually impaired users, instead of aiming for what the software would come up with?
Upvotes: 1
Reputation: 3424
Basically, there is no way to get the string spoken by a particular screen reader, nor can you even determine whether a screen reader is running at all.
What's more, different screen readers convey information in different ways, and users can configure their screen reader so that it says, following your example, "next button", "button next", or just "next" (or plays a sound instead of saying "Button"). Thus, it seems there is no real way to get exactly what you need.
Upvotes: 2
Reputation: 1719
I think the best way is to actually install or enable a screen reader (macOS has VoiceOver built in, for example), then close your eyes and listen for yourself.
Upvotes: 4