Apples and Oranges: Not All Screen Readers Are Created Equal

It’s time to talk about screen reader software. For the uninitiated, a low- or no-vision web user relies on screen reader software to read the contents of a website out loud. The listener then chooses how to interact with the content, follow links, and input information into forms. There are three major screen readers on the market right now, and the experiences they offer are very different. As the title suggests, understanding just one screen reader does not mean you understand all screen readers.

Why is this important? Reviewing your projects in only one screen reader is like conducting quality assurance in only one browser. You lose the larger picture. If 40% of your visitors use Chrome, 20% use Microsoft Edge, 20% use Firefox, and another 20% use four other browsers, would you test the website in only Chrome? What about the other 60% of users? It wouldn’t make sense.

We need to understand screen readers as well as we understand browsers. Firefox, Safari, Edge, and Chrome behave differently from Mac to Windows and from iPhone to Android. Why, then, would we review our sites only in VoiceOver and not also in JAWS, NVDA, or TalkBack?

An Accessibility History of Two Operating Systems

The advent of the personal computer was a huge step toward autonomy for differently-abled people. Instead of relying on others for assistance, users could rely on a machine to help them. As internet-enabled services grew, so did the opportunity to become more and more independent.

Microsoft saw this opportunity early. They released the first version of Windows in 1985 and the first accessibility support program in 1988. It focused on users who are deaf, hard of hearing, or have limited dexterity. Converting written text to spoken word to support blind users — a difficult task in the early days of computing — did not come about until 1992.

Microsoft unveiled an internal initiative known as MSAA, or Microsoft Active Accessibility, to lure more investment and government contracts. Pressure from two states pushed Windows 95 to become the most accessible version yet. By many measures, it was.

Apple waited to release software that addressed the needs of the disabled community, and other vendors tried to fill the void with their own commercial add-ons. Twenty-five years passed between the successful Apple II in 1977 and Apple’s first accessible software product, Universal Access, in 2002.

Even with Apple’s depth in the education market, its accessibility lagged behind Windows for a long time. When Apple finally committed to accessibility, though, they changed the lives of millions of people in the process.

With the release of the iPhone 3GS and built-in VoiceOver in 2009, Apple quickly became a favorite among the blind community. Here was a pocket-sized computer with accessible gestures, a camera, and a speaker. It became a reliable digital assistant that helped its users explore the world in a more human way.

Today, Microsoft continues to lead the way in thinking inclusively. Both companies have a high level of commitment to accessibility. Both want to serve as many different types of abilities as possible, each in their own way.

Screen Reader Usage Patterns over Time

Like browsers, there are statistics about how many people use which screen reader. The data is sparse, though. WebAIM surveys the community every year or two, but the results are self-reported; there is no way to programmatically detect which screen reader is in use.

The juggernaut of the screen reader industry is Job Access With Speech, or JAWS, from Freedom Scientific, for Windows computers. In WebAIM’s first survey, from 2008, 74% of users reported using JAWS. Since then, JAWS has remained in the top spot even as other software has entered and left the market.

Another player on Windows machines is NonVisual Desktop Access, or NVDA. While JAWS is expensive software, NVDA is developed by blind engineers and distributed for free. It is a grassroots effort with a mission to reach as many people as possible and increase access.

VoiceOver is free software as well, but with a caveat: it comes free as long as you buy a pricier iPhone or Mac. While Apple hardware is not as inexpensive as PC hardware, the argument goes that the quality is high. VoiceOver comes bundled with every Apple device and works consistently from phone to tablet to desktop.

Over the years, according to this unique but imperfect survey, we can see how VoiceOver and NVDA are gaining in popularity while JAWS’s market share is slipping:

| Year  | # Participants | JAWS  | NVDA  | VoiceOver | Window-Eyes |
| ----- | -------------- | ----- | ----- | --------- | ----------- |
| 2008  | 1121           | 74.0% | 8.0%  | 6.0%      | 23.0%       |
| 2009  | 665            | 66.4% | 2.9%  | 8.9%      | 10.4%       |
| 2010  | 1245           | 59.2% | 8.6%  | 9.8%      | 11.2%       |
| 2012  | 1782           | 49.1% | 13.7% | 9.2%      | 12.3%       |
| 2014  | 1465           | 50.0% | 18.6% | 10.3%     | 6.7%        |
| 2015* | 2515           | 30.2% | 14.6% | 7.6%      | 20.7%       |
| 2017  | 1792           | 46.6% | 31.9% | 11.7%     | 1.5%        |
| 2019  | 1224           | 40.1% | 40.6% | 12.9%     | n/a         |

* WebAIM attributes the 2015 jump in Window-Eyes users to its free availability to Microsoft Office users, which was a recent development at the time. Window-Eyes was discontinued in 2017.

For the first time in the history of these surveys, JAWS has slipped to second place by a fraction and NVDA is enjoying the largest market share it has ever seen. In our view, that makes three major players in the screen reader market.

Competing Philosophies

Again, it took Apple a long time to enter the world of accessibility, but when it did, it made a large splash. Why? What did Apple do once it finally decided to act? It adopted and embraced the principles of Universal Design.

Universal Design means “usable by all people, to the greatest extent possible, without the need for adaptation or specialized design.” The last bit is the most important part, and the part that Apple has fully embraced. Adaptation means an individual has to change the way they typically interact with a thing. Specialized design is the creation of a product for a specific demographic or need. “Without the need for adaptation or specialized design” means, therefore, that a person does not have to change their behavior or use a special device.

This philosophy is the biggest difference between software like JAWS or NVDA and software like VoiceOver. Microsoft decided that users with different abilities would have their own paths through the software: a visual path and a non-visual path. The non-visual user may not have the same features as the visual user, and vice versa. There is a disconnect between the two. If a blind user described an application on a Windows machine, it would sound like a different piece of software to someone who was not blind.

But a user of an Apple product has a universal experience. A blind user on a discussion forum about the release of VoiceOver with Mac OS X 10.5 wrote this powerful example:


Apple insists that blind users should use their Macs in exactly the same way as their sighted peers do. Although I found this initially confusing, Apple really does mean that, if a sighted user needs to drag and drop an icon, a task normally performed with the mouse, a blind user should do exactly the same thing, albeit without the physical use of the mouse. Apple believes that, if product documentation refers to a blue icon located on the left side of the screen, the blind user should be able to find that blue icon, should know that it’s blue, and should understand its position in relation to other objects on the screen.


— Steve Sawczyn, the Braille Monitor, 2009

Think this through: it means that a blind user and a sighted user can describe the same piece of software and use it in the same way, even though they might have a different physical experience while using it. This is what the blind community has been advocating for years: a universal, common experience regardless of ability.


This difference in philosophy drives the differences in the way each screen reader interprets content on the web. Understanding these differences can lead to great experiences for every user.


Real-World Examples

For the purposes of this article, we tested a few basic things in JAWS and VoiceOver. We did not get into the more complicated form elements and situations that usually present accessibility challenges. Instead, we wanted to illustrate simple details that highlight fundamental differences in how these two screen readers work.

| Text to read | What JAWS says | What VoiceOver says |
| --- | --- | --- |
| A question? | A question (with an ending “up” inflection, but not always) | A question (with an ending “up” inflection, but not always) |
| (An aside) | left paren an aside right paren | an aside |
| This, then that | This then that | This [pause] then that |
| counter-culture | counter dash culture | counter culture |
| <h1>A Headline</h1> | Heading Level 1, a headline | Heading Level 1, a headline |
| USA, U.S.A. | U S A U dot S dot A | U S A [pause] U S A |
| NASA, N.A.S.A. | Nassa [pause] N dot A dot S dot A | Nassa [pause] N A S A |
| • Bulleted list item (default list-style) | List with one item, bullet bulleted list item | List with one item, bullet bulleted list item |
| ○ Fancy bullet (circle list-style) | List with one item, bullet fancy bullet | List with one item, white circle fancy bullet |
| ▪ Fancy bullet (square list-style) | List with one item, bullet fancy bullet | List with one item, black square fancy bullet |
| I. Numbered list item (upper-roman list-style) | List with one item, eye numbered list item | List with one item, capital eye numbered list item |
| From 2pm – 4pm | From two p m n dash four p m | From two p m four p m |
| From 2p.m. – 4p.m. | From two p dot m n dash four p dot m | From two p m to four p m |
| 2 + 2 = 4 | Two plus two equals four | Two plus two equals four |
| 4 – 2 = 2 (that’s a hyphen, technically) | Four dash two equals two | Four two equals two |

Default “verbosity” settings were used for these tests. Verbosity controls how detailed the speech pattern will be; a higher verbosity announces more punctuation. By default, VoiceOver uses a more relaxed verbosity setting, skipping hyphens, dashes, parentheses, and brackets. Commas, semicolons, colons, ellipses, and periods all create the same pause in speech. If punctuation is important, the user’s verbosity settings and personal preference will win out over any fancy tricks we employ.

Not surprisingly, some speech patterns rely on context. Notice how the time range “2pm – 4pm” is read correctly by VoiceOver only when punctuation is used in the meridiem abbreviations (though the periods themselves are not spoken). Abbreviations are tough as well: some are read as words, some as letters.

The split in philosophies is evident in this short list. Consider the way unordered lists are announced: VoiceOver speaks the default as “bullet” and the fancier options with their visual presentation (“white circle,” “black square,” and “capital eye”), as in the markup sketch below.
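For reference, here is a minimal sketch of the kind of markup behind those list rows, assuming the styles were applied with standard CSS list-style-type values (the inline styles are for brevity only, not necessarily how the original tests were built):

```html
<!-- Default bullets: both screen readers announce "bullet" -->
<ul style="list-style-type: disc">
  <li>Bulleted list item (default list-style)</li>
</ul>

<!-- VoiceOver announces the visual shape: "white circle" -->
<ul style="list-style-type: circle">
  <li>Fancy bullet (circle list-style)</li>
</ul>

<!-- VoiceOver announces "black square" -->
<ul style="list-style-type: square">
  <li>Fancy bullet (square list-style)</li>
</ul>

<!-- VoiceOver reads the "I." marker as "capital eye" -->
<ol style="list-style-type: upper-roman">
  <li>Numbered list item (upper-roman list-style)</li>
</ol>
```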

Personal Preferences

That’s right: browsers have preference settings that most people never change. That’s why we can assume that 16px is the text size people expect. For screen readers, though, the defaults are only that. Users who rely on a screen reader adjust its settings frequently, sometimes from one task to the next, to fit the way they prefer to work.

That means that when we test a scenario with a screen reader, we need to know the common defaults but also how someone might change them. It makes testing more difficult, but we should try. At Oomph, we encourage people to learn the basics of using VoiceOver. It’s a very different way to interact with a webpage, and that new perspective helps us craft the best possible experience.

Some content modes that screen readers offer:

  • Announce all headings on the page in (source) order
  • Announce all links on the page in order or alphabetically
  • Use the tab key to jump from one link to another, bypassing any content that is not a link
  • Navigate tables cell by cell, announcing the position of that cell within the table by row number and column heading
  • If a search form is marked up in proper HTML, a user may be able to jump directly to it
  • If the page is marked up properly, a user can skip the navigation menu and jump straight to the content (see the markup sketch after this list)
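Those last few modes only work when the underlying markup cooperates. Here is a hypothetical, minimal page skeleton showing the patterns involved; the id, class names, and copy are illustrative rather than prescribed:

```html
<body>
  <!-- A "skip" link lets keyboard and screen reader users bypass the menu.
       The .visually-hidden class is a common utility that hides content
       on screen while leaving it available to assistive technology. -->
  <a href="#main" class="visually-hidden">Skip to main content</a>

  <!-- role="search" exposes the form as a landmark users can jump to -->
  <form role="search" action="/search">
    <label for="q">Search this site</label>
    <input type="search" id="q" name="q">
    <button type="submit">Search</button>
  </form>

  <nav>
    <!-- site navigation -->
  </nav>

  <main id="main">
    <!-- <th> cells give each data cell a programmatic column heading,
         so a screen reader can announce its position by row and column -->
    <table>
      <caption>Primary screen reader by year</caption>
      <thead>
        <tr>
          <th scope="col">Year</th>
          <th scope="col">JAWS</th>
          <th scope="col">NVDA</th>
        </tr>
      </thead>
      <tbody>
        <tr><td>2019</td><td>40.1%</td><td>40.6%</td></tr>
      </tbody>
    </table>
  </main>
</body>
```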

Using a keyboard to navigate a web page is another way to experience something different and new. There are lots of people who cannot use a mouse or a touch screen, and this is another user mode to consider when creating a new web page.

It is important to understand that we (as designers and developers) do not control the end user experience. Does that mean we should give up? Absolutely not. But it does mean we should stick to what we can control: well-formed markup, accessibility best practices, and the intent to do the best we can.

“Progressive Aural Enhancement”

Browsers and their quirks gave us the idea of “progressive enhancement”: older browsers get a baseline, non-broken experience, while newer browsers get enhancements such as animations, transitions, device-native features, and gesture support. Meanwhile, responsive design has (hopefully) killed the idea that every page on every device needs to be pixel perfect. When it comes to the “out loud” experience, we should think the same way.

Not every screen reader will produce the same experience, but the user’s preference is more important. Maybe they prefer the way they have their screen reader set up or maybe they simply get used to it, but it is their experience. A personal one. We as designers should not encroach upon that and force our own will.

We might not be able to make every screen reader announce “2pm – 4pm” as “two p m to four p m,” but the listener might already be used to the way their screen reader talks. They already get it. Is it worth adding extra markup so that the visual experience shows a dash but the aural experience says “to”? Or can we move away from the dash and use the word “to” for everyone?
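For the curious, the extra-markup route is possible. Below is a minimal sketch of one common pattern, assuming a conventional “visually hidden” utility class (the class name and styles are illustrative, and screen reader support varies, so test before relying on it):

```html
<p>
  <!-- The dash is hidden from speech; the word "to" is hidden from view -->
  From 2pm<span aria-hidden="true"> – </span><span class="visually-hidden"> to </span>4pm
</p>

<style>
  /* Conventional "visually hidden" utility: rendered off-screen for
     sighted users but still announced by screen readers */
  .visually-hidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(0 0 0 0);
    white-space: nowrap;
  }
</style>
```

With markup like this, a screen reader that honors aria-hidden would say “From two p m to four p m” while sighted users still see the dash.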

Understanding the quirks of screen readers makes us prefer the second option: don’t force a technical solution on a human communication problem. Use a screen reader (or two), use only your keyboard, get out of your comfort zone… and then design the best experience possible for as many people as possible.


Be sure to browse more articles in our Accessibility Series.


