Apple’s accessibility consistency
This article explores Apple’s consistent attention to accessibility, and how other tech companies with commitments to accessibility, like Microsoft and Google, compare. It also shows where each of these companies can improve its consistency, and that no company is yet a perfect assistive technology provider.
Introduction
Apple has shown a commitment to accessibility since the early days of the iPhone, and since Mac OS X Tiger. Its VoiceOver screen reader was the first genuinely usable built-in screen reader on a personal computer and smartphone. Now, VoiceOver is on every Apple product, even the HomePod. It is so prevalent that people I know have begun calling any screen reader “VoiceOver.” This level of consistency should be congratulated in a company of Apple’s size and wealth. But is this a continuing trend, and what does this mean for competitors?
This will be an opinion piece. I will not stick only to the facts as we have them, and I won’t give sources for everything I present as fact. This article is a testament to how accessibility can be made a fundamental part of a brand’s experience for the people affected, so feelings and opinions will be involved.
The trend of accessibility
The following sections of the article will explore companies’ accessibility track records so far. The focus is on Apple, but I’ll also show some of what its competitors have done over the years. Because Apple has a greater following of blind people, and AppleVis has documented so much of Apple’s progress, I can show more of Apple’s history than I can its competitors’, whose community-written information is scattered, and thus harder to search for.
Apple
Apple has a history of accessibility, shown by this article written just under a decade ago, which goes over the previous decade’s advancements. As that article did, I will focus little on a company’s talk of accessibility, and more on its software releases and services.
Apple is, by numbers and satisfaction, the leader in accessibility for users of its mobile operating systems, but not in general-purpose computer operating systems, where Microsoft’s Windows is used far more than Apple’s macOS. Desktops and services aside, Apple has made its VoiceOver screen reader on iOS much more powerful, and more flexible, than its competitor, Google’s TalkBack.
iOS
As iPhones were released each year, so were newer versions of iOS. In iOS 6, accessibility settings began working together, VoiceOver’s Rotor gained a few new abilities, new braille displays worked with VoiceOver, and bugs were fixed. In iOS 7, we gained the ability to have more than one high-quality voice, more Rotor options, and the ability to write text using handwriting.
Next, iOS 8 was pretty special to me, personally, as it introduced the method of writing text that I almost always use now: Braille Screen Input. This lets me type on the screen of my phone in braille, making my typing exponentially faster. Along with typing, I can delete text by character or word, and now send messages from within the input mode. I can also change braille contraction levels, and lock orientation into one of two typing modes. Along with this, Apple added the Alex voice, its most natural yet, which was previously available only on the Mac. For those who do not know braille or handwriting, a new “direct touch typing” method allows a user to type as quickly as a sighted person, if they can memorize exactly where the keys are, or have spell check and autocorrection enabled.
In iOS 9, VoiceOver users became able to choose Siri voices for VoiceOver’s speech, as an extension of the list of Vocalizer voices and Apple’s Alex voice. Speech rate became easier to control, and speech could be made faster than previously possible. This release also brought control over how long a double tap should take, a better method of selecting text, Braille Screen Input improvements, and braille display fixes and new commands.
Then, iOS 10 arrived, with a new way to organize apps, a pronunciation dictionary, even more voices, reorganized settings, new sounds for actions, a way to navigate threaded email, and some braille improvements. One great thing about the pronunciation editor is that it applies not only to the screen reader, as in many Windows screen readers, but to all system speech. So, if you use VoiceOver, but also Speak Screen, both will speak as you have set them to. This is a testament to Apple’s attention to detail, and its control of the entire system.
With the release of iOS 11, we gained the ability to type to Siri, new Siri voices, verbosity settings, subtitles that can be read aloud or in braille, and the ability to change the speaking pitch of the voice used by VoiceOver. VoiceOver can now describe some images, which will be greatly expanded later. We can now find misspelled words, which will also be expanded later. One can now add and change commands used by braille displays, which, yes, will be expanded upon later. A few things which haven’t been expanded upon yet are the ability to read formatting, however imprecisely, with braille “status cells,” and the “reading” of Emoji. Word wrap and a few other braille features were also added.
Last year, in iOS 12, Apple added commands to jump to formatted text for braille display users, new Siri voices, verbosity options, confirmation of rotor actions and sent messages, expansion of the “misspelled” rotor option for correcting the misspelled word, and the ability to send VoiceOver to an HDMI output.
Finally, in iOS 13, Apple moved accessibility to the main settings list, out of the General section, and provided even more natural Siri voices, haptics for VoiceOver to accompany or replace the sounds already present, and the ability to modify these or turn them off. A “vertical scroll bar” has also been added, as another method of scrolling content. VoiceOver can now give even better guidance for taking pictures, aligning the camera, and, with the iPhone 11, describing what will be in the picture. One can also customize commands for the touch screen, braille display, and keyboard, expanding the ability braille users already had. One can even assign Siri Shortcuts to a VoiceOver command, as Mac users have been able to do with AppleScript. One can now have VoiceOver interpret charts and graphs, either via explanations of the data, or by an audible representation of them. This may prove extremely useful in education, and for visualizing data of any type. Detection of text has improved over the versions to include text in unlabeled controls, and VoiceOver can now attempt to describe images as well. Braille users now have access to many new braille tables, like Esperanto and several other languages, although braille no longer switches languages along with speech.
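To make that audible-chart idea more concrete, here is a minimal Swift sketch, my own illustration rather than Apple’s implementation, of how a data series could be “shown” through sound by mapping each value to the pitch of a short tone; the sonify function is a hypothetical helper.

```swift
import AVFoundation

// Hypothetical sketch: map each data point to a short sine tone whose pitch
// rises with the value, so an upward trend is heard as a rising sweep.
// This illustrates the concept only; it is not Apple's API.
func sonify(_ values: [Double], toneDuration: Double = 0.15) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    do { try engine.start() } catch { return }

    let lo = values.min() ?? 0
    let hi = values.max() ?? 1
    for value in values {
        // Scale the value into a two-octave range, 220 Hz to 880 Hz.
        let t = hi > lo ? (value - lo) / (hi - lo) : 0.5
        let frequency = 220.0 + t * 660.0
        let frames = AVAudioFrameCount(toneDuration * format.sampleRate)
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
        buffer.frameLength = frames
        for frame in 0..<Int(frames) {
            let sample = sin(2.0 * .pi * frequency * Double(frame) / format.sampleRate)
            buffer.floatChannelData![0][frame] = Float(sample * 0.5)
        }
        player.scheduleBuffer(buffer, completionHandler: nil)
    }
    player.play()
    // In a real app, keep the engine alive until playback finishes.
}

sonify([1, 2, 4, 8, 16, 32]) // heard as a rising pitch sweep
```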
macOS
macOS has not seen as much improvement in accessibility over the years. VoiceOver isn’t a bad screen reader, though. It can be controlled using a trackpad, which no other desktop screen reader can boast. It can be used to navigate and activate items with only the four arrow keys. It uses the considerable number of voices available on the Mac and for download. It simply isn’t updated nearly as often as VoiceOver for iOS.
OS X 10.7, 10.8, and 10.9 saw a few new features, like more VoiceOver voices, braille improvements, and other things, but I couldn’t find much before Sierra, so we’ll start there.
In Sierra, Apple added VoiceOver commands for controlling volume, to offset the absence of the physical function keys in new MacBook models. VoiceOver can also now play a sound for row changes in apps like Mail, instead of interrupting itself to announce “one row added,” because Apple’s speech synthesis server on the Mac doesn’t innately support a speech queue. This means that neither does VoiceOver, so interruptions must be worked around. Some announcements were changed, HTML content became web areas, and interaction became “in” and “out of” items. There were also bug fixes in this release.
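As a minimal sketch of that workaround, assuming macOS and AppKit’s NSSpeechSynthesizer (a client interface to that speech server), an app that wants back-to-back announcements has to keep its own queue and speak the next string only once the delegate reports the previous one finished. The SpeechQueue class here is hypothetical:

```swift
import AppKit

// Because the Mac speech server keeps no queue, a client that wants
// announcements spoken back to back, rather than cut off, must chain
// them itself via the synthesizer's delegate callbacks.
final class SpeechQueue: NSObject, NSSpeechSynthesizerDelegate {
    private let synthesizer = NSSpeechSynthesizer()
    private var pending: [String] = []

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func enqueue(_ announcement: String) {
        pending.append(announcement)
        if !synthesizer.isSpeaking { speakNext() }
    }

    private func speakNext() {
        guard !pending.isEmpty else { return }
        _ = synthesizer.startSpeaking(pending.removeFirst())
    }

    // Called when an utterance ends; start the next one.
    func speechSynthesizer(_ sender: NSSpeechSynthesizer,
                           didFinishSpeaking finishedSpeaking: Bool) {
        speakNext()
    }
}

// Usage: both announcements are spoken in full, with no interruption.
let queue = SpeechQueue()
queue.enqueue("One row added.")
queue.enqueue("Message deleted.")
```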
In High Sierra, one can now type to Siri, and VoiceOver can now switch languages when reading multilingual text, as VoiceOver on the iPhone has been able to do since at least iOS 5. This release also brought improved braille editing and PDF reading support, image descriptions, and improved HTML5 support.
In macOS Mojave, Apple shipped the beginnings of iPad apps on the Mac. These apps work poorly with VoiceOver, even now in Catalina. There were no new reported VoiceOver features in this release.
This year, in macOS Catalina, Apple added more control of punctuation, Xcode 11’s text editor is now a little more accessible, even though the Playgrounds feature isn’t, and the Books app can now, after years of being on the Mac, be used for basic reading of books. The braille tables from iOS 13 are also available in macOS.
The future of Apple accessibility
All of these changes, however, were discovered by users. Apple doesn’t really talk about all of its accessibility improvements, just some of the highlights. While I see great potential in accessible diagrams and graphs, Apple didn’t mention this feature; users had to find it. Consequently, there may be fixes and features that we still haven’t found, three releases into iOS 13. Feedback between Apple and its customers has never been great, and this is only to Apple’s detriment. Since Apple rarely responds to feedback, users feel that their feedback doesn’t mean anything, so they stop sending it. Also of note is that on VoiceOver’s Mac accessibility page, the “Improved PDF, web, and messages navigation” section is from macOS 10.13, two versions behind what is currently new in VoiceOver.
Another point is that services haven’t been the most accessible. Chief among them is Apple Arcade, which has no accessible games so far. Apple Research, I’ve found, has some questions whose answers are simply unlabeled buttons. While Apple TV Plus has audio description for all of its shows, this is a minor glimmer of light, shrouded by the inaccessibility of Apple Arcade, which now features over one hundred games, none of which I can play with any success. In all fairness, a blind person who is patient may be able to play a game like Dear Reader, which has some accessible elements, but the main goal of that game is to find a word in a different color and correct it, which is completely at odds with complete blindness. That, however, could be handled using speech parameter changes, audio cues, or other signals of font, color, or style changes.
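As a rough sketch of that last suggestion, assuming Apple’s AVSpeechSynthesizer, a game could speak the out-of-place word at a higher pitch instead of relying on color alone; the speak(sentence:oddWordIndex:) helper is hypothetical, not from any real game:

```swift
import AVFoundation

// Speak a sentence word by word, raising the pitch of the one word that
// would otherwise be marked only by color, so a blind player can hear it.
let synthesizer = AVSpeechSynthesizer()

func speak(sentence: [String], oddWordIndex: Int) {
    for (index, word) in sentence.enumerated() {
        let utterance = AVSpeechUtterance(string: word)
        // Raise the pitch only for the word shown in a different color.
        utterance.pitchMultiplier = (index == oddWordIndex) ? 1.5 : 1.0
        // AVSpeechSynthesizer queues utterances, so the words play in order.
        synthesizer.speak(utterance)
    }
}

// "focks" is heard at a higher pitch, marking it as the word to fix.
speak(sentence: ["The", "quick", "brown", "focks"], oddWordIndex: 3)
```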
Time will tell if this new direction, taking responsibility neither for other developers’ work that Apple itself flaunts, nor for the Mac, will become the norm. After all, Apple Arcade is an entire tab of the App Store; its inaccessibility is in plain view. As a counterpoint, the first iPhone software, and even the second version, was inaccessible to blind people, but now the iPhone is the most popular smartphone, in developed nations, among blind people.
Perhaps next year, Apple Arcade will have an accessible game or two. I can only hope that this outcome comes true, and not a steady stepping back by Apple from one of its founding blocks: accessibility. We cannot know, as no one at Apple tells us their plans. We aren’t the only ones kept guessing, though, as mainstream technology media shows. We must grow accustomed to waiting on Apple to show new things, and reacting accordingly, but also providing feedback, and pushing back against encroaching inaccessibility and the decay of macOS.
Apple’s competitors
In this blog post, I compare operating systems. To me, an operating system is the root of all software, and thus the root of all digital accessibility. With this in mind, the reader may see why it is imperative that an operating system be accessible, easy and delightful to use, and as supportive of productivity as possible. Microsoft and Google are Apple’s largest competitors in the closed source operating system space, so they are what I will compare Apple to in the following sections.
Google
Google is the main contributor to the Android and Chromium projects. While both are open source, both are simply a base to be worked from, not the end result. Not even Google’s phones run “pure” Android; they have Google services and probably other things on the phone as well. Both, though, have varying accessibility. While Apple pays great attention to its mobile operating system’s accessibility, Google does not seem to put many resources toward that. However, its Chrome OS, which is used heavily in education, is much more accessible, and even somewhat of an enjoyable experience for a lightweight operating system.
Android
Android was released one year after iOS. TalkBack was released as part of Android 1.6. Back then, it only supported navigation via a keyboard, trackpad, or scroll ball. It wasn’t until version 4 that touch screen access was implemented in TalkBack for phones, and to this day, it only supports commands done with one finger, two-finger gestures being passed through to Android as one-finger commands. TalkBack has worked around this issue recently, in Android version 8, by gaining the ability to use the fingerprint sensor, if available, as a gesture pad for setting options, and the ability to switch spoken language, if using Google TTS, when reading text in more than one language. Otherwise, TalkBack uses graphical menus for setting options or performing actions, like deleting email. It can be used with a Bluetooth keyboard. By default, it uses Google TTS, a lower-quality, offline version of the speech used for things like Google Translate, Google Maps, and the Google Home. TalkBack cannot use the higher-quality Google TTS voices; instead, voices from other vendors are downloaded for more natural sound.
BrailleBack, discussed on its Google Support page, is an accessibility service which, when used with TalkBack running, provides rudimentary braille support to Android. Commands are rough, arbitrary, and unfamiliar to users of other screen readers, and TalkBack’s speech cannot be turned off while using BrailleBack, meaning that, as one person helpfully suggested, one must plug in a pair of headphones and not wear them, or turn down the phone’s volume, to use one’s phone silently with braille. Silent reading is one of braille’s main selling points, but accessibility, if not given the resources necessary, can become a host of workarounds. Furthermore, BrailleBack must be installed onto the phone, providing another barrier to entry for many deaf-blind users, so some simply buy iPods for braille if they wish to use an Android phone for customization or contrarian reasons, or simply stick with the iPhone as most blind people do.
Now, though, many have moved to a new screen reader created by a Chinese developer, called Commentary. This screen reader does, however, have the ability to decrypt your phone if you have encryption enabled. Braille users turn to BRLTTY. This level of customization, offset by the level of access which apps have to do anything they wish to your phone, is an edge that some enjoy living on, and it does allow things like third-party, and perhaps better, screen readers, text-to-speech engines, apps for blind people like The vOICe, which gives blind people artificial vision, and other gray-area apps like emulators, which iOS will not accept on the App Store. Users who are technically inclined do tend to thrive on Android, finding workarounds a joy to discover and use, whereas people who are not, or who do not want to fiddle with apps to replace first-party apps that do not meet their needs, and with unoptimized settings, find themselves doing more configuring of the phone than using it.
Third-party offerings, like launchers, mail apps, web browsers, and file managers, all have variable accessibility, which can change from version to version. Therefore, one must navigate a shifting landscape of first-party tools which may be just about good enough, third-party tools which are accessible enough but may not do everything you need, and tools which users have found workarounds for. Third-party speech synthesizers are also hit or miss, with some not working at all, others, like Eloquence, now unsupported, and more, like eSpeak, sounding unnatural. The only good free braille keyboard hasn’t been updated in years, and Google has not made one of its own.
Because of all this, it is safe to say that Android can be a powerful tool, but it has not attained the focus needed to become a great accessibility tool as well. Google has begun locking down its operating system, taking away some things that apps could do before. This may come to inhibit the third-party tools which blind people now use to give Android better accessibility. I feel that it is better to be on iOS, where things are locked down even more, but where you have, at least somewhat, a clear expectation of fairness on Apple’s part. Android is not a big income source for Google, so Google does not have to answer to app developers.
Chrome OS
Chrome OS is Google’s desktop operating system, running Chrome as the browser, with support for running Android apps. Its accessibility has improved plenty over the years, with ChromeVox gaining many features which make it a good screen reader. You can read more about ChromeVox. One of ChromeVox’s main successes is its braille support. It is normal for most first-party screen readers to support braille nowadays. When one plugs a braille display into a Chromebook with ChromeVox enabled, ChromeVox begins using that display automatically, if it is supported. The surprise here is that if one plugs it in when ChromeVox is off, ChromeVox will automatically turn on and begin using the display. This is beyond what other screen readers can do. ChromeVox, and indeed TalkBack, do not yet support scripting, editing punctuation and pronunciation for speech, and do not have “activities” as VoiceOver for iOS and Mac has, but ChromeVox feels much more polished and ready for use than TalkBack.
The future of Google accessibility
Judging by the past, Google may add a few more features to TalkBack, but fewer than Apple adds to iOS. They have much to catch up on, however, as it was only two years ago that they added the ability for TalkBack to detect and switch languages, and to use the fingerprint sensor like VoiceOver’s rotor. I have not seen much change in the two years since, except the turning of a focus-tracking mode from a toggle into a mandatory feature. I suspect that, in time, they will remove the option to disable explore by touch, if they’ve not already.
With Chrome OS, and Google Chrome in general, I hope that the future brings better things, now that Microsoft is involved in Chromium development. It could become even more tied to web standards. Perhaps ChromeVox will gain better-sounding offline voices than Android’s lower-quality Google TTS ones, or gain sounds rendered in spatial audio for deeper immersion.
Microsoft
Microsoft makes only one overarching operating system, with changes for Xbox, HoloLens, personal computers, and other types of hardware. Windows has always been the dominant operating system for general-purpose computing among blind people. It hasn’t always been accessible, though, and it is only in recent years that Microsoft has actively turned its attention to accessibility on Windows and Xbox.
Now, Windows’ accessibility increases with each update, and Narrator becomes a more useful screen reader. I feel that, in a year or so, blind people may be trained to use Narrator instead of other screen readers on Windows.
Windows
In the early days of Windows, there were many different screen readers competing for dominance. JAWS, Job Access With Speech, was the most dominant, with Window-Eyes, now abandoned, in second place. These screen readers gathered information from the graphics card to describe what was on the screen; there were no accessibility interfaces back then.
Years later, when MSAA, Microsoft Active Accessibility, was created, Window-Eyes decided to lean on that, while JAWS continued to use video intercept technology to gather information. In Windows 2000, Microsoft shipped a basic screen reader, Narrator. It wasn’t meant to be a full, useful screen reader, but one made so that a user could set up a more powerful one.
Now we have UI Automation, which is still not a very mature technology, as screen readers are still not using it for everything, like Microsoft Office. GW Micro, makers of Window-Eyes, merged with AI Squared, producers of the ZoomText magnifier, which was bought by Freedom Scientific, who promptly abandoned Window-Eyes. These days, JAWS is being taken on by NVDA, NonVisual Desktop Access, a free and open source screen reader, and by Microsoft’s own Narrator screen reader.
In Windows 8, Microsoft began adding features to Narrator. Now, in Windows 10, four years later, Narrator has proven itself useful, and in some situations, helpful in ways that all other screen readers have not been. For example, one can install, set up, and begin using Windows 10 using Narrator. Narrator is the only Windows screen reader which can, with little configuration, show formatting not by describing it, but by changing its speech parameters to “show” formatting by sound. The only other access technology which does this automatically is Emacspeak, the “complete audio desktop.” Narrator’s braille support must be downloaded and installed, for now, but is still better than Android’s support. Narrator cannot, however, use a laptop’s trackpad for navigation. Instead, Microsoft decided to add such spatial navigation to touchscreens, meaning that a user must reach up and feel around a large screen, instead of using the flat trackpad as a smaller, more manageable area.
Speaking of support, Microsoft’s support system is better in a few ways. First, unlike Apple’s, its feedback system allows more communication between the community and Microsoft developers. Users can comment on issues, and developers can ask questions, a bit like on GitHub. Windows Insider builds come with announcements from Microsoft of what is new, changed, fixed, and broken. If anything changes regarding accessibility, it is in the release notes. Microsoft is vocal about what is new in Windows accessibility, in an era when many other companies seem almost ashamed to mention it in release notes. This is much better than Apple’s silence on many builds of its beta software, with no notice of accessibility improvements and features at all. Microsoft’s transparency is a breath of fresh air to me, and I am much more confident in its commitment to accessibility for it.
That commitment, however, doesn’t seem to pervade the whole company. The Microsoft Rewards program is hard for me to use, and contains quizzes where answers must be dragged and dropped. This may be fun for sighted users, but I cannot do them with any level of success, so they aren’t fun for me at all. Another problem is the quality of speech. While Apple has superb speech options like MacinTalk Alex, Vocalizer, or the Siri voices, Microsoft’s offline voices sound bored, pause for too long, and have a robotic buzzing sound as they speak. I think that a company of Microsoft’s size could invest in better speech technology, or make its online voices available for download for offline use. Feedback has been given about this issue, so perhaps the next version of Windows will have more pleasant speech.
Windows has a few other downsides, though. It doesn’t support sound through its Linux subsystem, meaning I cannot use Emacs with Emacspeak. Narrator does not yet report when a program opens, when a new window appears, or other visual system events. Many newer Universal Windows apps can be tricky to navigate, and the Mail app still automatically expands threads as I arrow to them, which I do not want to happen, making the Mail app annoying to use.
The future of Microsoft accessibility
I think that the future of Microsoft, regarding accessibility, is very bright. They seem dedicated to the cause, seeking feedback much more aggressively than Apple or Google, and many in the blind community love giving it to them. Windows will improve further, possibly with Narrator gaining the ability to play interface sounds in immersive audio using Windows Sonic for Headphones, braille becoming a deeper, built-in part of Narrator, and higher-quality speech made available for download. Since Microsoft is also a gaming company, it could work on creating soundscapes for different activities: browsing the web, writing text, coding, reading, to aid in focus or creativity. Speech synthesis could be given even more parameters for speaking even more types of formatting or interface item types. Really, with Microsoft’s attention to feedback, I feel that its potential for accessibility is considerable. Then again, it is equally possible that Apple will implement these features, but Apple isn’t as inviting as Microsoft has been when it comes to sharing what I’d love to see in an operating system, so I now just report bugs, not giving Apple new ideas.
Conclusion
It may be interesting to note the symmetry of accessibility: Apple’s phone is the dominant phone, but Microsoft’s Windows is the dominant laptop and desktop system among blind people. Apple’s iPhone is more accessible than Google’s Android, but Google’s Chrome OS is more polished and more frequently updated, accessibility-wise, than Apple’s macOS. Personally, I use a Mac because of its integration with iOS Notes, Messages, Mail, and other services; because the Mail app is a joy to breeze through email with; and because open source tools like Emacs with Emacspeak do not work as well on Windows. Also, speech matters to me, and I’d probably fall asleep much more often hearing Microsoft’s buzzing voices than the somewhat energetic sound of Alex on the Mac, who speaks professionally, calmly, and never gets bored. I do, however, use Windows for heavy usage of the web, especially Google web apps and services, and for gaming.
Time will tell if these companies continue on their paths: Apple forging ahead, Microsoft burning bright, and Google… being Google. I hope, nevertheless, that this article has been useful for the reader, and that my opinions have been as fair as possible toward the companies. It should be noted that the accessibility teams at each company are made up of individuals, who have their own ideas of what accessibility is, means, and should be, and who should be treated with care. After all, this past decade has been a long journey of, probably, most effort spent convincing managers that the features we now have were worth spending time on, and answering user complaints of “my phone is talking to me and I want it turned off right now!”
This does not excuse them for the decay of Android and Mac accessibility, or the lack of great speech options on Windows. It does not excuse them for Apple Arcade’s lack of accessible games, or Microsoft Rewards’ inaccessible quizzes. We must give honest, complete, and critical feedback to these people. After all, they do not know what we need, what will be useful, or, if we dare tell, what will be delightful for us to use, unless we give them this feedback. This applies to all software, whether it be Apple’s silent gathering of feedback, Microsoft’s open arms and inviting offers, or open source software’s issue trackers, Discord servers, mailing lists, and GitHub repositories. If we want improvement, we must ask for it. If we want a better future, we must make ourselves heard in the present. Let us all remember the past, so that we can influence the future.
Now, what do you think of all this? Do you believe Apple will continue to march ahead regarding accessibility, or do you think that Microsoft, or even Google, has something bigger planned? Do you think that Apple is justified in its silence, or do you hope that it begins speaking more openly about its progress, at least in release notes? Do you like how open Microsoft is about accessibility, or do you think it still doesn’t talk enough about accessibility for blind users? I’d love to hear your comments, corrections, and constructive criticism, either in the comments, on Twitter, or anywhere else you can find me. Thanks so much for reading!