So, boiling my feelings on iOS versus Android down into a simpler post: as of now, iOS = features, Android = stability, and seemingly better support for all the PWA apps that are zipped up and put on the app stores. Apple makes me feel like Squidward when I’m typing in Braille, pressing Space with dots 4-5 to translate, one… word… at… a… time! All that just to clear the translation queue, or whatever, when it gets stuck. And that’s where the crap comes in. On iOS, I have no idea what’s causing these anger-inducing issues. Oh, and the bug where pressing Enter in, say, iMessage to send a message pops up a menu instead is still there. That bug was supposed to be fixed in the latest iOS 16.2 update. But nope. I guess VoiceOver has lived long enough to become a mess. A sluggish, frustrating mess that no amount of image and screen recognition can fix.
Meanwhile, on Android, while Braille doesn’t have nearly as many features, it at least doesn’t have the bugs that exist on iOS. The translation system is about as good as JAWS’s. It doesn’t slow down, it doesn’t get stuck needing to be practically plunged like iOS’s, and the only issue is that when typing a colon and then a right parenthesis, it doesn’t make the smiley face, but produces something like conar or con) instead. So much for UEB making Braille better for computers to process. I think that will get much better with a Liblouis update, though.
Now, for the part about web apps, or things similar to Electron. With the Evidation app on iOS, you get a lot of tasks, like how you’re feeling today, or health questions, out of order. So you hear one question, then the second, then the actions for the first, and so on. I don’t doubt that this is an accessibility issue on Evidation’s part. But if Android can get this right, even when using a Braille display, Apple can get this right as well. Besides, TalkBack is the open source app, right? Apple could even learn from it. Imagine that.
I mean, I know most blind people don’t care too much about all this. Most people love their iPhones and Apple Watches and AirPods. And I respect that. They are, after all, great devices with plenty of vendor lock-in. But as the bugs pile up, as the garbage begins to stink, as the dishes bring flies around, more and more people are moving to Android. It’s already happening. Sure, it’s not a lot, but it’s growing; 1% a year, I’d guess, at the least. And then they see that TalkBack has a tutorial for getting started, and that it covers the Braille onscreen keyboard, using a Braille display, and icon descriptions, which, I might add, are quite a bit more helpful than VoiceOver’s, because VoiceOver focuses on the image itself, has so much data, and can’t really seem to zoom out and recognize that it’s just an icon.
And Google isn’t slowing down either. I read a week or so ago that Google is opening an accessibility office in London, I think. Somewhere in the UK, I know. I’m not sure if it will just be a place where Trusted Testers can go and test things, or if there will be more to it, but to me, that shows they’re done napping like they did from Android 5 through 10. And I’m here for it. Yes, Google has a ton of catching up to do. But I think we’ll see them put their own spin on it, like describing icons first, or building tools that each do one thing well but link together, like Linux, rather than having VoiceOver do everything, as Apple does. So, this Christmas, I’ve gone to live with family for a while, leaving my iPhone behind. I don’t feel like I’ll need it for a while. And maybe, with SoundScape slated for decommissioning, I won’t have much more of a reason to go back to the iPhone. I just have to find good headphones and a good watch for Android and I’m set. Android already works with my PC and Chromebook, much better than the iPhone works with the Mac, so I just have to get good accessories.
So I was sitting at a restaurant, waiting on my food and typing on my NLS eReader, a Humanware Braille display, connected to my iPhone SE 2020. As I typed, I noticed that words I’d just typed weren’t showing up. So I did the Space with dots 4-5 “translate” command to force the stupid piece of junk to work correctly. I had to do that all throughout typing today, and I just … I’m not as patient as I used to be. Android doesn’t have that problem. And it doesn’t have the problem where pressing Enter sometimes pops up a menu instead. Oh hey, wasn’t that supposed to be fixed in 16.2? Well, it happened today, and I just quit. Ugh, I get so tired of these bugs. I know Android’s Braille support is new enough not to have had time to accumulate these kinds of bugs, but my goodness, these have been around long enough that you’d think they’d have been fixed by now. It just makes me not even want to use a phone.
A new beginning
So, I’m writing this from a Windows computer, using Notepad, with WinSCP providing SFTP access to the server. This won’t come as a surprise for those who follow me on Mastodon and such, but I want to put this in the blog, so everything is complete.
About half a year ago, I installed Linux. Sometimes, I get curious whether anything has changed in Linux, or if it’s any better than it once was. And I want to know if I can tackle it, or if it’s even worth it. So, half a year ago, I installed Arch using the Anarchy installer, turned the accessibility switches on, and got to work trying to use it.
Throughout my journey with Linux, I found myself having to forgo things that Windows users take for granted: instant access to all the audio games made for computers; regular video games which, even when accessible, use only Windows screen readers for speech; and all the tools that make life a little easier for blind people, like built-in OCR in all screen readers on the platform, different choices in email clients and web browsers, and even RSS and podcatcher clients made by blind people themselves, not to mention Twitter clients. Now, there is OCR Desktop, but it doesn’t come with Orca, and you must set up a keyboard command for it yourself.
But I had Emacs, GPodder for podcasts, Firefox, Chromium when I wanted to deal with that, and Thunderbird for lagging my system every time it checked for email. It was usable, and a few blind people do use it as their daily driver. But I just couldn’t. I need something that’s easy to set up and use; otherwise my stress levels just keep going up as I fight not only config files and all that, but accessibility issues as well.
The breaking point
A few days ago, I wanted to get my Android phone talking with my Linux computer, so that I could text, get notifications, and make calls. KDE Connect wasn’t accessible, so I tried Device Connect. I couldn’t get anything out of that, so I tried GSConnect. In order to use that Gnome extension, I needed to start Gnome. I have Gnome 40, since I’m on Arch, so I logged in using that session, and got started. Except, Gnome had become much less accessible since the last time I’d tried it. The Dash was barely usable, the top panels trapped me in them until I opened a dialog from them, and I was soon just too frustrated to go much further. And then I finally opened the Gnome Extensions app, only to find that it’s not accessible at all.
There’s only so much I can take until I just give up and go back to Windows, and that was it. It doesn’t matter how powerful a thing is if one cannot use it, and while Linux is good for simple, everyday tasks, when you really start digging in, when you really start trying to make Linux your ecosystem, you start finding barriers all over the place.
Now, I’m using Windows. I have Steam installed with a few accessible video games, Google Chrome, and NVDA with plenty of add-ons, and the “Your Phone” app between Windows and Android works great, except for calls. But it still works much better than any Linux integration I could manage. Also, with Windows and Android, I can open the Android phone’s screen on Windows and, with NVDA or another screen reader, control the phone from the keyboard using TalkBack keyboard commands. That’s definitely not something Linux developers would have thought of.
Writing Richly
Whenever you read a text message, forum post, Tweet, or Facebook status, have you ever seen someone surround a word with stars, like *this*? Have you noticed someone surround a phrase with two stars? This is a part of Markdown, a way of formatting text for web usage.
I believe, however, that Markdown deserves more than just web usage. I can write in Markdown on this blog, I can use it on GitHub, and even on a few social networks. But wouldn’t it be even more useful everywhere? If we could write in Markdown throughout the whole operating system, couldn’t we all be more expressive? And for accessibility, Markdown is great because a blind person can simply write to format, instead of having to deal with clunky, slow graphical interfaces.
So, in this article, I will discuss the importance of rich text, how Markdown could empower people with disabilities, and how it could work system-wide throughout all computers, even the ones in our pockets.
What’s this rich text and who needs all that?
Have you ever written in Notepad? It’s pretty plain, isn’t it? That is plain text. No bold, no italics, no underline, nothing. Just plain, simple text. If plain text isn’t enough for you, you find yourself wanting more power: more ability to link things together, more ways to describe your text and make the medium itself, in some ways, part of the message.
Because of this need, rich text was created. One can use it in WordPad, Microsoft Word, Google Docs, LibreOffice, or any other word processor worth something. When I speak of rich text, to keep things simple, I mean anything that is not plain text, including HTML, as it describes rich text. Rich text is in a lot of places now, yes, but it is not everywhere, and it isn’t the same across the places where it does exist.
So, who needs all that? Why not just stick with plain text? I mean come on man, you’re blind! You can’t see the rich text. In a way, this is true. I cannot see the richness of text, but in a moment, we’ll get to how that can be done. But for sighted people, which text message is better?
Okay, but how’s your day going?
Okay, but how’s your day going?
Okay, but how’s *your* day going?
For blind people, the second message has the word “your” italicized. Sure, we may have gotten used to stars surrounding words meaning something, but that is a workaround, and not nearly the optimal outcome of rich text.
So what can you do with Markdown? Plenty. You could use something as simple as one blank line between blocks of text to show paragraphs in your journal. You could use it to create headings for chapters in your book. You could use it to make links to websites in your email. You could even simply use it to italicize an emphasized word in a text. Markdown can be as little or as much as you need it to be. And if you don’t add any stars, hashes, dashes, brackets, or HTML markup, it’s just what it is: plain text.
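To make that concrete, here is what a few of those uses might look like written out. The heading, link, and wording below are just illustrative examples of mine, not anything prescribed by Markdown itself:

```markdown
# Chapter One

This is the first paragraph of my journal entry.

This is the second paragraph, separated by one blank line. Today I felt
*really* good, so I wrote about it [on my blog](https://example.com).
```

Take away the hash, the stars, and the brackets, and it reads as plain text again; that is the whole appeal.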
Also, it doesn’t have to be hard. Even Emacs, an advanced text editor, asks you questions when you add a link: “Link text,” “Link address,” and so on. Questions like that can be asked of you; you simply fill in the information, and the Markdown is created for you.
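A tiny sketch of that question-and-answer idea, in Python. The function names and prompts here are my own invention, meant only to show how a program could build the Markdown for you from a couple of answers:

```python
def make_markdown_link(text, url):
    """Build a Markdown link from its two parts."""
    return f"[{text}]({url})"

def prompt_for_link():
    """Ask the same kinds of questions an editor might, then
    return the finished Markdown, ready to insert into the document."""
    text = input("Link text: ")
    url = input("Link address: ")
    return make_markdown_link(text, url)

# The user answers two questions and never has to type a bracket.
print(make_markdown_link("my blog", "https://example.com"))
# → [my blog](https://example.com)
```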
Okay but what about us blind people?
To put it simply, Markdown shows us rich text. In the next section, I’ll talk about how, but for now, let’s focus on why. With nearly all screen readers, text formatting is not shown to us. Only Narrator on Windows 10 shows formatting with minimal configuration, and JAWS can be made to show formatting, but only through a lot of configuration of speech and sound schemes.
But, do we want that kind of information? I think so. Why wouldn’t we want to know exactly what a sighted person sees, in a way that we can easily, and quickly, understand? Why would we not want to know what an author intended us to know in a book? We accept formatting symbols in Braille, and even expect it. So, why not in digital form?
NVDA on Windows can be set to speak formatting information as we read, but it can be bold on quite arduous to hear italics on all this italics off as we read what we write bold off. Orca can speak formatting like NVDA, as well. VoiceOver on the Mac can be set to speak formatting, like NVDA, and also has the ability to make a small sound when it encounters formatting. This is better, but how would one distinguish bold, italics, or underline from a simple color change?
Even VoiceOver on iOS, which arguably gets much more attention than its Mac sibling, cannot read formatting information. The closest we get, in Safari and other web apps, is the formatted phrase being separated from the rest of the paragraph into its own item, showing that it’s different. But how is it different? What formatting was applied to this “different” text? Otherwise, text is plain, so blind people don’t even know that formatting is possible, let alone that whatever formatting is there isn’t made known to us by the program tasked with giving us this information. In some apps, like Notes, one can get some formatting information by reading line by line in the note’s text field, but what if one simply wants to read the whole thing?
Okay, but what about writing rich text? I mean, you just hit a hotkey and it works, so what could be better than that? First, when you press Control + I to italicize, there is no guarantee that “italics on” will be spoken. In fact, that is the case in LibreOffice for Windows: you do not know if the toggle key turned the formatting on or off. You could write some text, select it, then format it, but again, you don’t know if you just italicized that text or removed the italics. You may be able to check formatting with your screen reader’s command, but that’s slow, and you would hate to do that all throughout a document. Furthermore, with spoken formatting as it is, it takes some time to read your formatted text. Hearing descriptions of formatting changes tires the mind, which must interpret the fast-paced speech, get a sense of formatting flipping from off to on, and quickly return to interpreting text instead of text formatting instructions. And because formatting changes are spoken just like the text surrounding them, you may have to slow down your speech just to stay far enough ahead of things not to grow tired of the relentless text streaming through your mind. The same could happen with star star bold or italics star star, and if screen readers made finer use of a speech synthesizer’s pauses, a lot of the exhausting sifting through information rapidly fired at us would be lessened, but I don’t see much of that happening any time soon.
Even on iOS, where things are simpler, one must deal with the same problems as on other systems, except knowing if formatting is turned on or off before writing. There is also the problem of using the touch screen, using menus just to select to format a heading. This can be worked around using a Bluetooth keyboard, if the program you’re working in even has a keyboard command to make a heading, but not everyone has, or wants, one of those.
Markdown fixes most of this, at least. We can write in Markdown, controlling our formatting exactly, and read in Markdown, getting much more information than we ever have before, while also getting less excessive textual information: hearing “star” instead of “italics on” and “italics off” does make a difference. “Star” is not usually read surrounding words, and has already become, in a sense, a formatting term. “Italics on” sounds like plain text, is not a symbol, and while it is a formatting term, has many syllables and just takes time to say. Coupled with the helpfulness of Markdown for people without disabilities, adding it across an entire operating system would be useful for everyone: not just the few people with disabilities, and not just the majority without.
So, how could this work?
Operating systems, the programs which sit between you and the programs you run, have many layers and parts working together to make the experience as smooth as the programmers know how. In order for Markdown to be understood, there must be a part of the operating system that translates it into something that the component that displays text understands. Furthermore, that component must be able to display the resulting rich text, the Markdown interpretation, throughout the whole system: not just in Google Docs, not just in Pages, not just in Word, but in Notepad, in Messages, in Notes, in a search box.
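As a very rough sketch of that translation layer, imagine every string passing through a converter before it is drawn. Real Markdown is far more complicated than this; the toy Python below handles only single and double stars, just to show the shape of the idea:

```python
import re

def render_inline(text):
    """Convert *italics* and **bold** to tagged rich text.
    A real system component would handle the full Markdown syntax."""
    # Handle **bold** first, so the single-star rule can't eat the pairs.
    text = re.sub(r"\*\*(.+?)\*\*", r"<b>\1</b>", text)
    text = re.sub(r"\*(.+?)\*", r"<i>\1</i>", text)
    return text

print(render_inline("Okay, but how's *your* day going?"))
# → Okay, but how's <i>your</i> day going?
```

A text-display component doing this for every string is exactly the “part of the operating system” described above; the tags here just stand in for whatever the renderer actually consumes.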
With that implemented, though, how should it be used? I think that there should be options. It’s about time some companies released their customers from the “one size fits all” mentality anyway. There should be an option to replace Markdown formatting with rich text unless the line the formatting is on has input focus, a mode for showing only the Markdown and no rich text, and an option for showing both.
For sighted people, I imagine seeing Markdown would be distracting. They want to see a heading, not the hash mark that makes the line a heading. So, hide Markdown unless that heading line is navigated to.
For blind people, or for people who find plain text easier to work with, and for whom the display of text in different sizes and font faces is jarring or distracting, having Markdown only would be great, while being translated for others to see as rich text. Blind people could write in Markdown, and others can see it as rich text, while the blind person sees simply what they wrote, in Markdown.
For some people, being able to see both would be great. Being able to see the Markdown they write, along with the text that it produces, could be a great way for users to become more comfortable with Markdown. It could be used for beginners to rich text editing, as well.
But, which version of Markdown should be used?
As with every open source, or heatedly debated, thing in this world, there are many ways of doing things. Markdown is no different. There is the original Markdown, CommonMark, GitHub Flavored Markdown, Pandoc’s Markdown, MultiMarkdown,
and probably many others. I think that Pandoc’s Markdown would be the best, most extended variant to use, but I know that most operating system developers will stick with their own. Apple will stick with Swift Markdown, Microsoft may stick with GitHub’s, and the Linux developers may use Pandoc’s, if Pandoc is available as a package on the user’s architecture; if not, then it’s someone else’s issue.
Conclusion
In this article, I have attempted to communicate the importance of rich text, why Markdown would make editing rich text easy for everyone, including people with disabilities, and how it could be implemented. So now, what do you all think? Would Markdown be helpful for you? Would writing blog posts, term papers, journal entries, text messages, notes, or Facebook posts be enhanced by Markdown rich text? For blind people, would reading books, articles, or other text, and hearing the Markdown for bold, italics, and other such formatting make the text stand out more, make it more beautiful to you, or just get in your way? For developers, what would it take to add Markdown support to an operating system, or even your writing app? How hard will it be?
Please, let me know your thoughts, using the Respond popup, or replying to the posts on social media made about this article. And, as always, thank you so much for reading this post.
Apple’s accessibility consistency
This article will explore Apple’s consistent attention to accessibility, and how other tech companies with commitments to accessibility, like Microsoft and Google, compare to Apple in their accessibility efforts. It also shows where these companies can improve their consistency, and that no company is perfect at being an Assistive Technology provider yet.
Introduction
Apple has shown a commitment to accessibility since the early days of the iPhone, and since Mac OS X Tiger. Its VoiceOver screen reader was the first truly usable built-in screen reader on a personal computer and smartphone. Now, VoiceOver is on every Apple product, even the HomePod. It is so prevalent that people I know have begun calling any screen reader “VoiceOver.” This level of consistency should be congratulated in a company of Apple’s size and wealth. But is this a continuing trend, and what does it mean for competitors?
This will be an opinion piece. I will not stick only to the facts as we have them, and won’t give sources for everything I present as fact. This article is a testament to how accessibility can be made a fundamental part of a brand’s experience for the people affected, so feelings and opinions will be involved.
The trend of accessibility
The following sections of the article will explore companies’ accessibility trends so far. The focus is on Apple, but I’ll also show some of what its competitors have done over the years. As Apple has a greater following among blind people, and AppleVis has documented so much of Apple’s progress, I can show more of it than I can its competitors’, whose community-written information is scattered, and thus harder to search for.
Apple
Apple has a history of accessibility, shown by this article, written just under a decade ago, which goes over the previous decade’s advancements. As that article has done, I will focus less on a company’s talk of accessibility, and more on its software releases and services.
Apple is, by numbers and satisfaction, the leader in accessibility among mobile operating systems, but not in general purpose computer operating systems, where Microsoft’s Windows is used far more than Apple’s macOS. Beyond that, and services, Apple has made its VoiceOver screen reader on iOS much more powerful, and even more flexible, than its competitor, Google’s TalkBack.
iOS
As iPhones were released each year, so were newer versions of iOS. In iOS 6, accessibility settings began working together, VoiceOver’s Rotor gained a few new abilities, new braille displays worked with VoiceOver, and bugs were fixed. In iOS 7, we gained the ability to have more than one high-quality voice, more Rotor options, and the ability to write text using handwriting.
Next, iOS 8 was pretty special to me personally, as it introduced the method of writing text that I almost always use now: Braille Screen Input. This lets me type on the screen of my phone in braille, making my typing exponentially faster. Along with typing, I can delete text by word or character, and now send messages from within the input mode. I can also change braille contraction levels, and lock orientation into one of two typing modes. Along with this, Apple added the Alex voice, its most natural yet, which was previously available only on the Mac. For those who do not know braille or handwriting, a new “direct touch typing” method allows a user to type as quickly as a sighted person, if they can memorize exactly where the keys are, or have spell check and autocorrection enabled.
In iOS 9, VoiceOver users became able to choose Siri voices for VoiceOver’s speech, as an extension of the list of Vocalizer voices and Apple’s Alex voice. One can now control speech rate more easily, and the speed of speech can be greater than previously possible. This release also brought control over the time a double tap should take, a better method of selecting text, Braille Screen Input improvements, and braille display fixes and new commands.
Then, iOS 10 arrived, with a new way to organize apps, a pronunciation dictionary, even more voices, reorganized settings, new sounds for actions, a way to navigate threaded email, and some braille improvements. One great thing about the pronunciation editor is that it does not only apply to the screen reader, as in many Windows screen readers, but to the entire system speech. So, if you use VoiceOver, but also Speak Screen, both will speak as you have set them to. This is a testament to Apple’s attention to detail, and control of the entire system.
With the release of iOS 11, we gained the ability to type to Siri, new Siri voices, verbosity settings, the ability to have subtitles read or brailled, and the ability to change the speaking pitch of the voice used by VoiceOver. VoiceOver can now describe some images, which will be greatly expanded later. We can now find misspelled words, which will also be expanded later. One can now add and change commands used by braille displays, which, yes, will be expanded upon later. A few things which haven’t been expanded upon yet are the ability to read formatting, however imprecise, with braille “status cells,” and the “reading” of Emoji. Word wrap and a few other braille features were also added.
Last year, in iOS 12, Apple added commands to jump to formatted text for braille display users, new Siri voices, verbosity options, confirmation of rotor actions and sent messages, expansion of the “misspelled” rotor option for correcting the misspelled word, and the ability to send VoiceOver to an HDMI output.
Finally, in iOS 13, Apple moved accessibility to the main Settings list, out of the General section, provided even more natural Siri voices, and added haptics for VoiceOver to aid alongside, or replace, the sounds already present, with the ability to modify them or turn them off. A “vertical scroll bar” has also been added, as another method of scrolling content. VoiceOver can now give even greater suggestions for taking pictures, aligning the camera, and, with the iPhone 11, what will be in the picture. One can also customize commands for the touch screen, braille display, and keyboard, expanding the ability braille users already had. One can even assign Siri Shortcuts to a VoiceOver command, as Mac users have been able to do with AppleScript. One can now have VoiceOver interpret charts and graphs, either via explanations of the data or by an audible representation of it. This may prove extremely useful in education, and for visualizing data of any type. Speaking detected text has improved over the versions to include detecting text in unlabeled controls, and VoiceOver can now attempt to describe images as well. Braille users now have access to many new braille tables, like Esperanto and several other languages, although braille no longer switches languages along with speech.
MacOS
MacOS has not seen so much improvement in accessibility over the years. VoiceOver isn’t a bad screen reader, though. It can be controlled using a trackpad, which no other desktop screen reader can boast. It can be used to navigate and activate items with only the four arrow keys. It uses the considerable amount of voices available on the Mac and for download. It simply isn’t updated nearly as often as VoiceOver for iOS.
OS X 10.7, 10.8, and 10.9 saw a few new features, like more VoiceOver voices, braille improvements, and other things, but I couldn’t find much from before Sierra, so we’ll start there.
In Sierra, Apple added VoiceOver commands for controlling volume, to offset the absence of the physical function keys in new MacBook models. VoiceOver can also now play a sound for row changes in apps like Mail, instead of interrupting itself to announce “one row added,” because Apple’s speech synthesis server on the Mac doesn’t innately support a speech queue. This means that neither does VoiceOver, so interruptions must be worked around. Some announcements were changed, HTML content became web areas, and interaction became “in” and “out of” items. There were also bug fixes in this release.
In High Sierra, one can now type to Siri, and VoiceOver can switch languages when reading multilingual text, as VoiceOver on the iPhone has been able to do since at least iOS 5. This release also brought improved braille editing and PDF reading support, image descriptions, and improved HTML5 support.
In macOS Mojave, Apple added the beginnings of the new iPad apps on the Mac. These apps work poorly with VoiceOver, even now in Catalina. There were no new reported VoiceOver features in this release.
This year, in macOS Catalina, Apple added more control over punctuation, Xcode 11’s text editor is now a little more accessible (even though the Playgrounds function isn’t), and the Books app can now, after years of being on the Mac, be used for basic reading of books. The braille tables from iOS 13 are also available in macOS.
The future of Apple accessibility
All of these changes, however, were discovered by users. Apple doesn’t really talk about its accessibility improvements, just some of the highlights. While I see great potential in accessible diagrams and graphs, Apple didn’t mention this; users had to find it. Consequently, there may be fixes and features that we still haven’t found, three versions into iOS 13. Feedback between Apple and its customers has never flowed well, and this is only to Apple’s detriment. Since Apple rarely responds to feedback, users feel that their feedback doesn’t mean anything, so they stop sending it. Also of note: on VoiceOver’s Mac accessibility page, the “Improved PDF, web, and messages navigation” section is from macOS 10.13, two versions behind what is currently new in VoiceOver.
Another point is that services haven’t been the most accessible. Chief among them is Apple Arcade, which has no accessible games so far. Apple Research, I’ve found, has some questions whose answers are simply unlabeled buttons. While Apple TV Plus has audio description for all of its shows, this is a minor glimmer of light, shrouded by the inaccessibility of Apple Arcade, which now features over one hundred games, none of which I can play with any success. In all fairness, a patient blind person may be able to play a game like Dear Reader, which has some accessible items; but the main goal of that game is to find a word in a different color and correct it, which is completely at odds with total blindness, though it could be handled with speech parameter changes, audio cues, or other signals of font, color, or style changes.
Time will tell whether this new direction, taking responsibility neither for other developers’ work nor for the Mac, even when that work is flaunted by Apple itself, will become the norm. After all, Apple Arcade is an entire tab of the App Store; its inaccessibility is in plain view. As a counterpoint, the first iPhone software, and even the second version, was inaccessible to blind people, and yet the iPhone is now the most popular smartphone among blind people in developed nations.
Perhaps next year, Apple Arcade will have an accessible game or two. I can only hope that this comes true, and not the steady stepping back of Apple from one of its founding blocks: accessibility. We cannot know, as no one at Apple tells us their plans. We aren’t the only ones kept in the dark, though, as mainstream technology media shows. We must grow accustomed to waiting on Apple to show new things, and reacting accordingly, but also to providing feedback, and pushing back against encroaching inaccessibility and the decay of macOS.
Apple’s competitors
In this blog post, I compare operating systems. To me, an operating system is the root of all software, and thus the root of all digital accessibility. With this in mind, the reader may see why it is imperative that the operating system be as accessible, as easy and delightful to use, and as conducive to productivity as possible. Microsoft and Google are Apple’s largest competitors in the closed source operating system space, so they are what I will compare Apple to in the following sections.
Google
Google is the main contributor to the Android and Chromium projects. While both are open source, both are simply a base to be built from, not the end result. Not even Google’s own phones run “pure” Android; they have Google services, and probably other things, on top. Both projects have varying accessibility as well. While Apple pays great attention to its mobile operating system’s accessibility, Google does not seem to put many resources toward it. However, its Chrome OS, which is used heavily in education, is much more readily accessible, and even somewhat enjoyable to use for a lite operating system.
Android
Android was released one year after iOS. TalkBack was released as part of Android 1.6. Back then, it only supported navigation via a keyboard, trackpad, or scroll ball. It wasn’t until version 4 that touch screen access was implemented into TalkBack for phones, and to this day, TalkBack only supports one-finger commands, with two-finger gestures being passed through to Android as one-finger commands. TalkBack has worked around this issue by recently gaining, in Android version 8, the ability to use the fingerprint sensor, if available, as a gesture pad for setting options, and the ability to switch spoken languages, if using Google TTS, when reading text in more than one language. TalkBack otherwise uses graphical menus for setting options or performing actions, like deleting email. It can be used with a Bluetooth keyboard. By default, it uses Google TTS, a lower quality, offline version of the speech used for things like Google Translate, Google Maps, and the Google Home. TalkBack cannot use the higher quality Google TTS voices. Instead, voices from other vendors are downloaded for a more natural sound.
BrailleBack, discussed on its Google Support page, is an accessibility service which, when used with TalkBack running, provides rudimentary braille support to Android. Commands are clunky, arbitrary, and unfamiliar to users of other screen readers, and TalkBack’s speech cannot be turned off while using BrailleBack. This means, as one person helpfully suggested, that one must plug in a pair of headphones and not wear them, or turn down the phone’s volume, to use braille silently. Silent reading is one of braille’s main selling points, but accessibility, if not given the resources necessary, can become a host of workarounds. Furthermore, BrailleBack must be installed onto the phone, providing another barrier to entry for many deaf-blind users, so some simply buy iPods for braille if they wish to use an Android phone for customization or contrarian reasons, or simply stick with the iPhone as most blind people do.
Now, though, many have moved to a new screen reader created by a Chinese developer, called Commentary. This screen reader does, however, have the ability to decrypt your phone if you have encryption enabled. For braille, BRLTTY is used. This level of customization, offset by the level of access which apps have to do anything they wish to your phone, is an edge that some enjoy living on. It allows things like third-party, and perhaps better, screen readers; text-to-speech engines; apps for blind people like The vOICe, which gives blind people artificial vision; and other gray area apps, like emulators, which iOS will not accept on the App Store. Users who are technically inclined do tend to thrive on Android, finding workarounds a joy to discover and use. Those who are not, or who do not want to fiddle with apps to replace first-party apps which do not meet their needs, or with unoptimized settings, find themselves doing more configuring of the phone than using it.
Third party offerings, like launchers, mail apps, web browsers, and file managers, all have variable accessibility, which can change from version to version. Therefore, one must navigate a shifting landscape of first party tools which may be just about good enough, third party tools which are accessible enough but may not do everything you need, and tools which users have found workarounds to use. Third party speech synthesizers are also hit or miss: some do not work at all, others, like Eloquence, are now unsupported, and more, like eSpeak, sound unnatural. The only good braille keyboard which is free hasn’t been updated in years, and Google has not made one of their own.
Because of all this, it is safe to say that Android can be a powerful tool, but it has not received the focus needed to become a great accessibility tool as well. Google has begun locking down its operating system, taking away some things that apps could do before. This may come to inhibit the third party tools which blind people now use to give Android better accessibility. I feel that it is better to be on iOS, where things are locked down heavily, but where you have, at least somewhat, a clear expectation of fairness on Apple’s part. Android is not a big income source for Google, so Google does not have to answer to app developers.
Chrome OS
Chrome OS is Google’s desktop operating system, running Chrome as the browser, with support for running Android apps. Its accessibility has improved plenty over the years, with ChromeVox gaining many features which make it a good screen reader. One of ChromeVox’s main strengths is its braille support. It is normal for most first-party screen readers to support braille nowadays. When one plugs a braille display into a Chromebook with ChromeVox enabled, ChromeVox begins using that display automatically, if it is supported. The surprise here is that if one plugs it in while ChromeVox is off, ChromeVox will automatically turn on and begin using the display. This is beyond what other screen readers can do. ChromeVox, and indeed TalkBack, do not yet support scripting, editing punctuation and pronunciation, or “activities” as VoiceOver for iOS and Mac has, but ChromeVox feels much more polished and ready for use than TalkBack.
The future of Google accessibility
Judging by the past, Google may add a few more features to TalkBack, but fewer than Apple adds to iOS. They have much to catch up on, however, as it was only two years ago that they added the ability for TalkBack to detect and switch languages, and to use the fingerprint sensor like VoiceOver’s rotor. I have not seen much change in the two years since, except the conversion of focus tracking from a toggle into a mandatory feature. I suspect that, in time, they will remove the option to disable explore by touch, if they’ve not already.
With Chrome OS, and Google Chrome in general, I hope that the future brings better things, now that Microsoft is involved in Chromium development. It could become even more tied to web standards. Perhaps ChromeVox will gain better sounding offline voices than Android’s lower quality Google TTS ones, or gain sounds rendered in spatial audio for deeper immersion.
Microsoft
Microsoft makes only one overarching operating system, with changes for Xbox, HoloLens, personal computers, and other types of hardware. Windows has always been the dominant operating system for general purpose computing for blind people. It hasn’t always been accessible, and it is only in recent years that Microsoft has actively turned its attention to accessibility on Windows and Xbox.
Now, Windows’ accessibility increases with each update, and Narrator becomes a more useful screen reader. I feel that, in a year or so, blind people may be trained to use Narrator instead of other screen readers on Windows.
Windows
In the early days of Windows, there were many different screen readers competing for dominance. JAWS, Job Access with Speech, was the most dominant, with Window-Eyes, now abandoned, in second place. They gathered information from the graphics card to describe what was on the screen. There were no accessibility interfaces back then.
Years later, when MSAA, Microsoft Active Accessibility, was created, Window-Eyes decided to lean on that, while JAWS continued to use video intercept technology to gather information. In Windows 2000, Microsoft shipped a basic screen reader, Narrator. It wasn’t meant to be a full, useful screen reader, but one made so that a user could set up a more powerful one.
Now, we have UI Automation, which is still not a very mature technology, as screen readers are still not using it for everything, like Microsoft Office. GW Micro, makers of Window-Eyes, merged with AI Squared, producers of the ZoomText magnifier, which was bought by Freedom Scientific, who promptly abandoned Window-Eyes. These days, JAWS is being challenged by NVDA, NonVisual Desktop Access, a free and open source screen reader, and by Microsoft’s own Narrator screen reader.
In Windows 8, Microsoft began adding features to Narrator. Now, in Windows 10, four years later, Narrator has proven itself useful, and in some situations, helpful in ways that all other screen readers have not been. For example, one can install, set up, and begin using Windows 10 using Narrator. Narrator is the only screen reader which can, with little configuration, convey formatting not by describing it, but by changing its speech parameters to “show” formatting through sound. The only other access technology which does this automatically is Emacspeak, the “complete audio desktop.” Narrator’s braille support must be downloaded and installed, for now, but is still better than Android’s support. Narrator cannot, however, use a laptop’s trackpad for navigation. Instead, Microsoft decided to add such spatial navigation to touchscreens, meaning that a user must reach up and feel around a large screen, instead of using the flat trackpad as a smaller, more manageable area.
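The idea of “showing” formatting through sound, as Narrator and Emacspeak do, can be sketched in a few lines: map text attributes to changes in speech parameters rather than to spoken announcements. This is a hypothetical illustration of the concept, not how either product is actually implemented; all numbers are made up:

```python
# Toy sketch (not Narrator's or Emacspeak's real implementation):
# convey text formatting through speech parameters instead of
# spoken descriptions like "bold" or "heading level 1".

BASE_PITCH = 100  # arbitrary baseline pitch, in Hz
BASE_RATE = 1.0   # arbitrary baseline speaking-rate multiplier

def speech_params(attributes):
    """Return (pitch, rate) for a run of text with the given attributes."""
    pitch, rate = BASE_PITCH, BASE_RATE
    if "heading" in attributes:
        pitch += 20          # headings spoken noticeably higher
    if "bold" in attributes:
        rate *= 0.9          # bold text spoken slightly slower, for emphasis
    if "italic" in attributes:
        pitch += 5           # italics nudged up in pitch
    return pitch, rate

# A document is a list of (text, attributes) runs; each run gets its
# own voice settings, so the formatting is heard, never announced.
doc = [("Chapter 1", {"heading"}), ("Important", {"bold"}), ("note", set())]
rendered = [(text, speech_params(attrs)) for text, attrs in doc]
```

The design point is that the listener learns the mapping once, after which formatting costs no extra listening time at all.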
Speaking of support, Microsoft’s support system is better in a few ways. First, unlike Apple, their feedback system allows more communication between the community and Microsoft developers. Users can comment on issues, and developers can ask questions, a bit like on GitHub. Windows Insider builds come with announcements by Microsoft listing what is new, changed, fixed, and broken. If anything changes regarding accessibility, it is in the release notes. Microsoft is vocal about what is new in Windows accessibility, in an era when many other companies seem almost ashamed to mention it in release notes. This is much better than Apple’s silence on many builds of their beta software, and no notice of accessibility improvements and features at all. Microsoft’s transparency is a breath of fresh air to me, and I am much more confident in their commitment to accessibility for it.
Their commitment, however, doesn’t seem to pervade the whole company. The Microsoft Rewards program is hard for me to use, and contains quizzes where answers must be dragged and dropped. This may be fun for sighted users, but I cannot do them with any level of success, so they aren’t fun for me at all. Another problem is the quality of speech. While Apple has superb speech options like MacinTalk Alex, Vocalizer, or the Siri voices, Microsoft’s offline voices sound bored, pause for too long, and have a robotic buzzing sound as they speak. I think that a company of Microsoft’s size could invest in better speech technology, or make their online voices available for download for offline use. Feedback has been given about this issue, so perhaps the next version of Windows will have more pleasant speech.
Windows has a few downsides, though. It doesn’t support sound through its Linux subsystem, meaning I cannot use Emacs, with Emacspeak. Narrator does not yet report when a program opens, or when a new window appears, and other visual system events. Many newer Universal Windows apps can be tricky to navigate, and the Mail app still automatically expands threads as I arrow to them, which I do not want to happen, making the mail app annoying to use.
The future of Microsoft accessibility
I think that the future of Microsoft, regarding accessibility, is very bright. They seem dedicated to the cause, seeking feedback much more aggressively than Apple or Google, and many in the blind community love giving it to them. Windows will improve further, possibly with Narrator gaining the ability to play interface sounds in immersive audio using Windows Sonic for Headphones, braille becoming a deeper, built-in part of Narrator, and higher quality speech made available for download. Since Microsoft is also a gaming company, it could work on creating soundscapes for different activities: browsing the web, writing text, coding, reading, to aid in focus or creativity. Speech synthesis could be given even more parameters for speaking even more types of formatting or interface item types. Really, with Microsoft’s attention to feedback, I feel that their potential for accessibility is considerable. Then again, it is equally possible that Apple will implement these features, but Apple is not as inviting as Microsoft has been when it comes to hearing what I’d love in an operating system, so I now just report bugs, rather than giving Apple new ideas.
Conclusion
It may be interesting to note the symmetry of accessibility: Apple’s phone is the dominant phone, but Microsoft’s Windows platform is the dominant laptop and desktop system among blind people. Apple’s iPhone is more accessible than Google’s Android, but Google’s Chrome OS is more polished and updated accessibility-wise than Apple’s macOS. Personally, I use a Mac because of its integration with iOS Notes, Messages, Mail, and other services; the Mail app is a joy to breeze through email with; and open source tools like Emacs with Emacspeak do not work as well on Windows. Also, speech matters to me, and I’d probably fall asleep much more often hearing Microsoft’s buzzing voices than the somewhat energetic sound of Alex on the Mac, who speaks professionally, calmly, and never gets bored. I do, however, use Windows for heavy usage of the web, especially Google web apps and services, and gaming.
Time will tell if companies continue in their paths, Apple forging ahead, Microsoft burning bright, and Google… being Google. I hope, nevertheless, that this article has been useful for the reader, and that my opinions have been as fair as possible towards the companies. It should be noted that the accessibility teams at each company are individuals who have their own ideas of what accessibility is, means, and should be, and they should be treated with care. After all, this past decade has been a long journey of, probably, most effort spent convincing managers that the features we now have are worth spending time on, and answering user complaints of “my phone is talking to me and I want it turned off right now!”
This does not excuse them for the decay of Android and Mac accessibility, or the lack of great speech options on Windows. It does not excuse them for Apple Arcade’s lack of accessible games, or Microsoft Rewards’ inaccessible quizzes. We must give honest, complete, and critical feedback to these people. After all, they do not know what we need, what will be useful, or, if we dare tell, what will be delightful for us to use, unless we give them this feedback. This applies to all software, whether it be Apple’s silent gathering of feedback, Microsoft’s open arms and inviting offers, or open source software’s issue trackers, Discord servers, mailing lists, and GitHub repositories. If we want improvement, we must ask for it. If we want a better future, we must make ourselves heard in the present. Let us all remember the past, so that we can influence the future.
Now, what do you think of all this? Do you believe Apple will continue to march ahead regarding accessibility, or do you think that Microsoft, or even Google, has something bigger planned? Do you think that Apple is justified in their silence, or do you hope that they begin speaking more openly about their progress, at least in release notes? Do you like how open Microsoft is about accessibility, or do you feel that they still don’t talk enough about accessibility for blind users? I’d love to hear your comments, corrections, and constructive criticism, either in the comments, on Twitter, or anywhere else you can find me. Thanks so much for reading!
Advocacy of open source software
In this post, I’ll detail my experiences of advocating for accessibility in open source software, why it is important, and how others can help. I’ve not been doing it for long, but at least now, I’ve done a bit. I’ll also touch upon why I think open source software, on all operating systems, is important, and what closed source and closed feedback systems cannot offer, which open source grants. On the other hand, there are things which closed source somewhat grants, but which have faltered slightly in recent days. I will attempt to denote what is fact and what is opinion; this goes for any post of a commentary or informative nature.
The Appeal of Open Source
Open source, or free software, basically means that a person can view and change the source code of software that they download or own. While this doesn’t mean much to users, it does mean that many different people can work on a project to make it better. This has no value on its own (see the “Heartbleed” SSL bug and its aftermath), but as with SSL, things can obviously improve when given an incentive.
For now, open source technology is used in many closed source operating systems. For example, the Liblouis braille tables are used in iOS, macOS, and most Linux distributions through BRLTTY. While the software is not perfect, it is often made for more than one operating system, has a helpful community of users, and, best of all for accessibility, has developers who are more likely to consider accessibility. This is greatly helped by platforms for open source development, like GitHub and GitLab, which allow users to post “issues” on projects, including accessibility ones.
The Appeal of Closed Source
People like getting paid. I should know, as a working blind person who does love getting paid for time and effort well spent. People love keeping things hidden while being worked on. I wouldn’t want a reader reading an incomplete blog post, after all, and spreading the word that “Devin just kind of wrote a few words and that’s all I got from the blog.” People love being able to claim their work as theirs, instead of having to share the credit with other people or companies. I don’t have direct experience with this, because I need all the help I can get, but in my opinion, it is a factor in choosing to create on your own, as a user or a company. Another great thing about closed source is that your competitors can’t copy what you’re doing, as you do it, and when you’re an important company, with allegiance to your shareholders, you must do anything to keep making money. But, what about accessibility?
Open Source Accessibility
The accessibility of open source projects varies a lot. For example, before RetroArch was made accessible, its interface was not usable by blind people. Now, though, I can use it easily. However, current versions of the KDE Plasma desktop do not work well with the Orca screen reader. The following quote is from the release notes for KDE’s latest desktop version:
#+BEGIN_QUOTE
KDE is an international technology team that creates free and open source software for desktop and portable computing. Among KDE’s products are a modern desktop system for Linux and UNIX platforms, comprehensive office productivity and groupware suites and hundreds of software titles in many categories including Internet and web applications, multimedia, entertainment, educational, graphics and software development. KDE software is translated into more than 60 languages and is built with ease of use and modern accessibility principles in mind. KDE’s full-featured applications run natively on Linux, BSD, Solaris, Windows and Mac OS X.
#+END_QUOTE
“Modern accessibility principles,” you say? In my opinion, we seem to be talking about different definitions of “accessibility.” Yes, there are multiple definitions: the ability to be accessed, the ability to be found, and the quality of being easy to deal with. As stated in the About section of this site, I use accessibility to mean being able to be used completely by blind people. This carries the implication that every single function, and all needed visual information, must be conveyed to a blind person in order for software to be accessible. This rules out the “good enough” approach that so many blind people accept as the status quo. Luckily for blind people who would love to use KDE, there is work being done on this issue.
Gnu, the project behind much of Linux, also has an Accessibility Statement, which does seem to be very out of date: it references Flash Player and Silverlight, which are no longer in common use, and does not reference Apple’s iOS, Google’s Android, or other modern technologies which are not open source (or are, but might as well not be because of the necessity of closed-source services), but which include assistive technologies. I encourage every adventurous blind person to make themselves available for testing open source software and operating systems; user testing was mentioned by the KDE team as something blind people could do to help. Believe me, having an environment which is a “joy to use” is a dream of mine.
Gnome and Mate accessibility is okay, but neither comes close to the accessibility of Windows and Mac systems. For a good example, if you press Alt + F1 in Gnome, and probably Mate too (tested; Mate works a lot better than Gnome), you may only hear “window.” Advanced users will know to type something in Gnome, or use the arrow keys in Mate, but regular users should not have to learn to hunt around due to bad accessibility. The fact that less technically inclined users do use Linux is a testament to blind people’s ingenuity and ability to adapt, rather than to the accessibility of the platform.
Open source accessibility is so hit and miss because there are so many standards. There is the GTK framework for building graphical apps, which does have some accessibility support, but developers must label the items in their programs with text. There is the Qt framework, which seems to have poorer accessibility support. Basically, developers can do anything they want, which is good for freedom, but often not great for accessibility. Also, much of the community has not heard of accessibility practices, does not know that blind people use computers, or thinks that we must use braille interfaces to interact with computers and digital devices. This is a failure on our part, as we do not “get out there” on the Internet enough. With the advent of an accessible Reddit client, this may begin to change. Further work must be done to give blind users an accessible Reddit interface on the web, for use on computers, not just iPhones. However, GitHub is very accessible, and there is nothing stopping one from submitting issues.
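The labeling requirement above can be shown with a toy model. The classes and names below are hypothetical, not real GTK or Qt API; the point is that a screen reader can only speak what the accessibility layer exposes, so an unlabeled control comes out as a bare, meaningless role:

```python
# Toy model of why unlabeled controls break accessibility.
# Hypothetical classes, not real GTK/Qt API: a screen reader can
# only announce what the accessibility layer exposes to it.

class Widget:
    def __init__(self, role, accessible_name=None):
        self.role = role                      # e.g. "button", "checkbox"
        self.accessible_name = accessible_name

def announce(widget):
    """What a screen reader would speak when this widget gains focus."""
    if widget.accessible_name:
        return f"{widget.accessible_name}, {widget.role}"
    # With no label, all the user hears is the bare role.
    return f"unlabeled {widget.role}"

labeled = Widget("button", accessible_name="Play")
unlabeled = Widget("button")  # the developer forgot to set a name

print(announce(labeled))    # "Play, button"
print(announce(unlabeled))  # "unlabeled button"
```

The second announcement is exactly what blind users hit in unlabeled open source apps: the control works, but there is no way to know what it does without trial and error.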
Closed Source Accessibility
“Okay but what about Windows? And Apple? You like Apple, right?” Basically, it’s hard to tell. Software doesn’t write itself, it is written, for now, by people. People can make mistakes, ignore guidelines, or simply not care about accessibility. However, those guidelines do exist, and are usually one standard, like the iOS accessibility standard. This means that companies can develop accessible software easily, and are held accountable by managers to uphold accessibility. But, even the best of accessible companies do not always do the right thing. Apple, for example, has created two services, Apple Arcade and Apple Research. Apple Arcade contains no games which a blind gamer can play without expending much more effort than a sighted gamer. Apple Research contains some questions with answer buttons which are not labeled, or cannot be activated. Does Apple think that blind people do not want to game, or that we don’t care about our hearing, heart, or for women, their reproductive health? Apple has also created Swift Playgrounds, an app for children to learn to code. This is accessible. But what about adults? Shouldn’t blind adults, who are usually technically inclined enough, be given a chance to learn to code? I’ll probably rant about this in a future article.
Microsoft has been on an accessibility journey for a few years now, but even they have a few problems. First, the voices in Windows 10 are poor for screen reading tasks. They pause far too long at the end of clauses and sentences, leading me, at least, to press Down Arrow to move to the next line before the current line was actually done being spoken, all because the voice paused just long enough to make me think there was no more text to speak. Microsoft’s Xbox Game Pass is great, but I could not find any accessible games in the free rotations. Sure, there’s Killer Instinct, which many blind people enjoy playing, but I found it not only inaccessible, as the menus do not speak, but boring, as the characters all seemed to simply do the same thing. I know that games do not have to be accessible to be fun, but I expect companies who showcase games, like Apple with Arcade, to have at least one accessible game for blind people to enjoy. And I also know that neither Apple nor Microsoft makes these games, but they do choose to advertise them, endorse them even, and it shows that, for Apple Arcade at least, video games are not something which they expect blind people to play. Microsoft is proving them wrong, with the release of Halo with screen reader usability in menus, and the possibility that the new Halo game will be accessible.
Another problem with Microsoft is that not all of their teams are on board. Like Apple with Arcade and Research, Microsoft has the Rewards team. Their quizzes require one to move items around to reorder answers to get the quiz correct. This may be easy, and perhaps fun, for sighted people, but is simply frustrating for blind people. Other problems include the release of the new Microsoft Edge, which, for most users of screen readers, requires that the user turn off UI Automation in order to read some items on the web. Otherwise, if Microsoft’s upcoming foldable phone comes with greatly enhanced accessibility relative to pure Android, and with the Narrator screen reader optimized and made great and enjoyable for a mobile experience, I think that Microsoft could take plenty of mobile phone market share back from Apple. (Update: it’s barely any better than any other Android phone, so Apple still wins.) They already have most general purpose computer users who are blind, so taking from Apple would be a huge win for them regarding accessibility. But, on that, we’ll have to wait and see how far Microsoft takes their commitment to accessibility. The more cynical side of me says that Microsoft will simply slap Android on a folding phone and release it, because why fight Apple.
Reporting Bugs
So, what can we do to make accessibility better? Just about all open source software, previously including the stuff making up this blog, is hosted on GitHub. Just about all companies behind closed source software claim to want your feedback. So, I recommend giving them any feedback you have. I know that giving feedback to Apple is like throwing $100 bills into the ocean: giving your valuable time to something which may offer no results, and just gets you the robotic “thanks” message. I know that sometimes talking to Microsoft’s accessibility team may seem unproductive, because they lead you from Twitter to one of a number of feedback locations. I know that feedback to open source projects may take a lot of time and explaining and promoting accessibility to a community which has never considered it before, but it all may help.
For a great, and successful, GitHub issue regarding accessibility, see this issue on the accessibility of RetroArch. You can see that I approached the RetroArch team respectfully, with knowledge of basic accessibility and computer terminology. Note that I gave what should happen, what is happening, and what can be done to fix the problem. As the saying goes, if you do not contribute to a solution to a problem, you are a part of the problem. Blind people need to remember to give solutions, not just whine that something isn’t working and that they can’t play Poke A Man like everyone else.
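An issue following that structure, expected behavior, actual behavior, and a proposed fix, might look like the sketch below. The project details and menu names are invented for illustration, not taken from any real issue:

```markdown
# Screen reader cannot read the settings menu

**What should happen:** When a screen reader such as NVDA or VoiceOver is
running, each menu item should be announced as it gains keyboard focus.

**What actually happens:** Arrowing through the settings menu produces no
speech; the screen reader reports nothing at all.

**Possible fix:** Expose each menu item through the platform accessibility
API (UI Automation on Windows, AT-SPI on Linux), or speak the focused
item's text directly through a text-to-speech call, as RetroArch now does.
```

Giving the three parts in this order lets a developer with no accessibility background reproduce the problem and see a concrete path forward.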
Also, share links to your feedback with other blind people who can vote, thumb up, or comment on it. If you do comment, please remember that feedback does not net instant results. I’m still waiting on Webcamoid to have an accessible interface. But, at least I’ll know when something changes, and I could even pay for features to be implemented.
This is opposed to the closed source model, where feedback is “passed on to the team,” or you are thanked, by your iPhone, for your feedback, but do not hear anything back from developers, and you most definitely cannot pay for specific features to be worked on, or donate to projects that you feel deserve it. You must hope and have faith that large companies with more than one billion users care enough to hear you. For perspective, if every blind person stopped using an iPhone, Apple would not miss the lost sales, compared to its billions of sighted users. However, the engineers who work on iOS accessibility are people too, with deadlines, lives, and feelings, and we should also respect that they are probably tightly restricted in answering feedback, fixing bugs, and creating new, exciting features.
As for me, I will continue to support open source software. I’ll keep using this Mac and iPhone because they work the best for me and what I do for work and writing. But, believe me, when something better comes along, I’ll jump ship quickly. As blind people, I feel, we cannot afford to develop brand loyalty. Apple, Microsoft, or Google, I think, could drop accessibility tomorrow, and there we’d be, left in the cold. I highly doubt they will. They may let it lie stagnant, but they probably won’t remove it. I do not write this to scare you in the least, but to make you think about how much control you actually have over what you use, how companies and developers view us, and how we can improve the situation for ourselves. If sighted people notice a bug or want a feature in iOS or Windows, they can gather the tech press and pressure Apple or Microsoft. If we find an accessibility bug, do we have enough clout, or unity, to pressure these companies? Writing feedback, testing software, trying new things, writing guides and fixing documentation, or, if able, translating software into other languages are all things that any blind person can do. I’m not saying that I’m perfect at any of this. I just think that we as a community can grow tremendously if we strike out from our comfortable Windows PCs, Microsoft Word, audio games, TeamTalk, and old speech synthesizers.
I’ll give some projects you could try out and give feedback on:
If you have sight, imagine that in every digital interface, the visuals are beamed directly into your eyes, into the center and peripheral vision, blocking out much else, and demanding your attention. All “visuals” are mostly text, with a few short animations every once in a while, and only on some interfaces. You can’t move it, unless you want to move everything else, like videos and games. You can’t put it in front of you, to give you a little space to think and consider what you’re reading. You can’t put it behind you. You can make it softer, though, but there comes a point where it’s too soft and blurry to see.
Also imagine that there is a form of art that 95% of other humans can produce and consume, but which for you is either blank or filled with meaningless letters and numbers ending in .JPEG, .PNG, .BMP, or other computer jargon, and the only way to perceive it is to humbly ask that the image be converted to the only form of input your digital interface can understand: straight, plain text. This same majority of people have access to everything digital technology has to offer. You, though, have access to very little in comparison. Your interface cannot interpret anything that isn’t created in a standards-compliant way. And this culture, full of those who need to stand out, doesn’t like standards.
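On the web, that humble ask is the `alt` attribute. A minimal sketch of the difference, with a made-up filename and description:

```html
<!-- Inaccessible: the screen reader can only read out the filename. -->
<img src="IMG_4021.JPEG">

<!-- Accessible: the author supplies the plain-text description. -->
<img src="IMG_4021.JPEG"
     alt="A golden retriever catching a frisbee on a sunny beach">
```

The first image is the “meaningless letters and numbers” of the metaphor; the second is the converted form a screen reader can actually present.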
There is, though, a digital interface built by Apple which uses machine learning to try to understand this art, but it is Apple only, and they love control too much to share it with interfaces on other companies’ systems. And there are open source machine learning models, but the people who could use them are too busy fixing their interface to work around breaks in operating system behavior and UI bugs to research that. Or you could pay $1099, or $100 per year, for an interface that can describe the art, by sending it to online services of course, and get a tad more beauty from the pervasive drab, plain text.
Now, you can lessen the problem of eye strain, blocked out noise, and general information fatigue by using a kind of projector, but other people see it too, and it’s very annoying to those who don’t need this interface, with its bright, glaring lights, moving quickly, dizzyingly fast. It moves in a straight line, hypnotically predictable, but you must keep up, you must understand. Your job relies on it. You rely on it for everything else too. You could save up for one of those expensive interfaces that show things more like print on a page… if the page had only one small line and was rather slow to read, but even that is dull. No font, no true headings, no beauty. Just plain, white on black text, everywhere. Lifeless. Without form and void. Deformed and desolate. Still, it would make reading a little easier, even if it is slower. But you don’t want to be a burden to others or annoy them, and you’ve gotten so used to the close, direct, heavy mode of the less disruptive output that you’re almost great at it. But is that the best for you? Is that all technology can do? Can we not do better?
This is what blind people deal with every day. From the ATM to the desktop workstation, screen readers output mono, flat, immovable, unchanging, boring speech. There is no HRTF for screen readers. Only one can describe images without needing to send them to online services. Only a few more can describe images at all. TalkBack, a mobile screen reader for Android, and ChromeVox, the screen reader on Chromebooks, can’t even detect text in images, let alone describe images. Update: TalkBack can read text and icons now, but not describe images. ChromeVox still can’t do any of that. All of them read from top to bottom, left to right, unless they are told otherwise. And they have to be specifically told about everything, or it’s not there. We can definitely do better than this.
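To make the “they have to be specifically told about everything” point concrete, here is a minimal HTML sketch; the file name and description are invented for illustration:

```html
<!-- Without alt text, a screen reader can only announce something
     like "image" or the raw file name, "sunset dot J P E G". -->
<img src="sunset.jpeg">

<!-- With alt text, the same element becomes plain text the screen
     reader can actually speak. -->
<img src="sunset.jpeg" alt="Sun setting over a calm lake">
```

Nothing about the picture itself changes; the only difference is that someone told the interface what is there.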
Response to “Why Linux Is More Accessible Than Windows and MacOS”
Today, I came across an article called Why Linux Is More Accessible Than Windows and macOS. Here, I will give responses to each point of the article. While I applaud the author’s wish to promote Linux, I think the points given are rather shallow and very general in nature, and could be given about any computing operating system comparison.
1. The Power of Customization
In this section, the author argues that, while closed source systems do have accessibility options, people with disabilities (whom the author calls “differently abled,” a term some people with disabilities consider ableist because it feels more like inspiration porn) have to compromise on what modifications they can make to their closed source operating systems. This can be true, but from my experience using MacOS, Windows, iOS, Android, and Linux, closed source systems have a wider community of people with disabilities using them, and thus have addons and extensions that allow for as few compromises to the user’s experience as possible.
Another point that must be kept in mind is that Linux is not the most user-friendly OS yet. The modifications that can be made with Linux are more than in MacOS and Windows, yes. But I, for example, want to hold down the Space bar and have that register as holding the Control key. I probably cannot do that in Windows and MacOS. I surely can do it in Linux, but it would take a lot of learning about key codes and how to change keyboard maps throughout the console and X/Wayland sessions. The GUI will not provide this ability. The best I can do with the GUI is change Capslock to Control.
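For what it’s worth, a third-party remapper like keyd can do this specific remap without hand-editing keymaps, though that rather proves the point: the fix lives in a root-owned configuration file, not in any GUI. A minimal sketch, assuming keyd is installed and running as a system service:

```ini
# /etc/keyd/default.conf — a minimal keyd sketch (assumes keyd is
# installed and its service is enabled).
[ids]
*

[main]
# Space acts as Control while held, and still types a space when tapped.
space = overload(control, space)
```

A new user would need to know this tool exists, edit a file as root, and restart the service — none of which the desktop’s accessibility settings will ever surface.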
Also, let’s say a new user installs a distribution like Fedora Linux, and needs a screen reader, or any accessibility service. The user has done a little homework, so knows to turn on Orca with Alt + Super + S. The user then launches Firefox from the “run application” dialog. And it doesn’t work. Nothing reads. Or the user runs a media player, and gets the same result. Why is this? I’ll spare you the arduous digging needed to find the answer. In the personalization menu of a desktop’s system menu, or in the Assistive Technologies dialog, there is a checkbox which needs to be checked in order to enable assistive technologies to work correctly with the rest of the system. The user has to know that it’s there, how to get to it in the chosen desktop environment, and how to check the box and close the dialog. All this before even doing anything else with their system.
This means that, out of the box, on almost all Linux distributions, this one key shows that the Linux GUI, by nature of needing this box to be checked, is hostile to people with disabilities. Can distribution maintainers check this box by default? Yes. Do they? No. Does this box even need to be there? No. Assistive Technologies could be enabled by default, with advanced users, after receiving warning in comments of a configuration file, able to disable it, only via changing the configuration file.
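On GNOME-based desktops, that checkbox just flips one GSettings value (MATE keeps an equivalent key under its own schema), so a distribution could ship it enabled, or a user could flip it from a terminal; a sketch, assuming a GNOME session:

```shell
# Enable the assistive technology support the checkbox controls.
gsettings set org.gnome.desktop.interface toolkit-accessibility true

# Confirm the new value.
gsettings get org.gnome.desktop.interface toolkit-accessibility
```

That a one-line default could remove this hurdle entirely, and hasn’t, is the whole problem.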
2. Linux Is Stable and Reliable
About fifteen minutes ago, I was using Gmail within the latest Google Chrome on Fedora Linux. Suddenly, the screen reader, Orca, stopped responding as I tried to move to the next heading in an email. I switched windows, and nothing happened. I got speech back in a good 20 seconds, but that shows that Linux isn’t quite as stable as the author may believe. At least, not every distribution.
My experience is my own; I do not claim to be an expert in Linux usage or administration. But this is still my experience; while Linux is stable, and I can use it for work purposes, it is not as stable, especially in the accessibility department, as Windows or MacOS. I would say, though, that it is more usable than MacOS, where just about anything in Safari, the web browser, results in Safari going unresponsive for a good five seconds or more.
Another important point is that while many developers hammer away at the core of Linux, how many people maintain ATSPI, the Linux bridge between programs and accessibility services? How many people make sure the screen reader is as lean and performant as possible? How many people make sure that GTK is as quick to give information on accessibility as it is to draw an image? How many people make sure that when a user starts a desktop, that focus is set somewhere sensible so that a screen reader reads something besides “window”? My point is, open source is full of people that work on what they want to work on. If a developer isn’t personally impacted by accessibility needs, that developer is much less likely to code with accessibility in mind. So let’s stop kidding ourselves into thinking that overall work on Linux includes even half the needed work on accessibility specifically.
While Linux’s accessibility crawls towards betterment at about one fix per month or two, Windows and MacOS have actual teams of people working specifically on accessibility, and a community of disabled developers working on third-party solutions to any remaining problems. Do all the problems get fixed? No, especially not in MacOS. But the fact that the more eyes on a problem there are, the more things get noticed applies significantly to accessibility.
3. Linux Runs on Older Hardware
This section is one I can agree with completely. Linux running on old hardware is what will drive further adoption when Windows 11 begins getting more features than Windows 10. This is even more important for people with disabilities, who usually have much less money than people without disabilities, so cannot upgrade computers every year, or even every three or five years.
4. Linux Offers Complete Control to the Users
This is true if the user is an advanced Linux user. If the user is just starting out with Linux, or even just starting out with computers in general, it is very false. How would it feel to be trapped in a place without a gate, without walls, without doors, without windows? That’s how a new computer user would feel when dealing with Linux, especially if the person is blind, and thus needs to know how to use the keyboard, what the words the speech is saying mean, what all the terminology means, but not even knowing where the Space bar is, or even how to turn the computer on.
This is a huge issue for every operating system, but was somewhat solved by MacOS by adding a wonderful tutorial for VoiceOver, its screen reader, and guiding the user to turn it on when the computer starts, without the user having to touch a single key.
As for this piece:
#+beginquote On the other hand, Linux shares every line of code with the user, providing complete control and ownership over the platform. You can always try new technologies on Linux, given its inherent nature, compatibility, and unending support for each of its distros. #+endquote
This is largely wrong. First, new Linux users won’t understand the code that Linux “shares” with them. New Linux users will not know where to look to find this code. So, this really doesn’t help them. Open source or closed, the OS is going to be a black box to any new user. And new users are what count. If new users do not want to stay on Linux, they will not spend the time to become old users, who can then teach newer users. Also, good luck trying new technologies on Debian.
Accessibility Comparison Between Linux and Windows
Here, the author compares a few access methods: the thing the author just calls a “screen reader” on Linux, which I hope they know is called Orca, versus Windows Narrator, the weakest option on Windows, but a built-in one.
The author doesn’t mention NVDA on Windows, which is far more powerful than Narrator, and has several addons to enhance its functionality even further. One can add many different “voice modules” to Windows, and NVDA has plenty of addon voice modules as well, many of which are not a part of Linux, like DecTalk, Softvoice, and Acapela TTS.
Accessible-Coconut: An Accessible Linux Distribution
I’m going to be blunt here: this distribution is based on an old, LTS version of Ubuntu, and will lack the latest versions of Orca, ATSPI, GTK, and everything else. If you want something approaching good, try Slint Linux. That’s about the most user-friendly distribution for the blind out there right now. Fedora’s Mate spin is what I use, but Orca doesn’t start at login, and assistive technology support isn’t enabled by default either.
Linux Distros Cater to Every User Type
This summary continues the points expressed in the article, and ends with the author inviting “you” to try Linux if “you” want your computer to be more accessible. I suppose the author is pointing people to try Accessible Coconut. At this point, I would rather users do a ton of reading about Linux, the command line, Orca, all the accessibility documentation they can find, try Windows Subsystem for Linux, and then, if they want more, put Linux on a separate hard drive and try it that way. I would definitely start with Slint, or Fedora, but never with a lackluster distro like Accessible Coconut.
Analyzing the Windows 11 Accessibility Announcement
Microsoft announced Windows 11 a few weeks ago, and, from my searches at least, there still isn’t an audio described version of the announcement. Update: There’s one now. Anyways, they also released a post on their Windows blog about how Windows 11 was going to be the most accessible, inclusive, amazing, delightful thing ever! So, I thought I’d analyze it heading by heading to try to figure out what’s fluff and what’s actual new stuff worthy of announcement.
Beyond possible, efficient and yes, delightful
So, they’re trying to reach what the CEO, in his book “Hit Refresh,” called the “delightful” experience he wanted to work towards. His gist was that Windows was pretty much required now, but he wanted to make it delightful. Well, the only user interface that is delightful to me is Emacspeak. MacOS and iOS come close. What makes them delightful are a few things: sound, quality speech, and speech parameter changes. I won’t go over all that here; my site has plenty on all that already. But it’s safe to say that Microsoft isn’t going near that anytime soon.
Instead of trying to offload cognitive strain from parsing speech all day, they put even more on it. Microsoft Edge has “page loading. Loading complete.” Teams has similar textual descriptions of what’s going on. And while I appreciate knowing what’s going on, speech takes a second to happen, be heard, and be processed. Sound happens a lot quicker, and over time, a blind user can get pretty good at recognizing what’s going on. But whenever I brought this up to the VS Code team, they said something about not being able to add sounds without dragging in another dependency, so they’d have to bring it up with the team and all that. Well, they won’t become the most delightful editor for the blind any time soon. Just the easiest to use.
And, while this is partly the fault of screen reader developers who just won’t focus on sound or speech parameter changes for text formatting and such, Microsoft could be leading the way with Narrator. And yeah, they’ve got a few sounds, and their voices can change a little for text formatting, but their TTS is just too limited to make it really flexible and enjoyable. Instead of changing intonation, they change pitch, rate, and volume, and sometimes it’s jarring, like the volume changes. But there’s not really much else they can do with their current technology. I guess they’ll have to change the speech synthesis engine a bit, if they’re even able to. In the past six years, I’ve not seen any new, or better, first-party voices for US English on Windows. Sure, they have their online voices, which are rather good, but they haven’t shown any inclination to bring that quality to the Windows OneCore voices.
People fall asleep listening to Microsoft David. He’s boring and should not be the default voice. While this is anecdotal, I’ve heard quite a few complaints about it, and if you listen to him for a long time, you’d probably get bored too. This is seriously not a good look, or rather, sound, for people who are newly blind and learning to use a computer without sight, or someone who doesn’t know that there are other voices, or even if Microsoft wants to demonstrate Narrator to people who haven’t used it before. And while NVDA users can use a few other voices, the defaults should really be good enough. Apple has had the Alex voice for years. Over ten years, in fact. He’s articulate, can parse sentences and clauses at a time, allowing him to intone very close to the way humans speak, with context. He’s also not the most lively voice, but he sounds professional. And, Alex is the default voice on MacOS. David, on Windows, just sounds bored. And so blind people, particularly those used to Siri and VoiceOver from iOS, just plain fall asleep. It’s nowhere near delightful.
Windows 11 is the most inclusively designed version of Windows
Okay, sure. Even though from what I’ve heard from everyone else, it’s just the next release of Windows 10. But sure, hype it up, Microsoft, and watch the users be disappointed when they figure out that, yeah, it’s the same old bullcrap. Bullcrap that works okay, yeah, but still bullcrap.
#+beginquote People who are blind, and everyone, can enjoy new sound schemes. Windows 11 includes delightful Windows start-up and other sounds, including different sounds for more accessible Light and Dark Themes. People with light sensitivity and people working for extended periods of time can enjoy beautiful color themes, including new Dark themes and reimagined High Contrast Themes. The new Contrast Themes include aesthetically pleasing, customizable color combinations that make apps and content easier to see. #+endquote
Okay, cool, new sounds. But are there more sounds? Are there sounds for animations? Are there sounds for when copying or other processes complete? Are there sounds that VS Code and other editors can use? Are there sounds for when auto-correct or completion suggestions appear? Are there sounds for when an app launches in the background, or a system dialog appears? Are there sounds for when windows flash to get users’ attention?
#+beginquote And, multiple sets of users can enjoy Windows Voice Typing, which uses state-of-the-art artificial intelligence to recognize speech, transcribe and automatically punctuate text. People with severe arthritis, repetitive stress injuries, cerebral palsy and other mobility related disabilities, learning differences including with severe spelling disabilities, language learners and people that prefer to write with their voice can all enjoy Voice Typing. #+endquote
Um, yeah, this has been on Windows for years. Windows + H. I know. I get it.
#+beginquote …design and user experience. It is modern, fresh, clean and beautiful. #+endquote
Okay, but is it fresh, clean and beautiful for screen readers? Are there background sounds to help us focus, or maybe support for making graphs audible for blind people, or support for describing images offline? Oh wait, wrong OS, haha. Funny how Apple’s OSes are more modern when it comes to accessibility than Microsoft’s.
Windows accessibility features are easier to find and use
Okay, this whole section has been talked about before, because it’s no different from the latest Windows Insiders’ build. Always note that if companies have to fill blog posts with stuff they’ve had for months or a year now, it means they really, really don’t have anything new to show, or say. They just talk because not doing so would hurt them even more. Contrast this with Apple’s blog post on Global Accessibility Awareness Day, where everything they talked about was new or majorly improved. And all Microsoft did that day was “listen”. There’s a point where listening has gathered enough data, and it’s time to act! Microsoft passed that point long ago.
#+beginquote Importantly, more than improving existing accessibility features, introducing new features and making users’ preferred assistive technology compatible with Windows 11, we are making accessibility features easier to find and use. You gave us feedback that the purpose of the “Ease of Access” Settings and icon was unclear. And you said that you expected to find “Accessibility” settings. We listened and we changed Windows. We rebranded Ease of Access Settings to Accessibility and introduced a new accessibility “human” icon. We redesigned the Accessibility Settings to make them easier to use. And of course, Accessibility features are available in the out of box experience and on the Log on and Lock screens so that users can independently setup and use their devices, e.g., with Narrator. #+endquote
So, the most important thing they’ve done this year is what they’ve already done. Got it. Oh and they changed Windows. Just for us guys. They did all that hard work of changing a name and redoing an icon, just for us! Oh so cringeworthy. This “courage” thing is getting out of hand. Also, if changing Windows is so hard, maybe it’s time to talk to the manager. Seriously. If it’s so hard to do your job that changing a label and icon is hard work, there’s something seriously wrong, and I almost feel bad for the Windows Accessibility team now.
Windows accessibility just works in more scenarios
#+beginquote Windows 11 is a significant step towards a future in which accessibility “just works,” without costly plug-ins or time-consuming work by Information Technology administrators. With Windows 10, we made it possible for assistive technologies to work with secure applications, like Word, in Windows Defender Application Guard (WDAG). With Windows 11, we made it possible for both Microsoft and partner assistive technologies to work with applications like Outlook hosted in the cloud… #+endquote
Okay, so, from Twitter, Joseph Lee has complained that the Windows UI team isn’t writing proper code to let screen readers read and interact with apps in Windows 11’s Insider builds. So right there, we’re going to still need Windows App Essentials, an NVDA add-on that makes Windows 11 a lot easier to use. This add-on is mostly for the first-party apps, like weather and calculator. So, um, what’s all this about again? So, nothing seems to be new. We will still need “costly” addons and plugins and junk. Because I don’t see Microsoft fixing those UI issues by release. System admins, keep that list of NVDA addons around, because they’ll still be needed in Windows 11.
#+beginquote …Remote Application Integrated Locally (RAIL) using Narrator. While that may sound like a lot of jargon to most people, the impact is significant. People who are blind will have access to applications like Office hosted in Azure when they need it. #+endquote
Yeah, because people with disabilities are dumb and can’t understand tech speak. Sure. Okay. Keep dumbing us down, Microsoft. We really enjoy the slap in the face. Just explain the terms, like RAIL. The post expands the acronym but never explains it; as far as I can tell, it’s the RDP “RemoteApp” mechanism that streams a single remote application’s window to the local desktop, rather than a whole remote desktop. Was one sentence saying that so hard? Keep lording your tech knowledge over us, oh great Elites at Microsoft.
What I want to see is Electron apps getting OS-level accessibility support, so that VS Code doesn’t have to feel like a web app, because it shouldn’t feel like that on Microsoft’s own OS.
Now, being able to host Office on a server and have Narrator, and hopefully other screen readers (because Narrator is still not good enough), support it, is nice. But that’s not really a user-facing feature. Users probably won’t know Word is hosted on a server.
#+beginquote Windows 11 will also support Linux GUI apps like gedit through the Windows Subsystem for Linux (WSL) on devices that meet the app system requirements. And, we enabled these experiences to be accessible. For example, people who are blind can use Windows with supported screen readers within WSL. In some cases, the assistive technology experience is seamless. For example, Color Filters, “just work.” Importantly, the WSL team prioritized accessibility from the start and committed to enable accessible experiences at launch. They are excited to share more with Insiders and to get feedback to continue to refine the usability of their experiences. #+endquote
In some cases… Wanna elaborate a bit, Microsoft? Will I be able to use Gedit with a screen reader? Or Kate? Or Emacs? I have gotten Emacs with Emacspeak working on WSLG in Windows Insider builds. But it’s too sluggish to be used productively. So yeah, if that’s the same experience as using a screen reader with it, I don’t see myself using it much, if at all.
#+beginquote …experiences we introduced last week like our partnership with Amazon to bring Android apps to Windows in the coming months. #+endquote
Okay, well I’m waiting. I suspect they’ll use something similar to what they did with the Your Phone app: just pipe accessibility events through to the screen reader, via the title bar, I think. That’ll be okay, I guess, but no sound feedback would mean the experience isn’t quite up to TalkBack standards, as low as those are.
Modern accessibility platform is great for the assistive technology ecosystem
#+beginquote …closely with assistive technology industry leaders to co-engineer what we call the “modern accessibility platform.” Windows 11 delivers a platform that enables more responsive experiences and more agile development, including access to application data without requiring changes to Windows. #+endquote
I’m not going to pretend to understand that last bit, but if the UI problems found by Joseph Lee are any indication, a lot more has been broken than fixed or new. Also, which Assistive Technology industry leaders? And what biases do they have?
#+beginquote We embraced feedback from industry partners that we need to make assistive technology more responsive by design. We embraced the design constraints of making local assistive technology like Narrator “just work” with cloud hosted apps over a network. We invented and co-engineered new Application Programming Interfaces (APIs) to do both; to improve the communication between assistive technologies like Narrator and applications like Outlook that significantly improve Narrator responsiveness in some scenarios. The result is that Narrator feels more responsive and works over a network with cloud-hosted apps. #+endquote
I, as a user, don’t care about cloud-hosted apps. Office may at some point become a cloud-hosted app, and that’s what they may be preparing for, but I don’t care about that. Responsiveness is cool and good, but NVDA is very responsive, and some people still fall asleep using it. Why? Because it sounds boring! The voices in Windows suck. No audible animations or anything to make Windows delightful.
#+beginquote We also embraced feedback from industry partners that we need to increase assistive technology and application developer agility to increase the pace of innovation and user experience improvements. We made it possible for application developers, like Microsoft Office, to expose data programmatically without requiring Windows updates. With Windows 11, application developers will be able to implement UI Automation custom extensions, including custom properties, patterns and annotations that can be consumed by assistive technologies. For users, this means we can develop usability and other improvements at the speed of apps. #+endquote
At the speed of apps. That’s pure marketing crap. A lot is said in this article that is pure marketing, and not measurable fact. I want real, factual updates, not this. And the fact that they don’t provide that is a hint that they have nothing to provide. Now, having “custom” roles and states and such is nice for developers who have to reinvent the wheel, and the atoms that make up that wheel, so maybe new applications have a chance of being accessible. But accessibility won’t happen with developers unless it’s in their face. They probably won’t know about these abilities, or even care in many cases.
Try Windows 11 and give us feedback
I’ve read feedback from those who have tried the Windows 11 Preview. I myself can’t try it, because my machine has no TPM chip and I don’t feel like being rolled back to Windows 10 when 11 is released. The feedback I’ve gotten so far from others is, well, very little, actually. From what I’ve heard, it’s still just Windows 10.
Conclusion
So, why should I even care about Windows 11? Not much is new or changed or fixed for accessibility, as this article full of many empty words shows. Six years of development, and the Mail app still has that annoying bug of expanding threads whenever keyboard focus lands on them, instead of waiting for the user to expand them manually. The Reply box still doesn’t alert screen readers that it’s opened, so the screen reader thinks it’s still in the message pane being replied to, and not the reply edit field. The Microsoft voices still sound pretty bad, even worse than Google’s offline TTS now, and that’s saying something.
Will any of this change? I doubt it. I’ve lost a lot of confidence in Microsoft, first because of their do-nothing stance on Global Accessibility Awareness Day, then their event without audio description, which Apple did perfectly, and now this article which tells us very little, and is almost a slap in the face when it talks about Windows being “delightful” because really, it’s not, and it won’t change substantially enough before release to be so.
Digging into TalkBack’s source code
Braille
For a while now, I’ve been curious about which platform’s accessibility is, at its foundation, more secure, more “future proof”, and better able to be extended. Today, I’m looking into the TalkBack source code that is currently on GitHub, which I cloned just today. I’ll go through the source, to see if I can find anything interesting.
This whole project was started in 2015. Of course, we then have this one:
Which shows that it is copyright 2020. The first just seems to wrap Liblouis in Java, but what about this one?
Ah, it seems to be the thing that translates the table files and such into Java things. So that’s kind of where the Braille keyboard gets its back-end. Now let’s look at the front-end.
So, this was made in 2019. I do like seeing that they have been working on this stuff. Now, here, we have:
/** Stub implementation of analytics used by the open source variant. */
Yeah, figured I wouldn’t get much out of this file.
Over the last few months, I've been focusing a lot on Braille. Much of it is
because the Bluetooth earbuds I have, (Galaxy Buds Pro, Linkbuds S, Razer
Hammerheads), either have poor battery life or have audio lag that's just
annoying enough to make me not want to use them for regular screen reading. So,
grabbing a Focus 14, I began to use Braille a lot. I've now spent a good two
weeks using Android TalkBack's new Braille support, and two weeks with
VoiceOver's Braille support.
In this article, I'll overview Android's past support for Braille, and talk
about how its current support works. I'll also compare it to Apple's
implementation. Then, I'll discuss how things could be better on both systems.
Since there have probably been many posts on the other sites about iOS' Braille
support, I don't feel like I need to write much about that, but if people want
it, I can write a post from that angle as well.
BrailleBack and BRLTTY
When Google first got into making accessibility a priority of Android, back
around Android 2.3, it created a few stand-alone apps. Well, they were kind of
standalone. TalkBack, the screen reader, KickBack, the vibration feature for
accessibility focus events, and BrailleBack, for Braille support. There may have
been more, but we'll focus on BrailleBack here. BrailleBack connected to Braille
displays over Bluetooth, and used drivers to communicate with them. It started
out well for a first version, but wasn't updated much. In the years that
followed, the biggest update was to support a new, less expensive Braille
display. This has been Google's problem for a while now, having great ideas, but
not giving them the attention they need to thrive. Luckily, TalkBack is still
being worked on, and hasn't been killed by Google. At least now, Braille support
is built in. BrailleBack wasn't even installed on phones when it was being
developed, but TalkBack is. So, things may improve starting now.
BRLTTY started out as a Linux program. It connects to Braille displays using
USB, Serial, or Bluetooth, and supports a huge variety of displays. It tries to
give the Braille user as much control over the Linux console from the display as
possible, using as many buttons as a display has. It came to Android and offered
a better experience for some use cases, but the fact that you can't type in
contracted Braille, a sort of shorthand that is standardized into the Braille
system, may be off-putting to some. Another issue is that it tries to bring the
Linux console reading experience to an Android phone, which takes a bit of
getting used to.
So, here, we've got two competing apps. BRLTTY gets updated frequently, has many
more commands, but has a higher bar for entry. BrailleBack is stale, supports
few displays, but allows for writing in contracted Braille, and has more
standardized commands. So, you'd think Deaf-Blind users would have choices,
enough to use an Android phone, right?
App support matters
Let's take something that Braille excels at: reading. On Android, Google's poor
support for Braille up to this point, and the fact that Braille support wasn't
preinstalled, meant that Deaf-Blind users couldn't easily set up their phones
without knowing about a separate app and having sighted assistance to install
it. It also meant that third-party apps, like Kindle, and even first-party
apps, like Google Play Books, didn't take Braille into account during
development. The Kindle app, for example, just has the user double tap a
button, and the system text-to-speech engine begins reading the book. The Play
Books app does similar, with the option for the app to use the
high quality, online Google speech engine instead of the offline one.
This is how things are today, too. In Kindle, we can now read a page of text,
and swipe, on the screen, to turn the page. On Play Books, though, focus jumps
around too much to even read a page of text. It's easier to just put on headphones and let the TTS read for you; Braille literacy, for Android users, is simply too frustrating to cultivate.
So, if you want to read a book on your phone, using popular apps like Kindle, you have to use the system
text-to-speech engine. This means that Braille users are cut out from this
experience, the one thing Braille is really good at. There are a few apps, like
Speech Central, which do display the text in a scrolling web view, so that
Braille users can now read anything they can import into that app, but this is a
workaround that a third-party developer shouldn't have to make. This is
something that Google should have had working well about five years ago.
With the release of iOS 8, 8 years ago, Apple gave Braille users the ability to
“Turn Pages while Panning.” This feature allowed Braille users to read a book without ever issuing a page-turn command. Even before that, and unlike Android even now, Braille users could use a command on their Braille display to turn the page. With iOS 8, they no longer had to even do that.
A year later, the Library of Congress released an app called BARD Mobile, allowing blind users to access books from the National Library Service for the Blind and Print Disabled on their phone. Along with audio books,
Braille books were available. Using a Braille display, readers could read
through a book, which was just a scrolling list of Braille lines, without
needing any kind of semblance of print pages. Android's version of BARD Mobile
got this feature about a year ago. And now, the new Braille support doesn't
support showing text in Computer Braille, which is required to show the
contracted Braille of the book correctly. I'd chalk this up to a tight schedule
from Google and not having been working on this for long. Perhaps version 14 of
TalkBack will include this feature, allowing Braille readers to read even
Braille books.
Now in Android... Braille
With the release of TalkBack 13, Braille support was finally included.
Beforehand, though, we got a bit of a shock when we found out that HID Braille
wouldn't be supported. This, again, I can chalk up to the Braille support being
very new, and the Android team responsible for Bluetooth not knowing that that's
something they'd need to implement. Still, it soured what could have been a great announcement. Now, instead of supporting “all” displays, they
support... “most” displays. So much for Android users being able to use their
brand new NLS EReader, right? Technically, they can use it through BRLTTY, but
only if it's plugged into the USB-C port. Yeah, very mobile.
The Braille support does, however, have a few things going for it. For one, it's
very stable. I found nothing that could slow it down. I typed as fast as I
could, but never found that the driver couldn't keep up with me. Compare that to
iOS, where even on a stable build, there are times where I have to coax the
translation engine into finishing translating what I've written. There's also
this nice feature where if you disconnect the display, speech automatically
comes back on. Although, now that I think about it, that may only be useful for
hearing blind people, and Deaf-Blind people wouldn't even know until a sighted
person told them that they now know all about that chat with the neighbor about
the birthday party they were planning, and that it's no longer a surprise. Ah well, so much for the customizability of Android. In contrast, when speech is
muted on iOS, it stays muted.
iOS doesn't sit still
In the years after iOS 8's release, Braille support has continued to improve.
Braille users can now read Emoji, for better or worse, have their display
automatically scroll forward for easier reading, and customize commands on most
displays. New displays are now supported, and iOS has been future-proofed by
supporting multi-line or tactile graphics displays.
iOS now also mostly supports displays that use the Braille HID standard,
and work continues to be done on finishing that support. This is pretty big
because the National Library Service for the Blind in the US, the same that
offers the BARD service, is teaming up with Humanware to provide an EReader,
which while allowing one to download and read books from BARD, Bookshare, and
the NFB Newsline, also allows one to connect it to their phone or PC, to be used
as a Braille display. This means, effectively, that whoever wants Braille, can
get Braille. The program is still in its pilot phase, but will be launched
sooner or later. And Apple will be ready.
No, Android doesn't support these new displays that use the Braille HID
standard. It also doesn't support multi-line or graphics displays, nor does it
support showing math equations in the special Nemeth Braille code, nor does it
support automatically scrolling the display or changing Braille commands, and so
on. You may then say “Well, this is just version one of a new Braille support.
They've not had time to make all that.” A part of that is true. It is version one of TalkBack's new Braille subsystem. But they've had the same amount
of time to build out both Braille support, and TalkBack as a whole, that Apple
has. In fact, they've had the same eight years since iOS 8 to both learn from
using Apple's accessibility tools, and to implement them themselves.
So, let's say that Google has begun seriously working on TalkBack for the last 3
years, since new management has taken the wheel and, thankfully, steered it
well. Google now may have to take at least 4 years to catch up to where Apple is
now. Apple, however, isn't sitting still. They've put AI into their screen
reader years before the AI-first company, Google, did. How much longer will it
take Google to add things like auto-scroll to their screen reader to serve an
even smaller subset of their small data pool of blind users?
Neither system is perfect
While Apple's Braille support is fantastic, if rather rusty with age,
both systems could be using Braille a bit better to really show off why Braille
is better than just having a voice read everything to you. One example that I
keep coming back to is formatting. For example, a Braille user won't be able to
tell what type of formatting I'm using here on either system, even though there
are formatting symbols for what I just used in Braille. And no, status cells don't count: they can't tell a reader what part of a line was formatted, and
the “format markers” used in Humanware displays are a lazy way of getting
around... I don't even know what. If BrailleBlaster, using LibLouis and its
supporting libraries and such, can show formatting just fine, I don't see why
screen readers in expensive phones can't.
Both systems could really take a page out of the early Braille NoteTakers' book. The
BrailleNote Apex not only showed formatting, but showed things like links by
enclosing them in Braille symbols, meaning that not only could a user tell where
the link started and ended, just like sighted people, they could do so in a way
that needed no abbreviated word based on speech. BRLTTY on Android shows switches
and checkboxes in a similar way, using Braille to build a nonvisual, tactile
interface that uses Braille pictograms, for lack of a better term, to make
screen reading a more delightful, interesting experience, while also shortening
the Braille needed to comprehend what the interface item is. This kind of stuff
isn't done by anyone besides people who really understand Braille, read Braille,
and want Braille to be as efficient, but enjoyable, as possible.
Another thing both companies should be doing is testing Braille rigorously.
There is no reason why Braille users shouldn't be able to read a book, from
start to end, using Google Play Books. There's also no reason why notifications
should continue to appear on the display when they were just cleared. Of course,
one issue is much more important than the other, but small issues do add up, and
if not fixed, can drag down the experience. I really hope that, in the future,
companies can show as much appreciation for Braille as they do for visuals,
audio, haptics, and even screen readers.
Until then, I'll probably use iOS for Braille, image descriptions, and an
overall smooth experience, and use Android for its raw power and its honestly better TTS engine, well, if you have a Samsung, that is. With the ability to
customize Braille commands, iOS has given me an almost computer-like experience
when browsing the web. Android has some of that, but not the ability to
customize it.
Conclusion
I hope you all have enjoyed this article, and learned something from my diving
into the Braille side of both systems. If so, be sure to share it with your
friends or coworkers, and remember that speech isn't the only output modality
that blind, and especially Deaf-Blind, people use. As Apple says on their
accessibility site, Braille is required for Deaf-Blind users. Thank you all for
reading.
This post is to reflect on what can be gained by using Android, as opposed to
iOS. My previous post, My Dear Android, talked a little about this, but I
wanted to go into further detail here.
USB-C has won
I have a lot of accessories for computers and phones. I have a game controller,
which uses Micro USB, but if you buy it now, it'll likely come with USB-C. I
have a game controller for a phone, which uses USB-C. I have a Braille display,
which uses USB-C. In fact, I'd say just about every modern Braille display uses
USB-C. I have USB-C earbuds. All of these technologies use USB-C or can be made to
use it with a dongle.
When I use an iPhone, any iPhone today, I have to put all these accessories
through a dongle. I don't have a USB-C to Lightning dongle yet, but I do have a
Lightning to USB A one. So, whenever I want to plug in, say, a USB-C Flash
drive, I can't. I can't plug my USB-C earbuds into the iPhone. Now, are there dongles for this? Sure. But why deal with that? USB-C has won, soundly, over
Lightning. Lightning was always going to be a closed, Apple-only system. No one
likes non-standard junk.
Audio and standards
As mentioned in another article, I have a pair of Sony LinkBuds S. These are a
pair of truly wireless earbuds that have noise-canceling, transparency mode,
integration with Google Assistant and Alexa, integrate with Spotify and Endel,
and sound fantastic. When I used them with my iPhone, which has Bluetooth 5.0
(which the newest iPhone SE 2022 also has), the lag was just too much to deal
with. When I use them with Android, the lag is noticeable, yes, but much less,
and much easier to deal with. This really pushed me back to Android. With
iPhone, I would have to get all Apple products to have the best experience. I
would need to get the new AirPods or AirPods Pro. I would need an Apple Watch. I
would need a Mac. With Android, interoperability means I can get any
Android-supported accessories, and they would work just fine.
Another difference between the two ecosystems is that Google Assistant readily works with
the LinkBuds S. Assistant reads incoming messages, reads notifications, and does
just about everything one can do with the Pixel Buds Pro. On iPhone, there is no way to get Siri to automatically read new notifications unless you have a pair of AirPods. Clearly, Android works with many more accessory types, not just in
a basic way, but supporting them to their fullest potential.
Also, did I mention the Bluetooth codecs? In Android, several phones have 3 or
more different codecs, to support the widest range of audio types. On iPhone,
there's just SBC, the lowest-quality codec that must be supported, and AAC, the codec Apple prefers. No APTX, no LDAC, no LC3. So, even if you get an expensive
pair of headphones that supports APTX low latency audio, you won't get that
support on an iPhone. To be fair, some Android phones don't support APTX either,
but on Android, you have that choice of phones. On iPhone, you don't.
Works with Windows and Chromebooks
If you use your computer a lot, you may want to text with it. If you're blind,
chances are that you have a Windows PC. Well, iPhone works exclusively with Mac
computers, so you can't text from your PC, or make calls from your PC, or
control your phone from your PC. Oh yeah, you can't control your iPhone from
your Mac either. With Android, though, you can do all this from a Windows computer. If you use
Google Messages, you can even read and send texts from the web, using your phone
and phone number. As an added bonus, the Messages for Web page is very
accessible, and has keyboard commands for navigating to the conversations list
or messages list.
This gives me the freedom to do what I want, from whatever device I'm on. I
don't have to switch contexts from my computer, to my phone, just to send a
text, or read a text. I can just open Messages for Web, and do everything there.
Are you a Developer?
How much do you think you'd save if you didn't have to pay $100 per year?
That's how much it costs to have an app on the Apple App Store. If you're a
blind developer, you may be paying for JAWS every year too, so that's $200 a
year, just to make great apps for iPhone. Along with all that, you have to deal
with the sometimes frustrating experience of not only using a Mac, but
developing on it, in Xcode. Now, you may be using a framework like React Native,
or Beeware for Python, where you don't have to code in Swift, or touch Xcode all
that much. If so, that probably cuts down on a lot of stress. But you still have
to spend money just to keep your app on the App Store.
On Android, all a developer needs to do is pay $25, once. That's it. There is
the 15% service fee on in-app purchases, but if your app is free, you don't have
to worry about any of that. Also, you aren't limited to one language. You can
use Java, Kotlin, some C++, C#, Python, JavaScript, Dart, and Corona (Lua). Of
course, a few of these, like JavaScript (React Native and such) can be used to
create iPhone apps too. But with Android, you can use your superior Windows
platform, VS Code, and NVDA or JAWS to develop Android apps easily. Also,
Android Studio is accessible on Windows too.
Accessibility, the mixed bag
Now we get into the thing I'm all about, accessibility. If you use apps like
Telegram, DoorDash, Messenger, YouTube, and others, you may find that they don't
work as well as they should on iPhones. YouTube, just recently, gained a bug
where you can't go past the third or so item in the home tab. Android doesn't
have that problem. DoorDash has reviews in the middle of their menus, and tells
you the time the delivery will reach you, not the estimated time in minutes as
it does on Android. In Telegram on the iPhone, if you have a message that covers
more than the screen height, VoiceOver will not navigate to the next message
until you scroll forward. On Android, TalkBack will eventually reach the next
message, and will not get stuck.
This shows, to me at least, a slice of something strange. Android seems to have
more of a flexible accessibility framework, allowing for code to tell more of
the story than visuals. On iPhone, VoiceOver doesn't look past the current
screen of content, or the cross-platform framework doesn't tell VoiceOver about
it, but does tell Android and lets TalkBack navigate to it. However the code
works, it results in a worse experience on iPhone, and a better one on Android.
I can't argue with results.
Now, for image descriptions. I do miss them, being on Android. But, I'm sure
Google is working on them, with its testers. After all, TalkBack can describe
text, and icons now. And it does that very well. So, I'm sure they'll get image
descriptions down in maybe a year. In the meantime, I still have an iPhone,
Lookout, Bixby Vision, and Envision to hold me over until then.
I'm also hoping Google works on audible graphs, as that's pretty helpful. I
could see them integrating that with image descriptions to describe graphical
graphs, which iOS doesn't do yet.
Now, for Braille, things have improved. I grabbed a Focus 14 to work with, and
find that I can use my phone with Braille support for about 30 minutes, without
growing tired of it. One really nice thing that TalkBack does is focus
management. So, if you leave an app, then come back to it, focus will remain at
the spot that you left it. So, if you're reading a Reddit thread in RedReader,
and you go to Messenger to read and reply to a message, when you come back to
RedReader, your focus will be on the exact comment you left it on. I don't
recall that ever happening on the iPhone.
Mostly, it's a very good start for Android's new Braille implementation. One that, even though it's new, is very stable, and all commands work fine. There isn't the issue HID-based displays have on iOS, where you cannot assign the “enable
autoscroll” command and such. Input works great, and there is no time when the
input process gets stuck, and you have to press the “translate” command several
times to plunge it out.
Conclusion
After spending a week with iPhone, I'm back on Android. Yes, I'm looking forward
to greater accessibility, like image descriptions, Braille improvements, and
audible graphs and charts, but I also love what Android is right now. Android is
open, allows for greater innovation by developers, allows accessory manufacturers
to create great, integrated experiences, and in quite a few cases, is even more
accessible than the iPhone.
Android also allows one to use many of its services from a Windows computer,
which is more popular in the blind community than Macs. This allows the user to
stay in the same context, without needing to pull out a phone just to check a
text. One can also make calls and control their Android phone from a PC.
In closing, thanks for reading this article, in my journey with Android and
iPhone. I know I'm not done with this, and as the two operating systems grow and
age, things will change, on either Android or iOS' side. Feel free to subscribe
to my blog, or leave comments.
Last night, I turned on my Galaxy S20 FE (5G) again, to update apps and compare a week away with the iPhone to how Android feels now. And, I must say, Android is still charming to me.
Telegram works better than on iOS. On iOS, VoiceOver gets stuck on a long message, not moving to the next message at all until you scroll forward. I've not tried Whatsapp yet, but I wouldn't be surprised if it worked better there too. Also, Doordash is a lot easier to use on Android, without all the reviews and junk getting in my way like on iOS.
But the big thing was my earbuds. I have a pair of Sony LinkBuds S, which sound great, work with either Google Assistant or Alexa directly, not through the framework where you hold down the home button and use that sort of voice control interface, and have all the good stuff like noise canceling and transparency mode. That can also be changed through Google Assistant.
So, I can use them with my iPhone XR. They work pretty well, and I can use Alexa through them. But the latency is awful, and it took a few tries before I could get them set up. On Android, though, the latency is mild enough that I can deal with it, and setup was quick and easy. This is a symptom of Apple's issue of wanting control. I don't have the AirPods. I don't have the AirPods Pro. I do have the Sony LinkBuds S, which probably blow all AirPods out of the water with their quite literally chest-thumping bass (at least for me and my hearing). The AirPods Pro, first generation, didn't have that. I have little hope that the second generation, or the regular AirPods third generation (that can get confusing really fast), would have it. Plus, there's one cord to rule them all.
That's right, USB-C. I love it! It's everywhere, used on just about everything, and I can connect my phone to my dock at work and use it with a keyboard and wired headphones. Speaking of wired headphones, there are actually USB-C headphones. There aren't many Lightning headphones. Yes, I can get Apple's wired headphones. But that's $30. What if I want a pair of $250 cans I can rock out to?
Lastly, TalkBack is on a good path. They've added basic Braille support, which they'll hopefully be improving throughout the coming year, Android 13 has Audio Description APIs, and hopefully the next update to TalkBack will bring image descriptions, so I can see my cat that I had to give up recently. Poor little Ollie. On the iPhone, while image descriptions are bright and vibrant, Braille is starting to suffer a good many pesky bugs that make me not even want to use it. Maybe one or two will be fixed when iOS 16 is released, but they've got a week to do it, and I don't see them spending that much time on a minority of a minority. However, TalkBack's Braille support, while new, is pretty solid, a good base to work upon.
So, I wanted to post this to balance things out from my other post when I went from Android to Apple. My journey is definitely not over, and neither are the two operating systems in question. While we know what iOS 16 brings, new voices, door detection, and probably other stuff, the TalkBack team has been pretty tight-lipped on what they've been working on. I miss the days when we had more open dialog with them. But at least they have a blind person on the team that does interact with the community some.
During the last six months, I’ve used Android exclusively as my primary device. I made calls with it, texted with it, read emails with it, browsed the web, Reddit, Twitter, and Facebook with it, and played games on it. Now, I’m going to express my thoughts on it, its advancements, and its issues.
This will contain mostly opinions, or entirely opinions, depending on whether you really love Android or not. But whatever your stance, these are my experiences with the operating system. My issues may not be your issues, and so on.
Daily Usage
To put things into perspective, I’ve used my phone, the Samsung Galaxy S20 FE 5G, for the following, with the apps I’ve used:
Email: Gmail, Aquamail, DeltaChat
Messaging: Google Messages
RSS: Feeder
Podcasts: Podcast Addict, PocketCasts
Terminal: Termux
Home screen: One UI
Screen reader: Google TalkBack
Speech Engine: Samsung TTS and Speech Services by Google
Text Recognition/object detection: Google Lookout and Envision AI
Gaming: PPSSPP, Dolphin, AetherSX2
Reddit: RedReader
I’m sure I’m forgetting a few apps, but that’s basically what I used most often. For Facebook, Twitter, YouTube and other popular services, I used their default apps, with no modifications. I used all the Google services that I could, and rarely used Samsung’s apps. So, this is to show that I was deep into the Android ecosystem, with Galaxy Buds Pro, a Chromebook, and a TicWatch E3.
The good
I want to start off the comparison with what worked well. First, Samsung TTS voices are really nice, sounding even smoother, sometimes, than Alex on iOS, and much more so than the Siri voices. I still love the Lisa voice, which, to me, sounds as close to Alex as possible with her cadence and professional-sounding tone. Yes, the voices could be sluggish if fed lots of text at once, but I rarely ran into that.
I also love the wide variety of choice. Apple only includes the AAC Bluetooth codec on their iPhones. So, if you get APTX, or Samsung’s Variable codec, or headphones with other codecs, it won’t matter, and you’ll fall back to SBC, which sounds the worst of all of them. If your headphones have AAC, of course, it’ll get used on the iPhone. But if not, you’re stuck with SBC. Android phones, though, usually come with a few different codecs for headphones to choose from, and in the developer settings, you can choose the codec to use.
Another great feature of all modern Android phones is USB-C. Everything else uses USB-C now, including Braille displays, computers, watches, and even iPads. With Android, you can plug all these things into your phone with the same cable. If your flash drive has USB-C, you can even plug that in! With iPhone, though, you have to deal with Lightning, which is just another cable, and one you’ll likely have less of, since less stuff uses it.
The last thing is that Android phone makers typically try out new technology before Apple does, leading to bigger cameras, folding phones, or faster Wi-fi or cellular data. Now that the new iPhone SE has 5G, and probably the latest Wi-fi, though, that’s most likely less of an issue. Still, if you like folding phones, Android is your only choice right now.
Starting on the software, it’s pretty close to Linux, so if you plug in a game controller, keyboard, or other accessory, it’ll probably work with it. If you have an app for playing music using a Midi keyboard, and you plug one in, it’ll likely work. On iPhone, though, you need apps for more things, like headphones and such.
Another nice thing, beginning in the accessibility arena, is that the interface is simple. Buttons are on the screen at all times, not hidden behind menus or long-press options like they are a lot of the time on iOS. If you can feel around the screen with one finger, or swipe, you’ll find everything there is to find. This is really useful for beginners.
Another pleasant feature is the tutorial. The TalkBack tutorial guides Android users through just about every feature they’ll need, and then shows them where they can learn more. VoiceOver has nothing like that.
On Android, things are a lot more open. Thanks to that, we have the ability for multiple screen readers, or entirely new accessibility tools, to be made for Android. This allows BRLTTY to add, at least, USB support for HID Braille displays, and Commentary to OCR the screen and display it in a window. This is one of the things that really shines on Android.
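For the curious, here's a rough sketch of what that openness looks like in practice: any app can register itself as an accessibility service in its manifest, which is the same mechanism BRLTTY, Commentary, and TalkBack itself use to receive UI events. The permission, intent-filter action, and meta-data key below are the standard Android ones; the service name and config resource (`OcrService`, `ocr_service_config`) are hypothetical placeholders.

```xml
<!-- AndroidManifest.xml fragment: declaring a custom accessibility service.
     The permission ensures only the system can bind to the service. -->
<service
    android:name=".OcrService"
    android:label="@string/service_label"
    android:permission="android.permission.BIND_ACCESSIBILITY_SERVICE"
    android:exported="true">
    <intent-filter>
        <action android:name="android.accessibilityservice.AccessibilityService" />
    </intent-filter>
    <!-- Points to an XML resource describing which event types the service
         receives and what feedback type (spoken, braille, etc.) it provides. -->
    <meta-data
        android:name="android.accessibilityservice"
        android:resource="@xml/ocr_service_config" />
</service>
```

Once the user enables the service under Settings, Android streams accessibility events to it. iOS has no equivalent entry point for third parties, which is why alternatives to VoiceOver simply can't exist there.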
The bad
Those Bluetooth headphones I was talking about? The Galaxy Buds Pro are very unresponsive with TalkBack, making them almost useless for daily use. The TicWatch has its own health apps, so it doesn’t always sync with Google Fit, and doesn’t sync at all with Samsung Health. Otherwise, the watch is a nice one for Android users. On iPhone, though, it doesn’t even share the health data with the Health app, just Google Fit, which doesn’t sync with the Health app either.
A few days ago, a few things happened that brought the entire Android experience into focus for me. I was using the new Braille support built into TalkBack, with an older Orbit Reader Braille display, since my NLS EReader doesn’t work with TalkBack, as there is no Bluetooth Braille HID support. I found that reading in Braille using the Google Play Books app is really not a great experience. Then, I found a workaround, which I’ll talk about soon, but the fact remains that Google Play Books and Braille don’t mix well.
So, I got confirmation that someone else could reproduce the issue. The issue is that the display is put back at the beginning of the page before even reading the next page button, so one cannot easily move to the next page. I then contacted Google Disability support. Since their email address was no longer their preferred means of contact, I used their form. On Twitter, they always just refer you to the form.
The form was messy. With every letter typed into the form, my screen reader, NVDA on Windows, repeated the character count, and other information. It’s like no blind person has ever tested the form that blind people are going to use to submit issues. “No matter,” I thought. I just turned my speech off and continued typing, in silence.
When the support person emailed me back, I was asked some general info, and to take a video of the issue. This would require, for me, a good bit of setup. I’d need three devices: the phone, the Braille display, and something to take the video with, probably a laptop. Then I’d need to get everything in the frame, show the bug, and hope the support person can read Braille enough to verify it.
This was a bit much for me. I have a hard job, and I have little energy afterwards. I can’t just pop open a camera app and go. So, I asked around, and found a workaround. If you use the Speech Central app, you can stop the speech, touch the middle of the screen, and then read continuously. But why?
This really brought home to me the issues of Android. It’s not a very well put together system. The Google Play Books team still uses the system speech engine, not TalkBack, to read books. The Kindle app does the same thing. There is barely a choice, since TalkBack reads the books so poorly. This is Google’s operating system, Google’s screen reader, and Google’s book-reading app. There is little excuse for them to not work well together.
Then, either that night or the night after that, I got a message on Messenger. It was a photo message. So, naturally, I shared it with Lookout, by pressing Share, then finding Lookout in the long list of apps, double tapping, waiting for the image recognition, and reading the results. And then I grabbed the iPhone, opened Messenger, opened the conversation, double-tapped the photo, put focus on the photo, and heard a good description. And I thought, “Why am I depriving myself of better accessibility?”
And there’s the big issue. On iOS, Braille works well, supports the NLS EReader, and even allows you to customize commands, on touch, Braille, and keyboard. Well, there are still bugs in the HID Braille implementation that I’ve reported, but at least the defaults work. That’s more than I can say for TalkBack and Android.
And then the big thing, image descriptions, and by extension, screen recognition. TalkBack finally has text detection, and icon detection. That’s pretty nice. But why has it taken this long? Why has it taken this long to add Braille support? Why do we still have robotic Google TTS voices when we use TalkBack? After all these years, with Google’s AI and knowledge, Android should be high above iOS on that front. And maybe, one day, it will be. But right now, Android’s accessibility team is reacting to what Apple has done. Braille, image descriptions, all that. And if there’s a central point to what I’ve learned, it’s this: do not buy a product based on what it could be, but what it currently is.
Then, I started using the iPhone more, noticing the really enjoyable, little things. The different vibration effects for different VoiceOver actions. Not just one for the “gesture begin”, “gesture end,” “interactable object reached”, and “text object reached”. No, there are haptics for alerts, reaching the boundary of the screen or text field, moving in a text field, using Face ID, and even turning the rotor. And you can turn each of these on or off. What’s that about Android being so customizable?
Then there’s the onscreen Braille keyboard. On Android, to calibrate the dots, you hold down all six fingers, hold it a little longer, just a bit longer… Ah, good, it detected it this time. Okay, now hold down for two more seconds. Now you’re ready to type! Yes, it takes just about that long.
On iOS, you quickly tap the three fingers of your left hand, then the three fingers of your right hand, and you’re ready! Fast, isn’t it? These kinds of things were jarring with their simplicity, coming from Android, where I wasn’t even sure if calibration would work this time. I do miss the way typing on the Android Braille keyboard would vibrate the phone, letting you know that you’d actually entered that character. However, the iPhone’s version is good enough that I usually don’t have to worry about that.
I want to talk a bit more about image descriptions. While I was on Android, I learned to ignore images. Sure, I wanted to know what they were, but I couldn’t easily get that info, not in like a few seconds, so I left them alone. On iOS, it’s like a window was opened to me. Sure, it’s not as clear as actually having sight, and yes, it gets things wrong occasionally. But it’s there, and it works enough that I love using it. Now, I go on Reddit just to find more pictures!
And for the last thing, audio charts. Google has nothing like this. They try to narrate the charts, but it’s nothing like hearing the chart, and realizing things about it yourself. Hearing the chart is also much faster than hearing your phone reading out numbers and labels and such.
The ugly
Here, I’ll detail some ugly accessibility issues on Android, that really make iOS look as smooth as glass in comparison. Some people may not deal with this, but I did. Maybe, by the time you read this article, they’ll be fixed in Android 13, or a TalkBack update or something.
First, text objects can’t be too long, or TalkBack struggles to move on to the next one. This can be seen best in the Feeder app, which, for accessibility reasons, uses the Android native text view for articles. This is nice, unless a section of an article spans one screen of text. Take the Rhythm of War rereads on Tor. Some of those sections are pretty long, and it’s all in one text element. So TalkBack will speak that element as you swipe to the next element, until it finally reaches the next one. This can take one swipe, or three, or five. This happens a lot in Telegram too, where messages can be quite long.
Another issue is clutter. A lot of the time, every button a user needs is on the screen. For example, the YouTube app has the “Go to channel” and “actions” buttons beside every video. This means you have to swipe three times per video. On iOS, each video is on one item, and the actions are in the actions rotor. TalkBack has an actions menu, but apps rarely use it. Gmail does, for example, but YouTube doesn’t. This makes it even more tricky for beginners, who would then have to remember which app uses it and which app doesn’t, and how to get to it and such.
When an Android phone wakes up, it reads the lock status, which usually is something like “Swipe with two fingers or use your fingerprint to unlock.” Then, it may read the time. That’s a lot of words just to check what time it is. An iPhone, quite dependably, reads the time, and then the number of notifications. Apple’s approach is a breath of fresh air, laced with the scent of common sense and testing by actual blind people. This may seem like a small thing, but those seconds listening to something you’ve heard a hundred times before add up.
If you buy a Pixel, you get Google TTS as your speech engine. It sucks pretty badly. They’re improving it, but TalkBack can’t use the improvements yet, even if other screen readers and TTS apps can. Crazy, right? However, with the Pixel, you get Google’s Android, software updates right at launch, and the new Tensor processor, voice typing, and so on. If you get a Samsung, you get a good TTS engine, for English and Korean at least. You also get a pretty good set of addons to Android, but a six-month-old screen reader and an OS that won’t be updated for another six months either. This is pretty bad mostly because of TalkBack. You see, there are two main versions of TalkBack: Google’s version, and Samsung’s version. Samsung’s TalkBack is practically the same as Google’s, but at least one major version behind, all the time. With an iPhone, you get voices aplenty, from Eloquence (starting next month) to Alex, Siri voices, and Vocalizer voices, with rumors that third-party TTS engines may come to the store soon. You get a phone that, even one as old as the iPhone 8, can run the latest version of iOS, delivered the day it’s released. And there is no older version of VoiceOver just floating around out there.
Further thoughts
I still love Android. I love what it stands for in mobile computing. A nice, open source, flexible operating system that can be customized by phone makers, carriers, and users to be whatever is needed. But there really isn’t that kind of drive for accessibility. TalkBack languished for years and years, and is only just now hurrying to catch up to VoiceOver. Will they succeed? Maybe. However, VoiceOver isn’t going to sit still either. Apple now has the voice that many in the blind community can’t do without. On Android, that voice, Eloquence, is now abandoned, and can’t be bought by new users. And when Android goes 64-bit only, who knows whether Eloquence will work or not. iOS, on the other hand, officially supports Eloquence, the Vocalizer voices, and even the novelty voices from the Mac. They won’t be abandoned just because a company can’t find it within themselves to maintain such a niche product. Furthermore, all these voices are free. Of course, when a blind person buys an $800 phone, they’d better be free.
I’m also not saying iOS is perfect. There are bugs in VoiceOver, iOS, and the accessibility stack. Braille in particular suffers from some bugs. But nothing is bug-free. And no accessibility department will be big, well-staffed, well-funded, or well-appreciated. That’s how it is everywhere. The CEO or president or whoever is up top will thank you for such a great job, but when you need more staff, better tools, or just want appreciation, you’ll often be gently, but firmly, declined. Of course, the smaller the company, the less that may happen, but the disability community can never yell louder than everyone else. Suddenly, the money, the trillions, or billions of dollars, just isn’t there anymore when people with disabilities kindly ask.
But, the difference I see is what the two accessibility teams focus on. Apple focuses on an overall experience. When they added haptics to VoiceOver, they didn’t just make a few for extra feedback, they added plenty, for a feedback experience that can even be used in place of VoiceOver’s sounds. When they added them, they used the full force of the iPhone’s impressive haptic motor. Just feel that thump when you reach the boundary of the screen, or the playful tip tick of an alert popping up, or the short bumps as you feel around an empty screen. All that gives the iPhone more life, more expression in a world of boring speech and even more boring plain Braille.
The iPhone has also been tested in the field, for even professional work like writing a book. One person wrote an entire book on his iPhone, about how a blind person can use the iPhone. That is what I look for in a product, that level of possibility and productivity. As far as I know, a blind person has never written a book using Android, preferring the “much more powerful” computer. I must say that the chips powering today's phones are sometimes even more powerful than laptop chips, especially those in older laptops. No, it’s the interface, and how TalkBack presents it, that gets in the way of productivity.
Lastly, I’m not saying that a blind person cannot use Android. There are hundreds of blind people that use Android, and love it. But if you rely on Braille, or love image descriptions, or the nice, integrated system of iOS, you may find Android less productive. If you don’t rely on these things, and don’t use your phone for too much, then Android may be a cheaper, and easier, option for you. I encourage everyone to try both operating systems out, on a well-supported phone, for themselves. I’ll probably keep my Android phone, since I never know when a student will come in with one. But I most likely won’t be using it that much. After all, iOS and VoiceOver offer so much more.
This is going to be a more emotional post, which mirrors my mental state as of now. I just have to write this down somewhere, and my blog should be a good place to put it. It may also be helpful for others who struggle with this.
I've used just about every operating system out there, from Windows to Mac to Linux, ChromeOS, and Android and iOS. I've still not found one I can be completely happy with. I know, I may never find an OS that fits me perfectly, but so many others have found Linux to be all they ever need. I wish I could find that. Feel that feeling of not needing to switch to another laptop just to use a good Terminal with a screen reader that will always read output, or the ability to use Linux GUI apps, like GPodder or Emacs with Emacspeak.
There are times when Windows is great. Reading on the web, games, and programs that were made by blind people to help with Twitter, Telegram, Braille embossing, and countless screen reader scripts. Other times, I want a power-user setup. I want GPodder for podcasts, or to explore Linux command-line apps. I asked the Microsoft accessibility folks about Linux GUI accessibility, and they just said to use Orca. I've never gotten Orca to run reliably on WSL2. It's always been reliable on ChromeOS with Crostini.
Whenever I get enough money, I'll get 16 GB RAM, so maybe I can run a Linux VM. But still, that's not bare metal. And if I switch to Linux, I would have to run a Windows VM, for the few things that run better on Windows, like some games, and probably the Telegram and Twitter support. It's all just kind of hard to have both. Dual booting may work, but I've also heard that Windows gets greedy and messes with the bootloader.
But, with there being a blind person working on Linux accessibility at Red Hat, I hope that, soon, I won't need Windows anymore. I can hope, at least. But with there still being a few in the hardcore Linux community who have the mindset that I must fix everything myself, I must remain cautious, and unexcited about this development, lest the little joy that a full-time Linux accessibility hire gives me is taken away by their inflexibility and cold, overly-logical mindset.
But, I'm not done yet. With the little energy taking vitamins has given me, I've made a community for FOSS accessibility people on Matrix, bridged to IRC. I continue to study books on Linux, although I've not gotten up the energy to continue learning to program and practice. Maybe I'll try that today.
Mostly, I don't want newcomers to Linux to feel as alone in their wrestling with all this as I do. All other blind people are already so far ahead. Running Arch Linux, able to code, or at least happy with what they have and use. I don't want future technically inclined blind people to feel so alone. Kids who are just learning to code, who are just getting into GitHub, who are just now learning about open source. And they're like “so what about a whole open source operating system?”
And then they look, find Linux, and find so few resources for it for them. Nothing that they can identify with. Well shoot, there it is. Documentation I guess. I do want to wait until Linux, and Gnome or whatever we ultimately land on, is better. Marco (in Mate) shouldn't be confused whenever a Qt or Electron-based app closes and focus is left out in space somewhere. An update shouldn't break Electron apps' ability to show a web view to Orca. And we definitely shouldn't be teaching kids a programming language, Quorum, made pretty much specifically for blind people. But I'm glad we're progressing. Slowly, yes, but it's happening at least.
Why tools made by the blind are the best for the blind
Introduction
For the past few hundred years, blind people have been creating amazing
technology and ways of dealing with the primarily sighted world. From Braille to
screen readers to canes and guide dog training, we've often shown that if we
work together as a community, as a culture, we can create things that work
better than what sighted people alone give to us.
In this post, I aim to celebrate what we've made, primarily through a free and
open source filter. This is because, firstly, that part of what we've made is
almost always overlooked and undervalued, even by us. And secondly, it fits with
what I'll talk about at the end of the article.
Braille is Vital
In the 1800s, Louis Braille created a system of writing that was made up of six
dots configured in two columns of three dots, which made letters. This followed
the languages of print, but in a different writing form. This system, called
Braille after its inventor, became the writing and reading system of the blind.
Most countries, even today, use the same configurations created by Louis, but
with some new symbols for each language's needs. Even Japanese Braille uses
something resembling that system.
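As an aside, Louis's six-dot cells map neatly onto modern computing: Unicode reserves a whole block of braille characters starting at U+2800, where each of the eight possible dots (six classic ones plus two added for digital braille) sets one bit of the code point. A small illustrative sketch, covering just a handful of letters:

```python
# Unicode Braille Patterns block: U+2800 plus a bitmask,
# where raised dot n (1-8) sets bit n-1 of the code point.
BRAILLE_BASE = 0x2800

# Dot numbers for a few letters, per Louis Braille's original assignments.
LETTER_DOTS = {
    "a": (1,),
    "b": (1, 2),
    "c": (1, 4),
    "d": (1, 4, 5),
    "e": (1, 5),
}

def cell(dots):
    """Build the Unicode braille character for a set of raised dots."""
    mask = 0
    for dot in dots:
        mask |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + mask)

print("".join(cell(LETTER_DOTS[ch]) for ch in "ace"))  # ⠁⠉⠑
```

Real translation is far more involved than this, of course (contractions, context, language rules), which is exactly what LibLouis, discussed below, exists to handle.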
Now, Braille displays are becoming something that the 20 or 30 percent of
employed blind people can afford, and something that the US government is
creating a program to give to those who cannot afford one. Thus, digital Braille
is becoming something that all screen reader creators, yes even Microsoft,
Apple, and Google, should be heavily working with. Yet, Microsoft doesn't even
support the new HID Braille standard, and neither does Google. Apple supports
much of it, but not all of it. As an aside, I've not even been able to find
the standards document itself, besides this technical notes document from the NVDA
developers.
However, there is a group of people who have taken Braille seriously since 1995.
That is the developers of BRLTTY, of which you can read some
history. This
program basically makes Braille a first-class citizen in the Linux console. It
can also be controlled by other programs, like Orca, the Linux graphical
interface screen reader.
BRLTTY has gone through the hands of a few amazing blind hackers (as in incredibly
competent programmers) to land at https://brltty.app, where you can download it not
only for Linux, its original home, but for Windows, and even
Android. BRLTTY not only supports the Braille HID standard, but is the only
screen reader that supports the Canute 360, a multi-line Braille display.
BRLTTY, and its spin-off project of many Braille tables (called LibLouis), have
proven so reliable and effective that they've been adopted by proprietary screen
readers, like JAWS, Narrator, and VoiceOver. VoiceOver and JAWS use LibLouis,
while Narrator uses them both. This proves that the open source tools that blind
people create are undeniably good.
But what about printing to Braille embossers? That is important too. Digital
Braille may fail to work for whatever reason, and we should never forget
hardcopy Braille. Oh hey lookie! Here's a driver for the Index line of Braille
embossers.
The CUPS (Common Unix Printing System) program has support, through the
cups-filters package, for embossers! This means that Linux, that impenetrable,
unknowable system for geeks and computer experts, contains, even out of the box
on some systems, support for printing directly to a Braille embosser. To be
clear, not even Windows, or macOS, or iOS, has this. Yes, Apple created CUPS,
but they've not added the drivers for Braille embossers.
Let that sink in for a moment. All you have to do is set up your embosser, set
the Braille code you want to emboss from, the paper size, and you're good. If
you have a network printer, just put in the IP address, just like you'd do in
Windows. Once that's sunk in, I have another surprise for you.
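Concretely, that setup might look something like this from a terminal. This is only a sketch: the queue name, the IP address, and the driver value are placeholders, and the exact driver names cups-filters installs vary by distribution and version.

```shell
# List installed CUPS drivers that mention Braille
# (names vary with your cups-filters version).
lpinfo -m | grep -i braille

# Create a queue for a network embosser. "embosser", the IP address,
# and the driver value below are placeholders, not real values.
lpadmin -p embosser -E \
    -v socket://192.168.1.50:9100 \
    -m "(a Braille driver path from the lpinfo output)"

# Show the queue's settable options: Braille code, paper size, and so on.
lpoptions -p embosser -l

# Emboss a plain-text file.
lp -d embosser notes.txt
```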
You ready? You sure? Okay then. With CUPS, you can emboss graphics on your
embosser! Granted, I only have an Index D V5 to test with, but I was able to
print an image of a cat, and at least recognize its cute little feet. I looked
hard for a way to do this on Windows, and only found an expensive tactile
graphics program. With CUPS, by connecting to other Linux
programs like ImageMagick, you can get embossed images, for free. You don't even
have to buy extra hardware, like embossers especially made for embossing graphics!
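Assuming a CUPS queue for the embosser already exists (the name "embosser" here is just an example), embossing a picture can be as simple as pre-processing it with ImageMagick and sending it to the queue, with cups-filters rasterizing it into dots. The file names and resize value below are illustrative:

```shell
# Downscale and flatten the image to grayscale first; tactile graphics
# carry far less resolution than a screen. "convert" is the ImageMagick 6
# command; on ImageMagick 7 use "magick" instead.
convert cat.jpg -resize 200x -colorspace Gray cat.png

# Send it to the (hypothetical) embosser queue.
lp -d embosser cat.png
```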
Through both of these examples, we see that Braille is vital. Braille isn't an
afterthought. Braille isn't just a mere echo of what a screen reader speaks
aloud. Braille isn't a drab, text-only deluge of whatever a sighted
person thinks is not enough or too much verbiage. Braille is a finely crafted,
versatile, and customizable system which the blind create, so that other blind
people can be productive and happy with their tools, and thus lessen the already
immense burden of living without sight in a sighted world. And if electronic
Braille fails, or if one just wants to use printed material like everyone else
can, that is available, and ready for use, both to print text and pictures.
Speech matters too
If a blind person isn't a fast Braille reader, was never taught Braille, or just
prefers speech, then that option should not just be available for them, but be
as reliable, enjoyable, and productive an experience as possible. After all,
wouldn't a sighted person get the best experience possible? Free and open source
tools may not sound the best, but work is being done to make screen readers as
good as possible.
In the Linux console, there are three options. One can use
Speakup,
Fenrir, or
TDSR. On the desktop, the screen reader has
been
Orca,
but another is being written, called Odilia. Odilia is
being written by two blind people, in the Rust programming language.
If one uses the Emacs text editor, one can also take advantage of
Emacspeak. This takes information not
from accessibility interfaces, but Emacs itself, so it can provide things like
aural syntax highlighting, or showing bold and italics through changes in speech.
Recently, however, there is a new way for all these groups, and sighted
developers, to join together with, hopefully, more blind people, more people
with other disabilities, and other supporters. This is the Fossability
group. This is, for now, a Git
repository, mailing list, and Matrix space. It's where we can all make free and
open source software, like Linux, LibreOffice, Orca, Odilia, desktop
environments, and countless other projects, as useful and accessible as possible.
Blind people should own the technology they use. We should not have to grovel
at the feet of sighted people, who have little to no idea what it's like to be
blind, for the changes, fixes, and support we need. We should not have to wait
months for big corporations (corpses), to gather their few accessibility
programmers to add HID Braille support to a screen reader. We should not have
to wait years for our file manager to be as responsive as the rest of the
system. We should not have to wait a decade for our screen reader to get a
basic tutorial, so that new users can learn how to use it. We should not have
to beg for our text editor to not just support accessibility, but support
choices as to how we want information conveyed. This kind of radical
community support requires that blind people are able to contribute up the
entire stack, from the kernel to
the screen reader. And with Linux, this is entirely possible.
Now, I'm not saying that sighted people cannot be helpful, it's the exact
opposite. Sighted people have designed the GUI that we all use today. Sighted
people practically designed all forms of computing. Sighted developers can help
because they know graphical toolkits, and so can help us fix accessibility issues
in them. And I'm not trying to demean the ongoing, hard, thankless job of
maintaining the Orca screen reader. Again, that's not even the maintainer's job
that she gets paid for. However, I do think that if more blind people start
using and contributing to Linux and other FOSS projects, even with just support
or bug reports, a lot of work will be lifted from sighted people's shoulders.
So, let's own our technologies. Let's take back our digital sovereignty! We
should be building our own futures, not huge companies with overworked,
underpaid and underappreciated, burnt-out and understaffed accessibility
engineers. Because while they work on proprietary, closed-off, non-standard
solutions, we can build on the shoulders of the giants that have gone before us,
like BRLTTY, the CUPS embosser drivers, and so many other projects by the
blind, for the blind. And with that, we can make the future of Assistive
Technology open, inviting, welcoming, and free!
In the past, I've mostly written articles about problems with operating systems, products, services, and general technology. But, in this article, I want to shed a little light on what good things are going on. This doesn't really negate all the bad, but it helps to think about the good things that are happening, not just the bad.
Google
Lately, Google has been putting a lot more effort into Android accessibility than in previous years. A few years ago, Google added commands to TalkBack that could use more than one finger. This means that complex two-part commands, like swiping up then right, or right then down, which are more like commands you'd perform on a video game joystick than a phone, don't have to be used. Instead, one can use two, three, or four finger taps or swipes instead. These are also pretty customizable.
Then, in Android 12, Google brought those commands, which were previously only for Pixel and Samsung devices, out of (beta, I guess) exclusivity, and onto every Android device. Oh, and in Android 11 or so, they added an onscreen Braille keyboard, which I now can't live without, and couldn't live without back on iOS either. That's the one thing that gave me a good enough excuse to jump to Android.
Now, they're adding Braille display support, so if a blind person owns a refreshable Braille display, they can connect it through Bluetooth to Android. This will be coming out in Android 13 later this year. And if Samsung doesn't hurry it up, I won't be very happy if I have to wait until next year to get 13. Ah well, Dolby Atmos is pretty worth it.
I hope they keep improving their AI stuff. Right now, they can detect text in images, but I'd love to be able to go through my photo library and hear descriptions of images, like I can on iOS. No, having to send the image to another app isn't the same thing. But they're getting closer!
Apple
Apple still leads the way on adding new features to their accessibility settings, at least on mobile. Okay, text checking on Mac is pretty cool. Anyway, this year was really interesting, as they've added lots of new voices (basically fonts for blind people, except they're all monochrome and sometimes sound awful, depending on who's listening). Other than that, they added support for door detection and ... I can't really think of much else. The really big thing is voices, since they've added one that the blind community has been using for about 25 years, Eloquence, which I'm sure took a lot of engineering, compatibility with 32-bit libraries, and spaghetti code to get working with Apple silicon. Still, there's nothing that makes basically the whole blind community want to beta test like some new voices!
Microsoft
So, modernizing a whole OS is probably really hard. They still want to be backward compatible, but they also want to move things forward. So, they're still trying to push towards using UI Automation, even though File Explorer can be really sluggish, even on this new PC, and screen readers don't really have anything like the VoiceOver rotor, which is invisible and instantly available. Windows is still the OS of choice for blind people. Microsoft has outlived the Mac hype, and still chugs along even with phones taking over the computing world.
Lately, they kind of seem to repeat themselves a lot. They continually talk about their new voices, only available to Narrator and no other screen reader, because Narrator has to be the premier screen reader experience. But, from a positive point of view, it could just mean they're planning something really nice for the next Windows release. I'd love to see offline image recognition that all screen readers could tap into, like the already-included text recognition.
ChromeOS
Crostini is really great. It lets me use Linux command line apps, through TDSR, or even GUI apps, through Orca, but with a nice window manager, notification system, and ChromeOS provides the web support and Android apps. And Emacspeak isn't sluggish as crap like it is in WSL2.
Linux
At least a lot of blind Linux users like either Mint or Arch. And there's Emacspeak. And GPodder, and Thunderbird is kinda nice when it wants to be, and lshw gives loads of info on hardware, and Bash is far, far better than PowerShell. Like, “Stop-Computer”? Who wants to type all that?
Braille
I've recently started reading, thanks to my Humanware NLS EReader, and I'm really starting to enjoy it. Thanks to, I think, my vitamins, and practice, I'm finding that I'm able to think ahead of the current reading point, to predict the rest of the sentence, and if the prediction is right, skim past that. It's kinda cool. I'm not sure if I was able to do that before, but I'm definitely noticing it now.
Conclusion
In this blog post, I talked about how stuff still mainly works, Google's starting to give a crap, Apple still blazes ahead in some areas, and Microsoft still talks a lot. Oh, and the Chromebook is still a nice Linux system lol, and Braille is good.
While reading this article on how much Windows phones home to Microsoft, I thought about just how much we don't really have control over our data when running Windows. Who knows what all is being sent over all those network transmissions. I mean, when your cryptographic services contact other services over the Internet, like, why? On the Hacker News posting about this, a commenter asked why, after all this, someone would still use Windows. I responded with the usual “because accessibility, unfortunately.”
So, in this article, I'll talk about why accessibility should be the first thing every contributor to free and open source software thinks about. People with disabilities are some of the most disadvantaged, unvalued, discarded, and underrepresented people on Earth. Abled people don't want to think about us, because they don't want to imagine what it'd be like to be one of us. They fear going blind, deaf, or losing mental faculties, even though they know it'll happen eventually. So, supporting us is the right thing to do, provides an alternative to a disadvantaged population, and supports yourself when you need it most.
Got morals?
If you practice any kind of moral system, you probably know that you should help the poor. Some moral systems include people with disabilities, as we're often some of the most poor, especially in non-western countries. If you practice religion, you may or may not have seen a verse insisting that you not put a stumbling block in front of the blind, or other such admonitions. This should be the case in software as well.
We're all human, except for the bots crawling through this for keywords for search engines and such. We all are born with different traits. Some of us were smaller babies. Some of us were smarter babies. And some of us were disabled babies, or born prematurely, or survived even though the hope for such was low. So, shouldn't we account for these things? Shouldn't we prepare, in advance, for, say, a deaf person to use your chat program, or a blind person to try your audio editor?
Supporting people with disabilities is the right thing to do. It's the human thing to do. You don't want to look like those soulless corporations, do you? And even the corporations make an effort to support disabled people, even if to prop up their image. Can the open source community not do better than an uncaring, unfeeling money-printing machine? Surely, humans are better than the corporate machine!
And yet, in open source communities, people with disabilities are often ignored, or told they'll have to be a developer to make things better, or told to “be the change you want to see,” which is just plain demoralizing to a non-developer. Developers, and communities in general, must learn to empathize with all users, before they themselves become the ones needing empathy.
We are Everywhere
Have you ever called a bank, a hospital, a non-profit organization, the Internal Revenue Service, or your phone company? Yes? Then chances are, you could have been speaking with a person with disabilities. Blind people work in many call centers, and at many phone network providers, like Verizon, AT&T, and others. Do you know what operating system they're more than likely using? That's right, Windows. Why? Because accessibility on Windows, using Windows screen readers, and Chrome or Edge, is top-notch. Now, they may not be using the latest version of Windows, and hopefully it's all patched up, but we don't know that. The only company that does is Microsoft, and it sure isn't going to talk about its weaknesses.
So, how about the developers of free and open source desktop environments, web browsers, and operating systems be the stronger party and ensure that no one has to ever run Windows? After all, it's your data that's being stored on Windows computers, in Windows servers, spoken by, more than likely, $1099 closed source screen readers that could be doing anything with your data. If it sounds like I'm trying to scare you, you're right. We have asked nicely for the last decade to be taken seriously. All we've gotten is a shrug, a few nice words, and a “don't bother me I'm engineering,” kind of vibe after that. Well, you might as well start engineering for us before it's too late for you.
Where do you want to be in forty years?
It's no secret that we're all getting older. We age every second of every day. And, as we age, our bodies and minds start to fail.
Our eyes grow dim, our ears don't hear the birds outside anymore, and our minds tick slower and slower. But our hobbies, or our jobs, never quite leave us. Some developers can just climb the chain until they're high enough to not need to code anymore, thus bypassing the need to confront their failing eyesight, on the job at least. Some developers just retire and quit coding, choosing to give the wheel to younger, and hopefully brighter, generations. But why? You know so much! You still have those ideas! You still want to see freedom win!
Let's try another problem, those who become disabled younger in life. There are many genetic issues, diseases (like COVID-19), and so on that may cause even a younger person to become disabled. You may lose your vision, have a car accident, lose some hearing from listening to loud music, or maybe you just don't have the energy that you used to have. But you still want to code! You still want to create! And you have unfinished projects that need fixing!
In both cases, helping people that are disabled will help you when you need it most. We, people who are disabled, simply started out with what you'll be getting in the future. So why not start now? Help make desktop environments a joy to use for blind people, so when your eyes start to hurt after a while of using them, you can just close them, turn on accessibility features, and continue working with your eyes closed! Or, if you make things easy for people with mobility issues, you can work one-handed when the other cramps up. Or, if you work on spell checking, autocorrection, and word suggestions, you can take advantage of that when a word just won't come to you, or when you forget how to spell a word.
So how can I help?
We need people, not companies. Companies, like Canonical, will sit there and work on their installer's accessibility, while the real issue is the desktop environment. The System76 folks only need accessibility help when they get to the GUI of the desktop environment that they're building. The Gnome folks say that they need coders, not users. So I have little faith in corporate-backed open source. They're just another machine.
So, community support is where it's at, I think. But it can't just be one person. It has to be everyone. Everyone should be invested in how they're going to use computers in the future. Everyone should care about themselves enough to consider what they'll do when, not if, they go blind, lose hearing, lose energy, lose memory, lose mental sharpness. Everyone should be into this, for their own sake.
There are many Linux desktop environments besides Gnome. KDE is what I'd be using if I could see. There's also Mate, Cinnamon, LXDE, XFCE, and others. Why mainstream distributions of Linux choose to stick with Gnome is beyond me. Below are some ideas to get the community started.
Use Linux with a screen reader. If you don't like it, we probably won't either.
Add accessibility labels to whatever you're making.
Gather people with disabilities to get feedback on your desktop environment or distribution.
Have either your entire team focus on accessibility, or, if you must, make an accessibility team.
Spread the word about your accessibility fixes, put them front and center!
See how much your image improves, and how loyal disabled people are!
Yeah, it looks a bit selfish. But I've grown to expect people to be selfish, to care about how they're seen and about getting more users and such. That's just how we disabled people have to think most of the time. So, prove us wrong. Show us that the world of communities, democracies, and people of high ideals cares about the disadvantaged, about their own security and the security of others, and about who they'll be in the future. Let's make open source really open to everyone. Let's make freedom free for everyone. Why not?
Imagine, for a moment, that there is a ring. It's dim and gray, lifeless. There are many rings encircling it, but these are darker, more foreboding than the central ring. You hold in your hand a light, which can only shine inward. How will you proceed?
Let's try putting the light in the central ring, and see what happens. You place the light on the rim of the ring, and step back, watching the light fill the central ring with vibrant, lively color.
Despair
Darkness is all I know. I stand in the center of my ring, one of the outermost rings in the system. It's cold here, dark, no one wants to come near us, for fear of catching our Darkness. I don't blame them. We fight so much. Just thinking of battle makes me feel… better. Like I have something to blame. Someone to hate.
I sigh, looking up. Up at the Light ring. The central ring. No one wants us anywhere near there.
But I wanted it. Or to bring them out. Yes, I would destroy their Light, make them feel our anguish, our despair, our hate. I tell my tired and beaten-down body to move, to climb the rings, to seek that Light and snuff it out. I would find whoever put that light there and make them feel my pain, my agony.
Well, that wasn't such a good story, now was it? Let's hope that guy doesn't find you, right? Here, let's reset and try again. This time, let's put the Light on the outermost ring and see what happens.
Peace
Light reassures me as I stand on the rim of my ring. I feel its heat, and people from other rings say it allows them to sense things from a distance, to know what's around without making sounds. That's alright. We have machines that use the light, like Investiture or a power source, to tell us what's around. In my free time, I enjoy exploring the rings, helping people, and doing anything else I can for our system. I look to the next ring, where some of my family live. I jump there and spend a while searching for new things in the ring. I scan the describer device upward, to further rings, and to the central ring, which, I'm told, glows with the cast light of all the other rings. I lift my face and feel the warmth of the Light.
That night, I dream of another world, where the Light has chosen to selfishly glow only on the central ring. I wake up sobbing at the idea, seeing the people, filled with fear and hurt and pain, just barely surviving, and I beg Elyon to have mercy on those people, if they do exist.
Ah, that's better. Since the central ring already has some light, putting the big Light on the outer ring allows the light to move inward, giving all rings light, not just one. Yet, software and web developers selfishly think of the central ring, which stands for “the 99%” or “majority” of people, and disregard those who need their services the most. Thus, people with disabilities, neurodivergent people, people who don't speak English, people who have trouble reading, people who have trouble processing images, Autistic people, and so many others live in a world that slaps them in the face every moment of every day.
Technology doesn't care. Bits and bytes could be used to help people with disabilities in so many ways. Yet they are used, in so many ways, by ignorant abled people to bar access to so many things, from playing video games to taking COVID tests. And we can't even move toward the Light, as it were. In Linux, the free and open source system made by abled people, almost every desktop environment has huge accessibility issues. Even if you find a good one, you still have to enable assistive technology support by hand. And the Mate desktop, which has been the most accessible one we have, only because it's based on the Gnome 2 of 10+ years ago, is starting to show its age. Chrome-based apps, like VS Code and Google Chrome, crash out of nowhere. Pidgin crashes while writing a long message. And when a Chrome-based app crashes, Orca is lost; you have to immediately set focus to the desktop, or Orca stays stuck until it sees a dialog it creates.
So, that means we can't even get into a good position to learn to make our own stuff from some of the best courses, like The Odin Project, which requires that you use Linux, macOS, or the Linux container on ChromeOS. Windows, the most accessible system, supported by a large community of blind developers and created by a company that, in recent times, has been getting more serious about accessibility, isn't allowed.
So think on this, when inspiration strikes for a new site, a new app, a new package. If you help the least of us, you'll help the best of us too!
You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!
About a week ago, I got a new laptop. It's an HP with an AMD 5500 processor. With 8 GB of RAM, 512 GB of SSD storage, and a modern processor, I think it'll last a good while. I do hope I can swap the RAM out for two 8 GB sticks instead of the 4 GB ones.
After using Windows 11 for a while, I got the Linux itch again. Windows was... slower than I expected. That, along with some games being more frustrating than fun, made me decide to just do it.
So I installed Fedora. I chose the Fedora 35 Mate spin for this. Well, first I tried the regular Gnome version, but Orca couldn't read the installer, so that's great. After getting it installed, turning on Orca at the login screen and on the desktop, setting Orca to always start after login, and turning on assistive technology support, I was ready to go. Except...
Bumps in the Road
I mainly use Google Chrome for browsing. After getting it installed, I opened it, prepared to sync my stuff and get to browsing. But when it opened, there was nothing there. Orca read absolutely nothing. Baffled, I installed VS Code. Still the same: nothing.
So, I hunted down the accessibility cheat codes I'd used before to magically make things work.
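For anyone hunting these down themselves, here's a sketch of what those cheat codes usually look like. The exact variable set is my assumption (distros and toolkit versions differ), but the idea is to force GTK, Qt, and Chromium apps to expose themselves over AT-SPI; add the exports to ~/.profile and log back in.

```shell
# Assumed "accessibility cheat codes": environment variables that push
# GTK, Qt, and Chromium apps to talk to AT-SPI. Double-check the exact
# names against your distro's accessibility docs.
export ACCESSIBILITY_ENABLED=1            # Chromium checks this on Linux
export GTK_MODULES=gail:atk-bridge        # load the GTK accessibility bridge
export GNOME_ACCESSIBILITY=1
export QT_ACCESSIBILITY=1
export QT_LINUX_ACCESSIBILITY_ALWAYS_ON=1 # keep Qt 5+ apps accessible
# On GNOME-family desktops, also flip the dconf key:
#   gsettings set org.gnome.desktop.interface toolkit-accessibility true
```

Chromium apps can also be started with the `--force-renderer-accessibility` flag as a one-off test, which avoids touching the environment at all.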
After restarting the computer, things worked. I could use Chrome and VS Code. Then I set up Emacs with Emacspeak. After a lot of looking around, I discovered I needed lots of ALSA stuff, like alsa-utils, plus mplayer, sox, and all that sound stuff. Oh, and I had to replace serve-auditory-icons with play-auditory-icons so all the icons play.
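For the curious, that icon change boils down to one line in the Emacs init file. The variable name below is Emacspeak's `emacspeak-auditory-icon-function` defcustom as I understand it, and the Fedora package names in the comment are my guess, so verify both against your setup; the temp file here stands in for ~/.emacs so the sketch is safe to run.

```shell
# Sound prerequisites on Fedora (assumed package names):
#   sudo dnf install alsa-utils mplayer sox
init=$(mktemp)   # stand-in for ~/.emacs
cat >> "$init" <<'EOF'
;; Play auditory icons directly instead of going through the icon server.
(setq emacspeak-auditory-icon-function 'emacspeak-play-auditory-icon)
EOF
```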
It was during my setup of Emacs that I found one of the joys of Linux: dotfiles. I copied the .emacs files from my Chromebook to the new Linux PC, and it was as if I'd simply opened Emacs on my Chromebook. Everything was there: my plugins, my settings, even my open files.
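That portability is easy to demonstrate with throwaway directories; the paths below are stand-ins, not my actual machines. Dotfiles are plain files, so a copy carries the whole setup.

```shell
old=$(mktemp -d)   # stand-in for the old machine's home directory
new=$(mktemp -d)   # stand-in for the new machine's home directory
# Fake a minimal Emacs setup on the "old" machine.
mkdir -p "$old/.emacs.d"
echo '(setq inhibit-startup-screen t)' > "$old/.emacs"
# Copying the dotfiles is the whole migration.
cp -r "$old/.emacs" "$old/.emacs.d" "$new/"
```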
Linux is really snappy. I can open the run dialog, type google for google-chrome, press Enter, and there's Chrome, ready almost before I am. Pressing keys yields instant results, even faster than on Windows.
Nothing's Perfect
Even with all this (fast computing, Emacs, an up-to-date system, the freedom to learn about computing), there are some rough edges. If you close a Chrome-based app, like VS Code, you have to move to the desktop immediately, or Orca will get stuck on nothing. If that happens, you have to press Insert + H for help, then F2 to bring up some kind of dialog for Orca to land on. It seems Mate's window manager doesn't put focus on the next window. The top panel in Mate also has lots of unlabeled items. And there are very few accessible native Linux games, but with Audiogame Manager, there are plenty of Windows games I can play.
You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!