Beyond Parity: The Case for True Accessibility Affordances

There’s a debate in the world of digital accessibility. Should the experience for a screen reader user simply mirror that of a sighted person, achieving functional parity? Or should it strive to be as helpful and efficient as possible, even if it means creating unique capabilities for non-sighted users? I argue for the latter. The goal shouldn’t be mere parity, but the creation of powerful affordances that make digital interfaces genuinely more accessible.

Defining Affordances for Accessibility

In design theory, an “affordance” is a quality of an object or environment that allows an individual to perform an action. A handle affords pulling; a button affords pushing. For this discussion, I want to expand that definition: an accessibility affordance is a feature that provides a non-sighted user with a capability that a sighted user doesn’t typically have, creating a more efficient and powerful user experience.

A good example is the virtual buffer used by screen readers on Windows. It allows a user to select and copy non-editable text from a webpage—something a sighted user can’t do without enabling a special mode like caret browsing.

This difference is very pronounced on mobile. On an iPhone, VoiceOver’s Text Selection rotor lets me select any text on a webpage. This is really useful for sharing quotes or saving information. On Android, TalkBack lacks this fundamental affordance, forcing users to rely on third-party apps like Universal Copy to achieve the same result.

Affordances Everywhere

Affordances show up on every operating system. On Android, for example, the Google Messages app sends an accessibility announcement when a new message is received, which lets a blind person hear the new message in an open conversation without having to navigate to it. Under a strict no-affordances philosophy, the user would only hear the “incoming message” sound, then have to feel around the screen to find and read the new message.
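Here is a minimal sketch of how an app could provide that kind of announcement, assuming a hypothetical onMessageReceived handler; it is not Google Messages’ actual code, and newer Android guidance often prefers marking the message list as a live region instead.

```kotlin
import android.content.Context
import android.view.View
import android.view.accessibility.AccessibilityManager

// Hypothetical handler: called by the app whenever a new message arrives
// while a conversation is on screen.
fun onMessageReceived(context: Context, conversationView: View, sender: String, body: String) {
    val a11yManager = context.getSystemService(AccessibilityManager::class.java)
    // Only bother when an accessibility service such as TalkBack is running.
    if (a11yManager?.isEnabled == true) {
        // Ask the screen reader to speak the message without moving
        // accessibility focus away from wherever the user currently is.
        conversationView.announceForAccessibility("$sender: $body")
    }
}
```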

macOS and iOS are full of affordances. VoiceOver on macOS, even though it’s an abomination of hacks sitting on top of another mess, has things like the Application and Window switchers. Even though macOS lets you press Command + Tab to switch between applications and Command + Grave Accent to switch between open windows, VoiceOver can also show you a list of running applications, or a list of open windows in the current application. This lets you set focus to system windows, like a problem report.

On iOS, in the Books app, a blind person can swipe down with two fingers while focused on the page of a book, and VoiceOver will begin reading, automatically turning pages as it goes. It only continues until the end of the chapter, which I think is a bug. In the Mail app, when you open a conversation with many messages, you can use the Messages rotor to swipe between them, which skips all the headers and action buttons for each and every message. None of this is possible in Google Play Books or Gmail on Android.

Now, Android has a few stand-out affordances. Face in View, only available on Pixel phones, lets a blind person take a selfie by giving hints on where to move the camera, and once the face is in view, it even takes the picture for them. Not even iOS takes the picture automatically. When you unlock your device with your fingerprint, Android gives you instructions on where to move your finger to find the fingerprint sensor. This works on Pixels, and will work in One UI 8 when it’s released, but doesn’t work on the OnePlus 13 because they broke it and fixing it isn’t a priority for them. TalkBack also comes with the ability to have images described using Gemini.

So, we’ve established that every operating system comes with some accessibility affordances built in. But what about the places where they aren’t used? I’ve hinted at a few already, and I’ll make it clear in the next section.

When Affordances Aren’t Used

Google and OnePlus make it way too easy to show how the philosophy of accessibility differs between companies that seem to let the OS, accessibility frameworks, and apps do all the work, and Apple, which seemingly tries to “script” VoiceOver into working with as much of the OS and its first-party apps as possible.

Let’s take a look at AI. It’s the big thing everyone is focused on, especially Google. Gemini is their star AI product. So, how does it do with accessibility? Well, it lands somewhere between meh and awful.

On Android, Gemini with TalkBack works like this: if you speak to Gemini by holding down the side button, things generally work well. That is, unless you touch the screen. Then TalkBack speaks the item you touched, and Gemini stops speaking. So if you accidentally touch something, you’ll need to have TalkBack read the rest of the response, which requires more feeling around the screen.

If you type to Gemini, it will neither send the response to TalkBack to be spoken nor speak it itself. Instead, you have to figure out for yourself when a response is ready to be read. One blind user has said that sighted people don’t know when Gemini is done generating a response either; I’d argue that the response simply appearing is as good a signal as any. An affordance here would be for Gemini to send the finished response to TalkBack as an accessibility announcement. The Copilot app does this, and that’s why I even use it.
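As a sketch of that affordance, assuming a hypothetical streaming callback API (none of this is Gemini’s real code): buffer the reply as it streams in, then announce the whole thing once generation finishes.

```kotlin
import android.view.View

// Hypothetical reply handler: collect streamed chunks, then hand the
// finished reply to the screen reader as a single announcement.
class ReplyAnnouncer(private val chatView: View) {
    private val buffer = StringBuilder()

    // Called for each chunk of the streamed reply.
    fun onReplyChunk(chunk: String) {
        buffer.append(chunk)
    }

    // Called once the model signals that the reply is complete.
    fun onReplyFinished() {
        val fullReply = buffer.toString()
        buffer.setLength(0)
        // TalkBack speaks the reply as soon as it is actually done.
        chatView.announceForAccessibility(fullReply)
    }
}
```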

On the desktop and iOS, Gemini says that it has replied, even though it is still generating a response. It has been this way for at least a year, and shows no sign of improving.

Why Develop Affordances?

Developing these features is not about adding “bells and whistles.” It’s about efficiency, respect for the user’s time, and a deeper understanding of the non-visual user experience.

Let’s take the notification shade on Android, for example. After each notification there’s an expand button, but expanding could instead be an accessibility action. First, that would let a blind person swipe through their notifications more quickly, since each notification would take up one element instead of two. Second, they could expand a notification with the action, or, if it’s a grouped notification, by simply double tapping.

Let’s take Gmail next. If I open a conversation of about ten messages, the only way to read through them all is to swipe through every single message: the header text, then the body, then the action buttons, for each and every one. I could scroll instead, but then I might miss one. Accessibility actions to move to the previous or next message in the conversation would make things much, much faster. iOS already has this in its built-in Mail app.
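For illustration, here is roughly what exposing those controls as custom accessibility actions could look like with the androidx ViewCompat API. The callbacks are placeholders; this is not Gmail’s or System UI’s actual code.

```kotlin
import android.view.View
import androidx.core.view.ViewCompat

// Attach custom accessibility actions to a notification row or a message
// in a conversation. TalkBack lists these in its actions menu, so the user
// can trigger them without swiping past extra buttons.
fun addMessageActions(
    row: View,
    onExpand: () -> Unit,
    onNextMessage: () -> Unit,
    onPreviousMessage: () -> Unit,
) {
    ViewCompat.addAccessibilityAction(row, "Expand") { _, _ ->
        onExpand()
        true // report the action as handled
    }
    ViewCompat.addAccessibilityAction(row, "Next message") { _, _ ->
        onNextMessage()
        true
    }
    ViewCompat.addAccessibilityAction(row, "Previous message") { _, _ ->
        onPreviousMessage()
        true
    }
}
```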

The TalkBack screen reader itself could be so much more powerful. Why can’t it select text in a web view? Why isn’t there a built-in OCR feature that can scan the screen for inaccessible elements and make them navigable, as screen readers on Windows and iOS have done for years? These are not edge cases; they are fundamental tools for independence and efficiency.
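As a rough sketch of the screen-OCR idea, assuming the screenshot has already been captured (on Android 11 and later an accessibility service can do that with takeScreenshot), ML Kit’s text recognizer can turn a bitmap into text blocks with on-screen bounds that a screen reader could then expose as navigable elements. This is an illustration, not TalkBack’s implementation.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Run on-device OCR over a screenshot and report each recognized text block
// together with its bounding box, so it could be presented as a focusable
// element on screen.
fun recognizeScreenText(screenshot: Bitmap, onResult: (List<Pair<String, Rect?>>) -> Unit) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    val image = InputImage.fromBitmap(screenshot, 0) // 0 = no rotation
    recognizer.process(image)
        .addOnSuccessListener { result ->
            onResult(result.textBlocks.map { block -> block.text to block.boundingBox })
        }
        .addOnFailureListener {
            // OCR failed; report nothing rather than crash the service.
            onResult(emptyList())
        }
}
```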

Conclusion

Given these criticisms, one might ask why I continue to use Android. The answer is that no platform is perfect, and my choice is a pragmatic one based on a series of trade-offs. The responsiveness of TalkBack on my OnePlus device, superior video game emulation, longer battery life on my peripherals, and the convenience of Google Messages for Web are features that I value highly.

This personal calculus, however, only reinforces the central point: we must advocate for better accessibility on all platforms. By embracing a philosophy of creating powerful, intelligent affordances, developers can move beyond the baseline of “making it work” and start building experiences that are truly empowering.

Devin Prater @devinprater