[blind-philly-comp] Article From Applevis: Why I'm still on a Mac: a guide for skeptics and those who feel that grass may be greener with Windows

  • From: David Goldfield <dgoldfield1211@xxxxxxxxx>
  • To: blind-philly-comp@xxxxxxxxxxxxx
  • Date: Mon, 29 Jan 2018 18:11:28 -0500

Why I'm still on a Mac: a guide for skeptics and those who feel that grass may be greener with Windows
Submitted by Devin Prater on 28 January, 2018 and last modified on 28 January, 2018
Introduction
This article will show you, the reader who uses a Mac with the VoiceOver screen reader, exactly why I haven’t jumped ship and fled to Windows 10. I’ll compare the state of accessibility of Windows and the Mac as operating systems, the features of their first-party apps, advancements in their first-party screen readers, and the outlook for accessibility at both companies. I will attempt to be objective throughout, so as to present my arguments in a logical and useful way.
The core of Windows and Mac accessibility
Windows and the Mac have grown up together, battling for the space on our desks and the pleasure of our minds. While accessibility has been a small part of both companies’ focus, their differing ways of interacting with the user have translated into differing levels of accessibility. Windows gives assistive technology vendors the ability to run on the system with similar access to its own screen reader, Narrator, while VoiceOver is the only screen reader that currently exists on the Mac. Though this may look like stifled innovation to some, most Mac users are happy with the arrangement, possibly because they are already used to VoiceOver being the only screen reader on iOS, Apple’s mobile platform, which is extremely popular among the blind.
While Microsoft’s accessibility APIs focus on one window at a time, the Mac lets the user know when a background app has launched, and even when that app has opened a new window. This extends to so-called “system dialogs,” which are common on both systems: whereas Windows automatically tries to put focus on them, the Mac merely alerts the user and allows navigation to them at any time.
Windows relies on keyboard access universally, since many assistive technology companies ship screen readers for it. Tab, Shift+Tab, the arrow keys, F6, and Alt+Tab cover most of the keyboard commands one uses to navigate Windows. This lends itself well to those who are good at memorizing where everything sits in the tab order, but the visual layout is lost: focus can land either at the top of an app or wherever the app’s developer chooses. On the Mac, with VoiceOver, one can explore the screen completely, either with the VoiceOver keys and arrow keys or with a trackpad. Not only does this give the user a sense of where things are, it frees some of that memorizing brain power for more important tasks. App developers can still set where VoiceOver focus initially lands, but since that too can be configured via the VoiceOver Utility, a blind Mac user isn’t nearly as likely to get lost. Add to that the ability for VoiceOver users to interact with, or focus in on, a section of the screen, such as the header column of an email list so as to read only that column, and the Mac user may well be more productive than their Windows-using counterpart.
Windows uses voices in many languages created by Microsoft. Apple has purchased the right to use voices provided by Nuance, but also has the famous Alex voice, which sounds realistic, can recognize parts of speech from whole paragraphs rather than isolated phrases or sentences, and even breathes, which may cue the listener to the word likely to begin the next sentence. While Microsoft’s voices are well made, they are also very small and carry a noticeable synthetic buzz when they speak. Apple’s voices, along with the Nuance ones, do not suffer from these flaws. While you can acquire other speech engines for Windows, you must often purchase them, whereas all the voices that come with the Mac are free for Mac owners, and there are plenty of high-quality voices, in many different languages, to choose from.
Windows has a navigation system wherein a user can type the beginning of an item within the current list, and system focus jumps to that item. This is usually called first-letter navigation, even though nowadays one can usually type multiple letters to “narrow down the list,” as it were. The system works well, and I use it often. However, it isn’t always available, making it tricky and frustrating to type one letter over and over until the right object is found. On the Mac, this works everywhere, as far as I’ve found. You can type “IT” on the Dock to move directly to iTunes, or “IB” to move to iBooks. (Note: “iBooks” may change to “Books” in a later macOS update.) In the Finder, where you browse files, you can begin typing the name of a file or folder to jump to it. In System Preferences, you can begin typing the name of the pane you wish to explore, after interacting with the scroll area. Best of all, you can open System Preferences simply by putting focus on the menu bar, pressing Down Arrow, typing “SY,” then pressing Return (or Enter). This makes navigation on the Mac a breeze compared to the plentiful key presses required on Windows.
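The “narrowing down” behavior described above amounts to a case-insensitive prefix filter over the item names. Here is a minimal sketch of the idea in Python; this is only an illustration, not how either operating system actually implements it, and the Dock item names are just examples:

```python
def narrow(items, typed):
    """Return the items whose names start with what the user has typed,
    matching case-insensitively, as type-ahead navigation does."""
    prefix = typed.lower()
    return [item for item in items if item.lower().startswith(prefix)]

dock = ["iTunes", "iBooks", "Finder", "System Preferences"]

# Typing "I" still matches two items; typing "IB" narrows to one.
print(narrow(dock, "I"))   # ['iTunes', 'iBooks']
print(narrow(dock, "IB"))  # ['iBooks']
```

Single-letter first-letter navigation is the special case where `typed` is one character, which is why repeatedly pressing the same letter is needed when several items share an initial.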
Writing is an essential part of using a computer. Word processors, notepads, email clients, Twitter apps, web browsers, and the Terminal all take written input from the user. How much help do we get while we write? On Windows 10, spell checking is available in many Universal Windows apps, but nearly all desktop apps have no spell checking at all. Word completion is making its way to Windows 10, although the current implementation isn’t very productive, as words are spelled out rather than spoken and then spelled out. Punctuation marks such as quotes, ellipses, and other malleable symbols are simply printed plainly, without regard for position or style. There is no way, in Windows itself, to type symbols which are not on your keyboard without memorizing the Unicode numeric value of each symbol. On the Mac, spell checking and auto-correct are available system-wide, as are a comprehensive dictionary, a thesaurus, and search mechanisms for Wikipedia and other such databases. One can also have the Mac attempt to complete partially typed words, helpful for long words which are easy to pronounce but hard to spell. Symbols like quotes are paired into their left and right forms, making text on the Mac not only a joy to type but also a joy to read. Symbols like • (bullet), ≥ (greater-than or equal-to), é (E acute), … (ellipsis), and π (pi) are easily produced by holding down the Option key and typing a letter or symbol already on the keyboard. If this isn’t enough, there is an Emoji & Symbols picker which allows the choosing of just about any symbol one can imagine. All of these options, and more, such as text replacement, are configurable in the Keyboard pane of System Preferences, and all usable in any app.
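For readers curious what the “Unicode numeric values” mentioned above actually are: every character has a numbered code point, and those numbers are what Windows makes you memorize. A small Python sketch showing the code points of the symbols listed in the paragraph:

```python
# Each symbol named in the paragraph, written via its Unicode escape,
# printed alongside its code point in the usual U+XXXX notation.
symbols = {
    "bullet": "\u2022",                    # •
    "greater-than or equal-to": "\u2265",  # ≥
    "e acute": "\u00e9",                   # é
    "ellipsis": "\u2026",                  # …
    "pi": "\u03c0",                        # π
}

for name, char in symbols.items():
    print(f"{name}: {char} (U+{ord(char):04X})")
```

On the Mac the Option-key layer hides these numbers from the user entirely; on Windows, entering • by hand means knowing it is code point 2022.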
First-party app accessibility
App accessibility has come a long way. From the invasive screen-capturing techniques of the last century to the accessibility APIs used today, apps have grown not only more powerful but also more accessible. On Windows, apps are caught between legacy desktop implementations and the newer Universal Windows Platform standards, and the two handle accessibility differently. Legacy apps have menu bars, toolbars, and plentiful keyboard commands. Universal Windows apps have no menu bar and plenty of navigation buttons; some offer keyboard commands for many options, while others have none at all. The Mac in its current form, macOS, has a single kind of app, bringing consistency across the platform: there are menu bars, toolbars, and plenty of keyboard shortcuts in all of Apple’s apps.
Advancements in Narrator and Voiceover
Screen readers have been advancing since nearly the beginning of the 1980s. From text-based programs that sent their output to a speech synthesizer the size of a modern desktop PC case, to screen readers built from web technology, third-party screen readers have come a long way. It wasn’t until the beginning of the twenty-first century, though, that Microsoft boldly stepped into the assistive technology game. Narrator was a minimalist screen reader designed to let a user set up their computer on their own before receiving their full screen reader, likely JAWS or Window-Eyes at the time. Over the next few versions this minimalist design was refined, and by Windows 7 it was fairly complete. During the creation of Windows 8, though, something changed. Narrator became more robust, defying its earlier reputation of being simply a crumb and growing into a snack. In the years which followed, Narrator’s power grew; it can now hold its own with the likes of NVDA when browsing the web, has Braille support (for now tied to BRLTTY), and can be used in many more circumstances than just waiting for your JAWS disks. VoiceOver, however, took a different route. Starting out as a prerelease module for Mac OS X 10.3, VoiceOver was designed from the beginning to be a full screen reader. Released to the public with Mac OS X 10.4, VoiceOver could not only read the screen, it could navigate it, allowing users to know exactly what was on their Mac’s screen and act on it efficiently. A year later, Mac OS X 10.5 gave the Mac the Alex voice, which is still being updated today. Over the years, VoiceOver gained the ability to work with Braille displays, use the newly acquired Nuance voices, set different “activities” which let one define a set of preferences for a single app or website, and read complex web pages and emails, and Mac apps grew more accessible along with it.
The outlook of accessibility at Microsoft and Apple
Microsoft started out simply with accessibility: they allowed third-party companies to make technology for their customers. Apple started out this way too. Since Apple is very secretive, I don’t know whether they were researching accessibility before OS X or not. Microsoft took a small but crucial step forward in creating Narrator, and Apple took a much larger one in providing VoiceOver. Narrator stayed simple for many years while VoiceOver grew, but now Narrator is leaping forward as VoiceOver’s growth has slowed. Microsoft now speaks of accessibility as one of its core values; Apple is silent about it much of the time, but still pushes forward on features for VoiceOver and iOS. Microsoft’s accessibility has improved over the last six years, but still hasn’t caught up with Apple’s. Microsoft still allows third-party screen readers to dominate on their platform, while Apple never had third-party screen readers on Mac OS X to begin with.
Conclusion
This article has shown why I, as a user of assistive technology, have not turned from the Mac to the PC. I’ve discussed the accessibility of the Mac and Windows operating systems, their first-party apps, how the first-party screen readers have advanced, and the outlook of the two companies regarding accessibility and their screen readers. I hope this helps someone choose an operating system to stick with, or one to try.

--
David Goldfield, Assistive Technology Specialist WWW.David-Goldfield.Com
