Forget the iPhone and the iPad, they are yesterday’s news. There is hype – justified or not – over the next big thing in technology: wearable computers.
Whether it is a Dick Tracy-like wristband computer from Apple (a rumor making the rounds lately), a computer mounted on eyeglasses, or even the current iPod Nano strapped to a wristband, the mobile revolution clearly has not ended with the smartphone and the tablet.
All this is fascinating. Unfortunately – I hate to burst the bubble – history teaches us that each new innovation in technology creates new accessibility problems. Wearable technology WILL make more hands-free computing possible and deliver access in more places than ever, which will change lives for those who cannot use their desktops and mobile devices the way most of us do. But it will negatively affect some disability groups.
When cellphones broke into the mainstream, the TTYs and TDDs that deaf people relied on for phone communication were not designed to work with these mobile products, and their slow transmission speeds quickly made them obsolete. As more hearing people moved their telephone communication away from their desktops and homes, communication mobility increased around the world. Yet that mobility did not benefit deaf people, who remained tethered to desktop communication because there was simply no effective mobile alternative. It was not until hearing-aid compatibility standards, along with front-facing cameras, were implemented on smartphones (and later, tablets) that deaf people could finally use their “phones” anywhere.
I showed the Google Glass video (posted above) to my Introduction to Marketing classes at Gallaudet University, a higher-education institution for the deaf and hard-of-hearing, as an in-class example of an innovative new product. The goal was to encourage the class to identify which markets would be attracted to this product. The feedback was almost uniform among my students: Google Glass was not Deaf-friendly, particularly for those who primarily use sign language to communicate.
For a technology that could dominate pop culture in the next decade, this raises a relevant question: can Google Glass – or wearable computers in general – ever be completely accessible? Amazing possibilities exist for people who have difficulty using their hands to operate computers, tablets, or smartphones. And for deaf and hard-of-hearing people who rely on spoken language as a primary mode of communication, Google Glass could deliver captioning and voice recognition wherever they go – the ability to understand anyone, anytime, anywhere. For those who rely on sign language, however, wearable computing as currently designed fails the accessibility test.
Sign language requires communicators to position themselves almost face-to-face. The outward-facing camera in Google Glass will, at best, capture the wearer’s own signing from behind – and even then, at a very awkward angle. Most of the time, sign language will simply not be recorded by this product.
Wearable technologies are designed primarily to free the hands for other activities. Smartphones and tablets must be held with at least one hand during a video call, yet deaf signers need both hands free to communicate. Hands-free communication is the implied promise of Google Glass.
So if, as I predict, wearable technologies overtake smartphones as the next hot cultural product, it is my hope that innovators will find ways to incorporate visual communication into this increasingly popular technology platform.