The Wainhouse Research Blog
News & Views
on Unified Communications & Collaboration
From time to time I get excited about applications and today’s and tomorrow’s use cases that help humanity (maybe that’s why I’m drawn to ed tech, the healthcare vertical, and even public sector apps that support emergency responders).
So, I found myself torn as I made a major decision at the recent SXSW Interactive Conference in Austin, TX: attend Thad Starner’s talk Not Your Mama’s Wearables, or Ray and Amy Kurzweil on Collaboration and the Future? Oh, the horror: they were booked in the exact same time slot. Rather than rant about SXSW Interactive’s firehose of competing sessions, I’ll turn this to my advantage, as you’ll see in my conclusion below.
Ray Kurzweil is well known as an inventor, author, and futurist, with significant contributions to OCR, speech recognition, and text-to-speech (ever hear of Nuance Communications?), and for pushing the envelope on concepts such as the law of accelerating returns, the human neocortex, genetics, nanotechnology, robotics, Artificial Intelligence, and more. You can read about him here.
Sorry Ray (and daughter Amy, cartoonist and author): I went with Thad Starner. Why? Starner is an MIT graduate, a Professor at Georgia Tech, and the longstanding Technical Lead for Google Glass. Though I never owned Google Glass, I watched with bemusement as it progressed from over-hyped consumer tech to yesterday’s news. I always expected the product category to come back, and after hearing Starner, I believe it will. After all: this guy believes that wearable computers enable superpowers. Who can argue with that?
In his session Starner described how his work in voiceless speech recognition and brain-computer interfaces is likely to help fuse (his words) “minds, bodies, and devices” to create the UX of the future. He began with Google Glass. After walking us through a history of tech innovations that support today's and tomorrow's wearables (CMOS cameras, consumer lithium batteries, high-resolution small displays, cellular and Bluetooth network developments), Starner showed us several videos, one of which described how the firefighter Patrick Jackson designed a Google Glass app for first responders. Imagine a firefighter accessing the locations of fire hydrants while pulling up to a burning building, or accessing extraction plans as he enters the firestorm, or beaming on-scene video back to fellow responders (perhaps from a body camera, but someday perhaps from Glass). Another video showed a team of nurses working on a resuscitation exercise, each wearing Glass and able to see or access body-function information or search for healthcare data.
These sorts of apps are just one small piece of the future of wearables. Starner went on to talk about other futures that are being made possible by advances in speech recognition technologies, nanotech, magnetometers, LTE and high-speed 5G cellular networks, and cloud computing.
Imagine tongue piercings that allow a mute person to mouth words and trigger what’s called silent speech recognition. Imagine sensors that can recognize phrases from brain signals, helping individuals who have lost motor control to communicate via brain-computer interfaces. Imagine wearable gloves that provide passive haptic learning, building motor memory without the wearer paying attention: in his video, a subject learns a piano tune in 20 minutes using tactile gloves. All of these are being developed in labs, and some are already being commercialized.
Finally, Starner was joined by his colleague Dr. Melody Jackson, who demonstrated the Fido Project (Facilitating Interactions for Dogs with Occupations). Fido’s goal is to facilitate human-animal interaction, using technology to enable far richer communication by giving animals methods of telling us what they see or sense. Animals can be trained to use sensor-enabled wearables or other technologies to share information with their handlers, putting their powers of discrimination to specific use: “hey, a tornado warning is going off, wake up!” Or “hey, you are about to have a seizure and I’m going to push you against the wall and lick your face till you get through it.” As Dr. Jackson asked, “what if that dog could activate a sensor on a vest that calls 911 and texts one’s spouse?”
One of the Fido Lab Dogs About to Show Off
Lately it’s been trendy for pundits to “go long and short,” describing where they are bullish and where they are negative. I’m going long on wearables. When I think about commercial apps, I’m a bit shorter on Virtual Reality for anything other than learning, gaming, or marketing. But the use cases for wearables are too compelling. Google Glass – whether from Google or other vendors – will be back, joined by a wide assortment of sensor-laden clothing, mitts, headpieces, and ear- or tongue-worn devices. Let’s not forget augmented reality, which will be an essential element of those wearables and which, according to some, just may help kill off smartphones.
This has implications for those building conference room technologies and personal collaboration technologies as well: will the keyboard and mouse, or the smartphone/tablet touchscreen, be the UX of the future? I’m going long that a lot of other approaches to UX will begin to enter our collaboration industry over time.
Well, I missed Kurzweil’s talk, but one of his famous statements could just as easily apply to Google Glass: “I realize that most inventions fail not because the R&D department can’t get them to work, but because the timing is wrong—not all of the enabling factors are at play where they are needed. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment.”