Back in early 2015, Google shut down its Google Glass Explorer program, and the media reaction was strikingly dismissive; headlines declaring Glass dead were the standard response.
However, Google claimed at the time that it was regrouping and refocusing on applying Glass in various work contexts — manufacturing, healthcare, construction — where the productivity benefits of augmented-reality ‘eyeables’ were undeniable.
As I wrote at the time,
The best way to view this is more like the Newton ‘failing’ as a device, but the DNA of that failure setting the stage for the iPod and then the iPhone. Yes, Google Glass ‘failed’ to capture mass market interest, and [Nest founder Tony] Fadell might want to distance whatever comes later from the Google Glass name — Glasshole will be hard to get away from. But in the industries where the technology has made a dent — like medical application (see Wearables, earables, eyeables: Welcome to the next wave of computing) — the response was very positive.
So, recently Google announced the return of Glass as Glass Enterprise Edition with the tagline:
Glass is a hands-free device, for hands-on workers.
Steven Levy wrote about the new Glass, relating the rise and fall of the first-generation product, which was buggy, awkward, and without a compelling use case — except for pissing off people who thought they might be getting videoed. But something was brewing in the background: a number of workplace technology companies were buying Explorer units and programming workplace AR apps for them:
In April 2014, Google started a “Glass at Work” program that highlighted some of the early developers. And that year when a few people from X visited Boeing, which was testing Glass, they reported that their minds were blown by a side-by-side comparison of workers doing intricate wire-framing work with Glass’s help. It was like the difference between putting together Ikea furniture with those cryptic instructions somewhere across the room and doing it with real-time guidance from someone who’d constructed a million Billys and Poängs.
In 2014, Google forked off a team to focus on the workplace opportunity, creating channel partnerships with companies that would build the apps and then market and support the products for end-user companies.
Along the way, Google and its partners fixed a number of Glass’s bugs and design flaws, making the device lighter, faster, and better. For example, the camera was upgraded from five megapixels to eight, and battery life was improved enough to last an eight-hour shift without recharging.
Companies like GE and DHL claim large productivity gains for workers using Glass. Healthcare professionals report large time savings from video recording and transcription of patient examinations, and less time spent looking at tablets and screens: more face-to-face, eye-to-eye time with patients leads to better outcomes.
The explosion of AI in driverless cars and other computationally intense areas is propelling investment in new chipsets that will move processing from cloud data centers directly onto mobile devices — smartphones, cars, and eyeables. The push in those fields will also accelerate the development of faster, more capable goggles from Google and others.
Google isn’t alone in this space. There is a great deal of blue sky innovation going on. The story of Mira is a great example.
Four University of Southern California seniors prototyped an innovative, low-cost approach to AR that relies on an iPhone as the brains of the device. Because they couldn’t afford Microsoft’s HoloLens — which goes for $3,000 — they built the earliest version of what is now their Prism headset using plastic fishbowls they bought on Amazon, as CEO Ben Taft tells it:
“To get custom one-off optics custom coated and everything would’ve cost us $10,000, and we simply didn’t have that,” Taft said. “To save money, instead of going a very complicated route, we basically were like, how can we make the low-cost version of that?
“We realized these plastic fishbowls on Amazon had a radius of curvature that was exactly right for what we needed, so we ordered $10 fishbowls on Amazon, cut out square lenses, and then the same film you put over windows to make them semireflective,” he continued. “We just prototyped it. And that’s what we raised the seed round on.”
The Prism holds the iPhone at a precise angle so that the screen is reflected off the fishbowl-derived lens via a semi-reflective film. The user sees the real world through the lens along with whatever the iPhone is displaying, and the iPhone also performs the visual recognition of the user’s surroundings.
The development of Mira Prism and Glass Enterprise Edition could not have happened without the moonshot that was Glass Explorer Edition. But now, just as the myriad applications of smartphones proceed without users or developers looking back at the Newton, we’re in a time when AR will move past and eventually forget the stumbling start of Glass.
In the near term we can expect AR to stream into every niche where its benefits apply — just about everywhere that ‘hands-on’ workers can use it. Voice computing is advancing at a breakneck pace, and the consumer application of voice will benefit eyeables greatly.
And I haven’t given up on the application of AR outside of work. My sense is that AR and voice together represent the next computing platform, one we will transition to very quickly over the next few years. At some point, we will get back to casually wearing AR goggles as we go through our day, relying on AR overlays in our field of view and voice communication with our devices. And no one will call us Glassholes for doing it.