Google Glass: the scientists behind Google's augmented reality glasses

 

From Terminator-style enhanced contact lenses to robot carers, the scientists behind Google Glass have created some eye-popping inventions, writes Shona Ghosh.
 Google Glass, the search giant's smart glasses concept, begins shipping in 2013

“We have succeeded in making people live longer. Now we need to make them live better.”
So reads the tagline for the Nursebot, a robotic carer that helps look after the elderly in their homes, reminding them to take their medication and see the doctor, and even doing the washing. This slightly creepy catchphrase, which sounds like something from a dystopian science fiction film, was dreamt up by Sebastian Thrun, the ex-Stanford professor responsible for bringing his robotics expertise to Google.
 Nursebot was a robot carer for the elderly.

If Thrun’s work on Google’s augmented reality glasses and driverless cars defies imagination, then Nursebot and the rest of his research point towards even more eye-popping inventions.

Thrun’s focus on how robots sense and navigate their surroundings won him the attention of Google co-founder Larry Page in 2005 when, along with his Stanford lab team, he unveiled Stanley, a self-driving car. Impressed by Stanley’s performance at a desert race sponsored by the US military, Page hired the team to apply their expertise in navigation to Google’s own mapping services. Google Street View was born in 2007, with driverless cars following a few years later.
This wasn’t Thrun’s first brush with Google, nor Google’s with robots. When working on the Nursebot in 2000, Thrun and his team won funding and spare parts from a local robotics obsessive named Andy Rubin. Rubin would go on to set up a software company – aptly named Android Inc. – and sell it to Google in the same year Thrun would meet Larry Page.

Thrun’s breakthroughs in driverless cars don’t just have implications for consumers. Funded by the US defence department, his team built the Segbot, a modified version of the Segway, to explore the scooter’s potential in battle. And citing a paper by Thrun and his co-researchers at Stanford, BAE Systems have recently patented a method of tracking that could be used by soldiers to navigate combat zones, particularly indoors where GPS readings become inaccurate. Given that Thrun’s Stanford co-researcher on driverless cars, David Stavens, worked on NASA’s 2009 Mars Rover project, it’s feasible that the technology has implications for space exploration too.

Now it seems Thrun is bringing some aspects of these prototypes to life in Google’s R&D labs, though the precise nature of his work there is under wraps. Recent patents filed under his name offer some hints, however: a 3D mapping system that helps self-driving vehicles detect road features such as traffic lights, and an autopilot system for cars. Google’s tests show its cars can already safely navigate the streets of Nevada, thanks in part to Thrun’s earlier work in 3D mapping.

If Thrun makes machines more intelligent, his co-creator on Google Glass, Babak Parviz, specialises in making humans more machine. Parviz was brought onto Glass due to his research on nanotechnology, essentially engineering at a molecular level. Or as his Google+ profile would have it: “I dig making really small things.”

As with Thrun, it isn’t entirely clear what advances in the field he has made in the secrecy of Google’s labs, but he has made prior breakthroughs in wearable technology. In 2008, he was tackling the problem of fusing electronics with unusual materials like plastic or glass – a clear precursor to his work on creating a pair of glasses with a visual display for Google.

Quite aside from Glass, though, Parviz has form in enhancing human sight, having already created augmented reality contact lenses. In 2009, he voiced his admiration for the Terminator films, pointing to the “virtual captions that enhance the cyborg’s scan of a scene.” By 2011, working with Microsoft, he had helped develop contact lenses that let diabetics monitor their blood sugar electronically, without the hassle of needles.
The smart contact lens contains hundreds of LEDs.

The lenses’ components are minute: big enough to accommodate LEDs, yet small enough not to melt the wearer’s eyes. Engineering at such a microscopic level required Parviz and his team to cram hundreds of LEDs into each lens, powering a Terminator-like display of words, charts and photographs.

Partway through the project, Parviz and his team noted that building tiny radios, sensors and antennae into the lenses meant they could keep tabs on bodily functions. According to Parviz, many of the “biomarkers” that doctors glean from blood samples are also found on the surface of the eye, meaning that suitably customised lenses could measure blood sugar or cholesterol levels. Low blood sugar would bring up an alert before the wearer’s eyes, prompting them to eat a snack before the effects of hypoglycaemia set in.

Also working on the project in 2011 was Microsoft senior researcher Desney Tan, who, curiously, identified the same flaws that might now hamper the adoption of Google Glass.

“They aren’t socially quite as intrusive as wearing the goggles that are sort of the state of the art in the field right now,” said Tan in a company blog post. Though developed before Google Glass, the lenses have not been tested for human use as yet and may not beat the search giant’s glasses to the consumer market.

Perhaps it’s no surprise that Parviz was attracted to Google. Long before he was approached by the search company, or even Microsoft, he had detailed his vision of contact lenses as a platform for developers.

“We already see a future in which the humble contact lens becomes a real platform, like the iPhone is today, with lots of developers contributing their ideas and inventions,” he wrote in 2009. “As far as we’re concerned, the possibilities extend as far as the eye can see, and beyond.”
