Feature - 07/16/13
Don't Call Us, We'll Call You?
Google Bets We're Ready To Listen
By Mark McClelland

Computers autonomously trade stocks, manage the power grid, control air traffic, watch for terrorist activity, and perform myriad other functions that require tremendous coordination. To do this, they initiate conversations with each other, across networks of networks, day in and day out... but when it comes to interactions with us, we have historically preferred to be the ones to start things off. Google believes we're ready for that to change.
Prior to the Internet, personal computers served primarily as passive tools. We used them for record-keeping, editing and printing documents, performing calculations, and other tasks well-suited to a standalone machine. We would turn them on, give them a bit of work to do, wait for them to finish, then move on to the next thing. We essentially had a "Don't call us, we'll call you" relationship with them. When they gained communications capabilities, we started leaving them on for longer periods of time, and allowed them to interrupt when another person contacted us (think "you've got mail"). Hobbyists envisioned software agents that would collect information and give us vital news reports, but for the vast majority of people, computers simply didn't have much of their own to share.

With the rise of smartphones, computers have become much more tightly integrated into our lives, and yet - aside from reminders and message notifications - most people still prefer that smartphones speak only when spoken to. Even something as seemingly innocuous as auto-correct can be unbearably annoying. But Google is betting that it has the right combination of data, artificial intelligence, and innovative human-computer interfaces to bring about a paradigm shift. If it's right, our relationship with computers is in the midst of a transformation, and more people will be saying to their computers, "I want you in my life."
Back in the 90s, Microsoft made a highly visible attempt to create an application that was situationally aware and smart enough to make helpful, unsolicited recommendations. It was called Clippy, and it turned out to be one of the biggest flops in the history of user-interface design. The application, officially called Office Assistant, used crude artificial intelligence to offer contextually appropriate tips by way of an animated character, the default choice being Clippy, the too-cute paper clip. The basic idea was, since users rarely read Help documentation, maybe it would be better to bring Help to the users. Unfortunately, Clippy had little sense of the user's skill level, and its clumsy attempts at assistance were seen by most people as an insulting nuisance. It was a high-profile failure, and it cast a long shadow over the use of proactive artificial intelligence in mainstream software.

Fifteen years later, Google introduced Google Now, an intelligent personal assistant that goes far beyond offering tips on formatting a letter. If permitted, it takes advantage of your search history, calendar, email, location data, and Google's staggering knowledge of information trends to present you with updates it thinks you'll find useful. Combine it with the concept of Google Glass - a computer you wear like eyeglasses - and you have unsolicited artificial intelligence that's literally in your face. Will this combination work, or will it be another Clippy?
Google has far more situational data at its disposal than any of its predecessors, and more computing horsepower to make sense of it all. For many people, the idea of computers offering tips based on their habits and interests feels invasive and alienating. Others welcome it, seeing it as a chance to streamline and enhance their information-rich lives. If the concepts of Google Now can be tactfully applied to the head-mounted Glass display, which is always in your field of view, it will mark the beginning of a new era in human-computer interaction. In its first iteration, it may turn out to be awkward and overhyped, but ultimately, those receptive to unsolicited assistance from computers will have an advantage over those who are not.
On a smartphone, Google Now can already learn commute habits, tell you when the next bus will arrive, and use search history to recommend web pages of interest for you to read on your way to work. If allowed to access Gmail, it will offer shipping updates based on tracking numbers, flight status based on airline confirmations, and directions based on hotel or restaurant reservations, without your having to enter details or even ask to see these things. Knowing a user's location, it can highlight nearby attractions and photo spots, and even broadcast local safety alerts. When in a foreign country, it will offer translation and currency conversion tools. There are some fairly useful features here, and new cards are being added often.
Combine this kind of predictive assistance with a wearable camera and microphone, and the potential for this technology becomes far greater. Giving an intelligent personal assistant the ability to see what you see and hear what you hear vastly expands its situational awareness. To get a sense of where this could lead, imagine you're a parent at a backyard birthday party. Your child toddles over and says, "I just ate a blueberry!" He points to a nearby bush and says, "See?" But you're in the middle of a conversation, and it doesn't occur to you just how dangerous this "blueberry" might be. Fortunately, your assistant is a little more tuned in. Using natural language processing, it interprets what your child is saying, recognizes he's pointing not at a blueberry bush but at a Virginia creeper, and warns you that eating the berries of this plant can be fatal. Taking it one step further, it could even provide first aid tips and offer to call poison control.
While this scenario goes far beyond what Project Glass promises, it won't be long before this kind of sophistication is possible. Before you write off Google Glass as Clippy on steroids, think of where the technology is headed.
Even for those receptive to the guidance of well-informed software, there's still the issue of privacy. A computer aimed as much at the world as at the user, Glass is attracting scrutiny from many sources, including the congressional Bipartisan Privacy Caucus. While Google is taking steps to protect against privacy violations, such as disallowing facial recognition in Glass apps, there's nothing preventing rooted devices from bypassing such restrictions.
But does Glass actually introduce much that's new in this regard? Watches and glasses with hidden cameras have been around for years, and can be paired with a smartphone to achieve much of the privacy-violating potential of Glass. While there's clearly demand for new rules of etiquette, it's unlikely that privacy concerns will do much to prevent computers from seeing and interpreting the world. I, for one, am looking forward to some extra help keeping up with it all.


Mark McClelland is a software developer at a Chicago-based trading firm, and author of the award-winning novel Upload, a near-future techno-thriller, available at Amazon, Barnes & Noble, IndieBound, and most other booksellers. He blogs for The Huffington Post and writes poetry for his wife. His short stories have recently appeared in Communications of the ACM and FlashFlood.

Outside of coding and writing, he enjoys traveling, reading, studying foreign languages, drawing, sailing, gaming, and inventing cocktails as Honorary Chief Mixologist at Jo Snow Syrups. He is currently working on a board game, a follow-up to Upload, and a children's book.

