A newly launched iPhone application allows Google searches through voice alone. This brings us closer to a day when non-computing types can work and play in a Web 2.0 world. Imagine: if that future comes to pass, the productivity gains in many industries would be enormous.
More significant to us marketers, large swaths of the workforce will no longer consider the computing world to be hostile — or at the very least, impenetrable. As I speculated two years ago, many workers simply will not make portable computing a habit until it is easy enough to do through speech alone.
You might consider this Part II of a two-part post. Last week I reported on Powerset, Microsoft’s semantic search acquisition. Now, here is an exciting stride in the voice-recognition half of the hands-free computing equation.
Below is how the New York Times characterized the voice recognition arms race (at least, the race for the juicy prize of mobile search dominance):
Both Yahoo and Microsoft already offer voice services for cellphones. The Microsoft Tellme service returns information in specific categories like directions, maps and movies. Yahoo’s oneSearch with Voice is more flexible but does not appear to be as accurate as Google’s offering. The Google system is far from perfect, and it can return queries that appear as gibberish. Google executives declined to estimate how often the service gets it right, but they said they believed it was easily accurate enough to be useful to people who wanted to avoid tapping out their queries on the iPhone’s touch-screen keyboard.
The service can be used to get restaurant recommendations and driving directions, look up contacts in the iPhone’s address book or just settle arguments in bars. The query “What is the best pizza restaurant in Noe Valley?” returns a list of three restaurants in that San Francisco neighborhood, each with starred reviews from Google users and links to click for phone numbers and directions.
The emphasis above is mine. Here’s a demo of the new Google app for the iPhone:
This is going to get very interesting, very fast.
As Raj Reddy, an artificial intelligence researcher at Carnegie Mellon University, said in the New York Times piece: “Whatever [Google] introduces now, it will greatly increase in accuracy in three or six months.”
The semantic search problem, once solved, will help computers understand what people mean based on their wording and a phrase’s context. Voice recognition, on the other hand, requires something at least as daunting: penetrating regional accents. The most visible flaw in this first full week of the iPhone app’s release is that it is baffled by British accents.