Tag Archives: Google

“Alexa, add voice shopping to my to-do list”

Amazon is promoting voice shopping as part of its deals for Prime Day next week. Shoppers will get a $10 credit just for making their first voice purchase from a list of "Alexa Deals", items that are already heavily discounted. That's a major incentive just to push consumers into something that should actually be a great benefit – effortless, simple, zero-click shopping. Why does Amazon have to go through so much trouble to get shoppers to use something that is supposedly so helpful?

To understand the answer, it's worth first understanding how valuable voice shopping is to Amazon. In all demos and videos for the various Alexa devices, voice shopping is positioned as the perfect tool for spontaneous, instant purchases, such as "Alexa, I need toilet paper / diapers / milk / dog food / …" That easily explains why you need to be an Amazon Prime subscriber to use voice shopping; getting Prime into every household is a cornerstone of Amazon's business strategy.

In addition, Alexa orders are fulfilled through 1-Click payment, yet another highly valuable Amazon tool. Amazon also guarantees free returns for Alexa purchases, just in case you're concerned about getting your order wrong. Combine all of these and you can see how voice shopping is built to create a habit of shopping as a frictionless, casual activity. That is probably also why the current offer does not apply to voice shopping from within Amazon's app, where the long process of launching the app and reaching its voice search ruins the spontaneity.

And yet – shoppers are not convinced. On last year's Prime Day, a similar Amazon promotion drove on average one voice order per second. This may sound like a lot, but ~85K orders are still a tiny fraction of the roughly 50M orders consumers placed on Amazon that day. This year Amazon has raised the incentive even further, which indicates there is still much convincing to do. Why is that?

Mute Button by Rob Albright @ Flickr (CC)

For starters, Amazon's Alexa devices were never built to be shopping-only. Usage surveys consistently show that most users prefer to use the Alexa assistant to ask questions, play music, and even set timers, far more than to shop. This does not mean Amazon has done a bad job; quite the contrary. Voice shopping may not be much of a habit initially, and getting used to voice-controlling other useful skills helps build habit and trust. The problem is that when you focus on non-shopping uses, you also get judged by them. That's how Amazon ends up with headlines such as "Google Assistant is light-years ahead of Amazon's Alexa", with popular benchmarks measuring it by search, question answering and conversational AI – fields where Google has historically invested orders of magnitude more than Amazon. The upcoming HomePod by Apple is expected to complicate Amazon's standing even further, with Apple poised to control the slot of the sophisticated, music-focused, high-end smart home device.

The "How it works" page for the Prime Day Alexa deals hints at other issues customers have with voice shopping in particular. The explanations aim to reassure that no unintended purchases will take place (triggered by your kids, or even your TV), and that if an imperfect voice interaction got you the wrong product, returns are free for all Alexa purchases. These may sound like solved issues, but keep in mind that the negative (and often unjustified) coverage of unintended purchases has led countless Echo owners to set a passcode on ordering – a major setback for the frictionless zero-click purchasing Amazon is after.

But most importantly, voice-only search interfaces have not yet advanced to support interactions more complex than simple, context-less pattern recognition. It's no accident that the most common purchase flows Alexa supports revolve around re-ordering, where the item is a known item and no real search takes place. This means that using Alexa for shopping may work well only for simple pantry shopping, and only if you have already made such purchases in the past. Google, on the other hand, is better positioned than Amazon in this respect, having more sophisticated conversational infrastructure. It even enables external developers to build powerful, context-aware Google Assistant apps using tools such as api.ai (for a quick comparison of these developer platforms, see here).
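To make the contrast concrete, here is a minimal sketch of what a context-aware fulfillment webhook on api.ai might look like. The action name, parameters and context below are invented for illustration, and the request/response fields follow api.ai's v1 webhook format, so treat this as a sketch to check against the docs rather than a reference implementation.

```python
# Hypothetical api.ai (v1) fulfillment webhook; the action, parameters and
# context names are made up for illustration.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    action = req["result"]["action"]        # e.g. "shop.search" (hypothetical)
    params = req["result"]["parameters"]    # e.g. {"item": "dog food"}

    if action == "shop.search":
        item = params.get("item", "")
        # Keep the item in an outgoing context so a follow-up such as
        # "make it the large bag" can refine the same search.
        return jsonify({
            "speech": f"I found a few options for {item}. Want the top-rated one?",
            "displayText": f"Options for {item}",
            "contextOut": [{"name": "shop_item", "lifespan": 5,
                            "parameters": {"item": item}}],
        })

    return jsonify({"speech": "Sorry, I didn't get that."})

if __name__ == "__main__":
    app.run(port=8080)
```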

So what might Amazon be doing to make voice shopping more successful?

Re-ordering items is the perfect beginner use-case, being the equivalent of "known item" searches. Amazon may work on expanding the scope of such cases, identifying additional recurring purchase types that can be optimized. These play well with other recent Amazon moves, such as those around grocery shopping and fulfillment.

Shopping lists are a relatively popular Alexa feature (as they are on Google Home), but owner testimonials suggest that most users keep them for offline shopping. Amazon is likely working to identify more opportunities to drive online purchases from these lists.

Voice interfaces have so far focused mainly on returning a single result, yielding an "I'm Feeling Lucky" interaction. Using data from non-voice interactions, Amazon could build a more interactive script, one that guides users through more complex decisions. An interesting case study here is eBay's "ShopBot" chatbot, though transitioning it to voice-only control remains a UX challenge.
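As a rough illustration of such a script (everything here – the attribute names, prompts and candidate data – is hypothetical, not an Amazon API), the flow is essentially a small decision tree that narrows the candidate set one question at a time:

```python
# Hypothetical sketch of a voice "search and refine" flow: instead of
# returning one result, the assistant asks narrowing questions until
# the candidate set is small enough to suggest a single product.
QUESTIONS = [
    ("size",  "Do you want the small or the large pack?"),
    ("brand", "Any preferred brand, or should I pick the best rated?"),
]

def refine(candidates, answers):
    """Keep only candidates matching the attributes answered so far."""
    for attr, value in answers.items():
        if value is not None:
            candidates = [c for c in candidates if c.get(attr) == value]
    return candidates

def next_prompt(candidates, answers):
    """Return the next narrowing question, or a final suggestion."""
    if candidates and len(candidates) <= 2:
        return f"I'd go with {candidates[0]['name']}. Should I order it?"
    for attr, prompt in QUESTIONS:
        if attr not in answers:
            return prompt
    return "I couldn't narrow it down further. Want me to read out the top three?"
```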

And finally – it's worth noting that in the absence of an item in the purchase history (or if the user declines it), Alexa recommends products from what Amazon calls "Amazon's Choice": "highly rated, well-priced products", as quoted from this help page. This feature is in fact a powerful business tool, pushing vendors to compete for this lucrative slot. In the more distant future, users may trust Alexa to the point of simply taking its word and assuming this is the best product for them. That would place a huge lever in Amazon's hands in its relationship with brands and vendors, and it's very likely that other retailers as well as brands will fight for similar control, raising the stakes on voice search interfaces even further.

Feeling Lucky Is the Future of Search

If you visit the Google homepage on your desktop, you’ll see a rare, prehistoric specimen – one that most Google users don’t see the point of: the “I’m Feeling Lucky” button.

Google has already removed it from most of its interfaces, and even here it only serves as a teaser for various Google nitwit projects. And yet the way things are going, the “Feeling Lucky” ghost may just come back to life – and with a vengeance.

In the early years, the “I’m Feeling Lucky” button was Google’s way of boldly stating “Our results are so great, you can just skip the result lists and head straight to destination #1”. It was a nice, humorous touch, but one that never really caught on as users’ needs grew more complex and less obvious. In fact, it lost Google quite a lot of money, since skipping the result list also meant users saw fewer and fewer sponsored results – Google’s main income source. But usability testing showed that users really liked seeing the button, so Google kept it there for a while.

But there’s another interface rising up that prides itself on returning the first search result without showing you the list. Did you already guess what it is?

Almost every demo of a new personal assistant product includes questions being answered by the bot tapping into a search engine. The demos make sure to use simple single-answer cases, like "Who is the governor of California?" That's extremely neat, and would have been regarded as science fiction not so many decades ago. Impressive work on query parsing and entity extraction from search results means that for this type of query, both the query understanding and the resulting answers are usually outstanding.

However, these are just some of the possible searches we want our bots to run. As we get more and more comfortable with this new interface, we will not want to limit ourselves to one type of query. If you want to be able to get an answer for “Give me a good recipe for sweet potato pie” or “Which Chinese restaurants are open in the area now?”, you need a lot more than a single answer. You need verbosity, you need to be able to refine – which stretches the limits of how we perceive conversational interfaces today.

Part of the problem is that it’s difficult for users to understand the limits of conversational interfaces, especially when bot creators pretend that there are no such limits. Another problem lies in the fact that a natural language interface may simply be a lousy choice for some interaction types, and imposing it on them will only frustrate users.

There is a whole new paradigm of user interaction waiting to be invented, to support non-trivial search and refine through conversation – for all of those many cases where a short exchange and single result will probably not do. We will need to find a way to flip between vocal and visual, manage a seamless thread between devices and screen-based apps, and make digital assistants keep context on a much higher level.

Until then, I guess we’ll continue hoping that we’re feeling lucky.

Marketing the Cloud

IBM made some news a couple of days ago, announcing that consumers can now use Watson to find the season's best gifts. A quick browse through the app, which is actually just a wrapper around a small dedicated website, shows nothing out of the ordinary – Apple Watch, televisions, Star Wars, headphones, Legos… not much supercomputing needed. No wonder coverage turned sour after the initial hype, so what was IBM thinking?

Rewind the buzz machines one week back. Google stunned tech media by announcing it is open-sourcing its core AI framework, TensorFlow. The splashes were high: "massive potential", "Machine Learning breakthrough", "game changer"… but after a few days the critics were out, with Quorans complaining about the library's slowness and even Google-fanboy researchers wondering – what exactly is TensorFlow useful for?

Nevertheless, within three days Microsoft announced its own open-source Machine Learning toolkit, DMTK. The Register was quick to mock the move, saying "Google released some of its code last week. Redmond's (co-incidental?) response is pretty basic: there's a framework, and two algorithms"…

So what is the nature of all these recent PR-like moves?

There is one high-profit business shared by all of these companies: cloud computing. Amazon leads the pack in revenue, using the cash flow from its cloud business to offset losses from its aggressive ecommerce pricing, while Microsoft and Google are assumed to come next with growing cloud businesses of their own. Google even goes as far as predicting that its cloud revenue will surpass its ads revenue within five years. It is the gold rush era for the industry.

But first, companies such as Microsoft, Google and IBM need to convince corporations to hand their business to them rather than to Amazon. Hence they have to create as much "smart" buzz for themselves as they can, so that executives in these organizations, already fatigued by big-data buzzwords, will say: "We must work with them! Look, they know their way around all this machine-learning-big-data-artificial-intelligence stuff!"

So the next time you hear some uber-smart announcement from one of these companies that feels like too much hot air, don’t look for too much strategy; instead, just look up to the cloud.

Thoughts on Plus – Revisited

Two weeks ago, Google decided to decouple Google+ from the rest of the Google products and to stop requiring a G+ login when using those other products (e.g. YouTube), in effect starting to gradually put it out of its misery. Mashable published an excellent analysis of the entire history of the project, and of the hubris demonstrated by Vic Gundotra, the Google exec who led it.

Bradley Horowitz, who conceived Google+ along with Gundotra and is now the one overseeing the transition, laid out the official Google story in a G+ blog post. He talked of the double mission Google assigned to the project – to become a unifying platform as well as a product in its own right. A heavy burden to carry, as in many cases these two missions will surely conflict with each other and mess up the user experience, as they did. Horowitz also explains what G+ should have focused on, and now will: "…helping millions of users around the world connect around the interest they love…"

Well, unfortunately Horowitz seems not to be a regular reader of Alteregozi 🙂 Had he read this post, published exactly 4 years ago right here, perhaps G+ would have had more differentiation, and a chance.

Thoughts on Plus

So what's the deal with Google+? Is Google really taking on Facebook? Is this a classic "me too" play, or something smarter?

It took me a while to form my opinion, but several interesting articles got the stars aligned just right for a split second to make some sense (until new developments soon de-align them again :-)).

Take a deep breath. OK, here it comes:

Google+ is Google’s take on Social.

Yes, I know, who would have thought?…
It’s just that Google’s definition of Social is a bit different.

At Facebook (and really, for most of us), Social is about conversations with people you actually know.
At Google, Social is the new alias for Personalization.

It's pretty simple: Google's business model has always meant that the more it knows about you, the better it can monetize through more targeted ads. At first, it was all about the search engine being where you always start your surfing, and Google was well seated. As traffic to social networks grew, culminating with Facebook overtaking Google in March 2010, it became increasingly clear that a larger portion of our information now starts out being served to us from social networks. Google was left out.

Why was that so important? Google still had tons of searches, an ever-growing email market share, and a successful news aggregator and RSS reader, among other assets. That's quite a lot to know about us, isn't it?

It turned out that the missing link was often the starting point. You would learn about the new thing, the new trend, the new gadget you want to get, while you were out of Google's reach. By the time you got into the Google web, you may have already made up your mind about what to buy and even where, making Google's ads a lot less effective.

The Follow-versus-Friend model is also a huge issue. It means that G+ is about self-publishing and positioning yourself, not about conversations. That suits Google very well, and is not just a differentiation from FB. This model drives you to follow based on interest, building an interest graph rather than a social graph – one that is a lot more useful for profiling you than your social connections are.

That interest graph, in turn, makes sure your first encounter with the things that make you tick happens inside the Google web. It also links back well to the assets Google holds today, from your docs to your publishing tools. So when Google News announced those funny badges and you thought, "Heh, who would want to put these stinking badges on their profile…" – think again. Their private nature is just fine for Google: it's a way to ask you to validate your inferred interests. "So tell us, is that interest of yours in US politics that we have inferred from your news reading a real, inherent interest, or just a transient one that will melt away after the election?" Again – a big difference for profiling.

Finally, Google+ is positioned to be a professional network. Focusing on interests and letting anyone follow you will keep away the teens and lure the self-proclaimed professionals. In that sense, LinkedIn may have more reason for concern, at least as the content network it is now trying to be. It's quite likely that G+ does not even aim to unseat Facebook, only to dry it out of its professional appeal and leave it with what we started with – party and kids photos, and keeping track of what those old friends are up to.

I guess I already know what network I’ll be posting a link to this post to…

Google Nails Down Social Search

Google's Social Search is doing the walk while all the rest are just doing the talk. As soon as I activated the Social Search experiment, my next search yielded a social result. No setup needed, results show how I am connected to them (including friends of friends), and they appear as part of the standard web results…

Contrast this with Microsoft's poor attempt at "social search", which indexes tweets and status messages and shows them regardless of who the searcher is (example search; you've got to be on the "United States" locale on Bing to see it).

Then also contrast it with Facebook's announcement back in August of its implementation of searching within friends' posts – a less grandiose announcement that nonetheless delivered a far more social experience than Bing's. Still, it's a very limited experience and far from being a true information source for any serious search need.

So how does Google overcome the main obstacle – collecting your connections?

Google relies on its own sources and on open sources it can obtain by crawling the social graph. That is the true reason why Facebook is not part of Google's graph (there is no XFN/FOAF markup on Facebook's public pages). Google may be counting on Facebook's inevitable opening up, and with Gmail's rising popularity it becomes a reasonable alternative even for Facebook users like me.
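For readers unfamiliar with XFN: it is simply a set of rel values on ordinary HTML links (rel="friend", rel="met", rel="colleague" and so on) that declare a public relationship between pages. A crawler collecting such links can stitch a social graph together roughly along these lines – a simplified sketch using Python's standard HTML parser, with the profile URL made up for the example:

```python
# Simplified sketch: extract XFN-style relationship links from a public
# profile page, which is roughly what a social-graph crawler collects.
from html.parser import HTMLParser

XFN_VALUES = {"friend", "contact", "acquaintance", "met", "colleague", "co-worker"}

class XFNLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.edges = []  # (relationship, target URL) pairs found on the page

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rels = set((attrs.get("rel") or "").lower().split())
        for rel in sorted(rels & XFN_VALUES):
            self.edges.append((rel, attrs.get("href")))

parser = XFNLinkParser()
parser.feed('<a href="http://example.com/alice" rel="friend met">Alice</a>')
print(parser.edges)  # [('friend', 'http://example.com/alice'), ('met', 'http://example.com/alice')]
```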

Sadly, all this great news gave zero credit to Delver, where it all happened first.

Bart Simpson working at Google??

“Phone call for Al…Al Coholic…is there an Al Coholic here?”
“Wait a minute… Listen, you little yellow-bellied rat jackass, if I ever find out who you are, I’m gonna kill you!”

Sweet little Bart Simpson must have hacked his way into the training data the guys at Google Scholar are using. I was running a simple Google query for user manuals that Googlebot indexed at sears.com, and got these goodies in the results:

[Image: Google Scholar results for the sears.com manuals query]
For the perplexed readers, the image below is what the Google Scholar parser saw for the DVD result (click to enlarge); it assumed it was an academic paper and desperately tried to find an author name. As Google freely admits, "…Automated extraction of information from articles in diverse fields can be tricky". Yep.

[Image: the Sony DVD manual page as seen by the Google Scholar parser]

It gets even better: since there are many such “academic papers” with the same author name, Google clusters them together, even when the manuals are for different products. Try one of those “All xxx versions” links, e.g. this one, all by our good friend O. Instructions. Interested students are encouraged to proceed and find out the etymology of other fascinating author names such as R. Parts and NO. Model.

And what about our old friend Al Coholic, you ask? Well, Google Scholar tells us he did actually publish something! But wait – 1877? Annals of the New York Academy of Sciences? Young Simpson, have you no shame, boy!?