Tag Archives: Search

ChatGPT will kill Google Queries, not Results

It has been a very long time – too long – since Search was disrupted. So it was only appropriate for the hype cycle to reach disproportionate levels when both Microsoft and Google embraced Large Language Models, namely ChatGPT and Bard, as part of their search experience. It should be noted that “experience” is the key word here, since LLMs have been part of the search backend for quite some time now, working behind the scenes.

By now, we’ve all heard the arguments against these chat-like responses as direct search results. The models are trained to return a single, well-articulated piece of text that purports to provide an answer, not just a list of results. Current models were trained to respond confidently, with less emphasis on accuracy, and it clearly shows when they are put to actual use. Sure, it’s fun to get a random recipe idea, but getting wrong information about a medical condition is a totally different story.

So we are likely to see more effort invested in providing explainability and credibility, and in training the models to project the appropriate confidence based on sources and domain. The end result may be an actual answer for some queries, and more of a summary of “what’s out there” for others, but in all cases there will likely be a reference to the sources, letting searchers decide whether they trust the response, or still need to drill into classic links to validate it.

This raises the question, then – is this truly a step function compared to what we already have today?

A week ago, I was on the hunt for a restaurant to go to with a friend visiting from abroad. That friend had a very specific desire – to dine at a fish restaurant that also serves hummus. Simple enough, isn’t it? Asking Google for “fish restaurants in tel aviv that also serve hummus” quickly showed it was very much not so. Google simply failed to understand me. I got plenty of suggestions, some serving fish and some serving hummus, but with no guarantee of both. I had to painstakingly check them out one by one, and most of them had either one or the other. I kept refining the query over and over, as my frustration kept growing.

With the hype still fresh on my mind, I headed over to ChatGPT:

Great. That’s not much help, is it? I asked for a fish restaurant and got a hummus restaurant. For that level of understanding, I could have stuck with Google. Let’s give it one last try before giving up…

That, right there, was my ‘Aha’ moment.

This result, and the validation it included, was precisely what I was looking for. ChatGPT’s ability to take both pieces of context and combine them in a way that reflected back to me exactly what information it was providing made all the difference.

This difference is not obvious. Almost all of the examples in those launch events would have worked just as well with keywords. The primary examples in Google’s Bard announcement post (beyond the “James Webb” fiasco) were “is the piano or guitar easier to learn, and how much practice does each need?” and “what are the best constellations to look for when stargazing?“. But take any of these as a regular Google query, and you will get a decent result snippet from a trusted source, as well as a list of very relevant links. At least here you know where the answer is coming from, and can decide whether to trust it or not!

Left: Bard results from announcement post. Right: current Google results for the same query

In fact, Bing’s announcement post included better examples – ones that would work in chat, but would not be served well by classic search results – such as “My anniversary is coming up in September, help me plan a trip somewhere fun in Europe, leaving from London” (“leaving from London” is not handled well in a search query), or “Will the Ikea Klippan loveseat fit into my 2019 Honda Odyssey?” (plenty of related search results, but not for this exact IKEA piece).

The strength of the new language models is their ability to understand a much larger context. When Google started applying BERT to its query understanding, that was a significant step in the right direction, moving further away from what its VP of Search described as “keyword-ese”: writing queries that are not natural, but that searchers imagine will convey the right meaning. A query he used there was “brazil traveler to usa need a visa”, which previously gave results for US travelers to Brazil – a perfect example of how looking only at keywords (the “bag of words” approach) fails when the entire context is not examined.
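To make the bag-of-words failure concrete, here is a toy sketch (my own illustration, not Google’s actual pipeline): any order-free representation of these two queries is identical, even though the intents are opposite.

```python
from collections import Counter

def bag_of_words(query: str) -> Counter:
    """Order-free term counts -- all that a bag-of-words model sees."""
    return Counter(query.lower().split())

# The query from the BERT announcement, and its reversed intent:
q1 = bag_of_words("brazil traveler to usa need a visa")
q2 = bag_of_words("usa traveler to brazil need a visa")

# Identical bags, opposite meanings: who travels where is lost
# once word order (i.e. context) is discarded.
print(q1 == q2)  # True
```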

I am a veteran search user, and I am still cognizant of these constraints when I formulate a search query. That is why I find myself puzzled when my younger daughter enters a free-form question into Google rather than translating it into carefully selected keywords, as I do. Of course, that should be the natural interface – it just doesn’t work well enough. That is not just a technical limitation: human language is difficult. It is complex, ambiguous, and above all, highly dependent on context.

New language models can enable the query understanding modules in search engines to better understand these more complex intents. First, they will do a much better job of capturing the context around keywords. Second, they will provide reflection; the restaurant example demonstrates how simply reflecting the intent back to users, enabling them to validate that what they got is truly what they meant, goes a long way toward compensating for the mistakes that NLP models will continue to make. And finally, the interactive nature – the ability to reformulate the query in response to this reflection by simply commenting on what should change – will make the broken experience of today feel more like a natural part of a conversation. All of these will finally get us closer to that natural human interface, as the younger cohort of users so rightfully expects.

Semantic Search using Wikipedia

Today I gave my master’s thesis talk at the Technion, as part of my degree duties. Actually, the non-buzzword title is “Concept-Based Information Retrieval using Explicit Semantic Analysis”, but that’s not a very click-worthy post title :-)…

The whole thing went far better than I expected – the room was packed, the slides flew smoothly (and quickly too; luckily Shaul convinced me to add some spare slides just in case), and I ended up with over 10 minutes of Q&A (an excellent sign for a talk that went well…)

Click to view on Slideshare

BTW – does anybody have an idea how to embed Slideshare into a hosted blog? It doesn’t seem to work…

Google Labs is now Google

Quick, name this search engine!


No, not Kumo. That’s Google’s recent launch, trying to compete with Twitter search (“Recent results”), to preempt Microsoft (clustering result types), to show off a different, though quite ugly, UI metaphor (“wonder wheel”), and generally to roll out a whole bunch of features that should have been Google Labs features before making (or not making) their way into a public product. So what’s next? Buttons next to search results for moving them up or down, with no opt-out?? Ah, wait, that waste of real estate is already there.

Flash Gordon Gets the Drop on Arch-Enemy Ming the Merciless - Flickr/purpleslog

Someone is panicking. OPEN FIRE! ALL WEAPONS!!! DISPATCH WAR ROCKET AJAX!!! The same spirit that brought us the failure of Knol is bringing us yet more unnecessary novelty, but this time it’s a cacophony of features, each deserving a long Google Labs quarantine of its own.

I’ve noticed that many of my recent blog posts have to do with criticizing Google :-). I wrestle with that – there really ought to be more interesting stuff to blog about in the IR world, and there is also great stuff coming from Google (can you believe the fantastic similar-images feature is still in Labs? Can Google please apply it to the ridiculously useless “similar pages” link in the main web search results??) – but I truly think we are seeing a trend. Google is dropping the ball, losing the clear and spotless logic we saw in the past, and the sensible, slow graduation of disruptive features from Google Labs. Sadly, though, it’s not clear anyone is there, ready to pick that ball up…

Clustering Search (yet again)

Microsoft is rolling out an internal test of a search experience upgrade to Live (codenamed Kumo) that clusters search results by aspect. See the internal memo and screenshots covered by Kara Swisher.

As usual, the immediate reaction is – regardless of the actual change, how feasible is it to assume you could make users switch away from their Google habit? But let’s try to put that aside and look at the actual change.

Search results are grouped into clusters based on the aspects of the particular query. The idea is far from new; it was attempted in the past by both Vivisimo (at Clusty.com) and Ask.com. One difference, though, is that Microsoft pushes the aspects further into the experience, showing a long page of results with several top results from each aspect (similar to Google’s push with spelling mistakes).

At least judging by the (possibly engineered) sample results, the clustering works better than previous attempts. Most search engines take the “related queries” twist on this, while Kumo includes related queries as a separate widget:

Clusty.com’s resulting clusters, on the other hand, are far from useful for a serious searcher with enquire/purchase intent.

At least based on these screenshots, it seems Microsoft has succeeded in distilling interesting aspects better, while maintaining useful labels (e.g. “reviews”). Of course, it’s possible this is all a “toy”, limited example, e.g. one built on a predefined ontology. But together with other efforts, such as the “Cashback” push and the excellent product search (including review aggregation and sentiment analysis), it seems Microsoft may be in the process of positioning Live as the search engine for ecommerce. Surely a good niche to be in…


We’re sorry… but we ran out of CAPTCHAs

Sometimes I want to check the exact number of pages indexed in Google for some query. You know how it goes – you enter a query, it says “Results 1 – 10 of about 2468 gazillions“, then when you page forward enough, the number drops slightly to, say, 37 results. Trouble is, very quickly Google decides I’m a bot and blocks me:


Now, it’s quite clear Google has to fight tons of spammers and SEO people who bomb them with automated queries. But that’s what CAPTCHAs are for, isn’t it? Well, for some reason Google often skimps on them, and instead provides you with the excellent service of a referral to CNET to get some antivirus software. Dumb.

The amazing part is that you can get this from a single, well-defined, world-peace-disrupting query like allintitle:”design”. Booh!

Why Search Innovation is Dead

We like to think of web search as quite a polished tool. I still find myself amazed at the ease with which difficult questions can be answered just by googling them. Is there really much further to go from here?

"Hasn't Google solved search?" by asmythie/Flickr

Brynn Evans has a great post on why social search won’t topple Google anytime soon. In it, she shares some yet-to-be-published results showing that difficulty in formulating the query is a major cause of failed searches. That resonated well with some citations I’m collecting right now for my thesis (on concept-based information retrieval). It also reminded me of a post Marissa Mayer of Google wrote some months ago, titled “The Future of Search“. One of the main items in that future of hers was natural language search, or as she put it:

This notion brings up yet another way that “modes” of search will change – voice and natural language search. You should be able to talk to a search engine in your voice. You should also be able to ask questions verbally or by typing them in as natural language expressions. You shouldn’t have to break everything down into keywords.

Mayer gives some examples of questions that are difficult to formulate as keywords. But clearly the searcher has the question in their head, so why not just type it in? After all, Google does attempt to answer questions. Mayer (and Brynn too) mentions the lack of context as one reason: some questions, if phrased naively, refer to the user’s location, activities or other context. It’s a reasonable, though somewhat exaggerated, point. Users aren’t really that naive or lazy; if instead of searching they called up a friend, they wouldn’t ask “can you tell me the name of that bird flying out there?”. The same information they would provide verbally, they can also provide to a natural-language search engine, if properly guided.

The more significant reason, in my eyes, revolves around habits. Search is usually a means, rather than a goal. So we don’t want to think about where and how to search; we just want to type something quickly into that good old search box and fire it away. It’s no wonder that the search engine most bent on sending you away ASAP has the most loyal users coming back for more. That same engine even has a button that hardly anyone uses, supposedly costs them over $100M a year in revenue, and sends users away even faster. So changing this habit is a tough task for any newcomer.

But these habits go deeper than that. Academic researchers have long studied natural-language search and concept-based search. A combination of effective keyword-based search with a more elaborate approach that kicks in when the query is a tough one could have gained momentum, and some attempts were made at commercial products (most notably Ask, Vivisimo and Powerset). They all failed. Users are so used to the “exact keyword match” paradigm, the total control it provides them, and its logic (together with its shortcomings), that a switch is nearly impossible – unless Google drives such a change.

Until that happens, we’ll have to continue limiting innovation to small tweaks on top of the incumbent authorities…

Evaluating Search Engine Relevance

Web search engines must be the most useful tools the Web has brought us. We can answer difficult questions in seconds, find obscure pieces of information, and stop bothering with organizing data. You would expect that systems with such impact on our lives would be measured, evaluated and compared, so that we could make an informed decision on which one to choose. Nope, nothing of the sort.

Some years ago, search engines competed on size. Danny Sullivan wrote angry pieces about that, and eventually they stopped; but still, six months ago Cuil launched and made a fool of itself by boasting about size again (BTW – Cuil is still alive, but my blog is not indexed, so not much coverage to boast about there).

Now, academic research on search (Information Retrieval, or IR, in academic jargon) does have a very long and comprehensive tradition of relevance evaluation methodologies, with TREC being the best example. IR systems are evaluated, analyzed, and compared across standard benchmarks, and TREC researchers carry out excellent research into the reliability and soundness of these benchmarks. So why isn’t this applied to evaluating web search engines?

One of the major problems is, yes, size. Many of the challenges TREC organizers face involve scaling the evaluation methods and measurements to web scale. One serious obstacle is the evaluation measure itself. Most IR research uses Mean Average Precision (MAP), which has proved to be a very reliable and useful measure, but it requires knowing things you simply can’t know on the web, such as the total number of relevant documents for the evaluated query. Moreover, since it is not derived from any user model, there was no indication that it actually measures true search user satisfaction.
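For concreteness, here is a minimal sketch (mine, not TREC’s code) of Average Precision for a single query; note how R, the total number of relevant documents in the collection, appears in the denominator – precisely the quantity that is unknowable at web scale. (MAP is simply this value averaged over a set of queries.)

```python
def average_precision(ranking, total_relevant):
    """AP for one query: mean precision@k over the ranks k that hold
    a relevant document. `ranking` is a list of 0/1 relevance labels
    by rank; `total_relevant` is R, the number of relevant documents
    in the whole collection (relevant docs never retrieved count as 0).
    """
    hits, precision_sum = 0, 0.0
    for k, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k
    return precision_sum / total_relevant if total_relevant else 0.0

# Relevant results at ranks 1, 3 and 6, out of (a known!) R = 5:
print(average_precision([1, 0, 1, 0, 0, 1], total_relevant=5))  # ~0.433
```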

Luckily, the latest volume of the TOIS journal (Transactions on Information Systems) includes a paper that could change that picture. Justin Zobel and Alistair Moffat, two key Australian figures in IR and IR evaluation (Zobel a veteran of TREC methodology analysis), suggest a new measure called “Rank-Biased Precision” (RBP). In their words, the model goes as follows:

The user has no desire to examine every answer. Instead, our suggestion is that they progress from one document in the ranked list to the next with persistence (or probability) p, and, conversely, end their examination of the ranking at that point with probability 1 − p… That is, we assume that the user always looks at the first document, looks at the second with probability p, at the third with probability p², and at the i-th with probability p^(i−1). Figure 3 shows this model as a state machine, where the labels on the edges represent the probability of changing state.

The user model assumed by rank-biased precision

They then go on to show that the RBP measure, derived from this user model, does not depend on any unknowns, behaves well under real-life uncertainties (e.g. unjudged documents, or queries with no relevant documents at all), and is comparable to previous measures in its ability to show statistically significant differences between systems.
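The resulting measure is simple enough to fit in a few lines. Here is a minimal sketch based on the formula from the paper, RBP = (1 − p) · Σᵢ rᵢ · p^(i−1):

```python
def rbp(ranking, p=0.95):
    """Rank-Biased Precision (Moffat & Zobel): (1 - p) * sum_i r_i * p^(i-1),
    where r_i is the (0/1 or graded) relevance at rank i and p is the
    user's persistence. Unlike MAP, it needs no count of all relevant
    documents in the collection; unjudged tail documents contribute nothing.
    """
    return (1 - p) * sum(rel * p ** i for i, rel in enumerate(ranking))

# The same run as above, with an impatient user (p = 0.5):
print(rbp([1, 0, 1, 0, 0, 1], p=0.5))  # 0.5 * (1 + 0.25 + 0.03125) ~= 0.64
```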

Ultimately, beyond presenting an interesting web search user model, RBP eliminates one more obstacle to a true comparison of search engine relevance. The sad reality, though, is that given the current poor state of Yahoo’s and Live’s result relevance, such a comparison may not show us anything new; but an objective, visible measurement could at least provide an incentive for measurable improvement on their part. Of course, then we’ll get to the other major issue: what constitutes a relevant result…

Update: I gave a talk on RBP in my research group, slides are here.

Did Facebook just drop Live Search… again?

Exactly 3 months ago, Facebook and Microsoft announced Live Search’s integration into Facebook. The search functionality was up, down, then up again.

Today, it doesn’t seem to be available anymore; the web tab is simply gone.



There doesn’t seem to be any buzz about it so far – is it just a temporary or local glitch?…

Update: OK, note to self – when they say “Now Facebook users in the U.S. have the option to ‘Search Facebook’ or ‘Search the Web’“, they probably mean it. Oh well. It is strange, though, that 3 months after integration, Live Search is still not rolled out in Facebook’s main growth segment, which is outside the US. Surely not a technical difficulty.

IBM IR Seminar Highlights (part 2)

The seminar’s third highlight for me (in addition to IBM’s social software and Mor’s talk) was the keynote speech by Human-Computer Interaction (HCI) veteran Professor Ben Shneiderman of UMD. Ben’s presentation was quite an experience – not in a sophisticated Lessig way (which Dick Hardt adopted so well for Identity 2.0), but rather through the sheer amount of positive energy and passion streaming out of this 60-year-old.

[Warning – this post turned out longer and heavier than I thought…]

Ben Shneiderman in front of Usenet Treemap - flickr/Marc_Smith

Ben is one of the founding fathers of HCI, and the main part of his talk focused on how visualization tools can serve as enhancers of human analysis, just as the web as a tool enhances our access to information.

He presented tools such as ManyEyes (IBM’s), Spotfire (his own high-tech exit), Treemap (with many examples of trend and outlier spotting) and others. The main point was what the human eye can do using those tools that no predefined automated analysis can, especially in fields such as genomics and finance.

Then the discussion moved to how to put such an approach to work in search, which, like those tools, is also a power multiplier for humans. Ben described today’s search technology as adequate mainly for “known-item finding”. The more difficult tasks, the ones that today’s search cannot answer well, are usually those that are not a “one-minute job”, such as:

  • Comprehensive search (e.g. Legal or Patent search)
  • Proving negation (Patent search)
  • Finding exceptions (outliers)
  • Finding bridges (connecting two subsets)

The clusters of current and suggested strategies to address such tasks are:

  • Enriching query formulation – non-textual, structured queries, results preview, limiting of result type…
  • Expanding result management – better snippets, clustering, visualization, summarization…
  • Enabling long-term effort – saving/bookmarking, annotation, notebooking/history-keeping, comparing…
  • Enhancing collaboration – sharing, publishing, commenting, blogging, feedback to search provider…

So far, pretty standard HCI ideas, but then Ben took this into the second part of the talk. The experimentation employed in these efforts by web players has built up an entire methodology that is quite different from established research paradigms: controlled usability tests in the lab are no longer the tool of choice, but rather A/B testing on masses of users with a careful choice of system changes. This is how Google/Yahoo/Live modify their ranking algorithms, how Amazon/Netflix recommend products, and how the Wikipedia collective “decides” on article content.

This is where the term “Science 2.0” pops up. Ben’s thesis is that some of society’s great challenges today have more to learn from computer science than from traditional social science. “On-site” and “interventionist” approaches should take over from controlled laboratory approaches when dealing with large social challenges such as security, emergency response, health and others. You (government? NGOs? web communities?) could make careful, actual changes to how specific social systems work, in real life, then measure the impact, and repeat.

This may indeed sound like a lot of fluff, as some think, but the collaboration and decentralization demonstrated on the web can be put to real-life use. One example at HCIL is the 911.gov project for emergency response – emergencies being a classic case where centralized systems collapse. Decentralizing the reporting and response loops can leverage the power of the masses beyond the Twitter journalism effect.