Yesterday’s seminar was also packed with interesting talks covering a wide range of social aspects of IR and NLP.
Mor Naaman of Rutgers University, formerly at Yahoo! Research, gave an excellent talk on using social inputs to improve the multimedia search experience. The general theme was discovering metadata for a given multimedia concept from web 2.0 sites, then using that metadata to cluster potential results and choose representative ones.
In one application, this approach was used to identify “representative” photos of a given landmark, say the Golden Gate Bridge (see World Explorer for an illustration). First, you’d find all Flickr photos geotagged and/or Flickr-tagged with the location and name of the bridge (or any given landmark). Next, image processing (SIFT) is applied to cluster those images into subsets that are likely to show the same section and/or perspective of the bridge. Finally, links between the images in each cluster are formed based on visual similarity, and link analysis is employed to find a “canonical view”. The result is what we see in the right sidebar of World Explorer, and is described in this WWW’08 paper.
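To make the last step concrete, here’s a minimal toy sketch in Python. Everything in it is invented for illustration: real systems match SIFT descriptors rather than tag sets, and the paper’s link analysis is more sophisticated than the simple degree counting used here.

```python
# Illustrative sketch only, not the paper's pipeline. Photos are toy
# feature sets standing in for SIFT descriptors.
from itertools import combinations
from collections import defaultdict

def similarity(a, b):
    # Toy similarity: Jaccard overlap of "features" (stand-in for SIFT matching).
    return len(set(a) & set(b)) / len(set(a) | set(b))

def canonical_photo(photos, threshold=0.5):
    """Link photos whose features overlap enough, then pick the most-connected
    one -- a degree-centrality stand-in for real link analysis."""
    degree = defaultdict(int)
    for (id_a, feat_a), (id_b, feat_b) in combinations(photos.items(), 2):
        if similarity(feat_a, feat_b) >= threshold:
            degree[id_a] += 1
            degree[id_b] += 1
    return max(photos, key=lambda pid: degree[pid])

# Toy example: three bridge shots from a similar angle, plus one outlier.
photos = {
    "img1": {"tower", "cable", "deck"},
    "img2": {"tower", "cable", "fog"},
    "img3": {"tower", "cable", "deck", "sky"},
    "img4": {"seagull", "water"},
}
print(canonical_photo(photos))  # the shot most visually linked to the others
```

The point of the sketch is just the shape of the idea: cheap social filtering narrows the candidate set, and the well-connected node in the visual-similarity graph becomes the canonical view.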
[Update: Mor commented that the content-based analysis part is not yet deployed in World Explorer. Thanks Mor!]
Another example applied this approach to concerts on YouTube, where the goal was to find good clips of the concert itself, rather than videos discussing it and the like. Metadata describing the event (say, an Iron Maiden concert) was collected from both YouTube and sites such as Upcoming.org, and audio fingerprinting was employed to detect overlapping video sections, since the concert itself is likely to have the most overlap. Note that in both cases the image/audio processing is a heavy task, and applying it only to a small subset pre-filtered by social tags makes the computation far more feasible.
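The overlap intuition can be sketched in the same spirit. The clip names and fingerprint hashes below are made up; a real system would derive the hashes with an acoustic fingerprinting algorithm.

```python
# Illustrative sketch only: score each clip by how many of its audio
# fingerprints also appear in other clips. Footage of the actual concert
# should share audio with other recordings; unrelated chatter shouldn't.
from collections import Counter

def rank_by_overlap(clips):
    """Rank clip ids by the number of their fingerprints seen in >1 clip."""
    counts = Counter(fp for fps in clips.values() for fp in set(fps))
    return sorted(
        clips,
        key=lambda cid: sum(counts[fp] > 1 for fp in set(clips[cid])),
        reverse=True,
    )

# Toy fingerprints: h1/h2 come from the concert audio itself.
clips = {
    "stage_cam": ["h1", "h2", "h3"],
    "crowd_cam": ["h1", "h2", "h4"],
    "interview": ["h9", "h8"],
}
print(rank_by_overlap(clips))  # concert footage first, the interview last
```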
I’ll cover the keynote (by Prof. Ben Shneiderman) in another post, as this one is already way too long… Here are soundbites from some other talks:
Emil Ismalon of Collarity referred to personalized search (e.g. Google’s) as a form of overfitting: it never lets me learn anything new, as it trains itself only on my own history. That, of course, served as motivation for community-based personalization.
Ido Guy of IBM talked about research comparing social networks extracted from public and private sources. The bottom line is that some forms of social relations are stronger, representing collaboration (working on projects together, co-authoring papers or patents), while others are weaker, centering more on socializing activities (friending/following on social networks, commenting on blogs, etc.). Of course, this is relevant for the enterprise social graph, not necessarily for personal life…
Daphne Raban of Haifa University summarized her empirical research into the motivations of participants in Q&A sites. The main findings were: 1) money was less important to the most frequent participants, but it acts as a catalyst; 2) being rewarded with gratitude and conversation is the main factor driving people to become more frequent participants; and 3) in a quality comparison, paid results ranked highest, free community results (Yahoo! Answers) ranked close behind, and unpaid single experts ranked lowest.
It was cool hearing you talk as well.
I even have 2 photos of you from the event – http://tr.im/2f2z
All the best,
“I tweet @headup”
headup.com – The Semantic Browser Extension
Thanks for the inclusion in the highlights! Just a side note, World Explorer does not currently incorporate the content-based analysis. That remains hidden in our WWW08 paper 🙂
Delver is exciting – I am sure you will do well. Enjoyed your presentation as well!
p.s. I declare you winner of this month’s “best blog name”. 🙂
Thanks, Mike, and good luck to the entire SemantiNet team!
Mor, thanks for the compliment and correction, I’ve updated my post. And as for names – I’m sure the Ay- and Nay-man held that title for quite some months! 🙂