Tag Archives: Machine Learning

The Data-Product-Scientist-Manager

What’s the difference between Machine Learning, Artificial Intelligence, Deep Learning and Data Science? The huge buzz around these concepts in recent years makes it seem as if they could be used interchangeably.

Several months ago, I gave a meetup talk surveying the history of machine-learning algorithms through the prism of algorithms that learn to play games (video, in Hebrew). As the talk did not require prior knowledge, I began with a quick intro to artificial intelligence (AI) and machine learning (ML) –

  • AI is the science of building machines that mimic what we humans perceive as intelligent behavior
  • ML is a branch within AI, concerned with algorithms that learn their “intelligence” from data (rather than explicit coding)
  • Deep Learning is a very particular method within ML, which uses artificial neural networks and huge volumes of data

While building the talk, I struggled with how to also reference data science (DS), to help my audience make sense of all the current hype terms.

But is DS a discipline within AI? Within ML? Is it not simply a fancy name for Statistics? I ended up leaving it out.

A few days ago, I stumbled upon an episode of the excellent ML podcast Talking Machines, where Neil Lawrence made an observation that put it all into place for me. Lawrence posited that DS arises from the new streams of data we collect in this era, which are generated in huge volumes out of sensors and interactions, and its mission is to extract value from these. In other words – “Here, we have all of this data, what can we do with it?”

This may seem like a petty technicality, but it makes all the difference. In classic experimentation, scientists would make a hypothesis, collect data to validate or invalidate it, and then run the needed statistical analysis.

With DS, there is no such prior hypothesis. So the core of DS becomes coming up with these hypotheses, and then validating them right there and then, in the data we already have. But whose job is it to come up with the hypotheses?

There are several relevant roles out there to consider:

  • Data (and business) analysts have a strong understanding of how to wrangle and query data, and will run analyses (either on demand or on their own initiative) against clear business objectives. But their role and state of mind are not geared toward finding new such objectives or disruptive new ways to reach them
  • Data and ML engineers build the technology and libraries on which the data is collected and crunched. They love to see their systems used to generate powerful insights and capabilities, but see themselves as the infrastructure for generating and validating these hypotheses, rather than as the users
  • Data scientists apply their strong statistics and ML skills to the above data infrastructure to build models, enabling new capabilities and user-value out of validated hypotheses. But a model is not built in a vacuum: they need a clear mission, derived from a validated hypothesis (or even a yet-to-be-validated one)
  • Product managers are the classic hypothesis-creator types. They analyze the market, meet with customers, dive into analytics and business data, and then create product hypotheses, collected into roadmaps. But they hardly use the above “big” data infrastructure for generating hypotheses, mostly due to tech know-how gaps

What we need for data to be fully leveraged is a new role, a hybrid of the latter two. The data science product manager is a data scientist with the instincts and user-centric thinking of a product manager, or a product manager with the data-exploration intuitions of a data scientist. What skills will this require?

  • Strong data instincts, the ability and desire to explore data both assisted and unassisted, applying intuition to identify ad-hoc patterns and trends
  • User-centric thinking, seeing the users and real-life scenarios behind the data, almost like Neo in “The Matrix”
  • Technical acumen, though not necessarily coding. Today’s DS and ML tools are becoming more and more commoditized, and require less and less writing from scratch
  • Very strong prioritization capabilities – creating hypotheses from data may be easy, almost too easy; the skill is in further exploring only the most promising ones and turning them into a potential roadmap
  • Ability to work closely with the data team and “speak their language” to quickly validate, understand the productization cost, and estimate ROI for a large list of such hypotheses

While this role could still be fulfilled by a strong partnership between two individuals working in tandem (PM and data scientist), it is clear that a single individual possessing all of these skills will achieve results far more efficiently. Indeed, as a quick search on LinkedIn shows, the combined role is emerging and exploding in demand.

Learning to Play

Ever since I took my first course in Artificial Intelligence, I have been fascinated by the idea of AI in its classical meaning – teaching machines to perform tasks deemed by us humans as requiring intelligence.

Recently, I gave a talk at my company on some of the intriguing instances of one of these tasks – learning to play (and win!) games. I often found the human stories behind the scenes even more fascinating than the algorithms themselves, and that was my focus in this talk. It was really fun both to assemble and to deliver, so I wanted to capture these stories in this blog post, to accompany the embedded slides below.

 

So let’s get started!

a humble start

Game playing is a fantastic AI task, one that researchers have always been excited about. Like a toddler being taught to swing a baseball bat by an excited parent, the algorithm gets clear rules, a measurable goal and training input. But above all, testing the result involves the fun of playing against an opponent you yourself have created – with all the pride of that same parent. What a great way to do AI research!

As we go way back in the AI time machine, the first known implementation of an AI game player dates to 1950. Josef Kates was a young Jewish Austrian engineer whose family fled the Nazis’ rise to power and ended up in Canada. Kates worked on radar and vacuum-tube design at a company named Rogers Majestic, and later developed his own patented tube, which he called the Additron. While waiting for the patent to be registered, he wanted to demonstrate the power of his invention at a local technology fair, so he built a machine that could play Tic-Tac-Toe, calling it “Bertie the Brain”.

Comedian Danny Kaye pleased after “beating” Bertie the Brain during the fair

“Bertie the Brain” was a huge success at the fair. Kates made sure to adjust its level of difficulty to allow players to occasionally beat it, and visitors lined up to play. Nevertheless, at the end of the fair it was dismantled and forgotten. Unfortunately for Kates, the Additron took a very long time to go through patenting, and by the time it was approved technology had already moved on toward transistors.

The algorithms pioneered and used in those early days were based on the Minimax method – constructing a tree of all possible moves by the player and opponent, and evaluating each position’s proximity to a win. At every turn, the algorithm assumes best play: the computer picks the move with the MAXimal value, while the opponent picks its own maximum, which is the computer’s MINimal value. Thus, the algorithm could calculate as far into the future as time allowed.
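To make the idea concrete, here is a minimal Python sketch of Minimax – an illustration of the general technique, not any particular program from that era. It assumes a hypothetical game-state API (is_terminal, score, legal_moves, apply) that I made up for the example:

```python
def minimax(state, depth, maximizing):
    """Return the value of `state` assuming best play by both sides.

    Assumes a hypothetical game-state API: is_terminal(), score(),
    legal_moves(), and apply(move) returning the resulting state.
    """
    if depth == 0 or state.is_terminal():
        return state.score()                      # heuristic proximity to a win
    if maximizing:                                # computer: pick the MAXimal value
        return max(minimax(state.apply(m), depth - 1, False)
                   for m in state.legal_moves())
    else:                                         # opponent: pick the computer's MINimal value
        return min(minimax(state.apply(m), depth - 1, True)
                   for m in state.legal_moves())

# Example use: pick the move whose resulting position scores best for us.
# best_move = max(state.legal_moves(),
#                 key=lambda m: minimax(state.apply(m), 5, False))
```

The `depth` parameter is what “calculate into the future as much as time allowed” boils down to in practice: a deeper search costs exponentially more positions.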

With only 765 unique board positions in Tic-Tac-Toe, the game was small enough that all positions and moves could be calculated in advance, making Bertie unbeatable. AI researchers call this a “solved” game. In fact, perfect play by both sides always ends in a draw, and if you watched the 1983 movie “WarGames” with Matthew Broderick, you’ll recall how this fact saved the world from nuclear annihilation…

advance to world-class wins

So if Tic-Tac-Toe is too simple, how about a more complex game such as checkers?

Checkers has, well, slightly more board positions: at about 5 × 10^20 of them, it was a much more challenging AI task. The best-known checkers program, even if not the first, was the one written by Arthur Samuel at IBM. Samuel’s checkers player became a true classic, and for several decades it was considered the best that could be achieved. It still used Minimax, but it expanded its repository of board positions from actual games played, often against itself, thus becoming a true learning algorithm. However, it never reached the level of beating master human players.


In 1989, a group of researchers – led by Jonathan Schaeffer from the University of Alberta – set out to use advances in computing and break that glass ceiling with a new program called Chinook. I had the privilege of attending a fascinating talk by Schaeffer at the Technion 10 years ago, and the blog post I wrote subsequently summarizes the full story. That story has fascinating twists and touching human tributes in it, but it ends with machines being the clear winners – and with AI researchers declaring the game of checkers as solved as well.

The obvious next challenge in our journey would be what’s considered the ultimate game of intelligence – chess. Played on the same board as checkers but with far more complex moves, chess has on the order of 10^120 possible games – more than the number of checkers positions, squared. A famous chess-playing machine was The Turk, designed and constructed in Austria by Wolfgang von Kempelen as early as 1770. The Turk was a wonder of its age, beating experienced chess players and even Napoleon Bonaparte. It was a hoax, of course, cleverly hiding a human sitting inside it, but the huge interest it generated was a symbol of the great intelligence attributed to playing the game.

The huge search space in which Minimax had to be applied made early chess programs extremely weak against humans. Even with the introduction of minimax tree-pruning methods such as Alpha-Beta pruning, it seemed like no algorithmic tuning would produce a breakthrough. As the decades passed, though, more powerful computers enabled faster computation and larger memory to hold billions of possible board positions. This culminated in the famous 1996 duel between IBM’s Deep Blue chess-playing computer and the world champion at the time, Garry Kasparov. Despite losing the opening game to the supercomputer, Kasparov won the match comfortably, 4-2. IBM went on to further improve Deep Blue – now capable of evaluating 200 million positions per second – and invited Kasparov to a re-match the following year. Kasparov won the first game easily, and was so confident as a result that he lost the next one, a loss he blamed on cheating by IBM. The match ended 3.5-2.5 to Deep Blue, a sensational first win for a machine over a reigning world champion.
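As an aside, here is a hedged sketch of what Alpha-Beta pruning adds on top of the plain Minimax sketch above – again using the same hypothetical game-state API, and in no way Deep Blue’s actual code:

```python
def alphabeta(state, depth, alpha, beta, maximizing):
    """Minimax with Alpha-Beta pruning: stop exploring a branch once it is
    clear the opponent already has a better alternative elsewhere."""
    if depth == 0 or state.is_terminal():
        return state.score()
    if maximizing:
        value = float("-inf")
        for m in state.legal_moves():
            value = max(value, alphabeta(state.apply(m), depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # opponent would never allow this line
                break                  # prune the remaining moves
        return value
    else:
        value = float("inf")
        for m in state.legal_moves():
            value = min(value, alphabeta(state.apply(m), depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Typical call: alphabeta(root, depth, float("-inf"), float("inf"), True)
```

With good move ordering, the pruning lets the same time budget reach roughly twice the search depth – which is why it mattered so much for chess, even if it still wasn’t enough on its own.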

from brute force to TRUE learning

The shared practice connecting all the work we have seen so far – from Bertie the Brain to Deep Blue – was to feed huge amounts of knowledge to the software, so that it could outdo the human player by sheer computing power and the board positions stored in its vast memory. This enabled algorithms such as Minimax to process enormous numbers of positions, apply human-defined heuristics to them and find the winning moves.

Let’s recall the toddler from the start of our journey. Is this how humans learn? Would we truly consider this artificial intelligence?

If we want to emulate true intelligence, what we’d really like to build are algorithms that learn by themselves. They will watch examples and learn from them; they will build their own heuristics; they will infer the domain knowledge rather than have it fed into them.

In 2014, a small London startup named DeepMind Technologies, founded just a few years earlier, was acquired by Google for the staggering sum of $600 million before it had released even one product to the market. In fact, reporters struggled to explain what DeepMind was doing at all.

The hints at what attracted Google to DeepMind lie in a paper its team published in December 2013. The paper, presented at NIPS 2013, was titled “Playing Atari with Deep Reinforcement Learning“. It was about playing games, but unlike anything before: a generic system learning to play games without being given any knowledge of them – nothing but the raw screen and an indication of where the score is kept on it. You could equate it to a human who had never played Pac-Man taking the controls, hitting them in all directions, watching the score and gradually figuring out how to play like a pro – and then doing the same for many other games, all with the same method. Sounds human? This was the technology Google was after.

Watching DeepMind play Atari Breakout (seen in this video) is like magic. The algorithm starts out moving randomly, barely hitting the ball once in many attempts. After an hour of training, it plays at an impressive pro level. Then it even learns the classic trick that any Breakout player eventually masters – tunneling the ball to the top so that it knocks bricks off with little effort. The beauty of it all was that the exact same system mastered several other games with no custom optimizations – only the raw screen input and an indication of where the score is, nothing else. There was no Minimax running, no feeding of grandmaster move books or human-crafted heuristic functions. It was a generic deep neural network, trained with reinforcement learning to look at a series of moves and their score outcome and uncover the winning patterns all by itself. Pure magic.
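For a taste of the underlying idea, here is a toy, tabular sketch of the Q-learning rule that this kind of reinforcement learning builds on. DeepMind’s actual system (DQN) replaces the table with a deep network over raw pixels and adds tricks like experience replay, so treat this purely as an illustration – the state encoding and action names below are made up:

```python
import random
from collections import defaultdict

# Tabular Q-learning: the agent knows nothing about the game except which
# actions exist and the reward (score change) observed after each step.
ACTIONS = ["left", "right", "stay"]        # made-up action names
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1     # learning rate, discount, exploration rate
Q = defaultdict(float)                     # Q[(state, action)] -> expected future score

def choose_action(state):
    """Mostly act greedily on what was learned so far, sometimes explore at random."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """Nudge Q(state, action) toward reward + discounted best value of the next state."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The key point is what is missing: no board evaluation, no hand-crafted heuristics – only the observed score feeding back into the value estimates.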

AI Building games

For the last part of the talk, I deviated to a related topic, walking through a wonderful series of blog posts I had stumbled upon called “Machine Learning is Fun!”, in which the author, Adam Geitgey, walks through basic concepts in Machine Learning. In part two, he describes how Recurrent Neural Networks (RNNs) can be trained to learn and generate patterns. The simplest example we all know and appreciate (or sometimes not…) is the predictive-text feature of mobile keyboards, where the system attempts to predict which word we are trying to type – the cause of so many great texting gaffes.

Moving to more elaborate examples, Geitgey fed an RNN implementation with a Hemingway book (“The Sun Also Rises”), training it on the book’s text and then having it generate new text of its own in the same style. The output starts as incomprehensible strings of characters, but gradually takes the form of words and sentences, to the point where the sentences almost make sense and retain Hemingway’s typically curt dialogue style.
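As a rough illustration of the setup (not Geitgey’s exact code), a character-level “predict the next character” model can be put together in a few lines of Keras; the file name below is just a placeholder for whatever text you train on:

```python
import numpy as np
import tensorflow as tf

# Toy character-level language model: learn to predict the next character of a
# text, then sample from it to generate new text in a similar style.
text = open("corpus.txt", encoding="utf-8").read()   # placeholder training text
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}

SEQ_LEN, STEP = 40, 3
X = np.array([[char_to_ix[c] for c in text[i:i + SEQ_LEN]]
              for i in range(0, len(text) - SEQ_LEN, STEP)])
y = np.array([char_to_ix[text[i + SEQ_LEN]]
              for i in range(0, len(text) - SEQ_LEN, STEP)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),               # char ids -> vectors
    tf.keras.layers.LSTM(128),                                # the recurrent part
    tf.keras.layers.Dense(len(chars), activation="softmax"),  # next-char distribution
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=10)

def generate(seed, n=300):
    """Generate n characters, one at a time, feeding each prediction back in."""
    ix_to_char = {i: c for c, i in char_to_ix.items()}
    out = seed
    for _ in range(n):
        window = np.array([[char_to_ix[c] for c in out[-SEQ_LEN:]]])
        probs = model.predict(window, verbose=0)[0]
        probs = probs / probs.sum()                  # guard against rounding drift
        out += ix_to_char[np.random.choice(len(chars), p=probs)]
    return out
```

The “gibberish first, Hemingway later” progression described above is exactly what you see if you sample from such a model after each training epoch.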

Geitgey then takes this system and applies it to none other than Super Mario Maker – a version of Super Mario that allows players to build levels of their own. He transforms game levels into text streams and feeds these into the learning system. Here too, the system spits out nonsense at first, but gradually learns the basic rules and eventually generates actual playable levels. I’m no expert on Super Mario so I couldn’t tell, but I showed it to my son and he said it’s a great level that he would be happy to play. That’s intelligent enough for me!


 

Marketing the Cloud

IBM made some news a couple of days ago, announcing that consumers can now use Watson to find the season’s best gifts. A quick browse through the app, which is actually just a wrapper around a small dedicated website, shows nothing out of the ordinary – Apple Watch, televisions, Star Wars, headphones, Legos… not much supercomputing needed. No wonder coverage turned sour after the initial hype. So what was IBM thinking?

Rewind the buzz machines one week back. Google stunned tech media by announcing it is open-sourcing its core AI framework, TensorFlow. The splash was big: “massive potential“, “Machine Learning breakthrough“, “game changer“… but after a few days the critics were out, with Quorans complaining about the library’s slowness and even Google-fanboy researchers wondering – what exactly is TensorFlow useful for?

Nevertheless, within three days Microsoft announced its own open-source Machine Learning toolkit, DMTK. The Register was quick to mock the move, saying “Google released some of its code last week. Redmond’s (co-incidental?) response is pretty basic: there’s a framework, and two algorithms”…

So what is the nature of all these recent PR-like moves?


There is one high-profit business shared by all of these companies: cloud computing. Amazon leads the pack in revenue and uses the cash flow from its cloud business to offset losses from its aggressive e-commerce pricing, while Microsoft and Google are assumed to come next, with growing cloud businesses of their own. Google even goes as far as predicting that its cloud revenue will surpass its ads revenue within five years. It is the industry’s gold-rush era.

But first, companies such as Microsoft, Google and IBM need to convince corporations to hand their business to them rather than to Amazon. Hence they have to create as much “smart” buzz for themselves as they can, so that executives in these organizations, already fatigued by big-data buzzwords, will say: “we must work with them! look, they know their way around all this machine-learning-big-data-artificial-intelligence stuff!!”

So the next time you hear some uber-smart announcement from one of these companies that feels like too much hot air, don’t look for too much strategy; instead, just look up to the cloud.

Microsoft Israel ReCon 2014

Microsoft Israel R&D Center held its first recommendations technology conference, ReCon, today. With an interesting agenda and a location just across the street from my office, I could not skip this one… here are some impressions from the talks I found worth mentioning.

The first keynote speaker was Joseph Sirosh, who leads the Cloud Machine Learning team at Microsoft, having recently joined from Amazon. Sirosh may have aimed low, not knowing what his audience would be like, but as a keynote this was quite a disappointing talk, full of simplistic statements and buzzwords. I guess he lost me when he stated quite decisively that the big difference about putting your service on the cloud is that it will get better the more people use it. Yeah.

Still, there were also some interesting observations he pointed out, worth mentioning:

  • If you’re running a personalization service, benchmarking against the most popular items (i.e. top sellers in commerce) is the best non-personalized option. It might sound trivial, but coming from an 8-year Amazon VP, that’s good validation
  • “You get what you measure”: what you choose to measure is what you’re optimizing, make sure it’s indeed your weakest links and the parts you want to improve
  • Improvement depends on being able to run a large number of experiments, especially when you’re in a good position already (the higher you are, the lower your gains, and the more experiments you’ll need to run to keep gaining)
  • When running these large numbers of experiments, good collaboration and knowledge sharing becomes critical, so different people don’t end up running the same experiments without knowing of each other’s past results

Elad Yom-Tov from Microsoft Research described work his team did on enhancing collaborative filtering (CF) with browse logs. They experimented with adding users’ browser logs (visited URLs) and search queries to the CF matrix in various ways, to help bootstrap users with little data and to better identify those users’ short-term (recent) intent.

An interesting observation they reached was that using the raw search queries as matrix columns worked better than trying to generalize or categorize them, although intuitively one would expect generalization to reduce the sparsity of such otherwise very long-tail attributes. It seems the potential gain from reduced sparsity is offset by the loss of the specificity and granularity of the original queries.
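A hedged sketch of what adding such signals to a CF matrix might look like in practice – the users, items, queries and weights below are invented for illustration, not the actual setup described in the talk:

```python
from scipy.sparse import lil_matrix

# Invented example data: item interactions plus raw search queries per user.
users = ["u1", "u2", "u3"]
items = ["item_a", "item_b", "item_c"]
queries = ["cheap flights", "running shoes"]      # kept raw, not categorized
columns = items + queries                         # items and query "pseudo-items" side by side
col_ix = {c: i for i, c in enumerate(columns)}
user_ix = {u: i for i, u in enumerate(users)}

M = lil_matrix((len(users), len(columns)))

def add_signal(user, column, weight=1.0):
    """Record an interaction (rating, click or search) in the joint matrix."""
    M[user_ix[user], col_ix[column]] = weight

add_signal("u1", "item_a")
add_signal("u1", "running shoes", weight=0.5)     # browse/search side information
add_signal("u2", "item_a")
add_signal("u2", "cheap flights", weight=0.5)

# Any standard CF technique (user-user similarity, matrix factorization, ...)
# can now run on M; the query columns help bootstrap users with few item ratings.
```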


Another related talk outlining an interesting way to augment CF was given by Haggai Roitman of IBM Research. Haggai suggested “user uniqueness” – the extent to which the user follows the crowd or deliberately seeks esoteric choices – as a valuable signal in recommendations. This uniqueness would then determine whether to serve the user results that are primarily popularity-based (e.g. CF) or personalized (e.g. content-based), or a mix of the two.
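One simple way to turn that idea into code – my own interpretation, not necessarily Haggai’s formulation – is to score uniqueness as the average inverse popularity of the user’s past choices and use it to blend the two recommenders:

```python
import numpy as np

def uniqueness(user_items, item_popularity):
    """0 = follows the crowd, 1 = only esoteric picks.
    item_popularity maps item -> fraction of all users who consumed it."""
    if not user_items:
        return 0.0
    return float(np.mean([1.0 - item_popularity[i] for i in user_items]))

def blend(pop_scores, personalized_scores, u):
    """The more unique the user (u), the more weight the personalized scores get."""
    all_items = set(pop_scores) | set(personalized_scores)
    return {item: (1 - u) * pop_scores.get(item, 0.0) + u * personalized_scores.get(item, 0.0)
            for item in all_items}
```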

The second keynote was by Ronny Lempel of Yahoo! Labs in Haifa. Ronny talked about multi-user devices, in particular smart TVs, and how recommendations should take into account which user is currently in front of the device (even though this information is not readily available). The heuristic his team used was that the audience usually doesn’t change between consecutively watched programs, so using the last program as context for recommending the next one helps model that unknown audience.

Their results indeed showed a significant improvement in recommendation effectiveness when using this context. Another interesting observation was that using a random item from the history, rather than the last one, actually made the recommendations perform worse than no context at all. That’s an interesting result, as it validates the assumption that approximating the right audience is valuable: if you make recommendations for the parent watching in the evening based on the children’s programs from the afternoon, you are likely to do worse than with no context at all.
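A minimal sketch of that heuristic, with an invented data format (per-device, time-ordered watch logs) just to illustrate conditioning on the last watched program:

```python
from collections import defaultdict

# transitions[prev_program][next_program] = how often next was watched right after prev
transitions = defaultdict(lambda: defaultdict(int))

def train(watch_logs):
    """watch_logs: per-device, time-ordered lists of watched programs (invented format)."""
    for history in watch_logs:
        for prev_prog, next_prog in zip(history, history[1:]):
            transitions[prev_prog][next_prog] += 1

def recommend(last_program, k=5):
    """Recommend what was most often watched right after `last_program`,
    using it as a proxy for the unknown audience currently in front of the TV."""
    candidates = transitions[last_program]
    return sorted(candidates, key=candidates.get, reverse=True)[:k]
```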


The final presentation was by Microsoft’s Hadas Bitran, who presented and demonstrated Windows Phone’s Cortana. Microsoft goes out of its way to describe Cortana as friendly and non-creepy, and yet the introductory video Hadas presented somehow managed to include a scary robot (from Halo, I presume), dramatic music, and Cortana saying “Now learning about you”. Yep, not creepy at all.

Hadas did present Cortana’s context-keeping session, which looks pretty cool: follow-up questions referring to previous questions and answers were handled nicely by Cortana (all in a controlled demo, of course). Interestingly, this even seemed to work too well – after getting Cortana’s list of suggested restaurants, Hadas asked Cortana to schedule a spec review, and Cortana insisted again and again on booking a table at the restaurant instead… Nevertheless, I can say the demo actually made the thought of buying a Windows Phone cross my mind, so it does do the job.

All in all, it was an interesting and well-organized conference, with a good mix of academia and industry – a good match for IBM’s workshops. Let’s have many more of these!