Recommendation Engine Basics

Recommender systems take data collected on existing user behavior and use it to determine what users might also like. It’s a very technical version of what humans do intuitively—we might recommend an ice cream shop to a friend with a sweet tooth, but a coffee place to another friend who is avoiding carbs. By using the past behavior of a large number of people, we can predict the taste preferences of an individual.

User-Item Affinity

Mortar uses a graph-based approach that can blend collaborative filtering with content-based recommendations and other signals. "Collaborative filtering" just means that it uses preference information from many users to make recommendations. This can be blended with "content information" about relationships between items.

The first step in the recommender system is to understand affinities between users and items. "Items" in this case can really be anything—merchandise, restaurants, movies, book genres—the important part is to be able to create an affinity score from a user action.

This often comes in the form of view or purchase data. If I watched a movie, chances are it’s the kind of movie I like. If I buy a sofa, that’s probably representative of my taste. If I'm browsing a website, the pages I visit are probably representative of what I’m interested in, but less so than the books I buy, and so on.

We want to take all the actions that users perform and convert that information into a numeric affinity between users and items: higher numbers mean more affinity.

Suppose we have a website where users can view items, favorite items, and make purchases. We could say that viewing is worth .1, favoriting is worth .5, and buying is worth 1. Then for every user and every item, we could compute their affinity by adding up the scores of the actions the user took on that item.

[Graph 1: the bipartite user-item affinity graph for the example below]

In this example, Alvin favorited the hula hoop and Simon bought one. Simon favorited the toothbrush. Theodore viewed the toothbrush twice, and the toy plane once. When we put it together we make a graph with edges from users to items. This kind of graph is called bipartite, which means that there are two kinds of nodes in it, and edges only go between one kind of node and the other kind.
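
To make this concrete, here is a minimal Python sketch of the scoring step, using the exact weights and events from the example above. It illustrates the idea only; it is not Mortar’s actual implementation:

    from collections import defaultdict

    # Example action weights from the text: viewing = .1, favoriting = .5, buying = 1.
    WEIGHTS = {"view": 0.1, "favorite": 0.5, "buy": 1.0}

    # The raw event log for Alvin, Simon, and Theodore.
    events = [
        ("Alvin",    "hula hoop",  "favorite"),
        ("Simon",    "hula hoop",  "buy"),
        ("Simon",    "toothbrush", "favorite"),
        ("Theodore", "toothbrush", "view"),
        ("Theodore", "toothbrush", "view"),
        ("Theodore", "toy plane",  "view"),
    ]

    # Sum the weights of every action a user took on an item to get the
    # user-item affinity: these sums are the edges of the bipartite graph.
    user_item = defaultdict(float)
    for user, item, action in events:
        user_item[(user, item)] += WEIGHTS[action]

    for (user, item), score in sorted(user_item.items()):
        print(f"{user} -> {item}: {score:.1f}")

Each printed edge, like Simon -> hula hoop: 1.0, corresponds to one line in the graph above.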

Item-Item Affinity

The next step is to get affinities between items directly, removing the users. You can think of this as condensing the graph above by tracing from one item to another through the user, and then taking out the user.

[Graph 2: the item-item affinity graph, with the users collapsed out]

The toothbrush and the hula hoop are connected by Simon. Notice that we take the lesser of the two connections (.5 instead of 1). The idea behind this is real-world logic: if Simon has a strong interaction with the hula hoop and a weaker interaction with the toothbrush, connecting them with the strong value would give the toothbrush more credit than it "deserves." Simon probably favorites more things than he buys, and views more things than he favorites. It's safer not to connect a buy to a view with the stronger signal, because that could result in a lot of noise.

Also note that Alvin doesn’t contribute any lines to this second graph because he only interacted with one item; he's not connecting two items, so there is no line to draw.
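
Here is the condensing step, continuing from the previous sketch. One detail the pictures don’t pin down is what to do when several users connect the same pair of items; summing each user’s min-contribution is one plausible choice, and it is what this sketch assumes:

    from collections import defaultdict
    from itertools import combinations

    # The user-item affinities computed in the previous sketch.
    user_item = {
        ("Alvin",    "hula hoop"):  0.5,
        ("Simon",    "hula hoop"):  1.0,
        ("Simon",    "toothbrush"): 0.5,
        ("Theodore", "toothbrush"): 0.2,
        ("Theodore", "toy plane"):  0.1,
    }

    # Group each user's item scores together.
    by_user = defaultdict(dict)
    for (user, item), score in user_item.items():
        by_user[user][item] = score

    # For every pair of items a single user touched, connect the pair with
    # the LESSER of the two user-item scores; if several users connect the
    # same pair, sum their contributions (an assumption, see above).
    item_item = defaultdict(float)
    for items in by_user.values():
        for a, b in combinations(sorted(items), 2):
            item_item[(a, b)] += min(items[a], items[b])

    for (a, b), score in sorted(item_item.items()):
        print(f"{a} <-> {b}: {score:.1f}")

Alvin’s single item produces no pairs, so he contributes nothing, exactly as in the picture.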

Item-Item Recommendations

At this point we could generate item-item recommendations just by looking at the neighbors of each item (a code sketch follows the list below):

    For hula hoop recommend toothbrush
    For toothbrush recommend hula hoop first (.5 > .1)
    For toothbrush recommend toy plane second (.1 < .5)
    For toy plane recommend toothbrush
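
In code, this is nothing more than building an adjacency list from the item-item edges and sorting each item’s neighbors by affinity:

    from collections import defaultdict

    # Item-item edges from the condensing step (undirected).
    item_item = {("hula hoop", "toothbrush"): 0.5,
                 ("toothbrush", "toy plane"): 0.1}

    # Build an adjacency list so we can read off each item's neighbors.
    neighbors = defaultdict(dict)
    for (a, b), score in item_item.items():
        neighbors[a][b] = neighbors[b][a] = score

    # An item's recommendations are just its neighbors, highest affinity first.
    for item in sorted(neighbors):
        ranked = sorted(neighbors[item].items(), key=lambda kv: -kv[1])
        print(item, "->", ranked)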

If we need more recommendations, or more diverse recommendations, we could trace one step out on the graph and look at the neighbors of neighbors. So starting at hula hoop we first look at toothbrush (its neighbor) and then at toothbrush’s neighbor (toy plane). This is actually accomplished by inverting all of the scores and running a shortest-path algorithm, but conceptually it’s just as simple as looking at neighbors’ neighbors (a sketch follows the list below):

    For hula hoop recommend toothbrush first
    For hula hoop recommend toy plane second
    For toothbrush recommend hula hoop first
    For toothbrush recommend toy plane second
    For toy plane recommend toothbrush first
    For toy plane recommend hula hoop second
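
Here is a sketch of that trick on the toy graph. Inverting each affinity (1/score is one simple choice; the exact transform is an assumption here) turns high affinity into low distance, so a plain shortest-path search ranks direct neighbors and neighbors of neighbors in a single pass:

    import heapq
    from collections import defaultdict

    item_item = {("hula hoop", "toothbrush"): 0.5,
                 ("toothbrush", "toy plane"): 0.1}

    graph = defaultdict(dict)
    for (a, b), score in item_item.items():
        # Invert affinities so that HIGH affinity becomes LOW distance.
        graph[a][b] = graph[b][a] = 1.0 / score

    def distances_from(source):
        """Plain Dijkstra: shortest inverted-affinity distance to every item."""
        dist, heap = {}, [(0.0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if node in dist:
                continue
            dist[node] = d
            for nbr, w in graph[node].items():
                if nbr not in dist:
                    heapq.heappush(heap, (d + w, nbr))
        return dist

    # Rank every other item by distance: closest first.
    for item in sorted(graph):
        dist = distances_from(item)
        ranked = sorted((d, other) for other, d in dist.items() if other != item)
        print(item, "->", [other for _, other in ranked])

A production system would run a distributed version of this over millions of edges, but on our toy graph the output matches the list above.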

At this point we have recommendations of items for other items. This is useful for display on product pages (think "people who bought this item also bought" on Amazon).

User-Item Recommendations

Ultimately, you may want your recommendation engine to suggest highly relevant products to individual users based on the users' tastes. To get those user-item recommendations all we need to do is combine the two pieces of information we have already generated: user-item affinities and item-item affinities.

[Graph 3: user-item affinities combined with each item’s single item-item recommendation]

In a full recommender system we’d want dozens of recommendations for each item, but for the purposes of drawing coherent pictures, we’re assuming only one. We can just add up the affinities along each path to find the best one:

    Alvin -> hula hoop -> toothbrush: 1
    Simon -> hula hoop -> toothbrush: 1.5
    Simon -> toothbrush -> hula hoop: 1
    Theodore -> toy plane -> toothbrush: .2
    Theodore -> toothbrush -> hula hoop: .7

So for Alvin we recommend a toothbrush, and for Theodore we recommend a hula hoop, but what about Simon? Both of Simon’s recommendations are also items that he has interacted with, so we don’t want to use them—they wouldn’t be showing him anything new. In practice we can choose how to handle that (for example, recommending viewed items might be fine, but purchased items not so much).
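
Putting the last two steps together in code: walk each user -> item -> recommendation path, sum the affinities along it, and then apply whatever filtering policy you’ve chosen for already-seen items. The sketch below drops every seen item, which is how Simon ends up with no fresh recommendation:

    from collections import defaultdict

    # User-item affinities from the first step.
    user_item = {("Alvin",    "hula hoop"):  0.5,
                 ("Simon",    "hula hoop"):  1.0,
                 ("Simon",    "toothbrush"): 0.5,
                 ("Theodore", "toothbrush"): 0.2,
                 ("Theodore", "toy plane"):  0.1}

    # One item-item recommendation per item, as in the picture above.
    rec_for = {"hula hoop":  ("toothbrush", 0.5),
               "toothbrush": ("hula hoop",  0.5),
               "toy plane":  ("toothbrush", 0.1)}

    # Sum the affinities along each user -> item -> recommendation path,
    # keeping the best-scoring path to each candidate item.
    candidates = defaultdict(dict)
    seen = defaultdict(set)
    for (user, item), s in user_item.items():
        seen[user].add(item)
        rec, t = rec_for[item]
        candidates[user][rec] = max(candidates[user].get(rec, 0.0), s + t)

    # Drop candidates the user has already interacted with (a policy choice:
    # you might keep previously viewed items but drop purchased ones).
    for user in sorted(candidates):
        fresh = {rec: sc for rec, sc in candidates[user].items()
                 if rec not in seen[user]}
        print(user, "->", sorted(fresh.items(), key=lambda kv: -kv[1]))

This prints a toothbrush for Alvin, a hula hoop for Theodore, and an empty list for Simon, matching the walkthrough above.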

And that’s basically it: those are the core concepts. But there are lots of advanced techniques we will cover later to tune your results, to gracefully handle times when there is insufficient data to make good recommendations, and more.