turicreate.recommender.ranking_factorization_recommender.RankingFactorizationRecommender

class turicreate.recommender.ranking_factorization_recommender.RankingFactorizationRecommender(model_proxy)

A RankingFactorizationRecommender learns latent factors for each user and item and uses them to rank recommended items according to the likelihood of observing those (user, item) pairs. This is commonly desired when performing collaborative filtering for implicit feedback datasets or datasets with explicit ratings for which ranking prediction is desired.
RankingFactorizationRecommender offers a number of options tailored to a variety of datasets and evaluation metrics, making it one of the most powerful models in the Turi Create recommender toolkit.
Creating a RankingFactorizationRecommender
This model cannot be constructed directly. Instead, use turicreate.recommender.ranking_factorization_recommender.create() to create an instance of this model. A detailed list of parameter options and code samples is available in the documentation for the create function.

Side information
Side features may be provided via the user_data and item_data options when the model is created.
Additionally, observation-specific information, such as the time of day when the user rated the item, can also be included. Any column in the observation_data SFrame that is not the user id, item id, or target is treated as an observation side feature. The same side feature columns must be present when calling predict().
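For instance, a minimal sketch, assuming an illustrative hour_of_day column recorded with each rating; it is picked up automatically as an observation side feature:

>>> sf = turicreate.SFrame({'user_id': ["0", "0", "1", "1"],
...                         'item_id': ["a", "b", "a", "c"],
...                         'rating': [1, 3, 5, 4],
...                         'hour_of_day': [9, 20, 9, 23]})
>>> m = turicreate.ranking_factorization_recommender.create(sf, target='rating')

Any SFrame later passed to predict() would then also need an hour_of_day column.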
Side features may be numeric or categorical. User ids and item ids are treated as categorical variables. For the additional side features, the type of the SFrame column determines how it's handled: strings are treated as categorical variables, while integers and floats are treated as numeric variables. Dictionaries and numeric arrays are also supported.

Optimizing for ranking performance
By default, RankingFactorizationRecommender optimizes for the precision-recall performance of recommendations.
Model parameters
Trained model parameters may be accessed using m.get('coefficients') or, equivalently, m['coefficients'], where m is a RankingFactorizationRecommender.
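For example, a minimal sketch, assuming m was trained on data with user_id and item_id columns (the keys of the coefficients dictionary follow the schema of the training data):

>>> coefs = m['coefficients']
>>> coefs['intercept']   # global bias term
>>> coefs['user_id']     # SFrame of per-user weights and latent factors
>>> coefs['item_id']     # SFrame of per-item weights and latent factors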
Notes
Model Details
RankingFactorizationRecommender trains a model capable of predicting a score for each possible combination of users and items. The internal coefficients of the model are learned from known scores of users and items. Recommendations are then based on these scores.
In the two factorization models, users and items are represented by weights and factors. These model coefficients are learned during training. Roughly speaking, the weights, or bias terms, account for a user or item's bias towards higher or lower ratings. For example, an item that is consistently rated highly would have a higher weight coefficient associated with it. Similarly, an item that consistently receives below-average ratings would have a lower weight coefficient to account for this bias.
The factor terms model interactions between users and items. For example, if a user tends to love romance movies and hate action movies, the factor terms attempt to capture that, causing the model to predict lower scores for action movies and higher scores for romance movies. Learning good weights and factors is controlled by several options outlined below.
More formally, the predicted score for user \(i\) on item \(j\) is given by
\[\operatorname{score}(i, j) = \mu + w_i + w_j + \mathbf{a}^T \mathbf{x}_i + \mathbf{b}^T \mathbf{y}_j + {\mathbf u}_i^T {\mathbf v}_j,\]

where \(\mu\) is a global bias term, \(w_i\) is the weight term for user \(i\), \(w_j\) is the weight term for item \(j\), \(\mathbf{x}_i\) and \(\mathbf{y}_j\) are respectively the user and item side feature vectors, and \(\mathbf{a}\) and \(\mathbf{b}\) are respectively the weight vectors for those side features. The latent factors, which are vectors of length num_factors, are given by \({\mathbf u}_i\) and \({\mathbf v}_j\).
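As a concrete illustration, a minimal sketch of this scoring rule with invented toy values (num_factors = 2 here; NumPy is used only for the dot products):

import numpy as np

mu = 3.0                                   # global bias term
w_user, w_item = 0.2, -0.1                 # user and item weight terms
a = np.array([0.5]); x = np.array([1.0])   # user side feature weights / values
b = np.array([0.3]); y = np.array([2.0])   # item side feature weights / values
u = np.array([0.4, -0.2])                  # user latent factors
v = np.array([0.1, 0.7])                   # item latent factors

score = mu + w_user + w_item + a @ x + b @ y + u @ v
print(round(score, 2))                     # 4.1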
Training the model
The model is trained using Stochastic Gradient Descent with additional tricks to improve convergence. The optimization is done in parallel over multiple threads. This procedure is inherently random, so different calls to create() may return slightly different models, even with the same random_seed.
In the explicit rating case, the objective function we are optimizing for is:
\[\min_{\mathbf{w}, \mathbf{a}, \mathbf{b}, \mathbf{V}, \mathbf{U}} \frac{1}{|\mathcal{D}|} \sum_{(i,j,r_{ij}) \in \mathcal{D}} \mathcal{L}(\operatorname{score}(i, j), r_{ij}) + \lambda_1 \left(\lVert {\mathbf w} \rVert^2_2 + \lVert {\mathbf a} \rVert^2_2 + \lVert {\mathbf b} \rVert^2_2 \right) + \lambda_2 \left(\lVert {\mathbf U} \rVert^2_2 + \lVert {\mathbf V} \rVert^2_2 \right)\]

where \(\mathcal{D}\) is the observation dataset, \(r_{ij}\) is the rating that user \(i\) gave to item \(j\), \({\mathbf U} = ({\mathbf u}_1, {\mathbf u}_2, ...)\) denotes the user latent factors, and \({\mathbf V} = ({\mathbf v}_1, {\mathbf v}_2, ...)\) denotes the item latent factors. The loss function \(\mathcal{L}(\hat{y}, y)\) is \((\hat{y} - y)^2\) by default. \(\lambda_1\) denotes the linear_regularization parameter and \(\lambda_2\) the regularization parameter.
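Both regularization strengths map directly onto parameters of create(); a minimal sketch with illustrative values:

>>> m = turicreate.ranking_factorization_recommender.create(sf, target='rating',
...                                                         linear_regularization=1e-9,
...                                                         regularization=1e-8)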
When ranking_regularization is nonzero, the equation above gains an additional term. Let \(\lambda_{\text{rr}}\) represent the value of ranking_regularization, and let \(v_{\text{ur}}\) represent unobserved_rating_value. The objective we attempt to minimize is then:

\[\begin{split}\min_{\mathbf{w}, \mathbf{a}, \mathbf{b}, \mathbf{V}, \mathbf{U}} \frac{1}{|\mathcal{D}|} \sum_{(i,j,r_{ij}) \in \mathcal{D}} \mathcal{L}(\operatorname{score}(i, j), r_{ij}) + \lambda_1 \left(\lVert {\mathbf w} \rVert^2_2 + \lVert {\mathbf a} \rVert^2_2 + \lVert {\mathbf b} \rVert^2_2 \right) + \lambda_2 \left(\lVert {\mathbf U} \rVert^2_2 + \lVert {\mathbf V} \rVert^2_2 \right) \\ + \frac{\lambda_{\text{rr}}}{\text{const} \cdot |\mathcal{U}|} \sum_{(i,j) \in \mathcal{U}} \mathcal{L}\left(\operatorname{score}(i, j), v_{\text{ur}}\right),\end{split}\]

where \(\mathcal{U}\) is a sample of unobserved user-item pairs.
In the implicit case when there are no target values, we use logistic loss to fit a model that attempts to predict all the given (user, item) pairs in the training data as 1 and all others as 0. To train this model, we sample an unobserved item along with each observed (user, item) pair, using SGD to push the score of the observed pair towards 1 and the unobserved pair towards 0. In this case, the ranking_regularization parameter is ignored.
When binary_targets=True, the target values must be 0 or 1; in this case, we also use logistic loss to train the model so that the predicted scores are as close to the target values as possible. Here, the rating of the sampled unobserved pair is set to 0 (and unobserved_rating_value is therefore ignored), while the loss on the unobserved pairs is weighted by ranking_regularization as in the non-binary case.
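A minimal sketch with illustrative 0/1 targets:

>>> sf_binary = turicreate.SFrame({'user_id': ["0", "0", "1", "2"],
...                                'item_id': ["a", "b", "b", "c"],
...                                'rating': [1, 0, 1, 0]})
>>> m = turicreate.ranking_factorization_recommender.create(sf_binary, target='rating',
...                                                         binary_targets=True)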
To choose the unobserved pair complementing a given observation, the algorithm selects several (defaults to four) candidate negative items that the user in the given observation has not rated. The algorithm scores each one using the current model, then chooses the item with the largest predicted score. This adaptive sampling strategy provides faster convergence than just sampling a single negative item.
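In outline, a minimal sketch of this adaptive sampling step; the names (current_score, rated_items, and so on) are illustrative, not the internal API:

import random

def sample_adaptive_negative(user, rated_items, all_items, current_score,
                             num_candidates=4):
    # Draw a few random items the user has not rated.
    unrated = [item for item in all_items if item not in rated_items]
    candidates = random.sample(unrated, min(num_candidates, len(unrated)))
    # The highest-scoring unrated item is the most informative negative example.
    return max(candidates, key=lambda item: current_score(user, item))

The number of candidates corresponds to the num_sampled_negative_examples option described below.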
The Factorization Machine is a generalization of Matrix Factorization. Like matrix factorization, it predicts target rating values as a weighted combination of user and item latent factors, biases, side features, and their pairwise combinations. In particular, while Matrix Factorization learns latent factors for only the user and item interactions, the Factorization Machine learns latent factors for all variables, including side features, and also allows for interactions between all pairs of variables. Thus the Factorization Machine is capable of modeling complex relationships in the data. Typically, using linear_side_features=True performs better in terms of RMSE, but may require a longer training time.
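For reference, a sketch of the general factorization machine scoring rule (Rendle, 2010), of which the model above is an instance; here \(x_k\) ranges over all encoded variables (user, item, and side features):

\[\hat{y}(\mathbf{x}) = w_0 + \sum_{k} w_k x_k + \sum_{k < l} \langle {\mathbf v}_k, {\mathbf v}_l \rangle \, x_k x_l,\]

where each variable \(k\) has a weight \(w_k\) and a latent factor vector \({\mathbf v}_k\), so interactions between any pair of variables are modeled through \(\langle {\mathbf v}_k, {\mathbf v}_l \rangle\).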
num_sampled_negative_examples: For each (user, item) pair in the data, the ranking SGD solver evaluates this many randomly chosen unseen items for the negative example step. Increasing this can give better performance at the expense of speed, particularly when the number of items is large.
When ranking_regularization is larger than zero, the model samples a small set of unobserved user-item pairs and attempts to drive their rating predictions below the value specified with unobserved_rating_value. This has the effect of improving the precision-recall performance of recommended items.
Implicit Matrix Factorization
RankingFactorizationRecommender also offers the option of optimizing for ranking using the implicit matrix factorization model (solver='ials' in the examples below). The internal coefficients of the model and their interpretation are identical to the model described above; the two differ only in how the objective is optimized. Currently, this model does not incorporate any observation columns beyond the user and item ids (and the rating), nor any side data.
The model works by transforming the raw observations (or weights) \(r\) into two separate quantities with distinct interpretations: preferences \(p\) and confidence levels \(c\). The functional relationship between the weights \(r\) and the confidence is either linear or logarithmic, toggled by setting ials_confidence_scaling_type = 'linear' (the default) or 'log', respectively. The rate of increase of the confidence with respect to the weights is proportional to ials_confidence_scaling_factor (default 1.0).
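As a sketch, in the standard implicit-feedback formulation (Hu, Koren & Volinsky, 2008) that this description matches, with \(\alpha\) playing the role of ials_confidence_scaling_factor (the exact constants used internally are an assumption):

\[p_{ij} = \begin{cases} 1 & r_{ij} > 0 \\ 0 & \text{otherwise,} \end{cases} \qquad c_{ij} = 1 + \alpha r_{ij} \; \text{(linear)} \quad \text{or} \quad c_{ij} = 1 + \alpha \log(1 + r_{ij}) \; \text{(log)}.\]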
Examples
Basic usage
>>> sf = turicreate.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
...                         'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"],
...                         'rating': [1, 3, 2, 5, 4, 1, 4, 3]})
>>> m1 = turicreate.ranking_factorization_recommender.create(sf, target='rating')
For implicit data, no target column is specified:
>>> sf = turicreate.SFrame({'user': ["0", "0", "0", "1", "1", "2", "2", "2"],
...                         'movie': ["a", "b", "c", "a", "b", "b", "c", "d"]})
>>> m2 = turicreate.ranking_factorization_recommender.create(sf, 'user', 'movie')
Implicit Matrix Factorization
>>> sf = turicreate.SFrame({'user_id': ["0", "0", "0", "1", "1", "2", "2", "2"],
...                         'item_id': ["a", "b", "c", "a", "b", "b", "c", "d"],
...                         'rating': [1, 3, 2, 5, 4, 1, 4, 3]})
>>> m1 = turicreate.ranking_factorization_recommender.create(sf, target='rating',
...                                                          solver='ials')
For implicit data, no target column is specified:
>>> sf = turicreate.SFrame({'user': ["0", "0", "0", "1", "1", "2", "2", "2"],
...                         'movie': ["a", "b", "c", "a", "b", "b", "c", "d"]})
>>> m2 = turicreate.ranking_factorization_recommender.create(sf, 'user', 'movie',
...                                                          solver='ials')
Including side features
>>> user_info = turicreate.SFrame({'user_id': ["0", "1", "2"],
...                                'name': ["Alice", "Bob", "Charlie"],
...                                'numeric_feature': [0.1, 12, 22]})
>>> item_info = turicreate.SFrame({'item_id': ["a", "b", "c", "d"],
...                                'name': ["item1", "item2", "item3", "item4"],
...                                'dict_feature': [{'a': 23}, {'a': 13},
...                                                 {'b': 1},
...                                                 {'a': 23, 'b': 32}]})
>>> m2 = turicreate.ranking_factorization_recommender.create(sf,
...                                                          target='rating',
...                                                          user_data=user_info,
...                                                          item_data=item_info)
Optimizing for ranking performance
Create a model that pushes predicted ratings of unobserved user-item pairs toward 1 or below.
>>> m3 = turicreate.ranking_factorization_recommender.create(sf,
...                                                          target='rating',
...                                                          ranking_regularization=0.1,
...                                                          unobserved_rating_value=1)
Methods
RankingFactorizationRecommender.evaluate(dataset)
    Evaluate the model's ability to make rating predictions or recommendations.

RankingFactorizationRecommender.evaluate_precision_recall(dataset)
    Compute a model's precision and recall scores for a particular dataset.

RankingFactorizationRecommender.evaluate_rmse(…)
    Evaluate the prediction error for each user-item pair in the given data set.

RankingFactorizationRecommender.export_coreml(…)
    Export the model in Core ML format.

RankingFactorizationRecommender.get_num_items_per_user()
    Get the number of items observed for each user.

RankingFactorizationRecommender.get_num_users_per_item()
    Get the number of users observed for each item.

RankingFactorizationRecommender.get_similar_items([…])
    Get the k most similar items for each item in items.

RankingFactorizationRecommender.get_similar_users([…])
    Get the k most similar users for each entry in users.

RankingFactorizationRecommender.predict(dataset)
    Return a score prediction for the user ids and item ids in the provided data set.

RankingFactorizationRecommender.recommend([…])
    Recommend the k highest scored items for each user.
RankingFactorizationRecommender.recommend_from_interactions(…)
    Recommend the k highest scored items based on the given interactions.
RankingFactorizationRecommender.save(location)
    Save the model.

RankingFactorizationRecommender.summary([output])
    Print a summary of the model.