xxx

Reputation: 41

How can I do cross-validation on a user-item interactions matrix for a LightFM movie recommender system?

I have an interactions matrix (scipy.sparse.csr_matrix) built from the MovieLens dataset, containing users' movie ratings, and I am building a LightFM model with item_features. Right now I have the matrix split into train and test sets, but how do I run cross-validation on it, and how do I measure how well the model performs?

!pip install lightfm
import numpy as np

from lightfm import LightFM, cross_validation
from lightfm.evaluation import precision_at_k, auc_score

# Hold out 25% of the interactions for testing.
train, test = cross_validation.random_train_test_split(user_item, test_percentage=0.25)

# Note: the k argument is the k-OS WARP parameter; with loss='warp' it is ignored
# (it is not related to precision@k).
model_lightfm = LightFM(loss='warp', learning_rate=0.01, k=10)
model_lightfm.fit(train, item_features=item_features, epochs=50)

def recommend(model, user_id):
    n_users, n_items = train.shape
    # Movies this user rated 4.5 or higher are treated as known positives.
    best_rated = ratings_df[(ratings_df.userId == user_id) & (ratings_df.rating >= 4.5)].movieId.values
    known_positives = metadata.loc[metadata['MOVIEID'].isin(best_rated)].title_clean.values

    # Score every item for this user and rank the titles in descending order.
    scores = model.predict(user_id, np.arange(n_items), item_features=item_features)
    top_items = metadata['title_clean'][np.argsort(-scores)]

    print("User %s likes:" % user_id)
    for k in known_positives[:10]:
        print(k)

    print("\nRecommended:")
    for x in top_items[:10]:
        print(x)

recommend(model_lightfm, 10)


# Evaluate on both splits. When scoring the test set, train_interactions=train
# excludes items the user already saw during training from the ranking.
train_precision = precision_at_k(model_lightfm, train, k=10, item_features=item_features).mean()
test_precision = precision_at_k(model_lightfm, test, k=10, item_features=item_features, train_interactions=train).mean()

train_auc = auc_score(model_lightfm, train, item_features=item_features).mean()
test_auc = auc_score(model_lightfm, test, item_features=item_features, train_interactions=train).mean()

print('Precision: train %.2f, test %.2f.' % (train_precision, test_precision))
print('AUC: train %.2f, test %.2f.' % (train_auc, test_auc))

Upvotes: 1

Views: 1341

Answers (1)

Rohit upadhyay

Reputation: 138

The evaluation metrics available in lightfm.evaluation are precision_at_k, recall_at_k, auc_score and reciprocal_rank. You are already calculating the two most commonly used ones, precision@k and AUC. How good your model is can be judged by looking at:

  1. auc_score on the test set - tells you how well the model ranks movies a user liked above movies they did not, across the whole catalogue. It rewards placing positives above negatives anywhere in the list, so it is not sensitive to what lands at the very top.

  2. precision_at_k on the test set - the fraction of the top k (10 in your case) recommendations that are known positives. Use it when you care about the quality of the top-n recommendations shown to a user. For the cross-validation part of your question, see the sketch below.
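
LightFM has no built-in k-fold splitter (the interactions are one sparse matrix, not independent rows), so a common substitute is repeated random splitting (Monte Carlo cross-validation): re-split with a different seed each round, retrain, and average the metrics. A minimal sketch, assuming user_item and item_features are the objects from your question:

import numpy as np

from lightfm import LightFM
from lightfm.cross_validation import random_train_test_split
from lightfm.evaluation import precision_at_k, auc_score

n_folds = 5
precisions, aucs = [], []

for seed in range(n_folds):
    # Draw a fresh random 75/25 split for every fold.
    train, test = random_train_test_split(
        user_item, test_percentage=0.25,
        random_state=np.random.RandomState(seed))

    model = LightFM(loss='warp', learning_rate=0.01)
    model.fit(train, item_features=item_features, epochs=50)

    # Passing train_interactions excludes already-seen items from the test ranking.
    precisions.append(precision_at_k(
        model, test, k=10, item_features=item_features,
        train_interactions=train).mean())
    aucs.append(auc_score(
        model, test, item_features=item_features,
        train_interactions=train).mean())

print('Precision@10: %.3f +/- %.3f' % (np.mean(precisions), np.std(precisions)))
print('AUC: %.3f +/- %.3f' % (np.mean(aucs), np.std(aucs)))

The mean gives you the expected score, and the standard deviation tells you how sensitive the result is to any particular split.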

Upvotes: 0
