user9291966

Reputation: 11

Obtaining feature importance/sensitivity for model interpretability

I'm new to recommender models and I'm using LightFM for a project. I'm building a model for customer like/dislike recommendations (no explicit ratings involved). Are there any options for model interpretability in such cases?
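For context, my setup looks roughly like this. The IDs, feature tags, `likes`, `user_feature_data`, and `item_feature_data` are placeholders for my actual data, not real variable names:

```python
# Rough sketch of my current setup (all data variables below are placeholders).
from lightfm import LightFM
from lightfm.data import Dataset

dataset = Dataset()
dataset.fit(
    users=user_ids,                       # iterable of user identifiers
    items=item_ids,                       # iterable of item identifiers
    user_features=all_user_feature_tags,  # e.g. ["age:25-34", "country:US", ...]
    item_features=all_item_feature_tags,  # e.g. ["category:shoes", "brand:acme", ...]
)

# likes: iterable of (user_id, item_id, +1 / -1) tuples for like / dislike
interactions, weights = dataset.build_interactions(likes)
user_features = dataset.build_user_features(user_feature_data)
item_features = dataset.build_item_features(item_feature_data)

model = LightFM(no_components=30, loss="logistic")  # logistic loss for +1/-1 signals
model.fit(
    interactions,
    sample_weight=weights,
    user_features=user_features,
    item_features=item_features,
    epochs=20,
)
```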

I understand LightFM is a hybrid approach (content-based & collaborative filtering), but is there a way I can rank user/item features by their importance to the model's predictions, or otherwise understand the impact of individual user/item features on predictions? In regular ML models I would assess this with, for example, permutation feature importance or partial dependence plots.
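To make it concrete, below is the kind of analysis I have in mind, naively adapted to LightFM: shuffle one item-feature column at a time and look at the drop in precision@k. Here `model`, `test`, `item_features`, and `feature_names` are placeholders for my trained model, held-out interactions, item feature matrix, and feature labels, and I'm not sure whether this is actually a sound way to interrogate a hybrid factorization model, which is why I'm asking:

```python
# Naive permutation-importance attempt for LightFM item features:
# permute one feature column at a time and measure the drop in precision@k.
import numpy as np
import scipy.sparse as sp
from lightfm.evaluation import precision_at_k

def item_feature_importance(model, test, item_features, feature_names, k=10, seed=42):
    rng = np.random.default_rng(seed)
    dense = item_features.toarray()

    # Baseline ranking quality with the unmodified feature matrix.
    # (Pass user_features= here too if the model was trained with them.)
    baseline = precision_at_k(model, test, item_features=item_features, k=k).mean()

    importances = {}
    for j, name in enumerate(feature_names):
        shuffled = dense.copy()
        rng.shuffle(shuffled[:, j])  # break the link between items and this feature
        score = precision_at_k(
            model, test, item_features=sp.csr_matrix(shuffled), k=k
        ).mean()
        importances[name] = baseline - score  # drop in precision@k
    return importances
```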

Please let me know if anyone has done a related analysis, or if there is simply no way to interpret the model in this case.

Upvotes: 1

Views: 302

Answers (0)
