Noise Contrastive Estimation for Scalable Linear Models for One-Class Collaborative Filtering

Authors: Ga Wu, Maksims Volkovs, Chee Loong Soon, Scott Sanner, Himanshu Rai

8 pages

Abstract: Previous highly scalable one-class collaborative filtering methods such as Projected Linear Recommendation (PLRec) have advocated using fast randomized SVD to embed items into a latent space, followed by linear regression to learn a personalized recommendation model per user. Unfortunately, naive SVD embedding methods often exhibit a popularity bias that skews their ability to accurately embed niche items. To address this, we leverage insights from Noise Contrastive Estimation (NCE) to derive a closed-form, efficiently computable "depopularized" embedding. While such a depopularized embedding is not ideal for direct recommendation (as in PureSVD), since popularity still plays an important role in recommendation, we find that embedding followed by linear regression to learn personalized user models, in a novel method we call NCE-PLRec, leverages the improved item embedding of NCE while compensating for the removed popularity signal in the final recommendations. An analysis of the recommendation popularity distribution demonstrates that NCE-PLRec distributes its recommendations uniformly over the popularity spectrum, while other methods exhibit distinct biases towards specific popularity subranges that artificially restrict their recommendations. Empirically, NCE-PLRec outperforms state-of-the-art methods as well as various ablations of itself on a variety of large-scale recommendation datasets.
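The pipeline described in the abstract can be sketched in three steps: depopularize the implicit-feedback matrix, embed items via truncated SVD, then fit a linear (ridge-regression) model per user on the embedded items. The sketch below is a minimal illustration, not the paper's exact derivation: the shifted-PMI-style depopularization transform, the `weight` and `lam` parameters, and the use of a dense full SVD are all simplifying assumptions for small toy data (the paper derives its closed form from NCE and uses fast randomized SVD at scale).

```python
import numpy as np

def nce_plrec_sketch(R, rank=2, weight=1.0, lam=1.0):
    """Hypothetical sketch of an NCE-PLRec-style pipeline.

    R : binary user-item interaction matrix (dense here for simplicity).
    Returns an n_users x n_items matrix of predicted scores.
    """
    n_users, n_items = R.shape
    pop = R.sum(axis=0)                        # item popularity counts

    # Step 1 (assumption): down-weight popular items with a
    # shifted-PMI-style transform, clipped to stay non-negative.
    ratio = n_users / (weight * np.maximum(pop, 1.0))
    D = R * np.log(np.maximum(ratio, 1.0))

    # Step 2: truncated SVD of the depopularized matrix gives the
    # item embedding V (n_items x rank). At scale this would be a
    # fast randomized SVD on a sparse matrix.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    V = Vt[:rank].T * s[:rank]

    # Step 3 (PLRec step): ridge regression per user on the item
    # embeddings, fit against the ORIGINAL interactions R so that
    # popularity information re-enters the final recommendations.
    G = V.T @ V + lam * np.eye(rank)
    W = np.linalg.solve(G, V.T @ R.T)          # rank x n_users
    return (V @ W).T                           # n_users x n_items scores
```

Because the per-user regression has a closed-form solution, the whole pipeline stays linear-algebraic and embarrassingly parallel across users, which is the source of PLRec's scalability.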

Submitted to arXiv on 02 Nov. 2018
