In recent years, recommender systems have advanced rapidly, and embedding learning for users and items plays a critical role in them. A standard method learns a unique embedding vector for each user and item. However, this method has two important limitations in real-world applications: (1) it struggles to learn embeddings that generalize well for users and items with few interactions, and (2) it may incur prohibitively high memory costs as the number of users and items grows. Existing approaches either address only one of these limitations or compromise overall performance. In this article, we propose Clustered Embedding Learning (CEL) as an integrated solution to both problems. CEL is a plug-and-play embedding learning framework that can be combined with any differentiable feature interaction model. It achieves improved performance, especially for cold users and items, at reduced memory cost. CEL enables automatic and dynamic clustering of users and items in a top-down fashion, where clustered entities jointly learn a shared embedding. The accelerated version of CEL has optimal time complexity, which supports efficient online updates. Theoretically, we prove the identifiability and the existence of a unique optimal number of clusters for CEL in the context of nonnegative matrix factorization. Empirically, we validate the effectiveness of CEL on three public datasets and one business dataset, showing consistently superior performance against state-of-the-art methods. In particular, incorporating CEL into the business model brings an improvement of \(+0.6\%\) in AUC, which translates into a significant revenue gain, while shrinking the embedding table by a factor of 2,650. Additionally, we demonstrate that, when memory permits, learning a personalized embedding for each user and item around its cluster center is feasible and can further boost performance. In this article, we enhance and extend the personalization technique initially proposed in our earlier work [4], which introduced an offset regularization that prevents personalized embeddings from drifting too far from the central (cluster) embedding, thereby mitigating overfitting. However, [4] applied a uniform regularization weight across all embeddings, which is suboptimal given the considerable variation in the number of interactions associated with each embedding. To address this, we investigate in this article strategies for non-uniform offset regularization that adjust the regularization weight according to the number of associated interactions, yielding significant improvements over uniform offset regularization. Furthermore, we extend CEL into Meta-CEL, which factors future personalization into cluster optimization and leads to additional gains in personalization performance.
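To make the shared-embedding idea concrete, the following is a minimal sketch (under our own assumptions, not the article's implementation; the names \texttt{ClusteredEmbedding} and \texttt{assignment} are illustrative) of how a per-cluster table replaces a per-entity one, so that the number of learnable rows depends on the number of clusters rather than the number of entities:

\begin{verbatim}
import torch
import torch.nn as nn

class ClusteredEmbedding(nn.Module):
    """Shared-embedding lookup: entities assigned to the same cluster
    share one embedding row, so the table has num_clusters rows
    instead of num_entities rows."""

    def __init__(self, num_entities: int, num_clusters: int, dim: int):
        super().__init__()
        # One learnable row per cluster instead of per entity.
        self.cluster_table = nn.Embedding(num_clusters, dim)
        # Entity -> cluster assignment; in CEL this would be updated by
        # the top-down clustering procedure, not by gradient descent.
        self.register_buffer(
            "assignment", torch.zeros(num_entities, dtype=torch.long)
        )

    def forward(self, entity_ids: torch.Tensor) -> torch.Tensor:
        # Route each entity id through its cluster id.
        return self.cluster_table(self.assignment[entity_ids])

# Usage: one million users compressed into 512 shared rows of width 16.
emb = ClusteredEmbedding(num_entities=1_000_000, num_clusters=512, dim=16)
vecs = emb(torch.tensor([3, 42, 999_999]))  # shape (3, 16)
\end{verbatim}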
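Likewise, the offset regularization discussed above can be sketched as follows (an illustrative formulation, not necessarily the article's exact objective):
\[
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{task}} \;+\; \sum_{i} \lambda_i \,\bigl\lVert \mathbf{e}_i - \mathbf{c}_{\pi(i)} \bigr\rVert_2^2 ,
\]
where \(\mathbf{e}_i\) is the personalized embedding of entity \(i\), \(\mathbf{c}_{\pi(i)}\) is the embedding of its assigned cluster, and \(\lambda_i\) is a per-entity weight. Uniform offset regularization sets \(\lambda_i = \lambda\) for all \(i\); the non-uniform variant instead makes \(\lambda_i\) a function of the entity's interaction count \(n_i\) (for instance, plausibly larger for entities with few interactions, whose personalized embeddings are most prone to overfitting).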