The talk focuses on one of the hardest problems in fashion recommendation systems: cold start, that is, new users and new items in rapidly changing catalogs. It covers how recent advances in large language models (LLMs) enable fundamentally different approaches to representation, understanding, and bootstrapping recommendations at scale.
The session will share practical system designs, trade-offs, and lessons learned from using LLMs to address cold start across candidate generation and ranking, including how we combine textual, visual, and contextual signals to reduce dependence on historical interaction data. The emphasis will be on what translated to measurable online impact, and where LLM-based approaches helped—or failed—compared to traditional heuristics and embedding-based methods.
I believe this talk would resonate with ML practitioners, recommender system engineers, and applied researchers, and would complement the conference’s focus on recommender systems, applied machine learning, and real-world deployments.