Modern IR Evaluation in the Generative RAG Era
Hosted by Nandan Thakur and Hamel Husain
Wed, Jul 2, 2025
7:30 PM UTC (30 minutes)
Virtual (Zoom)
Free to join
What you'll learn
Traditional Retrieval Evaluations Are Stale
Rigorous Academic Evaluations Still Power Real-World Evals
Evaluation Research Is Evolving To Meet New Needs
Why this topic matters
You'll learn from
Nandan Thakur
RAG researcher @ UWaterloo. Creator of BEIR and MIRACL benchmarks
Nandan Thakur is a fourth-year PhD student at the University of Waterloo, advised by Professor Jimmy Lin, working on efficient embedding models and realistic evaluation benchmarks. His research has been highly influential in pioneering new benchmarks for information retrieval, most notably BEIR and MIRACL. His current work explores novel ways to evaluate retrieval in the age of LLMs. He has previously interned at Google, Vectara, and Databricks, and collaborated with industry partners including Snowflake, Microsoft, and Huawei.
Hamel Husain
ML Engineer with 20 years of experience
Hamel is a machine learning engineer with over 20 years of experience. He has worked with innovative companies such as Airbnb and GitHub, where his early LLM research on code understanding was used by OpenAI. He has also led and contributed to numerous popular open-source machine learning tools. Hamel is currently an independent consultant helping companies build AI products.
Learn directly from Nandan Thakur and Hamel Husain