Building AI-Native Products
Build Mixture-of-Experts for LLMs with PyTorch from Scratch
Hosted by Damien Benveniste, PhD
Fri, Jun 6, 2025
4:30 PM UTC (1 hour)
Virtual (Zoom)
Free to join
What you'll learn
Demystify Mixture‑of‑Experts (MoE) Routing
Build a Sparse MoE Layer From Scratch
Add Load‑Balancing Loss for Stable Training
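The three pieces above fit together in a single layer: a router scores each token against every expert, only the top-k experts run, and an auxiliary loss keeps expert usage balanced. As a preview, here is a minimal PyTorch sketch of such a layer; the class names (`Expert`, `SparseMoE`) and the Switch-Transformer-style auxiliary loss are illustrative choices, not necessarily the exact design covered in the session.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    """A small feed-forward network; each expert has its own parameters."""
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x):
        return self.net(x)

class SparseMoE(nn.Module):
    """Sparse MoE layer: route each token to its top-k experts."""
    def __init__(self, d_model, d_hidden, num_experts=8, top_k=2):
        super().__init__()
        self.num_experts = num_experts
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # token -> expert logits
        self.experts = nn.ModuleList(
            Expert(d_model, d_hidden) for _ in range(num_experts)
        )

    def forward(self, x):
        B, S, D = x.shape
        tokens = x.reshape(-1, D)                      # (N, D), N = B*S
        logits = self.router(tokens)                   # (N, E)
        probs = F.softmax(logits, dim=-1)              # routing probabilities
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)  # (N, k)
        # Renormalize so the k selected gate weights sum to 1 per token.
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = topk_idx == e                       # (N, k): routed here?
            token_ids = mask.any(dim=-1).nonzero(as_tuple=True)[0]
            if token_ids.numel() == 0:
                continue  # no tokens routed to this expert
            weight = (topk_probs * mask).sum(dim=-1)[token_ids].unsqueeze(-1)
            out[token_ids] += weight * expert(tokens[token_ids])

        # Load-balancing auxiliary loss (Switch-Transformer style):
        # penalize the dot product of the fraction of tokens whose top-1
        # choice is each expert with the mean router probability, pushing
        # the router toward uniform expert usage.
        frac_tokens = F.one_hot(topk_idx[:, 0], self.num_experts).float().mean(dim=0)
        mean_probs = probs.mean(dim=0)
        aux_loss = self.num_experts * (frac_tokens * mean_probs).sum()
        return out.reshape(B, S, D), aux_loss
```

In training, the auxiliary loss is added to the task loss with a small coefficient, so the gradient nudges the router toward balanced assignments without overwhelming the main objective.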
Why this topic matters
You'll learn from
Damien Benveniste, PhD
Former Meta ML Tech Lead, CEO @ AiEdge
Welcome, my name is Damien Benveniste! After a PhD in theoretical physics, I started my career in Machine Learning more than 10 years ago.
I have been a Data Scientist, Machine Learning Engineer, and Software Engineer. I have led Machine Learning projects across diverse industry sectors, including AdTech, market research, financial advising, cloud management, online retail, marketing, credit-score modeling, data storage, healthcare, and energy valuation. Previously, I was a Machine Learning Tech Lead at Meta, where I worked on automating model optimization at scale for Ads ranking.
I am now training the next generation of Machine Learning engineers.