🤗 accelerate: A low-level distributed training API

Hosted by Zach Mueller

Thu, Jul 17, 2025

5:00 PM UTC (45 minutes)

Virtual (Zoom)

Free to join

Go deeper with a course

Scratch to Scale: Large Scale Training in the Modern World
Zachary Mueller

What you'll learn

Why low-level frameworks matter

Why not the transformers Trainer? Axolotl? Or other high-level wrapper APIs?

What all *can* it do?

🤗 accelerate has three faces; come learn about all of them

Learn how it helps make distributed training easier

One script, any distributed topology. How does it work?
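
As a taste of that last point: you describe your hardware to the CLI once, then launch your unmodified script against it (train.py is just a placeholder name here).

    accelerate config            # answer a few questions about your setup, once
    accelerate launch train.py   # run the same script on that configuration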

Why this topic matters

Distributed training is now the norm. Come learn about one of the easiest frameworks to get started with: 🤗 accelerate. Keep your PyTorch code and tweak just a few lines to get the most out of your multiple GPUs.
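
Below is a minimal sketch of what that few-line tweak looks like. The model, optimizer, and data are throwaway stand-ins, not anything from the talk; the accelerate-specific pieces are Accelerator(), prepare(), and accelerator.backward().

    import torch
    from accelerate import Accelerator

    accelerator = Accelerator()

    # Stand-in training objects; swap in your own model, optimizer, and data.
    model = torch.nn.Linear(128, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(
            torch.randn(256, 128), torch.randint(0, 2, (256,))
        ),
        batch_size=32,
    )

    # The few extra lines: prepare() wraps each object for whatever hardware
    # the script was launched on (CPU, a single GPU, or many GPUs).
    model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()

Run it with python and it trains on a single device; run it with accelerate launch and the same loop spreads across every GPU the launcher was configured for.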

You'll learn from

Zach Mueller

Instructor, 🤗 Technical Lead

I've been in the field for almost a decade. I started out in the fast.ai community, quickly learning how modern-day training pipelines are built and operated. From there I moved to Hugging Face, where I'm the Technical Lead on the accelerate project and maintain the transformers Trainer.


Throughout my career I've written numerous blog posts and courses, and given talks on distributed training and PyTorch.

Hugging Face
Accenture
