LLM Engineering - The Foundations


(13 ratings)

4 Weeks

Cohort-based Course

LLM Engineering has expanded into our AI Engineering Bootcamp! Visit the link for more info! maven.com/aimakerspace/ai-eng-bootcamp

Course overview

Understand LLM training from first principles

Master Large Language Model architecture, pretraining, prompt engineering, fine-tuning, and alignment. From the Transformer to RLAIF.

Data science and engineering teams around the world are being asked to rapidly build LLM applications through prompt engineering and supervised fine-tuning. Some companies are even working to train their own proprietary domain-specific LLMs from scratch!

In order to build a "ChatGPT" for your customers or internal stakeholders, using your own proprietary data, you'll want to understand how GPT-style models are actually built, step-by-step.

From closed-source models like OpenAI's GPT series, Google's PaLM models, and Anthropic's Claude, to open-source models like LLaMA 2-70B, Mistral-7B, 01.AI's Yi-34B, Mosaic's MPT-30B, and Falcon-180B, these models are all built the same way at their core: as decoder-only transformer architectures.

This course will provide you with the foundational concepts and code you need to demystify how these models are created, from soup to nuts, and to actually get started training, fine-tuning, and aligning your own LLMs.

From there, it's up to you to make the business case, organize the data, and secure the compute to give your company and your career a competitive LLM advantage.

Who is this course for


Aspiring AI Engineers looking to explore new career opportunities in Generative AI


Data scientists and Machine Learning Engineers who want to train their own LLMs


Stakeholders interested in training and deploying proprietary LLMs and applications

What you'll get out of this course

Understand Large Language Model transformer architectures and how they process text for next word prediction

Grok how base-model next word prediction is improved for chat models and aligned with humans through RL
Deep dive on quantization strategies for more efficiently leveraging and fine-tuning LLMs, including LoRA and QLoRA
Learn the story of OpenAI's Generative Pre-trained Transformer models, from GPT to GPT-4 Turbo

Understand how Meta's LLaMA 2 was trained and why it serves as a benchmark for open-source LLM developers
Train models yourself each class using the classic techniques of pretraining, fine-tuning, and reinforcement learning

Work with other talented builders to bring your very own custom GPT-style Large Language Model to life

Explore frontiers of LLM Engineering including Mixture of Experts (MoE) models and Small Language Models (SLMs)
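
To give a flavor of the first outcome above, here is a toy, single-head causal self-attention pass in NumPy: the mechanism that lets a decoder-only model condition each next-token prediction only on earlier tokens. All sizes and weights here are made-up toy values, not anything from the course materials.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention with a causal mask,
    so each token attends only to itself and earlier tokens."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    T = X.shape[0]
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # future positions
    scores[mask] = -np.inf                            # masked out before softmax
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
T, d = 5, 8  # 5 tokens, model width 8 (arbitrary toy sizes)
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, w = causal_self_attention(X, Wq, Wk, Wv)

assert out.shape == (T, d)
assert np.allclose(w.sum(axis=-1), 1.0)   # each row is a probability distribution
assert np.allclose(np.triu(w, k=1), 0.0)  # no attention to future tokens
```

In a real GPT-style model this block is repeated across many heads and layers, but the causal-masking logic that enables next-word prediction is exactly this.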
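
The LoRA idea from the quantization and PEFT outcome can likewise be sketched in a few lines: keep the pretrained weight matrix frozen and train only a low-rank pair of matrices whose product is added to the layer's output. This is a minimal NumPy sketch with invented toy dimensions, not the course's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, r, alpha = 16, 4, 8  # toy model width, LoRA rank, and scaling (assumed values)

W = rng.normal(size=(d_model, d_model))   # frozen pretrained weight
A = rng.normal(size=(r, d_model)) * 0.01  # trainable down-projection
B = np.zeros((d_model, r))                # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha, r):
    """y = x W^T + (alpha/r) * x A^T B^T -- frozen base path plus low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, d_model))
y0 = lora_forward(x, W, A, B, alpha, r)

# Because B starts at zero, the adapter contributes nothing at initialization:
# the fine-tuned model begins exactly where the pretrained model left off.
assert np.allclose(y0, x @ W.T)
```

Only A and B (2 * r * d_model parameters) are trained, instead of the full d_model * d_model matrix; QLoRA applies the same trick on top of a quantized frozen W.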

This course includes

Interactive live sessions

Lifetime access to course materials

10 in-depth lessons

Direct access to instructor

14 projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven's guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Expand all modules
  • Week 1

    Mar 19–Mar 24

    Week dates are set to instructor's time zone


    • Session 1: The Transformer

    • Session 2: From GPT to GPT-2

  • Week 2

    Mar 25–Mar 31



    • Session 3: From GPT-2 to GPT-3

    • Session 4: Quantization and PEFT-LoRA

  • Week 3

    Apr 1–Apr 7



    • Session 5: Alignment I: Prompt Engineering & Instruction Tuning

    • Session 6: Alignment II: Reinforcement Learning with Human Feedback (RLHF)

  • Week 4

    Apr 8–Apr 11



    • Session 7: Reinforcement Learning with AI Feedback (RLAIF)

    • Session 8: Frontiers of LLM Engineering

  • Post-Course


    • From LLM Engineering to LLM Operations

    • Demo Day





A strong background in fundamental Machine Learning and Deep Learning

An understanding of supervised learning, unsupervised learning, and neural network architectures is required. Introductory NLP and CV knowledge is encouraged.

An ability to program in Python within a Jupyter Notebook environment

Understand basic Python syntax and constructs. You should be comfortable training and evaluating simple ML & DL models using train, test, and dev sets.

Free resource

📅 Detailed Schedule!

Understand how everything comes together in the course to provide a holistic overview of how LLMs are engineered.

Get all the details about the assignments, associated papers, and key concepts you'll learn!

Send me the deets ✌️

Meet your instructors

Dr. Greg Loughnane

Founder & CEO @ AI Makerspace

I've worked as an AI product manager, university professor, data science consultant, AI startup advisor, and ML researcher; TEDx & keynote speaker, lecturing since 2013.

From 2021-2023 I worked at FourthBrain (backed by Andrew Ng's AI Fund) to build industry-leading online bootcamps in ML Engineering and ML Operations (MLOps):

🔗 Resource links: Deeplearning.ai demos, AI Makerspace demos, LinkedIn, Twitter, YouTube, Blog.

Chris "The LLM Wizard 🪄" Alexiuk

Co-Founder & CTO @ AI Makerspace

I'm currently working as the Founding Machine Learning Engineer at Ox - but in my off time you can find me creating content for Machine Learning: either for the AI Makerspace, FourthBrain, or my YouTube Channel!

My motto is "Build, build, build!", and I'm excited to get building with all of you!

🔗 Resource links: YouTube, LinkedIn, Twitter

Be the first to know about upcoming cohorts

LLM Engineering - The Foundations


Bulk purchases

Course schedule

4-6 hours per week
  • Class!

    Tuesdays & Thursdays, 4:00-6:00pm PT

    • Jan 9th, 11th
    • Jan 16th, 18th
    • Jan 23rd, 25th
    • Jan 30th, Feb 1st
  • Programming Assignments

    2-4 hours per week

    Each class period, we will get hands-on with Python coding homework!

  • Office Hours

    Mondays and Fridays

    • Greg Loughnane: Mondays @ 8 AM PT
    • Chris Alexiuk: Fridays @ 3 PM PT

Build. Ship. Share.

Get hands-on with code, every class

We're here to teach concepts plus code. Never one or the other.

Pair programming made fun and easy

Grow your network. Build together. Feel the difference that expert facilitation makes.

Meet your accountability buddies

Join a community of doers, aimed in the same career direction as you are.

Frequently Asked Questions

What happens if I can't make a live session?
I work full-time, what is the expected time commitment?
What's the refund policy?
What if I'm not yet proficient in Python and the foundations of Machine Learning?
What if I don't know Git or how to use GitHub?
How can I learn more about AI Makerspace?
Are there any volume discounts if I want my whole team to take the course?