LLM Engineering - The Foundations

5.0

(5 ratings)

·

4 Weeks

·

Cohort-based Course

Master Large Language Model architecture, pretraining, prompt engineering, fine-tuning, and alignment. From the Transformer to RLAIF.

Course overview

Understand LLM training from first principles

Data science and engineering teams around the world are being asked to rapidly build LLM applications through prompt engineering and supervised fine-tuning. Some companies are even working to train their own proprietary domain-specific LLMs from scratch!


In order to build a "ChatGPT" for your customers or internal stakeholders, using your own proprietary data, you'll want to understand how GPT-style models are actually built, step-by-step.


From closed-source models like OpenAI's GPT series, Google's PaLM models, and Anthropic's Claude, to open-source models like LLaMA 2-70B, Mistral-7B, 01.AI's Yi-34B, Mosaic's MPT-30B, and Falcon-180B, these systems share the same core: a decoder-only architecture, built in essentially the same way.


This course will provide you with the foundational concepts and code you need to demystify how these models are created, from soup to nuts, and to actually get started training, fine-tuning, and aligning your own LLMs.


From there, it's up to you to make the business case, organize the data, and secure the compute to give your company and your career a competitive LLM advantage.

Who is this course for

01

Aspiring AI Engineers looking to explore new career opportunities in Generative AI

02

Data scientists and Machine Learning Engineers who want to train their own LLMs

03

Stakeholders interested in training and deploying proprietary LLMs and applications

What you’ll get out of this course

Understand Large Language Model transformer architectures and how they process text for next-word prediction


Grok how base-model next-word prediction is improved for chat models and aligned with human preferences through RL
Deep dive on quantization strategies for more efficiently leveraging and fine-tuning LLMs, including LoRA and QLoRA
Learn the story of OpenAI's Generative Pre-trained Transformer models, from GPT to GPT-4 Turbo


Understand how Meta's LLaMA 2 was trained and why it serves as a benchmark for open-source LLM developers
Train models yourself each class using the classic techniques of pretraining, fine-tuning, and reinforcement learning


Work with other talented builders to bring your very own custom GPT-style Large Language Model to life
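To give a flavor of one technique from the list above, here is an illustrative NumPy sketch of the core LoRA idea — adapting a frozen weight matrix with a trainable low-rank update B @ A instead of updating the full matrix. All dimensions, names, and values here are made up for the example, not course materials:

```python
import numpy as np

# LoRA sketch: instead of updating a full weight matrix W (d_out x d_in),
# train a low-rank update B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 64, 64, 4
rng = np.random.default_rng(42)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: update starts at zero
alpha = 8.0                                 # scaling hyperparameter

def lora_forward(x):
    # Output is the frozen path plus the scaled low-rank path.
    return x @ W.T + (x @ (B @ A).T) * (alpha / r)

x = rng.standard_normal((1, d_in))
print(lora_forward(x).shape)                          # (1, 64)
print("trainable params:", A.size + B.size, "vs full:", W.size)
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen base model, while training only 512 parameters instead of 4,096 in this toy setup.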


Course syllabus

Jan 10—Feb 2
Mar 19—Apr 12
10 modules • 28 lessons • 15 projects
  • Week 1

    Jan 10—Jan 14

    Modules

    • The Attention Mechanism

    • The Transformer

  • Week 2

    Jan 15—Jan 21

    Modules

    • Unsupervised Pre-Training

    • Supervised Fine-Tuning

    • Prompt Engineering

  • Week 3

    Jan 22—Jan 28

    Modules

    • Instruction Tuning

    • Reinforcement Learning with Human Feedback (RLHF)

  • Week 4

    Jan 29—Feb 2

    Modules

    • Reinforcement Learning with AI Feedback

    • Demo Day

  • Post-Course

    Modules

    • From LLM Engineering to LLM Operations
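As a preview of Week 1's module on the attention mechanism, here is a minimal, illustrative NumPy sketch of scaled dot-product attention — the core operation the Transformer is built from. Shapes and values are arbitrary; this is a teaching sketch, not course code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

# Tiny example: a sequence of 3 tokens, head dimension d_k = 4
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the value vectors, with mixing weights determined by how well that token's query matches each key.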


Prerequisites

A strong background in fundamental Machine Learning and Deep Learning

Understanding supervised learning, unsupervised learning, and neural network architectures is required. Introductory NLP and CV knowledge is encouraged.

An ability to program in Python within a Jupyter Notebook environment

You should understand basic Python syntax and constructs, and be comfortable training and evaluating simple ML & DL models using train, dev, and test sets.
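For a quick self-check on that last prerequisite, here is an illustrative NumPy sketch of an 80/10/10 train/dev/test split over shuffled indices (sizes and the split ratio are arbitrary for the example):

```python
import numpy as np

# Shuffle example indices, then cut them into 80% train / 10% dev / 10% test.
rng = np.random.default_rng(0)
n = 100
idx = rng.permutation(n)
train, dev, test = np.split(idx, [int(0.8 * n), int(0.9 * n)])
print(len(train), len(dev), len(test))  # 80 10 10
```

The dev set is used for model selection and hyperparameter tuning; the test set is held out until the very end for an unbiased final evaluation.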

Free resource

📅 Detailed Schedule!

Understand how everything comes together in the course to provide a holistic overview of foundational LLM Engineering.


Get all the details about the assignments, and also get links to the papers that we'll be discussing!

Send me the deets ✌️

What people are saying

Being part of LLM Ops Cohort 1 (LLMO1) accelerated my learning and set me up for success. I'm beyond excited for this next cohort focused on LLM Engineering (LLME1). It's great to learn with a community and folks who have been there and done that!
Harpreet Sahota

Deep Learning Developer Relations Manager
During LLM Ops cohort 1 (LLMO1), we covered everything from creating to evaluating, deploying, and monitoring LLM applications. On top of that, I got into AI Makerspace's community of learners and instructors who are always ready to support and collaborate. I highly recommend this course to anyone looking to master the world of LLM engineering.
Juan Olano

Founder, Senior AI Engineer

Meet your instructors

Dr. Greg Loughnane

Founder & CEO @ AI Makerspace

I've worked as an AI product manager, university professor, data science consultant, AI startup advisor, and ML researcher, and I've been a TEDx and keynote speaker, lecturing since 2013.


From 2021 to 2023 I worked at FourthBrain (backed by Andrew Ng's AI Fund) to build industry-leading online bootcamps in ML Engineering and ML Operations (MLOps).


🔗 Resource links: Deeplearning.ai demos, AI Makerspace demos, LinkedIn, Twitter, YouTube, Blog.

Chris "The LLM Wizard πŸͺ„" Alexiuk

Chris "The LLM Wizard πŸͺ„" Alexiuk

Co-Founder & CTO @ AI Makerspace

I'm currently working as the Founding Machine Learning Engineer at Ox, but in my off time you can find me creating Machine Learning content, whether for AI Makerspace, FourthBrain, or my YouTube channel!


My motto is "Build, build, build!", and I'm excited to get building with all of you!


🔗 Resource links: YouTube, LinkedIn, Twitter

Join an upcoming cohort

LLM Engineering - The Foundations

Cohort 2

$1,250 USD

Dates

Jan 10—Feb 2, 2024

Application Deadline

Jan 10, 2024

Cohort 3

$1,250 USD

Dates

Mar 19—Apr 12, 2024

Application Deadline

Mar 19, 2024

Bulk purchases

Course schedule

4-6 hours per week
  • Class!

    Tuesdays & Thursdays, 4:00-6:00pm PT

    • Jan 9th, 11th
    • Jan 16th, 18th
    • Jan 23rd, 25th
    • Jan 30th, Feb 1st
  • Programming Assignments

    2-4 hours per week

    Each class period, we will get hands-on with Python coding homework!

  • Office Hours

    Mondays and Fridays

    • Greg Loughnane: Mondays @ 8 AM PT
    • Chris Alexiuk: Fridays @ 3 PM PT

Build. Ship. Share.

Get hands-on with code, every class

We're here to teach concepts plus code. Never one or the other.

Pair programming made fun and easy

Grow your network. Build together. Feel the difference that expert facilitation makes.

Meet your accountability buddies

Join a community of doers, aimed in the same career direction as you are.

Frequently Asked Questions

What happens if I can’t make a live session?
I work full-time, what is the expected time commitment?
What’s the refund policy?
What if I'm not yet proficient in Python and the foundations of Machine Learning?
What if I don't know Git or how to use GitHub?
How can I learn more about AI Makerspace?
Are there any volume discounts if I want my whole team to take the course?