Fine-Tuning LLMs

5.0 (6) · 5 Weeks · Cohort-based Course

Achieve optimal performance from your fine-tuned LLM.

Previous Clients Include

Uber
Amplitude
Anthropic
Amazon Web Services

Course overview

Fine-tuning that *actually* works

A properly fine-tuned LLM can outperform leading foundation models on your specific task. However, achieving these results is not as simple as making an API call. In this course, we'll demystify fine-tuning by working through the entire process, step by step.


Step 1: Curate Data

Step 2: Train Model

Step 3: Evaluate Results

Step 4: Apply Guardrails

Step 5: Deploy Model

Step 6: Improve Results


For each step, I will provide proven strategies and real-world case studies from my experience as an AI consultant.


When it comes to fine-tuning, the devil is in the details. Practitioners must be precise and exacting to reach peak performance. I will provide hands-on help so that students successfully build an end-to-end fine-tuning solution during the course project.

Who is this course for

01

Software engineers and data scientists who have dabbled with LLMs and are considering a career switch to AI.

02

AI and ML engineers who want to enhance their skillset.

03

Startup teams that are evaluating whether fine-tuning would benefit their company.

What you’ll get out of this course

Curate High-Quality Data

Create a fine-tuning dataset that is ideal for your use case.


Adjust your dataset to handle nuances and edge cases.
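
For a taste of what curation involves, here is a minimal sketch of one common dataset format: instruction-style prompt/completion pairs stored as JSONL. The exact schema depends on the training framework you choose, and the example prompts below are purely illustrative.

    import json

    # Hypothetical examples: prompt/completion pairs in JSONL format.
    # Deliberately include edge cases (empty or ambiguous inputs) alongside the happy path.
    examples = [
        {"prompt": "Summarize this support ticket: 'App crashes when I upload a photo.'",
         "completion": "Bug report: photo upload crashes the app; needs device info and steps to reproduce."},
        {"prompt": "Summarize this support ticket: ''",
         "completion": "The ticket is empty; ask the user to describe their issue."},
    ]

    with open("train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")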

Train a Model

Apply the QLoRA algorithm to fine-tune a model (and understand what's actually happening under the hood!).


Conduct hyperparameter tuning to optimize training for your use case.
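
To give a flavor of the training step, here is a minimal QLoRA sketch using one common open-source stack (Hugging Face transformers, peft, and bitsandbytes). The base model and hyperparameters shown are illustrative placeholders, not the course's prescribed settings.

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    # Load the base model in 4-bit precision (the "Q" in QLoRA).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        "meta-llama/Llama-2-7b-hf",   # placeholder base model; choose one suited to your use case
        quantization_config=bnb_config,
        device_map="auto",
    )
    model = prepare_model_for_kbit_training(model)

    # Attach small low-rank adapters (the "LoRA" part); only these are trained.
    lora_config = LoraConfig(
        r=16,                          # illustrative hyperparameters, tuned per use case
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of total parameters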

Evaluate Results

Design a test suite to systematically evaluate model results.


Identify weaknesses in your model and determine how to fix them.
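
As a rough illustration of a systematic test suite, the sketch below runs a fixed set of prompts with checkable expectations against any text-generation function. The generate callable and the example cases are placeholders you would replace with your own model and use case.

    # Hypothetical test suite: fixed prompts with checkable expectations,
    # rerun after every change to the dataset or training setup.
    test_cases = [
        {"prompt": "Classify the sentiment: 'I love this product.'", "expect": "positive"},
        {"prompt": "Classify the sentiment: 'Terrible support experience.'", "expect": "negative"},
    ]

    def evaluate(generate):
        """`generate` is a placeholder for your model's prompt -> text function."""
        passed = 0
        for case in test_cases:
            output = generate(case["prompt"]).lower()
            if case["expect"] in output:
                passed += 1
            else:
                print(f"FAIL: {case['prompt']!r} -> {output!r}")
        print(f"{passed}/{len(test_cases)} cases passed")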

Optimize Costs and Scale

Deploy your model using techniques such as quantization to reduce costs.


Optimize inference parameters for your use case.
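
As an illustrative sketch of this idea using the Hugging Face stack, the snippet below loads a fine-tuned model in 4-bit to reduce memory (and therefore GPU cost) and sets a few common inference parameters. The model id is a placeholder.

    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "your-org/your-finetuned-model"   # placeholder for your fine-tuned model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # quantize to cut memory and cost
        device_map="auto",
    )

    inputs = tokenizer("Summarize this support ticket: ...", return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,   # cap output length to bound latency and cost
        do_sample=True,
        temperature=0.2,      # low temperature for more consistent answers
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))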

Build a Data Flywheel

Analyze the production impact of your model.


Implement a process to continuously improve results.

Explore Open-Source LLMs

Identify the pros and cons of various open-source LLMs and evaluate how their performance compares to the leading proprietary models.


Analyze the current state of LLMs and where they are likely to go in the future.

This course includes

5 interactive live sessions

Lifetime access to course materials

31 in-depth lessons

Direct access to instructor

7 projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

Week 1

Nov 5—Nov 10

    Welcome! Let's Get Started

    3 items

    Never Fine-Tune Before You Do These Things!

    4 items

    Fine-Tuning Basics

    4 items

    Q&A Session 1

    Tue 11/5, 8:00 PM—9:00 PM (UTC)
    Optional

Week 2

Nov 11—Nov 17

    Train Your Model

    9 items

    Q&A Session 2

    Tue 11/12, 8:00 PM—9:00 PM (UTC)
    Optional

Week 3

Nov 18—Nov 24

    Evaluation Frameworks

    5 items

    Dataset Improvement Strategies

    6 items

    Q&A Session 3

    Tue 11/19, 8:00 PM—9:00 PM (UTC)
    Optional

Week 4

Nov 25—Dec 1

    Deploy Your Model

    6 items

    Q&A Session 4

    Fri 11/29, 8:00 PM—9:00 PM (UTC)
    Optional

Week 5

Dec 2—Dec 3

    Q&A Session 5

    Tue 12/3, 8:00 PM—9:00 PM (UTC)
    Optional

Bonus

    Private Coaching Session

    1 item


Free resource

7 Cups Increases Engagement 250% Through Fine-Tuning

Discover how 7 Cups fine-tuned an LLM to create an AI therapist that is 250% more engaging than leading proprietary LLMs.

Get the free case study

Meet your instructor

Scott Kramer

I've led AI at startups across every stage, from founding engineer through IPO (3 out of 3 successful exits).


After my last exit, I became a solopreneur specializing in helping companies fine-tune LLMs.


This course is the culmination of what I've learned from successfully fine-tuning LLMs for clients.


Fine-Tuning LLMs

Course schedule

4-6 hours per week

  • Self-Paced Lessons

    2 hours per week

    1-2 modules per week with lessons walking you through the step-by-step process of fine-tuning an LLM. Each module includes recorded demo videos, coding examples, and additional resources to help you complete the next project.

  • Course Project

    2 hours per week

    Students will design and implement a fine-tuned model of their choosing. Each week, students will apply strategies from the lessons to further improve their model.

  • Q&A with Scott

    1 hour per week

    Live Q&A calls with Scott and the rest of the cohort to answer questions and receive hands-on help with that week's projects.

  • Private Coaching Session

    1 hour session

    Each student will receive a live coaching session with Scott to review their project, receive personalized feedback, and discuss how to improve their AI skills further.

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

