Fine-tuning Small Language Models: dataset to deployment

Hosted by Hamid Bagheri

80 students

What you'll learn

Define the task and dataset spec, and prepare the training data

Write a dataset specification for a domain task, including prompt templates and a consistent instruction style.
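As a rough illustration, here is a minimal dataset spec in Python: a shared prompt template rendered into JSONL records for the trainer. The field names (instruction, input, output) and the template text are assumptions for this sketch, not a format the course prescribes.

# A minimal sketch of a dataset spec: one shared prompt template,
# rendered per example and written as JSONL for the trainer.
# Field names and template text are illustrative assumptions.
import json
from dataclasses import dataclass

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

@dataclass
class Example:
    instruction: str
    input: str
    output: str

    def to_prompt(self) -> str:
        # Render one training example with the shared template so the
        # instruction style stays consistent across the dataset.
        return PROMPT_TEMPLATE.format(
            instruction=self.instruction, input=self.input, output=self.output
        )

def write_jsonl(examples: list[Example], path: str) -> None:
    # Persist examples as JSONL, one record per line.
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps({"text": ex.to_prompt()}, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    demo = [Example("Summarize the clinical note.", "Patient reports ...", "Summary: ...")]
    write_jsonl(demo, "train.jsonl")

Keeping the template in one place is what makes the instruction style uniform across every record, which matters more than any particular choice of delimiters.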

Run a parameter-efficient fine-tuning job

Fine-tune a 1B to 7B model in Colab, locally, or in a single-GPU environment.
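A minimal LoRA sketch using Hugging Face transformers and peft. The base model name, target modules, and hyperparameters are illustrative assumptions; any 1B to 7B causal LM with compatible attention layer names would work the same way.

# A minimal LoRA setup sketch (assumed model name and hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed example model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# LoRA trains small low-rank adapter matrices instead of all weights,
# which is what makes a Colab or single-GPU run feasible.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; model-specific
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights

From here, a standard transformers Trainer (or trl's SFTTrainer) over the JSONL file from the previous step completes the run; only the adapter weights are updated.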

Evaluate and decide whether the model is shippable

Quick offline eval, regression checks, and a simple acceptance rubric (quality, safety, latency, cost).
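A minimal sketch of an offline acceptance gate, assuming a generate(prompt) callable and a small held-out set of (prompt, required keyword) cases. The keyword check and both thresholds are placeholder stand-ins for the fuller rubric.

# A minimal offline acceptance gate; thresholds and the keyword check
# are assumptions standing in for the full quality/safety/latency/cost rubric.
import time
from typing import Callable

def acceptance_check(
    generate: Callable[[str], str],
    cases: list[tuple[str, str]],      # (prompt, required_keyword) pairs
    min_pass_rate: float = 0.9,        # quality threshold (assumed)
    max_p95_latency_s: float = 2.0,    # latency threshold (assumed)
) -> bool:
    passes, latencies = 0, []
    for prompt, keyword in cases:
        start = time.perf_counter()
        output = generate(prompt)
        latencies.append(time.perf_counter() - start)
        if keyword.lower() in output.lower():
            passes += 1
    latencies.sort()
    p95 = latencies[min(len(latencies) - 1, int(0.95 * len(latencies)))]
    pass_rate = passes / len(cases)
    print(f"pass_rate={pass_rate:.2f} p95_latency={p95:.2f}s")
    # Ship only if both the quality and latency gates pass.
    return pass_rate >= min_pass_rate and p95 <= max_p95_latency_s

Running the same gate on a fixed case set before and after each fine-tune doubles as a regression check: a drop in pass rate flags the new adapter before it ships.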

Deploy as base model plus adapter with a production contract

Stable outputs, an input/output contract, and basic observability.
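A minimal serving sketch under stated assumptions: FastAPI with pydantic models as the input/output contract, peft's PeftModel to attach the adapter to the base model, greedy decoding for stable outputs, and a latency log line as basic observability. The base model name and adapter path are placeholders.

# Base model + adapter behind a typed I/O contract with latency logging.
# Model name, adapter path, and framework choice are illustrative assumptions.
import logging
import time

import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("slm-service")

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base model
ADAPTER_PATH = "./adapter"                          # assumed fine-tuned adapter dir

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_PATH)  # attach the adapter
model.eval()

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 128  # bounded output keeps latency and cost predictable

class GenerateResponse(BaseModel):
    completion: str
    latency_ms: float

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    start = time.perf_counter()
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=req.max_new_tokens,
            do_sample=False,  # greedy decoding for stable, reproducible outputs
        )
    # Decode only the newly generated tokens, not the echoed prompt.
    text = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    latency_ms = (time.perf_counter() - start) * 1000
    log.info("generate latency_ms=%.1f total_tokens=%d", latency_ms, out.shape[1])
    return GenerateResponse(completion=text, latency_ms=latency_ms)

Shipping the adapter separately from the base model keeps deployments small and lets one base serve several domain adapters.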

Why this topic matters

In 2026, enterprise AI is driven by cost, privacy, and control, especially in regulated industries like healthcare and finance. Small language models (SLMs) can be tuned to deliver reliable, domain-specific behavior while keeping data and costs under control. This lightning course shows how to go from a clear dataset spec to a fine-tuned model and a practical path to production.

You'll learn from

Hamid Bagheri

AI Eng Leader, PhD CS | GenAI/LLMs | 20+ yrs software, data science, AI/ML

I am an AI engineering leader with a PhD in Computer Science and 20+ years of experience across software engineering, data science, and generative AI. I’ve built and led teams delivering production LLM systems, agentic platforms, and end-to-end AI products, spanning fine-tuning, RAG, evaluation, and scalable MLOps. I teach from real production experience, emphasizing practical tradeoffs, system design, and responsible AI.



Connect here: LinkedIn

Applied Responsible AI: Substack