Building LLM Applications for Data Scientists and Software Engineers
4 Weeks · Cohort-based Course
Build LLM-powered software reliably & from first principles. Learn the GenAI software development lifecycle: agents, evals, iteration & more
Course overview
If you’re a software engineer or data scientist, chances are you’ve built or seen a proof of concept (POC) for a generative AI app. It’s exciting at first—an impressive demo that showcases the potential of large language models.
But here’s the problem: that’s where it usually stops. These POCs often fail to scale into reliable, production-ready applications. The result? Endless iteration cycles, unreliable outputs, and frustration as teams struggle in what we call “POC purgatory.”
It doesn’t have to be this way.
With the right tools, principles, and processes, you can move beyond POCs and build LLM-powered applications that work—reliably, robustly, and at scale. In this course, we’ll show you how.
You’ll not only learn the technical skills to design, test, and deploy generative AI applications but also develop the product mindset needed to iterate and improve rapidly. This isn’t just about building something cool—it’s about creating systems that deliver real value, solve real problems, and scale with confidence.
What to Expect
This is a highly interactive course designed to get you results. Across live workshops, discussions, and hands-on projects, you’ll build your skills while solving real-world challenges.
Expect:
• Live coding sessions to design and refine LLM applications.
• Iteration exercises to explore how small changes can dramatically improve reliability.
• Discussions on user-centric development and integrating feedback loops.
• Expert-led Q&As to help you tackle your toughest challenges.
What You’ll Learn
• How to transform POC demos into production-grade applications.
• Strategies for managing non-determinism and optimizing outputs through prompt engineering.
• How to monitor, log, and debug AI systems to ensure reliability and performance.
• Best practices for handling structured outputs and integrating function calling into your applications.
• Building workflows for iterative development and experimentation.
• Practical skills through hands-on app development, including creating a Generative AI app to query PDFs.
Bonus Perks
All students will receive $1,000 worth of Modal credits to help power their AI applications, with more API credit announcements coming soon.
Who This Course Is For
This course is designed for:
• Data scientists and machine learning engineers who are sick of unreliable POCs and want to ship reliable LLM applications.
• Software engineers who want to learn how to build generative AI systems and master the LLM software development lifecycle.
Who This Course Is Not For
This course is not for those who are:
• Looking for an introductory course in AI—some programming knowledge is required.
• Expecting ready-made solutions for AI problems without active participation.
• Unable to commit to hands-on projects, live discussions, and iterative development exercises.
Why Take This Course
By the end of this course, you’ll:
• Understand the first principles of generative AI development.
• Gain hands-on experience building robust, scalable applications.
• Learn how to troubleshoot, monitor, and optimize AI systems for production.
• Develop workflows to iterate and refine AI applications effectively.
• Leave with the confidence to deploy reliable AI systems at scale.
Move beyond POCs and build production-ready AI systems
You’ve seen impressive generative AI demos—but how do you take them to production? In this course, you’ll learn the exact workflows, tools, and engineering practices needed to turn concepts into reliable, scalable applications that deliver real value.
Master prompt engineering for reliable results
Struggling with inconsistent outputs? Through interactive exercises, you’ll explore proven techniques for optimizing prompts and ensuring your AI delivers actionable results in production settings.
Learn to debug and monitor AI systems like a pro
When your AI application breaks, you’ll know exactly how to fix it. We’ll walk you through real-world debugging sessions, monitoring techniques, and tools to ensure your systems stay reliable at scale.
Build smarter applications with structured outputs
Want your AI to produce precise, usable results? You’ll learn how to handle structured outputs and function calls, making your applications smarter and more user-friendly.
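As a simplified taste of what this looks like in code, here is a minimal Python sketch of validating a model's structured output with pydantic. The schema, field names, and JSON payload are illustrative assumptions rather than the course's exact stack; in a real application the JSON would come from an LLM's function or tool call.

```python
# A minimal sketch of parsing an LLM's structured output into a typed object.
# Assumes the pydantic library; the JSON string stands in for what a model
# would return as function/tool-call arguments in a real application.
from pydantic import BaseModel, ValidationError


class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str


# In production this string would come from the LLM's tool-call output.
raw_output = '{"vendor": "Acme Corp", "total": 1249.50, "currency": "USD"}'

try:
    invoice = Invoice.model_validate_json(raw_output)
    print(invoice.vendor, invoice.total, invoice.currency)
except ValidationError as err:
    # Malformed output gets caught here instead of crashing downstream code,
    # a common pattern when LLM outputs feed other systems.
    print("Model returned an invalid structure:", err)
```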
Accelerate your development process with iterative workflows
Stop wasting time in endless iteration cycles. Through guided projects, you’ll discover how to set up workflows that refine your applications faster and with less frustration.
Develop hands-on expertise by building GenAI and LLM apps
Tired of theoretical knowledge? You’ll build a fully functional app that uses text and image models to query PDFs, giving you practical experience solving complex real-world problems.
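To make that concrete, here is a minimal Python sketch of the core text-only pattern behind a PDF-querying app: extract the document text, then hand it to an LLM alongside the user's question. It assumes the pypdf library; the file name, prompt wording, and provider are illustrative, and the course project goes further, adding image models and proper retrieval.

```python
# A minimal sketch of the "query a PDF" pattern: extract text, build a prompt.
# Assumes the pypdf library; the file name and prompt are illustrative, and
# the final LLM call is left as a placeholder for whichever provider you use.
from pypdf import PdfReader


def build_pdf_prompt(pdf_path: str, question: str) -> str:
    reader = PdfReader(pdf_path)
    # Concatenate the text of every page. Real apps usually chunk the document
    # and retrieve only the most relevant passages instead of sending it all.
    document_text = "\n".join(page.extract_text() or "" for page in reader.pages)
    return (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document_text}\n\n"
        f"Question: {question}"
    )


prompt = build_pdf_prompt("quarterly_report.pdf", "What was total revenue?")
# Send `prompt` to your LLM provider of choice (OpenAI, Anthropic, a
# Modal-hosted model, etc.) and display the response in your app.
```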
Advance your career with practical, proven techniques
Whether you’re looking to lead AI projects, impress stakeholders, or level up your technical expertise, this course gives you the tools to stand out as an AI/ML practitioner.
Solve Real Problems at Scale
These aren’t just skills for demos—they’re skills for delivering scalable, production-ready systems that solve real-world business challenges.
Showcase Tangible Results
Leave the course with a project you can add to your portfolio, demonstrate to your boss, or use to showcase your ability to deploy reliable AI systems.
• 8 interactive live sessions
• Lifetime access to course materials
• 8 in-depth lessons
• Direct access to instructor
• Projects to apply learnings
• Guided feedback & reflection
• Private community of peers
• Course certificate upon completion
Maven Satisfaction Guarantee
This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.
Building LLM Applications for Data Scientists and Software Engineers
Live session dates: Jan 7, Jan 9, Jan 14, Jan 16, Jan 21, Jan 23, Jan 28, and Jan 30.
Guest speakers: Elijah ben Izzy, Hamel Husain, Richard Savel, Cassie Kozyrkov, and Krystyna Perez.
Hugo Bowne-Anderson
Host of the Vanishing Gradients podcast; ex-Outerbounds; expert in DS/ML.
Hugo Bowne-Anderson is an independent data and AI consultant with extensive experience in the tech industry. He is the host of the industry podcast Vanishing Gradients, where he explores cutting-edge developments in data science and artificial intelligence. As a data scientist, educator, evangelist, content marketer, and strategist, Hugo has worked with leading companies in the field. His past roles include Head of Developer Relations at Outerbounds, a company committed to building infrastructure for machine learning applications, and positions at Coiled and DataCamp, where he focused on scaling data science and online education respectively. Hugo's teaching experience spans institutions like Yale University and Cold Spring Harbor Laboratory as well as conferences such as SciPy, PyCon, and ODSC. He has also worked with organizations like Data Carpentry to promote data literacy. His impact on data science education is significant: he has developed over 30 courses on the DataCamp platform that have reached more than 3 million learners worldwide. Hugo also created and hosted the popular weekly data industry podcast DataFramed for two years. Committed to democratizing data skills and access to data science tools, Hugo advocates for open source software for both individuals and enterprises.
Stefan Krawczyk
Ex-Stitch Fix; 13+ years building and productionizing data, ML, and AI systems.
Stefan Krawczyk is the co-founder and CEO of DAGWorks, an open-source company behind two projects, Hamilton and Burr, whose mission is to empower developers to build reliable AI agents and applications. He is a Y Combinator alum, a StartX alum, and a Stanford graduate with a Master of Science in Computer Science with Distinction in Research. He has over thirteen years of experience building and leading data and ML systems and teams at companies like Stitch Fix, Idibon, Nextdoor, and LinkedIn. His passion is to make others more successful with data by bridging the engineering gap between data science, machine learning, artificial intelligence, and the business.
Join an upcoming cohort
Cohort 1
$800
4-6 hours per week
Mondays and Wednesdays
4:00pm - 6:00pm PST
Live in person sessions. We'll finalize session times by mid-November.
Weekly projects
2 hours per week
To ensure hands-on practical time, there will be project work to complete throughout the course.
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you