Co-Founder & CEO @ AI Makerspace | #unautomatable | TEDx Speaker
Co-Founder @ eve.ai | Econometrics @ Fannie Mae
↔️ Learn end-to-end application deployment from first principles and with best-practice AI Engineering tools
🎧 Vibe-code front ends.
🚢 Deploy back ends, locally and remotely.
🧑‍💻 Start building your GitHub, for real this time.
🐍 Learn the Git and Python functions and commands you really need, no fluff.
🚀 Deploy a public link that you can share with your friends, colleagues, or community...or your boss 🤓
📊 And, of course, do some evals. Or at least, check the vibes.
--
We built this course after working with hundreds of students over the past two years in The AI Engineering Bootcamp.
Software engineering fundamentals are what cause the most problems for people who are new to writing code.
Are you solid on fundamentals already?
Take The AI Engineer Challenge and bypass this course.
--
Online courses are not about content; you can already learn with us for free.
Rather, online learning communities are about accountability, 1:1 support, and feedback from experts. We restrict enrollment to 70 students per cohort, maintaining a 7:1 student-to-peer-supporter ratio. If you've gone as far as you can alone, and you believe that you can go further together, then welcome.
Build 🏗️, ship 🚢, and share 🚀 an end-to-end LLM application with AI-assisted development, vibe-coding, vibe-checking, and leading tools.
Learn the ropes of the best-practice AI Code Editor on the market
Set up your AI Interactive Development Environment like a pro
Grok the difference between vibe-coding and AI-assisted development
Learn Cmd+L, Cmd+K, and how to use the Cursor CLI
Develop your own style of vibe checking
Do simple quantitative evaluations on basic LLM wrapper apps (see the sketch after this list)
Finally commit to learning Git commands
Start your GitHub, using AI-assist to help you get there
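To make the evaluation point above concrete, here is a minimal sketch of what a simple quantitative check on an LLM wrapper can look like. The prompts, expected keywords, and pass criteria are illustrative assumptions, not course materials; it only assumes the openai package and an OPENAI_API_KEY.

```python
# Minimal sketch of a quantitative "vibe check" for a basic LLM wrapper app.
# The test cases and keyword criteria below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

test_cases = [
    {"prompt": "Summarize: The cat sat on the mat.", "must_include": ["cat"]},
    {"prompt": "What is 2 + 2? Answer with a number.", "must_include": ["4"]},
]

passed = 0
for case in test_cases:
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": case["prompt"]}],
    )
    answer = response.choices[0].message.content or ""
    # A pass here simply means every expected keyword shows up in the answer.
    if all(keyword.lower() in answer.lower() for keyword in case["must_include"]):
        passed += 1

print(f"{passed}/{len(test_cases)} checks passed")
```

Even a tiny keyword-based harness like this turns "check the vibes" into a number you can track across prompt and model changes.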
Aspiring AI Engineers who do not yet code every day, but know they want to code every day.
Aspiring AI Engineering Leaders who lead coders and want skin in the game on building, evaluating, and deploying LLM applications.
Potential Aspiring AI Engineers who do not yet code every day, but think they might want to code every day.
Live sessions
Learn directly from "Dr. Greg" Loughnane, Ph.D. & Kat "KatGPT" Gawthorpe, Ph.D. in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
Not just prompt engineering, but also RAG! Learn why this approach makes intuitive sense for Generative AI (see the sketch after this list)
Enhancing search and retrieval is the primary job of agents with access to tools (and MCP servers!) today
How tools like Deep Research offer insights into using parallelization to further scale context optimization
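To ground the RAG bullet above, here is a minimal sketch of the core retrieve-then-generate loop: embed the documents, pick the one closest to the question, and prompt the model with that context. The documents, question, and model names are illustrative assumptions.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one, prompt with it.
# Assumes the openai and numpy packages and an OPENAI_API_KEY; all content below
# is illustrative, not course material.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]
question = "When can I get a refund?"

def embed(texts):
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)
query_vector = embed([question])[0]

# Cosine similarity picks the document closest in meaning to the question.
scores = (doc_vectors @ query_vector) / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
)
context = documents[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```

Agents with tools and MCP servers build on the same idea: the model's job gets easier when the right context is retrieved and placed in front of it.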
Weekly Programming Projects: 1-3 hours per week
Each class period, we will get hands-on with Python coding homework!
Instructor and Peer Support Office Hours: 1-10 hours per week
We typically have 10+ hours per week of office hours that you can attend (1 for each instructor and peer supporter!)
In this repository, we'll walk you through the steps to create an LLM (Large Language Model)-powered application with a vibe-coded frontend!
Are you ready? Let's get started!
🖥️ Accessing "gpt-4.1-mini" (ChatGPT) like a developer
🏗️ Forking & Cloning This Repository
⚙️ Backend Setup with uv
🔥 Setting Up for Vibe Coding Success
😎 Vibe Coding a Front End for the FastAPI Backend
🚀 Deploying Your First LLM-powered Application with Vercel
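Before you dive into those steps, here is a minimal sketch of the kind of LLM-powered FastAPI backend you'll end up vibe-coding a front end for. The route name, request shape, and filename are illustrative assumptions, not this repository's actual code; it assumes fastapi, uvicorn, and openai are installed and OPENAI_API_KEY is set.

```python
# Minimal sketch of an LLM-powered FastAPI backend (illustrative, not this repo's code).
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()

class ChatRequest(BaseModel):
    message: str  # the user's message from the vibe-coded front end

@app.post("/api/chat")
def chat(request: ChatRequest):
    # Forward the message to gpt-4.1-mini and return the reply as JSON.
    response = client.chat.completions.create(
        model="gpt-4.1-mini",
        messages=[{"role": "user", "content": request.message}],
    )
    return {"reply": response.choices[0].message.content}

# Run locally (for example, if this file is app.py): uv run uvicorn app:app --reload
```

From there, a vibe-coded front end only needs to POST a message to that endpoint, and the whole application can be deployed to a public link with Vercel.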
This repo contains the AIE7 Onramp materials — a 6-week live session series originally run as preparation for the AI Engineering Bootcamp (Cohort 7).
We’ve open-sourced these sessions so anyone can use them to build a strong foundation in AI engineering concepts, tools, and practices.
https://github.com/AI-Maker-Space/AIE7-Onramp
*Note: we no longer teach LLM Training, Fine-Tuning, or Alignment; for that material, check out the resource below!
Large Language Model Engineering (LLM Engineering) refers to the emerging best practices and tools for pretraining, post-training, and optimizing LLMs prior to production deployment.
Pre- and post-training techniques include unsupervised pretraining, supervised fine-tuning, alignment, model merging, distillation, quantization, and others.
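As a small concrete anchor for what next-token prediction, the heart of unsupervised pretraining, means in practice, here is a minimal sketch with a small open model. The choice of GPT-2 and the prompt are illustrative assumptions; it assumes the transformers and torch packages.

```python
# Minimal next-token prediction sketch with a small open model (GPT-2 here is
# an illustrative choice). Assumes transformers and torch are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Attention is all you"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The logits at the last position score every vocabulary token as the next token.
next_token_id = int(torch.argmax(logits[0, -1]))
print(tokenizer.decode([next_token_id]))  # the model's single most likely next token
```

Pretraining is essentially this loop run in reverse at scale: the model's scores for the true next token are turned into a loss and minimized over vast amounts of text.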
Course Modules
This course teaches you the fundamentals of LLMs and will quickly ramp you up to the practical LLM Engineering edge. When you complete this course, you will understand how the latest Large and Small Language Models are built, and you'll be ready to build, ship, and share your very own.
Module 1: Transformer: Attention Is All You Need
🤖 The Transformer
🧐 Attention
🔠 Embeddings
🪙 Next-Token Prediction
🔡 Embedding Models
🚇 Pretraining
🚉 Fine-Tuning
🛤️ Alignment
🥪 Model Merging
⚗️ Distillation
https://github.com/AI-Maker-Space/LLM-Engineering-Foundations-to-SLMs-Open-Source