Class is in session
6 Weeks · Cohort-based Course
Build and deploy your own LLM-powered agent from scratch - reasoning, APIs, fine-tuning, and more in this hands-on course.
Course overview
Do you catch yourself asking any of these while building with LLMs?
1. How do I go from using a model to actually building one?
(Do I really need 100 GPUs and a research lab?)
2. What does it really mean to tokenize text—and can I build my own tokenizer?
3. How does attention work under the hood? Can I implement it without black-box magic? (There's a short sketch of this right after this list.)
4. What’s the difference between a transformer and an agent—and how do I turn one into the other?
5. Where does fine-tuning start? What’s the difference between training from scratch, LoRA, and just writing better prompts?
6. How do I get my model to talk to real-world APIs and tools? What’s the right loop for reasoning and acting?
7. If my model starts generating junk, how do I debug it? Gradients? Sampling? Caching?
8. Can I really build a full LLM pipeline - training, optimization, inference, fine-tuning, deployment - on Colab or a laptop?
If you’ve wondered about any of these, this course is for you.
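To give a feel for the level we work at, here is a minimal sketch of scaled dot-product attention (question 3) in plain NumPy. The function name, shapes, and toy inputs are illustrative only and are not taken from the course materials.

# A minimal sketch of scaled dot-product attention in plain NumPy.
# Shapes and names are illustrative, not the course's own code.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays. Returns (seq_len, d_k) outputs."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity of each query to every key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                         # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings
out = scaled_dot_product_attention(x, x, x)   # self-attention
print(out.shape)                              # (4, 8)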
👉 This is a hands-on course for engineers, builders, and curious coders who want to go deeper than drag-and-drop AI. It will guide you through designing, implementing, fine-tuning, and deploying a powerful LLM-powered agent from scratch. You'll go beyond just using prebuilt tools - by building tokenizers, training neural networks, implementing Transformers, and deploying efficient inference engines with your own hands.
With live workshops, projects, and personalized support, you’ll master both the theoretical foundations and the practical engineering needed to bring intelligent agents to life.
By the end of the course, you’ll walk away with a deployable, customized LLM agent that can interact with real-world APIs, reason about its environment, and adapt to new tasks.
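As a small taste of the "build it yourself" approach described above (the tokenizer work, for instance), here is an illustrative sketch of one byte-pair-encoding merge step over a toy corpus; the helper names and the corpus are invented for this example and are not the course's own code.

# A minimal sketch of one BPE merge step over a toy corpus (illustrative only).
from collections import Counter

def most_frequent_pair(words):
    """words: dict mapping a symbol tuple (e.g. ('l','o','w')) to its count."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every adjacent occurrence of `pair` with a single merged symbol."""
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

corpus = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("n", "e", "w"): 3}
pair = most_frequent_pair(corpus)        # ('l', 'o'), seen 7 times
print(pair, merge_pair(corpus, pair))    # merges into ('lo', 'w'), ('lo', 'w', 'e', 'r'), ...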
01. Engineers who want to understand LLMs deeply and build beyond APIs.
02. Researchers who want a practical companion to their theoretical background.
03. Hackers & Indie Builders eager to create agents that reason, plan, and act.
04. Product-minded technologists exploring AI-native interfaces and assistants.
05. Anyone who’s tired of black-box models and wants to build LLMs bottom-up.
This course is beginner-friendly when it comes to LLMs, but not to programming: you should be comfortable writing functions, using libraries, and debugging your own code.
You know what training a model means and understand concepts like loss, accuracy, and gradients.
You enjoy learning by doing and want to go deeper than “just using the API.”
Build a functioning transformer model from scratch without high-level libraries, ready for text generation tasks.
Implement backpropagation and attention mechanisms to deepen your understanding of model internals.
Create and deploy a working API or chatbot interface powered by your own trained language model.
Design and train a custom tokenizer using BPE to handle real-world text inputs.
Apply LoRA to fine-tune a model for a domain-specific task using real data.
Debug and extend your model to handle multimodal inputs or domain-specific tasks.
Generate coherent, creative text outputs using your self-built, fully-trained language model.
Optimize training and inference so your pipeline runs efficiently on Colab or a laptop.
Design and deploy an LLM-powered agent that can reason, act, and respond through real-time API calls or tools.
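To make that last outcome concrete, below is a minimal, hedged sketch of a reason-act loop in Python. Here call_llm and the single fake get_weather tool are hypothetical stand-ins rather than any specific framework's API; in the course you would swap in your own trained model and real tools.

# A minimal sketch of a reason-act agent loop; call_llm and TOOLS are
# hypothetical stand-ins, not part of any specific library taught here.
import json

TOOLS = {
    "get_weather": lambda city: json.dumps({"city": city, "temp_c": 21}),  # fake tool
}

def call_llm(messages):
    # Placeholder: in practice this would be your own trained/fine-tuned model.
    # Here it always decides to call the weather tool once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "get_weather", "input": "Victoria"}
    return {"answer": "It is 21°C in Victoria."}

def run_agent(user_query, max_steps=5):
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        decision = call_llm(messages)                                # reason: pick a tool or answer
        if "answer" in decision:
            return decision["answer"]
        observation = TOOLS[decision["action"]](decision["input"])   # act: call the tool
        messages.append({"role": "tool", "content": observation})    # observe and loop
    return "No answer within step budget."

print(run_agent("What's the weather in Victoria?"))

The loop structure (reason, act, observe, repeat) is the part that carries over to a real agent; everything else in the sketch is scaffolding you would replace.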
Live sessions
Learn directly from Mohamed Elrfaey in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Community of peers
Stay accountable and share insights with like-minded professionals.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
5 live sessions • 21 lessons • 9 projects
Aug 16 • Aug 30 • Sep 6 • Sep 13 • Sep 20
Javier Fornells
Ousmane Ciss
Software Engineer, Amazon Web Services (AWS)
Ahmed Bayaa
Vitaly Kondulukov
A former Amazon engineering leader, AI startup founder, and RecSys researcher
Mohamed is a software and machine learning leader with over 18 years of experience in both startups and large tech companies, including Amazon, Intel, HP, and Orange. His work spans cloud computing, AI systems, and large-scale infrastructure, always with a focus on building useful, real-world technology.
At Amazon, Mohamed led critical engineering efforts across identity, security, and AI domains. His work on authentication and fraud detection systems significantly improved platform reliability and automation, while his contributions to Alexa’s language understanding tools enhanced large-scale language model performance and developer experience across voice-enabled applications.
Mohamed is a hands-on builder with deep research roots. In his earlier work at Intel, he contributed to research on telecom infrastructure and dynamic spectrum sharing, and participated in developing the ETSI LSA RRS standard (Licensed Shared Access / Radio Resource Status), aimed at improving how wireless spectrum is managed. He also holds five U.S. patents.
He holds a bachelor’s degree in electrical and communications engineering, a master’s in computer science, and is currently pursuing a PhD at the University of Victoria (UVic), focusing on recommender systems and AI for personalization and search. His research and engineering background includes projects in semantic search, multi-modal systems, and conversational AI.
His entrepreneurial journey includes founding QtMatter Technologies, where he built Uteena, a high-tech learning platform that reached ~25K subscribers in its first two weeks, and partnered with Intel Europe to innovate in smart grid, agriculture, and smart stadium solutions. Most recently, he led an AI-powered contextual ad-targeting platform and has built platforms for survival analysis, a learning recommender, and Podcast AI (a semantic search and agentic ad-matching platform for podcasts).
Beyond engineering, Mohamed is a big fan of calligraphy art. What began as a hobby evolved into a deep pursuit - he studied the art of calligraphy for six years at a specialized institute alongside his technical education. Today, he continues to practice and produce original calligraphy art in his free time, blending artistic expression with technical creativity.
Known for making complex systems accessible, Mohamed doesn’t treat AI as a black box. He believes the best way to understand these systems is to build them. That mindset is at the heart of Neural Narratives, where readers implement every component - from tokenizers to transformers and agents - learning how language models and autonomous systems work from the ground up.
4-6 hours per week
Tuesdays & Thursdays
1:00pm - 2:00pm EST
Weekly projects
2 hours per week
Active hands-on learning
This course builds on live workshops and hands-on projects
Interactive and project-based
You’ll be interacting with other learners through breakout rooms and project teams
Learn with a cohort of peers
Join a community of like-minded people who want to learn and grow alongside you
Be the first to know about upcoming cohorts