Neural Narratives: From Tokens to Agents

6 Weeks · Cohort-based Course

Build and deploy your own LLM-powered agent from scratch - reasoning, APIs, fine-tuning, and more in this hands-on course.

Previously at

Amazon
Intel
Amazon Web Services
HP
Columbia University

Course overview

Build your own LLM-powered Agent - step by step, from theory to production

Do you catch yourself asking any of these while building with LLMs?


1. How do I go from using a model to actually building one?

(Do I really need 100 GPUs and a research lab?)


2. What does it really mean to tokenize text - and can I build my own tokenizer? (See the BPE sketch just after this list.)


3. How does attention work under the hood? Can I implement it without black-box magic?


4. What’s the difference between a transformer and an agent—and how do I turn one into the other?


5. Where does fine-tuning start? What’s the difference between training from scratch, LoRA, and just writing better prompts?


6. How do I get my model to talk to real-world APIs and tools? What’s the right loop for reasoning and acting?


7. If my model starts generating junk, how do I debug it? Gradients? Sampling? Caching?


8. Can I really build a full LLM pipeline - training, optimization, inference, fine-tuning, and deployment - on Colab or a laptop?

If you’ve wondered about any of these, this course is for you.
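As a taste of the tokenization question above, here is a minimal sketch of the byte-pair-encoding (BPE) merge loop at the heart of modern tokenizers. Function and variable names are illustrative, not the course's actual code:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(tokens, pair, new_token):
    """Replace every occurrence of `pair` with `new_token`."""
    out, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            out.append(new_token)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Start from raw bytes and repeatedly merge the most frequent pair.
tokens = list("low lower lowest".encode("utf-8"))
for new_id in range(256, 260):  # learn 4 merges on top of the 256 byte values
    pair = most_frequent_pair(tokens)
    if pair is None:
        break
    tokens = merge_pair(tokens, pair, new_id)
print(tokens)  # shorter sequence; repeated fragments like "low" get merged
```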


👉 This is a hands-on course for engineers, builders, and curious coders who want to go deeper than drag-and-drop AI. It guides you through designing, implementing, fine-tuning, and deploying a powerful LLM-powered agent from scratch. You'll go beyond prebuilt tools, building tokenizers, training neural networks, implementing Transformers, and deploying efficient inference engines with your own hands.


With live workshops, projects, and personalized support, you’ll master both the theoretical foundations and the practical engineering needed to bring intelligent agents to life.


By the end of the course, you’ll walk away with a deployable, customized LLM agent that can interact with real-world APIs, reason about its environment, and adapt to new tasks.
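To give a flavor of that from-scratch approach, below is a minimal sketch of scaled dot-product self-attention in plain NumPy - the kind of component you will implement yourself. Shapes and names here are illustrative assumptions, not the course's reference implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (seq_len, seq_len) similarities
    mask = np.triu(np.ones_like(scores), k=1)  # causal mask: no peeking ahead
    scores = np.where(mask == 1, -1e9, scores)
    return softmax(scores) @ V                 # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                   # 5 tokens, d_model = 16
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8)
```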

Who is this course for

01

Engineers who want to understand LLMs deeply and build beyond APIs.


02

Researchers who want a practical companion to their theoretical background.

03

Hackers & Indie Builders eager to create agents that reason, plan, and act.


04

Product-minded technologists exploring AI-native interfaces and assistants.


05

Anyone who’s tired of black-box models and wants to build LLMs bottom-up.


Prerequisites

  • Basic Python skills

This course is beginner-friendly for LLMs - but not for programming. You should be comfortable writing functions, using libraries, and debugging your own code.

  • Familiarity with machine learning basics

    You know what training a model means and understand concepts like loss, accuracy, and gradients.

  • Curiosity and willingness to build from scratch

    You enjoy learning by doing and want to go deeper than “just using the API.”

What you’ll get out of this course

Build a functioning transformer model from scratch without high-level libraries, ready for text generation tasks.

Implement backpropagation and attention mechanisms to deepen your understanding of model internals.

Create and deploy a working API or chatbot interface powered by your own trained language model.

Design and train a custom tokenizer using BPE to handle real-world text inputs.

Apply LoRA to fine-tune a model for a domain-specific task using real data.

Debug and extend your model to handle multimodal inputs or domain-specific tasks.

Generate coherent, creative text outputs using your self-built, fully-trained language model.

Optimize training and inference so your models run efficiently on modest hardware like Colab or a laptop.

Design and deploy an LLM-powered agent that can reason, act, and respond through real-time API calls or tools - a loop sketched below.
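As a preview of that final outcome, here is a minimal sketch of the reason-act loop such an agent runs: the model proposes an action, the runtime executes a tool, and the observation is fed back in. The `call_llm` stub and the tool registry are placeholders for illustration, not the course's actual interfaces:

```python
import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def call_llm(transcript):
    # Placeholder policy: act once, then answer from the observation.
    # A real implementation would query your trained model here.
    if any(line.startswith("Observation:") for line in transcript):
        return json.dumps({"final": True,
                           "answer": transcript[-1].removeprefix("Observation: ")})
    return json.dumps({"final": False, "action": "get_weather",
                       "args": {"city": "Toronto"}})

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        step = json.loads(call_llm(transcript))
        if step.get("final"):
            return step["answer"]
        observation = TOOLS[step["action"]](**step["args"])  # execute the tool
        transcript.append(f"Observation: {observation}")
    return "Gave up after max_steps."

print(run_agent("What's the weather in Toronto?"))  # -> "Sunny in Toronto"
```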

What’s included

Live sessions

Learn directly from Mohamed Elrfaey in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Community of peers

Stay accountable and share insights with like-minded professionals.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

5 live sessions • 21 lessons • 9 projects

Week 1

Aug 16—Aug 17

    Foundations of Language Modeling (Bigram LM)

    5 items

    Foundations of Language Modeling (Intro to Tokenization)

    Sat 8/16, 3:00 PM—4:30 PM (UTC)

Week 2

Aug 18—Aug 24

    Build a Tokenizer from Scratch

    5 items

Week 3

Aug 25—Aug 31

    Language Models (Tokenization)

    Sat 8/30, 3:00 PM—4:30 PM (UTC)

    Neural Foundations (Backpropagation & MLPs)

    4 items

Week 4

Sep 1—Sep 7

    Attention and Transformers

    4 items

    Code a Transformer Block Together

    Sat 9/6, 3:00 PM—4:30 PM (UTC)

Week 5

Sep 8—Sep 14

    Fine-Tuning with LoRA + SFT

    4 items

    Efficient Tuning Techniques

    Sat 9/13, 3:00 PM—4:00 PM (UTC)

Week 6

Sep 15—Sep 21

    Agents and Real-World Tools

    4 items

    Building Your First Agent

    Sat 9/20, 3:00 PM—4:30 PM (UTC)

    Deploy It

    4 items

What people are saying

        I’ve been working with Mohamed, and his ability to break down complex topics into clear, actionable steps is outstanding. With him you’re not just learning theory, you’re getting hands-on practical guidance that accelerates your understanding and application. Highly recommended for anyone serious about building with LLMs and AI agents.
Javier Fornells

Software Engineer
        Mohamed is one of the few people who can teach deep AI concepts and make them click.
Ousmane Ciss

Software Engineer | Tech Entrepreneur
        The most hands-on LLM course I’ve seen - built from first principles with clarity. Mohamed’s explanation of backprop finally clicked for me after years of struggling.
Mohamed Madian

Software Engineer, Amazon Web Services (AWS)
        Finally a course that teaches you how LLMs really work - no magic, just solid code.
Ahmed Bayaa

Early Beta Participant, Amazon
        This course helped me filter the fluff and got me straight to the core of agents.
Vitaly Kondulukov

ML Engineer, Toronto

Meet your instructor

Mohamed Elrfaey

A former Amazon engineering leader, AI startup founder, and RecSys researcher

Mohamed is a software and machine learning leader with over 18 years of experience in both startups and large tech companies, including Amazon, Intel, HP, and Orange. His work spans cloud computing, AI systems, and large-scale infrastructure, always with a focus on building useful, real-world technology.


At Amazon, Mohamed led critical engineering efforts across identity, security, and AI domains. His work on authentication and fraud detection systems significantly improved platform reliability and automation, while his contributions to Alexa’s language understanding tools enhanced large-scale language model performance and developer experience across voice-enabled applications.


Mohamed is a hands-on builder with deep research roots. In his earlier work at Intel, he contributed to research on telecom infrastructure and dynamic spectrum sharing, and participated in developing the ETSI LSA RRS standard (Licensed Shared Access, Radio Resource Status), aimed at improving how wireless spectrum is managed. He also holds five U.S. patents.


He holds a bachelor’s degree in electrical and communications engineering, a master’s in computer science, and is currently pursuing a PhD at the University of Victoria (UVic), focusing on recommender systems and AI for personalization and search. His research and engineering background includes projects in semantic search, multi-modal systems, and conversational AI.


His entrepreneurial journey includes founding QtMatter Technologies, where he built Uteena, a high-tech learning platform that reached ~25K subscribers in its first two weeks, and partnered with Intel Europe on smart grid, agriculture, and smart stadium solutions. Most recently, he led an AI-powered contextual ad targeting platform and has built platforms for survival analysis, learning recommenders, and Podcast AI (a semantic search and agentic ad-matching platform for podcasts).


Beyond engineering, Mohamed is a big fan of calligraphy art. What began as a hobby evolved into a deep pursuit - he studied the art of calligraphy for six years at a specialized institute alongside his technical education. Today, he continues to practice and produce original calligraphy art in his free time, blending artistic expression with technical creativity.


Known for making complex systems accessible, Mohamed doesn’t treat AI as a black box. He believes the best way to understand these systems is to build them. That mindset is at the heart of Neural Narratives, where students implement every component - from tokenizers to transformers and agents - learning how language models and autonomous systems work from the ground up.

Course schedule

4-6 hours per week

  • Saturday live sessions

    3:00pm - 4:30pm (UTC); see the syllabus above for exact dates

  • Weekly projects

    2 hours per week

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Be the first to know about upcoming cohorts

Neural Narratives: From Tokens to Agents