LLMs for Everyone

4.5 (6 ratings) · 4 Days · Cohort-based Course

Learn to apply the latest prompting techniques and tools to build use cases and applications with LLMs.

TRUSTED BY

Google
Amazon
Microsoft
LinkedIn
Airbnb

Course overview

Effectively prompting and building with LLMs

OVERVIEW OF THE COURSE

LLMs (Large Language Models) show powerful capabilities, but not knowing how to use them effectively and efficiently often leads to reliability issues and poor performance. Prompt engineering helps you discover model capabilities, improve reliability, reduce failure cases, and save on computing costs when building with LLMs.


This hands-on course expands your prompting skills to effectively use and build with LLMs. It covers the latest prompting techniques (e.g., few-shot, chain-of-thought, RAG, prompt chaining) that you can apply to a variety of complex use cases such as building personalized chatbots, LLM-powered agents, prompt injection detectors, LLM-powered evaluators, and much more.
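For a taste of the techniques covered, here is a minimal sketch of few-shot prompting; the task, example reviews, and labels are illustrative, and the resulting prompt string could be sent to any LLM API.

```python
# Minimal sketch of a few-shot prompt for sentiment classification.
# The reviews and labels below are illustrative only; in practice the
# final prompt string would be sent to an LLM API of your choice.

examples = [
    ("The battery lasts all day, love it!", "positive"),
    ("The screen cracked after a week.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Assemble labeled demonstrations followed by the new input."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Shipping was fast and support was helpful.")
print(prompt)
```

The demonstrations condition the model on the task format, so it completes the final "Sentiment:" line with a label instead of free-form text.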


Topics include:


• Taxonomy of Prompting Techniques

• Tactics to Improve Reliability

• Structuring LLM Outputs

• Zero-shot Prompting

• Few-shot In-Context Learning

• Chain of Thought Prompting

• Self-Reflection & Self-Consistency

• ReAct Prompting Framework

• Retrieval Augmented Generation (RAG)

• Fine-Tuning & RLHF

• Function Calling & Tool Usage

• LLM-Powered Agents

• LLM Evaluation & Judge LLMs

• AI Safety & Moderation Tools

• Adversarial Prompting (Jailbreaking and Prompt Injections)

• Common Real-World Use Cases of LLMs

• Prompt Engineering for models like GPT-3.5/4, Mixtral, Gemini, and others 

... and much more


PREREQUISITES

• We will explore and build with no-code tools.

• No knowledge of programming is required.

• Basic knowledge of LLMs is beneficial but not required.


If you have experience using Python, we recommend our advanced course: https://maven.com/dair-ai/prompt-engineering-llms


ABOUT THE INSTRUCTOR

Elvis, the instructor for this course, has extensive experience researching and building with LLMs and generative AI. He is a co-creator of the Galactica LLM and the author of the popular Prompt Engineering Guide. He has worked with world-class AI teams like Papers with Code, PyTorch, FAIR, Meta AI, Elastic, and many other AI startups.


Reach out to training@dair.ai for any questions, corporate training, and group/student discounts.


WHO THE COURSE HAS HELPED

This course has helped AI startups, freelancers, and professionals at companies like Microsoft, Google, LinkedIn, Amazon, Coinbase, Asana, Airbnb, Intuit, JPMorgan Chase & Co., and many others.

Who is this course for

01

Professionals who want to explore and build with LLMs.

02

Developers who want to improve LLM reliability, efficiency, and performance for their use cases and applications.

03

Leaders who want to lead their teams to build innovative products with LLMs.

What you’ll get out of this course

Design and optimize prompts
  • Learn key elements and tactics for designing effective prompts
  • Design, test, and optimize prompts to improve model performance and reliability for useful and common tasks such as code generation, text summarization, and information extraction
Build a robust framework to effectively apply advanced prompt engineering techniques
  • Review and apply the latest and most advanced prompt engineering techniques (few-shot learning, chain-of-thought, RAG, prompt chaining, self-consistency, self-verification, etc.)
  • Familiarize and build with approaches like ReAct, RAG, and LLM-powered agents.
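As a toy illustration of the RAG approach mentioned above: retrieve the most relevant passage and prepend it to the prompt as context. Real systems use embedding similarity and a vector store; the word-overlap retriever and documents here are only a minimal sketch.

```python
# Toy sketch of retrieval-augmented generation (RAG): pick the document
# sharing the most words with the query, then ground the prompt in it.
# Production systems use embeddings and a vector store instead.

docs = [
    "The course runs over four days with live sessions.",
    "RAG grounds model answers in retrieved documents.",
]

def retrieve(query, docs):
    """Return the document with the largest word overlap with the query."""
    qwords = set(query.lower().split())
    return max(docs, key=lambda d: len(qwords & set(d.lower().split())))

def build_rag_prompt(query, docs):
    """Prepend the retrieved passage as context for the model."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nAnswer using only the context.\nQuestion: {query}"

print(build_rag_prompt("What does RAG do?", docs))
```

Grounding the answer in retrieved text is what lets RAG reduce hallucinations and use knowledge the model was never trained on.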


Develop use cases and build applications
  • Develop use cases such as tagging systems, personalized chatbots, evaluation systems, product review analyzers, and more
  • Build advanced applications that involve combining conversational assistants with external tools and knowledge bases
Perform evaluations for your applications
  • Design a framework for evaluating and measuring the quality, diversity, safety, and robustness of LLMs
  • Compare prompt engineering, RAG, and fine-tuning
  • Cover AI safety topics like prompt injection and moderation tools
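To illustrate the judge-LLM idea, here is a hedged sketch of a grading prompt; the rubric, scale, and template are hypothetical, and the filled-in prompt would be sent to a strong LLM whose numeric reply is parsed as the score.

```python
# Sketch of an "LLM-as-judge" evaluation prompt. The rubric and 1-5
# scale are hypothetical; the filled-in prompt would be sent to a
# strong LLM and its numeric answer parsed as the score.

JUDGE_TEMPLATE = """You are an impartial evaluator.
Rate the RESPONSE to the QUESTION on a 1-5 scale for factual accuracy
and helpfulness. Reply with the number only.

QUESTION: {question}
RESPONSE: {response}
SCORE:"""

def build_judge_prompt(question, response):
    """Fill the template with the question/response pair to be graded."""
    return JUDGE_TEMPLATE.format(question=question, response=response)

p = build_judge_prompt("What is the capital of France?", "Paris.")
print(p)
```

Constraining the judge to "the number only" keeps its output machine-parseable, which is what makes automated evaluation loops practical.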


Learn prompt engineering tools
  • Review the latest prompt engineering and LLM tools such as ChatGPT, LlamaIndex, Comet, LangChain, Flowise, Scale AI's Spellbook, and others
  • Discuss current trends, papers, and future directions in prompt engineering

This course includes

6 interactive live sessions

Lifetime access to course materials

4 in-depth lessons

Direct access to instructor

4 projects to apply learnings

Guided feedback & reflection

Private community of peers

Course certificate upon completion

Maven Satisfaction Guarantee

This course is backed by Maven’s guarantee. You can receive a full refund within 14 days after the course ends, provided you meet the completion criteria in our refund policy.

Course syllabus

  • Week 1

    Jun 10—Jun 13

    Week dates are set to instructor's time zone

    Events

    • LLMs for Everyone - Session 1 · Mon, Jun 10, 4:00 PM - 5:30 PM UTC

    • LLMs for Everyone - Session 2 · Tue, Jun 11, 4:00 PM - 5:30 PM UTC

    • Optional: LLMs for Everyone - Office Hour 1 · Tue, Jun 11, 5:30 PM - 6:00 PM UTC

    • LLMs for Everyone - Session 3 · Wed, Jun 12, 4:00 PM - 5:30 PM UTC

    • Optional: LLMs for Everyone - Office Hour 2 · Wed, Jun 12, 5:30 PM - 6:00 PM UTC

    • LLMs for Everyone - Session 4 · Thu, Jun 13, 4:00 PM - 5:30 PM UTC

    Modules

    • Session 1 - Structuring Effective Prompts

    • Session 2 - Advanced Prompting & Improving LLM Reliability

    • Session 3 - LLM Evaluation & AI Safety

    • Session 4 - Prompt Engineering Tools & Applications


Meet your instructor

Elvis Saravia

Elvis is a co-founder of DAIR.AI, where he leads all AI research, education, and engineering efforts. His primary interests are training and evaluating large language models and developing applications on top of them. He is the co-creator of the Galactica LLM and was a technical product marketing manager at Meta AI where he supported and advised world-class teams like FAIR, PyTorch, and Papers with Code. Prior to this, he was an education architect at Elastic where he developed technical curriculum and courses.


Course schedule

3-5 hours per week
  • Live Sessions

    4 x 1.5 hour sessions

    Live sessions, demos, exercises, and projects

  • Live Office Hours

    1 hour

    Optional office hours to ask questions and receive guidance related to the course topics

  • Bonus Content

    2 hours per week

    Includes additional readings and self-paced tutorials + bonus exercises to practice prompt engineering techniques and tools for different use cases and applications

Learning is better with cohorts

Active hands-on learning

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you

Frequently Asked Questions

What happens if I can’t make a live session?
I work full-time, what is the expected time commitment?
What are the prerequisites for this course?