Experiments in AI-Generated Media

5 Weeks · Cohort-based Course

Join an interdisciplinary research group from the MIT Media Lab to experiment with AI-generated media.

Course overview

Deepfakes for Good

A rigorous five-week course created by researchers at the MIT Media Lab that takes participants through the foundations and applications of AI-generated media.

Who is this course for

01

Engineers, designers, and artists who want to create synthetic media

02

Anyone interested in the Media Lab's interdisciplinary approach to designing technology

03

Technology leaders who need to know how synthetic media might change their industry

What to expect

Technical Foundations

Master the technical and theoretical foundations of deepfake generation: neural networks, GANs, and more

Hands-on Projects

Create and manipulate synthetic characters using a series of programming notebooks prepared for this course

Analysis and Critique

Discuss the key ethical and governance questions raised by AI-generated media

Community

Work with a great cohort that includes participants with different perspectives and backgrounds

What past learners are saying

This jam-packed and engaging course provided me with the foundation and inspiration I needed to pursue creative deepfakes ethically and consciously within my company and for personal projects. The instructors are excellent and the assignments enable students from any technical background to realize their ideas using the latest ML tools.

Emily Salvador

Principal Product Manager, Yahoo
This stellar course offers a hands-on approach to making sense of synthetic media. Through exposure to foundational ML knowledge and creative experiments with cutting-edge tools, instructors and cohort peers challenged me to think more critically about my role in advancing emerging technologies in the broader societal context.

Alex Norton

Product Designer, Google

Be the first to know about upcoming cohorts

Experiments in AI-Generated Media

Meet your instructors

Pattie Maes

Professor, MIT Media Lab

Professor Maes runs the MIT Media Lab’s Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with creativity, memory, attention, learning, decision making, communication, and wellbeing. 

Roy Shilkrot

Roy Shilkrot completed his PhD as a research assistant in the MIT Media Lab's Fluid Interfaces group in 2015 and is currently a research associate there, with interests in augmented reality, human-computer interaction, computer graphics, and computer vision. He is the chief scientist at Tulip, an MIT Media Lab spinoff.

Joanne Leong

Joanne Leong has a keen interest in understanding human perception of the world, and designing technologies that complement us and can bring positive change to our lives—in how we learn, work, play, and connect with one another and the things around us. She is currently a Ph.D. student in the Fluid Interfaces Group at the Lab.


Pat Pataranutaporn

Pat Pataranutaporn is an antidisciplinary technologist/scientist/artist in the Fluid Interfaces research group. Pat’s research is at the intersection of biotechnology and wearable computing, specifically at the interface between biological and digital systems. He is currently a Ph.D. student in the Fluid Interfaces Group at the Lab. 

Course syllabus

01

Creativity

This kick-off session introduces the MIT Media Lab, its unique approach to innovation and experimentation, and the course content and instructors. You will be onboarded onto the Google Colab platform used for the course projects, and we will give an overview of how synthetic media are used in art and industry today.

02

Computation I (Intro to Deep Learning)

Introduction to the tools and techniques underpinning the creation of synthetic media: machine learning, deep learning, and convolutional visual models. 
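As a small taste of the "convolutional visual models" covered in this session, here is a minimal sketch (plain Python, illustrative only, not part of the course notebooks) of the 2D convolution operation that such models apply at every layer:

```python
def conv2d(image, kernel):
    """Slide a kernel over a 2D image (lists of lists) and return the
    valid cross-correlation output, the core operation of a conv layer."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the image patch under the kernel window
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        output.append(row)
    return output

# A 1x2 edge-detecting kernel fires where brightness drops left to right
image = [[1, 1, 0, 0]] * 4   # left half bright, right half dark
kernel = [[1, -1]]
print(conv2d(image, kernel))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Deep networks learn thousands of such kernels automatically instead of hand-designing them, which is the idea this session builds up to.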

03

Computation II (Visual Deepfake Generation)

Learn how to apply your new skills to create AI-generated visuals using Image2Image models, StyleGAN, first-order motion models, and facial generative models.

04

Computation III (Audio Generation + Detection)

Use your growing toolbox to create audio deepfakes and learn how to detect AI-generated media.

05

Critique

Discussion of the potential legal implications of synthetic media and its use. Analysis of the ways in which AI-generated content could threaten our notions of trust and our democracy. We will discuss ethical concerns and potential policy interventions.


Online Learning, Media Lab Style

Active learning, not passive watching

This course builds on live workshops and hands-on projects

Interactive and project-based

You’ll be interacting with other learners through breakout rooms and project teams

Learn with a cohort of peers

Join a community of like-minded people who want to learn and grow alongside you


Stay in the loop

Sign up to be the first to know about course updates.
