Join an interdisciplinary research group from the MIT Media Lab to experiment with AI-generated media.
Professor and Ph.D. research team at the MIT Media Lab
A rigorous five-week course created by researchers at the MIT Media Lab that takes participants through the foundations and applications of AI-generated media.
Engineers, designers, and artists who want to create synthetic media
Anyone interested in the Media Lab's interdisciplinary approach to designing technology
Technology leaders who need to know how synthetic media might change their industry
Master the technical and theoretical foundations of deepfake generation, including neural networks, GANs, and related techniques
Create and manipulate synthetic characters using a series of programming notebooks prepared for this course
Discuss key ethical and governance questions AI-generated media raises
Work with a great cohort that includes participants with different perspectives and backgrounds
Professor Maes runs the MIT Media Lab’s Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested in the topic of cognitive enhancement, or how immersive and wearable systems can actively assist people with creativity, memory, attention, learning, decision making, communication, and wellbeing.
Roy Shilkrot completed his Ph.D. in the MIT Media Lab's Fluid Interfaces group in 2015 and is currently a research associate there, with interests in augmented reality, human-computer interaction, computer graphics, and computer vision. He is the chief scientist at Tulip, an MIT Media Lab spinoff.
Joanne Leong has a keen interest in understanding human perception of the world, and designing technologies that complement us and can bring positive change to our lives—in how we learn, work, play, and connect with one another and the things around us. She is currently a Ph.D. student in the Fluid Interfaces Group at the Lab.
Pat Pataranutaporn is an antidisciplinary technologist/scientist/artist in the Fluid Interfaces research group. Pat’s research is at the intersection of biotechnology and wearable computing, specifically at the interface between biological and digital systems. He is currently a Ph.D. student in the Fluid Interfaces Group at the Lab.
This kick-off session introduces the MIT Media Lab, its unique approach to innovation and experimentation, and the course content and instructors. You will be onboarded onto Google Colab, the platform we use for the course projects, and we will survey how synthetic media are used in art and industry today.
Introduction to the tools and techniques underpinning the creation of synthetic media: machine learning, deep learning, and convolutional visual models.
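The core operation behind convolutional visual models is a small kernel sliding over an image. As a taste of what the course notebooks cover, here is a minimal sketch in plain NumPy (note that deep-learning libraries compute this sliding-window correlation without flipping the kernel):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector applied to a tiny image with a hard edge:
# the output responds only where the dark/bright boundary sits.
image = np.array([
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
])
edge_kernel = np.array([
    [-1., 1.],
    [-1., 1.],
])
print(conv2d(image, edge_kernel))  # peaks (value 2) at the edge column
```

A convolutional layer in a trained model is exactly this, repeated over many learned kernels and stacked in depth.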
Learn how to apply your new skills to create AI-generated visuals using Image2Image models, StyleGAN, first-order models, and facial generative models.
Use your growing toolbox to create audio deepfakes and understand how to detect AI-generated media.
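One family of detection cues discussed in the research literature looks at the frequency spectrum of an image, since some generators leave unusual spectral fingerprints. The toy sketch below (a simplified illustration, not a working detector) measures the fraction of spectral energy above a radial cutoff:

```python
import numpy as np

def high_freq_ratio(image, cutoff=0.25):
    """Fraction of an image's spectral energy above a radial
    frequency cutoff -- one simplified cue from spectral
    deepfake-detection research."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Normalized radial distance from the center of the spectrum.
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return float(spec[r > cutoff].sum() / spec.sum())

# Sanity check: a smooth gradient carries far less high-frequency
# energy than a checkerboard pattern.
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))
checker = np.indices((32, 32)).sum(axis=0) % 2
print(high_freq_ratio(smooth) < high_freq_ratio(checker))
```

Real detectors combine many such cues with learned classifiers; the course covers why any single statistic is easy to fool.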
Discussion of the potential legal implications of synthetic media and its use. Analysis of the different ways in which AI-generated content could pose a threat to our notions of trust and to our democracy. We will discuss ethical concerns and potential policy interventions.
This course builds on live workshops and hands-on projects
You’ll be interacting with other learners through breakout rooms and project teams
Join a community of like-minded people who want to learn and grow alongside you
Sign up to be the first to know about course updates.