2.S972: Making Music in the Metaverse

Fall, 2022

Massachusetts Institute of Technology

Welcome to the syllabus website for Making Music in the Metaverse. In this class, students worked in interdisciplinary teams of peers from Berklee College of Music, Harvard University, and MIT to develop virtual reality experiences. Guest lecturers were experts from both academia and industry. Students learned how to make virtual reality experiences in the game engine Unity, and the semester culminated with demos of new products and services presented to an audience of distinguished guests, investors, and partners.

Teaching Team

Ken Zolot

Ken Zolot is a Senior Lecturer in the Department of Mechanical Engineering at MIT and a Professor of Creative Entrepreneurship at Berklee College of Music. His other work can be found at http://www.mit.edu/people/zolot

Aubrey Simonson

Aubrey is a Unity developer and VR interaction designer. He holds an MS from the MIT Media Lab, where he wrote the thesis An Integrated System for Interaction in Virtual Environments as a member of the Fluid Interfaces Group, and BAs from Wellesley College in Political Science and in Media Arts and Sciences. His other work can be found at aubreysimonson.com. He also made this website :)

Mateo Larrea Ferro

Mateo is an Ecuadorian developer and sound artist, and a recent graduate of Berklee College of Music. His research explores computer code as an expressive medium, procedural audio, extended reality, psychoacoustics, and human-computer interaction. His other work can be found at mateolarreaferro.com

Rob Jaczko

Rob Jaczko is an independent recording engineer and record producer, and Chair of the Music Production and Engineering Department at Berklee College of Music. He is a former staff engineer at A&M Studios in Hollywood, California, where his engineering credits include Aerosmith, Vinnie Colaiuta, Sheryl Crow, Crowded House, and Hall and Oates. He is also the founder of View Works, which specializes in stereoscopic 3D imaging and immersive audio solutions.

Guest Speakers

Rus Gant

Rus Gant is a well-regarded international multimedia artist, XR architect, computer engineer, educator, and visual futurist. He is currently on the research staff at Harvard University in the Department of Earth & Planetary Sciences, where he runs the Visualization Research Laboratory and the Virtual Harvard Project. He was recently a Research Fellow at MIT's Center for Media Studies, and for 10 years he was adjunct faculty at the Institute for Language and Culture at Tokyo's Showa Women's University.

He is currently pursuing work on the future of real-time 3D computer graphics and imaging for virtual production, next-generation virtual reality, augmented reality, AI for visualization, and immersive telepresence for science teaching and research. He served as the lead technical artist for the Giza 3D project at Harvard and the Museum of Fine Arts, reconstructing the pyramids, temples, and tombs of the Egyptian Giza Plateau in virtual reality. He is a past fellow at the MIT Center for Advanced Visual Studies, was director of the Visualization Group of MIT's Project Athena, and was a Fellow at the Center for Creative Inquiry at Carnegie Mellon University. He was the architect of the first digital visualization lab at Polaroid, founded the first multimedia group at International Computers Ltd. in the UK, and created the first visualization centre for the Futures Group of the DTI in London.

For more than 40 years he has applied his visualization skills to work in art, computer science, science education, archaeology and museology for some of the world’s leading museums and universities. For more than 50 years his art practice has been about exploring new media and technology to tell old stories. As a computer hardware and software engineer he has constantly been at the forefront of the science of computer visualization. As a researcher and artist he has created and developed new techniques in 3D visualization, virtual reality and digital museology and archaeology. These techniques have often been applied to scientific research in multiple disciplines including the reconstruction of the art and architecture of ancient cultures.

Reading assigned:

  • None :)

Lori Landay

Lori Landay is a Professor of Cultural Studies at Berklee College of Music. Her creative and critical work explores themes of transformation in audiovisual cultural forms, technology, and perception. She is the author of I Love Lucy (TV Milestones Series) and Madcaps, Screwballs, and Con Women: The Female Trickster in American Culture, as well as numerous publications on topics including Minecraft, LEGO, virtual worlds, virtual subjectivity, digital narrative, silent film, and gender and comedy. She teaches Dream Machine, as well as other courses.

Reading assigned:

Iulian Radu

Iulian Radu is a Principal Research Scientist at the Harvard University Graduate School of Education. His work intersects educational innovation and user-centered design, specifically focusing on AR/VR education, digital fabrication, software/hardware engineering, embodied cognition, child development, co-design, and socio-technological evolution.

Reading assigned:

David Lobser

David Lobser is a 3D animator and VR artist. He taught 3D animation at Harvard University before shifting careers to XR development. He received a BFA from the School of Visual Arts (SVA) in NYC and a Master of Professional Studies (MPS) from the Interactive Telecommunications Program (ITP) at New York University's Tisch School of the Arts. Under Ken Perlin, he worked as a researcher and artist-in-residence at NYU's Future Reality Lab, where he focused on multi-user, shared-space VR experiences. His recent work focuses on therapeutic uses for VR. He co-founded Luxury Escapism, New York's first digital spa, and developed a variety of VR, projection, and physical computing therapies to fill it out, including "Cosmic Sugar" and "Visitations."

Reading assigned:

Sam Chin

Sam Chin is a graduate student in the Responsive Environments Research Group at the MIT Media Lab. Her work is currently focused on sensory augmentation for pilots, and explores the question: How do we learn and use new senses that have no analog in our current five biological senses?

Reading assigned:

Erica Knowles

Erica Knowles is a psychoacoustician and an Associate Professor at Berklee College of Music. She is the lab director of the Berklee Psychology of Music Research Lab, which studies the impact of music training on learning, memory, and auditory processing. Dr. Knowles is interested in how we acquire and understand musical structure. She is particularly interested in 1) how non-musicians learn this information without musical training and 2) how musical training can influence how we use this knowledge.

Reading assigned:

Akito van Troyer

Akito van Troyer is an Assistant Professor of Electronic Production and Design at Berklee College of Music and a Research Affiliate at the MIT Media Lab. His interdisciplinary research focuses on the exploration and development of new musical experiences that enrich people's lives and shape the future of human expression. Akito conducts his research through innovations in musical instrument design, music production, performance, and audience participation. He obtained his Ph.D. from the MIT Media Lab in 2018, designing and building innovative interactive music systems that inspire and guide people in discovering their own musical language. He previously completed a master's degree at the MIT Media Lab in 2012, designing new performance systems that encourage audience participation and augment the experience of audience members through interconnected networks, and earned another master's degree in 2010 from the Georgia Tech Center for Music Technology, building computer-based live performance platforms for laptop orchestras.

Reading assigned:

Pattie Maes

Pattie Maes is a professor in MIT's Program in Media Arts and Sciences. She runs the Media Lab's Fluid Interfaces research group, which works at the intersection of human-computer interaction and artificial intelligence, with a focus on applications in health, wellbeing, and learning. Maes is also a faculty member in MIT's Center for Neurobiological Engineering. She is particularly interested in cognitive enhancement: how wearable, immersive, and brain-computer interface systems can actively assist people with memory, attention, learning, decision making, communication, wellbeing, and sleep.

Reading assigned:

Andrzej Banburski-Fahey

Andrzej Banburski-Fahey is a Principal Researcher at Microsoft Research and a former Postdoctoral Researcher at the Center for Brains, Minds and Machines at MIT, with broad interests in Artificial Intelligence, Neuroscience, Quantum Gravity, and Mixed Reality.

Reading assigned:

Student Projects

Musicraft by EMMM®

Aria Xiying Bao

3D Artist, UX Designer, Unity Developer

Nix Liu Xin

3D Artist

Emmanuel Serrano

Sound Designer

Yinghou Wang

Project Manager, 3D Artist

Davide Zhang

Unity Developer, Hardware Specialist

Musicraft is a VR audio-visual gaming experience and platform where the user creates and remixes music and architecture at the same time. It explores the idea that architecture is frozen music and music is flowing architecture.

In this experience, users generate their own large-scale architecture by freely interacting with blocks, while audio is presented synaesthetically: each interaction choice the user makes leads to different visual and musical output. Each musical rhythm has a corresponding building block, like a tableau, and composing with them produces a dynamic, non-linear, unpredictable, potentially infinite structure. Music, composition, and special effects all come together in an open experience that lets a non-musician, non-3D-builder, or non-visual-artist create their own piece. We look forward to seeing the music and architecture users compose through their imagination and experiments, and to uncovering what's possible in Musicraft's remix of both.
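
The block-to-music mapping described above is a natural fit for Unity's spatial audio. As a minimal sketch, assuming a placement system that tags each block with a type, a component along these lines could turn every placed block into a looping, spatialized audio layer (all names here are illustrative, not the team's actual code):

    using UnityEngine;

    // Illustrative sketch only: turns a placed building block into a
    // looping, spatialized rhythm layer. Field names are hypothetical.
    public class BlockAudioLayer : MonoBehaviour
    {
        public AudioClip[] rhythmLoops; // one loop per block type
        public int blockType;           // set by the placement system

        void Start()
        {
            AudioSource source = gameObject.AddComponent<AudioSource>();
            source.clip = rhythmLoops[blockType % rhythmLoops.Length];
            source.loop = true;
            source.spatialBlend = 1f; // fully 3D: heard where the block stands
            // Raise pitch with height so taller structures change the texture.
            source.pitch = 1f + 0.1f * Mathf.Floor(transform.position.y);
            source.Play();
        }
    }

Because each source is fully spatialized, walking through the structure effectively remixes the piece, which matches the "flowing architecture" idea.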

APK link

Magic

Jessica Boye-Doe

Nia-Simone Egerton

Prim Rattanathumawat

Yubo Zhao

Jose Pescador

For our final project, our group decided to create a sequencer that interacts with the music inside the environment. Along with playing with sounds inside our world, we wanted the user to get a taste of what it is like to travel across the globe. Inside the project, the user is introduced to different environments and music, and is invited to play with the objects they see. Our goal is for the user to create music with the items in front of them. We want the player to leave the experience with a feeling of satisfaction and a curiosity to continue exploring parts of the world and composition. A minimal sketch of how such a sequencer is typically built appears below.
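
For readers curious about the mechanics, step sequencers in Unity are usually driven off the audio clock (AudioSettings.dspTime) so that timing stays musical regardless of frame rate. A minimal sketch, with hypothetical names rather than the team's actual code:

    using UnityEngine;

    // Illustrative 4-voice, 16-step sequencer sketch. The pattern grid
    // would be toggled by the player interacting with objects in the scene.
    public class StepSequencer : MonoBehaviour
    {
        public AudioSource[] voices;              // one source per sound
        public bool[,] pattern = new bool[4, 16]; // which voice plays on which step
        public double bpm = 110.0;

        double nextStepTime;
        int step;

        void Start()
        {
            nextStepTime = AudioSettings.dspTime + 0.1;
        }

        void Update()
        {
            double stepLength = 60.0 / bpm / 4.0; // sixteenth notes
            // Schedule slightly ahead so playback is frame-rate independent.
            while (nextStepTime < AudioSettings.dspTime + 0.2)
            {
                for (int v = 0; v < voices.Length && v < 4; v++)
                    if (pattern[v, step])
                        voices[v].PlayScheduled(nextStepTime);
                step = (step + 1) % 16;
                nextStepTime += stepLength;
            }
        }
    }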

APK link

StageFright

Ilknur Aspir

Unity Implementation, Anxiety Intervention

Lancelot Blanchard

User Study, Programming

Ingrid Chan

Literature Review, Anxiety Intervention, Immersion

Lleyton Elliott

Hardware Specialist / Biometrics, Literature Review

Kevin Huang

Programming, Project Manager, Biometrics

Jenny Jiang

Immersion, Project Manager, Biometrics

Myron Layese

Sound Design, Biometrics, Programming

Per Pintaric

Aesthetics and UI Design, Immersion

StageFright is a mixed reality app that aims to help performing musicians deliver more confident and relaxed performances in person. It provides a VR performance venue for the user to play their instrument in, as well as a waiting room and backstage area, to simulate the full experience of preparing for a performance. The app offers helpful reminders and provides research-based CBT (Cognitive Behavioral Therapy) techniques to help the user relax. These are triggered by biometric sensors that detect changes in the user's sweat level. The user experience is as follows: waiting room → backstage → performance stage → waiting room. This process introduces and assists the musician with 1) getting comfortable rehearsing in the venue and 2) using biometric feedback to help the performer regulate their anxiety levels through visual and auditory guidance.
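
As a rough sketch of the trigger logic described above, assuming a skin-conductance (sweat) sensor: watch the signal against a slow-moving baseline and fire an intervention when it spikes. ReadSensor() is a placeholder for whatever serial or Bluetooth link the hardware actually uses; all names are illustrative.

    using UnityEngine;
    using UnityEngine.Events;

    // Illustrative sketch: fires a calming intervention when the
    // skin-conductance signal spikes above a slow-moving baseline.
    public class AnxietyTrigger : MonoBehaviour
    {
        public UnityEvent onAnxietySpike;   // e.g. show a CBT breathing prompt
        public float spikeThreshold = 1.5f; // multiples of baseline (assumed value)

        float baseline = -1f;
        float cooldownUntil;

        void Update()
        {
            float gsr = ReadSensor();
            if (baseline < 0f) { baseline = gsr; return; }
            baseline = Mathf.Lerp(baseline, gsr, 0.001f); // track baseline slowly
            if (gsr > baseline * spikeThreshold && Time.time > cooldownUntil)
            {
                onAnxietySpike.Invoke();
                cooldownUntil = Time.time + 30f; // don't re-prompt for 30 s
            }
        }

        float ReadSensor()
        {
            // Placeholder: replace with the real biometric sensor read.
            return 1f;
        }
    }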

APK link

Aura

Kenny Lam

3D Level Design and Unity Programming

Mrinmoy Saha

3D Level Design and Unity Programming

Alexander Antaya

Project Manager and Audio Director/Composer

Cynthia Liu

Executive Audio Designer

“Aura” is a meditative, immersive VR experience that is (hopefully) aesthetically pleasing and heart-warming. In the story, the protagonist is a snow spirit who collects aura (mystic particles) to orchestrate an aurora show at night. The scene is an interactive experience in an arctic setting, where the player, as the snow spirit, ingests aura stored in a magic tank. In the end, as the player interacts with the aura coming out of the tank, they orchestrate an aurora show in the night sky with audio and visual elements.

APK link

Tone Soup

Lucy Nester

Lead Unity Developer, Instrument Design, Environment Design

Ari Davids

Sound and Music Design

Josh Kwok

Asset Design and Sound Effects

Peggy Yin

Conceptual/Narrative Design, Unity Developer, Environment Design

In virtual reality, music becomes technology: what we can hear and perceive as music is controlled entirely by what we can electronically synthesize, produce, and generate. This opens up worlds of possibility for the creation of new forms of music making, catalyzed by increased opportunities for radical collaboration, experimentation with neurological and physical laws, and a soundscape that has yet to be defined. Our project, Tone Soup, introduces an intuitive gestural interface to foster this sort of intense musical exchange and experimentation. Through collective user input, our VR experience will help compose the “folk music” of the metaverse from folk narratives of what it means for music—and technology—to be live.
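
One common form such a gestural interface takes is a theremin-style mapping from hand position to pitch and volume. A minimal Unity sketch, assuming a tracked hand or controller transform from whichever XR rig is in use (names are illustrative, not Tone Soup's actual code):

    using UnityEngine;

    // Illustrative sketch: hand height selects a semitone within a
    // two-octave range; lateral distance from center controls volume.
    public class GesturalVoice : MonoBehaviour
    {
        public Transform handAnchor; // tracked controller or hand
        public AudioSource voice;    // a looping tone, e.g. a sine sample

        void Update()
        {
            float height = Mathf.Clamp01(handAnchor.position.y / 2f);
            int semitone = Mathf.RoundToInt(height * 24f); // quantize to semitones
            voice.pitch = Mathf.Pow(2f, semitone / 12f);   // equal-tempered pitch
            voice.volume = Mathf.Clamp01(Mathf.Abs(handAnchor.position.x));
        }
    }

Quantizing to semitones, as here, keeps gestures "in tune"; dropping the quantization gives a continuous, theremin-like glide instead.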

APK link