Welcome to the syllabus website for Making Music in the Metaverse. In this class, students worked in interdisciplinary teams of peers from Berklee College of Music, Harvard University, and MIT to develop virtual reality experiences. Guest lecturers were experts from both academia and industry. Students learned how to build virtual reality experiences in the Unity game engine, and the semester culminated in demos of new products and services presented to an audience of distinguished guests, investors, and partners.
Rus Gant is a well-regarded international multimedia artist, XR architect, computer engineer, educator, and visual futurist. He is currently on the research staff at Harvard University in the Department of Earth & Planetary Sciences, where he runs the Visualization Research Laboratory and the Virtual Harvard Project. He was recently a Research Fellow at MIT's Center for Media Studies, and for 10 years he was adjunct faculty at the Institute for Language and Culture at Tokyo's Showa Women's University.
He is currently pursuing work on the future of real-time 3D computer graphics and imaging for virtual production, next-generation virtual reality, augmented reality, AI for visualization, and immersive telepresence for science teaching and research. He served as the lead technical artist for the Giza 3D project at Harvard and the Museum of Fine Arts, reconstructing the pyramids, temples, and tombs of the Egyptian Giza Plateau in virtual reality. He is a past fellow of the MIT Center for Advanced Visual Studies, was director of the Visualization Group of MIT's Project Athena, and was a fellow at the Center for Creative Inquiry at Carnegie Mellon University. He was the architect of the first digital visualization lab at Polaroid, founded the first Multi-media Group at International Computers Ltd. in the UK, and created the first Visualization Centre for the Futures Group of the DTI in London.
For more than 40 years he has applied his visualization skills to work in art, computer science, science education, archaeology, and museology for some of the world's leading museums and universities. For more than 50 years his art practice has explored new media and technology to tell old stories. As a computer hardware and software engineer, he has constantly been at the forefront of the science of computer visualization. As a researcher and artist, he has created and developed new techniques in 3D visualization, virtual reality, and digital museology and archaeology. These techniques have often been applied to scientific research across disciplines, including the reconstruction of the art and architecture of ancient cultures.
Musicraft is a VR audio-visual gaming experience and platform where the user creates and remixes music and architecture at the same time. It explores the idea that architecture is frozen music and music is flowing architecture.
In this experience, users generate their own large-scale architecture by freely interacting with blocks, while audio is presented synaesthetically: different interaction choices lead to different visual and sonic output. Each rhythm has a corresponding building block, a kind of tableau, and composing them produces a dynamic, non-linear, unpredictable, potentially infinite structure. Music, composition, and special effects all feed into an open experience that lets a non-musician, non-3D builder, or non-visual artist create a piece of their own. We look forward to seeing the music and architecture users compose through constant imagination and experimentation, and to uncovering what's possible in the remix of both in Musicraft.
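As a rough illustration of the block-to-music mapping described above, here is a minimal Unity C# sketch. It is not Musicraft's actual code; the class name, fields, and the position-to-pitch mapping are all assumptions made for the example.

```csharp
using UnityEngine;

// Hypothetical sketch: a placed block contributes one voice to a looping bar.
// Horizontal position decides when in the bar it sounds; height decides pitch.
public class MusicBlock : MonoBehaviour
{
    public AudioClip blockClip;    // sound assigned to this block type
    public float loopLength = 2f;  // bar length in seconds

    private AudioSource source;
    private float nextPlayTime;

    void Start()
    {
        source = gameObject.AddComponent<AudioSource>();
        source.clip = blockClip;

        // Taller stacks sound higher; blocks further along x play later in the bar.
        source.pitch = 1f + transform.position.y * 0.1f;
        float beatOffset = Mathf.Repeat(transform.position.x * 0.25f, loopLength);
        nextPlayTime = Time.time + beatOffset;
    }

    void Update()
    {
        if (Time.time >= nextPlayTime)
        {
            source.Play();
            nextPlayTime += loopLength; // re-trigger once per bar
        }
    }
}
```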
For our final project, our group decided to create a sequencer that interacts with the music inside the environment. Along with playing with sounds inside our world, we wanted our user to get a taste of what it is like to travel across the globe. In our project, the user is introduced to different environments and music, and in turn the player is invited to play with the objects they see. Our goal is for the user to create music with the items in front of them. We want our player to leave the experience with a feeling of satisfaction and a desire to keep exploring both the world and composition.
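At its core, a sequencer like this reduces to a short loop that advances through slots on a fixed clock. The sketch below is a hypothetical Unity C# version, not the group's actual implementation; the class name, step count, and timing choices are assumptions.

```csharp
using UnityEngine;

// Hypothetical 8-step sequencer sketch: each slot holds the sound of a
// scene object the player has activated; a null slot is a rest.
public class WorldSequencer : MonoBehaviour
{
    public AudioClip[] steps = new AudioClip[8]; // one clip per step
    public float bpm = 90f;

    private AudioSource source;
    private int currentStep;
    private double nextStepTime;

    void Start()
    {
        source = gameObject.AddComponent<AudioSource>();
        nextStepTime = AudioSettings.dspTime; // audio clock, steadier than frame time
    }

    void Update()
    {
        double stepLength = 60.0 / bpm / 2.0; // eighth notes
        if (AudioSettings.dspTime >= nextStepTime)
        {
            if (steps[currentStep] != null)
                source.PlayOneShot(steps[currentStep]);
            currentStep = (currentStep + 1) % steps.Length;
            nextStepTime += stepLength;
        }
    }
}
```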
StageFright is a mixed reality app that aims to help performing musicians deliver more confident and relaxed performances in person. It provides a VR performance venue for the user to play their instrument in, as well as a waiting room and backstage area to simulate the full experience of preparing for a performance. The app offers helpful reminders and provides research-based CBT (Cognitive Behavioral Therapy) techniques to help the user relax. These are triggered by biometric sensors that detect changes in the user's sweat level. The user experience is as follows: waiting room → backstage → performance stage → waiting room. This process introduces and assists the musician with 1) getting comfortable rehearsing in the venue and 2) using biometric feedback to regulate their anxiety levels through visual and auditory guidance.
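The core of the biometric trigger can be sketched as a simple baseline-plus-threshold check. The Unity C# fragment below is an illustration under assumptions: ReadGsr(), the trigger ratio, and the prompt are all placeholders, since the actual sensor API and CBT content are not described in detail.

```csharp
using UnityEngine;

// Hypothetical sketch of a sweat-level (galvanic skin response) trigger:
// learn the user's resting level, then fire a calming prompt when readings
// rise sharply above it. ReadGsr() stands in for the real sensor API.
public class AnxietyMonitor : MonoBehaviour
{
    public float baselineSeconds = 30f; // time spent learning the resting level
    public float triggerRatio = 1.3f;   // 30% above baseline fires a prompt

    private float baseline;
    private int samples;

    void Update()
    {
        float gsr = ReadGsr();

        if (Time.time < baselineSeconds)
        {
            // Running average of the resting sweat level.
            samples++;
            baseline += (gsr - baseline) / samples;
        }
        else if (baseline > 0f && gsr > baseline * triggerRatio)
        {
            ShowCalmingPrompt();
        }
    }

    float ReadGsr()
    {
        return 0f; // placeholder: replace with the actual sensor read
    }

    void ShowCalmingPrompt()
    {
        Debug.Log("Breathe in for four counts, out for six.");
    }
}
```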
“Aura” is a meditative, immersive VR experience intended to be aesthetically pleasing and heart-warming. In the story, the protagonist is a snow spirit who collects aura (mystic particles) to orchestrate an aurora show at night. The scene is an interactive arctic setting in which the player, as the snow spirit, ingests aura stored in a magic tank. In the end, as the player ingests and interacts with the aura coming out of the tank, they orchestrate an aurora show in the night sky with audio and visual elements.
In virtual reality, music becomes technology: what we can hear and perceive as music is controlled entirely by what we can electronically synthesize, produce, and generate. This opens up worlds of possibility for the creation of new forms of music making, catalyzed by increased opportunities for radical collaboration, experimentation with neurological and physical laws, and a soundscape that has yet to be defined. Our project, Tone Soup, introduces an intuitive gestural interface to foster this sort of intense musical exchange and experimentation. Through collective user input, our VR experience will help compose the “folk music” of the metaverse from folk narratives of what it means for music—and technology—to be live.