• Sharing a passion for music and interactive technology

    While an undergraduate at MIT, Eran Egozy never took a class that combined his passions for computers and music. That's because when he was an undergraduate in the early '90s, there weren't any.

    Today there are several, and Egozy '95, MEng '95 — who went on to co-found Harmonix Music Systems and launch the hugely successful video games "Guitar Hero" and "Rock Band" — is back on campus teaching one of them: 21M.385 / 6.809 (Interactive Music Systems), the first MIT music class that is also an electrical engineering and computer science class. The upper-level undergraduate course enables MIT students to explore audio synthesis, musical structure, and human-computer interaction. Ultimately, the students produce their own interactive music systems.

    Using interactive technology to deepen music-making and experience

    "I'm interested in ways of using technology to enhance a person's experience in either listening to or making music," says Egozy, who was recently named a professor of the practice in MIT Music, based in the School of Humanities, Arts, and Social Sciences.

    "People often experience music passively, by simply turning on a playlist in the background. But the inner workings of music are incredibly deep, and I believe every person has the capability of understanding and engaging with music in a much deeper way than they do now, even if they have not been formally trained."

    At Harmonix, Egozy and co-founder Alex Rigopulos '92 SM '94 designed "Guitar Hero" to give users the experience of playing an instrument. Later games enabled players to re-create the experience of playing in a rock band or performing as a dancer. One of the pre-eminent game development studios in the world, Harmonix has developed more than a dozen critically acclaimed music-based video games.

    Teaching interactive music systems

    At MIT, Egozy says he hopes to continue researching ways in which computers can help people understand music while also helping students pursue their own passions — because he thinks that's the key to success. "We started Harmonix not because we wanted to make a bunch of money, but because we wanted to continue playing around with interactive music technologies after graduating," he says.

    Although he is now full-time at MIT, Egozy still keeps in close touch with Harmonix and continues to serve on the company’s board of directors. Recently, Egozy had a chance to show his students the Harmonix ethos in action, during a tour of the studio. "It was a really fun day," he says. "The students got to see the inner workings of Harmonix, and see demos of our products, some of which have not yet been released."

    An accomplished clarinetist with the Radius Ensemble, Egozy first developed the Interactive Music Systems class for the spring 2015 term, when he came to MIT as a visiting artist. Eighty undergrads pre-registered for that first class, and Egozy admitted 16 students who gradually moved from simple programming tasks — such as creating a virtual harp that can be played with Kinect motion sensors — to their final project for the class: designing a system that incorporates sounds, graphics, and animation.

    Algorithms for understanding music

    Egozy expected the 2015 class to be a one-off, but he found he enjoyed teaching enormously. So, when he learned that MIT Music had an opening for a professor of music technology, he immediately applied. "This was the first time I had to actually apply for a job," he says. "It's pretty intense."

    Egozy secured the position and officially joined the faculty in the spring 2016 term — when he again offered Interactive Music Systems. Now, he is developing a new undergraduate class focused on the algorithms that enable computers to understand music, and he is delighted to be back at MIT Music — "this hidden gem" that he discovered as an undergraduate.

    Music and the MIT mission

    "As an MIT undergraduate, I did not at all regret coming here, rather than going to a conservatory, which was the other choice I was considering," says Egozy, who received BS and MEng degrees in electrical engineering and computer science with a minor in music performance. "Once you're here, at MIT, you can do whatever you want."

    The key, he says, is passion. "Why take a music class? It exercises the artistic part of your brain, which encourages creativity, and that creativity can certainly be applied to engineering and science," he says. "But I think students should take classes in the humanities and arts simply because they are rich and wonderful subjects. Ultimately, the truly great things that happen in the world happen when people pursue the work they love."

    ______________________________________

    Story prepared by SHASS Communications
    Editorial and Design Director: Emily Hiestand
    Senior Writer: Kathryn O'Neill
    Photograph of MIT student musician: Jon Sachs

    Harmonix co-founder and "Guitar Hero" and "Rock Band" co-creator Eran Egozy '95, MEng '95 returns to MIT as professor of the practice in music technology. His course, Interactive Music Systems, enables MIT students to explore audio synthesis, musical structure, and human-computer interaction.

  • Book explores the "Musical Institute of Technology"

    As many have discovered, MIT’s centers of excellence include the arts as well as the sciences and technology. One great strength of the arts at MIT is the Institute's music program, which welcomes all enrolled MIT students — regardless of major — and includes a conservatory-level track.

    The vast majority of MIT’s incoming students have advanced experience in the arts, most especially in music. For these students, MIT’s combination of a world-class science/engineering education and superb music training is one key to their creativity, success, and well-being — while at MIT and throughout their lives.

    In any given year, nearly half of all undergraduates are engaged with the MIT Music program. Many students earn a major, minor, or concentration in music; after graduation, thousands of MIT alumni continue to perform in regional orchestras and chamber ensembles; and a notable group of MIT trained musicians go on to professional careers as composers, performers, and music scholars.

    A recently published book, "Musical Institute of Technology," digs into the longstanding affinity for music at MIT. Along with insights from students and faculty, the book presents selections from an ongoing series of photographs by Jon Sachs, principal photographer for SHASS Communications.

    "This is a marvelous book,” says Melissa Nobles, the Kenan Sahin Dean of MIT-SHASS. “Using the students’ own words it highlights the enormous value of music education here at MIT. The students who take our music classes are as diverse as the music itself. Yet, they all share a deep enthusiasm and appreciation for the intellectual, emotional, and cultural doors that music opens.”

    The photographs, taken in rehearsals, classrooms, and concerts, feature the 10 performance groups in MIT Music. Accompanying text explores the significance of music in MIT's mission, including ideas about:

    the intersection of music with technology, science, and linguistics;
    why serious music training correlates with outsize success in other fields;
    what accounts for the strong affinity between music and the STEM fields;
    how music teaches collaboration and imaginative risk-taking;
    how playing music develops cognitive powers that help us integrate ideas and help us become more aware of present and future contexts simultaneously; and
    music as a lens on global culture.

    An acclaimed music faculty dedicated to teaching

    Reflecting on what makes MIT Music so successful, composer Peter Child, the Class of 1949 Professor of Music, and head of MIT Music and Theater Arts, points to a top-flight faculty that is 100 percent dedicated to teaching undergraduate students.

    “That’s extremely unusual for a large research university,” Child says.

    The caliber of MIT’s student body is another critical factor: “It’s an extraordinary phenomenon at MIT that an unexpectedly large percent of students are just very, very talented musicians,” Child says. “At the very top level, we have performers and composers who are so good they could thrive in a conservatory. We keep them challenged and enable them to progress to the highest level.”

    A key to creativity, success, and well-being

    In addition to instruction in music history, culture, composition, and theory, MIT Music provides opportunities for individual and ensemble performance. The 10 groups highlighted in "Musical Institute of Technology" include the Festival Jazz Ensemble; Concert Choir; Gamelan Galek Tika, a traditional Balinese orchestra; Vocal Jazz Ensemble; Emerson program; Rambax, a Senegalese drumming ensemble; Chamber Music Society; Wind Ensemble; Chamber Chorus; and the Symphony Orchestra.

    MIT meets the needs of those at music’s top echelons through the Emerson Program, which provides select students with conservatory-level training.

    “The Emerson Program has allowed me to study with one of the premier cello professors in Boston and perform annual solo recitals while pursuing a PhD in oceanography,” says Ellie Bors, one of many students and alumni quoted in "Musical Institute of Technology." “What a gift it has been to continue my musical studies at such a high level.”

    Synergies of music, science, and technology

    MIT offers music at every level, providing unusual opportunities even for novices. For example, at MIT, a newcomer to musicology can study with a scholar who works on the cutting edge of the field, and an enthusiastic but less experienced performer can play alongside fellow students who are heading toward careers in music.

    “I look back on my time in the Vocal Jazz Ensemble as one of the major, defining parts of my MIT experience,” says Ben Bloomberg ’11, who is quoted in the book. “There are very few programs where it is possible to work so closely with such distinguished, prolific, and inspirational faculty.”

    Faculty members point out that unusual synergies also arise from the confluence of great technical minds and extraordinary musical talent.

    “Our students will be the ones to develop new theories of how people interact with technology as art, and art as technology," says Michael Cuthbert, musicologist, associate professor of music, and the creator of the Music21 computer tools.

    “Every semester I witness something that just blows me away," says Evan Ziporyn, noted composer, clarinetist, and the Kenan Sahin Distinguished Professor of Music. "It could be an electrical engineering major who’s improvising jazz piano at a professional level, a classical cellist designing interactive music systems, or a class building gorgeous instruments from scrap parts."

    Of the music/science connection, Andrew Wang '11 says, “My MIT classes in music theory and history transformed my understanding of music — and also deepened my relationship to the sciences.”

    Elena Ruehr, an acclaimed MIT composer, notes that the musical experiences MIT students have also inform their work in other fields: “Studying music teaches discipline, discernment, and problem-solving," she says. "It makes your mind more fluid and gives you the ability to shift perspective, to see the same thing from many angles.” 

    For non-performers, MIT Music also provides the community with dozens of opportunities to hear live music throughout the year. “Music has to be played, witnessed, and heard,” says Institute Professor Marcus Thompson, the Robert R. Taylor Professor of Music and an internationally recognized violist. “We understand that as part of our mission — sharing with listeners.”

    The musical alumni of MIT

    The end result of such offerings is that MIT Music, while not centered on training professionals, nevertheless boasts numerous alumni with successful musical careers — often with interesting technical dimensions.

    For example, Andrew MacPherson, a double major in electrical engineering and music who studied composition at MIT with Peter Child and John Harbison, is a noted composer of electronic music. He teaches at the University of London's Centre for Digital Music, and is the creator of a hybrid acoustic-electronic instrument that augments the traditional grand piano.

    Alex Rigopulos ’92 SM ’94, a former MIT Music major, and Eran Egozy ’95, MEng ’95 joined forces to found Harmonix Music Systems, the company responsible for "Guitar Hero" and "Rock Band," which were among the most successful video games of the 2000s. In "Musical Institute of Technology," Rigopulos sums up what MIT Music meant to him: “MIT’s music program saved me as a person," he says. "I was lucky enough to be in this special environment where I could study science and engineering at a serious level and at the same time pursue music with great intensity. MIT provided an unusual environment where I could explore the intersection of both worlds."

    A new book, "Musical Institute of Technology," digs into the longstanding affinity for music at MIT. Along with insights from students and faculty, the book presents selections from an ongoing series of photographs by Jon Sachs, principal photographer for the MIT School of Humanities, Arts, and Social Sciences.

  • Finding harmony with big data

    If you ever use Spotify, or a similar music-streaming service, there’s a good chance your song recommendations, and other personalized features, are powered by novel technology developed and marketed by two MIT alumni entrepreneurs. Brian Whitman PhD ’05 and Tristan Jehan SM ’01, PhD ’05 are co-founders of Echo Nest, whose technology — based on their MIT research — mines data from millions of songs streaming online. In work sometimes called “the big data of music,” the company has compiled about a trillion data points from 35 million songs by 2.5 million artists. Its music-intelligence platform — recently praised in publications such as Fast Company, Wired, and Business Insider — then translates this data into information for music-app developers, who use it to build smarter, more personalized music apps.

    Now a leader in the music-intelligence industry, Echo Nest has dozens of big-name clients, including MTV, BBC, Rdio, VEVO, Foursquare, Nokia, Sirius XM, Clear Channel’s iHeartRadio, Univision Radio, and Intel. The company also provides third-party developers with access to this data via an application programming interface (API) that has become the technological blueprint for more than 400 apps, including iHeartRadio and eMusic.

    “Early on, we always wanted an API for developers, instead of being this closed company, where only people who paid us could use it,” Whitman says. “The point of that is to see what people can build on top of our data. And there’s been some amazing things.”

    The co-founders say the company’s success is due, in part, to technology that anticipated the growth of today’s booming online-music market — which ushered in a host of music-streaming sites and saw the growth of Internet radio. “When all that technology was rising around us, we were ready,” Whitman says.
    Combining music content and cultural analysis

    The foundations of Echo Nest’s technology trace back to the MIT Media Lab, where the co-founders, then doctoral students, decided to combine their dissertations on music-data mining.

    Jehan’s dissertation, conducted in the Hyperinstruments Group, focused on the “content analysis” of music, extracting data on musical elements such as tempo, key, and time signature. Whitman’s work — conducted under the tutelage of professor emeritus Barry Vercoe — looked at a “cultural analysis” of music, focusing on what different types of people were saying about music online.

    Seeing technological and commercial potential in combining the two projects, the co-founders mixed and tweaked their studies — a content-based and cultural analysis of music — and created what Whitman calls “a big database of what music sounds like to a computer, and what it means to people.”

    Now, when someone uses a music-streaming app that utilizes Echo Nest’s platform to, say, generate a playlist, Whitman says, “the site accesses both parts of the combined technology and says, ‘Here are the songs you should be listening to based on what we know about you and the music.’ At the end of the day, we tell people what music they should hear.”

    Most developer clients use Echo Nest’s data to better understand listeners’ tastes and behaviors and to create smarter music-streaming features, such as song recommendations, playlist generation, taste profiling, acoustic analysis, acoustic fingerprinting (an audio sample used to identify songs), and data feeds. But an additional perk of Echo Nest’s massive database, the co-founders say, is that it can help increase the visibility of rising Internet musicians who may have slipped through the song-recommendation cracks of earlier music-streaming services.
    For instance, MTV’s music-streaming service is using it to help listeners discover artists who may be popular on the Web, but who don’t get radio play.

    “We’re both musicians, and it’s frustrating knowing that an independent artist may not get noticed in music-streaming sites. We wanted to change that,” says Whitman, who recorded as an electronica musician before starting Echo Nest. Jehan is a keyboardist and guitar player who used to play in a Boston-based Brazilian band.

    From scientists to entrepreneurs

    In the company’s early days, the co-founders found support through MIT’s Venture Mentoring Service (VMS) and the MIT Media Lab, which helped turn them from scientists into entrepreneurs. Meeting regularly with business mentors such as Roman Lubynsky, VMS’s senior venture advisor, the two learned the basics of growing a company and were introduced to a variety of contacts, including lawyers, accountants, and investors. “It was a very connected culture,” Jehan says.

    The co-founders say the MIT Media Lab also helped them make their technology accessible to investors — something foreign to some scientists, Jehan says. “Technology is not a product in itself,” he says. “Some people don’t get that. The technology can be artistic, but you have to create artifacts people can grasp. We learned how to make it accessible to investors, or ‘productize’ it.”

    Whitman agrees, adding that the MIT Media Lab helped with patents and other legal issues. The experience taught him how to pitch ideas to the business community — something that helped Echo Nest acquire its initial investors. “As a scientist, being forced to explain your work to someone who’s not a scientist was a valuable lesson,” Whitman says.

    As scientists who freely accessed data for their dissertations, Whitman and Jehan have made sure to pay it forward, making some of Echo Nest’s data and technologies readily available for research purposes.
    In 2011, the company released a million-song dataset to academic institutions, as well as Echoprint, an open-source music-identification system. “We come from the research world, and having access to data was really important,” Whitman says. “So, we’re trying to make sure that stays alive in our world.”
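The combined "content plus cultural" idea described above can be illustrated with a toy similarity computation. Everything here is hypothetical, chosen only to show the shape of the approach (acoustic features fused with a bag-of-words cultural profile), not Echo Nest's actual platform or feature set:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def song_vector(tempo_bpm, key, energy, cultural_terms, vocabulary):
    """Fuse acoustic features with a bag-of-words cultural profile.

    All field names and scalings are made up for illustration: the point
    is combining "what music sounds like to a computer" (tempo, key,
    energy) with "what it means to people" (terms used to describe it).
    """
    acoustic = [tempo_bpm / 200.0, key / 11.0, energy]  # roughly normalized
    cultural = [1.0 if term in cultural_terms else 0.0 for term in vocabulary]
    return acoustic + cultural

vocab = ["indie", "electronic", "mellow", "guitar"]
song_a = song_vector(120, 4, 0.7, {"indie", "mellow"}, vocab)
song_b = song_vector(118, 4, 0.6, {"indie", "guitar"}, vocab)
similarity = cosine_similarity(song_a, song_b)  # higher = more alike
```

A recommender built on such vectors would simply rank a listener's unseen songs by similarity to songs they already like; the interesting engineering is in producing the features at the scale of millions of tracks.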

    Technology developed by two MIT alumni entrepreneurs is helping developers create smarter online music-streaming services.

  • When it comes to fostering innovation, student group says 'Do it!'

    MIT fosters innovation and new ideas, but what if students don’t know what to do with their ideas or don’t understand approaches or methodologies for innovation? That’s where do.it@MIT — the Do Innovation team, a student-run organization focused on fostering innovation — comes in.

    Sneha Kannan, a senior in biological engineering, founded do.it@MIT as a program to encourage innovation by MIT students. “Our goal is to understand innovation,” she said. “We want to get everyone on campus aware of innovation and break down some of the barriers for those who are interested in being a part of it.”

    Since last October, do.it@MIT has been sponsoring dinner discussions with prominent innovators in a variety of fields. Free and open to the entire MIT community, the events have drawn more than 1,000 attendees. The speakers are asked to focus their presentations on innovation, as well as on the importance of learning from failure.

    “Traditionally, we’ve seen that underclassmen are worried about creating because they fear making mistakes,” Kannan said. “We hope to bring to MIT a culture of embracing failure as a necessary step to success.”

    Last year, do.it@MIT welcomed speakers from mechanical, software, and biological engineering, and earlier this year hosted Dan’l Lewin, corporate vice president of Microsoft. This month, the group highlighted a different field when it hosted Fernando Garibay, a longtime collaborator with Lady Gaga and an executive with Interscope Records.

    “We thought Fernando was a departure from the people MIT typically brings to campus, so we jumped at the chance to host him,” Kannan said. “We thought he'd be a great choice because of his dynamic personality, as well as his prominent and fascinating work.”

    Garibay’s presentation focused on how the Internet has changed the way labels approach artists, as well as how consumers look at musicians. “Music has lost its value,” he said.
    “We lost the prestige because of how accessible our artists now are.”

    Garibay also talked about how labels are working to create a 360-degree management model that embraces this new Internet culture — using the Internet as a tool to reach more people in a variety of ways, from streaming services such as Spotify and Pandora to smartphone apps working in tandem with albums.

    Nearly 150 students, representing a variety of majors, attended the event, which included a question-and-answer session at the end.

    “I thought it was very fascinating to get a glimpse into an industry that MIT students rarely hear about,” said Christina Qi, a senior studying management who attended the event. “Learning about the music-making process was eye-opening in that it's much tougher than one would expect. The event made me consider the changing relationship between music and technology in a new way.”

    do.it@MIT is continuing to host dinner discussions through the rest of the year. Its next event, on Nov. 30, will feature Pranav Mistry of the MIT Media Lab, who developed SixthSense, a wearable gestural interface that augments the user’s physical environment with digital information.

    Kannan said she hopes to continue expanding do.it@MIT’s programs so that more students can understand innovation in a variety of fields, as well as the difficulties innovators have faced and overcome. “So many of the brilliant people who come to MIT to talk only talk about the successes,” she said. “I think it can be more worthwhile for them to talk about their mistakes, mostly because I find those lessons far more valuable.”

    Lady Gaga collaborator and Interscope executive highlights do.it@MIT’s wide-ranging approach.

  • Composing for loudspeakers: computer music pioneer John Chowning visits MIT

    In 1967, late one night in the eucalyptus-scented hills of Palo Alto, John Chowning stumbled across what would become one of the most profound developments in computer music. “It was a discovery of the ear,” says Chowning, who gave a lecture and concert on Oct. 11 sponsored by the Media Lab and the MIT Center for Art, Science & Technology (CAST). While experimenting with extreme vibrato in Stanford’s Artificial Intelligence Lab, he found that once the frequency passed out of the range of human perception — far beyond what any cellist or opera singer could ever dream of producing — the vibrato effect disappeared and a completely new tone materialized.

    What Chowning discovered was FM synthesis: a simple yet elegant way of manipulating a basic waveform to produce a potpourri of new and complex sounds — from sci-fi warbles to metallic beats. Frequency modulation (FM) synthesis works, in essence, by using one sound to control the frequency of another sound; the relationship of these two sounds determines whether or not the result will be harmonic. Chowning's classically trained ear had sounded out a phenomenon whose mathematical rationale was subsequently confirmed by his colleagues in physics, and would populate the aural landscape with the kind of cyborg sounds that gave the 1980s its musical identity.
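The principle described above fits in one formula: the output is y(t) = A sin(2π f_c t + I sin(2π f_m t)), where the modulator deviates the carrier's phase and the modulation index I controls how rich the sideband spectrum becomes. A minimal sketch in Python (function name, parameters, and defaults are illustrative, not Chowning's original implementation):

```python
import math

def fm_synthesize(duration_s, carrier_hz, modulator_hz, mod_index,
                  sample_rate=44100, amplitude=0.8):
    """Generate samples of a simple FM-synthesized tone.

    The modulator's output is added to the carrier's phase, so the
    carrier's instantaneous frequency wobbles around carrier_hz. The
    ratio of carrier to modulator frequency determines whether the
    resulting sidebands are harmonic; mod_index controls brightness.
    """
    n_samples = int(duration_s * sample_rate)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        # Phase modulation: carrier phase deviated by the modulator.
        phase = (2 * math.pi * carrier_hz * t
                 + mod_index * math.sin(2 * math.pi * modulator_hz * t))
        samples.append(amplitude * math.sin(phase))
    return samples

# A 2:1 carrier-to-modulator ratio yields harmonic overtones;
# non-integer ratios give the metallic, inharmonic timbres.
tone = fm_synthesize(0.1, carrier_hz=440.0, modulator_hz=220.0, mod_index=3.0)
```

With mod_index set to 0 the function degenerates to a plain sine wave, which is exactly why the effect was invisible until Chowning pushed the vibrato rate past audible limits.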

    Chowning patented his invention and licensed it to a little-known Japanese company called Yamaha when no American manufacturers were interested. While the existing synthesizers on the market cost about as much as a car, Yamaha had developed an effective yet inexpensive product. In 1983, Yamaha released the DX-7, based on Chowning's FM synthesis algorithm — and the rest is history. The patent would become one of Stanford's most lucrative, surpassed only by the technology for gene-splicing and an upstart called Google.

    With its user-friendly interface, the DX-7 gave musicians an entrée into the world of programmers, opening up a new palette of possibility. Part of a rising tide of technological developments — such as the introduction of personal computers and the musical lingua franca MIDI — FM synthesis helped deliver digital music from the laboratory to the masses.

    The early dream of computer music

    The prelude to Chowning's work was the research of scientists such as Jean-Claude Risset and Max Mathews at AT&T's Bell Telephone Laboratories in the 1950s and '60s. These men were the early anatomists of sound, seeking to uncover the inner workings of its structure and perception. At the heart of these investigations was a simple dream: that any kind of sound in the world could be created out of 1s and 0s, the new utopian language of code. Music, for the first time, would be freed from the constraints of actual instruments.

    As Mathews wrote in the liner notes of Music from Mathematics, the first recording of computer music, "the musical universe is now circumscribed only by man's perceptions and creativity."

    "That generation," says Tod Machover, the Muriel R. Cooper Professor of Music and Media at the MIT Media Lab, "was the first to look at the computer as a medium on its own." But both the unwieldy, expensive equipment and the clumsiness of the resulting sounds — two problems that Chowning helped surmount — inhibited these early efforts (by Chowning's calculations, as he noted in his lecture, the Lab's bulky IBM 7090 would be worth approximately nine cents today). By the mid-1960s, however, the research had progressed to the point where scientists could begin to sculpt the mechanical bleeps and bloops into something of musical value.

    Frequency modulation played a big part. Manipulating the frequency unlocked the secrets of timbre, that most mysterious of sonic qualities. In reproducing timbre — the distinctive soul of a note — Chowning was like a puppeteer bringing his marionette to life. The effects of FM synthesis conveyed "a very human kind of irregularity," Machover says.

    The future of music

    Today, the various — and often unexpected — applications of FM synthesis are omnipresent, integrated so completely into everyday life that we often take them for granted: a ringing cellphone, for instance. Yet while digital technologies became more and more pervasive, Chowning's hearing began to worsen and he slowly withdrew from the field. For a composer whose work engaged the most subtle and granular of sonorities, this hearing loss was devastating.

    Now, thanks to a new hearing aid, Chowning is back on the scene. The event at MIT on Thursday — titled "Sound Synthesis and Perception: Composing from the Inside Out" — marked the East Coast premiere of his new piece Voices featuring his wife, the soprano Maureen Chowning, and an interactive computer using the programming language MaxMSP. Chowning sees the piece as a kind of rebuttal to those who once doubted the "anachronistic humanists" who feared the numbing encroachments of the computer. In Voices, he says, the "seemingly inhuman machine is being used to accompany the most human of all instruments, the singing voice." The piece also sums up a lifetime of Chowning's musical preoccupations, his innovations in our understanding of sound and its perception, and the far-reaching aesthetic possibilities in the dialogues between man and machine.

    At MIT, Chowning enjoyed meeting the next generation of scientists, programmers and composers, glimpsing into the future of music. "The machinery is no longer the limit," he announced to the crowd. Indeed, MIT has its own rich history of innovation in the field, as embodied by figures such as Professor Emeritus Barry Vercoe, who pioneered the creation of synthetic music at the Experimental Music Studio in the 1970s before going on to head the Media Lab's Music, Mind, and Machine group. "MIT is in many ways a unique institution," Chowning says, where "cutting-edge technology interacts with highly developed artistic sensibilities." In the Media Lab, Chowning saw the dreams of his generation pushed forward. One thing, in his mind, is clear: "music has humanized the computer."

    The inventor of FM synthesis, Chowning revolutionized the music industry and glimpsed the future of music at the Institute.

  • Video: The Paradiso Synthesizer

    Video: Lucy Lindsey and Melanie Gonick

    In 1973, Media Lab associate professor Joe Paradiso was an undergraduate at Tufts University, and didn’t know anyone who had built an analog music synthesizer, or “synth,” from scratch. It was a time, he says, when information and parts for do-it-yourself projects were scarce, and digital synthesizer production was on the rise. But he decided to tackle the project — without any formal training — and sought out advice from local college professors, including his now-colleague in the Media Lab, Barry Vercoe.

    Paradiso gathered information from manufacturers’ data sheets and hobbyist magazines he found in public libraries. He taught himself basic electronics, scrounged for parts from surplus stores, and spent a decade and a half building modules and hacking consumer keyboards to create the synth, which he completed in the 1980s.

    That synthesizer, probably the world’s largest with more than 125 modules, is now on display in the MIT Museum. Every few weeks, Paradiso changes the complex configurations of wires connecting the synthesizer’s modules, called “patches,” to create a new sonic environment. The synthesizer streams live online 24 hours a day at http://synth.media.mit.edu; starting this week, visitors to the synthesizer’s website can even change the patch parameters online.

    Learn more about Paradiso’s synthesizer

    Media Lab associate professor’s massive modular synthesizer now on exhibit in the MIT Museum.

  • Disembodied performance

    Later this month, the Opera of the Future Group at the MIT Media Lab will premiere Death and the Powers, an opera more than 10 years in the making. Featuring life-sized singing robots and a musical chandelier, the opera could redefine how technology can enhance live performance and help reestablish opera’s spirit of innovation.

    Created by composer and MIT Media Lab Professor Tod Machover, who has designed customized instruments for musicians like Yo-Yo Ma and Prince, the one-act opera will premiere Sept. 24-26 in Monaco (the city-state’s ruler, Prince Albert II, is the honorary patron of the project and will attend the gala opening). More than 60 students and collaborators are traveling with Machover to help stage the complex production.

    Video: Tod Machover and Dan Ellsey play new music at TED

    With Death and the Powers, Machover seeks to expand the traditional definition of opera through the use of technology — but in a way that enhances the human presence on stage and therefore strengthens the bond between audience and performers. “In theater, technology has consistently pulled music in the wrong direction,” says Machover. Recalling a Taylor Swift concert he recently attended with his teenage daughters, Machover bristles at the way in which “gigantic mega-screens and boom-box-like audio systems” have come to overshadow human performers, creating an experience that “forces rather than entices.”

    For this project, Machover and his team attempted to use technology to bring the stage to life, almost as another character: Death and the Powers features an animated set and nine singing “OperaBots” that serve as the chorus and frame the narrative.

    Creating “The System”

    The opera tells the story of Simon Powers, a successful inventor who wants to ensure his legacy. To do so, he constructs “The System,” which makes it possible to download his memories and personality into the physical environment. As soon as Powers enters The System and disappears from the stage at the end of the first scene, the stage takes on his persona.
His character expresses himself through giant bookcases with thousands of lights that move to the rhythm of the music, as well as a sinuous, light-emitting musical chandelier with resonant Teflon strings that can channel Simon’s presence while being strummed by his wife, Evvy.

    By capturing the essence of a performer whom the audience can’t see, Death and the Powers creates what Machover calls a “disembodied performance.” This is done using software that Peter Torpey and Elly Jessop, two PhD students in Machover’s Opera of the Future Group, developed to measure aspects of a singer’s performance that the singer is likely aware of, including volume and pitch, as well as those he or she may not be monitoring, including muscle tension and breathing patterns. These conscious and unconscious elements then become part of the look and feel of “The System,” whether it’s through the movement of walls and chandeliers, pulsating lights or specially designed sounds.
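The idea behind the software described above — conscious signals (volume, pitch) and unconscious ones (breathing, muscle tension) translated into the look and feel of “The System” — can be pictured as a simple feature-to-parameter mapping. The sketch below is a hypothetical illustration only: the feature ranges and stage parameters are assumptions, not the actual Torpey/Jessop software.

```python
# Hypothetical "disembodied performance" mapping: measured features of a
# hidden singer's voice drive visual parameters on stage. All feature
# names and ranges here are illustrative assumptions.

def normalize(value, lo, hi):
    """Clamp a raw measurement into the 0..1 control range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def map_performance(volume_db, pitch_hz, breath_rate, muscle_tension):
    """Translate conscious (volume, pitch) and unconscious (breathing,
    tension) signals into stage-control parameters."""
    return {
        "light_intensity": normalize(volume_db, 40.0, 100.0),
        "wall_motion_speed": normalize(pitch_hz, 80.0, 800.0),
        "pulse_rate": normalize(breath_rate, 8.0, 30.0),
        "chandelier_shimmer": normalize(muscle_tension, 0.0, 1.0),
    }

# One hypothetical moment of a performance: a mezzo-forte A4 with
# moderate breathing and tension.
params = map_performance(volume_db=72.0, pitch_hz=440.0,
                         breath_rate=16.0, muscle_tension=0.4)
print(params)
```

In a real system the mapping would of course be continuous and far more nuanced; the point is only that each measured channel of the singer becomes a control signal for some element of the set.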

    MIT Media Lab Professor Tod Machover discusses his robotic opera, Death and the Powers. Video: Paula Aguilera/Jonathan Williams/Nobuyuki Ueda/Yolanda Spínola Elías; additional footage/stills: Melanie Gonick

    This creative fusion of music and technology could reposition opera as an art form that embraces innovation, says Marc Scorca, president and CEO of Opera America, a nonprofit that serves U.S. opera companies. He notes that for hundreds of years, opera was known for welcoming innovation through new technologies and instrumentation. But that role was usurped in the late 19th century when film emerged as the most innovative art form; opera appeared staid in comparison.

    “I’m always cheering when I see opera once again reasserting itself as the richest tapestry for innovative, live art,” Scorca says.

    Not only does Scorca consider Death and the Powers to be groundbreaking because it tests the “definitional boundaries” of opera, but he also notes how rare it is for an opera to be conceived and produced outside the framework of a traditional opera company. The fact that Machover’s group at the Media Lab produced Death and the Powers “shows opera’s potent viability as a medium that has creative potential for anyone who is innovating in interdisciplinary art,” he says.

    While Scorca hopes that the use of technology in Death and the Powers will inspire other operas, Machover cautions that it will be some time before the opera’s influence is clear — either within the world of opera or beyond. He notes that many of his larger endeavors have had unexpected results, such as his audience-interactive Brain Opera, which yielded many of the technologies behind the Guitar Hero video game. Although Machover believes that techniques like disembodied performance will influence how emotions are captured and communicated in performances, he thinks that a major impact of Death and the Powers will be through its story and music.
“‘Powers’ is packed with vivid melodies, quirky rhythms and pungent textures that I hope might stick in the ear, stir the imagination and resonate in unexpected ways,” he says.

    Tod Machover’s Death and the Powers, which features robots as performers, premieres this month. Is this the future of opera?

  • 3 Questions: Evan Ziporyn on his new opera

    While a master of many forms of music, Evan Ziporyn has a particular affinity for the sounds of Bali. Ziporyn, the Kenan Sahin Distinguished Professor of Music, has been involved with Balinese gamelan - a kind of percussion orchestral music - since his 1981 Murray Fellowship from Yale University. In 1993, three years after coming to MIT as an assistant professor, he founded Gamelan Galak Tika, a popular performing group. Yet Ziporyn, a composer and musician, writes and performs music ranging from classical and hard rock to alternative and ensemble music. He is a composer and soloist with the Bang on a Can All-Stars, a high-energy chamber ensemble, and has toured as a saxophonist with Paul Simon.

    Ziporyn now adds another credit to his lengthy resume: an opera based on a true-life story that combines Balinese and Western musical forms. "A House in Bali," based on a 1930s memoir of the same title, traces the roots of the West's century-long infatuation with Bali through the true story of three Westerners - composer Colin McPhee, anthropologist Margaret Mead and artist Walter Spies - during their 1930s sojourn in Bali. Ziporyn composed the music, which will be performed by the Bang on a Can All-Stars and a Balinese gamelan directed by Dewa Ketut Alit, with choreography by Kadek Dewi Aryani. Marc Molomot, Anne Harley and Timur Bekbosunov will perform the roles of McPhee, Mead and Spies, respectively.

    "A House in Bali" (www.houseinbali.org) will premiere on June 26-27 in the Puri Saraswati, a part of the palace complex in the village of Ubud, Bali; it will be performed on Sept. 26-27 in the Zellerbach Auditorium at the University of California, Berkeley. The MIT News Office caught up with Ziporyn by e-mail while he was in Bali preparing for the production.

    Q: Why did you choose to create an opera as opposed to another kind of musical form?
    A: The word "opera" just means "works" - and to me it means total theater, a combination of all the performing arts - music, theater, dance, lights and costumes, etc. This is what it meant to Wagner as well, but that doesn't mean all opera has to sound like Wagner - not that there's anything wrong with that. I am in fact working with three opera singers (mainly from Baroque opera, as I prefer those vocal qualities), and the piece does tell a story, but I'm also working with three traditional Balinese singers, whose voices and mannerisms have nothing to do with western opera or, for that matter, western music.

    Q: What is your source material? What will the music be like?

    A: "A House in Bali" is based on a memoir of the same name by the first Western composer to travel to Bali, Colin McPhee. McPhee heard the first recordings of Balinese gamelan in 1928 and immediately went there to study and document the music. He did a tremendous job of it: his book and transcriptions are still considered the definitive source on the music of the period, both by Westerners and the Balinese. And he loved Bali. He came back to America as WWII loomed and, sadly, never got his life back on track. His own music was never the same, and he died without ever finding a way to return. He's a very important figure to me, both a model and a warning, and his story is truly tragic: unrequited love, but the object of affection is a culture, rather than a person.

    As with the singers, the instrumental music brings together two worlds. My own ensemble, Bang on a Can, is involved, and our instrumentation is basically a rock band with strings. Beyond that, I have a full Balinese gamelan - 16 musicians from the Ubud area - and the combination of these disparate sounds mirrors and frames the story. They come together and drift apart, mesh and clash - just as they do in my own imagination and, I think, as they did in McPhee's.

    Q: Is there anything that is particularly characteristic of MIT in the opera?
    A: Only that it's sui generis, or "in a class of its own." There have been other operas that have employed Balinese music for color or exoticism, but I don't know of any piece that has interwoven these two cultures so extensively. There may be a reason for this; we'll find out.

  • Music at MIT hitting all the right notes

    Later this month, the MIT community will celebrate the 70th birthday of one of America's most prominent and prolific composers with a special tribute concert and symposium.

    That the individual in question, Pulitzer Prize-winning musician John Harbison, has been a member of the MIT faculty for four decades may come as a surprise to many in the outside world who tend to equate the Institute with white coats, computer algorithms, rocket science, quantum physics and cutting-edge efforts to cure cancer and solve the energy crisis.

    Harbison and the music scene at MIT are among the Institute's best-kept secrets, but they shouldn't be. Scientists and engineers have often been avid musicians — think of Albert Einstein and his violin or physicist Richard Feynman and his drums. The fact is, music at MIT plays a cathartic role in campus life and displays many of the bold characteristics — innovation, ingenuity, excellence and creativity — that lie at the heart of the MIT culture.

    "Music at MIT is superb — and John is emblematic of that quality," says Associate Provost and Ford International Professor of History Philip S. Khoury, who has known Harbison for nearly 30 years. "He is one of the world's most distinguished and musically versatile composers, and he has always been completely devoted to teaching and, as he would say, learning from our remarkably talented students."

    An artist known for lucidity and logic in his compositions and performances, Harbison is equally adept at opera, choral music and jazz. His Pulitzer Prize came in 1987 for his choral work "The Flight into Egypt," with text from the Gospel of Matthew.
Two years later, he was awarded a MacArthur "genius" grant for his work, and in 1995 he became an Institute Professor — the highest honor MIT can bestow on a member of the faculty.

    Harbison, who is currently working on music inspired by Alice Munro's short stories, says MIT students bring with them the right ingredients for studying, composing and performing music: high intelligence, logical thinking, interest in structure and a curiosity about how things are made. In true MIT spirit, he tells students to break new ground and take risks. "Go out and write things that your teacher won't necessarily approve of," he advises.

    Music on the Mind

    Whether it's tinkering with music-editing software, performing in one of MIT's eight professionally led music groups or making brain waves audible, music at MIT can mean many things. In the Department of Brain and Cognitive Sciences, associate professor Pawan Sinha and graduate students are working on ways to create music and art from brainwaves. Intrigued by the possibility of understanding how minds extract meaning from sounds, Sinha has charted the electroencephalographic (EEG) response of brain neurons to tone sequences. Using a form of video-gaming headset that picks up these brain signals and associating them with specific sounds, Sinha eventually hopes to allow an individual to "perform" in an orchestra simply by thinking.

    Sinha is also designing a "Your Brain on Music" program in which a person would watch a shifting electronic projection of EEG signals that reflects his or her brain's response to a piece of music. And, in what is perhaps his most ambitious project, Sinha hopes to design a "Brain Jukebox" that would let listeners hear music through the transformational lens of another person's brain.

    Sinha's research is in line with MIT's emphasis on interdisciplinary collaborations — and he is not alone in melding music with basic or applied research.
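The general idea of associating brain signals with specific sounds can be illustrated, very loosely, as estimating the power of an EEG recording in a frequency band and mapping that power to a pitch. Everything in the sketch below — the band edges, the mapping and the synthetic signal — is an illustrative assumption, not Sinha's actual system.

```python
# Toy illustration of "brainwaves to music": estimate band power of a
# sampled signal with a plain discrete Fourier transform, then map the
# power to a playable frequency. Band limits and mapping are assumptions.
import math

def band_power(samples, sample_rate, lo_hz, hi_hz):
    """Sum spectral power over DFT bins that fall inside [lo_hz, hi_hz]."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * sample_rate / n
        if lo_hz <= freq <= hi_hz:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def power_to_pitch(p, base_hz=220.0, span_hz=440.0, scale=64.0):
    """Map a band-power estimate onto a pitch between base and base+span."""
    return base_hz + span_hz * min(1.0, p / scale)

# Fake "EEG": a pure 10 Hz (alpha-band) sine sampled at 128 Hz for 1 s.
fs = 128
signal = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(signal, fs, 8.0, 12.0)
print(round(power_to_pitch(alpha), 1))  # strong alpha -> a higher pitch
```

A real EEG pipeline would use proper spectral estimation and artifact rejection; this only shows the shape of the signal-to-sound chain.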
Elaine Chew SM '98, PhD '00, an engineer and pianist who has designed algorithms for real-time analysis of music compositions based on mathematical models, and used them in her performances and lecture-concerts, says her engineering and music studies at MIT were entwined. "There are deep connections between the way the human mind works when making music, and when it solves problems in the sciences," she says. "Asking if my music studies help my engineering studies is analogous to asking a computational biologist if her biological studies help her statistical studies."

    Talented students, talented teachers

    Chew's passion for music is fairly typical of the average MIT student. More than 60 percent of incoming freshmen declare advanced proficiency in a musical instrument, and at least 1,400 MIT students enroll each year in a music and theater arts class. As part of the Emerson Program for Private Instruction, the Institute offers scholarships each year to some 50 of its most talented scholar-musicians to pursue private instruction on their instruments with local master teachers.

    While only a few MIT students eventually pursue a full-time career in music, many graduates incorporate performance or composing into their professional and private lives. Such alumni include Eran Egozy '95, MEng '95, and Alex Rigopulos '92, SM '94, who founded the company that created "Guitar Hero." This hugely popular video game emerged from the pair's interest in providing a way for average people to express themselves musically through technology.

    "Students are engaged in music and the arts in general at MIT as they are with all their other academic work: with intensity, passion, commitment and rigor," says Fred Harris, director of wind ensembles and lecturer in music. "Over and over I am told by students and alums that it's the opportunity to explore, study, create and perform music that is among their most important, treasured and long-lasting experiences at MIT."
Janet Sonenberg, professor of theater arts and head of the Music and Theater Arts Section, says much of the credit for MIT's creative music spirit goes to Harbison, who made it possible to attract an "extraordinary" group of arts faculty to MIT. Harbison, in turn, praises MIT for seeking to hire faculty with new approaches instead of being merely content to hand the baton to professors cut from the same template. Such hires include Evan Ziporyn, the Kenan Sahin Distinguished Professor of Music.

    In 1993, Ziporyn founded the Gamelan Galak Tika, a Balinese music ensemble, not because it was logical for the Institute to have such a group but because he thought it would fit the Institute's quirky, expansive nature. "The kind of person that is going to seek out a Gamelan is similar to the kind of person who is going to seek out a robot club to build robots," he says.

    Teaching music at MIT was once thought to be about training the audiences of tomorrow, but today it's about letting students have all manner of musical experiences, says Ziporyn. Among other things, he has taught a course in computer music composition, in which students write music with computer-processed sound. Many of the students who take that course have little formal music training, but know far more about computers than he does, Ziporyn says. "One of the things I always love about teaching a computer music course is I would have all students in there making pieces of really weird music," Ziporyn says. "They ended up realizing, 'I can write a piece of music. Maybe I'm not Mozart, but I can write a piece of music.'"

    The Harbison celebration concert, which begins at 8 p.m. April 24 in Kresge Auditorium, is free and open to the public. For more information, please visit: http://web.mit.edu/arts/announcements/prs/2009/0212_Harbison.html.

    A version of this article appeared in MIT Tech Talk on April 15, 2009 (download PDF).

  • 'Chameleon Guitar' blends old-world and high-tech

    Natural wood, with its unique grain patterns, is what gives traditional acoustic instruments their warm and distinctive sounds, while the power of modern electronic processing provides an unlimited degree of control to manipulate the characteristics of an instrument's sound. Now, a guitar built by a student at MIT's Media Lab promises to provide the best of both worlds.

    The Chameleon Guitar — so named for its ability to mimic different instruments — is an electric guitar whose body has a separate central section that is removable. This inserted section, the soundboard, can be switched with one made of a different kind of wood, or with a different structural support system, or with one made of a different material altogether. Then, the sound generated by the electronic pickups on that board can be manipulated by a computer to produce the effect of a different size or shape of the resonating chamber.

    Its creator, Media Lab master's student Amit Zoran, explains that each piece of wood is unique and will behave in a different way when it is part of an instrument and begins to vibrate in response to the strings attached to it. Computers can't model all the details of that unique responsiveness, he says. So, as he began experimenting with the design of this new instrument, he wondered "what would happen if you could plug in acoustic information, like we do with digital information on a memory stick?" Under the direction of Media Lab Associate Professor Pattie Maes, and with help from experienced instrument builder Marco Coppiardi, he built the first proof-of-concept version last summer, with a variety of removable wooden inserts.
The concept worked, so he went on to build a more polished version with an easier quick-change mechanism for switching the inserts, so that a musician could easily change the sound of the instrument during the course of a concert — providing a variety of sound characteristics, but always leaving the same body, neck and frets so that the instrument always feels the same. With Coppiardi's help, he selected spruce and cedar for the initial soundboard inserts.

    This January, he demonstrated the new instrument at the annual Consumer Electronics Show in Las Vegas, where it received an enthusiastic response. He also demonstrated the earlier version at two electronics conferences last year.

    The five electronic pickups on the soundboard provide detailed information about the wood's acoustic response to the vibration of the strings. This information is then processed by the computer to simulate different shapes and sizes of the resonating chamber. "The original signal is not synthetic, it's acoustic," Zoran says. "Then we can simulate different shapes, or a bigger instrument." The guitar can even be made to simulate shapes that would be impossible to build physically. "We can make a guitar the size of a mountain," he says. Or the size of a mouse.

    Because the actual soundboard is small and inexpensive, compared to the larger size and intricate craftsmanship required to build a whole acoustic instrument, it will allow for a lot of freedom to experiment, he says. "It's small, it's cheap, you can take risks," he says. For example, he has a piece of spruce from an old bridge in Vermont, more than 150 years old, that he plans to use to make another soundboard. The wooden beam is too narrow to use to make a whole guitar, but big enough to try out for the Chameleon Guitar.
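A common way to simulate a resonating chamber from a dry acoustic signal — and plausibly the kind of processing described here, though the article does not specify Zoran's actual method — is to convolve the pickup signal with the impulse response of a virtual instrument body. A longer impulse response behaves like a larger chamber. The sketch below is a toy assumption, not the Chameleon Guitar's software:

```python
# Toy body simulation: convolve a pickup signal with the impulse
# response (IR) of a hypothetical resonating chamber. Both IRs below
# are made-up illustrations.

def convolve(signal, impulse_response):
    """Direct-form convolution: each IR tap adds a delayed, scaled echo."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

pickup = [1.0, 0.0, 0.0, 0.0]            # an idealized pluck from one pickup
small_body = [1.0, 0.3]                  # short IR: tight, dry resonance
large_body = [1.0, 0.6, 0.4, 0.25, 0.1]  # longer IR: a bigger virtual chamber

print(convolve(pickup, small_body))
print(convolve(pickup, large_body))
```

Because the input is acoustic rather than synthesized, the wood's individual character survives the processing — which matches Zoran's point that "the original signal is not synthetic, it's acoustic."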
The individual characteristics of a given piece of wood — what Zoran refers to as the "romantic value" of the material — are "very important for the player," he says, and help to give an individual instrument a particular, unique sound. Digital processing provides an infinite range of variety. "Now," he says, "it's possible to have the advantages of both."

    For now, Zoran is concentrating on developing the guitar as a thesis project for his master's degree, and hopes to continue working on it as his doctoral thesis project. After that, he says, he hopes it will develop into a commercial product.

    A version of this article appeared in MIT Tech Talk on February 4, 2009 (download PDF).

A guitar built by a student at MIT's Media Lab promises to provide the best of both the electronic and acoustic worlds.

  • MIT's Makan wins Rome Prize

    MIT professor Keeril Makan, a musician and composer acclaimed for his technique of layering recorded and live sounds, has been awarded the prestigious Luciano Berio Rome Prize for musical composition by the American Academy in Rome for 2008-2009.

    The prize, announced Thursday, April 10, in New York, carries a stipend of $24,000, and work and living accommodations for 11 months at the academy.

    Makan, assistant professor of music, originally trained as a violinist. He describes his music as an outgrowth of the western classical tradition, using familiar instruments and other musical traditions in new ways. Makan's music moves fluidly among disparate sounds, weaving them into instrumental combinations that range from small chamber ensembles to works for orchestra. Innovative and exploratory, it has required the composer to develop hieroglyph-like notations for musicians performing his work. In a saxophone piece, "Voice within Voice," for example, a row of jagged markings that look like shark's teeth means "put your teeth on the reed and grind."

    But notation is not where the process of composing starts for Makan, a 36-year-old native of New Jersey. "I write by physically interacting with the instrument I'm composing for. If I'm writing for the oboe, I'll play it in as many ways as I can imagine," he says. "As I work, new musical possibilities develop.
This is how I get the raw materials for a piece; I record myself, then I figure out how I'll work with the material."

    Makan will devote the 11-month residency in Rome to working on three major pieces, he says. One project will be to compose "Tracker," a five-part chamber opera in which technological instruments of the past, such as 19th-century contraptions for measuring pulse and motion, are linked thematically to current technologies and to the impact of technology on the imagination and emotional experience.

    Sketches for "Tracker" are now taped in five columns to the wall of Makan's MIT office, a small room packed with books and musical gear. Photographs by 19th-century scientist Etienne-Jules Marey top each column; poem-shaped segments of Jena Osman's libretto spill downward like adding-machine paper. There are no visible musical notes.

    In addition to the opera, Makan's plan for Rome is to complete a work for electric guitar and orchestra, commissioned by the American Composers Orchestra, to be premiered this November at Carnegie Hall. He will also finish a trio for flute, viola and harp, commissioned by the Harvard Musical Association, for violist and MIT professor Marcus Thompson.

    A tall order for 11 months, but Makan, who owns neither a car nor a television, finds economy in technology. He relies on Finale, a notation program, for experimenting with time and modeling, and on a digital audio workstation for analyzing the frequency components of pre-recorded sounds, en route to creating new ones.

    Recent MIT winners of the Rome Prize include Pulitzer Prize-winning novelist Junot Diaz, associate professor in Writing and Humanistic Studies, and John Ochsendorf, associate professor of architecture. A national competition, the Rome Prize is awarded annually to 15 emerging artists in various fields.

    A version of this article appeared in MIT Tech Talk on April 16, 2008 (download PDF).

  • Alum 'zaps' Museum of Science April 27

    Composer Christine Southworth (S.B. 2002) rehearses "Zap!," a composition in which the Van de Graaff generator provides static and flashing lights alongside flutes, guitar, cello, bass, piano, robots and human voices. "Zap!" will be performed at the Museum of Science's Theater of Electricity (Science Park, Boston) on Friday, April 27, at 7 p.m. and 8:30 p.m. as part of the Cambridge Science Festival. Tickets cost $10; for more information, see cambridgesciencefestival.org/. Photo / Bill Southworth

    A version of this article appeared in MIT Tech Talk on April 25, 2007 (download PDF).

  • ICA presents Machover work

    MIT Media Lab composer Tod Machover, known for his innovativeness as a musician and as a creator of new technology for musical instruments, will present an evening performance of work commissioned for the Grammy Award-winning Ying Quartet. The concert at Boston's Institute of Contemporary Art will feature Machover's composition "…but not simpler…" on Friday, April 6, at 8 p.m. at the ICA's waterfront site. (Also, due to popular demand, a second show has been scheduled on the same date for 6 p.m. Tickets for the 6 p.m. show are on sale now.)

    Built around Machover's new string quartet commissioned by the Ying, the upcoming concert of continuous music brings together works from diverse periods and styles, from Bach to Beethoven, Cage to Carter, Byrd to the Beatles. The concert also includes original electronic interludes. The concert premiered last season in New York and was given a rave review by the New York Times; April 6 is the event's Boston premiere. Media arts and sciences graduate student Mike Fabio helped design the technology and sound infrastructure for the event. For more information and to reserve tickets, please go to the ICA's web page for the performance.

    Machover's work will also be featured this spring in other venues, including a concert piece on "Music, Mind and Health," to be presented at the Media Lab's "Human 2.0" symposium in Kresge on May 9, and the world premiere of "VinylCello," his new work for Hypercello and live DJ, to be performed by Matt Haimovitz and DJ Olive, with Kent Nagano and the Berkeley Symphony Orchestra, on May 11.

    A version of this article appeared in MIT Tech Talk on March 21, 2007 (download PDF).

  • Japanese hip-hop: from 50 Cent to mirror balls and world peace

    Six months of hanging out in smoky, grungy "genbas," or Japanese hip-hop clubs, gave cultural anthropologist Ian Condry insight into how American rap music and attitudes were being transformed by the youth of Japan. But he couldn't figure out the mirror balls.

    Every club, from large to small, had a mirror ball that sent glittering light into the sweaty haze above the Japanese hip-hop fans, artists, music executives and first-timers. So "I had to develop my own philosophy of the mirror ball," Condry, associate professor of Japanese cultural studies, told an audience on March 1 during a discussion of his new book, "Hip-Hop Japan: Rap and the Paths of Cultural Globalization" (2006, Duke University Press).

    That philosophy highlights the relationships within the hip-hop community, he explained. The mirror ball illuminated "no single star on stage but rather spotlighting and then passing over all of the participants," Condry said, reading from his book. "The dynamic interaction among all these actors is what brings a club scene to life. Mirror balls evoke this multiplicity, splashing attention on each individual for a moment and then moving on — not unlike the furtive glances of desire between clubbers in a zone of intimate anonymity."

    Such details were crucial to Condry's insight into how affluent Japanese youth had transformed the music that came straight out of Compton into something distinctly Japanese. "The evolution of the Japanese hip-hop scene reveals a path of globalization that differs markedly from the spread of cultural styles driven by major corporations such as Disney, McDonald's and Wal-Mart," Condry said. "Indeed, hip-hop in Japan is illuminating precisely because it was initially dismissed as a transient fad by major corporations and yet took root as a popular style, nevertheless."

    Condry's talk was part of "Cool Japan: Media, Culture, Technology," a Feb.
28-March 3 conference at MIT and Harvard that explored the power and significance of Japanese popular culture.

    To illustrate his points, Condry played the video of the song "911" by King Giddra, a Japanese hip-hop group named after a three-headed monster in the Godzilla movie series. The video movingly juxtaposed images of Hiroshima with the destruction of the World Trade Center on Sept. 11, 2001, as the group rapped about the elusive nature of world peace.

    Japanese hip-hop — which Condry sees as having the four basic elements of rapping, deejaying, break dancing and graffiti art — quickly jettisoned the use of English, which had lingered in rock music. Japanese rapping has almost no talk of guns and very little mention of drugs, but incorporates images of samurai or uses Kabuki performance style and often focuses on global political issues. Yet bravado remains crucial: One female rapper uses the eighth-century poetry style of waka, "yet she does it to say, 'I'm the number one rapper and I can beat the boys,'" Condry said.

    Japanese rappers say they're not into American culture, Condry explained in an interview. "They say they're into black culture. They say, 'I don't care about America per se. But I love Spike Lee movies and I read the autobiography of Malcolm X … and I appreciate what black Americans have struggled to achieve.'"

    In the late 1990s, Japanese rap became more commercialized, but a broad underground hip-hop movement also emerged, spreading throughout the country among people from a wide range of social and economic backgrounds. Only in the last four or five years have "poor Japanese found a voice in hip-hop," he said.

    Condry admitted, with a laugh, that there were moments while hanging out in the genbas when he wondered whether this was appropriate fieldwork for a cultural anthropologist. Of course, he loves surveys as much as the next academic, but "You become part of the world. You see what's important to them," he said. "To get into that world, you need to learn a lot."
He also admitted that the Japanese hip-hop fans began to imitate him, although politeness prevented them from showing him how he was copied.

    The "Cool Japan" conference was sponsored by the MIT Japan Program, Harvard's Reischauer Institute of Japanese Studies, the Harvard Asia Center, MIT Foreign Languages and Literatures and MIT Comparative Media Studies.

    A version of this article appeared in MIT Tech Talk on March 7, 2007 (download PDF).

  • Media Lab plans 'sonic bath' for Lewis Music Library

    The Lewis Music Library will be transformed into what Tod Machover, professor of media arts and sciences, calls a "sonic bath" next week as graduate students from the Media Laboratory join him in a collaboration with Music Library staff to present "Library Music," a group of interactive music installations that explore the relationships among space, movement, touch and sound.

    Musical stairs, a tactile rainfall and a sonorous, robotic chandelier are among the 10 "experiences" to be featured in "Library Music." Workshop sessions, open to members of the MIT community, will give participants an opportunity to discuss the concepts and technologies behind each installation with Machover and the student designers.

    The workshops will take place Jan. 16-18, culminating with a demonstration on Friday, Jan. 19, from 2 to 5 p.m. in the Lewis Music Library (Room 14E-109).

    No advance sign-up is required for the workshops, and participants are welcome at individual sessions. The final demonstration is open to the public.

    According to Machover, the installations were developed individually but have been assembled so that they work nicely together in a progression through the library spaces, turning the library into a comprehensive, sound-filled experience. Some installations will be explored with the use of headphones; some will be set up in separate, enclosed rooms; and some will be in the open spaces.

    One of the installations, a robotic Music Chandelier, will be shown for the first time in "Library Music." Mike Fabio, graduate student in media arts and sciences, designed the laser-based system for the chandelier, which can be played by the public in its current iteration. Fabio's chandelier is being developed for Machover's opera, "Death and the Powers," which will premiere in Monte-Carlo, Monaco, in November 2008.

    "A library to listen to should be fun!" said Machover, expressing delight that the Music Library, a place normally devoted to listening to and thinking about music in silence, will be transformed by willing staff members and Machover's group into an interactive, musical environment.

    At the Jan. 19 demonstration, the student designers will explain the how, what and why of their installations and will be available to guide visitors through each experience. Also, Lewis Music Library staff will share some of the library's hidden treasures that relate to sound installations and experimental music technology. Refreshments will be served.

    For more information, contact Ariane Martins, x3-1613, e-mail: ariane@media.mit.edu.

    A version of this article appeared in MIT Tech Talk on January 10, 2007 (download PDF).