Hi everyone! As you know, Barry Schrader will be giving his farewell concert at CalArts on September 26. The following is the beginning of my interview with him. I opted to post the questions and answers as they come in. New Q&As will get a new post so you do not miss them, and they will also be added to this post so we have one central post for the full interview. This should make it easier for all of us to consume in our busy lives, and it will allow you to send in any questions that may come to mind during the interview process. If you have anything you'd like to ask Barry, feel free to send it in to firstname.lastname@example.org. This is a rare opportunity for us to get insight into a significant bit of synthesizer history, specifically with early Buchla systems, and I'd like to thank Barry for this opportunity. Thank you Barry!
1. When and how did you first get exposed to electronic music?
[Left image via wikipedia]
My first exposure was with the film Forbidden Planet, back in 1956. I would have been ten or eleven at the time, depending on the month the film came out. I was fascinated by the movie and its music, and sat through it three times and would have probably gone for a fourth were it not for my father finding me in the theatre and taking me home. Little did I realize that one of the composers of that music, Bebe Barron, would become one of my best friends decades later. In the same year, my father bought me a tape recorder, a Voice of Music stereo machine, an expensive item in those days. I had a lot of fun recording and splicing sounds, but I knew nothing of Schaeffer's work or anything else regarding electro-acoustic music. Growing up in Johnstown, Pennsylvania, there was no way I could have experienced any electronic music. I did once hear a broadcast of some electronic music on a New York radio station, and, in 1961, I heard part of a Badings/Raaijmakers LP (Epic BC 1118) being played in a Sam Goody store in Pittsburgh. That was about all the experience I had until I was in graduate school.
Even though I had been taking piano lessons since I was five, and later added organ lessons, I didn't major in music as an undergraduate, preferring instead to deal with English literature and writing. I did, however, perform a great deal, particularly as the accompanist to the men's glee club, and later as organist for high mass at Heinz Chapel. I was then majoring in musicology for my M.A. degree at the University of Pittsburgh. As for composition, I essentially pursued that on my own. By that time, 1968, I had heard several works of concrete and electronic music, but there was no institutional electro-acoustic music studio in Pittsburgh. Then, in 1969, the Music Department at Pitt hired Morton Subotnick as a visiting professor to set up a studio and teach classes. I knew Mort's work from the two Nonesuch LPs that he had released, and I was excited about working with him and learning the equipment. I was made Mort's Teaching Assistant, and, because he was there only a few days twice a month, I was put in charge of the studio and expected to teach faculty and students how to use the equipment. The studio consisted of a good-sized Buchla 100 system (the Buchla line had by then been acquired by CBS Musical Instruments), two stereo tape recorders, amps, speakers, and several mikes. Being in charge of the studio, I could come and go as I pleased, and I often spent nights and weekends learning and working with the equipment. As a result of this, and reading whatever I could find about the medium, I had a pretty good working knowledge of the Buchla by the time I graduated in 1970. Mort then brought me out to LA as one of his graduate assistants in the just-opened CalArts School of Music, where I also taught the studio classes. I graduated with an M.F.A. in composition in 1971, and was hired on the faculty by Mel Powell the same year. So, when I retire from CalArts at the end of June 2016, I will have been on the faculty for forty-five years.
2. Was the Buchla 100 your first exposure to the synthesizer in person? What did you think of it when you first saw it? What was it like at the start when you were learning the system?
When Pitt's Buchla 100 system was delivered in 1969, it was the first time I had ever seen an analog voltage-controlled synthesizer in the flesh. I had seen pictures of the Moog 900 series on albums and in magazines. (I still remember the original 1968 Switched-On Bach LP with the cover of a Bach impersonator supposedly listening to something through headphones that were plugged into the input of a 914 filter that was not connected to any of the other modules.) Seeing a fairly large system in person was quite different, a sort of epiphany. The only previous experience of mine that came close was the first time I had to deal with a large pipe organ with four or more keyboards (the Aeolian-Skinner Opus 922 organ in Heinz Chapel had 4 keyboards, 4,295 pipes, and 78 stops).
Although seeing the Buchla 100 without knowing anything about using it should have given me pause, it didn't. First of all, Mort Subotnick, who was then the main composer using the system, was going to be my tutor. Secondly, CBS Musical Instruments published an operational manual, written by a young Hubert Howe, in late 1969 or early 1970, and this explained the modules and included tutorials with patch diagrams. This was very helpful in both learning and teaching the system. (The manual is available as a pdf online.)
But when it came to technical matters, things like amplitude and frequency modulation, neither the manual nor Mort were very specific. Howe does define ring modulation as a type of amplitude modulation giving only the sum and difference of the input frequencies, but AM is referred to as tremolo and FM as vibrato, both of which are correct at a modulation rate of 20Hz or less. At that time I knew nothing of sidebands, or anything else about synthesis theory. The comprehension of the acoustics of musical instruments, which became important in my design of timbres, was also largely unknown to me in 1969, although John Backus’ book The Acoustical Foundations of Music, published that year, later became an important resource to me, as did, much later, Chowning & Bristow’s FM Theory & Applications (1986). I did learn the basics of synthesis theory by the early 1970s, some of which I included in my book Introduction to Electro-Acoustic Music (1982), but not in a very technical way.
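Howe's definition can be checked directly: multiplying two sinusoids leaves only their sum and difference frequencies, with the original carrier and modulator absent. Here's a minimal Python sketch (my own illustration; the sample rate and frequencies are arbitrary choices) that ring-modulates a 440 Hz "carrier" with a 100 Hz "modulator" and measures the resulting spectrum at a few frequencies:

```python
import math

SR = 8000                      # sample rate in Hz (arbitrary for this demo)
N = SR                         # one second of audio
F_CARRIER, F_MOD = 440, 100    # carrier and modulator frequencies

# Ring modulation is simply multiplication of the two signals.
ring = [math.sin(2 * math.pi * F_CARRIER * n / SR) *
        math.sin(2 * math.pi * F_MOD * n / SR) for n in range(N)]

def magnitude_at(signal, freq, sr):
    """Normalized DFT magnitude of `signal` at a single frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# Energy appears only at the sum (540 Hz) and difference (340 Hz) frequencies:
# expect ~0.25 at 340 and 540, and ~0 at the original 100 and 440.
for f in (F_MOD, F_CARRIER - F_MOD, F_CARRIER, F_CARRIER + F_MOD):
    print(f, round(magnitude_at(ring, f, SR), 3))
```

The disappearance of the carrier is exactly what distinguishes ring modulation from ordinary amplitude modulation, which retains the carrier alongside the sidebands.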
Learning the Buchla 100 was both exciting and exasperating, but I didn’t let the frustration get to me. By repeating the making of certain patches, I gained the ability to predict the outcome, even if I didn’t then have a good technical understanding of what was happening. I would listen to Mort’s music created on the 100 and try to reverse engineer what he had done. One of the most rewarding things about electronic music has always been the almost instantaneous feedback one gets in terms of an audible result. This allows a composer to progress more rapidly as opposed to writing for instruments and having to wait for a performance to hear the music. So being able to spend a lot of time with the 100 allowed me to advance fairly rapidly in gaining some mastery of it in a relatively short time. Besides working in the studio, I also had to deal with teaching theory and my own academic work. In retrospect, this was probably one of the most interesting times of my life: I was entering a new world of electronic music at the same time I was studying the transcription of medieval music manuscripts, taking computer programming classes (Michigan Algorithmic Decoder), and analyzing Stravinsky’s Le Sacre du Printemps, the subject of my master’s thesis, a large part of which was written to refute Boulez’s analyses of several sections of the ballet in his Relevés d'apprenti. Of all these pursuits, working with the 100 system proved to be the most important to my career as a composer of electronic music, but the others were also significant to what I would do in the future.
3. Who came through the studio in the early days in Pittsburgh and later? Were there any notable experiences that you remember?
I don’t recall many composers coming through the studio at Pitt. There was one main exception, however, that proved to be important to me in later life. In 1969, Vladimir Ussachevsky [2nd image left via Electrospective Music] was giving an evening presentation at Duquesne University in Pittsburgh. Dr. Robert Snow, then the Chair of the Pitt Music Department, asked me to go to the presentation, introduce myself to Ussachevsky after the presentation, and take him out for dinner and/or a drink. I was certainly aware of Ussachevsky’s work and his importance in the field of what he had termed “tape music,” and being a young nobody I felt somewhat apprehensive about doing this. But Ussachevsky turned out to be a very pleasant person, and was actually grateful for the invitation as he hadn’t had dinner and was quite hungry. We went to dinner, and, afterwards, he asked to see the studio at Pitt, so I took him there. He even asked me to play him some of my work, which was very elemental at that time, but praised it nevertheless, and encouraged me to continue composing electronic music. Eventually, Vladimir and I became friends, and we saw and communicated with each other many times over the next two decades. The first SEAMUS Lifetime Achievement Award was given to Vladimir with unanimous approval from the Board. He was a special person to me, both as a composer, and as someone who had gone through an amazing journey in life. My final communication with him was late in 1989, shortly before he died. He could no longer speak, but I communicated with his assistant, with Vladimir listening on the phone.
[Photo: Bruno Maderna (L) and Luciano Berio editing tape] Maderna was the guest conductor of the Pittsburgh Symphony for some weeks in October of 1969, and Mort Subotnick invited him to see the studio. I remember Mort describing and explaining the studio to Maderna, who seemed to intermittently nod off during the session. We were doing a concert of new music at Frick Auditorium at the time, and had decided to include Maderna's Musica su due dimensioni (1952), originally composed for flute, percussion, and electro-acoustic music on tape, but quickly revised for just flute and tape. Bernard Goldberg, the principal flutist of the Pittsburgh Symphony, was performing the work. Bernie's Italian was limited, and mine was non-existent, and we were having difficulty understanding some of the Italian-language directions given in the score. So we went to Maderna about an hour before the concert to ask him what the instructions meant. Maderna spoke very little English, but we were able to convey our request for explanation to him. He looked over the score for several minutes, flipping the pages back and forth, and finally looked at us with upturned hands and a shrug, indicating that he, himself, didn't understand what he had written seventeen years earlier. The performance went off quite well anyway. Some nights, Mort, Maderna, pianist Marvin Tartak, and I would go out for a drink at the old Hotel Webster Hall. Maderna wasn't really able to enter the conversation much, but he would think up combinations of composers' names and, every few minutes, offer one up as a sort of joke. The only one I remember today is 'Bachausen.'
I also remember composer Gerald Shapiro stopping by in 1970 with an early ARP in the back of his van. It must have been a 2500 at that time, but I’m not certain.
The electronic music studio at Pitt did attract several students, many of whom took one or more of Mort's courses. One of these was Guy Klucevsek, who has become a noted composer and accordionist.
[Daytime image via Historic Pittsburgh: "The 'Sky Ballet' in Point State Park. Artist Otto Piene created the 'Sky Ballet' using balloons. The piece was exhibited from April 16 to April 18, 1970. Piene is known for his journal 'Zero' and his experimentation with light, movement, and space throughout the 1960s." You can see video of a smaller scale exhibit of Otto Piene's Light Ballet here. Note the audio is not Todd Barton's in that piece.]
4. Who came through the studio in the early days at CalArts? Were there any notable experiences that you remember?
The first year of CalArts' operation, 1970-1971, was held at Villa Cabrini in Burbank, an old, abandoned Catholic school for girls. The campus in Valencia was still under construction, and so the school was forced to find temporary quarters. Much has been written and said about the initial year at CalArts; a fairly good overview can be found at East of Borneo.
In my mind, the first year that CalArts was in session was a study in anarchy. It’s not that things didn’t happen; actually quite a lot took place. But there wasn’t much in the way of organization, and many students and some faculty were very influenced by the somewhat radical social and cultural idealism of the 1960s. One could do pretty much what one wanted, although there were course requirements, and a sort of bureaucracy that demanded a certain amount of paperwork. Since we were free to organize things as we wished, being a night person, I taught a class on the Buchla 100 that started at midnight, not a great time for most people I suppose, but students showed up anyway. No one had thought of ordering many chairs for the temporary music quarters at Cabrini, so the studio was, essentially, on the floor, with the Buchla 100 cabinets elevated by a few bricks. What actually did seem to happen at the Cabrini campus, though, was the development of a real spirit of community, something that I’ve never experienced before or since. The number of students was relatively small, only a few hundred, and people were constantly interacting with each other, moving across schools and disciplines. Most of the students’ lives centered around what was going on at the Cabrini campus, and interactions among faculty and students were commonplace.
As students, during CalArts’ first year, we could generate our own projects with little faculty oversight, and so it wasn’t unusual for all kinds of things to be going on around the clock at the Cabrini campus involving people from various disciplines. I remember one work I did in early 1971, Elysium, for harp (performed by the late Susan Allen), two dancers, prerecorded electronic music, envelope detectors (and other Buchla CV modules), and SCR-controlled slide projectors. I worked with people from most of the other schools in putting this together. The piece was successfully performed for an RAI television crew at the Burbank campus, and a year later at the LACMA Monday Evening Concert series. But a disastrous performance at UNLV, where equipment failure was rife, convinced me to leave the multimedia world and concentrate on music.
Nature, even human nature, doesn’t like a vacuum, and this sort of community-of-artists feeling would start to dissipate once CalArts moved to the new Valencia campus in 1971. The School of Music was forced to move in February 1971, because of the San Fernando earthquake that destroyed most of the buildings that the School of Music then occupied in Burbank. Stephan (Lucky) Mosko was working in the Cabrini-based studio at the time of the earthquake, but quickly left afterwards, noting in the studio log something negative about God’s perspective of low frequencies.
There's a fascinating (although sometimes strangely edited) series of recordings of interviews done by Charles Amirkhanian and Richard Friedman with the original Deans of CalArts' various schools and departments in January of 1970, before the school had opened. These are online at radiOM.org. Listening to these is like being in a time machine, taking you back to what were considered very progressive ideas of the late 1960s. There is more philosophy than reality in these interviews. Subotnick's Touch opens and accompanies the first interview with Herb Blau, the first Dean of the School of Theatre. I can remember Mort telling me in 1969 that the term and concept of "composer" in the traditional sense was now meaningless. I'm not certain that he would feel the same way today.
[Nerve.com on the parties at CalArts in the 1970s - somewhat NSFW]
During the first semester at the Cabrini campus, there were many artists who visited the school. Some, like Ravi Shankar, were listed as adjunct faculty, but they seldom, if ever, actually appeared. Shankar did come once or twice, and he asked me to show him the Buchla equipment, expressing interest in learning how to use it, but that never happened. Ussachevsky was one of the visiting composers in 1970, but I can’t specifically recall others, although visitors, of one kind or another, were an almost daily occurrence, and many of these were from the underground art cultures of the day.
Once we moved to Valencia, especially from late February to June of 1971, the School of Music was alone at the Valencia campus, which was still undergoing the final stages of construction. There were very few visitors during that period, and, essentially, no interaction with the other schools. As a result, the music faculty and students worked and socialized with each other a great deal, and we all got to know each other fairly well. I can remember all-night concerts and parties attended by most of the students and many of the faculty, one in particular that featured Indonesian shadow puppets accompanied by the gamelan orchestra.
In the fall of 1971, the second year of CalArts opened with all of the schools at the now-completed Valencia campus. Things began to be more regimented and organized. Some faculty had left because of their disillusionment with what had and hadn't happened during the first year of the school's existence. But the composition faculty remained impressive: Hal Budd, Mel Powell, Morton Subotnick, and Jim Tenney were all there, and Mel hired me to join them. This, of course, was a great honor, but integration proved to be more difficult than I had imagined. One does not change roles while remaining in the same play without experiencing some complications.
During the late 1970s and throughout the 1980s, there were a myriad of visiting composers at CalArts, particularly during the Contemporary Music Festival, and also the CalArts Electro-Acoustic Music Marathons from 1979 to 1983. But in the early 1970s, not a lot of composers visited. In, I think, the fall of 1971 or early in 1972, Donald Buchla showed up to check out the 200 systems and brought with him a young David Rosenboom [pictured left with his Brainwave Music Interface Buchla], who, many years later, would become Dean of the CalArts School of Music. Earlier in 1971, in April, I think, Mort and I, along with Serge Tcherepnin, who was then working at CalArts, took two Buchla 200 systems to the old Ambassador Hotel where the Audio Engineering Society convention was being held, and did a real-time improvisation. Serge played along with us on his violin. We needed a lot of people to help us move the equipment, as the old 200s in their heavy wooden cases were not exactly designed to be portable.
[left: Serge Tcherepnin, early 1970s, with an early Serge system via warrenburt.com]
I knew Serge fairly well in those days, and I know he started designing his synthesizer modules in 1972. But I had no direct involvement with that project, so there’s little that I can say about it. I do know that John Payne, who held the audio engineer-studio technical director faculty position from 1971 through 2004, had the schematics for the Buchla 100 modules. Peter Grenader [of Plan B] tells me that while Serge may have been influenced by the functionality of the Buchla modules, the circuitry Serge designed was of his own invention. I’m fairly certain that CalArts contributed to the research and development of the Serge system, because I know Serge was supposed to deliver two completed systems once they went into production. This never happened, however, and so CalArts had no Serge equipment until the early 1990s. [see this post for a history of Serge from Darel Johansen who donated the "Black Serge" system.]
[directly below: Serge systems via Experimental Sound Practices CalArts. Side note: Peter Grenader of Plan B restored the Black Serge. The system in the background was missing many internals.]
One development that I was involved with was a rather personal one. In late 1971, Yamaha sent a young engineer, Fukushi Kawakami, to CalArts to study developments in electronic music production. He took on the nickname of “Fortune,” and quickly became an integral part of the school. Learning English and electronics on his own, both at a remarkable pace, he asked me if there were some modules that didn’t exist on the Buchla 200 that I would like to have. After discussing this with him, the results were the four Fortune Modules that he built for me. All of these became important in my work, but, by far, the most important was the control voltage matrix gate, which allowed me to automate the mixing and modification of control voltages, resulting in more complex results than would otherwise be possible. I used the Fortune Modules in all of my work done with the Buchla 200 from 1972 until I left the analog world for the digital realm in 1985. I don’t believe anyone else ever used these devices. These unique modules are now in the collection of Grant Richter. Dr. Kawakami is still active as professor at Shizuoka University of Art & Culture, and as President of Sound Concierge Co., Ltd. We still communicate with each other on an occasional basis.
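To give a rough idea of what a control voltage matrix gate does, here is a tiny conceptual sketch in Python. It is purely my own illustration of the routing idea (each output is a gated sum of input CVs, with the gate pattern switchable to automate the routing), not a description of Kawakami's actual circuitry; all names and values are invented for the example:

```python
# Conceptual model of a CV matrix gate: each output sums those inputs whose
# gate is currently "on". Swapping gate patterns under external control is
# what automates the mixing of control voltages.

def matrix_gate(inputs, gates):
    """inputs: list of CV values; gates: rows of 0/1 flags, one row per output."""
    return [sum(cv for cv, g in zip(inputs, row) if g) for row in gates]

cvs = [1.0, 0.25, -0.5]           # three incoming control voltages (illustrative)
pattern_a = [[1, 0, 0],           # output 1 passes input 1 only
             [0, 1, 1]]           # output 2 mixes inputs 2 and 3
pattern_b = [[1, 1, 0],           # a second routing pattern to switch to
             [0, 0, 1]]

print(matrix_gate(cvs, pattern_a))  # [1.0, -0.25]
print(matrix_gate(cvs, pattern_b))  # [1.25, -0.5]
```

Stepping between patterns over time yields composite control voltages that would be tedious to patch by hand, which matches the "more complex results than would otherwise be possible" described above.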
For me, one of the most important things in my life in the 1970s was the Currents Electro-Acoustic Music Programs at the Theatre Vanguard in West Hollywood. Judith Thomas Stark, who, with her first husband William Thomas, had developed JBL, had profited greatly from the sale of the company and wished to start a non-profit venue for the presentation of new music, dance, theatre, and film. She bought the old Stage Society Theatre at 9014 Melrose Avenue in what is now West Hollywood and opened it in 1973 as the Theatre Vanguard. This became the center for new work in the Los Angeles area until its closure in 1978. Leonard Stein, who was on the Board of Directors of the Vanguard, asked me to begin a series of programs featuring electro-acoustic music. Even though there had been concerts of electro-acoustic music dating back to at least 1952 in New York, I believe this was the first ongoing concert series devoted to the medium, at least outside of academia. I've written about Currents in an article on my website, documenting some of the concerts and the history of the series and Theatre Vanguard. There was a close connection between CalArts and Theatre Vanguard, and I often used equipment from CalArts in producing the concerts, as well as presenting performers who were CalArts faculty and students. Carl Stone, well-known to most fans of electro-acoustic music, and who had been a student at CalArts from its inception, was the technician of the Currents series. I presented well over one hundred works of electro-acoustic music over the life of the series, many of which were premieres, including a number of works from outside the United States. Currents was well-received and, I think, was instrumental in bringing what was then often considered music of novelty and curiosity into a more mainstream existence.
Adam Beckett - Heavy Light - 1973
In 1973, I composed music for two short films that were to have some impact on my later career. It’s amazing to me that, in one way or another, they’ve had some significance even to the present day. The first was a short abstract animation by the brilliant animator Adam Beckett called Heavy-Light. A restored version of this film is available on YouTube [embedded above]. Adam, who died at the age of 29 in 1979 after working on the first Star Wars movie, was an incredibly inventive animator. Heavy-Light was his most abstract and most unusual completed work, made from just thirteen drawings that were processed using an Oxberry animation camera and an optical printer. The music I composed for the film was done with a Buchla 200 system, using a 16mm projector, a paper punch, and a stopwatch to coordinate sound production and recording with the film. Several Revox stereo and Ampex four-channel tape decks were used. As the film is a sort of triptych, each section having three parts, I composed the music in nine sections, each of which has a one-event introduction that precedes the entrance of the animation. To assemble the sound track, we used a flatbed with one film and two sound transports, which allowed for mixing and dovetailing the sections. The final mix ended up on an optical track, which, at that time, didn’t allow for great audio quality. Nevertheless, the music track on the restored video, taken from a pristine copy of the film, sounds today the same as it did then. At first, Adam wasn’t sure he liked the music, saying that I’d made the film into a sort of horror movie, but, eventually, he came around to liking what I’d composed, considering the music closely married to the images. Heavy-Light has become a classic in the world of experimental animation, and remains unique in the history of the field. Heavy-Light is also available on an Iota Center DVD containing all of Adam’s finished animation.
Death of the Red Planet was the first film to use laser images as the visual source material. Even though there was a sort of story to the film, the images were abstract. For the music, the director, Dale Pelton, wanted a quadraphonic score. In the early 1970s, major first-run theaters were using four-channel sound where there were left front, right front, center front, and center rear channels. I used this arrangement to create the illusion of the music starting in the rear, moving to the front center, and then spreading out to the sides. Quadraphonic audio tracks appeared as four separate magnetic tracks on the 35mm film. Pelton wanted a huge score, one that matched his vision of the story. What I composed was always intended to exist in four channels, and some of the sound is very big, indeed. At the premiere of the film at the Todd-AO studios in Hollywood, Pelton wanted the volume rather high, and the circuit breakers on the speakers kicked in about halfway through the film, so we had to stop and restart the projection. Death of the Red Planet played in major theaters as a short accompanying the film Yessongs, a documentary about the band Yes. Being young and naïve, I signed a contract giving me a percentage of the film's net (what Eddie Murphy calls "monkey points") and so never saw a penny in return. I didn't keep a copy of the entire score, but I do have a ten-minute suite from the film in the original quad format. Until recently, I thought this film was lost, but I've learned that one or more of the original prints still exist; since the film has never been digitized, though, it would be difficult to project, as one would have to find one of the old 35mm projectors that read four-stripe magnetic audio tracks. I've tried mixing the music down to stereo, but, so far, it never sounds very satisfactory: there's so much going on in parts of the score that phase cancellations and reinforcements result, and the dynamic range is difficult to manage.
Perhaps some day I'll succeed in creating a decent stereo version of this music.
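The rear-to-front-to-sides trajectory described above can be sketched with simple equal-power gain curves. The following Python fragment is only my own illustration of the general idea, not Schrader's actual mixing technique; the channel order and the sine/cosine crossfade law are assumptions:

```python
import math

def quad_gains(pos):
    """pos in [0, 1]: 0 = center rear, 0.5 = center front, 1 = spread to sides.
    Returns gains for (left front, right front, center front, center rear),
    using equal-power crossfades so loudness stays roughly constant."""
    if pos <= 0.5:                                # center rear -> center front
        a = (pos / 0.5) * (math.pi / 2)
        return (0.0, 0.0, math.sin(a), math.cos(a))
    a = ((pos - 0.5) / 0.5) * (math.pi / 2)       # center front -> L/R spread
    side = math.sin(a) / math.sqrt(2)             # split the power between L and R
    return (side, side, math.cos(a), 0.0)

# The sound starts entirely in the rear, lands in the center front at the
# midpoint, and ends split equally between the two front sides.
for p in (0.0, 0.5, 1.0):
    print(p, tuple(round(g, 3) for g in quad_gains(p)))
```

Because the gains are sine/cosine pairs, the summed power of the four channels stays at 1.0 throughout the move, which is the usual way to avoid an audible dip in level mid-pan.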
5. Who came through the studio in the later years at CalArts? Were there any notable experiences that you remember?
The middle 1970s through the late 1980s was the heyday of important composers visiting CalArts. This was partially because of money available from both the National Endowment for the Arts and the California Arts Council, both of which, in those days, supported composers and festivals of new music. In addition, as I recall, there was a full composition position left unfilled so that the money could be used to bring in visiting composers. This allowed Mort Subotnick to create the CalArts Contemporary Music Festival. I think the first festival was in 1978, at least this is the year of the earliest documentation that I can find. The scope and size of the festival continued to grow through 1987, but started to shrink in 1988. You can see the nature and changes in the festival documented in the reviews of the festival in these years from the Los Angeles Times: http://articles.latimes.com... The festival lasted through the early 1990s, but became increasingly smaller as the funds to support it disappeared.
[Photo of Terry Riley and Barry Schrader at Betty Freeman's in 1985]
One of the important supporters of the CalArts Contemporary Music Festival was Betty Freeman, a well-known philanthropist for new art and music. Freeman financed several composers and projects of new music, and, in the fall and winter, hosted a monthly gathering of important society and new music figures at her Beverly Hills estate, the one immortalized by David Hockney in several paintings. These events were an important showcase for the new music scene in the 1980s and 1990s. While they were by invitation only, there were usually more than 100 guests at each event. I especially enjoyed seeing Nicolas Slonimsky at these events, along with many other composers from the LA new music world. Each affair would feature two composers presenting and discussing their music for about an hour each, followed by an open buffet. Terry Riley and I were featured at the same event in 1985. Freeman's support of the CalArts New Music Festival was a major part of the festival's funding, and as she moved towards supporting other things in the late 1980s, the loss to the festival was sorely felt.
[Excerpt from 1983 poster for Electro-Acoustic Music Marathon festival.]
After the close of the Theatre Vanguard, I began another festival at CalArts: the Electro-Acoustic Music Marathon. This festival ran only four years, 1979-1983. It was quashed by the recently-appointed Dean of the School of Music, Frans van Rossum, who believed that electro-acoustic music didn't deserve to have its own festival. The final edition of the Marathon was an exciting one, ending with a concert of live/electro-acoustic music from the 1950s, very early works that were rarely heard. Among other things, this was the first time that Varèse's original tapes for Déserts had been presented in the US; I asked François Bayle, then head of the Groupe de Recherches Musicales in Paris, where the tapes were kept, to prepare a stereo composite of the two mono tapes Varèse had created.
Because of the festivals and people invited as guests, a great many composers were at CalArts during the late 1970s through the late 1980s. Unfortunately, CalArts has not been good at documenting its own history, other than an administrative narrative. But as for the actual artistic happenings at the school, little is recorded. So here's a very incomplete list of some notable composers who visited CalArts over the years, a list made up of those names that are in my memory: Aaron Copland, Alvin Curran, Alvin Lucier, Annea Lockwood, Anthony Braxton, Barton & Priscilla McLean, Ben Johnston, Brian Ferneyhough, Bunita Marcus, Charles Amirkhanian, Charles Dodge, David Behrman, David Del Tredici, Earle Brown, Elliott Carter, Frank Royon Le Mée, Frederic Rzewski, Gordon Mumma, Harrison Birtwistle, Iannis Xenakis, Ivan Tcherepnin, Jean-Claude Éloy, Jean-Claude Risset, Joan La Barbara, Joan Tower, John Adams, John Cage, John Chowning, John Eaton, John Zorn, Jon Appleton, Kenneth Gaburo, La Monte Young, Larry Polansky, Leonard Rosenman, Libby Larsen, Lou Harrison, Louis & Bebe Barron, Louis Andriessen, Luc Ferrari, Luciano Berio, Lukas Foss, Martin Bresnick, Mauricio Kagel, Max Neuhaus, Michel Redolfi, Milton Babbitt, Morton Feldman, Nicolas Slonimsky, Pamela Z, Paul Chihara, Paul Dresher, Paul Lansky, Pauline Oliveros, Phill Niblock, Pril Smiley, Richard Teitelbaum, Robert Ashley, Roger Reynolds, Salvatore Martirano, Steve Reich, Terry Riley, Tod Machover, Vinko Globokar, Virko Baley, Vladimir Ussachevsky, and William Kraft. I suspect that a complete list would be considerably longer than this one. Some composers were there for only a day or two, and some stayed an entire semester. Some, like Earle Brown, were visiting faculty on more than one occasion.
Most of these visitors are not composers of electro-acoustic music, but several are. Few, however, actually worked in the CalArts studios. During the late 1980s and early 1990s, the NEA was sponsoring residency and outreach programs for composers, and I arranged long-term visits for Michel Redolfi, Frank Royon Le Mée, and Vladimir Ussachevsky. There were other long-term residencies, such as that of Sal Martirano, but the only visiting composer I remember actually completing a work composed in the CalArts studios was Michel Redolfi. His work Desert Tracks, a four-movement electro-acoustic music composition using both concrete and electronic material, is based on journeys through the Mojave Desert. (A new vinyl recording of this will be released soon on Sub Rosa Records.) I also ran a series of workshops for LA-based composers on synthesis and studio techniques. I don’t remember most of the composers who took that workshop, but Bebe Barron and Rodney Oakes were two of the local composers who attended.
[Barry Schrader in studio B303 in 1979 with Buchla 200 in the background]
As for the equipment in the CalArts studios during the 1980s through 2000, the bases of the studios were the two large Buchla 200 systems, one in studio B303, and one in B304. There were two other studios: B305 and B308, the latter eventually becoming the Dizzy Gillespie Digital Recording Studio. In the 70s & 80s, B305 was a “multimedia” studio. In the early 1970s, we had a lot of Buchla 100 modules, including the original red ones that had “San Francisco Tape Music Center” printed on them. These 100 modules were spread across all of the studios. Particularly important were the many envelope detectors that we had which allowed interfacing with lights, motors, and other things using SCRs.
Around 1974, the Buchla 500 system was installed in B304 [you can see it here and here]. This was a hybrid system using a Computer Automation PDC-216 computer to control the analog 200 modules using the 500 interface modules. Only 3 of these were made, and CalArts had the original. The 500 was an early hybrid system, and while an interesting concept, it never worked well enough to compose with. I’m unaware of anyone having completed a composition on the 500 at CalArts. (One of the 500s went to the short-lived Norwegian Studio for Electronic Music, and Hal Clark has released recordings of works in which he states that he used the 500.)
The Buchla 300, developed in 1977, was installed at CalArts around 1979 or 1980. The 300 was also a hybrid system, relatively compact, and could be used as a manually controlled analog system or a digitally controlled hybrid system. It used an updated version of the Patch IV programming language. (An excellent monograph on the 300 by Dale Millen can be downloaded from http://webcache.googleusercontent.com/...) Several composers worked with the 300 at CalArts, and I had to teach it, but I never composed with it. By the time it arrived, I had mastered the 200 system to such a degree that the possibilities of the 300 seemed rather paltry to me, particularly with regard to timbral complexity. [click here for an image of Morton Subotnick's Buchla 300]
The 300 convinced me that we needed to expand our technology beyond what Buchla was doing, and I was greatly impressed with what had been done in designing the Dartmouth Digital Synthesizer, which became the Synclavier in 1977. So I campaigned to get one into our studios, but I was unsuccessful. Mort Subotnick wanted to keep working with and supporting Buchla and his designs, so the next system we got was the Buchla 400 in 1982. The 400 was a big improvement over the 300, offering six digital oscillators, and the ability to compose with a programming language (MIDAS – an improvement over Patch IV), or with a graphic score interface. The filters on the 400 were still analog. In addition to traditional AM & FM synthesis, the 400 utilized waveshaping (transfer functions). Again, I had to learn and teach the 400, but I never found it useful for my own work. [Click here for a pic of a Buchla 400]
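Waveshaping of the sort the 400 offered can be sketched in a few lines. The following is an editorial illustration of the general technique, not the 400's actual MIDAS implementation: a transfer function built from weighted Chebyshev polynomials maps a full-scale sine input onto a chosen set of harmonics.

```python
import math

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1 if n >= 1 else t0

def waveshape(sample, weights):
    """Apply a transfer function built from weighted Chebyshev polynomials.
    Driving T_n with a full-scale sine yields the nth harmonic, so the
    weights directly set the relative amplitudes of the output spectrum."""
    return sum(w * chebyshev(n, sample) for n, w in enumerate(weights, start=1))

# A 440 Hz sine input shaped into harmonics 1-3 with relative weights 1.0, 0.5, 0.25.
samples = [waveshape(math.sin(2 * math.pi * 440 * t / 48000), [1.0, 0.5, 0.25])
           for t in range(48000)]
```

The appeal of the Chebyshev formulation is that the spectrum of the shaped signal is known in advance, rather than discovered by trial and error.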
Barry Schrader: Moon-Whales
I continued to use the Buchla 200 systems for composing, and the last work I completed with them was a large song cycle for soprano and tape, Moon-Whales and Other Moon-Songs (1982-83), based on poems by Ted Hughes. A video of the 7th and final movement of this work can be found on YouTube [video above]. After this, I created only real-time automated installations with the 200, most of which were never recorded. I felt that I wanted to move in a different direction technologically, but I didn’t have the tools that interested me available at CalArts. It was around this time, 1983, that MIDI was introduced commercially, and Yamaha began releasing its X-design digital synthesizers. I got to play around with a DX7, but we had no Yamaha equipment until 1985 when we got a Yamaha TX816 and a QX1 sequencer. The QX1 was a terrible tool to use; using an alphanumeric keyboard and having only a two-line readout of data made programming the TX816 a laborious task. But the possibilities of this technology fascinated me, and in early 1986, I bought myself a DX7, a Mac Plus, some software from MOTU and Opcode, and a MIDI interface. This was my first home studio. I couldn’t afford a TX816 at this point, but I bought one a couple of years later. After 1985, I no longer composed with the Buchla 200. Chowning’s book on FM synthesis came out in 1987, and I was quick to buy it and teach myself how to work seriously with what the Yamaha engineers had done with his research. At that time, you could not yet record audio on a home computer, but I was able to realize a lot with the TX816 in real-time, and the Opcode editor Galaxy made it easy to change programs as the computer was “playing” the 816, using MOTU’s Performer or Opcode’s Vision.
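The FM technique Chowning described (which the Yamaha X-design synths implement as phase modulation between operators) reduces to a single formula. Here is a minimal editorial sketch of the classic form; the carrier, ratio, and index values are arbitrary examples, not a TX816 patch:

```python
import math

def fm_sample(t, fc=440.0, fm_ratio=1.0, index=2.0, amp=1.0):
    """Classic Chowning FM: y(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t)).
    The modulation index I controls how much energy spreads into sidebands
    at fc +/- n*fm; sweeping I over time is what gives FM its dynamic spectra."""
    fm = fc * fm_ratio
    return amp * math.sin(2 * math.pi * fc * t + index * math.sin(2 * math.pi * fm * t))

# One second of a harmonic FM tone: integer carrier/modulator ratio keeps
# all sidebands on harmonics of 220 Hz.
sr = 48000
one_second = [fm_sample(n / sr, fc=220.0, fm_ratio=2.0, index=4.0) for n in range(sr)]
```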
My composition Triptych (1986) [below] was awarded first prize in the Yamaha-URBAN 15 competition in 1987 for a real-time work done with the 816, and, I think, demonstrates what can be done with just one 816 rack, the right software, and the knowledge of how to use them. I’ve always thought that composers, especially academic composers, undervalued the Yamaha X-design synths, and particularly the TX816. The DX7’s use in popular music, usually with the factory-installed patches, unfortunately created the impression that the X-design synths were severely limited.
[CalArts studio B308 in 1986 (L to R: David Roitstein, John Payne, Barry Schrader, Alan Chaplin) (the Buchla 400 is on the left above (I think) a Yamaha KX88 controller; John is sitting in front of an Emulator 3 with a TX816 to the right of the old Mac)]
I became Director of the CalArts Studios in 1986, and I quickly set about to change the nature of the studios. While we kept one Buchla 200 system up and running, the other studios were outfitted with Yamaha synths such as the TX816 and TX802, and the KX88 keyboard controller. We also bought an Emulator III for doing sampling. I have rarely used concrete material in my works, but I used both the 816 and Emulator III in Dance from the Outside (1989). The Buchla 300 and 400 units were still available for use, but few people wanted to use them. They were removed from the studios in the later 1980s. With the advent of digital recording and programming software for the Mac, people eventually moved away from using external hardware. And having everything in the computer certainly made things easier as compared to having to worry about MIDI and SCSI connections, as well as storing data on a variety of different media. In 1990, when David Rosenboom became Dean of the CalArts School of Music, he took over directing the studios. By then, most of the Buchla equipment had been removed, and it would be all gone within a few years, having been sold or thrown away. I continued to teach the introductory tech courses until 2000, when I asked to be relieved of these classes; I felt that I had taught this sort of class enough after thirty years, and I wanted to explore teaching other things. I had already stopped using the studios for my own work by this time, composing only in my home studio, with the exception of some time spent working with the Waveframe workstation at UC Santa Barbara.
It’s obvious that I made a big switch from using analog equipment without any computer control to working with digital hardware controlled by computer. Eventually, I got rid of external hardware altogether, but I used the TX816 in my work through 2007. Since then, I have used only the computer with a variety of software. But by this point I had created hundreds of timbral designs with the 816, and I transferred all of these via sysex dumps to Native Instruments FM8. So I still have patches from twenty years ago that I sometimes use as jumping off points for creating new timbres, and everything I learned from working with the 816 is still of great value to me. While everyone was getting rid of analog gear in the late 80s and throughout the 90s, a sort of renaissance with analog equipment began in the late 90s and early 2000s. The use of computers to record and control patches in modern analog equipment has certainly made analog synths easier to use, and the immediacy of physical interfacing holds a great deal of import for many people. But I have no interest in going back to using analog equipment. For me, the control that I have with contemporary computer software is something I don’t think I could have with analog equipment. I appreciate what those who use analog synths see in them, and also how they relate to them. I’ve had several students in the last decade who own, build, and compose with analog equipment. A few even want to use analog tape recorders, something that I find puzzling, but then many people are now buying vinyl again. I think there’s a sort of Hegelian dialectic at work here, and I don’t know how far the pendulum will swing before moving in the other direction.
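Those sysex transfers still work decades later partly because the format is so simple. As a small editorial illustration, assuming the commonly documented Yamaha convention that the data bytes of a bulk dump plus a trailing checksum byte must sum to zero modulo 128, a dump's integrity can be checked like this (a sketch, not a full DX7 parser):

```python
def yamaha_checksum(data):
    """Two's-complement 7-bit checksum used in Yamaha bulk dumps: chosen so
    that (sum of data bytes + checksum) is congruent to 0 mod 128."""
    return (128 - (sum(data) & 0x7F)) & 0x7F

def verify_dump(data, checksum):
    """True if the data bytes and checksum sum to zero modulo 128."""
    return (sum(data) + checksum) & 0x7F == 0
```

Because only 7 bits are checked, this catches garbled transfers rather than guaranteeing integrity, but for short cable runs it was (and is) good enough.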
I’ve known several pioneers of electro-acoustic music, both those specializing in electronic and concrete compositions, as well as some mixing both types of source material together. One thing that I’ve noticed is that some, perhaps many of them, can’t seem to move forward when a new technology is introduced. If we go back to 1948 with Pierre Schaeffer’s work, we see that he perfected his early techniques using phonograph discs. In the early 1950s, when the GRM facilities were switching to tape recorders, Schaeffer was opposed to this change of technology. Nevertheless, he went along with it, but he essentially quit composing in 1962, even though he lived for thirty-three more years. Some of the pioneers I’ve known who perfected their compositional practice in the classical studio environment found it difficult to learn how to compose with early analog systems, and many were unable to move into the digital realm, either with digital hardware or with computers. I started composing electro-acoustic music in 1969 with the Buchla 100, and I never was very good using classical studio techniques, especially cutting and splicing, although I did learn how to make good tape recordings and mix down to as much as fourth generation masters that were very good. But from 1969 through the early 2000s, there were a lot of changes in the technology behind electro-acoustic music, and a lot of things that I had to learn about and then discard as they became obsolete or impractical. Behind all of this, however, is theory: synthesis theory, the physics of acoustics and psychoacoustics, and the theory of how we perceive linear kinetic processes – these things remain constant and are a foundation of my work. So I’ve been able to keep up with what I needed to in order to stay current. But, at 70, I’m not eager to jump into the next big thing in electro-acoustic music. Fortunately, for me, things seem to have plateaued some time ago, and the only movements I’m seeing are retro ones. 
I think I’m probably set, technically speaking, for the rest of my life. But, who knows for certain?
Galaxy Of Terror 1981 [some NSFW scenes]
[Yes, this is the full film. It was scored completely with a Buchla 200. Note: the New World logo intro and the ray-gun sound effect over the title were not done by Barry. Via Barry: "I don't know who did the logo, but the ray-gun effect was done by the sound effects guy on the film. Everything else is me using the Buchla 200. I did use a sitar (played by the late Amiya Dasgupta) as the background music beginning around 2:45, and I incorporated a soprano (Maurita Phillips-Thornburgh, multitracked) starting at around 24:00 and at the beginning of the end titles."]
I’ll speak of my music in more detail in response to a later question. But one more thing I probably should mention in this reply is the score I composed for the movie Galaxy of Terror in 1981. While I had composed for several short films before this, including experimental animation, a TV documentary, and live action shorts, this was my first and last commercial movie score. While the director was Bruce D. Clark (who never directed after this), the film was very much a Roger Corman enterprise, and he heavily controlled the end result. I found Corman a very congenial man to work with, but not so Clark and Marc Siegler, who had written the script together. Over a three-week period in the summer of 1981, I had to compose about an hour’s worth of music. This isn’t unusual for a feature film, but working on the Buchla 200, without a traditional keyboard, made the experience difficult and exhausting. I used a 35mm Moviola to play back the rushes I would get, sometimes with a bluescreen background because the postproduction was going on at the same time. I used the same hole punch method for syncing that I had used in the past, recording most of the original tracks on a 1” Ampex 8-track deck. But sometimes, this wasn’t a sufficient number of tracks for what I was recording, and so stereo and 4-track ½” decks were also employed; everything was mixed down to ¼” tape for delivery to the studio. I got little sleep during this period, and had to work every day, going back and forth between the studio at CalArts and Corman’s studio in Venice, which is a long drive. After the composition was done, I had to work closely with the editors for another week or so to make sure the music was appropriately synced with the film. By that time, I was pretty exhausted, because I was also engaged in writing Introduction to Electro-Acoustic Music for Prentice-Hall at the same time, and facing a number of difficult situations in my personal life. 
It’s not a period of my life that I remember fondly. At the end of all this was a two-day mixdown of all of the sound tracks (dialog, music, and sound effects) on a Hollywood soundstage. Many film composers avoid attending these mixdown sessions, as they can get very heated in arguments from the various parties involved over what should be the relative volume of dialog, music, and sound effects (now referred to as sound design). That, indeed, was the situation with the final mix of Galaxy of Terror. To make matters worse, the sound effects designer never finished the last two reels. So, during the final mix process, they took short snippets of what I had done, recorded this material on 35mm mag stock loops, and used this as sound effects at various points. At the end of the two days of people haggling and fighting over the mix, I had had enough. This experience, combined with the facts that I simply couldn’t afford to build my own home studio at the time, and also the idea that I thought it was better to keep my academic career as opposed to taking a chance on going into film scoring, made me decide to abandon dealing with commercial film scoring.
Galaxy of Terror is not exactly what I’d call a great film, but it’s achieved a cult status among aficionados of sci-fi/horror films. As for the film’s music, it’s become both celebrated and reviled, depending upon whose opinion you’re getting. As with everything else, personal points of view are subjective and can vary widely. I think for some people, the music for Galaxy of Terror was too abstract, even weird, in a way, perhaps, that the music for Forbidden Planet wasn’t. But I’m not trying to compare the music of the two films as they occupy very different historical periods, aesthetic perspectives, and technological backgrounds. I don’t think any other electronic music score could compete with the historical import of the music for Forbidden Planet. I also don’t know of any film other than Galaxy of Terror that was scored completely with a Buchla 200, though. (I did use the sounds of a soprano and a sitar in the music, but only in a few places.) Interestingly, the music for the film keeps coming up in my life. When Galaxy of Terror was released on Blu-ray in 2010, it caused quite a stir. Included on this disc is an excellent documentary, Tales from the Lumber Yard: The Making of Galaxy of Terror. I also did an interview about the score in the July/August 2011 issue of Fangoria magazine.
6. How did technology over the years impact your music and creative process? Can you walk us through your albums and what the general process was like for each?
Art and technology have always been related in a circular way. This is certainly true of music, especially electro-acoustic music. You can only produce what the technology you’re using allows, and this, in turn, influences what you compose. Limitations always exist, for one reason or another. Working within these restrictions, whether self-imposed and/or system-imposed, is part of the compositional process. Extending your technical capabilities is an important part of developing your compositional skills, but technical knowledge, alone, isn’t sufficient to make a good composition. What we call “talent” in a given field is also necessary, and this seems to be a combination of innate ability and conscious experience.
The Czech critic Z. K. Slabý has stated that my music can be heard as a combination of Stockhausen and (Pierre) Henry with Mahler and Bruckner. While these may not be the exact composers I might select, the idea that my work combines the past with the present is quite accurate. I have always considered what I compose to be taking historical musical traditions and propelling them into the contemporary world using electronics. All of my work is concerned with form, and much of it is programmatic. In my own music, I have rejected much of what I regard as the “translational” music techniques and philosophies of the 20th and 21st centuries (most of which, it seems to me, lead to dead ends), creating instead works that are “relational” in their use of musical information. In this way I hope to be able to communicate ideas through the medium of music to a select audience, communication, for me, being the main purpose of all art.
In exploring my compositions, I think it’s best to take a few works and deal with them one at a time. The first one I’ll consider is Trinity, composed in 1976 using one of the CalArts Buchla 200 systems. Beginning with Trinity, all of my pieces done on the Buchla 200 used the same basic patch:
This patch grew out of my interest in both what I called at the time “timbral transformations” (what would now be referred to as “morphing”), and time-variant timbral structures. These are, I think, special possible qualities of electro-acoustic music not easily accomplished in the acoustic domain. I began to explore these ideas in a multi-movement work named Bestiary (1972-73), but I didn’t develop the basic patch I’m describing until working on Trinity. Using five sine-sawtooth oscillators (Model 258), and two control voltage processors (Model 257), I would take the output of each processor into one set of the two control voltage inputs of each of the oscillators. This created two separate sets of control voltages for the oscillators as a group. The oscillators were tuned so that at 0 volts, all of them would be at a unison. One set of control voltage inputs was scaled so that when a signal of 15 volts, the maximum, was applied, the result would be a five-octave spread. The second set of control voltages was tuned to allow the oscillators to track exactly, over the audible range, in whatever intervals had been created by the first set of control voltages. Thus I could simultaneously control the frequency of the five oscillators in both a contrary and parallel fashion, which gave me a great range of frequency combinations to use as partials in a spectrum, as well as the ability to change this spectrum in real-time. In addition, since the waveforms of the oscillators could be changed in a linear fashion from sine to sawtooth, and the amplitude of any signals used as modulators could be gated (thus allowing for a sliding modulation index), the possibilities were enormous. With the addition of filters, I could, at once, make use of additive, subtractive, amplitude, and frequency modulation processes.
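The contrary/parallel control scheme described above can be modeled numerically. In this editorial sketch, the per-oscillator scaling (an even fan-out so the top oscillator reaches five octaves at full voltage) is an assumption for illustration; the description only specifies the 0-volt unison, the five-octave spread at 15 volts, and exact parallel tracking from the second set of control inputs:

```python
def osc_frequencies(base_hz, v_spread, v_track, n_osc=5,
                    spread_octaves=5.0, track_octaves=5.0):
    """Model of the two-CV oscillator bank: at 0 V on both inputs all
    oscillators sit in unison on base_hz.  v_spread (0-15 V) fans them
    apart, reaching a five-octave spread at full scale (the even per-
    oscillator fan-out is assumed); v_track (0-15 V) transposes the whole
    set in parallel, preserving whatever intervals v_spread has set up."""
    freqs = []
    for k in range(n_osc):
        spread = (v_spread / 15.0) * spread_octaves * (k / (n_osc - 1))
        track = (v_track / 15.0) * track_octaves
        freqs.append(base_hz * 2.0 ** (spread + track))
    return freqs
```

Changing the two voltages independently gives contrary motion (the spectrum widens or narrows) and parallel motion (the whole spectrum transposes) at once, which is the source of the patch's range of spectra.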
As I mentioned previously, all of my compositions are concerned with form, and Trinity is one in which structure is paramount. Trinity is composed in rondo-variations form wherein the theme alternates with variations of itself. In addition, the original form of the theme, which serves here as the refrain, is slightly altered in each repetition. Trinity’s theme is not the traditional set of pitches, nor is it a particular group of elements from any other dimension of music. Rather it is a musical gestalt that may be graphically represented as
As realized in the first statement of the theme, this idea becomes a continuous expansion of sound, particularly with respect to pitch, timbre, and dynamics. In this initial presentation, you can hear one of the possible results of the patch described above, where both sets of control voltages are changed simultaneously. To consider this theme is also to postulate its permutations. A number of these were selected for Trinity, resulting in the final overall form of the work, which may be represented as
Since this notion of theme represents such a general but fundamental musical concept, it lends itself to countless possibilities of variation and combination, each of which, in turn, can be represented in many ways by the various dimensions of music. The second variation, for example, is realized initially through changes in timbre and rhythm until, through amplitude modulation created by the rapid change of channel assignment, the two dimensions become part of a perceptually larger continuum. At this point, the focus shifts to changes of pitch and timbre as an increasing pitch range creates an expanding additive timbral structure. This is, of course, only one of the many possible ways this variation could have been realized.
Trinity, like most of my other works, is greatly concerned with the establishment of new and interesting electronically generated timbres, as well as with their transformation. Timbral transformations may occur in a linear fashion as in the original theme and at the close of the second variation, or as changes of discrete steps along a timbral continuum, a continuum that may be unique to a particular timbre, as in the first variation. Used in this way, timbre becomes not only thematic, but definitional and functional as well. This is, I believe, a characteristic musical possibility unique to electro-acoustic music. Trinity is the earliest work of mine that fully exhibits my concerns with both time-variant timbres and timbral transformations, concepts that continue to be important in much of my music.
There is a particular frequency heard throughout Trinity that, because of the way it is used, takes on tonic qualities. This frequency is 313 Hz, one that does not represent any traditional pitch since it is not within the accepted tuning of the tempered scale. This frequency was selected for just such a reason, as well as for its obvious relation to the structure of the work.
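A quick calculation (assuming A4 = 440 Hz equal temperament) confirms that 313 Hz indeed falls between tempered pitches, roughly 10 cents sharp of D#4 (about 311.1 Hz). This editorial sketch finds the nearest 12-TET pitch to any frequency:

```python
import math

def nearest_tempered(freq_hz, a4=440.0):
    """Return the nearest 12-TET pitch name to freq_hz and the deviation
    in cents (positive = sharp of that pitch)."""
    semis = 12 * math.log2(freq_hz / a4)   # semitones above/below A4
    nearest = round(semis)
    cents_off = (semis - nearest) * 100.0
    names = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#']
    name = names[nearest % 12]
    octave = 4 + (nearest + 9) // 12       # octave number changes at C
    return f"{name}{octave}", cents_off

# nearest_tempered(313.0) places 313 Hz about 10 cents above D#4.
```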
B. Lost Atlantis (1977)
(cover of the Laurel Record LP of Lost Atlantis released in 1986 - click on the image for a larger version)
Lost Atlantis is one of my programmatic works, and was released on the same CD as Trinity (Innova 629). I’ve done several of these large multi-movement works dealing with extra-musical narratives. In this case, the work is based on the account of Atlantis in Plato’s Critias. Plato also wrote about Atlantis in the Timaeus dialogue, and these are the earliest written accounts of Atlantis that we have.
(graphic by Peter Grenader - click on the image for a larger version)
I used a translation of Plato’s text, adding a little extra myself when I wanted a basis for a musical structure I had created, fabricating an idea that didn’t exist in the original, but most of the narration I used for Lost Atlantis, about 98%, comes directly from the Critias. The spoken narration for the work (not included on the commercial releases of Lost Atlantis) was recorded by Nicholas England, then Dean of the CalArts School of Music. Nick had a marvelous speaking voice, one that carried a certain clarity and gravity that I thought went well with the story.
Nicholas England (1921-2003)
The narration can be found here on my website. The six sections of narration are intended to precede each of the six movements of Lost Atlantis:
1. Introduction: The Pillars of Hercules - The Great Harbor
2. The Gardens of Cleito
3. The Temple of Poseidon - The Dance of the Gods
4. The Gathering of the Kings - The Hunting of the Bulls
5. The Mystery Rites of Purification
6. The Destruction of Atlantis - Epilogue: and Atlantis shall rise.
Many people have downloaded the narrations and arranged them in sequence with the musical movements in iTunes or similar apps.
The overall structure of Lost Atlantis was determined before I began composing the work. Movements 1, 3, 4, and 6 are compound sections, each dealing with two related ideas, while movements 2 and 5 are associated with only one thing. The structures of the individual sections were worked out as I composed the piece, using a variety of musical forms based upon the narrative. Initially, these structures were created using graphic representations of what I wanted to compose, and these were later translated into more detailed precompositional information. I’ve never been very good about saving the notes I’ve made in composing any given work, but some of the sketches for sections of Lost Atlantis do survive in the archive that the CalArts Library is creating on me and my work. However, I have little or no idea at this point exactly what these represented to me back in 1977.
Lost Atlantis was composed using a large Buchla 200 system, along with the Fortune modules, and I used the same basic patch as described in my discussion of Trinity for all of the movements. The work took me a long time to compose, and I essentially spent most of my waking hours in CalArts Studio B303 from May through early September in 1977. As there was no summer school at that time, I had the studio to myself and could leave my patches in place for as long as I liked. Being a night person, I would start working around 6:00 pm and continue through until around 9:00 am the next morning when the staff personnel would be starting their day. I loved the isolation and the ability to concentrate on composing. I know that I have a somewhat obsessive personality when it comes to work, and I’ll often concentrate on a piece until it’s finished, the result of which is often burnout, resulting in a long hiatus from composing afterwards. This was the case with Lost Atlantis, and it took me a few years before I did anything other than automated patches with the Buchla 200.
It would be too tedious to relate all of the compositional and technical considerations of such a long work as Lost Atlantis, but I’ll mention two sections that I’m rather proud of. The first is the second half of the third movement, The Dance of the Gods section. Plato never mentions this in his discussion of the Temple of Poseidon located in the center of the island, but I wanted to have a “dance” movement at this point, and so I invented this idea, one that I thought didn’t seem too implausible. Like almost all of my analog pieces, Lost Atlantis was composed as a quadraphonic work; both the 1980s Laurel LP release and the more recent Innova CD release are stereo mixes of the original. Perhaps, some day, I’ll release the original 4-track files online.
The Buchla Model 204 allowed for the independent movement and/or placement of up to four signals at a time. Each channel location had a specific X/Y voltage address. As I remember, going clockwise from the left front, they were 0/0, 0/15, 15/15, and 15/0. Voltages between 0 and 15 on the X and Y-axes would place the sound at a point between two of the four channels. Voltage inputs from any discrete voltage source (touch “keyboard,” sequencer, random voltage) could be used to control the location of a given signal with the model 204. Continuous interpolated voltages from envelope generators could be used to move the signal between channels any way you wished. I had already figured out how to use the rows of a sequencer in a series, and this combined with the Fortune control voltage matrix gate gave me a lot of ways to program the model 204. The only problem was that since I had to record all four channels at once, I could only “play” four “instruments” at a time, and there were more than that in the Dance of the Gods section. Fortunately, I had access to three 4-channel Ampex tape recorders, and so I could mix two 4-channel takes down into one quad recording. The final Dance of the Gods section required more than one mixdown, and so the master is actually a third generation recording in order to have all of the voices present. The effect I was after, however, was successfully accomplished, and the smooth jumping movement of the individual timbres is quite striking when heard in the original quad version. Much of this is lost in the stereo mix. The third movement of Lost Atlantis is here on YouTube.
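The X/Y addressing can be illustrated with a simple bilinear pan law. This is an editorial sketch of the idea, not the actual 204 circuit, using the corner voltages as Barry recalls them (clockwise from left front: 0/0, 0/15, 15/15, 15/0 on 0-15 V axes):

```python
def quad_gains(x, y):
    """Bilinear pan gains for an X/Y-addressed quad locator (a sketch, not
    the Buchla 204's circuit).  Full voltage on one axis pair puts the
    signal entirely in one corner; intermediate voltages place it between
    channels, and the four gains always sum to 1."""
    u, v = x / 15.0, y / 15.0
    return {
        'left_front':  (1 - u) * (1 - v),   # corner (0, 0)
        'right_front': (1 - u) * v,         # corner (0, 15)
        'right_rear':  u * v,               # corner (15, 15)
        'left_rear':   u * (1 - v),         # corner (15, 0)
    }
```

Driving x and y from envelope generators sweeps the signal smoothly between corners; driving them from a sequencer or touch keyboard produces the discrete "jumping" placements described above.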
Second, I’ve spoken before about my interest in timbral transformations or morphing. The last section of Lost Atlantis, Epilogue: “…and Atlantis shall rise.” is a good example of this. Although this part is multitracked in order to create the overlapping and movement effects (the latter really only effective in the quad version), each track was recorded in real-time, so that the timbral transformation from a more elemental design to the white noise material representing the raging waves of the ocean and back to another but different simpler timbre was accomplished in single takes. This required a lot of programming, but I was very pleased with the result. The idea for this final section of the work comes not from Plato, but from the psychic Edgar Cayce, who predicted that Atlantis would rise off the east coast of North America. The final movement of Lost Atlantis is here on YouTube.
C. Triptych (1987, revised 2000)
Triptych was composed in 1987 and revised in 2000 for the Innova 575 release on the EAM album. The revision consisted of a slight reduction in the length of the work accomplished by changing the fadeouts of the ends of the major sections of the piece.
Triptych was composed using a Mac-controlled Yamaha TX816, using MOTU's Performer, and Opcode's Vision and Galaxy. Since there was no digital recording software for the Mac Plus at the time, Triptych was designed to be performed in real-time. It was awarded first prize in a competition for real-time works created with the TX816 sponsored by Yamaha and URBAN 15, a new music organization in San Antonio, Texas.
Triptych consists of three continuous movements, the first two being in rondo-variations form, in which the refrain is alternated with variations on the refrain itself. The third movement is a non-traditional form: a period structure (ABAC) is repeated multiple times with timbral changes in each repetition. The resulting form of Triptych may be represented by this graphic:
Besides formal considerations, Triptych also deals with an expression of my own theoretical concerns regarding the implementation of primary musical dimensional information. Historically, the two primary musical dimensions have been those of pitch and rhythm. Most music uses pitch organization as the primary dimension, while much less music uses rhythm. Rhythm is, I think, rather misunderstood by many people. It consists of four different possible stresses in time: psychological (metric), agogic (durational), dynamic (amplitude-related), and registral (frequency-related). This is not the place to go into a discussion of rhythm, but I will state that all of these types of stresses are always present in any series of sound events, including speech. The idea of a primary (as opposed to a secondary) dimension of sound is that it is the perceived dimension used for the meaningful organization of musical ideas. One's concept of what music is, is formed in the first few years of life; this can be expanded in later stages of life, but never replaced. So, initially, what defines music to someone is created by what they hear as a child in whatever time and place they grow up. This reality dispels any notion that music is a universal language. Historically, timbre has never been a primary dimension. You've commonly experienced people singing melodies or tapping out metric patterns, but you've never heard anyone recalling a work of music by somehow reproducing a series of timbres. Schoenberg experimented with trying to use timbre as an organizational principle, which he called Klangfarbenmelodie, as described in his Harmonielehre, but it's an idea which never proved perceptually effective. Although many composers would disagree with me, I don't think that timbre can be a primary dimension in acoustic music. 
The reason for this is that there are no historical precedents that would lead someone to hear music in this way, and also that, without the aid of electro-acoustic modification, acoustic timbres have very small ranges of possible variation. So, for me, in acoustic music, timbre is a secondary dimension.
Timbre, itself, consists of two perceptual areas: spectra and what I call the event envelope. The spectrum of a timbre consists of the partials being sounded at any given point. Each of these partials has its own envelope, a description of the amplitude characteristics of the sound in time. The combination of all of the envelopes of all of the partials in a given timbre is what I call the event envelope. These two domains of sound data give each sound event its characteristic timbre.
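As a rough illustration of this idea (my own sketch, not Schrader's formulation), the per-partial envelopes can be combined on a common time grid; summing them is one crude way to collapse them into a single overall curve, though the event envelope proper is the whole set of partial envelopes taken together:

```python
def event_envelope(partial_envelopes):
    """Collapse per-partial amplitude envelopes (lists sampled on a
    common time grid) into one overall curve by summing them.
    Summation is only one crude reduction; the full "event envelope"
    is really all of the partial envelopes taken together."""
    return [sum(samples) for samples in zip(*partial_envelopes)]

# Hypothetical data: a fast-decaying fundamental and a slower overtone
fundamental = [1.0, 0.6, 0.3, 0.1, 0.0]
overtone    = [0.4, 0.4, 0.3, 0.2, 0.1]
overall = event_envelope([fundamental, overtone])
```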
With electro-acoustic music (in this case specifically electronic music) I think it may be possible to lead the listener into following timbre as a primary dimension. In experimenting with creating this possibility, my thinking is that timbre has to be organized in a way similar to how a series of pitches or a succession of rhythmic stresses has been historically organized in works of music. But since there is no precedent for following timbral progressions, I've sought to guide the listener into this mode of perception by creating timbral morphs that evolve in a linear fashion from one timbral point to another. In the 1970s, I began calling these timbral transformations, and I've already discussed an example of this with regard to the “…and Atlantis shall rise” section of Lost Atlantis. In Triptych, the third movement is organized so that the timbre evolves from a fairly simple structure in section A1 to that of noise in A13. In keeping the pitch and rhythmic material of the period structure repetitious, I hoped to focus the listener's attention on what was changing, and that is timbre. In Gramophone's review of the EAM album, Ken Smith writes, “[Schrader] has managed to implement timbre fully as a structural tool - a point that many composers have discussed without true success.” So I think I'm somewhat justified in thinking I've at least partially accomplished what I set out to do. I don't think timbre can easily be made to function as a primary dimension, but I continue to experiment with the possibilities.
The first two movements of Triptych focus, respectively, on pitch and rhythm as the primary dimensions. To make pitch the primary dimension is easy. To focus the listener on rhythm, I used only two pitches in the second movement.
While I've mentioned the evolution of timbre in the third section of Triptych as moving from a relatively simple one into noise, the design of each of the timbres used in this movement is itself a time-variant structure. Using Chowning's formula for calculating FM spectra (sideband frequencies at c ± k·m for k = 0, 1, 2, … n, where c is the carrier frequency and m the modulator frequency), and using Yamaha algorithm 16, I created the first of what I called the Clutch patches. Here is a sketch I made in composing Triptych of my calculations used in making the Clutch 1 patch:
While this shows the spectra of the different FM chains in this algorithm, it doesn't describe the envelopes of the operators. To do this, it's necessary to use Bessel functions to calculate amplitudes. After that, I constructed the envelopes of the 6 operators so that they would create a time-variant structure. Here is a graphic representation of the envelopes of the 6 operators. It's in the FM8 format as I've done sysex dumps of my Yamaha patches into FM8. This graphic only shows the beginning of the envelopes, and operator C (in FM 8 parlance) hasn't yet begun in the timeframe shown. In addition, the patch was velocity-sensitive, so that the number and amplitude of sidebands were controllable using data in the event lists in Performer.
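The arithmetic behind this kind of calculation can be sketched in a few lines. This is an illustrative reconstruction of Chowning's simple-FM math, not Schrader's actual worksheet; the carrier, modulator, and modulation-index values below are arbitrary examples:

```python
from math import factorial

def bessel_j(k, x, terms=25):
    """Bessel function of the first kind J_k(x), via its power series."""
    return sum((-1)**j / (factorial(j) * factorial(j + k)) * (x / 2)**(2*j + k)
               for j in range(terms))

def fm_spectrum(c, m, index, n_pairs):
    """Simple-FM spectrum per Chowning: sideband pair k lies at c ± k*m,
    with relative amplitude J_k(index). Negative frequencies fold back
    onto the positive axis (phase inversion on fold-back is ignored in
    this sketch)."""
    spectrum = {}
    for k in range(n_pairs + 1):
        amp = bessel_j(k, index)
        for f in (c + k * m, c - k * m):
            spectrum[abs(f)] = spectrum.get(abs(f), 0.0) + amp
            if k == 0:          # the carrier appears only once
                break
    return dict(sorted(spectrum.items()))

# Example: 200 Hz carrier, 280 Hz modulator, index 2, three sideband pairs
for freq, amp in fm_spectrum(200.0, 280.0, 2.0, 3).items():
    print(f"{freq:7.1f} Hz   {amp:+.3f}")
```

Raising the modulation index pushes energy from the carrier into higher-order sidebands, which is how a velocity-sensitive index can control spectral brightness, as described above for the Clutch patch.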
I made 16 different versions of the Clutch patch, and these were used in the 13 sections of the repeated period structure in the third movement, adding new variants in each repetition so that the overall timbral movement ends in noise spectra. The point of all of this in considering timbral transformations and time-variant timbral structures is that the concept of time variance or morphing exists on both the microstructural level (the individual Clutch patches) and the overall structure of the third movement, moving from simpler to more complex timbres. In this way, I hoped to focus the listener's attention on timbre as the primary dimension.
As I said before, Triptych is, in several ways, a demonstration of some of my theoretical concerns with composition. In one way or another, this is true of most of the music I've composed. For some people, the technical details I've mentioned in this discussion of the piece (and I've only covered a little of that) are somewhat off-putting, and may lead them to think that the work is only a treatise on academic music concerns. But nothing could be further from the truth in my intentions. I would hope that people hear Triptych as a work of music that is interesting without any considerations of the theory used in creating it. In fact, I want the technology behind all of my music to be invisible; I don't create music that is merely a demonstration of a technical process. But I also don't think that a composer can accidentally stumble into creating a good piece of music.
D. Duke’s Tune (2002)
Duke’s Tune is unique among my works, and probably among all other musical works, in that it’s based on a melody created by a potbellied pig, Duke. Duke lived at the Lil Orphan Hammies rescue and shelter run by Susan Parkinson in Solvang, California, a home for abused and abandoned potbellied pigs. I’ve been a supporter of Lil Orphan Hammies almost since it first opened in 1992, and I regularly visit the shelter which spreads over several acres in the Santa Ynez mountains. Duke was born at the sanctuary in 2000, and it was quickly apparent that he was a precocious pig. Among his toys was a little xylophone, and Parkinson made a short video of him playing it in 2001.
I transcribed what Duke had played, and used it as the basis for Duke’s Tune:
Duke’s Tune is a sort of extreme journey into the concept of theme and variations, as everything in the piece comes from Duke’s original melody. The piece is also a study in musical structure, and, in certain sections, the use of melodic and rhythmic counterpoint. The overall structure and the form of the first two movements are as follows:
DUKE’S TUNE STRUCTURAL ANALYSIS
Introduction 0:00 - 0:53
Part I 0:53 - 5:20
Part II 5:20 - 9:38
Part III 9:38 - 13:31
Coda 13:31 - end (14:14)
IA 0:53 - 2:07 rondo (a b a' c a'')
a 0:53 - 1:12
b 1:12 - 1:28
a' 1:28 - 1:38
c 1:38 - 1:54
a'' 1:54 - 2:07
IB 2:07 - 3:41
IA' 3:41 - 5:20 rondo (a1 b a1' c a1'' [a1''A a1''B])
a1 3:41 - 3:58
b 3:58 - 4:11
a1' 4:11 - 4:28
c 4:28 - 4:45
a1'' 4:45 - 5:20
IIA 5:20 - 6:44
IIB 6:44 - 8:20
IIA' 8:20 - 9:38
i 8:20 - 8:58
ii 8:58 - 9:38
I used the Yamaha TX816 to create the sound material, and recorded and modified it in Digital Performer. Once I had access to Digital Performer (as opposed to the older Performer app), I could employ multitracking and generate more complex mixes. I also had a variety of plugins available, such as Cycling '74's Pluggo set.
The piece begins and ends with a slow presentation of Duke’s original melody using a synthetic xylophone-like timbre. This is in contrast to the very “bouncy” material found in Part I.
The two rondos that create the first and third sections in Part I are similar, but not identical. The variations here are all over the map. The middle section, IB, consists of two simultaneous four-voice contrapuntal groups, one melodic, the other rhythmic. The style of the music in IB references a sort of imaginary Middle Eastern dance music. While the IA and IA’ sections quickly jump among several, often drastically different, incarnations of the tune, the IB section remains consistent throughout.
Part II of Duke’s Tune is, like Part I, a ternary structure overall. Some people have extracted a little excerpt from this section, like this one, for a ring tone:
Part III is a processional. It develops slowly and becomes contrapuntal. The very beginning, with the solo percussive timbre, demonstrates how I sometimes construct timbres. As the piece progresses, variations of this percussive timbre are layered onto the original, so that the timbre increases in complexity. This is similar to what I did in the third section of Triptych, which I’ve already discussed in this interview. The processional develops into a large and stately melodic line, and a shortened version of the introduction reappears to serve as a coda and end the piece.
KNX 1070 AM, the CBS news radio station in Los Angeles, also did a piece on Duke’s Tune. It won a Golden Mike award for the reporter, Diane Thompson. KNX gave me permission to put this on my website.
Sadly, Duke is no longer with us; he passed in 2014. Duke’s Tune was released on the Beyond CD (Innova 640), which also contains the three-movement work Death. Many people find Death the most difficult work of mine to listen to as all of the movements are slow, and some of it is amorphous and non-linear. The inclusion of Duke’s Tune on this CD provides some contrast. You can hear Duke’s Tune on YouTube. [embed directly below]
E. Monkey King (2005-2007)
[start the video above and read below for Barry's commentary.]
I see the book as being in two main parts: The first section describes Sun Wukong as an energetic, ambitious, and audacious creature, constantly challenging the powers of earth and heaven, and always winning. Eventually, he gives himself the title of Great Sage Equal of Heaven. But he is finally challenged by Buddha, who punishes Sun Wukong by placing him under a mountain for five-hundred years. In the second part of the book, Sun Wukong is released and is charged, along with two other characters (Pigsy and Sandy), to accompany the monk Xuanzang to India to collect the sacred sutras and bring them back to China. At the end of the book, Sun Wukong is rewarded by being transformed into the Buddha Victorious in Strife. Thus, I think, one can see the first part of the book as a metaphor for vainglorious and reckless youth, and the second part as coming to maturity through responsibility and overcoming tribulation, as well as accepting authority.
I decided to select certain scenes from Journey to the West and compose Monkey King around them. The work is in four two-section movements, a type of structure I had first used in Lost Atlantis back in 1977. So each movement depicts two related programmatic events from the book. Here are some notes on the four movements.
Part I – The Land of Ao-Lai – The Birth of Monkey
Ao-Lai is the fictional name for the land containing the Monkey King’s birthplace, The Mountain of Flowers and Fruit. This mountain seems to correspond to Mount Huaguo located in Yuntai, in the Jiangsu Province of China; because of its importance in the book, the area is a popular tourist spot.
One of the first decisions I made regarding all of Monkey King was that I would use only the pentatonic scale, as this is used in traditional Chinese music. However, using this scale as historically applied would have been far too limiting, so I decided to allow the use of transpositions of the scale, as well as the use of simultaneous multiple transpositions. In this way, I attempted to create something that would have some qualities of historical Chinese music without actually imitating it. Monkey King has been criticized by some for not sounding sufficiently Chinese, but that was never my intention. In creating all of my programmatic music, I’ve often quoted the line from the poet Robert Lowell to state one of my main purposes: “I want to make something imagined, not recalled.” So Monkey King is not Chinese music in any concrete sense, but rather Chinese music filtered through my imagination and compositional processes. I do think that the unusual use of the pentatonic scale throughout the work gives a sort of dimensional unity to the whole.
The first movement of Monkey King begins with an eight-beat rhythmic theme. Eight is a very auspicious number in Chinese numerology, indicating good fortune. But if you listen to this phrase carefully, you can hear that it is made up of two four-beat units. Four is considered unlucky because it is nearly homophonous to the Mandarin word for "death." I created this rhythmic motive in order to indicate a duality: Sun Wukong is usually very lucky, but he often tempts fate, and his challenge from Buddha ends badly in the short term. What follows after this is sort of “fairy-tale” music depicting the beauty and serenity of Ao-lai, and, after that the hatching of the egg and the birth of the Monkey King. The middle of the first movement represents Sun Wukong’s first steps into the world, and his first speech. In order to create Sun Wukong’s “voice,” I used waveshaping synthesis to create the pitched chattering sounds used for this theme. There are three statements of the theme, and each time the fundamental becomes clearer. After The Monkey King bows to the four directions, light beaming from his eyes, the music returns to the Ao-lai material in a different “orchestration.”
Part II – Monkey’s Underwater Journey – The Staff of the Milky Way
The “underwater” music uses, among other things, convolution synthesis techniques. It also uses a compositional technique I developed in the early 1970s with the Buchla 200 of subdividing a metric unit in multiple ways to create unpredictable yet comprehensible rhythmic phrases. But the most difficult aspect of the underwater sections of this movement was the timbres I created: they were very unstable due to the use of multiple, simultaneous synthesis techniques, and the design could very easily disintegrate into noise and distortion. While I was able to create the timbre I wanted, controlling it was another matter. As a result, it took me around six months to compose this movement, and it was one of my most frustrating compositional experiences; until it was actually finished, I wasn’t sure that I could pull it off. This section also contains a favorite device of mine, which I’ve used several times: I call it “imbedded melodies,” themes which arise from the combinations of events that may not initially be perceived as melodies.
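The subdivision technique mentioned above can be illustrated abstractly: enumerate the ordered ways a metric unit can be filled with a small set of durations, then choose among them to vary a phrase. This is a generic sketch of the concept, not Schrader's Buchla-era procedure; the beat total and allowed durations are arbitrary:

```python
def subdivisions(total, units=(1, 2, 3, 4)):
    """All ordered ways to fill `total` beats using the given unit
    durations (integer compositions restricted to `units`)."""
    if total == 0:
        return [[]]
    patterns = []
    for u in units:
        if u <= total:
            patterns.extend([u] + rest for rest in subdivisions(total - u, units))
    return patterns

# Every way to subdivide a 4-beat metric unit into 1- to 4-beat durations
for pattern in subdivisions(4):
    print(pattern)
```

Cycling through different subdivisions of the same metric unit keeps the underlying pulse comprehensible while making the surface rhythm unpredictable.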
Monkey is traveling underwater to visit the palace of the Dragon King of the Eastern Sea. In the Dragon King’s treasury, Sun Wukong finds the Staff of the Milky Way, which the gods used to pound the Milky Way flat in the creation of the heavens and the stars. The staff is 20 feet long and weighs almost 7 tons, and the Monkey King is the only mortal creature who can wield the iron staff that magically assumes any length he wishes. This becomes Monkey’s weapon of choice, and he’s usually seen carrying it in illustrations. I represented Sun Wukong’s retrieval of the Staff with two three-event phrases, followed by a sort of “Chinese celebration” music using various percussive timbres; this represents Monkey’s new powers associated with the Staff. After this, the underwater music returns as Sun Wukong makes his way back home.
Part III – Monkey’s Magic Dance – Jumping Buddha’s Palm
Sun Wukong eventually becomes immortal and the most powerful creature in heaven and earth, save for Buddha. Monkey rejoices in his supremacy with a dance. This section begins with swirling arpeggios of plucked-string-like timbres, to which drum-like percussive timbres are eventually added. In composing this section, I was influenced by a rather old documentary I had seen many years ago about a rural Chinese village and the ancient customs they had continued to observe into the 20th century. I was especially struck by a dance they performed that used only two pitches for the musical accompaniment, and so I decided to use only two pitches for the percussive material in this section.
The second part of this movement depicts the end of what I see as the first half of the book. Sun Wukong, having gone beyond the control of even the armies of heaven, is finally challenged by Buddha in order to end the Monkey’s rampages. Buddha proposes a wager: If Monkey can jump from Buddha’s palm and land anywhere beyond Buddha’s hand, Buddha will allow Sun Wukong to continue his activities and also claim the throne of Heaven’s Jade Emperor. But if Monkey fails, he will be punished. The Monkey King, assured of his great powers, accepts the challenge. He has flown around the world many times, and so has no fear of being able to move off of Buddha’s hand. As Sun Wukong leaps into the air, he is sure that he is slowly reaching high into the heavens and traveling very far, but when he descends, he realizes that he is still in Buddha’s palm. The fingers of Buddha’s right hand transform into the five elements of wu xing (metal, wood, earth, water, fire), becoming the five-peaked mountain Wu Xing Shan, which slowly descends upon Monkey, encasing him for the next 500 years.
In order to depict Buddha’s hand, I created five shimmering pitch “columns” using the pentatonic scale, and spread these across the stereo field, left to right. Monkey’s jump and flight is represented by a constant portamento of dozens of tracks, first ascending in a logarithmic curve, and then descending in a quicker linear fashion. Chords build under the ascending section to represent Monkey’s hubris, but they disappear in the descending passage; Sun Wukong once again finds himself in Buddha’s hand. As the movement ends, the descent of the mountain is represented by a noise-spectrum-timbre that seems to come ever closer due to the application of Doppler effects.
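The shape of that flight, a logarithmic ascent followed by a quicker linear descent, can be sketched numerically. This is my own illustration of the contour described; the frequencies and step counts are placeholder values, not taken from the piece:

```python
import math

def jump_curve(f_start, f_peak, up_steps, down_steps):
    """A pitch trajectory: logarithmic rise from f_start to f_peak,
    then a quicker linear fall back to f_start."""
    ascent = [f_start + (f_peak - f_start)
              * math.log1p(9 * t / up_steps) / math.log(10)
              for t in range(up_steps + 1)]
    descent = [f_peak - (f_peak - f_start) * t / down_steps
               for t in range(1, down_steps + 1)]
    return ascent + descent

# Arbitrary example: three octaves up over 8 steps, back down in only 4
curve = jump_curve(220.0, 1760.0, 8, 4)
```

The logarithmic rise climbs rapidly at first and then levels off, suggesting Monkey's sense of soaring ever higher, while the shorter linear descent drops him abruptly back where he started.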
Here’s a rough graphic analysis of this movement:
Part IV – Procession of the Immortals – Monkey Becomes a Buddha
The first section of the final movement is a processional, and, although more elaborate, it can be seen to be somewhat structurally related to the final section of Duke’s Tune. Initially, there is the use of bell and gong-like timbres, followed by a second layer of “strumming” percussion. In the middle of this part, a surprise injection of rhythmic polyphony created by the addition of short-enveloped-plucked timbres occurs, after which the march resumes. A sudden return of the eight-beat rhythmic phrase leads to the music of Sun Wukong’s apotheosis. This consists of a period structure repeated (almost) four times, being a melody of transformation based on the second motive used in the first movement. The work ends with a linear timbral transformation (morphing) into a sound mass, and closes with a huge percussive chord.
The technology used in composing Monkey King consists mainly of the Yamaha TX816 and several computer applications, particularly FM8. Monkey King was the last time I used the TX816 in my work, and I employed it essentially to create some of the timbres that were then further processed in various programs and assembled in Digital Performer. After I finished Monkey King, I transferred all of my TX816 designs to FM8 through SysEx dumps. This took several months, as there were thousands of timbres; I didn’t want to lose what I had created over more than two decades. I’ve mentioned a few of the techniques I used in Monkey King in describing the individual movements. To go into more detail, I think, would be of interest only to a few, and, in any case, Monkey King, like all of my works, is not about the technology I compose with. To me, technology is a means to an end, not an end in itself. Electro-acoustic music is a distinctly new field that, essentially, didn’t exist before the 20th century. I’ve chosen to work in this area because of what the technology affords me in terms of control and personal expression. But my musical goals have always been to try to take musical concepts of the past and develop them in a new way with the use of electronic means. In this way, I’ve hoped to create a continuation and extension of musical thought, not invent something radically new, which, I think, despite what many composers have said, is, perhaps, impossible. You can easily create something by contradiction. You can, for instance, substitute disorder for order. I see nothing remarkable about this. It’s far more difficult to extend past concepts of order in new ways that are meaningful and relate something of the personality of the composer. What I call “translational” procedures have, in my opinion, unfortunately been substituted for “relational” procedures far too often, resulting in works that are usually banal.
So while electro-acoustic music has been around for a long time, many practitioners continue to concentrate on the technology instead of the music. If you were to compose a work for the violin, it would be necessary to learn about the instrument and the techniques of playing it, but in talking about the piece itself, it would seem pointless to only discuss the construction of the instrument used to play it. I feel the same way about the technology of electro-acoustic music: it’s absolutely necessary to have it in order to compose the music, and it’s imperative that the composer understand how to use the technology in order to create the work, but the point of the activity should be the music itself and not the technology. In my opinion, works that are essentially demonstrations of technology are rather boring. I have tried in my teaching, writing, and professional activities to have people focus on the music rather than the technology, but I’ve failed to have much of an impact in this regard. Most of what is written about, discussed, and taught in the field of electro-acoustic music is the technology used to make it, not the music that is created.
[The following is a playlist of all of the videos above for those who want to hear Monkey King in its entirety.]