Broadway run for object-based approach to 3D audio

The Band’s Visit, a critically acclaimed new musical that celebrates the deeply human ways music and laughter connect us, opened in November to rave reviews at the Barrymore Theatre on Broadway. It is the story of the Alexandria Ceremonial Police Orchestra’s arrival in Israel for the opening of an Arab Cultural Centre, only to find out that they have boarded the wrong bus to the wrong town.

A big part of the musical’s success is celebrated sound designer Kai Harada’s innovative sound design, at the heart of which is Astro Spatial Audio (ASA)’s true object-based approach to immersive, three-dimensional audio.

“I have known (ASA director) Bjorn van Munster for a number of years, and although I had heard about the system, it was only last year that I saw and heard a demo of the system in California,” said Harada. “A few years ago, I had used a competitor’s system on a Broadway show that required precise localisation, and then last year I found myself designing The Band’s Visit, which would require the same precision, and I thought it would be a perfect chance to try the Astro Spatial Audio system.”

At the heart of the ASA solution is the conversion of audio signals into audio objects. ASA’s SARA II Premium Rendering Engine – a 3U road- and rack-ready processor offering up to 128 MADI or 128 Dante configurable network pathways at 48kHz/24-bit resolution – utilises extensive metadata attached to each audio object to calculate that object’s precise position within virtual 3D space, in real time and up to 40 times per second for each individual object, along with that object’s acoustic effect on the virtual space around it. The result for the engineer is a truly three-dimensional audio canvas on which to play.
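The article does not describe SARA II’s rendering mathematics, and ASA’s actual algorithm is proprietary. Purely as an illustration of the object-based idea – deriving per-loudspeaker feeds from an object’s position metadata, recalculated many times per second – here is a minimal sketch using simple distance-based gains and delays; the function, positions, and panning law are all assumptions, not ASA’s implementation.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def render_object(obj_pos, speakers):
    """Derive a per-loudspeaker gain and delay for one audio object
    from its position metadata. Inverse-distance panning only --
    an illustrative sketch, not SARA II's actual algorithm."""
    dists = [math.dist(obj_pos, s) for s in speakers]
    # inverse-distance gains, normalised so the loudest feed is unity
    inv = [1.0 / max(d, 0.1) for d in dists]
    peak = max(inv)
    gains = [g / peak for g in inv]
    # delay each feed relative to the nearest loudspeaker (seconds)
    nearest = min(dists)
    delays = [(d - nearest) / SPEED_OF_SOUND for d in dists]
    return gains, delays

# a 2D stage sketch: an object near the left speaker of a pair 8 m apart
gains, delays = render_object((1.0, 2.0), [(0.0, 0.0), (8.0, 0.0)])
```

In a real object-based renderer this calculation would run continuously (the article cites up to 40 updates per second per object), so moving an object in the graphical interface smoothly re-weights every loudspeaker feed.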

“In The Band’s Visit, several musicians play their instruments in a variety of locations on the stage, and it was incredibly important to me to preserve a transparent sound system design,” said Harada. “In my opinion, the more we attracted attention to the sound system, the less the audience would connect with the actors and the story on stage, so natural-sounding reinforcement was the goal.”

It quickly became apparent how easy the system was to set up: Harada’s associate, Josh Millican, drafted all the speaker positions in CAD, and when it came time to commission the system in SARA II, the measurements were verified and the values simply entered.

“ASA allowed us to precisely place the instrument source as an audio object within a graphical interface, while it did all the calculations to make it sound correct. Changes to staging were easily accommodated. In addition, having used other acoustic enhancement systems on other shows, I was eager to try the ASA room enhancement to give the illusion that the theatre was a larger acoustic space for some key moments in the show.

“Also, there were a number of very localised sound effects – coming from a prop radio, or a jukebox, or a baby – and although we had many wireless loudspeaker systems to play with, we used SARA II to reinforce the localisation through the main PA: the initial waveform comes from the practical loudspeaker, but SARA II ensures that the sound is localised correctly for all audience members.”
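The timing behind that trick is not detailed in the article, but it relies on the precedence (Haas) effect: if the practical loudspeaker’s wavefront reaches a listener first, the ear localises to it even when the main PA carries most of the level. A minimal sketch of the delay arithmetic, under assumed seat-to-source distances:

```python
def precedence_delay(practical_dist_m, pa_dist_m, haas_ms=10.0, c=343.0):
    """Delay (ms) to apply to the PA feed so its wavefront arrives
    ~haas_ms after the practical loudspeaker's for a given seat,
    preserving localisation toward the practical source.
    Distances and the 10 ms offset are illustrative assumptions."""
    arrival_gap_ms = (practical_dist_m - pa_dist_m) / c * 1000.0
    return max(0.0, arrival_gap_ms + haas_ms)

# a seat 15 m from the prop radio but only 10 m from the nearest PA cabinet
delay = precedence_delay(15.0, 10.0)
```

A renderer that knows each loudspeaker’s position can make this per-zone calculation automatically, which is what makes the localisation hold up “for all audience members” rather than just one reference seat.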

All the stage band and practical sound effects inputs were routed, post-fader, from a Studer Vista 5 console into SARA II, where they were represented as audio objects; the Studer fired MIDI changes to QLab, which in turn fired OSC commands to SARA II to move between snapshots. SARA II’s outputs were routed back into the console and routed to the appropriate loudspeaker systems, which were then processed using Meyer Galileo units. The system had to function first as a traditional reinforcement system, and secondarily integrate all of SARA II’s power.
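The article names OSC as the link by which QLab fires snapshot changes into SARA II. As a rough illustration of what such a cue looks like on the wire, here is a minimal OSC message builder in standard-library Python; the address, argument, IP, and port are hypothetical, since SARA II’s OSC namespace is not documented here.

```python
import socket
import struct

def osc_message(address, value):
    """Build a minimal OSC packet: a null-terminated, 4-byte-padded
    address string, the type-tag string ',i', then one big-endian
    int32 argument (per the OSC 1.0 message format)."""
    def pad(b):
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",i") + struct.pack(">i", value)

# hypothetical address and snapshot number -- SARA II's real OSC
# namespace isn't given in the article
packet = osc_message("/sara/snapshot/recall", 12)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(packet, ("192.168.1.50", 8000))  # engine IP/port are assumptions
```

In the production’s chain, QLab would emit messages of this shape each time the Studer console fired a MIDI cue, moving SARA II between its stored object-position snapshots.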

The production is configured with 162 mono inputs and 24 stereo inputs, covering a cast of 15 performers, four musicians who play in a purpose-built room under the stage, and five additional stage musicians (who also play in that room when they are not on stage) – totalling 68 band inputs, 26 playback (QLab) inputs, 36 SARA II returns into the console, and a host of reverb returns and utility channels. Reflecting the fact that the Astro Spatial Audio system is entirely brand agnostic, 90% of the loudspeakers used in the show are from Meyer Sound (M1D, LINA, UPJ-1P, UPJr, UPQ-1P, MM4, UPM-1P, UMS-1P, UPA-2P), with the remainder from d&b audiotechnik, including E5s as surrounds.

“The ASA system is not tied to any one loudspeaker brand,” said van Munster. “We believe it is in the best interests of the market that Astro Spatial Audio remains brand independent. Users should benefit from object-based immersive audio regardless of which loudspeaker they invest in. Similarly, we support a range of protocols, including MADI and Dante, and we intend to continue working closely with our good friends in the industry to bring our technology to as many people as possible, and to create incredible experiences for audiences everywhere.”

While automation is a key feature of the ASA immersive solution, the system is equally focused on allowing the audio operator limitless creativity in a live environment. “The show is mixed manually,” said Harada. “My operator, Liz Coleman, mixes every word, line by line, and helps augment the dynamics of both the stage musicians and the musicians in the trap room. The console’s automation helps out by grouping inputs in logical ways, but Liz is very much performing along with the musicians. Nothing is on timecode on our end; sound effects are triggered manually by Liz, sometimes based on a visual cue, sometimes on a musical cue. All commands to the SARA II system are also triggered by Liz.”

ASA’s ultimate purpose is enhanced audience enjoyment. “The goal was not merely an immersive audio experience; the goal was a transparent audio experience, and I think we were very successful,” said Harada. “Many people have commented about the quality of the audio on the show, and I am quite proud of it. I do believe we have achieved our goal of creating an intimate, organic-sounding show but still delivering dynamics when appropriate. The story is so human and conversational that we needed to preserve that feeling but still ensure that everyone in the audience had a very good aural experience.

“The localisation algorithms helped create a very natural-sounding reinforcement system for the musicians onstage as well. I did appreciate the room acoustic enhancement feature too, although I chose to use it sparingly and subtly, and only when dramatically appropriate for the piece. Most theatres I get to work in already have an acoustic – some of them, like the Barrymore Theatre, are quite nice – so it was never my intent to fight the acoustic, just to augment it.”
