Networked Audio Gets Super Bowl LVI Halftime Show On-Air
The broadcast audio of the hip-hop event was mixed from a control room across the street
The Halftime Show at Super Bowl LVI was a moment to remember for a number of reasons, not least because it was the first time that all the headline performers — Dr. Dre, Snoop Dogg, Eminem, Mary J. Blige, and Kendrick Lamar — were hip-hop artists, reflecting the significant cultural and economic shift that has taken place over the past decade. But the event’s audio also reflected how far remote audio mixing has come and where it might head in the future.
The broadcast audio for the halftime event was mixed from the NFL’s new facility, which sits in the $5 billion, 298-acre Hollywood Park mixed-use development on the same Inglewood campus as — and literally across the road from — the Los Angeles Rams’ SoFi Stadium. At the facility, five audio-control rooms are fitted with SSL S500 and S300 consoles, part of SSL’s System T broadcast range. One S500 was the main mixer for the halftime show (with an S300 in the room as backup), manned by A1 Tom Holmes.
Audio signals from the stadium were transported in the Dante format over single-mode fiber to the facility, which opened last September as an expansion of the league’s production facility there. Pregame audio (“America the Beautiful,” the National Anthem) was sent to the NEP Super 8 game truck in the broadcast-production compound adjacent to the stadium. The Halftime Show’s music mix was sent via a direct-to-transmission path to NBC.
Holmes, longtime production mixer for the Grammy Awards, was handling his second Super Bowl Halftime Show (his first time on the S500). He was impressed, even a little awed, by the league’s new studio and office complex, particularly its vast 19,000-plus-I/O Dante network, which he describes as “at the same time a really good thing and maybe not a really good thing.”
His ambivalence has to do with the fact that a Dante network, like any other IT network, requires protocol-specific permissions for connection with external devices. For a project with as many inputs as the Super Bowl, that can be cumbersome, requiring the equivalent of a digital “hall pass” from the network administrator for each device, whether it’s a printer in another office or a microphone preamp on a stage across the campus.
“You have to query every device or patch,” Holmes explains. “A single patch can take literally up to 30 seconds.” He points out that there were hundreds of such connections to be made over the fiber connection the Dante network rode on between the stadium and the NFL media center, using a total of 58 Focusrite RedNet interfaces, the equivalent of more than 1,700 available audio channels. (Specifically, RedNet A16R and RedNet D16R interfaces connected digital and analog sources and feeds to and from the NFL Network; RedNet D64R MADI bridges were deployed for connecting signals to and from the SSL and DiGiCo digital audio consoles in the system and for connections between production groups.)
“If you want to take a patch from a new Dante box and patch it to your console,” Holmes says, “[you have] to basically go and check with every other device on the Dante network and see who owns that patch. You have to sit there and wait for this query to be done over the entire network. You can moderate some of that lag time, but you would have to go into the SSL [operating] system and integrate all of the added physical devices as ‘logic’ devices, and then [each] patch gets a device label. This speeds up the query process by allowing the console to filter out and ignore all but the devices you want to use. But somebody needs to spend time typing each patch-point label into the system.”
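The workflow Holmes describes can be illustrated with a toy model. This sketch is an assumption for illustration only, not SSL’s or Audinate’s actual implementation: an unfiltered patch query has to ask every device on the network who owns a port, while pre-typed device labels let the console skip everything outside the production it cares about.

```python
# Toy model (illustrative assumption, not SSL/Dante internals) of why
# labeling devices speeds up patch queries: the unfiltered search walks
# the whole network, the labeled search skips irrelevant devices.
from dataclasses import dataclass, field

@dataclass
class DanteDevice:
    name: str
    label: str                      # e.g. "halftime", "pregame", "office"
    ports: set = field(default_factory=set)

def find_owner_unfiltered(devices, port):
    """Query every device on the network, one by one, for the port."""
    for d in devices:
        if port in d.ports:
            return d.name
    return None

def find_owner_labeled(devices, port, wanted_label):
    """Console filters out all devices except those with the wanted label."""
    for d in devices:
        if d.label == wanted_label and port in d.ports:
            return d.name
    return None

# Hypothetical devices for the sketch:
network = [
    DanteDevice("office-printer-bridge", "office", {"spdif-1"}),
    DanteDevice("rednet-a16r-stage", "halftime", {"mic-pre-7"}),
]
print(find_owner_labeled(network, "mic-pre-7", "halftime"))  # rednet-a16r-stage
```

The trade-off is exactly the one Holmes names: the filtered lookup is faster per patch, but somebody has to spend the time typing each label into the system first.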
The permissions process and related efforts, such as implementing MADI-to-Dante conversions, were time-consuming but were mostly finalized two days before the show. Holmes credits the NFL’s IT staff for making what was an extremely complex connectivity proposition manageable.
However, that complexity, along with the use of particular cameras and visual frame rates mandated by the producers, inevitably introduced a considerable amount of latency into the mix: 243.75 ms of delay, to be exact. Left uncorrected, that would have produced very noticeable lip-sync problems. The fix was to delay all camera-associated audio by 243.75 ms before audio and video were embedded together for broadcast; sources not tied to a camera (for example, the EVS playbacks) were left undelayed.
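The arithmetic behind that correction is simple to sketch. The block below is an illustration only, not the broadcast chain’s actual code; the 48 kHz sample rate is an assumed value (a common broadcast rate), while the 243.75 ms figure comes from the article.

```python
# Illustrative sketch: compensating for video-path latency by delaying
# audio a fixed amount before embedding. 48 kHz is an assumed rate;
# 243.75 ms is the delay figure reported for the show.
SAMPLE_RATE_HZ = 48_000
VIDEO_LATENCY_MS = 243.75

def delay_in_samples(rate_hz: int, latency_ms: float) -> int:
    """Whole audio samples equivalent to the video delay."""
    return round(rate_hz * latency_ms / 1000)

def delay_audio(samples: list, n: int) -> list:
    """Prepend n samples of silence so audio lines up with delayed video."""
    return [0.0] * n + samples

n = delay_in_samples(SAMPLE_RATE_HZ, VIDEO_LATENCY_MS)
print(n)  # 11700 samples of offset at 48 kHz
```

At 48 kHz, 243.75 ms works out to exactly 11,700 samples, which is why a fixed-length delay buffer ahead of the embedder is enough to restore lip sync.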
Other sources of latency, such as the difference between hard-wired and RF cameras, can delay picture by an additional several milliseconds, though usually not enough to be noticeable on screen, and are even less of an issue for quick-cut productions like music shows.
A Proper Recording Studio
Previous Super Bowl Halftime Shows have been mixed on dedicated remote-production trucks, usually designed specifically for on-location music mixing and/or recording — and exceptionally good at their particular task. On more-conventional remote-production trucks, the audio is relegated to a small compartment at the rear of a trailer.
Super Bowl LVI’s halftime mix, by contrast, took place in the equivalent of an Abbey Road-level environment, featuring a Genelec 7.1.4 monitoring system, as good as or better than the studios in which the show’s songs were originally recorded and mixed. Like all first-time experiences, though, Holmes’s was somewhat fraught with the exigencies of network connectivity overlaying the year’s single biggest 12 minutes of live music on television.
On the other hand, he says, “the fact that they have five studios decked out with the latest, most modern gear in the world is a compelling reason to use it.” The facility’s IT staff was “excellent,” he adds, as was his experience on the S500 console.
Not every Super Bowl will be fortunate enough to take place next door to a world-class mixing studio, but, says SSL SVP Phil Wagner, future shows won’t necessarily need to be.
“We could do this from any stadium anywhere,” he says. “There’s nothing stopping it technologically. Latency isn’t an issue because audio has far less latency than the video. As long as there’s a fiber and network connection, it can be done.”
That will be important as the marriage between music and broadcast sports gets closer and is poised to move into immersive formats.