
Getting EBU R128 on the web

I hate squashed, over-compressed music. It leads to ear fatigue quickly, is often distorted, and sounds dull and lo-fi compared to dynamic music. And although the Loudness War is apparently over, there is still a need for proper loudness metering, so that people don’t fall into the trap of making their music too loud and destroying the liveliness of their precious recordings.

A few years ago, the European Broadcasting Union (EBU) released a recommendation on how to measure loudness and how to distribute audio material at the right loudness. After that, some metering plugins for DAWs popped up, but I haven’t seen anything like that for the web.

That’s why I created LoudEv, an open-source online loudness evaluator that is compliant with EBU R128.

LoudEv uses the Web Audio API, Web Workers and the great wavesurfer.js by katspaugh to do its thing: analyzing an audio file (on the client side, no server upload necessary) and then creating a two-dimensional loudness map of the song as well as a dynamics map. The loudness map shows the song’s short-term loudness over time. The dynamics map shows the ratio of peak to short-term loudness and indicates which sections of a song, if any, are too loud. If a section gets reddish, the dynamic range there is at or below 8 LU. If it’s black, you can hardly call that music anymore. If most sections of your song are greenish, you’re on the safe side.

This color scheme derives from the recommendations of mastering engineer Ian Shepherd. According to him, your masters should never become louder than −10 LUFS to prevent a potential loss of punch, impact and space in your mix. You should listen to him: he knows what he’s talking about, and his masters sound great.

The technical side

To obtain the subjective loudness of a piece, the EBU recommendation requires R128-compliant meters to apply two pre-filters (a high-shelf and a high-pass filter) to the signal. These filters are described in the ITU loudness standard document. Unfortunately, the document does not provide frequency, Q or gain values for them. Instead, it gives us biquad filter coefficients that only work for audio with a sampling rate of 48 kHz.
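For reference, these are the 48 kHz coefficients as I read them from the BS.1770 tables. I’m copying them here for convenience; double-check them against the document before relying on them:

// K-weighting pre-filter coefficients for 48 kHz from ITU-R BS.1770
// (a0 is 1 in both stages). Verify against the spec before use.
const SHELF_48K = {
  b0: 1.53512485958697, b1: -2.69169618940638, b2: 1.19839281085285,
  a1: -1.69065929318241, a2: 0.73248077421585,
};
const HIGHPASS_48K = {
  b0: 1.0, b1: -2.0, b2: 1.0,
  a1: -1.99004745483398, a2: 0.99007225036621,
};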

So all incoming audio had to be resampled to 48 kHz, because I wanted to use these filter coefficients. But how do you resample in JavaScript? I googled a lot about this topic until I came across a test for Google Chrome where an OfflineAudioContext is used that is set to the target sampling rate. An AudioBuffer with the source audio is assigned to a buffer source node within this context and played. Then the OfflineAudioContext is rendered, which gives us a new AudioBuffer with the target sampling rate. It seems to me that there is no other convenient way to resample audio with the Web Audio API.
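A minimal sketch of that technique (not LoudEv’s exact code):

// Resample an AudioBuffer by rendering it through an OfflineAudioContext
// that runs at the target rate.
function resample(audioBuffer, targetRate) {
  const length = Math.ceil(audioBuffer.duration * targetRate);
  const ctx = new OfflineAudioContext(
    audioBuffer.numberOfChannels, length, targetRate
  );
  const source = ctx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(ctx.destination);
  source.start(0);
  return ctx.startRendering(); // resolves with the resampled AudioBuffer
}

For the pre-filtering stage, you would call this with a targetRate of 48000.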

Having obtained a 48 kHz version of my audio, I decided to implement the biquad filter function myself, after learning that, as of today, the creation of custom IIR filters with the Web Audio API hasn’t been implemented in Chrome yet.

Due to my initial lack of knowledge about implementing biquad filters, I had a tough time with the biquad filter equation, but then the great Audio EQ Cookbook by Robert Bristow-Johnson came to the rescue and showed me the code I could use:

 y[n] = (b0/a0)*x[n] + (b1/a0)*x[n-1] + (b2/a0)*x[n-2]
                        - (a1/a0)*y[n-1] - (a2/a0)*y[n-2]
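In JavaScript, that recurrence (Direct Form I) boils down to something like the following sketch, assuming the coefficients have already been divided by a0:

// Apply a biquad filter (Direct Form I) to a block of samples.
// Coefficients are expected to be pre-normalized by a0.
function biquad(input, { b0, b1, b2, a1, a2 }) {
  const output = new Float32Array(input.length);
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0; // filter state: previous in/out samples
  for (let n = 0; n < input.length; n++) {
    const x0 = input[n];
    const y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
    output[n] = y0;
    x2 = x1; x1 = x0; // shift input history
    y2 = y1; y1 = y0; // shift output history
  }
  return output;
}

Running a channel through biquad(data, SHELF_48K) and then through biquad of HIGHPASS_48K yields the K-weighted signal that the loudness values are computed from.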

So finally, I had my R128-compliant values for the short-term loudness.

Measuring True Peak

After that, I tried to implement a true-peak meter, which considers inter-sample peaks. The recommendation suggests the following way to do this: resample (upsample/zero-stuff and interpolate) the signal to 192 kHz and then search for the sample with the maximum absolute value (see Annex 2 of the ITU document).
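The second step is the easy part. Given a properly interpolated 192 kHz buffer, the scan itself could look like this sketch:

// Find the absolute peak across all channels of an AudioBuffer.
function absolutePeak(buffer) {
  let max = 0;
  for (let ch = 0; ch < buffer.numberOfChannels; ch++) {
    const data = buffer.getChannelData(ch);
    for (let i = 0; i < data.length; i++) {
      const abs = Math.abs(data[i]);
      if (abs > max) max = abs;
    }
  }
  return max; // 20 * Math.log10(max) converts this to dBTP
}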

I was like “Yippieh, I know how to resample in JavaScript, I can do this!”, only to learn that Chrome did not resample the waveform to 192 kHz the way I wanted. Using a very loud song with digital clipping (Sowing Season by Brand New), i.e. a waveform with a sample maximum of exactly 1, gave me a waveform that was just as flat and clipped as before the resampling, even when I reduced the gain of the song by 0.5 beforehand. I’m not a DSP expert, but this looks to me like Chrome does not resample by zero-stuffing and then low-passing the signal, but by copying the samples. So no true peak yet.
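The check I ran boils down to this sketch, reusing the resample and absolutePeak helpers from above:

// If the resampler truly interpolates, a hard-clipped signal scaled by 0.5
// should show inter-sample overshoot above 0.5 after upsampling.
async function checkInterpolation(clippedBuffer) {
  const scaled = new AudioBuffer({
    length: clippedBuffer.length,
    numberOfChannels: clippedBuffer.numberOfChannels,
    sampleRate: clippedBuffer.sampleRate,
  });
  for (let ch = 0; ch < clippedBuffer.numberOfChannels; ch++) {
    const src = clippedBuffer.getChannelData(ch);
    const dst = scaled.getChannelData(ch);
    for (let i = 0; i < src.length; i++) dst[i] = src[i] * 0.5;
  }
  const upsampled = await resample(scaled, 192000);
  // In my tests this stayed at exactly 0.5, which suggests sample copying.
  console.log('peak after upsampling:', absolutePeak(upsampled));
}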

And even if it worked: I had to learn that creating an OfflineAudioContext and an AudioBuffer at 192 kHz often results in a crash.

[Image: Chrome tab crash icon]

Chrome enforces a memory limit of about 200 MB per web page, and that limit is reached very quickly when you deal with 192 kHz audio.
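A back-of-the-envelope calculation (assuming 32-bit float samples) shows why: a four-minute stereo track at 192 kHz alone needs

192,000 samples/s × 240 s × 2 channels × 4 bytes ≈ 369 MB

which is well past that limit before any analysis has even started.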

Next, I will try the filter suggested by the ITU document. It provides filter coefficients for a FIR interpolation of an upsampled (zero-stuffed) signal. Russell McClellan at iZotope has written an insightful assessment of this filter.
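The overall shape of that approach would look something like this sketch. The actual filter taps from Annex 2 are not reproduced here; firCoefficients is a placeholder:

// 4x oversampling: zero-stuff, then low-pass with an interpolation FIR.
// `firCoefficients` stands in for the taps given in the ITU document.
function oversample4x(input, firCoefficients) {
  // Insert three zeros after every sample. Scaling by 4 keeps the passband
  // level, since zero-stuffing divides the signal amplitude by 4.
  const stuffed = new Float32Array(input.length * 4);
  for (let i = 0; i < input.length; i++) {
    stuffed[i * 4] = input[i] * 4;
  }
  // Convolve with the FIR low-pass to interpolate between the samples.
  const output = new Float32Array(stuffed.length);
  for (let n = 0; n < stuffed.length; n++) {
    let acc = 0;
    for (let k = 0; k < firCoefficients.length && k <= n; k++) {
      acc += firCoefficients[k] * stuffed[n - k];
    }
    output[n] = acc;
  }
  return output;
}

In practice you would exploit the inserted zeros with a polyphase structure instead of this naive convolution, but the result is the same.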

That is the end of the first chapter of bringing R128 onto the web. There’s a lot going on with the Web Audio spec at the moment, so I expect that I’ll soon be able to do things that I cannot do now.

Let me know if you know things that I don’t know or if you wish to contribute. The source code of LoudEv is on GitHub.

Try it out here and let me know what you think:

https://webaudiotech.com/sites/loudev

Please be aware that it only works with mono/stereo files and audio file types that are supported by your browser. Both Chrome and Firefox accept MP3, for example.

Happy Metering!

4 replies on “Getting EBU R128 on the web”

Hey Sebastian,

thanks for the great post and code!

I’m trying to implement a real-time LUFS meter based on the EBU standards and have come across some difficulties. I’m new to Web Audio and just wanted to ask if you could point me in the right direction. I’m gonna have questions that relate generally to the Web Audio API; hope it’s fine to ask them here.

I’m fetching the audio stream in my app using an AnalyserNode and then passing these packets of the audio stream through the two pre-filters and eventually into a worker to do the LUFS calculation.

1. In your code, you collect the samples into a buffer using createBufferSource() – is this necessary to apply the filters later on? I’m connecting the AnalyserNode to the two filters, but I think this is wrong – is the filter applied at all if it’s not connected directly to a sound source (be it an oscillator or a buffer source)?

2. You build up an EBU graph in your code and I don’t seem to understand the logic on line 79 in main.js:

source
.connect(highshelf_filter)
.connect(highpass_filter)
.connect(square_gain);
highpass_filter.connect(square_gain.gain);
square_gain.connect(OAC_IL.destination);

What is the purpose of square_gain here? What’s the logic behind connecting the filter separately to square_gain.gain? Would a graph like this not suffice [meter being Tone.meter()]:

source.connect(highshelf_filter).connect(highpass_filter).connect(meter)

and then use meter.getValue() to get the actual values to pass into the worker?

Thanks a lot!

Best,
Adam

Thanks for the comment, Adam.

Regarding your questions:

1.
It’s not necessary to use buffer source nodes. If you want to analyse real-time audio data, you could also use a live audio source node (like MediaStreamAudioSourceNode [1]), connect it to the filters, connect those to the analyser and then grab the time-domain data from the analyser. Or use an AudioWorklet [2] for all the analysis.

2.
The ITU documentation of the filters [3] mentions computing the mean square of the signal (see page 3, figure 1). For this, we first have to square the sample values of the signal. This is done by connecting the highpass filter to the gain node itself as well as to the same gain node’s gain AudioParam, so the signal modulates its own gain and is thereby multiplied by itself. It’s a neat trick if you don’t want to square the samples manually with `value = Math.pow(value, 2)`. The mean is then computed in [4].
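In plain code, the equivalent of that trick plus the mean looks something like this sketch (single, unity-weighted channel; the −0.691 offset comes from the BS.1770 loudness formula):

// `samples` is a Float32Array of pre-filtered (K-weighted) audio.
function blockLoudness(samples) {
  let sum = 0;
  for (const v of samples) sum += v * v;       // what square_gain does
  const meanSquare = sum / samples.length;     // what the worker computes
  return -0.691 + 10 * Math.log10(meanSquare); // loudness in LUFS
}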

I hope this is somewhat helpful; I realize that this code was written a while ago. I think with today’s Web Audio features, especially AudioWorklet, you could do it in a cleaner way. Here’s a cool example implementation: [5]

[1] https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamAudioSourceNode
[2] https://developer.mozilla.org/en-US/docs/Web/API/AudioWorklet
[3] https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.pdf
[4] https://github.com/SebastianZimmer/LoudEv/blob/master/js/workers/integrated-loudness-worker.js#L38
[5] https://github.com/padenot/ringbuf.js

Hey Sebastian,

thanks for your answer!

Which part of the calculation exactly would you move into an AudioWorklet? Since the integrated/short-term calculations are already in a web worker, what improvement would moving to an AudioWorklet bring?

Thanks,
Adam

In my code, not all of the calculations are done in a web worker, which makes the code somewhat hard to understand. So the advantage of using an AudioWorklet over my solution is mainly that you could encapsulate all the analysis code in separate files.

I could imagine that the advantage of an AudioWorklet over a web worker in your case of a live meter is that the AudioWorklet can continuously stream the calculation results back to the main thread. But to be honest, I’m not sure how well that would work. You’d need to try it out.
