Should your web audio app have a limiter?

Have you ever come across digital clipping in a web audio app? I certainly have, several times (mostly in my own apps, though). This undesired effect occurs when you play several sound sources at the same time, resulting in a summed signal that is louder than the maximum of 0 dBFS. Since a digital system cannot reproduce higher amplitudes, you will hear nasty distortion and end up with an ugly waveform like this:

No Limiter applied

This is why you should consider strapping a limiter onto your processing chain, right before the signal arrives at AudioContext.destination.

What is a limiter?

A limiter is an extreme variant of a compressor: its ratio is very high. Once the ratio exceeds roughly 15:1, a compressor is considered a limiter.

Unlike compressors, which are best used for obtaining a more consistent level by reducing louder parts of the recording without squashing the peaks, limiters are best used for reducing peaks or spikes in the recording without affecting anything else.

Digital limiters are commonly used to avoid clipping distortion.

How to implement one with Web Audio?

There are several ways to implement a limiter because there are several kinds of limiters. One way is to use Web Audio’s DynamicsCompressorNode, as in this Codepen.
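A minimal sketch of using DynamicsCompressorNode as a limiter could look like this (the parameter values below are my own choices, not necessarily the Codepen’s):

```javascript
// Sketch: configure a DynamicsCompressorNode so that it acts like a limiter.
// The exact parameter values are assumptions, not taken from the Codepen.
function createCompressorLimiter(ctx) {
    var limiter = ctx.createDynamicsCompressor();
    limiter.threshold.value = -1;   // dB: start limiting just below 0 dBFS
    limiter.knee.value = 0;         // hard knee
    limiter.ratio.value = 20;       // above ~15:1 counts as limiting
    limiter.attack.value = 0.003;   // seconds
    limiter.release.value = 0.25;   // seconds
    return limiter;
}

// source.connect(createCompressorLimiter(ctx)).connect(ctx.destination);
```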

It is quite easy to implement and understand. But although this is A LOT better than using no limiter at all, this filter does not apply any look-ahead, so it cannot see peaks coming. It can only react to peaks with some lag and is thus unable to completely avoid clipping. I tested it and the result looks like this:

DynamicsCompressorNode limiter

Limiter with look-ahead = Brickwall limiter

A limiter with look-ahead, one that anticipates peaks before they occur, is also called a “brickwall limiter”. It is as if there were a super-solid wall that lets no peak slip past the 0 dBFS barrier. To implement such a brickwall limiter, we have to write our own. Do you know what that means? Yes! It’s time for our beloved ScriptProcessorNode.


That was irony. ScriptProcessorNode is bad for several reasons which I don’t want to go into now. Once AudioWorklets arrive, we should use them instead. But for now, ScriptProcessorNode is our only way to apply custom DSP code to the signal chain.
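The basic scaffolding looks like this; the handler is written as a named function (a plain pass-through here) so that the limiter DSP from the snippets below can be dropped into it:

```javascript
// Skeleton: a ScriptProcessorNode with a mono pass-through handler.
// The limiter DSP (envelope, delay, gain reduction) goes inside processAudio.
function processAudio(e) {
    var inp = e.inputBuffer.getChannelData(0);
    var out = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < inp.length; i++) {
        out[i] = inp[i]; // custom DSP goes here
    }
}

// var processor = ctx.createScriptProcessor(4096, 1, 1); // bufferSize, inputs, outputs
// processor.onaudioprocess = processAudio;
// source.connect(processor); processor.connect(ctx.destination);
```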

Googling around for example DSP code, I came across this great tutorial by Christian Floisand. He created a compressor/limiter with look-ahead and soft knee for a Unity game. Although this particular post is written with Unity/C# in mind, the theory and code are easy enough to adapt to JavaScript. If you are really interested in how a compressor/limiter works, you should definitely read it. It consists of three parts:

Dynamics processing: Compressor/Limiter, part 1

Dynamics Processing: Compressor/Limiter, part 2

Dynamics Processing: Compressor/Limiter, part 3

I however will just briefly explain what my adapted JavaScript code does.

Here’s the function that calculates an envelope of the incoming signal. It creates an envelope curve by analysing the signal’s amplitudes over time.

var envelopeSample = 0;
var getEnvelope = function(data, attackTime, releaseTime, sampleRate){
	//attack and release in seconds
	var attackGain = Math.exp(-1/(sampleRate*attackTime));
	var releaseGain = Math.exp(-1/(sampleRate*releaseTime));

	var envelope = new Float32Array(data.length);
	for (var i=0; i < data.length; i++){
		var envIn = Math.abs(data[i]);
		if (envelopeSample < envIn){
			envelopeSample = envIn + attackGain * (envelopeSample - envIn);
		} else {
			envelopeSample = envIn + releaseGain * (envelopeSample - envIn);
		}
		envelope[i] = envelopeSample;
	}
	return envelope;
};
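To get a feel for those coefficients: they are one-pole smoothing factors. For example (values assumed, not from the post above), a 10 ms attack at 44.1 kHz gives:

```javascript
// One-pole smoothing coefficient for a 10 ms attack at a 44.1 kHz sample rate.
var sampleRate = 44100;
var attackTime = 0.01; // seconds
var attackGain = Math.exp(-1 / (sampleRate * attackTime));
// attackGain ≈ 0.9977: each sample, the envelope moves about 0.23 %
// of the remaining distance toward the current amplitude.
```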

After applying some pre-gain to the signal and acquiring the envelope, we delay the incoming stream by the look-ahead time:

	if (lookAheadTime > 0){
		//write signal into buffer and read delayed signal
		for (var i = 0; i < out.length; i++){
			delayBuffer.push(out[i]);
			out[i] =;
		}
	}

For the delay buffer, I implemented this (very simple) ring buffer class, which is suitable for dealing with streams:

function DelayBuffer(n) {
    n = Math.floor(n);
    this._array = new Float32Array(2 * n);
    this.length = this._array.length;  // can be optimized!
    this.readPointer = 0;
    this.writePointer = n - 1;
    for (var i = 0; i < this.length; i++){
        this._array[i] = 0;
    }
} = function() {
    var value = this._array[this.readPointer % this.length];
    this.readPointer++;
    return value;
};

DelayBuffer.prototype.push = function(v) {
    this._array[this.writePointer % this.length] = v;
    this.writePointer++;
};

Finally, the gain reduction is applied to the signal, but only if the signal’s envelope exceeds the threshold:

//limiter mode: slope is 1
var slope = 1;
for (var i=0; i<inp.length; i++){
    var gainDB = slope * (threshold - ampToDB(envelopeData[i]));
    //is gain below zero?
    gainDB = Math.min(0, gainDB);
    var gain = dBToAmp(gainDB);
    out[i] *= (gain * postGainAmp);
}
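The ampToDB and dBToAmp helpers are not shown above; they are the standard amplitude/decibel conversions (a sketch, assuming the usual 20·log10 convention):

```javascript
// Standard amplitude <-> decibel conversions used by the loop above.
function ampToDB(amp) {
    return 20 * Math.log10(amp);
}
function dBToAmp(db) {
    return Math.pow(10, db / 20);
}
// ampToDB(1) is 0 (full scale); halving the amplitude is about -6 dB.
```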

And this is what we’ll get, no clipping at all:

brickwall limiter

Be aware that the look-ahead naturally introduces some latency to the signal, in this case 5 ms. (Not to mention that the mere use of ScriptProcessorNode itself introduces some latency.) Furthermore, this algorithm currently works only with mono streams. Making it stereo-compatible should be trivial, though.


I have created a little limiter comparison which, I think, reveals quite well how the algorithms differ in sound. Please be aware that the demo is capable of producing severe clipping distortion, so turn your speakers/headphones down! (Of course, I am not liable for damaged equipment. 😉 )

Limiter Comparison

But in what scenarios does a limiter make sense?

Whether a limiter makes sense depends on the nature of your app. I want to give you my opinion on four main use cases of the Web Audio API.


Games

Probably yes.

If it is even remotely possible that sound effects played at the same time (e.g. two gunshots) together with some underlying music could result in a signal that at any point exceeds 0 dBFS, I’d recommend that you use a limiter. Even the Web Audio specification itself mentions games as a main use case of the DynamicsCompressorNode:

Dynamics compression is […] especially important in games and musical applications where large numbers of individual sounds are played simultaneously to control the overall signal level and help avoid clipping (distorting) the audio output to the speakers.

DAWs / Audio Tools for Professionals

Probably no.

Digital Audio Workstations are usually professional tools. Their users typically know what they are doing and expect bit-perfect audio. No one wants a limiter interfering with their hand-crafted signal chain unless they decide for themselves that one is a good idea.

Synthesizers/noise-producing apps

It depends on your target audience.

A sophisticated synthesizer does not need a limiter because it expects the user to recognize a too-loud signal.
Take Native Instruments’ MASSIVE, for example: it has a VU meter that indicates when a signal is clipping, so the user can take action to prevent it from happening again.

On the other hand, if your app explicitly targets people who have no background in music technology, a limiter may be a good idea. This is the case with Beatsketch: I want users without musical knowledge to play around, make music and have fun. I do not want them to have to care about clipping. Here, a limiter makes sense, as long as it does not introduce so much latency that playing is no fun anymore.

Music Player


Music players nowadays apply loudness normalization to music.

[This means] every song is played at a similar level, aiming for a “target” loudness, which is different for every service.

Loud songs are turned down, quiet songs are turned up – IF there’s enough peak headroom.

Ian Shepherd,

In some scenarios, this process of turning up quieter songs can result in peaks above 0 dBFS.

Be aware though that some people (like me) don’t like limiters messing with their music. There is even a campaign to remove the limiter from Spotify’s loudness normalization process. Instead of using a limiter, the suggestion is that Spotify simply restrict the volume boost so that clipping cannot occur, which would make the limiter unnecessary.

Be mindful of crossfades between songs though.


Happy limiting!

As this is quite an opinionated post, feel free to agree or disagree with me in the comments.

UPDATE 1 (2016-01-31)

As web audio spec editor Paul Adenot has pointed out in the comments, current implementations of DynamicsCompressorNode in Chrome and Firefox actually do some look-ahead, though the exact behaviour has not been specified yet.

Further resources

The ScriptProcessorNode word art was created with


  1. The Web Audio API compressor actually has a fixed pre-delay, and computes the compression amount from the non-delayed version, so it does look-ahead, see [0] (Firefox) and [1] (Chromium). The Firefox code was forked from the blink code, explaining the similarities.

    The DynamicsCompressorNode is very unspecified, though, and this could change in the future.


  2. I noticed you use the absolute value of the signal, are things any different (better?) if you use the root mean square?

  3. I improved the code of the brickwall limiter, added stereo and multi-channel support, and simplified the code into one object class (Limiter) which supports arguments.

    You can get the improved code (used in my webapp) here:


    var limiter = new Limiter(); // instantiate a Limiter
    var limiterProcessor = offlineContext.createScriptProcessor(BUFFER_SIZE, buffer.numberOfChannels, buffer.numberOfChannels); // create new script processor
    limiterProcessor.onaudioprocess = limiter.limit; // point to the limit function of the Limiter
