I have spent my day diving into Service Workers and Push Notifications since I wanted my webapp STQ (a German Star Trek trivia quiz app) to send out push messages to users. This has probably been the most requested feature since the app launched.
Addy Osmani’s guide on PWAs and this codelab by Sam Dutton on implementing push notifications are a great start. But since the technology is still cutting edge, there were two gotchas along the way, which I want to share.
Continue reading “Two Service Worker & Push Notification GOTCHAs”
If you are anything like me, you are curious about the current state and the future of Web Audio. So I asked one of the Web Audio API spec editors, Mozilla’s Paul Adenot, if I could shoot him some questions. He said sure, and was kind enough to take some time and answer them elaborately. Here are his answers, stuffed with lots of useful information. Continue reading “Interview with Paul Adenot, Web Audio Spec Editor”
Have you come across digital clipping in web audio apps? I certainly did several times (mostly in my own apps, though). This undesired effect occurs when you play several sound sources at the same time, resulting in a signal that is louder than the maximum of 0 dBFS. Since a digital system is unable to reproduce higher amplitudes, you will hear nasty distortion and get an unsightly waveform looking like this:
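You can sketch the effect with plain numbers (a hypothetical example; the functions are mine, not from any library): samples live in the range [-1, 1], so two sources at 0.8 each already push the sum past full scale, and anything beyond gets flattened.

```javascript
// Sum several source samples into one output sample.
function mix(...samples) {
  return samples.reduce((sum, s) => sum + s, 0);
}

// A digital system can't go past 0 dBFS, i.e. past +/-1.0.
function hardClip(sample) {
  return Math.max(-1, Math.min(1, sample));
}

const summed = mix(0.8, 0.8); // 1.6 – louder than 0 dBFS
const output = hardClip(summed); // 1.0 – the peak is chopped off
```

Every sample that would have been between 1.0 and 1.6 ends up at exactly 1.0, which is why the waveform gets those flat tops.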
Continue reading “Should your web audio app have a limiter?”
I just wanted to mention that I did this thing called Beatsketch last year. It lets you make music on the web without having to know much about making music.
BeatSketch from Sebastian Zimmer is a collaborative music production tool that Sebastian developed for his Master’s degree in Computer Science. A song consists of multiple tracks, and each track is backed by a grid-based sequencer. Any changes you make are synchronised between connected collaborators immediately. It also supports mixing the final song down to a WAV file for downloading. An impressive set of features and a very useful exploration of possible methods of implementing collaborative working.
Chris Lowis on Web Audio Weekly #43
Continue reading “Beatsketch”
Inspired by an @AudioGrains tweet, I made this little Emoji Piano.
Emoji Piano lets you create simple melodies and encodes them with Unicode emojis which you can share and tweet.
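The encoding idea can be sketched like this (a hypothetical mapping of my own for illustration; Emoji Piano's actual note-to-emoji table may differ):

```javascript
// Hypothetical note-to-emoji mapping.
const NOTE_TO_EMOJI = { C: "🍎", D: "🍊", E: "🍋", F: "🍏", G: "🍉", A: "🍇", B: "🍓" };
const EMOJI_TO_NOTE = Object.fromEntries(
  Object.entries(NOTE_TO_EMOJI).map(([note, emoji]) => [emoji, note])
);

function encodeMelody(notes) {
  return notes.map((n) => NOTE_TO_EMOJI[n]).join("");
}

function decodeMelody(text) {
  // Spreading a string splits it by code points, so the
  // multi-byte emojis survive the round trip intact.
  return [...text].map((e) => EMOJI_TO_NOTE[e]);
}
```

The resulting string is just text, so it fits in a tweet and decodes back to the same melody.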
Continue reading “Emoji Piano”
THREE.js developer Mr.doob has posted an important comment on this.
WebVR matters. And the great WebVR Boilerplate by Boris Smus allows you to get started with it immediately.
Playing around with it, I got the idea to use the Web Audio API to spatialize the sound of an object within the scene, so that a person wearing headphones could not only see, but also hear where an object is located.
Since the Web Audio API is great, you can do that with ease.
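The core of the wiring is updating the `AudioListener` from the camera pose each frame. Here is a sketch of the orientation math under assumed conventions (camera looks down -Z, up is +Y, quaternion as `{x, y, z, w}`); the helper names are mine:

```javascript
// Cross product of two 3-vectors.
function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// Rotate vector v by unit quaternion q (standard formula:
// v' = v + 2w (qv x v) + 2 qv x (qv x v)).
function rotateByQuaternion(q, v) {
  const qv = [q.x, q.y, q.z];
  const t = cross(qv, v).map((c) => 2 * c);
  const c2 = cross(qv, t);
  return [
    v[0] + q.w * t[0] + c2[0],
    v[1] + q.w * t[1] + c2[1],
    v[2] + q.w * t[2] + c2[2],
  ];
}

// Per frame, with the camera's pose quaternion:
// const forward = rotateByQuaternion(cameraQuat, [0, 0, -1]);
// const up      = rotateByQuaternion(cameraQuat, [0, 1, 0]);
// audioCtx.listener.setOrientation(...forward, ...up);
// The PannerNode then simply keeps the object's world position.
```

With the listener following the head and the panner pinned to the object, the Web Audio API does the actual spatialization for you.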
Continue reading “Wiring up WebAudio with WebVR”
Currently, I’m doing some XML manipulation and transformation in the web browser. Along the way I have encountered some obstacles and gathered some tips and thoughts that I want to share with you.
Continue reading “XML in the browser”
I LOVE Web Audio. It’s one of the most fun things in the browser right now. But doing more and more stuff with it, I came to realize that there are three limits that prevent this technology from making traditional pro audio software obsolete, at least at the moment:
When I open a big session in Samplitude, it likely uses up to 2 GB of memory. Chrome, on the other hand, allows about 200 MB of memory per web page. If your script tries to allocate more, the site crashes. That’s a good thing: older machines and mobile devices have their hardware limits and you don’t want to push them too hard. But if you deal with AudioBuffers at high sampling rates like 192 kHz, as pro users do, you may reach this limit very quickly, assuming the browser even supports such high sampling rates. I did reach it several times. Per the spec, implementations only have to support sampling rates up to 96 kHz for an OfflineAudioContext.
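A quick back-of-the-envelope calculation shows how fast the budget evaporates: AudioBuffer samples are 32-bit floats, so each sample costs 4 bytes per channel.

```javascript
// Memory footprint of an AudioBuffer: 4 bytes (Float32) per sample per channel.
function audioBufferBytes(sampleRate, seconds, channels) {
  return sampleRate * seconds * channels * 4;
}

// A single 5-minute stereo buffer at 192 kHz:
const mb = audioBufferBytes(192000, 5 * 60, 2) / (1024 * 1024);
// ≈ 439 MB – more than double a ~200 MB per-page budget, from one buffer
```

At 44.1 kHz the same buffer would be around 100 MB, which explains why this mostly bites at pro sampling rates.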
A browser is a browser. It’s a very universal piece of software you can do all sorts of things with. Since browsers are not dedicated pieces of software built for audio synthesis and manipulation, they usually use your system’s standard audio driver. On Windows, this is WASAPI (Windows Audio Session API). WASAPI in shared mode isn’t suitable for pro audio applications, as it introduces round-trip latencies well over 20 ms. This has gotten better with Windows 10, but it still cannot compete with drivers dedicated to real-time audio processing, like ASIO. In the best case, ASIO allows for latencies of about 2 ms. That can be less than the time a sound needs to travel from a speaker to your ear through the air.
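To put those latency numbers in perspective, here is the travel-distance math (speed of sound roughly 343 m/s in air at 20 °C):

```javascript
// How far sound travels through air during a given latency.
const SPEED_OF_SOUND = 343; // m/s, at roughly 20 °C

function latencyToMeters(latencyMs) {
  return SPEED_OF_SOUND * (latencyMs / 1000);
}

// latencyToMeters(2)  ≈ 0.69 m – about the distance to a nearfield monitor
// latencyToMeters(20) ≈ 6.9 m  – like listening from across a large room
```

So a 2 ms ASIO round trip is acoustically "closer" than a speaker a meter away, while 20 ms of WASAPI latency is clearly audible to a performer.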
People (like me) once proposed that Chrome would implement ASIO support. But let’s be realistic: That is unlikely to happen.
I made another little tool after CAAT, the WAV Builder. This time it’s not about testing filter algorithms, but synthesizing waveforms which are then rendered and saved as a WAV file. It helps me sometimes to test stuff. If it should help you too, that’s great!
WAV Builder uses the great Recorder.js.
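For the curious, the general idea of turning float samples into a WAV file looks roughly like this. This is the standard RIFF/WAVE layout for 16-bit PCM mono, written from scratch as a sketch, not Recorder.js’s actual code:

```javascript
// Wrap Float32 samples in a 16-bit PCM mono WAV container.
function floatTo16BitWav(samples, sampleRate) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeStr = (offset, s) =>
    [...s].forEach((c, i) => view.setUint8(offset + i, c.charCodeAt(0)));

  writeStr(0, "RIFF");
  view.setUint32(4, 36 + samples.length * 2, true); // remaining chunk size
  writeStr(8, "WAVE");
  writeStr(12, "fmt ");
  view.setUint32(16, 16, true); // fmt chunk size
  view.setUint16(20, 1, true); // audio format: PCM
  view.setUint16(22, 1, true); // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true); // block align
  view.setUint16(34, 16, true); // bits per sample
  writeStr(36, "data");
  view.setUint32(40, samples.length * 2, true);

  // Convert each float in [-1, 1] to a signed 16-bit integer.
  samples.forEach((s, i) => {
    const c = Math.max(-1, Math.min(1, s));
    view.setInt16(44 + i * 2, c < 0 ? c * 0x8000 : c * 0x7fff, true);
  });
  return view;
}
```

In the browser you would wrap the result in a Blob and hand it to the user as a download.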
CAAT, the custom audio algorithm tester, is a page that lets you try out your own simple audio filter algorithms.
Just (mis)use the textarea for coding and listen to what you get. There are some examples on how you would do basic things.
It helps me sometimes, when I just want to check something out very quickly.
Of course, this is a very inefficient way to implement audio filter algorithms, for several reasons. It is just a demo. If you’re interested in how to implement algorithms properly, I recommend looking into the Web Audio API’s Audio Worklets or watching the talk “C++ in the Audio Industry” by Timur Doumler.
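As a taste of the kind of simple per-sample algorithm you might paste into CAAT, here is a classic one-pole lowpass. The exact callback signature in CAAT’s textarea is the tool’s own; this is a standalone sketch:

```javascript
// One-pole lowpass: y[n] = (1 - a) * x[n] + a * y[n-1],
// with a = exp(-2 * pi * fc / fs) (the standard coefficient formula).
function makeOnePoleLowpass(cutoffHz, sampleRate) {
  const a = Math.exp((-2 * Math.PI * cutoffHz) / sampleRate);
  let y = 0; // previous output sample (filter state)
  return function processSample(x) {
    y = (1 - a) * x + a * y;
    return y;
  };
}

// Feed it a step input: the output rises gradually toward 1.
const lp = makeOnePoleLowpass(1000, 44100);
const out = [1, 1, 1, 1].map((x) => lp(x));
```

Each call processes one sample and keeps its state in the closure, which is exactly the shape of algorithm a per-sample tester like this is made for.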