Have you ever come across digital clipping in web audio apps? I certainly have, several times (mostly in my own apps, though). This undesirable effect occurs when you play several sound sources at the same time, producing a summed signal that is louder than the maximum of 0 dBFS. Since a digital system is unable to reproduce higher amplitudes, you will hear nasty distortion and get an unsightly waveform looking like this:
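To see why the distortion happens, here's a minimal sketch in plain JavaScript (no Web Audio needed): two full-scale sine waves summed together exceed the [-1, 1] sample range, and everything beyond that range gets flattened. The frequencies and buffer length are arbitrary choices for illustration.

```javascript
// Generate a full-scale sine wave as raw Float32 samples.
function sine(freq, sampleRate, length) {
  const out = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    out[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  return out;
}

// Mix two signals and hard-clip the result, as a DAC effectively does.
function mixAndClip(a, b) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    const sum = a[i] + b[i];                  // can reach ±2.0, i.e. +6 dBFS
    out[i] = Math.max(-1, Math.min(1, sum));  // clamped at 0 dBFS
  }
  return out;
}

const a = sine(440, 44100, 1024);
const b = sine(330, 44100, 1024);
const mixed = mixAndClip(a, b);
const clippedSamples = [...mixed].filter(s => s === 1 || s === -1).length;
console.log(clippedSamples > 0); // some sample peaks are flattened: distortion
```

Those flattened peaks are exactly the plateaus you see in a clipped waveform. The usual fix is to scale the mix down with a GainNode or tame peaks with a DynamicsCompressorNode before they reach the destination.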
I just wanted to mention that I made this thing called BeatSketch last year. It lets you make music on the web without requiring much knowledge of music production.
BeatSketch from Sebastian Zimmer is a collaborative music production tool that Sebastian developed for his Master’s degree in Computer Science. A song consists of multiple tracks, and each track is backed by a grid-based sequencer. Any changes you make are synchronised between connected collaborators immediately. It also supports mixing the final song down to a WAV file for downloading. An impressive set of features and a very useful exploration of possible methods of implementing collaborative working.
Chris Lowis on Web Audio Weekly #43
Has anybody created an emoji keyboard that’s actually a piano keyboard for writing musical notation? Wanted a quick way to tweet a melody.
— AudioGrains (@AudioGrainsBlog) January 5, 2016
Inspired by @AudioGrains' tweet, I made this little Emoji Piano.
Emoji Piano lets you create simple melodies and encodes them with Unicode emojis which you can share and tweet.
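I won't reproduce Emoji Piano's actual encoding here, but the general idea can be sketched in a few lines: map each note to one emoji and back. The emoji alphabet and note names below are purely my own illustrative assumptions, not the app's real scheme.

```javascript
// Hypothetical note-to-emoji alphabet; Emoji Piano's real mapping may differ.
const NOTES  = ['C', 'D', 'E', 'F', 'G', 'A', 'B'];
const EMOJIS = ['🍏', '🍊', '🍋', '🍉', '🍇', '🍓', '🍒'];

// Encode a melody (array of note names) as a tweetable emoji string.
function encode(melody) {
  return melody.map(n => EMOJIS[NOTES.indexOf(n)]).join('');
}

// Decode an emoji string back into note names. Spreading a string
// iterates by code point, so each emoji stays in one piece.
function decode(tweet) {
  return [...tweet].map(e => NOTES[EMOJIS.indexOf(e)]);
}

const tweet = encode(['C', 'E', 'G', 'E', 'C']);
console.log(tweet);          // a shareable, tweetable string
console.log(decode(tweet));  // back to the melody
```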
Lissajous curves are fun. And who doesn't dream of standing right inside one? The boys from Tame Impala certainly do, because some of their concerts' light shows consisted of little more than Lissajous curves:
When I was at one of their shows, I saw that they had put a camera in front of an old analogue oscilloscope in a corner of the stage to capture the curves.
WebVR now makes it possible to fully immerse yourself in these curves.
Continue reading “Chilling inside a giant Lissajous curve with WebVR”
THREE.js developer Mr.doob has posted an important comment on this.
Playing around with it, I got the idea to use the Web Audio API to spatialize the sound of an object within the matrix, so that a person wearing headphones could not only see but also hear where an object is located.
Since the Web Audio API is great, you can do that with ease.
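The wiring is essentially source → PannerNode → destination, with the panner's position updated from the VR scene. To make the effect tangible, here is the panner's default "inverse" distance model as a pure function, following the gain formula given in the Web Audio spec (variable names are mine):

```javascript
// In the browser, the wiring would be roughly:
//   const panner = audioCtx.createPanner();
//   source.connect(panner).connect(audioCtx.destination);
//   panner.positionX.value = obj.x; // updated every frame from the scene
//
// The PannerNode's default 'inverse' distance model attenuates
// gain like this (formula from the Web Audio specification):
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance); // no boost inside refDistance
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

console.log(inverseDistanceGain(1)); // 1: full volume at the reference distance
console.log(inverseDistanceGain(3)); // ~0.33: three times as far, a third as loud
```

Combined with the direction-dependent panning the node applies on top, this distance attenuation is what lets you locate an object by ear alone.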
Continue reading “Wiring up WebAudio with WebVR”
Inspired by this article (German), I decided to build a kitchen radio from my old cell phone and a car speaker last year. Here's how it turned out, along with instructions. Continue reading “Building a kitchen radio from your old phone”
I fell in love with synthesized bass sounds when listening for the first time to Joan as Police Woman’s performance of Holy Fire on Later:
After that, I enthusiastically tried to recreate this sound with the awesome Moog emulator Monark by Native Instruments.
Monark already comes with some quite good presets. Here you can download the one I created to get as close as possible to Joan’s bass sound. It’s based on the preset “Humble Bee”, but with tiny adjustments:
Currently, I’m doing some XML manipulation and transformation in the web browser. Along the way, I have encountered some obstacles and collected tips and thoughts that I want to share with you.
Continue reading “XML in the browser”
I bought a record at HDtracks to see whether it has any benefits compared to CD-quality audio files with a sample rate of 44.1 kHz and a bit depth of 16. I want to share my findings with you.
Continue reading “I bought a record at HDtracks”
I LOVE Web Audio. It’s one of the most fun things in the browser right now. But doing more and more stuff with it, I came to realize that there are three limits that prevent this technology from making traditional pro audio software obsolete, at least at the moment:
When I open a big session in Samplitude, it can easily use up to 2 GB of memory. Chrome, on the other hand, allows about 200 MB of memory per web page. If your script tries to allocate more, the page crashes. That’s a good thing: older machines and mobile devices have their hardware limits, and you don’t want to push them too hard. But if you deal with AudioBuffers at high sample rates like 192 kHz, as pro users do, you may reach this limit very quickly (if the browser even supports such high sample rates). I reached the limit several times. Implementations are required to support sample rates for an OfflineAudioContext only up to 96 kHz.
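The numbers add up fast. A back-of-the-envelope calculation, assuming 32-bit float samples (which is how Web Audio stores AudioBuffer data internally):

```javascript
// Memory footprint of an AudioBuffer: every sample frame costs
// channels * 4 bytes, since samples are stored as 32-bit floats.
function bufferSizeMB(seconds, sampleRate, channels) {
  return seconds * sampleRate * channels * 4 / (1024 * 1024);
}

// A single 5-minute stereo track at 192 kHz:
console.log(bufferSizeMB(300, 192000, 2)); // ≈ 439 MB
```

One track alone already blows past a ~200 MB per-page budget, before you add more tracks, effect buffers, or undo copies.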
A browser is a browser. It’s a very universal piece of software you can do all sorts of things with. Since browsers are not dedicated pieces of software built for audio synthesis and manipulation, they usually use your system’s standard audio driver. On Windows, this is WASAPI (Windows Audio Session API). WASAPI in shared mode isn’t suitable for pro audio applications, as it introduces round-trip latencies well over 20 ms. This has gotten better with Windows 10, but it still cannot compete with drivers dedicated to real-time audio processing, like ASIO. In the best case, ASIO allows for latencies of about 2 ms, which can be less than the time a sound needs to travel from a speaker to your ear through the air.
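For intuition: buffer latency is simply buffered frames divided by sample rate (plus driver overhead), and the sound-through-air comparison follows directly from the speed of sound (roughly 343 m/s in room-temperature air). The buffer sizes below are illustrative assumptions:

```javascript
// Latency contributed by an audio buffer of a given size.
function bufferLatencyMs(frames, sampleRate) {
  return frames / sampleRate * 1000;
}

const asioMs = bufferLatencyMs(96, 48000); // a small ASIO buffer: 2 ms
const metersOfAir = 343 * asioMs / 1000;   // distance sound covers in that time
console.log(asioMs, metersOfAir);          // 2 ms ≈ 0.69 m of air
```

So 2 ms of latency is like sitting less than a metre from the speaker, while a 20 ms round trip corresponds to listening from almost 7 metres away.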
People (like me) once proposed that Chrome should implement ASIO support. But let’s be realistic: that is unlikely to happen.