I tried it out today during a coffee break and captured a “photosphere” of Cologne’s university campus on a Samsung Galaxy S7. I used the Cardboard Camera app because Germans apparently do not have access to the default Google Camera in the Play Store.
If you are anything like me, you are curious about the current state and the future of Web Audio. So I asked one of the Web Audio API spec editors, Mozilla’s Paul Adenot, if I could shoot a few questions his way. He said sure and was kind enough to take some time and answer them in depth. Here are his answers, stuffed with lots of useful information. Continue reading “Interview with Paul Adenot, Web Audio Spec Editor”
Have you ever come across digital clipping in web audio apps? I certainly have, several times (mostly in my own apps, though). This undesired effect occurs when you play several sound sources at the same time and the resulting signal is louder than the maximum of 0 dBFS. Since a digital system cannot reproduce higher amplitudes, you will hear nasty distortion and get an unsightly waveform looking like this:
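To make the effect concrete, here is a minimal plain-JavaScript sketch (the function names are my own, not from any particular app): summing two full-scale sources overflows the representable range, and the simplest remedy is to scale each source down before mixing.

```javascript
// Summing two loud sources can exceed the [-1, 1] range a digital
// system can represent; everything beyond 0 dBFS gets flattened.
function mixAndClip(a, b) {
  return a.map((sample, i) => {
    const sum = sample + b[i];
    // Hard clipping: values outside [-1, 1] are clamped.
    return Math.max(-1, Math.min(1, sum));
  });
}

// One simple remedy: attenuate each source so that even the
// worst-case sum stays at or below 0 dBFS.
function mixWithHeadroom(sources) {
  const gain = 1 / sources.length;
  return sources[0].map((_, i) =>
    sources.reduce((sum, src) => sum + src[i] * gain, 0)
  );
}

const a = [0.8, -0.9, 0.7];
const b = [0.6, -0.5, 0.8];
const clipped = mixAndClip(a, b);       // peaks flattened at ±1
const safe = mixWithHeadroom([a, b]);   // stays below 0 dBFS
```

In a real Web Audio graph, routing everything through a `DynamicsCompressorNode` before the destination is another common safeguard against this.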
I just wanted to mention that I built this thing called BeatSketch last year. It lets you make music on the web without having to know much about making music.
BeatSketch from Sebastian Zimmer is a collaborative music production tool that Sebastian developed for his Master’s degree in Computer Science. A song consists of multiple tracks, and each track is backed by a grid-based sequencer. Any changes you make are synchronised between connected collaborators immediately. It also supports mixing the final song down to a WAV file for downloading. An impressive set of features and a very useful exploration of how collaborative editing on the web can be implemented.
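A WAV mixdown like this boils down to wrapping the rendered samples in a RIFF header. The following is a hypothetical minimal sketch (not BeatSketch’s actual code) that encodes mono Float32 samples as 16-bit PCM WAV:

```javascript
// Minimal 16-bit PCM mono WAV encoder (illustrative sketch only).
// Takes float samples in [-1, 1] and returns an ArrayBuffer that
// could be wrapped in a Blob and offered for download.
function encodeWav(samples, sampleRate) {
  const bytesPerSample = 2;
  const dataSize = samples.length * bytesPerSample;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);

  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  writeString(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);                // RIFF chunk size
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);                          // fmt chunk size
  view.setUint16(20, 1, true);                           // format: PCM
  view.setUint16(22, 1, true);                           // channels: mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * bytesPerSample, true); // byte rate
  view.setUint16(32, bytesPerSample, true);              // block align
  view.setUint16(34, 16, true);                          // bits per sample
  writeString(36, "data");
  view.setUint32(40, dataSize, true);

  // Convert floats in [-1, 1] to signed 16-bit integers.
  samples.forEach((s, i) => {
    const clamped = Math.max(-1, Math.min(1, s));
    view.setInt16(44 + i * bytesPerSample, clamped * 0x7fff, true);
  });
  return buffer;
}
```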
Lissajous curves are fun. And who doesn’t dream of standing right inside one all the time? The boys from Tame Impala certainly do, because some of their concerts’ light shows consisted of little more than Lissajous curves:
When I was at one of their shows, I saw that they had put a camera in front of an old analogue oscilloscope in a corner of the stage to capture them.
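On an oscilloscope, a Lissajous figure is simply what you get when two sine signals drive the x and y deflection. A small sketch (parameter names are my own) that samples such a curve:

```javascript
// Sample a Lissajous curve: x and y are two sine signals with
// frequencies freqX and freqY and a relative phase offset.
function lissajousPoints(freqX, freqY, phase, steps) {
  const points = [];
  for (let i = 0; i < steps; i++) {
    const t = (i / steps) * 2 * Math.PI;
    points.push({
      x: Math.sin(freqX * t + phase),
      y: Math.sin(freqY * t),
    });
  }
  return points;
}

// Equal frequencies with a 90° phase offset trace a circle;
// other integer ratios give the classic knot-like figures.
const circle = lissajousPoints(1, 1, Math.PI / 2, 128);
```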
Playing around with it, I got the idea to use the Web Audio API to spatialize the sound of an object within the matrix, so that a person wearing headphones could not only see but also hear where an object is located.
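In the browser this is a job for the Web Audio API’s `PannerNode` (or `StereoPannerNode` for plain left/right placement). The core of the left/right part can be sketched as equal-power panning; the function below is my own simplified illustration, not the actual spec algorithm:

```javascript
// Equal-power stereo panning: pan in [-1 (hard left), 1 (hard right)].
// Using cos/sin keeps the total power (left² + right²) constant, so
// the source does not get quieter in the middle of the stereo field.
function equalPowerGains(pan) {
  const angle = ((pan + 1) / 2) * (Math.PI / 2); // map pan to [0, π/2]
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

const center = equalPowerGains(0);   // both channels at ~0.707
const hardLeft = equalPowerGains(-1); // left 1, right 0
```

In a real Web Audio graph you would instead set `stereoPannerNode.pan.value` (or a `PannerNode` position) and let the browser apply gains like these per channel.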
I fell in love with synthesized bass sounds when I first heard Joan as Police Woman’s performance of Holy Fire on Later:
After that I enthusiastically tried to recreate this sound with the awesome Moog emulator Monark by Native Instruments.
Monark already comes with some quite good presets. Here you can download the one I created to get as close to Joan’s bass sound as possible. It is based on the preset “Humble Bee”, with a few tiny adjustments:
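For the curious: the general principle behind a Minimoog-style bass patch is subtractive synthesis, i.e. a bright oscillator run through a lowpass filter. Here is a deliberately naive plain-JavaScript sketch of that idea (an aliasing sawtooth and a one-pole filter, nothing like Monark’s actual model):

```javascript
// Naive sawtooth oscillator: a rising ramp in [-1, 1].
function sawWave(freq, sampleRate, length) {
  const out = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const phase = (i * freq / sampleRate) % 1;
    out[i] = 2 * phase - 1;
  }
  return out;
}

// One-pole lowpass: rolls off the harsh upper harmonics of the saw.
// Coefficient from the standard RC smoothing approximation.
function onePoleLowpass(samples, cutoff, sampleRate) {
  const alpha = 1 - Math.exp(-2 * Math.PI * cutoff / sampleRate);
  const out = new Float32Array(samples.length);
  let y = 0;
  for (let i = 0; i < samples.length; i++) {
    y += alpha * (samples[i] - y);
    out[i] = y;
  }
  return out;
}

// One second of a 55 Hz saw (an A1 bass note) with the highs
// rolled off at 400 Hz — a crude "synth bass" tone.
const bass = onePoleLowpass(sawWave(55, 44100, 44100), 400, 44100);
```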