With the shipment of the AudioWorklet feature in Chrome 64, it's reasonable to say 2018 was a good year for the Web Audio API. Nearly a year after its release, there are still relatively few examples for developers to draw from outside of the resources at Google Chrome Labs and dsp.audio. Those serve as great introductions to the interface, but with so few user-created examples to learn from, we're left to our own devices when figuring out how to use it in the wild. The reality is, getting AudioWorklets to play nice with React and other UI frameworks isn't as straightforward as it might seem. The aim of this article is to show programmers already familiar with the Web Audio API how to connect AudioWorklets to a React interface. For more info on the AudioWorklet specification itself, see the links at the very bottom of this article.
As a conservatory-trained musician entering the world of software engineering and DSP via computational musicology, I was (and still am) giddy at the prospect of creating web interfaces with a dedicated audio rendering thread. In short, upon hearing the news, I was ready to blast off! I fired up create-react-app, copied some examples from Chrome Labs, and ran yarn start, only to receive the following error:
Failed to compile.
./src/worklet/worklet-node.js
Line 2: 'AudioWorkletNode' is not defined  no-undef
Is my version of Chrome outdated? It's not? Why is AudioWorkletNode undefined? Thankfully, Dan Abramov was quick to relieve my confusion on StackOverflow (thanks, Dan!):
Create React App is configured to enforce that you access browser APIs like this with a window. qualifier. This way it's clear you're using a global and didn't forget to import something (which is very common).
This should work:
class MyWorkletNode extends window.AudioWorkletNode {
(In the future releases, ESLint will be aware of this global by default, and you will be able to remove window.)
Ok, no big deal! We should be good now, right? Not quite. Trying to load my AudioWorklet processor using context.audioWorklet.addModule() threw the vaguest, most damning error no developer would ever want to come across:
DOMException: The user aborted a request.
Back to StackOverflow I went. John Weisz at AudioNodes pointed out that this error may be a bug in the Chromium module loader:
…it parses the worklet/processor.js file by removing whitespace, which in turn causes it to have JavaScript syntax errors everywhere, which then finally causes this generic non-explanatory error message to show up.
He went on to suggest serving the module with the header Content-Type: application/javascript explicitly specified. I didn't know where or how to specify content headers for AudioWorklets, so I took a long break from them, hoping that more examples would pop up over time.
Serve your AudioWorklet processors from the public folder. Just do it. I was fiddling around and discovered that the addModule('path/to/your/module') method resolves its path against that folder by default. No imports, no requires, no {process.env.PUBLIC_URL}/my-worklet-processor needed.
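For example, here's a minimal sketch (assuming a processor file at public/worklet/bypass-processor.js):

```js
const context = new window.AudioContext();

async function loadProcessor() {
  // Create React App serves everything in public/ at the app's root,
  // so a path relative to that folder is all addModule() needs.
  await context.audioWorklet.addModule('worklet/bypass-processor.js');
}
```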
With all that behind us, it would be a great exercise to port the four audio processing demos from Google Chrome Labs to React: the Bypasser, One Pole Filter, Noise Generator, and BitCrusher. If you're impatient and want to dig into the entire codebase immediately, the link to the GitHub repo is listed at the very bottom. Simply clone it, install dependencies with yarn install, and run the app with yarn start.
Click here to view the demo.
In the following walkthrough, we'll create a dead-simple UI over create-react-app boilerplate code. We'll work with Ant Design components and gain familiarity with AudioWorklets by slightly modifying each processor to accept data via its MessagePort, which we'll use to toggle it on or off. The user will be able to choose between the four audio processing demos from a drop-down menu, and a button next to the drop-down will toggle the current node on and off. That's it! If you want to see the demo in action, click the link above.
Below is a broad overview of the project we are going to create. Directories and files we are adding to the create-react-app boilerplate are annotated with descriptions.
├── README.md
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   ├── manifest.json
│   └── worklet                      /* Contains AudioWorkletProcessors */
│       ├── bit-crusher-processor.js
│       ├── bypass-processor.js
│       ├── noise-generator.js
│       └── one-pole-processor.js
├── src
│   ├── App.css
│   ├── App.js                       /* Main UI */
│   ├── App.test.js
│   ├── Demos.js                     /* Functions that interface with the processors */
│   ├── index.css
│   ├── index.js
│   ├── logo.svg
│   └── serviceWorker.js
└── yarn.lock
Step 1: create-react-app react-audio-worklet
Step 2: Install dependencies. In this case, we're just using one package for the UI, so go ahead and run yarn add antd. Make sure to import 'antd/dist/antd.css' into index.js.
Step 3: Bring in the necessary components to create a drop-down menu and set up the initial state:
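There's no single canonical way to scaffold this, but here's a sketch; the state shape (currentDemo, isPlaying), the handler names, and the option values are my assumptions rather than anything prescribed by the demos:

```js
// App.js: initial scaffold (sketch; state and handler names are assumptions)
import React, { Component } from 'react';
import { Select, Button } from 'antd';
import './App.css';

const { Option } = Select;

class App extends Component {
  state = {
    currentDemo: null, // which processor demo is selected
    isPlaying: false,  // whether the current node is audible
  };

  render() {
    return (
      <div className="App">
        <Select placeholder="Select a demo" onChange={this.selectDemo}>
          <Option value="bypasser">Bypasser</Option>
          <Option value="onePoleFilter">One Pole Filter</Option>
          <Option value="noiseGenerator">Noise Generator</Option>
          <Option value="bitCrusher">BitCrusher</Option>
        </Select>
        <Button onClick={this.toggleCurrentDemo}>
          {this.state.isPlaying ? 'Stop' : 'Play'}
        </Button>
      </div>
    );
  }
}

export default App;
```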
Step 1: Keeping separation of concerns in mind, let’s include callbacks to trigger the audio demos from a separate file called Demos.js:
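Here's a sketch of two of the four callbacks, adapted from the Chrome Labs examples. The shared signature (take an AudioContext, return the AudioWorkletNode) is my assumption; the remaining two demos follow the same pattern:

```js
// Demos.js: callbacks that build each demo's audio graph (sketch)

export function bypasser(context) {
  // Route an oscillator through the processor registered in
  // public/worklet/bypass-processor.js.
  const oscillator = new OscillatorNode(context);
  const bypassNode = new window.AudioWorkletNode(context, 'bypass-processor');
  oscillator.connect(bypassNode).connect(context.destination);
  oscillator.start();
  return bypassNode;
}

export function noiseGenerator(context) {
  const modulator = new OscillatorNode(context, { frequency: 0.5 });
  const modGain = new GainNode(context, { gain: 0.75 });
  const noiseNode = new window.AudioWorkletNode(context, 'noise-generator');
  noiseNode.connect(context.destination);
  // Drive the processor's 'amplitude' AudioParam with a slow oscillator.
  const amplitudeParam = noiseNode.parameters.get('amplitude');
  modulator.connect(modGain).connect(amplitudeParam);
  modulator.start();
  return noiseNode;
}

// onePoleFilter and bitCrusher follow the same shape: construct the node,
// connect the graph, and return the AudioWorkletNode.
```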
Step 2: Create AudioWorklet processors in public/worklet:
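As an example, here's the bypass processor (a sketch based on the Chrome Labs original, with the MessagePort on/off toggle added; the boolean message convention is what the button in App.js will rely on):

```js
// public/worklet/bypass-processor.js: passes input straight to output,
// with a MessagePort-controlled on/off flag.

class BypassProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.isPlaying = true;
    // Bind the handler so `this` inside onMessage refers to the processor.
    this.port.onmessage = this.onMessage.bind(this);
  }

  onMessage(event) {
    // The main thread posts a boolean to toggle the node on or off.
    this.isPlaying = event.data;
  }

  process(inputs, outputs) {
    if (this.isPlaying) {
      const input = inputs[0];
      const output = outputs[0];
      for (let channel = 0; channel < input.length; ++channel) {
        output[channel].set(input[channel]);
      }
    }
    return true; // keep the processor alive even when silent
  }
}

registerProcessor('bypass-processor', BypassProcessor);
```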
Note: Like the demos, these are practically the Chrome Team's code verbatim, with the slight modification of binding the port's onMessage function to the AudioWorkletProcessor for better readability.
Step 3: Now we’ll create methods in App.js to handle the selection/loading of processor modules, as well as toggling playback of the currently selected module:
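Here's a sketch of those handlers. The method names, the processorModules map, and the this.currentNode field are my assumptions, and it presumes import * as demos from './Demos'; at the top of App.js:

```js
// Maps each demo to its processor module in the public folder (assumed names).
const processorModules = {
  bypasser: 'worklet/bypass-processor.js',
  onePoleFilter: 'worklet/one-pole-processor.js',
  noiseGenerator: 'worklet/noise-generator.js',
  bitCrusher: 'worklet/bit-crusher-processor.js',
};

// Inside the App class:
selectDemo = async (demoName) => {
  // Create the AudioContext lazily, on the first user gesture.
  if (!this.context) {
    this.context = new window.AudioContext();
  }
  // Silence the previous demo, if any, before starting a new one.
  if (this.currentNode) {
    this.currentNode.port.postMessage(false);
  }
  // Paths resolve against the public folder, as noted earlier.
  await this.context.audioWorklet.addModule(processorModules[demoName]);
  this.currentNode = demos[demoName](this.context);
  this.setState({ currentDemo: demoName, isPlaying: true });
};

toggleCurrentDemo = () => {
  if (!this.currentNode) return;
  const isPlaying = !this.state.isPlaying;
  // Each processor interprets the posted boolean as its on/off flag.
  this.currentNode.port.postMessage(isPlaying);
  this.setState({ isPlaying });
};
```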
Before we forget, let’s import Demos.js into the main app. Your final App.js should now read as follows:
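Condensed for space, the assembled file looks roughly like this (a sketch; the elided bodies are exactly the pieces from the steps above):

```js
// App.js: final assembly (sketch)
import React, { Component } from 'react';
import { Select, Button } from 'antd';
import * as demos from './Demos';
import './App.css';

const { Option } = Select;

class App extends Component {
  state = { currentDemo: null, isPlaying: false };

  selectDemo = async (demoName) => { /* load module, build graph (Step 3) */ };
  toggleCurrentDemo = () => { /* post on/off boolean (Step 3) */ };

  render() { /* Select + Button, as in the earlier scaffold */ }
}

export default App;
```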
Now go ahead and run yarn start and listen to those demos. Hear that? That's the sweet sound of Web Audio processing on its own dedicated rendering thread!
I hope this article has clarified how to avoid some common hiccups you may encounter when integrating AudioWorklets into React, and that it has helped you gain familiarity with the Worklet API itself. The Chrome WebAudio team has introduced powerful technology to the web platform with AudioWorklets, and with WebAssembly on the rise, the future of DSP on the web is looking bright.
Nathan Bloomfield is a conservatory-trained musician who collided with the world of software engineering and DSP via computational musicology.
LinkedIn | Instagram | Twitter | bloom510.art