AudioWorkletGlobalScope
The AudioWorkletGlobalScope interface of the Web Audio API represents a global execution context for user-supplied code, which defines custom AudioWorkletProcessor-derived classes.
Each BaseAudioContext has a single AudioWorklet available under the audioWorklet property, which runs its code in a single AudioWorkletGlobalScope.
As the global execution context is shared across the current BaseAudioContext, it's possible to define any other variables and perform any actions allowed in worklets, in addition to defining AudioWorkletProcessor-derived classes, as the sketch below shows.
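For example, module-level state defined in the worklet script is visible to every processor registered from it. This is a minimal sketch; the names sharedWavetable, WavetableProcessor, AnotherProcessor, and the registered processor names are our own illustrative choices, not part of the API:

// Module-level state: visible to every processor class defined
// in this AudioWorkletGlobalScope. The names are illustrative.
const sharedWavetable = new Float32Array(1024);

class WavetableProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    // This processor can read from sharedWavetable…
    return true;
  }
}

class AnotherProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    // …and so can any other processor in the same scope.
    return true;
  }
}

registerProcessor("wavetable-processor", WavetableProcessor);
registerProcessor("another-processor", AnotherProcessor);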
Instance properties
This interface also inherits properties defined on its parent interface, WorkletGlobalScope
.
currentFrame Read only
Returns an integer that represents the ever-increasing current sample-frame of the audio block being processed. It is incremented by 128 (the size of a render quantum) after the processing of each audio block. (A sketch using these properties together follows this list.)
currentTime Read only
Returns a double that represents the ever-increasing context time of the audio block being processed. It is equal to the currentTime property of the BaseAudioContext the worklet belongs to.
sampleRate Read only
Returns a float that represents the sample rate of the associated BaseAudioContext.
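These globals are typically read together inside a processor's process() method. The following is a minimal sketch; the ToneProcessor class, the "tone-processor" name, and the fixed 440 Hz frequency are our own illustrative choices, not part of the API:

// A tone generator that derives its phase from the global
// currentFrame and sampleRate properties.
class ToneProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];
    const frequency = 440; // illustrative fixed pitch
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        // currentFrame is the index of the first frame of this block,
        // so (currentFrame + i) / sampleRate is the time in seconds
        // of each sample being produced. currentTime would give the
        // context time at the start of the block directly.
        const t = (currentFrame + i) / sampleRate;
        channel[i] = Math.sin(2 * Math.PI * frequency * t);
      }
    }
    return true;
  }
}

registerProcessor("tone-processor", ToneProcessor);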
Instance methods
This interface also inherits methods defined on its parent interface, WorkletGlobalScope.
registerProcessor()
Registers a class derived from the AudioWorkletProcessor interface. The class can then be used by creating an AudioWorkletNode, providing its registered name.
Examples
In this example we output all global properties into the console in the constructor of a custom AudioWorkletProcessor.
First we need to define the processor and register it. Note that this should be done in a separate file.
// AudioWorkletProcessor defined in: test-processor.js
class TestProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    // Logs the current sample-frame and time at the moment of instantiation.
    // They are accessible from the AudioWorkletGlobalScope.
    console.log(currentFrame);
    console.log(currentTime);
  }

  // The process method is required; output silence,
  // which the outputs are already filled with.
  process(inputs, outputs, parameters) {
    return true;
  }
}

// Logs the sample rate, which is never going to change,
// because it's a read-only property of a BaseAudioContext
// and is set only during its instantiation.
console.log(sampleRate);

// You can declare any variables and use them in your processors,
// for example an ArrayBuffer with a wavetable.
const usefulVariable = 42;
console.log(usefulVariable);

registerProcessor("test-processor", TestProcessor);
Next, in our main script file we'll load the processor, create an instance of AudioWorkletNode (passing it the name of the processor), and connect the node to an audio graph. We should see the output of the console.log() calls in the console:
const audioContext = new AudioContext();
await audioContext.audioWorklet.addModule("test-processor.js");
const testNode = new AudioWorkletNode(audioContext, "test-processor");
testNode.connect(audioContext.destination);
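Note that addModule() returns a promise, so the snippet above relies on top-level await, which is only available inside a JavaScript module. In a classic script, one way to structure the same setup is to wrap it in an async function; this is a minimal sketch, and the setupAudio name is our own:

async function setupAudio() {
  const audioContext = new AudioContext();
  // Wait for the processor module to be loaded and registered
  // before constructing a node that refers to it by name.
  await audioContext.audioWorklet.addModule("test-processor.js");
  const testNode = new AudioWorkletNode(audioContext, "test-processor");
  testNode.connect(audioContext.destination);
}

setupAudio();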
Specifications
Specification: Web Audio API # AudioWorkletGlobalScope