Class BaseAudioContext

Namespace
CSharpToJavaScript.APIs.JS
Assembly
CSharpToJavaScript.dll

The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces.

[Value("BaseAudioContext")]
public class BaseAudioContext : EventTarget
Inheritance
EventTarget
BaseAudioContext
Derived
AudioContext
OfflineAudioContext

Remarks

A BaseAudioContext can be a target of events; therefore, it implements the EventTarget interface.
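
A minimal usage sketch, assuming the library's AudioContext class (which derives from BaseAudioContext) exposes a parameterless constructor mirroring new AudioContext() in JavaScript:

// Sketch only: AudioContext derives from BaseAudioContext, so the members
// documented below are reached through it (or through OfflineAudioContext).
AudioContext context = new AudioContext();          // assumed constructor

Number now = context.CurrentTime;                   // ever-increasing timestamp in seconds
AudioDestinationNode output = context.Destination;  // final output of the audio graph
OscillatorNode osc = context.CreateOscillator();    // one of the factory methods below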

- Using the Web Audio API
- AudioContext
- OfflineAudioContext

See also on MDN

Constructors

BaseAudioContext()

public BaseAudioContext()

Properties

AudioWorklet

The audioWorklet read-only property of the
BaseAudioContext interface returns an instance of
AudioWorklet that can be used for adding
AudioWorkletProcessor-derived classes which implement custom audio
processing.

[Value("audioWorklet")]
public AudioWorklet AudioWorklet { get; }

Property Value

AudioWorklet

An AudioWorklet instance.

Remarks

CurrentTime

The currentTime read-only property of the BaseAudioContext
interface returns a double representing an ever-increasing hardware timestamp in seconds that
can be used for scheduling audio playback, visualizing timelines, etc. It starts at 0.

[Value("currentTime")]
public Number CurrentTime { get; }

Property Value

Number

A floating point number.

Remarks
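
A sketch of scheduling playback relative to CurrentTime; the Connect(), Start() and Stop() members used here belong to the node classes, are not documented on this page, and are assumed to mirror their Web Audio API counterparts:

AudioContext context = new AudioContext();   // assumed derived context
OscillatorNode osc = context.CreateOscillator();
osc.Connect(context.Destination);            // assumed, mirrors AudioNode.connect()

// Play a tone from half a second from "now" until 1.5 seconds from "now".
Number now = context.CurrentTime;
osc.Start(now + 0.5);                        // assumed, mirrors start(); Number arithmetic assumed
osc.Stop(now + 1.5);                         // assumed, mirrors stop()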

Destination

The destination property of the BaseAudioContext
interface returns an AudioDestinationNode representing the final
destination of all audio in the context. It often represents an actual audio-rendering
device such as your device's speakers.

[Value("destination")]
public AudioDestinationNode Destination { get; }

Property Value

AudioDestinationNode

An AudioDestinationNode.

Remarks
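
In practice, Destination is the last node a graph is connected to. A sketch (Connect() on AudioNode is assumed to mirror connect() and is not documented here):

AudioContext context = new AudioContext();   // assumed derived context
GainNode gain = context.CreateGain();
gain.Connect(context.Destination);           // assumed; routes the graph's output to the speakers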

Listener

The listener property of the BaseAudioContext interface
returns an AudioListener object that can then be used for
implementing 3D audio spatialization.

[Value("listener")]
public AudioListener Listener { get; }

Property Value

AudioListener

An AudioListener object.

Remarks

Onstatechange

[Value("onstatechange")]
public EventHandlerNonNull Onstatechange { get; set; }

Property Value

EventHandlerNonNull

RenderQuantumSize

[Value("renderQuantumSize")]
public ulong RenderQuantumSize { get; }

Property Value

ulong

SampleRate

The sampleRate property of the BaseAudioContext interface returns a floating point number representing the sample rate, in samples per second, used by all nodes in this audio context.
Because every node in the context shares this single rate, sample-rate converters are not supported.

[Value("sampleRate")]
public Number SampleRate { get; }

Property Value

Number

A floating point number indicating the audio context's sample rate, in samples per
second.

Remarks
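
The sample rate is handy when converting seconds to sample frames, for example when sizing a buffer for CreateBuffer(ulong, ulong, Number). A sketch, assuming Number converts to and from double implicitly:

AudioContext context = new AudioContext();              // assumed derived context

// Sample frames needed to hold two seconds of audio at the context's rate.
ulong frameCount = (ulong)(2.0 * context.SampleRate);   // Number-to-double conversion assumed
AudioBuffer buffer = context.CreateBuffer(2, frameCount, context.SampleRate);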

State

The state read-only property of the BaseAudioContext
interface returns the current state of the AudioContext.

[Value("state")]
public AudioContextState State { get; }

Property Value

AudioContextState

A string. Possible values are suspended, running, and closed.

Remarks
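
A sketch of inspecting the state before acting on the context; the exact member names of AudioContextState are assumptions here (they mirror the suspended, running and closed string values):

AudioContext context = new AudioContext();           // assumed derived context

if (context.State == AudioContextState.Suspended)    // enum member name assumed
{
    // e.g. resume the context in response to a user gesture
}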

Methods

CreateAnalyser()

The createAnalyser() method of the
BaseAudioContext interface creates an AnalyserNode, which
can be used to expose audio time and frequency data and create data visualizations.

[Value("createAnalyser")]
public AnalyserNode CreateAnalyser()

Returns

AnalyserNode

An AnalyserNode.

Remarks

NOTE

The AnalyserNode(BaseAudioContext, AnalyserOptions) constructor is the
recommended way to create an AnalyserNode; see
Creating an AudioNode.

NOTE

For more on using this node, see the
AnalyserNode page.
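
A sketch of wiring an analyser into a graph and reading frequency data; FftSize, FrequencyBinCount, GetByteFrequencyData() and Connect() belong to other classes, are not documented here, and are assumed to mirror the Web Audio API:

AudioContext context = new AudioContext();        // assumed derived context
AnalyserNode analyser = context.CreateAnalyser();
analyser.FftSize = 2048;                          // assumed, mirrors fftSize

OscillatorNode source = context.CreateOscillator();
source.Connect(analyser);                         // assumed, mirrors connect()
analyser.Connect(context.Destination);

Uint8Array data = new Uint8Array(analyser.FrequencyBinCount); // assumed type and member
analyser.GetByteFrequencyData(data);              // assumed; fills the array with the current spectrum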

- Using the Web Audio API

See also on MDN

CreateBiquadFilter()

The createBiquadFilter() method of the BaseAudioContext
interface creates a BiquadFilterNode, which represents a second order
filter configurable as several different common filter types.

[Value("createBiquadFilter")]
public BiquadFilterNode CreateBiquadFilter()

Returns

BiquadFilterNode

A BiquadFilterNode.

Remarks
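
A sketch of configuring the returned node as a lowpass filter; the Type and Frequency members of BiquadFilterNode (and Connect()) are assumptions mirroring type and frequency:

AudioContext context = new AudioContext();            // assumed derived context
BiquadFilterNode filter = context.CreateBiquadFilter();
filter.Type = "lowpass";                              // assumed member; may be an enum in this binding
filter.Frequency.Value = 1000;                        // assumed member; attenuate content above ~1 kHz
filter.Connect(context.Destination);                  // assumed, mirrors connect()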

CreateBuffer(ulong, ulong, Number)

The createBuffer() method of the BaseAudioContext
interface is used to create a new, empty AudioBuffer object, which
can then be populated with data and played via an AudioBufferSourceNode.

[Value("createBuffer")]
public AudioBuffer CreateBuffer(ulong numberOfChannels, ulong length, Number sampleRate)

Parameters

numberOfChannels ulong
length ulong
sampleRate Number

Returns

AudioBuffer

An AudioBuffer configured based on the specified options.

Remarks

For more details about audio buffers, check out the AudioBuffer
reference page.

NOTE

createBuffer() used to be able to take compressed
data and give back decoded samples, but this ability was removed from the specification,
because all the decoding was done on the main thread, so
createBuffer() was blocking other code execution. The asynchronous method
decodeAudioData() does the same thing — takes compressed audio, such as an
MP3 file, and directly gives you back an AudioBuffer that you can
then play via an AudioBufferSourceNode. For simple use cases
like playing an MP3, decodeAudioData() is what you should be using.

For an in-depth explanation of how audio buffers work, including what the parameters do, read Audio buffers: frames, samples and channels from our Basic concepts guide.
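
A sketch of creating a two-second stereo buffer at the context's own rate; GetChannelData() on AudioBuffer is an assumption mirroring getChannelData():

AudioContext context = new AudioContext();               // assumed derived context
ulong frames = (ulong)(2.0 * context.SampleRate);        // two seconds of sample frames
AudioBuffer buffer = context.CreateBuffer(2, frames, context.SampleRate);

// The buffer starts out silent; each channel can then be filled with
// sample values in the range [-1, 1].
Float32Array left = buffer.GetChannelData(0);            // assumed member and type
left[0] = 0.5;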

- Using the Web Audio API

See also on MDN

CreateBufferSource()

The createBufferSource() method of the BaseAudioContext
interface is used to create a new AudioBufferSourceNode, which can be
used to play audio data contained within an AudioBuffer object.
AudioBuffers are created using CreateBuffer(ulong, ulong, Number) or returned by DecodeAudioData(ArrayBuffer, DecodeSuccessCallback?, DecodeErrorCallback?) when it successfully decodes an audio track.

[Value("createBufferSource")]
public AudioBufferSourceNode CreateBufferSource()

Returns

AudioBufferSourceNode

An AudioBufferSourceNode.

Remarks
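
A sketch of playing a buffer through a source node; the Buffer property, Connect() and Start() are assumptions mirroring buffer, connect() and start():

AudioContext context = new AudioContext();                  // assumed derived context
AudioBuffer buffer = context.CreateBuffer(1, 22050, 44100); // half a second of mono audio
AudioBufferSourceNode source = context.CreateBufferSource();

source.Buffer = buffer;                  // assumed, mirrors AudioBufferSourceNode.buffer
source.Connect(context.Destination);     // assumed, mirrors connect()
source.Start();                          // assumed, mirrors start(); begins playback immediately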

CreateChannelMerger(ulong)

The createChannelMerger() method of the BaseAudioContext interface creates a ChannelMergerNode,
which combines channels from multiple audio streams into a single audio stream.

[Value("createChannelMerger")]
public ChannelMergerNode CreateChannelMerger(ulong numberOfInputs = 0)

Parameters

numberOfInputs ulong

Returns

ChannelMergerNode

A ChannelMergerNode.

Remarks

CreateChannelSplitter(ulong)

The createChannelSplitter() method of the BaseAudioContext interface is used to create a ChannelSplitterNode,
which is used to access the individual channels of an audio stream and process them separately.

[Value("createChannelSplitter")]
public ChannelSplitterNode CreateChannelSplitter(ulong numberOfOutputs = 0)

Parameters

numberOfOutputs ulong

Returns

ChannelSplitterNode

A ChannelSplitterNode.

Remarks

CreateConstantSource()

The createConstantSource() method of the BaseAudioContext interface creates a
ConstantSourceNode object, which is an audio source that continuously
outputs a monaural (one-channel) sound signal whose samples all have the same
value.

[Value("createConstantSource")]
public ConstantSourceNode CreateConstantSource()

Returns

ConstantSourceNode

A ConstantSourceNode instance.

Remarks

CreateConvolver()

The createConvolver() method of the BaseAudioContext
interface creates a ConvolverNode, which is commonly used to apply
reverb effects to your audio. See the spec definition of Convolution for more information.

[Value("createConvolver")]
public ConvolverNode CreateConvolver()

Returns

ConvolverNode

A ConvolverNode.

Remarks

CreateDelay(Number)

The createDelay() method of the
BaseAudioContext interface is used to create a DelayNode,
which is used to delay the incoming audio signal by a certain amount of time.

[Value("createDelay")]
public DelayNode CreateDelay(Number maxDelayTime = null)

Parameters

maxDelayTime Number

Returns

DelayNode

A DelayNode. The default DelayTime is 0
seconds.

Remarks
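
A sketch of a half-second echo tap; the DelayTime member of DelayNode, Connect(), Start() and the implicit double-to-Number conversion are assumptions:

AudioContext context = new AudioContext();      // assumed derived context
DelayNode delay = context.CreateDelay(2.0);     // allow delays of up to two seconds
delay.DelayTime.Value = 0.5;                    // assumed, mirrors DelayNode.delayTime

OscillatorNode source = context.CreateOscillator();
source.Connect(context.Destination);            // dry signal (Connect() assumed)
source.Connect(delay);                          // delayed copy of the signal
delay.Connect(context.Destination);
source.Start();                                 // assumed, mirrors start()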

CreateDynamicsCompressor()

The createDynamicsCompressor() method of the BaseAudioContext interface is used to create a DynamicsCompressorNode, which can be used to apply compression to an audio signal.

[Value("createDynamicsCompressor")]
public DynamicsCompressorNode CreateDynamicsCompressor()

Returns

DynamicsCompressorNode

A DynamicsCompressorNode.

Remarks

Compression lowers the volume of the loudest parts of the signal and raises the volume
of the softest parts. Overall, a louder, richer, and fuller sound can be achieved. It is
especially important in games and musical applications where large numbers of individual
sounds are played simultaneously, where you want to control the overall signal level and
help avoid clipping (distorting) of the audio output.

NOTE

The DynamicsCompressorNode(BaseAudioContext, DynamicsCompressorOptions)
constructor is the recommended way to create a DynamicsCompressorNode; see
Creating an AudioNode.
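
A sketch of inserting a compressor between a source and the destination; the Threshold and Ratio parameters and Connect() are assumptions mirroring threshold, ratio and connect():

AudioContext context = new AudioContext();                       // assumed derived context
DynamicsCompressorNode compressor = context.CreateDynamicsCompressor();
compressor.Threshold.Value = -50;    // assumed; level (in dB) above which compression starts
compressor.Ratio.Value = 12;         // assumed; dB of input change per 1 dB of output change

OscillatorNode source = context.CreateOscillator();
source.Connect(compressor);          // assumed, mirrors connect()
compressor.Connect(context.Destination);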

- Using the Web Audio API

See also on MDN

CreateGain()

The createGain() method of the BaseAudioContext
interface creates a GainNode, which can be used to control the
overall gain (or volume) of the audio graph.

[Value("createGain")]
public GainNode CreateGain()

Returns

GainNode

A GainNode, which takes one or more audio sources as input and outputs
audio whose volume has been adjusted to the level specified by the node's
Gain a-rate parameter.

Remarks
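
A sketch of a simple volume control; the Gain member of GainNode, Connect() and Start() are assumptions mirroring gain, connect() and start():

AudioContext context = new AudioContext();   // assumed derived context
GainNode gain = context.CreateGain();
gain.Gain.Value = 0.25;                      // assumed; play at a quarter of full volume

OscillatorNode source = context.CreateOscillator();
source.Connect(gain);                        // assumed, mirrors connect()
gain.Connect(context.Destination);
source.Start();                              // assumed, mirrors start()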

CreateIIRFilter(List<Number>, List<Number>)

The createIIRFilter() method of the BaseAudioContext interface creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter that can be configured to serve as various types of filter.

[Value("createIIRFilter")]
public IIRFilterNode CreateIIRFilter(List<Number> feedforward, List<Number> feedback)

Parameters

feedforward List<Number>
feedback List<Number>

Returns

IIRFilterNode

An IIRFilterNode implementing the filter with the specified feedback and
feedforward coefficient arrays.

Remarks
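
A sketch passing small feedforward and feedback coefficient lists; the double literals are assumed to convert implicitly to Number, and Connect() is assumed to mirror connect():

AudioContext context = new AudioContext();   // assumed derived context

// A simple first-order lowpass-style response: unity gain at DC, zero at Nyquist.
// At least one feedforward value and the first feedback value must be non-zero.
List<Number> feedforward = new List<Number> { 0.1, 0.1 };   // implicit conversions assumed
List<Number> feedback = new List<Number> { 1.0, -0.8 };

IIRFilterNode filter = context.CreateIIRFilter(feedforward, feedback);
filter.Connect(context.Destination);         // assumed, mirrors connect()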

CreateOscillator()

The createOscillator() method of the BaseAudioContext
interface creates an OscillatorNode, a source representing a periodic
waveform. It basically generates a constant tone.

[Value("createOscillator")]
public OscillatorNode CreateOscillator()

Returns

OscillatorNode

An OscillatorNode.

Remarks
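
A sketch of producing a short 440 Hz tone; Frequency, Connect(), Start(), Stop() and the Number arithmetic are assumptions mirroring the Web Audio API:

AudioContext context = new AudioContext();       // assumed derived context
OscillatorNode osc = context.CreateOscillator();
osc.Frequency.Value = 440;                       // assumed, mirrors frequency; concert A
osc.Connect(context.Destination);                // assumed, mirrors connect()
osc.Start(context.CurrentTime);                  // assumed, mirrors start()
osc.Stop(context.CurrentTime + 2);               // assumed; stop after two seconds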

CreatePanner()

The createPanner() method of the BaseAudioContext
interface is used to create a new PannerNode, which is used to
spatialize an incoming audio stream in 3D space.

[Value("createPanner")]
public PannerNode CreatePanner()

Returns

PannerNode

A PannerNode.

Remarks

The panner node is spatialized in relation to the AudioContext's
AudioListener (defined by the BaseAudioContext.Listener
attribute), which represents the position and orientation of the person listening to the
audio.

NOTE

The PannerNode(BaseAudioContext, PannerOptions)
constructor is the recommended way to create a PannerNode; see
Creating an AudioNode.

- Using the Web Audio API

See also on MDN

CreatePeriodicWave(List<Number>, List<Number>, PeriodicWaveConstraints)

The createPeriodicWave() method of the BaseAudioContext interface is used to create a PeriodicWave. This wave is used to define a periodic waveform that can be used to shape the output of an OscillatorNode.

[Value("createPeriodicWave")]
public PeriodicWave CreatePeriodicWave(List<Number> real, List<Number> imag, PeriodicWaveConstraints constraints = null)

Parameters

real List<Number>
imag List<Number>
constraints PeriodicWaveConstraints

Returns

PeriodicWave

A PeriodicWave.

Remarks
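
A sketch building a custom waveform from cosine (real) and sine (imag) coefficients and applying it to an oscillator; SetPeriodicWave() on OscillatorNode and the double-to-Number conversions are assumptions:

AudioContext context = new AudioContext();            // assumed derived context

// Index 0 of each list is the DC term and is ignored; index 1 is the fundamental.
List<Number> real = new List<Number> { 0, 1, 0.5 };   // implicit conversions assumed
List<Number> imag = new List<Number> { 0, 0, 0 };

PeriodicWave wave = context.CreatePeriodicWave(real, imag);
OscillatorNode osc = context.CreateOscillator();
osc.SetPeriodicWave(wave);                            // assumed, mirrors setPeriodicWave()
osc.Connect(context.Destination);                     // assumed, mirrors connect()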

CreateScriptProcessor(ulong, ulong, ulong)

IMPORTANT

Deprecated: this feature is no longer recommended.

The createScriptProcessor() method of the BaseAudioContext interface
creates a ScriptProcessorNode used for direct audio processing.

[Value("createScriptProcessor")]
public ScriptProcessorNode CreateScriptProcessor(ulong bufferSize = 0, ulong numberOfInputChannels = 0, ulong numberOfOutputChannels = 0)

Parameters

bufferSize ulong
numberOfInputChannels ulong
numberOfOutputChannels ulong

Returns

ScriptProcessorNode

A ScriptProcessorNode.

Remarks

NOTE

This feature was replaced by AudioWorklets and the AudioWorkletNode interface.

- Using the Web Audio API

See also on MDN

CreateStereoPanner()

The createStereoPanner() method of the BaseAudioContext interface creates a StereoPannerNode, which can be used to apply
stereo panning to an audio source.
It positions an incoming audio stream in a stereo image using a low-cost panning algorithm.

[Value("createStereoPanner")]
public StereoPannerNode CreateStereoPanner()

Returns

StereoPannerNode

A StereoPannerNode.

Remarks

CreateWaveShaper()

The createWaveShaper() method of the BaseAudioContext
interface creates a WaveShaperNode, which represents a non-linear
distortion. It is used to apply distortion effects to your audio.

[Value("createWaveShaper")]
public WaveShaperNode CreateWaveShaper()

Returns

WaveShaperNode

A WaveShaperNode.

Remarks

DecodeAudioData(ArrayBuffer, DecodeSuccessCallback?, DecodeErrorCallback?)

The decodeAudioData() method of the BaseAudioContext
interface is used to asynchronously decode audio file data contained in an
ArrayBuffer that is loaded from fetch(),
XMLHttpRequest, or FileReader. The decoded
AudioBuffer is resampled to the AudioContext's sampling
rate, then passed to a callback or promise.

[Value("decodeAudioData")]
public Task<AudioBuffer> DecodeAudioData(ArrayBuffer audioData, DecodeSuccessCallback? successCallback = null, DecodeErrorCallback? errorCallback = null)

Parameters

audioData ArrayBuffer
successCallback DecodeSuccessCallback
errorCallback DecodeErrorCallback

Returns

Task<AudioBuffer>

A Promise object that fulfills with the decodedData. If you are using the
callback syntax, you can ignore this return value and handle the result in the success callback instead.

Remarks

This is the preferred method of creating an audio source for Web Audio API from an
audio track. This method only works on complete file data, not fragments of audio file
data.

This function implements two alternative ways to asynchronously return the audio data or error messages: it returns a Promise that fulfills with the audio data, and also accepts callback arguments to handle success or failure. The primary method of interfacing with this function is via its Promise return value, and the callback parameters are provided for legacy reasons.
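
A sketch of the promise-style usage via the returned Task, inside an async method; how the ArrayBuffer is obtained is left out, and the Buffer, Connect() and Start() members used afterwards are assumptions:

// 'audioData' is an ArrayBuffer obtained elsewhere (e.g. from a fetch response).
AudioContext context = new AudioContext();             // assumed derived context
AudioBuffer decoded = await context.DecodeAudioData(audioData);

AudioBufferSourceNode source = context.CreateBufferSource();
source.Buffer = decoded;                               // assumed, mirrors buffer
source.Connect(context.Destination);                   // assumed, mirrors connect()
source.Start();                                        // assumed, mirrors start()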

- Using the Web Audio API

See also on MDN