Class BaseAudioContext
- Namespace
- CSharpToJavaScript.APIs.JS
- Assembly
- CSharpToJavaScript.dll
The BaseAudioContext interface of the Web Audio API acts as a base definition for online and offline audio-processing graphs, as represented by AudioContext and OfflineAudioContext respectively. You wouldn't use BaseAudioContext directly — you'd use its features via one of these two inheriting interfaces.
[Value("BaseAudioContext")]
public class BaseAudioContext : EventTarget
- Inheritance
EventTarget → BaseAudioContext
- Derived
AudioContext, OfflineAudioContext
Remarks
A BaseAudioContext can be a target of events; therefore, it implements the EventTarget interface.
Constructors
BaseAudioContext()
public BaseAudioContext()
Properties
AudioWorklet
The audioWorklet read-only property of the
BaseAudioContext interface returns an instance of
AudioWorklet that can be used for adding
AudioWorkletProcessor-derived classes which implement custom audio
processing.
[Value("audioWorklet")]
public AudioWorklet AudioWorklet { get; }
Property Value
- AudioWorklet
An AudioWorklet instance.
Remarks
CurrentTime
The currentTime read-only property of the BaseAudioContext
interface returns a double representing an ever-increasing hardware timestamp in seconds that
can be used for scheduling audio playback, visualizing timelines, etc. It starts at 0.
[Value("currentTime")]
public Number CurrentTime { get; }
Property Value
- Number
A floating point number.
Remarks
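For illustration, a minimal scheduling sketch follows; the AudioContext constructor and the Connect and Start members used below are assumptions modeled on the Web Audio API's connect() and start() rather than members documented on this page, and Number is assumed to support arithmetic with doubles.
// Sketch only: schedule a tone to begin one second from the current context time.
AudioContext ctx = new AudioContext();      // assumed derived-class constructor
OscillatorNode osc = ctx.CreateOscillator();
osc.Connect(ctx.Destination);               // assumed Connect, mirroring connect()
osc.Start(ctx.CurrentTime + 1);             // assumed Start, mirroring start()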
Destination
The destination property of the BaseAudioContext
interface returns an AudioDestinationNode representing the final
destination of all audio in the context. It often represents an actual audio-rendering
device such as your device's speakers.
[Value("destination")]
public AudioDestinationNode Destination { get; }
Property Value
- AudioDestinationNode
An AudioDestinationNode object.
Remarks
Listener
The listener property of the BaseAudioContext interface
returns an AudioListener object that can then be used for
implementing 3D audio spatialization.
[Value("listener")]
public AudioListener Listener { get; }
Property Value
- AudioListener
An AudioListener object.
Remarks
Onstatechange
An event handler that runs when an event of type statechange is fired, i.e. when the context's state changes.
[Value("onstatechange")]
public EventHandlerNonNull Onstatechange { get; set; }
Property Value
- EventHandlerNonNull
RenderQuantumSize
The renderQuantumSize read-only property of the BaseAudioContext interface returns the number of sample frames processed in one render quantum.
[Value("renderQuantumSize")]
public ulong RenderQuantumSize { get; }
Property Value
- ulong
SampleRate
The sampleRate property of the BaseAudioContext interface returns a floating point number representing the sample rate, in samples per second, used by all nodes in this audio context.
Because every node in the context runs at this single rate, sample-rate converters are not supported.
[Value("sampleRate")]
public Number SampleRate { get; }
Property Value
- Number
A floating point number indicating the audio context's sample rate, in samples per
second.
Remarks
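Because every node shares this rate, converting a duration in seconds to a length in sample frames is a single multiplication. A sketch, assuming the AudioContext constructor and an explicit Number-to-double conversion:
// Two seconds of audio, e.g. 2 * 44100 = 88200 frames at a 44.1 kHz context rate.
AudioContext ctx = new AudioContext();               // assumed derived-class constructor
ulong frames = (ulong)(2 * (double)ctx.SampleRate);  // assumed Number-to-double conversion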
State
The state read-only property of the BaseAudioContext
interface returns the current state of the AudioContext.
[Value("state")]
public AudioContextState State { get; }
Property Value
- AudioContextState
A string. Possible values are "suspended", "running", and "closed".
Remarks
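A hedged sketch of inspecting the state; the AudioContextState member name Running is an assumption modeled on the spec's "running" value, not taken from this page.
AudioContext ctx = new AudioContext();       // assumed derived-class constructor
if (ctx.State != AudioContextState.Running)  // member name assumed
{
    // Context is "suspended" or "closed"; resume it from a user gesture before playing.
}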
Methods
CreateAnalyser()
The createAnalyser() method of the
BaseAudioContext interface creates an AnalyserNode, which
can be used to expose audio time and frequency data and create data visualizations.
[Value("createAnalyser")]
public AnalyserNode CreateAnalyser()
Returns
- AnalyserNode
An AnalyserNode.
Remarks
NOTE
The AnalyserNode(BaseAudioContext, AnalyserOptions) constructor is the
recommended way to create an AnalyserNode; see
Creating an AudioNode.
NOTE
For more on using this node, see the
AnalyserNode page.
CreateBiquadFilter()
The createBiquadFilter() method of the BaseAudioContext
interface creates a BiquadFilterNode, which represents a second-order
filter configurable as several different common filter types.
[Value("createBiquadFilter")]
public BiquadFilterNode CreateBiquadFilter()
Returns
- BiquadFilterNode
A BiquadFilterNode.
Remarks
NOTE
The BiquadFilterNode(BaseAudioContext, BiquadFilterOptions) constructor is the
recommended way to create a BiquadFilterNode; see
Creating an AudioNode.
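As an illustration, a low-pass configuration sketch; the Type and Frequency members below are assumptions modeled on BiquadFilterNode's type and frequency attributes, not members documented here.
BiquadFilterNode filter = ctx.CreateBiquadFilter(); // ctx: an AudioContext as in the earlier sketches
filter.Type = BiquadFilterType.Lowpass;             // assumed enum and member names
filter.Frequency.Value = 1000;                      // assumed AudioParam-style Value, in Hz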
CreateBuffer(ulong, ulong, Number)
The createBuffer() method of the BaseAudioContext
interface is used to create a new, empty AudioBuffer object, which
can then be populated by data, and played via an AudioBufferSourceNode.
[Value("createBuffer")]
public AudioBuffer CreateBuffer(ulong numberOfChannels, ulong length, Number sampleRate)
Parameters
numberOfChannels ulong
length ulong
sampleRate Number
Returns
- AudioBuffer
An AudioBuffer configured based on the specified options.
Remarks
For more details about audio buffers, check out the AudioBuffer
reference page.
NOTE
createBuffer() used to be able to take compressed data and give back decoded samples,
but this ability was removed from the specification, because all the decoding was done
on the main thread, so createBuffer() was blocking other code execution. The asynchronous
method decodeAudioData() does the same thing — takes compressed audio, such as an
MP3 file, and directly gives you back an AudioBuffer that you can
then play via an AudioBufferSourceNode. For simple use cases
like playing an MP3, decodeAudioData() is what you should be using.
For an in-depth explanation of how audio buffers work, including what the parameters do, read Audio buffers: frames, samples and channels from our Basic concepts guide.
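Using only the signature above, a sketch that allocates two seconds of silent stereo audio at the context's own rate; the AudioContext constructor and the Number-to-double conversion are assumptions.
AudioContext ctx = new AudioContext();
ulong frames = (ulong)(2 * (double)ctx.SampleRate);               // 2 seconds of sample frames
AudioBuffer buffer = ctx.CreateBuffer(2, frames, ctx.SampleRate); // stereo, at the context rate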
CreateBufferSource()
The createBufferSource() method of the BaseAudioContext
interface is used to create a new AudioBufferSourceNode, which can be
used to play audio data contained within an AudioBuffer object.
AudioBuffers are created using CreateBuffer(ulong, ulong, Number) or returned by DecodeAudioData(ArrayBuffer, DecodeSuccessCallback?, DecodeErrorCallback?) when it successfully decodes an audio track.
[Value("createBufferSource")]
public AudioBufferSourceNode CreateBufferSource()
Returns
- AudioBufferSourceNode
An AudioBufferSourceNode.
Remarks
NOTE
The AudioBufferSourceNode(BaseAudioContext, AudioBufferSourceOptions)
constructor is the recommended way to create an AudioBufferSourceNode; see
Creating an AudioNode.
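Continuing the CreateBuffer sketch above, a playback sketch; the Buffer property and the Connect and Start members are assumptions modeled on the Web Audio API and are not documented on this page.
AudioBufferSourceNode source = ctx.CreateBufferSource();
source.Buffer = buffer;            // assumed property, mirroring AudioBufferSourceNode.buffer
source.Connect(ctx.Destination);   // assumed Connect
source.Start();                    // assumed Start; plays the buffer once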
CreateChannelMerger(ulong)
The createChannelMerger() method of the BaseAudioContext interface creates a ChannelMergerNode,
which combines channels from multiple audio streams into a single audio stream.
[Value("createChannelMerger")]
public ChannelMergerNode CreateChannelMerger(ulong numberOfInputs = 0)
Parameters
numberOfInputs ulong
Returns
- ChannelMergerNode
A ChannelMergerNode.
Remarks
NOTE
The ChannelMergerNode(BaseAudioContext, ChannelMergerOptions) constructor is the
recommended way to create a ChannelMergerNode; see
Creating an AudioNode.
CreateChannelSplitter(ulong)
The createChannelSplitter() method of the BaseAudioContext interface is used to create a ChannelSplitterNode,
which is used to access the individual channels of an audio stream and process them separately.
[Value("createChannelSplitter")]
public ChannelSplitterNode CreateChannelSplitter(ulong numberOfOutputs = 0)
Parameters
numberOfOutputs ulong
Returns
- ChannelSplitterNode
A ChannelSplitterNode.
Remarks
NOTE
The ChannelSplitterNode(BaseAudioContext, ChannelSplitterOptions)
constructor is the recommended way to create a ChannelSplitterNode; see
Creating an AudioNode.
CreateConstantSource()
The createConstantSource()
method of the BaseAudioContext interface creates a
ConstantSourceNode object, which is an audio source that continuously
outputs a monaural (one-channel) sound signal whose samples all have the same
value.
[Value("createConstantSource")]
public ConstantSourceNode CreateConstantSource()
Returns
- ConstantSourceNode
A ConstantSourceNode instance.
Remarks
NOTE
The ConstantSourceNode(BaseAudioContext, ConstantSourceOptions)
constructor is the recommended way to create a ConstantSourceNode; see
Creating an AudioNode.
CreateConvolver()
The createConvolver() method of the BaseAudioContext
interface creates a ConvolverNode, which is commonly used to apply
reverb effects to your audio. See the spec definition of Convolution for more information.
[Value("createConvolver")]
public ConvolverNode CreateConvolver()
Returns
- ConvolverNode
A ConvolverNode.
Remarks
NOTE
The ConvolverNode(BaseAudioContext, ConvolverOptions)
constructor is the recommended way to create a ConvolverNode; see
Creating an AudioNode.
CreateDelay(Number)
The createDelay() method of the
BaseAudioContext interface is used to create a DelayNode,
which is used to delay the incoming audio signal by a certain amount of time.
[Value("createDelay")]
public DelayNode CreateDelay(Number maxDelayTime = null)
Parameters
maxDelayTime Number
Returns
- DelayNode
A DelayNode.
Remarks
NOTE
The DelayNode(BaseAudioContext, DelayOptions)
constructor is the recommended way to create a DelayNode; see
Creating an AudioNode.
CreateDynamicsCompressor()
The createDynamicsCompressor() method of the BaseAudioContext interface is used to create a DynamicsCompressorNode, which can be used to apply compression to an audio signal.
[Value("createDynamicsCompressor")]
public DynamicsCompressorNode CreateDynamicsCompressor()
Returns
- DynamicsCompressorNode
A DynamicsCompressorNode.
Remarks
Compression lowers the volume of the loudest parts of the signal and raises the volume
of the softest parts. Overall, a louder, richer, and fuller sound can be achieved. It is
especially important in games and musical applications where large numbers of individual
sounds are played simultaneously, where you want to control the overall signal level and
help avoid clipping (distorting) of the audio output.
NOTE
The DynamicsCompressorNode(BaseAudioContext, DynamicsCompressorOptions)
constructor is the recommended way to create a DynamicsCompressorNode; see
Creating an AudioNode.
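A sketch of inserting a compressor between a source node and the speakers; Connect is an assumption mirroring connect(), and source stands for any AudioNode, such as the one from the CreateBufferSource sketch.
DynamicsCompressorNode compressor = ctx.CreateDynamicsCompressor();
source.Connect(compressor);          // route the source through the compressor
compressor.Connect(ctx.Destination); // then on to the context's output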
CreateGain()
The createGain() method of the BaseAudioContext
interface creates a GainNode, which can be used to control the
overall gain (or volume) of the audio graph.
[Value("createGain")]
public GainNode CreateGain()
Returns
- GainNode
A GainNode which takes as input one or more audio sources and outputs
audio whose volume has been adjusted in gain (volume) to a level specified by the node's
gain a-rate parameter.
Remarks
NOTE
The GainNode(BaseAudioContext, GainOptions)
constructor is the recommended way to create a GainNode; see
Creating an AudioNode.
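A volume-control sketch; Gain.Value is an assumption modeled on the gain AudioParam's value attribute, and source is any AudioNode from the earlier sketches.
GainNode gain = ctx.CreateGain();
source.Connect(gain);            // assumed Connect
gain.Connect(ctx.Destination);
gain.Gain.Value = 0.5;           // assumed AudioParam-style member; halves the volume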
CreateIIRFilter(List<Number>, List<Number>)
The createIIRFilter() method of the BaseAudioContext interface creates an IIRFilterNode, which represents a general infinite impulse response (IIR) filter which can be configured to serve as various types of filter.
[Value("createIIRFilter")]
public IIRFilterNode CreateIIRFilter(List<Number> feedforward, List<Number> feedback)
Parameters
feedforward List<Number>
feedback List<Number>
Returns
- IIRFilterNode
An IIRFilterNode implementing the filter with the specified feedback and
feedforward coefficient arrays.
Remarks
NOTE
The IIRFilterNode(BaseAudioContext, IIRFilterOptions)
constructor is the recommended way to create a IIRFilterNode; see
Creating an AudioNode.
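Based only on the signature above, a coefficient sketch; Number is assumed to convert implicitly from double, and the values are illustrative rather than a designed filter.
List<Number> feedforward = new() { 0.1, 0.2, 0.1 };  // illustrative feedforward coefficients
List<Number> feedback = new() { 1.0, -1.6, 0.7 };    // illustrative feedback coefficients
IIRFilterNode iir = ctx.CreateIIRFilter(feedforward, feedback);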
CreateOscillator()
The createOscillator() method of the BaseAudioContext
interface creates an OscillatorNode, a source representing a periodic
waveform. It basically generates a constant tone.
[Value("createOscillator")]
public OscillatorNode CreateOscillator()
Returns
- OscillatorNode
An OscillatorNode.
Remarks
NOTE
The OscillatorNode(BaseAudioContext, OscillatorOptions)
constructor is the recommended way to create an OscillatorNode; see
Creating an AudioNode.
CreatePanner()
The createPanner() method of the BaseAudioContext
interface is used to create a new PannerNode, which is used to
spatialize an incoming audio stream in 3D space.
[Value("createPanner")]
public PannerNode CreatePanner()
Returns
- PannerNode
A PannerNode.
Remarks
The panner node is spatialized in relation to the AudioContext's
AudioListener (defined by the BaseAudioContext.listener
attribute), which represents the position and orientation of the person listening to the
audio.
NOTE
The PannerNode(BaseAudioContext, PannerOptions)
constructor is the recommended way to create a PannerNode; see
Creating an AudioNode.
CreatePeriodicWave(List<Number>, List<Number>, PeriodicWaveConstraints)
The createPeriodicWave() method of the BaseAudioContext interface is used to create a PeriodicWave. This wave is used to define a periodic waveform that can be used to shape the output of an OscillatorNode.
[Value("createPeriodicWave")]
public PeriodicWave CreatePeriodicWave(List<Number> real, List<Number> imag, PeriodicWaveConstraints constraints = null)
Parameters
real List<Number>
imag List<Number>
constraints PeriodicWaveConstraints
Returns
- PeriodicWave
A PeriodicWave.
Remarks
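A sketch building a sine-like tone from a single nonzero harmonic; OscillatorNode.SetPeriodicWave is an assumption mirroring setPeriodicWave(), and Number is assumed to convert implicitly from double.
List<Number> real = new() { 0.0, 0.0 };  // cosine terms: DC offset and fundamental
List<Number> imag = new() { 0.0, 1.0 };  // sine terms: fundamental at full amplitude
PeriodicWave wave = ctx.CreatePeriodicWave(real, imag);
OscillatorNode osc = ctx.CreateOscillator();
osc.SetPeriodicWave(wave);               // assumed method, mirroring setPeriodicWave()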
CreateScriptProcessor(ulong, ulong, ulong)
IMPORTANT
Deprecated. The createScriptProcessor() method of the BaseAudioContext interface creates a ScriptProcessorNode used for direct audio processing.
[Value("createScriptProcessor")]
public ScriptProcessorNode CreateScriptProcessor(ulong bufferSize = 0, ulong numberOfInputChannels = 0, ulong numberOfOutputChannels = 0)
Parameters
bufferSize ulong
numberOfInputChannels ulong
numberOfOutputChannels ulong
Returns
- ScriptProcessorNode
A ScriptProcessorNode.
Remarks
NOTE
This feature was replaced by AudioWorklets and the AudioWorkletNode interface.
CreateStereoPanner()
The createStereoPanner() method of the BaseAudioContext interface creates a StereoPannerNode, which can be used to apply
stereo panning to an audio source.
It positions an incoming audio stream in a stereo image using a low-cost panning algorithm.
[Value("createStereoPanner")]
public StereoPannerNode CreateStereoPanner()
Returns
- StereoPannerNode
A StereoPannerNode.
Remarks
NOTE
The StereoPannerNode(BaseAudioContext, StereoPannerOptions)
constructor is the recommended way to create a StereoPannerNode; see
Creating an AudioNode.
CreateWaveShaper()
The createWaveShaper() method of the BaseAudioContext
interface creates a WaveShaperNode, which represents a non-linear
distortion. It is used to apply distortion effects to your audio.
[Value("createWaveShaper")]
public WaveShaperNode CreateWaveShaper()
Returns
- WaveShaperNode
A WaveShaperNode.
Remarks
NOTE
The WaveShaperNode(BaseAudioContext, WaveShaperOptions)
constructor is the recommended way to create a WaveShaperNode; see
Creating an AudioNode.
DecodeAudioData(ArrayBuffer, DecodeSuccessCallback?, DecodeErrorCallback?)
The decodeAudioData() method of the BaseAudioContext
interface is used to asynchronously decode audio file data contained in an
ArrayBuffer that is loaded from fetch(),
XMLHttpRequest, or FileReader. The decoded
AudioBuffer is resampled to the AudioContext's sampling
rate, then passed to a callback or promise.
[Value("decodeAudioData")]
public Task<AudioBuffer> DecodeAudioData(ArrayBuffer audioData, DecodeSuccessCallback? successCallback = null, DecodeErrorCallback? errorCallback = null)
Parameters
audioData ArrayBuffer
successCallback DecodeSuccessCallback
errorCallback DecodeErrorCallback
Returns
- Task<AudioBuffer>
A Promise object that fulfills with the decoded data. If you are using the
callback syntax, you will ignore this return value and use the callback functions instead.
Remarks
This is the preferred method of creating an audio source for Web Audio API from an
audio track. This method only works on complete file data, not fragments of audio file
data.
This function implements two alternative ways to asynchronously return the audio data or error messages: it returns a Promise that fulfills with the audio data, and also accepts callback arguments to handle success or failure. The primary method of interfacing with this function is via its Promise return value, and the callback parameters are provided for legacy reasons.
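Given the Task<AudioBuffer> signature above, a promise-style sketch; audioData is assumed to be an ArrayBuffer already holding the complete bytes of an audio file, and the code is assumed to run inside an async method.
AudioBuffer decoded = await ctx.DecodeAudioData(audioData); // no callbacks; await the Task
AudioBufferSourceNode source = ctx.CreateBufferSource();
source.Buffer = decoded;  // assumed property, as in the CreateBufferSource sketch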