Class AudioContext
- Namespace
- CSharpToJavaScript.APIs.JS
- Assembly
- CSharpToJavaScript.dll
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.
[Value("AudioContext")]
public class AudioContext : BaseAudioContext
- Inheritance
- BaseAudioContext
- AudioContext
Remarks
An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single AudioContext for several different audio sources and pipeline concurrently.
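For example, a single context created once can be shared by every sound in an application (a minimal sketch using only the members documented on this page):

```csharp
using CSharpToJavaScript.APIs.JS;

// Create one AudioContext up front and reuse it for every audio source
// and pipeline, rather than constructing a new context per sound.
AudioContext audioCtx = new AudioContext();
```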
Constructors
AudioContext()
public AudioContext()
AudioContext(AudioContextOptions)
The AudioContext() constructor
creates a new AudioContext object which represents an audio-processing
graph, built from audio modules linked together, each represented by an
AudioNode.
public AudioContext(AudioContextOptions contextOptions = null)
Parameters
contextOptions AudioContextOptions
Properties
BaseLatency
The baseLatency read-only property of the
AudioContext interface returns a double that represents the number of
seconds of processing latency incurred by the AudioContext passing an audio
buffer from the AudioDestinationNode — i.e., the end of the audio graph —
into the host system's audio subsystem ready for playing.
[Value("baseLatency")]
public Number BaseLatency { get; }
Property Value
- Number
A double representing the base latency in seconds.
Remarks
NOTE
You can request a certain latency during construction time with the latencyHint option, but the browser may ignore the option.
Onerror
[Value("onerror")]
public EventHandlerNonNull Onerror { get; set; }
Property Value
- EventHandlerNonNull
Onsinkchange
[Value("onsinkchange")]
public EventHandlerNonNull Onsinkchange { get; set; }
Property Value
- EventHandlerNonNull
OutputLatency
The outputLatency read-only property of
the AudioContext Interface provides an estimation of the output latency
of the current audio context.
[Value("outputLatency")]
public Number OutputLatency { get; }
Property Value
- Number
A double representing the output latency in seconds.
Remarks
This is the time, in seconds, between the browser passing an audio buffer out of an
audio graph over to the host system's audio subsystem to play, and the time at which the
first sample in the buffer is actually processed by the audio output device.
It varies depending on the platform and the available hardware.
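The two latency figures can be read side by side (a minimal sketch):

```csharp
AudioContext audioCtx = new AudioContext();

// BaseLatency covers the audio graph itself; OutputLatency adds the
// platform and hardware path out to the physical device.
Number graphLatency = audioCtx.BaseLatency;
Number deviceLatency = audioCtx.OutputLatency;
```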
RenderCapacity
[Value("renderCapacity")]
public AudioRenderCapacity RenderCapacity { get; }
Property Value
- AudioRenderCapacity
SinkId
NOTE
Experimental
The sinkId read-only property of the AudioContext interface returns the sink ID of the current output audio device.
[Value("sinkId")]
public Union180 SinkId { get; }
Property Value
- Union180
This property returns one of the following values, depending on how the sink ID was set:
- An empty string, meaning the default system audio output device is being used.
- A string containing the sink ID of a specific audio output device.
- An AudioSinkInfo object, if the sink ID was set using an options object (for example, { type: "none" }).
Remarks
- Change the destination output device in Web Audio
- SetSinkId(Union181)
- AudioContext sinkchange event
Methods
Close()
The close() method of the AudioContext Interface closes the audio context, releasing any system audio resources that it uses.
[Value("close")]
public Task<GlobalObject.Undefined> Close()
Returns
- Task<GlobalObject.Undefined>
A Promise that resolves with 'undefined'.
Remarks
This function does not automatically release all AudioContext-created objects, unless other references have been released as well; however, it will forcibly release any system audio resources that might prevent additional AudioContexts from being created and used, suspend the progression of audio time in the audio context, and stop processing audio data. The returned Promise resolves when all AudioContext-creation-blocking resources have been released. This method throws an INVALID_STATE_ERR exception if called on an OfflineAudioContext.
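A minimal sketch of shutting a context down once it is no longer needed (top-level await assumed):

```csharp
AudioContext audioCtx = new AudioContext();

// ... play audio through the context ...

// Release the system audio resources held by the context; a closed
// context cannot be reused, so create a new one if audio is needed again.
await audioCtx.Close();
```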
CreateMediaElementSource(HTMLMediaElement)
The createMediaElementSource() method of the AudioContext Interface is used to create a new MediaElementAudioSourceNode object, given an existing HTML <audio> or <video> element, the audio from which can then be played and manipulated.
[Value("createMediaElementSource")]
public MediaElementAudioSourceNode CreateMediaElementSource(HTMLMediaElement mediaElement)
Parameters
mediaElement HTMLMediaElement
Returns
- MediaElementAudioSourceNode
A new MediaElementAudioSourceNode object.
Remarks
For more details about media element audio source nodes, check out the MediaElementAudioSourceNode reference page.
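A sketch of wiring an existing media element into the graph. How the HTMLMediaElement reference is obtained is out of scope here, and the commented-out Connect/Destination members are assumptions mirroring the Web Audio connect() method and destination attribute:

```csharp
AudioContext audioCtx = new AudioContext();

// An existing <audio> or <video> element, looked up elsewhere (not shown).
HTMLMediaElement audioEl = null;

MediaElementAudioSourceNode source = audioCtx.CreateMediaElementSource(audioEl);
// source.Connect(audioCtx.Destination); // assumed member names
```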
CreateMediaStreamDestination()
The createMediaStreamDestination() method of the AudioContext Interface is used to create a new MediaStreamAudioDestinationNode object associated with a WebRTC MediaStream representing an audio stream, which may be stored in a local file or sent to another computer.
[Value("createMediaStreamDestination")]
public MediaStreamAudioDestinationNode CreateMediaStreamDestination()
Returns
- MediaStreamAudioDestinationNode
A new MediaStreamAudioDestinationNode object.
Remarks
The MediaStream is created when the node is created and is accessible via the MediaStreamAudioDestinationNode's stream attribute. This stream can be used in a similar way as a MediaStream obtained via Navigator.GetUserMedia — it can, for example, be sent to a remote peer using the addStream() method of RTCPeerConnection.
For more details about media stream destination nodes, check out the MediaStreamAudioDestinationNode reference page.
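A sketch of capturing the graph's output as a stream; the Stream member name is an assumption mirroring the JavaScript stream attribute:

```csharp
AudioContext audioCtx = new AudioContext();

MediaStreamAudioDestinationNode dest = audioCtx.CreateMediaStreamDestination();
// MediaStream stream = dest.Stream; // assumed member mirroring .stream
// The stream could then be recorded locally or sent to a remote peer.
```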
CreateMediaStreamSource(MediaStream)
The createMediaStreamSource() method of the AudioContext
Interface is used to create a new MediaStreamAudioSourceNode
object, given a media stream (say, from a GetUserMedia(MediaStreamConstraints)
instance), the audio from which can then be played and manipulated.
[Value("createMediaStreamSource")]
public MediaStreamAudioSourceNode CreateMediaStreamSource(MediaStream mediaStream)
Parameters
mediaStream MediaStream
Returns
- MediaStreamAudioSourceNode
A new MediaStreamAudioSourceNode object representing the audio node
whose media is obtained from the specified source stream.
Remarks
For more details about media stream audio source nodes, check out the MediaStreamAudioSourceNode reference page.
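A sketch of feeding a captured stream into the graph; obtaining the MediaStream (typically via a GetUserMedia-style call) is not shown:

```csharp
AudioContext audioCtx = new AudioContext();

// A stream obtained elsewhere, e.g., from a microphone capture call.
MediaStream micStream = null;

MediaStreamAudioSourceNode micSource = audioCtx.CreateMediaStreamSource(micStream);
```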
CreateMediaStreamTrackSource(MediaStreamTrack)
The createMediaStreamTrackSource() method of the AudioContext interface creates and returns a MediaStreamTrackAudioSourceNode which represents an audio source whose data comes from the specified MediaStreamTrack.
[Value("createMediaStreamTrackSource")]
public MediaStreamTrackAudioSourceNode CreateMediaStreamTrackSource(MediaStreamTrack mediaStreamTrack)
Parameters
mediaStreamTrack MediaStreamTrack
Returns
- MediaStreamTrackAudioSourceNode
A MediaStreamTrackAudioSourceNode object which acts as a source for
audio data found in the specified audio track.
Remarks
This differs from CreateMediaStreamSource(MediaStream), which creates a MediaStreamAudioSourceNode whose audio comes from the audio track in a specified MediaStream whose Id is first, lexicographically (alphabetically).
- Web Audio API
- Using the Web Audio API
- MediaStreamTrackAudioSourceNode
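A sketch of sourcing audio from one specific track instead of the whole stream; how the MediaStreamTrack is picked out of a stream is not shown:

```csharp
AudioContext audioCtx = new AudioContext();

// A specific audio track chosen by the caller (obtained elsewhere).
MediaStreamTrack track = null;

MediaStreamTrackAudioSourceNode node = audioCtx.CreateMediaStreamTrackSource(track);
```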
GetOutputTimestamp()
The getOutputTimestamp() method of the
AudioContext interface returns a new AudioTimestamp object
containing two audio timestamp values relating to the current audio context.
[Value("getOutputTimestamp")]
public AudioTimestamp GetOutputTimestamp()
Returns
- AudioTimestamp
An AudioTimestamp object, which has the two properties described in the remarks below.
Remarks
The two values are as follows:
- contextTime: a point in the time coordinate system of the currentTime for the BaseAudioContext; the time after the audio context was first created.
- performanceTime: a point in the time coordinate system of a Performance interface; the time after the document containing the audio context was first rendered.
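A sketch of reading the timestamp pair; the commented member names are assumptions mirroring the JavaScript contextTime and performanceTime fields:

```csharp
AudioContext audioCtx = new AudioContext();

AudioTimestamp ts = audioCtx.GetOutputTimestamp();
// Number contextTime = ts.ContextTime;         // assumed member names
// Number performanceTime = ts.PerformanceTime;
```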
Resume()
The resume() method of the AudioContext
interface resumes the progression of time in an audio context that has previously been
suspended.
[Value("resume")]
public Task<GlobalObject.Undefined> Resume()
Returns
- Task<GlobalObject.Undefined>
A Promise that resolves when the context has resumed. The promise is
rejected if the context has already been closed.
Remarks
This method will cause an INVALID_STATE_ERR exception to be thrown if
called on an OfflineAudioContext.
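A minimal sketch of suspending and then resuming a context; the commented State check is an assumption mirroring the Web Audio state attribute:

```csharp
AudioContext audioCtx = new AudioContext();

await audioCtx.Suspend();
// if (audioCtx.State == "suspended") // assumed member mirroring .state
await audioCtx.Resume(); // rejects if the context has already been closed
```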
SetSinkId(Union181)
NOTE
Experimental
The setSinkId() method of the AudioContext interface sets the output audio device for the AudioContext. If a sink ID is not explicitly set, the default system audio output device will be used.
[Value("setSinkId")]
public Task<GlobalObject.Undefined> SetSinkId(Union181 sinkId)
Parameters
sinkId Union181
Returns
- Task<GlobalObject.Undefined>
A Promise that fulfills with a value of undefined.
Attempting to set the sink ID to its existing value (i.e., the value returned by SinkId) throws no error, but it aborts the process immediately.
Remarks
To set the audio device to a device other than the default one, the developer needs permission to access audio devices. If required, the user can be prompted to grant the required permission via a GetUserMedia(MediaStreamConstraints) call.
In addition, this feature may be blocked by a speaker-selection Permissions Policy.
- Change the destination output device in Web Audio
- SinkId
- AudioContext sinkchange event
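A sketch of routing output to a specific device; the device ID would come from a media-device enumeration step (not shown), and an implicit string-to-Union181 conversion is assumed:

```csharp
AudioContext audioCtx = new AudioContext();

// A device ID obtained by enumerating the available audio outputs.
string deviceId = "...";

await audioCtx.SetSinkId(deviceId); // assumes string converts to Union181
```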
Suspend()
The suspend() method of the AudioContext Interface suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process — this is useful if you want an application to power down the audio hardware when it will not be using an audio context for a while.
[Value("suspend")]
public Task<GlobalObject.Undefined> Suspend()
Returns
- Task<GlobalObject.Undefined>
A Promise that resolves with 'undefined'. The promise is rejected if the context has already been closed.
Remarks
This method will cause an INVALID_STATE_ERR exception to be thrown if called on an OfflineAudioContext.
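A minimal sketch of powering the audio hardware down during an idle period and picking playback back up later:

```csharp
AudioContext audioCtx = new AudioContext();

await audioCtx.Suspend(); // halt audio time; free hardware, save CPU/battery

// ... later, when audio is needed again ...
await audioCtx.Resume();
```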