Class RTCInboundRtpStreamStats
- Namespace
- CSharpToJavaScript.APIs.JS
- Assembly
- CSharpToJavaScript.dll
The RTCInboundRtpStreamStats dictionary of the WebRTC API is used to report statistics related to the receiving end of an RTP stream on the local end of the RTCPeerConnection.
[ToObject]
public class RTCInboundRtpStreamStats : RTCReceivedRtpStreamStats
- Inheritance
-
RTCInboundRtpStreamStats
- Inherited Members
Remarks
The statistics can be obtained by iterating the RTCStatsReport returned by GetStats(MediaStreamTrack?) or GetStats() until you find a report with the type of inbound-rtp.
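The lookup described above can be sketched in JavaScript (the underlying WebRTC API). The `findInboundRtpStats` helper is illustrative, not part of the API; a real `RTCStatsReport` is iterated via its `values()` method:

```javascript
// Find the first stats entry of type "inbound-rtp" in a stats report.
// `statsValues` can be a real RTCStatsReport's values() iterator or,
// as in this sketch, any iterable of stats objects.
function findInboundRtpStats(statsValues) {
  for (const stats of statsValues) {
    if (stats.type === "inbound-rtp") {
      return stats;
    }
  }
  return null; // no inbound-rtp entry in this report
}

// Hypothetical usage against a live connection:
//   const report = await peerConnection.getStats();
//   const inbound = findInboundRtpStats(report.values());
```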
Constructors
RTCInboundRtpStreamStats()
public RTCInboundRtpStreamStats()
Fields
AudioLevel
The audioLevel property of the RTCInboundRtpStreamStats dictionary indicates the audio level of the received (remote) track.
[Value("audioLevel")]
public Number AudioLevel
Field Value
- Number
A real number. The value is between 0..1 (linear), where 1.0 represents 0 dBov (decibels relative to the overload point), 0 represents silence, and 0.5 represents approximately a 6 dB change in the sound pressure level from 0 dBov.
Remarks
The audioLevel is averaged over some small interval, using the algorithm described under TotalAudioEnergy.
The interval used is implementation dependent.
NOTE
The value is undefined for video streams.
-AudioLevel for audio levels of local tracks (that are being sent)
BytesReceived
The bytesReceived property of the RTCInboundRtpStreamStats dictionary indicates the total number of bytes received so far from this synchronization source (SSRC), not including header and padding bytes.
[Value("bytesReceived")]
public ulong BytesReceived
Field Value
- ulong
A positive integer.
Remarks
The value can be used to calculate an approximation of the average media data rate: divide the difference between two readings by the time elapsed between them.
The property value is reset to zero if the sender's SSRC identifier changes for any reason.
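The rate calculation described above can be sketched as follows (the `averageBitrate` helper and snapshot shapes are illustrative, assuming `timestamp` is in milliseconds as in RTCStats):

```javascript
// Approximate the average media data rate, in bits per second, between
// two inbound-rtp stats snapshots taken at different times.
function averageBitrate(earlier, later) {
  const bytes = later.bytesReceived - earlier.bytesReceived;
  const seconds = (later.timestamp - earlier.timestamp) / 1000;
  return (bytes * 8) / seconds;
}
```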
ConcealedSamples
The concealedSamples property of the RTCInboundRtpStreamStats dictionary indicates the total number of concealed samples for the received audio track over the lifetime of this stats object.
[Value("concealedSamples")]
public ulong ConcealedSamples
Field Value
- ulong
A positive integer.
Remarks
A concealed sample is a sample that was lost or arrived too late to be played out, and therefore had to be replaced with a locally generated synthesized sample.
Note that lost samples are reported in RTCInboundRtpStreamStats.PacketsLost, while late packets are reported in PacketsDiscarded.
NOTE
The value is undefined for video streams.
-SilentConcealedSamples
-ConcealmentEvents
-Packet loss concealment on Wikipedia
ConcealmentEvents
The concealmentEvents property of the RTCInboundRtpStreamStats dictionary indicates the total number of concealment events for the received audio track over the lifetime of this stats object.
[Value("concealmentEvents")]
public ulong ConcealmentEvents
Field Value
- ulong
A positive integer.
Remarks
A concealed sample is a sample that was lost or arrived too late to be played out, and therefore had to be replaced with a locally generated synthesized sample.
Any number of consecutive concealed samples following a non-concealed sample comprise a single concealment event.
The value in this property will therefore be less than or equal to ConcealedSamples, which counts every sample.
NOTE
The value is undefined for video streams.
-ConcealedSamples
-SilentConcealedSamples
-Packet loss concealment on Wikipedia
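One way to use these two counters together, sketched below (the helper name is ours): since consecutive concealed samples form a single event, the average length of a concealment run is concealedSamples / concealmentEvents.

```javascript
// Average number of concealed samples per concealment event.
function averageConcealmentRun(stats) {
  if (!stats.concealmentEvents) return 0; // avoid division by zero
  return stats.concealedSamples / stats.concealmentEvents;
}
```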
CorruptionMeasurements
[Value("corruptionMeasurements")]
public ulong CorruptionMeasurements
Field Value
DecoderImplementation
[Value("decoderImplementation")]
public string DecoderImplementation
Field Value
EstimatedPlayoutTimestamp
NOTE
Experimental
The estimatedPlayoutTimestamp property of the RTCInboundRtpStreamStats dictionary indicates the estimated playout time of this receiver's track.
[Value("estimatedPlayoutTimestamp")]
public Number EstimatedPlayoutTimestamp
Field Value
Remarks
This is the Network Time Protocol (NTP) timestamp of the last playable audio sample or video frame that has a known timestamp, extrapolated with the time elapsed since it was ready to be played out.
In other words, it is the estimated current playout time of the track in the NTP clock time of the sender, and can be present even if there is no audio currently playing.
This can be used to estimate how much audio and video tracks from the same source are out of sync.
FecBytesReceived
[Value("fecBytesReceived")]
public ulong FecBytesReceived
Field Value
FecPacketsDiscarded
The fecPacketsDiscarded property of the RTCInboundRtpStreamStats dictionary indicates the number of RTP Forward Error Correction (FEC) packets that have been discarded.
[Value("fecPacketsDiscarded")]
public ulong FecPacketsDiscarded
Field Value
- ulong
A positive integer value.
Remarks
A FEC packet provides parity information that can be used to attempt to reconstruct RTP data packets which have been corrupted in transit.
This kind of packet might be discarded if all the packets that it covers have already been received or recovered using another FEC packet, or if the FEC packet arrived outside the recovery window and the lost RTP packets have already been skipped during playback as a result.
The value of RTCInboundRtpStreamStats.FecPacketsReceived includes these discarded packets.
FecPacketsReceived
The fecPacketsReceived property of the RTCInboundRtpStreamStats dictionary indicates how many Forward Error Correction (FEC) packets have been received by this RTP receiver from the remote peer.
[Value("fecPacketsReceived")]
public ulong FecPacketsReceived
Field Value
- ulong
A positive integer value.
Remarks
A FEC packet provides parity information that can be used to attempt to reconstruct RTP data packets which have been corrupted in transit.
-RFC 5109 (RTP Payload Format for Generic Forward Error Correction)
FecSsrc
[Value("fecSsrc")]
public ulong FecSsrc
Field Value
FirCount
[Value("firCount")]
public ulong FirCount
Field Value
FrameHeight
The frameHeight property of the RTCInboundRtpStreamStats dictionary indicates the height of the last decoded frame, in pixels.
[Value("frameHeight")]
public ulong FrameHeight
Field Value
- ulong
A positive integer, in pixels.
Remarks
Note that the resolution of the encoded frame may be lower than that of the media source, which is provided in Height.
NOTE
The property is undefined for audio streams, and before the first frame is decoded.
FrameWidth
The frameWidth property of the RTCInboundRtpStreamStats dictionary indicates the width of the last decoded frame, in pixels.
[Value("frameWidth")]
public ulong FrameWidth
Field Value
- ulong
A positive integer, in pixels.
Remarks
Note that the resolution of the encoded frame may be lower than that of the media source, which is provided in Width.
NOTE
The value is undefined for audio streams, or before the first frame is decoded.
FramesAssembledFromMultiplePackets
NOTE
Experimental
The framesAssembledFromMultiplePackets property of the RTCInboundRtpStreamStats dictionary indicates the total number of correctly decoded frames for this RTP stream that were assembled from more than one RTP packet.
[Value("framesAssembledFromMultiplePackets")]
public ulong FramesAssembledFromMultiplePackets
Field Value
- ulong
A positive integer.
Remarks
This property can be used with RTCInboundRtpStreamStats.TotalAssemblyTime to determine the average assembly time: totalAssemblyTime / framesAssembledFromMultiplePackets.
A higher average assembly time might indicate network issues or inefficiencies in the receiving pipeline.
NOTE
The value is undefined for audio streams.
FramesDecoded
The framesDecoded property of the RTCInboundRtpStreamStats dictionary indicates the total number of video frames which have been decoded successfully for this media source.
[Value("framesDecoded")]
public ulong FramesDecoded
Field Value
- ulong
A positive integer.
Remarks
This represents the number of frames that would have been displayed assuming no frames were skipped.
NOTE
The property is undefined for audio streams.
FramesDropped
[Value("framesDropped")]
public ulong FramesDropped
Field Value
FramesPerSecond
The framesPerSecond property of the RTCInboundRtpStreamStats dictionary indicates the number of frames decoded in the last second.
[Value("framesPerSecond")]
public Number FramesPerSecond
Field Value
- Number
A positive number.
Remarks
Note that this may be lower than the media source frame rate, which is provided in FramesPerSecond.
NOTE
The value is undefined for audio streams.
FramesReceived
The framesReceived property of the RTCInboundRtpStreamStats dictionary indicates the total number of complete frames received on this RTP stream over its lifetime.
[Value("framesReceived")]
public ulong FramesReceived
Field Value
- ulong
A positive integer.
Remarks
Note that this may be lower than the total number of media source frames, which is provided in Frames.
NOTE
The value is undefined for audio streams.
FramesRendered
[Value("framesRendered")]
public ulong FramesRendered
Field Value
FreezeCount
NOTE
Experimental
The freezeCount property of the RTCInboundRtpStreamStats dictionary indicates the total number of video freezes experienced by this receiver.
[Value("freezeCount")]
public ulong FreezeCount
Field Value
- ulong
A positive integer.
Remarks
A freeze is counted if the interval between two rendered frames is equal to or greater than the larger of "three times the average frame duration" or "the average frame duration + 150 ms".
This ensures that the delay required to increment the freeze count scales appropriately with the frame rate.
NOTE
The value is undefined for audio streams.
HeaderBytesReceived
The headerBytesReceived property of the RTCInboundRtpStreamStats dictionary indicates the total number of RTP header and padding bytes received for this synchronization source (SSRC), including those sent in retransmissions.
[Value("headerBytesReceived")]
public ulong HeaderBytesReceived
Field Value
- ulong
A positive integer.
Remarks
Note that the total number of bytes received over the transport for this stream is equal to: headerBytesReceived + BytesReceived.
InsertedSamplesForDeceleration
The insertedSamplesForDeceleration property of the RTCInboundRtpStreamStats dictionary accumulates the difference between the number of samples received and the number of samples played out of the jitter buffer while audio playout is slowed down.
[Value("insertedSamplesForDeceleration")]
public ulong InsertedSamplesForDeceleration
Field Value
- ulong
A positive integer.
Remarks
The WebRTC jitter buffer sets a target playout delay level such that the inflow and outflow of the jitter buffer should be approximately the same.
If the jitter buffer empties too quickly the audio sample that is next in line to be output may be "ahead of schedule", and the jitter buffer may slow down playout.
If the jitter buffer slows down the playout of the sample by inserting additional audio samples, this property indicates the accumulated number of such added samples.
Slowing down and/or speeding up the audio (as tracked with RemovedSamplesForAcceleration) may result in audible warbling or other distortion.
The totals at the end of the call also give you some indication of how many samples or seconds were impacted, and insertedSamplesForDeceleration can be correlated with RTCInboundRtpStreamStats.TotalSamplesReceived to get a relative measure of deceleration.
Logging insertedSamplesForDeceleration and removedSamplesForAcceleration in timeslices can be helpful for isolating the times at which the problem occurred and you can then correlate other metrics in the same timeslice to determine likely causes.
NOTE
The value is undefined for video streams.
-RemovedSamplesForAcceleration
-The better way in "How WebRTC's NetEQ Jitter Buffer Provides Smooth Audio" (webrtchacks.com, June 2025)
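The timeslice logging described above can be sketched as follows (the helper name and snapshot shapes are illustrative); comparing deltas between consecutive getStats() snapshots isolates when playout was stretched or compressed:

```javascript
// Per-timeslice deltas of the jitter buffer's time-stretching counters,
// useful for locating when audio playout was slowed down or sped up.
function playoutAdjustmentDeltas(earlier, later) {
  return {
    inserted: later.insertedSamplesForDeceleration - earlier.insertedSamplesForDeceleration,
    removed: later.removedSamplesForAcceleration - earlier.removedSamplesForAcceleration,
  };
}
```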
JitterBufferDelay
The jitterBufferDelay property of the RTCInboundRtpStreamStats dictionary indicates the accumulated time that all audio samples and complete video frames have spent in the jitter buffer.
[Value("jitterBufferDelay")]
public Number JitterBufferDelay
Field Value
- Number
A positive number, in seconds.
Remarks
For an audio sample the time is calculated from the time that the sample is received by the jitter buffer ("ingest timestamp"), until the time that the sample is emitted ("exit timestamp").
For a video frame, the ingest time is when the first packet in the frame was ingested until the time at which the whole frame exits the buffer.
Note that several audio samples in an RTP packet will have the same ingest timestamp but different exit timestamps, while a video frame might be split across a number of RTP packets.
jitterBufferDelay is incremented, along with JitterBufferEmittedCount, when samples or frames exit the buffer.
The average jitter buffer delay is jitterBufferDelay / jitterBufferEmittedCount.
The jitter buffer may hold samples/frames for a longer (or shorter) delay, allowing samples to build up in the buffer so that it can provide a more smooth and continuous playout.
A low and relatively constant jitterBufferDelay is desirable, as it indicates the buffer does not need to hold as many frames/samples, and the network is stable.
Higher values might indicate that the network is less reliable or predictable.
Similarly, a steady average delay indicates a more stable network, while a rising average delay indicates growing latency.
-JitterBufferEmittedCount
-JitterBufferMinimumDelay
-JitterBufferTargetDelay
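The average-delay calculation above can be sketched as (the helper name is ours):

```javascript
// Average jitter buffer delay, in seconds, per emitted sample/frame.
function averageJitterBufferDelay(stats) {
  if (!stats.jitterBufferEmittedCount) return 0; // nothing emitted yet
  return stats.jitterBufferDelay / stats.jitterBufferEmittedCount;
}
```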
JitterBufferEmittedCount
The jitterBufferEmittedCount property of the RTCInboundRtpStreamStats dictionary indicates the total number of audio samples and/or video frames that have come out of the jitter buffer.
[Value("jitterBufferEmittedCount")]
public ulong JitterBufferEmittedCount
Field Value
- ulong
A positive integer.
Remarks
The jitterBufferEmittedCount and JitterBufferDelay are incremented when samples or frames exit the buffer.
The average jitter buffer delay is jitterBufferDelay / jitterBufferEmittedCount.
-JitterBufferDelay
JitterBufferMinimumDelay
The jitterBufferMinimumDelay property of the RTCInboundRtpStreamStats dictionary indicates the minimum jitter buffer delay that might be achieved given only the network characteristics such as jitter and packet loss.
[Value("jitterBufferMinimumDelay")]
public Number JitterBufferMinimumDelay
Field Value
- Number
A positive number, in seconds.
Remarks
The jitter buffer delay may be impacted by user settings such as JitterBufferTarget, and WebRTC mechanisms such as AV synchronization. jitterBufferMinimumDelay can be compared to the JitterBufferTargetDelay to examine the effect of these external factors on the delay.
The property is updated when JitterBufferEmittedCount is updated.
-JitterBufferEmittedCount
-JitterBufferDelay
-JitterBufferTargetDelay
JitterBufferTargetDelay
The jitterBufferTargetDelay property of the RTCInboundRtpStreamStats dictionary indicates the accumulated target jitter buffer delay, in seconds.
[Value("jitterBufferTargetDelay")]
public Number JitterBufferTargetDelay
Field Value
- Number
A positive number, in seconds.
Remarks
The target jitter buffer delay is the playout delay that the jitter buffer estimates that it needs to maintain in order to compensate for jitter and ensure smooth playback.
The estimate is affected by network variability and latency as well as mechanisms such as AV synchronization. Developers can influence it by setting the JitterBufferTarget property.
The property is updated when JitterBufferEmittedCount is updated.
The average target jitter buffer delay is jitterBufferTargetDelay / jitterBufferEmittedCount.
The property can be compared to the average of the JitterBufferMinimumDelay to determine the effects of external factors on the target, such as the configured jitterBufferTarget hint.
-JitterBufferEmittedCount
-JitterBufferMinimumDelay
-JitterBufferDelay
KeyFramesDecoded
The keyFramesDecoded property of the RTCInboundRtpStreamStats dictionary represents the total number of key frames successfully decoded in this RTP media stream.
This includes, for example, key frames in VP8 (RFC 6386) or IDR-frames in H.264 (RFC 6184).
[Value("keyFramesDecoded")]
public ulong KeyFramesDecoded
Field Value
- ulong
A positive integer.
Remarks
Note that the number of delta frames can be calculated by subtracting this value from the total number of decoded frames (FramesDecoded - keyFramesDecoded).
NOTE
The property is undefined for audio streams.
LastPacketReceivedTimestamp
The lastPacketReceivedTimestamp property of the RTCInboundRtpStreamStats dictionary indicates the time at which the most recently received packet arrived from this source.
[Value("lastPacketReceivedTimestamp")]
public Number LastPacketReceivedTimestamp
Field Value
- Number
A Number which specifies the time at which the most recently received packet arrived on this RTP stream.
NOTE
This value differs from the RTCInboundRtpStreamStats.Timestamp,
which represents the time at which the statistics object was created.
Remarks
Mid
The mid property of the RTCInboundRtpStreamStats dictionary is a string that contains the media id negotiated between the local and remote peers.
This uniquely identifies the pairing of source and destination for the transceiver's stream.
[Value("mid")]
public string Mid
Field Value
- string
The value of the corresponding Mid, unless that value is null, in which case this statistic property is not present.
Remarks
NackCount
The nackCount property of the RTCInboundRtpStreamStats dictionary indicates the number of times the receiver sent a NACK packet to the sender.
[Value("nackCount")]
public ulong NackCount
Field Value
- ulong
A positive integer.
Remarks
A NACK (Negative ACKnowledgement, also called "Generic NACK") packet tells the sender that one or more of the RTP packets it sent were lost in transport.
PacketsDiscarded
The packetsDiscarded property of the RTCInboundRtpStreamStats dictionary indicates the cumulative number of RTP packets that have been discarded by the jitter buffer due to late or early arrival, and are hence not played out.
[Value("packetsDiscarded")]
public ulong PacketsDiscarded
Field Value
- ulong
A positive integer value. This is calculated as defined in RFC 7002, Section 3.2 (and Appendix A.a).
Remarks
The value does not include packets that are discarded due to packet duplication.
-RTCRemoteInboundRtpStreamStats.PacketsLost
-RTCRemoteInboundRtpStreamStats.PacketsReceived
PauseCount
NOTE
Experimental
The pauseCount property of the RTCInboundRtpStreamStats dictionary indicates the total number of pauses experienced by this receiver.
[Value("pauseCount")]
public ulong PauseCount
Field Value
- ulong
A positive integer.
Remarks
A pause is counted when a new frame is rendered more than 5 seconds after the last frame was rendered.
The average pause duration can be calculated using totalPausesDuration / pauseCount.
NOTE
The property is undefined for audio streams.
PlayoutId
NOTE
Experimental
The playoutId property of the RTCInboundRtpStreamStats dictionary indicates the RTCAudioPlayoutStats.Id of the RTCAudioPlayoutStats object that corresponds to this stream.
[Value("playoutId")]
public string PlayoutId
Field Value
- string
A string.
Remarks
NOTE
The value is undefined for video streams.
PliCount
[Value("pliCount")]
public ulong PliCount
Field Value
PowerEfficientDecoder
[Value("powerEfficientDecoder")]
public bool PowerEfficientDecoder
Field Value
QpSum
The qpSum property of the RTCInboundRtpStreamStats dictionary indicates the sum of the Quantization Parameter (QP) values for every frame received on the video track corresponding to this RTCInboundRtpStreamStats object.
[Value("qpSum")]
public ulong QpSum
Field Value
- ulong
A positive integer.
Remarks
In general, a larger number indicates that the video data is more heavily compressed.
NOTE
This value is only available for video media.
RemoteId
The remoteId property of the RTCInboundRtpStreamStats dictionary specifies the Id of the RTCRemoteOutboundRtpStreamStats object representing the remote peer's RTCRtpSender, which is sending the media to the local peer.
[Value("remoteId")]
public string RemoteId
Field Value
- string
A string.
Remarks
RemovedSamplesForAcceleration
The removedSamplesForAcceleration property of the RTCInboundRtpStreamStats dictionary accumulates the difference between the number of samples played out of the jitter buffer and the number of samples received while audio playout is sped up.
[Value("removedSamplesForAcceleration")]
public ulong RemovedSamplesForAcceleration
Field Value
- ulong
A positive integer.
Remarks
The WebRTC jitter buffer sets a target playout delay level such that the inflow and outflow of the jitter buffer should be approximately the same.
If the jitter buffer empties too slowly the audio sample that is next in line to be output may be "behind schedule", and the engine may speed up playout to catch up.
If the engine speeds up playout by removing some audio samples, this property indicates the accumulated number of such removed samples.
Speeding up or slowing down the audio (as tracked with InsertedSamplesForDeceleration) may result in audible warbling or other distortion.
The totals at the end of the call also give you some indication of how many samples or seconds were impacted, and removedSamplesForAcceleration can be correlated with RTCInboundRtpStreamStats.TotalSamplesReceived to get a relative measure of acceleration.
Logging insertedSamplesForDeceleration and removedSamplesForAcceleration in timeslices can be helpful for isolating the times at which the problem occurred and you can then correlate other metrics in the same timeslice to determine likely causes.
NOTE
The value is undefined for video streams.
-InsertedSamplesForDeceleration
-The better way in "How WebRTC's NetEQ Jitter Buffer Provides Smooth Audio" (webrtchacks.com, June 2025)
RetransmittedBytesReceived
[Value("retransmittedBytesReceived")]
public ulong RetransmittedBytesReceived
Field Value
RetransmittedPacketsReceived
[Value("retransmittedPacketsReceived")]
public ulong RetransmittedPacketsReceived
Field Value
RtxSsrc
[Value("rtxSsrc")]
public ulong RtxSsrc
Field Value
SilentConcealedSamples
The silentConcealedSamples property of the RTCInboundRtpStreamStats dictionary indicates the total number of silent concealed samples for the received audio track over the lifetime of this stats object.
[Value("silentConcealedSamples")]
public ulong SilentConcealedSamples
Field Value
- ulong
A positive integer.
Remarks
A concealed sample is a sample that was lost or arrived too late to be played out, and therefore had to be replaced with a locally generated synthesized sample.
A silent concealed sample is one where the inserted sample is either silent or comfort noise.
NOTE
The value is undefined for video streams.
-ConcealedSamples
-ConcealmentEvents
-Packet loss concealment on Wikipedia
TotalAssemblyTime
NOTE
Experimental
The totalAssemblyTime property of the RTCInboundRtpStreamStats dictionary indicates the total time spent assembling successfully decoded video frames that were transported in multiple RTP packets.
[Value("totalAssemblyTime")]
public Number TotalAssemblyTime
Field Value
- Number
A number, in seconds.
Remarks
NOTE
The value is undefined for audio streams.
TotalAudioEnergy
The totalAudioEnergy property of the RTCInboundRtpStreamStats dictionary represents the total audio energy of a received audio track over the lifetime of this stats object.
[Value("totalAudioEnergy")]
public Number TotalAudioEnergy
Field Value
- Number
A number produced by summing the energy of every sample over the lifetime of this stats object. The energy of each sample is calculated by dividing the sample's value by the highest-intensity encodable value, squaring the result, and then multiplying by the duration of the sample in seconds: energy = duration * (sample_level / max_level)^2.
Note that if multiple audio channels are used, the audio energy of a sample refers to the highest energy of any channel.
Remarks
The total energy across a particular duration can be determined by subtracting the value of this property returned by two different getStats() calls.
NOTE
The value is undefined for video streams.
-TotalAudioEnergy for audio energy of local tracks (that are being sent)
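Assuming snapshots from two different getStats() calls, the interval measurement described above can be sketched as follows (the helper name is ours); since the accumulated energy is a duration-weighted sum of squared levels, the square root of energy-per-second gives a root-mean-square audio level for the interval:

```javascript
// RMS audio level over the interval between two stats snapshots,
// derived from the totalAudioEnergy / totalSamplesDuration deltas.
function intervalAudioLevel(earlier, later) {
  const energy = later.totalAudioEnergy - earlier.totalAudioEnergy;
  const duration = later.totalSamplesDuration - earlier.totalSamplesDuration;
  return duration > 0 ? Math.sqrt(energy / duration) : 0;
}
```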
TotalCorruptionProbability
[Value("totalCorruptionProbability")]
public Number TotalCorruptionProbability
Field Value
TotalDecodeTime
The totalDecodeTime property of the RTCInboundRtpStreamStats dictionary indicates the total time spent decoding frames for this media source/stream, in seconds.
[Value("totalDecodeTime")]
public Number TotalDecodeTime
Field Value
- Number
A positive number, in seconds.
Remarks
The time it takes to decode one frame is the time passed between feeding the decoder a frame and the decoder returning decoded data for that frame.
The number of decoded frames is given in FramesDecoded, and the average decode time is totalDecodeTime / framesDecoded.
NOTE
The property is undefined for audio streams.
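The average decode time mentioned above, as a one-line sketch (the helper name is ours):

```javascript
// Average per-frame decode time, in seconds.
function averageDecodeTime(stats) {
  if (!stats.framesDecoded) return 0; // no frames decoded yet
  return stats.totalDecodeTime / stats.framesDecoded;
}
```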
TotalFreezesDuration
NOTE
Experimental
The totalFreezesDuration property of the RTCInboundRtpStreamStats dictionary indicates the total time that the video in this stream has spent frozen, in seconds.
[Value("totalFreezesDuration")]
public Number TotalFreezesDuration
Field Value
- Number
A positive number, in seconds.
Remarks
A freeze is counted if the interval between two rendered frames is equal to or greater than the larger of "three times the average frame duration" or "the average frame duration + 150 ms", and the time taken between frames is added to the totalFreezesDuration.
The average freeze duration can be calculated using totalFreezesDuration / freezeCount.
NOTE
The property is undefined for audio streams.
TotalInterFrameDelay
The totalInterFrameDelay property of the RTCInboundRtpStreamStats dictionary indicates the total accumulated time between consecutively rendered frames, in seconds.
It is recorded after each frame is rendered.
[Value("totalInterFrameDelay")]
public Number TotalInterFrameDelay
Field Value
- Number
A positive number, in seconds.
Remarks
The inter-frame delay variance can be calculated from totalInterFrameDelay, TotalSquaredInterFrameDelay, and FramesRendered according to the formula: (totalSquaredInterFrameDelay - totalInterFrameDelay^2 / framesRendered) / framesRendered.
NOTE
The property is undefined for audio streams.
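The variance formula above can be sketched as follows (an algebraically equivalent form; the helper name is ours):

```javascript
// Inter-frame delay variance: E[d^2] - (E[d])^2, computed from the
// accumulated sum and sum-of-squares counters.
function interFrameDelayVariance(stats) {
  const n = stats.framesRendered;
  if (!n) return 0; // no frames rendered yet
  const mean = stats.totalInterFrameDelay / n;
  return stats.totalSquaredInterFrameDelay / n - mean * mean;
}
```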
TotalPausesDuration
NOTE
Experimental
The totalPausesDuration property of the RTCInboundRtpStreamStats dictionary indicates the total time that the video in this stream has spent paused, in seconds.
[Value("totalPausesDuration")]
public Number TotalPausesDuration
Field Value
- Number
A positive number, in seconds.
Remarks
A pause is counted when a new frame is rendered more than 5 seconds after the last frame was rendered, and the time taken between frames is added to the totalPausesDuration.
The average pause duration can be calculated using totalPausesDuration / pauseCount.
NOTE
The property is undefined for audio streams.
TotalProcessingDelay
The totalProcessingDelay property of the RTCInboundRtpStreamStats dictionary indicates the total accumulated time spent processing audio or video samples, in seconds.
[Value("totalProcessingDelay")]
public Number TotalProcessingDelay
Field Value
- Number
A positive number, in seconds.
Remarks
The processing time for each audio or video sample is calculated from the time the first RTP packet is received (reception timestamp) to the time that the corresponding sample or frame is decoded (decoded timestamp).
At this point the audio sample or video frame is fully decoded by the decoder and is ready for playout by the MediaStreamTrack.
For audio streams, an RTP packet may contain multiple audio samples: these will share the same reception timestamp.
For video streams, a complete frame may arrive in several RTP packets, and the reception timestamp is that of the first RTP packet that was received that contains data for the frame.
In both cases the decoded timestamp is the time at which the sample or frame is ready to play.
For video, the property only accumulates for decoded frames (not those that were dropped).
The average processing delay can be calculated by dividing totalProcessingDelay by FramesDecoded.
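The average processing delay calculation above can be sketched as (the helper name is ours; per the remarks, this applies to video, where framesDecoded counts decoded frames):

```javascript
// Average per-frame processing delay (reception to decoded), in seconds.
function averageProcessingDelay(stats) {
  if (!stats.framesDecoded) return 0; // no frames decoded yet
  return stats.totalProcessingDelay / stats.framesDecoded;
}
```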
TotalSamplesDuration
The totalSamplesDuration property of the RTCInboundRtpStreamStats dictionary indicates the total duration of all audio samples that have been received.
In other words, the current duration of the track.
[Value("totalSamplesDuration")]
public Number TotalSamplesDuration
Field Value
- Number
A positive number, in seconds.
Remarks
This can be used with RTCInboundRtpStreamStats.TotalAudioEnergy to compute an average audio level over different intervals.
NOTE
The value is undefined for video streams.
-TotalSamplesDuration for the audio duration of sent samples.
-TotalSamplesReceived
TotalSamplesReceived
The totalSamplesReceived property of the RTCInboundRtpStreamStats dictionary indicates the total number of samples received on this stream, including RTCInboundRtpStreamStats.ConcealedSamples.
[Value("totalSamplesReceived")]
public ulong TotalSamplesReceived
Field Value
- ulong
A positive integer.
Remarks
NOTE
The value is undefined for video streams.
TotalSquaredCorruptionProbability
[Value("totalSquaredCorruptionProbability")]
public Number TotalSquaredCorruptionProbability
Field Value
TotalSquaredInterFrameDelay
The totalSquaredInterFrameDelay property of the RTCInboundRtpStreamStats dictionary indicates the sum of the square of each inter-frame delay between consecutively rendered frames.
It is recorded after each frame is rendered.
[Value("totalSquaredInterFrameDelay")]
public Number TotalSquaredInterFrameDelay
Field Value
- Number
A positive number.
Remarks
The inter-frame delay variance can be calculated from TotalInterFrameDelay, totalSquaredInterFrameDelay, and FramesRendered, according to the formula: (totalSquaredInterFrameDelay - totalInterFrameDelay^2 / framesRendered) / framesRendered.
NOTE
The property is undefined for audio streams.
TrackIdentifier
The trackIdentifier property of the RTCInboundRtpStreamStats dictionary is a string that identifies the MediaStreamTrack associated with the inbound stream.
[Value("trackIdentifier")]
public required string TrackIdentifier
Field Value
- string
A string that identifies the associated media track.
Remarks
This value will match the Id value of the associated track.