Bug 1480161: Fix underrun assert for processed input stream. r=achronop
author Andreas Pehrson <apehrson@mozilla.com>
Thu, 02 Aug 2018 10:25:05 +0000
changeset 429784 c6bc352b5e6240004c40db86d7ddec1671a5aa99
parent 429783 4fd9991396e23ec1dfd668a846c90728177b2951
child 429785 5121e251f01d60bea909032a32c950a8d9b7d96b
push id 34373
push user csabou@mozilla.com
push date Thu, 02 Aug 2018 13:09:17 +0000
treeherder mozilla-central@a2d65d03e46a
reviewers achronop
bugs 1480161
milestone 63.0a1
Bug 1480161: Fix underrun assert for processed input stream. r=achronop

The logic here intends to (as the comment says) append one block of silence to
the track, to allow us to underrun one full scratch buffer. The code doesn't
match this behavior, however: if we are ahead of the desired time (i.e. not
underrunning) by less than a block, we end up appending *less* than a full
block of silence. This makes us append again at a later time, because the
scratch buffer can swallow more (up to a full block) than we appended. Without
processing this seems to work because of timing and ordering, but with
processing (aec/agc/ns) we tend to add 71 samples of silence (512 for an
iteration minus 441 packetized), leaving us to hit the assert with a 44%
((128-71)/128) chance during subsequent iterations.

Differential Revision: https://phabricator.services.mozilla.com/D2644
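The essence of the fix, as the diff below shows, is to move the early return for
a negative delta ahead of the one-block padding, so the padding is only applied
when the track is genuinely behind. Below is a minimal standalone sketch of that
ordering; the SilenceToAppend() helper is hypothetical and stands in for the
real Pull()/track machinery (only WEBAUDIO_BLOCK_SIZE, delta and the
aDesiredTime/GetEndOfAppendedData() relationship come from the patch, everything
else is illustrative):

#include <cstdint>
#include <cstdio>

static const int64_t WEBAUDIO_BLOCK_SIZE = 128;

// Hypothetical helper: how many frames of silence Pull() would append for a
// given buffering state, using the fixed ordering from this patch.
static int64_t SilenceToAppend(int64_t aDesiredTime,
                               int64_t aEndOfAppendedData,
                               bool aLiveFramesAppended,
                               bool aLiveSilenceAppended)
{
  // Frames still missing up to the desired time. Negative means we have
  // already buffered past aDesiredTime, i.e. we are not underrunning.
  int64_t delta = aDesiredTime - aEndOfAppendedData;

  // The fix: bail out before padding. Appending 0 < n < 128 frames of
  // silence here is what later trips the assert.
  if (delta < 0) {
    return 0;
  }

  if (!aLiveFramesAppended || !aLiveSilenceAppended) {
    // Starting or resuming capture: keep one extra full block buffered so a
    // whole scratch buffer can underrun before audio callbacks arrive.
    delta += WEBAUDIO_BLOCK_SIZE;
  }

  return delta;
}

int main()
{
  // Processed input, 71 frames ahead (512 - 441): the old ordering would have
  // appended 128 - 71 = 57 frames; the fixed ordering appends none.
  printf("%lld\n", (long long)SilenceToAppend(1000, 1071, true, false));   // 0
  // Genuinely behind by 10 frames during startup: a full block plus 10.
  printf("%lld\n", (long long)SilenceToAppend(1000, 990, false, false));   // 138
  return 0;
}

The point of the reordering is that a negative delta now always short-circuits,
so the one-block padding can never be reduced to a partial block by data the
audio callback has already appended.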
dom/media/webrtc/MediaEngineWebRTCAudio.cpp
--- a/dom/media/webrtc/MediaEngineWebRTCAudio.cpp
+++ b/dom/media/webrtc/MediaEngineWebRTCAudio.cpp
@@ -800,32 +800,32 @@ MediaEngineWebRTCMicrophoneSource::Pull(
       // will already be ended. No need to do anything.
       return;
     }
 
     // We don't want to GetEndOfAppendedData() above at the declaration if the
     // allocation was removed and the track non-existent. An assert will fail.
     delta = aDesiredTime - aStream->GetEndOfAppendedData(aTrackID);
 
+    if (delta < 0) {
+      LOG_FRAMES(("Not appending silence for allocation %p; %" PRId64 " frames already buffered",
+                  mAllocations[i].mHandle.get(), -delta));
+      return;
+    }
+
     if (!mAllocations[i].mLiveFramesAppended ||
         !mAllocations[i].mLiveSilenceAppended) {
       // These are the iterations after starting or resuming audio capture.
       // Make sure there's at least one extra block buffered until audio
       // callbacks come in. We also allow appending silence one time after
       // audio callbacks have started, to cover the case where audio callbacks
       // start appending data immediately and there is no extra data buffered.
       delta += WEBAUDIO_BLOCK_SIZE;
     }
 
-    if (delta < 0) {
-      LOG_FRAMES(("Not appending silence for allocation %p; %" PRId64 " frames already buffered",
-                  mAllocations[i].mHandle.get(), -delta));
-      return;
-    }
-
     LOG_FRAMES(("Pulling %" PRId64 " frames of silence for allocation %p",
                 delta, mAllocations[i].mHandle.get()));
 
     // This assertion fails when we append silence here in the same iteration
     // as there were real audio samples already appended by the audio callback.
     // Note that this is exempted until live samples and a subsequent chunk of
     // silence have been appended to the track. This will cover cases like:
     // - After Start(), there is silence (maybe multiple times) appended before
     //   the first audio callback.