Backed out 39 changesets (bug 1199032, bug 1180935, bug 1190776, bug 1194197, bug 1194188, bug 1190970, bug 1197977, bug 1196558, bug 1196353, bug 1199531, bug 1198094, bug 1192675, bug 1197075, bug 1197051, bug 1197125, bug 1188871, bug 1188313, bug 1185827, bug 1195073, bug 1193142, bug 1195071, bug 1193123, bug 1199193)
author Ralph Giles <giles@mozilla.com>
Fri, 28 Aug 2015 15:16:54 -0700
changeset 277406 5bb661db5c6c6e413fc4f566529ff2943270666f
parent 277405 3da0d30d73b0c04d91656e3ec27c851960e6dce2
child 277407 a8c68a70c28579a54d9d814a5e70bc618ba8b132
push id 8362
push user rgiles@mozilla.com
push date Fri, 28 Aug 2015 22:17:43 +0000
treeherder mozilla-aurora@5bb661db5c6c
bugs 1199032, 1180935, 1190776, 1194197, 1194188, 1190970, 1197977, 1196558, 1196353, 1199531, 1198094, 1192675, 1197075, 1197051, 1197125, 1188871, 1188313, 1185827, 1195073, 1193142, 1195071, 1193123, 1199193, 1199573
milestone 42.0a2
backs out e0eaf1f5b1d098abe1bbeab67007df5eccee1185
ee5b4f72fbd2601012dedfe4c536f759a7740bbe
a423bc891649484341a9adeb9b4b97b916ad90aa
6c48368813f8102c4f6c18cc13dd8c25ddb4d4d5
cb7bf94606ad0522d04f7bd75184521c2ef1fd7c
d7a0bba042a7daef9ec2058b0f508a6e4e943945
7cd3b6065d3ebb7ec8e0d21821b989519c5b11c1
a21aa658533cb5e74d29fe0e2d1f801482b2784f
003548c8673548434dd5d575970c5de616fd03d6
c73d9e678d632e439710b7c2179c7f42e9f21c06
c45b13902e53e4b67e67914db6ee11abe42daa95
4c993f78e9f018c5576999a0512e0791151cc6d6
1fb0a72a7fca8a839412752013851e2c57d48482
cbbb8dbd27450cbbfff4b8edb4360ba72aaab2de
e24897436eaa601d8a9ad299c4d47659f7953cdf
23b5669479f855ab687a7bde4e97dcce4d47ba43
efb94d2f8b6832edc408b5deb4d771316b195715
d2ed0c5abd35f821508afe5097394c9824ce7eb3
73daa8e4568dff87d1e8c488d9ab1278c3d9e578
89ec7fc554b9a7bd6d0543a5c7e8e15da2a0bfa2
6c3044880eb142f186dc440fdfd3ac3568099871
1f8570570a4fb48c0f4c0880cf81a2132dd9b6e9
fdc74c64c9c925e885c85f0665db4b2acdbd66f5
c64e11bca05f4488d9bec3f1a05c1de78efdd015
43c9dbedf7dec89862ffbfd98cd4a51becd32952
eef6993d896ed4ae6e19c3e1079b689edd6499cb
744ce5c31af5b6121dd450ec25d32999d80b0090
0a9391fdb35077aa8f36ee9b7687e06ad80a05e7
b171d1b0b0ecbaeb35fa6914a2e37e4c04cafc87
7cac5bfef3c58bda7a2d8d140e385dedb76be7f7
0439f40dddbe6189e3ab0dfdd73750146962ec1c
ac6d673c6fd3741e477960d344a854840a7b3f68
7d421d6457134978ec7dd4b85db4ff631dd08897
174a4342d074ca921fb99aa3aeb1de792951d713
2f76c5ebacaf4a554ddf1df211f0813872eda06d
ff30230e26210fbb6691426cd31e1c16bb672bb4
e9142a9acf59316005522e57e0826488ca971098
6613d06f28bc376578edb759852683e34f615c79
45f3c6119bcaa5e6983e1baef92c6b37bc17e34a
Backed out 39 changesets (bug 1199032, bug 1180935, bug 1190776, bug 1194197, bug 1194188, bug 1190970, bug 1197977, bug 1196558, bug 1196353, bug 1199531, bug 1198094, bug 1192675, bug 1197075, bug 1197051, bug 1197125, bug 1188871, bug 1188313, bug 1185827, bug 1195073, bug 1193142, bug 1195071, bug 1193123, bug 1199193)

Youtube regressions. See bug 1199573.

Backed out changeset e0eaf1f5b1d0 (bug 1199531)
Backed out changeset ee5b4f72fbd2 (bug 1196353)
Backed out changeset a423bc891649 (bug 1190776)
Backed out changeset 6c48368813f8 (bug 1197125)
Backed out changeset cb7bf94606ad (bug 1199032)
Backed out changeset d7a0bba042a7 (bug 1199032)
Backed out changeset 7cd3b6065d3e (bug 1197977)
Backed out changeset a21aa658533c (bug 1197075)
Backed out changeset 003548c86735 (bug 1197075)
Backed out changeset c73d9e678d63 (bug 1197075)
Backed out changeset c45b13902e53 (bug 1197051)
Backed out changeset 4c993f78e9f0 (bug 1195073)
Backed out changeset 1fb0a72a7fca (bug 1195073)
Backed out changeset cbbb8dbd2745 (bug 1195073)
Backed out changeset e24897436eaa (bug 1195073)
Backed out changeset 23b5669479f8 (bug 1195073)
Backed out changeset efb94d2f8b68 (bug 1195073)
Backed out changeset d2ed0c5abd35 (bug 1195073)
Backed out changeset 73daa8e4568d (bug 1195073)
Backed out changeset 89ec7fc554b9 (bug 1195071)
Backed out changeset 6c3044880eb1 (bug 1194197)
Backed out changeset 1f8570570a4f (bug 1194188)
Backed out changeset fdc74c64c9c9 (bug 1193142)
Backed out changeset c64e11bca05f (bug 1193123)
Backed out changeset 43c9dbedf7de (bug 1199193)
Backed out changeset eef6993d896e (bug 1198094)
Backed out changeset 744ce5c31af5 (bug 1196558)
Backed out changeset 0a9391fdb350 (bug 1192675)
Backed out changeset b171d1b0b0ec (bug 1190970)
Backed out changeset 7cac5bfef3c5 (bug 1190970)
Backed out changeset 0439f40dddbe (bug 1188871)
Backed out changeset ac6d673c6fd3 (bug 1188871)
Backed out changeset 7d421d645713 (bug 1185827)
Backed out changeset 174a4342d074 (bug 1188313)
Backed out changeset 2f76c5ebacaf (bug 1180935)
Backed out changeset ff30230e2621 (bug 1180935)
Backed out changeset e9142a9acf59 (bug 1180935)
Backed out changeset 6613d06f28bc (bug 1180935)
Backed out changeset 45f3c6119bca (bug 1180935)
dom/media/MediaDecoderReader.cpp
dom/media/MediaDecoderReader.h
dom/media/MediaDecoderStateMachine.cpp
dom/media/MediaFormatReader.cpp
dom/media/MediaFormatReader.h
dom/media/XiphExtradata.cpp
dom/media/XiphExtradata.h
dom/media/mediasource/ContainerParser.cpp
dom/media/mediasource/MediaSource.cpp
dom/media/mediasource/MediaSourceReader.cpp
dom/media/mediasource/MediaSourceReader.h
dom/media/mediasource/SourceBuffer.cpp
dom/media/mediasource/SourceBufferResource.h
dom/media/mediasource/TrackBuffersManager.cpp
dom/media/mediasource/TrackBuffersManager.h
dom/media/mediasource/test/mochitest.ini
dom/media/mediasource/test/test_BufferingWait_mp4.html
dom/media/mediasource/test/test_WaitingOnMissingData_mp4.html
dom/media/mediasource/test/test_WaitingToEndedTransition_mp4.html
dom/media/moz.build
dom/media/omx/MediaCodecReader.cpp
dom/media/omx/MediaCodecReader.h
dom/media/omx/RtspMediaCodecReader.cpp
dom/media/omx/RtspMediaCodecReader.h
dom/media/platforms/agnostic/VorbisDecoder.cpp
dom/media/platforms/android/AndroidDecoderModule.cpp
dom/media/platforms/apple/AppleUtils.h
dom/media/platforms/apple/AppleVDADecoder.cpp
dom/media/platforms/apple/AppleVDADecoder.h
dom/media/platforms/apple/AppleVTDecoder.cpp
dom/media/platforms/apple/AppleVTDecoder.h
dom/media/webm/WebMBufferedParser.cpp
dom/media/webm/WebMBufferedParser.h
dom/media/webm/WebMDemuxer.cpp
dom/media/webm/WebMDemuxer.h
dom/tests/mochitest/general/test_interfaces.html
modules/libpref/init/all.js
testing/web-platform/meta/media-source/mediasource-append-buffer.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-a-bitrate.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-av-audio-bitrate.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-av-framesize.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-av-video-bitrate.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-v-bitrate.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framerate.html.ini
testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framesize.html.ini
testing/web-platform/meta/media-source/mediasource-sourcebuffer-mode.html.ini
--- a/dom/media/MediaDecoderReader.cpp
+++ b/dom/media/MediaDecoderReader.cpp
@@ -295,17 +295,18 @@ public:
   }
 
   NS_METHOD Run()
   {
     MOZ_ASSERT(mReader->OnTaskQueue());
 
     // Make sure ResetDecode hasn't been called in the mean time.
     if (!mReader->mBaseVideoPromise.IsEmpty()) {
-      mReader->RequestVideoData(/* aSkip = */ true, mTimeThreshold);
+      mReader->RequestVideoData(/* aSkip = */ true, mTimeThreshold,
+                                /* aForceDecodeAhead = */ false);
     }
 
     return NS_OK;
   }
 
 private:
   nsRefPtr<MediaDecoderReader> mReader;
   const int64_t mTimeThreshold;
@@ -332,17 +333,18 @@ public:
   }
 
 private:
   nsRefPtr<MediaDecoderReader> mReader;
 };
 
 nsRefPtr<MediaDecoderReader::VideoDataPromise>
 MediaDecoderReader::RequestVideoData(bool aSkipToNextKeyframe,
-                                     int64_t aTimeThreshold)
+                                     int64_t aTimeThreshold,
+                                     bool aForceDecodeAhead)
 {
   nsRefPtr<VideoDataPromise> p = mBaseVideoPromise.Ensure(__func__);
   bool skip = aSkipToNextKeyframe;
   while (VideoQueue().GetSize() == 0 &&
          !VideoQueue().IsFinished()) {
     if (!DecodeVideoFrame(skip, aTimeThreshold)) {
       VideoQueue().Finish();
     } else if (skip) {
--- a/dom/media/MediaDecoderReader.h
+++ b/dom/media/MediaDecoderReader.h
@@ -143,17 +143,17 @@ public:
 
   // Requests one video sample from the reader.
   //
   // Don't hold the decoder monitor while calling this, as the implementation
   // may try to wait on something that needs the monitor and deadlock.
   // If aSkipToKeyframe is true, the decode should skip ahead to the
   // the next keyframe at or after aTimeThreshold microseconds.
   virtual nsRefPtr<VideoDataPromise>
-  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold);
+  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold, bool aForceDecodeAhead);
 
   friend class ReRequestVideoWithSkipTask;
   friend class ReRequestAudioTask;
 
   // By default, the state machine polls the reader once per second when it's
   // in buffering mode. Some readers support a promise-based mechanism by which
   // they notify the state machine when the data arrives.
   virtual bool IsWaitForDataSupported() { return false; }
--- a/dom/media/MediaDecoderStateMachine.cpp
+++ b/dom/media/MediaDecoderStateMachine.cpp
@@ -166,16 +166,17 @@ static TimeDuration UsecsToDuration(int6
 }
 
 static int64_t DurationToUsecs(TimeDuration aDuration) {
   return static_cast<int64_t>(aDuration.ToSeconds() * USECS_PER_S);
 }
 
 static const uint32_t MIN_VIDEO_QUEUE_SIZE = 3;
 static const uint32_t MAX_VIDEO_QUEUE_SIZE = 10;
+static const uint32_t SCARCE_VIDEO_QUEUE_SIZE = 1;
 static const uint32_t VIDEO_QUEUE_SEND_TO_COMPOSITOR_SIZE = 9999;
 
 static uint32_t sVideoQueueDefaultSize = MAX_VIDEO_QUEUE_SIZE;
 static uint32_t sVideoQueueHWAccelSize = MIN_VIDEO_QUEUE_SIZE;
 static uint32_t sVideoQueueSendToCompositorSize = VIDEO_QUEUE_SEND_TO_COMPOSITOR_SIZE;
 
 MediaDecoderStateMachine::MediaDecoderStateMachine(MediaDecoder* aDecoder,
                                                    MediaDecoderReader* aReader,
@@ -1734,34 +1735,36 @@ MediaDecoderStateMachine::RequestVideoDa
   // Time the video decode, so that if it's slow, we can increase our low
   // audio threshold to reduce the chance of an audio underrun while we're
   // waiting for a video decode to complete.
   mVideoDecodeStartTime = TimeStamp::Now();
 
   bool skipToNextKeyFrame = mSentFirstFrameLoadedEvent &&
     NeedToSkipToNextKeyframe();
   int64_t currentTime = mState == DECODER_STATE_SEEKING ? 0 : GetMediaTime();
+  bool forceDecodeAhead = mSentFirstFrameLoadedEvent &&
+    static_cast<uint32_t>(VideoQueue().GetSize()) <= SCARCE_VIDEO_QUEUE_SIZE;
 
   SAMPLE_LOG("Queueing video task - queued=%i, decoder-queued=%o, skip=%i, time=%lld",
              VideoQueue().GetSize(), mReader->SizeOfVideoQueueInFrames(), skipToNextKeyFrame,
              currentTime);
 
   if (mSentFirstFrameLoadedEvent) {
     mVideoDataRequest.Begin(
       ProxyMediaCall(DecodeTaskQueue(), mReader.get(), __func__,
                      &MediaDecoderReader::RequestVideoData,
-                     skipToNextKeyFrame, currentTime)
+                     skipToNextKeyFrame, currentTime, forceDecodeAhead)
       ->Then(OwnerThread(), __func__, this,
              &MediaDecoderStateMachine::OnVideoDecoded,
              &MediaDecoderStateMachine::OnVideoNotDecoded));
   } else {
     mVideoDataRequest.Begin(
       ProxyMediaCall(DecodeTaskQueue(), mReader.get(), __func__,
                      &MediaDecoderReader::RequestVideoData,
-                     skipToNextKeyFrame, currentTime)
+                     skipToNextKeyFrame, currentTime, forceDecodeAhead)
       ->Then(OwnerThread(), __func__, mStartTimeRendezvous.get(),
              &StartTimeRendezvous::ProcessFirstSample<VideoDataPromise>,
              &StartTimeRendezvous::FirstSampleRejected<VideoData>)
       ->CompletionPromise()
       ->Then(OwnerThread(), __func__, this,
              &MediaDecoderStateMachine::OnVideoDecoded,
              &MediaDecoderStateMachine::OnVideoNotDecoded));
   }
--- a/dom/media/MediaFormatReader.cpp
+++ b/dom/media/MediaFormatReader.cpp
@@ -539,17 +539,18 @@ MediaFormatReader::ShouldSkip(bool aSkip
   if (NS_FAILED(rv)) {
     return aSkipToNextKeyframe;
   }
   return nextKeyframe < aTimeThreshold && nextKeyframe.ToMicroseconds() >= 0;
 }
 
 nsRefPtr<MediaDecoderReader::VideoDataPromise>
 MediaFormatReader::RequestVideoData(bool aSkipToNextKeyframe,
-                                    int64_t aTimeThreshold)
+                                    int64_t aTimeThreshold,
+                                    bool aForceDecodeAhead)
 {
   MOZ_ASSERT(OnTaskQueue());
   MOZ_DIAGNOSTIC_ASSERT(mSeekPromise.IsEmpty(), "No sample requests allowed while seeking");
   MOZ_DIAGNOSTIC_ASSERT(!mVideo.HasPromise(), "No duplicate sample requests");
   MOZ_DIAGNOSTIC_ASSERT(!mVideo.mSeekRequest.Exists() ||
                         mVideo.mTimeThreshold.isSome());
   MOZ_DIAGNOSTIC_ASSERT(!mSkipRequest.Exists(), "called mid-skipping");
   MOZ_DIAGNOSTIC_ASSERT(!IsSeeking(), "called mid-seek");
@@ -572,16 +573,17 @@ MediaFormatReader::RequestVideoData(bool
 
   if (!EnsureDecodersSetup()) {
     NS_WARNING("Error constructing decoders");
     return VideoDataPromise::CreateAndReject(DECODE_ERROR, __func__);
   }
 
   MOZ_ASSERT(HasVideo() && mPlatform && mVideo.mDecoder);
 
+  mVideo.mForceDecodeAhead = aForceDecodeAhead;
   media::TimeUnit timeThreshold{media::TimeUnit::FromMicroseconds(aTimeThreshold)};
   if (ShouldSkip(aSkipToNextKeyframe, timeThreshold)) {
     Flush(TrackInfo::kVideoTrack);
     nsRefPtr<VideoDataPromise> p = mVideo.mPromise.Ensure(__func__);
     SkipVideoDemuxToNextKeyFrame(timeThreshold);
     return p;
   }
 
@@ -697,18 +699,18 @@ MediaFormatReader::OnAudioDemuxCompleted
   mAudio.mQueuedSamples.AppendElements(aSamples->mSamples);
   ScheduleUpdate(TrackInfo::kAudioTrack);
 }
 
 void
 MediaFormatReader::NotifyNewOutput(TrackType aTrack, MediaData* aSample)
 {
   MOZ_ASSERT(OnTaskQueue());
-  LOGV("Received new %s sample time:%lld duration:%lld",
-       TrackTypeToStr(aTrack), aSample->mTime, aSample->mDuration);
+  LOGV("Received new sample time:%lld duration:%lld",
+       aSample->mTime, aSample->mDuration);
   auto& decoder = GetDecoderData(aTrack);
   if (!decoder.mOutputRequested) {
     LOG("MediaFormatReader produced output while flushing, discarding.");
     return;
   }
   decoder.mOutput.AppendElement(aSample);
   decoder.mNumSamplesOutput++;
   decoder.mNumSamplesOutputTotal++;
@@ -741,25 +743,27 @@ MediaFormatReader::NotifyDrainComplete(T
 
 void
 MediaFormatReader::NotifyError(TrackType aTrack)
 {
   MOZ_ASSERT(OnTaskQueue());
   LOGV("%s Decoding error", TrackTypeToStr(aTrack));
   auto& decoder = GetDecoderData(aTrack);
   decoder.mError = true;
+  decoder.mNeedDraining = true;
   ScheduleUpdate(aTrack);
 }
 
 void
 MediaFormatReader::NotifyWaitingForData(TrackType aTrack)
 {
   MOZ_ASSERT(OnTaskQueue());
   auto& decoder = GetDecoderData(aTrack);
   decoder.mWaitingForData = true;
+  decoder.mNeedDraining = true;
   ScheduleUpdate(aTrack);
 }
 
 void
 MediaFormatReader::NotifyEndOfStream(TrackType aTrack)
 {
   MOZ_ASSERT(OnTaskQueue());
   auto& decoder = GetDecoderData(aTrack);
@@ -775,21 +779,22 @@ MediaFormatReader::NeedInput(DecoderData
   // We try to keep a few more compressed samples input than decoded samples
   // have been output, provided the state machine has requested we send it a
   // decoded sample. To account for H.264 streams which may require a longer
   // run of input than we input, decoders fire an "input exhausted" callback,
   // which overrides our "few more samples" threshold.
   return
     !aDecoder.mDraining &&
     !aDecoder.mError &&
-    aDecoder.HasPromise() &&
+    (aDecoder.HasPromise() || aDecoder.mForceDecodeAhead) &&
     !aDecoder.mDemuxRequest.Exists() &&
     aDecoder.mOutput.IsEmpty() &&
     (aDecoder.mInputExhausted || !aDecoder.mQueuedSamples.IsEmpty() ||
      aDecoder.mTimeThreshold.isSome() ||
+     aDecoder.mForceDecodeAhead ||
      aDecoder.mNumSamplesInput - aDecoder.mNumSamplesOutput < aDecoder.mDecodeAhead);
 }
 
 void
 MediaFormatReader::ScheduleUpdate(TrackType aTrack)
 {
   MOZ_ASSERT(OnTaskQueue());
   if (mShutdown) {
@@ -901,17 +906,17 @@ MediaFormatReader::DecodeDemuxedSamples(
 
       if (decoder.mNextStreamSourceID.isNothing() ||
           decoder.mNextStreamSourceID.ref() != info->GetID()) {
         LOG("%s stream id has changed from:%d to:%d, draining decoder.",
             TrackTypeToStr(aTrack), decoder.mLastStreamSourceID,
             info->GetID());
         decoder.mNeedDraining = true;
         decoder.mNextStreamSourceID = Some(info->GetID());
-        ScheduleUpdate(aTrack);
+        DrainDecoder(aTrack);
         return;
       }
 
       LOG("%s stream id has changed from:%d to:%d, recreating decoder.",
           TrackTypeToStr(aTrack), decoder.mLastStreamSourceID,
           info->GetID());
       decoder.mInfo = info;
       decoder.mLastStreamSourceID = info->GetID();
@@ -968,21 +973,17 @@ MediaFormatReader::DecodeDemuxedSamples(
     LOGV("Input:%lld (dts:%lld kf:%d)",
          sample->mTime, sample->mTimecode, sample->mKeyframe);
     decoder.mOutputRequested = true;
     decoder.mNumSamplesInput++;
     decoder.mSizeOfQueue++;
     if (aTrack == TrackInfo::kVideoTrack) {
       aA.mParsed++;
     }
-    if (NS_FAILED(decoder.mDecoder->Input(sample))) {
-      LOG("Unable to pass frame to decoder");
-      NotifyError(aTrack);
-      return;
-    }
+    decoder.mDecoder->Input(sample);
     decoder.mQueuedSamples.RemoveElementAt(0);
     samplesPending = true;
   }
 
   // We have serviced the decoder's request for more data.
   decoder.mInputExhausted = false;
 }
 
@@ -1026,22 +1027,16 @@ MediaFormatReader::Update(TrackType aTra
   auto& decoder = GetDecoderData(aTrack);
   decoder.mUpdateScheduled = false;
 
   if (UpdateReceivedNewData(aTrack)) {
     LOGV("Nothing more to do");
     return;
   }
 
-  if (!decoder.HasPromise() && decoder.mWaitingForData) {
-    // Nothing more we can do at present.
-    LOGV("Still waiting for data.");
-    return;
-  }
-
   // Record number of frames decoded and parsed. Automatically update the
   // stats counters using the AutoNotifyDecoded stack-based class.
   AbstractMediaDecoder::AutoNotifyDecoded a(mDecoder);
 
   if (aTrack == TrackInfo::kVideoTrack) {
     uint64_t delta =
       decoder.mNumSamplesOutputTotal - mLastReportedNumDecodedFrames;
     a.mDecoded = static_cast<uint32_t>(delta);
@@ -1076,43 +1071,42 @@ MediaFormatReader::Update(TrackType aTra
       decoder.mDrainComplete = false;
       decoder.mDraining = false;
       if (decoder.mError) {
         LOG("Decoding Error");
         decoder.RejectPromise(DECODE_ERROR, __func__);
         return;
       } else if (decoder.mDemuxEOS) {
         decoder.RejectPromise(END_OF_STREAM, __func__);
+      } else if (decoder.mWaitingForData) {
+        LOG("Waiting For Data");
+        decoder.RejectPromise(WAITING_FOR_DATA, __func__);
       }
-    } else if (decoder.mError) {
+    } else if (decoder.mError && !decoder.mDecoder) {
       decoder.RejectPromise(DECODE_ERROR, __func__);
       return;
-    } else if (decoder.mWaitingForData) {
-      LOG("Waiting For Data");
-      decoder.RejectPromise(WAITING_FOR_DATA, __func__);
-      return;
     }
   }
 
-  if (decoder.mNeedDraining) {
+  if (decoder.mError || decoder.mDemuxEOS || decoder.mWaitingForData) {
     DrainDecoder(aTrack);
     return;
   }
 
   if (!NeedInput(decoder)) {
     LOGV("No need for additional input");
     return;
   }
 
   needInput = true;
 
-  LOGV("Update(%s) ni=%d no=%d ie=%d, in:%llu out:%llu qs=%u sid:%u",
+  LOGV("Update(%s) ni=%d no=%d ie=%d, in:%d out:%d qs=%d sid:%d",
        TrackTypeToStr(aTrack), needInput, needOutput, decoder.mInputExhausted,
        decoder.mNumSamplesInput, decoder.mNumSamplesOutput,
-       uint32_t(size_t(decoder.mSizeOfQueue)), decoder.mLastStreamSourceID);
+       size_t(decoder.mSizeOfQueue), decoder.mLastStreamSourceID);
 
   // Demux samples if we don't have some.
   RequestDemuxSamples(aTrack);
   // Decode all pending demuxed samples.
   DecodeDemuxedSamples(aTrack, a);
 }
 
 void
@@ -1318,28 +1312,28 @@ MediaFormatReader::OnVideoSkipFailed(Med
   MOZ_ASSERT(OnTaskQueue());
   LOG("Skipping failed, skipped %u frames", aFailure.mSkipped);
   mSkipRequest.Complete();
   mDecoder->NotifyDecodedFrames(aFailure.mSkipped, 0, aFailure.mSkipped);
   MOZ_ASSERT(mVideo.HasPromise());
   switch (aFailure.mFailure) {
     case DemuxerFailureReason::END_OF_STREAM:
       NotifyEndOfStream(TrackType::kVideoTrack);
+      mVideo.RejectPromise(END_OF_STREAM, __func__);
       break;
     case DemuxerFailureReason::WAITING_FOR_DATA:
       NotifyWaitingForData(TrackType::kVideoTrack);
+      mVideo.RejectPromise(WAITING_FOR_DATA, __func__);
       break;
     case DemuxerFailureReason::CANCELED:
     case DemuxerFailureReason::SHUTDOWN:
-      if (mVideo.HasPromise()) {
-        mVideo.RejectPromise(CANCELED, __func__);
-      }
       break;
     default:
       NotifyError(TrackType::kVideoTrack);
+      mVideo.RejectPromise(DECODE_ERROR, __func__);
       break;
   }
 }
 
 nsRefPtr<MediaDecoderReader::SeekPromise>
 MediaFormatReader::Seek(int64_t aTime, int64_t aUnused)
 {
   MOZ_ASSERT(OnTaskQueue());
@@ -1357,18 +1351,17 @@ MediaFormatReader::Seek(int64_t aTime, i
     LOG("Seek() END (Unseekable)");
     return SeekPromise::CreateAndReject(NS_ERROR_FAILURE, __func__);
   }
 
   if (mShutdown) {
     return SeekPromise::CreateAndReject(NS_ERROR_FAILURE, __func__);
   }
 
-  mOriginalSeekTime = Some(media::TimeUnit::FromMicroseconds(aTime));
-  mPendingSeekTime = mOriginalSeekTime;
+  mPendingSeekTime.emplace(media::TimeUnit::FromMicroseconds(aTime));
 
   nsRefPtr<SeekPromise> p = mSeekPromise.Ensure(__func__);
 
   AttemptSeek();
 
   return p;
 }
 
@@ -1392,44 +1385,16 @@ MediaFormatReader::OnSeekFailed(TrackTyp
   LOGV("%s failure:%d", TrackTypeToStr(aTrack), aResult);
   if (aTrack == TrackType::kVideoTrack) {
     mVideo.mSeekRequest.Complete();
   } else {
     mAudio.mSeekRequest.Complete();
   }
 
   if (aResult == DemuxerFailureReason::WAITING_FOR_DATA) {
-    if (HasVideo() && aTrack == TrackType::kAudioTrack &&
-        mOriginalSeekTime.isSome() &&
-        mPendingSeekTime.ref() != mOriginalSeekTime.ref()) {
-      // We have failed to seek audio where video seeked to earlier.
-      // Attempt to seek instead to the closest point that we know we have in
-      // order to limit A/V sync discrepency.
-
-      // Ensure we have the most up to date buffered ranges.
-      UpdateReceivedNewData(TrackType::kAudioTrack);
-      Maybe<media::TimeUnit> nextSeekTime;
-      // Find closest buffered time found after video seeked time.
-      for (const auto& timeRange : mAudio.mTimeRanges) {
-        if (timeRange.mStart >= mPendingSeekTime.ref()) {
-          nextSeekTime.emplace(timeRange.mStart);
-          break;
-        }
-      }
-      if (nextSeekTime.isNothing() ||
-          nextSeekTime.ref() > mOriginalSeekTime.ref()) {
-        nextSeekTime = mOriginalSeekTime;
-        LOG("Unable to seek audio to video seek time. A/V sync may be broken");
-      } else {
-        mOriginalSeekTime.reset();
-      }
-      mPendingSeekTime = nextSeekTime;
-      DoAudioSeek();
-      return;
-    }
     NotifyWaitingForData(aTrack);
     return;
   }
   MOZ_ASSERT(!mVideo.mSeekRequest.Exists() && !mAudio.mSeekRequest.Exists());
   mPendingSeekTime.reset();
   mSeekPromise.Reject(NS_ERROR_FAILURE, __func__);
 }
 
@@ -1449,17 +1414,16 @@ void
 MediaFormatReader::OnVideoSeekCompleted(media::TimeUnit aTime)
 {
   MOZ_ASSERT(OnTaskQueue());
   LOGV("Video seeked to %lld", aTime.ToMicroseconds());
   mVideo.mSeekRequest.Complete();
 
   if (HasAudio()) {
     MOZ_ASSERT(mPendingSeekTime.isSome());
-    mPendingSeekTime = Some(aTime);
     DoAudioSeek();
   } else {
     mPendingSeekTime.reset();
     mSeekPromise.Resolve(aTime.ToMicroseconds(), __func__);
   }
 }
 
 void
--- a/dom/media/MediaFormatReader.h
+++ b/dom/media/MediaFormatReader.h
@@ -30,17 +30,17 @@ public:
   virtual ~MediaFormatReader();
 
   nsresult Init(MediaDecoderReader* aCloneDonor) override;
 
   size_t SizeOfVideoQueueInFrames() override;
   size_t SizeOfAudioQueueInFrames() override;
 
   nsRefPtr<VideoDataPromise>
-  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold) override;
+  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold, bool aForceDecodeAhead) override;
 
   nsRefPtr<AudioDataPromise> RequestAudioData() override;
 
   bool HasVideo() override
   {
     return mVideo.mTrackDemuxer;
   }
 
@@ -183,16 +183,17 @@ private:
 
   struct DecoderData {
     DecoderData(MediaFormatReader* aOwner,
                 MediaData::Type aType,
                 uint32_t aDecodeAhead)
       : mOwner(aOwner)
       , mType(aType)
       , mDecodeAhead(aDecodeAhead)
+      , mForceDecodeAhead(false)
       , mUpdateScheduled(false)
       , mDemuxEOS(false)
       , mWaitingForData(false)
       , mReceivedNewData(false)
       , mDiscontinuity(true)
       , mOutputRequested(false)
       , mInputExhausted(false)
       , mError(false)
@@ -216,16 +217,17 @@ private:
     // TaskQueue on which decoder can choose to decode.
     // Only non-null up until the decoder is created.
     nsRefPtr<FlushableTaskQueue> mTaskQueue;
     // Callback that receives output and error notifications from the decoder.
     nsAutoPtr<DecoderCallback> mCallback;
 
     // Only accessed from reader's task queue.
     uint32_t mDecodeAhead;
+    bool mForceDecodeAhead;
     bool mUpdateScheduled;
     bool mDemuxEOS;
     bool mWaitingForData;
     bool mReceivedNewData;
     bool mDiscontinuity;
 
     // Pending seek.
     MozPromiseRequestHolder<MediaTrackDemuxer::SeekPromise> mSeekRequest;
@@ -269,16 +271,17 @@ private:
       // Clear demuxer related data.
       mDemuxRequest.DisconnectIfExists();
       mTrackDemuxer->Reset();
     }
 
     void ResetState()
     {
       MOZ_ASSERT(mOwner->OnTaskQueue());
+      mForceDecodeAhead = false;
       mDemuxEOS = false;
       mWaitingForData = false;
       mReceivedNewData = false;
       mDiscontinuity = true;
       mQueuedSamples.Clear();
       mOutputRequested = false;
       mInputExhausted = false;
       mNeedDraining = false;
@@ -402,17 +405,16 @@ private:
 
   void DoAudioSeek();
   void OnAudioSeekCompleted(media::TimeUnit aTime);
   void OnAudioSeekFailed(DemuxerFailureReason aFailure)
   {
     OnSeekFailed(TrackType::kAudioTrack, aFailure);
   }
   // Temporary seek information while we wait for the data
-  Maybe<media::TimeUnit> mOriginalSeekTime;
   Maybe<media::TimeUnit> mPendingSeekTime;
   MozPromiseHolder<SeekPromise> mSeekPromise;
 
 #ifdef MOZ_EME
   nsRefPtr<CDMProxy> mCDMProxy;
 #endif
 
   nsRefPtr<SharedDecoderManager> mSharedDecoderManager;
deleted file mode 100644
--- a/dom/media/XiphExtradata.cpp
+++ /dev/null
@@ -1,80 +0,0 @@
-/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
-/* vim: set ts=8 sts=2 et sw=2 tw=80: */
-/* This Source Code Form is subject to the terms of the Mozilla Public
- * License, v. 2.0. If a copy of the MPL was not distributed with this
- * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
-
-#include "XiphExtradata.h"
-
-namespace mozilla {
-
-bool XiphHeadersToExtradata(MediaByteBuffer* aCodecSpecificConfig,
-                            const nsTArray<const unsigned char*>& aHeaders,
-                            const nsTArray<size_t>& aHeaderLens)
-{
-  size_t nheaders = aHeaders.Length();
-  if (!nheaders || nheaders > 255) return false;
-  aCodecSpecificConfig->AppendElement(nheaders - 1);
-  for (size_t i = 0; i < nheaders - 1; i++) {
-    size_t headerLen;
-    for (headerLen = aHeaderLens[i]; headerLen >= 255; headerLen -= 255) {
-      aCodecSpecificConfig->AppendElement(255);
-    }
-    aCodecSpecificConfig->AppendElement(headerLen);
-  }
-  for (size_t i = 0; i < nheaders; i++) {
-    aCodecSpecificConfig->AppendElements(aHeaders[i], aHeaderLens[i]);
-  }
-  return true;
-}
-
-bool XiphExtradataToHeaders(nsTArray<unsigned char*>& aHeaders,
-                            nsTArray<size_t>& aHeaderLens,
-                            unsigned char* aData,
-                            size_t aAvailable)
-{
-  size_t total = 0;
-  if (aAvailable < 1) {
-    return false;
-  }
-  aAvailable--;
-  int nHeaders = *aData++ + 1;
-  for (int i = 0; i < nHeaders - 1; i++) {
-    size_t headerLen = 0;
-    for (;;) {
-      // After this test, we know that (aAvailable - total > headerLen) and
-      // (headerLen >= 0) so (aAvailable - total > 0). The loop decrements
-      // aAvailable by 1 and total remains fixed, so we know that in the next
-      // iteration (aAvailable - total >= 0). Thus (aAvailable - total) can
-      // never underflow.
-      if (aAvailable - total <= headerLen) {
-        return false;
-      }
-      // Since we know (aAvailable > total + headerLen), this can't overflow
-      // unless total is near 0 and both aAvailable and headerLen are within
-      // 255 bytes of the maximum representable size. However, that is
-      // impossible, since we would have had to have gone through this loop
-      // more than 255 times to make headerLen that large, and thus decremented
-      // aAvailable more than 255 times.
-      headerLen += *aData;
-      aAvailable--;
-      if (*aData++ != 255) break;
-    }
-    // And this check ensures updating total won't cause (aAvailable - total)
-    // to underflow.
-    if (aAvailable - total < headerLen) {
-      return false;
-    }
-    aHeaderLens.AppendElement(headerLen);
-    // Since we know aAvailable >= total + headerLen, this can't overflow.
-    total += headerLen;
-  }
-  aHeaderLens.AppendElement(aAvailable);
-  for (int i = 0; i < nHeaders; i++) {
-    aHeaders.AppendElement(aData);
-    aData += aHeaderLens[i];
-  }
-  return true;
-}
-
-} // namespace mozilla
deleted file mode 100644
--- a/dom/media/XiphExtradata.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
-/* vim: set ts=8 sts=2 et sw=2 tw=80: */
-/* This Source Code Form is subject to the terms of the Mozilla Public
- * License, v. 2.0. If a copy of the MPL was not distributed with this
- * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
-#if !defined(XiphExtradata_h)
-#define XiphExtradata_h
-
-#include "MediaData.h"
-
-namespace mozilla {
-
-/* This converts a list of headers to the canonical form of extradata for Xiph
-   codecs in non-Ogg containers. We use it to pass those headers from demuxer
-   to decoder even when demuxing from an Ogg cotainer. */
-bool XiphHeadersToExtradata(MediaByteBuffer* aCodecSpecificConfig,
-                            const nsTArray<const unsigned char*>& aHeaders,
-                            const nsTArray<size_t>& aHeaderLens);
-
-/* This converts a set of extradata back into a list of headers. */
-bool XiphExtradataToHeaders(nsTArray<unsigned char*>& aHeaders,
-                            nsTArray<size_t>& aHeaderLens,
-                            unsigned char* aData,
-                            size_t aAvailable);
-
-} // namespace mozilla
-
-#endif // XiphExtradata_h
--- a/dom/media/mediasource/ContainerParser.cpp
+++ b/dom/media/mediasource/ContainerParser.cpp
@@ -6,17 +6,16 @@
 
 #include "ContainerParser.h"
 
 #include "WebMBufferedParser.h"
 #include "mozilla/Endian.h"
 #include "mozilla/ErrorResult.h"
 #include "mp4_demuxer/MoofParser.h"
 #include "mozilla/Logging.h"
-#include "mozilla/Maybe.h"
 #include "MediaData.h"
 #ifdef MOZ_FMP4
 #include "MP4Stream.h"
 #include "mp4_demuxer/AtomType.h"
 #include "mp4_demuxer/ByteReader.h"
 #endif
 #include "SourceBufferResource.h"
 
@@ -173,36 +172,33 @@ public:
       return true;
     }
     // 0x1c53bb6b // Cues
     if (aData->Length() >= 4 &&
         (*aData)[0] == 0x1c && (*aData)[1] == 0x53 && (*aData)[2] == 0xbb &&
         (*aData)[3] == 0x6b) {
       return true;
     }
+    // 0xa3 // SimpleBlock
+    if (aData->Length() >= 1 &&
+        (*aData)[0] == 0xa3) {
+      return true;
+    }
+    // 0xa1 // Block
+    if (aData->Length() >= 1 &&
+        (*aData)[0] == 0xa1) {
+      return true;
+    }
     return false;
   }
 
   bool ParseStartAndEndTimestamps(MediaByteBuffer* aData,
                                   int64_t& aStart, int64_t& aEnd) override
   {
     bool initSegment = IsInitSegmentPresent(aData);
-
-    if (mLastMapping && (initSegment || IsMediaSegmentPresent(aData))) {
-      // The last data contained a complete cluster but we can only detect it
-      // now that a new one is starting.
-      // We use mOffset as end position to ensure that any blocks not reported
-      // by WebMBufferParser are properly skipped.
-      mCompleteMediaSegmentRange = MediaByteRange(mLastMapping.ref().mSyncOffset,
-                                                  mOffset);
-      mLastMapping.reset();
-      MSE_DEBUG(WebMContainerParser, "New cluster found at start, ending previous one");
-      return false;
-    }
-
     if (initSegment) {
       mOffset = 0;
       mParser = WebMBufferedParser(0);
       mOverlappedMapping.Clear();
       mInitData = new MediaByteBuffer();
       mResource = new SourceBufferResource(NS_LITERAL_CSTRING("video/webm"));
       mCompleteMediaHeaderRange = MediaByteRange();
       mCompleteMediaSegmentRange = MediaByteRange();
@@ -241,91 +237,62 @@ public:
       mHasInitData = true;
     }
     mOffset += aData->Length();
 
     if (mapping.IsEmpty()) {
       return false;
     }
 
-    // Calculate media range for first media segment.
+    uint32_t endIdx = mapping.Length() - 1;
 
-    // Check if we have a cluster finishing in the current data.
-    uint32_t endIdx = mapping.Length() - 1;
-    bool foundNewCluster = false;
-    while (mapping[0].mSyncOffset != mapping[endIdx].mSyncOffset) {
-      endIdx -= 1;
-      foundNewCluster = true;
+    // Calculate media range for first media segment
+    uint32_t segmentEndIdx = endIdx;
+    while (mapping[0].mSyncOffset != mapping[segmentEndIdx].mSyncOffset) {
+      segmentEndIdx -= 1;
+    }
+    if (segmentEndIdx > 0 && mOffset >= mapping[segmentEndIdx].mEndOffset) {
+      mCompleteMediaHeaderRange = MediaByteRange(mParser.mInitEndOffset,
+                                                 mapping[0].mEndOffset);
+      mCompleteMediaSegmentRange = MediaByteRange(mParser.mInitEndOffset,
+                                                  mapping[segmentEndIdx].mEndOffset);
     }
 
-    int32_t completeIdx = endIdx;
-    while (completeIdx >= 0 && mOffset < mapping[completeIdx].mEndOffset) {
-      MSE_DEBUG(WebMContainerParser, "block is incomplete, missing: %lld",
-                mapping[completeIdx].mEndOffset - mOffset);
-      completeIdx -= 1;
+    // Exclude frames that we don't have enough data to cover the end of.
+    while (mOffset < mapping[endIdx].mEndOffset && endIdx > 0) {
+      endIdx -= 1;
     }
 
-    // Save parsed blocks for which we do not have all data yet.
-    mOverlappedMapping.AppendElements(mapping.Elements() + completeIdx + 1,
-                                      mapping.Length() - completeIdx - 1);
-
-    if (completeIdx < 0) {
-      mLastMapping.reset();
+    if (endIdx == 0) {
       return false;
     }
 
-    if (mCompleteMediaHeaderRange.IsNull()) {
-      mCompleteMediaHeaderRange = MediaByteRange(mapping[0].mSyncOffset,
-                                                 mapping[0].mEndOffset);
-    }
-    mLastMapping = Some(mapping[completeIdx]);
+    uint64_t frameDuration = mapping[endIdx].mTimecode - mapping[endIdx - 1].mTimecode;
+    aStart = mapping[0].mTimecode / NS_PER_USEC;
+    aEnd = (mapping[endIdx].mTimecode + frameDuration) / NS_PER_USEC;
 
-    if (foundNewCluster && mOffset >= mapping[endIdx].mEndOffset) {
-      // We now have all information required to delimit a complete cluster.
-      int64_t endOffset = mapping[endIdx+1].mSyncOffset;
-      if (mapping[endIdx+1].mInitOffset > mapping[endIdx].mInitOffset) {
-        // We have a new init segment before this cluster.
-        endOffset = mapping[endIdx+1].mInitOffset;
-      }
-      mCompleteMediaSegmentRange = MediaByteRange(mapping[endIdx].mSyncOffset,
-                                                  endOffset);
-    } else if (mapping[endIdx].mClusterEndOffset >= 0 &&
-               mOffset >= mapping[endIdx].mClusterEndOffset) {
-      mCompleteMediaSegmentRange = MediaByteRange(mapping[endIdx].mSyncOffset,
-                                                  mParser.EndSegmentOffset(mapping[endIdx].mClusterEndOffset));
-    }
+    MSE_DEBUG(WebMContainerParser, "[%lld, %lld] [fso=%lld, leo=%lld, l=%u endIdx=%u]",
+              aStart, aEnd, mapping[0].mSyncOffset, mapping[endIdx].mEndOffset, mapping.Length(), endIdx);
 
-    if (!completeIdx) {
-      return false;
-    }
-
-    uint64_t frameDuration =
-      mapping[completeIdx].mTimecode - mapping[completeIdx - 1].mTimecode;
-    aStart = mapping[0].mTimecode / NS_PER_USEC;
-    aEnd = (mapping[completeIdx].mTimecode + frameDuration) / NS_PER_USEC;
-
-    MSE_DEBUG(WebMContainerParser, "[%lld, %lld] [fso=%lld, leo=%lld, l=%u processedIdx=%u fs=%lld]",
-              aStart, aEnd, mapping[0].mSyncOffset,
-              mapping[completeIdx].mEndOffset, mapping.Length(), completeIdx,
-              mCompleteMediaSegmentRange.mEnd);
+    mapping.RemoveElementsAt(0, endIdx + 1);
+    mOverlappedMapping.AppendElements(mapping);
 
     return true;
   }
 
   int64_t GetRoundingError() override
   {
     int64_t error = mParser.GetTimecodeScale() / NS_PER_USEC;
     return error * 2;
   }
 
 private:
   WebMBufferedParser mParser;
   nsTArray<WebMTimeDataOffset> mOverlappedMapping;
   int64_t mOffset;
-  Maybe<WebMTimeDataOffset> mLastMapping;
 };
 
 #ifdef MOZ_FMP4
 class MP4ContainerParser : public ContainerParser {
 public:
   explicit MP4ContainerParser(const nsACString& aType)
     : ContainerParser(aType)
     , mMonitor("MP4ContainerParser Index Monitor")
--- a/dom/media/mediasource/MediaSource.cpp
+++ b/dom/media/mediasource/MediaSource.cpp
@@ -303,20 +303,24 @@ MediaSource::EndOfStream(const Optional<
       return;
     }
     // Notify reader that all data is now available.
     mDecoder->Ended(true);
     return;
   }
   switch (aError.Value()) {
   case MediaSourceEndOfStreamError::Network:
-    mDecoder->NetworkError();
+    // TODO: If media element has a readyState of:
+    //   HAVE_NOTHING -> run resource fetch algorithm
+    // > HAVE_NOTHING -> run "interrupted" steps of resource fetch
     break;
   case MediaSourceEndOfStreamError::Decode:
-    mDecoder->DecodeError();
+    // TODO: If media element has a readyState of:
+    //   HAVE_NOTHING -> run "unsupported" steps of resource fetch
+    // > HAVE_NOTHING -> run "corrupted" steps of resource fetch
     break;
   default:
     aRv.Throw(NS_ERROR_DOM_INVALID_ACCESS_ERR);
   }
 }
 
 /* static */ bool
 MediaSource::IsTypeSupported(const GlobalObject&, const nsAString& aType)
--- a/dom/media/mediasource/MediaSourceReader.cpp
+++ b/dom/media/mediasource/MediaSourceReader.cpp
@@ -41,16 +41,17 @@ extern PRLogModuleInfo* GetMediaSourceLo
 #define EOS_FUZZ_US 125000
 
 namespace mozilla {
 
 MediaSourceReader::MediaSourceReader(MediaSourceDecoder* aDecoder)
   : MediaDecoderReader(aDecoder)
   , mLastAudioTime(0)
   , mLastVideoTime(0)
+  , mForceVideoDecodeAhead(false)
   , mOriginalSeekTime(-1)
   , mPendingSeekTime(-1)
   , mWaitingForSeekData(false)
   , mSeekToEnd(false)
   , mTimeThreshold(0)
   , mDropAudioBeforeThreshold(false)
   , mDropVideoBeforeThreshold(false)
   , mAudioDiscontinuity(false)
@@ -301,17 +302,19 @@ MediaSourceReader::OnAudioNotDecoded(Not
   if (mLastAudioTime - lastAudioTime >= EOS_FUZZ_US) {
     // No decoders are available to switch to. We will re-attempt from the last
     // failing position.
     mLastAudioTime = lastAudioTime;
   }
 }
 
 nsRefPtr<MediaDecoderReader::VideoDataPromise>
-MediaSourceReader::RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold)
+MediaSourceReader::RequestVideoData(bool aSkipToNextKeyframe,
+                                    int64_t aTimeThreshold,
+                                    bool aForceDecodeAhead)
 {
   MOZ_ASSERT(OnTaskQueue());
   MOZ_DIAGNOSTIC_ASSERT(mSeekPromise.IsEmpty(), "No sample requests allowed while seeking");
   MOZ_DIAGNOSTIC_ASSERT(mVideoPromise.IsEmpty(), "No duplicate sample requests");
   nsRefPtr<VideoDataPromise> p = mVideoPromise.Ensure(__func__);
   MSE_DEBUGV("RequestVideoData(%d, %lld), mLastVideoTime=%lld",
              aSkipToNextKeyframe, aTimeThreshold, mLastVideoTime);
   if (!mVideoTrack) {
@@ -325,16 +328,17 @@ MediaSourceReader::RequestVideoData(bool
     mDropVideoBeforeThreshold = true;
   }
   if (IsSeeking()) {
     MSE_DEBUG("called mid-seek. Rejecting.");
     mVideoPromise.Reject(CANCELED, __func__);
     return p;
   }
   MOZ_DIAGNOSTIC_ASSERT(!mVideoSeekRequest.Exists());
+  mForceVideoDecodeAhead = aForceDecodeAhead;
 
   SwitchSourceResult ret = SwitchVideoSource(&mLastVideoTime);
   switch (ret) {
     case SOURCE_NEW:
       GetVideoReader()->ResetDecode();
       mVideoSeekRequest.Begin(GetVideoReader()->Seek(GetReaderVideoTime(mLastVideoTime), 0)
                              ->Then(OwnerThread(), __func__, this,
                                     &MediaSourceReader::CompleteVideoSeekAndDoRequest,
@@ -358,17 +362,18 @@ MediaSourceReader::RequestVideoData(bool
 
   return p;
 }
 
 void
 MediaSourceReader::DoVideoRequest()
 {
   mVideoRequest.Begin(GetVideoReader()->RequestVideoData(mDropVideoBeforeThreshold,
-                                                         GetReaderVideoTime(mTimeThreshold))
+                                                         GetReaderVideoTime(mTimeThreshold),
+                                                         mForceVideoDecodeAhead)
                       ->Then(OwnerThread(), __func__, this,
                              &MediaSourceReader::OnVideoDecoded,
                              &MediaSourceReader::OnVideoNotDecoded));
 }
 
 void
 MediaSourceReader::OnVideoDecoded(VideoData* aSample)
 {
@@ -884,16 +889,19 @@ MediaSourceReader::ResetDecode()
   // Do the same for any data wait promises.
   mAudioWaitPromise.RejectIfExists(WaitForDataRejectValue(MediaData::AUDIO_DATA, WaitForDataRejectValue::CANCELED), __func__);
   mVideoWaitPromise.RejectIfExists(WaitForDataRejectValue(MediaData::VIDEO_DATA, WaitForDataRejectValue::CANCELED), __func__);
 
   // Reset miscellaneous seeking state.
   mWaitingForSeekData = false;
   mPendingSeekTime = -1;
 
+  // Reset force video decode ahead.
+  mForceVideoDecodeAhead = false;
+
   // Reset all the readers.
   if (GetAudioReader()) {
     GetAudioReader()->ResetDecode();
   }
   if (GetVideoReader()) {
     GetVideoReader()->ResetDecode();
   }
 
--- a/dom/media/mediasource/MediaSourceReader.h
+++ b/dom/media/mediasource/MediaSourceReader.h
@@ -44,17 +44,17 @@ public:
   // registered TrackBuffers essential for initialization.
   void PrepareInitialization();
 
   bool IsWaitingMediaResources() override;
   bool IsWaitingOnCDMResource() override;
 
   nsRefPtr<AudioDataPromise> RequestAudioData() override;
   nsRefPtr<VideoDataPromise>
-  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold) override;
+  RequestVideoData(bool aSkipToNextKeyframe, int64_t aTimeThreshold, bool aForceDecodeAhead) override;
 
   virtual size_t SizeOfVideoQueueInFrames() override;
   virtual size_t SizeOfAudioQueueInFrames() override;
 
   virtual void ReleaseMediaResources() override;
 
   void OnAudioDecoded(AudioData* aSample);
   void OnAudioNotDecoded(NotDecodedReason aReason);
@@ -250,16 +250,18 @@ private:
 #ifdef MOZ_EME
   nsRefPtr<CDMProxy> mCDMProxy;
 #endif
 
   // These are read and written on the decode task queue threads.
   int64_t mLastAudioTime;
   int64_t mLastVideoTime;
 
+  bool mForceVideoDecodeAhead;
+
   MozPromiseRequestHolder<SeekPromise> mAudioSeekRequest;
   MozPromiseRequestHolder<SeekPromise> mVideoSeekRequest;
   MozPromiseHolder<SeekPromise> mSeekPromise;
 
   // Temporary seek information while we wait for the data
   // to be added to the track buffer.
   int64_t mOriginalSeekTime;
   int64_t mPendingSeekTime;
--- a/dom/media/mediasource/SourceBuffer.cpp
+++ b/dom/media/mediasource/SourceBuffer.cpp
@@ -545,25 +545,16 @@ already_AddRefed<MediaByteBuffer>
 SourceBuffer::PrepareAppend(const uint8_t* aData, uint32_t aLength, ErrorResult& aRv)
 {
   typedef SourceBufferContentManager::EvictDataResult Result;
 
   if (!IsAttached() || mUpdating) {
     aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
     return nullptr;
   }
-
-  // If the HTMLMediaElement.error attribute is not null, then throw an
-  // InvalidStateError exception and abort these steps.
-  if (!mMediaSource->GetDecoder() ||
-      mMediaSource->GetDecoder()->IsEndedOrShutdown()) {
-    aRv.Throw(NS_ERROR_DOM_INVALID_STATE_ERR);
-    return nullptr;
-  }
-
   if (mMediaSource->ReadyState() == MediaSourceReadyState::Ended) {
     mMediaSource->SetReadyState(MediaSourceReadyState::Open);
   }
 
   // Eviction uses a byte threshold. If the buffer is greater than the
   // number of bytes then data is evicted. The time range for this
   // eviction is reported back to the media source. It will then
   // evict data before that range across all SourceBuffers it knows
--- a/dom/media/mediasource/SourceBufferResource.h
+++ b/dom/media/mediasource/SourceBufferResource.h
@@ -95,21 +95,16 @@ public:
   }
 
   virtual size_t SizeOfIncludingThis(
                       MallocSizeOf aMallocSizeOf) const override
   {
     return aMallocSizeOf(this) + SizeOfExcludingThis(aMallocSizeOf);
   }
 
-  virtual bool IsExpectingMoreData() override
-  {
-    return false;
-  }
-
   // Used by SourceBuffer.
   void AppendData(MediaByteBuffer* aData);
   void Ended();
   bool IsEnded()
   {
     ReentrantMonitorAutoEnter mon(mMonitor);
     return mEnded;
   }
--- a/dom/media/mediasource/TrackBuffersManager.cpp
+++ b/dom/media/mediasource/TrackBuffersManager.cpp
@@ -92,17 +92,16 @@ private:
 
 TrackBuffersManager::TrackBuffersManager(dom::SourceBufferAttributes* aAttributes,
                                          MediaSourceDecoder* aParentDecoder,
                                          const nsACString& aType)
   : mInputBuffer(new MediaByteBuffer)
   , mAppendState(AppendState::WAITING_FOR_SEGMENT)
   , mBufferFull(false)
   , mFirstInitializationSegmentReceived(false)
-  , mNewSegmentStarted(false)
   , mActiveTrack(false)
   , mType(aType)
   , mParser(ContainerParser::CreateForMIMEType(aType))
   , mProcessedInput(0)
   , mAppendRunning(false)
   , mTaskQueue(aParentDecoder->GetDemuxer()->GetTaskQueue())
   , mSourceBufferAttributes(aAttributes)
   , mParentDecoder(new nsMainThreadPtrHolder<MediaSourceDecoder>(aParentDecoder, false /* strict */))
@@ -455,32 +454,37 @@ TrackBuffersManager::DoEvictData(const T
 
   toEvict = mSizeSourceBuffer - finalSize;
 
   // Still some to remove. Remove data starting from the end, up to 30s ahead
   // of the later of the playback time or the next sample to be demuxed.
   // 30s is a value chosen as it appears to work with YouTube.
   TimeUnit upperLimit =
     std::max(aPlaybackTime, track.mNextSampleTime) + TimeUnit::FromSeconds(30);
-  uint32_t evictedFramesStartIndex = buffer.Length();
+  lastKeyFrameIndex = buffer.Length();
   for (int32_t i = buffer.Length() - 1; i >= 0; i--) {
     const auto& frame = buffer[i];
-    if (frame->mTime <= upperLimit.ToMicroseconds() || toEvict < 0) {
-      // We've reached a frame that shouldn't be evicted -> Evict after it -> i+1.
-      // Or the previous loop reached the eviction threshold -> Evict from it -> i+1.
-      evictedFramesStartIndex = i + 1;
+    if (frame->mKeyframe) {
+      lastKeyFrameIndex = i;
+      toEvict -= partialEvict;
+      if (toEvict < 0) {
+        break;
+      }
+      partialEvict = 0;
+    }
+    if (frame->mTime <= upperLimit.ToMicroseconds()) {
       break;
     }
-    toEvict -= frame->ComputedSizeOfIncludingThis();
+    partialEvict += frame->ComputedSizeOfIncludingThis();
   }
-  if (evictedFramesStartIndex < buffer.Length()) {
+  if (lastKeyFrameIndex < buffer.Length()) {
     MSE_DEBUG("Step2. Evicting %u bytes from trailing data",
               mSizeSourceBuffer - finalSize);
     CodedFrameRemoval(
-      TimeInterval(TimeUnit::FromMicroseconds(buffer[evictedFramesStartIndex]->mTime),
+      TimeInterval(TimeUnit::FromMicroseconds(buffer[lastKeyFrameIndex]->GetEndTime() + 1),
                    TimeUnit::FromInfinity()));
   }
 }
 
 nsRefPtr<TrackBuffersManager::RangeRemovalPromise>
 TrackBuffersManager::CodedFrameRemovalWithPromise(TimeInterval aInterval)
 {
   MOZ_ASSERT(OnTaskQueue());
@@ -651,33 +655,31 @@ TrackBuffersManager::SegmentParserLoop()
     // steps:
     if (mAppendState == AppendState::WAITING_FOR_SEGMENT) {
       if (mParser->IsInitSegmentPresent(mInputBuffer)) {
         SetAppendState(AppendState::PARSING_INIT_SEGMENT);
         if (mFirstInitializationSegmentReceived) {
           // This is a new initialization segment. Obsolete the old one.
           RecreateParser(false);
         }
-        mNewSegmentStarted = true;
         continue;
       }
       if (mParser->IsMediaSegmentPresent(mInputBuffer)) {
         SetAppendState(AppendState::PARSING_MEDIA_SEGMENT);
-        mNewSegmentStarted = true;
         continue;
       }
       // We have neither an init segment nor a media segment, this is either
       // invalid data or not enough data to detect a segment type.
       MSE_DEBUG("Found invalid or incomplete data.");
       NeedMoreData();
       return;
     }
 
     int64_t start, end;
-    bool newData = mParser->ParseStartAndEndTimestamps(mInputBuffer, start, end);
+    mParser->ParseStartAndEndTimestamps(mInputBuffer, start, end);
     mProcessedInput += mInputBuffer->Length();
 
     // 5. If the append state equals PARSING_INIT_SEGMENT, then run the
     // following steps:
     if (mAppendState == AppendState::PARSING_INIT_SEGMENT) {
       if (mParser->InitSegmentRange().IsNull()) {
         mInputBuffer = nullptr;
         NeedMoreData();
@@ -694,32 +696,16 @@ TrackBuffersManager::SegmentParserLoop()
       }
       // 2. If the input buffer does not contain a complete media segment header yet, then jump to the need more data step below.
       if (mParser->MediaHeaderRange().IsNull()) {
         AppendDataToCurrentInputBuffer(mInputBuffer);
         mInputBuffer = nullptr;
         NeedMoreData();
         return;
       }
-
-      // We can't feed some demuxers (WebMDemuxer) with data that do not have
-      // monotonically increasing timestamps. So we check if we have a
-      // discontinuity from the previous segment parsed.
-      // If so, recreate a new demuxer to ensure that the demuxer is only fed
-      // monotonically increasing data.
-      if (newData) {
-        if (mNewSegmentStarted && mLastParsedEndTime.isSome() &&
-            start < mLastParsedEndTime.ref().ToMicroseconds()) {
-          ResetDemuxingState();
-          return;
-        }
-        mNewSegmentStarted = false;
-        mLastParsedEndTime = Some(TimeUnit::FromMicroseconds(end));
-      }
-
       // 3. If the input buffer contains one or more complete coded frames, then run the coded frame processing algorithm.
       nsRefPtr<TrackBuffersManager> self = this;
       mProcessingRequest.Begin(CodedFrameProcessing()
           ->Then(GetTaskQueue(), __func__,
                  [self] (bool aNeedMoreData) {
                    self->mProcessingRequest.Complete();
                    if (aNeedMoreData || self->mAbort) {
                      self->NeedMoreData();
@@ -770,93 +756,40 @@ TrackBuffersManager::ShutdownDemuxers()
     mVideoTracks.mDemuxer->BreakCycles();
     mVideoTracks.mDemuxer = nullptr;
   }
   if (mAudioTracks.mDemuxer) {
     mAudioTracks.mDemuxer->BreakCycles();
     mAudioTracks.mDemuxer = nullptr;
   }
   mInputDemuxer = nullptr;
-  mLastParsedEndTime.reset();
 }
 
 void
 TrackBuffersManager::CreateDemuxerforMIMEType()
 {
   ShutdownDemuxers();
 
 #ifdef MOZ_WEBM
   if (mType.LowerCaseEqualsLiteral("video/webm") || mType.LowerCaseEqualsLiteral("audio/webm")) {
-    mInputDemuxer = new WebMDemuxer(mCurrentInputBuffer, true /* IsMediaSource*/ );
+    mInputDemuxer = new WebMDemuxer(mCurrentInputBuffer);
     return;
   }
 #endif
 
 #ifdef MOZ_FMP4
   if (mType.LowerCaseEqualsLiteral("video/mp4") || mType.LowerCaseEqualsLiteral("audio/mp4")) {
     mInputDemuxer = new MP4Demuxer(mCurrentInputBuffer);
     return;
   }
 #endif
   NS_WARNING("Not supported (yet)");
   return;
 }
 
-// We reset the demuxer by creating a new one and initializing it.
-void
-TrackBuffersManager::ResetDemuxingState()
-{
-  MOZ_ASSERT(mParser && mParser->HasInitData());
-  RecreateParser(true);
-  mCurrentInputBuffer = new SourceBufferResource(mType);
-  // The demuxer isn't initialized yet; we don't want to notify it
-  // that data has been appended yet; so we simply append the init segment
-  // to the resource.
-  mCurrentInputBuffer->AppendData(mParser->InitData());
-  CreateDemuxerforMIMEType();
-  if (!mInputDemuxer) {
-    RejectAppend(NS_ERROR_FAILURE, __func__);
-    return;
-  }
-  mDemuxerInitRequest.Begin(mInputDemuxer->Init()
-                      ->Then(GetTaskQueue(), __func__,
-                             this,
-                             &TrackBuffersManager::OnDemuxerResetDone,
-                             &TrackBuffersManager::OnDemuxerInitFailed));
-}
-
-void
-TrackBuffersManager::OnDemuxerResetDone(nsresult)
-{
-  MOZ_ASSERT(OnTaskQueue());
-  MSE_DEBUG("mAbort:%d", static_cast<bool>(mAbort));
-  mDemuxerInitRequest.Complete();
-  if (mAbort) {
-    RejectAppend(NS_ERROR_ABORT, __func__);
-    return;
-  }
-
-  // Recreate track demuxers.
-  uint32_t numVideos = mInputDemuxer->GetNumberTracks(TrackInfo::kVideoTrack);
-  if (numVideos) {
-    // We currently only handle the first video track.
-    mVideoTracks.mDemuxer = mInputDemuxer->GetTrackDemuxer(TrackInfo::kVideoTrack, 0);
-    MOZ_ASSERT(mVideoTracks.mDemuxer);
-  }
-
-  uint32_t numAudios = mInputDemuxer->GetNumberTracks(TrackInfo::kAudioTrack);
-  if (numAudios) {
-    // We currently only handle the first audio track.
-    mAudioTracks.mDemuxer = mInputDemuxer->GetTrackDemuxer(TrackInfo::kAudioTrack, 0);
-    MOZ_ASSERT(mAudioTracks.mDemuxer);
-  }
-
-  SegmentParserLoop();
-}
-
 void
 TrackBuffersManager::AppendDataToCurrentInputBuffer(MediaByteBuffer* aData)
 {
   MOZ_ASSERT(mCurrentInputBuffer);
   int64_t offset = mCurrentInputBuffer->GetLength();
   mCurrentInputBuffer->AppendData(aData);
   // A MediaByteBuffer has a maximum size of 2GiB.
   mInputDemuxer->NotifyDataArrived(uint32_t(aData->Length()), offset);
@@ -1045,23 +978,16 @@ TrackBuffersManager::OnDemuxerInitDone(n
     // This is handled by SourceBuffer once the promise is resolved.
     if (activeTrack) {
       mActiveTrack = true;
     }
 
     // 6. Set first initialization segment received flag to true.
     mFirstInitializationSegmentReceived = true;
   } else {
-    // Check that audio configuration hasn't changed as this is something
-    // we do not support yet (bug 1185827).
-    if (mAudioTracks.mNumTracks &&
-        (info.mAudio.mChannels != mAudioTracks.mInfo->GetAsAudioInfo()->mChannels ||
-         info.mAudio.mRate != mAudioTracks.mInfo->GetAsAudioInfo()->mRate)) {
-      RejectAppend(NS_ERROR_FAILURE, __func__);
-    }
     mAudioTracks.mLastInfo = new SharedTrackInfo(info.mAudio, streamID);
     mVideoTracks.mLastInfo = new SharedTrackInfo(info.mVideo, streamID);
   }
 
   UniquePtr<EncryptionInfo> crypto = mInputDemuxer->GetCrypto();
   if (crypto && crypto->IsEncrypted()) {
 #ifdef MOZ_EME
     // Try and dispatch 'encrypted'. Won't go if ready state still HAVE_NOTHING.
@@ -1123,24 +1049,16 @@ TrackBuffersManager::CodedFrameProcessin
       // Something is not quite right with the data appended. Refuse it.
       // This would typically happen if the previous media segment was partial
       // yet a new complete media segment was added.
       return CodedFrameProcessingPromise::CreateAndReject(NS_ERROR_FAILURE, __func__);
     }
     // The mediaRange is offset by the init segment position previously added.
     uint32_t length =
       mediaRange.mEnd - (mProcessedInput - mInputBuffer->Length());
-    if (!length) {
-      // We've completed our earlier media segment and no new data is to be
-      // processed. This happens with some containers that can't detect that a
-      // media segment is ending until a new one starts.
-      nsRefPtr<CodedFrameProcessingPromise> p = mProcessingPromise.Ensure(__func__);
-      CompleteCodedFrameProcessing();
-      return p;
-    }
     nsRefPtr<MediaByteBuffer> segment = new MediaByteBuffer;
     if (!segment->AppendElements(mInputBuffer->Elements(), length, fallible)) {
       return CodedFrameProcessingPromise::CreateAndReject(NS_ERROR_OUT_OF_MEMORY, __func__);
     }
     AppendDataToCurrentInputBuffer(segment);
     mInputBuffer->RemoveElementsAt(0, length);
   }
 
@@ -1697,16 +1615,18 @@ TrackBuffersManager::RemoveFrames(const 
                    TimeUnit::FromMicroseconds(sample->GetEndTime()));
     removedIntervals += sampleInterval;
     if (sample->mDuration > maxSampleDuration) {
       maxSampleDuration = sample->mDuration;
     }
     aTrackData.mSizeBuffer -= sample->ComputedSizeOfIncludingThis();
   }
 
+  removedIntervals.SetFuzz(TimeUnit::FromMicroseconds(maxSampleDuration));
+
   MSE_DEBUG("Removing frames from:%u (frames:%u) ([%f, %f))",
             firstRemovedIndex.ref(),
             lastRemovedIndex - firstRemovedIndex.ref() + 1,
             removedIntervals.GetStart().ToSeconds(),
             removedIntervals.GetEnd().ToSeconds());
 
   if (aTrackData.mNextGetSampleIndex.isSome()) {
     if (aTrackData.mNextGetSampleIndex.ref() >= firstRemovedIndex.ref() &&
--- a/dom/media/mediasource/TrackBuffersManager.h
+++ b/dom/media/mediasource/TrackBuffersManager.h
@@ -108,17 +108,16 @@ private:
   // All following functions run on the taskqueue.
   nsRefPtr<AppendPromise> InitSegmentParserLoop();
   void ScheduleSegmentParserLoop();
   void SegmentParserLoop();
   void AppendIncomingBuffers();
   void InitializationSegmentReceived();
   void ShutdownDemuxers();
   void CreateDemuxerforMIMEType();
-  void ResetDemuxingState();
   void NeedMoreData();
   void RejectAppend(nsresult aRejectValue, const char* aName);
   // Will return a promise that will be resolved once all frames of the current
   // media segment have been processed.
   nsRefPtr<CodedFrameProcessingPromise> CodedFrameProcessing();
   void CompleteCodedFrameProcessing();
   // Called by ResetParserState. Complete parsing the input buffer for the
   // current media segment.
@@ -147,18 +146,16 @@ private:
   // The current append state as per https://w3c.github.io/media-source/#sourcebuffer-append-state
   // Accessed on both the main thread and the task queue.
   Atomic<AppendState> mAppendState;
   // Buffer full flag as per https://w3c.github.io/media-source/#sourcebuffer-buffer-full-flag.
   // Accessed on both the main thread and the task queue.
   // TODO: Unused for now.
   Atomic<bool> mBufferFull;
   bool mFirstInitializationSegmentReceived;
-  // Set to true once a new segment is started.
-  bool mNewSegmentStarted;
   bool mActiveTrack;
   Maybe<media::TimeUnit> mGroupStartTimestamp;
   media::TimeUnit mGroupEndTimestamp;
   nsCString mType;
 
   // ContainerParser objects and methods.
   // Those are used to parse the incoming input buffer.
 
@@ -169,21 +166,19 @@ private:
 
   // Demuxer objects and methods.
   void AppendDataToCurrentInputBuffer(MediaByteBuffer* aData);
   nsRefPtr<MediaByteBuffer> mInitData;
   nsRefPtr<SourceBufferResource> mCurrentInputBuffer;
   nsRefPtr<MediaDataDemuxer> mInputDemuxer;
   // Length already processed in current media segment.
   uint32_t mProcessedInput;
-  Maybe<media::TimeUnit> mLastParsedEndTime;
 
   void OnDemuxerInitDone(nsresult);
   void OnDemuxerInitFailed(DemuxerFailureReason aFailure);
-  void OnDemuxerResetDone(nsresult);
   MozPromiseRequestHolder<MediaDataDemuxer::InitPromise> mDemuxerInitRequest;
   bool mEncrypted;
 
   void OnDemuxFailed(TrackType aTrack, DemuxerFailureReason aFailure);
   void DoDemuxVideo();
   void OnVideoDemuxCompleted(nsRefPtr<MediaTrackDemuxer::SamplesHolder> aSamples);
   void OnVideoDemuxFailed(DemuxerFailureReason aFailure)
   {
--- a/dom/media/mediasource/test/mochitest.ini
+++ b/dom/media/mediasource/test/mochitest.ini
@@ -33,30 +33,30 @@ support-files =
   bipbop/bipbop11.m4s^headers^ bipbop/bipbop_audio11.m4s^headers^ bipbop/bipbop_video11.m4s^headers^
   bipbop/bipbop12.m4s^headers^ bipbop/bipbop_video12.m4s^headers^
   bipbop/bipbop13.m4s^headers^ bipbop/bipbop_video13.m4s^headers^
 
 [test_BufferedSeek.html]
 [test_BufferedSeek_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_BufferingWait.html]
-skip-if = toolkit == 'android' #timeout android bug 1199531
+skip-if = true # bug 1190776
 [test_BufferingWait_mp4.html]
-skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac") || (os == "win" && os_version == "6.1")) # Only supported on osx and vista+, disabling on win7 bug 1191138
+skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_EndOfStream.html]
 skip-if = (true || toolkit == 'android' || buildapp == 'mulet') #timeout android/mulet only bug 1101187 and bug 1182946
 [test_EndOfStream_mp4.html]
 skip-if = (toolkit == 'android' || buildapp == 'mulet') || ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_DurationUpdated.html]
 [test_DurationUpdated_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_FrameSelection.html]
 [test_HaveMetadataUnbufferedSeek.html]
 [test_HaveMetadataUnbufferedSeek_mp4.html]
-skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac") || (os == "win" && os_version == "6.1")) # Only supported on osx and vista+, disabling on win7 bug 1191138
+skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_LoadedMetadataFired.html]
 [test_LoadedMetadataFired_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_MediaSource.html]
 [test_MediaSource_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_MediaSource_disabled.html]
 [test_MultipleInitSegments.html]
@@ -84,16 +84,15 @@ skip-if = ((os == "win" && os_version ==
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_SplitAppend.html]
 [test_SplitAppend_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_TimestampOffset_mp4.html]
 skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_TruncatedDuration.html]
 [test_TruncatedDuration_mp4.html]
-skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac") || (os == "win" && os_version == "6.1")) # Only supported on osx and vista+, disabling on win7 bug 1191138
+skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_WaitingOnMissingData.html]
 skip-if = true # Disabled due to bug 1124493 and friends. WebM MSE is deprioritized.
 [test_WaitingOnMissingData_mp4.html]
-skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac") || (os == "win" && os_version == "6.1")) # Only supported on osx and vista+, disabling on win7 bug 1191138
+skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
 [test_WaitingToEndedTransition_mp4.html]
-skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac") || (os == "win" && os_version == "6.1")) # Only supported on osx and vista+, disabling on win7 bug 1191138
-
+skip-if = ((os == "win" && os_version == "5.1") || (os != "win" && os != "mac")) # Only supported on osx and vista+
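The manifest hunks above restore the simpler skip-if condition: MP4 MSE tests run only on OS X and on Windows Vista or later, dropping the extra Win7 exclusion from bug 1191138. As a rough sketch of how such an expression evaluates (variable names mirror the mozinfo keys in the manifest; the evaluator itself is illustrative, not the real manifestparser implementation):

```python
# Hypothetical evaluator for the restored skip-if expression:
#   (os == "win" && os_version == "5.1") || (os != "win" && os != "mac")
def should_skip(os, os_version):
    """True when the test is skipped: XP, or any non-win/non-mac platform."""
    return (os == "win" and os_version == "5.1") or (os != "win" and os != "mac")

assert should_skip("win", "5.1")        # XP: skipped (no MSE mp4 support)
assert not should_skip("win", "6.1")    # Win7: runs again after this backout
assert not should_skip("mac", "10.10")  # OS X: runs
assert should_skip("linux", "")         # Linux: skipped
```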
--- a/dom/media/mediasource/test/test_BufferingWait_mp4.html
+++ b/dom/media/mediasource/test/test_BufferingWait_mp4.html
@@ -36,28 +36,24 @@ runWithMSE(function(ms, v) {
     sb.addEventListener('error', (e) => { ok(false, "Got Error: " + e); SimpleTest.finish(); });
     fetchAndLoad(sb, 'bipbop/bipbop', ['init'], '.mp4')
     .then(fetchAndLoad.bind(null, sb, 'bipbop/bipbop', ['1'], '.m4s'))
     .then(fetchAndLoad.bind(null, sb, 'bipbop/bipbop', ['2'], '.m4s'))
     /* Note - Missing |bipbop3| segment here corresponding to (1.62, 2.41] */
     /* Note - Missing |bipbop4| segment here corresponding to (2.41, 3.20]  */
     .then(fetchAndLoad.bind(null, sb, 'bipbop/bipbop', ['5'], '.m4s'))
     .then(function() {
-        // Some decoders (Windows in particular) may keep up to 25 frames queued
-        // before returning a sample. 0.7 is 1.62s - 25 * 0.03333
-        var promise = waitUntilTime(0.7);
+        var promise = waitUntilTime(1.4);
         info("Playing video. It should play for a bit, then fire 'waiting'");
         v.play();
         return promise;
       }).then(function() {
         window.firstStop = Date.now();
         fetchAndLoad(sb, 'bipbop/bipbop', ['3'], '.m4s');
-        // Some decoders (Windows in particular) may keep up to 25 frames queued
-        // before returning a sample. 1.5 is 2.41s - 25 * 0.03333
-        return waitUntilTime(1.5);
+        return waitUntilTime(2.2);
       }).then(function() {
         var waitDuration = (Date.now() - window.firstStop) / 1000;
         ok(waitDuration < 15, "Should not spend an inordinate amount of time buffering: " + waitDuration);
         once(v, 'ended', SimpleTest.finish.bind(SimpleTest));
         return fetchAndLoad(sb, 'bipbop/bipbop', ['4'], '.m4s');
       }).then(function() {
         ms.endOfStream();
       });;
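The removed comments in the hunk above explain the backed-out `waitUntilTime()` targets: some decoders (Windows in particular) may hold up to 25 frames of ~33.3 ms each before emitting a sample, so the targets were pulled back from the buffered range ends. A small sketch of that timing math (the 1.62 s and 2.41 s buffered ends come from the segment notes in the test; the rounding to 0.7/1.5 in the patch is coarser than the computed values):

```python
FRAME_DURATION = 0.03333   # seconds per frame at ~30 fps, as in the removed comments
MAX_QUEUED_FRAMES = 25     # worst-case decoder queue depth assumed by the patch

def conservative_target(buffered_end):
    """Largest playback time still reachable even if the decoder withholds
    MAX_QUEUED_FRAMES frames before returning the first sample."""
    return buffered_end - MAX_QUEUED_FRAMES * FRAME_DURATION

# Segments 1-2 buffer media up to 1.62 s; through segment 5, up to 2.41 s.
assert 0.78 < conservative_target(1.62) < 0.80  # patch rounds down to 0.7
assert 1.57 < conservative_target(2.41) < 1.59  # patch rounds down to 1.5
```

The restored targets (1.4 and 2.2) instead sit safely inside each buffered range without allowing for that queueing slack.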
--- a/dom/media/mediasource/test/test_WaitingOnMissingData_mp4.html
+++ b/dom/media/mediasource/test/test_WaitingOnMissingData_mp4.html
@@ -35,19 +35,18 @@ runWithMSE(function(ms, el) {
       ok(true, "Video playing. It should play for a bit, then fire 'waiting'");
       var p = once(el, 'waiting');
       el.play();
       return p;
     }).then(function() {
       // currentTime is based on the current video frame, so if the audio ends just before
       // the next video frame, currentTime can be up to 1 frame's worth earlier than
       // min(audioEnd, videoEnd).
-      // Some decoders (Windows in particular) may keep up to 25 frames queued.
       isfuzzy(el.currentTime, Math.min(audiosb.buffered.end(0), videosb.buffered.end(0)) - 1/60,
-              25 * 1/30, "Got a waiting event at " + el.currentTime);
+              1/30, "Got a waiting event at " + el.currentTime);
       info("Loading more data");
       var p = once(el, 'ended');
       var loads = Promise.all([fetchAndLoad(audiosb, 'bipbop/bipbop_audio', [5], '.m4s'),
                                fetchAndLoad(videosb, 'bipbop/bipbop_video', [6], '.m4s')]);
       loads.then(() => ms.endOfStream());
       return p;
     }).then(function() {
       // These fuzz factors are bigger than they should be. We should investigate
--- a/dom/media/mediasource/test/test_WaitingToEndedTransition_mp4.html
+++ b/dom/media/mediasource/test/test_WaitingToEndedTransition_mp4.html
@@ -33,19 +33,18 @@ runWithMSE(function(ms, el) {
       ok(true, "Video playing. It should play for a bit, then fire 'waiting'");
       var p = once(el, 'waiting');
       el.play();
       return p;
     }).then(function() {
       // currentTime is based on the current video frame, so if the audio ends just before
       // the next video frame, currentTime can be up to 1 frame's worth earlier than
       // min(audioEnd, videoEnd).
-      // Some decoders (Windows in particular) may keep up to 25 frames queued.
       isfuzzy(el.currentTime, Math.min(audiosb.buffered.end(0), videosb.buffered.end(0)) - 1/60,
-              25 * 1/30, "Got a waiting event at " + el.currentTime);
+              1/30, "Got a waiting event at " + el.currentTime);
     }).then(function() {
       var p = once(el, 'ended');
       ms.endOfStream();
       return p;
     }).then(function() {
       is(el.duration, 4.005, "Video has correct duration: " + el.duration);
       is(el.currentTime, el.duration, "Video has correct currentTime.");
       SimpleTest.finish();
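Both test hunks above tighten the `isfuzzy()` tolerance from 25 video frames back to a single 30 fps frame. A minimal sketch of that check, assuming `isfuzzy` passes when the absolute difference is within the fuzz (the real helper lives in SimpleTest; the 3.2 s buffered end used here is purely illustrative):

```python
def isfuzzy(actual, expected, fuzz):
    """Illustrative stand-in for SimpleTest's isfuzzy(): pass when
    |actual - expected| is within the allowed fuzz."""
    return abs(actual - expected) <= fuzz

# currentTime may lag min(audioEnd, videoEnd) by up to one video frame (1/60 s
# offset in the test), and the restored tolerance is one 30 fps frame.
buffered_end = 3.2
expected = buffered_end - 1/60
assert isfuzzy(3.2, expected, 1/30)       # within one frame: passes
assert not isfuzzy(4.2, expected, 1/30)   # a full second off: fails
```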
--- a/dom/media/moz.build
+++ b/dom/media/moz.build
@@ -142,17 +142,16 @@ EXPORTS += [
     'ThreadPoolCOMListener.h',
     'TimeUnits.h',
     'TimeVarying.h',
     'TrackUnionStream.h',
     'VideoFrameContainer.h',
     'VideoSegment.h',
     'VideoUtils.h',
     'VorbisUtils.h',
-    'XiphExtradata.h',
 ]
 
 EXPORTS.mozilla += [
     'MediaManager.h',
     'MozPromise.h',
     'StateMirroring.h',
     'StateWatching.h',
     'TaskQueue.h',
@@ -244,17 +243,16 @@ UNIFIED_SOURCES += [
     'VideoFrameContainer.cpp',
     'VideoPlaybackQuality.cpp',
     'VideoSegment.cpp',
     'VideoStreamTrack.cpp',
     'VideoTrack.cpp',
     'VideoTrackList.cpp',
     'VideoUtils.cpp',
     'WebVTTListener.cpp',
-    'XiphExtradata.cpp',
 ]
 
 if CONFIG['OS_TARGET'] == 'WINNT':
   SOURCES += [ 'ThreadPoolCOMListener.cpp' ]
 
 if CONFIG['MOZ_B2G']:
     SOURCES += [
         'MediaPermissionGonk.cpp',
--- a/dom/media/omx/MediaCodecReader.cpp
+++ b/dom/media/omx/MediaCodecReader.cpp
@@ -348,17 +348,18 @@ MediaCodecReader::RequestAudioData()
     DispatchAudioTask();
   }
   MOZ_ASSERT(mAudioTrack.mAudioPromise.IsEmpty());
   return mAudioTrack.mAudioPromise.Ensure(__func__);
 }
 
 nsRefPtr<MediaDecoderReader::VideoDataPromise>
 MediaCodecReader::RequestVideoData(bool aSkipToNextKeyframe,
-                                   int64_t aTimeThreshold)
+                                   int64_t aTimeThreshold,
+                                   bool aForceDecodeAhead)
 {
   MOZ_ASSERT(OnTaskQueue());
   MOZ_ASSERT(HasVideo());
 
   int64_t threshold = sInvalidTimestampUs;
   if (aSkipToNextKeyframe && IsValidTimestampUs(aTimeThreshold)) {
     threshold = aTimeThreshold;
   }
--- a/dom/media/omx/MediaCodecReader.h
+++ b/dom/media/omx/MediaCodecReader.h
@@ -78,17 +78,18 @@ protected:
 public:
 
   // Flush the TaskQueue, flush MediaCodec and raise the mDiscontinuity.
   virtual nsresult ResetDecode() override;
 
   // Disptach a DecodeVideoFrameTask to decode video data.
   virtual nsRefPtr<VideoDataPromise>
   RequestVideoData(bool aSkipToNextKeyframe,
-                   int64_t aTimeThreshold) override;
+                   int64_t aTimeThreshold,
+                   bool aForceDecodeAhead) override;
 
   // Disptach a DecodeAduioDataTask to decode video data.
   virtual nsRefPtr<AudioDataPromise> RequestAudioData() override;
 
   virtual bool HasAudio();
   virtual bool HasVideo();
 
   virtual nsRefPtr<MediaDecoderReader::MetadataPromise> AsyncReadMetadata() override;
--- a/dom/media/omx/RtspMediaCodecReader.cpp
+++ b/dom/media/omx/RtspMediaCodecReader.cpp
@@ -77,20 +77,23 @@ nsRefPtr<MediaDecoderReader::AudioDataPr
 RtspMediaCodecReader::RequestAudioData()
 {
   EnsureActive();
   return MediaCodecReader::RequestAudioData();
 }
 
 nsRefPtr<MediaDecoderReader::VideoDataPromise>
 RtspMediaCodecReader::RequestVideoData(bool aSkipToNextKeyframe,
-                                       int64_t aTimeThreshold)
+                                       int64_t aTimeThreshold,
+                                       bool aForceDecodeAhead)
 {
   EnsureActive();
-  return MediaCodecReader::RequestVideoData(aSkipToNextKeyframe, aTimeThreshold);
+  return MediaCodecReader::RequestVideoData(aSkipToNextKeyframe,
+                                            aTimeThreshold,
+                                            aForceDecodeAhead);
 }
 
 nsRefPtr<MediaDecoderReader::MetadataPromise>
 RtspMediaCodecReader::AsyncReadMetadata()
 {
   mRtspResource->DisablePlayoutDelay();
   EnsureActive();
 
--- a/dom/media/omx/RtspMediaCodecReader.h
+++ b/dom/media/omx/RtspMediaCodecReader.h
@@ -48,17 +48,18 @@ public:
     return media::TimeIntervals::Invalid();
   }
 
   virtual void SetIdle() override;
 
   // Disptach a DecodeVideoFrameTask to decode video data.
   virtual nsRefPtr<VideoDataPromise>
   RequestVideoData(bool aSkipToNextKeyframe,
-                   int64_t aTimeThreshold) override;
+                   int64_t aTimeThreshold,
+                   bool aForceDecodeAhead) override;
 
   // Disptach a DecodeAudioDataTask to decode audio data.
   virtual nsRefPtr<AudioDataPromise> RequestAudioData() override;
 
   virtual nsRefPtr<MediaDecoderReader::MetadataPromise> AsyncReadMetadata()
     override;
 
   virtual void HandleResourceAllocated() override;
--- a/dom/media/platforms/agnostic/VorbisDecoder.cpp
+++ b/dom/media/platforms/agnostic/VorbisDecoder.cpp
@@ -1,17 +1,16 @@
 /* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
 /* vim:set ts=2 sw=2 sts=2 et cindent: */
 /* This Source Code Form is subject to the terms of the Mozilla Public
  * License, v. 2.0. If a copy of the MPL was not distributed with this
  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
 
 #include "VorbisDecoder.h"
 #include "VorbisUtils.h"
-#include "XiphExtradata.h"
 
 #include "mozilla/PodOperations.h"
 #include "nsAutoPtr.h"
 
 #undef LOG
 #define LOG(type, msg) MOZ_LOG(gMediaDecoderLog, type, msg)
 
 namespace mozilla {
@@ -67,27 +66,33 @@ VorbisDataDecoder::Shutdown()
 nsresult
 VorbisDataDecoder::Init()
 {
   vorbis_info_init(&mVorbisInfo);
   vorbis_comment_init(&mVorbisComment);
   PodZero(&mVorbisDsp);
   PodZero(&mVorbisBlock);
 
-  nsAutoTArray<unsigned char*,4> headers;
-  nsAutoTArray<size_t,4> headerLens;
-  if (!XiphExtradataToHeaders(headers, headerLens,
-                              mInfo.mCodecSpecificConfig->Elements(),
-                              mInfo.mCodecSpecificConfig->Length())) {
-    return NS_ERROR_FAILURE;
-  }
-  for (size_t i = 0; i < headers.Length(); i++) {
-    if (NS_FAILED(DecodeHeader(headers[i], headerLens[i]))) {
+  size_t available = mInfo.mCodecSpecificConfig->Length();
+  uint8_t *p = mInfo.mCodecSpecificConfig->Elements();
+  for(int i = 0; i < 3; i++) {
+    if (available < 2) {
       return NS_ERROR_FAILURE;
     }
+    available -= 2;
+    size_t length = BigEndian::readUint16(p);
+    p += 2;
+    if (available < length) {
+      return NS_ERROR_FAILURE;
+    }
+    available -= length;
+    if (NS_FAILED(DecodeHeader((const unsigned char*)p, length))) {
+        return NS_ERROR_FAILURE;
+    }
+    p += length;
   }
 
   MOZ_ASSERT(mPacketCount == 3);
 
   int r = vorbis_synthesis_init(&mVorbisDsp, &mVorbisInfo);
   if (r) {
     return NS_ERROR_FAILURE;
   }
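The restored `VorbisDataDecoder::Init()` above parses the codec-specific config inline: three Vorbis headers (identification, comment, setup), each preceded by a 2-byte big-endian length, replacing the backed-out `XiphExtradataToHeaders` helper. A Python sketch of that framing, mirroring the C++ loop (the layout is inferred from the loop itself, not from a spec quoted in the patch):

```python
import struct

def split_headers(extradata):
    """Split extradata into three headers, each prefixed by a
    big-endian uint16 length, as the restored Init() loop does."""
    headers = []
    pos = 0
    for _ in range(3):
        if len(extradata) - pos < 2:
            raise ValueError("truncated length prefix")    # NS_ERROR_FAILURE in C++
        (length,) = struct.unpack_from(">H", extradata, pos)
        pos += 2
        if len(extradata) - pos < length:
            raise ValueError("truncated header payload")   # NS_ERROR_FAILURE in C++
        headers.append(extradata[pos:pos + length])
        pos += length
    return headers

blob = b"\x00\x03abc" + b"\x00\x01x" + b"\x00\x02yz"
assert split_headers(blob) == [b"abc", b"x", b"yz"]
```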
--- a/dom/media/platforms/android/AndroidDecoderModule.cpp
+++ b/dom/media/platforms/android/AndroidDecoderModule.cpp
@@ -376,20 +376,16 @@ nsresult MediaCodecDataDecoder::InitDeco
 }
 
 // This is in usec, so that's 10ms
 #define DECODER_TIMEOUT 10000
 
 #define HANDLE_DECODER_ERROR() \
   if (NS_FAILED(res)) { \
     NS_WARNING("exiting decoder loop due to exception"); \
-    if (mDraining) { \
-      ENVOKE_CALLBACK(DrainComplete); \
-      mDraining = false; \
-    } \
     ENVOKE_CALLBACK(Error); \
     break; \
   }
 
 nsresult MediaCodecDataDecoder::GetInputBuffer(JNIEnv* env, int index, jni::Object::LocalRef* buffer)
 {
   bool retried = false;
   while (!*buffer) {
--- a/dom/media/platforms/apple/AppleUtils.h
+++ b/dom/media/platforms/apple/AppleUtils.h
@@ -38,61 +38,11 @@ public:
   }
 
 private:
   // Copy operator isn't supported and is not implemented.
   AutoCFRelease<T>& operator=(const AutoCFRelease<T>&);
   T mRef;
 };
 
-// CFRefPtr: A CoreFoundation smart pointer.
-template <class T>
-class CFRefPtr {
-public:
-  explicit CFRefPtr(T aRef)
-    : mRef(aRef)
-  {
-    if (mRef) {
-      CFRetain(mRef);
-    }
-  }
-  // Copy constructor.
-  CFRefPtr(const CFRefPtr<T>& aCFRefPtr)
-    : mRef(aCFRefPtr.mRef)
-  {
-    if (mRef) {
-      CFRetain(mRef);
-    }
-  }
-  // Copy operator
-  CFRefPtr<T>& operator=(const CFRefPtr<T>& aCFRefPtr)
-  {
-    if (mRef == aCFRefPtr.mRef) {
-      return;
-    }
-    if (mRef) {
-      CFRelease(mRef);
-    }
-    mRef = aCFRefPtr.mRef;
-    if (mRef) {
-      CFRetain(mRef);
-    }
-    return *this;
-  }
-  ~CFRefPtr()
-  {
-    if (mRef) {
-      CFRelease(mRef);
-    }
-  }
-  // Return the wrapped ref so it can be used as an in parameter.
-  operator T()
-  {
-    return mRef;
-  }
-
-private:
-  T mRef;
-};
-
 } // namespace mozilla
 
 #endif // mozilla_AppleUtils_h
--- a/dom/media/platforms/apple/AppleVDADecoder.cpp
+++ b/dom/media/platforms/apple/AppleVDADecoder.cpp
@@ -35,23 +35,18 @@ AppleVDADecoder::AppleVDADecoder(const V
                                layers::ImageContainer* aImageContainer)
   : mTaskQueue(aVideoTaskQueue)
   , mCallback(aCallback)
   , mImageContainer(aImageContainer)
   , mPictureWidth(aConfig.mImage.width)
   , mPictureHeight(aConfig.mImage.height)
   , mDisplayWidth(aConfig.mDisplay.width)
   , mDisplayHeight(aConfig.mDisplay.height)
-  , mInputIncoming(0)
-  , mIsShutDown(false)
   , mUseSoftwareImages(true)
   , mIs106(!nsCocoaFeatures::OnLionOrLater())
-  , mQueuedSamples(0)
-  , mMonitor("AppleVideoDecoder")
-  , mIsFlushing(false)
   , mDecoder(nullptr)
 {
   MOZ_COUNT_CTOR(AppleVDADecoder);
   // TODO: Verify aConfig.mime_type.
 
   mExtraData = aConfig.mExtraData;
   mMaxRefFrames = 4;
   // Retrieve video dimensions from H264 SPS NAL.
@@ -91,116 +86,69 @@ AppleVDADecoder::Init()
   }
   nsresult rv = InitializeSession();
   return rv;
 }
 
 nsresult
 AppleVDADecoder::Shutdown()
 {
-  MOZ_DIAGNOSTIC_ASSERT(!mIsShutDown);
-  mIsShutDown = true;
-  if (mTaskQueue) {
-    nsCOMPtr<nsIRunnable> runnable =
-      NS_NewRunnableMethod(this, &AppleVDADecoder::ProcessShutdown);
-    mTaskQueue->Dispatch(runnable.forget());
-  } else {
-    ProcessShutdown();
-  }
-  return NS_OK;
-}
-
-void
-AppleVDADecoder::ProcessShutdown()
-{
   if (mDecoder) {
     LOG("%s: cleaning up decoder %p", __func__, mDecoder);
     VDADecoderDestroy(mDecoder);
     mDecoder = nullptr;
   }
+  return NS_OK;
 }
 
 nsresult
 AppleVDADecoder::Input(MediaRawData* aSample)
 {
-  MOZ_ASSERT(mCallback->OnReaderTaskQueue());
-
   LOG("mp4 input sample %p pts %lld duration %lld us%s %d bytes",
       aSample,
       aSample->mTime,
       aSample->mDuration,
       aSample->mKeyframe ? " keyframe" : "",
       aSample->Size());
 
-  mInputIncoming++;
-
   nsCOMPtr<nsIRunnable> runnable =
       NS_NewRunnableMethodWithArg<nsRefPtr<MediaRawData>>(
           this,
           &AppleVDADecoder::SubmitFrame,
           nsRefPtr<MediaRawData>(aSample));
   mTaskQueue->Dispatch(runnable.forget());
   return NS_OK;
 }
 
 nsresult
 AppleVDADecoder::Flush()
 {
-  MOZ_ASSERT(mCallback->OnReaderTaskQueue());
-  mIsFlushing = true;
   mTaskQueue->Flush();
-  nsCOMPtr<nsIRunnable> runnable =
-    NS_NewRunnableMethod(this, &AppleVDADecoder::ProcessFlush);
-  MonitorAutoLock mon(mMonitor);
-  mTaskQueue->Dispatch(runnable.forget());
-  while (mIsFlushing) {
-    mon.Wait();
+  OSStatus rv = VDADecoderFlush(mDecoder, 0 /*dont emit*/);
+  if (rv != noErr) {
+    LOG("AppleVDADecoder::Flush failed waiting for platform decoder "
+        "with error:%d.", rv);
   }
-  mInputIncoming = 0;
+  ClearReorderedFrames();
+
   return NS_OK;
 }
 
 nsresult
 AppleVDADecoder::Drain()
 {
-  MOZ_ASSERT(mCallback->OnReaderTaskQueue());
-  nsCOMPtr<nsIRunnable> runnable =
-    NS_NewRunnableMethod(this, &AppleVDADecoder::ProcessDrain);
-  mTaskQueue->Dispatch(runnable.forget());
-  return NS_OK;
-}
-
-void
-AppleVDADecoder::ProcessFlush()
-{
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
-
-  OSStatus rv = VDADecoderFlush(mDecoder, 0 /*dont emit*/);
-  if (rv != noErr) {
-    LOG("AppleVDADecoder::Flush failed waiting for platform decoder "
-        "with error:%d.", rv);
-  }
-  ClearReorderedFrames();
-  MonitorAutoLock mon(mMonitor);
-  mIsFlushing = false;
-  mon.NotifyAll();
-}
-
-void
-AppleVDADecoder::ProcessDrain()
-{
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
-
+  mTaskQueue->AwaitIdle();
   OSStatus rv = VDADecoderFlush(mDecoder, kVDADecoderFlush_EmitFrames);
   if (rv != noErr) {
     LOG("AppleVDADecoder::Drain failed waiting for platform decoder "
         "with error:%d.", rv);
   }
   DrainReorderedFrames();
   mCallback->DrainComplete();
+  return NS_OK;
 }
 
 //
 // Implementation details.
 //
 
 // Callback passed to the VideoToolbox decoder for returning data.
 // This needs to be static because the API takes a C-style pair of
@@ -218,23 +166,25 @@ PlatformCallback(void* decompressionOutp
 
   // Validate our arguments.
   // According to Apple's TN2267
   // The output callback is still called for all flushed frames,
   // but no image buffers will be returned.
   // FIXME: Distinguish between errors and empty flushed frames.
   if (status != noErr || !image) {
     NS_WARNING("AppleVDADecoder decoder returned no data");
-    image = nullptr;
-  } else if (infoFlags & kVDADecodeInfo_FrameDropped) {
+    return;
+  }
+  MOZ_ASSERT(CFGetTypeID(image) == CVPixelBufferGetTypeID(),
+             "AppleVDADecoder returned an unexpected image type");
+
+  if (infoFlags & kVDADecodeInfo_FrameDropped)
+  {
     NS_WARNING("  ...frame dropped...");
-    image = nullptr;
-  } else {
-    MOZ_ASSERT(image || CFGetTypeID(image) == CVPixelBufferGetTypeID(),
-               "AppleVDADecoder returned an unexpected image type");
+    return;
   }
 
   AppleVDADecoder* decoder =
     static_cast<AppleVDADecoder*>(decompressionOutputRefCon);
 
   AutoCFRelease<CFNumberRef> ptsref =
     (CFNumberRef)CFDictionaryGetValue(frameInfo, CFSTR("FRAME_PTS"));
   AutoCFRelease<CFNumberRef> dtsref =
@@ -253,84 +203,65 @@ PlatformCallback(void* decompressionOutp
   char is_sync_point;
 
   CFNumberGetValue(ptsref, kCFNumberSInt64Type, &pts);
   CFNumberGetValue(dtsref, kCFNumberSInt64Type, &dts);
   CFNumberGetValue(durref, kCFNumberSInt64Type, &duration);
   CFNumberGetValue(boref, kCFNumberSInt64Type, &byte_offset);
   CFNumberGetValue(kfref, kCFNumberSInt8Type, &is_sync_point);
 
-  AppleVDADecoder::AppleFrameRef frameRef(
+  nsAutoPtr<AppleVDADecoder::AppleFrameRef> frameRef(
+    new AppleVDADecoder::AppleFrameRef(
       media::TimeUnit::FromMicroseconds(dts),
       media::TimeUnit::FromMicroseconds(pts),
       media::TimeUnit::FromMicroseconds(duration),
       byte_offset,
-      is_sync_point == 1);
+      is_sync_point == 1));
 
+  // Forward the data back to an object method which can access
+  // the correct MP4Reader callback.
   decoder->OutputFrame(image, frameRef);
 }
 
 AppleVDADecoder::AppleFrameRef*
 AppleVDADecoder::CreateAppleFrameRef(const MediaRawData* aSample)
 {
   MOZ_ASSERT(aSample);
   return new AppleFrameRef(*aSample);
 }
 
 void
 AppleVDADecoder::DrainReorderedFrames()
 {
-  MonitorAutoLock mon(mMonitor);
   while (!mReorderQueue.IsEmpty()) {
     mCallback->Output(mReorderQueue.Pop());
   }
-  mQueuedSamples = 0;
 }
 
 void
 AppleVDADecoder::ClearReorderedFrames()
 {
-  MonitorAutoLock mon(mMonitor);
   while (!mReorderQueue.IsEmpty()) {
     mReorderQueue.Pop();
   }
-  mQueuedSamples = 0;
 }
 
 // Copy and return a decoded frame.
 nsresult
 AppleVDADecoder::OutputFrame(CVPixelBufferRef aImage,
-                             AppleVDADecoder::AppleFrameRef aFrameRef)
+                             nsAutoPtr<AppleVDADecoder::AppleFrameRef> aFrameRef)
 {
-  if (mIsShutDown || mIsFlushing) {
-    // We are in the process of flushing or shutting down; ignore frame.
-    return NS_OK;
-  }
-
   LOG("mp4 output frame %lld dts %lld pts %lld duration %lld us%s",
-      aFrameRef.byte_offset,
-      aFrameRef.decode_timestamp.ToMicroseconds(),
-      aFrameRef.composition_timestamp.ToMicroseconds(),
-      aFrameRef.duration.ToMicroseconds(),
-      aFrameRef.is_sync_point ? " keyframe" : ""
+      aFrameRef->byte_offset,
+      aFrameRef->decode_timestamp.ToMicroseconds(),
+      aFrameRef->composition_timestamp.ToMicroseconds(),
+      aFrameRef->duration.ToMicroseconds(),
+      aFrameRef->is_sync_point ? " keyframe" : ""
   );
 
-  if (mQueuedSamples > mMaxRefFrames) {
-    // We had stopped requesting more input because we had received too much at
-    // the time. We can ask for more once again.
-    mCallback->InputExhausted();
-  }
-  MOZ_ASSERT(mQueuedSamples);
-  mQueuedSamples--;
-
-  if (!aImage) {
-    // Image was dropped by decoder.
-    return NS_OK;
-  }
-
   // Where our resulting image will end up.
   nsRefPtr<VideoData> data;
   // Bounds.
   VideoInfo info;
   info.mDisplay = nsIntSize(mDisplayWidth, mDisplayHeight);
   gfx::IntRect visible = gfx::IntRect(0,
                                       0,
                                       mPictureWidth,
@@ -376,22 +307,22 @@ AppleVDADecoder::OutputFrame(CVPixelBuff
     buffer.mPlanes[2].mOffset = 1;
     buffer.mPlanes[2].mSkip = 1;
 
     // Copy the image data into our own format.
     data =
       VideoData::Create(info,
                         mImageContainer,
                         nullptr,
-                        aFrameRef.byte_offset,
-                        aFrameRef.composition_timestamp.ToMicroseconds(),
-                        aFrameRef.duration.ToMicroseconds(),
+                        aFrameRef->byte_offset,
+                        aFrameRef->composition_timestamp.ToMicroseconds(),
+                        aFrameRef->duration.ToMicroseconds(),
                         buffer,
-                        aFrameRef.is_sync_point,
-                        aFrameRef.decode_timestamp.ToMicroseconds(),
+                        aFrameRef->is_sync_point,
+                        aFrameRef->decode_timestamp.ToMicroseconds(),
                         visible);
     // Unlock the returned image data.
     CVPixelBufferUnlockBaseAddress(aImage, kCVPixelBufferLock_ReadOnly);
   } else {
     IOSurfacePtr surface = MacIOSurfaceLib::CVPixelBufferGetIOSurface(aImage);
     MOZ_ASSERT(surface, "Decoder didn't return an IOSurface backed buffer");
 
     nsRefPtr<MacIOSurface> macSurface = new MacIOSurface(surface);
@@ -400,51 +331,46 @@ AppleVDADecoder::OutputFrame(CVPixelBuff
       mImageContainer->CreateImage(ImageFormat::MAC_IOSURFACE);
     layers::MacIOSurfaceImage* videoImage =
       static_cast<layers::MacIOSurfaceImage*>(image.get());
     videoImage->SetSurface(macSurface);
 
     data =
       VideoData::CreateFromImage(info,
                                  mImageContainer,
-                                 aFrameRef.byte_offset,
-                                 aFrameRef.composition_timestamp.ToMicroseconds(),
-                                 aFrameRef.duration.ToMicroseconds(),
+                                 aFrameRef->byte_offset,
+                                 aFrameRef->composition_timestamp.ToMicroseconds(),
+                                 aFrameRef->duration.ToMicroseconds(),
                                  image.forget(),
-                                 aFrameRef.is_sync_point,
-                                 aFrameRef.decode_timestamp.ToMicroseconds(),
+                                 aFrameRef->is_sync_point,
+                                 aFrameRef->decode_timestamp.ToMicroseconds(),
                                  visible);
   }
 
   if (!data) {
     NS_ERROR("Couldn't create VideoData for frame");
     mCallback->Error();
     return NS_ERROR_FAILURE;
   }
 
   // Frames come out in DTS order but we need to output them
   // in composition order.
-  MonitorAutoLock mon(mMonitor);
   mReorderQueue.Push(data);
   while (mReorderQueue.Length() > mMaxRefFrames) {
     mCallback->Output(mReorderQueue.Pop());
   }
   LOG("%llu decoded frames queued",
       static_cast<unsigned long long>(mReorderQueue.Length()));
 
   return NS_OK;
 }
 
 nsresult
 AppleVDADecoder::SubmitFrame(MediaRawData* aSample)
 {
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
-
-  mInputIncoming--;
-
   AutoCFRelease<CFDataRef> block =
     CFDataCreate(kCFAllocatorDefault, aSample->Data(), aSample->Size());
   if (!block) {
     NS_ERROR("Couldn't create CFData");
     return NS_ERROR_FAILURE;
   }
 
   AutoCFRelease<CFNumberRef> pts =
@@ -485,18 +411,16 @@ AppleVDADecoder::SubmitFrame(MediaRawDat
   AutoCFRelease<CFDictionaryRef> frameInfo =
     CFDictionaryCreate(kCFAllocatorDefault,
                        keys,
                        values,
                        ArrayLength(keys),
                        &kCFTypeDictionaryKeyCallBacks,
                        &kCFTypeDictionaryValueCallBacks);
 
-  mQueuedSamples++;
-
   OSStatus rv = VDADecoderDecode(mDecoder,
                                  0,
                                  block,
                                  frameInfo);
 
   if (rv != noErr) {
     NS_WARNING("AppleVDADecoder: Couldn't pass frame to decoder");
     mCallback->Error();
@@ -510,17 +434,17 @@ AppleVDADecoder::SubmitFrame(MediaRawDat
     // This dictionary can contain client provided information associated with
     // the frame being decoded, for example presentation time.
     // The CFDictionaryRef will be retained by the framework.
     // In 10.6, it is released one too many. So retain it.
     CFRetain(frameInfo);
   }
 
   // Ask for more data.
-  if (!mInputIncoming && mQueuedSamples <= mMaxRefFrames) {
+  if (mTaskQueue->IsEmpty()) {
     LOG("AppleVDADecoder task queue empty; requesting more data");
     mCallback->InputExhausted();
   }
 
   return NS_OK;
 }
 
 nsresult
--- a/dom/media/platforms/apple/AppleVDADecoder.h
+++ b/dom/media/platforms/apple/AppleVDADecoder.h
@@ -3,17 +3,16 @@
 /* This Source Code Form is subject to the terms of the Mozilla Public
  * License, v. 2.0. If a copy of the MPL was not distributed with this
  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
 
 #ifndef mozilla_AppleVDADecoder_h
 #define mozilla_AppleVDADecoder_h
 
 #include "PlatformDecoderModule.h"
-#include "mozilla/Atomics.h"
 #include "mozilla/ReentrantMonitor.h"
 #include "MP4Decoder.h"
 #include "nsIThread.h"
 #include "ReorderQueue.h"
 #include "TimeUnits.h"
 
 #include "VideoDecodeAcceleration/VDADecoder.h"
 
@@ -76,63 +75,37 @@ public:
   virtual nsresult Flush() override;
   virtual nsresult Drain() override;
   virtual nsresult Shutdown() override;
   virtual bool IsHardwareAccelerated() const override
   {
     return true;
   }
 
-  // Access from the taskqueue and the decoder's thread.
-  // OutputFrame is thread-safe.
   nsresult OutputFrame(CVPixelBufferRef aImage,
-                       AppleFrameRef aFrameRef);
+                       nsAutoPtr<AppleFrameRef> aFrameRef);
 
-protected:
-  // Flush and Drain operation, always run
-  virtual void ProcessFlush();
-  virtual void ProcessDrain();
-  virtual void ProcessShutdown();
-
+ protected:
   AppleFrameRef* CreateAppleFrameRef(const MediaRawData* aSample);
   void DrainReorderedFrames();
   void ClearReorderedFrames();
   CFDictionaryRef CreateOutputConfiguration();
 
   nsRefPtr<MediaByteBuffer> mExtraData;
   nsRefPtr<FlushableTaskQueue> mTaskQueue;
   MediaDataDecoderCallback* mCallback;
   nsRefPtr<layers::ImageContainer> mImageContainer;
+  ReorderQueue mReorderQueue;
   uint32_t mPictureWidth;
   uint32_t mPictureHeight;
   uint32_t mDisplayWidth;
   uint32_t mDisplayHeight;
-  // Accessed on multiple threads, but only set in constructor.
   uint32_t mMaxRefFrames;
-  // Increased when Input is called, and decreased when ProcessFrame runs.
-  // Reaching 0 indicates that there's no pending Input.
-  Atomic<uint32_t> mInputIncoming;
-  Atomic<bool> mIsShutDown;
-
-  const bool mUseSoftwareImages;
-  const bool mIs106;
-
-  // Number of times a sample was queued via Input(). Will be decreased upon
-  // the decoder's callback being invoked.
-  // This is used to calculate how many frames has been buffered by the decoder.
-  Atomic<uint32_t> mQueuedSamples;
-
-  // For wait on mIsFlushing during Shutdown() process.
-  // Protects mReorderQueue.
-  Monitor mMonitor;
-  // Set on reader/decode thread calling Flush() to indicate that output is
-  // not required and so input samples on mTaskQueue need not be processed.
-  // Cleared on mTaskQueue in ProcessDrain().
-  Atomic<bool> mIsFlushing;
-  ReorderQueue mReorderQueue;
+  bool mUseSoftwareImages;
+  bool mIs106;
 
 private:
   VDADecoder mDecoder;
 
   // Method to pass a frame to VideoToolbox for decoding.
   nsresult SubmitFrame(MediaRawData* aSample);
   // Method to set up the decompression session.
   nsresult InitializeSession();
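The header hunk above moves `mReorderQueue` back to plain (unlocked) member state alongside `mMaxRefFrames`. The idea behind that queue: H.264 frames come out of the decoder in decode (DTS) order but must be output in presentation (PTS) order, so decoded frames are held and released only once more than `mMaxRefFrames` are queued. A sketch of the same policy with a min-heap (names and the tuple representation are illustrative, not the ReorderQueue API):

```python
import heapq

MAX_REF_FRAMES = 4  # matches the mMaxRefFrames default set in the constructor

def reorder(decoded_frames):
    """decoded_frames: iterable of (pts, frame) in decode (DTS) order.
    Yields frames in presentation (PTS) order, holding at most
    MAX_REF_FRAMES frames back, then drains the rest at end of stream."""
    heap, out = [], []
    for pts, frame in decoded_frames:
        heapq.heappush(heap, (pts, frame))
        while len(heap) > MAX_REF_FRAMES:          # like OutputFrame()'s loop
            out.append(heapq.heappop(heap)[1])
    while heap:                                     # like DrainReorderedFrames()
        out.append(heapq.heappop(heap)[1])
    return out

# B-frames make PTS arrive out of order relative to DTS.
assert reorder([(0, "I"), (3, "P"), (1, "B1"), (2, "B2"),
                (6, "P2"), (4, "B3"), (5, "B4")]) \
       == ["I", "B1", "B2", "P", "B3", "B4", "P2"]
```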
--- a/dom/media/platforms/apple/AppleVTDecoder.cpp
+++ b/dom/media/platforms/apple/AppleVTDecoder.cpp
@@ -53,37 +53,36 @@ AppleVTDecoder::~AppleVTDecoder()
 
 nsresult
 AppleVTDecoder::Init()
 {
   nsresult rv = InitializeSession();
   return rv;
 }
 
-void
-AppleVTDecoder::ProcessShutdown()
+nsresult
+AppleVTDecoder::Shutdown()
 {
   if (mSession) {
     LOG("%s: cleaning up session %p", __func__, mSession);
     VTDecompressionSessionInvalidate(mSession);
     CFRelease(mSession);
     mSession = nullptr;
   }
   if (mFormat) {
     LOG("%s: releasing format %p", __func__, mFormat);
     CFRelease(mFormat);
     mFormat = nullptr;
   }
+  return NS_OK;
 }
 
 nsresult
 AppleVTDecoder::Input(MediaRawData* aSample)
 {
-  MOZ_ASSERT(mCallback->OnReaderTaskQueue());
-
   LOG("mp4 input sample %p pts %lld duration %lld us%s %d bytes",
       aSample,
       aSample->mTime,
       aSample->mDuration,
       aSample->mKeyframe ? " keyframe" : "",
       aSample->Size());
 
 #ifdef LOG_MEDIA_SHA1
@@ -93,51 +92,51 @@ AppleVTDecoder::Input(MediaRawData* aSam
   hash.finish(digest_buf);
   nsAutoCString digest;
   for (size_t i = 0; i < sizeof(digest_buf); i++) {
     digest.AppendPrintf("%02x", digest_buf[i]);
   }
   LOG("    sha1 %s", digest.get());
 #endif // LOG_MEDIA_SHA1
 
-  mInputIncoming++;
-
   nsCOMPtr<nsIRunnable> runnable =
       NS_NewRunnableMethodWithArg<nsRefPtr<MediaRawData>>(
-          this, &AppleVTDecoder::SubmitFrame, aSample);
+          this,
+          &AppleVTDecoder::SubmitFrame,
+          nsRefPtr<MediaRawData>(aSample));
   mTaskQueue->Dispatch(runnable.forget());
   return NS_OK;
 }
 
-void
-AppleVTDecoder::ProcessFlush()
+nsresult
+AppleVTDecoder::Flush()
 {
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
+  mTaskQueue->Flush();
   nsresult rv = WaitForAsynchronousFrames();
   if (NS_FAILED(rv)) {
     LOG("AppleVTDecoder::Flush failed waiting for platform decoder "
         "with error:%d.", rv);
   }
   ClearReorderedFrames();
-  MonitorAutoLock mon(mMonitor);
-  mIsFlushing = false;
-  mon.NotifyAll();
+
+  return rv;
 }
 
-void
-AppleVTDecoder::ProcessDrain()
+nsresult
+AppleVTDecoder::Drain()
 {
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
+  mTaskQueue->AwaitIdle();
   nsresult rv = WaitForAsynchronousFrames();
   if (NS_FAILED(rv)) {
     LOG("AppleVTDecoder::Drain failed waiting for platform decoder "
         "with error:%d.", rv);
   }
   DrainReorderedFrames();
   mCallback->DrainComplete();
+  return NS_OK;
 }
 
 //
 // Implementation details.
 //
 
 // Callback passed to the VideoToolbox decoder for returning data.
 // This needs to be static because the API takes a C-style pair of
@@ -157,24 +156,27 @@ PlatformCallback(void* decompressionOutp
   AppleVTDecoder* decoder =
     static_cast<AppleVTDecoder*>(decompressionOutputRefCon);
   nsAutoPtr<AppleVTDecoder::AppleFrameRef> frameRef(
     static_cast<AppleVTDecoder::AppleFrameRef*>(sourceFrameRefCon));
 
   // Validate our arguments.
   if (status != noErr || !image) {
     NS_WARNING("VideoToolbox decoder returned no data");
-    image = nullptr;
-  } else if (flags & kVTDecodeInfo_FrameDropped) {
+    return;
+  }
+  if (flags & kVTDecodeInfo_FrameDropped) {
     NS_WARNING("  ...frame tagged as dropped...");
-  } else {
-    MOZ_ASSERT(CFGetTypeID(image) == CVPixelBufferGetTypeID(),
-      "VideoToolbox returned an unexpected image type");
   }
-  decoder->OutputFrame(image, *frameRef);
+  MOZ_ASSERT(CFGetTypeID(image) == CVPixelBufferGetTypeID(),
+    "VideoToolbox returned an unexpected image type");
+
+  // Forward the data back to an object method which can access
+  // the correct MP4Reader callback.
+  decoder->OutputFrame(image, frameRef);
 }
 
 nsresult
 AppleVTDecoder::WaitForAsynchronousFrames()
 {
   OSStatus rv = VTDecompressionSessionWaitForAsynchronousFrames(mSession);
   if (rv != noErr) {
     LOG("AppleVTDecoder: Error %d waiting for asynchronous frames", rv);
@@ -196,18 +198,16 @@ TimingInfoFromSample(MediaRawData* aSamp
     CMTimeMake(aSample->mTimecode, USECS_PER_S);
 
   return timestamp;
 }
 
 nsresult
 AppleVTDecoder::SubmitFrame(MediaRawData* aSample)
 {
-  MOZ_ASSERT(mTaskQueue->IsCurrentThreadIn());
-  mInputIncoming--;
   // For some reason this gives me a double-free error with stagefright.
   AutoCFRelease<CMBlockBufferRef> block = nullptr;
   AutoCFRelease<CMSampleBufferRef> sample = nullptr;
   VTDecodeInfoFlags infoFlags;
   OSStatus rv;
 
   // FIXME: This copies the sample data. I think we can provide
   // a custom block source which reuses the aSample buffer.
@@ -228,34 +228,32 @@ AppleVTDecoder::SubmitFrame(MediaRawData
   }
   CMSampleTimingInfo timestamp = TimingInfoFromSample(aSample);
   rv = CMSampleBufferCreate(kCFAllocatorDefault, block, true, 0, 0, mFormat, 1, 1, &timestamp, 0, NULL, sample.receive());
   if (rv != noErr) {
     NS_ERROR("Couldn't create CMSampleBuffer");
     return NS_ERROR_FAILURE;
   }
 
-  mQueuedSamples++;
-
   VTDecodeFrameFlags decodeFlags =
     kVTDecodeFrame_EnableAsynchronousDecompression;
   rv = VTDecompressionSessionDecodeFrame(mSession,
                                          sample,
                                          decodeFlags,
                                          CreateAppleFrameRef(aSample),
                                          &infoFlags);
   if (rv != noErr && !(infoFlags & kVTDecodeInfo_FrameDropped)) {
     LOG("AppleVTDecoder: Error %d VTDecompressionSessionDecodeFrame", rv);
     NS_WARNING("Couldn't pass frame to decoder");
     mCallback->Error();
     return NS_ERROR_FAILURE;
   }
 
   // Ask for more data.
-  if (!mInputIncoming && mQueuedSamples <= mMaxRefFrames) {
+  if (mTaskQueue->IsEmpty()) {
     LOG("AppleVTDecoder task queue empty; requesting more data");
     mCallback->InputExhausted();
   }
 
   return NS_OK;
 }
 
 nsresult
--- a/dom/media/platforms/apple/AppleVTDecoder.h
+++ b/dom/media/platforms/apple/AppleVTDecoder.h
@@ -17,26 +17,24 @@ class AppleVTDecoder : public AppleVDADe
 public:
   AppleVTDecoder(const VideoInfo& aConfig,
                  FlushableTaskQueue* aVideoTaskQueue,
                  MediaDataDecoderCallback* aCallback,
                  layers::ImageContainer* aImageContainer);
   virtual ~AppleVTDecoder();
   virtual nsresult Init() override;
   virtual nsresult Input(MediaRawData* aSample) override;
+  virtual nsresult Flush() override;
+  virtual nsresult Drain() override;
+  virtual nsresult Shutdown() override;
   virtual bool IsHardwareAccelerated() const override
   {
     return mIsHardwareAccelerated;
   }
 
-protected:
-  void ProcessFlush() override;
-  void ProcessDrain() override;
-  void ProcessShutdown() override;
-
 private:
   CMVideoFormatDescriptionRef mFormat;
   VTDecompressionSessionRef mSession;
 
   // Method to pass a frame to VideoToolbox for decoding.
   nsresult SubmitFrame(MediaRawData* aSample);
   // Method to set up the decompression session.
   nsresult InitializeSession();
--- a/dom/media/webm/WebMBufferedParser.cpp
+++ b/dom/media/webm/WebMBufferedParser.cpp
@@ -29,17 +29,16 @@ VIntLength(unsigned char aFirstByte, uin
   NS_ASSERTION(count >= 1 && count <= 8, "Insane VInt length.");
   return count;
 }
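For reference, the EBML variable-length-integer length computation that `VIntLength` asserts on above can be sketched as follows. This is a standalone illustration, not the exact Gecko helper (the real function also returns a mask via an out parameter): the byte count of a VInt is encoded by the position of the first set bit in its leading byte.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: an EBML VInt's total byte count equals the position (from the
// most-significant bit) of the first set bit in its first byte.
// Valid first bytes yield counts in [1, 8].
static uint32_t SketchVIntLength(unsigned char aFirstByte) {
  uint32_t count = 1;
  for (unsigned char mask = 0x80; mask != 0; mask >>= 1) {
    if (aFirstByte & mask) {
      break;
    }
    count++;
  }
  return count;
}
```

For example, the EBML header ID `0x1A45DFA3` starts with byte `0x1A`, whose first set bit is in position 4, so the ID occupies four bytes.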
 
 void WebMBufferedParser::Append(const unsigned char* aBuffer, uint32_t aLength,
                                 nsTArray<WebMTimeDataOffset>& aMapping,
                                 ReentrantMonitor& aReentrantMonitor)
 {
-  static const uint32_t EBML_ID = 0x1a45dfa3;
   static const uint32_t SEGMENT_ID = 0x18538067;
   static const uint32_t SEGINFO_ID = 0x1549a966;
   static const uint32_t TRACKS_ID = 0x1654AE6B;
   static const uint32_t CLUSTER_ID = 0x1f43b675;
   static const uint32_t TIMECODESCALE_ID = 0x2ad7b1;
   static const unsigned char TIMECODE_ID = 0xe7;
   static const unsigned char BLOCK_ID = 0xa1;
   static const unsigned char SIMPLEBLOCK_ID = 0xa3;
@@ -98,22 +97,16 @@ void WebMBufferedParser::Append(const un
         mVInt = VInt();
         mVIntLeft = mElement.mSize.mValue;
         mState = READ_VINT_REST;
         mNextState = READ_TIMECODESCALE;
         break;
       case CLUSTER_ID:
         mClusterOffset = mCurrentOffset + (p - aBuffer) -
                         (mElement.mID.mLength + mElement.mSize.mLength);
-        // Handle "unknown" length;
-        if (mElement.mSize.mValue + 1 != uint64_t(1) << (mElement.mSize.mLength * 7)) {
-          mClusterEndOffset = mClusterOffset + mElement.mID.mLength + mElement.mSize.mLength + mElement.mSize.mValue;
-        } else {
-          mClusterEndOffset = -1;
-        }
         mState = READ_ELEMENT_ID;
         break;
       case SIMPLEBLOCK_ID:
         /* FALLTHROUGH */
       case BLOCK_ID:
         mBlockSize = mElement.mSize.mValue;
         mBlockTimecode = 0;
         mBlockTimecodeLength = BLOCK_TIMECODE_LENGTH;
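The cluster handling in the hunk above backs out a check for EBML's "unknown size" convention: a size field whose data bits are all ones means the element's length is not stated. With a size field `mLength` bytes long there are `7 * mLength` usable data bits, so "all ones" is `(1 << (7 * mLength)) - 1`, which is what the removed `mSize.mValue + 1 != uint64_t(1) << (mSize.mLength * 7)` test detected. A standalone sketch (names are illustrative, not the Gecko parser's):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: detect EBML's "unknown length" marker. A size value whose
// 7 * aLengthBytes data bits are all set encodes an unknown element size.
static bool IsUnknownEBMLSize(uint64_t aValue, uint32_t aLengthBytes) {
  return aValue + 1 == uint64_t(1) << (7 * aLengthBytes);
}
```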
@@ -121,20 +114,16 @@ void WebMBufferedParser::Append(const un
                        (mElement.mID.mLength + mElement.mSize.mLength);
         mState = READ_VINT;
         mNextState = READ_BLOCK_TIMECODE;
         break;
       case TRACKS_ID:
         mSkipBytes = mElement.mSize.mValue;
         mState = CHECK_INIT_FOUND;
         break;
-      case EBML_ID:
-        mLastInitStartOffset = mCurrentOffset + (p - aBuffer) -
-                            (mElement.mID.mLength + mElement.mSize.mLength);
-        /* FALLTHROUGH */
       default:
         mSkipBytes = mElement.mSize.mValue;
         mState = SKIP_DATA;
         mNextState = READ_ELEMENT_ID;
         break;
       }
       break;
     case READ_VINT: {
@@ -178,18 +167,17 @@ void WebMBufferedParser::Append(const un
                               mElement.mID.mLength + mElement.mSize.mLength;
           uint32_t idx = aMapping.IndexOfFirstElementGt(endOffset);
           if (idx == 0 || aMapping[idx - 1] != endOffset) {
             // Don't insert invalid negative timecodes.
             if (mBlockTimecode >= 0 || mClusterTimecode >= uint16_t(abs(mBlockTimecode))) {
               MOZ_ASSERT(mGotTimecodeScale);
               uint64_t absTimecode = mClusterTimecode + mBlockTimecode;
               absTimecode *= mTimecodeScale;
-              WebMTimeDataOffset entry(endOffset, absTimecode, mLastInitStartOffset,
-                                       mClusterOffset, mClusterEndOffset);
+              WebMTimeDataOffset entry(endOffset, absTimecode, mClusterOffset);
               aMapping.InsertElementAt(idx, entry);
             }
           }
         }
 
         // Skip rest of block header and the block's payload.
         mBlockSize -= mVInt.mLength;
         mBlockSize -= BLOCK_TIMECODE_LENGTH;
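The absolute-timecode computation in the hunk above sums the cluster-level timecode with the signed block-relative timecode and scales by the segment's `TimecodeScale` (default 1,000,000 ns per tick). A minimal sketch of that arithmetic, assuming the negative-timecode guard shown above has already rejected sums that would underflow:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: a block's absolute timecode in nanoseconds. Block timecodes are
// signed offsets relative to their enclosing cluster; the guard in the
// parser ensures the sum is non-negative before this runs.
static uint64_t AbsoluteTimecodeNs(uint64_t aClusterTimecode,
                                   int16_t aBlockTimecode,
                                   uint32_t aTimecodeScale) {
  return (aClusterTimecode + aBlockTimecode) * uint64_t(aTimecodeScale);
}
```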
@@ -199,54 +187,41 @@ void WebMBufferedParser::Append(const un
       }
       break;
     case SKIP_DATA:
       if (mSkipBytes) {
         uint32_t left = aLength - (p - aBuffer);
         left = std::min(left, mSkipBytes);
         p += left;
         mSkipBytes -= left;
-      }
-      if (!mSkipBytes) {
-        mBlockEndOffset = mCurrentOffset + (p - aBuffer);
+      } else {
         mState = mNextState;
       }
       break;
     case CHECK_INIT_FOUND:
       if (mSkipBytes) {
         uint32_t left = aLength - (p - aBuffer);
         left = std::min(left, mSkipBytes);
         p += left;
         mSkipBytes -= left;
       }
       if (!mSkipBytes) {
         if (mInitEndOffset < 0) {
           mInitEndOffset = mCurrentOffset + (p - aBuffer);
-          mBlockEndOffset = mCurrentOffset + (p - aBuffer);
         }
         mState = READ_ELEMENT_ID;
       }
       break;
     }
   }
 
   NS_ASSERTION(p == aBuffer + aLength, "Must have parsed to end of data.");
   mCurrentOffset += aLength;
 }
 
-int64_t
-WebMBufferedParser::EndSegmentOffset(int64_t aOffset)
-{
-  if (mLastInitStartOffset > aOffset || mClusterOffset > aOffset) {
-    return std::min(mLastInitStartOffset >= 0 ? mLastInitStartOffset : INT64_MAX,
-                    mClusterOffset >= 0 ? mClusterOffset : INT64_MAX);
-  }
-  return mBlockEndOffset;
-}
-
 // SyncOffsetComparator and TimeComparator are slightly confusing, in that
 // the nsTArray they're used with (mTimeMapping) is sorted by mEndOffset and
 // these comparators are used on the other fields of WebMTimeDataOffset.
 // This is only valid because timecodes are required to be monotonically
 // increasing within a file (thus establishing an ordering relationship with
 // mTimecode), and mEndOffset is derived from mSyncOffset.
 struct SyncOffsetComparator {
   bool Equals(const WebMTimeDataOffset& a, const int64_t& b) const {
@@ -364,23 +339,16 @@ void WebMBufferedState::NotifyDataArrive
     if (mRangeParsers[i].mCurrentOffset >= mRangeParsers[i + 1].mStartOffset) {
       mRangeParsers[i + 1].mStartOffset = mRangeParsers[i].mStartOffset;
       mRangeParsers[i + 1].mInitEndOffset = mRangeParsers[i].mInitEndOffset;
       mRangeParsers.RemoveElementAt(i);
     } else {
       i += 1;
     }
   }
-
-  if (mRangeParsers.IsEmpty()) {
-    return;
-  }
-
-  ReentrantMonitorAutoEnter mon(mReentrantMonitor);
-  mLastBlockOffset = mRangeParsers.LastElement().mBlockEndOffset;
 }
 
 void WebMBufferedState::Reset() {
   mRangeParsers.Clear();
   mTimeMapping.Clear();
 }
 
 void WebMBufferedState::UpdateIndex(const nsTArray<MediaByteRange>& aRanges, MediaResource* aResource)
@@ -410,45 +378,31 @@ void WebMBufferedState::UpdateIndex(cons
         length -= uint32_t(adjust);
       } else {
         mRangeParsers.InsertElementAt(idx, WebMBufferedParser(offset));
         if (idx) {
           mRangeParsers[idx].SetTimecodeScale(mRangeParsers[0].GetTimecodeScale());
         }
       }
     }
-    while (length > 0) {
-      static const uint32_t BLOCK_SIZE = 1048576;
-      uint32_t block = std::min(length, BLOCK_SIZE);
-      nsRefPtr<MediaByteBuffer> bytes = aResource->MediaReadAt(offset, block);
-      if (!bytes) {
-        break;
-      }
+    nsRefPtr<MediaByteBuffer> bytes = aResource->MediaReadAt(offset, length);
+    if (bytes) {
       NotifyDataArrived(bytes->Elements(), bytes->Length(), offset);
-      length -= bytes->Length();
-      offset += bytes->Length();
     }
   }
 }
 
 int64_t WebMBufferedState::GetInitEndOffset()
 {
   if (mRangeParsers.IsEmpty()) {
     return -1;
   }
   return mRangeParsers[0].mInitEndOffset;
 }
 
-int64_t WebMBufferedState::GetLastBlockOffset()
-{
-  ReentrantMonitorAutoEnter mon(mReentrantMonitor);
-
-  return mLastBlockOffset;
-}
-
 bool WebMBufferedState::GetStartTime(uint64_t *aTime)
 {
   ReentrantMonitorAutoEnter mon(mReentrantMonitor);
 
   if (mTimeMapping.IsEmpty()) {
     return false;
   }
 
--- a/dom/media/webm/WebMBufferedParser.h
+++ b/dom/media/webm/WebMBufferedParser.h
@@ -12,65 +12,49 @@
 #include "MediaResource.h"
 
 namespace mozilla {
 
 // Stores a stream byte offset and the scaled timecode of the block at
 // that offset.
 struct WebMTimeDataOffset
 {
-  WebMTimeDataOffset(int64_t aEndOffset, uint64_t aTimecode,
-                     int64_t aInitOffset, int64_t aSyncOffset,
-                     int64_t aClusterEndOffset)
-    : mEndOffset(aEndOffset)
-    , mInitOffset(aInitOffset)
-    , mSyncOffset(aSyncOffset)
-    , mClusterEndOffset(aClusterEndOffset)
-    , mTimecode(aTimecode)
+  WebMTimeDataOffset(int64_t aEndOffset, uint64_t aTimecode, int64_t aSyncOffset)
+    : mEndOffset(aEndOffset), mSyncOffset(aSyncOffset), mTimecode(aTimecode)
   {}
 
   bool operator==(int64_t aEndOffset) const {
     return mEndOffset == aEndOffset;
   }
 
   bool operator!=(int64_t aEndOffset) const {
     return mEndOffset != aEndOffset;
   }
 
   bool operator<(int64_t aEndOffset) const {
     return mEndOffset < aEndOffset;
   }
 
   int64_t mEndOffset;
-  int64_t mInitOffset;
   int64_t mSyncOffset;
-  int64_t mClusterEndOffset;
   uint64_t mTimecode;
 };
 
 // A simple WebM parser that produces data offset to timecode pairs as it
 // consumes blocks.  A new parser is created for each distinct range of data
 // received and begins parsing from the first WebM cluster within that
 // range.  Old parsers are destroyed when their range merges with a later
 // parser or an already parsed range.  The parser may start at any position
 // within the stream.
 struct WebMBufferedParser
 {
   explicit WebMBufferedParser(int64_t aOffset)
-    : mStartOffset(aOffset)
-    , mCurrentOffset(aOffset)
-    , mInitEndOffset(-1)
-    , mBlockEndOffset(-1)
-    , mState(READ_ELEMENT_ID)
-    , mVIntRaw(false)
-    , mLastInitStartOffset(-1)
-    , mClusterSyncPos(0)
-    , mClusterEndOffset(-1)
-    , mTimecodeScale(1000000)
-    , mGotTimecodeScale(false)
+    : mStartOffset(aOffset), mCurrentOffset(aOffset), mInitEndOffset(-1),
+      mState(READ_ELEMENT_ID), mVIntRaw(false), mClusterSyncPos(0),
+      mTimecodeScale(1000000), mGotTimecodeScale(false)
   {
     if (mStartOffset != 0) {
       mState = FIND_CLUSTER_SYNC;
     }
   }
 
   uint32_t GetTimecodeScale() {
     MOZ_ASSERT(mGotTimecodeScale);
@@ -94,38 +78,29 @@ struct WebMBufferedParser
   bool operator==(int64_t aOffset) const {
     return mCurrentOffset == aOffset;
   }
 
   bool operator<(int64_t aOffset) const {
     return mCurrentOffset < aOffset;
   }
 
-  // Returns the start offset of the init (EBML) or media segment (Cluster)
-  // following the aOffset position. If none were found, returns mBlockEndOffset.
-  // This allows to determine the end of the interval containg aOffset.
-  int64_t EndSegmentOffset(int64_t aOffset);
-
   // The offset at which this parser started parsing.  Used to merge
   // adjacent parsers, in which case the later parser adopts the earlier
   // parser's mStartOffset.
   int64_t mStartOffset;
 
-  // Current offset within the stream.  Updated in chunks as Append() consumes
+  // Current offset with the stream.  Updated in chunks as Append() consumes
   // data.
   int64_t mCurrentOffset;
 
-  // Tracks element's end offset. This indicates the end of the first init
-  // segment. Will only be set if a Segment Information has been found.
+  // Tracks element's end offset. This indicates the end of the init segment.
+  // Will only be set if a Segment Information has been found.
   int64_t mInitEndOffset;
 
-  // End offset of the last block parsed.
-  // Will only be set if a complete block has been parsed.
-  int64_t mBlockEndOffset;
-
 private:
   enum State {
     // Parser start state.  Expects to begin at a valid EBML element.  Move
     // to READ_VINT with mVIntRaw true, then return to READ_ELEMENT_SIZE.
     READ_ELEMENT_ID,
 
     // Store element ID read into mVInt into mElement.mID.  Move to
     // READ_VINT with mVIntRaw false, then return to PARSE_ELEMENT.
@@ -195,20 +170,16 @@ private:
   };
 
   EBMLElement mElement;
 
   VInt mVInt;
 
   bool mVIntRaw;
 
-  // EBML start offset. This indicates the start of the last init segment
-  // parsed. Will only be set if an EBML element has been found.
-  int64_t mLastInitStartOffset;
-
   // Current match position within CLUSTER_SYNC_ID.  Used to find sync
   // within arbitrary data.
   uint32_t mClusterSyncPos;
 
   // Number of bytes of mVInt left to read.  mVInt is complete once this
   // reaches 0.
   uint32_t mVIntLeft;
 
@@ -219,19 +190,16 @@ private:
   // Cluster-level timecode.
   uint64_t mClusterTimecode;
 
   // Start offset of the cluster currently being parsed.  Used as the sync
   // point offset for the offset-to-time mapping as each block timecode is
   // been parsed.
   int64_t mClusterOffset;
 
-  // End offset of the cluster currently being parsed. -1 if unknown.
-  int64_t mClusterEndOffset;
-
   // Start offset of the block currently being parsed.  Used as the byte
   // offset for the offset-to-time mapping once the block timecode has been
   // parsed.
   int64_t mBlockOffset;
 
   // Block-level timecode.  This is summed with mClusterTimecode to produce
   // an absolute timecode for the offset-to-time mapping.
   int16_t mBlockTimecode;
@@ -252,59 +220,52 @@ private:
   bool mGotTimecodeScale;
 };
 
 class WebMBufferedState final
 {
   NS_INLINE_DECL_THREADSAFE_REFCOUNTING(WebMBufferedState)
 
 public:
-  WebMBufferedState()
-    : mReentrantMonitor("WebMBufferedState")
-    , mLastBlockOffset(-1)
-  {
+  WebMBufferedState() : mReentrantMonitor("WebMBufferedState") {
     MOZ_COUNT_CTOR(WebMBufferedState);
   }
 
   void NotifyDataArrived(const unsigned char* aBuffer, uint32_t aLength, int64_t aOffset);
   void Reset();
   void UpdateIndex(const nsTArray<MediaByteRange>& aRanges, MediaResource* aResource);
   bool CalculateBufferedForRange(int64_t aStartOffset, int64_t aEndOffset,
                                  uint64_t* aStartTime, uint64_t* aEndTime);
 
   // Returns true if aTime is is present in mTimeMapping and sets aOffset to
   // the latest offset for which decoding can resume without data
   // dependencies to arrive at aTime.
   bool GetOffsetForTime(uint64_t aTime, int64_t* aOffset);
 
   // Returns end offset of init segment or -1 if none found.
   int64_t GetInitEndOffset();
-  // Returns the end offset of the last complete block or -1 if none found.
-  int64_t GetLastBlockOffset();
 
   // Returns start time
   bool GetStartTime(uint64_t *aTime);
 
   // Returns keyframe for time
   bool GetNextKeyframeTime(uint64_t aTime, uint64_t* aKeyframeTime);
 
 private:
   // Private destructor, to discourage deletion outside of Release():
   ~WebMBufferedState() {
     MOZ_COUNT_DTOR(WebMBufferedState);
   }
 
-  // Synchronizes access to the mTimeMapping array and mLastBlockOffset.
+  // Synchronizes access to the mTimeMapping array.
   ReentrantMonitor mReentrantMonitor;
 
   // Sorted (by offset) map of data offsets to timecodes.  Populated
   // on the main thread as data is received and parsed by WebMBufferedParsers.
   nsTArray<WebMTimeDataOffset> mTimeMapping;
-  // The last complete block parsed. -1 if not set.
-  int64_t mLastBlockOffset;
 
   // Sorted (by offset) live parser instances.  Main thread only.
   nsTArray<WebMBufferedParser> mRangeParsers;
 };
 
 } // namespace mozilla
 
 #endif
--- a/dom/media/webm/WebMDemuxer.cpp
+++ b/dom/media/webm/WebMDemuxer.cpp
@@ -11,17 +11,16 @@
 #include "WebMDemuxer.h"
 #include "WebMBufferedParser.h"
 #include "gfx2DGlue.h"
 #include "mozilla/Preferences.h"
 #include "mozilla/SharedThreadPool.h"
 #include "MediaDataDemuxer.h"
 #include "nsAutoRef.h"
 #include "NesteggPacketHolder.h"
-#include "XiphExtradata.h"
 
 #include <algorithm>
 #include <stdint.h>
 
 #define VPX_DONT_DEFINE_STDINT_TYPES
 #include "vpx/vp8dx.h"
 #include "vpx/vpx_decoder.h"
 
@@ -31,53 +30,50 @@ namespace mozilla {
 
 using namespace gfx;
 
 extern PRLogModuleInfo* gMediaDecoderLog;
 extern PRLogModuleInfo* gNesteggLog;
 
 // Functions for reading and seeking using WebMDemuxer required for
 // nestegg_io. The 'user data' passed to these functions is the
-// demuxer.
+// demuxer's MediaResourceIndex.
 static int webmdemux_read(void* aBuffer, size_t aLength, void* aUserData)
 {
   MOZ_ASSERT(aUserData);
+  MediaResourceIndex* resource =
+    reinterpret_cast<MediaResourceIndex*>(aUserData);
+  int64_t length = resource->GetLength();
   MOZ_ASSERT(aLength < UINT32_MAX);
-  WebMDemuxer* demuxer = reinterpret_cast<WebMDemuxer*>(aUserData);
   uint32_t count = aLength;
-  if (demuxer->IsMediaSource()) {
-    int64_t length = demuxer->GetEndDataOffset();
-    int64_t position = demuxer->GetResource()->Tell();
-    MOZ_ASSERT(position <= demuxer->GetResource()->GetLength());
-    MOZ_ASSERT(position <= length);
-    if (length >= 0 && count + position > length) {
-      count = length - position;
-    }
-    MOZ_ASSERT(count <= aLength);
+  if (length >= 0 && count + resource->Tell() > length) {
+    count = uint32_t(length - resource->Tell());
   }
+
   uint32_t bytes = 0;
-  nsresult rv =
-    demuxer->GetResource()->Read(static_cast<char*>(aBuffer), count, &bytes);
-  bool eof = bytes < aLength;
+  nsresult rv = resource->Read(static_cast<char*>(aBuffer), count, &bytes);
+  bool eof = !bytes;
   return NS_FAILED(rv) ? -1 : eof ? 0 : 1;
 }
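The restored `webmdemux_read` above clamps the requested byte count so a read never runs past the resource's known length. That clamping can be isolated as a small pure function (a sketch; the names are hypothetical, not nestegg or Gecko API):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: clamp a read request so position + count never exceeds the
// resource length. A negative length means "unknown", so no clamping.
static uint32_t ClampReadCount(uint32_t aRequested, int64_t aPosition,
                               int64_t aLength) {
  if (aLength >= 0 && int64_t(aRequested) + aPosition > aLength) {
    return uint32_t(aLength - aPosition);
  }
  return aRequested;
}
```

Note the restored code also changes the EOF signal from `bytes < aLength` to `!bytes`: a short read is no longer treated as end-of-stream, only a zero-byte read is.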
 
 static int webmdemux_seek(int64_t aOffset, int aWhence, void* aUserData)
 {
   MOZ_ASSERT(aUserData);
-  WebMDemuxer* demuxer = reinterpret_cast<WebMDemuxer*>(aUserData);
-  nsresult rv = demuxer->GetResource()->Seek(aWhence, aOffset);
+  MediaResourceIndex* resource =
+    reinterpret_cast<MediaResourceIndex*>(aUserData);
+  nsresult rv = resource->Seek(aWhence, aOffset);
   return NS_SUCCEEDED(rv) ? 0 : -1;
 }
 
 static int64_t webmdemux_tell(void* aUserData)
 {
   MOZ_ASSERT(aUserData);
-  WebMDemuxer* demuxer = reinterpret_cast<WebMDemuxer*>(aUserData);
-  return demuxer->GetResource()->Tell();
+  MediaResourceIndex* resource =
+    reinterpret_cast<MediaResourceIndex*>(aUserData);
+  return resource->Tell();
 }
 
 static void webmdemux_log(nestegg* aContext,
                           unsigned int aSeverity,
                           char const* aFormat, ...)
 {
   if (!MOZ_LOG_TEST(gNesteggLog, LogLevel::Debug)) {
     return;
@@ -114,37 +110,29 @@ static void webmdemux_log(nestegg* aCont
   PR_vsnprintf(msg+strlen(msg), sizeof(msg)-strlen(msg), aFormat, args);
   MOZ_LOG(gNesteggLog, LogLevel::Debug, (msg));
 
   va_end(args);
 }
 
 
 WebMDemuxer::WebMDemuxer(MediaResource* aResource)
-  : WebMDemuxer(aResource, false)
-{
-}
-
-WebMDemuxer::WebMDemuxer(MediaResource* aResource, bool aIsMediaSource)
   : mResource(aResource)
   , mBufferedState(nullptr)
   , mInitData(nullptr)
   , mContext(nullptr)
   , mVideoTrack(0)
   , mAudioTrack(0)
   , mSeekPreroll(0)
-  , mLastAudioFrameTime(0)
   , mLastVideoFrameTime(0)
   , mAudioCodec(-1)
   , mVideoCodec(-1)
   , mHasVideo(false)
   , mHasAudio(false)
   , mNeedReIndex(true)
-  , mLastWebMBlockOffset(-1)
-  , mIsMediaSource(aIsMediaSource)
 {
   if (!gNesteggLog) {
     gNesteggLog = PR_NewLogModule("Nestegg");
   }
 }
 
 WebMDemuxer::~WebMDemuxer()
 {
@@ -259,17 +247,17 @@ WebMDemuxer::Cleanup()
 
 nsresult
 WebMDemuxer::ReadMetadata()
 {
   nestegg_io io;
   io.read = webmdemux_read;
   io.seek = webmdemux_seek;
   io.tell = webmdemux_tell;
-  io.userdata = this;
+  io.userdata = &mResource;
   int64_t maxOffset = mBufferedState->GetInitEndOffset();
   if (maxOffset == -1) {
     maxOffset = mResource.GetLength();
   }
   int r = nestegg_init(&mContext, io, &webmdemux_log, maxOffset);
   if (r == -1) {
     return NS_ERROR_FAILURE;
   }
@@ -385,43 +373,30 @@ WebMDemuxer::ReadMetadata()
       mInfo.mAudio.mChannels = params.channels;
 
       unsigned int nheaders = 0;
       r = nestegg_track_codec_data_count(mContext, track, &nheaders);
       if (r == -1) {
         return NS_ERROR_FAILURE;
       }
 
-      nsAutoTArray<const unsigned char*,4> headers;
-      nsAutoTArray<size_t,4> headerLens;
       for (uint32_t header = 0; header < nheaders; ++header) {
         unsigned char* data = 0;
         size_t length = 0;
         r = nestegg_track_codec_data(mContext, track, header, &data, &length);
         if (r == -1) {
           return NS_ERROR_FAILURE;
         }
-        headers.AppendElement(data);
-        headerLens.AppendElement(length);
-      }
-
-      // Vorbis has 3 headers, convert to Xiph extradata format to send them to
-      // the demuxer.
-      // TODO: This is already the format WebM stores them in. Would be nice
-      // to avoid having libnestegg split them only for us to pack them again,
-      // but libnestegg does not give us an API to access this data directly.
-      if (nheaders > 1) {
-        if (!XiphHeadersToExtradata(mInfo.mAudio.mCodecSpecificConfig,
-                                    headers, headerLens)) {
-          return NS_ERROR_FAILURE;
+        // Vorbis has 3 headers; write length + data for each header.
+        if (nheaders > 1) {
+          uint8_t c[2];
+          BigEndian::writeUint16(&c[0], length);
+          mInfo.mAudio.mCodecSpecificConfig->AppendElements(&c[0], 2);
         }
-      }
-      else {
-        mInfo.mAudio.mCodecSpecificConfig->AppendElements(headers[0],
-                                                          headerLens[0]);
+        mInfo.mAudio.mCodecSpecificConfig->AppendElements(data, length);
       }
       uint64_t duration = 0;
       r = nestegg_duration(mContext, &duration);
       if (!r) {
         mInfo.mAudio.mDuration = media::TimeUnit::FromNanoseconds(duration).ToMicroseconds();
       }
     }
   }
@@ -449,22 +424,16 @@ WebMDemuxer::EnsureUpToDateIndex()
   if (NS_FAILED(rv) || !byteRanges.Length()) {
     return;
   }
   mBufferedState->UpdateIndex(byteRanges, resource);
   if (!mInitData && mBufferedState->GetInitEndOffset() != -1) {
     mInitData = mResource.MediaReadAt(0, mBufferedState->GetInitEndOffset());
   }
   mNeedReIndex = false;
-
-  if (!mIsMediaSource) {
-    return;
-  }
-  mLastWebMBlockOffset = mBufferedState->GetLastBlockOffset();
-  MOZ_ASSERT(mLastWebMBlockOffset <= mResource.GetLength());
 }
 
 void
 WebMDemuxer::NotifyDataArrived(uint32_t aLength, int64_t aOffset)
 {
   WEBM_DEBUG("length: %ld offset: %ld", aLength, aOffset);
   mNeedReIndex = true;
 }
@@ -480,20 +449,16 @@ UniquePtr<EncryptionInfo>
 WebMDemuxer::GetCrypto()
 {
   return nullptr;
 }
 
 bool
 WebMDemuxer::GetNextPacket(TrackInfo::TrackType aType, MediaRawDataQueue *aSamples)
 {
-  if (mIsMediaSource) {
-    EnsureUpToDateIndex();
-  }
-
   nsRefPtr<NesteggPacketHolder> holder(NextPacket(aType));
 
   if (!holder) {
     return false;
   }
 
   int r = 0;
   unsigned int count = 0;
@@ -719,20 +684,16 @@ WebMDemuxer::SeekInternal(const media::T
 
     r = nestegg_offset_seek(mContext, offset);
     if (r == -1) {
       WEBM_DEBUG("and nestegg_offset_seek to %" PRIu64 " failed", offset);
       return NS_ERROR_FAILURE;
     }
     WEBM_DEBUG("got offset from buffered state: %" PRIu64 "", offset);
   }
-
-  mLastAudioFrameTime = 0;
-  mLastVideoFrameTime = 0;
-
   return NS_OK;
 }
 
 media::TimeIntervals
 WebMDemuxer::GetBuffered()
 {
   EnsureUpToDateIndex();
   AutoPinned<MediaResource> resource(mResource.GetResource());
--- a/dom/media/webm/WebMDemuxer.h
+++ b/dom/media/webm/WebMDemuxer.h
@@ -49,20 +49,17 @@ private:
 };
 
 class WebMTrackDemuxer;
 
 class WebMDemuxer : public MediaDataDemuxer
 {
 public:
   explicit WebMDemuxer(MediaResource* aResource);
-  // Indicate if the WebMDemuxer is to be used with MediaSource. In which
-  // case the demuxer will stop reads to the last known complete block.
-  WebMDemuxer(MediaResource* aResource, bool aIsMediaSource);
-  
+
   nsRefPtr<InitPromise> Init() override;
 
   already_AddRefed<MediaDataDemuxer> Clone() const override;
 
   bool HasTrackType(TrackInfo::TrackType aType) const override;
 
   uint32_t GetNumberTracks(TrackInfo::TrackType aType) const override;
 
@@ -83,32 +80,16 @@ public:
   nsresult Reset();
 
   // Pushes a packet to the front of the audio packet queue.
   virtual void PushAudioPacket(NesteggPacketHolder* aItem);
 
   // Pushes a packet to the front of the video packet queue.
   virtual void PushVideoPacket(NesteggPacketHolder* aItem);
 
-  // Public accessor for nestegg callbacks
-  MediaResourceIndex* GetResource()
-  {
-    return &mResource;
-  }
-
-  int64_t GetEndDataOffset() const
-  {
-    return (!mIsMediaSource || mLastWebMBlockOffset < 0)
-      ? mResource.GetLength() : mLastWebMBlockOffset;
-  }
-  int64_t IsMediaSource() const
-  {
-    return mIsMediaSource;
-  }
-
 private:
   friend class WebMTrackDemuxer;
 
   ~WebMDemuxer();
   void Cleanup();
   nsresult InitBufferedState();
   nsresult ReadMetadata();
   void NotifyDataArrived(uint32_t aLength, int64_t aOffset) override;
@@ -166,22 +147,16 @@ private:
   int mAudioCodec;
   // Codec ID of video track
   int mVideoCodec;
 
   // Booleans to indicate if we have audio and/or video data
   bool mHasVideo;
   bool mHasAudio;
   bool mNeedReIndex;
-
-  // The last complete block parsed by the WebMBufferedState. -1 if not set.
-  // We cache those values rather than retrieving them for performance reasons
-  // as nestegg only performs 1-byte read at a time.
-  int64_t mLastWebMBlockOffset;
-  const bool mIsMediaSource;
 };
 
 class WebMTrackDemuxer : public MediaTrackDemuxer
 {
 public:
   WebMTrackDemuxer(WebMDemuxer* aParent,
                   TrackInfo::TrackType aType,
                   uint32_t aTrackNumber);
--- a/dom/tests/mochitest/general/test_interfaces.html
+++ b/dom/tests/mochitest/general/test_interfaces.html
@@ -729,17 +729,17 @@ var interfaceNamesInGlobalScope =
     {name: "MediaKeyStatusMap", android: false},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaList",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaQueryList",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaRecorder",
 // IMPORTANT: Do not change this list without review from a DOM peer!
-    "MediaSource",
+    {name: "MediaSource", linux: false, release: false},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaStream",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaStreamAudioDestinationNode",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaStreamAudioSourceNode",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "MediaStreamEvent",
@@ -983,19 +983,19 @@ var interfaceNamesInGlobalScope =
     "ShadowRoot", // Bogus, but the test harness forces it on.  See bug 1159768.
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "SharedWorker",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "SimpleGestureEvent",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "SimpleTest", xbl: false},
 // IMPORTANT: Do not change this list without review from a DOM peer!
-    "SourceBuffer",
-// IMPORTANT: Do not change this list without review from a DOM peer!
-    "SourceBufferList",
+    {name: "SourceBuffer", linux: false, release: false},
+// IMPORTANT: Do not change this list without review from a DOM peer!
+    {name: "SourceBufferList", linux: false, release: false},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "SpeechSynthesisErrorEvent", b2g: true},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "SpeechSynthesisEvent", b2g: true},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "SpeechSynthesis", b2g: true},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "SpeechSynthesisUtterance", b2g: true},
@@ -1341,17 +1341,17 @@ var interfaceNamesInGlobalScope =
     "URL",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "URLSearchParams",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "UserProximityEvent",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "ValidityState",
 // IMPORTANT: Do not change this list without review from a DOM peer!
-    "VideoPlaybackQuality",
+    {name: "VideoPlaybackQuality", linux: false, release: false},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     "VideoStreamTrack",
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "VRDevice", disabled: true},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "VRPositionState", disabled: true},
 // IMPORTANT: Do not change this list without review from a DOM peer!
     {name: "VRFieldOfView", disabled: true},
--- a/modules/libpref/init/all.js
+++ b/modules/libpref/init/all.js
@@ -463,25 +463,27 @@ pref("media.getusermedia.audiocapture.en
 // TextTrack support
 pref("media.webvtt.enabled", true);
 pref("media.webvtt.regions.enabled", false);
 
 // AudioTrack and VideoTrack support
 pref("media.track.enabled", false);
 
 // Whether to enable MediaSource support.
+// We want to enable MediaSource on non-release builds, and on Windows and
+// Mac release builds (where it is restricted to YouTube). We don't enable
+// other configurations because code for those platforms isn't ready yet.
+#if defined(XP_WIN) || defined(XP_MACOSX) || defined(MOZ_WIDGET_GONK)
 pref("media.mediasource.enabled", true);
+#else
+pref("media.mediasource.enabled", false);
+#endif
 
 pref("media.mediasource.mp4.enabled", true);
-
-#if defined(XP_WIN) || defined(XP_MACOSX) || defined(MOZ_WIDGET_GONK) || defined(MOZ_WIDGET_ANDROID)
 pref("media.mediasource.webm.enabled", false);
-#else
-pref("media.mediasource.webm.enabled", true);
-#endif
 
 // Enable new MediaSource architecture.
 pref("media.mediasource.format-reader", true);
 
 // Enable new MediaFormatReader architecture for webm in MSE
 pref("media.mediasource.format-reader.webm", false);
 // Enable new MediaFormatReader architecture for plain webm.
 pref("media.format-reader.webm", true);
--- a/testing/web-platform/meta/media-source/mediasource-append-buffer.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-append-buffer.html.ini
@@ -1,5 +1,3 @@
 [mediasource-append-buffer.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
-  disabled:
-    if (os == "win") and (version == "6.1.7601"): https://bugzilla.mozilla.org/show_bug.cgi?id=1191138
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-a-bitrate.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-a-bitrate.html.ini
@@ -1,3 +1,9 @@
 [mediasource-config-change-webm-a-bitrate.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  disabled:
+    if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+    if (os == "win") and (version == "5.1.2600"): https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+  [Tests webm audio-only bitrate changes.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-audio-bitrate.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-audio-bitrate.html.ini
@@ -1,3 +1,6 @@
 [mediasource-config-change-webm-av-audio-bitrate.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  [Tests webm audio bitrate changes in multiplexed content.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-framesize.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-framesize.html.ini
@@ -1,3 +1,9 @@
 [mediasource-config-change-webm-av-framesize.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  disabled:
+    if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+    if (os == "win") and (version == "5.1.2600"): https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+  [Tests webm frame size changes in multiplexed content.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-video-bitrate.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-av-video-bitrate.html.ini
@@ -1,3 +1,6 @@
 [mediasource-config-change-webm-av-video-bitrate.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  [Tests webm video bitrate changes in multiplexed content.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-bitrate.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-bitrate.html.ini
@@ -1,3 +1,9 @@
 [mediasource-config-change-webm-v-bitrate.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  disabled:
+    if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+    if (os == "win") and (version == "5.1.2600"): https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+  [Tests webm video-only bitrate changes.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framerate.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framerate.html.ini
@@ -1,3 +1,9 @@
 [mediasource-config-change-webm-v-framerate.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  disabled:
+    if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+    if (os == "win") and (version == "5.1.2600"): https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+  [Tests webm video-only frame rate changes.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framesize.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-config-change-webm-v-framesize.html.ini
@@ -1,3 +1,9 @@
 [mediasource-config-change-webm-v-framesize.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  disabled:
+    if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+    if (os == "win") and (version == "5.1.2600"): https://bugzilla.mozilla.org/show_bug.cgi?id=1186261
+  [Tests webm video-only frame size changes.]
+    expected: FAIL
+
--- a/testing/web-platform/meta/media-source/mediasource-sourcebuffer-mode.html.ini
+++ b/testing/web-platform/meta/media-source/mediasource-sourcebuffer-mode.html.ini
@@ -1,3 +1,7 @@
 [mediasource-sourcebuffer-mode.html]
   type: testharness
   prefs: [media.mediasource.enabled:true]
+  [Test setting SourceBuffer.mode triggers parent MediaSource 'ended' to 'open' transition.] # Bug 1192165
+    expected:
+      if os == "linux": FAIL
+      if (os == "win") and (version == "5.1.2600"): FAIL