Bug 1481998 - Make mozilla::Hash{Map,Set}'s entry storage allocation lazy. r=luke,sfink
author Nicholas Nethercote <nnethercote@mozilla.com>
Fri, 10 Aug 2018 18:00:29 +1000
changeset 486278 ad30dc53e38ec41adc99f81fd8a5102ecf7775fd
parent 486277 223884f0ad76f8224f046311dd016b83fbb3aa6e
child 486279 13ec6b447cc53a5acec6063a892e824cb3d7c7b8
push id 9719
push user ffxbld-merge
push date Fri, 24 Aug 2018 17:49:46 +0000
treeherder mozilla-beta@719ec98fba77
reviewers luke, sfink
bugs 1481998
milestone 63.0a1
Bug 1481998 - Make mozilla::Hash{Map,Set}'s entry storage allocation lazy. r=luke,sfink

Entry storage allocation now occurs on the first lookupForAdd()/put()/putNew().
This removes the need for init() and initialized(), and matches how
PLDHashTable/nsTHashtable work. It also removes the need for init() functions
in a lot of types that are built on top of mozilla::Hash{Map,Set}.

Pros:

- No need for init() calls and subsequent checks.

- No memory allocated for empty tables, which are not that uncommon.

Cons:

- An extra branch in lookup() and lookupForAdd(), but not in put()/putNew(),
  because the existing checkOverloaded() can handle it.

Specifics:

- Construction can now take a length parameter.

- init() is removed. Explicit length-setting, when necessary, now occurs in
  the constructors.

- initialized() is removed.

- capacity() now returns zero when the entry storage is absent.

- lookupForAdd() is no longer `const`, because it can instantiate the
  storage, which requires modifications.

- lookupForAdd() can now return an invalid AddPtr in two cases:

  - old: hashing failure (due to OOM in the hasher)

  - new: OOM while instantiating entry storage

  The existing failure handling paths for the old case work for the new case.

- clear(), finish(), and clearAndShrink() are replaced by clear(), compact(),
  and reserve(). The old compactIfUnderloaded() is also removed.

- Capacity computation code is now in its own functions, bestCapacity() and
  hashShift(). setTableSizeLog2() is removed.

- uint32_t is used throughout for capacities, instead of size_t, for
  consistency with other similar values.

- changeTableSize() now takes a capacity instead of a deltaLog2, and it can
  now handle !mTable.

Measurements:

- Total source code size is reduced by over 900 lines. Also, lots of existing
  lines got shorter (e.g. two checks were reduced to one).

- Executable size barely changed, down by 2 KiB on Linux64. The extra
  branches are compensated for by the lack of init() calls.

- Speed changed negligibly. The instruction count for Bench_Cpp_MozHash
  increased from 2.84 billion to 2.89 billion but any execution time change
  was well below noise.
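To illustrate what this means for callers (a minimal sketch, not part of the patch; `MyMap` and `AddPair` are hypothetical names, and the default hasher/alloc-policy template parameters are assumed):

```cpp
#include <stdint.h>

#include "mozilla/HashTable.h"

using MyMap = mozilla::HashMap<uint32_t, uint32_t>;  // hypothetical alias

// Old pattern (pre-patch): two-step construction, every caller checked init():
//
//   MyMap map;
//   if (!map.init(16))
//     return false;
//
// New pattern: construction is infallible and may take a length hint, e.g.
// |MyMap map(16);|. Entry storage is allocated lazily on the first
// lookupForAdd()/put()/putNew().
static bool
AddPair(MyMap& map, uint32_t key, uint32_t value)
{
    MyMap::AddPtr p = map.lookupForAdd(key);
    if (p) {
        p->value() = value;  // key already present: update in place
        return true;
    }
    // |p| tests false both when |key| is absent and when lazily allocating
    // the entry storage hit OOM; in the latter case add() fails, so the
    // pre-existing failure-handling path covers the new failure mode.
    return map.add(p, key, value);
}
```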
devtools/shared/heapsnapshot/HeapSnapshot.cpp
devtools/shared/heapsnapshot/HeapSnapshot.h
devtools/shared/heapsnapshot/tests/gtest/DoesCrossCompartmentBoundaries.cpp
devtools/shared/heapsnapshot/tests/gtest/DoesntCrossCompartmentBoundaries.cpp
dom/base/CustomElementRegistry.cpp
dom/plugins/base/nsJSNPRuntime.cpp
js/ipc/JavaScriptChild.cpp
js/ipc/JavaScriptParent.cpp
js/ipc/JavaScriptParent.h
js/ipc/JavaScriptShared.cpp
js/ipc/JavaScriptShared.h
js/ipc/WrapperOwner.cpp
js/ipc/WrapperOwner.h
js/public/GCHashTable.h
js/public/MemoryMetrics.h
js/public/UbiNodeBreadthFirst.h
js/public/UbiNodeCensus.h
js/public/UbiNodeDominatorTree.h
js/public/UbiNodePostOrder.h
js/public/UbiNodeShortestPaths.h
js/src/builtin/JSON.cpp
js/src/builtin/ModuleObject.cpp
js/src/builtin/ModuleObject.h
js/src/builtin/Promise.cpp
js/src/builtin/ReflectParse.cpp
js/src/builtin/TestingFunctions.cpp
js/src/builtin/TypedObject.cpp
js/src/builtin/WeakMapObject-inl.h
js/src/builtin/intl/SharedIntlData.cpp
js/src/ctypes/CTypes.cpp
js/src/ds/Bitmap.cpp
js/src/ds/Bitmap.h
js/src/ds/InlineTable.h
js/src/frontend/BinSourceRuntimeSupport.h
js/src/frontend/BinToken.cpp
js/src/frontend/BinTokenReaderMultipart.cpp
js/src/frontend/BytecodeCompiler.cpp
js/src/frontend/ParseContext.h
js/src/fuzz-tests/testBinASTReader.cpp
js/src/gc/GC.cpp
js/src/gc/HashUtil.h
js/src/gc/Marking.cpp
js/src/gc/Nursery.cpp
js/src/gc/Nursery.h
js/src/gc/NurseryAwareHashMap.h
js/src/gc/RootMarking.cpp
js/src/gc/StoreBuffer.cpp
js/src/gc/StoreBuffer.h
js/src/gc/Verifier.cpp
js/src/gc/WeakMap-inl.h
js/src/gc/WeakMap.cpp
js/src/gc/WeakMap.h
js/src/gc/WeakMapPtr.cpp
js/src/gc/Zone.cpp
js/src/jit/CodeGenerator.cpp
js/src/jit/ExecutableAllocator.cpp
js/src/jit/Ion.cpp
js/src/jit/IonAnalysis.cpp
js/src/jit/JitRealm.h
js/src/jit/LIR.cpp
js/src/jit/LIR.h
js/src/jit/LoopUnroller.cpp
js/src/jit/OptimizationTracking.cpp
js/src/jit/OptimizationTracking.h
js/src/jit/RegisterAllocator.cpp
js/src/jit/Snapshots.cpp
js/src/jit/Snapshots.h
js/src/jit/ValueNumbering.cpp
js/src/jit/ValueNumbering.h
js/src/jit/WasmBCE.cpp
js/src/jit/arm/Simulator-arm.cpp
js/src/jit/arm/Simulator-arm.h
js/src/jit/arm/Trampoline-arm.cpp
js/src/jit/arm64/Trampoline-arm64.cpp
js/src/jit/shared/CodeGenerator-shared.cpp
js/src/jit/x64/Trampoline-x64.cpp
js/src/jit/x86-shared/MacroAssembler-x86-shared.cpp
js/src/jit/x86/Trampoline-x86.cpp
js/src/jsapi-tests/testBinASTReader.cpp
js/src/jsapi-tests/testGCExactRooting.cpp
js/src/jsapi-tests/testGCGrayMarking.cpp
js/src/jsapi-tests/testGCMarking.cpp
js/src/jsapi-tests/testGCWeakCache.cpp
js/src/jsapi-tests/testHashTable.cpp
js/src/jsapi-tests/testJitMinimalFunc.h
js/src/jsapi-tests/testUbiNode.cpp
js/src/jsapi.cpp
js/src/proxy/ScriptedProxyHandler.cpp
js/src/shell/js.cpp
js/src/vm/ArrayBufferObject.cpp
js/src/vm/Caches.cpp
js/src/vm/Caches.h
js/src/vm/CodeCoverage.cpp
js/src/vm/Compartment.cpp
js/src/vm/Compartment.h
js/src/vm/Debugger.cpp
js/src/vm/Debugger.h
js/src/vm/DebuggerMemory.cpp
js/src/vm/EnvironmentObject.cpp
js/src/vm/EnvironmentObject.h
js/src/vm/GeckoProfiler.cpp
js/src/vm/GeckoProfiler.h
js/src/vm/Iteration.cpp
js/src/vm/JSAtom.cpp
js/src/vm/JSScript.cpp
js/src/vm/MemoryMetrics.cpp
js/src/vm/ObjectGroup.cpp
js/src/vm/Realm.cpp
js/src/vm/RegExpObject.cpp
js/src/vm/RegExpShared.h
js/src/vm/Runtime.cpp
js/src/vm/SavedStacks.cpp
js/src/vm/SavedStacks.h
js/src/vm/Shape.cpp
js/src/vm/SharedImmutableStringsCache-inl.h
js/src/vm/SharedImmutableStringsCache.h
js/src/vm/Stack.cpp
js/src/vm/StructuredClone.cpp
js/src/vm/TraceLogging.cpp
js/src/vm/UbiNode.cpp
js/src/vm/UbiNodeCensus.cpp
js/src/vm/UbiNodeShortestPaths.cpp
js/src/vm/Xdr.cpp
js/src/vm/Xdr.h
js/src/wasm/AsmJS.cpp
js/src/wasm/WasmAST.h
js/src/wasm/WasmBuiltins.cpp
js/src/wasm/WasmDebug.cpp
js/src/wasm/WasmGenerator.cpp
js/src/wasm/WasmInstance.cpp
js/src/wasm/WasmIonCompile.cpp
js/src/wasm/WasmJS.cpp
js/src/wasm/WasmTable.cpp
js/src/wasm/WasmTextToBinary.cpp
js/src/wasm/WasmValidate.cpp
js/xpconnect/src/XPCMaps.h
memory/replace/dmd/DMD.cpp
mfbt/HashTable.h
xpcom/rust/gtest/bench-collections/Bench.cpp
--- a/devtools/shared/heapsnapshot/HeapSnapshot.cpp
+++ b/devtools/shared/heapsnapshot/HeapSnapshot.cpp
@@ -401,19 +401,16 @@ readSizeOfNextMessage(ZeroCopyInputStrea
   MOZ_ASSERT(sizep);
   CodedInputStream codedStream(&stream);
   return codedStream.ReadVarint32(sizep) && *sizep > 0;
 }
 
 bool
 HeapSnapshot::init(JSContext* cx, const uint8_t* buffer, uint32_t size)
 {
-  if (!nodes.init() || !frames.init())
-    return false;
-
   ArrayInputStream stream(buffer, size);
   GzipInputStream gzipStream(&stream);
   uint32_t sizeOfMessage = 0;
 
   // First is the metadata.
 
   protobuf::Metadata metadata;
   if (NS_WARN_IF(!readSizeOfNextMessage(gzipStream, &sizeOfMessage)))
@@ -434,18 +431,16 @@ HeapSnapshot::init(JSContext* cx, const 
   // Although the id is optional in the protobuf format for future proofing, we
   // can't currently do anything without it.
   if (NS_WARN_IF(!root.has_id()))
     return false;
   rootId = root.id();
 
   // The set of all node ids we've found edges pointing to.
   NodeIdSet edgeReferents(cx);
-  if (NS_WARN_IF(!edgeReferents.init()))
-    return false;
 
   if (NS_WARN_IF(!saveNode(root, edgeReferents)))
     return false;
 
   // Finally, the rest of the nodes in the core dump.
 
   // Test for the end of the stream. The protobuf library gives no way to tell
   // the difference between an underlying read error and the stream being
@@ -473,20 +468,16 @@ HeapSnapshot::init(JSContext* cx, const 
 
 /*** Heap Snapshot Analyses ***********************************************************************/
 
 void
 HeapSnapshot::TakeCensus(JSContext* cx, JS::HandleObject options,
                          JS::MutableHandleValue rval, ErrorResult& rv)
 {
   JS::ubi::Census census(cx);
-  if (NS_WARN_IF(!census.init())) {
-    rv.Throw(NS_ERROR_OUT_OF_MEMORY);
-    return;
-  }
 
   JS::ubi::CountTypePtr rootType;
   if (NS_WARN_IF(!JS::ubi::ParseCensusOptions(cx,  census, options, rootType))) {
     rv.Throw(NS_ERROR_UNEXPECTED);
     return;
   }
 
   JS::ubi::RootedCount rootCount(cx, rootType->makeCount());
@@ -496,20 +487,16 @@ HeapSnapshot::TakeCensus(JSContext* cx, 
   }
 
   JS::ubi::CensusHandler handler(census, rootCount, GetCurrentThreadDebuggerMallocSizeOf());
 
   {
     JS::AutoCheckCannotGC nogc;
 
     JS::ubi::CensusTraversal traversal(cx, handler, nogc);
-    if (NS_WARN_IF(!traversal.init())) {
-      rv.Throw(NS_ERROR_OUT_OF_MEMORY);
-      return;
-    }
 
     if (NS_WARN_IF(!traversal.addStart(getRoot()))) {
       rv.Throw(NS_ERROR_OUT_OF_MEMORY);
       return;
     }
 
     if (NS_WARN_IF(!traversal.traverse())) {
       rv.Throw(NS_ERROR_UNEXPECTED);
@@ -605,20 +592,16 @@ HeapSnapshot::ComputeShortestPaths(JSCon
     rv.Throw(NS_ERROR_INVALID_ARG);
     return;
   }
 
   // Aggregate the targets into a set and make sure that they exist in the heap
   // snapshot.
 
   JS::ubi::NodeSet targetsSet;
-  if (NS_WARN_IF(!targetsSet.init())) {
-    rv.Throw(NS_ERROR_OUT_OF_MEMORY);
-    return;
-  }
 
   for (const auto& target : targets) {
     Maybe<JS::ubi::Node> targetNode = getNodeById(target);
     if (NS_WARN_IF(targetNode.isNothing())) {
       rv.Throw(NS_ERROR_INVALID_ARG);
       return;
     }
 
@@ -717,19 +700,16 @@ HeapSnapshot::ComputeShortestPaths(JSCon
 /*** Saving Heap Snapshots ************************************************************************/
 
 // If we are only taking a snapshot of the heap affected by the given set of
 // globals, find the set of compartments the globals are allocated
 // within. Returns false on OOM failure.
 static bool
 PopulateCompartmentsWithGlobals(CompartmentSet& compartments, AutoObjectVector& globals)
 {
-  if (!compartments.init())
-    return false;
-
   unsigned length = globals.length();
   for (unsigned i = 0; i < length; i++) {
     if (!compartments.put(GetObjectCompartment(globals[i])))
       return false;
   }
 
   return true;
 }
@@ -763,17 +743,17 @@ AddGlobalsAsRoots(AutoObjectVector& glob
 static bool
 EstablishBoundaries(JSContext* cx,
                     ErrorResult& rv,
                     const HeapSnapshotBoundaries& boundaries,
                     ubi::RootList& roots,
                     CompartmentSet& compartments)
 {
   MOZ_ASSERT(!roots.initialized());
-  MOZ_ASSERT(!compartments.initialized());
+  MOZ_ASSERT(compartments.empty());
 
   bool foundBoundaryProperty = false;
 
   if (boundaries.mRuntime.WasPassed()) {
     foundBoundaryProperty = true;
 
     if (!boundaries.mRuntime.Value()) {
       rv.Throw(NS_ERROR_INVALID_ARG);
@@ -846,18 +826,16 @@ EstablishBoundaries(JSContext* cx,
   }
 
   if (!foundBoundaryProperty) {
     rv.Throw(NS_ERROR_INVALID_ARG);
     return false;
   }
 
   MOZ_ASSERT(roots.initialized());
-  MOZ_ASSERT_IF(boundaries.mDebugger.WasPassed(), compartments.initialized());
-  MOZ_ASSERT_IF(boundaries.mGlobals.WasPassed(), compartments.initialized());
   return true;
 }
 
 
 // A variant covering all the various two-byte strings that we can get from the
 // ubi::Node API.
 class TwoByteString : public Variant<JSAtom*, const char16_t*, JS::ubi::EdgeName>
 {
@@ -1252,22 +1230,16 @@ public:
     , wantNames(wantNames)
     , framesAlreadySerialized(cx)
     , twoByteStringsAlreadySerialized(cx)
     , oneByteStringsAlreadySerialized(cx)
     , stream(stream)
     , compartments(compartments)
   { }
 
-  bool init() {
-    return framesAlreadySerialized.init() &&
-           twoByteStringsAlreadySerialized.init() &&
-           oneByteStringsAlreadySerialized.init();
-  }
-
   ~StreamWriter() override { }
 
   bool writeMetadata(uint64_t timestamp) final {
     protobuf::Metadata metadata;
     metadata.set_timestamp(timestamp);
     return writeMessage(metadata);
   }
 
@@ -1435,18 +1407,16 @@ WriteHeapGraph(JSContext* cx,
     return false;
   }
 
   // Walk the heap graph starting from the given node and serialize it into the
   // core dump.
 
   HeapSnapshotHandler handler(writer, compartments);
   HeapSnapshotHandler::Traversal traversal(cx, handler, noGC);
-  if (!traversal.init())
-    return false;
   traversal.wantNames = wantNames;
 
   bool ok = traversal.addStartVisited(node) &&
             traversal.traverse();
 
   if (ok) {
     outNodeCount = handler.nodeCount;
     outEdgeCount = handler.edgeCount;
@@ -1616,34 +1586,30 @@ ChromeUtils::SaveHeapSnapshotShared(Glob
 
   {
     Maybe<AutoCheckCannotGC> maybeNoGC;
     ubi::RootList rootList(cx, maybeNoGC, wantNames);
     if (!EstablishBoundaries(cx, rv, boundaries, rootList, compartments))
       return;
 
     StreamWriter writer(cx, gzipStream, wantNames,
-                        compartments.initialized() ? &compartments : nullptr);
-    if (NS_WARN_IF(!writer.init())) {
-      rv.Throw(NS_ERROR_OUT_OF_MEMORY);
-      return;
-    }
+                        !compartments.empty() ? &compartments : nullptr);
 
     MOZ_ASSERT(maybeNoGC.isSome());
     ubi::Node roots(&rootList);
 
     // Serialize the initial heap snapshot metadata to the core dump.
     if (!writer.writeMetadata(PR_Now()) ||
         // Serialize the heap graph to the core dump, starting from our list of
         // roots.
         !WriteHeapGraph(cx,
                         roots,
                         writer,
                         wantNames,
-                        compartments.initialized() ? &compartments : nullptr,
+                        !compartments.empty() ? &compartments : nullptr,
                         maybeNoGC.ref(),
                         nodeCount,
                         edgeCount))
     {
       rv.Throw(zeroCopyStream.failed()
                ? zeroCopyStream.result()
                : NS_ERROR_UNEXPECTED);
       return;
--- a/devtools/shared/heapsnapshot/HeapSnapshot.h
+++ b/devtools/shared/heapsnapshot/HeapSnapshot.h
@@ -132,17 +132,16 @@ public:
   virtual JSObject* WrapObject(JSContext* aCx,
                                JS::Handle<JSObject*> aGivenProto) override;
 
   const char16_t* borrowUniqueString(const char16_t* duplicateString,
                                      size_t length);
 
   // Get the root node of this heap snapshot's graph.
   JS::ubi::Node getRoot() {
-    MOZ_ASSERT(nodes.initialized());
     auto p = nodes.lookup(rootId);
     MOZ_ASSERT(p);
     const DeserializedNode& node = *p;
     return JS::ubi::Node(const_cast<DeserializedNode*>(&node));
   }
 
   Maybe<JS::ubi::Node> getNodeById(JS::ubi::Node::Id nodeId) {
     auto p = nodes.lookup(nodeId);
--- a/devtools/shared/heapsnapshot/tests/gtest/DoesCrossCompartmentBoundaries.cpp
+++ b/devtools/shared/heapsnapshot/tests/gtest/DoesCrossCompartmentBoundaries.cpp
@@ -22,17 +22,16 @@ DEF_TEST(DoesCrossCompartmentBoundaries,
       ASSERT_TRUE(JS::InitRealmStandardClasses(cx));
       newCompartment = js::GetContextCompartment(cx);
     }
     ASSERT_TRUE(newCompartment);
     ASSERT_NE(newCompartment, compartment);
 
     // Our set of target compartments is both the old and new compartments.
     JS::CompartmentSet targetCompartments;
-    ASSERT_TRUE(targetCompartments.init());
     ASSERT_TRUE(targetCompartments.put(compartment));
     ASSERT_TRUE(targetCompartments.put(newCompartment));
 
     FakeNode nodeA;
     FakeNode nodeB;
     FakeNode nodeC;
     FakeNode nodeD;
 
--- a/devtools/shared/heapsnapshot/tests/gtest/DoesntCrossCompartmentBoundaries.cpp
+++ b/devtools/shared/heapsnapshot/tests/gtest/DoesntCrossCompartmentBoundaries.cpp
@@ -23,17 +23,16 @@ DEF_TEST(DoesntCrossCompartmentBoundarie
       newCompartment = js::GetContextCompartment(cx);
     }
     ASSERT_TRUE(newCompartment);
     ASSERT_NE(newCompartment, compartment);
 
     // Our set of target compartments is only the pre-existing compartment and
     // does not include the new compartment.
     JS::CompartmentSet targetCompartments;
-    ASSERT_TRUE(targetCompartments.init());
     ASSERT_TRUE(targetCompartments.put(compartment));
 
     FakeNode nodeA;
     FakeNode nodeB;
     FakeNode nodeC;
 
     nodeA.compartment = compartment;
     nodeB.compartment = nullptr;
--- a/dom/base/CustomElementRegistry.cpp
+++ b/dom/base/CustomElementRegistry.cpp
@@ -280,17 +280,16 @@ NS_INTERFACE_MAP_BEGIN_CYCLE_COLLECTION(
   NS_INTERFACE_MAP_ENTRY(nsISupports)
 NS_INTERFACE_MAP_END
 
 CustomElementRegistry::CustomElementRegistry(nsPIDOMWindowInner* aWindow)
  : mWindow(aWindow)
  , mIsCustomDefinitionRunning(false)
 {
   MOZ_ASSERT(aWindow);
-  MOZ_ALWAYS_TRUE(mConstructors.init());
 
   mozilla::HoldJSObjects(this);
 }
 
 CustomElementRegistry::~CustomElementRegistry()
 {
   mozilla::DropJSObjects(this);
 }
--- a/dom/plugins/base/nsJSNPRuntime.cpp
+++ b/dom/plugins/base/nsJSNPRuntime.cpp
@@ -364,22 +364,16 @@ CreateJSObjWrapperTable()
   MOZ_ASSERT(!sJSObjWrappersAccessible);
   MOZ_ASSERT(!sJSObjWrappers);
 
   if (!RegisterGCCallbacks()) {
     return false;
   }
 
   sJSObjWrappers = MakeUnique<JSObjWrapperTable>();
-  if (!sJSObjWrappers->init(16)) {
-    sJSObjWrappers = nullptr;
-    NS_ERROR("Error initializing sJSObjWrappers!");
-    return false;
-  }
-
   sJSObjWrappersAccessible = true;
   return true;
 }
 
 static void
 DestroyJSObjWrapperTable()
 {
   MOZ_ASSERT(sJSObjWrappersAccessible);
--- a/js/ipc/JavaScriptChild.cpp
+++ b/js/ipc/JavaScriptChild.cpp
@@ -37,21 +37,16 @@ JavaScriptChild::~JavaScriptChild()
     JSContext* cx = dom::danger::GetJSContext();
     JS_RemoveWeakPointerZonesCallback(cx, UpdateChildWeakPointersBeforeSweepingZoneGroup);
     JS_RemoveExtraGCRootsTracer(cx, TraceChild, this);
 }
 
 bool
 JavaScriptChild::init()
 {
-    if (!WrapperOwner::init())
-        return false;
-    if (!WrapperAnswer::init())
-        return false;
-
     JSContext* cx = dom::danger::GetJSContext();
     JS_AddWeakPointerZonesCallback(cx, UpdateChildWeakPointersBeforeSweepingZoneGroup, this);
     JS_AddExtraGCRootsTracer(cx, TraceChild, this);
     return true;
 }
 
 void
 JavaScriptChild::trace(JSTracer* trc)
--- a/js/ipc/JavaScriptParent.cpp
+++ b/js/ipc/JavaScriptParent.cpp
@@ -26,31 +26,27 @@ using namespace mozilla::jsipc;
 using namespace mozilla::dom;
 
 static void
 TraceParent(JSTracer* trc, void* data)
 {
     static_cast<JavaScriptParent*>(data)->trace(trc);
 }
 
+JavaScriptParent::JavaScriptParent()
+  : savedNextCPOWNumber_(1)
+{
+    JS_AddExtraGCRootsTracer(danger::GetJSContext(), TraceParent, this);
+}
+
 JavaScriptParent::~JavaScriptParent()
 {
     JS_RemoveExtraGCRootsTracer(danger::GetJSContext(), TraceParent, this);
 }
 
-bool
-JavaScriptParent::init()
-{
-    if (!WrapperOwner::init())
-        return false;
-
-    JS_AddExtraGCRootsTracer(danger::GetJSContext(), TraceParent, this);
-    return true;
-}
-
 static bool
 ForbidUnsafeBrowserCPOWs()
 {
     static bool result;
     static bool cached = false;
     if (!cached) {
         cached = true;
         Preferences::AddBoolVarCache(&result, "dom.ipc.cpows.forbid-unsafe-from-browser", false);
@@ -146,22 +142,17 @@ JavaScriptParent::afterProcessTask()
     MOZ_ASSERT(nextCPOWNumber_ > 0);
     if (active())
         Unused << SendDropTemporaryStrongReferences(nextCPOWNumber_ - 1);
 }
 
 PJavaScriptParent*
 mozilla::jsipc::NewJavaScriptParent()
 {
-    JavaScriptParent* parent = new JavaScriptParent();
-    if (!parent->init()) {
-        delete parent;
-        return nullptr;
-    }
-    return parent;
+    return new JavaScriptParent();
 }
 
 void
 mozilla::jsipc::ReleaseJavaScriptParent(PJavaScriptParent* parent)
 {
     static_cast<JavaScriptParent*>(parent)->decref();
 }
 
--- a/js/ipc/JavaScriptParent.h
+++ b/js/ipc/JavaScriptParent.h
@@ -12,20 +12,19 @@
 #include "mozilla/jsipc/PJavaScriptParent.h"
 
 namespace mozilla {
 namespace jsipc {
 
 class JavaScriptParent : public JavaScriptBase<PJavaScriptParent>
 {
   public:
-    JavaScriptParent() : savedNextCPOWNumber_(1) {}
+    JavaScriptParent();
     virtual ~JavaScriptParent();
 
-    bool init();
     void trace(JSTracer* trc);
 
     void drop(JSObject* obj);
 
     bool allowMessage(JSContext* cx) override;
     void afterProcessTask();
 
   protected:
--- a/js/ipc/JavaScriptShared.cpp
+++ b/js/ipc/JavaScriptShared.cpp
@@ -15,28 +15,20 @@
 #include "mozilla/Preferences.h"
 
 using namespace js;
 using namespace JS;
 using namespace mozilla;
 using namespace mozilla::jsipc;
 
 IdToObjectMap::IdToObjectMap()
-  : table_(SystemAllocPolicy())
+  : table_(SystemAllocPolicy(), 32)
 {
 }
 
-bool
-IdToObjectMap::init()
-{
-    if (table_.initialized())
-        return true;
-    return table_.init(32);
-}
-
 void
 IdToObjectMap::trace(JSTracer* trc, uint64_t minimumId)
 {
     for (Table::Range r(table_.all()); !r.empty(); r.popFront()) {
         if (r.front().key().serialNumber() >= minimumId)
             JS::TraceEdge(trc, &r.front().value(), "ipc-object");
     }
 }
@@ -100,20 +92,19 @@ IdToObjectMap::has(const ObjectId& id, c
 {
     auto p = table_.lookup(id);
     if (!p)
         return false;
     return p->value() == obj;
 }
 #endif
 
-bool
-ObjectToIdMap::init()
+ObjectToIdMap::ObjectToIdMap()
+  : table_(SystemAllocPolicy(), 32)
 {
-    return table_.initialized() || table_.init(32);
 }
 
 void
 ObjectToIdMap::trace(JSTracer* trc)
 {
     table_.trace(trc);
 }
 
@@ -174,31 +165,16 @@ JavaScriptShared::JavaScriptShared()
     }
 }
 
 JavaScriptShared::~JavaScriptShared()
 {
     MOZ_RELEASE_ASSERT(cpows_.empty());
 }
 
-bool
-JavaScriptShared::init()
-{
-    if (!objects_.init())
-        return false;
-    if (!cpows_.init())
-        return false;
-    if (!unwaivedObjectIds_.init())
-        return false;
-    if (!waivedObjectIds_.init())
-        return false;
-
-    return true;
-}
-
 void
 JavaScriptShared::decref()
 {
     refcount_--;
     if (!refcount_)
         delete this;
 }
 
--- a/js/ipc/JavaScriptShared.h
+++ b/js/ipc/JavaScriptShared.h
@@ -93,17 +93,16 @@ struct ObjectIdHasher
 // Map ids -> JSObjects
 class IdToObjectMap
 {
     typedef js::HashMap<ObjectId, JS::Heap<JSObject*>, ObjectIdHasher, js::SystemAllocPolicy> Table;
 
   public:
     IdToObjectMap();
 
-    bool init();
     void trace(JSTracer* trc, uint64_t minimumId = 0);
     void sweep();
 
     bool add(ObjectId id, JSObject* obj);
     JSObject* find(ObjectId id);
     JSObject* findPreserveColor(ObjectId id);
     void remove(ObjectId id);
 
@@ -120,17 +119,18 @@ class IdToObjectMap
 
 // Map JSObjects -> ids
 class ObjectToIdMap
 {
     using Hasher = js::MovableCellHasher<JS::Heap<JSObject*>>;
     using Table = JS::GCHashMap<JS::Heap<JSObject*>, ObjectId, Hasher, js::SystemAllocPolicy>;
 
   public:
-    bool init();
+    ObjectToIdMap();
+
     void trace(JSTracer* trc);
     void sweep();
 
     bool add(JSContext* cx, JSObject* obj, ObjectId id);
     ObjectId find(JSObject* obj);
     void remove(JSObject* obj);
     void clear();
 
@@ -141,18 +141,16 @@ class ObjectToIdMap
 class Logging;
 
 class JavaScriptShared : public CPOWManager
 {
   public:
     JavaScriptShared();
     virtual ~JavaScriptShared();
 
-    bool init();
-
     void decref();
     void incref();
 
     bool Unwrap(JSContext* cx, const InfallibleTArray<CpowEntry>& aCpows, JS::MutableHandleObject objp) override;
     bool Wrap(JSContext* cx, JS::HandleObject aObj, InfallibleTArray<CpowEntry>* outCpows) override;
 
   protected:
     bool toVariant(JSContext* cx, JS::HandleValue from, JSVariant* to);
--- a/js/ipc/WrapperOwner.cpp
+++ b/js/ipc/WrapperOwner.cpp
@@ -902,25 +902,16 @@ void
 WrapperOwner::updatePointer(JSObject* obj, const JSObject* old)
 {
     ObjectId objId = idOfUnchecked(obj);
     MOZ_ASSERT(hasCPOW(objId, old));
     cpows_.add(objId, obj);
 }
 
 bool
-WrapperOwner::init()
-{
-    if (!JavaScriptShared::init())
-        return false;
-
-    return true;
-}
-
-bool
 WrapperOwner::getPropertyKeys(JSContext* cx, HandleObject proxy, uint32_t flags, AutoIdVector& props)
 {
     ObjectId objId = idOf(proxy);
 
     ReturnStatus status;
     InfallibleTArray<JSIDVariant> ids;
     if (!SendGetPropertyKeys(objId, flags, &status, &ids))
         return ipcfail(cx);
--- a/js/ipc/WrapperOwner.h
+++ b/js/ipc/WrapperOwner.h
@@ -19,17 +19,16 @@ namespace jsipc {
 
 class WrapperOwner : public virtual JavaScriptShared
 {
   public:
     typedef mozilla::ipc::IProtocol::ActorDestroyReason
            ActorDestroyReason;
 
     WrapperOwner();
-    bool init();
 
     // Standard internal methods.
     // (The traps should be in the same order like js/Proxy.h)
     bool getOwnPropertyDescriptor(JSContext* cx, JS::HandleObject proxy, JS::HandleId id,
                                   JS::MutableHandle<JS::PropertyDescriptor> desc);
     bool defineProperty(JSContext* cx, JS::HandleObject proxy, JS::HandleId id,
                         JS::Handle<JS::PropertyDescriptor> desc,
                         JS::ObjectOpResult& result);
--- a/js/public/GCHashTable.h
+++ b/js/public/GCHashTable.h
@@ -55,35 +55,32 @@ template <typename Key,
           typename AllocPolicy = js::TempAllocPolicy,
           typename MapSweepPolicy = DefaultMapSweepPolicy<Key, Value>>
 class GCHashMap : public js::HashMap<Key, Value, HashPolicy, AllocPolicy>
 {
     using Base = js::HashMap<Key, Value, HashPolicy, AllocPolicy>;
 
   public:
     explicit GCHashMap(AllocPolicy a = AllocPolicy()) : Base(a)  {}
+    explicit GCHashMap(size_t length) : Base(length)  {}
+    GCHashMap(AllocPolicy a, size_t length) : Base(a, length)  {}
 
     static void trace(GCHashMap* map, JSTracer* trc) { map->trace(trc); }
     void trace(JSTracer* trc) {
-        if (!this->initialized())
-            return;
         for (typename Base::Enum e(*this); !e.empty(); e.popFront()) {
             GCPolicy<Value>::trace(trc, &e.front().value(), "hashmap value");
             GCPolicy<Key>::trace(trc, &e.front().mutableKey(), "hashmap key");
         }
     }
 
     bool needsSweep() const {
-        return this->initialized() && !this->empty();
+        return !this->empty();
     }
 
     void sweep() {
-        if (!this->initialized())
-            return;
-
         for (typename Base::Enum e(*this); !e.empty(); e.popFront()) {
             if (MapSweepPolicy::needsSweep(&e.front().mutableKey(), &e.front().value()))
                 e.removeFront();
         }
     }
 
     // GCHashMap is movable
     GCHashMap(GCHashMap&& rhs) : Base(std::move(rhs)) {}
@@ -112,22 +109,21 @@ template <typename Key,
           typename HashPolicy = DefaultHasher<Key>,
           typename AllocPolicy = TempAllocPolicy,
           typename MapSweepPolicy = JS::DefaultMapSweepPolicy<Key, Value>>
 class GCRekeyableHashMap : public JS::GCHashMap<Key, Value, HashPolicy, AllocPolicy, MapSweepPolicy>
 {
     using Base = JS::GCHashMap<Key, Value, HashPolicy, AllocPolicy>;
 
   public:
-    explicit GCRekeyableHashMap(AllocPolicy a = AllocPolicy()) : Base(a)  {}
+    explicit GCRekeyableHashMap(AllocPolicy a = AllocPolicy()) : Base(a) {}
+    explicit GCRekeyableHashMap(size_t length) : Base(length) {}
+    GCRekeyableHashMap(AllocPolicy a, size_t length) : Base(a, length) {}
 
     void sweep() {
-        if (!this->initialized())
-            return;
-
         for (typename Base::Enum e(*this); !e.empty(); e.popFront()) {
             Key key(e.front().key());
             if (MapSweepPolicy::needsSweep(&key, &e.front().value()))
                 e.removeFront();
             else if (!HashPolicy::match(key, e.front().key()))
                 e.rekeyFront(key);
         }
     }
@@ -148,19 +144,17 @@ class WrappedPtrOperations<JS::GCHashMap
 
     const Map& map() const { return static_cast<const Wrapper*>(this)->get(); }
 
   public:
     using AddPtr = typename Map::AddPtr;
     using Ptr = typename Map::Ptr;
     using Range = typename Map::Range;
 
-    bool initialized() const                   { return map().initialized(); }
     Ptr lookup(const Lookup& l) const          { return map().lookup(l); }
-    AddPtr lookupForAdd(const Lookup& l) const { return map().lookupForAdd(l); }
     Range all() const                          { return map().all(); }
     bool empty() const                         { return map().empty(); }
     uint32_t count() const                     { return map().count(); }
     size_t capacity() const                    { return map().capacity(); }
     bool has(const Lookup& l) const            { return map().lookup(l).found(); }
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return map().sizeOfExcludingThis(mallocSizeOf);
     }
@@ -179,20 +173,20 @@ class MutableWrappedPtrOperations<JS::GC
     Map& map() { return static_cast<Wrapper*>(this)->get(); }
 
   public:
     using AddPtr = typename Map::AddPtr;
     struct Enum : public Map::Enum { explicit Enum(Wrapper& o) : Map::Enum(o.map()) {} };
     using Ptr = typename Map::Ptr;
     using Range = typename Map::Range;
 
-    bool init(uint32_t len = 16) { return map().init(len); }
     void clear()                 { map().clear(); }
-    void finish()                { map().finish(); }
+    void clearAndCompact()       { map().clearAndCompact(); }
     void remove(Ptr p)           { map().remove(p); }
+    AddPtr lookupForAdd(const Lookup& l) { return map().lookupForAdd(l); }
 
     template<typename KeyInput, typename ValueInput>
     bool add(AddPtr& p, KeyInput&& k, ValueInput&& v) {
         return map().add(p, std::forward<KeyInput>(k), std::forward<ValueInput>(v));
     }
 
     template<typename KeyInput>
     bool add(AddPtr& p, KeyInput&& k) {
@@ -238,32 +232,30 @@ template <typename T,
           typename HashPolicy = js::DefaultHasher<T>,
           typename AllocPolicy = js::TempAllocPolicy>
 class GCHashSet : public js::HashSet<T, HashPolicy, AllocPolicy>
 {
     using Base = js::HashSet<T, HashPolicy, AllocPolicy>;
 
   public:
     explicit GCHashSet(AllocPolicy a = AllocPolicy()) : Base(a)  {}
+    explicit GCHashSet(size_t length) : Base(length)  {}
+    GCHashSet(AllocPolicy a, size_t length) : Base(a, length)  {}
 
     static void trace(GCHashSet* set, JSTracer* trc) { set->trace(trc); }
     void trace(JSTracer* trc) {
-        if (!this->initialized())
-            return;
         for (typename Base::Enum e(*this); !e.empty(); e.popFront())
             GCPolicy<T>::trace(trc, &e.mutableFront(), "hashset element");
     }
 
     bool needsSweep() const {
-        return this->initialized() && !this->empty();
+        return !this->empty();
     }
 
     void sweep() {
-        if (!this->initialized())
-            return;
         for (typename Base::Enum e(*this); !e.empty(); e.popFront()) {
             if (GCPolicy<T>::needsSweep(&e.mutableFront()))
                 e.removeFront();
         }
     }
 
     // GCHashSet is movable
     GCHashSet(GCHashSet&& rhs) : Base(std::move(rhs)) {}
@@ -291,19 +283,17 @@ class WrappedPtrOperations<JS::GCHashSet
 
   public:
     using Lookup = typename Set::Lookup;
     using AddPtr = typename Set::AddPtr;
     using Entry = typename Set::Entry;
     using Ptr = typename Set::Ptr;
     using Range = typename Set::Range;
 
-    bool initialized() const                   { return set().initialized(); }
     Ptr lookup(const Lookup& l) const          { return set().lookup(l); }
-    AddPtr lookupForAdd(const Lookup& l) const { return set().lookupForAdd(l); }
     Range all() const                          { return set().all(); }
     bool empty() const                         { return set().empty(); }
     uint32_t count() const                     { return set().count(); }
     size_t capacity() const                    { return set().capacity(); }
     bool has(const Lookup& l) const            { return set().lookup(l).found(); }
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return set().sizeOfExcludingThis(mallocSizeOf);
     }
@@ -323,21 +313,22 @@ class MutableWrappedPtrOperations<JS::GC
 
   public:
     using AddPtr = typename Set::AddPtr;
     using Entry = typename Set::Entry;
     struct Enum : public Set::Enum { explicit Enum(Wrapper& o) : Set::Enum(o.set()) {} };
     using Ptr = typename Set::Ptr;
     using Range = typename Set::Range;
 
-    bool init(uint32_t len = 16) { return set().init(len); }
     void clear()                 { set().clear(); }
-    void finish()                { set().finish(); }
+    void clearAndCompact()       { set().clearAndCompact(); }
+    MOZ_MUST_USE bool reserve(uint32_t len) { return set().reserve(len); }
     void remove(Ptr p)           { set().remove(p); }
     void remove(const Lookup& l) { set().remove(l); }
+    AddPtr lookupForAdd(const Lookup& l) { return set().lookupForAdd(l); }
 
     template<typename TInput>
     bool add(AddPtr& p, TInput&& t) {
         return set().add(p, std::forward<TInput>(t));
     }
 
     template<typename TInput>
     bool relookupOrAdd(AddPtr& p, const Lookup& l, TInput&& t) {
@@ -390,19 +381,16 @@ class WeakCache<GCHashMap<Key, Value, Ha
         MOZ_ASSERT(!needsBarrier);
     }
 
     bool needsSweep() override {
         return map.needsSweep();
     }
 
     size_t sweep() override {
-        if (!this->initialized())
-            return 0;
-
         size_t steps = map.count();
         map.sweep();
         return steps;
     }
 
     bool setNeedsIncrementalBarrier(bool needs) override {
         MOZ_ASSERT(needsBarrier != needs);
         needsBarrier = needs;
@@ -462,30 +450,26 @@ class WeakCache<GCHashMap<Key, Value, Ha
           : Map::Enum(cache.map)
         {
             // This operation is not allowed while barriers are in place as we
             // may also need to enumerate the set for sweeping.
             MOZ_ASSERT(!cache.needsBarrier);
         }
     };
 
-    bool initialized() const {
-        return map.initialized();
-    }
-
     Ptr lookup(const Lookup& l) const {
         Ptr ptr = map.lookup(l);
         if (needsBarrier && ptr && entryNeedsSweep(*ptr)) {
             const_cast<Map&>(map).remove(ptr);
             return Ptr();
         }
         return ptr;
     }
 
-    AddPtr lookupForAdd(const Lookup& l) const {
+    AddPtr lookupForAdd(const Lookup& l) {
         AddPtr ptr = map.lookupForAdd(l);
         if (needsBarrier && ptr && entryNeedsSweep(*ptr)) {
             const_cast<Map&>(map).remove(ptr);
             return map.lookupForAdd(l);
         }
         return ptr;
     }
 
@@ -519,33 +503,28 @@ class WeakCache<GCHashMap<Key, Value, Ha
 
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return map.sizeOfExcludingThis(mallocSizeOf);
     }
     size_t sizeOfIncludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return mallocSizeOf(this) + map.shallowSizeOfExcludingThis(mallocSizeOf);
     }
 
-    bool init(uint32_t len = 16) {
-        MOZ_ASSERT(!needsBarrier);
-        return map.init(len);
-    }
-
     void clear() {
         // This operation is not currently allowed while barriers are in place
         // since it doesn't make sense to clear a cache while it is being swept.
         MOZ_ASSERT(!needsBarrier);
         map.clear();
     }
 
-    void finish() {
+    void clearAndCompact() {
         // This operation is not currently allowed while barriers are in place
-        // since it doesn't make sense to destroy a cache while it is being swept.
+        // since it doesn't make sense to clear a cache while it is being swept.
         MOZ_ASSERT(!needsBarrier);
-        map.finish();
+        map.clearAndCompact();
     }
 
     void remove(Ptr p) {
         // This currently supports removing entries during incremental
         // sweeping. If we allow these tables to be swept incrementally this may
         // no longer be possible.
         map.remove(p);
     }
@@ -597,19 +576,16 @@ class WeakCache<GCHashSet<T, HashPolicy,
       : WeakCacheBase(zone), set(std::forward<Args>(args)...), needsBarrier(false)
     {}
     template <typename... Args>
     explicit WeakCache(JSRuntime* rt, Args&&... args)
       : WeakCacheBase(rt), set(std::forward<Args>(args)...), needsBarrier(false)
     {}
 
     size_t sweep() override {
-        if (!this->initialized())
-            return 0;
-
         size_t steps = set.count();
         set.sweep();
         return steps;
     }
 
     bool needsSweep() override {
         return set.needsSweep();
     }
@@ -669,30 +645,26 @@ class WeakCache<GCHashSet<T, HashPolicy,
           : Set::Enum(cache.set)
         {
             // This operation is not allowed while barriers are in place as we
             // may also need to enumerate the set for sweeping.
             MOZ_ASSERT(!cache.needsBarrier);
         }
     };
 
-    bool initialized() const {
-        return set.initialized();
-    }
-
     Ptr lookup(const Lookup& l) const {
         Ptr ptr = set.lookup(l);
         if (needsBarrier && ptr && entryNeedsSweep(*ptr)) {
             const_cast<Set&>(set).remove(ptr);
             return Ptr();
         }
         return ptr;
     }
 
-    AddPtr lookupForAdd(const Lookup& l) const {
+    AddPtr lookupForAdd(const Lookup& l) {
         AddPtr ptr = set.lookupForAdd(l);
         if (needsBarrier && ptr && entryNeedsSweep(*ptr)) {
             const_cast<Set&>(set).remove(ptr);
             return set.lookupForAdd(l);
         }
         return ptr;
     }
 
@@ -726,33 +698,28 @@ class WeakCache<GCHashSet<T, HashPolicy,
 
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return set.shallowSizeOfExcludingThis(mallocSizeOf);
     }
     size_t sizeOfIncludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         return mallocSizeOf(this) + set.shallowSizeOfExcludingThis(mallocSizeOf);
     }
 
-    bool init(uint32_t len = 16) {
-        MOZ_ASSERT(!needsBarrier);
-        return set.init(len);
-    }
-
     void clear() {
         // This operation is not currently allowed while barriers are in place
         // since it doesn't make sense to clear a cache while it is being swept.
         MOZ_ASSERT(!needsBarrier);
         set.clear();
     }
 
-    void finish() {
+    void clearAndCompact() {
         // This operation is not currently allowed while barriers are in place
-        // since it doesn't make sense to destroy a cache while it is being swept.
+        // since it doesn't make sense to clear a cache while it is being swept.
         MOZ_ASSERT(!needsBarrier);
-        set.finish();
+        set.clearAndCompact();
     }
 
     void remove(Ptr p) {
         // This currently supports removing entries during incremental
         // sweeping. If we allow these tables to be swept incrementally this may
         // no longer be possible.
         set.remove(p);
     }
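The wrapper changes above mirror the underlying table API: finish() becomes clearAndCompact(), and pre-sizing via init(len) becomes reserve(len). A hedged sketch of the replacement idiom, assuming the post-patch mfbt interface (`MySet` and `ResetAndPrefill` are hypothetical; the reserve()-then-putNewInfallible() pairing follows the UbiNodeDominatorTree.h hunk below):

```cpp
#include <stdint.h>

#include "mozilla/HashTable.h"

using MySet = mozilla::HashSet<uint64_t>;  // hypothetical alias

static bool
ResetAndPrefill(MySet& set, const uint64_t* vals, uint32_t n)
{
    // Replaces the old finish(): drop all entries and free the entry
    // storage, returning the set to its empty, unallocated state.
    set.clearAndCompact();

    // Replaces the old init(n) pre-sizing: reserve storage up front so the
    // adds below cannot fail.
    if (!set.reserve(n))
        return false;

    for (uint32_t i = 0; i < n; i++)
        set.putNewInfallible(vals[i]);
    return true;
}
```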
--- a/js/public/MemoryMetrics.h
+++ b/js/public/MemoryMetrics.h
@@ -564,17 +564,17 @@ struct RuntimeSizes
     RuntimeSizes()
       : FOR_EACH_SIZE(ZERO_SIZE)
         scriptSourceInfo(),
         code(),
         gc(),
         notableScriptSources()
     {
         allScriptSources = js_new<ScriptSourcesHashMap>();
-        if (!allScriptSources || !allScriptSources->init())
+        if (!allScriptSources)
             MOZ_CRASH("oom");
     }
 
     ~RuntimeSizes() {
         // |allScriptSources| is usually deleted and set to nullptr before this
         // destructor runs. But there are failure cases due to OOMs that may
         // prevent that, so it doesn't hurt to try again here.
         js_delete(allScriptSources);
--- a/js/public/UbiNodeBreadthFirst.h
+++ b/js/public/UbiNodeBreadthFirst.h
@@ -83,19 +83,16 @@ struct BreadthFirst {
     //
     // We do nothing with noGC, other than require it to exist, with a lifetime
     // that encloses our own.
     BreadthFirst(JSContext* cx, Handler& handler, const JS::AutoRequireNoGC& noGC)
       : wantNames(true), cx(cx), visited(), handler(handler), pending(),
         traversalBegun(false), stopRequested(false), abandonRequested(false)
     { }
 
-    // Initialize this traversal object. Return false on OOM.
-    bool init() { return visited.init(); }
-
     // Add |node| as a starting point for the traversal. You may add
     // as many starting points as you like. Return false on OOM.
     bool addStart(Node node) { return pending.append(node); }
 
     // Add |node| as a starting point for the traversal (see addStart) and also
     // add it to the |visited| set. Return false on OOM.
     bool addStartVisited(Node node) {
         typename NodeMap::AddPtr ptr = visited.lookupForAdd(node);
--- a/js/public/UbiNodeCensus.h
+++ b/js/public/UbiNodeCensus.h
@@ -201,18 +201,16 @@ class RootedCount : JS::CustomAutoRooter
 struct Census {
     JSContext* const cx;
     // If the targetZones set is non-empty, then only consider nodes whose zone
     // is an element of the set. If the targetZones set is empty, then nodes in
     // all zones are considered.
     JS::ZoneSet targetZones;
 
     explicit Census(JSContext* cx) : cx(cx) { }
-
-    MOZ_MUST_USE JS_PUBLIC_API(bool) init();
 };
 
 // A BreadthFirst handler type that conducts a census, using a CountBase to
 // categorize and count each node.
 class CensusHandler {
     Census& census;
     CountBasePtr& rootCount;
     mozilla::MallocSizeOf mallocSizeOf;
--- a/js/public/UbiNodeDominatorTree.h
+++ b/js/public/UbiNodeDominatorTree.h
@@ -333,40 +333,38 @@ class JS_PUBLIC_API(DominatorTree)
             return postOrder.append(node);
         };
 
         auto onEdge = [&](const Node& origin, const Edge& edge) {
             auto p = predecessorSets.lookupForAdd(edge.referent);
             if (!p) {
                 mozilla::UniquePtr<NodeSet, DeletePolicy<NodeSet>> set(js_new<NodeSet>());
                 if (!set ||
-                    !set->init() ||
                     !predecessorSets.add(p, edge.referent, std::move(set)))
                 {
                     return false;
                 }
             }
             MOZ_ASSERT(p && p->value());
             return p->value()->put(origin);
         };
 
         PostOrder traversal(cx, noGC);
-        return traversal.init() &&
-               traversal.addStart(root) &&
+        return traversal.addStart(root) &&
                traversal.traverse(onNode, onEdge);
     }
 
     // Populates the given `map` with an entry for each node to its index in
     // `postOrder`.
     static MOZ_MUST_USE bool mapNodesToTheirIndices(JS::ubi::Vector<Node>& postOrder,
                                                     NodeToIndexMap& map) {
-        MOZ_ASSERT(!map.initialized());
+        MOZ_ASSERT(map.empty());
         MOZ_ASSERT(postOrder.length() < UINT32_MAX);
         uint32_t length = postOrder.length();
-        if (!map.init(length))
+        if (!map.reserve(length))
             return false;
         for (uint32_t i = 0; i < length; i++)
             map.putNewInfallible(postOrder[i], i);
         return true;
     }
 
     // Convert the Node -> NodeSet predecessorSets to a index -> Vector<index>
     // form.
@@ -398,17 +396,17 @@ class JS_PUBLIC_API(DominatorTree)
             if (!predecessorVectors[i].reserve(predecessors->count()))
                 return false;
             for (auto range = predecessors->all(); !range.empty(); range.popFront()) {
                 auto ptr = nodeToPostOrderIndex.lookup(range.front());
                 MOZ_ASSERT(ptr);
                 predecessorVectors[i].infallibleAppend(ptr->value());
             }
         }
-        predecessorSets.finish();
+        predecessorSets.clearAndCompact();
         return true;
     }
 
     // Initialize `doms` such that the immediate dominator of the `root` is the
     // `root` itself and all others are `UNDEFINED`.
     static MOZ_MUST_USE bool initializeDominators(JS::ubi::Vector<uint32_t>& doms,
                                                   uint32_t length) {
         MOZ_ASSERT(doms.length() == 0);
@@ -510,30 +508,30 @@ class JS_PUBLIC_API(DominatorTree)
      *
      * Returns `mozilla::Nothing()` on OOM failure. It is the caller's
      * responsibility to handle and report the OOM.
      */
     static mozilla::Maybe<DominatorTree>
     Create(JSContext* cx, AutoCheckCannotGC& noGC, const Node& root) {
         JS::ubi::Vector<Node> postOrder;
         PredecessorSets predecessorSets;
-        if (!predecessorSets.init() || !doTraversal(cx, noGC, root, postOrder, predecessorSets))
+        if (!doTraversal(cx, noGC, root, postOrder, predecessorSets))
             return mozilla::Nothing();
 
         MOZ_ASSERT(postOrder.length() < UINT32_MAX);
         uint32_t length = postOrder.length();
         MOZ_ASSERT(postOrder[length - 1] == root);
 
         // From here on out we wish to avoid hash table lookups, and we use
         // indices into `postOrder` instead of actual nodes wherever
         // possible. This greatly improves the performance of this
         // implementation, but we have to pay a little bit of upfront cost to
         // convert our data structures to play along first.
 
-        NodeToIndexMap nodeToPostOrderIndex;
+        NodeToIndexMap nodeToPostOrderIndex(postOrder.length());
         if (!mapNodesToTheirIndices(postOrder, nodeToPostOrderIndex))
             return mozilla::Nothing();
 
         JS::ubi::Vector<JS::ubi::Vector<uint32_t>> predecessorVectors;
         if (!convertPredecessorSetsToVectors(root, postOrder, predecessorSets, nodeToPostOrderIndex,
                                              predecessorVectors))
             return mozilla::Nothing();
 
--- a/js/public/UbiNodePostOrder.h
+++ b/js/public/UbiNodePostOrder.h
@@ -118,19 +118,16 @@ struct PostOrder {
       : cx(cx)
       , seen()
       , stack()
 #ifdef DEBUG
       , traversed(false)
 #endif
     { }
 
-    // Initialize this traversal object. Return false on OOM.
-    MOZ_MUST_USE bool init() { return seen.init(); }
-
     // Add `node` as a starting point for the traversal. You may add
     // as many starting points as you like. Returns false on OOM.
     MOZ_MUST_USE bool addStart(const Node& node) {
         if (!seen.put(node))
             return false;
         return pushForTraversing(node);
     }
 
--- a/js/public/UbiNodeShortestPaths.h
+++ b/js/public/UbiNodeShortestPaths.h
@@ -183,28 +183,21 @@ struct JS_PUBLIC_API(ShortestPaths)
 
   private:
     // Private methods.
 
     ShortestPaths(uint32_t maxNumPaths, const Node& root, NodeSet&& targets)
       : maxNumPaths_(maxNumPaths)
       , root_(root)
       , targets_(std::move(targets))
-      , paths_()
+      , paths_(targets_.count())
       , backEdges_()
     {
         MOZ_ASSERT(maxNumPaths_ > 0);
         MOZ_ASSERT(root_);
-        MOZ_ASSERT(targets_.initialized());
-    }
-
-    bool initialized() const {
-        return targets_.initialized() &&
-               paths_.initialized() &&
-               backEdges_.initialized();
     }
 
   public:
     // Public methods.
 
     ShortestPaths(ShortestPaths&& rhs)
       : maxNumPaths_(rhs.maxNumPaths_)
       , root_(rhs.root_)
@@ -244,59 +237,53 @@ struct JS_PUBLIC_API(ShortestPaths)
      * Returns `mozilla::Nothing()` on OOM failure. It is the caller's
      * responsibility to handle and report the OOM.
      */
     static mozilla::Maybe<ShortestPaths>
     Create(JSContext* cx, AutoCheckCannotGC& noGC, uint32_t maxNumPaths, const Node& root, NodeSet&& targets) {
         MOZ_ASSERT(targets.count() > 0);
         MOZ_ASSERT(maxNumPaths > 0);
 
-        size_t count = targets.count();
         ShortestPaths paths(maxNumPaths, root, std::move(targets));
-        if (!paths.paths_.init(count))
-            return mozilla::Nothing();
 
         Handler handler(paths);
         Traversal traversal(cx, handler, noGC);
         traversal.wantNames = true;
-        if (!traversal.init() || !traversal.addStart(root) || !traversal.traverse())
+        if (!traversal.addStart(root) || !traversal.traverse())
             return mozilla::Nothing();
 
         // Take ownership of the back edges we created while traversing the
         // graph so that we can follow them from `paths_` and don't
         // use-after-free.
         paths.backEdges_ = std::move(traversal.visited);
 
-        MOZ_ASSERT(paths.initialized());
         return mozilla::Some(std::move(paths));
     }
 
     /**
      * Get an iterator over each target node we searched for retaining paths
      * for. The returned iterator must not outlive the `ShortestPaths`
      * instance.
      */
     NodeSet::Iterator targetIter() const {
-        MOZ_ASSERT(initialized());
         return targets_.iter();
     }
 
     /**
      * Invoke the provided functor/lambda/callable once for each retaining path
      * discovered for `target`. The `func` is passed a single `JS::ubi::Path&`
      * argument, which contains each edge along the path ordered starting from
      * the root and ending at the target, and must not outlive the scope of the
      * call.
      *
      * Note that it is possible that we did not find any paths from the root to
      * the given target, in which case `func` will not be invoked.
      */
     template <class Func>
     MOZ_MUST_USE bool forEachPath(const Node& target, Func func) {
-        MOZ_ASSERT(initialized());
         MOZ_ASSERT(targets_.has(target));
 
         auto ptr = paths_.lookup(target);
 
         // We didn't find any paths to this target, so nothing to do here.
         if (!ptr)
             return true;
 
--- a/js/src/builtin/JSON.cpp
+++ b/js/src/builtin/JSON.cpp
@@ -656,19 +656,17 @@ js::Stringify(JSContext* cx, MutableHand
             if (!GetLengthProperty(cx, replacer, &len))
                 return false;
 
             // Cap the initial size to a moderately small value.  This avoids
             // ridiculous over-allocation if an array with bogusly-huge length
             // is passed in.  If we end up having to add elements past this
             // size, the set will naturally resize to accommodate them.
             const uint32_t MaxInitialSize = 32;
-            Rooted<GCHashSet<jsid>> idSet(cx, GCHashSet<jsid>(cx));
-            if (!idSet.init(Min(len, MaxInitialSize)))
-                return false;
+            Rooted<GCHashSet<jsid>> idSet(cx, GCHashSet<jsid>(cx, Min(len, MaxInitialSize)));
 
             /* Step 4b(iii)(4). */
             uint32_t k = 0;
 
             /* Step 4b(iii)(5). */
             RootedValue item(cx);
             for (; k < len; k++) {
                 if (!CheckForInterrupt(cx))
--- a/js/src/builtin/ModuleObject.cpp
+++ b/js/src/builtin/ModuleObject.cpp
@@ -344,21 +344,16 @@ IndirectBindingMap::put(JSContext* cx, H
                         HandleModuleEnvironmentObject environment, HandleId localName)
 {
     // This object might have been allocated on the background parsing thread in
     // different zone to the final module. Lazily allocate the map so we don't
     // have to switch its zone when merging compartments.
     if (!map_) {
         MOZ_ASSERT(!cx->zone()->createdForHelperThread());
         map_.emplace(cx->zone());
-        if (!map_->init()) {
-            map_.reset();
-            ReportOutOfMemory(cx);
-            return false;
-        }
     }
 
     RootedShape shape(cx, environment->lookup(cx, localName));
     MOZ_ASSERT(shape);
     if (!map_->put(name, Binding(environment, shape))) {
         ReportOutOfMemory(cx);
         return false;
     }
@@ -1227,24 +1222,16 @@ ModuleBuilder::ModuleBuilder(JSContext* 
     exportEntries_(cx, ExportEntryVector(cx)),
     exportNames_(cx, AtomSet(cx)),
     localExportEntries_(cx, ExportEntryVector(cx)),
     indirectExportEntries_(cx, ExportEntryVector(cx)),
     starExportEntries_(cx, ExportEntryVector(cx))
 {}
 
 bool
-ModuleBuilder::init()
-{
-    return requestedModuleSpecifiers_.init() &&
-           importEntries_.init() &&
-           exportNames_.init();
-}
-
-bool
 ModuleBuilder::buildTables()
 {
     for (const auto& e : exportEntries_) {
         RootedExportEntryObject exp(cx_, e);
         if (!exp->moduleRequest()) {
             RootedImportEntryObject importEntry(cx_, importEntryFor(exp->localName()));
             if (!importEntry) {
                 if (!localExportEntries_.append(exp))
--- a/js/src/builtin/ModuleObject.h
+++ b/js/src/builtin/ModuleObject.h
@@ -347,17 +347,16 @@ class ModuleObject : public NativeObject
 
 // Process a module's parse tree to collate the import and export data used when
 // creating a ModuleObject.
 class MOZ_STACK_CLASS ModuleBuilder
 {
   public:
     explicit ModuleBuilder(JSContext* cx, HandleModuleObject module,
                            const frontend::TokenStreamAnyChars& tokenStream);
-    bool init();
 
     bool processImport(frontend::ParseNode* pn);
     bool processExport(frontend::ParseNode* pn);
     bool processExportFrom(frontend::ParseNode* pn);
 
     bool hasExportedName(JSAtom* name) const;
 
     using ExportEntryVector = GCVector<ExportEntryObject*>;
--- a/js/src/builtin/Promise.cpp
+++ b/js/src/builtin/Promise.cpp
@@ -4574,19 +4574,16 @@ OffThreadPromiseTask::dispatchResolveAnd
 
 OffThreadPromiseRuntimeState::OffThreadPromiseRuntimeState()
   : dispatchToEventLoopCallback_(nullptr),
     dispatchToEventLoopClosure_(nullptr),
     mutex_(mutexid::OffThreadPromiseState),
     numCanceled_(0),
     internalDispatchQueueClosed_(false)
 {
-    AutoEnterOOMUnsafeRegion noOOM;
-    if (!live_.init())
-        noOOM.crash("OffThreadPromiseRuntimeState");
 }
 
 OffThreadPromiseRuntimeState::~OffThreadPromiseRuntimeState()
 {
     MOZ_ASSERT(live_.empty());
     MOZ_ASSERT(numCanceled_ == 0);
     MOZ_ASSERT(internalDispatchQueue_.empty());
     MOZ_ASSERT(!initialized());
--- a/js/src/builtin/ReflectParse.cpp
+++ b/js/src/builtin/ReflectParse.cpp
@@ -3464,18 +3464,16 @@ reflect_parse(JSContext* cx, uint32_t ar
         return false;
 
     CompileOptions options(cx);
     options.setFileAndLine(filename.get(), lineno);
     options.setCanLazilyParse(false);
     options.allowHTMLComments = target == ParseGoal::Script;
     mozilla::Range<const char16_t> chars = linearChars.twoByteRange();
     UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(cx, options,
                                                                                  mozilla::Nothing()));
     if (!sourceObject)
         return false;
 
     Parser<FullParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(), options,
                                               chars.begin().get(), chars.length(),
@@ -3495,18 +3493,16 @@ reflect_parse(JSContext* cx, uint32_t ar
         if (!GlobalObject::ensureModulePrototypesCreated(cx, cx->global()))
             return false;
 
         Rooted<ModuleObject*> module(cx, ModuleObject::create(cx));
         if (!module)
             return false;
 
         ModuleBuilder builder(cx, module, parser.anyChars);
-        if (!builder.init())
-            return false;
 
         ModuleSharedContext modulesc(cx, module, &cx->global()->emptyGlobalScope(), builder);
         pn = parser.moduleBody(&modulesc);
         if (!pn)
             return false;
 
         MOZ_ASSERT(pn->getKind() == ParseNodeKind::Module);
         pn = pn->pn_body;
--- a/js/src/builtin/TestingFunctions.cpp
+++ b/js/src/builtin/TestingFunctions.cpp
@@ -1249,18 +1249,17 @@ GetSavedFrameCount(JSContext* cx, unsign
 }
 
 static bool
 ClearSavedFrames(JSContext* cx, unsigned argc, Value* vp)
 {
     CallArgs args = CallArgsFromVp(argc, vp);
 
     js::SavedStacks& savedStacks = cx->realm()->savedStacks();
-    if (savedStacks.initialized())
-        savedStacks.clear();
+    savedStacks.clear();
 
     for (ActivationIterator iter(cx); !iter.done(); ++iter)
         iter->clearLiveSavedFrameCache();
 
     args.rval().setUndefined();
     return true;
 }
 
@@ -3588,17 +3587,17 @@ FindPath(JSContext* cx, unsigned argc, V
         // We can't tolerate the GC moving things around while we're searching
         // the heap. Check that nothing we do causes a GC.
         JS::AutoCheckCannotGC autoCannotGC;
 
         JS::ubi::Node start(args[0]), target(args[1]);
 
         heaptools::FindPathHandler handler(cx, start, target, &nodes, edges);
         heaptools::FindPathHandler::Traversal traversal(cx, handler, autoCannotGC);
-        if (!traversal.init() || !traversal.addStart(start)) {
+        if (!traversal.addStart(start)) {
             ReportOutOfMemory(cx);
             return false;
         }
 
         if (!traversal.traverse()) {
             if (!cx->isExceptionPending())
                 ReportOutOfMemory(cx);
             return false;
@@ -3714,20 +3713,16 @@ ShortestPaths(JSContext* cx, unsigned ar
     // is bounded within an AutoCheckCannotGC.
     Rooted<GCVector<GCVector<GCVector<Value>>>> values(cx, GCVector<GCVector<GCVector<Value>>>(cx));
     Vector<Vector<Vector<JS::ubi::EdgeName>>> names(cx);
 
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        if (!targets.init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
 
         for (size_t i = 0; i < length; i++) {
             RootedValue val(cx, objs->getDenseElement(i));
             JS::ubi::Node node(val);
             if (!targets.put(node)) {
                 ReportOutOfMemory(cx);
                 return false;
             }
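
Callers like the targets.put() loop above need no new OOM check: put() and
lookupForAdd() already report failure, and with lazy storage that same path
also covers an OOM while instantiating the table. A hedged sketch of the
lookupForAdd()/add() pairing (hypothetical helper):

    #include <stdint.h>

    #include "mozilla/HashTable.h"

    static bool
    AddUnique(mozilla::HashSet<uint64_t>& set, uint64_t value)
    {
        // An unusable AddPtr here means either "key absent" or "storage
        // allocation failed"; add() returns false in the failure case, so
        // a single check covers both.
        mozilla::HashSet<uint64_t>::AddPtr p = set.lookupForAdd(value);
        if (p)
            return true;           // Already present.
        return set.add(p, value);  // False on OOM (hashing or storage).
    }
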
--- a/js/src/builtin/TypedObject.cpp
+++ b/js/src/builtin/TypedObject.cpp
@@ -2158,21 +2158,16 @@ ArrayBufferObject*
 InlineTransparentTypedObject::getOrCreateBuffer(JSContext* cx)
 {
     ObjectRealm& realm = ObjectRealm::get(this);
     if (!realm.lazyArrayBuffers) {
         auto table = cx->make_unique<ObjectWeakMap>(cx);
         if (!table)
             return nullptr;
 
-        if (!table->init()) {
-            ReportOutOfMemory(cx);
-            return nullptr;
-        }
-
         realm.lazyArrayBuffers = std::move(table);
     }
 
     ObjectWeakMap* table = realm.lazyArrayBuffers.get();
 
     JSObject* obj = table->lookup(this);
     if (obj)
         return &obj->as<ArrayBufferObject>();
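
For heap-allocated tables like realm.lazyArrayBuffers, allocating the map
object itself is now the only fallible step, so the separate
init()-plus-ReportOutOfMemory dance goes away. A sketch with a hypothetical
stand-in type (js_new is fallible and returns null on OOM):

    #include <stdint.h>

    #include "js/Utility.h"
    #include "mozilla/HashTable.h"

    using LazyTable = mozilla::HashMap<uintptr_t, uintptr_t>;

    static LazyTable*
    CreateTable()
    {
        // Null on OOM; no init() call is needed on the new table.
        return js_new<LazyTable>();
    }
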
--- a/js/src/builtin/WeakMapObject-inl.h
+++ b/js/src/builtin/WeakMapObject-inl.h
@@ -36,20 +36,16 @@ static MOZ_ALWAYS_INLINE bool
 WeakCollectionPutEntryInternal(JSContext* cx, Handle<WeakCollectionObject*> obj,
                                HandleObject key, HandleValue value)
 {
     ObjectValueMap* map = obj->getMap();
     if (!map) {
         auto newMap = cx->make_unique<ObjectValueMap>(cx, obj.get());
         if (!newMap)
             return false;
-        if (!newMap->init()) {
-            JS_ReportOutOfMemory(cx);
-            return false;
-        }
         map = newMap.release();
         obj->setPrivate(map);
     }
 
     // Preserve wrapped native keys to prevent wrapper optimization.
     if (!TryPreserveReflector(cx, key))
         return false;
 
--- a/js/src/builtin/intl/SharedIntlData.cpp
+++ b/js/src/builtin/intl/SharedIntlData.cpp
@@ -106,22 +106,17 @@ IsLegacyICUTimeZone(const char* timeZone
 bool
 js::intl::SharedIntlData::ensureTimeZones(JSContext* cx)
 {
     if (timeZoneDataInitialized)
         return true;
 
     // If ensureTimeZones() was called previously, but didn't complete due to
     // OOM, clear all sets/maps and start from scratch.
-    if (availableTimeZones.initialized())
-        availableTimeZones.finish();
-    if (!availableTimeZones.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
+    availableTimeZones.clearAndCompact();
 
     UErrorCode status = U_ZERO_ERROR;
     UEnumeration* values = ucal_openTimeZones(&status);
     if (U_FAILURE(status)) {
         ReportInternalError(cx);
         return false;
     }
     ScopedICUObject<UEnumeration, uenum_close> toClose(values);
@@ -153,22 +148,17 @@ js::intl::SharedIntlData::ensureTimeZone
         // ICU shouldn't report any duplicate time zone names, but if it does,
         // just ignore the duplicate name.
         if (!p && !availableTimeZones.add(p, timeZone)) {
             ReportOutOfMemory(cx);
             return false;
         }
     }
 
-    if (ianaZonesTreatedAsLinksByICU.initialized())
-        ianaZonesTreatedAsLinksByICU.finish();
-    if (!ianaZonesTreatedAsLinksByICU.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
+    ianaZonesTreatedAsLinksByICU.clearAndCompact();
 
     for (const char* rawTimeZone : timezone::ianaZonesTreatedAsLinksByICU) {
         MOZ_ASSERT(rawTimeZone != nullptr);
         timeZone = Atomize(cx, rawTimeZone, strlen(rawTimeZone));
         if (!timeZone)
             return false;
 
         TimeZoneHasher::Lookup lookup(timeZone);
@@ -176,22 +166,17 @@ js::intl::SharedIntlData::ensureTimeZone
         MOZ_ASSERT(!p, "Duplicate entry in timezone::ianaZonesTreatedAsLinksByICU");
 
         if (!ianaZonesTreatedAsLinksByICU.add(p, timeZone)) {
             ReportOutOfMemory(cx);
             return false;
         }
     }
 
-    if (ianaLinksCanonicalizedDifferentlyByICU.initialized())
-        ianaLinksCanonicalizedDifferentlyByICU.finish();
-    if (!ianaLinksCanonicalizedDifferentlyByICU.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
+    ianaLinksCanonicalizedDifferentlyByICU.clearAndCompact();
 
     RootedAtom linkName(cx);
     RootedAtom& target = timeZone;
     for (const auto& linkAndTarget : timezone::ianaLinksCanonicalizedDifferentlyByICU) {
         const char* rawLinkName = linkAndTarget.link;
         const char* rawTarget = linkAndTarget.target;
 
         MOZ_ASSERT(rawLinkName != nullptr);
@@ -303,22 +288,17 @@ js::intl::SharedIntlData::LocaleHasher::
 bool
 js::intl::SharedIntlData::ensureUpperCaseFirstLocales(JSContext* cx)
 {
     if (upperCaseFirstInitialized)
         return true;
 
     // If ensureUpperCaseFirstLocales() was called previously, but didn't
     // complete due to OOM, clear all data and start from scratch.
-    if (upperCaseFirstLocales.initialized())
-        upperCaseFirstLocales.finish();
-    if (!upperCaseFirstLocales.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
+    upperCaseFirstLocales.clearAndCompact();
 
     UErrorCode status = U_ZERO_ERROR;
     UEnumeration* available = ucol_openAvailableLocales(&status);
     if (U_FAILURE(status)) {
         ReportInternalError(cx);
         return false;
     }
     ScopedICUObject<UEnumeration, uenum_close> toClose(available);
@@ -388,20 +368,20 @@ js::intl::SharedIntlData::isUpperCaseFir
     *isUpperFirst = upperCaseFirstLocales.has(lookup);
 
     return true;
 }
 
 void
 js::intl::SharedIntlData::destroyInstance()
 {
-    availableTimeZones.finish();
-    ianaZonesTreatedAsLinksByICU.finish();
-    ianaLinksCanonicalizedDifferentlyByICU.finish();
-    upperCaseFirstLocales.finish();
+    availableTimeZones.clearAndCompact();
+    ianaZonesTreatedAsLinksByICU.clearAndCompact();
+    ianaLinksCanonicalizedDifferentlyByICU.clearAndCompact();
+    upperCaseFirstLocales.clearAndCompact();
 }
 
 void
 js::intl::SharedIntlData::trace(JSTracer* trc)
 {
     // Atoms are always tenured.
     if (!JS::RuntimeHeapIsMinorCollecting()) {
         availableTimeZones.trace(trc);
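
The SharedIntlData hunks show the replacement vocabulary: with no dead
"uninitialized" state to return to, the old finish()-then-init() reset and the
finish() teardown both become clearAndCompact(), which removes all entries and
then shrinks the storage to fit (releasing it entirely for an empty table). A
hedged sketch of the calls involved:

    #include <stdint.h>

    #include "mozilla/Assertions.h"
    #include "mozilla/HashTable.h"

    static void
    ResetTable(mozilla::HashSet<uint32_t>& set)
    {
        set.clear();    // Remove all entries, keeping the entry storage.
        set.compact();  // Shrink the storage to fit the (now zero) count.

        set.clearAndCompact();  // The two steps above in one call.
        MOZ_ASSERT(set.empty());
    }
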
--- a/js/src/ctypes/CTypes.cpp
+++ b/js/src/ctypes/CTypes.cpp
@@ -6099,21 +6099,17 @@ StructType::DefineInternal(JSContext* cx
   if (!prototype)
     return false;
 
   if (!JS_DefineProperty(cx, prototype, "constructor", typeObj,
                          JSPROP_READONLY | JSPROP_PERMANENT))
     return false;
 
   // Create a FieldInfoHash to stash on the type object.
-  Rooted<FieldInfoHash> fields(cx);
-  if (!fields.init(len)) {
-    JS_ReportOutOfMemory(cx);
-    return false;
-  }
+  Rooted<FieldInfoHash> fields(cx, len);
 
   // Process the field types.
   size_t structSize, structAlign;
   if (len != 0) {
     structSize = 0;
     structAlign = 0;
 
     for (uint32_t i = 0; i < len; ++i) {
@@ -6210,17 +6206,16 @@ StructType::DefineInternal(JSContext* cx
   }
 
   // Move the field hash to the heap and store it in the typeObj.
   FieldInfoHash *heapHash = cx->new_<FieldInfoHash>(std::move(fields.get()));
   if (!heapHash) {
     JS_ReportOutOfMemory(cx);
     return false;
   }
-  MOZ_ASSERT(heapHash->initialized());
   JS_SetReservedSlot(typeObj, SLOT_FIELDINFO, PrivateValue(heapHash));
 
   JS_SetReservedSlot(typeObj, SLOT_SIZE, sizeVal);
   JS_SetReservedSlot(typeObj, SLOT_ALIGN, Int32Value(structAlign));
   //if (!JS_FreezeObject(cx, prototype)0 // XXX fixme - see bug 541212!
   //  return false;
   JS_SetReservedSlot(typeObj, SLOT_PROTO, ObjectValue(*prototype));
   return true;
--- a/js/src/ds/Bitmap.cpp
+++ b/js/src/ds/Bitmap.cpp
@@ -7,36 +7,33 @@
 #include "ds/Bitmap.h"
 
 #include <algorithm>
 
 using namespace js;
 
 SparseBitmap::~SparseBitmap()
 {
-    if (data.initialized()) {
-        for (Data::Range r(data.all()); !r.empty(); r.popFront())
-            js_delete(r.front().value());
-    }
+    for (Data::Range r(data.all()); !r.empty(); r.popFront())
+        js_delete(r.front().value());
 }
 
 size_t
 SparseBitmap::sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf)
 {
     size_t size = data.shallowSizeOfExcludingThis(mallocSizeOf);
     for (Data::Range r(data.all()); !r.empty(); r.popFront())
         size += mallocSizeOf(r.front().value());
     return size;
 }
 
 SparseBitmap::BitBlock&
-SparseBitmap::createBlock(Data::AddPtr p, size_t blockId)
+SparseBitmap::createBlock(Data::AddPtr p, size_t blockId, AutoEnterOOMUnsafeRegion& oomUnsafe)
 {
-    MOZ_ASSERT(!p);
-    AutoEnterOOMUnsafeRegion oomUnsafe;
+    MOZ_ASSERT(!p && p.isValid());
     BitBlock* block = js_new<BitBlock>();
     if (!block || !data.add(p, blockId, block))
         oomUnsafe.crash("Bitmap OOM");
     std::fill(block->begin(), block->end(), 0);
     return *block;
 }
 
 bool
--- a/js/src/ds/Bitmap.h
+++ b/js/src/ds/Bitmap.h
@@ -83,32 +83,34 @@ class SparseBitmap
 
     // Return the number of words in a BitBlock starting at |blockWord| which
     // are in |other|.
     static size_t wordIntersectCount(size_t blockWord, const DenseBitmap& other) {
         long count = other.numWords() - blockWord;
         return std::min<size_t>((size_t)WordsInBlock, std::max<long>(count, 0));
     }
 
-    BitBlock& createBlock(Data::AddPtr p, size_t blockId);
+    BitBlock& createBlock(Data::AddPtr p, size_t blockId, AutoEnterOOMUnsafeRegion& oomUnsafe);
 
     MOZ_ALWAYS_INLINE BitBlock* getBlock(size_t blockId) const {
         Data::Ptr p = data.lookup(blockId);
         return p ? p->value() : nullptr;
     }
 
     MOZ_ALWAYS_INLINE BitBlock& getOrCreateBlock(size_t blockId) {
+        // The lookupForAdd() needs protection against injected OOMs, as does
+        // the add() within createBlock().
+        AutoEnterOOMUnsafeRegion oomUnsafe;
         Data::AddPtr p = data.lookupForAdd(blockId);
         if (p)
             return *p->value();
-        return createBlock(p, blockId);
+        return createBlock(p, blockId, oomUnsafe);
     }
 
   public:
-    bool init() { return data.init(); }
     ~SparseBitmap();
 
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf);
 
     MOZ_ALWAYS_INLINE void setBit(size_t bit) {
         size_t word = bit / JS_BITS_PER_WORD;
         size_t blockWord = blockStartWord(word);
         BitBlock& block = getOrCreateBlock(blockWord / WordsInBlock);
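
The Bitmap change above is the one subtlety lazy storage adds for crash-on-OOM
callers: since lookupForAdd() may itself allocate, the
AutoEnterOOMUnsafeRegion must be entered before the lookup, not just around
the add(). A sketch of the pattern (hypothetical caller):

    #include <stddef.h>

    #include "js/Utility.h"
    #include "mozilla/HashTable.h"

    static void
    PutInfallibly(mozilla::HashMap<size_t, size_t>& map, size_t k, size_t v)
    {
        // Covers injected or real OOM in both the lookup and the add.
        js::AutoEnterOOMUnsafeRegion oomUnsafe;
        mozilla::HashMap<size_t, size_t>::AddPtr p = map.lookupForAdd(k);
        if (!p && !map.add(p, k, v))
            oomUnsafe.crash("PutInfallibly");
    }
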
--- a/js/src/ds/InlineTable.h
+++ b/js/src/ds/InlineTable.h
@@ -66,23 +66,17 @@ class InlineTable : private AllocPolicy
 
     bool usingTable() const {
         return inlNext_ > InlineEntries;
     }
 
     MOZ_MUST_USE bool switchToTable() {
         MOZ_ASSERT(inlNext_ == InlineEntries);
 
-        if (table_.initialized()) {
-            table_.clear();
-        } else {
-            if (!table_.init(count()))
-                return false;
-            MOZ_ASSERT(table_.initialized());
-        }
+        table_.clear();
 
         InlineEntry* end = inlineEnd();
         for (InlineEntry* it = inlineStart(); it != end; ++it) {
             if (it->key && !it->moveTo(table_))
                 return false;
         }
 
         inlNext_ = InlineEntries + 1;
@@ -325,17 +319,17 @@ class InlineTable : private AllocPolicy
         MOZ_ASSERT(p);
         if (p.isInlinePtr_) {
             MOZ_ASSERT(inlCount_ > 0);
             MOZ_ASSERT(p.inlPtr_->key != nullptr);
             p.inlPtr_->key = nullptr;
             --inlCount_;
             return;
         }
-        MOZ_ASSERT(table_.initialized() && usingTable());
+        MOZ_ASSERT(usingTable());
         table_.remove(p.tablePtr_);
     }
 
     void remove(const Lookup& l) {
         if (Ptr p = lookup(l))
             remove(p);
     }
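
The switchToTable() and remove() hunks rely on operations being well-defined
on a table whose storage has never been allocated. A small sketch of the
invariants a fresh table now satisfies (assuming the new API, where capacity()
reports zero for absent storage):

    #include "mozilla/Assertions.h"
    #include "mozilla/HashTable.h"

    static void
    FreshTableInvariants()
    {
        mozilla::HashMap<int, int> map;   // No entry storage allocated yet.
        map.clear();                      // Safe no-op on absent storage.
        MOZ_ASSERT(map.empty());
        MOZ_ASSERT(map.count() == 0);
        MOZ_ASSERT(map.capacity() == 0);  // Zero until first use.
    }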
 
--- a/js/src/frontend/BinSourceRuntimeSupport.h
+++ b/js/src/frontend/BinSourceRuntimeSupport.h
@@ -59,16 +59,18 @@ struct BinaryASTSupport {
         }
         static bool match(const Lookup key, Lookup lookup) {
             if (key.byteLen_ != lookup.byteLen_)
                 return false;
             return strncmp(key.start_, lookup.start_, key.byteLen_) == 0;
         }
     };
 
+    BinaryASTSupport();
+
     JS::Result<const BinVariant*>  binVariant(JSContext*, const CharSlice);
     JS::Result<const BinField*> binField(JSContext*, const CharSlice);
     JS::Result<const BinKind*> binKind(JSContext*,  const CharSlice);
 
   private:
     // A HashMap that can be queried without copies from a CharSlice key.
     // Initialized on first call. Keys are CharSlices into static strings.
     using BinKindMap = js::HashMap<const CharSlice, BinKind, CharSlice, js::SystemAllocPolicy>;
--- a/js/src/frontend/BinToken.cpp
+++ b/js/src/frontend/BinToken.cpp
@@ -68,25 +68,27 @@ const char* describeBinField(const BinFi
 
 const char* describeBinVariant(const BinVariant& variant)
 {
     return getBinVariant(variant).begin();
 }
 
 } // namespace frontend
 
+BinaryASTSupport::BinaryASTSupport()
+  : binKindMap_(frontend::BINKIND_LIMIT)
+  , binFieldMap_(frontend::BINFIELD_LIMIT)
+  , binVariantMap_(frontend::BINVARIANT_LIMIT)
+{
+}
 
 JS::Result<const js::frontend::BinKind*>
 BinaryASTSupport::binKind(JSContext* cx, const CharSlice key)
 {
-    if (!binKindMap_.initialized()) {
-        // Initialize lazily.
-        if (!binKindMap_.init(frontend::BINKIND_LIMIT))
-            return ReportOutOfMemoryResult(cx);
-
+    if (binKindMap_.empty()) {
         for (size_t i = 0; i < frontend::BINKIND_LIMIT; ++i) {
             const BinKind variant = static_cast<BinKind>(i);
             const CharSlice& key = getBinKind(variant);
             auto ptr = binKindMap_.lookupForAdd(key);
             MOZ_ASSERT(!ptr);
             if (!binKindMap_.add(ptr, key, variant))
                 return ReportOutOfMemoryResult(cx);
         }
@@ -95,22 +97,19 @@ BinaryASTSupport::binKind(JSContext* cx,
     auto ptr = binKindMap_.lookup(key);
     if (!ptr)
         return nullptr;
 
     return &ptr->value();
 }
 
 JS::Result<const js::frontend::BinVariant*>
-BinaryASTSupport::binVariant(JSContext* cx, const CharSlice key) {
-    if (!binVariantMap_.initialized()) {
-        // Initialize lazily.
-        if (!binVariantMap_.init(frontend::BINVARIANT_LIMIT))
-            return ReportOutOfMemoryResult(cx);
-
+BinaryASTSupport::binVariant(JSContext* cx, const CharSlice key)
+{
+    if (binVariantMap_.empty()) {
         for (size_t i = 0; i < frontend::BINVARIANT_LIMIT; ++i) {
             const BinVariant variant = static_cast<BinVariant>(i);
             const CharSlice& key = getBinVariant(variant);
             auto ptr = binVariantMap_.lookupForAdd(key);
             MOZ_ASSERT(!ptr);
             if (!binVariantMap_.add(ptr, key, variant))
                 return ReportOutOfMemoryResult(cx);
         }
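
The BinaryASTSupport maps trade initialized() for empty() as their
populate-on-first-query guard, which is sound because entries are only ever
added to these maps, never removed. A hedged sketch of the pattern with a
hypothetical fill loop:

    #include <stdint.h>

    #include "mozilla/HashTable.h"

    static bool
    EnsurePopulated(mozilla::HashMap<uint32_t, uint32_t>& map)
    {
        if (!map.empty())
            return true;              // Already filled on an earlier call.
        for (uint32_t i = 0; i < 16; i++) {
            if (!map.putNew(i, i * i))
                return false;         // OOM; the map stays safely reusable.
        }
        return true;
    }
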
--- a/js/src/frontend/BinTokenReaderMultipart.cpp
+++ b/js/src/frontend/BinTokenReaderMultipart.cpp
@@ -120,18 +120,16 @@ BinTokenReaderMultipart::readHeader()
         return raiseError("Too many entries in strings table");
 
     // This table maps String index -> String.
     // Initialize and populate.
     if (!atomsTable_.reserve(stringsNumberOfEntries))
         return raiseOOM();
     if (!slicesTable_.reserve(stringsNumberOfEntries))
         return raiseOOM();
-    if (!variantsTable_.init())
-        return raiseOOM();
 
     RootedAtom atom(cx_);
     for (uint32_t i = 0; i < stringsNumberOfEntries; ++i) {
         BINJS_MOZ_TRY_DECL(byteLen, readInternalUint32());
         if (current_ + byteLen > stop_ || current_ + byteLen < current_)
             return raiseError("Invalid byte length in individual string");
 
         // Check null string.
--- a/js/src/frontend/BytecodeCompiler.cpp
+++ b/js/src/frontend/BytecodeCompiler.cpp
@@ -218,18 +218,16 @@ BytecodeCompiler::canLazilyParse()
            // happen with lazy parsing.
            !mozilla::recordreplay::IsRecordingOrReplaying();
 }
 
 bool
 BytecodeCompiler::createParser(ParseGoal goal)
 {
     usedNames.emplace(cx);
-    if (!usedNames->init())
-        return false;
 
     if (canLazilyParse()) {
         syntaxParser.emplace(cx, alloc, options, sourceBuffer.get(), sourceBuffer.length(),
                              /* foldConstants = */ false, *usedNames, nullptr, nullptr,
                              sourceObject, goal);
         if (!syntaxParser->checkOptions())
             return false;
     }
@@ -405,18 +403,16 @@ BytecodeCompiler::compileModule()
         return nullptr;
 
     if (!createScript())
         return nullptr;
 
     module->init(script);
 
     ModuleBuilder builder(cx, module, parser->anyChars);
-    if (!builder.init())
-        return nullptr;
 
     ModuleSharedContext modulesc(cx, module, enclosingScope, builder);
     ParseNode* pn = parser->moduleBody(&modulesc);
     if (!pn)
         return nullptr;
 
     Maybe<BytecodeEmitter> emitter;
     if (!emplaceEmitter(emitter, &modulesc))
@@ -624,18 +620,16 @@ frontend::CompileGlobalScript(JSContext*
 
 JSScript*
 frontend::CompileGlobalBinASTScript(JSContext* cx, LifoAlloc& alloc, const ReadOnlyCompileOptions& options,
                                     const uint8_t* src, size_t len, ScriptSourceObject** sourceObjectOut)
 {
     AutoAssertReportedException assertException(cx);
 
     frontend::UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return nullptr;
 
     RootedScriptSourceObject sourceObj(cx, CreateScriptSourceObject(cx, options));
 
     if (!sourceObj)
         return nullptr;
 
     RootedScript script(cx, JSScript::Create(cx, options, sourceObj, 0, len, 0, len));
 
@@ -814,18 +808,16 @@ frontend::CompileLazyFunction(JSContext*
         // will classify addons alongside with web-facing code.
         const int HISTOGRAM = cx->runningWithTrustedPrincipals()
             ? JS_TELEMETRY_PRIVILEGED_PARSER_COMPILE_LAZY_AFTER_MS
             : JS_TELEMETRY_WEB_PARSER_COMPILE_LAZY_AFTER_MS;
         cx->runtime()->addTelemetry(HISTOGRAM, delta.ToMilliseconds());
     }
 
     UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, &lazy->sourceObject());
     Parser<FullParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(), options, chars, length,
                                               /* foldConstants = */ true, usedNames, nullptr,
                                               lazy, sourceObject, lazy->parseGoal());
     if (!parser.checkOptions())
         return false;
 
--- a/js/src/frontend/ParseContext.h
+++ b/js/src/frontend/ParseContext.h
@@ -116,20 +116,16 @@ class UsedNameTracker
 
   public:
     explicit UsedNameTracker(JSContext* cx)
       : map_(cx),
         scriptCounter_(0),
         scopeCounter_(0)
     { }
 
-    MOZ_MUST_USE bool init() {
-        return map_.init();
-    }
-
     uint32_t nextScriptId() {
         MOZ_ASSERT(scriptCounter_ != UINT32_MAX,
                    "ParseContext::Scope::init should have prevented wraparound");
         return scriptCounter_++;
     }
 
     uint32_t nextScopeId() {
         MOZ_ASSERT(scopeCounter_ != UINT32_MAX);
--- a/js/src/fuzz-tests/testBinASTReader.cpp
+++ b/js/src/fuzz-tests/testBinASTReader.cpp
@@ -46,20 +46,16 @@ testBinASTReaderFuzz(const uint8_t* buf,
 
     js::Vector<uint8_t> binSource(gCx);
     if (!binSource.append(buf, size)) {
         ReportOutOfMemory(gCx);
         return 0;
     }
 
     js::frontend::UsedNameTracker binUsedNames(gCx);
-    if (!binUsedNames.init()) {
-        ReportOutOfMemory(gCx);
-        return 0;
-    }
 
     js::frontend::BinASTParser<js::frontend::BinTokenReaderTester> reader(gCx, gCx->tempLifoAlloc(), binUsedNames, options);
 
     // Will be deallocated once `reader` goes out of scope.
     auto binParsed = reader.parse(binSource);
     RootedValue binExn(gCx);
     if (binParsed.isErr()) {
         js::GetAndClearException(gCx, &binExn);
--- a/js/src/gc/GC.cpp
+++ b/js/src/gc/GC.cpp
@@ -958,16 +958,17 @@ GCRuntime::releaseArena(Arena* arena, co
 
 GCRuntime::GCRuntime(JSRuntime* rt) :
     rt(rt),
     systemZone(nullptr),
     atomsZone(nullptr),
     stats_(rt),
     marker(rt),
     usage(nullptr),
+    rootsHash(256),
     nextCellUniqueId_(LargestTaggedNullCellPointer + 1), // Ensure disjoint from null tagged pointers.
     numArenasFreeCommitted(0),
     verifyPreData(nullptr),
     chunkAllocationSinceLastGC(false),
     lastGCTime(ReallyNow()),
     mode(TuningDefaults::Mode),
     numActiveZoneIters(0),
     cleanUpEverything(false),
@@ -1282,19 +1283,16 @@ js::gc::DumpArenaInfo()
  */
 static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20;
 
 bool
 GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes)
 {
     MOZ_ASSERT(SystemPageSize());
 
-    if (!rootsHash.ref().init(256))
-        return false;
-
     {
         AutoLockGCBgAlloc lock(rt);
 
         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock));
         MOZ_ALWAYS_TRUE(tunables.setParameter(JSGC_MAX_NURSERY_BYTES, maxNurseryBytes, lock));
         setMaxMallocBytes(TuningDefaults::MaxMallocBytes, lock);
 
         const char* size = getenv("JSGC_MARK_STACK_LIMIT");
@@ -4694,19 +4692,16 @@ js::gc::MarkingValidator::nonIncremental
 {
     /*
      * Perform a non-incremental mark for all collecting zones and record
      * the results for later comparison.
      *
      * Currently this does not validate gray marking.
      */
 
-    if (!map.init())
-        return;
-
     JSRuntime* runtime = gc->rt;
     GCMarker* gcmarker = &gc->marker;
 
     gc->waitBackgroundSweepEnd();
 
     /* Wait for off-thread parsing which can allocate. */
     HelperThreadState().waitForAllThreads();
 
@@ -4727,18 +4722,16 @@ js::gc::MarkingValidator::nonIncremental
     }
 
     /*
      * Temporarily clear the weakmaps' mark flags for the compartments we are
      * collecting.
      */
 
     WeakMapSet markedWeakMaps;
-    if (!markedWeakMaps.init())
-        return;
 
     /*
      * For saving, smush all of the keys into one big table and split them back
      * up into per-zone tables when restoring.
      */
     gc::WeakKeyTable savedWeakKeys(SystemAllocPolicy(), runtime->randomHashCodeScrambler());
     if (!savedWeakKeys.init())
         return;
@@ -8072,17 +8065,17 @@ js::NewRealm(JSContext* cx, JSPrincipals
             return nullptr;
         }
 
         zone = zoneHolder.get();
     }
 
     if (!comp) {
         compHolder = cx->make_unique<JS::Compartment>(zone);
-        if (!compHolder || !compHolder->init(cx))
+        if (!compHolder)
             return nullptr;
 
         comp = compHolder.get();
     }
 
     UniquePtr<Realm> realm(cx->new_<Realm>(comp, options));
     if (!realm || !realm->init(cx, principals))
         return nullptr;
@@ -8249,19 +8242,16 @@ GCRuntime::mergeRealms(Realm* source, Re
     if (rt->lcovOutput().isEnabled() && source->scriptNameMap) {
         AutoEnterOOMUnsafeRegion oomUnsafe;
 
         if (!target->scriptNameMap) {
             target->scriptNameMap = cx->make_unique<ScriptNameMap>();
 
             if (!target->scriptNameMap)
                 oomUnsafe.crash("Failed to create a script name map.");
-
-            if (!target->scriptNameMap->init())
-                oomUnsafe.crash("Failed to initialize a script name map.");
         }
 
         for (ScriptNameMap::Range r = source->scriptNameMap->all(); !r.empty(); r.popFront()) {
             JSScript* key = r.front().key();
             auto value = std::move(r.front().value());
             if (!target->scriptNameMap->putNew(key, std::move(value)))
                 oomUnsafe.crash("Failed to add an entry in the script name map.");
         }
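
rootsHash(256) above shows why construction can take a length where init(256)
used to be: the length is only a sizing hint and, given lazy storage, does not
allocate, so it cannot fail and fits in a member-init list. A sketch
(hypothetical class, assuming the hint defers allocation to first use):

    #include "mozilla/HashTable.h"

    class RootRegistry
    {
        mozilla::HashMap<void*, const char*> roots_;

      public:
        RootRegistry()
          : roots_(256)   // Size hint only; no allocation, no failure path.
        {}
    };
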
--- a/js/src/gc/HashUtil.h
+++ b/js/src/gc/HashUtil.h
@@ -19,17 +19,17 @@ namespace js {
  */
 template <class T>
 struct DependentAddPtr
 {
     typedef typename T::AddPtr AddPtr;
     typedef typename T::Entry Entry;
 
     template <class Lookup>
-    DependentAddPtr(const JSContext* cx, const T& table, const Lookup& lookup)
+    DependentAddPtr(const JSContext* cx, T& table, const Lookup& lookup)
       : addPtr(table.lookupForAdd(lookup))
       , originalGcNumber(cx->zone()->gcNumber())
     {}
 
     DependentAddPtr(DependentAddPtr&& other)
       : addPtr(other.addPtr)
       , originalGcNumber(other.originalGcNumber)
     {}
@@ -51,17 +51,17 @@ struct DependentAddPtr
     }
 
     bool found() const                 { return addPtr.found(); }
     explicit operator bool() const     { return found(); }
     const Entry& operator*() const     { return *addPtr; }
     const Entry* operator->() const    { return &*addPtr; }
 
   private:
-    AddPtr addPtr ;
+    AddPtr addPtr;
     const uint64_t originalGcNumber;
 
     template <class KeyInput>
     void refreshAddPtr(JSContext* cx, T& table, const KeyInput& key) {
         bool gcHappened = originalGcNumber != cx->zone()->gcNumber();
         if (gcHappened)
             addPtr = table.lookupForAdd(key);
     }
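
DependentAddPtr now takes T& because lookupForAdd() may instantiate the entry
storage and so lost its const qualifier; plain lookup() never allocates (an
absent table just reports "not found") and stays const. A sketch of the
resulting const-correctness (hypothetical helper):

    #include "mozilla/HashTable.h"

    static bool
    Contains(const mozilla::HashMap<int, int>& map, int key)
    {
        return map.lookup(key).found();  // Fine on a const, even empty, map.
        // map.lookupForAdd(key);        // Would not compile: non-const now.
    }
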
--- a/js/src/gc/Marking.cpp
+++ b/js/src/gc/Marking.cpp
@@ -2706,17 +2706,16 @@ TenuringTracer::traverse(T* thingp)
 } // namespace js
 
 template <typename T>
 void
 js::gc::StoreBuffer::MonoTypeBuffer<T>::trace(StoreBuffer* owner, TenuringTracer& mover)
 {
     mozilla::ReentrancyGuard g(*owner);
     MOZ_ASSERT(owner->isEnabled());
-    MOZ_ASSERT(stores_.initialized());
     if (last_)
         last_.trace(mover);
     for (typename StoreSet::Range r = stores_.all(); !r.empty(); r.popFront())
         r.front().trace(mover);
 }
 
 namespace js {
 namespace gc {
--- a/js/src/gc/Nursery.cpp
+++ b/js/src/gc/Nursery.cpp
@@ -44,17 +44,16 @@ using mozilla::TimeStamp;
 
 constexpr uintptr_t CanaryMagicValue = 0xDEADB15D;
 
 struct js::Nursery::FreeMallocedBuffersTask : public GCParallelTaskHelper<FreeMallocedBuffersTask>
 {
     explicit FreeMallocedBuffersTask(FreeOp* fop)
       : GCParallelTaskHelper(fop->runtime()),
         fop_(fop) {}
-    bool init() { return buffers_.init(); }
     void transferBuffersToFree(MallocedBuffersSet& buffersToFree,
                                const AutoLockHelperThreadState& lock);
     ~FreeMallocedBuffersTask() { join(); }
 
     void run();
 
   private:
     FreeOp* fop_;
@@ -144,21 +143,18 @@ js::Nursery::Nursery(JSRuntime* rt)
     const char* env = getenv("MOZ_NURSERY_STRINGS");
     if (env && *env)
         canAllocateStrings_ = (*env == '1');
 }
 
 bool
 js::Nursery::init(uint32_t maxNurseryBytes, AutoLockGCBgAlloc& lock)
 {
-    if (!mallocedBuffers.init())
-        return false;
-
     freeMallocedBuffersTask = js_new<FreeMallocedBuffersTask>(runtime()->defaultFreeOp());
-    if (!freeMallocedBuffersTask || !freeMallocedBuffersTask->init())
+    if (!freeMallocedBuffersTask)
         return false;
 
     // The nursery is permanently disabled when recording or replaying. Nursery
     // collections may occur at non-deterministic points in execution.
     if (mozilla::recordreplay::IsRecordingOrReplaying())
         maxNurseryBytes = 0;
 
     /* maxNurseryBytes parameter is rounded down to a multiple of chunk size. */
@@ -495,18 +491,16 @@ Nursery::setIndirectForwardingPointer(vo
     MOZ_ASSERT(isInside(oldData));
 
     // Bug 1196210: If a zero-capacity header lands in the last 2 words of a
     // jemalloc chunk abutting the start of a nursery chunk, the (invalid)
     // newData pointer will appear to be "inside" the nursery.
     MOZ_ASSERT(!isInside(newData) || (uintptr_t(newData) & ChunkMask) == 0);
 
     AutoEnterOOMUnsafeRegion oomUnsafe;
-    if (!forwardedBuffers.initialized() && !forwardedBuffers.init())
-        oomUnsafe.crash("Nursery::setForwardingPointer");
 #ifdef DEBUG
     if (ForwardedBufferMap::Ptr p = forwardedBuffers.lookup(oldData))
         MOZ_ASSERT(p->value() == newData);
 #endif
     if (!forwardedBuffers.put(oldData, newData))
         oomUnsafe.crash("Nursery::setForwardingPointer");
 }
 
@@ -525,21 +519,19 @@ js::Nursery::forwardBufferPointer(HeapSl
     HeapSlot* old = *pSlotsElems;
 
     if (!isInside(old))
         return;
 
     // The new location for this buffer is either stored inline with it or in
     // the forwardedBuffers table.
     do {
-        if (forwardedBuffers.initialized()) {
-            if (ForwardedBufferMap::Ptr p = forwardedBuffers.lookup(old)) {
-                *pSlotsElems = reinterpret_cast<HeapSlot*>(p->value());
-                break;
-            }
+        if (ForwardedBufferMap::Ptr p = forwardedBuffers.lookup(old)) {
+            *pSlotsElems = reinterpret_cast<HeapSlot*>(p->value());
+            break;
         }
 
         *pSlotsElems = *reinterpret_cast<HeapSlot**>(old);
     } while (false);
 
     MOZ_ASSERT(!isInside(*pSlotsElems));
     MOZ_ASSERT(IsWriteableAddress(*pSlotsElems));
 }
@@ -924,17 +916,17 @@ js::Nursery::doCollection(JS::gcreason::
     // tenured.
     startProfile(ProfileKey::Sweep);
     sweep(&mover);
     endProfile(ProfileKey::Sweep);
 
     // Update any slot or element pointers whose destination has been tenured.
     startProfile(ProfileKey::UpdateJitActivations);
     js::jit::UpdateJitActivationsForMinorGC(rt);
-    forwardedBuffers.finish();
+    forwardedBuffers.clearAndCompact();
     endProfile(ProfileKey::UpdateJitActivations);
 
     startProfile(ProfileKey::ObjectsTenuredCallback);
     rt->gc.callObjectsTenuredCallback();
     endProfile(ProfileKey::ObjectsTenuredCallback);
 
     // Sweep.
     startProfile(ProfileKey::FreeMallocedBuffers);
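
forwardedBuffers no longer needs the initialized()-then-init() preamble under
the OOM-unsafe region: put() grows a zero-capacity table through the same
checkOverloaded() path that grows a full one, so a single failure check covers
the first-time storage allocation and the insertion alike. A sketch
(hypothetical names):

    #include "js/Utility.h"
    #include "mozilla/HashTable.h"

    static void
    ForwardInfallibly(mozilla::HashMap<void*, void*>& map, void* from, void* to)
    {
        js::AutoEnterOOMUnsafeRegion oomUnsafe;
        if (!map.put(from, to))  // May allocate the table itself.
            oomUnsafe.crash("ForwardInfallibly");
    }
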
--- a/js/src/gc/Nursery.h
+++ b/js/src/gc/Nursery.h
@@ -283,18 +283,16 @@ class Nursery
     }
 
     MOZ_MUST_USE bool queueDictionaryModeObjectToSweep(NativeObject* obj);
 
     size_t sizeOfHeapCommitted() const {
         return allocatedChunkCount() * gc::ChunkSize;
     }
     size_t sizeOfMallocedBuffers(mozilla::MallocSizeOf mallocSizeOf) const {
-        if (!mallocedBuffers.initialized())
-            return 0;
         size_t total = 0;
         for (MallocedBuffersSet::Range r = mallocedBuffers.all(); !r.empty(); r.popFront())
             total += mallocSizeOf(r.front());
         total += mallocedBuffers.shallowSizeOfExcludingThis(mallocSizeOf);
         return total;
     }
 
     // The number of bytes from the start position to the end of the nursery.
--- a/js/src/gc/NurseryAwareHashMap.h
+++ b/js/src/gc/NurseryAwareHashMap.h
@@ -85,18 +85,18 @@ class NurseryAwareHashMap
 
   public:
     using Lookup = typename MapType::Lookup;
     using Ptr = typename MapType::Ptr;
     using Range = typename MapType::Range;
     using Entry = typename MapType::Entry;
 
     explicit NurseryAwareHashMap(AllocPolicy a = AllocPolicy()) : map(a) {}
-
-    MOZ_MUST_USE bool init(uint32_t len = 16) { return map.init(len); }
+    explicit NurseryAwareHashMap(size_t length) : map(length) {}
+    NurseryAwareHashMap(AllocPolicy a, size_t length) : map(a, length) {}
 
     bool empty() const { return map.empty(); }
     Ptr lookup(const Lookup& l) const { return map.lookup(l); }
     void remove(Ptr p) { map.remove(p); }
     Range all() const { return map.all(); }
     struct Enum : public MapType::Enum {
         explicit Enum(NurseryAwareHashMap& namap) : MapType::Enum(namap.map) {}
     };
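
Wrapper types like NurseryAwareHashMap forward sizing through constructor
overloads rather than an init(len) that callers could forget. A sketch of the
same pattern on a hypothetical wrapper:

    #include <stdint.h>

    #include "mozilla/HashTable.h"

    template <typename K, typename V>
    class MapWrapper
    {
        mozilla::HashMap<K, V> map_;

      public:
        MapWrapper() = default;
        explicit MapWrapper(uint32_t length)
          : map_(length)   // Pass the hint through; still no allocation.
        {}
    };
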
--- a/js/src/gc/RootMarking.cpp
+++ b/js/src/gc/RootMarking.cpp
@@ -427,18 +427,17 @@ class AssertNoRootsTracer : public JS::C
 
 void
 js::gc::GCRuntime::finishRoots()
 {
     AutoNoteSingleThreadedRegion anstr;
 
     rt->finishAtoms();
 
-    if (rootsHash.ref().initialized())
-        rootsHash.ref().clear();
+    rootsHash.ref().clear();
 
     rt->finishPersistentRoots();
 
     rt->finishSelfHosting();
 
     for (RealmsIter r(rt); !r.done(); r.next())
         r->finishRoots();
 
--- a/js/src/gc/StoreBuffer.cpp
+++ b/js/src/gc/StoreBuffer.cpp
@@ -45,20 +45,17 @@ StoreBuffer::checkEmpty() const
 bool
 StoreBuffer::enable()
 {
     if (enabled_)
         return true;
 
     checkEmpty();
 
-    if (!bufferVal.init() ||
-        !bufferCell.init() ||
-        !bufferSlot.init() ||
-        !bufferWholeCell.init() ||
+    if (!bufferWholeCell.init() ||
         !bufferGeneric.init())
     {
         return false;
     }
 
     enabled_ = true;
     return true;
 }
--- a/js/src/gc/StoreBuffer.h
+++ b/js/src/gc/StoreBuffer.h
@@ -78,51 +78,40 @@ class StoreBuffer
          * temporary instances of HeapPtr.
          */
         T last_;
 
         /* Maximum number of entries before we request a minor GC. */
         const static size_t MaxEntries = 48 * 1024 / sizeof(T);
 
         explicit MonoTypeBuffer() : last_(T()) {}
-        ~MonoTypeBuffer() { stores_.finish(); }
-
-        MOZ_MUST_USE bool init() {
-            if (!stores_.initialized() && !stores_.init())
-                return false;
-            clear();
-            return true;
-        }
 
         void clear() {
             last_ = T();
-            if (stores_.initialized())
-                stores_.clear();
+            stores_.clear();
         }
 
         /* Add one item to the buffer. */
         void put(StoreBuffer* owner, const T& t) {
-            MOZ_ASSERT(stores_.initialized());
             sinkStore(owner);
             last_ = t;
         }
 
         /* Remove an item from the store buffer. */
         void unput(StoreBuffer* owner, const T& v) {
             // Fast, hashless remove of last put.
             if (last_ == v) {
                 last_ = T();
                 return;
             }
             stores_.remove(v);
         }
 
         /* Move any buffered stores to the canonical store set. */
         void sinkStore(StoreBuffer* owner) {
-            MOZ_ASSERT(stores_.initialized());
             if (last_) {
                 AutoEnterOOMUnsafeRegion oomUnsafe;
                 if (!stores_.put(last_))
                     oomUnsafe.crash("Failed to allocate for MonoTypeBuffer::put.");
             }
             last_ = T();
 
             if (MOZ_UNLIKELY(stores_.count() > MaxEntries))
@@ -137,17 +126,17 @@ class StoreBuffer
         /* Trace the source of all edges in the store buffer. */
         void trace(StoreBuffer* owner, TenuringTracer& mover);
 
         size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) {
             return stores_.shallowSizeOfExcludingThis(mallocSizeOf);
         }
 
         bool isEmpty() const {
-            return last_ == T() && (!stores_.initialized() || stores_.empty());
+            return last_ == T() && stores_.empty();
         }
 
       private:
         MonoTypeBuffer(const MonoTypeBuffer& other) = delete;
         MonoTypeBuffer& operator=(const MonoTypeBuffer& other) = delete;
     };
 
     struct WholeCellBuffer
--- a/js/src/gc/Verifier.cpp
+++ b/js/src/gc/Verifier.cpp
@@ -207,19 +207,16 @@ gc::GCRuntime::startVerifyPreBarriers()
 
     const size_t size = 64 * 1024 * 1024;
     trc->root = (VerifyNode*)js_malloc(size);
     if (!trc->root)
         goto oom;
     trc->edgeptr = (char*)trc->root;
     trc->term = trc->edgeptr + size;
 
-    if (!trc->nodemap.init())
-        goto oom;
-
     /* Create the root node. */
     trc->curnode = MakeNode(trc, nullptr, JS::TraceKind(0));
 
     incrementalState = State::MarkRoots;
 
     /* Make all the roots be edges emanating from the root node. */
     traceRuntime(trc, prep);
 
@@ -453,17 +450,16 @@ js::gc::GCRuntime::finishVerifier()
 #endif /* JS_GC_ZEAL */
 
 #if defined(JSGC_HASH_TABLE_CHECKS) || defined(DEBUG)
 
 class HeapCheckTracerBase : public JS::CallbackTracer
 {
   public:
     explicit HeapCheckTracerBase(JSRuntime* rt, WeakMapTraceKind weakTraceKind);
-    bool init();
     bool traceHeap(AutoTraceSession& session);
     virtual void checkCell(Cell* cell) = 0;
 
   protected:
     void dumpCellInfo(Cell* cell);
     void dumpCellPath();
 
     Cell* parentCell() {
@@ -500,22 +496,16 @@ HeapCheckTracerBase::HeapCheckTracerBase
     oom(false),
     parentIndex(-1)
 {
 #ifdef DEBUG
     setCheckEdges(false);
 #endif
 }
 
-bool
-HeapCheckTracerBase::init()
-{
-    return visited.init();
-}
-
 void
 HeapCheckTracerBase::onChild(const JS::GCCellPtr& thing)
 {
     Cell* cell = thing.asCell();
     checkCell(cell);
 
     if (visited.lookup(cell))
         return;
@@ -669,18 +659,17 @@ js::gc::CheckHeapAfterGC(JSRuntime* rt)
     CheckHeapTracer::GCType gcType;
 
     if (rt->gc.nursery().isEmpty())
         gcType = CheckHeapTracer::GCType::Moving;
     else
         gcType = CheckHeapTracer::GCType::NonMoving;
 
     CheckHeapTracer tracer(rt, gcType);
-    if (tracer.init())
-        tracer.check(session);
+    tracer.check(session);
 }
 
 #endif /* JSGC_HASH_TABLE_CHECKS */
 
 #if defined(JS_GC_ZEAL) || defined(DEBUG)
 
 class CheckGrayMarkingTracer final : public HeapCheckTracerBase
 {
@@ -738,15 +727,13 @@ js::CheckGrayMarkingState(JSRuntime* rt)
     MOZ_ASSERT(!JS::RuntimeHeapIsCollecting());
     MOZ_ASSERT(!rt->gc.isIncrementalGCInProgress());
     if (!rt->gc.areGrayBitsValid())
         return true;
 
     gcstats::AutoPhase ap(rt->gc.stats(), gcstats::PhaseKind::TRACE_HEAP);
     AutoTraceSession session(rt);
     CheckGrayMarkingTracer tracer(rt);
-    if (!tracer.init())
-        return true; // Ignore failure
 
     return tracer.check(session);
 }
 
 #endif // defined(JS_GC_ZEAL) || defined(DEBUG)
--- a/js/src/gc/WeakMap-inl.h
+++ b/js/src/gc/WeakMap-inl.h
@@ -23,27 +23,19 @@ template <typename T>
 static T* extractUnbarriered(T* v)
 {
     return v;
 }
 
 template <class K, class V, class HP>
 WeakMap<K, V, HP>::WeakMap(JSContext* cx, JSObject* memOf)
   : Base(cx->zone()), WeakMapBase(memOf, cx->zone())
-{}
-
-template <class K, class V, class HP>
-bool
-WeakMap<K, V, HP>::init(uint32_t len)
 {
-    if (!Base::init(len))
-        return false;
     zone()->gcWeakMapList().insertFront(this);
     marked = JS::IsIncrementalGCInProgress(TlsContext.get());
-    return true;
 }
 
 // Trace a WeakMap entry based on 'markedCell' getting marked, where 'origKey'
 // is the key in the weakmap. These will probably be the same, but can be
 // different eg when markedCell is a delegate for origKey.
 //
 // This implementation does not use 'markedCell'; it looks up origKey and checks
 // the mark bits on everything it cares about, one of which will be
@@ -75,19 +67,16 @@ WeakMap<K, V, HP>::markEntry(GCMarker* m
 template <class K, class V, class HP>
 void
 WeakMap<K, V, HP>::trace(JSTracer* trc)
 {
     MOZ_ASSERT_IF(JS::RuntimeHeapIsBusy(), isInList());
 
     TraceNullableEdge(trc, &memberOf, "WeakMap owner");
 
-    if (!Base::initialized())
-        return;
-
     if (trc->isMarkingTracer()) {
         MOZ_ASSERT(trc->weakMapAction() == ExpandWeakMaps);
         marked = true;
         (void) markIteratively(GCMarker::fromTracer(trc));
         return;
     }
 
     if (trc->weakMapAction() == DoNotTraceWeakMaps)
--- a/js/src/gc/WeakMap.cpp
+++ b/js/src/gc/WeakMap.cpp
@@ -76,18 +76,17 @@ WeakMapBase::findInterZoneEdges(JS::Zone
 void
 WeakMapBase::sweepZone(JS::Zone* zone)
 {
     for (WeakMapBase* m = zone->gcWeakMapList().getFirst(); m; ) {
         WeakMapBase* next = m->getNext();
         if (m->marked) {
             m->sweep();
         } else {
-            /* Destroy the hash map now to catch any use after this point. */
-            m->finish();
+            m->clearAndCompact();
             m->removeFrom(zone->gcWeakMapList());
         }
         m = next;
     }
 
 #ifdef DEBUG
     for (WeakMapBase* m : zone->gcWeakMapList())
         MOZ_ASSERT(m->isInList() && m->marked);
@@ -152,69 +151,58 @@ ObjectValueMap::findZoneEdges()
     }
     return true;
 }
 
 ObjectWeakMap::ObjectWeakMap(JSContext* cx)
   : map(cx, nullptr)
 {}
 
-bool
-ObjectWeakMap::init()
-{
-    return map.init();
-}
-
 JSObject*
 ObjectWeakMap::lookup(const JSObject* obj)
 {
-    MOZ_ASSERT(map.initialized());
     if (ObjectValueMap::Ptr p = map.lookup(const_cast<JSObject*>(obj)))
         return &p->value().toObject();
     return nullptr;
 }
 
 bool
 ObjectWeakMap::add(JSContext* cx, JSObject* obj, JSObject* target)
 {
     MOZ_ASSERT(obj && target);
-    MOZ_ASSERT(map.initialized());
 
     MOZ_ASSERT(!map.has(obj));
     if (!map.put(obj, ObjectValue(*target))) {
         ReportOutOfMemory(cx);
         return false;
     }
 
     return true;
 }
 
 void
 ObjectWeakMap::clear()
 {
-    MOZ_ASSERT(map.initialized());
     map.clear();
 }
 
 void
 ObjectWeakMap::trace(JSTracer* trc)
 {
     map.trace(trc);
 }
 
 size_t
 ObjectWeakMap::sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf)
 {
-    MOZ_ASSERT(map.initialized());
     return map.shallowSizeOfExcludingThis(mallocSizeOf);
 }
 
 #ifdef JSGC_HASH_TABLE_CHECKS
 void
 ObjectWeakMap::checkAfterMovingGC()
 {
-    MOZ_ASSERT(map.initialized());
     for (ObjectValueMap::Range r = map.all(); !r.empty(); r.popFront()) {
         CheckGCThingAfterMovingGC(r.front().key().get());
         CheckGCThingAfterMovingGC(&r.front().value().toObject());
     }
 }
 #endif // JSGC_HASH_TABLE_CHECKS
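
The map.initialized() asserts dropped above guarded loops and size
computations that are now unconditionally safe: all() on a table whose storage
was never allocated yields an empty Range, so such loops simply do not run. A
sketch (hypothetical helper):

    #include <stddef.h>

    #include "mozilla/HashTable.h"

    static size_t
    CountEntries(const mozilla::HashSet<int>& set)
    {
        size_t n = 0;
        for (mozilla::HashSet<int>::Range r = set.all(); !r.empty(); r.popFront())
            n++;
        return n;  // Zero for a never-touched set.
    }
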
--- a/js/src/gc/WeakMap.h
+++ b/js/src/gc/WeakMap.h
@@ -87,17 +87,17 @@ class WeakMapBase : public mozilla::Link
 
   protected:
     // Instance member functions called by the above. Instantiations of WeakMap override
     // these with definitions appropriate for their Key and Value types.
     virtual void trace(JSTracer* tracer) = 0;
     virtual bool findZoneEdges() = 0;
     virtual void sweep() = 0;
     virtual void traceMappings(WeakMapTracer* tracer) = 0;
-    virtual void finish() = 0;
+    virtual void clearAndCompact() = 0;
 
     // Any weakmap key types that want to participate in the non-iterative
     // ephemeron marking must override this method.
     virtual void markEntry(GCMarker* marker, gc::Cell* markedCell, JS::GCCellPtr l) = 0;
 
     virtual bool markIteratively(GCMarker* marker) = 0;
 
   protected:
@@ -122,29 +122,27 @@ class WeakMap : public HashMap<Key, Valu
     typedef typename Base::Lookup Lookup;
     typedef typename Base::Entry Entry;
     typedef typename Base::Range Range;
     typedef typename Base::Ptr Ptr;
     typedef typename Base::AddPtr AddPtr;
 
     explicit WeakMap(JSContext* cx, JSObject* memOf = nullptr);
 
-    bool init(uint32_t len = 16);
-
     // Overwritten to add a read barrier to prevent an incorrectly gray value
     // from escaping the weak map. See the UnmarkGrayTracer::onChild comment in
     // gc/Marking.cpp.
     Ptr lookup(const Lookup& l) const {
         Ptr p = Base::lookup(l);
         if (p)
             exposeGCThingToActiveJS(p->value());
         return p;
     }
 
-    AddPtr lookupForAdd(const Lookup& l) const {
+    AddPtr lookupForAdd(const Lookup& l) {
         AddPtr p = Base::lookupForAdd(l);
         if (p)
             exposeGCThingToActiveJS(p->value());
         return p;
     }
 
     // Resolve ambiguity with LinkedListElement<>::remove.
     using Base::remove;
@@ -173,18 +171,19 @@ class WeakMap : public HashMap<Key, Valu
 
     bool findZoneEdges() override {
         // This is overridden by ObjectValueMap.
         return true;
     }
 
     void sweep() override;
 
-    void finish() override {
-        Base::finish();
+    void clearAndCompact() override {
+        Base::clear();
+        Base::compact();
     }
 
     /* memberOf can be nullptr, which means that the map is not part of a JSObject. */
     void traceMappings(WeakMapTracer* tracer) override;
 
   protected:
 #if DEBUG
     void assertEntriesNotAboutToBeFinalized();
@@ -207,17 +206,16 @@ class ObjectValueMap : public WeakMap<He
 
 // Generic weak map for mapping objects to other objects.
 class ObjectWeakMap
 {
     ObjectValueMap map;
 
   public:
     explicit ObjectWeakMap(JSContext* cx);
-    bool init();
 
     JS::Zone* zone() const { return map.zone(); }
 
     JSObject* lookup(const JSObject* obj);
     bool add(JSContext* cx, JSObject* obj, JSObject* target);
     void clear();
 
     void trace(JSTracer* trc);
--- a/js/src/gc/WeakMapPtr.cpp
+++ b/js/src/gc/WeakMapPtr.cpp
@@ -61,17 +61,17 @@ JS::WeakMapPtr<K, V>::destroy()
 
 template <typename K, typename V>
 bool
 JS::WeakMapPtr<K, V>::init(JSContext* cx)
 {
     MOZ_ASSERT(!initialized());
     typename WeakMapDetails::Utils<K, V>::PtrType map =
         cx->new_<typename WeakMapDetails::Utils<K,V>::Type>(cx);
-    if (!map || !map->init())
+    if (!map)
         return false;
     ptr = map;
     return true;
 }
 
 template <typename K, typename V>
 void
 JS::WeakMapPtr<K, V>::trace(JSTracer* trc)
--- a/js/src/gc/Zone.cpp
+++ b/js/src/gc/Zone.cpp
@@ -100,23 +100,17 @@ Zone::~Zone()
     }
 #endif
 }
 
 bool
 Zone::init(bool isSystemArg)
 {
     isSystem = isSystemArg;
-    return uniqueIds().init() &&
-           gcSweepGroupEdges().init() &&
-           gcWeakKeys().init() &&
-           typeDescrObjects().init() &&
-           markedAtoms().init() &&
-           atomCache().init() &&
-           regExps.init();
+    return gcWeakKeys().init();
 }
 
 void
 Zone::setNeedsIncrementalBarrier(bool needs)
 {
     MOZ_ASSERT_IF(needs, canCollect());
     needsIncrementalBarrier_ = needs;
 }
@@ -283,17 +277,17 @@ js::jit::JitZone*
 Zone::createJitZone(JSContext* cx)
 {
     MOZ_ASSERT(!jitZone_);
 
     if (!cx->runtime()->getJitRuntime(cx))
         return nullptr;
 
     UniquePtr<jit::JitZone> jitZone(cx->new_<js::jit::JitZone>());
-    if (!jitZone || !jitZone->init(cx))
+    if (!jitZone)
         return nullptr;
 
     jitZone_ = jitZone.release();
     return jitZone_;
 }
 
 bool
 Zone::hasMarkedRealms()
@@ -359,20 +353,18 @@ Zone::nextZone() const
     return listNext_;
 }
 
 void
 Zone::clearTables()
 {
     MOZ_ASSERT(regExps.empty());
 
-    if (baseShapes().initialized())
-        baseShapes().clear();
-    if (initialShapes().initialized())
-        initialShapes().clear();
+    baseShapes().clear();
+    initialShapes().clear();
 }
 
 void
 Zone::fixupAfterMovingGC()
 {
     fixupInitialShapeTable();
 }
 
@@ -448,17 +440,17 @@ Zone::purgeAtomCacheOrDefer()
 }
 
 void
 Zone::purgeAtomCache()
 {
     MOZ_ASSERT(!hasKeptAtoms());
     MOZ_ASSERT(!purgeAtomsDeferred);
 
-    atomCache().clearAndShrink();
+    atomCache().clearAndCompact();
 
     // Also purge the dtoa caches so that subsequent lookups populate atom
     // cache too.
     for (RealmsInZoneIter r(this); !r.done(); r.next())
         r->dtoaCache.purge();
 }
 
 void
--- a/js/src/jit/CodeGenerator.cpp
+++ b/js/src/jit/CodeGenerator.cpp
@@ -10212,19 +10212,16 @@ CodeGenerator::generate()
     // Initialize native code table with an entry to the start of
     // top-level script.
     InlineScriptTree* tree = gen->info().inlineScriptTree();
     jsbytecode* startPC = tree->script()->code();
     BytecodeSite* startSite = new(gen->alloc()) BytecodeSite(tree, startPC);
     if (!addNativeToBytecodeEntry(startSite))
         return false;
 
-    if (!snapshots_.init())
-        return false;
-
     if (!safepoints_.init(gen->alloc()))
         return false;
 
     if (!generatePrologue())
         return false;
 
     // Before generating any code, we generate type checks for all parameters.
     // This comes before deoptTable_, because we can't use deopt tables without
--- a/js/src/jit/ExecutableAllocator.cpp
+++ b/js/src/jit/ExecutableAllocator.cpp
@@ -94,18 +94,17 @@ ExecutablePool::available() const
 }
 
 ExecutableAllocator::~ExecutableAllocator()
 {
     for (size_t i = 0; i < m_smallPools.length(); i++)
         m_smallPools[i]->release(/* willDestroy = */true);
 
     // If this asserts we have a pool leak.
-    MOZ_ASSERT_IF((m_pools.initialized() &&
-                   TlsContext.get()->runtime()->gc.shutdownCollectedEverything()),
+    MOZ_ASSERT_IF(TlsContext.get()->runtime()->gc.shutdownCollectedEverything(),
                   m_pools.empty());
 }
 
 ExecutablePool*
 ExecutableAllocator::poolForSize(size_t n)
 {
     // Try to fit in an existing small allocator.  Use the pool with the
     // least available space that is big enough (best-fit).  This is the
@@ -175,19 +174,16 @@ ExecutableAllocator::roundUpAllocationSi
 
 ExecutablePool*
 ExecutableAllocator::createPool(size_t n)
 {
     size_t allocSize = roundUpAllocationSize(n, ExecutableCodePageSize);
     if (allocSize == OVERSIZE_ALLOCATION)
         return nullptr;
 
-    if (!m_pools.initialized() && !m_pools.init())
-        return nullptr;
-
     ExecutablePool::Allocation a = systemAlloc(allocSize);
     if (!a.pages)
         return nullptr;
 
     ExecutablePool* pool = js_new<ExecutablePool>(this, a);
     if (!pool) {
         systemRelease(a);
         return nullptr;
@@ -230,18 +226,16 @@ ExecutableAllocator::alloc(JSContext* cx
 }
 
 void
 ExecutableAllocator::releasePoolPages(ExecutablePool* pool)
 {
     MOZ_ASSERT(pool->m_allocation.pages);
     systemRelease(pool->m_allocation);
 
-    MOZ_ASSERT(m_pools.initialized());
-
     // Pool may not be present in m_pools if we hit OOM during creation.
     if (auto ptr = m_pools.lookup(pool))
         m_pools.remove(ptr);
 }
 
 void
 ExecutableAllocator::purge()
 {
@@ -258,25 +252,23 @@ ExecutableAllocator::purge()
         pool->release();
         m_smallPools.erase(&m_smallPools[i]);
     }
 }
 
 void
 ExecutableAllocator::addSizeOfCode(JS::CodeSizes* sizes) const
 {
-    if (m_pools.initialized()) {
-        for (ExecPoolHashSet::Range r = m_pools.all(); !r.empty(); r.popFront()) {
-            ExecutablePool* pool = r.front();
-            sizes->ion      += pool->m_codeBytes[CodeKind::Ion];
-            sizes->baseline += pool->m_codeBytes[CodeKind::Baseline];
-            sizes->regexp   += pool->m_codeBytes[CodeKind::RegExp];
-            sizes->other    += pool->m_codeBytes[CodeKind::Other];
-            sizes->unused   += pool->m_allocation.size - pool->usedCodeBytes();
-        }
-    }
+    for (ExecPoolHashSet::Range r = m_pools.all(); !r.empty(); r.popFront()) {
+        ExecutablePool* pool = r.front();
+        sizes->ion      += pool->m_codeBytes[CodeKind::Ion];
+        sizes->baseline += pool->m_codeBytes[CodeKind::Baseline];
+        sizes->regexp   += pool->m_codeBytes[CodeKind::RegExp];
+        sizes->other    += pool->m_codeBytes[CodeKind::Other];
+        sizes->unused   += pool->m_allocation.size - pool->usedCodeBytes();
     }
 }
 
 /* static */ void
 ExecutableAllocator::reprotectPool(JSRuntime* rt, ExecutablePool* pool, ProtectionSetting protection)
 {
     char* start = pool->m_allocation.pages;
     if (!ReprotectRegion(start, pool->m_freePtr - start, protection))
--- a/js/src/jit/Ion.cpp
+++ b/js/src/jit/Ion.cpp
@@ -216,17 +216,17 @@ JitRuntime::initialize(JSContext* cx)
 {
     MOZ_ASSERT(CurrentThreadCanAccessRuntime(cx->runtime()));
 
     AutoAllocInAtomsZone az(cx);
 
     JitContext jctx(cx, nullptr);
 
     functionWrappers_ = cx->new_<VMWrapperMap>(cx);
-    if (!functionWrappers_ || !functionWrappers_->init())
+    if (!functionWrappers_)
         return false;
 
     StackMacroAssembler masm;
 
     Label bailoutTail;
     JitSpew(JitSpew_Codegen, "# Emitting bailout tail stub");
     generateBailoutTailStub(masm, &bailoutTail);
 
@@ -405,21 +405,16 @@ JitRealm::~JitRealm()
 
 bool
 JitRealm::initialize(JSContext* cx)
 {
     stubCodes_ = cx->new_<ICStubCodeMap>(cx->zone());
     if (!stubCodes_)
         return false;
 
-    if (!stubCodes_->init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     stringsCanBeInNursery = cx->nursery().canAllocateStrings();
 
     return true;
 }
 
 template <typename T>
 static T
 PopNextBitmaskValue(uint32_t* bitmask)
@@ -438,27 +433,16 @@ JitRealm::performStubReadBarriers(uint32
     while (stubsToBarrier) {
         auto stub = PopNextBitmaskValue<StubIndex>(&stubsToBarrier);
         const ReadBarrieredJitCode& jitCode = stubs_[stub];
         MOZ_ASSERT(jitCode);
         jitCode.get();
     }
 }
 
-bool
-JitZone::init(JSContext* cx)
-{
-    if (!baselineCacheIRStubCodes_.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
-    return true;
-}
-
 void
 jit::FreeIonBuilder(IonBuilder* builder)
 {
     // The builder is allocated into its LifoAlloc, so destroying that will
     // destroy the builder and all other data accumulated during compilation,
     // except any final codegen (which includes an assembler and needs to be
     // explicitly destroyed).
     js_delete(builder->backgroundCodegen());
@@ -681,17 +665,16 @@ JitRuntime::getBailoutTableSize(const Fr
     MOZ_ASSERT(frameClass != FrameSizeClass::None());
     return bailoutTables_.ref()[frameClass.classId()].size;
 }
 
 TrampolinePtr
 JitRuntime::getVMWrapper(const VMFunction& f) const
 {
     MOZ_ASSERT(functionWrappers_);
-    MOZ_ASSERT(functionWrappers_->initialized());
     MOZ_ASSERT(trampolineCode_);
 
     JitRuntime::VMWrapperMap::Ptr p = functionWrappers_->readonlyThreadsafeLookup(&f);
     MOZ_ASSERT(p);
     return trampolineCode(p->value());
 }
 
 void
@@ -1478,18 +1461,16 @@ OptimizeMIR(MIRGenerator* mir)
         gs.spewPass("Alignment Mask Analysis");
         AssertExtendedGraphCoherency(graph);
 
         if (mir->shouldCancel("Alignment Mask Analysis"))
             return false;
     }
 
     ValueNumberer gvn(mir, graph);
-    if (!gvn.init())
-        return false;
 
     // Alias analysis is required for LICM and GVN so that we don't move
     // loads across stores.
     if (mir->optimizationInfo().licmEnabled() ||
         mir->optimizationInfo().gvnEnabled())
     {
         {
             AutoTraceLog log(logger, TraceLogger_AliasAnalysis);
--- a/js/src/jit/IonAnalysis.cpp
+++ b/js/src/jit/IonAnalysis.cpp
@@ -3576,19 +3576,16 @@ PassthroughOperand(MDefinition* def)
 // Bounds checks are added to a hash map and since the hash function ignores
 // differences in constant offset, this offers a fast way to find redundant
 // checks.
 bool
 jit::EliminateRedundantChecks(MIRGraph& graph)
 {
     BoundsCheckMap checks(graph.alloc());
 
-    if (!checks.init())
-        return false;
-
     // Stack for pre-order CFG traversal.
     Vector<MBasicBlock*, 1, JitAllocPolicy> worklist(graph.alloc());
 
     // The index of the current block in the CFG traversal.
     size_t index = 0;
 
     // Add all self-dominating blocks to the worklist.
     // This includes all roots. Order does not matter.
--- a/js/src/jit/JitRealm.h
+++ b/js/src/jit/JitRealm.h
@@ -393,17 +393,16 @@ class JitZone
     using BaselineCacheIRStubCodeMap = GCHashMap<CacheIRStubKey,
                                                  ReadBarrieredJitCode,
                                                  CacheIRStubKey,
                                                  SystemAllocPolicy,
                                                  IcStubCodeMapGCPolicy<CacheIRStubKey>>;
     BaselineCacheIRStubCodeMap baselineCacheIRStubCodes_;
 
   public:
-    MOZ_MUST_USE bool init(JSContext* cx);
     void sweep();
 
     void addSizeOfIncludingThis(mozilla::MallocSizeOf mallocSizeOf,
                                 size_t* jitZone,
                                 size_t* baselineStubsOptimized,
                                 size_t* cachedCFG) const;
 
     OptimizedICStubSpace* optimizedStubSpace() {
@@ -428,32 +427,28 @@ class JitZone
                                                  JitCode* stubCode)
     {
         auto p = baselineCacheIRStubCodes_.lookupForAdd(lookup);
         MOZ_ASSERT(!p);
         return baselineCacheIRStubCodes_.add(p, std::move(key), stubCode);
     }
 
     CacheIRStubInfo* getIonCacheIRStubInfo(const CacheIRStubKey::Lookup& key) {
-        if (!ionCacheIRStubInfoSet_.initialized())
-            return nullptr;
         IonCacheIRStubInfoSet::Ptr p = ionCacheIRStubInfoSet_.lookup(key);
         return p ? p->stubInfo.get() : nullptr;
     }
     MOZ_MUST_USE bool putIonCacheIRStubInfo(const CacheIRStubKey::Lookup& lookup,
                                             CacheIRStubKey& key)
     {
-        if (!ionCacheIRStubInfoSet_.initialized() && !ionCacheIRStubInfoSet_.init())
-            return false;
         IonCacheIRStubInfoSet::AddPtr p = ionCacheIRStubInfoSet_.lookupForAdd(lookup);
         MOZ_ASSERT(!p);
         return ionCacheIRStubInfoSet_.add(p, std::move(key));
     }
     void purgeIonCacheIRStubInfo() {
-        ionCacheIRStubInfoSet_.finish();
+        ionCacheIRStubInfoSet_.clearAndCompact();
     }
 };
 
 enum class BailoutReturnStub {
     GetProp,
     GetPropSuper,
     SetProp,
     Call,
--- a/js/src/jit/LIR.cpp
+++ b/js/src/jit/LIR.cpp
@@ -32,18 +32,16 @@ LIRGraph::LIRGraph(MIRGraph* mir)
     entrySnapshot_(nullptr),
     mir_(*mir)
 {
 }
 
 bool
 LIRGraph::addConstantToPool(const Value& v, uint32_t* index)
 {
-    MOZ_ASSERT(constantPoolMap_.initialized());
-
     ConstantPoolMap::AddPtr p = constantPoolMap_.lookupForAdd(v);
     if (p) {
         *index = p->value();
         return true;
     }
     *index = constantPool_.length();
     return constantPool_.append(v) && constantPoolMap_.add(p, v, *index);
 }
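
addConstantToPool() above also leans on the new failure contract: an invalid
AddPtr can now mean an OOM while instantiating the entry storage as well as a
hasher OOM, and the existing add() failure path covers both. A hedged sketch
of that pattern (Memoize is hypothetical; IntMap as above):

    static bool
    Memoize(IntMap& map, uint32_t key, uint32_t* out)
    {
        IntMap::AddPtr p = map.lookupForAdd(key);   // may be invalid on OOM
        if (p) {
            *out = p->value();
            return true;
        }
        *out = key * 2;                 // stand-in for a computed value
        return map.add(p, key, *out);   // also fails when |p| is invalid
    }
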
--- a/js/src/jit/LIR.h
+++ b/js/src/jit/LIR.h
@@ -1893,17 +1893,17 @@ class LIRGraph
     LSnapshot* entrySnapshot_;
 
     MIRGraph& mir_;
 
   public:
     explicit LIRGraph(MIRGraph* mir);
 
     MOZ_MUST_USE bool init() {
-        return constantPoolMap_.init() && blocks_.init(mir_.alloc(), mir_.numBlocks());
+        return blocks_.init(mir_.alloc(), mir_.numBlocks());
     }
     MIRGraph& mir() const {
         return mir_;
     }
     size_t numBlocks() const {
         return blocks_.length();
     }
     LBlock* getBlock(size_t i) {
--- a/js/src/jit/LoopUnroller.cpp
+++ b/js/src/jit/LoopUnroller.cpp
@@ -245,19 +245,16 @@ LoopUnroller::go(LoopIterationBound* bou
     newPreheader->discardAllResumePoints();
 
     // Insert new blocks at their RPO position, and update block ids.
     graph.insertBlockAfter(oldPreheader, unrolledHeader);
     graph.insertBlockAfter(unrolledHeader, unrolledBackedge);
     graph.insertBlockAfter(unrolledBackedge, newPreheader);
     graph.renumberBlocksAfter(oldPreheader);
 
-    if (!unrolledDefinitions.init())
-        return false;
-
     // Add phis to the unrolled loop header which correspond to the phis in the
     // original loop header.
     MOZ_ASSERT(header->getPredecessor(0) == oldPreheader);
     for (MPhiIterator iter(header->phisBegin()); iter != header->phisEnd(); iter++) {
         MPhi* old = *iter;
         MOZ_ASSERT(old->numOperands() == 2);
         MPhi* phi = MPhi::New(alloc);
         phi->setResultType(old->type());
--- a/js/src/jit/OptimizationTracking.cpp
+++ b/js/src/jit/OptimizationTracking.cpp
@@ -368,17 +368,16 @@ class jit::UniqueTrackedTypes
     Vector<TypeSet::Type, 1> list_;
 
   public:
     explicit UniqueTrackedTypes(JSContext* cx)
       : map_(cx),
         list_(cx)
     { }
 
-    bool init() { return map_.init(); }
     bool getIndexOf(TypeSet::Type ty, uint8_t* indexp);
 
     uint32_t count() const { MOZ_ASSERT(map_.count() == list_.length()); return list_.length(); }
     bool enumerate(TypeSet::TypeList* types) const;
 };
 
 bool
 UniqueTrackedTypes::getIndexOf(TypeSet::Type ty, uint8_t* indexp)
@@ -960,18 +959,16 @@ jit::WriteIonTrackedOptimizationsTable(J
     offsets.clear();
 
     const UniqueTrackedOptimizations::SortedVector& vec = unique.sortedVector();
     JitSpew(JitSpew_OptimizationTrackingExtended, "=> Writing unique optimizations table with %zu entr%s",
             vec.length(), vec.length() == 1 ? "y" : "ies");
 
     // Write out type info payloads.
     UniqueTrackedTypes uniqueTypes(cx);
-    if (!uniqueTypes.init())
-        return false;
 
     for (const UniqueTrackedOptimizations::SortEntry* p = vec.begin(); p != vec.end(); p++) {
         const TempOptimizationTypeInfoVector* v = p->types;
         JitSpew(JitSpew_OptimizationTrackingExtended,
                 "   Type info entry %zu of length %zu, offset %zu",
                 size_t(p - vec.begin()), v->length(), writer.length());
         SpewTempOptimizationTypeInfoVector(JitSpew_OptimizationTrackingExtended, v, "  ");
 
--- a/js/src/jit/OptimizationTracking.h
+++ b/js/src/jit/OptimizationTracking.h
@@ -177,17 +177,16 @@ class UniqueTrackedOptimizations
     SortedVector sorted_;
 
   public:
     explicit UniqueTrackedOptimizations(JSContext* cx)
       : map_(cx),
         sorted_(cx)
     { }
 
-    MOZ_MUST_USE bool init() { return map_.init(); }
     MOZ_MUST_USE bool add(const TrackedOptimizations* optimizations);
 
     MOZ_MUST_USE bool sortByFrequency(JSContext* cx);
     bool sorted() const { return !sorted_.empty(); }
     uint32_t count() const { MOZ_ASSERT(sorted()); return sorted_.length(); }
     const SortedVector& sortedVector() const { MOZ_ASSERT(sorted()); return sorted_; }
     uint8_t indexOf(const TrackedOptimizations* optimizations) const;
 };
--- a/js/src/jit/RegisterAllocator.cpp
+++ b/js/src/jit/RegisterAllocator.cpp
@@ -70,17 +70,17 @@ AllocationIntegrityState::record()
             }
             for (LInstruction::InputIterator alloc(*ins); alloc.more(); alloc.next()) {
                 if (!info.inputs.append(**alloc))
                     return false;
             }
         }
     }
 
-    return seen.init();
+    return true;
 }
 
 bool
 AllocationIntegrityState::check(bool populateSafepoints)
 {
     MOZ_ASSERT(!instructions.empty());
 
 #ifdef JS_JITSPEW
--- a/js/src/jit/Snapshots.cpp
+++ b/js/src/jit/Snapshots.cpp
@@ -542,24 +542,22 @@ RValueAllocation
 SnapshotReader::readAllocation()
 {
     JitSpew(JitSpew_IonSnapshots, "Reading slot %u", allocRead_);
     uint32_t offset = readAllocationIndex() * ALLOCATION_TABLE_ALIGNMENT;
     allocReader_.seek(allocTable_, offset);
     return RValueAllocation::read(allocReader_);
 }
 
-bool
-SnapshotWriter::init()
-{
-    // Based on the measurements made in Bug 962555 comment 20, this should be
-    // enough to prevent the reallocation of the hash table for at least half of
-    // the compilations.
-    return allocMap_.init(32);
-}
+SnapshotWriter::SnapshotWriter()
+    // Based on the measurements made in Bug 962555 comment 20, this length
+    // should be enough to prevent the reallocation of the hash table for at
+    // least half of the compilations.
+  : allocMap_(32)
+{}
 
 RecoverReader::RecoverReader(SnapshotReader& snapshot, const uint8_t* recovers, uint32_t size)
   : reader_(nullptr, nullptr),
     numInstructions_(0),
     numInstructionsRead_(0),
     resumeAfter_(false)
 {
     if (!recovers)
@@ -642,18 +640,16 @@ SnapshotWriter::trackSnapshot(uint32_t p
     writer_.writeUnsigned(lirOpcode);
     writer_.writeUnsigned(lirId);
 }
 #endif
 
 bool
 SnapshotWriter::add(const RValueAllocation& alloc)
 {
-    MOZ_ASSERT(allocMap_.initialized());
-
     uint32_t offset;
     RValueAllocMap::AddPtr p = allocMap_.lookupForAdd(alloc);
     if (!p) {
         offset = allocWriter_.length();
         alloc.write(allocWriter_);
         if (!allocMap_.add(p, alloc, offset)) {
             allocWriter_.setOOM();
             return false;
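
The constructor above replaces init(32) with an explicit length at
construction. Unlike init(), construction cannot fail; with lazy storage the
hint merely sizes the eventual first allocation. A sketch under that
assumption (AllocMapLike is a stand-in alias, not the real RValueAllocMap):

    using AllocMapLike = js::HashMap<uint32_t, uint32_t,
                                     js::DefaultHasher<uint32_t>,
                                     js::SystemAllocPolicy>;

    AllocMapLike allocMap(32);   // infallible; ~32 entries on first add
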
--- a/js/src/jit/Snapshots.h
+++ b/js/src/jit/Snapshots.h
@@ -395,17 +395,17 @@ class SnapshotWriter
 
     // This is only used to assert sanity.
     uint32_t allocWritten_;
 
     // Used to report size of the snapshot in the spew messages.
     SnapshotOffset lastStart_;
 
   public:
-    MOZ_MUST_USE bool init();
+    SnapshotWriter();
 
     SnapshotOffset startSnapshot(RecoverOffset recoverOffset, BailoutKind kind);
 #ifdef TRACK_SNAPSHOTS
     void trackSnapshot(uint32_t pcOpcode, uint32_t mirOpcode, uint32_t mirId,
                        uint32_t lirOpcode, uint32_t lirId);
 #endif
     MOZ_MUST_USE bool add(const RValueAllocation& slot);
 
--- a/js/src/jit/ValueNumbering.cpp
+++ b/js/src/jit/ValueNumbering.cpp
@@ -73,23 +73,16 @@ ValueNumberer::VisibleValues::ValueHashe
 {
     k = newKey;
 }
 
 ValueNumberer::VisibleValues::VisibleValues(TempAllocator& alloc)
   : set_(alloc)
 {}
 
-// Initialize the set.
-bool
-ValueNumberer::VisibleValues::init()
-{
-    return set_.init();
-}
-
 // Look up the first entry for |def|.
 ValueNumberer::VisibleValues::Ptr
 ValueNumberer::VisibleValues::findLeader(const MDefinition* def) const
 {
     return set_.lookup(def);
 }
 
 // Look up the first entry for |def|.
@@ -1203,41 +1196,35 @@ bool ValueNumberer::cleanupOSRFixups()
     }
 
     // And sweep.
     return RemoveUnmarkedBlocks(mir_, graph_, numMarked);
 }
 
 ValueNumberer::ValueNumberer(MIRGenerator* mir, MIRGraph& graph)
   : mir_(mir), graph_(graph),
+    // Initialize the value set. It's tempting to pass in a length that is a
+    // function of graph_.getNumInstructionIds(). But if we start out with a
+    // large capacity, it will be far larger than the actual element count for
+    // most of the pass, so when we remove elements, it would often think it
+    // needs to compact itself. Empirically, just letting the HashTable grow as
+    // needed on its own seems to work pretty well.
     values_(graph.alloc()),
     deadDefs_(graph.alloc()),
     remainingBlocks_(graph.alloc()),
     nextDef_(nullptr),
     totalNumVisited_(0),
     rerun_(false),
     blocksRemoved_(false),
     updateAliasAnalysis_(false),
     dependenciesBroken_(false),
     hasOSRFixups_(false)
 {}
 
 bool
-ValueNumberer::init()
-{
-    // Initialize the value set. It's tempting to pass in a size here of some
-    // function of graph_.getNumInstructionIds(), however if we start out with a
-    // large capacity, it will be far larger than the actual element count for
-    // most of the pass, so when we remove elements, it would often think it
-    // needs to compact itself. Empirically, just letting the HashTable grow as
-    // needed on its own seems to work pretty well.
-    return values_.init();
-}
-
-bool
 ValueNumberer::run(UpdateAliasAnalysisFlag updateAliasAnalysis)
 {
     updateAliasAnalysis_ = updateAliasAnalysis == UpdateAliasAnalysis;
 
     JitSpew(JitSpew_GVN, "Running GVN on graph (with %" PRIu64 " blocks)",
             uint64_t(graph_.numBlocks()));
 
    // Adding fixup blocks only makes sense if we have a second entry point into
--- a/js/src/jit/ValueNumbering.h
+++ b/js/src/jit/ValueNumbering.h
@@ -36,17 +36,16 @@ class ValueNumberer
         };
 
         typedef HashSet<MDefinition*, ValueHasher, JitAllocPolicy> ValueSet;
 
         ValueSet set_;        // Set of visible values
 
       public:
         explicit VisibleValues(TempAllocator& alloc);
-        MOZ_MUST_USE bool init();
 
         typedef ValueSet::Ptr Ptr;
         typedef ValueSet::AddPtr AddPtr;
 
         Ptr findLeader(const MDefinition* def) const;
         AddPtr findLeaderForAdd(MDefinition* def);
         MOZ_MUST_USE bool add(AddPtr p, MDefinition* def);
         void overwrite(AddPtr p, MDefinition* def);
@@ -103,17 +102,16 @@ class ValueNumberer
     MOZ_MUST_USE bool visitDominatorTree(MBasicBlock* root);
     MOZ_MUST_USE bool visitGraph();
 
     MOZ_MUST_USE bool insertOSRFixups();
     MOZ_MUST_USE bool cleanupOSRFixups();
 
   public:
     ValueNumberer(MIRGenerator* mir, MIRGraph& graph);
-    MOZ_MUST_USE bool init();
 
     enum UpdateAliasAnalysisFlag {
         DontUpdateAliasAnalysis,
         UpdateAliasAnalysis
     };
 
     // Optimize the graph, performing expression simplification and
     // canonicalization, eliminating statically fully-redundant expressions,
--- a/js/src/jit/WasmBCE.cpp
+++ b/js/src/jit/WasmBCE.cpp
@@ -26,18 +26,16 @@ typedef js::HashMap<uint32_t, MDefinitio
 // check, but a set of checks that together dominate a redundant check?
 //
 // TODO (dbounov): Generalize to constant additions relative to one base
 bool
 jit::EliminateBoundsChecks(MIRGenerator* mir, MIRGraph& graph)
 {
     // Map for dominating block where a given definition was checked
     LastSeenMap lastSeen;
-    if (!lastSeen.init())
-        return false;
 
     for (ReversePostorderIterator bIter(graph.rpoBegin()); bIter != graph.rpoEnd(); bIter++) {
         MBasicBlock* block = *bIter;
         for (MDefinitionIterator dIter(block); dIter;) {
             MDefinition* def = *dIter++;
 
             switch (def->op()) {
               case MDefinition::Opcode::WasmBoundsCheck: {
--- a/js/src/jit/arm/Simulator-arm.cpp
+++ b/js/src/jit/arm/Simulator-arm.cpp
@@ -1277,37 +1277,31 @@ class Redirection
 Simulator::~Simulator()
 {
     js_free(stack_);
 }
 
 SimulatorProcess::SimulatorProcess()
   : cacheLock_(mutexid::SimulatorCacheLock)
   , redirection_(nullptr)
-{}
+{
+    if (getenv("ARM_SIM_ICACHE_CHECKS"))
+        ICacheCheckingDisableCount = 0;
+}
 
 SimulatorProcess::~SimulatorProcess()
 {
     Redirection* r = redirection_;
     while (r) {
         Redirection* next = r->next_;
         js_delete(r);
         r = next;
     }
 }
 
-bool
-SimulatorProcess::init()
-{
-    if (getenv("ARM_SIM_ICACHE_CHECKS"))
-        ICacheCheckingDisableCount = 0;
-
-    return icache_.init();
-}
-
 /* static */ void*
 Simulator::RedirectNativeFunction(void* nativeFunction, ABIFunctionType type)
 {
     Redirection* redirection = Redirection::Get(nativeFunction, type);
     return redirection->addressOfSwiInstruction();
 }
 
 // Sets the register in the architecture state. It will also deal with updating
--- a/js/src/jit/arm/Simulator-arm.h
+++ b/js/src/jit/arm/Simulator-arm.h
@@ -482,29 +482,27 @@ class SimulatorProcess
 
     static mozilla::Atomic<size_t, mozilla::ReleaseAcquire> ICacheCheckingDisableCount;
     static void FlushICache(void* start, size_t size);
 
     static void checkICacheLocked(SimInstruction* instr);
 
     static bool initialize() {
         singleton_ = js_new<SimulatorProcess>();
-        return singleton_ && singleton_->init();
+        return singleton_;
     }
     static void destroy() {
         js_delete(singleton_);
         singleton_ = nullptr;
     }
 
     SimulatorProcess();
     ~SimulatorProcess();
 
   private:
-    bool init();
-
     static SimulatorProcess* singleton_;
 
     // This lock creates a critical section around 'redirection_' and
     // 'icache_', which are referenced both by the execution engine
     // and by the off-thread compiler (see Redirection::Get in the cpp file).
     Mutex cacheLock_;
 
     Redirection* redirection_;
--- a/js/src/jit/arm/Trampoline-arm.cpp
+++ b/js/src/jit/arm/Trampoline-arm.cpp
@@ -711,17 +711,16 @@ JitRuntime::generateBailoutHandler(Macro
 
     GenerateBailoutThunk(masm, NO_FRAME_SIZE_CLASS_ID, bailoutTail);
 }
 
 bool
 JitRuntime::generateVMWrapper(JSContext* cx, MacroAssembler& masm, const VMFunction& f)
 {
     MOZ_ASSERT(functionWrappers_);
-    MOZ_ASSERT(functionWrappers_->initialized());
 
     uint32_t wrapperOffset = startTrampolineCode(masm);
 
     AllocatableGeneralRegisterSet regs(Register::Codes::WrapperMask);
 
     static_assert((Register::Codes::VolatileMask & ~Register::Codes::WrapperMask) == 0,
                   "Wrapper register set must be a superset of Volatile register set.");
 
--- a/js/src/jit/arm64/Trampoline-arm64.cpp
+++ b/js/src/jit/arm64/Trampoline-arm64.cpp
@@ -519,17 +519,16 @@ JitRuntime::generateBailoutHandler(Macro
 
     GenerateBailoutThunk(masm, NO_FRAME_SIZE_CLASS_ID, bailoutTail);
 }
 
 bool
 JitRuntime::generateVMWrapper(JSContext* cx, MacroAssembler& masm, const VMFunction& f)
 {
     MOZ_ASSERT(functionWrappers_);
-    MOZ_ASSERT(functionWrappers_->initialized());
 
     uint32_t wrapperOffset = startTrampolineCode(masm);
 
     // Avoid conflicts with argument registers while discarding the result after
     // the function call.
     AllocatableGeneralRegisterSet regs(Register::Codes::WrapperMask);
 
     static_assert((Register::Codes::VolatileMask & ~Register::Codes::WrapperMask) == 0,
--- a/js/src/jit/shared/CodeGenerator-shared.cpp
+++ b/js/src/jit/shared/CodeGenerator-shared.cpp
@@ -875,18 +875,16 @@ CodeGeneratorShared::generateCompactTrac
     MOZ_ASSERT(trackedOptimizationsRegionTableOffset_ == 0);
     MOZ_ASSERT(trackedOptimizationsTypesTableOffset_ == 0);
     MOZ_ASSERT(trackedOptimizationsAttemptsTableOffset_ == 0);
 
     if (trackedOptimizations_.empty())
         return true;
 
     UniqueTrackedOptimizations unique(cx);
-    if (!unique.init())
-        return false;
 
     // Iterate through all entries to deduplicate their optimization attempts.
     for (size_t i = 0; i < trackedOptimizations_.length(); i++) {
         NativeToTrackedOptimizations& entry = trackedOptimizations_[i];
         if (!unique.add(entry.optimizations))
             return false;
     }
 
--- a/js/src/jit/x64/Trampoline-x64.cpp
+++ b/js/src/jit/x64/Trampoline-x64.cpp
@@ -599,17 +599,16 @@ JitRuntime::generateBailoutHandler(Macro
 
     GenerateBailoutThunk(masm, NO_FRAME_SIZE_CLASS_ID, bailoutTail);
 }
 
 bool
 JitRuntime::generateVMWrapper(JSContext* cx, MacroAssembler& masm, const VMFunction& f)
 {
     MOZ_ASSERT(functionWrappers_);
-    MOZ_ASSERT(functionWrappers_->initialized());
 
     uint32_t wrapperOffset = startTrampolineCode(masm);
 
     // Avoid conflicts with argument registers while discarding the result after
     // the function call.
     AllocatableGeneralRegisterSet regs(Register::Codes::WrapperMask);
 
     static_assert((Register::Codes::VolatileMask & ~Register::Codes::WrapperMask) == 0,
--- a/js/src/jit/x86-shared/MacroAssembler-x86-shared.cpp
+++ b/js/src/jit/x86-shared/MacroAssembler-x86-shared.cpp
@@ -134,21 +134,16 @@ MacroAssemblerX86Shared::asMasm() const
 }
 
 template<class T, class Map>
 T*
 MacroAssemblerX86Shared::getConstant(const typename T::Pod& value, Map& map,
                                      Vector<T, 0, SystemAllocPolicy>& vec)
 {
     typedef typename Map::AddPtr AddPtr;
-    if (!map.initialized()) {
-        enoughMemory_ &= map.init();
-        if (!enoughMemory_)
-            return nullptr;
-    }
     size_t index;
     if (AddPtr p = map.lookupForAdd(value)) {
         index = p->value();
     } else {
         index = vec.length();
         enoughMemory_ &= vec.append(T(value));
         if (!enoughMemory_)
             return nullptr;
--- a/js/src/jit/x86/Trampoline-x86.cpp
+++ b/js/src/jit/x86/Trampoline-x86.cpp
@@ -619,17 +619,16 @@ JitRuntime::generateBailoutHandler(Macro
 
     GenerateBailoutThunk(masm, NO_FRAME_SIZE_CLASS_ID, bailoutTail);
 }
 
 bool
 JitRuntime::generateVMWrapper(JSContext* cx, MacroAssembler& masm, const VMFunction& f)
 {
     MOZ_ASSERT(functionWrappers_);
-    MOZ_ASSERT(functionWrappers_->initialized());
 
     uint32_t wrapperOffset = startTrampolineCode(masm);
 
     // Avoid conflicts with argument registers while discarding the result after
     // the function call.
     AllocatableGeneralRegisterSet regs(Register::Codes::WrapperMask);
 
     static_assert((Register::Codes::VolatileMask & ~Register::Codes::WrapperMask) == 0,
--- a/js/src/jsapi-tests/testBinASTReader.cpp
+++ b/js/src/jsapi-tests/testBinASTReader.cpp
@@ -161,18 +161,16 @@ runTestFromPath(JSContext* cx, const cha
         js::Vector<char16_t> txtSource(cx);
         readFull(cx, txtPath.begin(), txtSource);
 
         // Parse text file.
         CompileOptions txtOptions(cx);
         txtOptions.setFileAndLine(txtPath.begin(), 0);
 
         UsedNameTracker txtUsedNames(cx);
-        if (!txtUsedNames.init())
-            MOZ_CRASH("Couldn't initialize used names");
 
         RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(
                                                   cx, txtOptions, mozilla::Nothing()));
         if (!sourceObject)
             MOZ_CRASH("Couldn't initialize ScriptSourceObject");
 
         js::frontend::Parser<js::frontend::FullParseHandler, char16_t> txtParser(
             cx, allocScope.alloc(), txtOptions, txtSource.begin(), txtSource.length(),
@@ -201,18 +199,16 @@ runTestFromPath(JSContext* cx, const cha
         js::Vector<uint8_t> binSource(cx);
         readFull(binPath.begin(), binSource);
 
         // Parse binary file.
         CompileOptions binOptions(cx);
         binOptions.setFileAndLine(binPath.begin(), 0);
 
         js::frontend::UsedNameTracker binUsedNames(cx);
-        if (!binUsedNames.init())
-            MOZ_CRASH("Couldn't initialized binUsedNames");
 
         js::frontend::BinASTParser<Tok> binParser(cx, allocScope.alloc(), binUsedNames, binOptions);
 
         auto binParsed = binParser.parse(binSource); // Will be deallocated once `reader` goes out of scope.
         RootedValue binExn(cx);
         if (binParsed.isErr()) {
             // Save exception for more detailed error message, if necessary.
             if (!js::GetAndClearException(cx, &binExn))
--- a/js/src/jsapi-tests/testGCExactRooting.cpp
+++ b/js/src/jsapi-tests/testGCExactRooting.cpp
@@ -140,19 +140,17 @@ BEGIN_TEST(testGCPersistentRootedTraceab
     return true;
 }
 END_TEST(testGCPersistentRootedTraceableCannotOutliveRuntime)
 
 using MyHashMap = js::GCHashMap<js::Shape*, JSObject*>;
 
 BEGIN_TEST(testGCRootedHashMap)
 {
-    JS::Rooted<MyHashMap> map(cx, MyHashMap(cx));
-    CHECK(map.init(15));
-    CHECK(map.initialized());
+    JS::Rooted<MyHashMap> map(cx, MyHashMap(cx, 15));
 
     for (size_t i = 0; i < 10; ++i) {
         RootedObject obj(cx, JS_NewObject(cx, nullptr));
         RootedValue val(cx, UndefinedValue());
         // Construct a unique property name to ensure that the object creates a
         // new shape.
         char buffer[2];
         buffer[0] = 'a' + i;
@@ -200,19 +198,17 @@ CheckMyHashMap(JSContext* cx, Handle<MyH
         if (obj->as<NativeObject>().lastProperty() != r.front().key())
             return false;
     }
     return true;
 }
 
 BEGIN_TEST(testGCHandleHashMap)
 {
-    JS::Rooted<MyHashMap> map(cx, MyHashMap(cx));
-    CHECK(map.init(15));
-    CHECK(map.initialized());
+    JS::Rooted<MyHashMap> map(cx, MyHashMap(cx, 15));
 
     CHECK(FillMyHashMap(cx, &map));
 
     JS_GC(cx);
     JS_GC(cx);
 
     CHECK(CheckMyHashMap(cx, map));
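
For tables that carry an alloc policy, the length hint follows the policy
argument, as in MyHashMap(cx, 15) above. A hypothetical helper showing the
one-step rooted construction (MyHashMap as defined above the test):

    static bool
    MakeMap(JSContext* cx, js::Shape* key, JSObject* value)
    {
        // Rooted, alloc policy from |cx|, 15-entry hint -- no init() needed.
        JS::Rooted<MyHashMap> map(cx, MyHashMap(cx, 15));
        return map.putNew(key, value);   // first use allocates the storage
    }
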
 
--- a/js/src/jsapi-tests/testGCGrayMarking.cpp
+++ b/js/src/jsapi-tests/testGCGrayMarking.cpp
@@ -339,17 +339,16 @@ TestWeakMaps()
 }
 
 bool
 TestUnassociatedWeakMaps()
 {
     // Make a weakmap that's not associated with a JSObject.
     auto weakMap = cx->make_unique<GCManagedObjectWeakMap>(cx);
     CHECK(weakMap);
-    CHECK(weakMap->init());
 
     // Make sure this gets traced during GC.
     Rooted<GCManagedObjectWeakMap*> rootMap(cx, weakMap.get());
 
     JSObject* key = AllocWeakmapKeyObject();
     CHECK(key);
 
     JSObject* value = AllocPlainObject();
--- a/js/src/jsapi-tests/testGCMarking.cpp
+++ b/js/src/jsapi-tests/testGCMarking.cpp
@@ -117,17 +117,16 @@ BEGIN_TEST(testTracingIncomingCCWs)
     CHECK(!js::gc::IsInsideNursery(wrapper));
 
     JS::RootedValue v(cx, JS::ObjectValue(*wrapper));
     CHECK(JS_SetProperty(cx, global1, "ccw", v));
 
     // Ensure that |TraceIncomingCCWs| finds the object wrapped by the CCW.
 
     JS::CompartmentSet compartments;
-    CHECK(compartments.init());
     CHECK(compartments.put(global2->compartment()));
 
     void* thing = wrappee.get();
     CCWTestTracer trc(cx, &thing, JS::TraceKind::Object);
     JS::TraceIncomingCCWs(&trc, compartments);
     CHECK(trc.numberOfThingsTraced == 1);
     CHECK(trc.okay);
 
--- a/js/src/jsapi-tests/testGCWeakCache.cpp
+++ b/js/src/jsapi-tests/testGCWeakCache.cpp
@@ -27,17 +27,16 @@ BEGIN_TEST(testWeakCacheSet)
     JS::RootedObject nursery1(cx, JS_NewPlainObject(cx));
     JS::RootedObject nursery2(cx, JS_NewPlainObject(cx));
 
     using ObjectSet = GCHashSet<JS::Heap<JSObject*>,
                                 MovableCellHasher<JS::Heap<JSObject*>>,
                                 SystemAllocPolicy>;
     using Cache = JS::WeakCache<ObjectSet>;
     Cache cache(JS::GetObjectZone(tenured1));
-    CHECK(cache.init());
 
     cache.put(tenured1);
     cache.put(tenured2);
     cache.put(nursery1);
     cache.put(nursery2);
 
     // Verify relocation and that we don't sweep too aggressively.
     JS_GC(cx);
@@ -68,17 +67,16 @@ BEGIN_TEST(testWeakCacheMap)
     JS_GC(cx);
     JS::RootedObject nursery1(cx, JS_NewPlainObject(cx));
     JS::RootedObject nursery2(cx, JS_NewPlainObject(cx));
 
     using ObjectMap = js::GCHashMap<JS::Heap<JSObject*>, uint32_t,
                                     js::MovableCellHasher<JS::Heap<JSObject*>>>;
     using Cache = JS::WeakCache<ObjectMap>;
     Cache cache(JS::GetObjectZone(tenured1), cx);
-    CHECK(cache.init());
 
     cache.put(tenured1, 1);
     cache.put(tenured2, 2);
     cache.put(nursery1, 3);
     cache.put(nursery2, 4);
 
     JS_GC(cx);
     CHECK(cache.has(tenured1));
@@ -279,18 +277,16 @@ SweepCacheAndFinishGC(JSContext* cx, con
 bool
 TestSet()
 {
     using ObjectSet = GCHashSet<JS::Heap<JSObject*>,
                                 MovableCellHasher<JS::Heap<JSObject*>>,
                                 TempAllocPolicy>;
     using Cache = JS::WeakCache<ObjectSet>;
     Cache cache(JS::GetObjectZone(global), cx);
-    CHECK(cache.init());
-    CHECK(cache.initialized());
 
     // Sweep empty cache.
 
     CHECK(cache.empty());
     JS_GC(cx);
     CHECK(cache.empty());
 
     // Add an entry while sweeping.
@@ -400,31 +396,28 @@ TestSet()
 
     CHECK(SweepCacheAndFinishGC(cx, cache));
 
     CHECK(cache.count() == 4);
     CHECK(cache.has(obj3));
     CHECK(cache.has(obj4));
 
     cache.clear();
-    cache.finish();
 
     return true;
 }
 
 bool
 TestMap()
 {
     using ObjectMap = GCHashMap<JS::Heap<JSObject*>, uint32_t,
                                 MovableCellHasher<JS::Heap<JSObject*>>,
                                 TempAllocPolicy>;
     using Cache = JS::WeakCache<ObjectMap>;
     Cache cache(JS::GetObjectZone(global), cx);
-    CHECK(cache.init());
-    CHECK(cache.initialized());
 
     // Sweep empty cache.
 
     CHECK(cache.empty());
     JS_GC(cx);
     CHECK(cache.empty());
 
     // Add an entry while sweeping.
@@ -536,32 +529,30 @@ TestMap()
 
     CHECK(SweepCacheAndFinishGC(cx, cache));
 
     CHECK(cache.count() == 4);
     CHECK(cache.has(obj3));
     CHECK(cache.has(obj4));
 
     cache.clear();
-    cache.finish();
 
     return true;
 }
 
 bool
 TestReplaceDyingInSet()
 {
     // Test replacing dying entries with ones that have the same key using the
     // various APIs.
 
     using Cache = JS::WeakCache<GCHashSet<NumberAndObjectEntry,
                                           MovableCellHasher<NumberAndObjectEntry>,
                                           TempAllocPolicy>>;
     Cache cache(JS::GetObjectZone(global), cx);
-    CHECK(cache.init());
 
     RootedObject value1(cx, JS_NewPlainObject(cx));
     RootedObject value2(cx, JS_NewPlainObject(cx));
     CHECK(value1);
     CHECK(value2);
 
     CHECK(cache.put(NumberAndObjectEntry(1, value1)));
     CHECK(cache.put(NumberAndObjectEntry(2, value2)));
@@ -615,17 +606,16 @@ TestReplaceDyingInMap()
     // Test replacing dying entries with ones that have the same key using the
     // various APIs.
 
     using Cache = JS::WeakCache<GCHashMap<uint32_t,
                                           JS::Heap<JSObject*>,
                                           DefaultHasher<uint32_t>,
                                           TempAllocPolicy>>;
     Cache cache(JS::GetObjectZone(global), cx);
-    CHECK(cache.init());
 
     RootedObject value1(cx, JS_NewPlainObject(cx));
     RootedObject value2(cx, JS_NewPlainObject(cx));
     CHECK(value1);
     CHECK(value2);
 
     CHECK(cache.put(1, value1));
     CHECK(cache.put(2, value2));
@@ -682,17 +672,16 @@ TestUniqueIDLookups()
 
     const size_t DeadFactor = 3;
     const size_t ObjectCount = 100;
 
     using Cache = JS::WeakCache<GCHashSet<ObjectEntry,
                                           MovableCellHasher<ObjectEntry>,
                                           TempAllocPolicy>>;
     Cache cache(JS::GetObjectZone(global), cx);
-    CHECK(cache.init());
 
     Rooted<GCVector<JSObject*, 0, SystemAllocPolicy>> liveObjects(cx);
 
     for (size_t j = 0; j < ObjectCount; j++) {
         JSObject* obj = JS_NewPlainObject(cx);
         CHECK(obj);
         CHECK(cache.put(obj));
         if (j % DeadFactor == 0)
--- a/js/src/jsapi-tests/testHashTable.cpp
+++ b/js/src/jsapi-tests/testHashTable.cpp
@@ -138,18 +138,16 @@ AddLowKeys(IntSet* as, IntSet* bs, int s
     }
     return true;
 }
 
 template <class NewKeyFunction>
 static bool
 SlowRekey(IntMap* m) {
     IntMap tmp;
-    if (!tmp.init())
-        return false;
 
     for (auto iter = m->iter(); !iter.done(); iter.next()) {
         if (NewKeyFunction::shouldBeRemoved(iter.get().key()))
             continue;
         uint32_t hi = NewKeyFunction::rekey(iter.get().key());
         if (tmp.has(hi))
             return false;
         if (!tmp.putNew(hi, iter.get().value()))
@@ -164,18 +162,16 @@ SlowRekey(IntMap* m) {
 
     return true;
 }
 
 template <class NewKeyFunction>
 static bool
 SlowRekey(IntSet* s) {
     IntSet tmp;
-    if (!tmp.init())
-        return false;
 
     for (auto iter = s->iter(); !iter.done(); iter.next()) {
         if (NewKeyFunction::shouldBeRemoved(iter.get()))
             continue;
         uint32_t hi = NewKeyFunction::rekey(iter.get());
         if (tmp.has(hi))
             return false;
         if (!tmp.putNew(hi))
@@ -189,18 +185,16 @@ SlowRekey(IntSet* s) {
     }
 
     return true;
 }
 
 BEGIN_TEST(testHashRekeyManual)
 {
     IntMap am, bm;
-    CHECK(am.init());
-    CHECK(bm.init());
     for (size_t i = 0; i < TestIterations; ++i) {
 #ifdef FUZZ
         fprintf(stderr, "map1: %lu\n", i);
 #endif
         CHECK(AddLowKeys(&am, &bm, i));
         CHECK(MapsAreEqual(am, bm));
 
         for (auto iter = am.modIter(); !iter.done(); iter.next()) {
@@ -211,18 +205,16 @@ BEGIN_TEST(testHashRekeyManual)
         CHECK(SlowRekey<LowToHigh>(&bm));
 
         CHECK(MapsAreEqual(am, bm));
         am.clear();
         bm.clear();
     }
 
     IntSet as, bs;
-    CHECK(as.init());
-    CHECK(bs.init());
     for (size_t i = 0; i < TestIterations; ++i) {
 #ifdef FUZZ
         fprintf(stderr, "set1: %lu\n", i);
 #endif
         CHECK(AddLowKeys(&as, &bs, i));
         CHECK(SetsAreEqual(as, bs));
 
         for (auto iter = as.modIter(); !iter.done(); iter.next()) {
@@ -239,18 +231,16 @@ BEGIN_TEST(testHashRekeyManual)
 
     return true;
 }
 END_TEST(testHashRekeyManual)
 
 BEGIN_TEST(testHashRekeyManualRemoval)
 {
     IntMap am, bm;
-    CHECK(am.init());
-    CHECK(bm.init());
     for (size_t i = 0; i < TestIterations; ++i) {
 #ifdef FUZZ
         fprintf(stderr, "map2: %lu\n", i);
 #endif
         CHECK(AddLowKeys(&am, &bm, i));
         CHECK(MapsAreEqual(am, bm));
 
         for (auto iter = am.modIter(); !iter.done(); iter.next()) {
@@ -265,18 +255,16 @@ BEGIN_TEST(testHashRekeyManualRemoval)
         CHECK(SlowRekey<LowToHighWithRemoval>(&bm));
 
         CHECK(MapsAreEqual(am, bm));
         am.clear();
         bm.clear();
     }
 
     IntSet as, bs;
-    CHECK(as.init());
-    CHECK(bs.init());
     for (size_t i = 0; i < TestIterations; ++i) {
 #ifdef FUZZ
         fprintf(stderr, "set1: %lu\n", i);
 #endif
         CHECK(AddLowKeys(&as, &bs, i));
         CHECK(SetsAreEqual(as, bs));
 
         for (auto iter = as.modIter(); !iter.done(); iter.next()) {
@@ -333,17 +321,16 @@ struct MoveOnlyType {
     MoveOnlyType& operator=(const MoveOnlyType&) = delete;
 };
 
 BEGIN_TEST(testHashSetOfMoveOnlyType)
 {
     typedef js::HashSet<MoveOnlyType, MoveOnlyType::HashPolicy, js::SystemAllocPolicy> Set;
 
     Set set;
-    CHECK(set.init());
 
     MoveOnlyType a(1);
 
     CHECK(set.put(std::move(a))); // This shouldn't generate a compiler error.
 
     return true;
 }
 END_TEST(testHashSetOfMoveOnlyType)
@@ -352,27 +339,24 @@ END_TEST(testHashSetOfMoveOnlyType)
 
 // Add entries to a HashMap until either we get an OOM, or the table has been
 // resized a few times.
 static bool
 GrowUntilResize()
 {
     IntMap m;
 
-    if (!m.init())
-        return false;
-
     // Add entries until we've resized the table four times.
     size_t lastCapacity = m.capacity();
     size_t resizes = 0;
     uint32_t key = 0;
     while (resizes < 4) {
         auto p = m.lookupForAdd(key);
         if (!p && !m.add(p, key, 0))
-            return false;   // OOM'd while adding
+            return false;   // OOM'd in lookupForAdd() or add()
 
         size_t capacity = m.capacity();
         if (capacity != lastCapacity) {
             resizes++;
             lastCapacity = capacity;
         }
         key++;
     }
@@ -393,17 +377,16 @@ BEGIN_TEST(testHashMapGrowOOM)
 }
 
 END_TEST(testHashMapGrowOOM)
 #endif // defined(DEBUG)
 
 BEGIN_TEST(testHashTableMovableModIterator)
 {
     IntSet set;
-    CHECK(set.init());
 
     // Exercise returning a hash table ModIterator object from a function.
 
     CHECK(set.put(1));
     for (auto iter = setModIter(set); !iter.done(); iter.next())
         iter.remove();
     CHECK(set.count() == 0);
 
@@ -430,8 +413,89 @@ BEGIN_TEST(testHashTableMovableModIterat
 }
 
 IntSet::ModIterator setModIter(IntSet& set)
 {
     return set.modIter();
 }
 
 END_TEST(testHashTableMovableModIterator)
+
+BEGIN_TEST(testHashLazyStorage)
+{
+    // The following code depends on the current capacity computation, which
+    // could change in the future.
+    uint32_t defaultCap = 32;
+    uint32_t minCap = 4;
+
+    IntSet set;
+    CHECK(set.capacity() == 0);
+
+    CHECK(set.put(1));
+    CHECK(set.capacity() == defaultCap);
+
+    set.compact();                  // shrinks to the minimum capacity
+    CHECK(set.capacity() == minCap);
+
+    set.clear();
+    CHECK(set.capacity() == minCap);
+
+    set.compact();
+    CHECK(set.capacity() == 0);
+
+    CHECK(set.putNew(1));
+    CHECK(set.capacity() == minCap);
+
+    set.clear();
+    set.compact();
+    CHECK(set.capacity() == 0);
+
+    // lookupForAdd() instantiates, even if not followed by add().
+    set.lookupForAdd(1);
+    CHECK(set.capacity() == minCap);
+
+    set.clear();
+    set.compact();
+    CHECK(set.capacity() == 0);
+
+    CHECK(set.reserve(0));          // a no-op
+    CHECK(set.capacity() == 0);
+
+    CHECK(set.reserve(1));
+    CHECK(set.capacity() == minCap);
+
+    CHECK(set.reserve(0));          // a no-op
+    CHECK(set.capacity() == minCap);
+
+    CHECK(set.reserve(2));          // effectively a no-op
+    CHECK(set.capacity() == minCap);
+
+    // No need to clear here because we didn't add anything.
+    set.compact();
+    CHECK(set.capacity() == 0);
+
+    CHECK(set.reserve(128));
+    CHECK(set.capacity() == 256);
+    CHECK(set.reserve(3));          // effectively a no-op
+    CHECK(set.capacity() == 256);
+    for (int i = 0; i < 8; i++) {
+        CHECK(set.putNew(i));
+    }
+    CHECK(set.count() == 8);
+    CHECK(set.capacity() == 256);
+    set.compact();
+    CHECK(set.capacity() == 16);
+    set.compact();                  // effectively a no-op
+    CHECK(set.capacity() == 16);
+    for (int i = 8; i < 16; i++) {
+        CHECK(set.putNew(i));
+    }
+    CHECK(set.count() == 16);
+    CHECK(set.capacity() == 32);
+    set.clear();
+    CHECK(set.capacity() == 32);
+    set.compact();
+    CHECK(set.capacity() == 0);
+
+    return true;
+}
+END_TEST(testHashLazyStorage)
+
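
testHashLazyStorage pins down the observable behaviour of reserve(), clear(),
and compact(). On the caller side, reserve() enables a single up-front
fallible allocation; a sketch assuming reserve(n) guarantees room for n
entries, as the test suggests (FillHundred is hypothetical; IntSet as defined
in this file):

    static bool
    FillHundred(IntSet& set)
    {
        if (!set.reserve(100))               // one fallible allocation up front
            return false;
        for (uint32_t i = 0; i < 100; i++)
            MOZ_ALWAYS_TRUE(set.putNew(i));  // no growth needed below 100 entries
        return true;
    }
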
--- a/js/src/jsapi-tests/testJitMinimalFunc.h
+++ b/js/src/jsapi-tests/testJitMinimalFunc.h
@@ -81,18 +81,16 @@ struct MinimalFunc : MinimalAlloc
         if (!SplitCriticalEdges(graph))
             return false;
         RenumberBlocks(graph);
         if (!BuildDominatorTree(graph))
             return false;
         if (!BuildPhiReverseMapping(graph))
             return false;
         ValueNumberer gvn(&mir, graph);
-        if (!gvn.init())
-            return false;
         if (!gvn.run(ValueNumberer::DontUpdateAliasAnalysis))
             return false;
         return true;
     }
 
     bool runRangeAnalysis()
     {
         if (!SplitCriticalEdges(graph))
--- a/js/src/jsapi-tests/testUbiNode.cpp
+++ b/js/src/jsapi-tests/testUbiNode.cpp
@@ -363,17 +363,16 @@ BEGIN_TEST(test_ubiPostOrder)
     FakeNode b('b');
     FakeNode c('c');
     FakeNode d('d');
     FakeNode e('e');
     FakeNode f('f');
     FakeNode g('g');
 
     js::HashSet<ExpectedEdge> expectedEdges(cx);
-    CHECK(expectedEdges.init());
 
     auto declareEdge = [&](FakeNode& from, FakeNode& to) {
         return from.addEdgeTo(to) && expectedEdges.putNew(ExpectedEdge(from, to));
     };
 
     CHECK(declareEdge(r, a));
     CHECK(declareEdge(r, e));
     CHECK(declareEdge(a, b));
@@ -390,17 +389,16 @@ BEGIN_TEST(test_ubiPostOrder)
     {
         // Do a PostOrder traversal, starting from r. Accumulate the names of
         // the nodes we visit in `visited`. Remove edges we traverse from
         // `expectedEdges` as we find them to ensure that we only find each edge
         // once.
 
         JS::AutoCheckCannotGC nogc(cx);
         JS::ubi::PostOrder traversal(cx, nogc);
-        CHECK(traversal.init());
         CHECK(traversal.addStart(&r));
 
         auto onNode = [&](const JS::ubi::Node& node) {
             return visited.append(node.as<FakeNode>()->name);
         };
 
         auto onEdge = [&](const JS::ubi::Node& origin, const JS::ubi::Edge& edge) {
             ExpectedEdge e(*origin.as<FakeNode>(), *edge.referent.as<FakeNode>());
@@ -585,17 +583,16 @@ BEGIN_TEST(test_JS_ubi_DominatorTree)
         // the set of nodes immediately dominated by this one in `domination`,
         // then iterate over the actual dominated set and check against the
         // expected set.
 
         auto& node = relation.dominated;
         fprintf(stderr, "Checking %c's dominated set:\n", node.name);
 
         js::HashSet<char> expectedDominatedSet(cx);
-        CHECK(expectedDominatedSet.init());
         for (auto& rel : domination) {
             if (&rel.dominator == &node) {
                 fprintf(stderr, "    Expecting %c\n", rel.dominated.name);
                 CHECK(expectedDominatedSet.putNew(rel.dominated.name));
             }
         }
 
         auto maybeActualDominatedSet = tree.getDominatedSet(&node);
@@ -709,17 +706,16 @@ BEGIN_TEST(test_JS_ubi_ShortestPaths_no_
     CHECK(a.addEdgeTo(c));
     CHECK(c.addEdgeTo(a));
 
     mozilla::Maybe<JS::ubi::ShortestPaths> maybeShortestPaths;
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        CHECK(targets.init());
         CHECK(targets.put(&b));
 
         maybeShortestPaths = JS::ubi::ShortestPaths::Create(cx, noGC, 10, &a,
                                                             std::move(targets));
     }
 
     CHECK(maybeShortestPaths);
     auto& paths = *maybeShortestPaths;
@@ -751,17 +747,16 @@ BEGIN_TEST(test_JS_ubi_ShortestPaths_one
     CHECK(c.addEdgeTo(a));
     CHECK(c.addEdgeTo(b));
 
     mozilla::Maybe<JS::ubi::ShortestPaths> maybeShortestPaths;
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        CHECK(targets.init());
         CHECK(targets.put(&b));
 
         maybeShortestPaths = JS::ubi::ShortestPaths::Create(cx, noGC, 10, &a,
                                                             std::move(targets));
     }
 
     CHECK(maybeShortestPaths);
     auto& paths = *maybeShortestPaths;
@@ -818,17 +813,16 @@ BEGIN_TEST(test_JS_ubi_ShortestPaths_mul
     CHECK(d.addEdgeTo(e));
     CHECK(e.addEdgeTo(f));
 
     mozilla::Maybe<JS::ubi::ShortestPaths> maybeShortestPaths;
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        CHECK(targets.init());
         CHECK(targets.put(&f));
 
         maybeShortestPaths = JS::ubi::ShortestPaths::Create(cx, noGC, 10, &a,
                                                             std::move(targets));
     }
 
     CHECK(maybeShortestPaths);
     auto& paths = *maybeShortestPaths;
@@ -910,17 +904,16 @@ BEGIN_TEST(test_JS_ubi_ShortestPaths_mor
     CHECK(d.addEdgeTo(e));
     CHECK(e.addEdgeTo(f));
 
     mozilla::Maybe<JS::ubi::ShortestPaths> maybeShortestPaths;
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        CHECK(targets.init());
         CHECK(targets.put(&f));
 
         maybeShortestPaths = JS::ubi::ShortestPaths::Create(cx, noGC, 1, &a,
                                                             std::move(targets));
     }
 
     CHECK(maybeShortestPaths);
     auto& paths = *maybeShortestPaths;
@@ -960,17 +953,16 @@ BEGIN_TEST(test_JS_ubi_ShortestPaths_mul
     CHECK(a.addEdgeTo(b, u"y"));
     CHECK(a.addEdgeTo(b, u"z"));
 
     mozilla::Maybe<JS::ubi::ShortestPaths> maybeShortestPaths;
     {
         JS::AutoCheckCannotGC noGC(cx);
 
         JS::ubi::NodeSet targets;
-        CHECK(targets.init());
         CHECK(targets.put(&b));
 
         maybeShortestPaths = JS::ubi::ShortestPaths::Create(cx, noGC, 10, &a,
                                                             std::move(targets));
     }
 
     CHECK(maybeShortestPaths);
     auto& paths = *maybeShortestPaths;
--- a/js/src/jsapi.cpp
+++ b/js/src/jsapi.cpp
@@ -4380,18 +4380,16 @@ JS_BufferIsCompilableUnit(JSContext* cx,
         return true;
 
     // Return true on any out-of-memory error or non-EOF-related syntax error, so our
     // caller doesn't try to collect more buffered source.
     bool result = true;
 
     CompileOptions options(cx);
     frontend::UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(cx, options,
                                                                                  mozilla::Nothing()));
     if (!sourceObject)
         return false;
 
     frontend::Parser<frontend::FullParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(),
                                                                   options, chars.get(), length,
--- a/js/src/proxy/ScriptedProxyHandler.cpp
+++ b/js/src/proxy/ScriptedProxyHandler.cpp
@@ -744,19 +744,17 @@ ScriptedProxyHandler::ownPropertyKeys(JS
         return false;
 
     // Step 8.
     AutoIdVector trapResult(cx);
     if (!CreateFilteredListFromArrayLike(cx, trapResultArray, trapResult))
         return false;
 
     // Steps 9, 18.
-    Rooted<GCHashSet<jsid>> uncheckedResultKeys(cx, GCHashSet<jsid>(cx));
-    if (!uncheckedResultKeys.init(trapResult.length()))
-        return false;
+    Rooted<GCHashSet<jsid>> uncheckedResultKeys(cx, GCHashSet<jsid>(cx, trapResult.length()));
 
     for (size_t i = 0, len = trapResult.length(); i < len; i++) {
         MOZ_ASSERT(!JSID_IS_VOID(trapResult[i]));
 
         auto ptr = uncheckedResultKeys.lookupForAdd(trapResult[i]);
         if (ptr)
             return js::Throw(cx, trapResult[i], JSMSG_OWNKEYS_DUPLICATE);
 
--- a/js/src/shell/js.cpp
+++ b/js/src/shell/js.cpp
@@ -4479,18 +4479,16 @@ BinParse(JSContext* cx, unsigned argc, V
     }
 
 
     CompileOptions options(cx);
     options.setIntroductionType("js shell bin parse")
            .setFileAndLine("<ArrayBuffer>", 1);
 
     UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     JS::Result<ParseNode*> parsed(nullptr);
     if (useMultipart) {
         // Note: We need to keep `reader` alive as long as we can use `parsed`.
         BinASTParser<BinTokenReaderMultipart> reader(cx, cx->tempLifoAlloc(), usedNames, options);
 
         parsed = reader.parse(buf_data, buf_length);
 
@@ -4576,18 +4574,16 @@ Parse(JSContext* cx, unsigned argc, Valu
     const char16_t* chars = stableChars.twoByteRange().begin().get();
 
     CompileOptions options(cx);
     options.setIntroductionType("js shell parse")
            .setFileAndLine("<string>", 1)
            .setAllowSyntaxParser(allowSyntaxParser);
 
     UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(cx, options,
                                                                                  Nothing()));
     if (!sourceObject)
         return false;
 
     Parser<FullParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(), options, chars, length,
                                               /* foldConstants = */ false, usedNames, nullptr,
@@ -4633,18 +4629,16 @@ SyntaxParse(JSContext* cx, unsigned argc
 
     AutoStableStringChars stableChars(cx);
     if (!stableChars.initTwoByte(cx, scriptContents))
         return false;
 
     const char16_t* chars = stableChars.twoByteRange().begin().get();
     size_t length = scriptContents->length();
     UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(cx, options,
                                                                                  Nothing()));
     if (!sourceObject)
         return false;
 
     Parser<frontend::SyntaxParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(),
                                                           options, chars, length, false,
--- a/js/src/vm/ArrayBufferObject.cpp
+++ b/js/src/vm/ArrayBufferObject.cpp
@@ -1495,21 +1495,16 @@ ArrayBufferObject::addView(JSContext* cx
 static size_t VIEW_LIST_MAX_LENGTH = 500;
 
 bool
 InnerViewTable::addView(JSContext* cx, ArrayBufferObject* buffer, ArrayBufferViewObject* view)
 {
     // ArrayBufferObject entries are only added when there are multiple views.
     MOZ_ASSERT(buffer->firstView());
 
-    if (!map.initialized() && !map.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     Map::AddPtr p = map.lookupForAdd(buffer);
 
     MOZ_ASSERT(!gc::IsInsideNursery(buffer));
     bool addToNursery = nurseryKeysValid && gc::IsInsideNursery(view);
 
     if (p) {
         ViewVector& views = p->value();
         MOZ_ASSERT(!views.empty());
@@ -1548,19 +1543,16 @@ InnerViewTable::addView(JSContext* cx, A
         nurseryKeysValid = false;
 
     return true;
 }
 
 InnerViewTable::ViewVector*
 InnerViewTable::maybeViewsUnbarriered(ArrayBufferObject* buffer)
 {
-    if (!map.initialized())
-        return nullptr;
-
     Map::Ptr p = map.lookup(buffer);
     if (p)
         return &p->value();
     return nullptr;
 }
 
 void
 InnerViewTable::removeViews(ArrayBufferObject* buffer)
@@ -1623,19 +1615,16 @@ InnerViewTable::sweepAfterMinorGC()
 
         nurseryKeysValid = true;
     }
 }
 
 size_t
 InnerViewTable::sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf)
 {
-    if (!map.initialized())
-        return 0;
-
     size_t vectorSize = 0;
     for (Map::Enum e(map); !e.empty(); e.popFront())
         vectorSize += e.front().value().sizeOfExcludingThis(mallocSizeOf);
 
     return vectorSize
          + map.shallowSizeOfExcludingThis(mallocSizeOf)
          + nurseryKeys.sizeOfExcludingThis(mallocSizeOf);
 }
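
sizeOfExcludingThis() above drops its initialized() guard because a table
whose storage was never allocated reports a zero shallow size and enumerates
nothing. A minimal sketch (SizeOfIntMap is hypothetical; IntMap as in
testHashTable.cpp):

    static size_t
    SizeOfIntMap(const IntMap& map, mozilla::MallocSizeOf mallocSizeOf)
    {
        // Zero when the entry storage was never allocated; no guard needed.
        return map.shallowSizeOfExcludingThis(mallocSizeOf);
    }
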
--- a/js/src/vm/Caches.cpp
+++ b/js/src/vm/Caches.cpp
@@ -7,25 +7,16 @@
 #include "vm/Caches-inl.h"
 
 #include "mozilla/PodOperations.h"
 
 using namespace js;
 
 using mozilla::PodZero;
 
-bool
-RuntimeCaches::init()
-{
-    if (!evalCache.init())
-        return false;
-
-    return true;
-}
-
 void
 NewObjectCache::clearNurseryObjects(JSRuntime* rt)
 {
     for (unsigned i = 0; i < mozilla::ArrayLength(entries); ++i) {
         Entry& e = entries[i];
         NativeObject* obj = reinterpret_cast<NativeObject*>(&e.templateObject);
         if (IsInsideNursery(e.key) ||
             rt->gc.nursery().isInside(obj->slots_) ||
--- a/js/src/vm/Caches.h
+++ b/js/src/vm/Caches.h
@@ -242,27 +242,24 @@ class RuntimeCaches
 {
   public:
     js::GSNCache gsnCache;
     js::EnvironmentCoordinateNameCache envCoordinateNameCache;
     js::NewObjectCache newObjectCache;
     js::UncompressedSourceCache uncompressedSourceCache;
     js::EvalCache evalCache;
 
-    bool init();
-
     void purgeForMinorGC(JSRuntime* rt) {
         newObjectCache.clearNurseryObjects(rt);
         evalCache.sweep();
     }
 
     void purgeForCompaction() {
         newObjectCache.purge();
-        if (evalCache.initialized())
-            evalCache.clear();
+        evalCache.clear();
     }
 
     void purge() {
         purgeForCompaction();
         gsnCache.purge();
         envCoordinateNameCache.purge();
         uncompressedSourceCache.purge();
     }
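
purgeForCompaction() above can call evalCache.clear() unconditionally:
clearing or compacting a table whose storage was never allocated is a safe
no-op. Sketch (PurgeSketch is hypothetical; IntSet as in testHashTable.cpp):

    static void
    PurgeSketch(IntSet& set)
    {
        set.clear();     // no-op when the storage was never allocated
        set.compact();   // likewise; frees the storage once the set is empty
    }
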
--- a/js/src/vm/CodeCoverage.cpp
+++ b/js/src/vm/CodeCoverage.cpp
@@ -109,17 +109,17 @@ LCovSource::exportInto(GenericPrinter& o
     outFNDA_.exportInto(out);
     out.printf("FNF:%zu\n", numFunctionsFound_);
     out.printf("FNH:%zu\n", numFunctionsHit_);
 
     outBRDA_.exportInto(out);
     out.printf("BRF:%zu\n", numBranchesFound_);
     out.printf("BRH:%zu\n", numBranchesHit_);
 
-    if (linesHit_.initialized()) {
+    if (!linesHit_.empty()) {
         for (size_t lineno = 1; lineno <= maxLineHit_; ++lineno) {
             if (auto p = linesHit_.lookup(lineno))
                 out.printf("DA:%zu,%" PRIu64 "\n", lineno, p->value());
         }
     }
 
     out.printf("LF:%zu\n", numLinesInstrumented_);
     out.printf("LH:%zu\n", numLinesHit_);
@@ -135,19 +135,16 @@ LCovSource::writeScriptName(LSprinter& o
         return EscapedStringPrinter(out, fun->displayAtom(), 0);
     out.printf("top-level");
     return true;
 }
 
 bool
 LCovSource::writeScript(JSScript* script)
 {
-    if (!linesHit_.initialized() && !linesHit_.init())
-        return false;
-
     numFunctionsFound_++;
     outFN_.printf("FN:%u,", script->lineno());
     if (!writeScriptName(outFN_, script))
         return false;
     outFN_.put("\n", 1);
 
     uint64_t hits = 0;
     ScriptCounts* sc = nullptr;
--- a/js/src/vm/Compartment.cpp
+++ b/js/src/vm/Compartment.cpp
@@ -33,30 +33,20 @@
 #include "vm/NativeObject-inl.h"
 #include "vm/UnboxedObject-inl.h"
 
 using namespace js;
 using namespace js::gc;
 
 Compartment::Compartment(Zone* zone)
   : zone_(zone),
-    runtime_(zone->runtimeFromAnyThread())
+    runtime_(zone->runtimeFromAnyThread()),
+    crossCompartmentWrappers(0)
 {}
 
-bool
-Compartment::init(JSContext* cx)
-{
-    if (!crossCompartmentWrappers.init(0)) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
-    return true;
-}
-
 #ifdef JSGC_HASH_TABLE_CHECKS
 
 namespace {
 struct CheckGCThingAfterMovingGCFunctor {
     template <class T> void operator()(T* t) { CheckGCThingAfterMovingGC(*t); }
 };
 } // namespace (anonymous)
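
The member initializer above replaces a fallible init(0): a member table can
now be given its length in the constructor's init list, with no allocation
until first use. Sketch (Registry and entries_ are hypothetical names; IntMap
as in testHashTable.cpp):

    class Registry
    {
        IntMap entries_;

      public:
        Registry() : entries_(0) {}   // cannot fail; storage deferred to first put()
    };
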
 
--- a/js/src/vm/Compartment.h
+++ b/js/src/vm/Compartment.h
@@ -268,17 +268,18 @@ class WrapperMap
         friend class WrapperMap;
 
         InnerMap* map;
 
         Ptr() : InnerMap::Ptr(), map(nullptr) {}
         Ptr(const InnerMap::Ptr& p, InnerMap& m) : InnerMap::Ptr(p), map(&m) {}
     };
 
-    MOZ_MUST_USE bool init(uint32_t len) { return map.init(len); }
+    WrapperMap() {}
+    explicit WrapperMap(size_t aLen) : map(aLen) {}
 
     bool empty() {
         if (map.empty())
             return true;
         for (OuterMap::Enum e(map); !e.empty(); e.popFront()) {
             if (!e.front().value().empty())
                 return false;
         }
@@ -300,18 +301,18 @@ class WrapperMap
             p.map->remove(p);
     }
 
     MOZ_MUST_USE bool put(const CrossCompartmentKey& k, const JS::Value& v) {
         JS::Compartment* c = const_cast<CrossCompartmentKey&>(k).compartment();
         MOZ_ASSERT(k.is<JSString*>() == !c);
         auto p = map.lookupForAdd(c);
         if (!p) {
-            InnerMap m;
-            if (!m.init(InitialInnerMapSize) || !map.add(p, c, std::move(m)))
+            InnerMap m(InitialInnerMapSize);
+            if (!map.add(p, c, std::move(m)))
                 return false;
         }
         return p->value().put(k, v);
     }
 
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) {
         size_t size = map.shallowSizeOfExcludingThis(mallocSizeOf);
         for (OuterMap::Enum e(map); !e.empty(); e.popFront())
@@ -431,17 +432,16 @@ class JS::Compartment
 
   private:
     bool getNonWrapperObjectForCurrentCompartment(JSContext* cx, js::MutableHandleObject obj);
     bool getOrCreateWrapper(JSContext* cx, js::HandleObject existing, js::MutableHandleObject obj);
 
   public:
     explicit Compartment(JS::Zone* zone);
 
-    MOZ_MUST_USE bool init(JSContext* cx);
     void destroy(js::FreeOp* fop);
 
     MOZ_MUST_USE inline bool wrap(JSContext* cx, JS::MutableHandleValue vp);
 
     MOZ_MUST_USE bool wrap(JSContext* cx, js::MutableHandleString strp);
 #ifdef ENABLE_BIGINT
     MOZ_MUST_USE bool wrap(JSContext* cx, js::MutableHandle<JS::BigInt*> bi);
 #endif
--- a/js/src/vm/Debugger.cpp
+++ b/js/src/vm/Debugger.cpp
@@ -692,57 +692,36 @@ Debugger::Debugger(JSContext* cx, Native
     if (logger) {
 #ifdef NIGHTLY_BUILD
         logger->getIterationAndSize(&traceLoggerLastDrainedIteration, &traceLoggerLastDrainedSize);
 #endif
         logger->getIterationAndSize(&traceLoggerScriptedCallsLastDrainedIteration,
                                     &traceLoggerScriptedCallsLastDrainedSize);
     }
 #endif
+
+    cx->runtime()->debuggerList().insertBack(this);
 }
 
 Debugger::~Debugger()
 {
-    MOZ_ASSERT_IF(debuggees.initialized(), debuggees.empty());
+    MOZ_ASSERT(debuggees.empty());
     allocationsLog.clear();
 
     // We don't have to worry about locking here since Debugger is not
     // background finalized.
     JSContext* cx = TlsContext.get();
     if (onNewGlobalObjectWatchersLink.mPrev ||
         onNewGlobalObjectWatchersLink.mNext ||
         cx->runtime()->onNewGlobalObjectWatchers().begin() == JSRuntime::WatchersList::Iterator(this))
     {
         cx->runtime()->onNewGlobalObjectWatchers().remove(this);
     }
 }
 
-bool
-Debugger::init(JSContext* cx)
-{
-    if (!debuggees.init() ||
-        !debuggeeZones.init() ||
-        !frames.init() ||
-        !scripts.init() ||
-        !lazyScripts.init() ||
-        !sources.init() ||
-        !objects.init() ||
-        !observedGCs.init() ||
-        !environments.init() ||
-        !wasmInstanceScripts.init() ||
-        !wasmInstanceSources.init())
-    {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
-    cx->runtime()->debuggerList().insertBack(this);
-    return true;
-}
-
 JS_STATIC_ASSERT(unsigned(JSSLOT_DEBUGFRAME_OWNER) == unsigned(JSSLOT_DEBUGSCRIPT_OWNER));
 JS_STATIC_ASSERT(unsigned(JSSLOT_DEBUGFRAME_OWNER) == unsigned(JSSLOT_DEBUGSOURCE_OWNER));
 JS_STATIC_ASSERT(unsigned(JSSLOT_DEBUGFRAME_OWNER) == unsigned(JSSLOT_DEBUGOBJECT_OWNER));
 JS_STATIC_ASSERT(unsigned(JSSLOT_DEBUGFRAME_OWNER) == unsigned(DebuggerEnvironment::OWNER_SLOT));
 
 /* static */ Debugger*
 Debugger::fromChildJSObject(JSObject* obj)
 {
@@ -2390,17 +2369,16 @@ class MOZ_RAII ExecutionObservableRealms
     explicit ExecutionObservableRealms(JSContext* cx
                                        MOZ_GUARD_OBJECT_NOTIFIER_PARAM)
       : realms_(cx),
         zones_(cx)
     {
         MOZ_GUARD_OBJECT_NOTIFIER_INIT;
     }
 
-    bool init() { return realms_.init() && zones_.init(); }
     bool add(Realm* realm) { return realms_.put(realm) && zones_.put(realm->zone()); }
 
     using RealmRange = HashSet<Realm*>::Range;
     const HashSet<Realm*>* realms() const { return &realms_; }
 
     const HashSet<Zone*>* zones() const override { return &zones_; }
     bool shouldRecompileOrInvalidate(JSScript* script) const override {
         return script->hasBaselineScript() && realms_.has(script->realm());
@@ -2755,17 +2733,17 @@ Debugger::ensureExecutionObservabilityOf
 }
 
 /* static */ bool
 Debugger::ensureExecutionObservabilityOfRealm(JSContext* cx, Realm* realm)
 {
     if (realm->debuggerObservesAllExecution())
         return true;
     ExecutionObservableRealms obs(cx);
-    if (!obs.init() || !obs.add(realm))
+    if (!obs.add(realm))
         return false;
     realm->updateDebuggerObservesAllExecution();
     return updateExecutionObservability(cx, obs, Observing);
 }
 
 /* static */ bool
 Debugger::hookObservesAllExecution(Hook which)
 {
@@ -2806,18 +2784,16 @@ Debugger::observesCoverage() const
 
 // Toggle whether this Debugger's debuggees observe all execution. This is
 // called when a hook that observes all execution is set or unset. See
 // hookObservesAllExecution.
 bool
 Debugger::updateObservesAllExecutionOnDebuggees(JSContext* cx, IsObserving observing)
 {
     ExecutionObservableRealms obs(cx);
-    if (!obs.init())
-        return false;
 
     for (WeakGlobalObjectSet::Range r = debuggees.all(); !r.empty(); r.popFront()) {
         GlobalObject* global = r.front();
         JS::Realm* realm = global->realm();
 
         if (realm->debuggerObservesAllExecution() == observing)
             continue;
 
@@ -2836,18 +2812,16 @@ Debugger::updateObservesAllExecutionOnDe
 
     return true;
 }
 
 bool
 Debugger::updateObservesCoverageOnDebuggees(JSContext* cx, IsObserving observing)
 {
     ExecutionObservableRealms obs(cx);
-    if (!obs.init())
-        return false;
 
     for (WeakGlobalObjectSet::Range r = debuggees.all(); !r.empty(); r.popFront()) {
         GlobalObject* global = r.front();
         Realm* realm = global->realm();
 
         if (realm->debuggerObservesCoverage() == observing)
             continue;
 
@@ -3190,22 +3164,20 @@ Debugger::trace(JSTracer* trc)
     TraceNullableEdge(trc, &uncaughtExceptionHook, "hooks");
 
     // Mark Debugger.Frame objects. These are all reachable from JS, because the
     // corresponding JS frames are still on the stack.
     //
     // (Once we support generator frames properly, we will need
     // weakly-referenced Debugger.Frame objects as well, for suspended generator
     // frames.)
-    if (frames.initialized()) {
-        for (FrameMap::Range r = frames.all(); !r.empty(); r.popFront()) {
-            HeapPtr<DebuggerFrame*>& frameobj = r.front().value();
-            TraceEdge(trc, &frameobj, "live Debugger.Frame");
-            MOZ_ASSERT(frameobj->getPrivate(frameobj->numFixedSlotsMaybeForwarded()));
-        }
+    for (FrameMap::Range r = frames.all(); !r.empty(); r.popFront()) {
+        HeapPtr<DebuggerFrame*>& frameobj = r.front().value();
+        TraceEdge(trc, &frameobj, "live Debugger.Frame");
+        MOZ_ASSERT(frameobj->getPrivate(frameobj->numFixedSlotsMaybeForwarded()));
     }
 
     allocationsLog.trace(trc);
 
     // Trace the weak map from JSScript instances to Debugger.Script objects.
     scripts.trace(trc);
 
     // Trace the weak map from LazyScript instances to Debugger.Script objects.
@@ -3773,18 +3745,16 @@ Debugger::removeDebuggee(JSContext* cx, 
 
     if (!args.requireAtLeast(cx, "Debugger.removeDebuggee", 1))
         return false;
     Rooted<GlobalObject*> global(cx, dbg->unwrapDebuggeeArgument(cx, args[0]));
     if (!global)
         return false;
 
     ExecutionObservableRealms obs(cx);
-    if (!obs.init())
-        return false;
 
     if (dbg->debuggees.has(global)) {
         dbg->removeDebuggeeGlobal(cx->runtime()->defaultFreeOp(), global, nullptr);
 
         // Only update the realm if there are no Debuggers left, as it's
         // expensive to check if no other Debugger has a live script or frame
         // hook on any of the current on-stack debuggee frames.
         if (global->getDebuggers()->empty() && !obs.add(global->realm()))
@@ -3798,18 +3768,16 @@ Debugger::removeDebuggee(JSContext* cx, 
 }
 
 /* static */ bool
 Debugger::removeAllDebuggees(JSContext* cx, unsigned argc, Value* vp)
 {
     THIS_DEBUGGER(cx, argc, vp, "removeAllDebuggees", args, dbg);
 
     ExecutionObservableRealms obs(cx);
-    if (!obs.init())
-        return false;
 
     for (WeakGlobalObjectSet::Enum e(dbg->debuggees); !e.empty(); e.popFront()) {
         Rooted<GlobalObject*> global(cx, e.front());
         dbg->removeDebuggeeGlobal(cx->runtime()->defaultFreeOp(), global, &e);
 
         // See note about adding to the observable set in removeDebuggee.
         if (global->getDebuggers()->empty() && !obs.add(global->realm()))
             return false;
@@ -3935,17 +3903,17 @@ Debugger::construct(JSContext* cx, unsig
     for (unsigned slot = JSSLOT_DEBUG_PROTO_START; slot < JSSLOT_DEBUG_PROTO_STOP; slot++)
         obj->setReservedSlot(slot, proto->getReservedSlot(slot));
     obj->setReservedSlot(JSSLOT_DEBUG_MEMORY_INSTANCE, NullValue());
 
     Debugger* debugger;
     {
         // Construct the underlying C++ object.
         auto dbg = cx->make_unique<Debugger>(cx, obj.get());
-        if (!dbg || !dbg->init(cx))
+        if (!dbg)
             return false;
 
         debugger = dbg.release();
         obj->setPrivate(debugger); // owns the released pointer
     }
 
     // Add the initial debuggees, if any.
     for (unsigned i = 0; i < args.length(); i++) {
@@ -4223,31 +4191,16 @@ class MOZ_STACK_CLASS Debugger::ScriptQu
         innermostForRealm(cx->zone()),
         scriptVector(cx, ScriptVector(cx)),
         lazyScriptVector(cx, LazyScriptVector(cx)),
         wasmInstanceVector(cx, WasmInstanceObjectVector(cx)),
         oom(false)
     {}
 
     /*
-     * Initialize this ScriptQuery. Raise an error and return false if we
-     * haven't enough memory.
-     */
-    bool init() {
-        if (!realms.init() ||
-            !innermostForRealm.init())
-        {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
-        return true;
-    }
-
-    /*
      * Parse the query object |query|, and prepare to match only the scripts
      * it specifies.
      */
     bool parseQuery(HandleObject query) {
         // Check for a 'global' property, which limits the results to those
         // scripts scoped to a particular global object.
         RootedValue global(cx);
         if (!GetProperty(cx, query, query, cx->names().global, &global))
@@ -4736,18 +4689,16 @@ Debugger::findScripts(JSContext* cx, uns
     THIS_DEBUGGER(cx, argc, vp, "findScripts", args, dbg);
 
     if (gc::GCRuntime::temporaryAbortIfWasmGc(cx)) {
         JS_ReportErrorASCII(cx, "API temporarily unavailable under wasm gc");
         return false;
     }
 
     ScriptQuery query(cx, dbg);
-    if (!query.init())
-        return false;
 
     if (args.length() >= 1) {
         RootedObject queryObject(cx, NonNullObject(cx, args[0]));
         if (!queryObject || !query.parseQuery(queryObject))
             return false;
     } else {
         if (!query.omittedQuery())
             return false;
@@ -4841,21 +4792,16 @@ class MOZ_STACK_CLASS Debugger::ObjectQu
     /*
      * Traverse the heap to find all relevant objects and add them to the
      * provided vector.
      */
     bool findObjects() {
         if (!prepareQuery())
             return false;
 
-        if (!debuggeeCompartments.init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
         for (WeakGlobalObjectSet::Range r = dbg->allDebuggees(); !r.empty(); r.popFront()) {
             if (!debuggeeCompartments.put(r.front()->compartment())) {
                 ReportOutOfMemory(cx);
                 return false;
             }
         }
 
         {
@@ -4865,20 +4811,16 @@ class MOZ_STACK_CLASS Debugger::ObjectQu
             RootedObject dbgObj(cx, dbg->object);
             JS::ubi::RootList rootList(cx, maybeNoGC);
             if (!rootList.init(dbgObj)) {
                 ReportOutOfMemory(cx);
                 return false;
             }
 
             Traversal traversal(cx, *this, maybeNoGC.ref());
-            if (!traversal.init()) {
-                ReportOutOfMemory(cx);
-                return false;
-            }
             traversal.wantNames = false;
 
             return traversal.addStart(JS::ubi::Node(&rootList)) &&
                    traversal.traverse();
         }
     }
 
     /*
@@ -5099,18 +5041,16 @@ Debugger::isCompilableUnit(JSContext* cx
     AutoStableStringChars chars(cx);
     if (!chars.initTwoByte(cx, str))
         return false;
 
     bool result = true;
 
     CompileOptions options(cx);
     frontend::UsedNameTracker usedNames(cx);
-    if (!usedNames.init())
-        return false;
 
     RootedScriptSourceObject sourceObject(cx, frontend::CreateScriptSourceObject(cx, options,
                                                                                  Nothing()));
     if (!sourceObject)
         return false;
 
     frontend::Parser<frontend::FullParseHandler, char16_t> parser(cx, cx->tempLifoAlloc(),
                                                                   options, chars.twoByteChars(),
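
The Debugger.cpp hunks above all follow the same shape: hash-table members are usable straight after construction, and the first fallible insertion is what allocates the entry storage. A minimal sketch of the pattern (RecordId is a hypothetical helper; assumes mfbt's mozilla/HashTable.h is available):

    #include "mozilla/HashTable.h"

    using IdSet = mozilla::HashSet<uint32_t>;

    static bool
    RecordId(IdSet& aSeen, uint32_t aId)
    {
        // No init() call is needed before first use. put() returns false
        // on OOM, including an OOM while allocating the entry storage on
        // first use, so the caller's existing failure path covers both.
        return aSeen.put(aId);
    }
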
--- a/js/src/vm/Debugger.h
+++ b/js/src/vm/Debugger.h
@@ -160,20 +160,16 @@ class DebuggerWeakMap : private WeakMap<
     typedef typename Base::Lookup Lookup;
 
     /* Expose WeakMap public interface */
 
     using Base::lookupForAdd;
     using Base::all;
     using Base::trace;
 
-    MOZ_MUST_USE bool init(uint32_t len = 16) {
-        return Base::init(len) && zoneCounts.init();
-    }
-
     template<typename KeyInput, typename ValueInput>
     bool relookupOrAdd(AddPtr& p, const KeyInput& k, const ValueInput& v) {
         MOZ_ASSERT(v->compartment() == this->compartment);
 #ifdef DEBUG
         CheckDebuggeeThing(k, InvisibleKeysOk);
 #endif
         MOZ_ASSERT(!Base::has(k));
         if (!incZoneCount(k->zone()))
@@ -876,17 +872,16 @@ class Debugger : private mozilla::Linked
     static MOZ_MUST_USE bool replaceFrameGuts(JSContext* cx, AbstractFramePtr from,
                                               AbstractFramePtr to,
                                               ScriptFrameIter& iter);
 
   public:
     Debugger(JSContext* cx, NativeObject* dbg);
     ~Debugger();
 
-    MOZ_MUST_USE bool init(JSContext* cx);
     inline const js::GCPtrNativeObject& toJSObject() const;
     inline js::GCPtrNativeObject& toJSObjectRef();
     static inline Debugger* fromJSObject(const JSObject* obj);
     static Debugger* fromChildJSObject(JSObject* obj);
 
     Zone* zone() const { return toJSObject()->zone(); }
 
     bool hasMemory() const;
--- a/js/src/vm/DebuggerMemory.cpp
+++ b/js/src/vm/DebuggerMemory.cpp
@@ -378,18 +378,16 @@ DebuggerMemory::takeCensus(JSContext* cx
 #ifdef ENABLE_WASM_GC
     if (gc::GCRuntime::temporaryAbortIfWasmGc(cx)) {
         JS_ReportErrorASCII(cx, "API temporarily unavailable under wasm gc");
         return false;
     }
 #endif
 
     Census census(cx);
-    if (!census.init())
-        return false;
     CountTypePtr rootType;
 
     RootedObject options(cx);
     if (args.get(0).isObject())
         options = &args[0].toObject();
 
     if (!JS::ubi::ParseCensusOptions(cx, census, options, rootType))
         return false;
@@ -412,20 +410,16 @@ DebuggerMemory::takeCensus(JSContext* cx
         Maybe<JS::AutoCheckCannotGC> maybeNoGC;
         JS::ubi::RootList rootList(cx, maybeNoGC);
         if (!rootList.init(dbgObj)) {
             ReportOutOfMemory(cx);
             return false;
         }
 
         JS::ubi::CensusTraversal traversal(cx, handler, maybeNoGC.ref());
-        if (!traversal.init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
         traversal.wantNames = false;
 
         if (!traversal.addStart(JS::ubi::Node(&rootList)) ||
             !traversal.traverse())
         {
             ReportOutOfMemory(cx);
             return false;
         }
--- a/js/src/vm/EnvironmentObject.cpp
+++ b/js/src/vm/EnvironmentObject.cpp
@@ -53,35 +53,31 @@ js::EnvironmentCoordinateToEnvironmentSh
 }
 
 static const uint32_t ENV_COORDINATE_NAME_THRESHOLD = 20;
 
 void
 EnvironmentCoordinateNameCache::purge()
 {
     shape = nullptr;
-    if (map.initialized())
-        map.finish();
+    map.clearAndCompact();
 }
 
 PropertyName*
 js::EnvironmentCoordinateName(EnvironmentCoordinateNameCache& cache, JSScript* script,
                               jsbytecode* pc)
 {
     Shape* shape = EnvironmentCoordinateToEnvironmentShape(script, pc);
     if (shape != cache.shape && shape->slot() >= ENV_COORDINATE_NAME_THRESHOLD) {
         cache.purge();
-        if (cache.map.init(shape->slot())) {
+        if (cache.map.reserve(shape->slot())) {
             cache.shape = shape;
             Shape::Range<NoGC> r(shape);
             while (!r.empty()) {
-                if (!cache.map.putNew(r.front().slot(), r.front().propid())) {
-                    cache.purge();
-                    break;
-                }
+                cache.map.putNewInfallible(r.front().slot(), r.front().propid());
                 r.popFront();
             }
         }
     }
 
     jsid id;
     EnvironmentCoordinate ec(pc);
     if (shape == cache.shape) {
@@ -2409,23 +2405,17 @@ DebugEnvironments::DebugEnvironments(JSC
  : zone_(zone),
    proxiedEnvs(cx),
    missingEnvs(cx->zone()),
    liveEnvs(cx->zone())
 {}
 
 DebugEnvironments::~DebugEnvironments()
 {
-    MOZ_ASSERT_IF(missingEnvs.initialized(), missingEnvs.empty());
-}
-
-bool
-DebugEnvironments::init()
-{
-    return proxiedEnvs.init() && missingEnvs.init() && liveEnvs.init();
+    MOZ_ASSERT(missingEnvs.empty());
 }
 
 void
 DebugEnvironments::trace(JSTracer* trc)
 {
     proxiedEnvs.trace(trc);
 }
 
@@ -2520,21 +2510,16 @@ DebugEnvironments::ensureRealmData(JSCon
     Realm* realm = cx->realm();
     if (auto* debugEnvs = realm->debugEnvs())
         return debugEnvs;
 
     auto debugEnvs = cx->make_unique<DebugEnvironments>(cx, cx->zone());
     if (!debugEnvs)
         return nullptr;
 
-    if (!debugEnvs->init()) {
-        ReportOutOfMemory(cx);
-        return nullptr;
-    }
-
     realm->debugEnvsRef() = std::move(debugEnvs);
     return realm->debugEnvs();
 }
 
 /* static */ DebugEnvironmentProxy*
 DebugEnvironments::hasDebugEnvironment(JSContext* cx, EnvironmentObject& env)
 {
     DebugEnvironments* envs = env.realm()->debugEnvs();
@@ -3682,18 +3667,16 @@ RemoveReferencedNames(JSContext* cx, Han
 
     return true;
 }
 
 static bool
 AnalyzeEntrainedVariablesInScript(JSContext* cx, HandleScript script, HandleScript innerScript)
 {
     PropertyNameSet remainingNames(cx);
-    if (!remainingNames.init())
-        return false;
 
     for (BindingIter bi(script); bi; bi++) {
         if (bi.closedOver()) {
             PropertyName* name = bi.name()->asPropertyName();
             PropertyNameSet::AddPtr p = remainingNames.lookupForAdd(name);
             if (!p && !remainingNames.add(p, name))
                 return false;
         }
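
The EnvironmentCoordinateNameCache hunk above swaps a fallible putNew() loop for a single up-front reserve() plus putNewInfallible(). A sketch of that fill idiom (FillSlots is a hypothetical helper; keys must be distinct for putNewInfallible):

    #include "mozilla/HashTable.h"

    using SlotMap = mozilla::HashMap<uint32_t, uint32_t>;

    static bool
    FillSlots(SlotMap& aMap, const uint32_t* aValues, uint32_t aLength)
    {
        // One fallible capacity check up front...
        if (!aMap.reserve(aLength))
            return false;

        // ...then the loop cannot fail: every key is new, and capacity
        // for aLength entries has already been reserved.
        for (uint32_t i = 0; i < aLength; i++)
            aMap.putNewInfallible(i, aValues[i]);
        return true;
    }
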
--- a/js/src/vm/EnvironmentObject.h
+++ b/js/src/vm/EnvironmentObject.h
@@ -989,18 +989,16 @@ class DebugEnvironments
 
   public:
     DebugEnvironments(JSContext* cx, Zone* zone);
     ~DebugEnvironments();
 
     Zone* zone() const { return zone_; }
 
   private:
-    bool init();
-
     static DebugEnvironments* ensureRealmData(JSContext* cx);
 
     template <typename Environment, typename Scope>
     static void onPopGeneric(JSContext* cx, const EnvironmentIter& ei);
 
   public:
     void trace(JSTracer* trc);
     void sweep();
--- a/js/src/vm/GeckoProfiler.cpp
+++ b/js/src/vm/GeckoProfiler.cpp
@@ -37,26 +37,16 @@ GeckoProfilerRuntime::GeckoProfilerRunti
     strings(mutexid::GeckoProfilerStrings),
     slowAssertions(false),
     enabled_(false),
     eventMarker_(nullptr)
 {
     MOZ_ASSERT(rt != nullptr);
 }
 
-bool
-GeckoProfilerRuntime::init()
-{
-    auto locked = strings.lock();
-    if (!locked->init())
-        return false;
-
-    return true;
-}
-
 void
 GeckoProfilerThread::setProfilingStack(ProfilingStack* profilingStack)
 {
     profilingStack_ = profilingStack;
 }
 
 void
 GeckoProfilerRuntime::setEventMarker(void (*fn)(const char*))
@@ -158,17 +148,16 @@ GeckoProfilerRuntime::enable(bool enable
         r->wasm.ensureProfilingLabels(enabled);
 }
 
 /* Lookup the string for the function/script, creating one if necessary */
 const char*
 GeckoProfilerRuntime::profileString(JSScript* script, JSFunction* maybeFun)
 {
     auto locked = strings.lock();
-    MOZ_ASSERT(locked->initialized());
 
     ProfileStringMap::AddPtr s = locked->lookupForAdd(script);
 
     if (!s) {
         auto str = allocProfileString(script, maybeFun);
         if (!str || !locked->add(s, script, std::move(str)))
             return nullptr;
     }
@@ -182,18 +171,16 @@ GeckoProfilerRuntime::onScriptFinalized(
     /*
      * This function is called whenever a script is destroyed, regardless of
      * whether profiling has been turned on, so don't invoke a function on an
      * invalid hash set. Also, even if profiling was enabled but then turned
      * off, we still want to remove the string, so no check of enabled() is
      * done.
      */
     auto locked = strings.lock();
-    if (!locked->initialized())
-        return;
     if (ProfileStringMap::Ptr entry = locked->lookup(script))
         locked->remove(entry);
 }
 
 void
 GeckoProfilerRuntime::markEvent(const char* event)
 {
     MOZ_ASSERT(enabled());
@@ -329,36 +316,30 @@ GeckoProfilerThread::trace(JSTracer* trc
             profilingStack_->frames[i].trace(trc);
     }
 }
 
 void
 GeckoProfilerRuntime::fixupStringsMapAfterMovingGC()
 {
     auto locked = strings.lock();
-    if (!locked->initialized())
-        return;
-
     for (ProfileStringMap::Enum e(locked.get()); !e.empty(); e.popFront()) {
         JSScript* script = e.front().key();
         if (IsForwarded(script)) {
             script = Forwarded(script);
             e.rekeyFront(script);
         }
     }
 }
 
 #ifdef JSGC_HASH_TABLE_CHECKS
 void
 GeckoProfilerRuntime::checkStringsMapAfterMovingGC()
 {
     auto locked = strings.lock();
-    if (!locked->initialized())
-        return;
-
     for (auto r = locked->all(); !r.empty(); r.popFront()) {
         JSScript* script = r.front().key();
         CheckGCThingAfterMovingGC(script);
         auto ptr = locked->lookup(script);
         MOZ_RELEASE_ASSERT(ptr.found() && &*ptr == &r.front());
     }
 }
 #endif
--- a/js/src/vm/GeckoProfiler.h
+++ b/js/src/vm/GeckoProfiler.h
@@ -117,18 +117,16 @@ class GeckoProfilerRuntime
     uint32_t             enabled_;
     void                (*eventMarker_)(const char*);
 
     UniqueChars allocProfileString(JSScript* script, JSFunction* function);
 
   public:
     explicit GeckoProfilerRuntime(JSRuntime* rt);
 
-    bool init();
-
     /* management of whether instrumentation is on or off */
     bool enabled() { return enabled_; }
     void enable(bool enabled);
     void enableSlowAssertions(bool enabled) { slowAssertions = enabled; }
     bool slowAssertionsEnabled() { return slowAssertions; }
 
     void setEventMarker(void (*fn)(const char*));
     const char* profileString(JSScript* script, JSFunction* maybeFun);
--- a/js/src/vm/Iteration.cpp
+++ b/js/src/vm/Iteration.cpp
@@ -103,20 +103,18 @@ typedef HashSet<jsid, DefaultHasher<jsid
 
 template <bool CheckForDuplicates>
 static inline bool
 Enumerate(JSContext* cx, HandleObject pobj, jsid id,
           bool enumerable, unsigned flags, Maybe<IdSet>& ht, AutoIdVector* props)
 {
     if (CheckForDuplicates) {
         if (!ht) {
-            ht.emplace(cx);
             // Most of the time there are only a handful of entries.
-            if (!ht->init(5))
-                return false;
+            ht.emplace(cx, 5);
         }
 
         // If we've already seen this, we definitely won't add it.
         IdSet::AddPtr p = ht->lookupForAdd(id);
         if (MOZ_UNLIKELY(!!p))
             return true;
 
         // It's not necessary to add properties to the hash table at the end of
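
The Enumerate() hunk above collapses the old emplace-then-init() two-step into one emplace() that forwards a length hint to the constructor's new length parameter. A sketch with the default malloc policy (EnsureSet is a hypothetical helper; the (policy, length) constructor overload is assumed from the `ht.emplace(cx, 5)` call above):

    #include "mozilla/AllocPolicy.h"
    #include "mozilla/HashTable.h"
    #include "mozilla/Maybe.h"

    using IdSet = mozilla::HashSet<uint64_t>;

    static void
    EnsureSet(mozilla::Maybe<IdSet>& aSet)
    {
        // Mirrors `ht.emplace(cx, 5)` above: the allocation policy plus a
        // length hint. The hint only sizes the eventual allocation; entry
        // storage is still created lazily, on the first insertion.
        if (aSet.isNothing())
            aSet.emplace(mozilla::MallocAllocPolicy(), 5);
    }
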
--- a/js/src/vm/JSAtom.cpp
+++ b/js/src/vm/JSAtom.cpp
@@ -166,18 +166,18 @@ JSRuntime::initializeAtoms(JSContext* cx
 
         atoms_ = js_new<AtomsTable>();
         if (!atoms_)
             return false;
 
         return atoms_->init();
     }
 
-    permanentAtomsDuringInit_ = js_new<AtomSet>();
-    if (!permanentAtomsDuringInit_ || !permanentAtomsDuringInit_->init(JS_PERMANENT_ATOM_SIZE))
+    permanentAtomsDuringInit_ = js_new<AtomSet>(JS_PERMANENT_ATOM_SIZE);
+    if (!permanentAtomsDuringInit_)
         return false;
 
     staticStrings = js_new<StaticStrings>();
     if (!staticStrings || !staticStrings->init(cx))
         return false;
 
     static const CommonNameInfo cachedNames[] = {
 #define COMMON_NAME_INFO(idpart, id, text) { js_##idpart##_str, sizeof(text) - 1 },
@@ -265,16 +265,17 @@ class AtomsTable::AutoLock
     MOZ_ALWAYS_INLINE ~AutoLock() {
         if (lock)
             lock->unlock();
     }
 };
 
 AtomsTable::Partition::Partition(uint32_t index)
   : lock(MutexId { mutexid::AtomsTable.name, mutexid::AtomsTable.order + index }),
+    atoms(InitialTableSize),
     atomsAddedWhileSweeping(nullptr)
 {}
 
 AtomsTable::Partition::~Partition()
 {
     MOZ_ASSERT(!atomsAddedWhileSweeping);
 }
 
@@ -286,18 +287,16 @@ AtomsTable::~AtomsTable()
 
 bool
 AtomsTable::init()
 {
     for (size_t i = 0; i < PartitionCount; i++) {
         partitions[i] = js_new<Partition>(i);
         if (!partitions[i])
             return false;
-        if (!partitions[i]->atoms.init(InitialTableSize))
-            return false;
     }
     return true;
 }
 
 void
 AtomsTable::lockAll()
 {
     MOZ_ASSERT(!allPartitionsLocked);
@@ -490,17 +489,17 @@ AtomsTable::startIncrementalSweep()
 {
     MOZ_ASSERT(JS::RuntimeHeapIsCollecting());
 
     bool ok = true;
     for (size_t i = 0; i < PartitionCount; i++) {
         auto& part = *partitions[i];
 
         auto newAtoms = js_new<AtomSet>();
-        if (!newAtoms || !newAtoms->init()) {
+        if (!newAtoms) {
             ok = false;
             break;
         }
 
         MOZ_ASSERT(!part.atomsAddedWhileSweeping);
         part.atomsAddedWhileSweeping = newAtoms;
     }
 
--- a/js/src/vm/JSScript.cpp
+++ b/js/src/vm/JSScript.cpp
@@ -1059,21 +1059,16 @@ JSScript::initScriptCounts(JSContext* cx
         base.infallibleEmplaceBack(pcToOffset(jumpTargets[i]));
 
     // Create realm's scriptCountsMap if necessary.
     if (!realm()->scriptCountsMap) {
         auto map = cx->make_unique<ScriptCountsMap>();
         if (!map)
             return false;
 
-        if (!map->init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
         realm()->scriptCountsMap = std::move(map);
     }
 
     // Allocate the ScriptCounts.
     UniqueScriptCounts sc = cx->make_unique<ScriptCounts>(std::move(base));
     if (!sc) {
         ReportOutOfMemory(cx);
         return false;
@@ -1536,17 +1531,17 @@ UncompressedSourceCache::lookup(const Sc
 bool
 UncompressedSourceCache::put(const ScriptSourceChunk& ssc, UniqueTwoByteChars str,
                              AutoHoldEntry& holder)
 {
     MOZ_ASSERT(!holder_);
 
     if (!map_) {
         UniquePtr<Map> map = MakeUnique<Map>();
-        if (!map || !map->init())
+        if (!map)
             return false;
 
         map_ = std::move(map);
     }
 
     if (!map_->put(ssc, std::move(str)))
         return false;
 
@@ -2020,21 +2015,16 @@ ScriptSource::xdrEncodeTopLevel(JSContex
         return false;
     }
 
     MOZ_ASSERT(hasEncoder());
     auto failureCase = mozilla::MakeScopeExit([&] {
         xdrEncoder_.reset(nullptr);
     });
 
-    if (!xdrEncoder_->init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     RootedScript s(cx, script);
     XDRResult res = xdrEncoder_->codeScript(&s);
     if (res.isErr()) {
         // On encoding failure, let failureCase destroy encoder and return true
         // to avoid failing any currently executing script.
         if (res.unwrapErr() & JS::TranscodeResult_Failure)
             return true;
 
@@ -2496,18 +2486,16 @@ js::SweepScriptData(JSRuntime* rt)
 }
 
 void
 js::FreeScriptData(JSRuntime* rt)
 {
     AutoLockScriptData lock(rt);
 
     ScriptDataTable& table = rt->scriptDataTable(lock);
-    if (!table.initialized())
-        return;
 
     // The table should be empty unless the embedding leaked GC things.
     MOZ_ASSERT_IF(rt->gc.shutdownCollectedEverything(), table.empty());
 
     for (ScriptDataTable::Enum e(table); !e.empty(); e.popFront()) {
 #ifdef DEBUG
         SharedScriptData* scriptData = e.front();
         fprintf(stderr, "ERROR: GC found live SharedScriptData %p with ref count %d at shutdown\n",
@@ -2716,21 +2704,16 @@ JSScript::initScriptName(JSContext* cx)
         return true;
 
     // Create realm's scriptNameMap if necessary.
     if (!realm()->scriptNameMap) {
         auto map = cx->make_unique<ScriptNameMap>();
         if (!map)
             return false;
 
-        if (!map->init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
         realm()->scriptNameMap = std::move(map);
     }
 
     UniqueChars name = DuplicateString(filename());
     if (!name) {
         ReportOutOfMemory(cx);
         return false;
     }
@@ -3179,29 +3162,27 @@ JSScript::finalize(FreeOp* fop)
 }
 
 static const uint32_t GSN_CACHE_THRESHOLD = 100;
 
 void
 GSNCache::purge()
 {
     code = nullptr;
-    if (map.initialized())
-        map.finish();
+    map.clearAndCompact();
 }
 
 jssrcnote*
 js::GetSrcNote(GSNCache& cache, JSScript* script, jsbytecode* pc)
 {
     size_t target = pc - script->code();
     if (target >= script->length())
         return nullptr;
 
     if (cache.code == script->code()) {
-        MOZ_ASSERT(cache.map.initialized());
         GSNCache::Map::Ptr p = cache.map.lookup(pc);
         return p ? p->value() : nullptr;
     }
 
     size_t offset = 0;
     jssrcnote* result;
     for (jssrcnote* sn = script->notes(); ; sn = SN_NEXT(sn)) {
         if (SN_IS_TERMINATOR(sn)) {
@@ -3212,28 +3193,25 @@ js::GetSrcNote(GSNCache& cache, JSScript
         if (offset == target && SN_IS_GETTABLE(sn)) {
             result = sn;
             break;
         }
     }
 
     if (cache.code != script->code() && script->length() >= GSN_CACHE_THRESHOLD) {
         unsigned nsrcnotes = 0;
-        for (jssrcnote* sn = script->notes(); !SN_IS_TERMINATOR(sn);
-             sn = SN_NEXT(sn))
-        {
+        for (jssrcnote* sn = script->notes(); !SN_IS_TERMINATOR(sn); sn = SN_NEXT(sn)) {
             if (SN_IS_GETTABLE(sn))
                 ++nsrcnotes;
         }
         if (cache.code) {
-            MOZ_ASSERT(cache.map.initialized());
-            cache.map.finish();
+            cache.map.clear();
             cache.code = nullptr;
         }
-        if (cache.map.init(nsrcnotes)) {
+        if (cache.map.reserve(nsrcnotes)) {
             pc = script->code();
             for (jssrcnote* sn = script->notes(); !SN_IS_TERMINATOR(sn);
                  sn = SN_NEXT(sn))
             {
                 pc += SN_DELTA(sn);
                 if (SN_IS_GETTABLE(sn))
                     cache.map.putNewInfallible(pc, sn);
             }
@@ -3791,21 +3769,16 @@ JSScript::ensureHasDebugScript(JSContext
         return false;
 
     /* Create realm's debugScriptMap if necessary. */
     if (!realm()->debugScriptMap) {
         auto map = cx->make_unique<DebugScriptMap>();
         if (!map)
             return false;
 
-        if (!map->init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
         realm()->debugScriptMap = std::move(map);
     }
 
     if (!realm()->debugScriptMap->putNew(this, std::move(debug))) {
         ReportOutOfMemory(cx);
         return false;
     }
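
GSNCache::purge() above shows the cache idiom under the new API: clearAndCompact() empties the table and releases its entry storage, which is what the old initialized()/finish() dance achieved. A sketch (PcCacheSketch is hypothetical):

    #include "mozilla/HashTable.h"

    struct PcCacheSketch
    {
        const void* code = nullptr;
        mozilla::HashMap<const void*, unsigned> map;

        void purge()
        {
            code = nullptr;
            // Empties the map and gives back its entry storage; safe to
            // call even if nothing was ever inserted.
            map.clearAndCompact();
        }
    };
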
 
--- a/js/src/vm/MemoryMetrics.cpp
+++ b/js/src/vm/MemoryMetrics.cpp
@@ -265,24 +265,16 @@ struct StatsClosure
     wasm::Table::SeenSet wasmSeenTables;
     bool anonymize;
 
     StatsClosure(RuntimeStats* rt, ObjectPrivateVisitor* v, bool anon)
       : rtStats(rt),
         opv(v),
         anonymize(anon)
     {}
-
-    bool init() {
-        return seenSources.init() &&
-               wasmSeenMetadata.init() &&
-               wasmSeenBytes.init() &&
-               wasmSeenCode.init() &&
-               wasmSeenTables.init();
-    }
 };
 
 static void
 DecommittedArenasChunkCallback(JSRuntime* rt, void* data, gc::Chunk* chunk)
 {
     // This case is common and fast to check.  Do it first.
     if (chunk->decommittedArenas.isAllClear())
         return;
@@ -621,30 +613,30 @@ StatsCellCallback(JSRuntime* rt, void* d
     zStats->unusedGCThings.addToKind(traceKind, -thingSize);
 }
 
 bool
 ZoneStats::initStrings()
 {
     isTotals = false;
     allStrings = js_new<StringsHashMap>();
-    if (!allStrings || !allStrings->init()) {
+    if (!allStrings) {
         js_delete(allStrings);
         allStrings = nullptr;
         return false;
     }
     return true;
 }
 
 bool
 RealmStats::initClasses()
 {
     isTotals = false;
     allClasses = js_new<ClassesHashMap>();
-    if (!allClasses || !allClasses->init()) {
+    if (!allClasses) {
         js_delete(allClasses);
         allClasses = nullptr;
         return false;
     }
     return true;
 }
 
 static bool
@@ -767,18 +759,16 @@ CollectRuntimeStatsHelper(JSContext* cx,
     rtStats->gcHeapUnusedChunks =
         size_t(JS_GetGCParameter(cx, JSGC_UNUSED_CHUNKS)) * gc::ChunkSize;
 
     IterateChunks(cx, &rtStats->gcHeapDecommittedArenas,
                   DecommittedArenasChunkCallback);
 
     // Take the per-compartment measurements.
     StatsClosure closure(rtStats, opv, anonymize);
-    if (!closure.init())
-        return false;
     IterateHeapUnbarriered(cx, &closure,
                            StatsZoneCallback,
                            StatsRealmCallback,
                            StatsArenaCallback,
                            statsCellCallback);
 
     // Take the "explicit/js/runtime/" measurements.
     rt->addSizeOfIncludingThis(rtStats->mallocSizeOf_, &rtStats->runtime);
@@ -930,18 +920,16 @@ AddSizeOfTab(JSContext* cx, HandleObject
         return false;
 
     if (!rtStats.zoneStatsVector.reserve(1))
         return false;
 
     // Take the per-compartment measurements. No need to anonymize because
     // these measurements will be aggregated.
     StatsClosure closure(&rtStats, opv, /* anonymize = */ false);
-    if (!closure.init())
-        return false;
     IterateHeapUnbarrieredForZone(cx, zone, &closure,
                                   StatsZoneCallback,
                                   StatsRealmCallback,
                                   StatsArenaCallback,
                                   StatsCellCallback<CoarseGrained>);
 
     MOZ_ASSERT(rtStats.zoneStatsVector.length() == 1);
     rtStats.zTotals.addSizes(rtStats.zoneStatsVector[0]);
--- a/js/src/vm/ObjectGroup.cpp
+++ b/js/src/vm/ObjectGroup.cpp
@@ -526,23 +526,16 @@ ObjectGroup::defaultNewGroup(JSContext* 
     AutoEnterAnalysis enter(cx);
 
     ObjectGroupRealm::NewTable*& table = groups.defaultNewTable;
 
     if (!table) {
         table = cx->new_<ObjectGroupRealm::NewTable>(cx->zone());
         if (!table)
             return nullptr;
-
-        if (!table->init()) {
-            js_delete(table);
-            table = nullptr;
-            ReportOutOfMemory(cx);
-            return nullptr;
-        }
     }
 
     if (proto.isObject() && !proto.toObject()->isDelegate()) {
         RootedObject protoObj(cx, proto.toObject());
         if (!JSObject::setDelegate(cx, protoObj))
             return nullptr;
 
         // Objects which are prototypes of one another should be singletons, so
@@ -634,23 +627,16 @@ ObjectGroup::lazySingletonGroup(JSContex
     MOZ_ASSERT_IF(proto.isObject(), cx->compartment() == proto.toObject()->compartment());
 
     ObjectGroupRealm::NewTable*& table = realm.lazyTable;
 
     if (!table) {
         table = cx->new_<ObjectGroupRealm::NewTable>(cx->zone());
         if (!table)
             return nullptr;
-
-        if (!table->init()) {
-            ReportOutOfMemory(cx);
-            js_delete(table);
-            table = nullptr;
-            return nullptr;
-        }
     }
 
     ObjectGroupRealm::NewTable::AddPtr p =
         table->lookupForAdd(ObjectGroupRealm::NewEntry::Lookup(clasp, proto, nullptr));
     if (p) {
         ObjectGroup* group = p->group;
         MOZ_ASSERT(group->lazy());
 
@@ -865,23 +851,16 @@ ObjectGroup::newArrayObject(JSContext* c
 
     ObjectGroupRealm& realm = ObjectGroupRealm::getForNewObject(cx);
     ObjectGroupRealm::ArrayObjectTable*& table = realm.arrayObjectTable;
 
     if (!table) {
         table = cx->new_<ObjectGroupRealm::ArrayObjectTable>();
         if (!table)
             return nullptr;
-
-        if (!table->init()) {
-            ReportOutOfMemory(cx);
-            js_delete(table);
-            table = nullptr;
-            return nullptr;
-        }
     }
 
     ObjectGroupRealm::ArrayObjectKey key(elementType);
     DependentAddPtr<ObjectGroupRealm::ArrayObjectTable> p(cx, *table, key);
 
     RootedObjectGroup group(cx);
     if (p) {
         group = p->value();
@@ -1191,23 +1170,16 @@ ObjectGroup::newPlainObject(JSContext* c
 
     ObjectGroupRealm& realm = ObjectGroupRealm::getForNewObject(cx);
     ObjectGroupRealm::PlainObjectTable*& table = realm.plainObjectTable;
 
     if (!table) {
         table = cx->new_<ObjectGroupRealm::PlainObjectTable>();
         if (!table)
             return nullptr;
-
-        if (!table->init()) {
-            ReportOutOfMemory(cx);
-            js_delete(table);
-            table = nullptr;
-            return nullptr;
-        }
     }
 
     ObjectGroupRealm::PlainObjectKey::Lookup lookup(properties, nproperties);
     ObjectGroupRealm::PlainObjectTable::Ptr p = table->lookup(lookup);
 
     if (!p) {
         if (!CanShareObjectGroup(properties, nproperties))
             return NewPlainObjectWithProperties(cx, properties, nproperties, newKind);
@@ -1456,23 +1428,16 @@ ObjectGroup::allocationSiteGroup(JSConte
 
     ObjectGroupRealm& realm = ObjectGroupRealm::getForNewObject(cx);
     ObjectGroupRealm::AllocationSiteTable*& table = realm.allocationSiteTable;
 
     if (!table) {
         table = cx->new_<ObjectGroupRealm::AllocationSiteTable>(cx->zone());
         if (!table)
             return nullptr;
-
-        if (!table->init()) {
-            ReportOutOfMemory(cx);
-            js_delete(table);
-            table = nullptr;
-            return nullptr;
-        }
     }
 
     RootedScript script(cx, scriptArg);
     JSObject* proto = protoArg;
     if (!proto && kind != JSProto_Null) {
         proto = GlobalObject::getOrCreatePrototype(cx, kind);
         if (!proto)
             return nullptr;
@@ -1769,32 +1734,32 @@ ObjectGroupRealm::addSizeOfExcludingThis
 
     if (lazyTable)
         *realmTables += lazyTable->sizeOfIncludingThis(mallocSizeOf);
 }
 
 void
 ObjectGroupRealm::clearTables()
 {
-    if (allocationSiteTable && allocationSiteTable->initialized())
+    if (allocationSiteTable)
         allocationSiteTable->clear();
-    if (arrayObjectTable && arrayObjectTable->initialized())
+    if (arrayObjectTable)
         arrayObjectTable->clear();
-    if (plainObjectTable && plainObjectTable->initialized()) {
+    if (plainObjectTable) {
         for (PlainObjectTable::Enum e(*plainObjectTable); !e.empty(); e.popFront()) {
             const PlainObjectKey& key = e.front().key();
             PlainObjectEntry& entry = e.front().value();
             js_free(key.properties);
             js_free(entry.types);
         }
         plainObjectTable->clear();
     }
-    if (defaultNewTable && defaultNewTable->initialized())
+    if (defaultNewTable)
         defaultNewTable->clear();
-    if (lazyTable && lazyTable->initialized())
+    if (lazyTable)
         lazyTable->clear();
     defaultNewGroupCache.purge();
 }
 
 /* static */ bool
 ObjectGroupRealm::PlainObjectTableSweepPolicy::needsSweep(PlainObjectKey* key,
                                                           PlainObjectEntry* entry)
 {
@@ -1826,17 +1791,17 @@ ObjectGroupRealm::sweep()
 
 void
 ObjectGroupRealm::fixupNewTableAfterMovingGC(NewTable* table)
 {
     /*
      * Each entry's hash depends on the object's prototype and we can't tell
      * whether that has been moved or not in sweepNewObjectGroupTable().
      */
-    if (table && table->initialized()) {
+    if (table) {
         for (NewTable::Enum e(*table); !e.empty(); e.popFront()) {
             NewEntry& entry = e.mutableFront();
 
             ObjectGroup* group = entry.group.unbarrieredGet();
             if (IsForwarded(group)) {
                 group = Forwarded(group);
                 entry.group.set(group);
             }
@@ -1857,17 +1822,17 @@ ObjectGroupRealm::fixupNewTableAfterMovi
 
 void
 ObjectGroupRealm::checkNewTableAfterMovingGC(NewTable* table)
 {
     /*
      * Assert that nothing points into the nursery or needs to be relocated, and
      * that the hash table entries are discoverable.
      */
-    if (!table || !table->initialized())
+    if (!table)
         return;
 
     for (auto r = table->all(); !r.empty(); r.popFront()) {
         NewEntry entry = r.front();
         CheckGCThingAfterMovingGC(entry.group.unbarrieredGet());
         TaggedProto proto = entry.group.unbarrieredGet()->proto();
         if (proto.isObject())
             CheckGCThingAfterMovingGC(proto.toObject());
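
The ObjectGroupRealm::clearTables() hunk above shows a related simplification: a lazily heap-allocated table no longer has a separate not-yet-init()ed state, so the null check alone suffices. A sketch (ClearTable is a hypothetical helper):

    #include "mozilla/HashTable.h"

    using Table = mozilla::HashSet<uintptr_t>;

    static void
    ClearTable(Table* aTable)
    {
        // Replaces `if (aTable && aTable->initialized())`: a freshly
        // constructed table is already valid, and clear() is a no-op on
        // one that never allocated entry storage.
        if (aTable)
            aTable->clear();
    }
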
--- a/js/src/vm/Realm.cpp
+++ b/js/src/vm/Realm.cpp
@@ -84,21 +84,16 @@ Realm::~Realm()
 
     MOZ_ASSERT(runtime_->numRealms > 0);
     runtime_->numRealms--;
 }
 
 bool
 ObjectRealm::init(JSContext* cx)
 {
-    if (!iteratorCache.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     NativeIteratorSentinel sentinel(NativeIterator::allocateSentinel(cx));
     if (!sentinel)
         return false;
 
     iteratorSentinel_ = std::move(sentinel);
     enumerators = iteratorSentinel_.get();
     return true;
 }
@@ -112,23 +107,16 @@ Realm::init(JSContext* cx, JSPrincipals*
      * interfere with benchmarks that create tons of date objects (unless they
      * also create tons of iframes, which seems unlikely).
      */
     JS::ResetTimeZone();
 
     if (!objects_.init(cx))
         return false;
 
-    if (!savedStacks_.init() ||
-        !varNames_.init())
-    {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     if (principals) {
         // Any realm with the trusted principals -- and there can be
         // multiple -- is a system realm.
         isSystem_ = (principals == cx->runtime()->trustedPrincipals());
         JS_HoldPrincipals(principals);
         principals_ = principals;
     }
 
@@ -207,21 +195,16 @@ ObjectRealm::getOrCreateNonSyntacticLexi
 {
     MOZ_ASSERT(&ObjectRealm::get(enclosing) == this);
 
     if (!nonSyntacticLexicalEnvironments_) {
         auto map = cx->make_unique<ObjectWeakMap>(cx);
         if (!map)
             return nullptr;
 
-        if (!map->init()) {
-            ReportOutOfMemory(cx);
-            return nullptr;
-        }
-
         nonSyntacticLexicalEnvironments_ = std::move(map);
     }
 
     // If a wrapped WithEnvironmentObject was passed in, unwrap it, as we may
     // be creating different WithEnvironmentObject wrappers each time.
     RootedObject key(cx, enclosing);
     if (enclosing->is<WithEnvironmentObject>()) {
         MOZ_ASSERT(!enclosing->as<WithEnvironmentObject>().isSyntactic());
@@ -597,17 +580,17 @@ Realm::checkScriptMapsAfterMovingGC()
 #endif
 
 void
 Realm::purge()
 {
     dtoaCache.purge();
     newProxyCache.purge();
     objectGroups_.purge();
-    objects_.iteratorCache.clearAndShrink();
+    objects_.iteratorCache.clearAndCompact();
     arraySpeciesLookup.purge();
     promiseLookup.purge();
 }
 
 void
 Realm::clearTables()
 {
     global_.set(nullptr);
@@ -615,20 +598,18 @@ Realm::clearTables()
     // No scripts should have run in this realm. This is used when merging
     // a realm that has been used off thread into another realm and zone.
     compartment()->assertNoCrossCompartmentWrappers();
     MOZ_ASSERT(!jitRealm_);
     MOZ_ASSERT(!debugEnvs_);
     MOZ_ASSERT(objects_.enumerators->next() == objects_.enumerators);
 
     objectGroups_.clearTables();
-    if (savedStacks_.initialized())
-        savedStacks_.clear();
-    if (varNames_.initialized())
-        varNames_.clear();
+    savedStacks_.clear();
+    varNames_.clear();
 }
 
 void
 Realm::setAllocationMetadataBuilder(const js::AllocationMetadataBuilder* builder)
 {
     // Clear any jitcode in the runtime, which behaves differently depending on
     // whether there is a creation callback.
     ReleaseAllJITCode(runtime_->defaultFreeOp());
@@ -657,17 +638,17 @@ Realm::setNewObjectMetadata(JSContext* c
 
     AutoEnterOOMUnsafeRegion oomUnsafe;
     if (JSObject* metadata = allocationMetadataBuilder_->build(cx, obj, oomUnsafe)) {
         MOZ_ASSERT(metadata->maybeCCWRealm() == obj->maybeCCWRealm());
         assertSameCompartment(cx, metadata);
 
         if (!objects_.objectMetadataTable) {
             auto table = cx->make_unique<ObjectWeakMap>(cx);
-            if (!table || !table->init())
+            if (!table)
                 oomUnsafe.crash("setNewObjectMetadata");
 
             objects_.objectMetadataTable = std::move(table);
         }
 
         if (!objects_.objectMetadataTable->add(cx, obj, metadata))
             oomUnsafe.crash("setNewObjectMetadata");
     }
--- a/js/src/vm/RegExpObject.cpp
+++ b/js/src/vm/RegExpObject.cpp
@@ -1261,25 +1261,16 @@ RegExpRealm::createMatchResultTemplateOb
     AddTypePropertyId(cx, templateObject, JSID_VOID, TypeSet::StringType());
     AddTypePropertyId(cx, templateObject, JSID_VOID, TypeSet::UndefinedType());
 
     matchResultTemplateObject_.set(templateObject);
 
     return matchResultTemplateObject_;
 }
 
-bool
-RegExpZone::init()
-{
-    if (!set_.init(0))
-        return false;
-
-    return true;
-}
-
 void
 RegExpRealm::sweep()
 {
     if (matchResultTemplateObject_ &&
         IsAboutToBeFinalized(&matchResultTemplateObject_))
     {
         matchResultTemplateObject_.set(nullptr);
     }
--- a/js/src/vm/RegExpShared.h
+++ b/js/src/vm/RegExpShared.h
@@ -261,21 +261,19 @@ class RegExpZone
      */
     using Set = JS::WeakCache<JS::GCHashSet<ReadBarriered<RegExpShared*>, Key, ZoneAllocPolicy>>;
     Set set_;
 
   public:
     explicit RegExpZone(Zone* zone);
 
     ~RegExpZone() {
-        MOZ_ASSERT_IF(set_.initialized(), set_.empty());
+        MOZ_ASSERT(set_.empty());
     }
 
-    bool init();
-
     bool empty() const { return set_.empty(); }
 
     RegExpShared* maybeGet(JSAtom* source, RegExpFlag flags) const {
         Set::Ptr p = set_.lookup(Key(source, flags));
         return p ? *p : nullptr;
     }
 
     RegExpShared* get(JSContext* cx, HandleAtom source, RegExpFlag flags);
--- a/js/src/vm/Runtime.cpp
+++ b/js/src/vm/Runtime.cpp
@@ -217,46 +217,34 @@ JSRuntime::init(JSContext* cx, uint32_t 
         return false;
 
     UniquePtr<Zone> atomsZone = MakeUnique<Zone>(this);
     if (!atomsZone || !atomsZone->init(true))
         return false;
 
     gc.atomsZone = atomsZone.release();
 
-    if (!symbolRegistry_.ref().init())
-        return false;
-
-    if (!scriptDataTable_.ref().init())
-        return false;
-
     /* The garbage collector depends on everything before this point being initialized. */
     gcInitialized = true;
 
     if (!InitRuntimeNumberState(this))
         return false;
 
     JS::ResetTimeZone();
 
     jitSupportsFloatingPoint = js::jit::JitSupportsFloatingPoint();
     jitSupportsUnalignedAccesses = js::jit::JitSupportsUnalignedAccesses();
     jitSupportsSimd = js::jit::JitSupportsSimd();
 
-    if (!geckoProfiler().init())
-        return false;
-
     if (!parentRuntime) {
         sharedImmutableStrings_ = js::SharedImmutableStringsCache::Create();
         if (!sharedImmutableStrings_)
             return false;
     }
 
-    if (!caches().init())
-        return false;
-
     return true;
 }
 
 void
 JSRuntime::destroyRuntime()
 {
     MOZ_ASSERT(!JS::RuntimeHeapIsBusy());
     MOZ_ASSERT(childRuntimeCount == 0);
--- a/js/src/vm/SavedStacks.cpp
+++ b/js/src/vm/SavedStacks.cpp
@@ -1222,27 +1222,19 @@ SavedFrame::toStringMethod(JSContext* cx
     RootedString string(cx);
     if (!JS::BuildStackString(cx, principals, frame, &string))
         return false;
     args.rval().setString(string);
     return true;
 }
 
 bool
-SavedStacks::init()
-{
-    return frames.init() &&
-           pcLocationMap.init();
-}
-
-bool
 SavedStacks::saveCurrentStack(JSContext* cx, MutableHandleSavedFrame frame,
                               JS::StackCapture&& capture /* = JS::StackCapture(JS::AllFrames()) */)
 {
-    MOZ_ASSERT(initialized());
     MOZ_RELEASE_ASSERT(cx->realm());
     MOZ_DIAGNOSTIC_ASSERT(&cx->realm()->savedStacks() == this);
 
     if (creatingSavedFrame ||
         cx->isExceptionPending() ||
         !cx->global() ||
         !cx->global()->isStandardClassResolved(JSProto_Object))
     {
@@ -1254,17 +1246,16 @@ SavedStacks::saveCurrentStack(JSContext*
     return insertFrames(cx, frame, std::move(capture));
 }
 
 bool
 SavedStacks::copyAsyncStack(JSContext* cx, HandleObject asyncStack, HandleString asyncCause,
                             MutableHandleSavedFrame adoptedStack,
                             const Maybe<size_t>& maxFrameCount)
 {
-    MOZ_ASSERT(initialized());
     MOZ_RELEASE_ASSERT(cx->realm());
     MOZ_DIAGNOSTIC_ASSERT(&cx->realm()->savedStacks() == this);
 
     RootedAtom asyncCauseAtom(cx, AtomizeString(cx, asyncCause));
     if (!asyncCauseAtom)
         return false;
 
     RootedObject asyncStackObj(cx, CheckedUnwrap(asyncStack));
@@ -1289,17 +1280,16 @@ void
 SavedStacks::trace(JSTracer* trc)
 {
     pcLocationMap.trace(trc);
 }
 
 uint32_t
 SavedStacks::count()
 {
-    MOZ_ASSERT(initialized());
     return frames.count();
 }
 
 void
 SavedStacks::clear()
 {
     frames.clear();
 }
--- a/js/src/vm/SavedStacks.h
+++ b/js/src/vm/SavedStacks.h
@@ -157,18 +157,16 @@ class SavedStacks {
   public:
     SavedStacks()
       : frames(),
         bernoulliSeeded(false),
         bernoulli(1.0, 0x59fdad7f6b4cc573, 0x91adf38db96a9354),
         creatingSavedFrame(false)
     { }
 
-    MOZ_MUST_USE bool init();
-    bool initialized() const { return frames.initialized(); }
     MOZ_MUST_USE bool saveCurrentStack(JSContext* cx, MutableHandleSavedFrame frame,
                                        JS::StackCapture&& capture = JS::StackCapture(JS::AllFrames()));
     MOZ_MUST_USE bool copyAsyncStack(JSContext* cx, HandleObject asyncStack,
                                      HandleString asyncCause,
                                      MutableHandleSavedFrame adoptedStack,
                                      const mozilla::Maybe<size_t>& maxFrameCount);
     void sweep();
     void trace(JSTracer* trc);
--- a/js/src/vm/Shape.cpp
+++ b/js/src/vm/Shape.cpp
@@ -1440,21 +1440,16 @@ BaseShape::adoptUnowned(UnownedBaseShape
     assertConsistency();
 }
 
 /* static */ UnownedBaseShape*
 BaseShape::getUnowned(JSContext* cx, StackBaseShape& base)
 {
     auto& table = cx->zone()->baseShapes();
 
-    if (!table.initialized() && !table.init()) {
-        ReportOutOfMemory(cx);
-        return nullptr;
-    }
-
     auto p = MakeDependentAddPtr(cx, table, base);
     if (p)
         return *p;
 
     BaseShape* nbase_ = Allocate<BaseShape>(cx);
     if (!nbase_)
         return nullptr;
 
@@ -1527,19 +1522,16 @@ BaseShape::canSkipMarkingShapeTable(Shap
 }
 #endif
 
 #ifdef JSGC_HASH_TABLE_CHECKS
 
 void
 Zone::checkBaseShapeTableAfterMovingGC()
 {
-    if (!baseShapes().initialized())
-        return;
-
     for (auto r = baseShapes().all(); !r.empty(); r.popFront()) {
         UnownedBaseShape* base = r.front().unbarrieredGet();
         CheckGCThingAfterMovingGC(base);
 
         BaseShapeSet::Ptr ptr = baseShapes().lookup(base);
         MOZ_RELEASE_ASSERT(ptr.found() && &*ptr == &r.front());
     }
 }
@@ -1566,19 +1558,16 @@ InitialShapeEntry::InitialShapeEntry(Sha
 {
 }
 
 #ifdef JSGC_HASH_TABLE_CHECKS
 
 void
 Zone::checkInitialShapesTableAfterMovingGC()
 {
-    if (!initialShapes().initialized())
-        return;
-
     /*
      * Assert that the postbarriers have worked and that nothing is left in
      * initialShapes that points into the nursery, and that the hash table
      * entries are discoverable.
      */
     for (auto r = initialShapes().all(); !r.empty(); r.popFront()) {
         InitialShapeEntry entry = r.front();
         JSProtoKey protoKey = entry.proto.key();
@@ -1625,17 +1614,17 @@ ShapeHasher::match(const Key k, const Lo
 {
     return k->matches(l);
 }
 
 static KidsHash*
 HashChildren(Shape* kid1, Shape* kid2)
 {
     auto hash = MakeUnique<KidsHash>();
-    if (!hash || !hash->init(2))
+    if (!hash || !hash->reserve(2))
         return nullptr;
 
     hash->putNewInfallible(StackShape(kid1), kid1);
     hash->putNewInfallible(StackShape(kid2), kid2);
     return hash.release();
 }
 
 bool
@@ -2089,21 +2078,16 @@ GetInitialShapeProtoKey(TaggedProto prot
 /* static */ Shape*
 EmptyShape::getInitialShape(JSContext* cx, const Class* clasp, TaggedProto proto,
                             size_t nfixed, uint32_t objectFlags)
 {
     MOZ_ASSERT_IF(proto.isObject(), cx->isInsideCurrentCompartment(proto.toObject()));
 
     auto& table = cx->zone()->initialShapes();
 
-    if (!table.initialized() && !table.init()) {
-        ReportOutOfMemory(cx);
-        return nullptr;
-    }
-
     using Lookup = InitialShapeEntry::Lookup;
     auto protoPointer = MakeDependentAddPtr(cx, table,
                                             Lookup(clasp, Lookup::ShapeProto(proto),
                                                    nfixed, objectFlags));
     if (protoPointer)
         return protoPointer->shape;
 
     // No entry for this proto. If the proto is one of a few common builtin
@@ -2243,19 +2227,16 @@ EmptyShape::insertInitialShape(JSContext
      */
     if (!cx->helperThread())
         cx->caches().newObjectCache.invalidateEntriesForShape(cx, shape, proto);
 }
 
 void
 Zone::fixupInitialShapeTable()
 {
-    if (!initialShapes().initialized())
-        return;
-
     for (InitialShapeSet::Enum e(initialShapes()); !e.empty(); e.popFront()) {
         // The shape may have been moved, but we can update that in place.
         Shape* shape = e.front().shape.unbarrieredGet();
         if (IsForwarded(shape)) {
             shape = Forwarded(shape);
             e.mutableFront().shape.set(shape);
         }
         shape->updateBaseShapeAfterMovingGC();
--- a/js/src/vm/SharedImmutableStringsCache-inl.h
+++ b/js/src/vm/SharedImmutableStringsCache-inl.h
@@ -16,19 +16,16 @@ MOZ_MUST_USE mozilla::Maybe<SharedImmuta
 SharedImmutableStringsCache::getOrCreate(const char* chars, size_t length,
                                          IntoOwnedChars intoOwnedChars)
 {
     MOZ_ASSERT(inner_);
     MOZ_ASSERT(chars);
     Hasher::Lookup lookup(Hasher::hashLongString(chars, length), chars, length);
 
     auto locked = inner_->lock();
-    if (!locked->set.initialized() && !locked->set.init())
-        return mozilla::Nothing();
-
     auto entry = locked->set.lookupForAdd(lookup);
     if (!entry) {
         OwnedChars ownedChars(intoOwnedChars());
         if (!ownedChars)
             return mozilla::Nothing();
         MOZ_ASSERT(ownedChars.get() == chars ||
                    memcmp(ownedChars.get(), chars, length) == 0);
         auto box = StringBox::Create(std::move(ownedChars), length);
@@ -46,19 +43,16 @@ SharedImmutableStringsCache::getOrCreate
                                          IntoOwnedTwoByteChars intoOwnedTwoByteChars) {
     MOZ_ASSERT(inner_);
     MOZ_ASSERT(chars);
     auto hash = Hasher::hashLongString(reinterpret_cast<const char*>(chars),
                                        length * sizeof(char16_t));
     Hasher::Lookup lookup(hash, chars, length);
 
     auto locked = inner_->lock();
-    if (!locked->set.initialized() && !locked->set.init())
-        return mozilla::Nothing();
-
     auto entry = locked->set.lookupForAdd(lookup);
     if (!entry) {
         OwnedTwoByteChars ownedTwoByteChars(intoOwnedTwoByteChars());
         if (!ownedTwoByteChars)
             return mozilla::Nothing();
         MOZ_ASSERT(ownedTwoByteChars.get() == chars ||
                    memcmp(ownedTwoByteChars.get(), chars, length * sizeof(char16_t)) == 0);
         OwnedChars ownedChars(reinterpret_cast<char*>(ownedTwoByteChars.release()));
--- a/js/src/vm/SharedImmutableStringsCache.h
+++ b/js/src/vm/SharedImmutableStringsCache.h
@@ -134,18 +134,16 @@ class SharedImmutableStringsCache
     MOZ_MUST_USE mozilla::Maybe<SharedImmutableTwoByteString>
     getOrCreate(const char16_t* chars, size_t length);
 
     size_t sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf) const {
         MOZ_ASSERT(inner_);
         size_t n = mallocSizeOf(inner_);
 
         auto locked = inner_->lock();
-        if (!locked->set.initialized())
-            return n;
 
         // Size of the table.
         n += locked->set.shallowSizeOfExcludingThis(mallocSizeOf);
 
         // Sizes of the strings and their boxes.
         for (auto r = locked->set.all(); !r.empty(); r.popFront()) {
             n += mallocSizeOf(r.front().get());
             if (const char* chars = r.front()->chars())
@@ -209,19 +207,16 @@ class SharedImmutableStringsCache
 
     /**
      * Purge the cache of all refcount == 0 entries.
      */
     void purge() {
         auto locked = inner_->lock();
         MOZ_ASSERT(locked->refcount > 0);
 
-        if (!locked->set.initialized())
-            return;
-
         for (Inner::Set::Enum e(locked->set); !e.empty(); e.popFront()) {
             if (e.front()->refcount == 0) {
                 // The chars should be eagerly freed when refcount reaches zero.
                 MOZ_ASSERT(!e.front()->chars());
                 e.removeFront();
             } else {
                 // The chars should exist as long as the refcount is non-zero.
                 MOZ_ASSERT(e.front()->chars());
--- a/js/src/vm/Stack.cpp
+++ b/js/src/vm/Stack.cpp
@@ -1657,21 +1657,16 @@ jit::JitActivation::getRematerializedFra
 {
     MOZ_ASSERT(iter.activation() == this);
     MOZ_ASSERT(iter.isIonScripted());
 
     if (!rematerializedFrames_) {
         rematerializedFrames_ = cx->make_unique<RematerializedFrameTable>(cx);
         if (!rematerializedFrames_)
             return nullptr;
-        if (!rematerializedFrames_->init()) {
-            rematerializedFrames_.reset();
-            ReportOutOfMemory(cx);
-            return nullptr;
-        }
     }
 
     uint8_t* top = iter.fp();
     RematerializedFrameTable::AddPtr p = rematerializedFrames_->lookupForAdd(top);
     if (!p) {
         RematerializedFrameVector frames(cx);
 
         // The unit of rematerialization is an uninlined frame and its inlined
--- a/js/src/vm/StructuredClone.cpp
+++ b/js/src/vm/StructuredClone.cpp
@@ -486,20 +486,16 @@ struct JSStructuredCloneWriter {
           cloneDataPolicy(cloneDataPolicy)
     {
         out.setCallbacks(cb, cbClosure, OwnTransferablePolicy::NoTransferables);
     }
 
     ~JSStructuredCloneWriter();
 
     bool init() {
-        if (!memory.init()) {
-            ReportOutOfMemory(context());
-            return false;
-        }
         return parseTransferable() && writeHeader() && writeTransferMap();
     }
 
     bool write(HandleValue v);
 
     SCOutput& output() { return out; }
 
     void extractBuffer(JSStructuredCloneData* newData) {
@@ -1070,21 +1066,21 @@ JSStructuredCloneWriter::~JSStructuredCl
 }
 
 bool
 JSStructuredCloneWriter::parseTransferable()
 {
     // NOTE: The transferables set is tested for non-emptiness at various
     //       junctures in structured cloning, so this set must be initialized
     //       by this method in all non-error cases.
-    MOZ_ASSERT(!transferableObjects.initialized(),
+    MOZ_ASSERT(transferableObjects.empty(),
                "parseTransferable called with stale data");
 
     if (transferable.isNull() || transferable.isUndefined())
-        return transferableObjects.init(0);
+        return true;
 
     if (!transferable.isObject())
         return reportDataCloneError(JS_SCERR_TRANSFERABLE);
 
     JSContext* cx = context();
     RootedObject array(cx, &transferable.toObject());
     bool isArray;
     if (!JS_IsArrayObject(cx, array, &isArray))
@@ -1092,17 +1088,17 @@ JSStructuredCloneWriter::parseTransferab
     if (!isArray)
         return reportDataCloneError(JS_SCERR_TRANSFERABLE);
 
     uint32_t length;
     if (!JS_GetArrayLength(cx, array, &length))
         return false;
 
     // Initialize the set for the provided array's length.
-    if (!transferableObjects.init(length))
+    if (!transferableObjects.reserve(length))
         return false;
 
     if (length == 0)
         return true;
 
     RootedValue v(context());
     RootedObject tObj(context());
 
--- a/js/src/vm/TraceLogging.cpp
+++ b/js/src/vm/TraceLogging.cpp
@@ -351,21 +351,19 @@ size_t
 TraceLoggerThreadState::sizeOfExcludingThis(mozilla::MallocSizeOf mallocSizeOf)
 {
     LockGuard<Mutex> guard(lock);
 
     // Do not count threadLoggers since they are counted by JSContext::traceLogger.
 
     size_t size = 0;
     size += pointerMap.shallowSizeOfExcludingThis(mallocSizeOf);
-    if (textIdPayloads.initialized()) {
-        size += textIdPayloads.shallowSizeOfExcludingThis(mallocSizeOf);
-        for (TextIdHashMap::Range r = textIdPayloads.all(); !r.empty(); r.popFront())
-            r.front().value()->sizeOfIncludingThis(mallocSizeOf);
-    }
+    size += textIdPayloads.shallowSizeOfExcludingThis(mallocSizeOf);
+    for (TextIdHashMap::Range r = textIdPayloads.all(); !r.empty(); r.popFront())
+        r.front().value()->sizeOfIncludingThis(mallocSizeOf);
     return size;
 }
 
 bool
 TraceLoggerThread::textIdIsScriptEvent(uint32_t id)
 {
     if (id < TraceLogger_Last)
         return false;
@@ -687,20 +685,18 @@ TraceLoggerThread::log(uint32_t id)
 
 TraceLoggerThreadState::~TraceLoggerThreadState()
 {
     while (TraceLoggerThread* logger = threadLoggers.popFirst())
         js_delete(logger);
 
     threadLoggers.clear();
 
-    if (textIdPayloads.initialized()) {
-        for (TextIdHashMap::Range r = textIdPayloads.all(); !r.empty(); r.popFront())
-            js_delete(r.front().value());
-    }
+    for (TextIdHashMap::Range r = textIdPayloads.all(); !r.empty(); r.popFront())
+        js_delete(r.front().value());
 
 #ifdef DEBUG
     initialized = false;
 #endif
 }
 
 static bool
 ContainsFlag(const char* str, const char* flag)
@@ -870,21 +866,16 @@ TraceLoggerThreadState::init()
         if (strstr(options, "EnableOffThread"))
             helperThreadEnabled = true;
         if (strstr(options, "EnableGraph"))
             graphSpewingEnabled = true;
         if (strstr(options, "Errors"))
             spewErrors = true;
     }
 
-    if (!pointerMap.init())
-        return false;
-    if (!textIdPayloads.init())
-        return false;
-
     startupTime = rdtsc();
 
 #ifdef DEBUG
     initialized = true;
 #endif
 
     return true;
 }
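
Dropping the initialized() guards in the two TraceLogging hunks above relies on read-only operations being well-defined when entry storage was never allocated: all() yields an empty range and shallowSizeOfExcludingThis() returns zero. A reduced sketch under those assumptions, with a hypothetical payload map:

    using PayloadMap = js::HashMap<uint32_t, size_t, js::DefaultHasher<uint32_t>,
                                   js::SystemAllocPolicy>;

    static size_t
    MeasurePayloads(const PayloadMap& map, mozilla::MallocSizeOf mallocSizeOf)
    {
        // Both calls are safe even if no entry storage exists yet.
        size_t size = map.shallowSizeOfExcludingThis(mallocSizeOf);
        for (PayloadMap::Range r = map.all(); !r.empty(); r.popFront())
            size += r.front().value();
        return size;
    }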
--- a/js/src/vm/UbiNode.cpp
+++ b/js/src/vm/UbiNode.cpp
@@ -445,18 +445,16 @@ RootList::init()
 
 bool
 RootList::init(CompartmentSet& debuggees)
 {
     EdgeVector allRootEdges;
     EdgeVectorTracer tracer(cx->runtime(), &allRootEdges, wantNames);
 
     ZoneSet debuggeeZones;
-    if (!debuggeeZones.init())
-        return false;
     for (auto range = debuggees.all(); !range.empty(); range.popFront()) {
         if (!debuggeeZones.put(range.front()->zone()))
             return false;
     }
 
     js::TraceRuntime(&tracer);
     if (!tracer.okay)
         return false;
@@ -485,18 +483,16 @@ RootList::init(CompartmentSet& debuggees
 
 bool
 RootList::init(HandleObject debuggees)
 {
     MOZ_ASSERT(debuggees && JS::dbg::IsDebugger(*debuggees));
     js::Debugger* dbg = js::Debugger::fromJSObject(debuggees.get());
 
     CompartmentSet debuggeeCompartments;
-    if (!debuggeeCompartments.init())
-        return false;
 
     for (js::WeakGlobalObjectSet::Range r = dbg->allDebuggees(); !r.empty(); r.popFront()) {
         if (!debuggeeCompartments.put(r.front()->compartment()))
             return false;
     }
 
     if (!init(debuggeeCompartments))
         return false;
--- a/js/src/vm/UbiNodeCensus.cpp
+++ b/js/src/vm/UbiNodeCensus.cpp
@@ -25,22 +25,16 @@ CountDeleter::operator()(CountBase* ptr)
         return;
 
     // Downcast to our true type and destruct, as guided by our CountType
     // pointer.
     ptr->destruct();
     js_free(ptr);
 }
 
-JS_PUBLIC_API(bool)
-Census::init() {
-    return targetZones.init();
-}
-
-
 /*** Count Types ***********************************************************************************/
 
 // The simplest type: just count everything.
 class SimpleCount : public CountType {
 
     struct Count : CountBase {
         size_t totalBytes_;
 
@@ -462,18 +456,16 @@ class ByObjectClass : public CountType {
     struct Count : public CountBase {
         Table table;
         CountBasePtr other;
 
         Count(CountType& type, CountBasePtr& other)
           : CountBase(type),
             other(std::move(other))
         { }
-
-        bool init() { return table.init(); }
     };
 
     CountTypePtr classesType;
     CountTypePtr otherType;
 
   public:
     ByObjectClass(CountTypePtr& classesType, CountTypePtr& otherType)
         : CountType(),
@@ -495,17 +487,17 @@ class ByObjectClass : public CountType {
 CountBasePtr
 ByObjectClass::makeCount()
 {
     CountBasePtr otherCount(otherType->makeCount());
     if (!otherCount)
         return nullptr;
 
     auto count = js::MakeUnique<Count>(*this, otherCount);
-    if (!count || !count->init())
+    if (!count)
         return nullptr;
 
     return CountBasePtr(count.release());
 }
 
 void
 ByObjectClass::traceCount(CountBase& countBase, JSTracer* trc)
 {
@@ -576,18 +568,16 @@ class ByDomObjectClass : public CountTyp
                           UniqueC16StringHasher,
                           SystemAllocPolicy>;
     using Entry = Table::Entry;
 
     struct Count : public CountBase {
         Table table;
 
         explicit Count(CountType& type) : CountBase(type) { }
-
-        bool init() { return table.init(); }
     };
 
     CountTypePtr classesType;
 
   public:
     explicit ByDomObjectClass(CountTypePtr& classesType)
       : CountType(),
         classesType(std::move(classesType))
@@ -603,17 +593,17 @@ class ByDomObjectClass : public CountTyp
     bool count(CountBase& countBase, mozilla::MallocSizeOf mallocSizeOf, const Node& node) override;
     bool report(JSContext* cx, CountBase& countBase, MutableHandleValue report) override;
 };
 
 CountBasePtr
 ByDomObjectClass::makeCount()
 {
     auto count = js::MakeUnique<Count>(*this);
-    if (!count || !count->init())
+    if (!count)
         return nullptr;
 
     return CountBasePtr(count.release());
 }
 
 void
 ByDomObjectClass::traceCount(CountBase& countBase, JSTracer* trc)
 {
@@ -667,18 +657,16 @@ class ByUbinodeType : public CountType {
     using Table = HashMap<const char16_t*, CountBasePtr, DefaultHasher<const char16_t*>,
                           SystemAllocPolicy>;
     using Entry = Table::Entry;
 
     struct Count: public CountBase {
         Table table;
 
         explicit Count(CountType& type) : CountBase(type) { }
-
-        bool init() { return table.init(); }
     };
 
     CountTypePtr entryType;
 
   public:
     explicit ByUbinodeType(CountTypePtr& entryType)
       : CountType(),
         entryType(std::move(entryType))
@@ -694,17 +682,17 @@ class ByUbinodeType : public CountType {
     bool count(CountBase& countBase, mozilla::MallocSizeOf mallocSizeOf, const Node& node) override;
     bool report(JSContext* cx, CountBase& countBase, MutableHandleValue report) override;
 };
 
 CountBasePtr
 ByUbinodeType::makeCount()
 {
     auto count = js::MakeUnique<Count>(*this);
-    if (!count || !count->init())
+    if (!count)
         return nullptr;
 
     return CountBasePtr(count.release());
 }
 
 void
 ByUbinodeType::traceCount(CountBase& countBase, JSTracer* trc)
 {
@@ -809,17 +797,16 @@ class ByAllocationStack : public CountTy
         // can't abide it.
         Table table;
         CountBasePtr noStack;
 
         Count(CountType& type, CountBasePtr& noStack)
           : CountBase(type),
             noStack(std::move(noStack))
         { }
-        bool init() { return table.init(); }
     };
 
     CountTypePtr entryType;
     CountTypePtr noStackType;
 
   public:
     ByAllocationStack(CountTypePtr& entryType, CountTypePtr& noStackType)
       : CountType(),
@@ -841,17 +828,17 @@ class ByAllocationStack : public CountTy
 CountBasePtr
 ByAllocationStack::makeCount()
 {
     CountBasePtr noStackCount(noStackType->makeCount());
     if (!noStackCount)
         return nullptr;
 
     auto count = js::MakeUnique<Count>(*this, noStackCount);
-    if (!count || !count->init())
+    if (!count)
         return nullptr;
     return CountBasePtr(count.release());
 }
 
 void
 ByAllocationStack::traceCount(CountBase& countBase, JSTracer* trc)
 {
     Count& count = static_cast<Count&>(countBase);
@@ -980,18 +967,16 @@ class ByFilename : public CountType {
         CountBasePtr then;
         CountBasePtr noFilename;
 
         Count(CountType& type, CountBasePtr&& then, CountBasePtr&& noFilename)
           : CountBase(type)
           , then(std::move(then))
           , noFilename(std::move(noFilename))
         { }
-
-        bool init() { return table.init(); }
     };
 
     CountTypePtr thenType;
     CountTypePtr noFilenameType;
 
   public:
     ByFilename(CountTypePtr&& thenType, CountTypePtr&& noFilenameType)
         : CountType(),
@@ -1017,17 +1002,17 @@ ByFilename::makeCount()
     if (!thenCount)
         return nullptr;
 
     CountBasePtr noFilenameCount(noFilenameType->makeCount());
     if (!noFilenameCount)
         return nullptr;
 
     auto count = js::MakeUnique<Count>(*this, std::move(thenCount), std::move(noFilenameCount));
-    if (!count || !count->init())
+    if (!count)
         return nullptr;
 
     return CountBasePtr(count.release());
 }
 
 void
 ByFilename::traceCount(CountBase& countBase, JSTracer* trc)
 {
--- a/js/src/vm/UbiNodeShortestPaths.cpp
+++ b/js/src/vm/UbiNodeShortestPaths.cpp
@@ -49,17 +49,17 @@ JS_PUBLIC_API(void)
 dumpPaths(JSContext* cx, Node node, uint32_t maxNumPaths /* = 10 */)
 {
     mozilla::Maybe<AutoCheckCannotGC> nogc;
 
     JS::ubi::RootList rootList(cx, nogc, true);
     MOZ_ASSERT(rootList.init());
 
     NodeSet targets;
-    bool ok = targets.init() && targets.putNew(node);
+    bool ok = targets.putNew(node);
     MOZ_ASSERT(ok);
 
     auto paths = ShortestPaths::Create(cx, nogc.ref(), maxNumPaths, &rootList, std::move(targets));
     MOZ_ASSERT(paths.isSome());
 
     int i = 0;
     ok = paths->forEachPath(node, [&](Path& path) {
         fprintf(stderr, "Path %d:\n", i++);
--- a/js/src/vm/Xdr.cpp
+++ b/js/src/vm/Xdr.cpp
@@ -242,24 +242,16 @@ XDRIncrementalEncoder::getTreeKey(JSFunc
                       sizeof(fun->nonLazyScript()->sourceEnd()) == 4,
                       "AutoXDRTree key requires JSScripts positions to be uint32");
         return uint64_t(fun->nonLazyScript()->sourceStart()) << 32 | fun->nonLazyScript()->sourceEnd();
     }
 
     return AutoXDRTree::noKey;
 }
 
-bool
-XDRIncrementalEncoder::init()
-{
-    if (!tree_.init())
-        return false;
-    return true;
-}
-
 void
 XDRIncrementalEncoder::createOrReplaceSubTree(AutoXDRTree* child)
 {
     AutoXDRTree* parent = scope_;
     child->parent_ = parent;
     scope_ = child;
     if (oom_)
         return;
@@ -392,12 +384,12 @@ XDRIncrementalEncoder::linearize(JS::Tra
         SlicesTree::Ptr p = tree_.lookup(slice.child);
         MOZ_ASSERT(p);
         if (!depthFirst.append(((const SlicesNode&) p->value()).all())) {
             ReportOutOfMemory(cx());
             return fail(JS::TranscodeResult_Throw);
         }
     }
 
-    tree_.finish();
+    tree_.clearAndCompact();
     slices_.clearAndFree();
     return Ok();
 }
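
finish() is gone; its replacement in linearize() above is clearAndCompact(), which empties the table and releases its entry storage while leaving the table immediately reusable. A small sketch, assuming a js::HashMap-style table like tree_:

    template <typename Table>
    static void
    ResetTable(Table& table)
    {
        // Replaces the old table.finish(): empties the table and frees its
        // entry storage, with no init() required before reuse.
        table.clearAndCompact();
        MOZ_ASSERT(table.empty());
        MOZ_ASSERT(table.capacity() == 0);  // empty storage was released
    }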
--- a/js/src/vm/Xdr.h
+++ b/js/src/vm/Xdr.h
@@ -590,18 +590,16 @@ class XDRIncrementalEncoder : public XDR
     {
     }
 
     virtual ~XDRIncrementalEncoder() {}
 
     AutoXDRTree::Key getTopLevelTreeKey() const override;
     AutoXDRTree::Key getTreeKey(JSFunction* fun) const override;
 
-    MOZ_MUST_USE bool init();
-
     void createOrReplaceSubTree(AutoXDRTree* child) override;
     void endSubTree() override;
 
     // Append the content collected during the incremental encoding into the
     // buffer given as argument.
     XDRResult linearize(JS::TranscodeBuffer& buffer);
 };
 
--- a/js/src/wasm/AsmJS.cpp
+++ b/js/src/wasm/AsmJS.cpp
@@ -1542,21 +1542,17 @@ class MOZ_STACK_CLASS JS_HAZ_ROOTED Modu
             return false;
 
         asmJSMetadata_->toStringStart = moduleFunctionNode_->pn_funbox->toStringStart;
         asmJSMetadata_->srcStart = moduleFunctionNode_->pn_body->pn_pos.begin;
         asmJSMetadata_->strict = parser_.pc->sc()->strict() &&
                                  !parser_.pc->sc()->hasExplicitUseStrict();
         asmJSMetadata_->scriptSource.reset(parser_.ss);
 
-        if (!globalMap_.init() || !sigSet_.init() || !funcImportMap_.init())
-            return false;
-
-        if (!standardLibraryMathNames_.init() ||
-            !addStandardLibraryMathName("sin", AsmJSMathBuiltin_sin) ||
+        if (!addStandardLibraryMathName("sin", AsmJSMathBuiltin_sin) ||
             !addStandardLibraryMathName("cos", AsmJSMathBuiltin_cos) ||
             !addStandardLibraryMathName("tan", AsmJSMathBuiltin_tan) ||
             !addStandardLibraryMathName("asin", AsmJSMathBuiltin_asin) ||
             !addStandardLibraryMathName("acos", AsmJSMathBuiltin_acos) ||
             !addStandardLibraryMathName("atan", AsmJSMathBuiltin_atan) ||
             !addStandardLibraryMathName("ceil", AsmJSMathBuiltin_ceil) ||
             !addStandardLibraryMathName("floor", AsmJSMathBuiltin_floor) ||
             !addStandardLibraryMathName("exp", AsmJSMathBuiltin_exp) ||
@@ -2338,22 +2334,16 @@ class MOZ_STACK_CLASS FunctionValidator
         hasAlreadyReturned_(false),
         ret_(ExprType::Limit)
     {}
 
     ModuleValidator& m() const        { return m_; }
     JSContext* cx() const             { return m_.cx(); }
     ParseNode* fn() const             { return fn_; }
 
-    bool init() {
-        return locals_.init() &&
-               breakLabels_.init() &&
-               continueLabels_.init();
-    }
-
     void define(ModuleValidator::Func* func, unsigned line) {
         MOZ_ASSERT(!blockDepth_);
         MOZ_ASSERT(breakableStack_.empty());
         MOZ_ASSERT(continuableStack_.empty());
         MOZ_ASSERT(breakLabels_.empty());
         MOZ_ASSERT(continueLabels_.empty());
         func->define(fn_, line, std::move(bytes_), std::move(callSiteLineNums_));
     }
@@ -5411,18 +5401,16 @@ CheckFunction(ModuleValidator& m)
     unsigned line = 0;
     if (!ParseFunction(m, &fn, &line))
         return false;
 
     if (!CheckFunctionHead(m, fn))
         return false;
 
     FunctionValidator f(m, fn);
-    if (!f.init())
-        return m.fail(fn, "internal compiler failure (probably out of memory)");
 
     ParseNode* stmtIter = ListHead(FunctionStatementList(fn));
 
     if (!CheckProcessingDirectives(m, &stmtIter))
         return false;
 
     ValTypeVector args;
     if (!CheckArguments(f, &stmtIter, &args))
--- a/js/src/wasm/WasmAST.h
+++ b/js/src/wasm/WasmAST.h
@@ -1253,19 +1253,16 @@ class AstModule : public AstNode
         memories_(lifo),
         exports_(lifo),
         funcs_(lifo),
         dataSegments_(lifo),
         elemSegments_(lifo),
         globals_(lifo),
         numGlobalImports_(0)
     {}
-    bool init() {
-        return funcTypeMap_.init();
-    }
     bool addMemory(AstName name, const Limits& memory) {
         return memories_.append(AstResizable(memory, false, name));
     }
     bool hasMemory() const {
         return !!memories_.length();
     }
     const AstResizableVector& memories() const {
         return memories_;
--- a/js/src/wasm/WasmBuiltins.cpp
+++ b/js/src/wasm/WasmBuiltins.cpp
@@ -832,19 +832,16 @@ struct TypedNative
 };
 
 using TypedNativeToFuncPtrMap =
     HashMap<TypedNative, void*, TypedNative, SystemAllocPolicy>;
 
 static bool
 PopulateTypedNatives(TypedNativeToFuncPtrMap* typedNatives)
 {
-    if (!typedNatives->init())
-        return false;
-
 #define ADD_OVERLOAD(funcName, native, abiType)                                           \
     if (!typedNatives->putNew(TypedNative(InlinableNative::native, abiType),              \
                               FuncCast(funcName, abiType)))                               \
         return false;
 
 #define ADD_UNARY_OVERLOADS(funcName, native)                                             \
     ADD_OVERLOAD(funcName##_impl, native, Args_Double_Double)                         \
     ADD_OVERLOAD(funcName##_impl_f32, native, Args_Float32_Float32)
@@ -951,19 +948,16 @@ wasm::EnsureBuiltinThunksInitialized()
         if (!thunks->codeRanges.emplaceBack(CodeRange::BuiltinThunk, offsets))
             return false;
     }
 
     TypedNativeToFuncPtrMap typedNatives;
     if (!PopulateTypedNatives(&typedNatives))
         return false;
 
-    if (!thunks->typedNativeToCodeRange.init())
-        return false;
-
     for (TypedNativeToFuncPtrMap::Range r = typedNatives.all(); !r.empty(); r.popFront()) {
         TypedNative typedNative = r.front().key();
 
         uint32_t codeRangeIndex = thunks->codeRanges.length();
         if (!thunks->typedNativeToCodeRange.putNew(typedNative, codeRangeIndex))
             return false;
 
         ABIFunctionType abiType = typedNative.abiType;
--- a/js/src/wasm/WasmDebug.cpp
+++ b/js/src/wasm/WasmDebug.cpp
@@ -142,31 +142,26 @@ DebugState::totalSourceLines(JSContext* 
     if (maybeBytecode_)
         *count = maybeBytecode_->length();
     return true;
 }
 
 bool
 DebugState::stepModeEnabled(uint32_t funcIndex) const
 {
-    return stepModeCounters_.initialized() && stepModeCounters_.lookup(funcIndex);
+    return stepModeCounters_.lookup(funcIndex).found();
 }
 
 bool
 DebugState::incrementStepModeCount(JSContext* cx, uint32_t funcIndex)
 {
     MOZ_ASSERT(debugEnabled());
     const CodeRange& codeRange = codeRanges(Tier::Debug)[debugFuncToCodeRangeIndex(funcIndex)];
     MOZ_ASSERT(codeRange.isFunction());
 
-    if (!stepModeCounters_.initialized() && !stepModeCounters_.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     StepModeCounters::AddPtr p = stepModeCounters_.lookupForAdd(funcIndex);
     if (p) {
         MOZ_ASSERT(p->value() > 0);
         p->value()++;
         return true;
     }
     if (!stepModeCounters_.add(p, funcIndex, 1)) {
         ReportOutOfMemory(cx);
@@ -189,34 +184,34 @@ DebugState::incrementStepModeCount(JSCon
 
 bool
 DebugState::decrementStepModeCount(FreeOp* fop, uint32_t funcIndex)
 {
     MOZ_ASSERT(debugEnabled());
     const CodeRange& codeRange = codeRanges(Tier::Debug)[debugFuncToCodeRangeIndex(funcIndex)];
     MOZ_ASSERT(codeRange.isFunction());
 
-    MOZ_ASSERT(stepModeCounters_.initialized() && !stepModeCounters_.empty());
+    MOZ_ASSERT(!stepModeCounters_.empty());
     StepModeCounters::Ptr p = stepModeCounters_.lookup(funcIndex);
     MOZ_ASSERT(p);
     if (--p->value())
         return true;
 
     stepModeCounters_.remove(p);
 
     AutoWritableJitCode awjc(fop->runtime(), code_->segment(Tier::Debug).base() + codeRange.begin(),
                              codeRange.end() - codeRange.begin());
     AutoFlushICache afc("Code::decrementStepModeCount");
 
     for (const CallSite& callSite : callSites(Tier::Debug)) {
         if (callSite.kind() != CallSite::Breakpoint)
             continue;
         uint32_t offset = callSite.returnAddressOffset();
         if (codeRange.begin() <= offset && offset <= codeRange.end()) {
-            bool enabled = breakpointSites_.initialized() && breakpointSites_.has(offset);
+            bool enabled = breakpointSites_.has(offset);
             toggleDebugTrap(offset, enabled);
         }
     }
     return true;
 }
 
 bool
 DebugState::hasBreakpointTrapAtOffset(uint32_t offset)
@@ -234,33 +229,29 @@ DebugState::toggleBreakpointTrap(JSRunti
     if (!callSite)
         return;
     size_t debugTrapOffset = callSite->returnAddressOffset();
 
     const ModuleSegment& codeSegment = code_->segment(Tier::Debug);
     const CodeRange* codeRange = code_->lookupFuncRange(codeSegment.base() + debugTrapOffset);
     MOZ_ASSERT(codeRange);
 
-    if (stepModeCounters_.initialized() && stepModeCounters_.lookup(codeRange->funcIndex()))
+    if (stepModeCounters_.lookup(codeRange->funcIndex()))
         return; // no need to toggle when step mode is enabled
 
     AutoWritableJitCode awjc(rt, codeSegment.base(), codeSegment.length());
     AutoFlushICache afc("Code::toggleBreakpointTrap");
     AutoFlushICache::setRange(uintptr_t(codeSegment.base()), codeSegment.length());
     toggleDebugTrap(debugTrapOffset, enabled);
 }
 
 WasmBreakpointSite*
 DebugState::getOrCreateBreakpointSite(JSContext* cx, uint32_t offset)
 {
     WasmBreakpointSite* site;
-    if (!breakpointSites_.initialized() && !breakpointSites_.init()) {
-        ReportOutOfMemory(cx);
-        return nullptr;
-    }
 
     WasmBreakpointSiteMap::AddPtr p = breakpointSites_.lookupForAdd(offset);
     if (!p) {
         site = cx->new_<WasmBreakpointSite>(this, offset);
         if (!site)
             return nullptr;
 
         if (!breakpointSites_.add(p, offset, site)) {
@@ -272,34 +263,33 @@ DebugState::getOrCreateBreakpointSite(JS
         site = p->value();
     }
     return site;
 }
 
 bool
 DebugState::hasBreakpointSite(uint32_t offset)
 {
-    return breakpointSites_.initialized() && breakpointSites_.has(offset);
+    return breakpointSites_.has(offset);
 }
 
 void
 DebugState::destroyBreakpointSite(FreeOp* fop, uint32_t offset)
 {
-    MOZ_ASSERT(breakpointSites_.initialized());
     WasmBreakpointSiteMap::Ptr p = breakpointSites_.lookup(offset);
     MOZ_ASSERT(p);
     fop->delete_(p->value());
     breakpointSites_.remove(p);
 }
 
 bool
 DebugState::clearBreakpointsIn(JSContext* cx, WasmInstanceObject* instance, js::Debugger* dbg, JSObject* handler)
 {
     MOZ_ASSERT(instance);
-    if (!breakpointSites_.initialized())
+    if (breakpointSites_.empty())
         return true;
 
     // Make copy of all sites list, so breakpointSites_ can be modified by
     // destroyBreakpointSite calls.
     Vector<WasmBreakpointSite*> sites(cx);
     if (!sites.resize(breakpointSites_.count()))
         return false;
     size_t i = 0;
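
The step-mode changes above show the general simplification for counter-style maps: lookupForAdd() may now allocate the entry storage itself, and it fails through the same invalid-AddPtr/add() paths that already handled hashing OOM, so the initialized()/init() preamble disappears. A reduced sketch with a hypothetical Counters map:

    using Counters = js::HashMap<uint32_t, uint32_t, js::DefaultHasher<uint32_t>,
                                 js::SystemAllocPolicy>;

    static bool
    IncrementCounter(Counters& counters, uint32_t key)
    {
        Counters::AddPtr p = counters.lookupForAdd(key);  // may allocate storage
        if (p) {
            p->value()++;
            return true;
        }
        return counters.add(p, key, 1);  // false on OOM
    }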
--- a/js/src/wasm/WasmGenerator.cpp
+++ b/js/src/wasm/WasmGenerator.cpp
@@ -415,18 +415,16 @@ ModuleGenerator::linkCallSites()
     // go out of range. Far jumps are created for two cases: direct calls
     // between function definitions and calls to trap exits by trap out-of-line
     // paths. Far jump code is shared when possible to reduce bloat. This method
     // is called both between function bodies (at a frequency determined by the
     // ISA's jump range) and once at the very end of a module's codegen after
     // all possible calls/traps have been emitted.
 
     OffsetMap existingCallFarJumps;
-    if (!existingCallFarJumps.init())
-        return false;
 
     TrapMaybeOffsetArray existingTrapFarJumps;
 
     for (; lastPatchedCallSite_ < metadataTier_->callSites.length(); lastPatchedCallSite_++) {
         const CallSite& callSite = metadataTier_->callSites[lastPatchedCallSite_];
         const CallSiteTarget& target = callSiteTargets_[lastPatchedCallSite_];
         uint32_t callerOffset = callSite.returnAddressOffset();
         switch (callSite.kind()) {
--- a/js/src/wasm/WasmInstance.cpp
+++ b/js/src/wasm/WasmInstance.cpp
@@ -37,26 +37,17 @@ using mozilla::BitwiseCast;
 
 class FuncTypeIdSet
 {
     typedef HashMap<const FuncType*, uint32_t, FuncTypeHashPolicy, SystemAllocPolicy> Map;
     Map map_;
 
   public:
     ~FuncTypeIdSet() {
-        MOZ_ASSERT_IF(!JSRuntime::hasLiveRuntimes(), !map_.initialized() || map_.empty());
-    }
-
-    bool ensureInitialized(JSContext* cx) {
-        if (!map_.initialized() && !map_.init()) {
-            ReportOutOfMemory(cx);
-            return false;
-        }
-
-        return true;
+        MOZ_ASSERT_IF(!JSRuntime::hasLiveRuntimes(), map_.empty());
     }
 
     bool allocateFuncTypeId(JSContext* cx, const FuncType& funcType, const void** funcTypeId) {
         Map::AddPtr p = map_.lookupForAdd(funcType);
         if (p) {
             MOZ_ASSERT(p->value() > 0);
             p->value()++;
             *funcTypeId = p->key();
@@ -614,19 +605,16 @@ Instance::init(JSContext* cx)
     for (const SharedTable& table : tables_) {
         if (table->movingGrowable() && !table->addMovingGrowObserver(cx, object_))
             return false;
     }
 
     if (!metadata().funcTypeIds.empty()) {
         ExclusiveData<FuncTypeIdSet>::Guard lockedFuncTypeIdSet = funcTypeIdSet.lock();
 
-        if (!lockedFuncTypeIdSet->ensureInitialized(cx))
-            return false;
-
         for (const FuncTypeWithId& funcType : metadata().funcTypeIds) {
             const void* funcTypeId;
             if (!lockedFuncTypeIdSet->allocateFuncTypeId(cx, funcType, &funcTypeId))
                 return false;
 
             *addressOfFuncTypeId(funcType.id) = funcTypeId;
         }
     }
--- a/js/src/wasm/WasmIonCompile.cpp
+++ b/js/src/wasm/WasmIonCompile.cpp
@@ -1474,17 +1474,17 @@ class FunctionCompiler
             return false;
         if (!addControlFlowPatch(table, defaultDepth, defaultIndex))
             return false;
 
         typedef HashMap<uint32_t, uint32_t, DefaultHasher<uint32_t>, SystemAllocPolicy>
             IndexToCaseMap;
 
         IndexToCaseMap indexToCase;
-        if (!indexToCase.init() || !indexToCase.put(defaultDepth, defaultIndex))
+        if (!indexToCase.put(defaultDepth, defaultIndex))
             return false;
 
         for (size_t i = 0; i < numCases; i++) {
             uint32_t depth = depths[i];
 
             size_t caseIndex;
             IndexToCaseMap::AddPtr p = indexToCase.lookupForAdd(depth);
             if (!p) {
--- a/js/src/wasm/WasmJS.cpp
+++ b/js/src/wasm/WasmJS.cpp
@@ -1076,23 +1076,23 @@ WasmInstanceObject::create(JSContext* cx
                            SharedTableVector&& tables,
                            Handle<FunctionVector> funcImports,
                            const GlobalDescVector& globals,
                            HandleValVector globalImportValues,
                            const WasmGlobalObjectVector& globalObjs,
                            HandleObject proto)
 {
     UniquePtr<ExportMap> exports = js::MakeUnique<ExportMap>();
-    if (!exports || !exports->init()) {
+    if (!exports) {
         ReportOutOfMemory(cx);
         return nullptr;
     }
 
     UniquePtr<ScopeMap> scopes = js::MakeUnique<ScopeMap>(cx->zone());
-    if (!scopes || !scopes->init()) {
+    if (!scopes) {
         ReportOutOfMemory(cx);
         return nullptr;
     }
 
     uint32_t indirectGlobals = 0;
 
     for (uint32_t i = 0; i < globalObjs.length(); i++) {
         if (globalObjs[i] && globals[i].isIndirect())
@@ -1684,17 +1684,17 @@ WasmMemoryObject::observers() const
     return *reinterpret_cast<InstanceSet*>(getReservedSlot(OBSERVERS_SLOT).toPrivate());
 }
 
 WasmMemoryObject::InstanceSet*
 WasmMemoryObject::getOrCreateObservers(JSContext* cx)
 {
     if (!hasObservers()) {
         auto observers = MakeUnique<InstanceSet>(cx->zone());
-        if (!observers || !observers->init()) {
+        if (!observers) {
             ReportOutOfMemory(cx);
             return nullptr;
         }
 
         setReservedSlot(OBSERVERS_SLOT, PrivateValue(observers.release()));
     }
 
     return &observers();
--- a/js/src/wasm/WasmTable.cpp
+++ b/js/src/wasm/WasmTable.cpp
@@ -167,40 +167,33 @@ Table::grow(uint32_t delta, JSContext* c
         return -1;
     Unused << array_.release();
     array_.reset((uint8_t*)newArray);
 
     // Realloc does not zero the delta for us.
     PodZero(newArray + length_, delta);
     length_ = newLength.value();
 
-    if (observers_.initialized()) {
-        for (InstanceSet::Range r = observers_.all(); !r.empty(); r.popFront())
-            r.front()->instance().onMovingGrowTable();
-    }
+    for (InstanceSet::Range r = observers_.all(); !r.empty(); r.popFront())
+        r.front()->instance().onMovingGrowTable();
 
     return oldLength;
 }
 
 bool
 Table::movingGrowable() const
 {
     return !maximum_ || length_ < maximum_.value();
 }
 
 bool
 Table::addMovingGrowObserver(JSContext* cx, WasmInstanceObject* instance)
 {
     MOZ_ASSERT(movingGrowable());
 
-    if (!observers_.initialized() && !observers_.init()) {
-        ReportOutOfMemory(cx);
-        return false;
-    }
-
     if (!observers_.putNew(instance)) {
         ReportOutOfMemory(cx);
         return false;
     }
 
     return true;
 }
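
addMovingGrowObserver() above likewise drops its allocation preamble: a putNew() on a table with no storage trivially looks overloaded, so the existing checkOverloaded() path allocates on the first insertion. A sketch of the resulting shape, with hypothetical names:

    using ObserverSet = js::HashSet<void*, js::DefaultHasher<void*>,
                                    js::SystemAllocPolicy>;

    static bool
    AddObserver(ObserverSet& observers, void* instance)
    {
        // The first putNew() on an empty table allocates entry storage;
        // false is returned only on OOM.
        return observers.putNew(instance);
    }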
 
--- a/js/src/wasm/WasmTextToBinary.cpp
+++ b/js/src/wasm/WasmTextToBinary.cpp
@@ -4025,17 +4025,17 @@ ParseModule(const char16_t* text, uintpt
     *binary = false;
 
     if (!c.ts.match(WasmToken::OpenParen, c.error))
         return nullptr;
     if (!c.ts.match(WasmToken::Module, c.error))
         return nullptr;
 
     auto* module = new(c.lifo) AstModule(c.lifo);
-    if (!module || !module->init())
+    if (!module)
         return nullptr;
 
     if (c.ts.peek().kind() == WasmToken::Text) {
         *binary = true;
         return ParseBinaryModule(c, module);
     }
 
     while (c.ts.getIf(WasmToken::OpenParen)) {
@@ -4167,26 +4167,16 @@ class Resolver
         funcTypeMap_(lifo),
         funcMap_(lifo),
         importMap_(lifo),
         tableMap_(lifo),
         memoryMap_(lifo),
         typeMap_(lifo),
         targetStack_(lifo)
     {}
-    bool init() {
-        return funcTypeMap_.init() &&
-               funcMap_.init() &&
-               importMap_.init() &&
-               tableMap_.init() &&
-               memoryMap_.init() &&
-               typeMap_.init() &&
-               varMap_.init() &&
-               globalMap_.init();
-    }
     void beginFunc() {
         varMap_.clear();
         MOZ_ASSERT(targetStack_.empty());
     }
 
 #define REGISTER(what, map)                                    \
     bool register##what##Name(AstName name, size_t index) {    \
         return name.empty() || registerName(map, name, index); \
@@ -4717,19 +4707,16 @@ ResolveStruct(Resolver& r, AstStructType
     return true;
 }
 
 static bool
 ResolveModule(LifoAlloc& lifo, AstModule* module, UniqueChars* error)
 {
     Resolver r(lifo, error);
 
-    if (!r.init())
-        return false;
-
     size_t numTypes = module->types().length();
     for (size_t i = 0; i < numTypes; i++) {
         AstTypeDef* td = module->types()[i];
         if (td->isFuncType()) {
             AstFuncType* funcType = &td->asFuncType();
             if (!r.registerFuncTypeName(funcType->name(), i))
                 return r.fail("duplicate signature");
         } else if (td->isStructType()) {
--- a/js/src/wasm/WasmValidate.cpp
+++ b/js/src/wasm/WasmValidate.cpp
@@ -1858,18 +1858,16 @@ DecodeExportSection(Decoder& d, ModuleEn
 {
     MaybeSectionRange range;
     if (!d.startSection(SectionId::Export, env, &range, "export"))
         return false;
     if (!range)
         return true;
 
     CStringSet dupSet;
-    if (!dupSet.init())
-        return false;
 
     uint32_t numExports;
     if (!d.readVarU32(&numExports))
         return d.fail("failed to read number of exports");
 
     if (numExports > MaxExports)
         return d.fail("too many exports");
 
--- a/js/xpconnect/src/XPCMaps.h
+++ b/js/xpconnect/src/XPCMaps.h
@@ -29,24 +29,17 @@ class JSObject2WrappedJSMap
 {
     using Map = js::HashMap<JS::Heap<JSObject*>,
                             nsXPCWrappedJS*,
                             js::MovableCellHasher<JS::Heap<JSObject*>>,
                             InfallibleAllocPolicy>;
 
 public:
     static JSObject2WrappedJSMap* newMap(int length) {
-        auto* map = new JSObject2WrappedJSMap();
-        if (!map->mTable.init(length)) {
-            // This is a decent estimate of the size of the hash table's
-            // entry storage. The |2| is because on average the capacity is
-            // twice the requested length.
-            NS_ABORT_OOM(length * 2 * sizeof(Map::Entry));
-        }
-        return map;
+        return new JSObject2WrappedJSMap(length);
     }
 
     inline nsXPCWrappedJS* Find(JSObject* Obj) {
         MOZ_ASSERT(Obj,"bad param");
         Map::Ptr p = mTable.lookup(Obj);
         return p ? p->value() : nullptr;
     }
 
@@ -89,17 +82,17 @@ public:
 
     size_t SizeOfIncludingThis(mozilla::MallocSizeOf mallocSizeOf) const;
 
     // Report the sum of SizeOfIncludingThis() for all wrapped JS in the map.
     // Each wrapped JS is only in one map.
     size_t SizeOfWrappedJS(mozilla::MallocSizeOf mallocSizeOf) const;
 
 private:
-    JSObject2WrappedJSMap() {}
+    explicit JSObject2WrappedJSMap(size_t length) : mTable(length) {}
 
     Map mTable;
 };
 
 /*************************/
 
 class Native2WrappedNativeMap
 {
@@ -501,24 +494,17 @@ class JSObject2JSObjectMap
 {
     using Map = JS::GCHashMap<JS::Heap<JSObject*>,
                               JS::Heap<JSObject*>,
                               js::MovableCellHasher<JS::Heap<JSObject*>>,
                               js::SystemAllocPolicy>;
 
 public:
     static JSObject2JSObjectMap* newMap(int length) {
-        auto* map = new JSObject2JSObjectMap();
-        if (!map->mTable.init(length)) {
-            // This is a decent estimate of the size of the hash table's
-            // entry storage. The |2| is because on average the capacity is
-            // twice the requested length.
-            NS_ABORT_OOM(length * 2 * sizeof(Map::Entry));
-        }
-        return map;
+        return new JSObject2JSObjectMap(length);
     }
 
     inline JSObject* Find(JSObject* key) {
         MOZ_ASSERT(key, "bad param");
         if (Map::Ptr p = mTable.lookup(key))
             return p->value();
         return nullptr;
     }
@@ -542,14 +528,14 @@ public:
 
     inline uint32_t Count() { return mTable.count(); }
 
     void Sweep() {
         mTable.sweep();
     }
 
 private:
-    JSObject2JSObjectMap() {}
+    explicit JSObject2JSObjectMap(size_t length) : mTable(length) {}
 
     Map mTable;
 };
 
 #endif /* xpcmaps_h___ */
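
With an infallible alloc policy, the NS_ABORT_OOM estimate in newMap() has no remaining role: the requested length is forwarded to the member's constructor, and any later allocation failure aborts inside the policy itself. A reduced sketch of the wrapper shape shared by both maps above, with hypothetical names:

    class PtrIdMap
    {
        using Map = js::HashMap<void*, uint32_t, js::DefaultHasher<void*>,
                                InfallibleAllocPolicy>;

    public:
        static PtrIdMap* newMap(int length) { return new PtrIdMap(length); }

        uint32_t Count() { return mTable.count(); }

    private:
        explicit PtrIdMap(size_t length) : mTable(length) {}

        Map mTable;
    };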
--- a/memory/replace/dmd/DMD.cpp
+++ b/memory/replace/dmd/DMD.cpp
@@ -580,18 +580,18 @@ public:
 //---------------------------------------------------------------------------
 // Location service
 //---------------------------------------------------------------------------
 
 class StringTable
 {
 public:
   StringTable()
+    : mSet(64)
   {
-    MOZ_ALWAYS_TRUE(mSet.init(64));
   }
 
   const char*
   Intern(const char* aString)
   {
     StringHashSet::AddPtr p = mSet.lookupForAdd(aString);
     if (p) {
       return *p;
@@ -1126,18 +1126,18 @@ void MaybeAddToDeadBlockTable(const Dead
 // Add a pointer to each live stack trace into the given StackTraceSet.  (A
 // stack trace is live if it's used by one of the live blocks.)
 static void
 GatherUsedStackTraces(StackTraceSet& aStackTraces)
 {
   MOZ_ASSERT(gStateLock->IsLocked());
   MOZ_ASSERT(Thread::Fetch()->InterceptsAreBlocked());
 
-  aStackTraces.finish();
-  MOZ_ALWAYS_TRUE(aStackTraces.init(512));
+  aStackTraces.clear();
+  MOZ_ALWAYS_TRUE(aStackTraces.reserve(512));
 
   for (auto iter = gLiveBlockTable->iter(); !iter.done(); iter.next()) {
     iter.get().AddStackTracesToTable(aStackTraces);
   }
 
   for (auto iter = gDeadBlockTable->iter(); !iter.done(); iter.next()) {
     iter.get().key().AddStackTracesToTable(aStackTraces);
   }
@@ -1571,28 +1571,24 @@ Init(malloc_table_t* aMallocTable)
     InfallibleAllocPolicy::malloc_(sizeof(FastBernoulliTrial));
   ResetBernoulli();
 
   DMD_CREATE_TLS_INDEX(gTlsIndex);
 
   {
     AutoLockState lock;
 
-    gStackTraceTable = InfallibleAllocPolicy::new_<StackTraceTable>();
-    MOZ_ALWAYS_TRUE(gStackTraceTable->init(8192));
-
-    gLiveBlockTable = InfallibleAllocPolicy::new_<LiveBlockTable>();
-    MOZ_ALWAYS_TRUE(gLiveBlockTable->init(8192));
+    gStackTraceTable = InfallibleAllocPolicy::new_<StackTraceTable>(8192);
+    gLiveBlockTable = InfallibleAllocPolicy::new_<LiveBlockTable>(8192);
 
     // Create this even if the mode isn't Cumulative (albeit with a small
     // size), in case the mode is changed later on (as is done by SmokeDMD.cpp,
     // for example).
-    gDeadBlockTable = InfallibleAllocPolicy::new_<DeadBlockTable>();
     size_t tableSize = gOptions->IsCumulativeMode() ? 8192 : 4;
-    MOZ_ALWAYS_TRUE(gDeadBlockTable->init(tableSize));
+    gDeadBlockTable = InfallibleAllocPolicy::new_<DeadBlockTable>(tableSize);
   }
 
   return true;
 }
 
 //---------------------------------------------------------------------------
 // Block reporting and unreporting
 //---------------------------------------------------------------------------
@@ -1707,19 +1703,19 @@ DMDFuncs::ClearReports()
     iter.get().UnreportIfNotReportedOnAlloc();
   }
 }
 
 class ToIdStringConverter final
 {
 public:
   ToIdStringConverter()
-    : mNextId(0)
+    : mIdMap(512)
+    , mNextId(0)
   {
-    MOZ_ALWAYS_TRUE(mIdMap.init(512));
   }
 
   // Converts a pointer to a unique ID. Reuses the existing ID for the pointer
   // if it's been seen before.
   const char* ToIdString(const void* aPtr)
   {
     uint32_t id;
     PointerIdMap::AddPtr p = mIdMap.lookupForAdd(aPtr);
@@ -1824,21 +1820,18 @@ AnalyzeImpl(UniquePtr<JSONWriteFunc> aWr
   JSONWriter writer(std::move(aWriter));
 
   AutoBlockIntercepts block(Thread::Fetch());
   AutoLockState lock;
 
   // Allocate this on the heap instead of the stack because it's fairly large.
   auto locService = InfallibleAllocPolicy::new_<CodeAddressService>();
 
-  StackTraceSet usedStackTraces;
-  MOZ_ALWAYS_TRUE(usedStackTraces.init(512));
-
-  PointerSet usedPcs;
-  MOZ_ALWAYS_TRUE(usedPcs.init(512));
+  StackTraceSet usedStackTraces(512);
+  PointerSet usedPcs(512);
 
   size_t iscSize;
 
   static int analysisCount = 1;
   StatusMsg("Dump %d {\n", analysisCount++);
 
   writer.Start();
   {
@@ -1905,18 +1898,17 @@ AnalyzeImpl(UniquePtr<JSONWriteFunc> aWr
         }
         writer.EndObject();
       };
 
       // Live blocks.
       if (!gOptions->IsScanMode()) {
         // At this point we typically have many LiveBlocks that differ only in
         // their address. Aggregate them to reduce the size of the output file.
-        AggregatedLiveBlockTable agg;
-        MOZ_ALWAYS_TRUE(agg.init(8192));
+        AggregatedLiveBlockTable agg(8192);
         for (auto iter = gLiveBlockTable->iter(); !iter.done(); iter.next()) {
           const LiveBlock& b = iter.get();
           b.AddStackTracesToTable(usedStackTraces);
 
           if (AggregatedLiveBlockTable::AddPtr p = agg.lookupForAdd(&b)) {
             p->value() += 1;
           } else {
             MOZ_ALWAYS_TRUE(agg.add(p, &b, 1));
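
DMD's tables above show both replacement idioms side by side: presizing moves into member initializers (mSet(64), mIdMap(512), and the three global tables), and the finish()-then-init() reset becomes clear() plus reserve(). A sketch assuming DMD's infallible policy:

    class ScratchSet
    {
    public:
        ScratchSet() : mSet(512) {}   // presized at construction

        void reset()
        {
            mSet.clear();                         // drop entries, keep capacity
            MOZ_ALWAYS_TRUE(mSet.reserve(512));   // cannot fail under an
                                                  // infallible alloc policy
        }

    private:
        mozilla::HashSet<const void*, mozilla::DefaultHasher<const void*>,
                         InfallibleAllocPolicy> mSet;
    };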
--- a/mfbt/HashTable.h
+++ b/mfbt/HashTable.h
@@ -1,11 +1,11 @@
-/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*-
- * vim: set ts=8 sts=4 et sw=4 tw=99:
- * This Source Code Form is subject to the terms of the Mozilla Public
+/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
+/* vim: set ts=8 sts=2 et sw=2 tw=80: */
+/* This Source Code Form is subject to the terms of the Mozilla Public
  * License, v. 2.0. If a copy of the MPL was not distributed with this
  * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
 
 //---------------------------------------------------------------------------
 // Overview
 //---------------------------------------------------------------------------
 //
 // This file defines HashMap<Key, Value> and HashSet<T>, hash tables that are
@@ -32,16 +32,19 @@
 //   - |MallocAllocPolicy| is the default and is usually appropriate; note that
 //     operations (such as insertions) that might cause allocations are
 //     fallible and must be checked for OOM. These checks are enforced by the
 //     use of MOZ_MUST_USE.
 //
 //   - |InfallibleAllocPolicy| is another possibility; it allows the
 //     abovementioned OOM checks to be done with MOZ_ALWAYS_TRUE().
 //
+//   Note that entry storage allocation is lazy, and not done until the first
+//   lookupForAdd(), put(), or putNew() is performed.
+//
 //  See AllocPolicy.h for more details.
 //
 // Documentation on how to use HashMap and HashSet, including examples, is
 // present within those classes. Search for "class HashMap" and "class
 // HashSet".
 //
 // Both HashMap and HashSet are implemented on top of a third class, HashTable.
 // You only need to look at HashTable if you want to understand the
@@ -62,19 +65,16 @@
 //   distinction.
 //
 // - mozilla::HashTable requires more explicit OOM checking. As mentioned
 //   above, the use of |InfallibleAllocPolicy| can simplify things.
 //
 // - mozilla::HashTable has a default capacity on creation of 32 and a minimum
 //   capacity of 4. PLDHashTable has a default capacity on creation of 8 and a
 //   minimum capacity of 8.
-//
-// - mozilla::HashTable allocates memory eagerly. PLDHashTable delays
-//   allocating until the first element is inserted.
 
 #ifndef mozilla_HashTable_h
 #define mozilla_HashTable_h
 
 #include "mozilla/AllocPolicy.h"
 #include "mozilla/Assertions.h"
 #include "mozilla/Attributes.h"
 #include "mozilla/Casting.h"
@@ -128,17 +128,16 @@ using Generation = Opaque<uint64_t>;
 // Template parameter requirements:
 // - Key/Value: movable, destructible, assignable.
 // - HashPolicy: see the "Hash Policy" section below.
 // - AllocPolicy: see AllocPolicy.h.
 //
 // Note:
 // - HashMap is not reentrant: Key/Value/HashPolicy/AllocPolicy members
 //   called by HashMap must not call back into the same HashMap object.
-// - Due to the lack of exception handling, the user must call |init()|.
 //
 template<class Key,
          class Value,
          class HashPolicy = DefaultHasher<Key>,
          class AllocPolicy = MallocAllocPolicy>
 class HashMap
 {
   // -- Implementation details -----------------------------------------------
@@ -168,69 +167,74 @@ class HashMap
   friend class Impl::Enum;
 
 public:
   using Lookup = typename HashPolicy::Lookup;
   using Entry = TableEntry;
 
   // -- Initialization -------------------------------------------------------
 
-  // HashMap construction is fallible (due to possible OOM). The user must
-  // call init() after construction and check the return value.
-  explicit HashMap(AllocPolicy aPolicy = AllocPolicy())
-    : mImpl(aPolicy)
+  explicit HashMap(AllocPolicy aAllocPolicy = AllocPolicy(),
+                   uint32_t aLen = Impl::sDefaultLen)
+    : mImpl(aAllocPolicy, aLen)
+  {
+  }
+
+  explicit HashMap(uint32_t aLen)
+    : mImpl(AllocPolicy(), aLen)
   {
   }
 
   // HashMap is movable.
   HashMap(HashMap&& aRhs)
     : mImpl(std::move(aRhs.mImpl))
   {
   }
   void operator=(HashMap&& aRhs)
   {
     MOZ_ASSERT(this != &aRhs, "self-move assignment is prohibited");
     mImpl = std::move(aRhs.mImpl);
   }
 
-  // Initialize the map for use. Must be called after construction, before
-  // any other operations (other than initialized()).
-  MOZ_MUST_USE bool init(uint32_t aLen = 16) { return mImpl.init(aLen); }
-
   // -- Status and sizing ----------------------------------------------------
 
-  // Has the map been initialized?
-  bool initialized() const { return mImpl.initialized(); }
-
   // The map's current generation.
   Generation generation() const { return mImpl.generation(); }
 
   // Is the map empty?
   bool empty() const { return mImpl.empty(); }
 
   // Number of keys/values in the map.
   uint32_t count() const { return mImpl.count(); }
 
   // Number of key/value slots in the map. Note: resize will happen well before
   // count() == capacity().
-  size_t capacity() const { return mImpl.capacity(); }
+  uint32_t capacity() const { return mImpl.capacity(); }
 
   // The size of the map's entry storage, in bytes. If the keys/values contain
   // pointers to other heap blocks, you must iterate over the map and measure
   // them separately; hence the "shallow" prefix.
   size_t shallowSizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return mImpl.shallowSizeOfExcludingThis(aMallocSizeOf);
   }
   size_t shallowSizeOfIncludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return aMallocSizeOf(this) +
            mImpl.shallowSizeOfExcludingThis(aMallocSizeOf);
   }
 
+  // Attempt to minimize the capacity(). If the table is empty, its entry
+  // storage is freed; upon regrowth it gets the minimum capacity.
+  void compact() { mImpl.compact(); }
+
+  // Attempt to reserve enough space to fit at least |aLen| elements. Does
+  // nothing if the map already has sufficient capacity.
+  MOZ_MUST_USE bool reserve(uint32_t aLen) { return mImpl.reserve(aLen); }
+
   // -- Lookups --------------------------------------------------------------
 
   // Does the map contain a key/value matching |aLookup|?
   bool has(const Lookup& aLookup) const
   {
     return mImpl.lookup(aLookup).found();
   }
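
The two constructors above replace init(): one takes an alloc policy plus a length, the other just a length. Note that even a length-constructed map allocates nothing up front; the length only seeds the capacity chosen when storage is first needed. A usage sketch:

    static bool
    Example()
    {
        mozilla::HashMap<uint32_t, const char*> map;        // default length (16)
        mozilla::HashMap<uint32_t, const char*> big(128);   // seeded for 128 entries

        MOZ_ASSERT(map.capacity() == 0);  // no entry storage yet
        MOZ_ASSERT(big.capacity() == 0);  // lazy even with a length
        return big.put(1, "one");         // first put() allocates; false on OOM
    }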
 
@@ -331,17 +335,17 @@ public:
   //      if (!h.relookupOrAdd(p, 3, 'a')) {
   //        return false;
   //      }
   //    }
   //    assert(p->key() == 3);
   //    char val = p->value();
   //
   using AddPtr = typename Impl::AddPtr;
-  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup) const
+  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup)
   {
     return mImpl.lookupForAdd(aLookup);
   }
 
   // Add a key/value. Returns false on OOM.
   template<typename KeyInput, typename ValueInput>
   MOZ_MUST_USE bool add(AddPtr& aPtr, KeyInput&& aKey, ValueInput&& aValue)
   {
@@ -370,16 +374,22 @@ public:
       remove(p);
     }
   }
 
   // Remove a previously found key/value (assuming aPtr.found()). The map must
   // not have been mutated in the interim.
   void remove(Ptr aPtr) { mImpl.remove(aPtr); }
 
+  // Remove all keys/values without changing the capacity.
+  void clear() { mImpl.clear(); }
+
+  // Like clear() followed by compact().
+  void clearAndCompact() { mImpl.clearAndCompact(); }
+
   // -- Rekeying -------------------------------------------------------------
 
   // Infallibly rekey one entry, if necessary. Requires that template
   // parameters Key and HashPolicy::Lookup are the same type.
   void rekeyIfMoved(const Key& aOldKey, const Key& aNewKey)
   {
     if (aOldKey != aNewKey) {
       rekeyAs(aOldKey, aNewKey, aNewKey);
@@ -423,45 +433,32 @@ public:
   using ModIterator = typename Impl::ModIterator;
   ModIterator modIter() { return mImpl.modIter(); }
 
   // These are similar to Iterator/ModIterator/iter(), but use different
   // terminology.
   using Range = typename Impl::Range;
   using Enum = typename Impl::Enum;
   Range all() const { return mImpl.all(); }
-
-  // -- Clearing -------------------------------------------------------------
-
-  // Remove all keys/values without changing the capacity.
-  void clear() { mImpl.clear(); }
-
-  // Remove all keys/values and attempt to minimize the capacity.
-  void clearAndShrink() { mImpl.clearAndShrink(); }
-
-  // Remove all keys/values and release entry storage. The map must be
-  // initialized via init() again before further use.
-  void finish() { mImpl.finish(); }
 };
 
 //---------------------------------------------------------------------------
 // HashSet
 //---------------------------------------------------------------------------
 
 // HashSet is a fast hash-based set of values.
 //
 // Template parameter requirements:
 // - T: movable, destructible, assignable.
 // - HashPolicy: see the "Hash Policy" section below.
 // - AllocPolicy: see AllocPolicy.h
 //
 // Note:
 // - HashSet is not reentrant: T/HashPolicy/AllocPolicy members called by
 //   HashSet must not call back into the same HashSet object.
-// - Due to the lack of exception handling, the user must call |init()|.
 //
 template<class T,
          class HashPolicy = DefaultHasher<T>,
          class AllocPolicy = MallocAllocPolicy>
 class HashSet
 {
   // -- Implementation details -----------------------------------------------
 
@@ -485,69 +482,74 @@ class HashSet
   friend class Impl::Enum;
 
 public:
   using Lookup = typename HashPolicy::Lookup;
   using Entry = T;
 
   // -- Initialization -------------------------------------------------------
 
-  // HashSet construction is fallible (due to possible OOM). The user must call
-  // init() after construction and check the return value.
-  explicit HashSet(AllocPolicy a = AllocPolicy())
-    : mImpl(a)
+  explicit HashSet(AllocPolicy aAllocPolicy = AllocPolicy(),
+                   uint32_t aLen = Impl::sDefaultLen)
+    : mImpl(aAllocPolicy, aLen)
+  {
+  }
+
+  explicit HashSet(uint32_t aLen)
+    : mImpl(AllocPolicy(), aLen)
   {
   }
 
   // HashSet is movable.
   HashSet(HashSet&& aRhs)
     : mImpl(std::move(aRhs.mImpl))
   {
   }
   void operator=(HashSet&& aRhs)
   {
     MOZ_ASSERT(this != &aRhs, "self-move assignment is prohibited");
     mImpl = std::move(aRhs.mImpl);
   }
 
-  // Initialize the set for use. Must be called after construction, before
-  // any other operations (other than initialized()).
-  MOZ_MUST_USE bool init(uint32_t aLen = 16) { return mImpl.init(aLen); }
-
   // -- Status and sizing ----------------------------------------------------
 
-  // Has the set been initialized?
-  bool initialized() const { return mImpl.initialized(); }
-
   // The set's current generation.
   Generation generation() const { return mImpl.generation(); }
 
   // Is the set empty?
   bool empty() const { return mImpl.empty(); }
 
   // Number of elements in the set.
   uint32_t count() const { return mImpl.count(); }
 
   // Number of element slots in the set. Note: resize will happen well before
   // count() == capacity().
-  size_t capacity() const { return mImpl.capacity(); }
+  uint32_t capacity() const { return mImpl.capacity(); }
 
   // The size of the set's entry storage, in bytes. If the elements contain
   // pointers to other heap blocks, you must iterate over the set and measure
   // them separately; hence the "shallow" prefix.
   size_t shallowSizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return mImpl.shallowSizeOfExcludingThis(aMallocSizeOf);
   }
   size_t shallowSizeOfIncludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return aMallocSizeOf(this) +
            mImpl.shallowSizeOfExcludingThis(aMallocSizeOf);
   }
 
+  // Attempt to minimize the capacity(). If the table is empty, its entry
+  // storage is freed; upon regrowth it gets the minimum capacity.
+  void compact() { mImpl.compact(); }
+
+  // Attempt to reserve enough space to fit at least |aLen| elements. Does
+  // nothing if the set already has sufficient capacity.
+  MOZ_MUST_USE bool reserve(uint32_t aLen) { return mImpl.reserve(aLen); }
+
   // -- Lookups --------------------------------------------------------------
 
   // Does the set contain an element matching |aLookup|?
   bool has(const Lookup& aLookup) const
   {
     return mImpl.lookup(aLookup).found();
   }
 
@@ -646,17 +648,17 @@ public:
   //        return false;
   //      }
   //    }
   //    assert(*p == 3);
   //
   // Note that relookupOrAdd(p,l,t) performs Lookup using |l| and adds the
   // entry |t|, where the caller ensures match(l,t).
   using AddPtr = typename Impl::AddPtr;
-  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup) const
+  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup)
   {
     return mImpl.lookupForAdd(aLookup);
   }
 
   // Add an element. Returns false on OOM.
   template<typename U>
   MOZ_MUST_USE bool add(AddPtr& aPtr, U&& aU)
   {
@@ -679,16 +681,22 @@ public:
       remove(p);
     }
   }
 
   // Remove a previously found element (assuming aPtr.found()). The set must
   // not have been mutated in the interim.
   void remove(Ptr aPtr) { mImpl.remove(aPtr); }
 
+  // Remove all elements without changing the capacity.
+  void clear() { mImpl.clear(); }
+
+  // Like clear() followed by compact().
+  void clearAndCompact() { mImpl.clearAndCompact(); }
+
   // -- Rekeying -------------------------------------------------------------
 
   // Infallibly rekey one entry, if present. Requires that template parameters
   // T and HashPolicy::Lookup are the same type.
   void rekeyIfMoved(const Lookup& aOldValue, const T& aNewValue)
   {
     if (aOldValue != aNewValue) {
       rekeyAs(aOldValue, aNewValue, aNewValue);
@@ -745,28 +753,16 @@ public:
   using ModIterator = typename Impl::ModIterator;
   ModIterator modIter() { return mImpl.modIter(); }
 
   // These are similar to Iterator/ModIterator/iter(), but use different
   // terminology.
   using Range = typename Impl::Range;
   using Enum = typename Impl::Enum;
   Range all() const { return mImpl.all(); }
-
-  // -- Clearing -------------------------------------------------------------
-
-  // Remove all elements without changing the capacity.
-  void clear() { mImpl.clear(); }
-
-  // Remove all elements and attempt to minimize the capacity.
-  void clearAndShrink() { mImpl.clearAndShrink(); }
-
-  // Remove all keys/values and release entry storage. The set must be
-  // initialized via init() again before further use.
-  void finish() { mImpl.finish(); }
 };
 
 //---------------------------------------------------------------------------
 // Hash Policy
 //---------------------------------------------------------------------------
 
 // A hash policy |HP| for a hash table with key-type |Key| must provide:
 //
@@ -1447,17 +1443,17 @@ public:
     ~ModIterator()
     {
       if (mRekeyed) {
         mTable.mGen++;
         mTable.checkOverRemoved();
       }
 
       if (mRemoved) {
-        mTable.compactIfUnderloaded();
+        mTable.compact();
       }
     }
   };
 
   // Range is similar to Iterator, but uses different terminology.
   class Range
   {
     friend class HashTable;
@@ -1534,49 +1530,81 @@ public:
     aRhs.mTable = nullptr;
   }
 
 private:
   // HashTable is not copyable or assignable
   HashTable(const HashTable&) = delete;
   void operator=(const HashTable&) = delete;
 
-  static const size_t CAP_BITS = 30;
+  static const uint32_t CAP_BITS = 30;
 
 public:
   uint64_t mGen : 56;      // entry storage generation number
   uint64_t mHashShift : 8; // multiplicative hash shift
   Entry* mTable;           // entry storage
   uint32_t mEntryCount;    // number of entries in mTable
   uint32_t mRemovedCount;  // removed entry sentinels in mTable
 
 #ifdef DEBUG
   uint64_t mMutationCount;
   mutable bool mEntered;
 #endif
 
   // The default initial capacity is 32 (enough to hold 16 elements), but it
   // can be as low as 4.
+  static const uint32_t sDefaultLen = 16;
   static const uint32_t sMinCapacity = 4;
   static const uint32_t sMaxInit = 1u << (CAP_BITS - 1);
   static const uint32_t sMaxCapacity = 1u << CAP_BITS;
 
   // Hash-table alpha is conceptually a fraction, but to avoid floating-point
   // math we implement it as a ratio of integers.
   static const uint8_t sAlphaDenominator = 4;
   static const uint8_t sMinAlphaNumerator = 1; // min alpha: 1/4
   static const uint8_t sMaxAlphaNumerator = 3; // max alpha: 3/4
 
   static const HashNumber sFreeKey = Entry::sFreeKey;
   static const HashNumber sRemovedKey = Entry::sRemovedKey;
   static const HashNumber sCollisionBit = Entry::sCollisionBit;
 
-  void setTableSizeLog2(uint32_t aSizeLog2)
+  static uint32_t bestCapacity(uint32_t aLen)
   {
-    mHashShift = kHashNumberBits - aSizeLog2;
+    static_assert((sMaxInit * sAlphaDenominator) / sAlphaDenominator ==
+                    sMaxInit,
+                  "multiplication in numerator below could overflow");
+    static_assert(sMaxInit * sAlphaDenominator <=
+                    UINT32_MAX - sMaxAlphaNumerator,
+                  "numerator calculation below could potentially overflow");
+
+    // Compute the smallest capacity allowing |aLen| elements to be
+    // inserted without rehashing: ceil(aLen / max-alpha).  (Ceiling
+    // integral division: <http://stackoverflow.com/a/2745086>.)
+    uint32_t capacity =
+      (aLen * sAlphaDenominator + sMaxAlphaNumerator - 1) / sMaxAlphaNumerator;
+    capacity = (capacity < sMinCapacity)
+             ? sMinCapacity
+             : RoundUpPow2(capacity);
+
+    MOZ_ASSERT(capacity >= aLen);
+    MOZ_ASSERT(capacity <= sMaxCapacity);
+
+    return capacity;
+  }
+
+  static uint32_t hashShift(uint32_t aLen)
+  {
+    // Reject all lengths whose initial computed capacity would exceed
+    // sMaxCapacity. Round that maximum aLen down to the nearest power of two
+    // for speedier code.
+    if (MOZ_UNLIKELY(aLen > sMaxInit)) {
+      MOZ_CRASH("initial length is too large");
+    }
+
+    return kHashNumberBits - mozilla::CeilingLog2(bestCapacity(aLen));
   }
 
   static bool isLiveHash(HashNumber aHash) { return Entry::isLiveHash(aHash); }
 
   static HashNumber prepareHash(const Lookup& aLookup)
   {
     HashNumber keyHash = ScrambleHashCode(HashPolicy::hash(aLookup));
 
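bestCapacity() divides the requested length by the maximum load factor (3/4) with ceiling rounding, clamps to sMinCapacity, and rounds up to a power of two; hashShift() then derives the multiplicative-hash shift from that capacity. A standalone spot check of the arithmetic (an illustrative restatement, not the shipped code):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t RoundUpPow2(uint32_t aX)
    {
        uint32_t pow = 1;
        while (pow < aX) {
            pow <<= 1;
        }
        return pow;
    }

    static uint32_t BestCapacity(uint32_t aLen)
    {
        // ceil(aLen / (3/4)) == ceil(aLen * 4 / 3), then clamp and round up.
        uint32_t capacity = (aLen * 4 + 3 - 1) / 3;
        return capacity < 4 ? 4 : RoundUpPow2(capacity);
    }

    int main()
    {
        assert(BestCapacity(16) == 32);  // default length: 22, rounded up to 32
        assert(BestCapacity(1) == 4);    // clamped to sMinCapacity
        assert(BestCapacity(24) == 32);  // 32 * 3/4 == 24 fits exactly
        return 0;
    }
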
@@ -1626,75 +1654,35 @@ public:
     Entry* end = aOldTable + aCapacity;
     for (Entry* e = aOldTable; e < end; ++e) {
       e->~Entry();
     }
     aAllocPolicy.free_(aOldTable, aCapacity);
   }
 
 public:
-  explicit HashTable(AllocPolicy aAllocPolicy)
+  HashTable(AllocPolicy aAllocPolicy, uint32_t aLen)
     : AllocPolicy(aAllocPolicy)
     , mGen(0)
-    , mHashShift(kHashNumberBits)
+    , mHashShift(hashShift(aLen))
     , mTable(nullptr)
     , mEntryCount(0)
     , mRemovedCount(0)
 #ifdef DEBUG
     , mMutationCount(0)
     , mEntered(false)
 #endif
   {
   }
 
-  MOZ_MUST_USE bool init(uint32_t aLen)
+  explicit HashTable(AllocPolicy aAllocPolicy)
+    : HashTable(aAllocPolicy, sDefaultLen)
   {
-    MOZ_ASSERT(!initialized());
-
-    // Reject all lengths whose initial computed capacity would exceed
-    // sMaxCapacity. Round that maximum aLen down to the nearest power of two
-    // for speedier code.
-    if (MOZ_UNLIKELY(aLen > sMaxInit)) {
-      this->reportAllocOverflow();
-      return false;
-    }
-
-    static_assert((sMaxInit * sAlphaDenominator) / sAlphaDenominator ==
-                    sMaxInit,
-                  "multiplication in numerator below could overflow");
-    static_assert(sMaxInit * sAlphaDenominator <=
-                    UINT32_MAX - sMaxAlphaNumerator,
-                  "numerator calculation below could potentially overflow");
-
-    // Compute the smallest capacity allowing |aLen| elements to be
-    // inserted without rehashing: ceil(aLen / max-alpha).  (Ceiling
-    // integral division: <http://stackoverflow.com/a/2745086>.)
-    uint32_t newCapacity =
-      (aLen * sAlphaDenominator + sMaxAlphaNumerator - 1) / sMaxAlphaNumerator;
-    if (newCapacity < sMinCapacity) {
-      newCapacity = sMinCapacity;
-    }
-
-    // Round up capacity to next power-of-two.
-    uint32_t log2 = mozilla::CeilingLog2(newCapacity);
-    newCapacity = 1u << log2;
-
-    MOZ_ASSERT(newCapacity >= aLen);
-    MOZ_ASSERT(newCapacity <= sMaxCapacity);
-
-    mTable = createTable(*this, newCapacity);
-    if (!mTable) {
-      return false;
-    }
-    setTableSizeLog2(log2);
-    return true;
   }
 
-  bool initialized() const { return !!mTable; }
-
   ~HashTable()
   {
     if (mTable) {
       destroyTable(*this, mTable, capacity());
     }
   }
 
 private:
@@ -1715,20 +1703,24 @@ private:
   }
 
   static HashNumber applyDoubleHash(HashNumber aHash1,
                                     const DoubleHash& aDoubleHash)
   {
     return (aHash1 - aDoubleHash.mHash2) & aDoubleHash.mSizeMask;
   }
 
+  // True if the current load equals or exceeds the maximum load factor.
   bool overloaded()
   {
     static_assert(sMaxCapacity <= UINT32_MAX / sMaxAlphaNumerator,
                   "multiplication below could overflow");
+
+    // Note: if capacity() is zero this is always true, which is what we
+    // want: a table without entry storage must allocate it on the first
+    // add, via checkOverloaded().
     return mEntryCount + mRemovedCount >=
            capacity() * sMaxAlphaNumerator / sAlphaDenominator;
   }
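
To make the threshold concrete, here is a minimal sketch of the predicate
under the same assumed constants (max alpha == 3/4):

    // Sketch: a table is overloaded once its used slots (live + removed)
    // reach three quarters of its capacity.
    static bool OverloadedSketch(uint32_t aUsed, uint32_t aCapacity) {
      return aUsed >= aCapacity * 3 / 4;  // trivially true when aCapacity == 0
    }
    // OverloadedSketch(24, 32) == true; OverloadedSketch(23, 32) == false.
    // OverloadedSketch(0, 0) == true: a lazy table allocates on the first add.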
 
   // Would the table be underloaded if it had the given capacity and entryCount?
   static bool wouldBeUnderloaded(uint32_t aCapacity, uint32_t aEntryCount)
   {
     static_assert(sMaxCapacity <= UINT32_MAX / sMinAlphaNumerator,
@@ -1838,73 +1830,81 @@ private:
 
   enum RebuildStatus
   {
     NotOverloaded,
     Rehashed,
     RehashFailed
   };
 
-  RebuildStatus changeTableSize(int aDeltaLog2,
+  RebuildStatus changeTableSize(uint32_t newCapacity,
                                 FailureBehavior aReportFailure = ReportFailure)
   {
+    MOZ_ASSERT(IsPowerOfTwo(newCapacity));
+    MOZ_ASSERT(!!mTable == !!capacity());
+
     // Look, but don't touch, until we succeed in getting new entry store.
     Entry* oldTable = mTable;
-    uint32_t oldCap = capacity();
-    uint32_t newLog2 = kHashNumberBits - mHashShift + aDeltaLog2;
-    uint32_t newCapacity = 1u << newLog2;
+    uint32_t oldCapacity = capacity();
+    uint32_t newLog2 = mozilla::CeilingLog2(newCapacity);
+
     if (MOZ_UNLIKELY(newCapacity > sMaxCapacity)) {
       if (aReportFailure) {
         this->reportAllocOverflow();
       }
       return RehashFailed;
     }
 
     Entry* newTable = createTable(*this, newCapacity, aReportFailure);
     if (!newTable) {
       return RehashFailed;
     }
 
     // We can't fail from here on, so update table parameters.
-    setTableSizeLog2(newLog2);
+    mHashShift = kHashNumberBits - newLog2;
     mRemovedCount = 0;
     mGen++;
     mTable = newTable;
 
     // Copy only live entries, leaving removed ones behind.
-    Entry* end = oldTable + oldCap;
+    Entry* end = oldTable + oldCapacity;
     for (Entry* src = oldTable; src < end; ++src) {
       if (src->isLive()) {
         HashNumber hn = src->getKeyHash();
         findFreeEntry(hn).setLive(
           hn, std::move(const_cast<typename Entry::NonConstT&>(src->get())));
       }
 
       src->~Entry();
     }
 
     // All entries have been destroyed, no need to destroyTable.
-    this->free_(oldTable, oldCap);
+    this->free_(oldTable, oldCapacity);
     return Rehashed;
   }
 
   bool shouldCompressTable()
   {
-    // Compress if a quarter or more of all entries are removed.
+    // True if a quarter or more of all entries are removed tombstones. This
+    // also holds whenever capacity() == 0 (i.e. entry storage has not been
+    // allocated), which is what we want: checkOverloaded() then passes the
+    // deferred rawCapacity() to changeTableSize() rather than doubling it.
     return mRemovedCount >= (capacity() >> 2);
   }
 
   RebuildStatus checkOverloaded(FailureBehavior aReportFailure = ReportFailure)
   {
     if (!overloaded()) {
       return NotOverloaded;
     }
 
-    int deltaLog2 = shouldCompressTable() ? 0 : 1;
-    return changeTableSize(deltaLog2, aReportFailure);
+    uint32_t newCapacity = shouldCompressTable()
+                         ? rawCapacity()
+                         : rawCapacity() * 2;
+    return changeTableSize(newCapacity, aReportFailure);
   }
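
Sketched under the same assumptions, the grow-or-compress decision works like
this (NextCapacitySketch is a hypothetical stand-in):

    // If a quarter or more of the used slots are tombstones, rehash at the
    // current raw capacity to squeeze them out; otherwise double. For a
    // lazily-allocated table, rawCapacity() is the deferred initial capacity,
    // so the first add allocates exactly that.
    static uint32_t NextCapacitySketch(uint32_t aRawCapacity,
                                       uint32_t aCapacity,
                                       uint32_t aRemovedCount) {
      bool compress = aRemovedCount >= (aCapacity >> 2);
      return compress ? aRawCapacity : aRawCapacity * 2;
    }
    // NextCapacitySketch(32, 32, 8) == 32  (many tombstones: same capacity)
    // NextCapacitySketch(32, 32, 0) == 64  (genuinely full: double)
    // NextCapacitySketch(32,  0, 0) == 32  (lazy table: allocate as deferred)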
 
   // Infallibly rehash the table if we are overloaded with removals.
   void checkOverRemoved()
   {
     if (overloaded()) {
       if (checkOverloaded(DontReportFailure) == RehashFailed) {
         rehashTableInPlace();
@@ -1926,50 +1926,33 @@ private:
 #ifdef DEBUG
     mMutationCount++;
 #endif
   }
 
   void checkUnderloaded()
   {
     if (underloaded()) {
-      (void)changeTableSize(-1, DontReportFailure);
-    }
-  }
-
-  // Resize the table down to the largest capacity which doesn't underload the
-  // table.  Since we call checkUnderloaded() on every remove, you only need
-  // to call this after a bulk removal of items done without calling remove().
-  void compactIfUnderloaded()
-  {
-    int32_t resizeLog2 = 0;
-    uint32_t newCapacity = capacity();
-    while (wouldBeUnderloaded(newCapacity, mEntryCount)) {
-      newCapacity = newCapacity >> 1;
-      resizeLog2--;
-    }
-
-    if (resizeLog2 != 0) {
-      (void)changeTableSize(resizeLog2, DontReportFailure);
+      (void)changeTableSize(capacity() / 2, DontReportFailure);
     }
   }
 
   // This is equivalent to changeTableSize(capacity()), but without requiring
   // a second table.  We do this by recycling the collision bits to tell us if
   // the element is already inserted or still waiting to be inserted.  Since
   // already-inserted elements win any conflicts, we get the same table as we
   // would have gotten through random insertion order.
   void rehashTableInPlace()
   {
     mRemovedCount = 0;
     mGen++;
-    for (size_t i = 0; i < capacity(); ++i) {
+    for (uint32_t i = 0; i < capacity(); ++i) {
       mTable[i].unsetCollision();
     }
-    for (size_t i = 0; i < capacity();) {
+    for (uint32_t i = 0; i < capacity();) {
       Entry* src = &mTable[i];
 
       if (!src->isLive() || src->hasCollision()) {
         ++i;
         continue;
       }
 
       HashNumber keyHash = src->getKeyHash();
@@ -2030,126 +2013,154 @@ public:
     }
     mRemovedCount = 0;
     mEntryCount = 0;
 #ifdef DEBUG
     mMutationCount++;
 #endif
   }
 
-  void clearAndShrink()
+  // Resize the table down to the smallest capacity that doesn't overload the
+  // table. Since remove() already calls checkUnderloaded(), this is only
+  // needed after a bulk removal that bypasses remove().
+  void compact()
   {
-    clear();
-    compactIfUnderloaded();
-  }
-
-  void finish()
-  {
-#ifdef DEBUG
-    MOZ_ASSERT(!mEntered);
-#endif
-    if (!mTable) {
+    if (empty()) {
+      // Free the entry storage.
+      this->free_(mTable, capacity());
+      mGen++;
+      mHashShift = hashShift(0);  // gives minimum capacity on regrowth
+      mTable = nullptr;
+      mRemovedCount = 0;
       return;
     }
 
-    destroyTable(*this, mTable, capacity());
-    mTable = nullptr;
-    mGen++;
-    mEntryCount = 0;
-    mRemovedCount = 0;
-#ifdef DEBUG
-    mMutationCount++;
-#endif
+    uint32_t bestCapacity = this->bestCapacity(mEntryCount);
+    MOZ_ASSERT(bestCapacity <= capacity());
+
+    if (bestCapacity < capacity()) {
+      (void)changeTableSize(bestCapacity, DontReportFailure);
+    }
+  }
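
A caller-side usage sketch (hypothetical function, using the public HashSet
wrapper that forwards to this table):

    // Fill a set, remove most entries in bulk, then shrink the storage.
    static void FillAndCompactSketch() {
      mozilla::HashSet<uint32_t> set;
      for (uint32_t i = 0; i < 1000; i++) {
        if (!set.putNew(i)) {
          return;  // OOM
        }
      }
      // ... some bulk removal that bypasses remove() ...
      set.compact();  // shrink the entry storage to fit the remaining entries
    }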
+
+  void clearAndCompact()
+  {
+    clear();
+    compact();
+  }
+
+  MOZ_MUST_USE bool reserve(uint32_t aLen)
+  {
+    if (aLen == 0) {
+      return true;
+    }
+
+    uint32_t bestCapacity = this->bestCapacity(aLen);
+    if (bestCapacity <= capacity()) {
+      return true;  // Capacity is already sufficient.
+    }
+
+    RebuildStatus status = changeTableSize(bestCapacity, DontReportFailure);
+    MOZ_ASSERT(status != NotOverloaded);
+    return status != RehashFailed;
   }
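
With init() gone, pre-sizing is expressed through the length-taking
constructor or through reserve(). A hypothetical caller-side sketch:

    // Reserve up front so no rehash happens mid-loop; with sufficient
    // capacity and an infallible hasher, the adds below cannot fail.
    static bool AddAllSketch(const uint32_t* aVals, uint32_t aLen) {
      mozilla::HashSet<uint32_t> set;
      if (!set.reserve(aLen)) {
        return false;  // OOM while allocating entry storage
      }
      for (uint32_t i = 0; i < aLen; i++) {
        MOZ_ALWAYS_TRUE(set.putNew(aVals[i]));
      }
      return true;
    }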
 
   Iterator iter() const
   {
-    MOZ_ASSERT(mTable);
     return Iterator(*this);
   }
 
   ModIterator modIter()
   {
-    MOZ_ASSERT(mTable);
     return ModIterator(*this);
   }
 
   Range all() const
   {
-    MOZ_ASSERT(mTable);
     return Range(*this);
   }
 
   bool empty() const
   {
-    MOZ_ASSERT(mTable);
-    return !mEntryCount;
+    return mEntryCount == 0;
   }
 
   uint32_t count() const
   {
-    MOZ_ASSERT(mTable);
     return mEntryCount;
   }
 
+  // The capacity implied by mHashShift, even when the entry storage has not
+  // yet been allocated. Most callers want capacity() below.
+  uint32_t rawCapacity() const
+  {
+    return 1u << (kHashNumberBits - mHashShift);
+  }
+
   uint32_t capacity() const
   {
-    MOZ_ASSERT(mTable);
-    return 1u << (kHashNumberBits - mHashShift);
+    return mTable ? rawCapacity() : 0;
   }
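
The observable effect of the lazy allocation, as a quick sketch against the
public HashSet wrapper:

    static void LazyCapacitySketch() {
      mozilla::HashSet<int> set;        // no entry storage allocated yet
      MOZ_ASSERT(set.capacity() == 0);  // absent storage reports zero
      if (set.putNew(1)) {              // the first add instantiates storage
        MOZ_ASSERT(set.capacity() > 0);
      }
    }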
 
   Generation generation() const
   {
-    MOZ_ASSERT(mTable);
     return Generation(mGen);
   }
 
   size_t shallowSizeOfExcludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return aMallocSizeOf(mTable);
   }
 
   size_t shallowSizeOfIncludingThis(MallocSizeOf aMallocSizeOf) const
   {
     return aMallocSizeOf(this) + shallowSizeOfExcludingThis(aMallocSizeOf);
   }
 
   MOZ_ALWAYS_INLINE Ptr readonlyThreadsafeLookup(const Lookup& aLookup) const
   {
-    if (!HasHash<HashPolicy>(aLookup)) {
+    if (!mTable || !HasHash<HashPolicy>(aLookup)) {
       return Ptr();
     }
     HashNumber keyHash = prepareHash(aLookup);
     return Ptr(lookup<ForNonAdd>(aLookup, keyHash), *this);
   }
 
   MOZ_ALWAYS_INLINE Ptr lookup(const Lookup& aLookup) const
   {
     ReentrancyGuard g(*this);
     return readonlyThreadsafeLookup(aLookup);
   }
 
-  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup) const
+  MOZ_ALWAYS_INLINE AddPtr lookupForAdd(const Lookup& aLookup)
   {
     ReentrancyGuard g(*this);
     if (!EnsureHash<HashPolicy>(aLookup)) {
       return AddPtr();
     }
+
+    // Instantiate the deferred entry storage on first use.
+    if (!mTable) {
+      uint32_t newCapacity = rawCapacity();
+      RebuildStatus status = changeTableSize(newCapacity, ReportFailure);
+      MOZ_ASSERT(status != NotOverloaded);
+      if (status == RehashFailed) {
+        return AddPtr();
+      }
+    }
+
     HashNumber keyHash = prepareHash(aLookup);
     // Directly call the constructor in the return statement to avoid
     // excess copying when building with Visual Studio 2017.
     // See bug 1385181.
     return AddPtr(lookup<ForAdd>(aLookup, keyHash), *this, keyHash);
   }
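
Because an invalid AddPtr now covers both failure modes (hashing OOM and
entry-storage OOM), callers keep the familiar pattern unchanged. A
hypothetical sketch:

    // lookupForAdd() may now allocate the entry storage; if that (or the
    // hasher) fails, p is invalid and add() simply returns false.
    static bool PutIfAbsentSketch(mozilla::HashSet<void*>& aSet, void* aVal) {
      auto p = aSet.lookupForAdd(aVal);
      if (p) {
        return true;  // already present
      }
      return aSet.add(p, aVal);
    }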
 
   template<typename... Args>
   MOZ_MUST_USE bool add(AddPtr& aPtr, Args&&... aArgs)
   {
     ReentrancyGuard g(*this);
-    MOZ_ASSERT(mTable);
+    MOZ_ASSERT_IF(aPtr.isValid(), mTable);
     MOZ_ASSERT_IF(aPtr.isValid(), aPtr.mTable == this);
     MOZ_ASSERT(!aPtr.found());
     MOZ_ASSERT(!(aPtr.mKeyHash & sCollisionBit));
 
-    // Check for error from ensureHash() here.
+    // Check for error from EnsureHash() or from a failed entry-storage
+    // allocation in lookupForAdd().
     if (!aPtr.isValid()) {
       return false;
     }
--- a/xpcom/rust/gtest/bench-collections/Bench.cpp
+++ b/xpcom/rust/gtest/bench-collections/Bench.cpp
@@ -169,17 +169,16 @@ Bench_Cpp_PLDHashTable(const Params* aPa
   }
 }
 
 // Keep this in sync with all the other Bench_*() functions.
 void
 Bench_Cpp_MozHashSet(const Params* aParams, void** aVals, size_t aLen)
 {
   mozilla::HashSet<void*, mozilla::DefaultHasher<void*>, MallocAllocPolicy> hs;
-  MOZ_RELEASE_ASSERT(hs.init());
 
   for (size_t j = 0; j < aParams->mNumInserts; j++) {
     auto p = hs.lookupForAdd(aVals[j]);
     MOZ_RELEASE_ASSERT(!p);
     MOZ_RELEASE_ASSERT(hs.add(p, aVals[j]));
   }
 
   for (size_t i = 0; i < aParams->mNumSuccessfulLookups; i++) {