Bug 1533890: Add LoadCallee op and refactor call flags r=mgaudet
author: Iain Ireland <iireland@mozilla.com>
Mon, 08 Apr 2019 16:07:33 +0000
changeset 468405 f8a17aa391621c11ad8d3c9afd339e1c494a549d
parent 468404 8631f1213092349c7cdccfd7e6dbb33a57c711ee
child 468406 c1d550195b9fd52c9fd8a5086d062c032d53f7e1
push id: 82565
push user: iireland@mozilla.com
push date: Mon, 08 Apr 2019 18:32:43 +0000
reviewers: mgaudet
bugs: 1533890
milestone: 68.0a1
Doing my best to unify some of this code and provide a single-ish source of truth for things.

1. Add a "CallFlags" struct that bundles up information about the call, including whether it is a constructor, whether it is a cross-realm call, and how its arguments are being passed. Notes:
   a) After working with it for a while, I'm convinced that isCrossRealm is named backwards. Calls are cross-realm by default. I've changed the flag to isSameRealm, so now we only have to set a non-default value for the flag if we know something about the callee. This makes CallFlags noticeably more ergonomic.
   b) Right now there are two possibilities for how arguments are being passed: standard and spread. In future patches, this will expand to include FunCall, FunApplyArgs, and FunApplyArray.
   c) This all gets packed into a byte to make it easy to pass via CacheIR.

2. Unify as much as possible of the argument-address-calculating logic in one place. I've added an ArgumentKind enum that lets us ask for this, callee, newtarget, arg0, or arg1, and a helper that calculates the slot, possibly using argc.

3. The first consumer of GetIndexOfArgument is a new CacheIR interface for accessing specific arguments. Instead of doing manual calculations and using LoadStackValue(slotIndex), we use LoadArgument(argumentKind, argcInfo). The correct slot is calculated under the covers and encoded in the cache op. Note that sometimes we are willing to bake argc in at compile time (for example, tryAttachStringSplit) and other times we want the flexibility / code sharing of using the value of argc that we receive as an input operand. To make this work, both LoadArgument and the CacheIR op it generates come in two flavours. I'm very open to bikeshedding on the names, which currently borrow terminology from LoadEnvironment(Fixed/Variable)SlotResult.

4. The second consumer of GetIndexOfArgument is a new BaselineCacheIRCompiler-internal interface for loading objects off the stack, because it happens pretty often. Right now, the user of the interface is responsible for tracking the depth of the stack above the arguments, although I'd like to find a way to fix that.

Differential Revision: https://phabricator.services.mozilla.com/D25868
js/src/jit/BaselineCacheIRCompiler.cpp
js/src/jit/BaselineIC.cpp
js/src/jit/CacheIR.cpp
js/src/jit/CacheIR.h
js/src/jit/CacheIRCompiler.h
js/src/jit/IonCacheIRCompiler.cpp
--- a/js/src/jit/BaselineCacheIRCompiler.cpp
+++ b/js/src/jit/BaselineCacheIRCompiler.cpp
@@ -27,16 +27,23 @@ using mozilla::Maybe;
 class AutoStubFrame;
 
 Address CacheRegisterAllocator::addressOf(MacroAssembler& masm,
                                           BaselineFrameSlot slot) const {
   uint32_t offset =
       stackPushed_ + ICStackValueOffset + slot.slot() * sizeof(JS::Value);
   return Address(masm.getStackPointer(), offset);
 }
+BaseValueIndex CacheRegisterAllocator::addressOf(MacroAssembler& masm,
+                                                 Register argcReg,
+                                                 BaselineFrameSlot slot) const {
+  uint32_t offset =
+      stackPushed_ + ICStackValueOffset + slot.slot() * sizeof(JS::Value);
+  return BaseValueIndex(masm.getStackPointer(), argcReg, offset);
+}
 
 // BaselineCacheIRCompiler compiles CacheIR to BaselineIC native code.
 class MOZ_RAII BaselineCacheIRCompiler : public CacheIRCompiler {
   bool inStubFrame_;
   bool makesGCCalls_;
   BaselineCacheIRStubKind kind_;
 
   void callVMInternal(MacroAssembler& masm, VMFunctionId id);
@@ -57,22 +64,24 @@ class MOZ_RAII BaselineCacheIRCompiler :
 
   MOZ_MUST_USE bool callTypeUpdateIC(Register obj, ValueOperand val,
                                      Register scratch,
                                      LiveGeneralRegisterSet saveRegs);
 
   MOZ_MUST_USE bool emitStoreSlotShared(bool isFixed);
   MOZ_MUST_USE bool emitAddAndStoreSlotShared(CacheOp op);
 
+  void loadStackObject(ArgumentKind slot, CallFlags flags, size_t stackPushed,
+                       Register argcReg, Register dest);
   void pushCallArguments(Register argcReg, Register scratch, bool isJitCall,
                          bool isConstructing);
   void pushSpreadCallArguments(Register argcReg, Register scratch,
                                bool isJitCall, bool isConstructing);
   void createThis(Register argcReg, Register calleeReg, Register scratch,
-                  bool isSpread);
+                  CallFlags flags);
   void updateReturnValue();
 
   enum class NativeCallType { Native, ClassHook };
   bool emitCallNativeShared(NativeCallType callType);
 
  public:
   friend class AutoStubFrame;
 
@@ -534,17 +543,17 @@ bool BaselineCacheIRCompiler::emitGuardH
   masm.branchIfFalseBool(scratch1, failure->label());
   return true;
 }
 
 bool BaselineCacheIRCompiler::emitCallScriptedGetterResult() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   Register obj = allocator.useRegister(masm, reader.objOperandId());
   Address getterAddr(stubAddress(reader.stubOffset()));
-  bool isCrossRealm = reader.readBool();
+  bool isSameRealm = reader.readBool();
 
   AutoScratchRegister code(allocator, masm);
   AutoScratchRegister callee(allocator, masm);
   AutoScratchRegister scratch(allocator, masm);
 
   // First, ensure our getter is non-lazy.
   {
     FailurePath* failure;
@@ -558,17 +567,17 @@ bool BaselineCacheIRCompiler::emitCallSc
     masm.loadJitCodeRaw(callee, code);
   }
 
   allocator.discardStack(masm);
 
   AutoStubFrame stubFrame(*this);
   stubFrame.enter(masm, scratch);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToObjectRealm(callee, scratch);
   }
 
   // Align the stack such that the JitFrameLayout is aligned on
   // JitStackAlignment.
   masm.alignJitStackBasedOnNArgs(0);
 
   // Getter is called with 0 arguments, just |obj| as thisv.
@@ -592,17 +601,17 @@ bool BaselineCacheIRCompiler::emitCallSc
     masm.movePtr(argumentsRectifier, code);
   }
 
   masm.bind(&noUnderflow);
   masm.callJit(code);
 
   stubFrame.leave(masm, true);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToBaselineFrameRealm(R1.scratchReg());
   }
 
   return true;
 }
 
 bool BaselineCacheIRCompiler::emitCallNativeGetterResult() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
@@ -1661,17 +1670,17 @@ bool BaselineCacheIRCompiler::emitCallNa
 bool BaselineCacheIRCompiler::emitCallScriptedSetter() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   AutoScratchRegister scratch1(allocator, masm);
   AutoScratchRegister scratch2(allocator, masm);
 
   Register obj = allocator.useRegister(masm, reader.objOperandId());
   Address setterAddr(stubAddress(reader.stubOffset()));
   ValueOperand val = allocator.useValueRegister(masm, reader.valOperandId());
-  bool isCrossRealm = reader.readBool();
+  bool isSameRealm = reader.readBool();
 
   // First, ensure our setter is non-lazy. This also loads the callee in
   // scratch1.
   {
     FailurePath* failure;
     if (!addFailurePath(&failure)) {
       return false;
     }
@@ -1681,17 +1690,17 @@ bool BaselineCacheIRCompiler::emitCallSc
                                        failure->label());
   }
 
   allocator.discardStack(masm);
 
   AutoStubFrame stubFrame(*this);
   stubFrame.enter(masm, scratch2);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToObjectRealm(scratch1, scratch2);
   }
 
   // Align the stack such that the JitFrameLayout is aligned on
   // JitStackAlignment.
   masm.alignJitStackBasedOnNArgs(1);
 
   // Setter is called with 1 argument, and |obj| as thisv. Note that we use
@@ -1725,17 +1734,17 @@ bool BaselineCacheIRCompiler::emitCallSc
     masm.movePtr(argumentsRectifier, scratch1);
   }
 
   masm.bind(&noUnderflow);
   masm.callJit(scratch1);
 
   stubFrame.leave(masm, true);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToBaselineFrameRealm(R1.scratchReg());
   }
 
   return true;
 }
 
 bool BaselineCacheIRCompiler::emitCallSetArrayLength() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
@@ -1917,22 +1926,34 @@ bool BaselineCacheIRCompiler::emitTypeMo
 
 bool BaselineCacheIRCompiler::emitReturnFromIC() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   allocator.discardStack(masm);
   EmitReturnFromIC(masm);
   return true;
 }
 
-bool BaselineCacheIRCompiler::emitLoadStackValue() {
+bool BaselineCacheIRCompiler::emitLoadArgumentFixedSlot() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
-  ValueOperand val = allocator.defineValueRegister(masm, reader.valOperandId());
+  ValueOperand resultReg =
+      allocator.defineValueRegister(masm, reader.valOperandId());
   Address addr =
-      allocator.addressOf(masm, BaselineFrameSlot(reader.uint32Immediate()));
-  masm.loadValue(addr, val);
+      allocator.addressOf(masm, BaselineFrameSlot(reader.readByte()));
+  masm.loadValue(addr, resultReg);
+  return true;
+}
+
+bool BaselineCacheIRCompiler::emitLoadArgumentDynamicSlot() {
+  JitSpew(JitSpew_Codegen, __FUNCTION__);
+  ValueOperand resultReg =
+      allocator.defineValueRegister(masm, reader.valOperandId());
+  Register argcReg = allocator.useRegister(masm, reader.int32OperandId());
+  BaseValueIndex addr =
+      allocator.addressOf(masm, argcReg, BaselineFrameSlot(reader.readByte()));
+  masm.loadValue(addr, resultReg);
   return true;
 }
 
 bool BaselineCacheIRCompiler::emitGuardAndGetIterator() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   Register obj = allocator.useRegister(masm, reader.objOperandId());
 
   AutoScratchRegister scratch1(allocator, masm);
@@ -2392,31 +2413,32 @@ bool BaselineCacheIRCompiler::emitCallSt
   masm.pushValue(lhs);
 
   using Fn = bool (*)(JSContext*, HandleValue, HandleValue, MutableHandleValue);
   tailCallVM<Fn, DoConcatStringObject>(masm);
 
   return true;
 }
 
+
 bool BaselineCacheIRCompiler::emitGuardAndUpdateSpreadArgc() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   Register argcReg = allocator.useRegister(masm, reader.int32OperandId());
   AutoScratchRegister scratch(allocator, masm);
   bool isConstructing = reader.readBool();
 
   FailurePath* failure;
   if (!addFailurePath(&failure)) {
     return false;
   }
 
   masm.unboxObject(Address(masm.getStackPointer(),
                            isConstructing * sizeof(Value) + ICStackValueOffset),
-                   argcReg);
-  masm.loadPtr(Address(argcReg, NativeObject::offsetOfElements()), scratch);
+                   scratch);
+  masm.loadPtr(Address(scratch, NativeObject::offsetOfElements()), scratch);
   masm.load32(Address(scratch, ObjectElements::offsetOfLength()), scratch);
 
   // Limit actual argc to something reasonable (huge number of arguments can
   // blow the stack limit).
   static_assert(CacheIRCompiler::MAX_ARGS_SPREAD_LENGTH <= ARGS_LENGTH_MAX,
                 "maximum arguments length for optimized stub should be <= "
                 "ARGS_LENGTH_MAX");
   masm.branch32(Assembler::Above, argcReg,
@@ -2524,36 +2546,43 @@ void BaselineCacheIRCompiler::pushSpread
 }
 
 bool BaselineCacheIRCompiler::emitCallNativeShared(NativeCallType callType) {
   AutoOutputRegister output(*this);
   AutoScratchRegisterMaybeOutput scratch(allocator, masm, output);
 
   Register calleeReg = allocator.useRegister(masm, reader.objOperandId());
   Register argcReg = allocator.useRegister(masm, reader.int32OperandId());
-  bool maybeCrossRealm = reader.readBool();
-  bool isSpread = reader.readBool();
-  bool isConstructing = reader.readBool();
+
+  CallFlags flags = reader.callFlags();
+  bool isConstructing = flags.isConstructing();
+  bool isSameRealm = flags.isSameRealm();
 
   allocator.discardStack(masm);
 
   // Push a stub frame so that we can perform a non-tail call.
   // Note that this leaves the return address in TailCallReg.
   AutoStubFrame stubFrame(*this);
   stubFrame.enter(masm, scratch);
 
-  if (maybeCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToObjectRealm(calleeReg, scratch);
   }
 
-  if (isSpread) {
-    pushSpreadCallArguments(argcReg, scratch, /*isJitCall = */ false,
-                            isConstructing);
-  } else {
-    pushCallArguments(argcReg, scratch, /*isJitCall = */ false, isConstructing);
+  switch (flags.getArgFormat()) {
+    case CallFlags::Standard:
+      pushCallArguments(argcReg, scratch, /*isJitCall = */ false,
+                        isConstructing);
+      break;
+    case CallFlags::Spread:
+      pushSpreadCallArguments(argcReg, scratch, /*isJitCall = */ false,
+                              isConstructing);
+      break;
+    default:
+      MOZ_CRASH("Invalid arg format");
   }
 
   // Native functions have the signature:
   //
   //    bool (*)(JSContext*, unsigned, Value* vp)
   //
   // Where vp[0] is space for callee/return value, vp[1] is |this|, and vp[2]
   // onward are the function arguments.
@@ -2610,74 +2639,87 @@ bool BaselineCacheIRCompiler::emitCallNa
 
   // Load the return value.
   masm.loadValue(
       Address(masm.getStackPointer(), NativeExitFrameLayout::offsetOfResult()),
       output.valueReg());
 
   stubFrame.leave(masm);
 
-  if (maybeCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToBaselineFrameRealm(scratch2);
   }
 
   return true;
 }
 
 bool BaselineCacheIRCompiler::emitCallNativeFunction() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   return emitCallNativeShared(NativeCallType::Native);
 }
 
 bool BaselineCacheIRCompiler::emitCallClassHook() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   return emitCallNativeShared(NativeCallType::ClassHook);
 }
 
+// Helper function for loading call arguments from the stack.  Loads
+// and unboxes an object from a specific slot. |stackPushed| is the
+// size of the data pushed on top of the call arguments in the current
+// frame. It must be tracked manually by the caller. (createThis is
+// currently the only caller. If more callers are added, it might be
+// worth improving the stack depth story.)
+void BaselineCacheIRCompiler::loadStackObject(ArgumentKind kind,
+                                              CallFlags flags,
+                                              size_t stackPushed,
+                                              Register argcReg, Register dest) {
+  bool addArgc = false;
+  int32_t slotIndex = GetIndexOfArgument(kind, flags, &addArgc);
+
+  if (addArgc) {
+    int32_t slotOffset = slotIndex * sizeof(JS::Value) + stackPushed;
+    BaseValueIndex slotAddr(masm.getStackPointer(), argcReg, slotOffset);
+    masm.unboxObject(slotAddr, dest);
+  } else {
+    int32_t slotOffset = slotIndex * sizeof(JS::Value) + stackPushed;
+    Address slotAddr(masm.getStackPointer(), slotOffset);
+    masm.unboxObject(slotAddr, dest);
+  }
+}
+
 /*
  * Scripted constructors require a |this| object to be created prior to the
  * call. When this function is called, the stack looks like (bottom->top):
  *
  * [..., Callee, ThisV, Arg0V, ..., ArgNV, NewTarget, StubFrameHeader]
  *
  * At this point, |ThisV| is JSWhyMagic::JS_IS_CONSTRUCTING.
  *
  * This function calls CreateThis to generate a new |this| object, then
  * overwrites the magic ThisV on the stack.
  */
 void BaselineCacheIRCompiler::createThis(Register argcReg, Register calleeReg,
-                                         Register scratch, bool isSpread) {
+                                         Register scratch, CallFlags flags) {
+  MOZ_ASSERT(flags.isConstructing());
+
+  size_t depth = STUB_FRAME_SIZE;
+
   // Save argc before call.
   masm.push(argcReg);
+  depth += sizeof(size_t);
 
   // CreateThis takes two arguments: callee, and newTarget.
 
   // Push newTarget:
-  Address newTargetAddress(masm.getStackPointer(),
-                           STUB_FRAME_SIZE + sizeof(size_t));
-  masm.unboxObject(newTargetAddress, scratch);
+  loadStackObject(ArgumentKind::NewTarget, flags, depth, argcReg, scratch);
   masm.push(scratch);
+  depth += sizeof(JSObject*);
 
   // Push callee:
-  if (isSpread) {
-    Address calleeAddress(masm.getStackPointer(),
-                          3 * sizeof(Value) +      // This, arg array, NewTarget
-                              STUB_FRAME_SIZE +    // Stub frame
-                              sizeof(size_t) +     // argc
-                              sizeof(JSObject*));  // Unboxed NewTarget
-    masm.unboxObject(calleeAddress, scratch);
-  } else {
-    BaseValueIndex calleeAddress(masm.getStackPointer(),
-                                 argcReg,                 // Arguments
-                                 2 * sizeof(Value) +      // This, NewTarget
-                                     STUB_FRAME_SIZE +    // Stub frame
-                                     sizeof(size_t) +     // argc
-                                     sizeof(JSObject*));  // Unboxed NewTarget
-    masm.unboxObject(calleeAddress, scratch);
-  }
+  loadStackObject(ArgumentKind::Callee, flags, depth, argcReg, scratch);
   masm.push(scratch);
 
   // Call CreateThis
   using Fn =
       bool (*)(JSContext*, HandleObject, HandleObject, MutableHandleValue);
   callVM<Fn, CreateThis>(masm);
 
 #ifdef DEBUG
@@ -2688,46 +2730,41 @@ void BaselineCacheIRCompiler::createThis
       "The return of CreateThis must be an object or uninitialized.");
   masm.bind(&createdThisOK);
 #endif
 
   // Restore argc
   masm.pop(argcReg);
 
   // Save |this| value back into pushed arguments on stack.
-  if (isSpread) {
-    Address thisAddress(masm.getStackPointer(),
-                        2 * sizeof(Value) +    // Arg array, NewTarget
-                            STUB_FRAME_SIZE);  // Stub frame
-    masm.storeValue(JSReturnOperand, thisAddress);
-  } else {
-    BaseValueIndex thisAddress(masm.getStackPointer(),
-                               argcReg,               // Arguments
-                               1 * sizeof(Value) +    // NewTarget
-                                   STUB_FRAME_SIZE);  // Stub frame
-    masm.storeValue(JSReturnOperand, thisAddress);
+  switch (flags.getArgFormat()) {
+    case CallFlags::Standard: {
+      BaseValueIndex thisAddress(masm.getStackPointer(),
+                                 argcReg,               // Arguments
+                                 1 * sizeof(Value) +    // NewTarget
+                                     STUB_FRAME_SIZE);  // Stub frame
+      masm.storeValue(JSReturnOperand, thisAddress);
+    } break;
+    case CallFlags::Spread: {
+      Address thisAddress(masm.getStackPointer(),
+                          2 * sizeof(Value) +    // Arg array, NewTarget
+                              STUB_FRAME_SIZE);  // Stub frame
+      masm.storeValue(JSReturnOperand, thisAddress);
+    } break;
+    default:
+      MOZ_CRASH("Invalid arg format for scripted constructor");
   }
 
   // Restore the stub register from the baseline stub frame.
   Address stubRegAddress(masm.getStackPointer(), STUB_FRAME_SAVED_STUB_OFFSET);
   masm.loadPtr(stubRegAddress, ICStubReg);
 
   // Restore calleeReg.
-  if (isSpread) {
-    Address calleeAddress(masm.getStackPointer(),
-                          3 * sizeof(Value) +    // This, arg array, NewTarget
-                              STUB_FRAME_SIZE);  // Stub frame
-    masm.unboxObject(calleeAddress, calleeReg);
-  } else {
-    BaseValueIndex calleeAddress(masm.getStackPointer(),
-                                 argcReg,               // Arguments
-                                 2 * sizeof(Value) +    // This, NewTarget
-                                     STUB_FRAME_SIZE);  // Stub frame
-    masm.unboxObject(calleeAddress, calleeReg);
-  }
+  depth = STUB_FRAME_SIZE;
+  loadStackObject(ArgumentKind::Callee, flags, depth, argcReg, calleeReg);
 }
 
 void BaselineCacheIRCompiler::updateReturnValue() {
   Label skipThisReplace;
   masm.branchTestObject(Assembler::Equal, JSReturnOperand, &skipThisReplace);
 
   // If a constructor does not explicitly return an object, the return value
   // of the constructor is |this|. We load it out of the baseline stub frame.
@@ -2754,40 +2791,47 @@ void BaselineCacheIRCompiler::updateRetu
 
 bool BaselineCacheIRCompiler::emitCallScriptedFunction() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   AutoOutputRegister output(*this);
   AutoScratchRegisterMaybeOutput scratch(allocator, masm, output);
 
   Register calleeReg = allocator.useRegister(masm, reader.objOperandId());
   Register argcReg = allocator.useRegister(masm, reader.int32OperandId());
-  bool maybeCrossRealm = reader.readBool();
-  bool isSpread = reader.readBool();
-  bool isConstructing = reader.readBool();
+
+  CallFlags flags = reader.callFlags();
+  bool isConstructing = flags.isConstructing();
+  bool isSameRealm = flags.isSameRealm();
 
   allocator.discardStack(masm);
 
   // Push a stub frame so that we can perform a non-tail call.
   // Note that this leaves the return address in TailCallReg.
   AutoStubFrame stubFrame(*this);
   stubFrame.enter(masm, scratch);
 
-  if (maybeCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToObjectRealm(calleeReg, scratch);
   }
 
   if (isConstructing) {
-    createThis(argcReg, calleeReg, scratch, isSpread);
+    createThis(argcReg, calleeReg, scratch, flags);
   }
 
-  if (isSpread) {
-    pushSpreadCallArguments(argcReg, scratch, /*isJitCall = */ true,
-                            isConstructing);
-  } else {
-    pushCallArguments(argcReg, scratch, /*isJitCall = */ true, isConstructing);
+  switch (flags.getArgFormat()) {
+    case CallFlags::Standard:
+      pushCallArguments(argcReg, scratch, /*isJitCall = */ true,
+                        isConstructing);
+      break;
+    case CallFlags::Spread:
+      pushSpreadCallArguments(argcReg, scratch, /*isJitCall = */ true,
+                              isConstructing);
+      break;
+    default:
+      MOZ_CRASH("Invalid arg format");
   }
 
   // TODO: The callee is currently on top of the stack.  The old
   // implementation popped it at this point, but I'm not sure why,
   // because it is still in a register along both paths. For now we
   // just free that stack slot to make things line up. This should
   // probably be rewritten to avoid pushing callee at all if we don't
   // have to.
@@ -2823,15 +2867,15 @@ bool BaselineCacheIRCompiler::emitCallSc
   // If this is a constructing call, and the callee returns a non-object,
   // replace it with the |this| object passed in.
   if (isConstructing) {
     updateReturnValue();
   }
 
   stubFrame.leave(masm, true);
 
-  if (maybeCrossRealm) {
+  if (!isSameRealm) {
     // Use |code| as a scratch register.
     masm.switchToBaselineFrameRealm(code);
   }
 
   return true;
 }
--- a/js/src/jit/BaselineIC.cpp
+++ b/js/src/jit/BaselineIC.cpp
@@ -3807,19 +3807,19 @@ bool DoCallFallback(JSContext* cx, Basel
   }
 
   bool canAttachStub = stub->state().canAttachStub();
   bool handled = false;
 
   // Only bother to try optimizing JSOP_CALL with CacheIR if the chain is still
   // allowed to attach stubs.
   if (canAttachStub) {
+    HandleValueArray args = HandleValueArray::fromMarkedLocation(argc, vp + 2);
     CallIRGenerator gen(cx, script, pc, op, stub->state().mode(), argc, callee,
-                        callArgs.thisv(), newTarget,
-                        HandleValueArray::fromMarkedLocation(argc, vp + 2));
+                        callArgs.thisv(), newTarget, args);
     if (gen.tryAttachStub()) {
       ICStub* newStub = AttachBaselineCacheIRStub(
           cx, gen.writerRef(), gen.cacheKind(), gen.cacheIRStubKind(), script,
           stub, &handled);
 
       if (newStub) {
         JitSpew(JitSpew_BaselineIC, "  Attached Call CacheIR stub");
 
@@ -3928,20 +3928,20 @@ bool DoSpreadCallFallback(JSContext* cx,
   // Try attaching a call stub.
   bool handled = false;
   if (op != JSOP_SPREADEVAL && op != JSOP_STRICTSPREADEVAL &&
       stub->state().canAttachStub()) {
     // Try CacheIR first:
     RootedArrayObject aobj(cx, &arr.toObject().as<ArrayObject>());
     MOZ_ASSERT(aobj->length() == aobj->getDenseInitializedLength());
 
+    HandleValueArray args = HandleValueArray::fromMarkedLocation(
+        aobj->length(), aobj->getDenseElements());
     CallIRGenerator gen(cx, script, pc, op, stub->state().mode(), 1, callee,
-                        thisv, newTarget,
-                        HandleValueArray::fromMarkedLocation(
-                            aobj->length(), aobj->getDenseElements()));
+                        thisv, newTarget, args);
     if (gen.tryAttachStub()) {
       ICStub* newStub = AttachBaselineCacheIRStub(
           cx, gen.writerRef(), gen.cacheKind(), gen.cacheIRStubKind(), script,
           stub, &handled);
 
       if (newStub) {
         JitSpew(JitSpew_BaselineIC, "  Attached Spread Call CacheIR stub");
 
--- a/js/src/jit/CacheIR.cpp
+++ b/js/src/jit/CacheIR.cpp
@@ -4709,34 +4709,28 @@ bool CallIRGenerator::tryAttachStringSpl
     return false;
   }
 
   Int32OperandId argcId(writer.setInputOperandId(0));
 
   // Ensure argc == 2.
   writer.guardSpecificInt32Immediate(argcId, 2);
 
-  // 2 arguments.  Stack-layout here is (bottom to top):
-  //
-  //  3: Callee
-  //  2: ThisValue
-  //  1: Arg0
-  //  0: Arg1 <-- Top of stack
-
   // Ensure callee is the |String_split| native function.
-  ValOperandId calleeValId = writer.loadStackValue(3);
+  ValOperandId calleeValId =
+      writer.loadArgumentFixedSlot(ArgumentKind::Callee, argc_);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
   writer.guardIsNativeFunction(calleeObjId, js::intrinsic_StringSplitString);
 
   // Ensure arg0 is a string.
-  ValOperandId arg0ValId = writer.loadStackValue(1);
+  ValOperandId arg0ValId = writer.loadArgumentFixedSlot(ArgumentKind::Arg0, argc_);
   StringOperandId arg0StrId = writer.guardIsString(arg0ValId);
 
   // Ensure arg1 is a string.
-  ValOperandId arg1ValId = writer.loadStackValue(0);
+  ValOperandId arg1ValId = writer.loadArgumentFixedSlot(ArgumentKind::Arg1, argc_);
   StringOperandId arg1StrId = writer.guardIsString(arg1ValId);
 
   // Call custom string splitter VM-function.
   writer.callStringSplitResult(arg0StrId, arg1StrId, group);
   writer.typeMonitorResult();
 
   cacheIRStubKind_ = BaselineCacheIRStubKind::Monitored;
   trackAttached("StringSplitString");
@@ -4786,29 +4780,24 @@ bool CallIRGenerator::tryAttachArrayPush
   // After this point, we can generate code fine.
 
   // Generate code.
   Int32OperandId argcId(writer.setInputOperandId(0));
 
   // Ensure argc == 1.
   writer.guardSpecificInt32Immediate(argcId, 1);
 
-  // 1 argument only.  Stack-layout here is (bottom to top):
-  //
-  //  2: Callee
-  //  1: ThisValue
-  //  0: Arg0 <-- Top of stack.
-
   // Guard callee is the |js::array_push| native function.
-  ValOperandId calleeValId = writer.loadStackValue(2);
+  ValOperandId calleeValId =
+      writer.loadArgumentFixedSlot(ArgumentKind::Callee, argc_);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
   writer.guardIsNativeFunction(calleeObjId, js::array_push);
 
   // Guard this is an array object.
-  ValOperandId thisValId = writer.loadStackValue(1);
+  ValOperandId thisValId = writer.loadArgumentFixedSlot(ArgumentKind::This, argc_);
   ObjOperandId thisObjId = writer.guardIsObject(thisValId);
 
   // This is a soft assert, documenting the fact that we pass 'true'
   // for needsTypeBarrier when constructing typeCheckInfo_ for CallIRGenerator.
   // Can be removed safely if the assumption becomes false.
   MOZ_ASSERT(typeCheckInfo_.needsTypeBarrier());
 
   // Guard that the group and shape matches.
@@ -4816,17 +4805,17 @@ bool CallIRGenerator::tryAttachArrayPush
     writer.guardGroupForTypeBarrier(thisObjId, thisobj->group());
   }
   TestMatchingNativeReceiver(writer, thisarray, thisObjId);
 
   // Guard proto chain shapes.
   ShapeGuardProtoChain(writer, thisobj, thisObjId);
 
   // arr.push(x) is equivalent to arr[arr.length] = x for regular arrays.
-  ValOperandId argId = writer.loadStackValue(0);
+  ValOperandId argId = writer.loadArgumentFixedSlot(ArgumentKind::Arg0, argc_);
   writer.arrayPush(thisObjId, argId);
 
   writer.returnFromIC();
 
   // Set the type-check info, and the stub kind to Updated
   typeCheckInfo_.set(thisobj->group(), JSID_VOID);
 
   cacheIRStubKind_ = BaselineCacheIRStubKind::Updated;
@@ -4865,40 +4854,32 @@ bool CallIRGenerator::tryAttachArrayJoin
   }
 
   // We don't need to worry about indexed properties because we can perform
   // hole check manually.
 
   // Generate code.
   Int32OperandId argcId(writer.setInputOperandId(0));
 
-  // if 0 arguments:
-  //  1: Callee
-  //  0: ThisValue <-- Top of stack.
-  //
-  // if 1 argument:
-  //  2: Callee
-  //  1: ThisValue
-  //  0: Arg0 [optional] <-- Top of stack.
-
   // Guard callee is the |js::array_join| native function.
-  uint32_t calleeIndex = (argc_ == 0) ? 1 : 2;
-  ValOperandId calleeValId = writer.loadStackValue(calleeIndex);
+  ValOperandId calleeValId =
+      writer.loadArgumentFixedSlot(ArgumentKind::Callee, argc_);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
   writer.guardIsNativeFunction(calleeObjId, js::array_join);
 
   if (argc_ == 1) {
     // If argcount is 1, guard that the argument is a string.
-    ValOperandId argValId = writer.loadStackValue(0);
+    ValOperandId argValId =
+        writer.loadArgumentFixedSlot(ArgumentKind::Arg0, argc_);
     writer.guardIsString(argValId);
   }
 
   // Guard this is an array object.
-  uint32_t thisIndex = (argc_ == 0) ? 0 : 1;
-  ValOperandId thisValId = writer.loadStackValue(thisIndex);
+  ValOperandId thisValId =
+      writer.loadArgumentFixedSlot(ArgumentKind::This, argc_);
   ObjOperandId thisObjId = writer.guardIsObject(thisValId);
   writer.guardClass(thisObjId, GuardClassKind::Array);
 
   // Do the join.
   writer.arrayJoinResult(thisObjId);
 
   writer.returnFromIC();
 
@@ -4920,17 +4901,17 @@ bool CallIRGenerator::tryAttachIsSuspend
 
   MOZ_ASSERT(argc_ == 1);
 
   // Stack layout here is (bottom to top):
   //  2: Callee
   //  1: ThisValue
   //  0: Arg <-- Top of stack.
   // We only care about the argument.
-  ValOperandId valId = writer.loadStackValue(0);
+  ValOperandId valId = writer.loadArgumentFixedSlot(ArgumentKind::Arg0, argc_);
 
   // Check whether the argument is a suspended generator.
   // We don't need guards, because IsSuspendedGenerator returns
   // false for values that are not generator objects.
   writer.callIsSuspendedGeneratorResult(valId);
   writer.returnFromIC();
 
   // This stub does not need to be monitored, because it always
@@ -4969,29 +4950,16 @@ bool CallIRGenerator::tryAttachSpecialCa
   if (callee->native() == intrinsic_IsSuspendedGenerator) {
     if (tryAttachIsSuspendedGenerator()) {
       return true;
     }
   }
   return false;
 }
 
-uint32_t CallIRGenerator::calleeStackSlot(bool isSpread, bool isConstructing) {
-  // Stack layout is (bottom to top):
-  //   Callee
-  //   ThisValue
-  //   Args:
-  //     a) if spread: array object containing args
-  //     b) if not spread: |argc| values on the stack
-  //   NewTarget (only if constructing)
-  //   <top of stack>
-
-  return 1 + (isSpread ? 1 : argc_) + isConstructing;
-}
-
 // Remember the template object associated with any script being called
 // as a constructor, for later use during Ion compilation.
 bool CallIRGenerator::getTemplateObjectForScripted(HandleFunction calleeFunc,
                                                    MutableHandleObject result,
                                                    bool* skipAttach) {
   MOZ_ASSERT(!*skipAttach);
 
   // Saving the template object is unsound for super(), as a single
@@ -5058,16 +5026,18 @@ bool CallIRGenerator::tryAttachCallScrip
   // Never attach optimized scripted call stubs for JSOP_FUNAPPLY.
   // MagicArguments may escape the frame through them.
   if (op_ == JSOP_FUNAPPLY) {
     return false;
   }
 
   bool isConstructing = IsConstructorCallPC(pc_);
   bool isSpread = IsSpreadCallPC(pc_);
+  bool isSameRealm = cx_->realm() == calleeFunc->realm();
+  CallFlags flags(isConstructing, isSpread, isSameRealm);
 
   // If callee is not an interpreted constructor, we have to throw.
   if (isConstructing && !calleeFunc->isConstructor()) {
     return false;
   }
 
   // Likewise, if the callee is a class constructor, we have to throw.
   if (!isConstructing && calleeFunc->isClassConstructor()) {
@@ -5105,36 +5075,34 @@ bool CallIRGenerator::tryAttachCallScrip
   if (skipAttach) {
     // TODO: this should mark "handled" somehow
     return false;
   }
 
   // Load argc.
   Int32OperandId argcId(writer.setInputOperandId(0));
 
-  // Load the callee
-  uint32_t calleeSlot = calleeStackSlot(isSpread, isConstructing);
-  ValOperandId calleeValId = writer.loadStackValue(calleeSlot);
+  // Load the callee and ensure it is an object
+  ValOperandId calleeValId =
+      writer.loadArgumentDynamicSlot(ArgumentKind::Callee, argcId, flags);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
 
   // Ensure callee matches this stub's callee
   FieldOffset calleeOffset =
       writer.guardSpecificObject(calleeObjId, calleeFunc);
 
   // Guard against relazification
   writer.guardFunctionHasJitEntry(calleeObjId, isConstructing);
 
   // Enforce limits on spread call length, and update argc.
   if (isSpread) {
     writer.guardAndUpdateSpreadArgc(argcId, isConstructing);
   }
 
-  bool isCrossRealm = cx_->realm() != calleeFunc->realm();
-  writer.callScriptedFunction(calleeObjId, argcId, isCrossRealm, isSpread,
-                              isConstructing);
+  writer.callScriptedFunction(calleeObjId, argcId, flags);
   writer.typeMonitorResult();
 
   if (templateObj) {
     writer.metaScriptedTemplateObject(templateObj, calleeOffset);
   }
 
   cacheIRStubKind_ = BaselineCacheIRStubKind::Monitored;
   trackAttached("Call scripted func");
@@ -5230,18 +5198,20 @@ bool CallIRGenerator::getTemplateObjectF
   }
 }
 
 bool CallIRGenerator::tryAttachCallNative(HandleFunction calleeFunc) {
   MOZ_ASSERT(mode_ == ICState::Mode::Specialized);
   MOZ_ASSERT(calleeFunc->isNative());
 
   bool isSpread = IsSpreadCallPC(pc_);
-
+  bool isSameRealm = cx_->realm() == calleeFunc->realm();
   bool isConstructing = IsConstructorCallPC(pc_);
+  CallFlags flags(isConstructing, isSpread, isSameRealm);
+
   if (isConstructing && !calleeFunc->isConstructor()) {
     return false;
   }
 
   // Check for specific native-function optimizations.
   if (tryAttachSpecialCaseCallNative(calleeFunc)) {
     return true;
   }
@@ -5252,31 +5222,31 @@ bool CallIRGenerator::tryAttachCallNativ
   RootedObject templateObj(cx_);
   if (isConstructing && !getTemplateObjectForNative(calleeFunc, &templateObj)) {
     return false;
   }
 
   // Load argc.
   Int32OperandId argcId(writer.setInputOperandId(0));
 
-  // Load the callee
-  uint32_t calleeSlot = calleeStackSlot(isSpread, isConstructing);
-  ValOperandId calleeValId = writer.loadStackValue(calleeSlot);
+  // Load the callee and ensure it is an object
+  ValOperandId calleeValId =
+      writer.loadArgumentDynamicSlot(ArgumentKind::Callee, argcId, flags);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
 
   // Ensure callee matches this stub's callee
   FieldOffset calleeOffset =
       writer.guardSpecificObject(calleeObjId, calleeFunc);
 
   // Enforce limits on spread call length, and update argc.
   if (isSpread) {
     writer.guardAndUpdateSpreadArgc(argcId, isConstructing);
   }
-  writer.callNativeFunction(calleeObjId, argcId, op_, calleeFunc, isSpread,
-                            isConstructing);
+
+  writer.callNativeFunction(calleeObjId, argcId, op_, calleeFunc, flags);
   writer.typeMonitorResult();
 
   if (templateObj) {
     writer.metaNativeTemplateObject(templateObj, calleeOffset);
   }
 
   cacheIRStubKind_ = BaselineCacheIRStubKind::Monitored;
   trackAttached("Call native func");
@@ -5311,46 +5281,47 @@ bool CallIRGenerator::tryAttachCallHook(
   }
 
   if (op_ == JSOP_FUNAPPLY) {
     return false;
   }
 
   bool isSpread = IsSpreadCallPC(pc_);
   bool isConstructing = IsConstructorCallPC(pc_);
+  CallFlags flags(isConstructing, isSpread);
   JSNative hook =
       isConstructing ? calleeObj->constructHook() : calleeObj->callHook();
   if (!hook) {
     return false;
   }
 
   RootedObject templateObj(cx_);
   if (isConstructing &&
       !getTemplateObjectForClassHook(calleeObj, &templateObj)) {
     return false;
   }
 
   // Load argc.
   Int32OperandId argcId(writer.setInputOperandId(0));
 
-  // Load the callee.
-  uint32_t calleeSlot = calleeStackSlot(isSpread, isConstructing);
-  ValOperandId calleeValId = writer.loadStackValue(calleeSlot);
+  // Load the callee and ensure it is an object
+  ValOperandId calleeValId =
+      writer.loadArgumentDynamicSlot(ArgumentKind::Callee, argcId, flags);
   ObjOperandId calleeObjId = writer.guardIsObject(calleeValId);
 
   // Ensure the callee's class matches the one in this stub.
   FieldOffset classOffset =
       writer.guardAnyClass(calleeObjId, calleeObj->getClass());
 
   // Enforce limits on spread call length, and update argc.
   if (isSpread) {
     writer.guardAndUpdateSpreadArgc(argcId, isConstructing);
   }
 
-  writer.callClassHook(calleeObjId, argcId, hook, isSpread, isConstructing);
+  writer.callClassHook(calleeObjId, argcId, hook, flags);
   writer.typeMonitorResult();
 
   if (templateObj) {
     writer.metaClassTemplateObject(templateObj, classOffset);
   }
 
   cacheIRStubKind_ = BaselineCacheIRStubKind::Monitored;
   trackAttached("Call native func");
--- a/js/src/jit/CacheIR.h
+++ b/js/src/jit/CacheIR.h
@@ -244,22 +244,23 @@ extern const uint32_t ArgLengths[];
   _(GuardIndexGreaterThanDenseInitLength, Id, Id)                              \
   _(GuardTagNotEqual, Id, Id)                                                  \
   _(GuardXrayExpandoShapeAndDefaultProto, Id, Byte, Field)                     \
   _(GuardFunctionPrototype, Id, Id, Field)                                     \
   _(GuardNoAllocationMetadataBuilder, None)                                    \
   _(GuardObjectGroupNotPretenured, Field)                                      \
   _(GuardFunctionHasJitEntry, Id, Byte)                                        \
   _(GuardAndUpdateSpreadArgc, Id, Byte)                                        \
-  _(LoadStackValue, Id, UInt32)                                                \
   _(LoadObject, Id, Field)                                                     \
   _(LoadProto, Id, Id)                                                         \
   _(LoadEnclosingEnvironment, Id, Id)                                          \
   _(LoadWrapperTarget, Id, Id)                                                 \
   _(LoadValueTag, Id, Id)                                                      \
+  _(LoadArgumentFixedSlot, Id, Byte)                                           \
+  _(LoadArgumentDynamicSlot, Id, Id, Byte)                                     \
                                                                                \
   _(TruncateDoubleToUInt32, Id, Id)                                            \
                                                                                \
   _(MegamorphicLoadSlotResult, Id, Field, Byte)                                \
   _(MegamorphicLoadSlotByValueResult, Id, Id, Byte)                            \
   _(MegamorphicStoreSlot, Id, Field, Id, Byte)                                 \
   _(MegamorphicSetElement, Id, Id, Id, Byte)                                   \
   _(MegamorphicHasPropResult, Id, Id, Byte)                                    \
@@ -285,19 +286,19 @@ extern const uint32_t ArgLengths[];
   _(CallNativeSetter, Id, Id, Field)                                           \
   _(CallScriptedSetter, Id, Field, Id, Byte)                                   \
   _(CallSetArrayLength, Id, Byte, Id)                                          \
   _(CallProxySet, Id, Id, Field, Byte)                                         \
   _(CallProxySetByValue, Id, Id, Id, Byte)                                     \
   _(CallAddOrUpdateSparseElementHelper, Id, Id, Id, Byte)                      \
   _(CallInt32ToString, Id, Id)                                                 \
   _(CallNumberToString, Id, Id)                                                \
-  _(CallScriptedFunction, Id, Id, Byte, Byte, Byte)                            \
-  _(CallNativeFunction, Id, Id, Byte, Byte, Byte, IF_SIMULATOR(Field, Byte))   \
-  _(CallClassHook, Id, Id, Byte, Byte, Byte, Field)                            \
+  _(CallScriptedFunction, Id, Id, Byte)                                        \
+  _(CallNativeFunction, Id, Id, Byte, IF_SIMULATOR(Field, Byte))               \
+  _(CallClassHook, Id, Id, Byte, Field)                                        \
                                                                                \
   /* Meta ops generate no code, but contain data for BaselineInspector */      \
   _(MetaTwoByte, Byte, Field, Field)                                           \
                                                                                \
   /* The *Result ops load a value into the cache's result register. */         \
   _(LoadFixedSlotResult, Id, Field)                                            \
   _(LoadDynamicSlotResult, Id, Field)                                          \
   _(LoadTypedObjectResult, Id, Byte, Byte, Field)                              \
@@ -451,17 +452,110 @@ class StubField {
     return uintptr_t(data_);
   }
   uint64_t asInt64() const {
     MOZ_ASSERT(sizeIsInt64());
     return data_;
   }
 } JS_HAZ_GC_POINTER;
 
-typedef uint8_t FieldOffset;
+using FieldOffset = uint8_t;
+
+// This class is used to wrap up information about a call to make it
+// easier to convey from one function to another. (In particular,
+// CacheIRWriter encodes the CallFlags in CacheIR, and CacheIRReader
+// decodes them and uses them for compilation.)
+class CallFlags {
+ public:
+  enum ArgFormat : uint8_t { Standard, Spread, LastArgFormat = Spread };
+
+  CallFlags(bool isConstructing, bool isSpread, bool isSameRealm = false)
+      : argFormat_(isSpread ? Spread : Standard),
+        isConstructing_(isConstructing),
+        isSameRealm_(isSameRealm) {}
+  explicit CallFlags(ArgFormat format)
+      : argFormat_(format), isConstructing_(false), isSameRealm_(false) {}
+
+  ArgFormat getArgFormat() const { return argFormat_; }
+  bool isConstructing() const {
+    MOZ_ASSERT_IF(isConstructing_,
+                  argFormat_ == Standard || argFormat_ == Spread);
+    return isConstructing_;
+  }
+  bool isSameRealm() const { return isSameRealm_; }
+
+ private:
+  ArgFormat argFormat_;
+  bool isConstructing_;
+  bool isSameRealm_;
+
+  // Used for encoding/decoding
+  static const uint8_t ArgFormatBits = 4;
+  static const uint8_t ArgFormatMask = (1 << ArgFormatBits) - 1;
+  static_assert(LastArgFormat <= ArgFormatMask, "Not enough arg format bits");
+  static const uint8_t IsConstructing = 1 << 5;
+  static const uint8_t IsSameRealm = 1 << 6;
+
+  friend class CacheIRReader;
+  friend class CacheIRWriter;
+};
+
+// Set of arguments supported by GetIndexOfArgument.
+// Support for Arg2 and up can be added easily, but is currently unneeded.
+enum class ArgumentKind : uint8_t { Callee, This, NewTarget, Arg0, Arg1 };
+
+// This function calculates the index of an argument based on the call flags.
+// addArgc is an out-parameter, indicating whether the value of argc should
+// be added to the return value to find the actual index.
+inline int32_t GetIndexOfArgument(ArgumentKind kind, CallFlags flags,
+                                  bool* addArgc) {
+  // *** STACK LAYOUT (bottom to top) ***        ******** INDEX ********
+  //   Callee                                <-- argc+1 + isConstructing
+  //   ThisValue                             <-- argc   + isConstructing
+  //   Args: | Arg0 |        |  ArgArray  |  <-- argc-1 + isConstructing
+  //         | Arg1 | --or-- |            |  <-- argc-2 + isConstructing
+  //         | ...  |        | (if spread |  <-- ...
+  //         | ArgN |        |  call)     |  <-- 0      + isConstructing
+  //   NewTarget (only if constructing)      <-- 0 (if it exists)
+  //
+  // If this is a spread call, then argc is always 1, and we can calculate the
+  // index directly. If this is not a spread call, then the index of any
+  // argument other than NewTarget depends on argc.
+
+  // First we determine whether the caller needs to add argc.
+  switch (flags.getArgFormat()) {
+    case CallFlags::Standard:
+      *addArgc = true;
+      break;
+    case CallFlags::Spread:
+      // Spread calls do not have Arg1 or higher.
+      MOZ_ASSERT(kind != ArgumentKind::Arg1);
+      *addArgc = false;
+      break;
+  }
+
+  // Second, we determine the offset relative to argc.
+  bool hasArgumentArray = !*addArgc;
+  switch (kind) {
+    case ArgumentKind::Callee:
+      return flags.isConstructing() + hasArgumentArray + 1;
+    case ArgumentKind::This:
+      return flags.isConstructing() + hasArgumentArray;
+    case ArgumentKind::Arg0:
+      return flags.isConstructing() + hasArgumentArray - 1;
+    case ArgumentKind::Arg1:
+      return flags.isConstructing() + hasArgumentArray - 2;
+    case ArgumentKind::NewTarget:
+      MOZ_ASSERT(flags.isConstructing());
+      *addArgc = false;
+      return 0;
+    default:
+      MOZ_CRASH("Invalid argument kind");
+  }
+}
 
 // We use this enum as GuardClass operand, instead of storing Class* pointers
 // in the IR, to keep the IR compact and the same size on all platforms.
 enum class GuardClassKind : uint8_t {
   Array,
   MappedArguments,
   UnmappedArguments,
   WindowProxy,
@@ -543,16 +637,28 @@ class MOZ_RAII CacheIRWriter : public JS
     MOZ_ASSERT(nextInstructionId_ > 0);
     operandLastUsed_[opId.id()] = nextInstructionId_ - 1;
   }
 
   void writeInt32Immediate(int32_t i32) { buffer_.writeFixedUint32_t(i32); }
   void writeUint32Immediate(uint32_t u32) { buffer_.writeFixedUint32_t(u32); }
   void writePointer(void* ptr) { buffer_.writeRawPointer(ptr); }
 
+  void writeCallFlags(CallFlags flags) {
+    // See CacheIRReader::callFlags()
+    uint8_t value = flags.getArgFormat();
+    if (flags.isConstructing()) {
+      value |= CallFlags::IsConstructing;
+    }
+    if (flags.isSameRealm()) {
+      value |= CallFlags::IsSameRealm;
+    }
+    buffer_.writeByte(uint32_t(value));
+  }
+
   void writeOpWithOperandId(CacheOp op, OperandId opId) {
     writeOp(op);
     writeOperandId(opId);
   }
 
   uint8_t addStubField(uint64_t value, StubField::Type fieldType) {
     uint8_t offset = 0;
     size_t newStubDataSize = stubDataSize_ + StubField::sizeInBytes(fieldType);
@@ -908,22 +1014,16 @@ class MOZ_RAII CacheIRWriter : public JS
   }
   void loadFrameArgumentResult(Int32OperandId index) {
     writeOpWithOperandId(CacheOp::LoadFrameArgumentResult, index);
   }
   void guardNoDenseElements(ObjOperandId obj) {
     writeOpWithOperandId(CacheOp::GuardNoDenseElements, obj);
   }
 
-  ValOperandId loadStackValue(uint32_t idx) {
-    ValOperandId res(nextOperandId_++);
-    writeOpWithOperandId(CacheOp::LoadStackValue, res);
-    writeUint32Immediate(idx);
-    return res;
-  }
   ObjOperandId loadObject(JSObject* obj) {
     assertSameCompartment(obj);
     ObjOperandId res(nextOperandId_++);
     writeOpWithOperandId(CacheOp::LoadObject, res);
     addStubField(uintptr_t(obj), StubField::Type::JSObject);
     return res;
   }
   ObjOperandId loadProto(ObjOperandId obj) {
@@ -956,16 +1056,51 @@ class MOZ_RAII CacheIRWriter : public JS
 
   ValueTagOperandId loadValueTag(ValOperandId val) {
     ValueTagOperandId res(nextOperandId_++);
     writeOpWithOperandId(CacheOp::LoadValueTag, val);
     writeOperandId(res);
     return res;
   }
 
+  ValOperandId loadArgumentFixedSlot(
+      ArgumentKind kind, uint32_t argc,
+      CallFlags flags = CallFlags(CallFlags::Standard)) {
+    bool addArgc;
+    int32_t slotIndex = GetIndexOfArgument(kind, flags, &addArgc);
+    if (addArgc) {
+      slotIndex += argc;
+    }
+    MOZ_ASSERT(slotIndex >= 0);
+    MOZ_ASSERT(slotIndex <= UINT8_MAX);
+
+    ValOperandId res(nextOperandId_++);
+    writeOpWithOperandId(CacheOp::LoadArgumentFixedSlot, res);
+    buffer_.writeByte(uint32_t(slotIndex));
+    return res;
+  }
+
+  ValOperandId loadArgumentDynamicSlot(
+      ArgumentKind kind, Int32OperandId argcId,
+      CallFlags flags = CallFlags(CallFlags::Standard)) {
+    ValOperandId res(nextOperandId_++);
+
+    bool addArgc;
+    int32_t slotIndex = GetIndexOfArgument(kind, flags, &addArgc);
+    if (addArgc) {
+      writeOpWithOperandId(CacheOp::LoadArgumentDynamicSlot, res);
+      writeOperandId(argcId);
+      buffer_.writeByte(uint32_t(slotIndex));
+    } else {
+      writeOpWithOperandId(CacheOp::LoadArgumentFixedSlot, res);
+      buffer_.writeByte(uint32_t(slotIndex));
+    }
+    return res;
+  }
+
   ValOperandId loadDOMExpandoValue(ObjOperandId obj) {
     ValOperandId res(nextOperandId_++);
     writeOpWithOperandId(CacheOp::LoadDOMExpandoValue, obj);
     writeOperandId(res);
     return res;
   }
   void guardDOMExpandoMissingOrGuardShape(ValOperandId expando, Shape* shape) {
     writeOpWithOperandId(CacheOp::GuardDOMExpandoMissingOrGuardShape, expando);
@@ -1079,17 +1214,17 @@ class MOZ_RAII CacheIRWriter : public JS
   void arrayJoinResult(ObjOperandId obj) {
     writeOpWithOperandId(CacheOp::ArrayJoinResult, obj);
   }
   void callScriptedSetter(ObjOperandId obj, JSFunction* setter,
                           ValOperandId rhs) {
     writeOpWithOperandId(CacheOp::CallScriptedSetter, obj);
     addStubField(uintptr_t(setter), StubField::Type::JSObject);
     writeOperandId(rhs);
-    buffer_.writeByte(cx_->realm() != setter->realm());
+    buffer_.writeByte(cx_->realm() == setter->realm());
   }
   void callNativeSetter(ObjOperandId obj, JSFunction* setter,
                         ValOperandId rhs) {
     writeOpWithOperandId(CacheOp::CallNativeSetter, obj);
     addStubField(uintptr_t(setter), StubField::Type::JSObject);
     writeOperandId(rhs);
   }
   void callSetArrayLength(ObjOperandId obj, bool strict, ValOperandId rhs) {
@@ -1125,33 +1260,26 @@ class MOZ_RAII CacheIRWriter : public JS
   }
   StringOperandId callNumberToString(ValOperandId id) {
     StringOperandId res(nextOperandId_++);
     writeOpWithOperandId(CacheOp::CallNumberToString, id);
     writeOperandId(res);
     return res;
   }
   void callScriptedFunction(ObjOperandId calleeId, Int32OperandId argc,
-                            bool isCrossRealm, bool isSpread,
-                            bool isConstructing) {
+                            CallFlags flags) {
     writeOpWithOperandId(CacheOp::CallScriptedFunction, calleeId);
     writeOperandId(argc);
-    buffer_.writeByte(uint32_t(isCrossRealm));
-    buffer_.writeByte(uint32_t(isSpread));
-    buffer_.writeByte(uint32_t(isConstructing));
+    writeCallFlags(flags);
   }
   void callNativeFunction(ObjOperandId calleeId, Int32OperandId argc, JSOp op,
-                          HandleFunction calleeFunc, bool isSpread,
-                          bool isConstructing) {
+                          HandleFunction calleeFunc, CallFlags flags) {
     writeOpWithOperandId(CacheOp::CallNativeFunction, calleeId);
     writeOperandId(argc);
-    bool isCrossRealm = cx_->realm() != calleeFunc->realm();
-    buffer_.writeByte(uint32_t(isCrossRealm));
-    buffer_.writeByte(uint32_t(isSpread));
-    buffer_.writeByte(uint32_t(isConstructing));
+    writeCallFlags(flags);
 
     // Some native functions can be implemented faster if we know that
     // the return value is ignored.
     bool ignoresReturnValue =
         op == JSOP_CALL_IGNORES_RV && calleeFunc->hasJitInfo() &&
         calleeFunc->jitInfo()->type() == JSJitInfo::IgnoresReturnValueNative;
 
 #ifdef JS_SIMULATOR
@@ -1170,22 +1298,21 @@ class MOZ_RAII CacheIRWriter : public JS
 #else
     // If we are not running in the simulator, we generate different jitcode
     // to find the ignoresReturnValue version of a native function.
     buffer_.writeByte(ignoresReturnValue);
 #endif
   }
 
   void callClassHook(ObjOperandId calleeId, Int32OperandId argc, JSNative hook,
-                     bool isSpread, bool isConstructing) {
+                     CallFlags flags) {
     writeOpWithOperandId(CacheOp::CallClassHook, calleeId);
     writeOperandId(argc);
-    buffer_.writeByte(true);  // may be cross-realm
-    buffer_.writeByte(uint32_t(isSpread));
-    buffer_.writeByte(uint32_t(isConstructing));
+    MOZ_ASSERT(!flags.isSameRealm());
+    writeCallFlags(flags);
     void* target = JS_FUNC_TO_DATA_PTR(void*, hook);
 
 #ifdef JS_SIMULATOR
     // The simulator requires VM calls to be redirected to a special
     // swi instruction to handle them, so we store the redirected
     // pointer in the stub and use that instead of the original one.
     target = Simulator::RedirectNativeFunction(target, Args_General3);
 #endif
@@ -1417,17 +1544,17 @@ class MOZ_RAII CacheIRWriter : public JS
   }
   void loadStringCharResult(StringOperandId str, Int32OperandId index) {
     writeOpWithOperandId(CacheOp::LoadStringCharResult, str);
     writeOperandId(index);
   }
   void callScriptedGetterResult(ObjOperandId obj, JSFunction* getter) {
     writeOpWithOperandId(CacheOp::CallScriptedGetterResult, obj);
     addStubField(uintptr_t(getter), StubField::Type::JSObject);
-    buffer_.writeByte(cx_->realm() != getter->realm());
+    buffer_.writeByte(cx_->realm() == getter->realm());
   }
   void callNativeGetterResult(ObjOperandId obj, JSFunction* getter) {
     writeOpWithOperandId(CacheOp::CallNativeGetterResult, obj);
     addStubField(uintptr_t(getter), StubField::Type::JSObject);
   }
   void callProxyGetResult(ObjOperandId obj, jsid id) {
     writeOpWithOperandId(CacheOp::CallProxyGetResult, obj);
     addStubField(uintptr_t(JSID_BITS(id)), StubField::Type::Id);
@@ -1616,16 +1743,26 @@ class MOZ_RAII CacheIRReader {
   template <typename MetaKind>
   MetaKind metaKind() {
     return MetaKind(buffer_.readByte());
   }
 
   ReferenceType referenceTypeDescrType() {
     return ReferenceType(buffer_.readByte());
   }
+  CallFlags callFlags() {
+    // See CacheIRWriter::writeCallFlags()
+    uint8_t encoded = buffer_.readByte();
+    CallFlags::ArgFormat format =
+        CallFlags::ArgFormat(encoded & CallFlags::ArgFormatMask);
+    bool isConstructing = encoded & CallFlags::IsConstructing;
+    bool isSpread = format == CallFlags::Spread;
+    bool isSameRealm = encoded & CallFlags::IsSameRealm;
+    return CallFlags(isConstructing, isSpread, isSameRealm);
+  }
 
   uint8_t readByte() { return buffer_.readByte(); }
   bool readBool() {
     uint8_t b = buffer_.readByte();
     MOZ_ASSERT(b <= 1);
     return bool(b);
   }
 
@@ -2099,17 +2236,16 @@ class MOZ_RAII CallIRGenerator : public 
   uint32_t argc_;
   HandleValue callee_;
   HandleValue thisval_;
   HandleValue newTarget_;
   HandleValueArray args_;
   PropertyTypeCheckInfo typeCheckInfo_;
   BaselineCacheIRStubKind cacheIRStubKind_;
 
-  uint32_t calleeStackSlot(bool isSpread, bool isConstructing);
   bool getTemplateObjectForScripted(HandleFunction calleeFunc,
                                     MutableHandleObject result,
                                     bool* skipAttach);
   bool getTemplateObjectForNative(HandleFunction calleeFunc,
                                   MutableHandleObject result);
   bool getTemplateObjectForClassHook(HandleObject calleeObj,
                                      MutableHandleObject result);
 
--- a/js/src/jit/CacheIRCompiler.h
+++ b/js/src/jit/CacheIRCompiler.h
@@ -569,16 +569,18 @@ class MOZ_RAII CacheRegisterAllocator {
 #endif
   }
 
   // Removes spilled values from the native stack. This should only be
   // called after all registers have been allocated.
   void discardStack(MacroAssembler& masm);
 
   Address addressOf(MacroAssembler& masm, BaselineFrameSlot slot) const;
+  BaseValueIndex addressOf(MacroAssembler& masm, Register argcReg,
+                           BaselineFrameSlot slot) const;
 
   // Returns the register for the given operand. If the operand is currently
   // not in a register, it will load it into one.
   ValueOperand useValueRegister(MacroAssembler& masm, ValOperandId val);
   ValueOperand useFixedValueRegister(MacroAssembler& masm, ValOperandId valId,
                                      ValueOperand reg);
   Register useRegister(MacroAssembler& masm, TypedOperandId typedId);
 
--- a/js/src/jit/IonCacheIRCompiler.cpp
+++ b/js/src/jit/IonCacheIRCompiler.cpp
@@ -929,18 +929,18 @@ bool IonCacheIRCompiler::emitCallScripte
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   AutoSaveLiveRegisters save(*this);
   AutoOutputRegister output(*this);
 
   Register obj = allocator.useRegister(masm, reader.objOperandId());
   JSFunction* target = &objectStubField(reader.stubOffset())->as<JSFunction>();
   AutoScratchRegister scratch(allocator, masm);
 
-  bool isCrossRealm = reader.readBool();
-  MOZ_ASSERT(isCrossRealm == (cx_->realm() != target->realm()));
+  bool isSameRealm = reader.readBool();
+  MOZ_ASSERT(isSameRealm == (cx_->realm() == target->realm()));
 
   allocator.discardStack(masm);
 
   uint32_t framePushedBefore = masm.framePushed();
 
   // Construct IonICCallFrameLayout.
   uint32_t descriptor = MakeFrameDescriptor(
       masm.framePushed(), FrameType::IonJS, IonICCallFrameLayout::Size());
@@ -958,17 +958,17 @@ bool IonCacheIRCompiler::emitCallScripte
   MOZ_ASSERT(padding < JitStackAlignment);
   masm.reserveStack(padding);
 
   for (size_t i = 0; i < target->nargs(); i++) {
     masm.Push(UndefinedValue());
   }
   masm.Push(TypedOrValueRegister(MIRType::Object, AnyRegister(obj)));
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToRealm(target->realm(), scratch);
   }
 
   masm.movePtr(ImmGCPtr(target), scratch);
 
   descriptor = MakeFrameDescriptor(argSize + padding, FrameType::IonICCall,
                                    JitFrameLayout::Size());
   masm.Push(Imm32(0));  // argc
@@ -981,17 +981,17 @@ bool IonCacheIRCompiler::emitCallScripte
 
   // The getter currently has a jit entry or a non-lazy script. We will only
   // relazify when we do a shrinking GC and when that happens we will also
   // purge IC stubs.
   MOZ_ASSERT(target->hasJitEntry());
   masm.loadJitCodeRaw(scratch, scratch);
   masm.callJit(scratch);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     static_assert(!JSReturnOperand.aliases(ReturnReg),
                   "ReturnReg available as scratch after scripted calls");
     masm.switchToRealm(cx_->realm(), ReturnReg);
   }
 
   masm.storeCallResultValue(output);
   masm.freeStack(masm.framePushed() - framePushedBefore);
   return true;
@@ -2094,18 +2094,18 @@ bool IonCacheIRCompiler::emitCallScripte
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   AutoSaveLiveRegisters save(*this);
 
   Register obj = allocator.useRegister(masm, reader.objOperandId());
   JSFunction* target = &objectStubField(reader.stubOffset())->as<JSFunction>();
   ConstantOrRegister val =
       allocator.useConstantOrRegister(masm, reader.valOperandId());
 
-  bool isCrossRealm = reader.readBool();
-  MOZ_ASSERT(isCrossRealm == (cx_->realm() != target->realm()));
+  bool isSameRealm = reader.readBool();
+  MOZ_ASSERT(isSameRealm == (cx_->realm() == target->realm()));
 
   AutoScratchRegister scratch(allocator, masm);
 
   allocator.discardStack(masm);
 
   uint32_t framePushedBefore = masm.framePushed();
 
   // Construct IonICCallFrameLayout.
@@ -2127,17 +2127,17 @@ bool IonCacheIRCompiler::emitCallScripte
   masm.reserveStack(padding);
 
   for (size_t i = 1; i < target->nargs(); i++) {
     masm.Push(UndefinedValue());
   }
   masm.Push(val);
   masm.Push(TypedOrValueRegister(MIRType::Object, AnyRegister(obj)));
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToRealm(target->realm(), scratch);
   }
 
   masm.movePtr(ImmGCPtr(target), scratch);
 
   descriptor = MakeFrameDescriptor(argSize + padding, FrameType::IonICCall,
                                    JitFrameLayout::Size());
   masm.Push(Imm32(1));  // argc
@@ -2150,17 +2150,17 @@ bool IonCacheIRCompiler::emitCallScripte
 
   // The setter currently has a jit entry or a non-lazy script. We will only
   // relazify when we do a shrinking GC and when that happens we will also
   // purge IC stubs.
   MOZ_ASSERT(target->hasJitEntry());
   masm.loadJitCodeRaw(scratch, scratch);
   masm.callJit(scratch);
 
-  if (isCrossRealm) {
+  if (!isSameRealm) {
     masm.switchToRealm(cx_->realm(), ReturnReg);
   }
 
   masm.freeStack(masm.framePushed() - framePushedBefore);
   return true;
 }
 
 bool IonCacheIRCompiler::emitCallSetArrayLength() {
@@ -2334,21 +2334,16 @@ bool IonCacheIRCompiler::emitReturnFromI
   }
 
   RepatchLabel rejoin;
   rejoinOffset_ = masm.jumpWithPatch(&rejoin);
   masm.bind(&rejoin);
   return true;
 }
 
-bool IonCacheIRCompiler::emitLoadStackValue() {
-  MOZ_ASSERT_UNREACHABLE("emitLoadStackValue not supported for IonCaches.");
-  return false;
-}
-
 bool IonCacheIRCompiler::emitGuardAndGetIterator() {
   JitSpew(JitSpew_Codegen, __FUNCTION__);
   Register obj = allocator.useRegister(masm, reader.objOperandId());
 
   AutoScratchRegister scratch1(allocator, masm);
   AutoScratchRegister scratch2(allocator, masm);
   AutoScratchRegister niScratch(allocator, masm);
 
@@ -2616,8 +2611,16 @@ bool IonCacheIRCompiler::emitCallNativeF
 
 bool IonCacheIRCompiler::emitCallClassHook() {
   MOZ_CRASH("Call ICs not used in ion");
 }
 
 bool IonCacheIRCompiler::emitGuardAndUpdateSpreadArgc() {
   MOZ_CRASH("Call ICs not used in ion");
 }
+
+bool IonCacheIRCompiler::emitLoadArgumentFixedSlot() {
+  MOZ_CRASH("Call ICs not used in ion");
+}
+
+bool IonCacheIRCompiler::emitLoadArgumentDynamicSlot() {
+  MOZ_CRASH("Call ICs not used in ion");
+}