• Some SpiderMonkey optimizations in Firefox Quantum

    A few weeks ago we released Firefox 57, also known as Firefox Quantum. I work on SpiderMonkey performance and this year I spent a lot of time analyzing profiles and optimizing as many things as possible.

    I can't go through all SpiderMonkey performance improvements here: this year alone I landed almost 500 patches (most of them performance related, most of them pretty boring) and other SpiderMonkey hackers have done excellent work in this area as well. So I just decided to focus on some of the more interesting optimizations I worked on in 2017. Most of these changes landed in Firefox 55-57; some of them will ship in Firefox 58 or 59.

    shift/unshift optimizations

    Array.prototype.shift removes the first element from an array. We used to implement this by moving all other elements in memory, so shift() had O(n) performance. When JS code used an array as a queue, it was easy to get quadratic behavior (outlook.com did something like this):

    while (arr.length > 0)
        foo(arr.shift());
    

    The situation was even worse for us when we were in the middle of an incremental GC (due to pre-barriers).

    Instead of moving the elements in memory, we can now use pointer arithmetic to make the object point to the second element (with some bookkeeping for the GC) and this is an order of magnitude faster when there are many elements. I also optimized unshift and splice to take advantage of this shifted-elements optimization. For instance, unshift can now reserve space for extra elements at the start of the array, so subsequent calls to unshift will be very fast.
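
    To illustrate the idea (this is a sketch in plain JS, not the actual C++ implementation, which also has to do GC bookkeeping):

    // A queue that "shifts" by bumping a start index instead of moving
    // every element in memory, making shift() O(1) instead of O(n).
    class ShiftedQueue {
      constructor() {
        this.elements = [];
        this.start = 0;  // index of the first live element
      }
      push(value) {
        this.elements.push(value);
      }
      shift() {
        const value = this.elements[this.start];
        this.elements[this.start++] = undefined;  // let the GC collect it
        return value;
      }
      get length() {
        return this.elements.length - this.start;
      }
    }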

    While working on this I noticed some other engines have a similar optimization, but it doesn't always work (for example when the array has many elements). In SpiderMonkey, the shifted-elements optimization fits in very well architecturally and we don't have such performance cliffs (as far as I know!).

    Regular expressions

    RegExp objects can now be nursery allocated and we can also allocate them directly from JIT code. These changes improved some benchmarks a lot (in the benchmark graph in the original post, the orange line is Firefox).

    While working on this, I also moved the RegExpShared table from the compartment to the zone: when multiple iframes use the same regular expression, we will now only parse and compile it once (this matters for ads, Facebook like buttons, etc). I also fixed a performance bug with regular expressions and interrupts: we would sometimes execute regular expressions in the (slow!) regex interpreter instead of running the (much faster) regex JIT code.

    Finally, the RegExp constructor could waste a lot of time checking the pattern syntax. I noticed this when I was profiling real-world code, but the fix for this also happened to double our Dromaeo object-regexp score :)
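
    The kind of code that benefited looks roughly like this (an illustrative example, not the actual code I profiled):

    // Constructing the same regular expression over and over: the pattern
    // syntax used to be re-checked on every call to the constructor.
    for (var i = 0; i < 100000; i++) {
      var re = new RegExp("\\d+", "g");
      re.test("abc123");
    }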

    Inline Caches

    This year we finished converting our most important ICs (for getting/setting properties) to CacheIR, our new IC architecture. This allowed us to optimize more things; here are a few:

    • I rewrote our IC heuristics. We now have special stubs for megamorphic lookups (see the example after this list).
    • We now optimize more property gets and sets on DOM proxies like document or NodeLists. We had some longstanding bugs here that were much easier to fix with our new tools.
    • Similarly, we now optimize more property accesses on WindowProxy (things like this.foo or window.foo in the browser).
    • Our IC stubs for adding slots and adding elements now support (re)allocating new slots/elements.
    • We can now use ICs in more cases.
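
    To illustrate the megamorphic case mentioned in the first bullet (a hypothetical example):

    // This property access sees objects with many different layouts
    // (Shapes), so shape-specific stubs stop paying off and a generic
    // megamorphic lookup stub wins.
    function getX(o) {
      return o.x;
    }
    getX({x: 1});
    getX({x: 1, y: 2});
    getX({y: 3, x: 4});
    getX({a: 5, b: 6, x: 7});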

    The work on CacheIR has really paid off this year: we were able to remove many lines of code while also improving IC coverage and performance a lot.

    Property addition

    Adding new properties to objects used to be much slower than necessary. I landed at least 20 patches to optimize this.

    SpiderMonkey used to support slotful accessor properties (properties with both a value slot and a getter/setter) and this complicated our object layout a lot. To get rid of them, I first had to remove the internal getProperty and setProperty Class hooks. This turned out to be pretty complicated because I had to fix some ancient code we have in Gecko that relied on these hooks, from NPAPI code to js-ctypes to XPConnect.

    After that I was able to remove slotful accessor properties and simplify a lot of code. This allowed us to optimize our property addition/lookup code even more: for instance, we now have separate methods for adding data vs accessor properties. This was impossible to do before because there was simply no clear distinction between data and accessor properties.
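
    For example, these two additions now take clearly separate, specialized paths:

    var obj = {};
    obj.x = 1;  // adds a data property
    Object.defineProperty(obj, "y", {  // adds an accessor property
      get: function() { return this.x + 1; }
    });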

    Property iteration

    Property iteration via for-in or Object.keys is pretty common, so I spent some time optimizing this. We used to have some caches that were fine for micro-benchmarks (read: SunSpider), but didn't work very well on real-world code. I optimized the for-in code, rewrote the iterator cache, and added an IC for this. For-in performance should be much better now.

    I also rewrote the enumeration code used by for-in, Object.getOwnPropertyNames, and friends to be much faster and simpler.

    MinorGC triggers

    In Firefox, when navigating to another page, we have a mechanism to "nuke" chrome -> content wrappers to prevent bad memory leaks. The code for this used to trigger a minor GC to evict the GC's nursery, in case there were nursery-allocated wrappers. These GCs showed up in profiles and it turned out that most of these evict-nursery calls were unnecessary, so I fixed this.

    According to Telemetry, this small patch eliminated tons of unnecessary minor GCs in the browser. Before the fix, EVICT_NURSERY was the reason for most (69%) of our minor GCs; afterwards it basically doesn't show up anymore. The remaining minor GC reasons are more common and expected (full nursery, full store buffer, etc).

    Proxies

    After refactoring our object allocation code to be faster and simpler, it was easy to optimize proxy objects: we now allocate ProxyValueArray inline instead of requiring a malloc for each proxy.

    Proxies can also have an arbitrary slot layout now (a longstanding request from our DOM team). Accessing certain slots on DOM objects is now faster than before and I was able to shrink many of our proxies (before these changes, all proxy objects had at least 3-4 Value slots, even though most proxies need only 1 or 2 slots).

    Builtins

    A lot of builtin functions were optimized. Here are just a few of them:

    • I fixed some bad performance cliffs that affected various Array functions.
    • I ported Object.assign to C++. It now uses less memory (we used to allocate an array for the property names) and in a lot of cases is much faster than before.
    • I optimized Function.prototype.toString. It's surprisingly common for websites to stringify the same function repeatedly so we now have a FunctionToString cache for this.
    • Object.prototype.toString is very hot and I optimized it a number of times. We can also inline it in the JIT now and I added a new optimization for lookups of the toStringTag/toPrimitive Symbols.
    • Array.isArray is now inlined in JIT code in a lot more cases.

    Other optimizations

    • We unnecessarily delazified (triggered a full parse of) thousands of functions when loading Gmail; this is now fixed.
    • Babel generates code that mutates __proto__ and this used to deoptimize a lot of things. I fixed a number of issues in this area.
    • Cycle detection (for instance for JSON.stringify and Array.prototype.join) now uses a Vector instead of a HashSet. This is much faster in the common cases (and not that much slower in pathological cases); see the sketch after this list.
    • I devirtualized some of our hottest virtual functions in the frontend and in our Ion JIT backend.
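
    A sketch of the cycle-detection change (in plain JS with a hypothetical visit function, not SpiderMonkey's actual C++):

    // Cycle detection with a vector used as a stack: for the shallow
    // nesting that's common in practice, a linear scan over the objects
    // currently being visited is cheaper than hashing each one.
    var visiting = [];
    function visit(obj) {
      if (visiting.indexOf(obj) !== -1)
        throw new TypeError("cyclic object value");
      visiting.push(obj);
      // ... visit obj's children, e.g. while stringifying ...
      visiting.pop();
    }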

    Conclusion

    SpiderMonkey performance has improved tremendously the past months and we're not stopping here. Hopefully there will be a lot more of this in 2018 :) If you find some real-world JS code that's much slower in Firefox than in other browsers, please let us know. Usually when we're significantly slower than other browsers it's because we're doing something silly and most of these bugs are not that hard to fix once we are aware of them.

  • CacheIR: A new approach to Inline Caching in Firefox

    The past months we have been working on CacheIR, an overhaul of the IC code in SpiderMonkey's JITs. CacheIR has allowed us to remove thousands of lines of boilerplate and code duplication. It has also made it much easier to add new optimizations. This post describes how CacheIR works, why it's much better than what we had before, and some plans we have for the future.

    IC stubs

    Both the Baseline JIT and IonMonkey optimizing JIT can use Inline Cache stubs to optimize things like property accesses. Each IC has a linked list of stubs. When we want to optimize a certain operation, like object.foo, the IC adds a new stub for this particular case to the list. Stubs usually have one or more guards and if a guard fails we jump to the next stub in the list.
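
    Conceptually (with hypothetical names; real stubs are generated machine code), an IC behaves like this:

    // An IC is a linked list of guarded stubs: each stub checks that the
    // case it was attached for still applies, otherwise control falls
    // through to the next stub and finally to a generic slow path that
    // may attach a new stub.
    function runIC(ic, obj) {
      for (var stub = ic.firstStub; stub !== null; stub = stub.next) {
        if (stub.guardsHold(obj))
          return stub.run(obj);
      }
      return ic.fallback(obj);  // slow path: may attach a new stub
    }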

    CacheIR

    CacheIR is a very simple bytecode that makes it easy to generate new IC stubs to optimize particular cases. Consider the following JS code:

    var point = {x: 1, y: 2};
    var x = point.x;
    

    The GetProp IC we use for the point.x lookup uses the GetPropIRGenerator to emit CacheIR for this lookup. The IR we emit for it looks like this:

    GuardIsObject Op0
    GuardShape Op0 Field0
    LoadFixedSlotResult Op0 Field1
    

    As you can see, CacheIR is a very simple, linear IR. It has no explicit branches, no loops, just guards. This makes it easy to compile CacheIR to machine code.

    Here Op0 is the IC's input operand (point in this case). We first guard that it's an object, then we guard on its Shape (an object's Shape determines its layout and properties; other engines may use the terms 'map' or 'hidden class'), and then we load a value from the object's fixed slot into the IC's output register.

    In addition to the IR, the IR emitter also generates a list of stub fields. Field0 and Field1 above refer to fields in the stub data; in this case we have the following fields:

    Field 0: Shape* pointer: 0xeb39f900
    Field 1: offset: 16
    

    The IR itself does not contain any pointers or slot offsets; we will see below why this matters.

    A GetProp IC has a single input value, but other ICs like GetElem (used for x[y] in JS) can have multiple inputs:

    var prop = "x";
    var x = point[prop];
    

    Now the IR generator will insert extra guards to check that prop is the string "x":

    GuardIsObject Op0
    GuardIsString Op1              <---
    GuardSpecificAtom Op1 Field0   <---
    GuardShape Op0 Field1
    LoadFixedSlotResult Op0 Field2
    

    We have two extra guards now (GuardIsString and GuardSpecificAtom) and an extra stub field that stores the atom we're guarding on ("x" in this case), but other than that the IR looks the same as before. The same trick can be used in a lot of other cases: for example, if point were a proxy/wrapper around our Point object, all we would need to do is emit some CacheIR instructions to unwrap it, and then we could emit the same IR as before.

    We emit the same CacheIR for our Baseline and Ion ICs (but Ion ICs may NOP some type guards, for instance if we know the input is always an object). This is a big improvement over what we had before: our Baseline and Ion ICs used to share some code but not much, because they're quite different. This meant there was a lot of code duplication and many cases we optimized in Ion but not (or differently) in Baseline. CacheIR fixes all this.

    Instead of having register allocation, code generation, and high level decisions (which guards to emit) in the same code, we now have a very clear separation of concerns: when we emit the IR, we don't have to worry about registers, stack slots, or boxing formats. When we compile the IR to machine code, we can simply compile one instruction at a time and don't have to worry about what to emit next. This separation has made the code much more readable and maintainable.

    Sharing Baseline stub code

    Our Baseline JIT can use the same machine code for many different stubs: the IC inputs are always boxed Values stored in fixed registers, and the stub reads things like Shape* pointers and slot offsets from the stub data, so they are not baked into the code. Before CacheIR, each Baseline stub had to define an int32 key for this purpose and stubs with the same key could share code. This worked well, but it was tedious to maintain and easy to screw up when making changes to IC code. With CacheIR, we simply use the IR as the key: stubs with the same IR can share code. This means code sharing works automatically and we no longer have to think about it. It eliminates a lot of boilerplate and a whole class of potential bugs.
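
    In effect (a sketch with hypothetical names), the stub-code cache now works like this:

    // Stubs whose CacheIR is identical share one piece of compiled
    // Baseline code; per-stub data (Shape pointers, offsets) stays in the
    // stub data, so the shared code stays valid for all of them.
    var stubCodeCache = new Map();  // IR (as a string key) -> compiled code
    function getOrCompileStubCode(irBytes) {
      var code = stubCodeCache.get(irBytes);
      if (code === undefined) {
        code = compileCacheIRToStubCode(irBytes);  // hypothetical compiler entry point
        stubCodeCache.set(irBytes, code);
      }
      return code;
    }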

    Removing boilerplate has been a recurring theme; we have seen the same thing with GC tracing, for instance. Each Baseline stub that contained a Shape guard had to trace this Shape for GC purposes. Now there's a single place where we trace Shapes stored in stubs. Less boilerplate and much harder to get wrong.

    Compiling CacheIR to machine code

    After emitting the IR, we can compile it to native code. The code generation for most CacheIR instructions is shared between Baseline and Ion. As mentioned above, Baseline stub code is shared, so some CacheIR instructions are compiled differently for Baseline and Ion: Ion can bake values directly into the code, whereas the Baseline code reads them from the stub data.

    The CacheIR compiler uses a very simple register allocator to track where each operand lives (register, stack slot, etc) and which registers are available. When we compile an instruction, the first thing we do is request registers for the operands we need, and the register allocator takes care of spilling and restoring values as needed.

    Furthermore, the register allocator allows us to generate failure paths automatically. A failure path is the code we emit when a guard fails: it restores the input registers and jumps to the next stub. This works because the register allocator knows the register state at the start of the stub and the current register state.

    This makes it much easier to write IC code: you no longer have to remember where all values are stored, which registers are available, and which ones have to be restored. It eliminates another class of subtle bugs.

    Improvements

    Tom Schuster (evilpie) has been working on a logging mechanism to find cases our ICs currently fail to optimize. He has already fixed numerous performance issues that came up on websites like Facebook and Google Docs. We also had some performance issues on file that used to be hard to fix, but were much easier to optimize with CacheIR. In some cases, fixing an issue found on real-world websites turned out to (unexpectedly) improve benchmarks as well :)

    Because the CacheIR we emit is the same for Baseline and Ion, we no longer have cases where Ion has IC support for something but Baseline doesn't. Whenever we add a new optimization to our IC code, it works in both Baseline and Ion. As a result, especially our Baseline JIT now optimizes a lot more things than before.

    CacheIR stubs require much less boilerplate and code duplication. The IR instructions are our IC stub building blocks: many instructions can be reused for other things. Every time we converted a stub to CacheIR we saw big improvements in code size and maintainability. This has been incredibly satisfying.

    Future plans

    We are still working on converting the remaining Baseline and Ion IC stubs to CacheIR, and we will continue to use our new CacheIR tools to speed up a lot of other operations.

    Our Ion optimizing JIT is currently able to pattern match the CacheIR we emitted for Baseline and uses it to optimize certain cases. Once we are done converting IC stubs to CacheIR, the next step is to add a generic CacheIR-to-MIR compiler. This will bring some significant performance wins and will let us remove even more code: whenever we optimize a new case in the CacheIR emitter, we will not only generate Baseline IC code and Ion IC code from it, but also Ion inline paths.

    CacheIR will also make it much easier to optimize the ES2015 super property accesses used in derived classes. super.x involves two objects: the receiver and the super object. With CacheIR, we can simply add a new input operand to our GetProp IR generator and the IR compiler will do the right thing.
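
    For example:

    class Base {
      get x() { return 1; }
    }
    class Derived extends Base {
      getX() {
        // super.x looks the property up on Base.prototype (the super
        // object) but calls the getter with `this` as the receiver.
        return super.x;
      }
    }
    new Derived().getX();  // 1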

    Conclusion

    We've seen how CacheIR is used in SpiderMonkey to simplify/improve IC code generation and remove code duplication and boilerplate. Firefox Nightly builds now optimize more cases than ever, and we are working hard to improve our IC coverage even more. We will also see some serious performance wins from compiling CacheIR to MIR.

  • W^X JIT-code enabled in Firefox

    Back in June, I added an option to SpiderMonkey to enable W^X protection of JIT code. The past weeks I've been working on fixing the remaining performance issues and yesterday I enabled W^X on the Nightly channel, on all platforms. What this means is that each page holding JIT code is either executable or writable, never both at the same time.

    Why?

    Almost all JITs (including the ones in Firefox until now) allocate memory pages for code with RWX (read-write-execute) permissions. JITs typically need to patch code (for inline caches, for instance) and with writable memory they can do that with no performance overhead. RWX memory introduces some problems though:

    • Security: RWX pages make it easier to exploit certain bugs. As a result, all modern operating systems store code in executable but non-writable memory, and data is usually not executable, see W^X and DEP. RWX JIT-code is an exception to this rule and that makes it an interesting target.
    • Memory corruption: I've seen some memory dumps for crashes in JIT-code that might have been caused by memory corruption elsewhere. All memory corruption bugs are serious, but if it happens for whatever reason, it's much better to crash immediately.

    How It Works

    With W^X enabled, all JIT-code pages are non-writable by default. When we need to patch JIT-code for some reason, we use a RAII-class, AutoWritableJitCode, to make the page(s) we're interested in writable (RW), using VirtualProtect on Windows and mprotect on other platforms. The destructor then toggles this back from RW to RX when we're done with it.
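
    In JS-flavored pseudocode (the real class is C++, and protectPages is a made-up stand-in for VirtualProtect/mprotect), the pattern is:

    // RAII-style W^X toggle, sketched with try/finally:
    function withWritableJitCode(pages, patch) {
      protectPages(pages, "rw");    // constructor: make the pages writable
      try {
        patch(pages);               // patch the JIT code
      } finally {
        protectPages(pages, "rx");  // destructor: back to executable-only
      }
    }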

    (As an aside, an alternative to W^X is a dual-mapping scheme: pages are mapped twice, once as RW and once as RX. In 2010, some people wrote patches to implement this for TraceMonkey, but this work never landed. This approach avoids the mprotect overhead, but for this to be safe, the RW mapping should be in a separate process. It's also more complicated and introduces IPC overhead.)

    Performance

    Last week I fixed implicit interrupt checks to work with W^X, got rid of some unnecessary mprotect calls, and optimized code poisoning to be faster with W^X.

    After that, the performance overhead was pretty small on all benchmarks and websites I tested: Kraken and Octane are less than 1% slower with W^X enabled. On (ancient) SunSpider the overhead is bigger, because most tests finish in a few milliseconds, so any compile-time overhead is measurable. Still, it's less than 3% on Windows and Linux. On OS X it's less than 4% because mprotect is slower there.

    I think W^X works well in SpiderMonkey for a number of reasons:

    • We run bytecode in the interpreter before Baseline-compiling it. On the web, most functions have less than ~10 calls or loop iterations, so we never JIT those and we don't have any memory protection overhead.
    • The Baseline JIT uses IC stubs for most operations, but we use indirect calls here, so we don't have to make code writable when attaching stubs. Baseline stubs also share code, so only the first time we attach a particular stub we compile code for it. Ion IC stubs do require us to make memory writable, but Ion doesn't use ICs as much as Baseline.
    • For asm.js (and soon WebAssembly!), we do AOT-compilation of the whole module. After compilation, we need only one mprotect call to switch everything from RW to RX. Furthermore, this code is only modified on some slow paths, so there's basically no performance overhead for asm.js/WebAssembly code.

    Conclusion

    I've enabled W^X protection for all JIT-code in Firefox Nightly. Assuming we don't run into bugs or serious performance issues, this will ship in Firefox 46.

    Last but not least, thanks to the OpenBSD and HardenedBSD teams for being brave enough to flip the W^X switch before we did!

  • Testing Math.random(): Crushing the browser

    (For tl;dr, see the Conclusion.)

    A few days ago, I wrote about Math.random() implementations in Safari and (older versions of) Chrome using only 32 bits of precision. As I mentioned in that blog post, I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+. V8 has been using the same algorithm since last week. (Update Dec 1: WebKit is now also using XorShift128+!)

    The most extensive RNG test is TestU01. It's a bit of a pain to run: to test a custom RNG, you have to compile the library and then link it to a test program. I did this initially for the SpiderMonkey shell but after that I thought it'd be more interesting to use Emscripten to compile TestU01 to asm.js so we can easily run it in different browsers.

    Today I tried this and even though I had never used Emscripten before, I had it running in the browser in less than an hour. Because the tests can take a long time, it runs in a web worker. You can try it for yourself here.

    I also wanted to test window.crypto.getRandomValues() but unfortunately it's not available in workers.

    Disclaimer: browsers implement Math functions like Math.sin differently and this can affect their precision. I don't know if TestU01 uses these functions and whether it affects the results below, but it's possible. Furthermore, some test failures are intermittent so results can vary between runs.

    Results

    TestU01 has three batteries of tests: SmallCrush, Crush, and BigCrush. SmallCrush runs only a few tests and is very fast. Crush and especially BigCrush have a lot more tests so they are much slower.

    SmallCrush

    Running SmallCrush takes about 15-30 seconds. It runs 10 tests with 15 statistics (results). Here are the number of failures I got:

    Browser                      Number of failures
    -------------------------    --------------------
    Firefox Nightly              1: BirthdaySpacings
    Firefox with XorShift128+    0
    Chrome 48                    11
    Safari 9                     1: RandomWalk1 H
    Internet Explorer 11         1: BirthdaySpacings
    Edge 20                      1: BirthdaySpacings

    Chrome/V8 failing 11 out of 15 is not too surprising. Again, the V8 team fixed this last week and the new RNG should pass SmallCrush.

    Crush

    The Crush battery of tests is much more time consuming. On my MacBook Pro, it finishes in less than an hour in Firefox but in Chrome and Safari it can take at least 2 hours. It runs 96 tests with 144 statistics. Here are the results I got:

    Browser                      Number of failures
    -------------------------    ------------------
    Firefox Nightly              12
    Firefox with XorShift128+    0
    Chrome 48                    108
    Safari 9                     33
    Internet Explorer 11         14

    XorShift128+ passes Crush, as expected. V8's previous RNG fails most of these tests and Safari/WebKit isn't doing too great either.

    BigCrush

    BigCrush didn't finish in the browser because it requires more than 512 MB of memory. To fix that I probably need to recompile the asm.js code with a different TOTAL_MEMORY value or with ALLOW_MEMORY_GROWTH=1.

    Furthermore, running BigCrush would likely take at least 3 hours in Firefox and more than 6-8 hours in Safari, Chrome, and IE, so I didn't bother.

    The XorShift128+ algorithm being implemented in Firefox and Chrome should pass BigCrush (for Firefox, I verified this in the SpiderMonkey shell).

    About IE and Edge

    I noticed Firefox (without XorShift128+) and Internet Explorer 11 get very similar test failures. When running SmallCrush, they both fail the same BirthdaySpacings test. Here's the list of Crush failures they have in common:

    • 11 BirthdaySpacings, t = 2
    • 12 BirthdaySpacings, t = 3
    • 13 BirthdaySpacings, t = 4
    • 14 BirthdaySpacings, t = 7
    • 15 BirthdaySpacings, t = 7
    • 16 BirthdaySpacings, t = 8
    • 17 BirthdaySpacings, t = 8
    • 19 ClosePairs mNP2S, t = 3
    • 20 ClosePairs mNP2S, t = 7
    • 38 Permutation, r = 15
    • 40 CollisionPermut, r = 15
    • 54 WeightDistrib, r = 24
    • 75 Fourier3, r = 20

    This suggests the RNG in IE may be very similar to the one we used in Firefox (imported from Java decades ago). Maybe Microsoft imported the same algorithm from somewhere? If anyone on the Chakra team is reading this and can tell us more, it would be much appreciated :)

    IE 11 fails 2 more tests that pass in Firefox. Some failures are intermittent and I'd have to rerun the tests to see if these failures are systematic.

    Based on the SmallCrush results I got with Edge 20, I think it uses the same algorithm as IE 11 (not too surprising). Unfortunately the Windows VM I downloaded to test Edge shut down for some reason when it was running Crush so I gave up and don't have full results for it.

    Conclusion

    I used Emscripten to port TestU01 to the browser. Results confirm most browsers currently don't use very strong RNGs for Math.random(). Both Firefox and Chrome are implementing XorShift128+, which has no systematic failures on any of these tests.

    Furthermore, these results indicate IE and Edge may use the same algorithm as the one we used in Firefox.

  • Math.random() and 32-bit precision

    Last week, Mike Malone, CTO of Betable, wrote a very insightful and informative article on Math.random() and PRNGs in general. Mike pointed out V8/Chrome used a pretty bad algorithm to generate random numbers and, since this week, V8 uses a better algorithm.

    The article also mentioned the RNG we use in Firefox (it was copied from Java a long time ago) should be improved as well. I fully agree with this. In fact, the past days I've been working on upgrading Math.random() in SpiderMonkey to XorShift128+, see bug 322529. We think XorShift128+ is a good choice: we already had a copy of the RNG in our repository, it's fast (even faster than our current algorithm!), and it passes BigCrush (the most complete RNG test available).
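
    For reference, here is a sketch of the XorShift128+ core in JS, using BigInt for the 64-bit arithmetic (the engine implementations are in C++; the shift constants 23/17/26 are from Vigna's paper):

    const MASK64 = (1n << 64n) - 1n;
    function makeXorShift128Plus(seed0, seed1) {
      // The 128-bit state; it must not be all zero.
      let s0 = BigInt(seed0) & MASK64;
      let s1 = BigInt(seed1) & MASK64;
      return function next() {
        let x = s0;
        const y = s1;
        s0 = y;
        x ^= (x << 23n) & MASK64;
        s1 = x ^ y ^ (x >> 17n) ^ (y >> 26n);
        return (s1 + y) & MASK64;  // 64-bit output
      };
    }
    // A Math.random()-style double uses the high 53 bits:
    //   const next = makeXorShift128Plus(1, 2);
    //   Number(next() >> 11n) / Math.pow(2, 53);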

    While working on this, I looked at a number of different RNGs and noticed Safari/WebKit uses GameRand. It's extremely fast but very weak. (Update Dec 1: WebKit is now also using XorShift128+, so this doesn't apply to newer Safari/WebKit versions.)

    Most interesting to me, though, was that, like the previous V8 RNG, it has only 32 bits of precision: it generates a 32-bit unsigned integer and then divides that by UINT_MAX + 1. This means the result of the RNG is always one of about 4.2 billion different numbers, instead of 9007199 billion (2^53). In other words, it can generate 0.00005% of all numbers an ideal RNG can generate.
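
    A sketch of the difference (nextUint32 stands for any 32-bit RNG core):

    // 32 bits of precision: every result is k / 2^32 for a 32-bit k, so
    // only ~4.2 billion distinct doubles can ever come out.
    function nextDouble32(nextUint32) {
      return nextUint32() / 4294967296;  // 2^32
    }
    // Full 53-bit precision needs two 32-bit outputs:
    function nextDouble53(nextUint32) {
      var hi = nextUint32() >>> 6;  // keep 26 bits
      var lo = nextUint32() >>> 5;  // keep 27 bits
      return (hi * 134217728 + lo) / 9007199254740992;  // (hi * 2^27 + lo) / 2^53
    }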

    I wrote a small testcase to visualize this. It generates random numbers and plots all numbers smaller than 0.00000131072.
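
    The testcase boils down to this (a sketch; plot stands for drawing a point on a canvas):

    var threshold = 0.00000131072;
    for (var i = 0; i < 1e9; i++) {
      var r = Math.random();
      if (r < threshold)
        plot(r);  // with a 32-bit RNG, r is always a multiple of 2^-32,
                  // so the plotted points land on a coarse grid
    }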

    [The original post shows four plots here: the output in Firefox (old algorithm) after generating 115 billion numbers, in a Firefox build with XorShift128+, in Chrome (before Math.random was fixed), and in Safari. The plots clearly show the difference in precision.]

    Conclusion

    Safari and older Chrome versions both generate random numbers with only 32 bits of precision. This issue has been fixed in Chrome, but Safari's RNG should probably be fixed as well. Even if we ignore its suboptimal precision, the algorithm is still extremely weak.

    Math.random() is not a cryptographically-secure PRNG and should never be used for anything security-related, but, as Mike argued, there are a lot of much better (and still very fast) RNGs to choose from.