Introducing Shared Memory Versioning to improve slow interactions

On the Chrome team, we believe it’s not sufficient to be fast most of the time; we have to be fast all of the time. Today’s The Fast and the Curious post explores how we contributed to Core Web Vitals by surveying the field data of Chrome responding to user interactions across all websites, ultimately improving the performance of the web.

As billions of people turn to the web to get things done every day, the browser becomes responsible for hosting more and more apps at once, and resource contention becomes a challenge. The multi-process Chrome browser contends for multiple resources: CPU and memory, of course, but also its own queues of work between its internal services (in this article, the network service).

This is why we’ve been focused on identifying and fixing slow interactions from Chrome users’ field data, which is the authoritative source when it comes to real user experiences. We gather this field data by recording anonymized Perfetto traces on Chrome Canary, and report them using a privacy-preserving filter.

When looking at field data of slow interactions, one particular cause caught our attention: recurring synchronous calls to fetch the current site’s cookies from the network service.

Let’s dive into some history.


Cookies under an evolving web

Cookies have been part of the web platform since the very beginning. They are commonly created like this:

    document.cookie = "user=Alice;color=blue"

And later retrieved like this:

    // Assuming a `getCookie` helper method:
    getCookie("user", document.cookie)

The document.cookie API was simple to implement in single-process browsers, which kept the cookie jar in memory.

Over time, browsers became multi-process, and the process hosting the cookie jar became responsible for answering more and more queries. Because the web specification requires JavaScript to fetch cookies synchronously, however, answering each document.cookie query is a blocking operation.

The operation itself is very fast, so this approach was generally fine. Under heavy load, however, when multiple websites request cookies (and other resources) from the network service, the queue of requests can get backed up.

We discovered through field traces of slow interactions that some websites were triggering inefficient scenarios with cookies being fetched multiple times in a row. We landed additional metrics to measure how often a GetCookieString() IPC was redundant (same value returned as last time) across all navigations. We were astonished to discover that 87% of cookie accesses were redundant and that, in some cases, this could happen hundreds of times per second.

The simple design of document.cookie was backfiring as JavaScript on the web was using it like a local value when it was really a remote lookup. Was this a classic computer science case of caching?! Not so fast!

The web spec allows collaborating domains to modify each other’s cookies. Hence, a simple cache per renderer process didn’t work, as it would have prevented writes from propagating between such sites (causing stale cookies and, for example, unsynchronized carts in ecommerce applications).
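
As a concrete illustration (the domain names are hypothetical), a cookie scoped to a registrable domain that is written on one subdomain must be immediately visible to a sibling subdomain running in a different renderer process:

    // On shop.example.com: write a cookie scoped to the whole registrable domain.
    document.cookie = "cart=3-items; Domain=example.com; Path=/";

    // Later, on checkout.example.com (a different renderer process), that same
    // cookie must show up in document.cookie right away; a naive per-renderer
    // cache would keep serving the stale, pre-write value instead.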

A new paradigm: Shared Memory Versioning

We solved this with a new paradigm which we called Shared Memory Versioning. The idea is that each value of document.cookie is now paired with a monotonically increasing version. Each renderer caches its last read of document.cookie alongside that version. The network service hosts the version of each document.cookie in shared memory. Renderers can thus tell whether they have the latest version without having to send an inter-process query to the network service.
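
A minimal sketch of the renderer-side logic, with invented names (the real implementation lives in Chrome’s C++ renderer and network service, not in JavaScript), might look like this:

    // Hypothetical sketch: the renderer keeps its last cookie string plus the
    // version it was read at; the network service bumps a version counter in
    // shared memory on every cookie write.
    class CookieCache {
      constructor(sharedVersion, networkService) {
        this.sharedVersion = sharedVersion;   // e.g. an Int32Array over shared memory
        this.networkService = networkService; // stand-in for the GetCookieString() IPC
        this.cachedValue = null;
        this.cachedVersion = -1;
      }

      getCookieString() {
        const latestVersion = Atomics.load(this.sharedVersion, 0);
        if (latestVersion !== this.cachedVersion) {
          // Something wrote a cookie since our last read: pay the IPC cost once.
          this.cachedValue = this.networkService.getCookieString();
          this.cachedVersion = latestVersion;
        }
        // Otherwise serve the cached value with no inter-process round trip.
        return this.cachedValue;
      }
    }

Because writers only bump the shared version counter and readers only compare it, the common case of re-reading an unchanged document.cookie never leaves the renderer process.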

This reduced cookie-related inter-process messages by 80% and made document.cookie accesses 60% faster 🥳.

Hypothesis testing

Improving an algorithm is nice, but what we ultimately care about is whether that improvement results in improving slow interactions for users. In other words, we need to test the hypothesis that stalled cookie queries were a significant cause of slow interactions.

To achieve this, we used Chrome’s A/B testing framework to study the effect and determined that it, combined with other improvements to reduce resource contention, improved the slowest interactions by approximately 5% on all platforms. This further resulted in more websites passing Core Web Vitals 🥳. All of this adds up to a more seamless web for users.

Figure: Timeline of the weighted average of the slowest interactions across the web on Chrome as this was released to 1% of users (Nov), 50% (Dec), and then all users (Feb).

Onward to a seamless web!

By Gabriel Charette, Olivier Li Shing Tat-Dupuis, Carlos Caballero Grolimund, and François Doray, from the Chrome engineering team

Andreas Kling steps down from SerenityOS to focus entirely on the Ladybird browser

We’ve got some possibly sad, possibly great news. Today, Andreas Kling, the amazing developer who started SerenityOS as a way to regain a sense of normalcy after completing his drug rehab program, has announced he’s stepping down as the ‘benevolent dictator for life’ of the SerenityOS project, handing leadership over to the maintainer group. The other half of the coin, however, is that Kling will officially fork Ladybird, the cross-platform web browser that originated as part of SerenityOS, turning it into a proper, separate project.

Personally, for the past two years, I’ve been almost entirely focused on Ladybird, a new web browser that started as a simple HTML viewer for SerenityOS. When Ladybird became a cross-platform project in 2022, I switched all my attention to the Linux version, as testing on Linux was much easier and didn’t require booting into SerenityOS. Time flew by, and now I can’t remember the last time I worked on something in SerenityOS that wasn’t related to Ladybird.
↫ Andreas Kling

If you know a little bit about Kling’s career, it’s not entirely surprising that his heart lies with working on a browser engine. He originally worked at Nokia, and then at Apple in San Francisco on WebKit, and there’s most likely some code that he’s written in the browser you’re using right now (except, perhaps, for us Firefox users). As such, it makes sense that once Ladybird grew into something more than just a simple HTML viewer, he’d be focusing on it a lot.

As part of the fork, Ladybird will focus entirely on Linux and macOS, and drop SerenityOS as a target. This may seem weird at first, but it’s an entirely amicable and planned step: it allows Ladybird to adopt, use, and integrate third-party code, something SerenityOS does not allow. In addition, Ladybird couldn’t really use many of these open source projects anyway, because they simply didn’t exist for SerenityOS in the first place. This decision creates a lot of breathing room and flexibility for both projects.

Ladybird was getting a lot of attention from outside of SerenityOS circles, from large donations to code contributions. I’m not entirely surprised by this step, and I really hope it’s going to be the beginning of something great. We really need new and competitive browser engines to push the web forward, and alongside Servo, it now seems Ladybird has also picked up the baton.

What this will mean for SerenityOS remains to be seen. As Kling said, he hasn’t really been involved with SerenityOS outside of Ladybird work for two years now, so it seems the rest of the contributors were already doing a lot of the heavy lifting. I hope this doesn’t mean the project will peter out, since it has a certain flair few other operating systems have.