// old trick: request script.js as an "image" so the response lands in the shared HTTP cache;
// the image decode fails, but a later real <script src="script.js"> is served from cache
var js = new Image();
js.src = 'script.js';
js.onerror = function(){/* js done */};
Then some browsers started having a separate image cache and this stopped working.

Most connections on the web nowadays are over H2+: https://almanac.httparchive.org/en/2024/http#http-version-ad...
You may not have intended it this way, but this statement very much reads as "just use Chrome". There are lots of microbrowsers in the world generating link previews, users stuck behind proxies, web spiders, and people stuck with old browsers that don't necessarily have H2 connectivity.
That doesn't mean you should over-optimize for HTTP 1.x, but decent performance in that case shouldn't be ignored. If you can make HTTP 1.x performant, then H2 connections will be as well by default.
Far too many pages download gobs of unnecessary resources just because nobody bothered with tree shaking and minification. Huge populations of web users at any given moment are stuck on 2G- and 3G-equivalent connections. Depending on where I am in town, my 5G phone can barely manage to load the typical news website because of poor signal quality.
Using a bunch of domains like a.static.com, b.static.com, etc. was really only helpful when the limit on connections per domain was something like 2. Depending on the browser, those limits have been higher for a while (there's a small sketch of the sharding idea below).
For http/2 it's less helpful.
But honestly, there's not really one right answer in theory. Multiple domains increase the fixed overhead of DNS lookups, TCP connects, and TLS handshakes, but they offer parallelism that doesn't suffer from head-of-line blocking.
You can multiplex a bunch of requests/responses over a single HTTP/2 connection in parallel... until you drop a packet and remember that HTTP/2 is still TCP-based.
UDP-based transports like HTTP/3 and QUIC don't have this problem.
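For concreteness, here's a minimal sketch of the kind of HTTP/1.x-era sharding helper mentioned above (the hostnames and the hash are made up for illustration, not taken from any real site):

// spread assets across a couple of shard hostnames to dodge per-host connection limits
var shards = ['a.static.com', 'b.static.com'];
function shardUrl(path) {
  // hash the path so the same asset always maps to the same shard (keeps caching stable)
  var h = 0;
  for (var i = 0; i < path.length; i++) h = (h * 31 + path.charCodeAt(i)) >>> 0;
  return 'https://' + shards[h % shards.length] + path;
}
// e.g. shardUrl('/img/logo.png') always returns the same shard host for that path

On HTTP/2 this buys you little, since everything multiplexes over one connection anyway, and the extra DNS/TLS setup per shard becomes pure overhead.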
What is the actual problem that's being solved here? Is cache-sniffing being actively used for fingerprinting or something?
What's going on here?
Want to know if your user has previously visited a specific website? Time how long it takes to load that site's CSS file; if it's instant, then you know where they have been.
You can tell if they have an account on that site by timing the load of an asset that's usually only shown to logged-in users.
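A rough sketch of what that timing probe looked like before cache partitioning (the URL and the 20 ms threshold are hypothetical, and partitioned caches now defeat this):

function probe(url) {
  return new Promise(function (resolve) {
    var img = new Image();
    var start = performance.now();
    // onload/onerror both fire once the response comes back, cached or not
    img.onload = img.onerror = function () { resolve(performance.now() - start); };
    img.src = url;
  });
}
probe('https://example.com/assets/logged-in-only.png').then(function (ms) {
  // a near-instant response suggests the asset was already in the shared cache
  console.log(ms < 20 ? 'probably cached' : 'probably not cached');
});

With a partitioned cache, the probe only ever sees entries cached under the attacker's own top-level site, so the timing no longer leaks the victim's history.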
All of the browser manufacturers decided to make this change (most of them over 5 years ago now) for privacy reasons, even though it made their products worse in that pages would load slower for their users.
In the days of webpack, not so much.
The question (not addressed in the article) is: is there in fact a gigantic, widespread problem of using cache-sniffing to fingerprint users?
We have always known that this was possible, yet continued to include cross-site caching anyway because of the enormous benefits. Presumably something fairly major has recently happened for this functionality to be removed; if so, what?
Safari (whose vendor makes a big deal about being privacy-conscious) was the first to introduce cache partitioning, looking into it as far back as 2013[1]; Chrome followed in 2020[2] and Firefox in 2021[3]. One thing I know, anecdotally, to be a strong motivator among browser vendors is ‘X is doing it, why aren’t we?’
1. https://bugs.webkit.org/show_bug.cgi?id=110269
2. https://developer.chrome.com/blog/http-cache-partitioning
3. https://blog.mozilla.org/security/2021/01/26/supercookie-pro...