There was a period where I think both disabled ESNI support while work shifted to ECH, which is now pretty far along. I was even able to set up a forked nginx w/ ECH support to build a client (browser) tester [0].
Hopefully ECH can now become more mainstream in HTTPS servers, allowing for some fun configs.
A pretty interesting feature of ECH is that the server does not need to validate the public name (it MAY), so clients can use public_names that middleboxes (read: censors) approve to connect to other websites. I'm trying to get this added to the Rustls client [1]; now might be a good time to pick that back up.
[0] https://rfc9849.mywaifu.best:3443/ [1] https://github.com/rustls/rustls/issues/2741
I'm not 100% sure it's allowed in the specs, but it works in Chrome.
As I understand it, without this feature it would be pretty useless for small website owners, since they would need to register a separate domain for their ECH public name, which censors could just block.
E.g. all the users will remember `example.com`; underneath, it doesn't matter what IP it resolves to. If the IP gets "burned", the operators can rotate to a new IP (if their provider allows).
Vs. telling your users to use a new domain `example.org`, dealing with fake websites, etc.
Also sensible ISPs usually don't block IPs since for services behind a CDN it could lead to other websites being blocked, though of course sometimes this is ignored. See also: https://blog.cloudflare.com/consequences-of-ip-blocking/
If tomorrow everyone said "we don't want IPs from Frankfurt showing up somewhere in Dubai", you'd have a massive technical problem and a lot of rearranging to start with, but once that was sorted you could geo-lock. IANA and network providers simply haven't been doing that.
The reason it doesn't happen is that devs/stakeholders want uptime from ISPs/networks, not something they can't abstract over. Basically it's just a status quo, much like the entire internet reverse-proxying through CDNs is a status quo. It wasn't always like that, and it may not always be like that in the future - it just depends which way the winds blow over time.
what do you mean, IPs from Frankfurt?
IP addresses are just IP addresses, they know no geographical boundaries. In RIR DBs you can geolocate them to wherever you want. Which is the entire reason why Geo IP DBs even exist - they triangulate.
From a network perspective, statements like that make no sense. IP addresses don't have any sort of physicality.
How do you even determine to whom an IP is registered? They get sub-leased all the time.
The best you can do is check who has administrative control over the prefixes per the RIR info, but that doesn't mean that whoever has control is the factual user of the IPs.
You could check the IRR for the ASN and base it on that, but still.
There's also no way to actually know _where_ an IP actually originates from. Only its AS path.
The DFZ contains all prefixes announced everywhere, for the internet is completely decentralized.
You check the RIR's records.
> They get sub-leased all the time.
With records updated. If not, any consequences from wrong information fall on the lessor and lessee.
> There's also no way to actually know _where_ an IP actually originates from. Only its AS path.
Ping time from different locations on their upstream AS gives a good guess.
Not always + there are no consequences whatsoever.
Plenty of leasing services will just provide you with IRR & RPKI, without ever touching the actual records.
> Ping time from different locations on their upstream AS gives a good guess.
Upstream AS is meaningless if it's a T1 carrier. Ping AS6939. They are everywhere.
It'll still eventually stick, but a lot slower
With Jio, you don't really need ECH at all. The blocks are mostly rudimentary and bypassed with encrypted DNS (DoH / DoT / DNSCrypt) and Firefox (which fragments the TLS ClientHello packets into two).
Funnily enough, not setting the SNI, connecting to the origin IP, and then requesting the page worked fine.
Such tricks, called "domain fronting", are why ECH exists. The problem is that although domain fronting is effective for the client, it's a significant headache for the provider. Big providers involved, such as Cloudflare, have always insisted that they want to provide this sort of censorship-resisting capability, but they don't want to authorize domain fronting because it's a headache for them technically.
Let me explain the headache with an example. Say I'm Grand Corp, a French company with 25 million web sites including both cats-are-great.example and fuck-trump.example. Users discover that although the US government has used Emergency Powers to prohibit access to fuck-trump.example, using domain fronting they can connect to cats-are-great.example and request fuck-trump.example pages anyway and the US government's blocking rules can't stop them.
What they don't know is that I, Grand Corp, had been sharding sites 25 ways, so there was only a 1-in-25 chance that this worked - it so happened cats-are-great and fuck-trump were in the same shard. On Thursday, during a routine software upgrade, we happen to switch to 32-way sharding and suddenly it stops working - users are outraged. Are the French surrendering to Donald Trump?
Or, maybe as a fallback mechanism the other 31 servers can loop back around to fetch your fuck-trump.example pages from the server where they live, but in doing so they double the effective system load. So now my operational costs at Grand Corp for fuck-trump.example doubled because clients were fronting. Ouch.
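The shard-collision odds in this example are just uniform hashing; a quick sketch (hypothetical shard counts and hash choice, not any real provider's setup):

```python
import hashlib

def shard_of(hostname: str, num_shards: int) -> int:
    # Uniform hashing: each site lands in one of num_shards buckets
    digest = hashlib.sha256(hostname.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

def fronting_works(front: str, target: str, num_shards: int) -> bool:
    # Domain fronting only reaches the target if both names
    # happen to be served from the same shard
    return shard_of(front, num_shards) == shard_of(target, num_shards)

# A random pair of sites collides with probability 1/num_shards, hence the
# 1-in-25 chance with 25-way sharding - and resharding from 25 to 32 ways
# can silently break a pair that used to collide.
```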
GP said "not setting SNI"... a TLS handshake with an IP cert doesn't (need to) set SNI, right?
They won't have received a certificate for the IP as a name; it's relatively unusual to have those. The main users are things like DoH and DoT servers, since their clients may not know the name of the server. Historically, if you connect to a TLS server without SNI it just picks a name and presents a certificate for that name - if there's a single name for the machine that definitely works, and if not, well - domain fronting.
TLS 1.3 even specifies that you must always do SNI and shouldn't expect such tricks to work, because it's such a headache.
Note that it is exactly this type of thing that makes age verification laws reasonable. You're making it technically impossible for even sophisticated parents to censor things without a non-solution like "don't let kids use a computer until they're 18", so naturally the remaining solution is a legal one to put liability on service operators.
You're still ultimately going to get the censorship when the law catches up in whatever jurisdiction, but you'll also provide opacity for malware (e.g. ad and tracking software) to do its thing.
I do agree though that it should be illegal for device manufacturers or application developers to use encryption that the device owner cannot MitM. The owner should always be able to install their own CA and all applications should be required to respect it.
The only thing this makes impossible is the laziest, and easiest to bypass method of filtering the internet.
Given that it's pretty much the norm for consumer embedded devices not to respect the owner's wishes, network-level filtering is the best thing a device owner can do on their own network.
It's a mess.
I'd like to see consumer regulation to force manufacturers to allow owners complete control over their devices. Then we could have client side filtering on the devices we own.
I can't imagine that will happen. I suspect what we'll see, instead, is regulation that further removes owner control of their devices in favor of baking ideas like age or identity verification directly into embedded devices.
Then they'll come for the unrestricted general purpose computers.
Along similar lines, a security hole you can use for jailbreaking is also a security hole that could potentially be exploited by malware. As cute as things like "visit this webpage and it'll jailbreak your iPhone" were, it's good that that doesn't work anymore, because that is also a malware vector.
I'd like to see more devices being sold that give the user control, like the newly announced GrapheneOS phones for instance. I look forward to seeing how those are received.
As brought up in another thread on the topic, you have things like web browsers embedded in the Spotify app that will happily ignore your policy if you're not doing external filtering.
I guess it (network-level filtering) just feels like a dragnet solution that reduces privacy and security for the population at large, when a more targeted and cohesive solution like client-side filtering, having all apps that use web browsers funnel into an OS-level check, etc would accomplish the same goals with improved security.
You could have cooperation from everyone to hook into some system (California's solution), which I expect will be a cover for more "we need to block unverified software", or you could allow basic centralized filtering as we've had, and ideally compel commercial OS vendors to make it easy to root and MitM their devices for more effective security.
Rather than “get over” it I think we need to fight. You seem to insist that monitoring/control is a done deal and we only need to argue about the form it takes, but this is not correct. Centralized monitoring/control can be resisted and broken through a combination of political and technical means. While you may not want this, I do. (And many others are being swayed back in my direction as they start to feel the effects of service enshittification, censorship under the guise of “fighting misinformation”, and media consolidation.)
Ideally you would lock them up in a padded room until then. There is a significant amount of shared real world space that isn't supervised and doesn't require any age verification to enter either.
A little while after that, back in the UK, I drove my young cousin to the seaside. I didn't carry ID - I don't drink, and you're not required to carry ID to drive here†, so it was never necessary back then - but she did. So I try to buy her booze, they demand ID, and since I don't have any I can't buy it even though I'm old enough to drink. So she just orders her own booze; she's under age, but they don't ask because she's pretty.
† The law here says police are allowed to ask to see a driving license if you're in charge of a vehicle on a public road, but since you aren't required to carry it, they can instead require you to attend a police station and show documents within a few days. In practice, in 2026, police have network access, so they can very easily go from "Jim Smith, NW1A 4DQ" to a photo and confirmation that you're licensed to drive a bus or whatever, if you are co-operative.
And it's likely a temporary win there until the authoritarian regimes mandate local monitoring software and send you to the gulag if they detect opaque traffic.
In addition to the main RFC 9849, there is also RFC 9848 - "Bootstrapping TLS Encrypted ClientHello with DNS Service Bindings": https://datatracker.ietf.org/doc/rfc9848/
There's an example of how it's used in the article.
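The bootstrapping in RFC 9848 hangs off the DNS HTTPS/SVCB record (type 65), whose `ech` SvcParam carries the ECHConfigList the client feeds into its TLS stack. A minimal, stdlib-only sketch of the wire-format query such a lookup sends (transport and response parsing are omitted; the query ID is arbitrary):

```python
import struct

DNS_TYPE_HTTPS = 65  # HTTPS RR (SVCB-compatible), per RFC 9460

def build_https_query(name: str, qid: int = 0x1234) -> bytes:
    # 12-byte DNS header: id, flags (RD set), 1 question, 0 answer/auth/extra
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, null-terminated
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.rstrip(".").split("."))
    qname += b"\x00"
    # QTYPE=HTTPS(65), QCLASS=IN(1)
    return header + qname + struct.pack(">HH", DNS_TYPE_HTTPS, 1)
```

Sending this over DoH/DoT and pulling the `ech` parameter out of the answer is what lets a client build the encrypted ClientHello before ever touching the server.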
Now we need to get Qualys to cap SSL Labs ratings at B for servers that don't support ECH. Also those that don't have HSTS and HSTS Preload while we're at it.
Although Hardenize was a commercial product (it was acquired in 2022 by another company, Red Sift), it has a public report that's always been free. For example:
https://www.hardenize.com/report/feistyduck.com
The CSP inspection in Hardenize could use a refresh, but the TLS and PKI aspects are well maintained [at the time of writing].
A bit tricky in Go, but nothing too complicated. We implemented ECH in Aug 2024 for our DNS Android app and it has worked nicely since: https://github.com/celzero/firestack/blob/09b26631a2eac2cf9c...
I'm actually kind of furious at nginx's marketing materials around ECH. They compare with other servers but completely ignore Caddy, saying that they're the only practical path to deploying ECH right now. Total lies: https://x.com/mholt6/status/2029219467482603717
right now tools like Cloudflare Bot Management rely heavily on JA3/JA4 hashes - they fingerprint the ClientHello to identify scrapers vs real browsers. if the ClientHello is encrypted, that whole detection layer collapses. you can still do behavioral analysis and JS challenges, but the pre-HTTP layer that currently catches a huge chunk of naive bots - gone
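for reference, the JA3 signal is just an MD5 over five ClientHello field lists - a sketch (real implementations also strip GREASE values first):

```python
import hashlib

def ja3(version, ciphers, extensions, curves, point_formats):
    # JA3 string: decimal values, lists dash-joined, fields comma-joined,
    # then MD5 hexdigest - all readable straight off a plaintext ClientHello
    fields = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return hashlib.md5(fields.encode()).hexdigest()

# with ECH, a middlebox only sees the outer ClientHello, which no longer
# reflects the real inner parameters, so this hash stops identifying clients
```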
curious how Cloudflare handles this internally given they're one of the biggest ECH adopters but also one of the biggest bot detection vendors. seems like they're eating their own lunch on this one, or they've already shifted their detection stack to not rely on it as much
If you're not in control of the domain you're fingerprinting, then ECH is working as intended.
I don't expect naive bots to implement ECH any time soon, though. If a bot can't be bothered to download curl-impersonate, they won't pass any ECH flags either.
for that tier, ECH flips the dynamic a bit. right now detection can use JA3/JA4 as a positive signal - "this fingerprint matches Chrome 120, looks clean". with ECH, if the bot is running behind a CDN that terminates ECH (like Cloudflare), the server sees a decrypted ClientHello that looks like... a real Chrome on Cloudflare's infrastructure. the fingerprint is clean by construction.
so paradoxically ECH might make things harder for the sophisticated bot detection case while doing nothing about the naive case, which is sort of backwards from what you'd expect.
the part that changes is passive fingerprinting from third parties - network middleboxes, ISPs, DPI systems that have historically been able to read ClientHello parameters in transit and build behavioral profiles. that layer goes away. for bot detection specifically that matters less since detection happens at the server, so your correction stands for that use case.
the Cloudflare paradox I was gesturing at is maybe better framed as: for sites NOT on Cloudflare, ECH makes it harder for Cloudflare (as a network observer) to do pre-connection fingerprinting. but for their own CDN customers, they decrypt it anyway so nothing changes for them. the conflict is more theoretical than practical for their current product.
Right. Things that should never have been allowed to exist to begin with. Working as designed.
That's exactly what I said:
> It only prevents your ISP from knowing what website you're connecting to.
Why would they want to peek traffic? Most likely for statistics (most frequently visited websites etc).
Also reverse lookup has nothing to do with hosting own DNS resolver.
> Also reverse lookup has nothing to do with hosting own DNS resolver.
It has everything to do with that. Had you used two brain cells, you would've known that they can memorize the IP address and the domain name, and if you connect to that IP in a short period of time, most likely you visited that domain name.
> ECH basically kills TLS fingerprinting as a bot detection signal
They are not talking about fingerprinting in general. Please elaborate how else TLS fingerprinting can be done.
> Please elaborate how else TLS fingerprinting can be done.
By doing everything as it is right now?
In truth ECH sends three: Host header + real SNI + dummy SNI
If the CDNs come under pressure, they can stop allowing ECH, just like they stopped allowing domain fronting. Unlike fronting, they can do this selectively -- like, only if the client is in $COUNTRY and the hostname is one of XYZ.
Many of those (not looking at any particular Germans..) however only offer a single API token across all DNS zones, which is awful when you're managing many zones. One compromised API token = hundreds of compromised zones.
Would be nice if more DNS providers offered granular API tokens, at least on a per-zone basis and ideally on a per-record basis within a zone.
> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC9525]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
Aside from apparently not considering the existence of IPv6, why are IP-based certificates explicitly ruled out? This makes the spec entirely meaningless for small servers and basically requires shifting hosting to shared hosts/massive CDNs to provide any protection against SNI snooping.
However, "DNS-based reference identity [RFC9525]" seems to explicitly disallow IP-based certificates by requiring a DNS name. I can only interpret the sentence I quoted as written to say "make sure you never ever accidentally validate an IP address".
> Clients that incorporate DNS names and IP addresses into the same syntax
They wouldn't mention the IP addresses at all. Also, notice the word "and".
Actually you can set up ECH on your server and configure the public_name to be something like `cloudflare-ech.com`, so clients would indeed use that in the OuterSNI and connect to you, without you needing to use CF. And middleboxes might think they are indeed connecting to CF (though CF publishes their IP ranges, so this could be checked elsewhere).
Yes, "Don't stand out" technologies like ECH aren't useful if you inherently stand out anyway. They're intended to make broad surveillance and similar undirected attacks less effective, they aren't magic invisibility cloaks and won't protect you if you're a singular target.
For IPv4, there’s room for ambiguity.
And how are IP certificates required for small servers?
I can't think of a single numeric TLD, so I don't think anyone is confusing IP literals with domain names, unless they're doing so extremely lazily.
> And how are IP certificates required for small servers?
You need a valid certificate as the outer certificate which contains an SNI that will still be readable. For cloudflare.com and google.com that's easy; you can't tell what website Cloudflare is proxying and whether Google is serving you Youtube, Gmail, or Google Search content.
For an independently-hosted myhumanrightsblog.net, that's not as easy. They'd need another domain reachable on that server to set up the ECH connection to hide the risky TLD. Clients being snooped on still get specific domains logged.
IP certificates work around that issue by validating the security of the underlying connection rather than any specific hostname. Any server could be serving any hostname over an IP-address-validated connection. For snooped-on clients, the IP address is already part of the network traffic anyway, but no domains ever hit the traffic logs at all.
In other words, blocking solutions that know your small blog is hosted exclusively on 1.2.3.4 will just block your IP, without any collateral damage to other sites the blocking government cares about.
Conversely, if you're hosting importedgoodsecommercesitegovernmentofficialslove.com next to myhumanrightsblog.net on the same IP, ECH is for you and solves your problem: Just register mycoolagnostichosting.net and do ECH to that.
ECH prevents tracking through routing layers where your ClientHello might contain foo.example.com or bar.example.com but route via the same IP (Cloudflare). A middlebox can see you are using a cloudflare hosted website, but not know what cloudflare website.
There's no benefit to encrypting the SNI with 10.20.30.40 if they can see you're connecting to 10.20.30.40 anyway.
If the client (read: Chrome) does support that (and prevents its deactivation), then Zscaler and other shitty things are made even more useless than they are today.
The previous attempt of encrypting plaintext SNI is Encrypted Server Name Indication (ESNI), which didn't end well.
>analysis has shown that encrypting only the SNI extension provides incomplete protection. As just one example: during session resumption, the Pre-Shared Key extension could, legally, contain a cleartext copy of exactly the same server name that is encrypted by ESNI. The ESNI approach would require an encrypted variant of every extension with potential privacy implications, and even that exposes the set of extensions advertised. Lastly, real-world use of ESNI has exposed interoperability and deployment challenges that prevented it from being enabled at a wider scale.
IME, ESNI worked for accessing _all_ websites using CF. AFAIK, ECH has never been offered for all websites using CF
ESNI was a bit simpler to use than ECH, e.g., when making HTTP requests with programs like openssl s_client, bssl client, etc. (I don't use popular browsers to make HTTP requests)
When CF ended the ESNI trial, there was nothing to take its place. The public was asked to wait for ECH
It has been roughly five years (correct me if wrong) without any replacement solution for plaintext SNI
ECH is available on a few test sites, e.g.,
But software support for ECH makes little practical difference for www users if major CDNs still don't support it
And as far as a solution that applies to CDNs other than CF, there has been no solution at all
Plaintext SNI is everywhere. It more or less defeats the stated purpose of "encrypted DNS"
You're experiencing it working in practice. RFC9849 is a published document, the end of a very long process in which the people who make this "actually work in practice" decided how to do this years ago and have deployed it.
This isn't like treaty negotiation where the formal document often creates a new reality, the RFC publication is more like the way the typical modern marriage ceremony is just formalising an existing reality. Like yeah, yesterday Bill and Sarah were legally not married, and today Bill and Sarah are married, but "Bill and Sarah" were a thing five Christmases ago, one of the bridesmaids is their daughter, we're just doing some paperwork and having a party.
It's not a problem if Network can still do their job. It's a whole other matter to expect Network to do their job through another layer. You end up with organizations that can't maintain their applications and expect magic fixes.
Cooperative orgs probably don't have this issue, but there are definitely parts of some organizations where, when one part takes a capability from another, they don't give it back - some sort of weird headcount game - despite not really wanting to understand the network at a network level.
Encryption and higher-level platforms are great for security and productivity, but the debugging surface keeps shrinking. Eventually when something breaks, nobody actually has the layer-by-layer visibility needed to reason about it.
0: https://community.fortinet.com/t5/FortiGate/Technical-Tip-Ho...
Notice that if users don't trust the Fortigate all it can do is IP layer blocks, exactly as intended.
It seems pointless to try to have a policy where people say they trust somebody else (whoever is operating that Fortigate) to override their will but also they don't want their will overridden, that's an incoherent policy, there's no technical problem there, technology can't help.
Defense in layers makes sense, but domain blocking was never a "layer" if a hostile actor can just buy a new domain that's not on your blocklist.
I think it'd be good if ECH became more widespread so that we can get away from these antiquated control techniques that just result in frustration with no security benefits.
Relying on HTTPS and SVCB records will probably allow a downgrade for some attackers, but if browsers roll out something akin to the HSTS preload list, then downgrade attacks become pretty difficult.
DNSSEC can also protect against malicious SVCB/HTTPS records and the spec recommends DoT/DoH against local MitM attacks to prevent this.
However, DoH/DoT without record integrity is about as useful as self-signed HTTPS certificates. You need both for the system to work right in every case.
To quote the spec:
> Clearly, DNSSEC (if the client validates and hard fails) is a defense against this form of attack, but encrypted DNS transport is also a defense against DNS attacks by attackers on the local network, which is a common case where ClientHello and SNI encryption are desired. Moreover, as noted in the introduction, SNI encryption is less useful without encryption of DNS queries in transit.
Can you explain why, considering it is at the client's side ("browsers")?
For HSTS, browsers ship with a preloaded list of known-HTTPS domains that requests are matched against. That means they never connect over plain HTTP at all, rather than connecting over HTTP once and then upgrading and caching when the HSTS header is present. If ECH comes with a preload list, browsers connecting to ECH domains will simply fail to connect rather than permit the network to downgrade their connection to non-ECH TLS.
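A minimal sketch of that fail-closed preload logic (hypothetical list and hostnames; a real browser would bake the list into the binary, as with HSTS preload):

```python
# Hypothetical baked-in list of hosts known to require ECH
ECH_PRELOAD = {"secure.example"}

def connect_policy(host: str, ech_config_available: bool) -> str:
    # A preloaded host must complete ECH; if the network strips the DNS
    # HTTPS record (so no ECH config is available), fail closed rather
    # than silently downgrading to a plaintext-SNI handshake.
    if host in ECH_PRELOAD and not ech_config_available:
        raise ConnectionError("ECH required for preloaded host; refusing downgrade")
    return "ech" if ech_config_available else "plain-sni"
```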