mtud 18 hours ago

> During the TLS handshake, the client tells the server which treeheads it has.

I don’t love the idea of giving every server I connect to via TLS the ability to fingerprint me by how recently (or not) I’ve fetched MTC treeheads. Even worse if this is in the ClientHello, where anyone on the network path can view it, either per connection or via my DoH requests to bootstrap Encrypted Client Hello.

crote 8 hours ago

It worries me how we are increasingly making browser vendors a critical part of the TLS ecosystem - first with CRL Bloom filters, now with signature Merkle trees. And of course they also manage the root stores.

Sure, it's nice and convenient if you're using an evergreen browser which is constantly getting updates from the mothership, but what is the rest supposed to do? How are we supposed to use this in Curl, or in whatever HTTP library your custom application code is using? How about email clients? Heck, is it even possible with embedded devices?

"The internet" is a hell of a lot bigger than "some website in Google Chrome", and we should be careful to not make all those other use cases impossible.

  • mcpherrinm 6 hours ago

    On some major OSes (like Windows and macOS), there’s a “platform verifier” which can handle some of this, including the fetching and sharing of out-of-band data. It doesn’t have to be tied to a browser.

    Linux should probably get one too, but I don’t know who will lead that effort.

    In the meantime, browsers aren’t willing to wait on OSes to get their act together, and reasonably so. There’s regulation (and users, especially corporate/government) pushing for post-quantum solutions soon, so folks are trying to find solutions that can actually be deployed.

    Browsers have always led in this space, all the way back to Netscape introducing SSL in the first place.

mcpherrinm a day ago

Next week at IETF 124, there's a Birds-of-a-Feather session that will kick off the standardization process here.

I think Merkle Tree Certificates are a promising option. I'll be participating in the standardization efforts.

Chrome has signalled in multiple venues that they anticipate this to be their preferred (or only) option for post-quantum certificates, so it seems fairly likely we will deploy this in the coming years.

I work for Let's Encrypt, but this is not an official statement or promise to implement anything yet. For that you can subscribe to our newsletter :)

dtj1123 15 hours ago

There's no good reason to believe that quantum computers will break modern cryptography.

Shor's algorithm requires that a quantum Fourier transform is applied to the integer to be factored. The QFT essentially takes quantum data with a representation that mirrors ordinary binary, and maps it to a representation that encodes information in quantum phase (an angle).

The precision in phase needed to perform an accurate QFT scales EXPONENTIALLY with the number of qubits you're trying to transform. You manage to develop a quantum computer capable of factoring my keys? Fine, I'll add 11 bits to my key length, come back when you've developed a computer with 2000x the phase precision.
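
Back-of-the-envelope on that claim (a sketch of this comment's argument, not settled physics):

    # In the textbook QFT on n qubits, the smallest controlled rotation is
    # 2*pi/2^n, so the required phase precision doubles with each extra qubit.
    def precision_ratio(extra_key_bits: int) -> int:
        return 2 ** extra_key_bits

    print(precision_ratio(11))  # 2048 -- the "~2000x" figure above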

  • jchw 13 hours ago

    Even though I agree with you, I totally understand the push for at least hybrid algorithms. We just don't know how quantum computers could feasibly scale. Until we know for sure that cracking existing cryptosystems really is infeasible for physical reasons, using hybrid systems that provide an extra layer of security just seems like obvious good practice.

    • Avamander 13 hours ago

      At the same time it's very suspicious how a few three-letter agencies are pushing for total deprecation of non-PQ algorithms, even in hybrids.

      • goalieca 7 hours ago

        Transitions take decades, especially when silicon and networks are involved (e.g. secure boot and MTU). Most of us would rather just stick with a handful of ciphers than constantly change them (crypto agility has become crypto chaos).

    • hulitu 3 hours ago

      > We just don't know how quantum computers could feasibly scale.

      We know. They are not yet able to emulate an i4004, let alone be a threat to "computing".

      • jchw 2 hours ago

        > We know.

        We know that current quantum computers are very weak. We do not know what is physically possible, or even feasible. Quantum computers today struggle with decoherence, but we really genuinely don't know for sure if they always will or if there is a way to overcome it. We have not hit a point where we believe we are up against hard physical limitations that can never be overcome.

        > They are not yet able to emulate an i4004, let alone be a threat to "computing".

        I am skeptical this is a good benchmark, though. How many logical qubits do you reckon it would take to emulate an i4004? I don't have the answer, but I wouldn't be surprised if you need fewer than that to do something interesting that a classical computer can't reasonably do.

  • adgjlsfhk1 8 hours ago

    > Fine, I'll add 11 bits to my key length, come back when you've developed a computer with 2000x the phase precision.

    The really weird thing is that this isn't true. We already have quantum error correction schemes that can take a quantum computer with O(1) error and get O(exp(-k)) error using polylog(k) inaccurate qubits (and we have empirical evidence that these schemes work to correct the errors of single-digit numbers of qubits already). Adding 11 bits to the key adds ~12 logical qubits, or ~a hundred physical qubits, to the size of the QC.

  • sunsetonsaturn 9 hours ago

    Regulators require a transition to quantum-safe algorithms. In the EU, systems labeled as "highly important" must complete the transition by 2030, so you have to do it regardless of how quantum computers evolve.

    • dtj1123 7 hours ago

      Perhaps regulators shouldn't require that transition?

  • hulitu 3 hours ago

    > There's no good reason to believe that quantum computers will break modern cryptography.

    Nobody needs them. The Five Eyes already have access to root certs and internet nodes.

    What _really_ matters is that you are secure, and terrorists and pedophiles stand no chance. At least in theory. /s

commandersaki 15 hours ago

I took some rough notes to whittle down the verbiage.

This proposal is to introduce PQ certificates into the WebPKI, such as those issued by certificate authorities.

The problem is that PQ signatures are large. If the certificate chain is small that could be acceptable, but if the chain is large it gets expensive in bandwidth and computation during the TLS handshake. That is, the exchange sends many certificates, each embedding a signature and a large (PQ) public key.

Merkle Tree Certificates ensure that an up-to-date client only needs 1 signature, 1 public key, and 1 Merkle tree witness.

Looking at an MTC-generated certificate, they've replaced the traditional signing algorithm and signature with a witness.

That means all a client needs is a signed Merkle root, which comes from an expanding Merkle tree signed by the MTCA (Merkle Tree CA) and is delivered somehow out of band.

So basically the TLS client receives a certificate whose new signature algorithm embeds a witness instead of a signature, plus a root (not sure if it's just a hash or a signed hash; I think the former). The client gets the signed roots out of band and can pre-verify them, which means validating the certificate is simply a hash check of the witness against a known root.
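
A minimal sketch of that final check (hypothetical helper names, not the draft's actual encoding): the client recomputes the root from the leaf hash and the witness, then compares it against a pre-fetched, pre-verified root.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def verify_witness(leaf_hash, witness, trusted_root):
        # witness: (sibling_hash, sibling_is_left) pairs, ordered leaf to root
        node = leaf_hash
        for sibling, sibling_is_left in witness:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == trusted_root

    # Toy two-leaf tree: root = h(h(a) || h(b))
    a, b = h(b"cert-a"), h(b"cert-b")
    root = h(a + b)
    assert verify_witness(a, [(b, False)], root)
    assert verify_witness(b, [(a, True)], root)

Note that no public-key operation happens at handshake time here; the only signature verification was done once, out of band, on the signed root.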

Edit: My question: is this really a concern that needs to be addressed? PQ for TLS key exchange addresses the looming threat of HNDL (Harvest Now, Decrypt Later). I don't see why we need to make the WebPKI use PQ signatures, at least for a while.

  • rand846633 13 hours ago

    Like IPv6, this could take ages to be standardised and rolled out to most parts of the internet and IoT. Might make sense to do it now if you want to be able to shut down the last non-PQ-safe TLS device in the year 2050?

    • arccy 8 hours ago

      They're saying WebPKI, which means web browsers can more or less push this through on their evergreen release schedule when it becomes necessary.

      PKI for everything else can go at its own pace.

      • rand846633 6 hours ago

        But this implies that any small plastic home router using libcurl can fetch its updates via PQ-safe HTTPS?

cryptonector 16 hours ago

> During the TLS handshake, the client tells the server which treeheads it has.

If, on first connection, the client doesn't know what root the server's certificate will chain to, and therefore doesn't tell the server what treeheads it has, the client gets a full certificate and caches it to remember for later connections. That could work, though it's a slight metadata leak.

Alternatively the client could send the treeheads for all the roots it trusts. That's going to bloat the ClientHello, and it's going to leak a bit of metadata if the client does anything other than claim to trust all roots blessed by the CA/Browser Forum or the Chrome Root Program.

tempay 19 hours ago

> All the information a client needs to validate a Merkle Tree Certificate can be disseminated out-of-band.

The post didn't discuss it, but naively this feels like it could become a privacy issue?

  • codebje 18 hours ago

    Using The Approved Set™ from your browser or OS carries no privacy issues: it's just another little bit of data your machine pulls down from some mothership periodically, along with everyone else. There's nothing distinguishing you from anyone else there.

    You may want to pull landmarks from CAs outside of The Approved Set™ for inclusion in what your machine trusts, and this means you'll need data from somewhere else periodically. All the usual privacy concerns over how you get what from where apply; if you're doing a web transaction, a third party may be able to see your DNS lookup, your connection to port 443, and the amount of traffic you exchange, but they shouldn't be able to see what you asked for or what you got. Your OS or browser can snitch on you as normal, though.

    I don't personally see any new privacy threats, but I may not have considered all angles.

    • mtud 18 hours ago

      Different machines will need to have variations in when they grab updates to avoid thundering herd problems.

      I could see the list of client-supplied available roots being added to client fingerprinting code for passive monitoring (e.g. JA4) if it’s in the client hello, or for the benefit of just the server if it’s encrypted in transit.

  • cryptonector 19 hours ago

    Vaguely. Basically each CA will run one of these. Relying parties (browsers) will need to fetch the Merkle tree heads periodically, for at least the CAs that sign certificates for the sites the user browses, or maybe for all of the WebPKI CAs. There are on the order of 100 CAs for the whole web though, so knowledge that your browser fetched the MT heads for, say, 5 CAs, or even 1 CA, wouldn't leak much of anything about what specific sites the user is visiting. Though if the user visits, say, only Chinese sites, then you might see them fetch only the MT heads for Chinese CAs, and then you might say "aha! it's a Chinese user", or something, but... that's not a lot of information leaked, nor terribly useful.

    • mcpherrinm 18 hours ago

      I think most folks involved are assuming the landmarks will be distributed by the browser/OS vendor, at least for end-user devices where privacy matters the most, similar to how CRLSets/CRLite/etc. are pushed today.

      There's "full certificates" defined in the draft which include signatures for clients who don't have landmarks pre-distributed, too.

    • vlovich123 17 hours ago

      The privacy aspect isn’t that you fetched the heads. It’s that you are sending data that websites can likely use (in combination with other leaked data) to fingerprint you and track you across websites (since these heads are sent by the client in the TLS hello).

      • cryptonector 16 hours ago

        The heads don't change often. I think we should list all the metadata that gets leaked to see just how many bits of fingerprint we might get. Naively I think it's rather few bits.

        • vlovich123 15 hours ago

          Sure, but every few bits is enough to disambiguate just a little bit more. Then each little trickle is combined to create a waterfall to completely deanonymize you.

          For example, your IP + screen resolution + the TLS handshake head might be enough of a fingerprint to disambiguate your specific device among the general population.
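
          Rough arithmetic on that (the per-signal bit counts below are made-up illustrations, not measurements):

              import math

              # Bits needed to single out one device among ~8 billion people
              print(math.log2(8e9))  # ~33 bits

              # Hypothetical contributions from combined signals
              signals = {"IP prefix": 20, "screen resolution": 5, "treehead age": 3}
              print(sum(signals.values()))  # 28 bits from just these three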

  • marginalia_nu 9 hours ago

    Between SNI and OCSP, TLS has never been about hiding which websites you visit from prying eyes.

    • DaSHacka 9 hours ago

      OCSP is being phased out and ESNI exists though

      • marginalia_nu 5 hours ago

        ESNI/ECH nominally exists, but it's not really seeing very widespread deployment. Last I checked Caddy was the only web server/reverse proxy that fully supports it. The rate of adoption is glacial.

        My point is that there really hasn't been a point where domain level traffic information has been truly anonymous. Whether this is an oversight or state actors have made the outcome a reality, I have no idea. Probably a bit of both.

  • megous 19 hours ago

It would be ridiculous for Cloudflare to discuss privacy issues. ;-)

matthewaveryusa 19 hours ago

There's a section missing on the inclusion proof and what exactly the clients will be getting.

If I understand this correctly, each CA publishes a signed list of landmarks at some cadence (weekly).

For the certs you get the landmark (a 256-bit hash) and the hashes along the Merkle path to the leaf cert's hash. For a landmark that contains N certs, you need to include log2(N) * hash_len bytes and perform log2(N) hash computations.

For an MTC signature that uses a 256-bit hash and N = 1 million, that's about 20*32 = 640 bytes.

Is this the gist of it?

I'm really curious about the math behind deciding the optimal landmark size and publishing cadence.

  • mcpherrinm 19 hours ago

    Yeah. From the RFC draft:

    > If a new landmark is allocated every hour, signatureless certificate subtrees will span around 4,400,000 certificates, leading to 23 hashes in the inclusion proof, giving an inclusion proof size of 736 bytes, with no signatures.

    https://davidben.github.io/merkle-tree-certs/draft-davidben-...

    That's assuming 4.4 million certs per landmark, a bit bigger than your estimate.

    There's also a "full certificate" which includes signatures, for clients who don't have up-to-date landmarks. Those are big still, but if it's just for the occasional "curl" command, that's not the end of the world for many clients.
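
    A quick sanity check of both numbers, assuming ceil(log2(N)) sibling hashes of 32 bytes each:

        import math

        def proof_size(n_certs: int, hash_len: int = 32) -> tuple[int, int]:
            # Sibling hashes and total bytes for an inclusion proof over n_certs leaves
            depth = math.ceil(math.log2(n_certs))
            return depth, depth * hash_len

        print(proof_size(1_000_000))   # (20, 640)  -- the parent comment's estimate
        print(proof_size(4_400_000))   # (23, 736)  -- the draft's hourly landmark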

some_bird 5 hours ago

We could also just get rid of the WebPKI entirely?

But you still need a public key for TLS? Well, just put it in DNS!

And assuming your DNS responses are validated by DNSSEC, it would be even more secure too. You'd be closing a whole lot of attack vectors: from IP hijacks and server-side AitM to CA compromises. In fact, you would no longer need to use CAs in the first place. The chain of trust goes directly from your registrar to your webserver with no third party in between anymore. (And if your registrar or webserver is hacked, you'd have bigger problems...)
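
For reference, this is roughly what the existing DANE mechanism (RFC 6698) looks like: a TLSA record pinning the SHA-256 of the server's public key in DNS (hostname and hash value below are placeholders):

    ; usage 3 = DANE-EE, selector 1 = SPKI, matching type 1 = SHA-256
    _443._tcp.www.example.com. IN TLSA 3 1 1 (
        0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef )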

cryptonector 19 hours ago

I think MTC is best described as a new signature algorithm for signing certificates where the value is a Merkle tree inclusion proof. This is quite clever. I like it.

bawolff 14 hours ago

Has the day finally arrived where "blockchain technology" is actually useful for something?

(I know it's controversial what a blockchain even is, but this seems sufficiently close to how cryptocurrencies work to count.)

  • snowwrestler 8 hours ago

    It’s probably more accurate to say that cryptocurrencies and this approach to TLS certs share common ancestors.

  • layer8 7 hours ago

    Merkle trees have been used in the cryptographic archiving space for a long time (see RFC 4998 for example), they don’t equate to blockchains.

itopaloglu83 20 hours ago

Here’s what I’m not following in general about the post-quantum encryption studies.

Don’t we already use the certificates just to negotiate the final encryption keys? Wouldn’t a quantum computer still crack the agreed-upon keys without the exchange details?

  • mcpherrinm 20 hours ago

    Yes, the rest of the cryptography needs to be PQ-secure as well.

    But that's largely already true:

    The key exchange is now typically done with X25519MLKEM768, a hybrid of the traditional x25519 and ML-KEM-768, which is post-quantum secure.

    The exchanged keys are typically AES-128, AES-256, or ChaCha20 keys. These are likely to be much more secure against quantum computers as well (while they may be weakened, it is likely we have plenty of security margin left).

    Changing the key exchange or transport encryption, however, is much, much easier, as it's negotiated and we can add new options right away.

    Certificates are the trickiest piece to change and upgrade, so even though Q-day is likely years away still, we need to start working on this now.

    Upgrading the key exchange has already happened because of the risk of capture-now, decrypt-later attacks, where you sniff traffic now and break it in the future.
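
    Conceptually the hybrid looks like this (placeholder secrets stand in for the real X25519 and ML-KEM outputs; the actual TLS construction feeds the concatenated secrets into the handshake key schedule rather than a bare hash):

        import hashlib, os

        # Stand-ins for the two shared secrets a real handshake would produce
        x25519_secret = os.urandom(32)  # classical ECDH share
        mlkem_secret = os.urandom(32)   # post-quantum KEM share

        # An attacker must break BOTH primitives to recover the session key
        session_key = hashlib.sha256(x25519_secret + mlkem_secret).digest()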

  • NoahZuniga 20 hours ago

    While quantum computing weakens AES encryption, AES-256 can't be cracked by quantum computers.

    • MangoToupe 20 hours ago

      Not practically, anyway, and even with absurd advances we can just grow the key sizes.

  • mtoner23 20 hours ago

    No, the agreed-upon keys are symmetric encryption keys (for a cipher like AES), and we don't have any reason to believe that encryption is easier to break with a quantum computer.

  • throwaway89201 19 hours ago

    > Don’t we already just use the certificates to just negotiate the final encryption keys?

    No; since forward-secret key agreement became the norm, the certificate's private key isn't involved at all in the secrecy of the session keys. The private key only proves the authenticity of the connection / the session keys.
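
    A minimal sketch of that separation, using the pyca/cryptography package: per-connection secrecy comes entirely from ephemeral keys, while the certificate's long-term key would only sign the handshake transcript.

        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

        # Fresh ephemeral keys for this connection; their shared secret
        # becomes the session keys...
        client_eph = X25519PrivateKey.generate()
        server_eph = X25519PrivateKey.generate()
        shared = client_eph.exchange(server_eph.public_key())

        # ...the certificate's private key never touches `shared`; it only
        # proves identity by signing the handshake transcript.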

jokoon 21 hours ago

Are we already talking about attackers having access to quantum computers?

I could see government agencies with big budgets having access to them, but I don't see those computers becoming mainstream.

Although I could see China having access to it, which is a problem.

  • mcpherrinm 20 hours ago

    The migration here is going to be long.

    Chrome and Cloudflare are doing an MTC experiment this year. We'll work on standardizing over the next year. Let's Encrypt may start adding support the year after that. Downstream software might start deploying support for MTCs the year after that. People using LTS Linux distros might not upgrade software for another 5 years after that. People run out-of-date client devices for another 5 years too.

    So even in that timeline, which is about as fast as any internet-scale migration goes, it may be 10-15 years from today for MTC support to be fully widespread.

  • mtoner23 20 hours ago

    The fear is that attackers are recording conversations today in the hope that they can crack the encryption when we do have quantum computers in a few years.

    • mcpherrinm 20 hours ago

      Capture-now/decrypt-later isn't really relevant to certificates, which mostly exist to defend against active MITM. The key exchange algorithms need to be PQ-secure for CN-DL, but that has already happened if you have an up-to-date client and server.

  • tptacek 21 hours ago

    No. Nobody serious that I know of thinks Q-day has occurred or will occur in 2025. The more typical question is whether we're 10, 50, or 100 years away from it.

    • phire 18 hours ago

      I’m of the opinion that it’s unlikely to happen within 50 years.

      But I still think it’s a good idea to start switching over to post-quantum encryption, because the lead time is so high. It could easily take a full 10 years to fully implement the transition and we don’t want to be scrambling to start after Q-day.

      • sunsetonsaturn 9 hours ago

        > 10 years

        Moving from SHA-1 to SHA-2 took ~20 years - and that's the "happy path", because SHA-2 is a drop-in replacement.

        The post-quantum transition is more complex: keys and signatures are larger; KEM is a cryptographic primitive with a different interface; stateful signature algorithms require special treatment for state handling. It can easily take more than 20 years.

  • HexDecOctBin 20 hours ago

    > Although I could see China having access to it, which is problem.

    I can see USA having access to it, which is also a problem. Or any other government.

  • rynn 21 hours ago

    Seems like you answered your own question

  • squigz 17 hours ago

    How is China having access to it any different than, say, America?

dur-randir 15 hours ago

Keeping the internet locked, you mean?

nacozarina 8 hours ago

Share your design, sure, but there is no reason to suggest Q-day is imminent; the fear-mongering is utterly absurd.

rvz a day ago

[flagged]

  • tomrod a day ago

    ... Why is this the first place to go?

oasisbob 16 hours ago

Regardless of the strengths of this, I can't read this slop. A third of the way in, and:

> Instead of expecting the client to know the server's public key in advance, the server might just send its public key during the TLS handshake. But how does the client know that the public key actually belongs to the server? This is the job of a certificate.

Are you kidding me? You don't know the audience of an article at the nexus of certificate transparency and post-quantum cryptography well enough to understand that this introduction to PKI isn't required?

Know your audience. Turning over your voice to an AI doesn't do that for you. It will waste everyone's time on thousands of words of vapid nonsense.

  • jgrahamc 8 hours ago

    When I was the editor in chief of the Cloudflare blog we had a very, very strong mission to "educate, educate, educate" our readers. That often meant including details that someone versed in the field would skip over or find too basic. After all, we were writing for a general technical audience interested in learning about a topic.

    So, it's natural that some readers would find parts over-explanatory, but the hope was that they could read past those bits and the less educated reader would come away having learnt something new.

  • flufluflufluffy 7 hours ago

    I for one welcomed the refresher as I don’t often deal with the intricacies of the public key infrastructure, even though yes I am a programmer and make websites.