
 




Ask Slashdot: Could We Not Use DNS For a Certificate Revocation Mechanism?

Long-time Slashdot reader dhammabum writes: As reported in a recent Slashdot story, starting in September we system admins will be forced into annually updating TLS certificates because of a decision by Apple, abetted by Google and Mozilla. Supposedly this measure somewhat rectifies the current ineffective certificate revocation list system by limiting the use of compromised certificates to one year... But rather than accept this pathetic measure, could we instead use DNS to replace the current certificate revocation list system?

Why not create a new type of TXT record, call it CRR (Certificate Revocation Record), that would consist of the Serial Number (or Subject Key ID or thumbprint) of the certificate. On TLS connection to a website, the browser does a DNS query for a CRR for the Common Name of the certificate. If the number/key/thumbprint matches, reject the connection. This way the onus is on the domain owner to directly control their fate. The only problem I can see with this is if there are numerous certificate Alternate Names — there would need to be a CRR for each name. A pain, but one only borne by the hapless domain owner.
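The check the submitter describes could be sketched roughly like this (a toy model: the dict stands in for real DNS TXT responses, and the `_crr.` label and record format are assumptions, not an existing standard):

```python
# Toy sketch of the proposed CRR check. A dict stands in for real DNS
# here; an actual client would issue a TXT query for "_crr." plus the
# certificate's Common Name and parse the answer.
def is_revoked(common_name, thumbprint, crr_records):
    """Return True if the cert's thumbprint appears in the (simulated)
    CRR record published under _crr.<common_name>."""
    revoked = crr_records.get("_crr." + common_name, [])
    return thumbprint in revoked

# Simulated zone: the domain owner has published one revoked thumbprint.
records = {"_crr.example.com": ["ab:cd:ef:12"]}

print(is_revoked("example.com", "ab:cd:ef:12", records))  # True  -> reject
print(is_revoked("example.com", "99:88:77:66", records))  # False -> proceed
```

As the submission notes, each Alternate Name would need its own record under this naming scheme, since the lookup key is derived from a single name.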

Alternatively, if Apple is so determined to save us from ourselves, why don't they fund and host a functional CRL system? They have enough money. End users could create a CRL request via their certificate authority who would then create the signed record and forward it to this grand scheme.

Otherwise, are there any other ideas?
This discussion has been archived. No new comments can be posted.

  • by Way Smarter Than You ( 6157664 ) on Sunday July 05, 2020 @10:41AM (#60263548)

I understand the theory, but out here in the real world expiring certs just leads to panic the day they expire; the site effectively goes down and customers slam the help desk.

    One year certs will just make this happen more often.

    How about certs that can auto-update? Or just say fuck it and not expire them and let the chips fall? Let anyone who wants or needs to use a short duration cert do so and leave the rest of us alone. Why do I need to update the cert on our zero content marketing site? Why does it even need to be "protected" by ssl at all? It doesn't but we pay the fee and the hassle to keep browser makers happy. It has zero security or privacy benefit to our customers.

    • by thsths ( 31372 ) on Sunday July 05, 2020 @10:47AM (#60263560)

"out here in the real world expiring certs just leads to panic"

How come everybody manages to file taxes on time, renew car insurance on time, pay their mobile contract on time, etc.?

      But websites still do not manage to update their certificate, one of the main defences against malicious attacks, on time?

      Your "real world" is a place of shoddy business practices, and you should not be allowed to handle any sensitive data.

How come everybody manages to file taxes on time, renew car insurance on time, pay their mobile contract on time, etc.

Those things you mentioned? Those things don't happen either.

And those inactions do have results. If you don't pay your cell phone bill, you don't get service; if you don't re-sign your contract, you get evicted.

          1 year certs can still be done manually. Not sure why it's such a pain.

        • Those things don't happen either.

          That's true. And there are consequences when they don't.

          I'd say that having some help desk calls for an expired cert is getting off light. And more than reasonable.

          • by sjames ( 1099 )

            Yes, you have to file for an automatic extension for the first one and you get a late notice for the other two. Your lights don't go out at midnight on the day after the power bill was due.

      • Welcome to the real world.

        Good luck putting an unnecessary process consistently in place at millions of sites.

        Again, I ask, why does my marketing site need a cert at all?

        What value is there in a one year vs three year?

        We have real work to do. This is bullshit and a waste of time for most sites.

        I'm so glad my favorite web comics keep their certs up to date or the sky would fall if evil forces used a man in the middle attack to see which comic episode I'm on.

If your job is not web hosting, why would you bother with it at all? Our host manages all our certs for the website, automated via cPanel. Internally things are much more shoddy for things like the phones or other things that just need a management interface, but that is a different beast.

        • Again, I ask, why does my marketing site need a cert at all?

          Because you never know when that site might be expanded, or get referenced on another. Make encryption the norm and you won't even need to think about it.

          What value is there in a one year vs three year?

          The whole reason for this /. article is certificate revocation not working well. That is the reason to shorten the lifetime so that if a certificate leaks then the damage is time-limited. No, simply issuing a new certificate for the affected names doesn't help. Clients will still trust the old certificate until it expires.

          We have real work to do. This is bullshit and a waste of time for most sites.

          The true waste of time is not

        • by mamba-mamba ( 445365 ) on Sunday July 05, 2020 @03:16PM (#60264480)

          It is far worse than them spying on your viewing habits. Once they successfully run a man-in-the-middle attack against you they can serve up comics which differ in subtle ways from the original, gradually changing your views on politics and other things. They may even be able to program you to wear masks in crowded public places or accept 5G cell towers.

        • by Bengie ( 1121981 )
          1) Unencrypted traffic has many attack vectors. The mere act of supporting unencrypted HTTP is a constant danger.
          2) Self-signed certs are dangerous to trust.


Regardless of your reasoning, 1+2 means all sites should be encrypted and use a cert. We're not trying to make all sites trustworthy, we just want to make sure that the sites we're connecting to are the sites we think we're connecting to.
          • 1) Unencrypted traffic has many attack vectors. The mere act of supporting unencrypted HTTP is a constant danger.

Given you are bypassing a complex TLS stack and browser misfeatures that only work with TLS sessions, the attack surface of unencrypted sessions appears to me to be significantly LESS, especially after factoring in users not suffering the delusion of "secure" padlock icons displayed by the browser.

            The boogeyman in the wire is far from the most salient threat to Internet users security and privacy. Everyone talks about the vulnerable hotspot luser and evil ISPs yet these things have never actually been a signifi

            • by Bengie ( 1121981 )
Fat fingering a URL? I haven't typed in a URL in well over a decade and I don't know anyone who does. Except you.

Most people have three primary use cases: 1) connecting to a site they know; 2) connecting to a site they found in a search; 3) a third-party site referenced from a secure site you're visiting.
I think we all understand case 1. But case 2 is still subject to malicious interception, which can happen via a compromised system on the local network. Case 3 is the most dangerous. If you're on a website y
Fat fingering a URL? I haven't typed in a URL in well over a decade and I don't know anyone who does. Except you.

                https://www.tripwire.com/state... [tripwire.com]

                https://www.theregister.com/20... [theregister.com]

                https://krebsonsecurity.com/20... [krebsonsecurity.com]

Most people have three primary use cases: 1) connecting to a site they know; 2) connecting to a site they found in a search; 3) a third-party site referenced from a secure site you're visiting.

                Or the site you were told over the phone or think you remembered or someone you think you know sent you a link to in an email.

If you're on a website you trust and it references an external resource, that resource can be intercepted and replaced with something malicious, which could scrape secret information from your current site.

                This is completely backwards. Say a website I trust allows me to login using a secure authentication protocol. For the sake of argument TLS-SRP which is available in Apache, cURL and many popular TLS stacks.

                The success of authentication establishes two things:

                1. The site knows our s

            • by skids ( 119237 )

Given you are bypassing a complex TLS stack and browser misfeatures that only work with TLS sessions, the attack surface of unencrypted sessions appears to me to be significantly LESS, especially after factoring in users not suffering the delusion of "secure" padlock icons displayed by the browser.

              You're opening the entire code surface of browser content processing to potentially forged data. That's a much bigger codebase than TLS.

              Everyone talks about the vulnerable hotspot luser and evil ISPs yet these things have never actually been a significant source of ownage.

              It's the buggy commodity home routers that have increased the viability of MITM dramatically vs the days of dialup.

As to TFA: first, CRLs are cryptographically signed, so you can't just publish hashes in DNS and expect it to work as a secure CRL unless you have a way to get DNSSEC or other cryptographic DNS protections universally deployed on endpoints. Secondly, this problem is

              • You're opening the entire code surface of browser content processing to potentially forged data.

That happens every time you browse, on every site you visit, with everything injected by anything the site owner lets be injected into their sites. A typical website has code in it injected by literally dozens of third parties, none of whom you have ever heard of, know, or have any relationship with whatsoever, and none of whom give a damn about your best interests.

                Yes you could get owned by some 0-day injected by someone running a fake hotspot you happen to connect to and if you had used encryption perhaps you would not have be

          • by sjames ( 1099 )

            With cert pinning (trivial to implement if the browser vendors ACTUALLY cared about security) self-signed certs are fine for most sites.

            • by Bengie ( 1121981 )
              Pinning doesn't work when the site uses ephemeral or different certs per server. Even in the past, I've looked at certs for sites like Google. Load page once, get one cert. Reload the page, get a different one. Certs other than CAs are meant to be ephemeral. Long lived ones were just in demand because of lazy admins.
              • by sjames ( 1099 )

                Cannot confirm, perhaps someone was pulling a fast one on you. Were you doing that from work?

                Pinning doesn't work because browsers don't do it.

                This is a classic story, everyone wants "just a minute of your time". Soon, you notice you're all out of minutes. An ant's mouthful isn't much, but they'll soon make that slice of cake disappear.

        • Comment removed based on user account deletion
The cert at least in theory proves to the consumer of your site that your site is your site. Without a cert the consumer has no way of knowing if a man-in-the-middle attack happens. Say the man in the middle decides to put up a popup asking the user to subscribe, reset their password, whatever. Instead of your site being free, it is now a paid site, except the money's going to the hacker, and your customers will be pissed at you as well as the hacker for letting it happen.

          It's all kabuki anyways because do we real

      • Car insurance and mobile phone bills are typically on auto-pay; can I do that for my certs, too - and not have to worry about updating anything on my end (like my car insurance and phone bill)?

        Taxes are enforced with a threat of prison - a bit more extreme than a site down for a day or two. And even then, at least, in the US, over 10 million people every year [filelater.com] ask for an extension of several months before actually filing. Can I do that for my cert, too?

      • by NFN_NLN ( 633283 )

        > How comes that everybody manages to file taxes on time

        Wesley Snipes laughing...

    • by MatthiasF ( 1853064 ) on Sunday July 05, 2020 @10:49AM (#60263566)

      You can auto-update certs. Let's Encrypt does it.

Just use software like Certbot, create some scripts that send certificate update requests to your certificate authority, or use one of the many certificate management servers.

      And you don't need to wait until the expiration date to renew. You can renew a cert at any time.

      Any admin that waits until the last day is asking for trouble.
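In practice that usually means scheduling renewal attempts long before expiry; a typical Certbot setup looks something like this (an illustrative crontab fragment; the schedule and the nginx post-hook are assumptions about the deployment, not requirements):

```shell
# Illustrative crontab entry: attempt renewal twice a day. certbot
# itself skips any certificate that is not yet close to expiring,
# so running this often is cheap and safe.
17 3,15 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```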

      • Indeed. Before I started using LetsEncrypt I used StartSSL for a few years. Create a cert, set some sort of method of reminding myself to update in 11 months. Same with domain names - renew it, set reminder 11 months out.

      • by sjames ( 1099 )

        Excellent! I have a smart switch that could do with a new cert, can you point me to the certbot download for smart switches?

      • You can only automatically update Domain Validated certificates [wikipedia.org]. Let's Encrypt's certificates are only valid for 3 months for this reason. The certificates the OP is referring to are Organization Validation [wikipedia.org] and Extended Validation Certificates [wikipedia.org]. Most people don't realize that there is a three tiered system which indicates how heavily the applicant is vetted and exactly how much trust should be placed in the third party vouching for the applicant. These latter two types of certificates require the validation
    • by Opportunist ( 166417 ) on Sunday July 05, 2020 @11:05AM (#60263612)

      Let's imagine for a moment you don't use https on your marketing site and I get control over the DNS server of some user and redirect your www.yourmarketingsite.com to a bestiality porn site. Now your customer sees doggies playing happy games with women (and men) on www.yourmarketingsite.com.

You sure will be the talk of the afternoon at that customer's meeting, I can assure you of that.

The worst bit is that you can't even avoid this. It wasn't your server that was hijacked by someone trying to cause harm to you; the customer's blunder (or his ISP's) caused you the goodwill hit. Nothing you could do to avoid it, nothing you could improve in your security to not be associated with things you'd rather not be.

      What people very easily overlook is that https doesn't only care about encryption but also about identity. That "the connection is insecure, the certificate doesn't match" warning isn't as much about someone being able to eavesdrop on your connection, what's even more important is that this ensures that you're actually talking with whoever you think you're talking to.

      • I get control over the DNS server of some user and redirect your www.yourmarketingsite.com to a bestiality porn site.

That's a really dumb straw man. Your scenario could happen regardless of whether they had SSL enabled on their website or not. If the baddies have control of their servers, they can put up a Let's Encrypt cert on their server, and the whole world will think your marketing website has changed its business model.

      • Comment removed based on user account deletion
        • They do not prove ownership, they only prove that the page pretending to be www.abcd.com is actually the site the certificate was issued for. Whether it belongs to who you believe it belongs to is outside the scope of certificates. What it does prevent, though, is DNS spoofing because either the certificate does not match the page or there is no certificate altogether.

          HSTS only works after the first visit, and the only thing it really does is prevent you from accessing a page's unencrypted version after the

          • by micheas ( 231635 )

HSTS works all the time if you have submitted your site to the HSTS preload list and a new version of the web browsers has been released since you submitted.

            However, it does nothing to stop a hijacked nameserver that is validating certificates for the new website.

          • Comment removed based on user account deletion
It's not just that; this idea is about as old as the web. In 1996 I was at a security conference where people were talking about using the DNS for PKI. That was a quarter of a century ago.

        If that was going to work, it would have worked by now.

    • One year certs will just make this happen more often. . . .How about certs that can auto-update? Or just say fuck it and not expire them and let the chips fall?

      Do you work as an IT admin?

      Why do I need to update the cert on our zero content marketing site?

You don't. Your admin does. There is a complexity of administration and infrastructure in that "zero content" marketing site. I mean you seem to think marketing site == dedicated standalone solo web server. In reality that page is normally part of a cluster of web servers. Add to that the complexity of how cloud services are required for different functions, some of which need to be secured.

      Why does it even need to be "protected" by ssl at all?

Generally, if one part of your web infrastructure needs to be protected, they should all be

    • by ahodgson ( 74077 )

How about just monitoring your certs? Geez. Nagios has a check_ssl_cert plugin that works fine.
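The core of such a check is trivial to script. A minimal sketch (the date string mimics the `notAfter` format that Python's `ssl.getpeercert()` returns; a real monitor would pull it from the live server and alert below some threshold):

```python
from datetime import datetime

def days_until_expiry(not_after, now):
    """Parse a certificate's notAfter string (the format returned by
    ssl.getpeercert()) and return how many whole days remain."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - now).days

# Fixed "now" so the example is deterministic.
now = datetime(2020, 7, 5)
print(days_until_expiry("Sep  1 00:00:00 2020 GMT", now))  # 58
```

A monitoring plugin would simply compare the result against warning/critical thresholds (say, 30 and 7 days) and exit accordingly.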

    • by Junta ( 36770 )

      Actually, shorter term cert expiry may help cert expiry problems.

      This generally happens because someone sets up the cert and then totally stops thinking about it, because that's years in the future.

      With a shorter expiry, the annoyance forces you to implement a more robust strategy.

As to why: generally speaking, people leave their private key in the clear on disks somewhere that end up in the dumpster or otherwise get recovered by someone, either online or offline. There are a ton of risks that are frequently ignor

    • What are you talking about?

SSL certs are free, and setting up auto-renew takes less than 60 seconds with the Let's Encrypt bot.

    • by EvilSS ( 557649 )

      One year certs will just make this happen more often.

      At first, I suspect so. However, as admins get used to the cadence, I suspect it will actually drop over time. One problem is with multiyear certs it's really easy to forget about them. Doing them every year, it will become more routine.

    • In the real world this is all automated. Competent staff will use automated systems to handle certificate issuance and renewal. They will also rotate certificates a good amount of time before the expiry date just in case. Putting SSL offload on to LB devices has helped with this as they generally expose an API that can be used to do life-cycle management of all SSL certs in a centralized place.
    • There are auto-updating certs, even free ones with LetsEncrypt. I'd be strongly against indefinite/infinite certs though - they have expirations because it takes a known time to crack them, it's not about protecting the site, it's about protecting the end user. I'd much rather not have to inspect the cert of every site I visit (to say nothing of the fact most people lack that aptitude) and just see if it's secure or not before entering my credit card information than not.
  • Alternate Names (Score:4, Insightful)

    by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Sunday July 05, 2020 @10:50AM (#60263572)

    I do not see why multiple DNS entries are required for Alternate Names. Just look up _crr.[common name] TXT to get the list of revoked thumbprints. (Why TXT and not CRR? Because adding new types requires upgrading all DNS middleboxes in the entire Internet.)

    The problem with the proposal is that (almost) the whole point of TLS is to protect you from DNS compromise. If DNS compromise is not a concern, we can ditch the whole TLS thing and just stick the public key in _pubkey.[common name]. Goodbye and good riddance, CA leeches!

    • Comment removed based on user account deletion
      • by amorsen ( 7485 )

        You are a client DNS hijack away of your certificate becoming untrusted.

        I do not particularly see why that is a problem. Sure, it is an easy DoS on that client. However, a much easier DoS when you have hijacked the client DNS is to not answer the AAAA query in the first place.

    • The problem with the proposal is that (almost) the whole point of TLS is to protect you from DNS compromise. If DNS compromise is not a concern, we can ditch the whole TLS thing and just stick the public key in _pubkey.[common name]. Goodbye and good riddance, CA leeches!

I think sticking the public key in _pubkey is a good idea. I'm just uncomfortable with the size of the response. Some kind of chunking scheme where you request a few bytes at a time... unless using TLS, TCP, or UDP with DNS cookies.

I know detractors and naysayers will say DNS is insecure. To that I have an open question for all who are willing to ponder the following:

Since when has this ever prevented even a single solitary CA from relying completely on automated systems that depend on insecure requests and re

  • So, if you somehow block the DNS lookup, the cert is good? If you wanted to include DNS as a validation mechanism, it would seem better to require an affirmative response, not the lack of a negative response.

It could be designed so that the DNS server would always have a response. The response might be a message saying that there are no revoked certs, or a list of revoked serials. The browser would require that the DNS server respond with an answer, and that the website cert is not in the list. If the DNS response is blocked, then the website should not load, or should warn the user that the certificate cannot be validated.
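That fail-closed behaviour can be sketched in a few lines (a toy model: `None` stands for a blocked or missing DNS response, a list for a successful answer):

```python
def cert_ok(serial, dns_response):
    """Fail closed: a missing DNS response is a validation failure,
    not an absence of revocations. dns_response is None when the
    query was blocked, else a (possibly empty) list of revoked
    serial numbers."""
    if dns_response is None:
        return False  # no answer: cannot validate, so refuse or warn
    return serial not in dns_response

print(cert_ok("1234", []))        # True:  server answered, nothing revoked
print(cert_ok("1234", ["1234"]))  # False: serial is revoked
print(cert_ok("1234", None))      # False: lookup blocked, fail closed
```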
  • by crow ( 16139 ) on Sunday July 05, 2020 @10:58AM (#60263598) Homepage Journal

    If DNS is secure enough to use for revocations, then it's good enough to use for distributing the certificates in the first place. So just get rid of revocation, expiration, and certificate authorities. We just need to switch everyone over to secure DNS, which we should do anyway.

    Of course, the current certificate authorities will do anything they can to stop this.

  • You cannot rely on DNS as controlled by the Subject only. The Issuer should be able to revoke a certificate and it does not control the corresponding DNS.
  • CRLite (Score:5, Informative)

    by bowlinearl ( 896016 ) on Sunday July 05, 2020 @11:23AM (#60263672)
    The idea of using DNS to distribute revocations has been explored in the academic literature [forth.gr] (no, I'm not an author on this paper). The idea of distributing revocations through DNS is related to the idea of distributing TLS key material through DNS, which is the goal of DANE.

CRLite [ieee.org] is a system that preemptively pushes all revocation information to TLS clients such as browsers (FULL DISCLOSURE, I'm an author on this). CRLite works because all valid TLS certificates are publicly known in the Certificate Transparency logs, which means all revocations can be crawled. CRLite crawls them, packages the information in a highly compressed data structure, and then pushes that to clients. Mozilla has announced that they are adopting CRLite in Firefox (see here [mozilla.org], here [mozilla.org], and here [mozilla.org]). CRLite is a better solution than CRLs and OCSP, at least until (1) we settle on a world where all certificates are extremely short-lived, say 1 week, or (2) OCSP Must-Staple is widely deployed by certificate owners and supported by TLS clients (but don't hold your breath, we're not there yet [acm.org], FULL DISCLOSURE I'm an author on this too).
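The compression idea can be illustrated with a toy Bloom filter (note: CRLite's actual data structure is a cascade of filters that eliminates false positives, which this single-filter sketch does not do):

```python
import hashlib

class ToyBloom:
    """Tiny Bloom filter: compact set membership over a fixed bit array.
    Unlike CRLite's filter cascade, a lone filter like this can return
    false positives."""
    def __init__(self, bits=1024, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, item):
        # Derive several bit positions from independent hashes of the item.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, item):
        for p in self._positions(item):
            self.array[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, item):
        return all(self.array[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bloom = ToyBloom()
bloom.add("serial-0xdeadbeef")  # a hypothetical revoked certificate serial
print(bloom.maybe_contains("serial-0xdeadbeef"))  # True
```

A million revoked serials stored this way take a few hundred kilobytes rather than the megabytes a raw list would need, which is what makes shipping the whole revocation set to every client plausible.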
CRLite is a system that preemptively pushes all revocation information to TLS clients such as browsers (FULL DISCLOSURE, I'm an author on this).

      Can you give a short description of how it avoids DoSes via forged revocations?

(Both fakes in the things it crawled and forgeries of its notifications.)

CRLite is a system that preemptively pushes all revocation information to TLS clients such as browsers (FULL DISCLOSURE, I'm an author on this). CRLite works because all valid TLS certificates are publicly known in the Certificate Transparency logs, which means all revocations can be crawled. CRLite crawls them, packages the information in a highly compressed data structure, and then pushes that to clients.

      The databases would be even smaller if revocation lists were not chock full of administrative bullshit.

      People should have an option to get revocations that are just list of certs mistakenly issued to the wrong party or stem from an actual security breach.

  • Let's not hack up DNS any more than we already have. TLS libraries would then need to bundle or depend on DNS resolving libraries and handle all of DNS's weirdness.

    We already have OCSP, which seems to do what you're talking about, but is actually designed around what you're asking for - a replacement for CRLs. Let's improve OCSP if we need to, but let's not be lazy with Internet protocols. There's enough of that as it is...

  • I intercept your DNS queries and respond with "Revoked" for every certificate. Your system now bans every single certificate on the internet, and you can no longer connect to anyone.

The idea of a clearinghouse for CRLs that could be accessed in real-time is good, but it needs to be a secure channel.

    • by Bengie ( 1121981 )
Intercepting my DNSSEC over TLS 1.3 and modifying it? Yeah, it's not common yet, but it (DoH/DoT) is getting support: Android, Windows, built right into browsers. Either fully supported or in beta testing, so soon(tm).
  • Comment removed based on user account deletion
  • Many browsers reside behind http proxies. Unless they are also using DoH and assuming that is not blocked for some fascist-like reason or another, you'd basically be crippling the browsing experience.

DNS is not really a good solution for this, but (stapled) OCSP [wikipedia.org] might be, even if not without its flaws.

    • How do you even get to the website in the first place if DNS is not available?

      If you can do DNS lookup to get to the website, you can also do DNS lookup to get the TXT records. DNS does not travel over HTTP.

      Also, blocking DoH has its very valid reasons, like companies doing split DNS, and companies caring about their IT security.
  • It was proposed to use SRV DNS records for discovery of PKI resources in 2006. The revocation list would not be directly embedded in DNS but the URL for the revocation mechanism would be (OCSP URI).

    https://tools.ietf.org/html/rf... [ietf.org]

More recently a more secure and elaborate system was proposed using DNSSEC. This would actually make CAs almost unnecessary and make DNS the authoritative source for identity verification.

    https://tools.ietf.org/html/rf... [ietf.org]

    FYI

A revocation mechanism exists, and it works. The problem is one of identification: before any certificate can be revoked it needs to be identified as fraudulent. The shortened validity period does little more than reduce the time a fraudulent certificate remains *unidentified* in circulation.

  • What I want to know is why DANE (which is supposed to store information about the certificate in a way that avoids problems with CRLs and rogue CAs) isn't more widely adopted.

    If I understand DANE properly, it holds a fingerprint for the current certificate (and secures it with DNSSEC). If a certificate is revoked, the DANE record is updated and no longer points to the revoked certificate. And if there is a rogue or bogus certificate created (e.g. via a hacked CA or via some government ordering a CA in their

    • by Hizonner ( 38491 )

      Because the idiots who maintain browsers can't imagine relying on anything that doesn't run over HTTP, and have repeatedly refused to put it into their core systems.

      DANE can be used to replace revocation. It can be used to replace the whole damned idiotic CA trust infrastructure. It's way more effective than "certificate transparency" or any of the other fool Rube Goldberg hacks that people keep creating to try to work around the evils of that giant CA trust list.

      But there are a lot of fools out there. Ther

      • But there are a lot of fools out there. There also seem to be vested CA interests deliberately spreading FUD against DANE. There are also a lot of people also fighting the DNSSEC deployment that you need to support DANE (often by ignoring the fact that DANE exists and then claiming that DNSSEC doesn't add value because "securing IP lookups isn't enough").

        I'm one of those fools. Fuck DNSSEC until DNS transport fixes are actually deployed. When that happens I'll be a cheerleader for it. Right now it's not worth the price of insane levels of DDOS amplification.

        • by Hizonner ( 38491 )

          You've found the one actually sane issue.

          But any DNS over UDP is an amplifier. DNSSEC isn't different in kind. For that matter, any stateless query protocol over UDP has the same problem.

          If you want to kill DNS over UDP, I will be cheering you on (please just don't stick HTTP into the mix in the process). And if you want to find ways of cutting down on the number of TCP handshakes involved in doing so I'll be cheering you on even more (just try not to centralize the DNS too much).

          The real root caus

          • But any DNS over UDP is an amplifier. DNSSEC isn't different in kind

            Agreed. A pinhole and a 50ft gash both let water into a boat.

            For that matter, any stateless query protocol over UDP has the same problem.

            There is a cheap easy solution... https://www.ietf.org/rfc/rfc78... [ietf.org]

            If you want to kill DNS over UDP, I will be cheering you on (please just don't stick HTTP into the mix in the process).

            This is nice too https://www.ietf.org/rfc/rfc80... [ietf.org]... Don't care so long as a fix gets deployed. It could be DNS over HTTP over TLS over IPoXML.

            The real root cause of UDP query amplification DDoS, DNS and otherwise, is of course in the IP routing layer, which really should be guaranteeing the provenance of packets it delivers, but does not. Asymmetric routing seemed like a good idea at the time. Not sure what to do about it now; the network operators don't want to move.

            The vast majority of operators already have.

            • by Hizonner ( 38491 )

              The cookies look good, but they do still force somebody to keep some state. Just saying. Also, they're only for DNS, so the next time somebody decides to dream up some "high performance blah retrieval" protocol, and doesn't think to include something like that, you get a new crop of amplifiers popping up.

              The vast majority of operators already have.

              Have people started doing RPF in the Internet core now? Because what you need to really solve DDoS is truly pervasive enforcement of symmetric routing, throughout

              • The cookies look good, but they do still force somebody to keep some state. Just saying

DNS servers are issuing stateless cookies. Verification occurs without the server having to remember anything about any client. Clients need to remember theirs if they don't want to keep burning round trips.
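The stateless trick is essentially an HMAC over the client's identity, in the spirit of RFC 7873 (a sketch only: the field layout, sizes, and secret handling here are simplified, not the wire format):

```python
import hashlib
import hmac

# Assumption for illustration: one long-lived secret per server,
# rotated periodically in a real deployment.
SERVER_SECRET = b"rotate-me-periodically"

def server_cookie(client_ip, client_cookie):
    """Derive a server cookie from the client's IP and client cookie.
    The server keeps no per-client state: it re-derives and compares
    the cookie on every request."""
    msg = client_ip.encode() + client_cookie
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()[:8]

def verify(client_ip, client_cookie, presented):
    return hmac.compare_digest(server_cookie(client_ip, client_cookie),
                               presented)

c = server_cookie("192.0.2.1", b"\x01\x02\x03\x04\x05\x06\x07\x08")
print(verify("192.0.2.1", b"\x01\x02\x03\x04\x05\x06\x07\x08", c))    # True
print(verify("198.51.100.7", b"\x01\x02\x03\x04\x05\x06\x07\x08", c))  # False
```

Because the cookie is bound to the source address, a spoofed-source query cannot present a valid one, which is what removes the amplification incentive without per-client server state.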

                Also, they're only for DNS, so the next time somebody decides to dream up some "high performance blah retrieval" protocol, and doesn't think to include something like that, you get a new crop of amplifiers popping up.

It's not just the availability of the network to punt packets that is at play; it is the availability of the DNS infrastructure. Significantly increasing the incentives for it to be attacked, when that can be easily avoided, is irresponsible. It REMAINS irresponsible regardless of any action or inaction of any other party for any other t

  • Why bother re-inventing the wheel? There's already a well-established and simple protocol for doing this - OCSP [wikipedia.org]. CRLs can also be used - all trusted CAs are required by CAB forum rules to have OCSP services available and CRLs available over HTTP and/or LDAP.
