egberts1 an hour ago

Yeah, and whatever you do, don't deploy SSHFP DNS records unless the zone they live in is signed with DNSSEC AND ... AND your SSH client is actually using DNSSEC-validated query responses (either through a dedicated validating resolver or a specially configured /etc/resolv.conf).

Source: https://egbert.net/blog/tags/sshfp.html
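
A rough sketch of the moving parts, assuming OpenSSH and a DNSSEC-validating resolver (hostnames here are placeholders):

    # on the server: emit SSHFP records to paste into the (DNSSEC-signed) zone
    ssh-keygen -r host.example.com

    # sanity-check that answers come back signed
    dig +dnssec SSHFP host.example.com

    # on the client (~/.ssh/config): only skip the prompt for validated answers
    Host host.example.com
        VerifyHostKeyDNS yes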

  • tptacek 44 minutes ago

    SSHFP is baffling to me. The entire point of the DNS is to make introductions between parties with no preexisting relationship, which is exactly not how an SSH cluster works. SSH already has a (very good) certificate system that solves the same problem. Why would you include the global DNS among your trust anchors!?

zie a day ago

We transfer ACH files (i.e. paychecks) via SSH (SFTP) to several banks. You better believe I check keys. One of the banks forces key rotation every 2-ish years. I absolutely verify it every rotation and delete the old keys.

Occasionally it fails, almost always it's something unexpected happening, but occasionally we catch their errors (verified by connecting from various endpoints/DNS queries/etc). We used to call them all the time whenever that happened. Now we just auto-retry on failure in an hour and that fixes the issue all of the time (so far). We only re-try once and then fail with a ticket. Most of us like our paychecks, so we are pretty good about getting that ticket resolved quickly.

kqr a day ago

I feel like this is unnecessarily reductive. The initial handshake is always fraught with security problems. I struggle to see a scenario in which a bad actor is able to give me the address of a bad machine, yet isn't able to trick me into believing their host key is the correct one.

I would, however, definitely spend effort verifying a host key that changes unexpectedly.

  • chasil a day ago

    The corporate SFTP server that we became mandated to use several years ago presented multiple keys, apparently because it was behind DNS round robin.

    My attempts to convince them to use the same key came to naught, so instead I use one of the IP addresses.

    I could alternatively erase the known_hosts entry on each transfer. That would probably have been preferable.

    I also got a shell on it when I attempted ssh, so you can guess the care that is taken with it.

  • kragen a day ago

    They might be able to spoof your DNS, for example if you're using their Wi-Fi, so you get the wrong IP address for the right hostname, but not your mail server's SSL. You could pass the host key fingerprint across an existing secure connection, such as in email or with ssh to a host that you already have the fingerprint of.
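
    For example (a sketch; the path assumes a stock OpenSSH install): fetch the fingerprint over a channel you already trust and compare it to what the first-connection prompt shows:

        # on the server, reached via something you already trust (console, existing ssh, ...)
        ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
        # 256 SHA256:...  root@host (ED25519)

        # on the client, only answer "yes" to the first-connection prompt
        # if it shows the same SHA256:... string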

    • kqr 7 hours ago

      Sure, but they'd have to do this on the first connection attempt. Rarely do I try to connect to new servers when I'm not using a trusted connection – among other things for this very reason.

    • whatevaa a day ago

      The $5 wrench xkcd applies here.

      • integralid a day ago

        It does not. DNS attacks happen. It won't be used by an APT on you or me, but it may be used as an escalation mechanism inside a company, for example. It's also something a hacked router could do, though I've never heard of that happening, to be fair.

        I was actually personally the victim of such an (unsuccessful) attack on the Tor network. SSH login to my hidden service complained about a wrong fingerprint. The same happened when I tried again. After a Tor restart, the problem disappeared. I assume this was an attempt at SSH MITM by one of the exit nodes?

        • kragen a day ago

          That's possible. I've also had this happen with Wi-Fi captive portals; I assume redirecting all the port 22 traffic to port 22 on the router was an unintentional side effect of redirecting all the port 80 traffic to port 80 on the router.

      • kragen a day ago

        No, it really doesn't, as in most cases where people invoke it.

  • jrochkind1 a day ago

    I'll be honest, I have never spent effort verifying a host key that changed unexpectedly, and at least a few have.

    • kragen a day ago

      I've often called people on the phone and stuff. It depends somewhat on what's at stake. Authenticating users with SSH passwords puts much more at stake than using public keys, since an attacker who can get you to send your unencrypted password to a malicious server once can steal your account; deploying PAKE algorithms (successors to SRP, see https://eprint.iacr.org/2021/1492.pdf) could mitigate that, but I don't think any shipped SSH version has ever supported a PAKE algorithm.

  • amelius a day ago

    We need a new protocol, where installing the OS of a new machine automatically installs a trusted key from an inserted USB drive, so that the machine automatically becomes part of the "enclave".

    Or something like that.

    • MaxMatti a day ago

      The paper does mention that you can have your SSH keys signed by a CA, so in a company the IT staff could configure everybody's OS to only trust SSH keys signed by the organization.
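
      Roughly, with OpenSSH's built-in certificate support (a sketch; names, paths, and the domain are placeholders):

          # IT signs each server's host key with an organization CA
          ssh-keygen -s host_ca -I host.example.com -h \
              -n host.example.com /etc/ssh/ssh_host_ed25519_key.pub

          # sshd_config on each server
          HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

          # one line pushed to every client's known_hosts replaces per-host TOFU
          @cert-authority *.example.com ssh-ed25519 AAAA...contents-of-host_ca.pub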

      • otabdeveloper4 a day ago

        > you can have your ssh keys signed by a ca

        Good idea. That way when your CA private key leaks (the key which we never ever rotate, of course) the bad guys can compromise the whole fleet and not just one server. Bonus points if the same CA is also used for authenticating users.

    • organsnyder a day ago

      This is common in corporate environments.

    • thesuitonym a day ago

      You are under no obligation to use the generated, self-signed key. Most people do because it's "good enough".

  • zenmac a day ago

    Servers should publish their key fingerprints, at least to the authorized personnel of the group, so people know whether the server they are connecting to is actually that server.

  • dspillett a day ago

    > I struggle to see a scenario in which a bad actor is able to give me the address of a bad machine, yet not be able to trick me into their host key being the correct one.

    If you aren't bothering to verify then they do not need to trick you at all.

    In DayJob we have a lot of clients send feeds and collect automated exports via SFTP, and a few for whom we operate the other way around (us pulling data via SFTP or pushing it to their endpoint). HTTPS-based APIs are very common and becoming more so, but SFTP is still big in this area (we offer some HTTPS APIs; few use them instead of SFTP).

    One possible exploit route, for a malicious actor playing a long and targeted game, that could affect us:

    1. Attacker somehow poisons our DNS, or that of a specific prospective client of ours, sending traffic for sftp.ourname.tld to their server, and has access to our mail.

    2. Initially they just forward traffic so key verification works for existing users. They monitor for some time to record the host addresses that already access the server, so that when they start intervening they can keep forwarding connections from those addresses and those users see no warnings (and are unaffected by the hack).

    3. When they do start to intercept connections from hosts not already on the list made above, instead of forwarding everything, existing users are unaffected¹ but new users coming in from entirely different addresses now go to the attacker's server. If those users are not verifying the key, they will happily send information through it², authenticating with the initial user+pass we sent or via PKI using the public key they sent, with the malicious server connecting through to ours to complete the transfers.

    4. Now wait and collect data as no one realises there is a MitM, and later use any PII or other valuable information for ransom/extortion purposes.

    Of course there are ways to mitigate this attack route. For one: source address whitelisting, supported by OpenSSH's key-based auth since the acceptable source list can be included with the public key, so only specific sources can use that key for auth. But the client would have to make the effort to do this, and if they aren't going to make the effort to verify the host key then they aren't going to make other efforts either.
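
    For illustration, the authorized_keys form of that restriction looks something like this (the addresses and key are placeholders):

        # server-side authorized_keys entry: this key only works from the client's published ranges
        from="203.0.113.0/24,2001:db8::/64" ssh-ed25519 AAAA... client-feed-key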

    We do have some clients who verify the host properly and/or give us source addresses to limit connections to when they provide a public key. We work with financial institutions who are appropriately paranoid about their data and the data of their customers; some even use PGP for data in transit (and in case it is ever stored where it shouldn't be) for an extra level of paranoia. But most do none of this. Most utterly ignore our strong suggestion that they use keys, or change passwords in case of email breach, instead using the password we mail them before first connection for eternity.

    --------

    [1] none of our clients are likely to be sending files from dynamic source addresses; at most the source might move around a v4 /24 or v6 /64. Currently I don't think all of them connect from a single IPv4 address. I've had one recently let us know (months in advance) that their source address will be changing.

    [2] it can connect to us and send the data

  • otabdeveloper4 a day ago

    > ...a host key that changes unexpectedly.

    Literally happens every single damn day and literally nobody on the face of this earth ever gives a shit.

    Host keys are the stupidest idea in the history of computer so-called "security".

    • BenjiWiebe a day ago

      Why are yours changing every day? If they always did that, then yes it would be a stupid idea. But they don't change on their own, or for no reason, so it isn't a stupid idea.

      Mine change maybe once every couple of years, if I do a full reinstall without copying over the old host key. And then I know exactly why it changed.

      • otabdeveloper4 12 hours ago

        > Why are yours changing every day?

        Nobody knows how the hell the host keys are generated in the first place. Don't worry about it.

        > And then I know exactly why it changed.

        Really? What is a "full" reinstall as opposed to a "non-full" reinstall, and exactly how much of a reinstall do I need for my host keys to change?

        • rcxdude 10 hours ago

          the only time the host keys should change is if you a) delete them (either by wiping the whole machine or just deleting the files), or b) explicitly regenerate them. If they're changing for any other reason you're doing something weird.
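
          Concretely (paths assume a stock OpenSSH install):

              # the keys live here; as long as these files survive, the fingerprint stays the same
              ls /etc/ssh/ssh_host_*_key*

              # explicit regeneration, the only other way they change
              rm /etc/ssh/ssh_host_* && ssh-keygen -A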

beala a day ago

Terminal.shop lets you order coffee over ssh, which is kind of novel and fun. I did it, and the coffee was good! This post reminded me that they've gotten enough questions about security that they've added this to their FAQ:

> is ordering via ssh secure?# you bet it is. arguably more secure than your browser. ssh incorporates encryption and authentication via a process called public key cryptography. if that doesn’t sound secure we don’t know what does. [1]

I think this is wrong though for exactly the reasons described in this post. TLS verifies that the URL matches the cert through the chain of trust, whereas SSH leaves this up to the user to do out-of-band, which of course no one does.

But then the author of this article goes on to say (emphasis mine):

> This result represents good news for both the SSL/TLS PKI camps and the SSH non-PKI camps, since SSH advocates can rejoice over the fact that the expensive PKI-based approach is no better than the SSH one, while PKI advocates can rest assured that their solution is no less secure than the SSH one.

Which feels like it comes out of left field. Certainly the chain of trust adds some security, even if it's imperfect. I know many people just click through the warning, but I certainly don't.

[1] https://www.terminal.shop/faq

  • tw04 a day ago

    >TLS verifies that the URL matches cert through the chain of trust,

    I think you need to point out that TLS utilizes the browser's cert store for that chain of trust. If a bad actor acquires an entity that has a trusted cert, or your cert store is compromised, that embedded cert store is almost entirely useless, which has happened on more than one occasion (the Chinese government and Symantec most recently).

    https://expeditedsecurity.com/blog/control-the-ssl-cas-your-...

    This is typically caught pretty quickly but there's almost nothing a user can do to defend against a chain of trust attack. With SSH, while nobody does it, at least you have the ability to protect yourself.

  • zie a day ago

    In SSH, it's a two-way handshake: the client ordering the coffee also gets a cert to prove their identity.

    In browser land, the client browser doesn't get a cert to prove their identity, it's one-way only.

    Certainly TLS supports client certs, and browsers (at least some) technically even implement a version, but the UX is SOOOO horrible that nobody uses it. Some people have tried; the only ones that have ever seen any success with client-side authentication certificates in a web browser are webauthn/passkeys and the US Military (their ID cards have a cert in them).

    webauthn/passkeys are not fully baked yet, so time will tell if they will actually be a success, but so far their usage is growing.

    • kbolino a day ago

      I think webauthn/passkeys will be more successful (frankly I think they already have been) because they're not part of TLS. The problem with client certs, and other TLS client auth like TLS-SRP, is that it inherently operates at a different layer than the site itself. This cross-cutting through layers greatly complicates getting the UX right, not just on the browser side (1) but also on the server side (2). Whereas, webauthn is entirely in the application layer, though of course there's also some supporting browser machinery.

      (1) = Most browsers defer to the operating system for TLS support, meaning there's not just a layer boundary but a (major) organizational one. A lot of the relevant standards are also stuck in the 1990s and/or focused on narrow uses like the aforementioned U.S. military and so they ossified.

      (2) = The granularity of TLS configuration in web servers varies widely among server software and TLS libraries. Requesting client credentials only when needed meant tight, brittle coupling between backend applications and their load balancer configuration, which was also tricky to secure properly.

      • zie a day ago

        So true, two-way certs with TLS have crappy implementations everywhere, not just in the browser.

        I have 2 problems with webauthn/passkeys:

        * You MUST run JavaScript, meaning you are executing random code in the browser, which is arguably unsafe. You can do things to make it safer, but most of those things nobody does (never run 3rd-party code, Subresource Integrity, etc.).

        * The implementations throughout the stack are not robust. Troubleshooting webauthn/passkey issues is an exercise in wasted time. About the only useful troubleshooting step you can do is delete the user passkey(s) and have them try again, and hope whatever broke doesn't break again.

teeray a day ago

I kinda like the approach GitHub takes: they just publish their fingerprints here: https://docs.github.com/en/authentication/keeping-your-accou...

This is served over TLS, so it's no worse than TLS. You also benefit from the paved road that LetsEncrypt has provided. It might not be as smooth as SSH CAs once those are set up, but setting them up, and the Day 2 operations involved, isn't nearly as straightforward.
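
If you want to skip TOFU entirely, you can seed known_hosts from what they publish. A sketch, assuming curl and jq, and assuming GitHub's api.github.com/meta endpoint still exposes an ssh_keys list:

    curl -s https://api.github.com/meta \
      | jq -r '.ssh_keys[] | "github.com \(.)"' >> ~/.ssh/known_hosts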

erikerikson a day ago

Fails to mention that you can paste in the expected key. Of course, if the source the key is copied from is compromised, that's no help, but that's a higher bar. Still easy, and it doesn't rely on human frailty.

  • jon-wood a day ago

    Or you can use SSH certificates, where you work on the basis that if the host key is signed by the correct CA then it's legit. No more TOFU required, beyond needing to trust whatever source you got your CA's public key from.

radial_symmetry a day ago

I appreciate the to-the-point abstract

  • tobinfricke a day ago

    Also in compliance with Betteridge's law of headlines: "Any headline that ends in a question mark can be answered by the word no."

chuckadams a day ago

No, and expecting users to actually do so is a sign that something is very wrong about the process. TOFU turns out to be good enough for most purposes anyway, but if a key changes (perhaps the server was reimaged) then verifying it is about as friendly as a tax audit. Or using GPG.

  • franga2000 a day ago

    This is entirely the fault of the software.

    For planned key rotations, you could sign the new key with the old key and send that in the handshake, so the client could change the known_hosts file on its own.

    For unplanned rotations (server got nuked), you could instruct your users to use a secure connection and run "ssh-replace-key server.example.com b3f620", which would re-run TOFU, with the last param being an optional truncated hash of the key for extra security.

    You could also do a prompt like "DANGER!! The host key has changed. If you know this is expected, or if your IT administrator told you to do so, type 'I know what I'm doing or the IT admin told me to do this'".
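
    For what it's worth, OpenSSH already ships a piece of the planned-rotation case: with UpdateHostKeys, a server you already trust can announce additional host keys over the authenticated connection and the client records them. The unplanned case is still manual. A sketch (hostnames are placeholders):

        # ~/.ssh/config: accept new host keys announced over an already-verified connection
        Host *.example.com
            UpdateHostKeys yes

        # unplanned rotation today: drop the old key, then re-verify on the next connect
        ssh-keygen -R server.example.com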

  • SoftTalker a day ago

    Yeah that is my experience. Users don't understand public key cryptography. You ask them for their public key and they send you the private one. They use the same key everywhere. They don't understand the difference between a host key and a login key. Ask them to do anything with their authorized_keys file and your next ticket will be "I'm locked out of my system."

    They do understand passwords, and most can manage an SMS code as a second factor. That's about the limit of what you can count on.

    • franga2000 a day ago

      Users can understand asymmetric crypto, but the tools are so convoluted for no reason that they usually just give up. I've had no trouble explaining it to "average" computer users and they got it completely, but then actually using the tools for signing or authentication was the nearly impossible part.

      Your key has two parts: public and private. You give your public part to the server so it knows it's talking to you, because only you have the private part. The server has its own pair and it gives you its public part so you know you're not talking to an impostor server. The private key is never sent; it stays on your computer, but it does some fancy math so the server can know you have it.
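
      The happy path really is this small (a sketch; the comment string is a placeholder):

          # generate a pair; the private half never leaves your machine
          ssh-keygen -t ed25519 -C "you@example.com"

          # this is the only file you ever hand out
          cat ~/.ssh/id_ed25519.pub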

    • chuckadams a day ago

      I've been doing this for 30 years and sometimes I give the wrong key file on the command line by forgetting to add '.pub' to the end. Far as I remember, I've always caught it before I managed to send it somewhere public, and thankfully most of my keys nowadays have a passphrase that gets remembered in my OS's keychain. But the UX is really that bad.

dcminter a day ago

I guess the interesting question to me is: how often does this matter? How many successful mitm attacks on ssh connections are there and in what sort of circumstances do they occur?

It seems like it ought to matter, but if roughly nobody verifies and yet the sky has not fallen - does it?

  • marcosdumay a day ago

    It's only for the first connection, and it's very rare that targets are valuable on the first connection.

    On the other hand, we know of at least two suppliers of software that run with elevated access everywhere (including the dev side of every advanced military) that have been breached by unknown parties for years. The most likely explanation, by far, is that the sky only didn't fall yet because nobody wants it to. And that leaves us vulnerable to somebody suddenly wanting it.

  • 1oooqooq a day ago

    nobody robbed my house in years. i still lock the door.

    it's so banal to check host keys.

kemotep a day ago

Thanks for sharing this! Yesterday I was just wondering about ssh key verification techniques for third party services.

SSH keys are amazing, portable, and in some ways easier to use than passkeys. But for them to successfully replace passwords and account configuration, which works decently well for a service like pico.sh, the user experience needs to be improved significantly. Not impossible, but verification becomes a continuous and ongoing problem.

foxyv a day ago

Until SSH servers implement PKIX-based host key verification, it's always going to be fraught with issues like this. Users will just keep blindly accepting host keys because they "Don't got time for that."

EPendragon 20 hours ago

The abstract for this paper is fire

hk1337 a day ago

Coupled with the fact that you have to have a matching private key that you created, and that you're using ssh config, is this even necessary?

vbezhenar a day ago

It's really ridiculous that ssh does not use the standard PKI which is deployed everywhere. So insecure.

  • kragen a day ago

    By "standard PKI which is deployed everywhere" do you mean SSL certificates? That would make you vulnerable to dozens of poorly secured CAs throughout the world; any attacker who could penetrate one of them could then use that access to MITM any SSH connection in the world (if they could additionally spoof DNS).

    SSL certificates are probably the best we can do for the "talk to a server you've never heard of" scenario, but we can do enormously better for the scenario where you're SSHing into a server you already have a pre-existing trust relationship with.

    • vbezhenar 7 hours ago

      It is good enough for websites, it will be good enough for ssh. A "pre-existing trust relationship" prevents rotating keys, which is a standard security measure (unheard of in ssh, of course).

      • kbolino 23 minutes ago

        Let us set aside the reasons why SSH adopted a different certificate format (namely, that X.509 is much more complex than they needed at the time).

        WebPKI only realistically serves a small portion of the SSH hosts out there. This is quite different from the situation with HTTPS. Even so, this would still be very convenient and useful. As I said elsewhere, I think this is sub-5% of SSH servers.

        X.509 more broadly could replace SSH certificates. Many institutional settings already have trust stores set up to include their in-house CAs. Public clouds and major hosting providers could also set up their own CAs, but they would have trouble distributing them (cf. AWS RDS, for example). Now we're probably up to 25% or so of deployed SSH servers. In the case of clouds, though, this adds a massive new exploitation vector (IP reassignment) and thus puts pressure on expiration/revocation.

        The rest are going to need self-signed certs.

        Between the non-WebPKI CA distribution problem and the probable predominance of self-signed certs, trust-on-first-use would still be the norm, and so relying on pre-existing trust relationships would still be necessary. We could augment TOFU/known-hosts with some kind of certificate or CA pinning rather than just key pinning, though.

        So, again, while I think adopting X.509 isn't a bad idea, and makes a lot more sense today than it did in 2010 (pre-Heartbleed!) when SSH added certificates, it's not really solving the problem that SSH has much better than today's solutions, no matter how well it solves the problem that HTTPS has.

      • kragen 5 minutes ago

        > It is good enough for websites, it will be good enough for ssh.

        This is backwards. Breaking SSH authentication permits subverting most websites; the converse is not true.

        > "pre-existing trust relationship" prevents from rotation keys

        This is also false. Things like Signal and OTR rotate keys frequently and automatically within pre-existing trust relationships.

  • dspillett a day ago

    To use the style of server identity management we use for, say, HTTPS, you need a “trusted” 3rd party involved to sign certificates. This is impractical for SSH in many (most?) cases for several reasons (SSH does support cert based identification and authentication, but there are not many circumstances where this is more practical than, or otherwise preferable to, TOFU for SSH).

    In fact, many people who don't properly understand SSH's trust-on-first-use system (so don't actually verify server certificate fingerprints) argue for browsers to support it as an option alongside the current certificate signing & verification regimes.

  • kbolino a day ago

    The CA/Browser Forum does not want to support this either. They are only interested in public, domain-verified websites served over HTTPS. They forbid client certificates and dual-use CAs, they require certificate transparency and short expiration times, and their policies get stricter every year. Most SSH deployments would not want to accept these constraints.

    So, even if SSH supported X.509 certificates, which isn't necessarily a bad idea, it would be completely detached from WebPKI, thus removing most of the benefit.

    • vbezhenar 7 hours ago

      A lot of servers do have domains associated with them. So that's not an issue.

      There are CAs which will issue certificates for public IP addresses, so any public ssh server can also use these certificates.

      There's no reason to detach ssh PKI from Web PKI. They can use exactly the same certificates and keys.

      • kbolino 4 hours ago

        There is no doubt some number of SSH servers which have public domain names and/or public IP addresses, can accept DNS verification or running a completely unrelated HTTP server for IP verification, don't mind having their existence published in certificate transparency logs, don't care about or can separately handle client certs, and don't mind the SSH server restarting every ~month (until this gets shortened again) when the certificate is rotated. However, I would estimate the share of such sites at less than 5% of deployed SSH servers. The primary use case I can see here is to reuse an existing HTTPS cert for SSH on a box that already hosts a website.

        FWIW, there is an RFC for X.509 certificates in SSH, but it has not achieved wide adoption: https://www.rfc-editor.org/rfc/rfc6187

        • kbolino 14 minutes ago

          (I have also responded to kragen's subthread, you may want to consolidate the discussion there)

  • advisedwang a day ago

    ssh does support certificate based authentication [1]

    [1] https://docs.redhat.com/en/documentation/red_hat_enterprise_...

    • jon-wood a day ago

      Worth noting this is similar to, but not the same as, the type of certificate-based authentication used in web browsers. Most notably you can't chain CAs, so there is no root of trust beyond whoever operates the CA you care about telling you its public key out of band.

      For SSH this is fine, because very rarely is anyone connecting to a random SSH server on the internet without being able to talk to the operators (hi Github, we see you there, being the exception).

    • vbezhenar 7 hours ago

      You can't just use letsencrypt certificates and make it work out of the box. Still insecure.