this post was submitted on 23 Jul 2023
59 points (96.8% liked)

Selfhosted


I have too many machines floating around, some virtual, some physical, and they get added and removed semi-frequently as I play around with different tools and try out ideas. One recurring pain point is that I have no easy way to manage SSH keys across them; adding, removing, and cycling keys is a chore. I know I can use AuthorizedKeysCommand in sshd_config to make a host fetch a remote key for validation, and I could theoretically publish my pub key to GitHub or the like, but I'm wondering if there's something more flexible and powerful where I can manage multiple users (essentially roles), such that each machine can be assigned a role and automatically allow access accordingly?

I've seen Keyper before, but the container hasn't been updated in years, and the owner of its support Discord actively kicks everyone from the server, even people who are just asking questions.

Is there any other solution out there that would streamline this process a bit?

top 29 comments
[–] Max_P@lemmy.max-p.me 21 points 1 year ago (1 children)

I would switch to certificate based SSH authentication.

All the server keys get signed by your CA, and all client keys get signed by it too. Everyone implicitly trusts everyone else through the CA, and it's as safe as regular SSH keys.

You can also sign short-lived client keys if you want to make revocation easier; the servers don't care, because all they check is that it's a valid cert issued by the CA, and that can be done entirely offline!

HashiCorp Vault can also help manage the above, but it's pretty easy to do manually too.
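
For anyone who wants to try this, here's a minimal sketch using nothing but stock OpenSSH tooling (hostnames, principals, and paths below are placeholders):

    # Create the CA keypair; keep the private half offline
    ssh-keygen -t ed25519 -f ssh_ca -C "my-ssh-ca"

    # Sign a host key so clients trust the server via the CA
    ssh-keygen -s ssh_ca -I web01 -h -n web01.example.com /etc/ssh/ssh_host_ed25519_key.pub

    # Sign a user key, valid for 2 hours, for the listed principals
    ssh-keygen -s ssh_ca -I alice -n alice,webuser -V +2h ~/.ssh/id_ed25519.pub

    # On each server (sshd_config):
    #   TrustedUserCAKeys /etc/ssh/ssh_ca.pub
    #   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

    # On each client (known_hosts), trust any host cert signed by the CA:
    #   @cert-authority *.example.com ssh-ed25519 <contents of ssh_ca.pub>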

[–] mhzawadi@lemmy.horwood.cloud 7 points 1 year ago (2 children)

I do this, using the smallstep CA/CLI to manage the lot. It's amazing.

[–] Max_P@lemmy.max-p.me 8 points 1 year ago (1 children)

It's such an underrated feature. It baffles me how people immediately turn to overly complicated solutions for a problem they don't really have to solve, just because everyone assumes the only way is the default, commonly known way. Like OP, people jump to the conclusion that you need extra software to manage the keys, rather than using another natively supported authentication method, and they keep filling their known_hosts file with junk, making the whole validation process useless because everyone just accepts whatever key the host presents.

It's amazing how simple it is. A developer needs temporary access to debug a web server? Sure, here's a cert valid for 2h to log in as the web user on that server; I don't even have to log into the server to add their key and then remove it. I mint a cert and it's ready to go for whichever users and servers I specified in it. They can't even gain persistence, because the regular authorized_keys mechanism is disabled and session durations are limited.

I regularly leave people baffled at work because I come up with a clever, built-in, super simple solution where they expected to slap more scripts and software on top of the only way they know to use the tool. Read your manpages in full, folks; it'll save you so much work. Know what your software is capable of.

[–] mhzawadi@lemmy.horwood.cloud 5 points 1 year ago

That's a long rant, but you're on point with it. I have a colleague who refuses to try new things because they don't understand that it makes life easier; I do tend to find the solutions that are simpler and easier to work with.

[–] gamer@lemm.ee 0 points 1 year ago (1 children)

Is smallstep free to self-host? Looking at their pricing page it's kind of unclear, and their SaaS is pretty pricey.

[–] mhzawadi@lemmy.horwood.cloud 1 points 1 year ago

It sure is; give this a read: DIY-Single-Sign-On-for-SSH.
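
If you want to try it, the flow looks roughly like this with the step CLI (command names per smallstep's docs; treat this as a sketch):

    # One-time: bootstrap a CA with SSH support enabled
    step ca init --ssh

    # Mint a short-lived user certificate (writes id_ecdsa and id_ecdsa-cert.pub)
    step ssh certificate alice@example.com id_ecdsa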

I've been using https://github.com/warp-tech/warpgate for essentially this purpose. It does kind of put all of your eggs in one basket, so don't expose it to the Internet, and probably keep at least one other machine that has all the keys. I haven't had any catastrophic issues so far, other than my host going down (unrelated to this tool).

[–] csm10495@sh.itjust.works 8 points 1 year ago (1 children)

Terrible idea of the day: you could use something like NFS and mount the share on every client. Keep the latest keys on that share, then use symlinks to update, etc.

Something like Puppet, Chef, or Ansible is likely a better choice.

[–] solidgrue@lemmy.world 8 points 1 year ago

You're the devil.

Did we work together, maybe?

[–] key@lemmy.keychat.org 8 points 1 year ago (1 children)

You could use an LDAP and/or Kerberos solution to centralize user management. Alternatively, you could use Ansible.

[–] kylian0087@lemmy.world 1 points 1 year ago

To add to this: if everything is Linux-based, take a look at FreeIPA / RHEL IdM.
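
As a rough sketch of how the SSSD route plugs into sshd on an enrolled host (the sss_ssh_authorizedkeys helper ships with SSSD; treat paths as indicative):

    # /etc/ssh/sshd_config: ask SSSD for the keys stored in the directory
    AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
    AuthorizedKeysCommandUser nobody

With FreeIPA the key lives on the user entry (e.g. ipa user-mod alice --sshpubkey="ssh-ed25519 ..."), so rotating it in one place updates every enrolled host.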

[–] yacgta@infosec.pub 6 points 1 year ago

I quite like Tailscale SSH for this, but I don't have as many machines, so I'm not sure how well it scales. You can definitely assign roles to allow/deny SSH between hosts in your fleet, though.
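
For reference, the role assignment lives in the tailnet policy file; a sketch along these lines (group and tag names are made up, shape per Tailscale's ACL docs):

    {
      "ssh": [
        {
          "action": "accept",
          "src":    ["group:admins"],
          "dst":    ["tag:homelab"],
          "users":  ["autogroup:nonroot", "root"]
        }
      ]
    }

Each machine then runs tailscale up --ssh so tailscaled takes over SSH authentication.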

[–] RegalPotoo@lemmy.world 6 points 1 year ago (1 children)

You could try SSH certificates using something like https://smallstep.com/sso-ssh/ - essentially you delegate validation of your public key to an IdP, which your servers are configured to trust.

The other approach would be something like Ansible or Puppet to deploy trusted keys to all servers.

[–] chiisana@lemmy.chiisana.net 0 points 1 year ago

Hm... these are both interesting but might be a bit overkill IMO.

I don't think I'd need a CA and intermediary step if all sshd needs to do is check whether a key is currently approved for this particular service; and I last looked at Chef/Puppet many years ago, when it was way too much orchestration work, which we no longer need with Docker containers and smaller-footprint host OSes.

[–] FlexibleToast@lemmy.world 4 points 1 year ago

This is one of the jobs of OpenLDAP.

[–] kool_newt@lemm.ee 4 points 1 year ago* (last edited 1 year ago) (1 children)

Are you initiating SSH connections from all these hosts?

If you just need to SSH to these hosts, use a single key and copy the public key only to the hosts you need to connect to. If you don't want to copy the pubkeys to target hosts, use LDAP + SSSD or certificates.
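
That setup is only a couple of commands (host and user names are placeholders):

    # Generate one keypair on your workstation...
    ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

    # ...and push only the public half to each host you log in to
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@host1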

Then, if you do need to initiate connections from these hosts and use an SSH agent, you can forward your agent and SSH onward to another host:

# Forward your agent one hop, then connect onward using the forwarded key:
client> ssh -A host1
host1> ssh host2
host2>

# Or keep forwarding the agent at every hop to chain further:
client> ssh -A host1
host1> ssh -A host2
host2> ssh -A host3
host3>
[–] InverseParallax@lemmy.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

I have an alias so trusted hosts can bounce through my authorization host and end up in a tmux session on the targeted host. It has logging and such, but mostly it's for simplicity.

If I plan to use that connection a lot, there's a script to cat my priv key through the relay.

I have an scp alias too, but that gets more complicated.

For more sensitive systems I have 2FA via gauth set up; it works great.

[–] kool_newt@lemm.ee 4 points 1 year ago (1 children)

This is a common pattern, typically called a "jump host" or "bastion host".

a script to cat my priv key through the relay

When it comes to security, I typically recommend against rolling your own. SSH already has an agent forwarding option to do this securely, and the -J option accomplishes the same without even needing to forward the key. The agent can seem complex at first, but it's actually pretty simple and worth learning.
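
For reference, the -J form is a one-liner, and hops can be chained with commas (host names are placeholders):

    # Jump through a bastion without forwarding any keys to it
    $ ssh -J bastion-host remote-host

    # Multiple hops
    $ ssh -J bastion1,bastion2 remote-host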

Feel free to message me if you have more questions, I've got lots of experience w/ SSH.

[–] InverseParallax@lemmy.world 2 points 1 year ago* (last edited 1 year ago) (1 children)

I did not know about -J; I rolled my own because I've been doing it forever, and many of my tricks (non-SSH ones included) aren't as easily portable across different OSes.

For some reason ssh-copy-id has sometimes been failing for me lately because it can't reach the agent, while cat always works. I never learned much about the agent, though; let me look into that now, thanks for the tip!

[–] kool_newt@lemm.ee 1 points 1 year ago

I think -J is newer and may not work on distro versions older than about 5 years (e.g. CentOS 7 or earlier). There is a less convenient syntax that does the same thing, though:

$ ssh -o ProxyCommand="ssh -W %h:%p bastion-host" remote-host

See: https://www.redhat.com/sysadmin/ssh-proxy-bastion-proxyjump

[–] johntash@eviltoast.org 3 points 1 year ago

You could use LDAP with OpenLDAP, Keycloak, FreeIPA, etc. to set SSH keys for users.

If you want something simpler, you could use Ansible (or another config management tool), or just have a startup script that downloads the authorized_keys file from GitHub or wherever else you can store it (a sketch of that is below).

And if you want something less simple, HashiCorp Vault supports dynamic SSH access using certificates.
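
A minimal sketch of the startup-script option, using GitHub's public key endpoint (<user> and the paths are placeholders):

    #!/bin/sh
    # Refresh authorized_keys from the keys published on GitHub
    curl -fsSL "https://github.com/<user>.keys" -o /tmp/authorized_keys.new \
      && install -m 600 /tmp/authorized_keys.new ~/.ssh/authorized_keys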

[–] MajorHavoc@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Sometimes the obvious solution is the way to go.

Your idea of publishing your pubkey(s) to a fully public URL you control and can memorize sounds good; go ahead with it.

Then you can stash or memorize the curl command needed to grab it (them) and append it (them) to authorized_keys.

A lot of more complicated solutions are just fancy ways to safely move private keys around.

For my private keys, I prefer to generate a new one for each use case, and throw them out when I'm done with them. That way I don't need a solution to move, share or store them.

Edit: Full disclosure - I do also use Ansible to deploy my public keys.
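
If you'd rather have sshd fetch the published keys at login time instead of baking them into authorized_keys, OP's AuthorizedKeysCommand idea wires up like this (URL and script path are hypothetical):

    #!/bin/sh
    # /usr/local/bin/fetch-authorized-keys (hypothetical helper, root-owned)
    # sshd treats stdout as an authorized_keys file; -f fails closed on errors
    exec curl -fsm 5 "https://keys.example.com/${1}.keys"

    # /etc/ssh/sshd_config
    #   AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
    #   AuthorizedKeysCommandUser nobody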

[–] alsobrsp@lemmy.world 2 points 1 year ago

Some options

  • Use a build system like Foreman to put the key in place automatically at build time; it uses Puppet for config management after the build
  • Use vanilla Puppet without Foreman
  • Use Ansible
[–] vegetaaaaaaa@lemmy.world 2 points 1 year ago

I use Ansible for that: https://docs.ansible.com/ansible/latest/collections/ansible/posix/authorized_key_module.html

The keys are stored alongside my playbook in a git repository.
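
A minimal sketch of what that playbook task can look like (the host group, user, and key filename here are assumptions):

    # deploy-ssh-keys.yml
    - hosts: all
      tasks:
        - name: Install my public key for the admin user
          ansible.posix.authorized_key:
            user: admin
            state: present
            key: "{{ lookup('file', 'keys/id_ed25519.pub') }}"

Rotating a key is then a one-line change in git, and state: absent removes old keys on the next run.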

[–] neoney@lemmy.neoney.dev -3 points 1 year ago (1 children)

All I know is that on NixOS you can declare the authorized keys for each user in the config.
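
For context, that looks something like this in configuration.nix (username and key are placeholders):

    users.users.alice.openssh.authorizedKeys.keys = [
      "ssh-ed25519 AAAA... alice@laptop"
    ];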

[–] chiisana@lemmy.chiisana.net 1 points 1 year ago (1 children)

Yeah, the problem is that I have 2 physical servers, each with 5 to 10 VMs, plus a bunch of other VMs scattered across different cloud providers; it gets tricky to edit the ~/.ssh/authorized_keys file on each of them to reflect a new SSH key (i.e. a new machine on the "network") or to replace an existing one (i.e. the annual key cycle).

[–] neoney@lemmy.neoney.dev -3 points 1 year ago (1 children)

Yeah, what I mean is that on NixOS you make one config for them all, so you'd only change the key in one spot.

[–] danielphan2003@lemm.ee 1 points 1 year ago

You do realize those machines aren't necessarily running NixOS, right? It's best to separate SSH management from NixOS's declarative nature, since what you really want to be declarative is the ACL rules, not the network topology or SSH keys. For example, you could use Netbird or Tailscale and their respective SSH features.
