this post was submitted on 13 Feb 2024
Selfhosted

I've got a QNAP NAS and two Linux servers. Whenever the power goes down, the UPS kicks in and shuts down the NAS and the Linux servers. When the power comes back online, the servers + NAS are automatically started using WOL. All good.

The problem is that I have apps running in Docker which heavily rely on connections to the NAS. As the Linux servers boot quicker than the NAS, the mount points are not mounted yet, and thus everything falls apart. Even when I manually re-mount, it's not propagated to the Docker instances. All mount points use NFS.

Currently, I just reboot the Linux servers manually, and then all works well.

Probably the easiest would be to run a cron job to check the mounts every x minutes and, if they are not mounted, just reboot. The only issue is that this could cause an infinite loop of reboots if, e.g., the NAS has been turned off.
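The reboot-loop concern can be handled by only rebooting when the NAS itself is reachable. A minimal sketch of such a check, assuming a hypothetical mount at /mnt/nas and a NAS at 192.168.1.10 (both placeholders):

```shell
#!/bin/sh
# Decide whether a reboot is warranted: only when the mount is missing
# AND the NAS answers ping (so rebooting has a chance of fixing things).
# Prints "reboot" or "ok".
check_mount() {
    mount_path=$1
    nas_addr=$2
    if ! mountpoint -q "$mount_path"; then
        if ping -c 1 -W 2 "$nas_addr" >/dev/null 2>&1; then
            echo reboot
            return
        fi
    fi
    echo ok
}

# A cron entry could then run something like:
#   [ "$(check_mount /mnt/nas 192.168.1.10)" = reboot ] && /sbin/reboot
```

If the NAS is powered off, ping fails and the script does nothing, which breaks the loop.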

I could also install a monitoring solution, but I've seen so many options that I'm not sure which one to pick. If it's easier with a monitoring solution, I'd like the simplest one.

all 26 comments
[–] freeearth@discuss.tchncs.de 29 points 9 months ago (3 children)

Just speculating... is it possible to mount NFS through systemd and make the docker service dependent on that mount?

[–] avidamoeba@lemmy.ca 29 points 9 months ago* (last edited 9 months ago) (2 children)

This is the answer. You can straight up make things dependent on .mount units that represent stuff in fstab. To add, you can create any number of systemd services that just check if something is "as you want it" and only then "start". You simply make the Exec line "/bin/bash -c 'your script here'". Then you make whatever else you want dependent on it. For example, I have such a unit that monitors for an Internet connection by checking some public DNS servers, and I make services that need Internet access dependent on that unit. Here's my Plex service, for example, which demonstrates how to depend on a mount and on docker, and how to manage a docker container with systemd:

~$ cat /etc/systemd/system/plex-docker.service
[Unit]
Description=Plex Media Server
After=docker.service network-internet.service media-storage\x2dvolume1.mount

[Service]
TimeoutStartSec=0
Restart=always
RestartSec=10
ExecStartPre=-/usr/bin/docker rm -f plex
ExecStartPre=/usr/bin/docker pull plexinc/pms-docker:latest
ExecStart=/usr/bin/docker run \
        --name plex \
        --net=host \
        -e TZ="US/Eastern" \
        -e "PLEX_UID=1000" \
        -e "PLEX_GID=1000" \
        -v /tmp:/tmp \
        -v /var/lib/plex/config:/config \
        -v /var/cache/plex/transcode:/transcode \
        -v "/media/storage-volume1:/media/storage-volume1" \
        plexinc/pms-docker:latest

[Install]
WantedBy=multi-user.target

BTW you can also do timers in systemd which allows doing what you can do with cron but much more flexibly and utilize dependencies too.
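As an illustration, a timer-based version of the cron idea might look like this (unit names and the script path are hypothetical; the timer triggers the service of the same name by default):

```ini
# /etc/systemd/system/mount-check.timer
[Timer]
OnBootSec=2min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target

# /etc/systemd/system/mount-check.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/check-mounts.sh
```

Enable with `systemctl enable --now mount-check.timer`.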

[–] samwwwblack@lemm.ee 10 points 9 months ago (1 children)

You can use RequiresMountsFor= (e.g. RequiresMountsFor=/media/storage-volume1) instead of manually adding .mount to After/Requires - you can then use .mount files or fstab as you're stipulating the path rather than a potentially changeable systemd unit name.

The systemd.mount manpage also strongly recommends using fstab for human added mount points over .mount files.
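Applied to the Plex unit above, the [Unit] section could then read something like:

```ini
[Unit]
Description=Plex Media Server
After=docker.service network-internet.service
RequiresMountsFor=/media/storage-volume1
```

RequiresMountsFor= pulls in both Requires= and After= for whatever mount units cover that path, so the escaped unit name isn't needed.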

[–] avidamoeba@lemmy.ca 1 points 9 months ago

Oh this is nice. I'll probably start using it.

[–] sylverstream 2 points 9 months ago (2 children)

That's interesting! I've converted all my docker run commands to docker compose, as I found that easier to manage. But I guess you can't do the dependencies like you have. Also, yours has the advantage that it always pulls the latest.

[–] key@lemmy.keychat.org 3 points 9 months ago (1 children)

Doesn't seem mutually exclusive. Replace the docker rm with compose down and the docker run with compose up.
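A sketch of what that unit might look like (directory, names, and the mount unit are placeholders; assumes the `docker compose` plugin syntax):

```ini
# /etc/systemd/system/media-stack.service
[Unit]
Description=Media stack via docker compose
After=docker.service media-storage\x2dvolume1.mount
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/media-stack
ExecStartPre=/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

The ExecStartPre pull keeps the "always pulls the latest" behavior from the docker run version.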

[–] avidamoeba@lemmy.ca 1 points 9 months ago* (last edited 9 months ago)

Exactly. In fact I have a few multi-container services with docker-compose that I have to write systemd unit files for.

[–] qaz@lemmy.world 1 points 9 months ago

Perhaps you could also add the mounts as dependencies to the Docker daemon.

[–] sylverstream 4 points 9 months ago (1 children)

Sorry, I'm absolutely not a Linux expert :) I use /etc/fstab for the mounts, and to manually re-mount I run "mount -a".
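For reference, an fstab NFS entry can be told to wait for the network, tolerate the NAS being down, and mount on first access (much like autofs). A sketch, with the NAS hostname and export path as placeholders:

```
# /etc/fstab
nas.local:/volume1/data  /mnt/nas  nfs  _netdev,nofail,x-systemd.automount,x-systemd.idle-timeout=60  0  0
```

`nofail` keeps boot from hanging if the NAS is off, and `x-systemd.automount` defers the actual mount until something touches the path.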

[–] avidamoeba@lemmy.ca 8 points 9 months ago* (last edited 9 months ago) (1 children)

This is a great opportunity to learn a bit of systemd then. Look at my other comment. I've had a nearly identical problem which prompted me to learn in order to solve it years ago.

Especially if you find a corner case autofs doesn't cover. ☺️

[–] sylverstream 3 points 9 months ago

Awesome, yes, definitely will do. After years of using Linux, the whole systemd thing is still a bit of a black box to me. I know how to create/start/stop services etc. but that's about it. Thanks for the prompt replies!

[–] Illecors@lemmy.cafe 3 points 9 months ago

I think this is the way!

[–] h3ndrik@feddit.de 17 points 9 months ago* (last edited 9 months ago)

I think this is a good opportunity to write something positive about systemd.

I start my services with SystemD. I also moved my containers and docker-compose stack to be started by systemd. And it does mounting and bind-mounts, too. So I removed things from /etc/fstab and instead created unit files for systemd to mount the network mounts. And then you can edit the service file that starts the docker-container and say it relies on the mount. SystemD will figure it out and start them in the correct order, wait until the network and the mounts are there.

You have to put some effort in but it's not that hard. And for me it's turned out to be pretty reliable and low maintenance.

[–] Nollij@sopuli.xyz 9 points 9 months ago

The absolute easiest and simplest would be to modify your grub config to have a longer timeout on the boot menu, effectively delaying the servers until the NAS is up.

That doesn't necessarily mean it's the best option- there are ways to make the actual boot process wait for mounts, or to stagger the WOL signals, or the solutions others have mentioned. But changing grub is quick and easy.

[–] monkeyman512@lemmy.world 7 points 9 months ago (1 children)

Try looking into "autofs".
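For anyone else reading, a minimal autofs setup looks roughly like this (mount path, map file name, and NAS hostname are all placeholders):

```
# /etc/auto.master
/mnt  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs
nas  -fstype=nfs4,soft  nas.local:/volume1/data
```

With that, /mnt/nas gets mounted on first access and unmounted again after 60 seconds of inactivity.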

[–] sylverstream 2 points 9 months ago (1 children)

Thanks! I've just set that up. That would seem to solve the problem, right, without reboots?

[–] monkeyman512@lemmy.world 1 points 9 months ago

Yes. The important detail is that it remounts the path once the path gets accessed. So I set up a cron job to "ls" the path every few minutes to make sure it's always remounted quickly.
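The cron entry for that can be as simple as (path hypothetical):

```
*/5 * * * * ls /mnt/nas >/dev/null 2>&1
```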

[–] MangoPenguin@lemmy.blahaj.zone 6 points 9 months ago

Best option is to delay docker startup until the mounts are ready.
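One way to do that without editing the packaged unit is a drop-in for the Docker daemon (mount path hypothetical):

```ini
# /etc/systemd/system/docker.service.d/wait-for-nas.conf
[Unit]
RequiresMountsFor=/mnt/nas
```

After `systemctl daemon-reload`, docker.service won't start until the mount is up.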

[–] grayaytrox@lemmy.world 5 points 9 months ago (1 children)

It's been a while since I set it up, but from memory my mount point was set to be owned by root and immutable. That stopped any of my docker containers from making new files and folders if the mounted drive or network location was not mounted or unavailable.

[–] sylverstream 2 points 9 months ago

Yeah, I used /etc/fstab, which sets up static mounts.

I switched to autofs and that seems to be much better, as it does the mounts "at runtime", ie when requested.

[–] Cookie1990@lemmy.world 5 points 9 months ago

Read systemd mounts and systemd dependencies.

[–] mbirth@lemmy.mbirth.uk 3 points 9 months ago

Not sure how Docker behaves, but in a Stack/Compose file you can define volumes to use a specific driver, such as smb. E.g.:

volumes:
  smb-calibre:
    driver_opts:
      type: "smb3"
      device: "//mynas/ebooks/CalibreDB"
      o: "ro,vers=3.1.1,addr=192.168.1.1,username=mbirth,password=secret,cache=loose,iocharset=utf8,noperm,hard"

So Docker will take care of mounting. Paired with restart: unless-stopped, all my Containers come back online just fine after an outage.
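Since the original post uses NFS, the equivalent with the local volume driver's NFS support might look like this (address and export path are placeholders):

```yaml
volumes:
  nfs-media:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.1.10,nfsvers=4,soft"
      device: ":/volume1/media"
```

Docker then performs the NFS mount itself when the container starts, so the host's fstab isn't involved at all.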

[–] Decronym@lemmy.decronym.xyz 2 points 9 months ago* (last edited 9 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

DNS: Domain Name Service/System
NAS: Network-Attached Storage
Plex: Brand of media server package

3 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.


[–] lemmyvore@feddit.nl 1 points 9 months ago

You can use bind mounts instead of volumes to prevent the container from starting when the target is missing.

https://docs.docker.com/storage/bind-mounts/

I'm not sure what happens if the target goes down while the container is running. And you would still need a monitoring solution for telling the container to start when the target comes up.
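Note that the short `-v` syntax silently creates a missing host directory, whereas the long bind-mount syntax will typically refuse to start the container when the source is absent, which is usually what you want here. A compose sketch with hypothetical paths:

```yaml
services:
  app:
    image: example/app:latest
    restart: unless-stopped
    volumes:
      - type: bind
        source: /mnt/nas/data
        target: /data
```

Combined with `restart: unless-stopped`, Docker keeps retrying until the NAS path exists.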

[–] ShellMonkey@lemmy.socdojo.com 0 points 9 months ago

The other options of making the containers dependent on mounts or similar are all really better, but a simple enough one is to use SMB/CIFS rather than NFS. It's a lot more transactional in design, so if the share vanishes for a bit it just comes back when the drive is available again. It's also a fair bit heavier in overhead, though.

Using NFSv4 seems to work in similar fashion without the overhead, though I haven't dug into the exact back and forth of the protocol to know how it differs from v3 in accomplishing that.