dont

joined 5 months ago
[–] dont@lemmy.world 9 points 2 weeks ago

Finally, I can give it a star; until now it was only on GitLab and not on GitHub.

[–] dont@lemmy.world 2 points 3 weeks ago

Thanks, the bootstrapping idea hadn't been mentioned in the comments yet. And your blog looks promising; I'll have a more thorough look soon.

[–] dont@lemmy.world 2 points 3 weeks ago

Nice, thanks again! I had overlooked the dependency instructions in the container service file, which is why I wondered how the heck podman figures out the dependencies. It makes a lot of sense to do it like this, now that I think of it.

[–] dont@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

Awesome, so, essentially, you create a name.pod file like so:

[Unit]
Description=Pod Description

[Pod]
# stuff like PublishPort or networking

and join every container into the pod by adding the line Pod=name.pod to its .container file.
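
So each .container file would look roughly like this, I suppose (untested; image and names invented):

# app.container
[Unit]
Description=App Container

[Container]
Image=docker.io/library/nginx:latest
# join this container into the pod defined in name.pod
Pod=name.pod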

I presume this all gets started via systemctl --user start name.service, and systemd/podman somehow figures out which containers have to be created and joined into the pod, or do they all have to be started individually?

(Either way, I find the documentation of this feature lacking. Once I've tested this stuff myself, I'll look into improving it.)

[–] dont@lemmy.world 2 points 4 weeks ago

I've wondered about that myself and asked here: https://lemmy.world/post/20435712. I got some very reasonable answers.

[–] dont@lemmy.world 3 points 4 weeks ago

Thank you, I think the "less heavy than managing a local micro-k8s cluster" part was a great portion of what I was missing here.

[–] dont@lemmy.world 1 points 4 weeks ago (5 children)

Understood, thanks, but if I may ask, just to be sure: it seems to me that without interacting with the kubernetes layer, I'm not getting pods, only standalone containers, correct? (Not that I'm afraid of writing kube configuration, as others have incorrectly inferred. At this point, I'm mostly curious what this configuration would look like, because I couldn't find any examples.)

[–] dont@lemmy.world 1 points 4 weeks ago

Thank you for those very convincing points. I think I'll give it a try at some point. It seems to me that what you get in return for writing quadlet configuration in addition to the kubernetes-style pod/container config is that you don't need to maintain an independent kubernetes distro: podman and systemd take care of it and allow for system-native management. This makes a lot of sense.
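
If I'm reading the docs right, the quadlet side of that combination can stay tiny: a .kube file pointing at an ordinary kube YAML (sketch only; file name and path are made up):

# myapp.kube, placed in ~/.config/containers/systemd/
[Unit]
Description=myapp via kube YAML

[Kube]
# ordinary kubernetes-style pod definition
Yaml=%h/containers/myapp.yaml

[Install]
WantedBy=default.target

which, if I understand the generator correctly, then shows up as myapp.service.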


I'm afraid this is going to attract the "why use podman when docker exists" folks, so let me put this under the supposition that you're already sold on (or at least considering) using podman for whatever reason. (For me, it has been the existence of pods, to be used in situations where pods make sense, but in a non-redundant, single-node setup.)

Now, I was trying to understand the purpose of quadlets and, frankly, I don't get it. It seems to me that as soon as I want a pod with more than one container, what I'll be writing is effectively a kubernetes configuration plus some systemd unit-like file, whereas with podman compose I just have the (arguably) simpler compose file and a systemd file (which works for all pod setups).

I would get that it's sort of simpler, more streamlined and possibly more stable to use quadlets to let systemd manage single containers instead of putting podman run commands in systemd service files (sketch below). Is that all there is to it, or do people utilise quadlets as a kind of lightweight almost-kubernetes distro which leverages systemd in a supposedly reasonable way? (Why would you want to do that if lightweight, fully compliant kubernetes distros are a thing nowadays?)
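
For the single-container case, what I mean is replacing a hand-written unit that wraps podman run with something like this (my sketch; image and names invented):

# web.container in ~/.config/containers/systemd/
[Unit]
Description=Web Container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target

As far as I understand, the generator turns that into a web.service unit.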

Am I missing or misunderstanding something?

[–] dont@lemmy.world 1 points 4 months ago

One caveat I was (more or less actively) ignoring is that when the server shuts down, it stops powering the MikroTik, so I cannot access the BMC to restart it, etc. On a related note, I am afraid that a remote session to the BMC tunnelled through the MikroTik will not survive a reboot of the machine, which might prevent me from getting to the BIOS screen should I have to reconfigure something remotely.

[–] dont@lemmy.world 2 points 5 months ago (1 children)

Thanks 😀 But you hardly get the same control over what the CPU on your graphics card does as you get over the Linux machine that this router is, do you?

(Oh, and actually, my first and last discrete GPU was an ATI 9600 XT or something from over twenty years ago, so I guess that statement about my inexperience with it still stands 😉 Until somebody comes along to tell me that the same could be said about RAID controllers, etc.)

3
submitted 5 months ago* (last edited 5 months ago) by dont@lemmy.world to c/mikrotik@lemmy.world

I have just ordered a CCR2004-1G-2XS-PCIe to be used as the firewall of a single server (and its IPMI) that's going to end up in a data center for colocation. I would appreciate a sanity check and perhaps some hints, as I haven't had any prior experience with MikroTik and, of course, no experience at all with such a wild thing as a computer in a computer over PCIe.

My plan is to manage the router over SSH over the internet with certificates, and to open the API / web configurator / perhaps the WinBox thingy only on localhost. Moreover, I was planning to use it as an SSH proxy for managing the server as well as accessing the server's IPMI.
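
Roughly what I have in mind for the lockdown, if I'm reading the RouterOS docs correctly (untested; key file and subnet are placeholders):

# key-based ssh for the admin user
/user ssh-keys import public-key-file=admin.pub user=admin
# restrict management services to the internal side
/ip service set www address=192.168.88.0/24
/ip service set api address=192.168.88.0/24
/ip service set winbox address=192.168.88.0/24
# drop services I won't use
/ip service set telnet disabled=yes
/ip service set ftp disabled=yes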

I intend to use the PCIe connection for the communication between the server and the router, and to connect the IPMI to either physical port.

I have a (hopefully compatible) RJ45 1.25G transceiver. Since the transceiver is a potential point of failure and losing IPMI is worse than losing the only online connection, I guess it makes more sense to connect to the data center via the RJ45 port and the server's IPMI via the transceiver. (The data center connection is gigabit copper.) Does that make sense? Or is there something about the RJ45 port that should be considered?

I plan to manually forward ports to the server as needed. I do not intend to use the router as some sort of reverse proxy; the server will deal with that.
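
For the forwarding, I assume a plain dst-nat rule per port will do, something like this (addresses and interface are placeholders):

# forward https to the server behind the router
/ip firewall nat add chain=dstnat in-interface=ether1 \
    protocol=tcp dst-port=443 \
    action=dst-nat to-addresses=172.16.0.2 to-ports=443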

Moreover, I want to set up a site-to-site WireGuard VPN connection to my homelab, also to enable me to manage the router and server without the SSH jump.
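
My rough idea for the WireGuard side, going by the RouterOS 7 docs (keys, addresses and the endpoint are placeholders):

# wireguard interface on the router
/interface wireguard add name=wg-home listen-port=13231
/ip address add address=10.10.10.1/24 interface=wg-home
# homelab peer
/interface wireguard peers add interface=wg-home \
    public-key="HOMELAB_PUBLIC_KEY" \
    endpoint-address=home.example.org endpoint-port=13231 \
    allowed-address=10.10.10.2/32,192.168.1.0/24 \
    persistent-keepalive=25s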

Are there any obstacles I am overlooking, or is this plan sound? Is there anything more to consider, or does anyone have further suggestions or a better idea?