ramielrowe

[–] ramielrowe@lemmy.world 6 points 1 week ago (1 children)

Realistically, probably not. If your workload is highly memory-bound and sensitive to latency, you would be leaving a little performance on the table. But, I wouldn't stress over it. It's certainly not going to bottleneck your CPU.

[–] ramielrowe@lemmy.world 1 points 3 weeks ago (1 children)

In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But, if the service itself is compromised rather than an individual's credentials, then the application protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But, even that is still not foolproof.
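As a rough illustration of what an application-level role check looks like, here's a minimal sketch in Go (the names, roles, and endpoint are all hypothetical, not pulled from any real controller). Note that the check lives inside the service itself, which is exactly why it stops helping once the service is compromised:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// rolesByUser stands in for whatever identity backend the controller uses.
var rolesByUser = map[string]string{
	"alice": "tenant-a-admin",
	"bob":   "tenant-b-admin",
}

// requireRole only lets a request through when the caller holds the named role.
// The catch from above: this check runs inside the controlling service, so if
// the service itself is compromised, an attacker can simply bypass it.
func requireRole(role string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		user, _, ok := r.BasicAuth()
		if !ok || rolesByUser[user] != role {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/tenant-a/reboot", requireRole("tenant-a-admin",
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "rebooting tenant A nodes")
		}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```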

Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.

[–] ramielrowe@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces. Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor can reach the Kubernetes management interface and exploit a vulnerability in it, they can immediately compromise everything within that Kubernetes cluster. We don't even need a container management platform. Imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor can reach and exploit AAP, they can then compromise everything managed by that AAP instance. The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are significant benefits to using higher-order centralized management solutions.

[–] ramielrowe@lemmy.world 28 points 2 months ago (4 children)

Perhaps as the more experienced smoker, you can be a good friend and offer a lower dose that is more suited for their tolerance. Maybe don't pack a big-ol bong rip for someone who hasn't smoked in months. Chop up that chocolate bar into something a little more manageable. If they wanna buy something, suggest something a little more controllable like a vape. And most of all, if you're pressuring people who are on the fence into smoking, maybe just stop doing that.

[–] ramielrowe@lemmy.world 5 points 2 months ago* (last edited 2 months ago) (1 children)

Yeah, I don't think this is necessarily a horrible idea. It's just that it doesn't really provide any extra security, yet the very first line of the blog is talking about security. It will absolutely provide privacy via pretty good traffic obfuscation, but you still need a good security configuration on the exposed service.

[–] ramielrowe@lemmy.world 34 points 2 months ago* (last edited 2 months ago) (4 children)

If I understand this correctly, you're still forwarding a port from one network to another. It's just that in this case, instead of a port on the internet, it's a port on the Tor network. Which is still just as open, but also a massive calling card for anyone trolling around the Tor network for things to hack.
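For context, the mapping the blog describes boils down to a couple of lines of torrc, roughly like this (the paths and ports are just placeholders):

```
# torrc: publish local port 8080 as port 80 on a .onion address
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```

That local service is exactly as exposed as it would be behind a normal port forward; the only difference is who can find it.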

[–] ramielrowe@lemmy.world 3 points 2 months ago (1 children)

This isn't about social platforms or using the newest-hottest tech. It's about following industry-standard practices. You act like source control is such a pain in the ass and some huge burden, and that I just don't understand. Getting started with git is simple, and setting up an account with a repo host is a one-time thing. I find it hard to believe that you don't already have SSH keys set up, too. What I find more controversial and concerning is your ho-hum opinion on automated testing, and your belief that "most software doesn't do it". You're writing software that you expect people not only to run on their infra, but also to expose to the public internet. On top of that, it needs to protect the traffic between the server on public infra and the client on private infra. There is a much higher expectation of good practices here, and it is clear that you are willingly disregarding basic industry-standard practices.

[–] ramielrowe@lemmy.world 2 points 2 months ago (7 children)

GitHub and GitLab are free, and both even allow private repos for free at this point. Git is practically one of the first tools I install on a dev machine. Likewise, git is the de facto means of package management in golang. It's so built in that module names are repo URLs.
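As a concrete (hypothetical) example, the module name in go.mod is just the repo URL, and dependencies are required by their repo URLs too:

```
// go.mod (module and dependency names are made up for illustration)
module github.com/example/mytool

go 1.22

require github.com/example/somelib v1.2.3
```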

[–] ramielrowe@lemmy.world 5 points 2 months ago (1 children)

Git was literally written by Linus to manage the source of the kernel. Sure, patches are proposed via mailing list, but the actual source is hosted and managed via git. It is literally the gold standard, and source control is a foundational piece of software development. The same goes for testing: not just unit tests, but functional testing too. You absolutely should not be putting off testing.

[–] ramielrowe@lemmy.world 12 points 2 months ago (3 children)

Gotta be honest, downloading security-related software from a random drive is sending off sketchy vibes. Fundamentally, it's no different from a random untrusted git repo. But, I really would suggest using some source control rather than trying to roll your own with diff archives.

Likewise, I would also suggest adding some unit and functional tests. Not only would it help maintain software quality, it would also build confidence in the folks using the software you release.
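As a rough sketch of what that looks like in Go (the function and file names here are made up for illustration, not from your codebase), a test is just a _test.go file that `go test` picks up automatically:

```go
// addr_test.go -- self-contained for illustration; in a real project the
// function under test would live in its own source file.
package main

import (
	"net"
	"strconv"
	"testing"
)

// splitTarget parses a "host:port" string into its parts.
func splitTarget(target string) (string, int, error) {
	host, portStr, err := net.SplitHostPort(target)
	if err != nil {
		return "", 0, err
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return "", 0, err
	}
	return host, port, nil
}

func TestSplitTarget(t *testing.T) {
	host, port, err := splitTarget("example.com:8080")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if host != "example.com" || port != 8080 {
		t.Errorf("got %q:%d, want example.com:8080", host, port)
	}
}
```

Running `go test ./...` exercises every package in the repo, and CI on the repo host can run the same command on every push.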

[–] ramielrowe@lemmy.world 42 points 4 months ago (4 children)

After briefly reading about systemd's tmpfiles.d, I have to ask why it was used to create home directories in the first place. The documentation I read says it's for volatile files. Is a user's home directory considered volatile? Was this something the user set up, or the distro they were using? If the distro, this seems like a lot of ire directed at someone who really doesn't deserve it.
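For reference, a tmpfiles.d entry that creates a home directory would look something like this (the path and user are hypothetical):

```
# /etc/tmpfiles.d/home-alice.conf
# Type  Path         Mode  User   Group  Age
d       /home/alice  0700  alice  alice  -
```

The `d` type just creates the directory if it's missing, and per the man page its contents only become subject to time-based cleanup when an age is specified.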
