Rsync with the -a option is meant to preserve as much as possible.
Thanks for the suggestion. In fact I tried rsync and it works. But is it possible to integrate it into my current workflow? Maybe copying/moving files using a file manager?
I'm asking because with the 3 options I mentioned I may, for example, create mount points in fstab, and from there on everything would be transparent to the user. Would that be possible using rsync?
Secure file transfers frequently trade off some performance for their crypto. You can't have it both ways. (Well, you can, but you'd need hardware crypto offload or end-to-end MACsec, both of which are more exotic use cases.)
rsync is basically a copy command with a lot of knobs and stream optimization. It also happens to be able to invoke SSH to pipeline data over the network, at the cost of SSH encrypting the stream.
Your other two options are faster because of write-behind caching in the protocol and transferring in the clear: you don't bog down the stream with crypto overhead, but you're also exposing your payload.
File managers are probably the slowest of your options because they're a feature of the DE, and there are more layers of calls between your client and the data stream. Plus, they're probably leveraging NFS, Samba, or SSHFS under the hood anyway.
I believe "rsync -e ssh" is going to be your best overall option for secure, fast, and xattrs. SCP might be a close second. SSHFS is a userland application and might suffer some penalties for it.
I'll take a closer look into rsync possibilities and see if it applies to my situation. I appreciate your input.
How much delay could you live with between syncs? If it doesn't need to be immediate, just an end-of-the-day thing, you could cron the rsync with the update flag every so often.
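As a sketch, a nightly crontab entry along those lines might be (paths, schedule, and log location are all placeholders):

```
# m h dom mon dow  command
0 23 * * * rsync -aXu -e ssh /local/share/ user@server:/srv/share/ >> /var/log/nightly-sync.log 2>&1
```

Here -u is the update flag, which skips files that are newer on the receiver.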
Maybe copying/moving files using a file manager?
FileZilla
-or-
Gnome Commander
...but call me quaint. I still like...
mc
... 'cause it always just works. mc can ostensibly preserve attributes, time-stamps, and (with appropriate privilege on the receiving end) ownership of transferred files (using an sftp server supposedly).
You didn't mention rsync, which I think is usually considered standard. I'd look into that.
"rsync -X"
The whole samba filenames thing is configurable. I only use linux systems and I ran into that same issue.
By default Samba seems to mangle file names. Not to mention that Windows doesn't support naming your files whatever you want the way Linux does, so those characters need to be mapped to something else. To solve this I include a few different entries in my Samba config file to fix the issue.
mangled names = no
vfs objects = catia
catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6
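Decoded, those hex pairs map each character Windows forbids in filenames to a visually similar Latin-1 character:

```
# 0x22 " → 0xa8 ¨    0x2a * → 0xa4 ¤    0x2f / → 0xf8 ø
# 0x3a : → 0xf7 ÷    0x3c < → 0xab «    0x3e > → 0xbb »
# 0x3f ? → 0xbf ¿    0x5c \ → 0xff ÿ    0x7c | → 0xa6 ¦
```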
That's just if you choose to go with Samba. I only use it 'cause it was easier to set up than NFS when I tried.
Hey, thanks for taking the time to reply.
Yep, when I tried using Samba I had this catia:mappings configuration in my smb.conf. Thing is, it slightly changes things (two that I specifically remember are ¿ and ¡), sometimes doesn't recognize filenames (I don't remember exactly which chars), etc.
I tried to set up Samba, NFS and sshfs. It took a couple of days to understand each one a little better and, by trial and error, get an idea of their perks. I do appreciate your suggestion, but I don't think Samba is what I'm looking for.
rsync -a src dst forces me to re-enter the credentials frequently
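If the credential prompts are the pain point, SSH key authentication plus connection multiplexing usually removes them. A sketch for ~/.ssh/config (host, address, and user are placeholders):

```
Host myserver
    HostName 192.168.1.10
    User me
    IdentityFile ~/.ssh/id_ed25519
    # Reuse one authenticated connection for subsequent ssh/scp/rsync calls
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

With that in place, `rsync -a src myserver:dst` only authenticates once per ControlPersist window.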
Can you explain what your need is for copying files this frequently? Is this for backups? Do you always want the two sides to stay in sync? If so, something like a distributed filesystem such as gluster/ceph/etc. might work better for you.
Sure. I have a little home server running Linux and 2 or 3 machines that access files shared by this server. I use Plasma on my desktop machines and I rely a lot on tags (just to clarify, Plasma uses xattrs, more specifically user.xdg.tags, to tag files). On the server I already have a couple of scripts that automatically insert some predefined tags on files.
Thing is, when I try to copy and/or move files between server and desktop, depending on the protocol I used to mount the share, I lose this information.
People suggested rsync, and it would be an excellent option if what I wanted was to keep both sides synchronized or something like that. What I actually need is a solution that lets me mount a server share and transfer files from it while preserving their extended attributes, preferably using a file manager (I basically use Dolphin or ranger). No need to keep them synced.
I assume you don't intend to copy the files but use them from a remote host? As security is a concern, I suppose we're talking about traffic over the public network, where (if I'm not mistaken) Kerberos with NFS doesn't provide encryption, only authentication. You can obviously tunnel NFS over SSH or a VPN, and I'm pretty sure you can create a Kerberos ticket which stores credentials locally for longer periods of time and/or read them from a file.
SSH/VPN obviously causes some overhead, but they also provide encryption over the public network. If this is something run on a LAN I wouldn't worry too much about encrypting the traffic, and on my own network I wouldn't worry too much about authentication either. Maybe separate the NFS server onto its own VLAN or firewall it heavily.
I appreciate your help, but note that the article just covers some basics of xattr usage (I already know how to use them) and has no reference to transferring files, which is what I need.
I suspect you use them more extensively than I do. Mine are usually limited to extended ACLs: I use getfacl to generate a dump of all the ACLs of the files and subdirectories I'm transferring or 7zipping, and include that file in the transfer or 7z bundle. Then I use setfacl to apply all those permissions on the receiving end after everything has been copied or extracted.