Renaming a file without copying it; any alternative to OverlayFS? - linux

I want a "virtual" filesystem, like OverlayFS, where I can rename files and folders without copying the whole file. I have an archive with over 800 TB of data whose files need to be renamed, but I want to keep the original folder structure and filenames untouched.
For instance:
I have the 800 TB archive mounted on /mnt/archive.
I want an "overlay" mount on /mnt/archive_renamed.
That way a file such as Data001.bin on /mnt/archive could appear renamed on the overlay mount as something like /mnt/archive_renamed/Data_from_2014/Data_from_Cats.bin, while still referring to Data001.bin and never touching the underlying mount.
OverlayFS would be perfect if it didn't need to copy the whole file when renaming it.
Any clue?

In /mnt/archive_renamed, you can use hardlinks to the original files.
Just do "cp -al /mnt/archive /mnt/archive_renamed" and you'll have to folders pointing to the same files.

Related

create Linux Bash Script to copy files and create directory

I need to create a script that:
copies original documents from any portable hard drive or memory stick to an archive without creating unnecessary duplicates;
copies .doc and .pdf files, but no duplicates if the files are the same.
The script must make a directory if one doesn't already exist,
and report an error if the directory can't be created.
Can anyone help?
Your question is very abstract; maybe you could provide an example.
But I think you are looking for rsync.
rsync -ar dir1/ dir2
This synchronizes directory 1 into directory 2 and preserves group, owner, times, and so on.
Check how to create a script: link
How to copy: link
Portable device: link
Handling .doc / .pdf suffixes: link
Introduction to if: link
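No full script is given in the answer, but here is a minimal sketch of the rsync idea under a few assumptions: the source mount /media/usb and the archive directory /srv/archive are hypothetical placeholders, and rsync's checksum mode is used so files already present with identical content at the same path are skipped rather than duplicated.
#!/bin/bash
SRC=/media/usb        # hypothetical portable drive mount point
DEST=/srv/archive     # hypothetical archive directory
# Make the archive directory if it doesn't already exist; report an error otherwise.
mkdir -p "$DEST" || { echo "Error: cannot create $DEST" >&2; exit 1; }
# Copy only .doc and .pdf files, keeping the directory structure.
# -a preserves group, owner, times; -c compares checksums so identical files
# are skipped; -m prunes directories that end up empty.
rsync -acm \
  --include='*/' --include='*.doc' --include='*.pdf' --exclude='*' \
  "$SRC"/ "$DEST"/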

Copy only file reference using rsync

I'm trying to sync two directories, but with a catch: I want my destination folder to contain a zero-byte file wherever the corresponding full file exists in the source. Is this possible using rsync, or is there any other Linux alternative?
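No answer is recorded for this one, but as a sketch of one non-rsync alternative, assuming GNU find and that empty placeholders created with touch are acceptable (the source and destination paths are placeholders):
cd /path/to/source
# Recreate the directory tree on the destination side
find . -type d -exec mkdir -p /path/to/dest/{} \;
# Create a zero-byte placeholder for every regular file in the source
find . -type f -exec touch /path/to/dest/{} \;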

How to create a copy of a directory on Linux with links

I have a series of directories on Linux and each directory contains lots of files and data. The data in those directories are automatically generated, but multiple users will need to perform more analysis on that data and generate more files, change the structure, etc.
Since these data directories are very large, I don't want several people to make a copy of the original data so I'd like to make a copy of the directory and link to the original from the new one. However, I'd like any changes to be kept only in the new directory, and leave the original read only. I'd prefer not to link only specific files that I define because the data in these directories is so varied.
So I'm wondering if there is a way to create a copy of a directory by linking to the original but keeping any changed files in the new directory only.
It turns out this is what I wanted to do:
cp -al <origdir> <newdir>
It will copy an entire directory and create hard links to the original files. If the original file is deleted, the copied file still exists, and vice-versa. This will work perfectly, but I found newdir must not already exist. As long as the original files are read-only, you'll be able to create an identical, safe copy of the original directory.
However, since you are looking for a way for people to write back changes, UnionFS is probably what you are looking for. It provides a means to combine read-only and read-write locations into one.
Unionfs allows any mix of read-only and read-write branches, as well as insertion and deletion of branches anywhere in the fan-out.
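As a sketch of the same idea using the in-kernel OverlayFS (directory names are placeholders; upperdir and workdir must live on the same writable filesystem, and the kernel needs overlay support):
mkdir -p /data/changes /data/work /data/merged
# lowerdir stays read-only; every change users make lands in upperdir
mount -t overlay overlay \
  -o lowerdir=/data/original,upperdir=/data/changes,workdir=/data/work \
  /data/merged
Users then work in /data/merged, and the original directory is never written to.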
Originally I was going to recommend this (I use it a lot):
Assuming permissions aren't an issue (e.g. only reading is required), I would suggest bind-mounting them into place.
mount -B <original> <new-location>
# or
mount --bind <original> <new-location>
<new-location> must exist as a folder.

rsync copies local file structure onto remote box

I'm trying to upload a .zip file to a location on a remote server.
In my fabfile.py I have this line:
local("rsync files.zip webfaction:~/webapps/app")
This completes without a problem. However when I ssh onto the box, I find that rsync put the files.zip file in
~/webapps/app/Users/kevin/resources/files.zip
whereas I really just want to put it in ~/webapps/app without replicating the local file structure. What can I do to stop rsync from copying over the local directory structure along with the file?
Thanks,
Kevin
rsync does not copy the local folder structure if it is not included in your command (I don't think it will even if you specify it).
Are you sure you got the command correct? If so, I guess it could be something to do with how Python locates the file through the local() method.
Not much help, but I hope it'll provide some clue...
Since you're using Fabric, why aren't you using the put() API call?
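For comparison, a plain rsync call run from a shell, assuming the local path shown in the question, that sends only the file itself into the remote directory (without --relative/-R the local directory structure is not recreated):
# Only files.zip ends up in ~/webapps/app/ on the remote host
rsync -av /Users/kevin/resources/files.zip webfaction:webapps/app/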

How to change a file inside an archive (.ear) file without extracting entire file

I have an .ear file (an archive file, like tar or zip) containing a file that I want to change.
For example, myfile.ear contains 1.txt, and I want to rename 1.txt to 2.txt and possibly also change some of the content inside it (like sed does).
I really want to avoid having to extract myfile.ear, change the file, and compress it again.
Does anyone know a way to achieve this in Linux?
And if it's not possible, I would also like to know why.
Thanks.
EAR files are just JAR files, which are just ZIP files. The ZIP format, IIRC, stores metadata and data interleaved, so a replacement file (which might be larger or smaller than the one it replaces) might not fit in place or would leave a gap; in practical terms, the archive has to be rewritten when you modify it.
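That said, the standard zip tools will do the rewriting for you, so you never have to unpack the whole archive by hand. A minimal sketch, assuming Info-ZIP's zip and unzip are installed (the sed expression is a placeholder, and the .ear is still rewritten behind the scenes):
# Pull out just the one entry, not the whole archive
unzip myfile.ear 1.txt
# Edit and rename the extracted copy
sed -i 's/old/new/' 1.txt    # placeholder sed expression
mv 1.txt 2.txt
# Add the new entry and drop the old one; zip rewrites the .ear internally
zip myfile.ear 2.txt
zip -d myfile.ear 1.txt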
