Linux: How to delete the original file if the shortcut/link is deleted

I believe the question is self-explanatory, so I'll make more efficient use of the body by sharing why I asked it in the first place, hoping to get a better solution than the one I am trying to build and two answers for the price of one.
Basically, I am trying to sync two local directories bi-directionally in a way that respects a kind of .gitignore logic, i.e. ignoring particular files and directories. Better yet, I would love something along the lines of whitelisting!
I am familiar with tools like rsync and unison that get the syncing part done, but not the ignoring/whitelisting.

You can get the original file name and delete it when deleting the symlink. For example:
rm symlink_name "$(readlink -f symlink_name)"
But remember that if there are other symlinks to the same file, then they'll be dangling.
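In a script it can be safer to split this into two steps, resolving the target before the link disappears. A minimal sketch with hypothetical names:

```shell
# Demo (hypothetical paths): remove a symlink and its target together.
# Resolve the target *before* removing the link, since readlink -f
# needs the link to still exist.
tmp=$(mktemp -d)
echo data > "$tmp/original.txt"
ln -s "$tmp/original.txt" "$tmp/symlink_name"

link="$tmp/symlink_name"
target=$(readlink -f -- "$link")
rm -- "$link"
if [ -e "$target" ]; then
    rm -- "$target"
fi
```

The `--` guards against link or target names that begin with a dash.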

Related

Unix create multiple files with same name in a directory

I am looking for some kind of logic on Linux where I can place files with the same name in a directory or file system.
For example, I create a file abc.txt; the next time any process creates abc.txt, it should automatically check and create the file as abc.txt.1, then the next time abc.txt.2, and so on...
Is there a way to achieve this?
Any logic or third-party tools are also welcome.
You ask,
For e.g. i create a file abc.txt, so the next time if any process
creates abc.txt it should automatically check and make the file named
as abc.txt.1 should be created
(emphasis added). To obtain such an effect automatically, for every process, without explicit provision by processes, it would have to be implemented as a feature of the filesystem containing the files. Such filesystems are called versioning filesystems, though typically the details are slightly different from what you describe. Most importantly, however, although such filesystems exist for Linux, none of them are mainstream. To the best of my knowledge, none of the major Linux distributions even offers one as a distribution-supported option.
Although it's a bit dated, see also Linux file versioning?
You might be able to approximate that for many programs via a customized version of the C standard library, but that's not foolproof, and you should not expect it to have universal effect.
It would be an altogether different matter for an individual process to be coded for such behavior. It would need to check for existing files and choose an appropriate name when opening each new file. In doing so, some care needs to be taken to avoid related race conditions, but it can be done. Details would depend on the language in which you are writing.
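For a process doing this itself, a sketch of the race-avoidance idea in shell: the noclobber option (`set -C`) makes a `>` redirection fail instead of truncating an existing file, so the existence check and the creation happen in one step. The function name is hypothetical:

```shell
# Sketch: create abc.txt, abc.txt.1, abc.txt.2, ... picking the first
# free name. set -C (noclobber) makes ">" fail on an existing file, so
# no other process can grab the same name between check and create.
next_free_name() {
    base=$1
    name=$base
    n=0
    ( set -C
      until true > "$name" 2>/dev/null; do
          n=$((n + 1))
          name=$base.$n
      done
      printf '%s\n' "$name" )
}
```

The subshell keeps noclobber from leaking into the caller's shell; the function prints the name it actually created.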
You can use BASH brace expansion to achieve this. For example, if I wanted to make 10 files, all with the same name but each with a unique number, I would do the following:
# touch my_file{01..10}.txt
This creates 10 files numbered from 01 all the way to 10. This method is also handy for looping over files in a sequence, or if you are also creating directories.
Now, if I am reading your question right, you are asking that when you move or create a file in a directory, a script should automatically create a new file name for you. If that is the case, just use a test: if the file is already there, move it aside and mark it. Personally, I use timestamps to do so.
Logic:
# The [ -f ] test checks whether the file is present
if [ -f "$MY_FILE_NAME" ]; then
    # If the file is present, move it aside with the PID ($$) appended;
    # that way the name will always be unique.
    # Note: the braces are required - $MY_FILE_NAME_$$ would expand the
    # (empty) variable MY_FILE_NAME_ instead.
    mv "$MY_FILE_NAME" "${MY_FILE_NAME}_$$"
    mv "$MY_NEW_FILE" .
else
    # Move or make the file here
    mv "$MY_NEW_FILE" .
fi
As you can see the logic is very simple. Hope this helps.
Cheers
I don't know about your particular use case, but you may want to take a look at logrotate:
https://wiki.archlinux.org/index.php/Logrotate

Remove a lot of UUID format named files using rm

I have a lot of files in a directory on a Linux environment.
The problem is that those files are mixed with a lot of UUID-named files that who knows how got there.
Is there a way to issue an "rm" command that deletes those files without the risk of removing the other files? (None of the other files have a UUID-format filename.)
I think it has something to do with defining how many characters come before each "-" symbol, so something along the lines of "rm 8chars-4chars-4-4-12", but I don't know how to say that to rm. I only know "rm somefolder/*", using * to delete its contents, but that's it.
Thanks in advance.
Actually solved it!
It was as easy as using the "?" wildcard, which matches exactly one character.
So, in this particular case:
rm -v ????????-????-????-*   # "remove (verbosely) 8-4-4-whatever"
That way, it deletes only files whose names follow this format.
More information here: http://www.linfo.org/wildcard.html
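Note that `?` also matches non-hex characters, so a file like `whatever1-abcd-1234-junk` would be caught too. If you want a stricter match, GNU find can filter with a regular expression (the -regextype flag below is GNU-specific); a sketch:

```shell
# Sketch: list files whose names are exactly a UUID (hex digits only).
# Review the output first, then swap -print for -delete.
find . -maxdepth 1 -type f \
    -regextype posix-extended \
    -regex '\./[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}' \
    -print
```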

mv command failing because destination already exists

Sorry if this has been answered - I tried to search, but didn't find anyone with quite the same issue..
I'm trying to basically move all files from one drive (mountpoint) to another.. I initially used
mv /mnt/old1/* /mnt/disk1
This SEEMED to be working, but I had a power failure in the middle of it, and when I re-issue the command it runs into trouble because the directory structure already exists at the destination, so it no longer moves the source files.
Basically, at this point, I'm just trying to merge two directory structures into one. I guess I could cp the structure, but then I would have no way to know whether a file was skipped, as I do with mv: if a file is still on the source drive, I can assume it wasn't moved.
Is there a better way to do this? I've never used rsync, but from what I'm reading, perhaps it is a better option?
Any help would be greatly appreciated - I've got millions of files (18+tb) to move and I don't want to inadvertently miss something..
Thanks!
Steve
I just tried the following, and it works:
mv -ui /old/* /new/
-u for update mode (skip the move when the destination file is newer)
-i to prompt if the destination exists (just as a double check; it may be unnecessary)
I am not sure whether the trailing slash after /new matters. Afterwards, the files remaining in /old/ are the ones that were not moved.
Hope this can help :)

ln has unexpected behavior when using a wildcard

I am planning on filing a bug on coreutils for this, as this behavior is unexpected, and there isn't any practical use for it in the real world... Although it did make me chuckle at first, as I never even knew one could create files with a wildcard in their filename. How practical is a filename with a wildcard in it? Who even uses such a feature?
I recently ran a bash command similar to this:
ln -s ../../avatars/* ./
Unfortunately, I did not add the correct number of "../", so rather than giving me an informative error, it merely created a link to a "*" file which does not exist. I would expect that behavior only from this:
ln -s "../../avatars/*" ./
as that is the proper way to address such a filename.
Before I submit a bug on coreutils, I would like the opinion of others: is there any practical use for this behavior, or should ln provide a meaningful error message?
And yes, I know one can just link to the entire directory rather than each file within, but I do not wish newly created files to be replicated to the old location. There are only a few files in there that are being linked right now.
Some might even say that using a wildcard when symlinking is bad practice. However, I know the contents of the directory exactly, and this is much quicker than doing each file by hand.
This isn't a bug.
In the shell, if you use a wildcard pattern that doesn't match anything, then the pattern isn't substituted. For example, if you do this:
echo *.c
If you have no .c files in the current directory, it will just print "*.c". If there are .c files in the current directory, then *.c will be replaced with that list.
For many commands, if you specify files that don't exist it is an error, and you get a message that seems to make sense, like "cannot access *.c". But for ln -s, since it is a symbolic link, the actual file doesn't have to exist, and it goes ahead and makes the link.
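If you want the error the asker expected, bash (not POSIX sh) has a failglob option that turns an unmatched glob into an error instead of passing the literal pattern through. A sketch, with the error path demonstrated in a subshell so the script itself keeps going:

```shell
# Sketch (bash-specific): with failglob set, an unmatched pattern is
# reported as an error and the command never runs, so a mistyped path
# fails loudly instead of creating a symlink literally named "*".
shopt -s failglob
( ln -s ../../avatars/* ./ ) 2>/dev/null || echo "no match - check the path"
```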

Copying just files not present with SCP

I need to move my web server directory to another server. I'd like to do it with a simple "scp -r destination:destdirectory". But in the meantime the directory keeps filling up with other stuff, so I'll have to take the old server down while I move the newest files to the new one. How can I do an scp that writes just the differences? That way it won't take much time, and I won't have to take the website down for too long!
Probably not at all, or only with pain. But if you have the possibility to use rsync, just do that. It automatically skips files that haven't changed, and for changed files it transfers only the differences.