I have a lot of files in a directory on a Linux environment.
The problem is that they are mixed in with a lot of UUID-named files that got there who knows how.
Is there a way to issue an "rm" command that deletes those files without the risk of removing the others? (None of the other files have a UUID-formatted filename.)
I think it has something to do with defining how many characters come before each "-" symbol, so something along the lines of "rm 8chars-4chars-4-4-12", but I don't know how to express that to rm. I only know "rm somefolder/*", using * to delete a folder's contents, but that's it.
Thanks in advance.
Actually solved it!
It was as easy as using the "?" wildcard, which matches exactly one character.
So, in this particular case:
rm -v ????????-????-????-*   # this says "remove (verbosely) 8-4-4-whatever"
So, that way, it deletes only files whose names follow this format.
More information here: http://www.linfo.org/wildcard.html
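If you want to be stricter and match the full 8-4-4-4-12 hex shape, a GNU find sketch like the following should also work; it assumes lowercase hex digits, and you can swap -delete for -print first to preview what would be removed:
find . -maxdepth 1 -regextype posix-extended \
    -regex '\./[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' -delete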
On a Linux volume that is part of a NAS holding many TB of data, some files were created from macOS, and some of those uploaded files seem to include characters in their filenames that cannot be reproduced via the FTP or SMB protocols. These files appear as e.g. "picture_name001.jpg", where the "" probably stands for a colon or a slash.
I can search for "" and found that it applies to 2171 files in scattered locations on the volume. That is far too many to find and correct each file name manually.
I thought I could connect to the NAS via SSH and simply loop through each directory, doing an automated replace of the "" with "_", but this doesn't work:
for file in **; do mv -- "$file" "${file///_}"; done
This attempt throws an error on the first matching item:
mv: can't rename '120422_LAXJFK': No such file or directory
So obviously the substitute character displayed as "" is not the way to address the file or directory, as it refers to a name that doesn't actually exist in the volume index.
(A) How do I find out whether "120422_LAX:JFK" or "120422_LAX/JFK" is meant here, and (B) how do I escape these invalid characters so that I can eventually rename all of those names automatically to, for example, "120422_LAX_JFK"?
Is there, for example, a way to get a numerical file ID from the name and then rename the file by that number whenever its name contains ""?
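To illustrate what I mean by a numerical ID, something along these lines, where the inode number 123456 is just a placeholder:
ls -i                                                              # list inode numbers next to the names
find . -maxdepth 1 -inum 123456 -exec mv -- {} 120422_LAX_JFK \;   # rename the file with that inode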
I think the problem is that different character codes can hide behind this "". When the system can't represent a character (for example, because the given encoding is not supported), it is automatically replaced by some default placeholder character (in your case ""). But the actual character code is still what is stored in the name. So when you try for file in **; do mv -- "$file" "${file///_}"; done, the system can't tell which code the displayed "" stands for.
I think this problem can be solved by making the character encodings on both devices (the Mac and the NAS) compatible, and ideally identical.
Hope this helps.
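To see which bytes actually hide behind the "", you could dump the raw names, and then rename by a character class instead of the placeholder; a rough sketch in bash (note the class also catches spaces, so narrow it if that is not what you want):
ls -b                                              # GNU ls escapes unprintable bytes; a busybox ls may lack -b
for f in ./*; do printf '%s\n' "$f" | od -c; done  # raw bytes of every name
# replace anything that isn't a plain filename character with '_'
for f in *[!A-Za-z0-9._-]*; do mv -- "$f" "${f//[!A-Za-z0-9._-]/_}"; done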
I believe the question is self-explanatory, so I am going to make more efficient use of the body by sharing why I asked it in the first place, in the hope of getting a better solution than the one I am currently attempting: two for one.
Basically, I am trying to sync two local directories bi-directionally in a way that respects a kind of .gitignore logic, i.e. the sync should ignore particular files and directories. Better yet, I would love something along the lines of whitelisting!
I am familiar with tools like rsync and unison that get the syncing part done, but not the ignoring/whitelisting.
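For illustration, rsync's filter rules can at least express a whitelist for one direction; a rough sketch, where dirA/, dirB/ and the *.jpg pattern are placeholders (running it again with the arguments swapped gives a crude two-way pass, without deletions):
rsync -av --include='*/' --include='*.jpg' --exclude='*' dirA/ dirB/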
You can get the original file name and delete it when deleting the symlink. For example:
rm symlink_name $(readlink -f symlink_name)
But remember that if there are other symlinks to the same file, then they'll be dangling.
I have the Cygwin packages library installed on my system (Win7 x64) at C:\Cygwin64\.
That directory contains over 185,000 files, and its size passed 5 GB this week, and that is without counting the packages source directory.
Now I want to reduce that size, and of course I am going to uninstall some packages that I no longer need. But first I want to ask about whether I can delete a specific directory located at C:\cygwin64\usr\share.
(Please forgive my ignorance if my question is silly.)
While I was trying to figure out the cause of that large file count, I noticed that this directory in particular holds over 90,000 files!
I don't know what that directory is used for, but could someone please tell me whether I can delete that folder safely, without affecting the installed packages? Thanks :)
I cannot speak for the entirety of the folder, but awk uses it for include files, which I would miss.
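If you are in doubt about any particular file or subdirectory under /usr/share, cygcheck can tell you which installed package owns it before you delete anything; the path below is just an example:
cygcheck -f /usr/share/awk/assert.awk   # prints the owning package, e.g. gawk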
I am planning on filing a bug on coreutils for this, as this behavior is unexpected, and there isn't any practical use for it in the real world... Although it did make me chuckle at first, as I never even knew one could create files with a wildcard in their filename. How practical is a filename with a wildcard in it? Who even uses such a feature?
I recently ran a bash command similar to this:
ln -s ../../avatars/* ./
Unfortunately, I did not add the correct number of "../", so rather than providing me with an informative error, it merely created a link to a "*" file which does not exist. I would expect only this to do that:
ln -s "../../avatars/*" ./
As this is the proper way to address such a filename.
Before I submit a bug on coreutils, I would like the opinion of others. Is there any practical use for this behavior, or should ln provide a meaningful error message?
And yes, I know one can just link to the entire directory, rather than each file within, but I do not wish newly created files to be replicated to the old location. There are only a few files in there that are being linked right now.
Some might even say that using a wildcard when symlinking is bad practice. However, I know the contents of the directory exactly, and this is much quicker than doing each file by hand.
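(A sketch of an alternative that fails loudly when the source path is wrong, using the same example path and assuming GNU or BSD find:)
find ../../avatars -maxdepth 1 -type f -exec ln -s {} . \;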
This isn't a bug.
In the shell, if you use a wildcard pattern that doesn't match anything, then the pattern isn't substituted. For example, if you do this:
echo *.c
If you have no .c files in the current directory, it will just print "*.c". If there are .c files in the current directory, then *.c will be replaced with that list.
For many commands, specifying files that don't exist is an error, and you get a message that seems to make sense, like "cannot access *.c". But for ln -s, since it creates a symbolic link, the target doesn't have to exist, so it goes ahead and makes the link.
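If you want the shell itself to catch this, bash's failglob option turns an unmatched pattern into an error instead of passing the literal "*" through; a quick sketch:
shopt -s failglob
ln -s ../../avatars/* ./   # bash now aborts with "no match: ../../avatars/*"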
I've written a very basic shell script that moves a specified file into a dustbin directory. The script is as follows:
#!/bin/bash
#move items to dustbin directory
mv "$#" ~/dustbin/
echo "File moved to dustbin"
This works fine for me: any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that moves a file from the dustbin directory back to its original directory. I know I could easily write a script that moves it back to a location specified by the user, but I would prefer one that moves it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway: what will you do when a user deletes two files in different directories but with the same name, e.g. /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
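A minimal sketch of that logfile idea (the log location ~/dustbin/.log and the timestamp prefix are my own choices here, and filenames containing tabs or newlines are not handled):
#!/bin/bash
# delete: move each argument into the dustbin and log where it came from
mkdir -p ~/dustbin
for f in "$@"; do
    orig=$(cd "$(dirname "$f")" && pwd)/$(basename "$f")
    binname=$(date +%Y%m%d%H%M%S)-$(basename "$f")
    mv -- "$f" ~/dustbin/"$binname"
    printf '%s\t%s\n' "$binname" "$orig" >> ~/dustbin/.log
done
And the matching restore script, which looks up the dustbin name in the first column:
#!/bin/bash
# restore: read the original path (column 2) for the given dustbin name (column 1)
orig=$(awk -F'\t' -v n="$1" '$1 == n { print $2; exit }' ~/dustbin/.log)
[ -n "$orig" ] && mv -- ~/dustbin/"$1" "$orig"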
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see that some, but not all, files have extended attributes, so I'm not sure how Apple does it.
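On macOS, a rough sketch using the xattr tool might look like this (the attribute name com.example.origdir is made up for the example):
# remember where the file lived, then move it to the dustbin
xattr -w com.example.origdir "$(dirname "$1")" "$1"
mv -- "$1" ~/dustbin/
# later, read the attribute back and restore
dest=$(xattr -p com.example.origdir ~/dustbin/"$(basename "$1")")
mv -- ~/dustbin/"$(basename "$1")" "$dest/"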
You need to somehow encode the source directory in the file name. I think the easiest would be to change the filename in the dustbin directory, so that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
And when you copy it back, your script needs to parse the file name and reconstruct the path from the part after the |.
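A sketch of that naming scheme (it assumes directory names contain no underscores, otherwise the reverse mapping is ambiguous, and the leading slash ends up as a leading underscore):
# delete: append the flattened directory after a '|'
src=/home/user/music/song.mp3
mv -- "$src" ~/dustbin/"$(basename "$src")|$(dirname "$src" | tr / _)"
# restore: everything after the last '|' is the directory; turn '_' back into '/'
f='song.mp3|_home_user_music'
mv -- ~/dustbin/"$f" "$(printf '%s' "${f##*|}" | tr _ /)/${f%%|*}"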
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd run find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the candidates for the user to select from. You could use ~/dustbin/$deletiondate as the root if you wanted to make it possible to roll back to earlier versions.
You could run a cron job that periodically removes old files and deletes the directories once they are empty.
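A sketch of the mirrored-path idea (it assumes $file holds an absolute path and that ~/dustbin already exists):
# delete: recreate the original directory structure under the dustbin
mkdir -p ~/dustbin"$(dirname "$file")"
mv -- "$file" ~/dustbin"$file"
# restore: find candidates by basename and strip the dustbin prefix
find ~/dustbin -name "$name" | while IFS= read -r f; do
    echo "candidate destination: ${f#"$HOME"/dustbin}"
done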