Using find with the -L option - Linux

I have this:
mylink -> myfile
When I do:
find -L . -name 'mylink'
I get:
./mylink
I don't understand why this is so, given this from the man page:
-L : Follow symbolic links. When find examines or prints information about files, the information used shall be taken from the properties of the file to which the link points, not from the link itself (unless it is a broken symbolic link or find is unable to examine the file to which the link points).
Based on the above I was expecting the following behavior for my example case: find starts the search. It encounters mylink. Since -L is in effect, it dereferences the link and gets the name of the pointed-to file, 'myfile'. That name does not match the pattern 'mylink', so nothing is reported. What's happening?

The name is not a property of the file; it is a property of the directory entry that points to it. Even with -L, find matches -name against the name of the link itself.
If you want to match the contents (target path) of a symlink, use -lname.
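For example, applied to the setup in the question (a minimal sketch, assuming the same mylink -> myfile layout):

find . -lname 'myfile'

This matches symbolic links whose target path is myfile, so it prints ./mylink.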

The 'following of links' applies when there is a link to a directory.
To find, a link to a regular file is just a file itself; without -L, a link to a directory is also just a file.
Only when -L is added will find recurse into the linked-to directory.
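A quick way to see this (a hypothetical demo; realdir, inner and dirlink are made-up names):

mkdir -p /tmp/realdir; touch /tmp/realdir/inner
ln -s /tmp/realdir dirlink
find . -name inner      # no output: dirlink is treated as a plain file
find -L . -name inner   # prints ./dirlink/inner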


Is there a way to undo a batch-rename of file extensions?

Ok so I kinda dropped the ball. I was trying to understand how things work. I had a few html files on my computer that I was trying to rename as txt files, strictly as a learning exercise. Following the instructions I found here, I used this code:
for file in *.html
do
    mv "$file" "${file%.html}.txt"
done
produced this error:
mv: rename *.html to *.txt: No such file or directory
Long story short, I ended up going rogue and renamed the html files, as well as a lot of other non-html files, to txt files. So now I have files labeled like
my_movie.mp4.txt
my_song.mp3.txt
my_file.txt.txt
This may be a really dumb question, but... is there a way to check if a file has two extensions and, if yes, remove the last one? Or any other way to undo this mess?
EDIT
Doing find . -name "*.*.txt" -exec echo {} \; | cat -b seems to tell me what was changed and where it is located. The cat -b part is not necessary, but I like it. This still doesn't fix what I broke, though.
I'm not sure if the terminal can check for extensions "twice", but you can check for . in every name, and if there is more than one occurrence of ., then your file has more than one extension. You can then cut the last extension off by finding the first occurrence of . when scanning the string backwards (or the last one when scanning forwards), as sketched below.
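A minimal bash sketch of that idea, assuming every damaged file matches *.*.txt (at least two dots, ending in .txt):

for file in *.*.txt; do
    mv -n "$file" "${file%.txt}"   # strip the trailing .txt; -n avoids overwriting
done

${file%.txt} removes only the last .txt, so my_movie.mp4.txt becomes my_movie.mp4 again.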
I have a faster option for you if you can use Python. You can strip the last extension with:
import os

for file in list_of_files:
    os.rename(file, os.path.splitext(file)[0])
which turns your file.txt.txt into file.txt.
Example:
You wrote that your command tells you what has changed, so just take those changed files and dump them into a file (one path per line). Then you can easily run this:
with open('<path to list>') as f:
    list_of_files = f.readlines()

for file in list_of_files:
    os.rename(file.strip('\n'), os.path.splitext(file.strip('\n'))[0])
If not, then you'd need to get the list from python:
import os

results = []
for root, folder, filenames in os.walk(<your path to folder>):
    for filename in filenames:
        if filename.endswith('.txt.txt'):
            results.append(os.path.join(root, filename))
With this you get a list of files ending in .txt.txt, with paths like <your folder>/<path_to_file>.
Now set path to the directory you passed to os.walk() minus the folder's own name (the folder is already part of each list entry). For example, with os.walk('/home/me/directory'), use path = '/home/me/', since each res in the list already looks like directory/...:
path = ''  # set the parent path here, e.g. '/home/me/'
for res in results:
    file = os.path.join(path, res)
    os.rename(file, os.path.splitext(file)[0])
Depending on which files you want to find, change the '.txt.txt' in filename.endswith('...') to whatever you like; os.path.splitext() then gives the file's name without its last extension, which in your case means the extra extension you don't want is stripped.

Linux rename files based on input file

I need to rename hundreds of files in Linux from the command line, changing the unique identifier in each. For the sake of example, I have a file containing:
old_name1 new_name1
old_name2 new_name2
and need to change the names from the old to the new IDs. The file names contain the IDs, but have extra characters as well. My plan is therefore to end up with:
abcd_old_name1_1234.txt ==> abcd_new_name1_1234.txt
abcd_old_name2_1234.txt ==> abcd_new_name2_1234.txt
Use of rename is obviously fairly helpful here, but I am struggling to work out how to iterate through the file of desired name changes and pass each pair as input to rename.
Edit: To clarify, I am looking to make hundreds of different rename commands; the different changes that need to be made are listed in a text file.
Apologies if this is already answered; I've had a good hunt, but can't find a similar case.
rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' *.txt
Should work, depending on whether you have that package installed. Also have a look at qmv (from renameutils).
If you want more options, use e.g.
shopt -s globstar
rename 's/(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' folder/**/*.txt
(finds all txt files in subdirectories of folder; the ^ anchor is dropped here because rename now sees whole paths, not just basenames), or, to do the same using GNU find:
find folder -type f -iname '*.txt' -exec rename 's/(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' {} \+
while read -r old_name new_name; do
    rename "s/$old_name/$new_name/" *"$old_name"*.txt
done < file_with_names
In this way, you read the IDs from file_with_names and rename the files replacing $old_name with $new_name leaving the rest of the filename untouched.
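If rename is not available, here is a plain-bash sketch of the same loop (assuming the same two-column file_with_names, and that each old ID occurs only once in a filename):

while read -r old new; do
    for f in *"$old"*.txt; do
        mv -n "$f" "${f/"$old"/$new}"   # replace the first occurrence of $old
    done
done < file_with_names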
I was about to write a PHP function to do this for myself, but I came upon a faster method:
ls the directory and copy & paste the contents from the terminal window into Excel. (You may need an online line-break removal or addition tool.) Assuming your file names are in column A, use one of the following formulas in another column:
="mv "&A1&" prefix"&A1&"suffix"
or
="mv "&A1&" "&substitute(A1,"jpeg","jpg")&"suffix"
or
="mv olddirectory/"&A1&" newdirectory/"&A1
Back in Linux, create a new file with nano rename.txt and paste in the values from Excel. They should look something like this:
mv oldname1.jpg newname1.jpg
mv oldname2.jpg newname2.jpg
then close out of nano and run the following command:
bash rename.txt
Bash just runs every line in the file as if you had typed it, and you are done! This method gives verbose output on errors, which is handy.
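The same command file can also be generated without Excel; a sketch using ls and awk (files.txt and rename.txt are made-up names, and prefix/suffix mirror the first formula above):

ls > files.txt
awk '{ print "mv " $0 " prefix" $0 "suffix" }' files.txt > rename.txt

Like the spreadsheet version, this breaks on file names containing spaces.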

Unix: finding a string within a directory and listing only its associated file names

I have been working on this for quite some time and decided to ask for some help. I'm trying to use a command to find multiple occurrences of a function (basically a string) within a directory (that has multiple files), and I would like to view only the names of the files in which the string is found.
Let's say the directory I want to search, filled with multiple .h and .cpp files, is:
~/Project/Files
and I was looking for occurrences of a function called 'doThis'
So far I have tried:
grep -r doThis ~/Project/Files
But I get the path and where it occurs in the file; I only need the file names.
Also, grep -f won't work, because I get an error message saying "No such file or directory", and when using just grep I get an error message saying "path is a directory".
Any help would be great. Thanks guys!
Simply use the -l (--files-with-matches) switch ;)
So :
grep -rl foobar dir
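Applied to the question (same directory and function name as above):

grep -rl doThis ~/Project/Files

This prints each matching file's name once and suppresses the matched lines themselves.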

zip command not working

I am trying to zip files from a shell script. I am using the following command:
zip ./test/step1.zip $FILES
where $FILES contains all the input files. But I am getting a warning as follows:
zip warning: name not matched: myfile.dat
and one more thing I observed: the file that comes last in the list of files in the folder gets the above warning, and that file is not added to the zip.
Can anyone explain why this is happening? I am new to the shell script world.
zip warning: name not matched: myfile.dat
This means the file myfile.dat does not exist.
You will get the same error if the file is a symlink pointing to a non-existent file.
As you say, whatever file is at the end of $FILES will not be added to the zip, and produces the warning. So I think something's wrong with the way you create $FILES. Chances are there is a newline, carriage return, space, tab, or other invisible character at the end of the last filename, resulting in a name that doesn't exist. Try this for example:
for f in $FILES; do echo :$f:; done
I bet the last line will be incorrect, for example:
:myfile.dat :
...or something like that instead of :myfile.dat: with no characters before the last :
UPDATE
If you say the script started working after running dos2unix on it, that confirms what everybody suspected already: somehow there was a carriage return at the end of your $FILES list.
Try echo "$FILES" | od -c; the od -c output will show any \r carriage return explicitly.
Another possible cause that can generate a zip warning: name not matched: error is having any of zip's environment variables set incorrectly.
From the man page:
ENVIRONMENT
    The following environment variables are read and used by zip as described.
    ZIPOPT
        contains default options that will be used when running zip. The contents of this environment variable will get added to the command line just after the zip command.
    ZIP
        [Not on RISC OS and VMS] see ZIPOPT
    Zip$Options
        [RISC OS] see ZIPOPT
    Zip$Exts
        [RISC OS] contains extensions separated by a : that will cause native filenames with one of the specified extensions to be added to the zip file with basename and extension swapped.
    ZIP_OPTS
        [VMS] see ZIPOPT
In my case, I was using zip in a script and had the binary location in an environment variable ZIP so that we could change to a different zip binary easily without making tonnes of changes in the script.
Example:
ZIP=/usr/bin/zip
...
${ZIP} -r folder.zip folder
This is then processed as:
/usr/bin/zip /usr/bin/zip -r folder.zip folder
And generates the errors:
zip warning: name not matched: folder.zip
zip I/O error: Operation not permitted
zip error: Could not create output file (/usr/bin/zip.zip)
The first warning appears because zip is now trying to add folder.zip to the archive instead of using it as the archive. The second and third appear because zip is trying to use the file /usr/bin/zip.zip as the archive, which is (fortunately) not writable by a normal user.
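The fix in that situation is simply to hold the binary path in a variable name zip does not read; for example (ZIP_BIN is a made-up name):

ZIP_BIN=/usr/bin/zip
${ZIP_BIN} -r folder.zip folder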
Note: This is a really old question, but I didn't find this answer anywhere, so I'm posting it to help future searchers (my future self included).
eebbesen hit the nail on the head in his comment for my case (but I cannot vote on a comment). Another possible reason, missed in the other comments, is a file exceeding the file size limit (4 GB).
I converted my script for the Unix environment using the dos2unix command and executed my script as ./myscript.sh instead of bash myscript.sh.
I just discovered another potential cause for this: if the permissions of the directory or a subdirectory don't allow zip to reach the file, it will report this error. In fact, if you run chmod -R 444 on the directory and then try to zip it, you will reproduce this error and also get a "stored 0%" report, like this:
zip warning: name not matched: borrar/enviar
adding: borrar/ (stored 0%)
Hence, try changing the permissions of the files. If you are trying to send them through email, keep in mind that email filters (like Gmail's) refuse to send executables, and that making permissions very strict before zipping to work around such filters can itself cause the "name not matched" error you are reporting.
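A sketch of the recovery, assuming the borrar directory from the error above: restore read permission on the files and traverse permission on the directories before zipping:

chmod -R u+rX borrar   # capital X adds execute only to directories
zip -r borrar.zip borrar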
Spaces are not allowed: the command will fail if there is more than one file in $FILES unless you put them in a loop, as sketched below.
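For example, a sketch of such a loop (using the same $FILES variable as in the question):

for f in $FILES; do
    zip ./test/step1.zip "$f"   # add the files one at a time
done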
I also encountered this issue. In my case, the line separator in my zip shell script was CRLF, which caused the problem. Using LF fixed it.

linux find command is not working properly

I am using Linux (Ubuntu). I am trying to find some files, but it is not working properly.
I have created some files in my directory structure, for example: World/India/Maharashtra/Pune/filename.xml
When I use the find command like:
find /home/lokesh/Desktop/Testing_India2/Test/World/India/Maharashtra/ -name filename*.xml -mmin -3000
It is giving the result perfectly.
But, when I am using the same command at "World" or "India" level:
find /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename*.xml -mmin -3000
it does not give any result.
I have lots of directories at the "India" level as well as at the "Maharashtra" level, and possibly more directories inside Maharashtra's inner directories. I have to find every matching file created in any of these directories.
All of the folders are mounted from different machines (some states from one machine, some from another).
If someone knows how to solve this problem, please reply as soon as possible.
Double-quote your search string and add -L to make find follow symbolic links:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name "filename*.xml" -mmin -30000
This is something I ran into earlier today as well when using the * wildcard: I couldn't get find to keep traversing the subdirectories unless I escaped the * with a backslash.
Give this a try:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename\*.xml -mmin -30000
Yes, as mentioned, you have to double-quote your -name argument or put a backslash before the *. The reason it does not work from one directory but works fine in others is that the * character is used for filename generation (globbing) by your shell, and this happens before the find command is executed. Therefore, if a file in your current directory matches the filename*.xml pattern, the shell substitutes it before find runs, which is not what you want. On the other hand, if nothing in the current directory matches, the * character is passed on to find unmodified. By quoting, you protect the string from shell filename generation.
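A quick way to see the difference (a hypothetical demo, run in an otherwise empty directory):

touch filename1.xml
echo filename*.xml     # unquoted: the shell expands it to filename1.xml
echo "filename*.xml"   # quoted: the literal pattern reaches the command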
Regards
