How to exclude integers from all find command results - linux

I feel like I'm so close to the answer looking through the forums here... I'm successfully using the following command:
sudo find . -mindepth 1 -maxdepth 4 -type d -print0 >structure.txt
to get a text file of our file structure. The issue is that I need to prune out any file directories that have integers. We have THOUSANDS of sub-directories that our app creates that are numbered. Example:
Blue\clientfiles
Blue\clientfiles\saturn
Blue\clientfiles\saturn\moon
Blue\clientfiles\saturn\moon\33
Blue\clientfiles\saturn\moon\34
Blue\clientfiles\saturn\moon\35
Blue\documents
Blue\documents\1001
Blue\documents\1002
Blue\documents\1003
Blue\ftp
Blue\ftp\consumed
Blue\ftp\consumed\202
Blue\ftp\consumed\203
Blue\ftp\consumed\204
Blue\system
Blue\system\007
Blue\system\008
As you can see, part of the problem is that the depth varies... not just in the "Blue" directory; the Red directory might have different depths as well. The only constant is that I do not need ANY of the numbered directories. I can't figure out the proper prune syntax to exclude the numbered dirs.
Any insight would be appreciated!! ~R

This is really a unix.stackexchange question. Try excluding all directories that end with a digit:
egrep -v "[0-9]$"
(this filters whole lines, so it is not compatible with -print0; use -print instead)
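A sketch of how this could be combined with the command from the question, assuming GNU find and that the unwanted directories are purely numeric (the output file name is just the one used above):
# Newline-delimited output (-print instead of -print0) so grep can filter whole lines:
sudo find . -mindepth 1 -maxdepth 4 -type d -print | grep -Ev '/[0-9]+$' > structure.txt
# Or keep the filtering inside find itself; GNU find's -regex matches the whole path:
sudo find . -mindepth 1 -maxdepth 4 -type d ! -regex '.*/[0-9]+' -print > structure.txt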

Related

Find and move files based on filenames in txt file one-liner

I'm sure I had a working one-liner that allowed me to search a directory (or .) for files whose names match names in a txt file and copy these to a new directory.
Somehow I cannot get it to work - any help please.
Sorry if this is a duplicate - I have really searched for an answer (here and elsewhere), but cannot find a solution.
foo/movehere/sample.txt file:
141516
141619
Files I want to find and move, i.e.:
foo/folder/folder2/141516_S2_R1.fastq.gz
foo/folder/folder2/141619_S2_R1.fastq.gz
Where I want to move them:
foo/movehere/
My current (non-functioning) one-liner:
while read -r FILE; do find . -name "$FILE*.fastq.gz" -type f -exec cp {} /foo/movehere/ \;;done </foo/movehere/sample.txt
There are some errors in the oneliner. It still does not work.
You can use eval in your code:
SEARCH="-name '$FILE*.fastq.gz'"
eval "find . $SEARCH -type f -exec cp '{}' /foo/movehere/ \;"
Security note: do not put user-supplied data into eval.
Not sure if I should delete the post - but I'll leave my solution here in case anyone else encounters the exact same problem.
Still not 100% sure I understand why it failed, but I got the one-liner working by copying all the sample names from the txt file to an unedited file with no suffix.
I guess some (hidden) "\r" characters in the txt file messed up "$FILE", so that it searched for something like this:
151617*fastq.gz\r
Perhaps someone with a better understanding of terminal scripts can confirm this.
EDIT 190128: happened across my old question, and just in case anyone struggles with something similar: make sure you have UNIX (or similar) line endings; my txt files had weird Windows line endings.
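If the list file does have Windows (CRLF) line endings, stripping the carriage returns first should make the original loop work; a sketch using the same paths as in the question:
# dos2unix /foo/movehere/sample.txt would also work, if installed
tr -d '\r' < /foo/movehere/sample.txt > /foo/movehere/sample_unix.txt
while read -r FILE; do
  find . -name "$FILE*.fastq.gz" -type f -exec cp {} /foo/movehere/ \;
done < /foo/movehere/sample_unix.txt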

Find recursively relative paths without showing the main directory contents

First of all... I'm a newbie in Linux! haha
I'm trying to show all the files and directories from a main directory but I need to exclude the main directory record.
Example (all files in /var/www/html):
index.php
Images
Images/1.jpg
Images/2.jpg
Images/3.jp3
Includes
Includes/db.php
Includes/security.php
The records that I want to exclude I've shown in bold / strong
Now I'm using this command:
find /var/www/html/ -mindepth 1 -printf '%P\n'
I appreciate any help. Regards!
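For what it's worth, -printf '%P\n' is a GNU find feature: %P prints each path relative to the starting point, and -mindepth 1 skips the starting directory itself. On a find without -printf, a rough equivalent is to strip the prefix afterwards, e.g.:
find /var/www/html/ -mindepth 1 | sed 's|^/var/www/html/||'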

Linux rename files based on input file

I need to rename hundreds of files in Linux to change the unique identifier of each from the command line. For the sake of example, I have a file containing:
old_name1 new_name1
old_name2 new_name2
and need to change the names from the old to the new IDs. The file names contain the IDs, but have extra characters as well. My plan is therefore to end up with:
abcd_old_name1_1234.txt ==> abcd_new_name1_1234.txt
abcd_old_name2_1234.txt ==> abcd_new_name2_1234.txt
Use of rename is obviously fairly helpful here, but I am struggling to work out how to iterate through the file of desired name changes and pass this as input to rename.
Edit: To clarify, I am looking to make hundreds of different rename commands; the different changes that need to be made are listed in a text file.
Apologies if this is already answered; I've had a good hunt, but can't find a similar case.
rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' *.txt
Should work, depending on whether you have that package installed. Also have a look at qmv (from renameutils).
If you want more options, use e.g.
shopt -s globstar
rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' folder/**/*.txt
(finds all txt files in subdirectories of folder), or
find folder -type f -iname '*.txt' -exec rename 's/^(abcd_)old_name(\d+_1234\.txt)$/$1new_name$2/' {} \+
To do the same by reading the name pairs from the input file:
while read -r old_name new_name; do
    rename "s/$old_name/$new_name/" *"$old_name"*.txt
done < file_with_names
In this way, you read the IDs from file_with_names and rename the files replacing $old_name with $new_name leaving the rest of the filename untouched.
I was about to write a php function to do this for myself, but I came upon a faster method:
ls the directory and copy & paste its contents into Excel from the terminal window. You may need an online line-break removal or addition tool. Assume that your file names are in column A. In Excel, use the following formula in another column:
="mv "&A1&" prefix"&A1&"suffix"
or
="mv "&A1&" "&substitute(A1,"jpeg","jpg")&"suffix"
or
="mv olddirectory/"&A1&" newdirectory/"&A1
Back in Linux, create a new file with
nano rename.txt and paste in the values from Excel. They should look something like this:
mv oldname1.jpg newname1.jpg
mv oldname2.jpg newname2.jpg
then close out of nano and run the following command:
bash rename.txt. Bash just runs every line in the file as if you had typed it.
and you are done! This method gives verbose output on errors, which is handy.
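The same generate-then-run idea works without Excel; a sketch assuming a hypothetical two-column mapping file name_map.txt (old ID, new ID) and the abcd_..._1234.txt naming from the question above:
# Print the mv commands first so they can be reviewed:
awk '{ printf "mv \"abcd_%s_1234.txt\" \"abcd_%s_1234.txt\"\n", $1, $2 }' name_map.txt
# When the output looks right, pipe it to bash:
awk '{ printf "mv \"abcd_%s_1234.txt\" \"abcd_%s_1234.txt\"\n", $1, $2 }' name_map.txt | bash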

in drupal language: grep and pipe - list all the findings to avoid overhead & server performance issues

As I have a serious server performance warning when installing drupal-commons (this is an installation profile), I now want to reduce the server load.
Why - I get a message when trying to install Drupal Commons: "Too many files open", it says!
Well, Drupal & modules (ab)use too many files! 50,000 files maximum and maybe 5,000 directories is their goal, and that is all they will back up.
So my question: how can I get rid of all those silly translation files, or whatever they are, for tiny little parts of info, and the unnecessary subdivisions? How can I get rid of them?
Background: I would expect that file_exists() during the installation (or bootstrap cycle) is the most expensive built-in PHP function, measured as total time spent calling the function for all invocations in a single request.
Well, now I am trying to get rid of all the overhead (especially the translation files, the so-called .po files) and the unnecessary files contained in drupal-commons 6.x-2.3, in order to get it running on my server.
I want to get rid of all those silly translation files and unnecessary subdivisions.
How do I search for all those .po files recursively - with grep, I guess...
Note: I do not know where they are!
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # ls
CHANGELOG.txt
._.htaccess install.php modules themes
._CHANGELOG.txt ._includes INSTALL.txt ._profiles ._update.php
COMMONS_RELEASE_NOTES.txt includes ._INSTALL.txt profiles update.php
._COMMONS_RELEASE_NOTES.txt ._index.php LICENSE.txt ._robots.txt UPGRADE.txt
COPYRIGHT.txt index.php ._LICENSE.txt robots.txt ._UPGRADE.txt
._COPYRIGHT.txt INSTALL.mysql.txt MAINTAINERS.txt ._scripts ._xmlrpc.php
._cron.php ._INSTALL.mysql.txt ._MAINTAINERS.txt scripts xmlrpc.php
cron.php INSTALL.pgsql.txt ._misc ._sites
.directory ._INSTALL.pgsql.txt misc sites
.htaccess ._install.php ._modules ._themes
linux-vi17:/home/martin/web_technik/drupal/commons_3_jan_12/commons-6.x-2.3/commons-6.x-2.3 # grep .po
Anyway, I want to remove all .po files with one bash command - is this possible?
But wait: first of all I want to find all the files, and then I want to list them -
since then I know what I erase (or remove).
Well - all language translations in Drupal are named with .po -
how do I find them with grep?
How do I list them - and subsequently, how do I erase them!?
Update:
I did the search with
find -type f -name "*.po"
Well, I found approx. 930 files.
Afterwards I removed them all with
6.x-2.3 # find -type f -name "*.po" -exec rm -f {} \;
A final search with that command
find -type f -name "*.po"
gave no results back, so every .po file was erased!
Many, many thanks for the hints.
greetings
zero
If you want to find all files named *.po in a directory named /some/directory, you can use find:
find /some/directory -type f -name "*.po"
If you want to delete them all in a row (you do have backups, don't you?), then append an action to this command:
find /some/directory -type f -name "*.po" -exec rm -f {} \;
Replace /some/directory with the appropriate value and you should be set.
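GNU find also has a built-in -delete action, which avoids spawning one rm per file; a sketch of the equivalent command (note that -delete implies -depth):
find /some/directory -type f -name "*.po" -delete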
The issue with "too many open files" isn't normally because there are too many files in the filesystem, but because there is a limit on the number of files an application or user can have open at one time. This issue has been covered on the Drupal forums; for example, see this thread to solve it more permanently/nicely:
http://drupal.org/node/474152
A few more links about open files:
http://www.cyberciti.biz/tips/linux-procfs-file-descriptors.html
http://blog.thecodingmachine.com/content/solving-too-many-open-files-exception-red5-or-any-other-application
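To check whether the per-process file-descriptor limit is really the bottleneck, ulimit is the usual starting point (Bourne-style shells; the value 4096 is just an example):
ulimit -n        # current soft limit on open file descriptors
ulimit -Hn       # hard limit
ulimit -n 4096   # raise the soft limit for the current shell, up to the hard limit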

linux find command is not working properly

I am using Linux (Ubuntu). I am trying to find some files, but it is not working properly.
I have created some files in my directory structure, for example: World/India/Maharashtra/Pune/filename.xml
When I use the find command like:
find /home/lokesh/Desktop/Testing_India2/Test/World/India/Maharashtra/ -name filename*.xml -mmin -3000
It is giving the result perfectly.
But when I use the same command at the "World" or "India" level:
find /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename*.xml -mmin -3000
it does not give any result.
I have lots of directories at the "India" level as well as at the "Maharashtra" level, and maybe some directories within Maharashtra's inner directories. I have to find each file created in all directories.
And I have mounted all the folders from different machines. (I mean some states are from one machine and some from another machine.)
If someone knows how to solve this problem, please reply as soon as possible.
Double quote your search string and add -L to make find follow symbolic links:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name "filename*.xml" -mmin -30000
This is something I ran into earlier today, actually, when using the * wildcard. I couldn't get it to continually traverse the subdirectories unless I escaped the * with a backslash.
Give this a try:
find -L /home/lokesh/Desktop/Testing_India2/Test/World/ -name filename\*.xml -mmin -30000
Yes, as mentioned, you have to double quote your -name argument or use a backslash before the *. The reason it does not work from one directory but works fine in other directories is that the * character is used for filename generation (globbing) by your shell. This of course happens before the find command is executed. Therefore, if you have a file that matches the filename*.xml pattern in your current directory, it will be substituted before find is executed, which is not what you want. On the other hand, if there is no pattern match in the current directory, the * character is passed on to the find command unmodified. By quoting, you protect the string from shell filename generation.
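A minimal demonstration of that pitfall, using made-up file names purely for illustration:
mkdir -p demo/sub && touch demo/filename1.xml demo/sub/filename2.xml
cd demo
find . -name filename*.xml      # the shell expands the glob to filename1.xml, so only that exact name is searched for
find . -name "filename*.xml"    # quoted: find receives the pattern and reports both files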
Regards
