Why doesn't using grep -r -e filename find files named "filename"? - linux

I want to search for k1.sh in all the folders of my directory:
ADU-07 WebInterface-JobTable_files
ADU-07 WebInterface-JobTable.html
ADU-07 WebInterface-SelfTestResults_files
ADU-07 WebInterface-SelfTestResults.html
meas_2016-07-13_20-22-00
meas_2016-07-13_20-25-13
meas_2016-07-13_20-29-43
meas_2016-07-13_20-33-43
meas_2016-07-13_20-37-43
meas_2016-07-13_20-54-43
meas_2016-07-13_21-46-43
When I try
grep -r -e "k1.sh" /home/milenko/SerradoMel/MT06
I get nothing. Why? And how can I check whether my grep supports the -r option?

grep checks files' contents, not files' names. You can test this like so:
mkdir -p foo.d
echo "non-matching" >foo.d/k1.sh
echo "k1.sh" >foo.d/matching
grep -r -e k1.sh foo.d
rm -rf -- foo.d
The output of the above is:
foo.d/matching:k1.sh
...where foo.d/matching is the filename which was matched, and k1.sh is the content.
You'll see that it doesn't identify the file named k1.sh (whose contents are non-matching), but instead identifies the file named matching; this is because the documented purpose of grep is to look at files' contents, not their names.
Instead, to find a file named k1.sh, use:
find /home/milenko/SerradoMel/MT06 -type f -name k1.sh
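As an aside: in the grep command above, k1.sh is a regular expression, and the unescaped dot matches any character (it would also match, say, k1-sh). If you ever do want to search file contents for the literal string, escape the dot or use -F:
grep -rF 'k1.sh' /home/milenko/SerradoMel/MT06    # -F treats the pattern as a fixed string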

Question: Do you want to find all files named k1.sh or do you want to find the string k1.sh inside all files?
Answer: @PauloScardine All files named k1.sh, will try with find.
Then grep is the wrong guy for the job. The grep command looks for the string "k1.sh" inside all files in that path - if you want to find all files named "k1.sh" you are looking for the find command instead:
find /home/milenko/SerradoMel/MT06 -name k1.sh

Related

Renaming folders and files in subdirectories using text file linux

I am trying to rename files and directories using a space-separated text file.
The text file looks like this:
dir1-1 dir1_1
dir2-1 dir223_1
My command is as follows:
xargs -r -a files.txt -L1 mv
This command renames only the folders, from dir1-1 to dir1_1, dir2-1 to dir223_1, and so on; it doesn't rename the files in the subdirectories, even though the files in those directories carry the same prefixes.
Looking forward to your assistance.
Assuming you don't have special characters (space, tab, ...) in your file or directory names, try:
perl_script=$(
  # emit one perl substitution rule per "src tgt" line of files.txt
  echo 'chomp($_); $orig = $_;'
  while read -r src tgt; do
    echo 'if (s{(.*)/'"$src"'([^/]*)}{$1/'"$tgt"'$2}) { print "$orig $_\n"; next; }'
  done < files.txt
)
find . -depth | perl -ne "$perl_script" | xargs -r -L1 echo mv
Remove echo once you see it does what you wanted.
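For reference, a simpler sketch of the same idea under the same assumption (no whitespace or regex metacharacters in the names), using only find and sed; -depth renames children before their parent directories, and the echo keeps it a dry run as above:
while read -r src tgt; do
    find . -depth -name "*${src}*" | while read -r p; do
        echo mv "$p" "$(dirname "$p")/$(basename "$p" | sed "s/${src}/${tgt}/")"
    done
done < files.txt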

Use a text file (containing file names) to copy files from current directory to new directory

I have created a file (search.txt) containing file names of .fasta files I want to copy from the current directory (which also contains many unwanted .fasta files). Is it possible to use this text file to find and copy the matching files in the current directory to a new location?
The search.txt file contains a list of names like this:
name_1
name_2
name_3
I tried to build the search term using find and grep, like this:
find . *.fasta | grep -f search.txt
which is returning output like this for each matching file:
./name_1.fasta
./name_2.fasta
./name_3.fasta
name_1.fasta
name_2.fasta
name_3.fasta
It's finding the correct files, but I'm not sure if this output is useful / can be used to copy these files?
To get only matching filenames from search.txt I would do this:
find . -type f -name '*.fasta' -print0 | grep -zf search.txt | xargs -r0 cp -t target-dir/
It will find all files with the extension .fasta, keep only the ones matching a pattern in search.txt, and bulk-cp them to target-dir; each filename is terminated with a null byte in case a filename contains a newline.
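One caveat: grep -f treats each line of search.txt as a pattern, not an exact name, so name_1 would also match name_10. If that matters, you can anchor the patterns first (anchored.txt is just a scratch file name used for this sketch):
# turn each name into an anchored pattern like /name_1\.fasta$
sed 's|.*|/&\\.fasta$|' search.txt > anchored.txt
find . -type f -name '*.fasta' -print0 | grep -zf anchored.txt | xargs -r0 cp -t target-dir/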
Using Bash, you can read all the files from the list into an array:
$ mapfile -t files < search.txt
$ declare -p files
declare -a files=([0]="name_1" [1]="name_2" [2]="name_3")
Then, you can append the desired file extension to all array elements:
$ files=("${files[#]/%/.fasta}")
$ declare -p files
declare -a files=([0]="name_1.fasta" [1]="name_2.fasta" [2]="name_3.fasta")
And finally, move them to the desired location:
$ mv "${files[#]}" path/to/new/location
You don't actually need the intermediate step:
mapfile -t files < search.txt
mv "${files[#]/%/.fasta}" path/to/new/location

bash copy files at extract path from original

There is a set of file paths that I need to copy to some other location:
find . -name '*.txt':
/a/b/c/d/e/f/filea.txt
/a/b/c/d/e/g/h/fileb.txt
/a/b/c/d/e/k/l/filec.txt
I want to copy each of the files to the part of the path starting at e, e.g.:
cp /a/b/c/d/e/f/filea.txt e/f/
The e path will always exist.
First I need to find all the files.
How can I find them, extract everything from e onward from the path, and copy in the same step?
Pipe the find output through
sed 's|\(.*\)\(\*\*e\*\*.*/\)\(.*\)|cp \1\2\3 \2|'
and if the result looks ok, pipe the sed output through sh. If by **e** you just meant e, the pattern is
sed 's|\(.*\)\(e.*/\)\(.*\)|cp \1\2\3 \2|'
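Put together (with a literal e, as in the second pattern), the whole pipeline might look like this; inspect the generated commands first, then append | sh to execute them:
find . -name '*.txt' | sed 's|\(.*\)\(e.*/\)\(.*\)|cp \1\2\3 \2|'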
You can use something like this. If the generated mkdir and cp commands look ok, you can remove the echo before them.
SOURCE="/a/b/c/d"
cd $SOURCE
DESTINATION="full/path/to/whatever"
for FILE in $(find . -name '*.txt'); do
DIR=$(dirname $FILE)
# create the intermediate directories if they don't exist
echo "mkdir -p $DESTINATION/$DIR"
echo "cp $FILE $DESTINATION/$DIR"
done
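On GNU systems, cp's --parents option can replace the mkdir/cp pair by recreating the source path under the destination; a sketch under the same assumptions (run from /a/b/c/d, destination directory already created):
cd /a/b/c/d || exit 1
find e -name '*.txt' -exec cp --parents {} "$DESTINATION" \;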

Find all directories containing a file that contains a keyword in linux

In my hierarchy of directories I have many text files called STATUS.txt. These text files each contain one keyword such as COMPLETE, WAITING, FUTURE or OPEN. I wish to execute a shell command of the following form:
./mycommand OPEN
which will list all the directories that contain a file called STATUS.txt where that file contains the text "OPEN".
In the future I will want to extend this script so that the directories returned are sorted. Sorting will be determined by a numeric value stored in the file PRIORITY.txt, which lives in the same directory as STATUS.txt. However, this can wait until my competence level improves. For the time being I am happy to list the directories in any order.
I have searched Stack Overflow for the following, but to no avail:
unix filter by file contents
linux filter by file contents
shell traverse directory file contents
bash traverse directory file contents
shell traverse directory find
bash traverse directory find
linux file contents directory
unix file contents directory
linux find name contents
unix find name contents
shell read file show directory
bash read file show directory
bash directory search
shell directory search
I have tried the following shell commands:
This helps me identify all the directories that contain STATUS.txt
$ find ./ -name STATUS.txt
This reads STATUS.txt for every directory that contains it
$ find ./ -name STATUS.txt | xargs -I{} cat {}
This doesn't return any text; I was hoping it would return the name of each directory
$ find . -type d | while read d; do if [ -f STATUS.txt ]; then echo "${d}"; fi; done
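(As a side note, the loop above tests for STATUS.txt in the current directory on every iteration, which is why it prints nothing; to make the test look inside each directory it would need to be, roughly:
find . -type d | while read -r d; do if [ -f "$d/STATUS.txt" ]; then echo "$d"; fi; done
Even then it only finds the directories; it doesn't filter on the keyword.)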
... or the other way around:
find . -name "STATUS.txt" -exec grep -lF "OPEN" \{} +
If you want to wrap that in a script, a good starting point might be:
#!/bin/sh
[ $# -ne 1 ] && echo "One argument required" >&2 && exit 2
find . -name "STATUS.txt" -exec grep -lF "$1" \{} +
As pointed out by @BroSlow, if you are looking for the directories containing the matching STATUS.txt files, this might be more what you are looking for:
fgrep --include='STATUS.txt' -rl 'OPEN' | xargs -L 1 dirname
Or better
fgrep --include='STATUS.txt' -rl 'OPEN' |
sed -e 's|^[^/]*$|./&|' -e 's|/[^/]*$||'
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# simulate `xargs -L 1 dirname` using `sed`
# (no trailing `\`; returns `.` for path without dir part)
Maybe you can try this:
grep -rl "OPEN" . --include='STATUS.txt'| sed 's/STATUS.txt//'
where -r means recursive, -l lists only the files that match, and . is the directory to search from. The output is piped to sed to strip the file name, leaving the directory.
You can then wrap this in a bash script that takes keywords such as OPEN or FUTURE as an argument.
#!/bin/bash
grep -rl "$1" . --include='STATUS.txt'| sed 's/STATUS.txt//'
Try something like this
find -type f -name "STATUS.txt" -exec grep -q "OPEN" {} \; -exec dirname {} \;
or in a script
#!/bin/bash
(($#==1)) || { echo "Usage: $0 <pattern>" && exit 1; }
find -type f -name "STATUS.txt" -exec grep -q "$1" {} \; -exec dirname {} \;
You could use grep and awk instead of find:
grep -r OPEN * | awk '{split($1, path, ":"); print path[1]}' | xargs -I{} dirname {}
The above grep will list all files containing "OPEN", recursively inside your directory structure. The result will be something like:
dir_1/subdir_1/STATUS.txt:OPEN
dir_2/subdir_2/STATUS.txt:OPEN
dir_2/subdir_3/STATUS.txt:OPEN
Then the awk script will split this output at the colon and print the first part of it (the dir path).
dir_1/subdir_1/STATUS.txt
dir_2/subdir_2/STATUS.txt
dir_2/subdir_3/STATUS.txt
The dirname will then return only the directory path, not the file name, which I suppose is what you want.
I'd consider using Perl or Python if you want to evolve this further, though, as it might get messier if you want to add priorities and sorting.
Building on the accepted answer: it does not output a sorted, de-duplicated directory list. At the end of the find command, add:
| sort -u
or:
| sort | uniq
to get the unique list of the directories.
Credits go to Get unique list of all directories which contain a file whose name contains a string.
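Putting the pieces together, one possible combination of the find answer, dirname, and sort -u (a sketch, not the only way):
find . -name "STATUS.txt" -exec grep -lF "OPEN" {} + | xargs -r -L1 dirname | sort -u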
IMHO you should write a Python script which:
Examines your directory structure and finds all files named STATUS.txt.
For each found file:
reads the file and executes mycommand depending on what the file contains.
If you want to extend the script later with sorting, you can find all the interesting files first, save them to a list, sort the list and execute the commands on the sorted list.
Hint: http://pythonadventures.wordpress.com/2011/03/26/traversing-a-directory-recursively/

Search and replace entire files

I've seen numerous examples for replacing one string with another among multiple files but what I want to do is a bit different. Probably a lot simpler :)
Find all the files that match a certain string and replace them completely with the contents of a new file.
I have a find command that works
find /home/*/public_html -name "index.php" -exec grep "version:1.23" '{}' \; -print
This finds all the files I need to update.
Now how do I replace their entire content with the CONTENTS of /home/indexnew.txt (I could also name it /home/index.php)
I emphasize content because I don't want to change the name or ownership of the files I'm updating.
find ... | while read filename; do cat static_file > "$filename"; done
Efficiency hint: use grep -q; it returns "true" as soon as the first match is found, without having to read the entire file.
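Putting both hints together with the find command from the question (using /home/indexnew.txt as the replacement content, as in the question), a complete sketch:
find /home/*/public_html -name "index.php" -exec grep -q "version:1.23" {} \; -print |
while IFS= read -r filename; do
    cat /home/indexnew.txt > "$filename"    # truncating in place keeps name and ownership
done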
If you have a bunch of files you want to replace, and you can get all of their names using wildcards, you can try piping output to the tee command:
cat my_file | tee /home/*/update.txt
This should look through all the directories in /home and write the text in my_file to update.txt in each of those directories.
Let me know if this helps or isn't what you want.
I am not sure that your command (grep without -l, followed by -print) is better than adding -l to grep to list the files directly:
find /home/*/public_html -name "index.php" -exec grep -l "version:1.23" '{}' \; |xargs -i cp /home/index.php {}
Here are the details for the -l option:
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
