Capture the output of a command line in a variable in Unix/Linux

I want to capture the output of the command below in a variable.
Command:
find . -iname 'FIL*'.TXT
The output is:
./FILE1.TXT
I want to capture './FILE1.TXT' into a variable A. But when I try
A=`find . -iname 'FIL*'.TXT`
the command displays the contents of the file, whereas I want the value ./FILE1.TXT in the variable A.

# ls *.txt
test1.txt test.txt
# find ./ -maxdepth 1 -iname "*.txt"
./test1.txt
./test.txt
# A=$(find ./ -maxdepth 1 -iname "*.txt")
# echo $A
./test1.txt ./test.txt
You can leave out -maxdepth 1 if you want; I only needed it for this example.
Or with a single file:
# ls *.txt
test.txt
# find ./ -maxdepth 1 -iname "*.txt"
./test.txt
# A=$(find ./ -maxdepth 1 -iname "*.txt")
# echo $A
./test.txt
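Note that the unquoted $A in echo $A is subject to word splitting, which is why the two names in the first example came out joined by a single space. Quoting the expansion preserves the newline that find produced:
# echo "$A"
./test1.txt
./test.txt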

Did you try
A="`find . -iname 'FIL*'.TXT`"
and
A="`find . -iname 'FIL*'.TXT -print`"

A file does not have a value, but it does have contents. Use the following to display those contents.
find . -iname 'FIL*'.TXT -exec cat {} \;
If you want all the contents (of all such files) in a variable, then
A=$(find . -iname 'FIL*'.TXT -exec cat {} \;)
BTW you could have used
find . -iname 'FIL*.TXT' -print0 | xargs -0 cat
If you want the names of such files in a variable, try
A=$(find . -iname 'FILE*.txt' -print)
BTW, several recent interactive shells (zsh, bash version 4 but not earlier versions) support recursive ** globs. Since globs are not expanded in a plain scalar assignment, collect the names in an array (in bash, after enabling globstar):
shopt -s globstar
A=( **/FILE*.txt )
My feeling is that the ** feature is by itself worth switching to a newer shell, but it is just my opinion.
Also, don't forget that files may have several or no names. Read about inodes ...
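For example, a hard link gives one file (one inode) a second name, so find can report the same file twice. A small sketch (-samefile is a GNU/BSD find extension):
touch FILE1.TXT
ln FILE1.TXT FILE2.TXT        # second name for the same inode
find . -samefile FILE1.TXT    # lists both names
ls -i FILE1.TXT FILE2.TXT     # same inode number, twice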

Related

I want to get the output of the find command in a shell script

I am trying to write a script that finds the files older than 10 hours in the sub-directories listed in "HS_client_list", and sends the output to a file "find.log".
#!/bin/bash
while IFS= read -r line; do
echo Executing cd /moveit/$line
cd /moveit/$line
#Find files more than 600 minutes (10 hours) old.
find $PWD -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';' | xargs ls > /home/infa91punv/find.log
done < HS_client_list
However, the script is able to cd to the folders from HS_client_list (this file contains the names of the subdirectories), but the find command (find $PWD -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';' | xargs ls > /home/infa91punv/find.log) is not working: the output file is empty. When I run the same find command directly it works, but from the script it doesn't.
You are overwriting the file in each iteration.
You can use xargs to perform find on multiple directories; but you have to use an alternate placeholder string so that xargs does not populate the {} in the -execdir command.
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -execdir basename {} \; > /home/infa91punv/find.log
The xargs ls did not seem to perform any useful functionality, so I took it out. Generally, don't use ls in scripts.
With GNU find, you could avoid the call to an external utility, and use the -printf predicate to print just the part of the path name that you care about.
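With the pipeline above, that could look like the following sketch (-printf '%f\n' is GNU-specific and prints just the base name, replacing the -execdir basename call):
sed 's%^%/moveit/%' HS_client_list |
xargs -I '<>' find '<>' -type f -iname "*.enc" -mmin +600 -printf '%f\n' > /home/infa91punv/find.log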
For added efficiency, you could invoke a shell to collect the arguments:
sed 's%^%/moveit/%' HS_client_list |
xargs sh -c 'find "$@" -type f -iname "*.enc" -mmin +600 -execdir basename {} \;' _ >/home/infa91punv/find.log
This will run as many directories as possible in a single find invocation.
If you want to keep your loop, the solution is to put the redirection after done. I would still factor out the cd, and take care to quote the variable interpolation.
while IFS= read -r line; do
find /moveit/"$line" -type f -iname "*.enc" -mmin +600 -execdir basename '{}' ';'
done < HS_client_list >/home/infa91punv/find.log

In a Linux terminal, how to delete all files in a directory except one or two

In a Linux terminal, how to delete all files from a folder except one or two?
For example.
I have 100 image files in a directory and one .txt file.
I want to delete all files except that .txt file.
From within the directory, list the files, filter out the ones matching 'file-to-keep', and remove all files left on the list.
ls | grep -v 'file-to-keep' | xargs rm
To avoid issues with spaces in filenames (remember to never use spaces in filenames), use find with -print0 and the -0 option of xargs.
find 'path' -maxdepth 1 -not -name 'file-to-keep' -print0 | xargs -0 rm
Or, mixing both, use grep's -z option to handle the -print0 names from find.
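A sketch of that combination (GNU grep; -z makes grep treat its input and output as NUL-delimited, and -type f keeps the directory itself out of the list):
find 'path' -maxdepth 1 -type f -print0 | grep -zv 'file-to-keep' | xargs -0 rm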
In general, using an inverted pattern search with grep should do the job. As you didn't define any pattern, I'd just give you a general code example:
ls -1 | grep -v 'name_of_file_to_keep.txt' | xargs rm -f
The ls -1 lists one file per line, so that grep can search line by line. grep -v is the inverted-match flag, so any file that matches the pattern will NOT be deleted.
For multiple files, you may use egrep:
ls -1 | grep -E -v 'not_file1.txt|not_file2.txt' | xargs rm -f
Update after question was updated:
I assume you want to delete all files in the current folder except those that end with .txt. So this should work too:
find . -maxdepth 1 -type f -not -name "*.txt" -exec rm -f {} \;
find supports a -delete option, so you do not need -exec. You can also pass multiple -not -name tests, e.g. -not -name somefile -not -name otherfile:
user@host$ ls
1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt josh.pdf keepme
user@host$ find . -maxdepth 1 -type f -not -name keepme -not -name 8.txt -delete
user@host$ ls
8.txt keepme
Use the -not test to exclude the file(s) or pattern(s) you want to keep; you can change the 1 passed to -maxdepth to specify how many levels of sub-directories to delete files from:
find . -maxdepth 1 -not -name "*.txt" -exec rm -f {} \;
You can also do:
find -maxdepth 1 \! -name "*.txt" -exec rm -f {} \;
In bash, you can use:
$ shopt -s extglob # Enable extended pattern matching features
$ rm !(*.txt) # Delete all files except .txt files
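If there is more than one file to keep, extended patterns allow alternation (with hypothetical names):
$ rm !(*.txt|keepme)  # Delete everything except .txt files and 'keepme'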

CVS Tagging recursively from within a shell script

My team uses CVS for revision control. I need to develop a shell script which extracts the content from a file and applies a CVS tag with that content to all .txt files (including the text files present in the sub-directories of the current directory). The file from which the content is extracted and the script are both present in the same directory.
I tried running this script:
#!bin/bash
return_content(){
content=$(cat file1)
echo $content
}
find . -name "*.txt" -type f -print0|grep - v CVS|xargs -0 cvs tag $content
file1 => the file from which the content is extracted
"abc"=> content inside file1
Output:
abc
find: paths must precede expression
Usage: find [path...] [expression]
cvs tag: in directory
cvs [tag aborted]: there is no version here; run 'cvs checkout' first
I cannot figure out the problem. Please help.
There are a few problems with the script.
1) The shebang line is missing the root /.
You have #!bin/bash and it should be #!/bin/bash
2) The -v option to grep has a space between the - and the v (and it shouldn't)
3) You don't actually call the return_content function in the last line - you refer to a variable inside the function. Perhaps the last line should look like:
find . -name "*.txt" -type f -print0|grep -v CVS|\
xargs -0 cvs tag $( return_content )
4) Even after fixing all that, you may find that grep complains because -print0 is passing it binary data (there are embedded nulls in the output), and grep is expecting text. You can use more arguments to the find command to perform the function of the grep command and cut grep out, like this:
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |\
xargs -0 cvs tag $( return_content )
find will recurse through all the entries in the current directory (and below), discarding any directory named CVS (and everything beneath it), and from the rest it will choose only files named *.txt.
I tested my version of that line with:
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |\
xargs -t -0 echo ls -la
I created a couple of files with spaces in the names and .txt extensions in the directory so the script would show results:
bjb@spidy:~/junk/find$ find . -type d -name CVS -prune -o \
-type f -name "*.txt" -print0 | xargs -t -0 ls -la
ls -la ./one two.txt ./three four.txt
-rw-r--r-- 1 bjb bjb 0 Jun 27 00:44 ./one two.txt
-rw-r--r-- 1 bjb bjb 0 Jun 27 00:45 ./three four.txt
bjb@spidy:~/junk/find$
The -t argument makes xargs show the command it is about to run. I used ls -la instead of cvs tag - it should work similarly for cvs.
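Putting those fixes together, the whole corrected script might look like this (a sketch; it assumes file1 contains a single tag name without spaces):
#!/bin/bash
return_content(){
content=$(cat file1)
echo "$content"
}
# prune CVS directories, tag every remaining *.txt file
find . -type d -name CVS -prune -o -type f -name "*.txt" -print0 |
xargs -0 cvs tag "$(return_content)"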

How to copy all the files with the same suffix to another directory in Unix

I have a directory with an unknown number of subdirectories and unknown levels of sub*directories within them. How do I copy all the files with the same suffix to a new directory?
E.g. from this directory:
> some-dir
>> foo-subdir
>>> bar-sudsubdir
>>>> file-adx.txt
>> foobar-subdir
>>> file-kiv.txt
Copy all the *.txt files to:
> new-dir
>> file-adx.txt
>> file-kiv.txt
One option is to use find:
find some-dir -type f -name "*.txt" -exec cp \{\} new-dir \;
find some-dir -type f -name "*.txt" finds the *.txt files under the directory some-dir. The -exec option builds a command line for every matching file (e.g. cp some-dir/foobar-subdir/file-kiv.txt new-dir); {} is replaced by the path of each match.
Use find with xargs as shown below:
find some-dir -type f -name "*.txt" -print0 | xargs -0 cp --target-directory=new-dir
For a large number of files, this xargs version is more efficient than using find some-dir -type f -name "*.txt" -exec cp {} new-dir \; because xargs will pass multiple files at a time to cp, instead of calling cp once per file. So there will be fewer fork/exec calls with the xargs version.
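If your find supports terminating -exec with + (POSIX find does), you can get the same batching behaviour without xargs; -t is GNU cp's short spelling of --target-directory:
find some-dir -type f -name "*.txt" -exec cp -t new-dir {} +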

How to find files containing a string using egrep

I would like to find the files containing a specific string under Linux.
I tried something like the following but could not succeed:
find . -name *.txt | egrep mystring
Here you are sending the file names (output of the find command) as input to egrep; you actually want to run egrep on the contents of the files.
Here are a couple of alternatives:
find . -name "*.txt" -exec egrep mystring {} \;
or even better
find . -name "*.txt" -print0 | xargs -0 egrep mystring
Check the find man page to see what the individual arguments do.
The first approach will spawn a new process for every file, while the second will pass more than one file at a time as arguments to egrep; the -print0 and -0 flags are needed to deal with potentially nasty file names (they allow file names to be separated correctly even if a file name contains a space, for example).
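A find that supports terminating -exec with + (POSIX find does) gives you that batching without the pipe to xargs:
find . -name "*.txt" -exec egrep mystring {} +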
try:
find . -name '*.txt' | xargs egrep mystring
There are two problems with your version:
Firstly, *.txt will first be expanded by the shell, giving you a listing of files in the current directory which end in .txt, so for instance, if you have the following:
[dsm@localhost:~]$ ls *.txt
test.txt
[dsm@localhost:~]$
your find command will turn into find . -name test.txt. Just try the following to illustrate:
[dsm@localhost:~]$ echo find . -name *.txt
find . -name test.txt
[dsm@localhost:~]$
Secondly, egrep does not take filenames from STDIN. To convert them to arguments you need to use xargs, as shown above.
find . -name *.txt | egrep mystring
That will not work, as egrep will be searching for mystring within the output generated by find . -name *.txt, which is just the paths of the *.txt files.
Instead, you can use xargs:
find . -name *.txt | xargs egrep mystring
You could use
find . -iname *.txt -exec egrep mystring \{\} \;
Here's an example that will return the file paths of a all *.log files that have a line that begins with ERROR:
find . -name "*.log" -exec egrep -l '^ERROR' {} \;
There's a recursive option in egrep you can use:
egrep -R "pattern" *.log
If you only want the filenames:
find . -type f -name '*.txt' -exec egrep -l pattern {} \;
If you want filenames and matches:
find . -type f -name '*.txt' -exec egrep pattern {} /dev/null \;
(the extra /dev/null argument makes egrep see more than one file, which forces it to prefix each match with the file name).
