Remove files in a directory and print the deleted files' names - Linux

I want to remove all files in a directory except one specific file, and print a message as the files are removed.
find . ! -name 'name' -type f -exec echo 'removed files:' rm -v -f {} +
When I run this command it prints:
removed files: rm -v -f ./aaavl ./aaavlpo
I want to print output like:
removed files:
./aaavl
./aaavlpo
How should I do this?

Just use find in a Bash loop to modify the output.
Given:
$ ls -1
file 1
file 2
file 3
file 4
file 5
You can still loop and negate with find as desired. Just use Bash to delete and report:
$ find . ! -name '*3' -type f | while IFS= read -r fn; do echo "removing: $fn"; rm "$fn"; done
removing: ./file 1
removing: ./file 2
removing: ./file 5
removing: ./file 4
$ ls -1
file 3
That loop will work for filenames containing spaces, but not for ones containing \n.
If there is a possibility of file names with \n in them, use xargs with a NUL delimiter:
$ find . ! -name '*3' -type f -print0 | xargs -0 -n 1 bash -c 'echo "$1"; rm "$1"' _
And add the header echo "removed files:" above the loop or xargs pipe as desired.
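For completeness, a sketch that instead keeps the original rm -v (GNU rm prints one line per removed file, so only the header needs to move out of -exec; note the lines read removed './aaavl' rather than the bare name):
$ echo "removed files:"; find . ! -name 'name' -type f -exec rm -v -f {} +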

Maybe it's not the best way, but...
1 - Save the names of the files to be deleted to a .txt file:
find . ! -name 'name' -type f >> test.txt
2 - After saving the names, delete the files:
find . ! -name 'name' -type f -exec rm -f {} +
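As an aside (not part of the original answer), GNU find can print and delete in a single pass, which avoids the temporary file entirely:
find . ! -name 'name' -type f -print -delete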

Related

Linux find script result not appending to the output text file

I wrote a small shell script to identify the pages on my website associated with each PDF file.
It takes the PDF source URLs from a list, one by one, as input and searches for them recursively in the website content.
The problem is that when I run the script, the find results are not appended to the output file,
but when I take the find command and run it manually in the terminal/PuTTY, I can see the results.
Script:
#!/bin/bash
filename="PDF_Search_File.txt"
while read -r line
do
    name="$line"
    echo "*******pdf******** - $name\n" >>output_pdf_new.txt
    find . -type f -exec grep -l "$name" '{}' \; >>output_pdf_new.txt
    echo "*******pdf******** - $name\n" >>output_pdf_new.txt
done < "$filename"
Source URL list input file (PDF_Search_File.txt):
/static/pdf/pdf1.pdf
/static/pdf/pdf2.pdf
/static/pdf/pdf3.pdf
--------------------
Output result file (output_pdf_new.txt):
./Search_pdf.sh
*******pdf******** - /static/pdf/pdf1.pdf\n
*******pdf******** - /static/pdf/pdf1.pdf\n
./Search_pdf.sh
*******pdf******** - /static/pdf/pdf2.pdf\n
*******pdf******** - /static/pdf/pdf2.pdf\n
./Search_pdf.sh
*******pdf******** - /static/pdf/pdf3.pdf\n
*******pdf******** - /static/pdf/pdf3.pdf\n
------------------------------------------
In the terminal/PuTTY I can see the result below when I run the find manually:
find . -type f -exec grep -l "/static/pdf/pdf1.pdf" '{}' \;
./en/toyes/zzz/index.xhtml
./en/toyes/kkk/index.xhtml
--------------
But with the script I only get the echo output, as shown in the result above.
Update
When I execute the script with bash -x, it gives the result below:
[user@server1 generated_content]# bash -x Search_pdf.sh
+ filename=PDF_Search_File.txt
+ read -r line
+ name=$'/static/pdf/pdf1.pdf\r'
\n'cho '*******pdf******** - /static/pdf/pdf1.pdf
+ find . -type f -exec grep -l $'/static/pdf/pdf1.pdf\r' '{}' ';'
\n'cho '*******pdf******** - /static/pdf/pdf1.pdf
+ read -r line
+ name=$'/static/pdf/pdf2.pdf\r'
\n'cho '*******pdf******** - /static/pdf/pdf2.pdf
+ find . -type f -exec grep -l $'/static/pdf/pdf2.pdf\r' '{}' ';'
Something seems wrong here:
+ find . -type f -exec grep -l $'/static/pdf/pdf2.pdf\r' '{}' ';'
The find command should look like the one below, but it is being executed as shown above:
find . -type f -exec grep -l "/static/pdf/pdf1.pdf" '{}' \;
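A note on the trace above: the $'...\r' endings show that PDF_Search_File.txt has Windows (CRLF) line endings, so every $name carries a trailing carriage return that never matches the file contents and also garbles the echo lines. A minimal fix inside the loop would be to strip it before use:
name="${line%$'\r'}"
Alternatively, convert the input file once, e.g. with dos2unix PDF_Search_File.txt or sed -i 's/\r$//' PDF_Search_File.txt.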
Have you tried -e option in echo to enable interpretation of backslash escapes?
Also, why don't you simply do find | grep?
find ./ -type f | grep "$name" >> output_pdf_new.txt
Try the following (./ instead of .) in find:
find ./ -type f -exec grep -l "$name" '{}' \; >>output_pdf_new.txt
Use grep -rl for the file inside your loop:
cd /www/webroot/
grep -rl "${name}" * | while IFS= read -r file_path; do
    # do something with each file
    echo "$file_path"
done
Or, if you just need to send the output to a file:
cd /www/webroot/
grep -rl "${name}" * >> output_pdf_new.txt

Issue with the find command in Linux

I have a folder and I want to count all the regular files in it; for this I use this bash command:
find pathfolder -type f 2> err.txt | wc -l
In the folder there are 3 empty text files and a subfolder containing other text files.
For this reason I should get 3 as the result, but I get 6 and I don't understand why. Maybe there are some options that I did not set.
If I remove the subfolder I get 4 as the result.
To grab all the files and directories in the current directory, excluding dot files:
shopt -u dotglob
all=(*)
To grab only directories:
dirs=(*/)
To count only non-dot files in current directory:
echo $(( ${#all[@]} - ${#dirs[@]} ))
To do this with find use:
find . -maxdepth 1 -type f ! -name '.*' -exec printf '%.0s.\n' {} + | wc -l
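For illustration, a minimal runnable sketch of the array approach above (assuming Bash; nullglob is enabled so a directory without subdirectories doesn't skew the count, and note it counts any non-directory entry, not only regular files):
#!/usr/bin/env bash
cd pathfolder || exit 1
shopt -u dotglob   # keep dot files out of globs (the default)
shopt -s nullglob  # unmatched globs expand to nothing
all=(*)            # files and directories
dirs=(*/)          # directories only
echo $(( ${#all[@]} - ${#dirs[@]} ))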
The solutions below ignore filenames starting with a dot.
To count the files in pathfolder only:
find pathfolder -maxdepth 1 -type f -not -path '*/\.*' | wc -l
To count the files in ALL child directories of pathfolder:
find pathfolder -mindepth 2 -maxdepth 2 -type f -not -path '*/\.*' | wc -l
UPDATE: Converting comments into an answer
Based on the suggestions received from anubhava: if you create a dummy file with touch $'foo\nbar', wc -l counts that single filename twice, as in the example below:
$> touch $'foo\nbar'
$> find . -type f
./foo?bar
$> find . -type f | wc -l
2
To avoid this, get rid of the newlines before calling wc (anubhava's solution):
$> find . -type f -exec printf '%.0sbla\n' {} +
bla
$> find . -type f -exec printf '%.0sbla\n' {} + | wc -l
1
or avoid calling wc at all:
$> find . -type f -exec bash -c 'i=0; for f; do ((i++)); done; echo $i' bash {} +
1
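Another newline-proof variant (a common idiom, assuming GNU find and tr; not from the thread): have find emit one NUL byte per file and count the bytes, so nothing in the filenames can influence the count:
$> find . -type f -print0 | tr -dc '\0' | wc -c
1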

merge find command output with another command output and redirect to file

I am looking to combine the output of the Linux find and head commands (to derive a list of filenames) with the output of another Linux/bash command, and save the result in a file such that each filename from find appears on a separate line together with the other command's output.
So for example,
- if a dir testdir contains files a.txt, b.txt and c.txt,
- and the output of the other command is some number say 10, the desired output I'm looking for is
10 a.txt
10 b.txt
10 c.txt
On searching here, I saw folks recommending paste for doing similar merging, but I couldn't figure out how to do it in this scenario, as paste seems to expect files. I tried
paste $(find testdir -maxdepth 1 -type f -name "*.text" | head -2) $(echo "10") > output.txt
paste: 10: No such file or directory
Would appreciate any pointers as to what I'm doing wrong. Any other ways of achieving the same thing are also welcome.
Note that if I wanted to make everything appear on the same line, I could use xargs and that does the job.
$find testdir -maxdepth 1 -type f -name "*.text" | head -2 |xargs echo "10" > output.txt
$cat output.txt
10 a.txt b.txt
But my requirement is to merge the two command outputs as shown earlier.
Thanks in advance for any help!
find can handle both the -exec and -print directives; you just need to merge the output (paste - - joins each consecutive pair of stdin lines into one):
$ find . -maxdepth 1 -type f -name \*.txt -exec echo hello \; -print | paste - -
hello ./b.txt
hello ./a.txt
hello ./all.txt
Assuming your "command" requires the filename (here's a very contrived example):
$ find . -maxdepth 1 -type f -name \*.txt -exec sh -c 'wc -l <"$1"' _ {} \; -print | paste - -
4 ./b.txt
4 ./a.txt
7 ./all.txt
Of course, that's executing the command for each file. To restrict myself to your question:
cmd_out=$(echo 10)
for file in *.txt; do
    echo "$cmd_out $file"
done
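A variant of that loop that takes its filenames from the original find | head pipeline instead of a glob (a sketch; echo 10 again stands in for the other command):
cmd_out=$(echo 10)
find testdir -maxdepth 1 -type f -name "*.text" | head -2 | while IFS= read -r file; do
    echo "$cmd_out $file"
done > output.txt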
Try this,
$ find testdir -maxdepth 1 -type f -name "*.text" | head -2 | sed 's/^/10 /' > output.txt
You can make xargs operate on one line at a time using -L1:
find testdir -maxdepth 1 -type f -name "*.text" | xargs -L1 echo "10" > output.txt

How to prepend filename to last lines of files found through find command in Unix

I have a requirement where I need to display the last lines of all the files under a directory in the format
filename: lastline
I found the following code
find /user/directory/* -name "*txt" -mtime 0 -type f -exec awk '{s=$0};END{print FILENAME, ": ",s}' {} \;
But I read this reads the entire file each time. The files in my directory are huge so I cannot afford this. Do I have any alternatives?
find /user/directory/* -name "*txt" -mtime 0 -type f | while IFS= read -r file
do
    echo -n "$file: "
    tail -1 "$file"
done
The important change is that tail -1 won't read the whole file, but reads small portions from the end and increases them until it has found the complete last line.
If you know the directory name:
for f in directory/*.txt; do
    echo "$f: $(tail -1 "$f")"
done
will do the trick. More generally,
find /user/directory -type f -name "*.txt" | while IFS= read -r f; do
    echo "$f: $(tail -1 "$f")"
done
will work as well. The program tail starts reading the file from the end, and tail -n K will only read the last K lines of the specified file.
Using tail as in the other answers is good. Now, you can wrap all this into the find command.
If your find supports the -printf command:
find /user/directory/ -name "*txt" -mtime 0 -type f -printf '%p: ' -exec tail -1 {} \;
If your find doesn't support the -printf command:
find /user/directory/ -name "*txt" -mtime 0 -type f -exec printf '%s: ' {} \; -exec tail -1 {} \;

Linux clean directory script

I need to write a script for a web server that will clean out files/folders older than 14 days but keep the last 7 files/directories. I've been doing my research, and here is what I came up with (I know the syntax and commands are incorrect, but just so you get an idea):
ls -ldt /data/deployments/product/website.com/*/ | tail -n +8 | xargs find /data/deployments/product/website.com/ -type f -type d -mtime +14 -exec rm -R {} \;
This is my thought process as to how the script should behave (I'm more of a Windows batch guy):
List the directory contents
If contents is less than or equal to 7, goto END
If contents is > 7 goto CLEAN
:CLEAN
ls -ldt /data/deployments/product/website.com/*/
keep last 7 entries (tail -n +8)
output of that "tail" -> find -type f -type d (both files and directories) -mtime +14 (older than 14 days) -exec rm -R (delete)
I've seen a bunch of examples using xargs and sed, but I just can't figure out how to put it all together.
#!/bin/bash
find your_dir -mindepth 1 -maxdepth 1 -printf "%T@ %p\n" | \
    sort -nrk1,1 | sed '1,7d' | cut -d' ' -f2- | \
    xargs -n1 -I fname \
    find fname -maxdepth 0 -mtime +14 -exec echo rm -rf {} \;
Remove the echo if you're happy with the output.
Explanation (line-by-line):
find exactly in your_dir and print seconds_since_Unix_epoch (%T@) and the file/dir name for each entry on a separate line
sort by the first field (seconds_since_Unix_epoch) descending, throw the first seven lines away, and from the rest extract just the name (the second field onwards)
xargs passes each argument on to a new find process (-n1) and uses fname to represent the argument
-maxdepth 0 limits find to just fname
You could store minNrOfFiles and ageLimit in Bash variables or pass them into the script with just a few changes:
minNrOfFiles=7 # or $1
ageLimit=14 # or $2
Change sed '1,'"$minNrOfFiles"'d' and -mtime +"$ageLimit" accordingly.
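Putting it together, a minimal parameterized sketch of the whole script (your_dir is a placeholder for the real path; the echo keeps it a dry run, as above):
#!/bin/bash
minNrOfFiles="${1:-7}"   # keep at least this many newest entries
ageLimit="${2:-14}"      # only delete entries older than this many days
find your_dir -mindepth 1 -maxdepth 1 -printf "%T@ %p\n" | \
    sort -nrk1,1 | sed '1,'"$minNrOfFiles"'d' | cut -d' ' -f2- | \
    xargs -n1 -I fname \
        find fname -maxdepth 0 -mtime +"$ageLimit" -exec echo rm -rf {} \;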
