OS: Red Hat Linux
Bash: 3.5
I have the two commands below: one gets the list of files with their status (timestamp, size, name), and the other computes their footprint (checksum).
I want to find a way to combine them into a single line.
Here are the commands:
find "$PWD" -type f ! -iname '*thumbs.db*' -print0 | xargs -0 stat -c "%y %s %n"
find "$PWD" -type f -print0 | xargs -0 sha1sum -b
Will this work? Do a man on xargs.
find "$PWD" -type f ! -iname '*thumbs.db*' -print0 | xargs -0 -I '{}' sh -c 'stat --printf "%y %s %n " {} ; sha1sum -b {}'
If you do not want the file name repeated twice:
find "$PWD" -type f ! -iname '*thumbs.db*' -print0 | xargs -0 -I '{}' sh -c 'stat --printf "%y %s %n " {} ; sha1sum -b {} | cut -d\  -f1'
There need to be two blank spaces after d\ in the cut command: the first, escaped, space is the delimiter; the second separates it from -f1.
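One caveat: splicing {} into the sh -c string breaks on file names that contain single quotes. A safer variant (a sketch, under the same GNU stat and sha1sum assumptions) passes the name as a positional parameter instead:
find "$PWD" -type f ! -iname '*thumbs.db*' -print0 |
  xargs -0 -I '{}' sh -c 'stat --printf "%y %s %n " "$1"; sha1sum -b "$1" | cut -d" " -f1' _ '{}'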
You can also do this with -exec in the find command itself:
find $PWD -type f ! -iname '*thumbs.db*' -exec stat -c "%y %s %n" {} \; -exec sha1sum -b {} \;
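With find's + terminator you can also batch the work through one shell per chunk of files instead of two processes per file (a sketch, same assumptions as above):
find "$PWD" -type f ! -iname '*thumbs.db*' -exec sh -c '
  for f; do
    stat --printf "%y %s %n " "$f"
    sha1sum -b "$f" | cut -d" " -f1
  done' _ {} +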
It's easy enough to do it with tar:
date=$(date +%F-%T); find /var/log/nginx -name "access.log" -mtime -1 -type f -print0 | xargs -0 tar czvf "/tmp/$date-access.tar.gz"
$ ls /tmp
2019-05-03-11:25:49-access.tar.gz
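One caveat: xargs may split a long file list across several tar invocations, each of which would overwrite the archive. Piping NUL-separated names straight into GNU tar's -T - avoids that (a sketch):
find /var/log/nginx -name "access.log" -mtime -1 -type f -print0 |
  tar czvf "/tmp/$(date +%F-%T)-access.tar.gz" --null -T -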
How do I do this with gzip?
find + bash + gzip solution:
$ d=$(date +%F-%T)
$ find /var/log/nginx -name "access.log" -mtime -1 -type f \
-exec bash -c 'gzip -c "$2" > "/tmp/$1-access.gz"' _ "$d" {} \;
Here _ becomes $0 inside the inline script, the date lands in $1, and the file name in $2.
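If more than one access.log can match (say, in per-vhost subdirectories), the single output name above gets overwritten on each match. One way to keep the outputs apart is to fold the parent directory name in; this is a sketch, and the naming scheme is just an illustration:
d=$(date +%F-%T)
find /var/log/nginx -name "access.log" -mtime -1 -type f \
  -exec bash -c 'gzip -c "$2" > "/tmp/$1-$(basename "$(dirname "$2")")-access.gz"' _ "$d" {} \;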
I am working on Linux bash. I would like to clear the contents of all .txt files. However, the command find -type f -iname '*.txt' | xargs -I {} echo "" > {} does not seem to work. Any suggestions? Any ideas for a better solution?
The > {} redirection there is performed once by the calling shell (creating a literal file named {}), not once per file by xargs. I replaced echo with truncate in order to clear each file and used find's -exec instead of piping to xargs:
find . -type f -name "*.txt" -exec truncate -s 0 {} \;
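Since truncate accepts multiple file arguments, find's + terminator can batch many files into a single invocation rather than forking truncate once per file:
find . -type f -name "*.txt" -exec truncate -s 0 {} +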
I also need the directory name in the output. What I was able to do so far is output the total number of lines in all directories together with the directory name:
find . -name '*.c' | xargs wc -l | xargs -I{} dirname {} | xargs -I{} dirname {}
This is a mixture of commands, mostly GNU-specific; make sure you have GNU grep and GNU awk:
find . -type f -print0 | xargs -0 grep -c ';$' | \
awk -F":" '$NF>0{cmd="dirname "$1; while ( ( cmd | getline result ) > 0 ) {printf "%s\t%s\n",result,$2} close(cmd) }'
The idea is that grep -c returns the pattern count in the format file-name:count. That is passed to GNU awk, which filters the files whose count is greater than zero and prints the directory of each matching file together with the count itself.
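As an aside, spawning dirname once per line gets slow on big trees; the same path-stripping can be done inside awk with sub(), keeping everything in one process (a sketch, with the same assumption that file names contain no colons):
find . -type f -print0 | xargs -0 grep -c ';$' |
awk -F":" '$NF>0 { d=$1; sub(/\/[^\/]*$/, "", d); printf "%s\t%s\n", d, $2 }'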
Here is a script:
#!/usr/bin/env bash
for dir in */; do (
    cd "$dir"
    # -r: skip grep entirely when a directory has no .c files
    count=$(find . -name '*.c' -print0 | xargs -0 -r grep '[;]$' | wc -l)
    echo -e "${count}\t${dir}"
) done
If you want numbers for each sub-directory:
#!/usr/bin/env bash
# NUL-delimited loop keeps directory names with whitespace intact
while IFS= read -r -d '' dir; do (
    cd "$dir"
    count=$(find . -maxdepth 1 -name '*.c' -print0 | \
        xargs -0 -r grep '[;]$' | wc -l)
    echo -e "${count}\t${dir}"
) done < <(find . -type d -print0)
Using -maxdepth 1 makes sure the count only covers the current directory, not its sub-directories, so each file is counted once.
Here is my command line:
find . -type f -exec file {} \; \
| sed 's/\(.*png\): .* \([0-9]* x [0-9]*\).*/\2 \1/' \
| sed 's/\(.*jpg\): .* \([0-9]*x[0-9]*\).*/\2 \1/' \
| awk 'int($1) < 1000' \
| sed 's/^.*[[:blank:]]//' \
| tar -czvf images.tar.gz --null -T -
And the error I got is:
tar: Unix\n./test.png\n./test2.jpg\n: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
What I want is to find all images in the current directory whose width is less than 1000 px and tar them into an archive.
To use --null, you need to convert the newlines to NULs first:
...
| tr '\n' '\0' \
| tar -czvf images.tar.gz --null -T -
(tested, working.)
Also, here are a number of suggestions on speed and style, in decreasing order of importance.
a. Don't find and run file on more files than you need to. Note the parentheses, so that -type f applies to both patterns:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \)
b. For commands that can operate on multiple files per invocation, such as file, use xargs to save a lot of time:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \) -print0 | xargs -0 file
c. If you put | at the end of each line, you can continue on the next line without also using \:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \) -print0 |
  xargs -0 file
d. Since your max width is 999, you can save yourself some trouble by just grepping for 1-, 2-, or 3-digit widths, though the awk '$1<1000' is ultimately better in case you ever want a different threshold. Note that file prints PNG dimensions as 800 x 600 but JPEG dimensions as 800x600, so the spaces around the x must be optional:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \) -print0 |
  xargs -0 file |
  grep ', [0-9][0-9]\?[0-9]\? \?x \?[0-9]'
e. grep and awk are faster than sed, so use them where possible:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \) -print0 |
  xargs -0 file |
  grep ', [0-9][0-9]\?[0-9]\? \?x \?[0-9]' |
  grep -o -i '.*\.\(png\|jpg\)'
Final command:
find . -type f \( -iname "*.png" -or -iname "*.jpg" \) -print0 |
  xargs -0 file |
  grep ', [0-9][0-9]\?[0-9]\? \?x \?[0-9]' |
  grep -o -i '.*\.\(png\|jpg\)' |
  tr '\n' '\0' |
  tar -czvf images.tar.gz --null -T -
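To sanity-check what actually landed in the archive, list its contents afterwards:
tar -tzf images.tar.gz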
You can also do it with awk alone (this relies on GNU awk's three-argument match()):
find . -type f \( -name "*.png" -or -name "*.jpg" \) -exec file {} \; | awk -v width_limit=1000 '
{
    # guard on match() so lines without dimensions are skipped
    if (match($0, /,\s+([0-9]+)\s*x\s*([0-9]+)/, items) && items[1]+0 < width_limit) {
        match($0, /(.*):/, filename)
        print filename[1]
    }
}' | tar -czvf allfiles.tar.gz -T -
The width threshold can be configured with the width_limit variable, e.g. -v width_limit=800.
A quick way using perl:
find . -type f -exec file {} + |
perl -ne '
print $1."\0" if /^(.*):\s*(JPEG|PNG).*,\s*(\d+)\s*x\s*\d+\s*,/ &&
$3 < 1000;
' | tar -czvf images.tar.gz --null -T -
Using find's + terminator has the same effect as -print0 | xargs -0: file is invoked once with many file names instead of once per file. (The \s* around the x accepts both the PNG "800 x 600" and the JPEG "800x600" dimension formats.)
Good morning to everyone here. I am attempting to replace a series of strings in different PHP files, taking the following into account.
The files contain lines like this:
if($_GET['x']){
And I want to replace them with:
if(isset($_GET['x'])){
But bear in mind that there are also lines like the following in the files, which I do not want to modify:
if($_GET["x"] == $_GET["x"]){
I tried the following, but it does not work because it changes every line containing $_GET['x'].
My example (find calls code.sh, which contains the sed below):
find . -name "*.php" -type f -exec ./code.sh {} \;
sed -i 's/\ if($_GET['x']){/ if(isset($_GET['x'])){/' "$1"
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\['x'\]) *{|if(isset(\$_GET['x'])){|g" --
The pattern above for if($_GET['x']){ would never match if($_GET["x"] == $_GET["x"]){.
Update:
This would change if($_GET['x']){ or if($_GET["x"]){ to if(isset($_GET['x'])){:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']x[\"']\]) *{|if(isset(\$_GET['x'])){|g" --
Another update:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g" --
This would change anything of the form if($_GET['<something>']){ or if($_GET["<something>"]){.
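A quick sanity check with a hypothetical key name shows the capture group carrying the key through:
$ echo 'if($_GET["page"]){' | sed -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g"
if(isset($_GET['page'])){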