Reverse print result of unix find without sort - linux

I want to sort the result of the find as follows:
I am using:
find . -type f -print0
Result is:
/mnt/sdcard/qu/led/t1/temp42.txt
/mnt/sdcard/qu/led/File.plist
/mnt/sdcard/qu/yellow.plist
/mnt/sdcard/SHA1Keys/SHA1SUMS
/mnt/sdcard/File.xml
/mnt/sdcard/File.plist
/mnt/sdcard/.DS_Store
But I want the result as:
/mnt/sdcard/.DS_Store
/mnt/sdcard/File.plist
/mnt/sdcard/File.xml
/mnt/sdcard/SHA1Keys/SHA1SUMS
/mnt/sdcard/qu/yellow.plist
/mnt/sdcard/qu/led/File.plist
/mnt/sdcard/qu/led/t1/temp42.txt
And if I do:
find . -type f -print0 | sort -r
The order gets all messed up. I saw this solution somewhere:
find . -type f -ls | awk '{print $(NF-3), $(NF-2), $(NF-1), $NF}'
But I can't use it since it prints the extra ls fields along with the path.
Also note I don't have permissions to write to the filesystem, so writing to a file and reversing the lines is not an option.

Use tac (cat backwards) to reverse the output. You don't need to sort it in reverse order; you just need it reversed.
find . -type f | tac
If you want to keep the -print0 then use:
find . -type f -print0 | tac -rs '\0'
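If your tac does not accept that separator, a minimal NUL-safe sketch in awk (assuming GNU awk, which allows a NUL record separator) buffers the paths and prints them in reverse:
find . -type f -print0 |
awk 'BEGIN { RS = ORS = "\0" }                        # NUL-separated records in and out (gawk)
     { path[NR] = $0 }                                # buffer every path
     END { for (i = NR; i >= 1; i--) print path[i] }'
The output stays NUL-delimited, so it can feed xargs -0 directly.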

Alternatively, you could use tail -r (BSD/macOS tail; GNU coreutils tail has no -r):
find . -type f | tail -r

Related

Bash: Compare alphanumeric string with lower and upper case

Given a directory with files that have alphanumeric names:
file45369985.xml
file45793220.xml
file0005461x.xml
Also, given a csv table with a list of files
file45369985.xml,file,45369985,.xml,https://www.tib.eu/de/suchen/id/FILE:45369985/Understanding-terrorism-challenges-perspectives?cHash=16d713678274dd2aa205fc07b2fc5b86
file0005461X.xml,file,0005461X,.xml,https://www.tib.eu/de/suchen/id/FILE:0005461X/The-reality-of-social-construction?cHash=5d8152fbbfae77357c1ec6f443f8c8a4
I would like to match all files in the csv table with the directory's content and move them somewhere else. However, I cannot switch off the case sensitivity in this command:
while read p; do
data_set=$(echo "$p" | cut -f1 -d",")
# do something else
done
How can the "X-Files" be correctly matched as well?
Given the format of the csv file (no quotes around the first field), here is an answer that assumes filenames without newlines.
List all files in current directory
find . -maxdepth 1 -type f -printf "%f\n"
Look for one filename in that list (ignoring case)
grep -Fix file0005461X.xml <(find . -maxdepth 1 -type f -printf "%f\n")
Show first field only from file
cut -d"," -f1 csvfile
Pretend that the output is a file
<(cut -d"," -f1 csvfile)
Tell grep, via the -f option, to read the strings to look for from that "file"
grep -Fixf <(cut -d"," -f1 csvfile) <(find . -maxdepth 1 -type f -printf "%f\n")
Move the matches to /tmp
grep -Fixf <(cut -d"," -f1 csvfile) <(find . -maxdepth 1 -type f -printf "%f\n") |
xargs -I{} mv "{}" /tmp
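The same move step can also be written without xargs, as a sketch under the answer's stated assumption of newline-free filenames:
grep -Fixf <(cut -d"," -f1 csvfile) <(find . -maxdepth 1 -type f -printf "%f\n") |
while IFS= read -r f; do
    mv -- "$f" /tmp    # -- guards against names that start with a dash
done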
You can use join to perform an inner join between the CSV and the file list (both inputs are sorted with -f so their order matches join's case-insensitive comparison):
join -i -t, \
<(sort -f -t, -k1,1 list.csv) \
<(find given_dir -maxdepth 1 -mindepth 1 -type f -printf "%f\n" | sort -f) \
-o "2.1"
Explanation:
-i: perform a case insensitive comparison for the join
-t,: use the comma as a field separator
<(sort -f -t, -k1,1 list.csv): sort the CSV file case-insensitively on its first field, using the comma as the field separator, and expose the output as a file argument via process substitution (see the Bash manual page)
<(find given_dir -maxdepth 1 -mindepth 1 -type f -printf "%f\n" | sort -f): list all the files stored directly in given_dir (not in its subdirectories), sort them the same way, and expose the result through another process substitution
-o "2.1": output only the first field of the second input (the find output) for each joined line
Note: this solution relies on GNU find for the -printf action
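To move the matches rather than just list them, the join output can feed mv; a sketch assuming GNU xargs for the -d option, with /tmp as a stand-in destination:
join -i -t, -o "2.1" \
    <(sort -f -t, -k1,1 list.csv) \
    <(find given_dir -maxdepth 1 -mindepth 1 -type f -printf "%f\n" | sort -f) |
xargs -d '\n' -I{} mv "given_dir/{}" /tmp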
awk -F '[,.]' '{ base = substr($1, 1, length($1)-1); last = substr($1, length($1)); print base toupper(last) "." $2; print base tolower(last) "." $2 }' csvfile |
while read -r line
do
    find /path -name "$line" -exec mv '{}' /newpath \;
done
Use awk with the field delimiter set to , and . so that each line generates both an uppercase and a lowercase version of the file name's final character.
Loop through this output and look the file up under a given path. If the file exists, execute the move command to a given path.
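Given the two csv rows from the question, that awk stage emits both case variants of each name (the duplicate for the all-digit name is harmless; find simply matches the same file twice):
file45369985.xml
file45369985.xml
file0005461X.xml
file0005461x.xml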
You can use grep -i to make case-insensitive matches:
while read -r p; do
    data_set=$(echo "$p" | cut -f1 -d",")
    match=$(ls "$your_dir" | grep -i "^$data_set\$")
    if [ -n "$match" ]; then
        mv "$your_dir/$match" "$another_dir"
    fi
done < csvfile

Sorting find command by filename

I am looking to sort the output of a find command alphabetically by only the filename, not the whole path.
Say I have a bunch of text files:
./d/meanwhile/in/b.txt
./meanwhile/c.txt
./z/path/with/more/a.txt
./d/mored/dddd/d.txt
I am looking for the output:
./z/path/with/more/a.txt
./d/meanwhile/in/b.txt
./meanwhile/c.txt
./d/mored/dddd/d.txt
I have tried:
find . -type f -name '*.txt' -print0 | sort -t
find . -name '*.txt' -exec ls -ltu {} \; | sort -n
find . -name '*.txt' | sort -n
...among other permutations.
The straightforward way would be to print each file (record) as two columns, filename and path, separated by some character sure to never appear in the filename (-printf '%f\t%p\n'), then sort the output on the first column only (sort -t$'\t' -k1,1, since a bare -k1 would compare all the way to the end of the line, path included), and then strip the first column (cut -d$'\t' -f2):
find . -type f -name '*.txt' -printf '%f\t%p\n' | sort -t$'\t' -k1,1 | cut -d$'\t' -f2
Just note that here we use the \t (tab) and \n (newline) for field and record separators, assuming those will not appear as a part of any filename.
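If filenames may contain newlines, the same decorate-sort-undecorate idea can be done over NUL-terminated records; a sketch assuming GNU find, sort, and cut (a tab inside a filename would still break the two-column split):
find . -type f -name '*.txt' -printf '%f\t%p\0' | sort -z -t$'\t' -k1,1 | cut -z -d$'\t' -f2
Since the output is NUL-terminated, read it with xargs -0 rather than line by line.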

Sub-sorting without losing original sort order?

I have a bunch of files in a folder with a common naming structure that looks something like this:
FOOBAR_1A.8_Alice.pdf
FOOBAR_1A.9_Bob.pdf
FOOBAR_1B.10_Foo.pdf
FOOBAR_1B.11_Bar.pdf
FOOBAR_1B.12_Jack.pdf
FOOBAR_1B.1_Jill.pdf
FOOBAR_1B.2_John.pdf
FOOBAR_1B.3_Mary.pdf
That listing is produced by a first sort pass that looks like this:
find . -type f -name "*.pdf" -print | cut -d'/' -f2 | sort
But as you can see, 10/11/12 is printed before 1/2/3.
I tried piping back into sort again:
find . -type f -name "*.pdf" -print | cut -d'/' -f2 | sort | sort -t '.' -k 2n
But this messes up the prior sorting efforts and prints output that looks like:
FOOBAR_1A.7_Alice.pdf
FOOBAR_1B.7_Bob.pdf
FOOBAR_2A.7_John.pdf
FOOBAR_2B.7_Mary.pdf
FOOBAR_2C.7_Foo.pdf
FOOBAR_1A.8_Bar.pdf
FOOBAR_1B.8_Jack.pdf
FOOBAR_2A.8_Jill.pdf
So, to summarise, my desired sorting order is:
FOOBAR_NA.N should be sorted numerically for the first character (i.e. FOOBAR_1 then FOOBAR_2 etc.)
FOOBAR_NA.N should then be sorted alphabetically by the second character (i.e. FOOBAR_1A then FOOBAR_1B etc.)
FOOBAR_NA.N should finally be sorted by the number after the first dot (i.e. FOOBAR_1A.1 then FOOBAR_1A.2 etc.)
You can try -V (version sort), which compares embedded numbers numerically so that 10 sorts after 9:
find . -name '*.pdf' | cut -d'/' -f2 | sort -t _ -k2V
FOOBAR_1A.8_Alice.pdf
FOOBAR_1A.9_Bob.pdf
FOOBAR_1B.1_Jill.pdf
FOOBAR_1B.2_John.pdf
FOOBAR_1B.3_Mary.pdf
FOOBAR_1B.10_Foo.pdf
FOOBAR_1B.11_Bar.pdf
FOOBAR_1B.12_Jack.pdf
This also works with dir or ls:
dir -1v *.pdf
or
ls -1v *.pdf
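The version comparison can also be applied to the full find output (a sketch, assuming GNU sort for -V; note it then compares the whole path, directories included):
find . -name '*.pdf' | sort -V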

Replace strings in PHP files with shell script

Good morning to everyone here. I am trying to replace a string in several PHP files, taking the following into account.
The files contain lines like this:
if($_GET['x']){
And I want to replace them with:
if(isset($_GET['x'])){
But note that there are also lines like the following, which I do not want to modify:
if($_GET["x"] == $_GET["x"]){
I tried the following, but it does not work as intended, because it changes every line containing $_GET["x"]:
My example:
find . -name "*.php" -type f -exec ./code.sh {} \;
sed -i 's/\ if($_GET['x']){/ if(isset($_GET['x'])){/' "$1"
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\['x'\]) *{|if(isset(\$_GET['x'])){|g" --
The pattern above for if($_GET['x']){ would never match if($_GET["x"] == $_GET["x"]){.
Update:
This would change if($_GET['x']){ or if($_GET["x"]){ to if(isset($_GET['x'])){:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']x[\"']\]) *{|if(isset(\$_GET['x'])){|g" --
Another update:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g" --
This would change anything of the form if($_GET['<something>']){ or if($_GET["<something>"]){.
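Before running any of these with -i, it can help to preview the edits on a single file first; test.php is a hypothetical stand-in name here:
sed -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g" test.php | diff test.php -
diff then shows exactly which lines the in-place run would touch.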

Use wc on all subdirectories to count the sum of lines

How can I count all lines of all files in all subdirectories with wc?
cd mydir
wc -l *
..
11723 total
man wc suggests wc -l --files0-from=-, but I do not know how to generate the list of all files as NUL-terminated names
find . -print | wc -l --files0-from=-
did not work.
You probably want this:
find . -type f -print0 | wc -l --files0-from=-
If you only want the total number of lines, you could use
find . -type f -exec cat {} + | wc -l
Perhaps you are looking for the -exec option of find:
find . -type f -exec wc -l {} \; | awk '{total += $1} END {print total}'
To count all lines for a specific file extension you can use:
find . -name '*.fileextension' | xargs wc -l
If you want it for two or more different types of files, you can add the -o option:
find . -name '*.fileextension1' -o -name '*.fileextension2' | xargs wc -l
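Both pipelines break on filenames containing spaces; a NUL-safe sketch with GNU find and xargs (the parentheses keep -print0 applying to both -name tests):
find . \( -name '*.fileextension1' -o -name '*.fileextension2' \) -print0 | xargs -0 wc -l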
Another option would be to use a recursive grep:
grep -hRc '' . | awk '{k+=$1}END{print k}'
The awk simply adds the numbers. The grep options used are:
-c, --count
Suppress normal output; instead print a count of matching lines
for each input file. With the -v, --invert-match option (see
below), count non-matching lines. (-c is specified by POSIX.)
-h, --no-filename
Suppress the prefixing of file names on output. This is the
default when there is only one file (or only standard input) to
search.
-R, --dereference-recursive
Read all files under each directory, recursively. Follow all
symbolic links, unlike -r.
The grep, therefore, counts the number of lines matching anything (''), so essentially just counts the lines.
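The same trick narrows easily to particular file types, for example with GNU grep's --include filter (a sketch):
grep -hRc '' --include='*.txt' . | awk '{k+=$1} END {print k}'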
I would suggest something like this, skipping the per-batch total lines that wc emits (they would otherwise be double-counted):
find ./ -type f | xargs wc -l | awk '$2 != "total" {total += $1} END {print total}'
Based on ДМИТРИЙ МАЛИКОВ's answer:
Example for counting lines of java code with formatting:
one liner
find . -name '*.java' -exec wc -l {} \; | awk '{printf ("%3d: %6d %s\n",NR,$1,$2); total += $1} END {printf (" %6d\n",total)}'
awk part:
{
printf ("%3d: %6d %s\n",NR,$1,$2);
total += $1
}
END {
printf (" %6d\n",total)
}
example result
1: 120 ./opencv/NativeLibrary.java
2: 65 ./opencv/OsCheck.java
3: 5 ./opencv/package-info.java
190
Bit late to the game here, but wouldn't this also work? find . -type f | wc -l
This counts all lines output by the find command, i.e. the number of files found, so you can fine-tune the find to show whatever you want to count. I am using it to count the number of subdirectories, in one specific subdir, in a deep tree: find ./*/*/*/*/*/*/TOC -type d | wc -l. Output: 76435. (Just doing a find without all the intervening asterisks yielded an error.)