How do I find all unique filenames, eliminate duplicate names, and eliminate directory names?
e.g., given these directories/folders and files:
dir-aa/file-1
dir-aa/subdir-cc/file-2
dir-bb/file-1
dir-bb/file-3
I want this output:
file-1
file-2
file-3
#!/bin/sh
find . -type f -printf '%f\n' | sort -u
or
#!/bin/sh
find . -type f -exec basename '{}' ';' | sort -u
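A hedged note: -printf is a GNU find extension, so the first command won't work on BSD/macOS find. There, the directory prefix can be stripped with sed instead (a sketch producing the same output):

```shell
# No GNU -printf needed: sed deletes everything up to the last slash,
# leaving only the filename; sort -u removes duplicates.
find . -type f | sed 's|.*/||' | sort -u
```

This also avoids forking basename once per file, which the -exec variant does.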
Related
I am looking to sort the output of a find command alphabetically by only the filename, not the whole path.
Say I have a bunch of text files:
./d/meanwhile/in/b.txt
./meanwhile/c.txt
./z/path/with/more/a.txt
./d/mored/dddd/d.txt
I am looking for the output:
./z/path/with/more/a.txt
./d/meanwhile/in/b.txt
./meanwhile/c.txt
./d/mored/dddd/d.txt
I have tried:
find . -type f -name '*.txt' -print0 | sort -t
find . -name '*.txt' -exec ls -ltu {} \; | sort -n
find . -name '*.txt' | sort -n
...among other permutations.
The straightforward way would be to print each file (record) in two columns -- filename and path -- separated by some character sure to never appear in the filename (-printf '%f\t%p\n'), then sort that output (sort -k1; the filename comes first, so it drives the ordering), and then strip the first column (cut -d$'\t' -f2):
find . -type f -name '*.txt' -printf '%f\t%p\n' | sort -k1 | cut -d$'\t' -f2
Just note that here we use the \t (tab) and \n (newline) for field and record separators, assuming those will not appear as a part of any filename.
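One refinement, in case any filename contains spaces: tell sort explicitly that the field separator is the tab and limit the key to the first field only, so the path never influences the ordering (this still assumes GNU find for -printf):

```shell
# -t$'\t' makes the tab sort's field separator; -k1,1 restricts the
# sort key to the filename column; cut's default delimiter is the tab.
find . -type f -name '*.txt' -printf '%f\t%p\n' | sort -t$'\t' -k1,1 | cut -f2
```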
I want to see a list of specific files under a directory using Linux.
Say, for example, I have the following sub-directories in my current directory:
Feb 16 00:37 a1
Feb 16 00:38 a2
Feb 16 00:36 a3
Now if I do ls a*, I can see:
bash-4.1$ ls a*
a:
a1:
123.sh 123.txt
a2:
a234.sh a234.txt
a3:
a345.sh a345.txt
I want to filter out only the .sh files from the directories, so that the output is:
a1:
123.sh
a2:
a234.sh
a3:
a345.sh
Is it possible?
Moreover, is it possible to also print the first line of each .sh file?
The following find command should work for you:
find . -maxdepth 2 -mindepth 2 -path '*/a*/*.sh' -print -exec head -n1 {} \;
Just take a look at those options; I hope you find what you are looking for.
basic 'find file' commands
find / -name foo.txt -type f -print # full command
find / -name foo.txt -type f # -print isn't necessary
find / -name foo.txt # don't have to specify "type==file"
find . -name foo.txt # search under the current dir
find . -name "foo.*" # wildcard
find . -name "*.txt" # wildcard
find /users/al -name Cookbook -type d # search '/users/al'
search multiple dirs
find /opt /usr /var -name foo.scala -type f # search multiple dirs
case-insensitive searching
find . -iname foo # find foo, Foo, FOo, FOO, etc.
find . -iname foo -type d # same thing, but only dirs
find . -iname foo -type f # same thing, but only files
find files with different extensions
find . -type f \( -name "*.c" -o -name "*.sh" \) # *.c and *.sh files
find . -type f \( -name "*cache" -o -name "*xml" -o -name "*html" \) # three patterns
find files that don't match a pattern (-not)
find . -type f -not -name "*.html" # find all files not ending in ".html"
find files by text in the file (find + grep)
find . -type f -name "*.java" -exec grep -l StringBuffer {} \; # find StringBuffer in all *.java files
find . -type f -name "*.java" -exec grep -il string {} \; # ignore case with -i option
find . -type f -name "*.gz" -exec zgrep 'GET /foo' {} \; # search for a string in gzip'd files
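One more entry, hedged: batching files with {} + runs grep once over many files instead of once per file, and -Hn (supported by GNU and BSD grep, though not strictly POSIX) prefixes each match with the file name and line number:

```shell
find . -type f -name "*.java" -exec grep -Hn StringBuffer {} +   # file:line:match
```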
Only using ls, you can get the .sh files and their parent directory with:
ls -1 * | grep ":\|.sh" | grep -B1 .sh
Which will provide the output:
a1:
123.sh
a2:
a234.sh
a3:
a345.sh
However, note that this won't behave correctly if you have a file named, for example, 123.sh.txt.
In order to print the first line of every .sh file found:
head -n1 $(ls -1 */*.sh)
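A side note: command substitution around ls breaks on filenames containing whitespace; the shell glob can be handed to head directly. When given several files, head prints a ==> name <== header before each one's first line:

```shell
# The glob expands to each matching file; -- guards against
# filenames that begin with a dash.
head -n1 -- */*.sh
```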
Yes, and it's easy with ls itself:
ls -d */*.sh
Proof:
If you would like to print each entry on its own line:
$ ls -d */*.sh | tr ' ' '\n'
d1/file.sh
d2/file.sh
d3/file.sh
Or
ls -d */*.sh | tr '/' '\n'
the output:
d1
file.sh
d2
file.sh
d3
file.sh
Also, if you want just the first entry:
$ ls -d */*.sh | tr ' ' '\n' | head -n 1
d1/file.sh
I have a folder and I want count all regular files in it, and for this I use this bash command:
find pathfolder -type f 2> err.txt | wc -l
In the folder there are 3 empty text files and a subfolder with inside it other text files.
For this reason I should get 3 as a result, but I get 6 and I don't understand why. Maybe there are some options that I did not set.
If I remove the subfolder, I get 4 as a result.
To grab all the files and directories in the current directory, excluding dot files:
shopt -u dotglob
all=(*)
To grab only directories:
dirs=(*/)
To count only non-dot files in current directory:
echo $(( ${#all[@]} - ${#dirs[@]} ))
To do this with find use:
find . -maxdepth 1 -type f ! -name '.*' -exec printf '%.0s.\n' {} + | wc -l
Below solutions ignore the filenames starting with dot.
To count the files in pathfolder only:
find pathfolder -maxdepth 1 -type f -not -path '*/\.*' | wc -l
To count the files in ALL child directories of pathfolder:
find pathfolder -mindepth 2 -maxdepth 2 -type f -not -path '*/\.*' | wc -l
UPDATE: Converting comments into an answer
Based on suggestions from anubhava: if you create a dummy file with an embedded newline using touch $'foo\nbar', wc -l counts this filename twice, as in the example below:
$> touch $'foo\nbar'
$> find . -type f
./foo?bar
$> find . -type f | wc -l
2
To avoid this, get rid of the newlines before calling wc (anubhava's solution):
$> find . -type f -exec printf '%.0sbla\n' {} +
bla
$> find . -type f -exec printf '%.0sbla\n' {} + | wc -l
1
or avoid calling wc at all:
$> find . -type f -exec sh -c 'i=0; for f; do i=$((i+1)); done; echo $i' sh {} +
1
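Another newline-proof variant, assuming GNU find and tr: -print0 terminates each name with a NUL byte, which cannot appear in a filename, so counting the NUL bytes counts the files exactly:

```shell
# tr -dc '\0' deletes every byte except the NUL separators;
# wc -c then counts exactly one byte per file.
find . -maxdepth 1 -type f ! -name '.*' -print0 | tr -dc '\0' | wc -c
```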
I am looking to combine the output of the Linux find and head commands (to derive a list of filenames) with output of another Linux/bash command and save the result in a file such that each filename from the "find" occurs with the other command output on a separate line.
So for example,
- if a dir testdir contains files a.txt, b.txt and c.txt,
- and the output of the other command is some number say 10, the desired output I'm looking for is
10 a.txt
10 b.txt
10 c.txt
On searching here, I saw folks recommending paste for doing similar merging, but I couldn't figure out how to do it in this scenario, as paste seems to be expecting files. I tried:
paste $(find testdir -maxdepth 1 -type f -name "*.text" | head -2) $(echo "10") > output.txt
paste: 10: No such file or directory
Would appreciate any pointers as to what I'm doing wrong. Any other ways of achieving the same thing are also welcome.
Note that if I wanted to make everything appear on the same line, I could use xargs and that does the job.
$ find testdir -maxdepth 1 -type f -name "*.text" | head -2 | xargs echo "10" > output.txt
$ cat output.txt
10 a.txt b.txt
But my requirement is to merge the two command outputs as shown earlier.
Thanks in advance for any help!
find can handle both the -exec and -print directives; you just need to merge the output:
$ find . -maxdepth 1 -type f -name \*.txt -exec echo hello \; -print | paste - -
hello ./b.txt
hello ./a.txt
hello ./all.txt
Assuming your "command" requires the filename (here's a very contrived example):
$ find . -maxdepth 1 -type f -name \*.txt -exec sh -c 'wc -l <"$1"' _ {} \; -print | paste - -
4 ./b.txt
4 ./a.txt
7 ./all.txt
Of course, that's executing the command for each file. To restrict myself to your question:
cmd_out=$(echo 10)
for file in *.txt; do
echo "$cmd_out $file"
done
Try this:
$ find testdir -maxdepth 1 -type f -name "*.text" | head -2 | sed 's/^/10 /' > output.txt
(Note: sed -i edits files in place and cannot read from a pipe, and a tr step is unnecessary since find already prints one name per line.)
You can make xargs operate on one line at a time using -L1:
find testdir -maxdepth 1 -type f -name "*.text" | xargs -L1 echo "10" > output.txt
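An awk alternative, hedged: capture the other command's output in a variable first (echo 10 below stands in for the real command, and the glob is written *.txt to match the sample files, although the question's commands say *.text):

```shell
val=$(echo 10)        # stand-in for the other command's output
find testdir -maxdepth 1 -type f -name "*.txt" | head -2 \
  | awk -v v="$val" '{print v, $0}' > output.txt
```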
Good morning to everyone here. I am attempting to replace a series of characters in several PHP files, taking the following into account:
The files contain lines like this:
if($_GET['x']){
which I want to replace with:
if(isset($_GET['x'])){
But note that the files also contain lines like the following, which I do not want to modify:
if($_GET["x"] == $_GET["x"]){
I tried the following, but I can't get it right because it changes all lines containing $_GET["x"]:
My example:
find . -name "*.php" -type f -exec ./code.sh {} \;
sed -i 's/\ if($_GET['x']){/ if(isset($_GET['x'])){/' "$1"
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\['x'\]) *{|if(isset(\$_GET['x'])){|g" --
The pattern above for if($_GET['x']){ would never match if($_GET["x"] == $_GET["x"]){.
Update:
This would change if($_GET['x']){ or if($_GET["x"]){ to if(isset($_GET['x'])){:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']x[\"']\]) *{|if(isset(\$_GET['x'])){|g" --
Another update:
find . -name "*.php" -type f -print0 | xargs -0 sed -i -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g" --
Would change anything in the form of if($_GET['<something>']){ or if($_GET["<something>"]){.
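For example, piping a sample line through that expression (note that \+ is a GNU sed extension):

```shell
# The capture group \([^"']\+\) grabs the key name, and \1 reuses it
# inside the isset() replacement, normalizing to single quotes.
echo 'if($_GET["user"]){' \
  | sed -e "s|if *(\$_GET\[[\"']\([^\"']\+\)[\"']\]) *{|if(isset(\$_GET['\1'])){|g"
# → if(isset($_GET['user'])){
```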