How to display directories that begin with a small or capital letter in Linux?

I want to display all directory names in directory /opt/BMC/patrol/ that begin with Patrol or patrol. The command
ls /opt/BMC/patrol/ | grep -i '^Patrol*'
produces
Patrol3
Patrol3.16
PatrolAgent_3181.sh
patrol_cfg.sh
The matches are correct, but the output contains both directories and files instead of directories only.
The command ls -d /opt/BMC/patrol/*/ | grep -i '^Patrol*' produces nothing...
The command ls -d /opt/BMC/patrol/*/ | grep -i 'Patrol*' produces
/opt/BMC/patrol/BMCINSTALL/
/opt/BMC/patrol/bmc_products/
/opt/BMC/patrol/cert_gg/
/opt/BMC/patrol/common/
/opt/BMC/patrol/Install/
/opt/BMC/patrol/itools/
/opt/BMC/patrol/Patrol3/
/opt/BMC/patrol/Patrol3.16/
/opt/BMC/patrol/perform/
/opt/BMC/patrol/rtserver/
/opt/BMC/patrol/temp2/
/opt/BMC/patrol/test/
/opt/BMC/patrol/testftp/
/opt/BMC/patrol/Uninstall/
Does it search recursively? What is a command to find only directory names that begin with a capital or small letter?

Try find:
find /opt/BMC/patrol -type d -iname 'patrol*'
-type d matches only directories, and -iname does a case-insensitive name match. The 'patrol*' has to be quoted because otherwise the shell would expand the * before find gets a chance to see it.
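To see the quoting issue in isolation, compare the two in a directory that contains matching entries (a tiny illustration; the directory and file names are made up):
cd "$(mktemp -d)" && touch patrol1 patrol2
echo patrol*     # unquoted: the shell expands the glob first and prints "patrol1 patrol2"
echo 'patrol*'   # quoted: the literal string "patrol*" reaches echo unchanged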
find does search recursively by default (see Edit 2, below).
Edit ls is not the right tool for this use case. ls -d prevents descending into directories, so each output line is a full path like /opt/BMC/patrol/Patrol3/, which is why '^Patrol' matches nothing. As far as grep goes, ^ matches at the beginning of the line, not at the leading / before a directory's name; note also that in a regular expression Patrol* means "Patro followed by zero or more l's", not "Patrol followed by anything". So grep -i '/patrol' would be a way to find names beginning with Patrol or patrol, but you would still have to filter down to directories. find is designed to handle all of this.
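If you would rather stay with shell globbing than switch to find, a bracket expression covers both spellings directly (a minimal, non-recursive sketch; the trailing slash restricts the glob to directories):
ls -d /opt/BMC/patrol/[Pp]atrol*/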
Edit 2 For non-recursive, use -maxdepth:
find /opt/BMC/patrol -maxdepth 1 -type d -iname 'patrol*'
I made a test directory with the following contents, based on your question:
opt/BMC/patrol/
opt/BMC/patrol/BMCINSTALL/
opt/BMC/patrol/bmc_products/
opt/BMC/patrol/cert_gg/
opt/BMC/patrol/common/
opt/BMC/patrol/common/patrol.d/
opt/BMC/patrol/Install/
opt/BMC/patrol/itools/
opt/BMC/patrol/Patrol3/
opt/BMC/patrol/Patrol3.16/
opt/BMC/patrol/PatrolAgent_3181.sh
opt/BMC/patrol/patrol_cfg.sh
opt/BMC/patrol/perform/
opt/BMC/patrol/rtserver/
opt/BMC/patrol/temp2/
opt/BMC/patrol/test/
opt/BMC/patrol/testftp/
opt/BMC/patrol/Uninstall/
When I run the first command (without -maxdepth), I get:
opt/BMC/patrol
opt/BMC/patrol/common/patrol.d
opt/BMC/patrol/Patrol3
opt/BMC/patrol/Patrol3.16
When I run the second command (with -maxdepth), I get:
opt/BMC/patrol
opt/BMC/patrol/Patrol3
opt/BMC/patrol/Patrol3.16
and common/patrol.d is not present in the results.

Related

BASH: grep a pattern in the filename at the end of the line

I just started with bash and I am writing a search script that looks for files and greps for a pattern. The simple idea:
find $HOME -type f | grep $1
In the current script I match everything that contains $1 (files and directories). I only want to match the pattern in the filename; I don't want to match the directories in the path. I tried lots of advanced expressions with symbols like "/.*^$" etc. to grep a specific part, but honestly, for a new user it is a bit hard.
The cut tool is not an option because I want the full path of the file.
EDIT:
Correct Example:
$ ./search test
/home/user/documents/test.txt
/home/user/downloads/blahtestblah.py
Incorrect example:
$ ./search test
/home/user/test/whatever.txt
In the incorrect example grep matched the keyword in the path, matching a directory.
^ matches the beginning of the whole line, i.e. of the path, so it cannot single out the filename. What works is requiring that no / appears between the pattern and the end of the line, which pins the match to the last path component. Try:
find "$HOME" -type f | grep "$1[^/]*$"
You could also use the -regex switch of find instead of piping the output to grep (-regex matches against the whole path):
find "$HOME" -type f -regex ".*$1[^/]*"
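Putting this together as the ./search script from the question might look like this (a minimal sketch under the same assumptions as above; only the grep variant is shown):
#!/bin/sh
# Usage: ./search PATTERN
# List regular files under $HOME whose name (the last path component) contains PATTERN.
find "$HOME" -type f | grep "$1[^/]*$"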

Unix - Only list directories which contain a subdirectory

How can I print in the Unix shell the number of directories in a tree which contain other directories?
I haven't found a solution yet with commands like find or ls.
You can use the find command: find . -type d -not -empty
That will print every directory that is not empty. You can control how deep you want the search to go with -maxdepth.
To print the number, you can use wc -l.
find . -type d -not -empty | wc -l
If you generate a list of all the directories under a particular directory, and then remove the last component from the name, you have a list of the directories containing subdirectories, but there are likely to be repeats in that list. So, you need to post-process the list, yielding (as a first approximation):
find ${base:-.} -type d |
sed 's%/[^/]*$%%' |
sort -u
Find all the directories under the directory or directories listed in variable $base, defaulting to the current directory, and print their names. The code assumes you don't have directories with a newline in the name. If you do, there are fixes, but the best fix is to rename the directory. The sed command removes the last slash and everything after it. The sort eliminates duplicate entries. What's left is the list of directories containing subdirectories.
Well, more or less. There's the degenerate case to consider: the top-level directories in the list will be listed regardless of whether they have sub-directories or not. Fixing that is a bit harder. You need to eliminate any lines of output that exactly match the directories specified to find before removing trailing material. So, you need something like:
{
printf '\\#^%s$#d\n' ${base:-.}
echo 's%/[^/]*$%%'
} > sed.script
find ${base:-.} -type d |
sed -f sed.script |
sort -u
rm -f sed.script
The \\#^%s$#d assumes you don't use # in directory names. If you do use it, then you need to find a character that you don't use in names (maybe Control-A) and use that in place of the #. If absolutely any character might appear in a name, then you'll need to do more work, escaping some obscure character such as Control-A when it appears in a directory name.
There's a problem still: using a fixed name like sed.script for a temporary file name is bad (for multiple reasons — such as two people trying to run the script at the same time in the same directory, though it can also be a security risk), so use mktemp to create a temporary file name:
tmp=$(mktemp ${TMPDIR:-/tmp}/dircnt.XXXXXX)
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15
{
printf '\\#^%s$#d\n' ${base:-.}
echo 's%/[^/]*$%%'
} > $tmp
find ${base:-.} -type d |
sed -f $tmp |
sort -u
rm -f $tmp
trap 0
This deals with the most common signals (HUP, INT, QUIT, PIPE, TERM) and removes the temporary file even if one of those arrives.
Clearly, if you want to simply count the number of directories, you can pipe the output from the commands above through wc -l to get the count.
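For instance, with the first approximation above, the count is:
find ${base:-.} -type d |
sed 's%/[^/]*$%%' |
sort -u |
wc -l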
ls -1d */*/. | cut -d / -f1 | uniq
This lists every path two levels deep (so only directories that actually contain a subdirectory produce output), cuts each path back to its first component, and collapses the duplicates.

Counting the Number of Files in a Directory, Including Hidden Files, with Bash

I want to count the number of files in the directory I am currently in (including hidden files). So far I have this:
ls -1a | wc -l
but I believe this returns 2 more than what I want, because it also counts "." (the current directory) and ".." (the directory above this one) as files. How would I go about getting the correct number?
I believe that to count all files, directories, and hidden files, you can also use a Bash array like this:
shopt -s nullglob dotglob
cd /whatever/path
arr=( * )
count="${#arr[@]}"
This also works with filenames that contain spaces or newlines.
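A quick way to try it out in a scratch directory (the path and file names here are made up):
cd "$(mktemp -d)"
touch visible1 visible2 .hidden
shopt -s nullglob dotglob
arr=( * )
echo "${#arr[@]}"    # prints 3: two ordinary files plus one hidden file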
Edit:
ls piped to wc is not the right tool for that job, because filenames in UNIX can contain newlines as well; such names would be counted multiple times.
Following @gniourf_gniourf's comment (thanks!) the following command will handle newlines in file names correctly and should be used:
find -mindepth 1 -maxdepth 1 -printf x | wc -c
The find command lists files in the current directory - including hidden files, excluding the . and .. because of -mindepth 1. It works non-recursively because of -maxdepth 1.
The -printf x action simply prints an x for each file in the directory which leads to an output like this:
xxxxxxxx
Piped to wc -c (-c means counting characters) you get your final result.
Former Answer:
Use the following command:
ls -1A | wc -l
-a will include all files or directories starting with a dot, but -A will exclude the current folder . and the parent folder ..
I suggest consulting man ls.
You almost got it right:
ls -1A | wc -l
If your filenames contain newlines or other funny characters, do:
find -type f -ls | wc -l

Find and copy all images in a directory using the terminal (Linux Mint), trying to understand the syntax

OS: Linux Mint
As the title says, in the end I would like to find and copy all images in a directory.
I found:
find all jpg (or JPG) files in a directory and copy them into the folder /home/joachim/neu2:
find . -iname \*.jpg -print0 | xargs -I{} -0 cp -v {} /home/joachim/neu2
and
find all image files in a directory:
find . -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image'
My problem is, first of all, that I don't really understand the syntax. Could someone explain the code?
And secondly, can someone combine the two commands into one that does what I want ;)
Greetings and thanks in advance!
First, understand that the pipe "|" links commands, feeding the output of the first into the second as its input. Both of your shell commands pipe the output of the find command into other commands (grep and xargs). Let's look at those commands one after another:
First command: find
find is a program to "search for files in a directory hierarchy" (that is the explanation from find's man page). The syntax is (in this case)
find <search directory> <search pattern> <action>
In both cases the search directory is . (that is the current directory). Note that it does not just search the current directory but all its subdirectories as well (the directory hierarchy).
The search pattern accepts options -name (meaning it searches for files the name of which matches the pattern given as an argument to this option) or -iname (same as name but case insensitive) among others.
The action may be -print0 (print the path of each match, i.e. the relative or absolute path to the file, terminating each name with a null character instead of a newline so that names with spaces survive) or -exec (execute the given command on the file(s); the command has to be terminated with ";" and every instance of "{}" is replaced by the filename).
That is, the first shell code (first part, left of the pipe)
find . -iname \*.jpg -print0
searches for all files ending in ".jpg" in the current directory hierarchy and prints their paths and names. The second one (first part)
find . -name '*' -exec file {} \;
finds all files in the current directory hierarchy and executes
file <filename>
on them. file is another command that determines and prints the file type (have a look at its man page for details: man file).
Second command: xargs
xargs is a command that "builds and executes command lines from standard input" (man xargs), i.e. from the find output that is piped into xargs. The command that it builds and executes is in this case
cp -v {} /home/joachim/neu2
Option -I{} defines the replacement string, i.e. every instance of {} in the command is replaced by the input item xargs reads (that is, a filename). Option -0 tells xargs that input items are terminated (separated) not by whitespace or newlines but by a null character. This matches find's -print0 output and is the standard, safe way to feed find output to xargs.
The command that is built and executed is then of course the copy command with option -v (verbose), and it copies each of the filenames it gets from find into the target directory.
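A tiny standalone illustration of -0 and -I{} (echo stands in for cp, so nothing is copied; the file names are made up):
printf '%s\0' 'a.jpg' 'b c.jpg' | xargs -0 -I{} echo cp -v {} /home/joachim/neu2
# prints "cp -v a.jpg /home/joachim/neu2" and "cp -v b c.jpg /home/joachim/neu2"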
Third command: grep
grep filters its input, giving only those lines or strings that match a particular pattern. Option -o tells grep to print only the matching string, not the entire line (see man grep); -P tells it to interpret the following pattern as a Perl regexp pattern. In Perl regex, ^ is the start of the line and .+ is any non-empty string; this string should then be followed by a colon, a space, a number of word characters (denoted \w+ in Perl regex), a space and the string "image". Essentially this grep command filters the file output so that only the lines belonging to image files remain. (Read about Perl regexes for instance here: http://www.comp.leeds.ac.uk/Perl/matching.html )
The command you actually wanted
Now what you want to do is (1) take the output of the second shell command (which lists the image files), (2) bring it into the appropriate form and (3) pipe it into the xargs command from the first shell command line (which then builds and executes the copy command you wanted). So this time we have a three (actually four) stage shell command with two pipes. Not a problem. We already have stages (1) and (3) (though in stage (3) we need to leave out the -0 option, because the input is no longer null-terminated find output; we need xargs to treat newlines as item separators).
Stage (2) is still missing. I suggest using the cut command for this. cut transforms strings by splitting them into fields (separated by a delimiter character in the original string) that can then be selected or rearranged. I will choose ":" as the delimiter character (it ends the filename in the grep output; option -d':') and tell it to give us just the first field (option -f1, essentially: print only the filename, not the part that comes after the ":"), i.e. stage (2) would then be
cut -d':' -f1
And the entire command you wanted will then be:
find . -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image' | cut -d':' -f1 | xargs -I{} cp -v {} /home/joachim/neu2
Note that you can find all the man pages for instance here: http://www.linuxmanpages.com
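For what it's worth, here is a space-safe sketch of the same idea that avoids parsing file's text output altogether; it assumes your file supports --mime-type (GNU file does), and the destination directory is the one from the question:
find . -type f -exec sh -c '
  for f do
    # classify each file by MIME type and copy only images
    case $(file -b --mime-type "$f") in
      image/*) cp -v -- "$f" /home/joachim/neu2 ;;
    esac
  done
' find-sh {} +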
I figured out a command only using awk that does the job as well:
find . -name '*' -exec file {} \; |
awk '{
if ($3=="image"){
print substr($1, 1, length($1)-1);
system("cp " substr($1, 1, length($1)-1) " /home/joachim/neu2" )
}
}'
the substr($1, 1, length($1)-1) is needed because file prints the name in its first column with a trailing colon, which has to be stripped;
The above answer is really good, but it could take a long time for a huge directory. Here is a shorter version of it, if you already know your file extension:
find . -name '*.jpg' | xargs -I{} cp --parents -v {} ~/testimage/
Here's another one which works like a charm.
It adds the EPOCH time to prevent overwriting files with the same name.
cd /media/myhome/'Local station'/
find . -path ./jpg -prune -o -type f -iname '*.jpg' -exec sh -c '
for file do
newname="${file##*/}"
newname="${newname%.jpg}"
mv -T -- "$file" "/media/myhome/Local station/jpg/$newname-$(date +%s).jpg"
done
' find-sh {} +
cd ~/
It's been designed by Kamil in this post here.
Find a specific type of file in a directory:
find /home/user/find/data/ -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image'
Copy a specific type of file from one directory to another:
find /home/user/find/data/ -name '*' -exec file {} \; | grep -o -P '^.+: \w+ image' | cut -d':' -f1 | xargs -I{} cp -v {} /home/user/copy/data/

Find the number of files in a directory

Is there any method in Linux to calculate the number of files in a directory (that is, immediate children) in O(1) (independently of the number of files) without having to list the directory first? If not O(1), is there a reasonably efficient way?
I'm searching for an alternative to ls | wc -l.
readdir is not as expensive as you may think. The knack is to avoid stat'ing each file and (optionally) to avoid sorting the output of ls.
/bin/ls -1U | wc -l
avoids aliases in your shell, doesn't sort the output, and lists one file per line (the last point is not strictly necessary when piping the output into wc).
The original question can be rephrased as "does the data structure of a directory store a count of the number of entries?", to which the answer is no. There isn't a more efficient way of counting files than readdir(2)/getdents(2).
One can get the number of subdirectories of a given directory without traversing the whole list by stat'ing (stat(1) or stat(2)) the given directory and observing the number of links to it. A given directory with N child directories will have a link count of N+2: one link for the ".." entry of each subdirectory, one for the directory's own "." entry, and one for its name entry in its parent directory.
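A quick illustration of the link-count trick (a sketch; stat -c %h is GNU coreutils syntax, and the test path is made up):
mkdir -p /tmp/lc/a /tmp/lc/b /tmp/lc/c
stat -c %h /tmp/lc    # prints 5: three subdirectories plus 2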
However one cannot get the number of all files (whether regular files or subdirectories) without traversing the whole list -- that is correct.
The "/bin/ls -1U" command will not get all entries however. It will get only those directory entries that do not start with the dot (.) character. For example, it would not count the ".profile" file found in many login $HOME directories.
One can use either the "/bin/ls -f" command or the "/bin/ls -Ua" command to avoid the sort and get all entries.
Perhaps unfortunately for your purposes, either the "/bin/ls -f" command or the "/bin/ls -Ua" command will also count the "." and ".." entries that are in each directory. You will have to subtract 2 from the count to avoid counting these two entries, such as in the following:
expr `/bin/ls -f | wc -l` - 2 # Those are back ticks, not single quotes.
The --format=single-column (-1) option is not necessary on the "/bin/ls -Ua" command when piping the "ls" output to another command, such as "wc" in this case. The "ls" command automatically writes its output in a single column if the output is not a terminal.
The -U option for ls is not in POSIX, and in OS X's ls it has a different meaning than in GNU ls: it makes -t and -l use creation times instead of modification times. -f is in POSIX as an XSI extension. The manual of GNU ls describes -f as "do not sort, enable -aU, disable -ls --color" and -U as "do not sort; list entries in directory order".
POSIX describes -f like this:
Force each argument to be interpreted as a directory and list the name found in each slot. This option shall turn off -l, -t, -s, and -r, and shall turn on -a; the order is the order in which entries appear in the directory.
Commands like ls|wc -l give the wrong result when filenames contain newlines.
In zsh you can do something like this:
a=(*(DN));echo ${#a}
D (glob_dots) includes files whose name starts with a period, and N (null_glob) keeps the command from raising an error in an empty directory.
Or the same in bash:
shopt -s dotglob nullglob;a=(*);echo ${#a[@]}
If IFS contains ASCII digits, add double quotes around ${#a[@]}. Add shopt -u failglob to ensure that failglob is unset.
A portable option is to use find:
find . ! -name . -prune|grep -c /
grep -c / can be replaced with wc -l if filenames do not contain newlines. ! -name . -prune is a portable alternative to -mindepth 1 -maxdepth 1.
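On GNU find you can check the equivalence yourself; in a directory whose names contain no newlines, both of these report the same number (a sketch):
find . ! -name . -prune | grep -c /
find . -mindepth 1 -maxdepth 1 | wc -l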
Or here's another alternative that does not usually include files whose name starts with a period:
set -- *;[ -e "$1" ]&&echo "$#"
The command above does however include files whose name starts with a period when an option like dotglob in bash or glob_dots in zsh is set. When * matches no file, the command results in an error in zsh with the default settings.
I used this command and it works like a charm; just change the -maxdepth if you also want to count within subdirectories:
find * -maxdepth 0 -type d -exec sh -c "echo -n {} ' ' ; ls -lR {} | wc -l" \;
I think you can have more control on this using find:
find <path> -maxdepth 1 -type f -printf "." | wc -c
find -maxdepth 1 will not go deeper into the hierarchy of files.
-type f allows filtering to just files. Similarly, you can use -type d for directories.
-printf "." prints a dot for every match.
wc -c counts the characters, so it counts the dots created by the print... which means counting how many files exist in the given path.
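Wrapped into a small function for reuse (a sketch; the function name count_files is my own):
count_files() {
  # count the regular files directly inside the given directory (default: .)
  find "${1:-.}" -maxdepth 1 -type f -printf "." | wc -c
}
count_files /etc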
For the number of all files in the current directory, try this:
ls -lR * | wc -l
As far as I know, there is no better alternative. This information might be off-topic for this question, and you may already know it: under Linux (and in general under Unix), directories are just special files which contain the list of other files (the exact details depend on the specific file system, but this is the general idea). And there is no call to find the total number of entries without traversing the whole list. Please correct me if I'm wrong.
use ls -1 | wc -l
