Calculate the total size of all files from a generated folders list with full PATH - linux

I have a list containing multiple directories with the full PATH:
/mnt/directory_1/sub_directory_1/
/mnt/directory_2/
/mnt/directory_3/sub_directory_3/other_directories_3/
I need to calculate the total size of this list.
From "Get total size of a list of files in UNIX" I tried:
du -ch $file_list | tail -1 | cut -f 1
This was the closest answer I could find, but it gave me the following error message:
bash: /bin/du: Argument list too long

Do not use backticks `...`. Use $(...) instead.
Do not use:
command $(cat something)
This is a common anti-pattern: it works for simple cases but fails for many more, because the result of $(...) undergoes word splitting and filename expansion.
Check your scripts with http://shellcheck.net
If you want to "run a command with arguments from a file", use xargs or write a loop. Read https://mywiki.wooledge.org/BashFAQ/001. Also, xargs will handle too many arguments by itself. I would also add -s to du. Try:
xargs -d'\n' du -sch < file_list.txt | tail -1 | cut -f 1
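If you prefer the loop approach also mentioned above, here is a minimal sketch, assuming file_list.txt holds one directory per line (as in the xargs command) and that a total in kilobytes is acceptable:
total=0
while IFS= read -r dir; do
    # du -sk prints one line per directory: "<size-in-KiB><TAB><path>"
    size=$(du -sk "$dir" | cut -f 1)
    total=$((total + size))
done < file_list.txt
echo "${total} KiB"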

Related

grep search for pipe term Argument list too long

I have something like
grep ... | grep -f - *orders*
where the first grep ... gives a list of order numbers like
1393
3435
5656
4566
7887
6656
and I want to find those orders in multiple files (a_orders_1, b_orders_3 etc.), these files look something like
1001|strawberry|sam
1002|banana|john
...
However, when the first grep... returns too many order numbers I get the error "Argument list too long".
I also tried to give the grep command one order number at a time using a while loop but that's just way too slow. I did
grep ... | while read order; do grep $order *orders*; done
I'm clearly very new to Unix; explanations would be greatly appreciated, thanks!
The problem is the expansion of *orders* in grep ... | grep -f - *orders*. Your shell expands the pattern to the full list of files before passing that list to grep.
So we need to pass fewer "orders" files to each grep invocation. The find program is one way to do that, because it accepts wildcards and expands them internally:
find . -name '*orders*' # note this searches subdirectories too
Now that you know how to generate the list of filenames without running into the command line length limit, you can tell find to execute your second grep:
grep ... | find . -name '*orders*' -exec grep -f - {} +
The {} is where find places the filenames, and the + terminates the command. It tells find that you're OK with passing multiple arguments to each invocation of grep -f, while still respecting the command line length limit by invoking grep -f more than once if the list of files exceeds the allowed length of a single command.
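One caveat I would add (not part of the original answer): if find does split the file list over several grep invocations, only the first invocation gets to read the order numbers from standard input, and the later ones see an empty pattern list. A sketch of a more robust variant saves the patterns to a temporary file first (the file name here is just an example):
grep ... > /tmp/order_numbers.txt                                # save the order numbers first
find . -name '*orders*' -exec grep -f /tmp/order_numbers.txt {} +
rm /tmp/order_numbers.txt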

Unable to run cat command in CentOS (argument list too long)

I have a folder which has around 300k files, each file 2-3 MB in size.
Now I want to run a command in the shell to count occurrences of the character { in those files.
My command:
nohup cat *20200119*| grep "{" | wc -l > /mpt_sftp/mpt_cdr_ocs/file.txt
This works fine with a small number of files.
When I run it in the location that holds all the files (300k of them), it shows:
Argument list too long
Would you please try the following:
find . -maxdepth 1 -type f -name "*20200119*" -print0 | xargs -0 grep -F -o "{" | wc -l > /mpt_sftp/mpt_cdr_ocs/file.txt
I have actually tested this with 300,000 files with 10-character filenames and it works well.
xargs automatically adjusts the length of the argument list fed to grep, so we don't need to worry about it. (You can see how the grep command is executed by adding the -t option to xargs.)
The -F option drastically speeds up grep by searching for a fixed string rather than a regex.
The -o option will be needed if the character { appears multiple times in a line and you want to count them individually.
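As a small illustration of that difference (my example, not from the original answer): with -o every brace is printed on its own line, so wc -l counts occurrences, whereas counting matching lines would miss repeats within a line:
printf '{"a":{"b":1}}\n' | grep -F -o "{" | wc -l    # prints 2: two occurrences of {
printf '{"a":{"b":1}}\n' | grep -F -c "{"            # prints 1: one matching line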
The maximum size of the argument list varies, but it is usually something like 128 KiB or 256 KiB. That means you have an awful lot of files if the *20200119* part is overflowing the maximum argument list. With around 300,000 files, each name containing at least the 8-character date string plus enough other characters to make it unique, the list of file names will be far too long for even the largest plausible maximum argument list size.
Note that the nohup cat part of your command is not sensible (see UUoC: Useless Use of Cat); you should be using grep '{' *20200119* to save transferring all that data down a pipe unnecessarily. However, that too would run into problems with the argument list being too long.
You will probably have to use a variant of the following command to get the desired result without overflowing your command line:
find . -depth 1 -name '*20200119*' -exec grep '{' {} + | wc -l
This uses the feature of POSIX find that groups as many arguments as will fit on the command line without overflowing, so it runs grep on large (but not too large) numbers of files, and then passes the output of the grep commands to wc. If you're worried about the file names appearing in the output, suppress them with grep's -h option.
Or you might use:
find . -depth 1 -name '*20200119*' -exec grep -c -h '{' {} + |
awk '{sum += $1} END {print sum}'
The grep -c -h on macOS produces a simple number (the count of the number of lines containing at least one {) on its standard output for each file listed in its argument list; so too does GNU grep. The awk script adds up those numbers and prints the result.
Using -depth 1 is supported by find on macOS, and so is -maxdepth 1; the two are equivalent there. GNU find does not appear to support -depth 1, and POSIX find only supports -depth with no number, so it is better to use -maxdepth 1. You would also probably get a better error message from -maxdepth 1 with a find that only supports POSIX's rather minimal set of options than you would from -depth 1.
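Putting those notes together, a portable sketch of the second command, with -maxdepth 1 and an added -type f to skip directories (my assumption), might look like:
find . -maxdepth 1 -type f -name '*20200119*' -exec grep -c -h '{' {} + |
awk '{sum += $1} END {print sum}'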

Operating on multiple results from find command in bash

Hi, I'm a novice Linux user. I'm trying to use the find command in bash to search through a given directory whose subdirectories each contain files of the same name but with varying content, in order to find the maximum value within those files.
Initially I wasn't taking the directory as input and knew the file wouldn't be less than 2 directories deep so I was using nested loops as follows:
prev_value=0
for i in <directory_name> ; do
    if [ -d "$i" ]; then
        cd $i
        for j in "$i"/* ; do
            if [ -d "$j" ]; then
                cd $j
                curr_value=`grep "<keyword>" <filename>.txt | cut -c32-33` # gets value I'm comparing
                if [ $curr_value -lt $prev_value ]; then
                    curr_value=$prev_value
                else
                    prev_value=$curr_value
                fi
            fi
        done
    fi
done
echo $prev_value
Obviously that's not going to cut it now. I've looked into the -exec option of find, but since find is producing a vast number of results I'm just not sure how to handle the variable assignment and comparisons. Any help would be appreciated, thanks.
find "${DIRECTORY}" -name "${FILENAME}.txt" -print0 | xargs -0 -L 1 grep "${KEYWORD}" | cut -c32-33 | sort -nr | head -n1
We find the filenames that are named FILENAME.txt (FILENAME is a bash variable) that exist under DIRECTORY.
We print them all out, separated by nulls (this avoids any problems with certain characters in directory or file names).
Then we read them all in again using xargs, and pass the null-separated (-0) values as arguments to grep, launching one grep for each filename (-L 1; let's be POSIX-compliant here). I do that to avoid grep printing the filenames, which would mess up cut.
Then we sort all the results, numerically (-n), in descending order (-r).
Finally, we take the first line (head -n1) of the sorted numbers - which will be the maximum.
P.S. If you have 4 CPU cores, you can add the -P 4 option to xargs to make the grep part run faster.
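For example, with four grep processes running in parallel (the same pipeline as above, only -P 4 added):
find "${DIRECTORY}" -name "${FILENAME}.txt" -print0 | xargs -0 -L 1 -P 4 grep "${KEYWORD}" | cut -c32-33 | sort -nr | head -n1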

Listing the results of the du command in alphabetical order

How can I list the results of the du command in alphabetical order?
I know I can use the find command to list them alphabetically, but that does not show the directory sizes. I also use the -maxdepth option for both commands so that the listing only goes one subdirectory deep.
Here is the assignment question:
Write a shell script that implements a directory size analyzer. In your script you may use common Linux commands. The script should list the disk storage occupied by each immediate subdirectory of a given argument or the current directory (if no argument is given) with the subdirectory names sorted alphabetically. Also, list the name of the subdirectory with the highest disk usage along with its storage size. If more than one subdirectory has the same highest disk usage, list any one of those subdirectories. Include meaningful brief comments. List of bash commands applicable for this script includes the following but not limited: cat, cut, du, echo, exit, for, head, if, ls, rm, sort, tail, wc. You may use bash variables as well as temporary files to hold intermediate results. Delete all temporary files at the end of the execution.
Here is my result after entering du $dir -hk --max-depth=2 | sort -o temp1.txt and then cat temp1.txt at the command line:
12 ./IT_PLAN/Inter_Disciplinary
28 ./IT_PLAN
3 ./IT_PLAN/Core_Courses
3 ./IT_PLAN/Pre_reqs
81 .
9 ./IT_PLAN/IT_Electives
It should look like this:
28 ./IT_PLAN
3 ./IT_PLAN/Core_Courses
12 ./IT_PLAN/Inter_Disciplinary
9 ./IT_PLAN/IT_Electives
The subdirectory with the maximum disk space use:
28 ./IT_PLAN
Once again, I'm having trouble sorting the results alphabetically.
Try doing this:
du $dir -hk --max-depth=2 | sort -k2
-k2 means sort on column 2, which holds the directory name.
See http://www.manpagez.com/man/1/sort/
du $dir -hk --max-depth=2 | awk '{print $2"\t"$1}' | sort -d -k1 -o temp1.txt
and if you want to remove the ./ path:
du $dir -hk --max-depth=2 | awk '{print $2"\t"$1}' | sed -e 's/\.\///g' | sort -d -k1 -o temp1.txt
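Putting the pieces together, here is one possible sketch of the whole assignment script. It follows the "immediate subdirectory" wording with --max-depth=1; the variable names and the use of mktemp and sed are my own choices:
#!/bin/bash
# Directory size analyzer: list the disk usage of each immediate subdirectory
# of the given (or current) directory, sorted by name, then report the largest.

dir="${1:-.}"                      # use the argument, or the current directory
tmp=$(mktemp)                      # temporary file for intermediate results

# du --max-depth=1 prints one line per immediate subdirectory and ends with a
# line for "$dir" itself; sed '$d' drops that last line, sort -k2 sorts by name.
du -k --max-depth=1 "$dir" | sed '$d' | sort -k2 > "$tmp"

cat "$tmp"                         # alphabetical listing

echo "The subdirectory with the maximum disk space use:"
sort -nr "$tmp" | head -n 1        # numeric sort, largest first

rm -f "$tmp"                       # delete the temporary file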

How to tell how many files match description with * in unix

Pretty simple question: say I have a set of files:
a1.txt
a2.txt
a3.txt
b1.txt
And I use the following command:
ls a*.txt
It will return:
a1.txt a2.txt a3.txt
Is there a way in a bash script to tell how many results will be returned when using a * pattern? In the above example, if I were to use a*.txt the answer should be 3, and if I used *1.txt the answer should be 2.
Comment on using ls:
I see all the other answers attempt this by parsing the output of ls. This is very unpredictable because it breaks when you have file names with "unusual characters" (e.g. spaces).
Another pitfall is that it is ls implementation dependent: a particular implementation might format its output differently.
There is a very nice discussion on the pitfalls of parsing ls output on the bash wiki maintained by Greg Wooledge.
Solution using bash arrays
For the above reasons, using bash syntax would be the more reliable option. You can use a glob to populate a bash array with all the matching file names. Then you can ask bash the length of the array to get the number of matches. The following snippet should work.
files=(a*.txt) && echo "${#files[@]}"
To save the number of matches in a variable, you can do:
files=(a*.txt)
count="${#files[#]}"
One more advantage of this method is you now also have the matching files in an array which you can iterate over.
Note: Although I keep repeating bash syntax above, the same array syntax also works in other shells with arrays, such as ksh and zsh; plain POSIX sh, however, has no arrays.
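One caveat worth adding here (my note, not part of the original answer): if nothing matches, the unexpanded pattern itself is kept, so the array has one element and the count is 1. In bash you can enable nullglob to get 0 instead:
shopt -s nullglob          # unmatched globs expand to nothing (bash-specific)
files=(a*.txt)
echo "${#files[@]}"        # prints 0 when there are no matching files
shopt -u nullglob          # restore the default behaviour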
You can't know ahead of time, but you can count how many results are returned, e.g.:
ls -l *.txt | wc -l
ls -l displays the directory entries matching the specified wildcard, and wc -l gives you the count.
You can save the value of this command in a shell variable with either
num=$(ls -l *.txt | wc -l)
or
num=`ls -l *.txt | wc -l`
and then use $num to access it. The first form is preferred.
You can use ls in combination with wc:
ls a*.txt | wc -l
The ls command lists the matching files one per line, and wc -l counts the number of lines.
I like suvayu's answer, but there's no need to use an array:
count() { echo $#; }
count *
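With the example files from the question, that gives:
count a*.txt    # prints 3
count *1.txt    # prints 2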
In order to count files that might have unpredictable names, e.g. containing new-lines, non-printable characters etc., I would use the -print0 option of find and awk with RS='\0':
num=$(find . -maxdepth 1 -print0 | awk -v RS='\0' 'END { print NR }')
Adjust the options to find to refine the count, e.g. if the criterion is files starting with a lower-case a and having a .txt extension in the current directory, use:
find . -maxdepth 1 -type f -name 'a*.txt' -print0
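Combined with the counting pipeline above, that becomes (same approach, only the find criteria changed):
num=$(find . -maxdepth 1 -type f -name 'a*.txt' -print0 | awk -v RS='\0' 'END { print NR }')
echo "$num"    # 3 for the example files in the question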
