How to read paths from a text file and get the file count under those paths - linux

I have a text file which contains multiple paths like below
$ cat directory.txt
/aaaa/bbbbb/ccccc/
/aaaa/bbbbb/eeeee/
/aaaa/bbbbb/ddddd/
I need to change directory to each path in the text file and get the count of files under those paths. Below is the code I used, but it is not working.
i=cat /aaaa/bbbbb/directory.txt
while read $i ;do
cd $i
ls |wc -l
done < /aaaa/bbbbb/count.txt

Actually, you're almost there. The line i=... is not needed, read $i should be read i, and you simply need to call ls with the path instead of cd-ing into it first.
#!/bin/bash
while read i; do
    ls "$i" | wc -l
done < "/xxx/yyy/count.txt"

Thanks everyone, I tried this code and it is working fine.
#!/bin/bash
for i in $(cat /nrt/home/directory.txt)
do
    cd "$i"
    ls | wc -l
done > /nrt/home/count.txt

Related

Shell - iterate over the content of a file but process only x lines at a time

So guys,
I need your help identifying the fastest and most fault-tolerant solution to my problem.
I have a shell script which executes some functions, based on a txt file in which I have a list of files.
The list can contain from 1 file to X files.
What I would like to do is iterate over the content of the file and execute my scripts for only 4 items out of the file at a time.
Once the functions have been executed for these 4 files, go over to the next 4, and keep on doing so until all the files from the list have been processed.
My code so far is as follows.
#!/bin/bash
number_of_files_in_folder=$(cat list.txt | wc -l)
max_number_of_files_to_process=4
Translated_files=/home/german_translated_files/

while IFS= read -r files
do
    while [[ $number_of_files_in_folder -gt 0 ]]; do
        i=1
        while [[ $i -le $max_number_of_files_to_process ]]; do
            my_first_function "$files" &    # I execute my translation function for each file, as it can only perform 1 file per execution
            find /home/german_translator/ -name '*.logs' -exec mv {} $Translated_files \;    # As there will be several files generated, I have them copied to another folder
            sed -i "/$files/d" list.txt     # We remove the processed file from within our list.txt file.
            my_second_function              # Without parameters as it will process all the files copied at step 2.
        done
        # here, I want to have all the files processed and don't stop after the first iteration
    done
done < list.txt
Unfortunately, as I am not very good at shell scripting, I do not know how to structure it so that it won't waste any resources and, most importantly, so that it processes everything from that file.
Do you have any advice on how to achieve what I am trying to achieve?
only 4 items out of the file. Once the functions have been executed for these 4 files, go over to the next 4
Seems to be quite easy with xargs.
your_function() {
    echo "Do something with $1 $2 $3 $4"
}
export -f your_function
xargs -d '\n' -n 4 bash -c 'your_function "$@"' _ < list.txt
xargs -d '\n' - split the input on newlines, one argument per line
-n 4 - take four arguments per invocation
bash ... - run this command with the 4 arguments
_ - the syntax is bash -c <script> $0 $1 $2 etc..., see man bash.
"$@" - forward the arguments
export -f your_function - export your function to the environment so the child bash can pick it up.
I execute my translation function for each file
So you execute your translation function for each file, not for each group of 4 files. If the "translation function" really works per file with no inter-file state, consider instead running 4 processes in parallel with the same code, using xargs -P 4, as sketched below.
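A minimal sketch of that variant, assuming GNU xargs (for -d and -P) and that my_first_function handles exactly one file per call:
export -f my_first_function
# Run up to 4 translations at a time, one file per invocation.
xargs -d '\n' -n 1 -P 4 bash -c 'my_first_function "$1"' _ < list.txt
# Afterwards, post-process everything in one go.
my_second_function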
If you have GNU Parallel it looks something like this:
doit() {
    my_first_function "$1"
    my_first_function "$2"
    my_first_function "$3"
    my_first_function "$4"
    my_second_function "$1" "$2" "$3" "$4"
}
export -f doit
cat list.txt | parallel -n4 doit

Using the ls command to hide non-executable files

I'm trying to have a command that will print only the non-executable files sorted by modification time in the current directory.
What I have so far is:
$ ls -lt | grep -i "...x......"
This is printing all of the files in the directory. I'm just starting to learn to code, so any help would be much appreciated.
The way to go:
for file in *; do test -x "$file" || echo "$file"; done
This way you avoid parsing the ls output.
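If you also need the modification-time ordering the question asks for, one possibility, not part of the original answer and assuming GNU find and coreutils, is to let find do the filtering and sort do the ordering, still without parsing ls:
# Newest first: print "mtime<TAB>name" for non-executable regular files,
# sort numerically by mtime (descending), then drop the timestamp column.
find . -maxdepth 1 -type f ! -executable -printf '%T@\t%p\n' | sort -rn | cut -f2-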

Use ls -l in a Linux shell script and separate the results

When I execute this script that I made:
#!/bin/bash
for object in $(ls -l)
do
    echo $object
done
every whitespace-separated word of the output is displayed on its own line. When I execute my script, I want each entry of the ls -l output to stay on a single line, as it does when the command is run directly.
I was trying a lot of things but it's not working.
Please, I need your help.
Thank you in advance.
You want to read the output of ls -l line by line; to do this, replace the line
for object in $(ls -l)
by
ls -l | while read object
and change
echo $object
to
echo "$object"
(to preserve spaces).
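Putting those two changes together, the whole script becomes:
#!/bin/bash
# Read the output of ls -l line by line instead of word by word.
ls -l | while read object
do
    echo "$object"
done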
If you insist on using for, you can do:
readarray -t < <(ls -l)
for object in "${MAPFILE[#]}"; do echo "$object"; done

List files greater than 100K in bash

I want to list the files recursively in the HOME directory. I'm trying to write my own script, so I should not use the commands find or ls. My script is:
#!/bin/bash
minSize=102400;
printFiles() {
    for x in "$1/"*; do
        if [ -d "$x" ]; then
            printFiles "$x";
        else
            size=$(wc -c "$x");
            if [[ "$size" -gt "$minSize" ]]; then
                echo "$size";
            fi
        fi
    done
}
printFiles "/~";
So, the problem here is that when I run this script, the terminal throws Line 11: division by 0 and /home/gandalf/Videos/*: No such file or directory. I have not divided by any number, so why am I getting this error? And what about the second one?
Also, I can't use find or ls because I have to display the files one by one, asking the user whether they want to see the next file or not. Is this possible using find or ls, or can it only be done by writing my own function?
Thanks.
size=$(wc -c "$x");
That's the line that is failing. When you run that wc command manually you should be able to see why:
$ wc -c /tmp/out
5 /tmp/out
The output contains not only the file size but also the file name. So you can't use $size with the -gt comparator on the next line. One way to fix that is to change the wc line to use cut (or awk, or sed, etc) to keep just the file size.
size=$(wc -c "$x" | cut -f1 -d " ")
A simpler alternative suggested by @mklement0:
size=$(wc -c < "$x")
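Applying that fix to the original function gives something like the sketch below. Two details are assumptions, not part of the answer: the starting directory is written as "$HOME" on the guess that "/~" was meant to be the home directory, and the -f test guards against the literal unexpanded pattern that an empty directory would otherwise pass to wc:
#!/bin/bash
minSize=102400

printFiles() {
    for x in "$1/"*; do
        if [ -d "$x" ]; then
            printFiles "$x"
        elif [ -f "$x" ]; then
            size=$(wc -c < "$x")    # < gives just the byte count, no file name
            if [[ "$size" -gt "$minSize" ]]; then
                echo "$size"
            fi
        fi
    done
}

printFiles "$HOME"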

Execute and delete command from a file

I have multiple files with an insanely long list of commands. I can't run them all in one go, so I need a smart way to read and execute the commands from a file as well as delete each command after completion.
So far I have tried
for i in filename.txt ; do ; execute $i ; sed -s 's/$i//' ; done ;
but it doesn't work. Before I introduced sed, $i was executing. Now even that is not working.
I thought of a workaround where I read the first line and delete the first line until the file is empty.
Any better ideas or commands?
This should work for you; list.txt is your file containing the commands.
Make sure you backup the command file before running.
while read line; do $line; sed -i '1d' list.txt; done < "list.txt"
sed -i edits in-place, so list.txt will be changed as the loop runs and you will end up with an empty file.
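The workaround described in the question (take the first line, run it, then delete it until the file is empty) can be sketched like this, assuming GNU sed for -i; it avoids reading from the same file that sed is rewriting:
# Keep going while list.txt still has content.
while [ -s list.txt ]; do
    cmd=$(head -n 1 list.txt)    # grab the first remaining command
    eval "$cmd"                  # run it
    sed -i '1d' list.txt         # then drop it from the file
done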
I think what you want to do is something like this:
while read -r -- i; do $i; sed -i "0,/$i/s/$i//;/^$/d" filename.txt; done < filename.txt
The file is read into the loop. Each line is executed, and the sed command will delete only the first entry it finds, then delete the empty line.
I think that one way to do it is to have a source file of all the commands to be executed, and have the script that executes the commands also write a second log file that lists the commands as they are executed.
If you need to resume the process, you work on the lines in the source file that are not present in the log file.
logfile=commands.log
srcfile=commands.src
oldfile=commands.old
trap "mv $oldfile $logfile; exit 1" 0 1 2 3 13 15
[ -f $logfile ] || cp /dev/null $logfile
cp $logfile $oldfile
comm -23 $srcfile $logfile |
while read -r line
do
    echo "$line" >> $oldfile
    ($line) < /dev/null
done
mv $oldfile $logfile
trap 0
