For a problem at uni I need to get the file size and file name of the 5 largest files in a series of directories. To do this I'm using two functions: one loads everything in with ls -l (I realize that parsing ls output isn't a good method, but this particular problem specifies that I can't use find, locate, or du). Each line of the ls output is then passed to another function, which should use awk to extract the file size and file name and store them in arrays. Instead, awk seems to be trying to open every column of the ls output as a file to read.
The code for this is as follows:
function addFileSize {
    local y=0
    local curLine=$1
    if [[ -z "${sizeArray[0]}" ]]; then
        i=$(awk '{print $5}' $curLine)
        nameArray[y]=$(awk '{print $9}' $curLine)
    elif [[ -z "${sizeArray[1]}" ]]; then
        i=$(awk '{print $5}' $curLine)
        nameArray[y]=$(awk '{print $9}' $curLine)
    elif [[ -z "${sizeArray[2]}" ]]; then
        i=$(awk '{print $5}' $curLine)
        nameArray[y]=$(awk '{print $9}' $curLine)
    elif [[ -z "${sizeArray[3]}" ]]; then
        i=$(awk '{print $5}' $curLine)
        nameArray[y]=$(awk '{print $9}' $curLine)
    elif [[ -z "${sizeArray[4]}" ]]; then
        i=$(awk '{print $5}' $curLine)
        nameArray[y]=$(awk '{print $9}' $curLine)
    fi
    for i in "${sizeArray[@]}"; do
        echo "$(awk '{print $5}' $curLine)"
        if [[ -z "$i" ]]; then
            i=$(awk '{print $5}' $curLine)
            nameArray[y]=$(awk '{print $9}' $curLine)
            break
        elif [[ $i -lt $(awk '{print $5}' $curLine) ]]; then
            i=$(awk '{print $5}' $curLine)
            nameArray[y]=$(awk '{print $9}' $curLine)
            break
        fi
        let "y++"
    done
    echo "Name Array:"
    echo "${nameArray[@]}"
    echo "Size Array:"
    echo "${sizeArray[@]}"
}

function searchFiles {
    local curdir=$1
    for i in $( ls -C -l -A $curdir | grep -v ^d | grep -v ^total ); do # Searches through all files in the current directory
        if [[ -z "${sizeArray[4]}" ]]; then
            addFileSize $i
        elif [[ ${sizeArray[4]} -lt $(awk '{print $5}' $i) ]]; then
            addFileSize $i
        fi
    done
}
Any help would be greatly appreciated, thanks.
If the problem is specifically supposed to be about parsing, then awk might be a good option (although ls output is challenging to parse reliably). Likewise, if the problem is about working with arrays, then your solution should focus on those.
However, if the problem is there to encourage learning about the tools available to you, I would suggest:
the stat tool prints particular pieces of information about a file (including size)
the sort tool re-orders lines of input
the head and tail tools print the first and last lines of input
and your shell can also perform pathname expansion to list files matching a glob wildcard pattern like *.txt
Imagine a directory with some files of various sizes:
10000000 sound/concert.wav
1000000 sound/song.wav
100000 sound/ding.wav
You can use pathname expansion to find their names:
$ echo sound/*
sound/concert.wav sound/ding.wav sound/song.wav
You can use stat to turn a name into a size:
$ stat -f 'This one is %z bytes long.' sound/ding.wav
This one is 100000 bytes long.
Like most Unix tools, stat works the same whether you provide it one argument or several:
$ stat -f 'This one is %z bytes long.' sound/concert.wav sound/ding.wav sound/song.wav
This one is 10000000 bytes long.
This one is 100000 bytes long.
This one is 1000000 bytes long.
(Check man stat for reference on %z and what else you can print. The file's name, %N, is particularly useful.)
Now you have a list of file sizes (and hopefully you've kept their names around too). How do you find which sizes are biggest?
It's much easier to find the biggest item in a sorted list than an unsorted list. To get a feel for it, think about how you might find the highest two items in this unsorted list:
1234 5325 3243 4389 5894 245 2004 45901 3940 3255
Whereas if the list is sorted, you can find the biggest items very quickly indeed:
245 1234 2004 3243 3255 3940 4389 5325 5894 45901
The Unix sort utility takes lines of input and outputs them from lowest to highest (or in reverse order with sort -r).
It defaults to sorting character-by-character, which is great for words ("apple" comes before "balloon") but not so great for numbers ("10" comes before "9"). You can activate numeric sorting with sort -n.
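You can see the difference with two numbers:
$ printf '10\n9\n' | sort
10
9
$ printf '10\n9\n' | sort -n
9
10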
Once you have a sorted list of lines, you can print the first lines with the head tool, or print the last lines using the tail tool.
The first two items of the (already-sorted) list of words for spell-checking:
$ head -n 2 /usr/share/dict/words
A
a
The last two items:
$ tail -n 2 /usr/share/dict/words
Zyzomys
Zyzzogeton
With those pieces, you can assemble a solution to the problem "find the five biggest files across dir1, dir2, dir3":
stat -f '%z %N' dir1/* dir2/* dir3/* |
sort -n |
tail -n 5
Or a solution to "find the biggest file in each of dir1, dir2, dir3, dir4, dir5":
for dir in dir1 dir2 dir3 dir4 dir5; do
    stat -f '%z %N' "$dir"/* |
    sort -n |
    tail -n 1
done
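(Note that the -f format flag above is BSD stat syntax, as on macOS. If your stat is the GNU coreutils version, as on most Linux systems, the equivalent flag is -c, with %s for the size in bytes and %n for the name:)
stat -c '%s %n' dir1/* dir2/* dir3/* |
sort -n |
tail -n 5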
Without using find, locate, or du, you could do the following for each directory:
ls -Sl|grep ^\-|head -5|awk '{printf("%s %d\n", $9, $5);}'
which lists all files by size, filters out directories, takes the top 5, and prints the file name and size. Wrap with a loop in bash for each directory.
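For instance, a minimal sketch of that loop, assuming directories named dir1, dir2, dir3 (a subshell cd keeps the listing to bare file names):
for dir in dir1 dir2 dir3; do
    echo "$dir:"
    (cd "$dir" && ls -Sl | grep ^\- | head -5 | awk '{printf("%s %d\n", $9, $5);}')
done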
Use ls -S to sort by size, pipe through head to get the top five, pipe through sed to compress multiple spaces into one, then pipe through cut to get the size and file name fields.
robert@habanero:~/scripts$ ls -lS | head -n 5 | sed -e 's/  */ /g' | cut -d " " -f 5,9
32K xtractCode.pl
29K tmd55.pl
24K tagebuch.pl
14K backup
Just specify the directories as arguments to the initial ls.
This would be another choice. Ctrl+V followed by Ctrl+I (Tab) is how to insert a literal tab at the command line; type that wherever "Ctrl+V+I" appears below.
ls -lS dir1 dir2 dir3.. | awk 'BEGIN{print "Size""Ctrl+V+I""Name"}NR <= 6{print $5"Ctrl+V+I"$9}'
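If you'd rather not type a literal tab, awk's \t escape produces the same output:
ls -lS dir1 dir2 dir3 | awk 'BEGIN{print "Size\tName"} NR <= 6 {print $5"\t"$9}'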
If you can't use find, locate, or du, there's still a straightforward way to get a file's size without resorting to parsing ls:
size=$(wc -c < "$file")
wc is smart enough to detect a file on STDIN and call stat to get the size, so this works just as fast.
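For instance, a sketch that combines wc -c with a glob and sort to pick the five largest without parsing ls (dir1 and dir2 are placeholder directory names):
for f in dir1/* dir2/*; do
    [ -f "$f" ] || continue                      # skip anything that isn't a regular file
    printf '%s %s\n' "$(wc -c < "$f")" "$f"      # size, then name
done | sort -n | tail -n 5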
Related
I am writing a function in a bash shell script that should return the lines of CSV files (with headers) that have more commas than the header. This can happen because some values inside these files may themselves contain commas. For quality control, I must identify these lines so they can be cleaned up later. What I have currently:
#!/bin/bash
get_bad_lines () {
    local correct_no_of_commas=$(head -n 1 $1/$1_0_0_0.csv | tr -cd , | wc -c)
    local no_of_files=$(ls $1 | wc -l)
    for i in $(seq 0 $(( ${no_of_files}-1 )))
    do
        # Check that the file exists
        if [ ! -f "$1/$1_0_${i}_0.csv" ]; then
            echo "File: $1_0_${i}_0.csv not found!"
            continue
        fi
        # Search for error-lines inside the file and print them out
        echo "$1_0_${i}_0.csv has over $correct_no_of_commas commas in the following lines:"
        grep -o -n '[,]' "$1/$1_0_${i}_0.csv" | cut -d : -f 1 | uniq -c | awk '$1 > $correct_no_of_commas {print}'
    done
}

get_bad_lines products
get_bad_lines users
The output of this program is currently all the comma counts with all of the line numbers in all the files. I suspect this is because the input $1 (the folder name, i.e. products and users) conflicts with the $1 in the awk call (where I want to grab the first column, the comma count for that line of the current file in the loop).
Is this the issue? And if so, could it be solved by referring to the first column and the folder name through different variable names instead of both using $1?
Example, current output:
5 6667
5 6668
5 6669
5 6670
(should only show lines for that file having more than 5 commas).
I tried variable declaration in the call to awk as well (as in the accepted answer to Awk field variable clash with function argument), with the same effect:
get_bad_lines () {
    local table_name=$1
    local correct_no_of_commas=$(head -n 1 $table_name/${table_name}_0_0_0.csv | tr -cd , | wc -c)
    local no_of_files=$(ls $table_name | wc -l)
    for i in $(seq 0 $(( ${no_of_files}-1 )))
    do
        # Check that the file exists
        if [ ! -f "$table_name/${table_name}_0_${i}_0.csv" ]; then
            echo "File: ${table_name}_0_${i}_0.csv not found!"
            continue
        fi
        # Search for error-lines inside the file and print them out
        echo "${table_name}_0_${i}_0.csv has over $correct_no_of_commas commas in the following lines:"
        grep -o -n '[,]' "$table_name/${table_name}_0_${i}_0.csv" | cut -d : -f 1 | uniq -c | awk -v table_name="$table_name" '$1 > $correct_no_of_commas {print}'
    done
}
You can do the whole job in awk:
get_bad_lines () {
    find "$1" -maxdepth 1 -name "$1_0_*_0.csv" | while read -r my_file ; do
        awk -v table_name="$1" '
            NR==1 { num_comma=gsub(/,/, ""); }
            /,/ { if (gsub(/,/, ",", $0) > num_comma) wrong_array[wrong++]=NR":"$0;}
            END { if (wrong > 0) {
                print(FILENAME" has over "num_comma" commas in the following lines:");
                for (i=0;i<wrong;i++) { print(wrong_array[i]); }
            }
        }' "${my_file}"
    done
}
As for why your original awk command failed to print only the lines with too many commas: you are using the shell variable correct_no_of_commas inside a single-quoted awk program ('$1 > $correct_no_of_commas {print}'). The shell performs no substitution inside single quotes, so awk sees $correct_no_of_commas as-is and treats correct_no_of_commas as an awk variable. That variable is undefined in the awk script, so it evaluates to the empty string, and awk ends up executing $1 > $"" as the condition. Since $"" is equivalent to $0, awk compares the count in $1 with the full input line. The full input line has the form <spaces><count> <line-number>, which is not a pure number, so awk falls back to a string comparison, and $1 > $correct_no_of_commas comes out true for every line.
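The minimal fix, then, keeping the question's own pipeline, is to pass the shell variable into awk with -v and compare against the awk variable rather than a field:
grep -o -n '[,]' "$table_name/${table_name}_0_${i}_0.csv" | cut -d : -f 1 | uniq -c | awk -v max="$correct_no_of_commas" '$1 > max {print}'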
You can identify all the bad lines with a single awk command
awk -F, 'FNR==1{print FILENAME; headerCount=NF;} NF>headerCount{print} ENDFILE{print "#######\n"}' /path/here/*.csv
If you want the line number also to be printed, use this
awk -F, 'FNR==1{print FILENAME"\nLine#\tLine"; headerCount=NF;} NF>headerCount{print FNR"\t"$0} ENDFILE{print "#######\n"}' /path/here/*.csv
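Note that ENDFILE is a GNU awk extension. Under a strictly POSIX awk, a rough equivalent prints the separator when the next file starts instead:
awk -F, 'FNR==1{if (NR > 1) print "#######\n"; print FILENAME; headerCount=NF} NF > headerCount{print}' /path/here/*.csv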
I'm pretty new to bash scripting, so some of the syntax may not be optimal. Please do point it out if you see something.
I have files in a directory named sequentially.
Example: prob01_01 prob01_03 prob01_07 prob02_01 prob02_03 ....
I am trying to have the script iterate through the current directory and count how many "extensions" each problem has, then print the pre-extension name and the count.
Sample output for above would be:
prob01 3
prob02 2
This is my code:
#!/bin/bash
temp=$(mktemp)
element=''
count=0
for i in *
do
    current=${i%_*}
    if [[ $current == $element ]]
    then
        let "count+=1"
    else
        echo $element $count >> temp
        element=$current
        count=1
    fi
done
echo 'heres the temp:'
cat temp
rm 'temp'
The Problem:
Current output:
prob1 3
Desired output:
prob1 3
prob2 2
The last count isn't appended because it's not seeing a different element after it
My Guess on possible solutions:
Have the last append occur at the end of the for loop?
Your code has two problems.
The first is not the one you asked about: you create a temporary file whose name is stored in $temp, but then you write to (and cat and rm) a file with the fixed name temp. You should use the generated one.
The second problem is that you only write results when you see a new problem name, so the last one is never printed.
Fixing just these two problems results in:
results() {
    if (( count == 0 )); then
        return
    fi
    echo $element $count >> "${temp}"
}

temp=$(mktemp)
element=''
count=0
for i in prob*
do
    current=${i%_*}
    if [[ $current == $element ]]
    then
        let "count+=1" # Better is using ((count++))
    else
        results
        element=$current
        count=1
    fi
done
results
echo 'heres the temp:'
cat "${temp}"
rm "${temp}"
You can do it without the script:
ls prob* | cut -d"_" -f1 | sort | uniq -c
When you want the output displayed as given, you need one more step:
ls prob* | cut -d"_" -f1 | sort | uniq -c | awk '{print $2 " " $1}'
You may use a printf + awk solution:
printf '%s\n' *_* | awk -F_ '{a[$1]++} END{for (i in a) print i, a[i]}'
prob01 3
prob02 2
We use printf to print each file name that has at least one _.
We use awk to count each name's first _-delimited field in an associative array, then print each prefix with its count.
I would do it like this:
$ ls | awk -F_ '{print $1}' | sort | uniq -c | awk '{print $2 " " $1}'
prob01 3
prob02 2
I want to print the longest and shortest username found in /etc/passwd. If I run the code below it works fine for the shortest (head -1), but the second pipeline (sort -n | tail -1 | awk '{print $2}') doesn't run. Can anyone help me figure out what's wrong?
#!/bin/bash
grep -Eo '^([^:]+)' /etc/passwd |
while read NAME
do
    echo ${#NAME} ${NAME}
done |
sort -n | head -1 | awk '{print $2}'
sort -n | tail -1 | awk '{print $2}'
Here is the issue:
The pipe feeds only the first sort -n | head -1 | awk '{print $2}' command. So the first command receives its input through the pipe and produces output.
The second command is given no input, so it waits for input from STDIN, i.e. the keyboard; you can type input there and press Ctrl+D to get its output.
Run the code as below to get the desired output:
#!/bin/bash
grep -Eo '^([^:]+)' /etc/passwd |
while read NAME
do
    echo ${#NAME} ${NAME}
done |
sort -n | head -1 | awk '{print $2}'

grep -Eo '^([^:]+)' /etc/passwd |
while read NAME
do
    echo ${#NAME} ${NAME}
done |
sort -n | tail -1 | awk '{print $2}'
All you need is:
$ awk -F: '
NR==1 { min=max=$1 }
length($1) > length(max) { max=$1 }
length($1) < length(min) { min=$1 }
END { print min ORS max }
' /etc/passwd
No explicit loops or pipelines or multiple commands required.
The problem is that you have two pipelines where you really need one. You have grep | while read do ... done | sort | head | awk and then sort | tail | awk: the first sort has an input (i.e., the while loop); the second sort doesn't. So the script hangs because your second sort has no input: or rather it does, but it's STDIN.
There are various ways to resolve this:
save the output of the while loop to a temporary file and use that as the input to both sort commands (sketched below)
repeat your while loop
use awk to do both the head and the tail
The first two involve iterating over the password file twice, which may be okay, depending on what you're ultimately trying to do. But a small awk script can give you both the first and last line by way of the BEGIN and END blocks.
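The first option might look like this, a sketch using a temporary file:
tmp=$(mktemp)
grep -Eo '^([^:]+)' /etc/passwd |
while read NAME
do
    echo ${#NAME} ${NAME}
done |
sort -n > "$tmp"
head -1 "$tmp" | awk '{print $2}'   # shortest
tail -1 "$tmp" | awk '{print $2}'   # longest
rm "$tmp"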
While you already have good answers, you can also use the POSIX shell to accomplish your goal without any pipe at all, using the parameter expansion and string length operators provided by the shell itself (see the POSIX shell specification). For example, you could do the following:
#!/bin/sh
sl=32;ll=0;sn=;ln=;     ## short len, long len, short name, long name
while read -r line; do  ## read each line
    u=${line%%:*}       ## get user
    len=${#u}           ## get length
    [ "$len" -lt "$sl" ] && { sl="$len"; sn="$u"; }  ## if shorter, save len, name
    [ "$len" -gt "$ll" ] && { ll="$len"; ln="$u"; }  ## if longer, save len, name
done </etc/passwd
printf "shortest (%2d): %s\nlongest (%2d): %s\n" $sl "$sn" $ll "$ln"
Example Use/Output
$ sh cketcpw.sh
shortest ( 2): at
longest (17): systemd-bus-proxy
Using either pipe/head/tail/awk or the shell itself is fine. It's good to have alternatives.
(Note: if you have multiple users of the same length, this just picks the first; if you want to save all names, use a temp file and -le / -ge for the comparisons.)
If you want both the head and the tail from the same input, you may want something like sed -e 1b -e '$!d' after you sort the data: it prints the first line and deletes everything but the last.
So your script would be:
#!/bin/bash
grep -Eo '^([^:]+)' /etc/passwd |
while read NAME
do
    echo ${#NAME} ${NAME}
done |
sort -n | sed -e 1b -e '$!d'
Alternatively, a shorter way:
cut -d":" -f1 /etc/passwd | awk '{ print length, $0 }' | sort -n | cut -d" " -f2- | sed -e 1b -e '$!d'
Assignment: I have to create a shell script using diff and sort, and a pipeline using ls -l, grep '^d', and awk '{print $9}' to print a full directory tree.
I wrote a C program to display what I am looking for. Here is the output:
ryan@chrx:~/Documents/OS-Projects/Project5_DirectoryTree$ ./a.out
TestRoot/
[Folder1]
[FolderC]
[FolderB]
[FolderA]
[Folder2]
[FolderD]
[FolderF]
[FolderE]
[Folder3]
[FolderI]
[FolderG]
[FolderH]
I wrote this so far:
ls -R -l $1 | grep '^d' | awk '{print $9}'
to print the directory tree, but now I need a way to sort it by folder depth and possibly indent it (not required). Any suggestions? I can't use the find or tree commands.
EDIT: The original assignment and restrictions were mistaken and were changed at a later date. The current answers are good solutions if you disregard the restrictions, so please leave them for anyone with similar issues. As for the new assignment, in case anybody was wondering: I was to recursively print all subdirectories, sort them, then compare them with my program's output to make sure they give similar results. Here was my solution:
#!/bin/bash
echo Program:
./a.out $1 | sort
echo Shell Script:
ls -R -l $1 | grep '^d' | awk '{print $9}' | sort
diff <(./a.out $1 | sort) <(ls -R -l $1 | grep '^d' | awk '{print $9}' | sort)
DIFF=$?
if [[ $DIFF -eq 0 ]]
then
    echo "The outputs are similar!"
fi
You don't need ls, grep, or awk to get the tree. A simple recursive bash function will be enough, like:
#!/bin/bash
walk() {
    local indent="${2:-0}"
    printf "%*s%s\n" $indent '' "$1"
    for entry in "$1"/*; do
        [[ -d "$entry" ]] && walk "$entry" $((indent+4))
    done
}

walk "$1"
If you run it as bash script.sh /etc it will print the dir-tree like:
/etc
    /etc/apache2
        /etc/apache2/extra
        /etc/apache2/original
            /etc/apache2/original/extra
        /etc/apache2/other
        /etc/apache2/users
    /etc/asl
    /etc/cups
        /etc/cups/certs
        /etc/cups/interfaces
        /etc/cups/ppd
    /etc/defaults
    /etc/emond.d
        /etc/emond.d/rules
    /etc/mach_init.d
    /etc/mach_init_per_login_session.d
    /etc/mach_init_per_user.d
    /etc/manpaths.d
    /etc/newsyslog.d
    /etc/openldap
        /etc/openldap/schema
    /etc/pam.d
    /etc/paths.d
    /etc/periodic
        /etc/periodic/daily
        /etc/periodic/monthly
        /etc/periodic/weekly
    /etc/pf.anchors
    /etc/postfix
        /etc/postfix/postfix-files.d
    /etc/ppp
    /etc/racoon
    /etc/security
    /etc/snmp
    /etc/ssh
    /etc/ssl
        /etc/ssl/certs
    /etc/sudoers.d
Borrowing from @jm666's idea of running it on /etc:
$ find /etc -type d -print | awk -F'/' '{printf "%*s[%s]\n", 4*(NF-2), "", $0}'
[/etc]
    [/etc/alternatives]
    [/etc/bash_completion.d]
    [/etc/defaults]
        [/etc/defaults/etc]
            [/etc/defaults/etc/pki]
                [/etc/defaults/etc/pki/ca-trust]
                [/etc/defaults/etc/pki/nssdb]
            [/etc/defaults/etc/profile.d]
            [/etc/defaults/etc/skel]
    [/etc/fonts]
        [/etc/fonts/conf.d]
    [/etc/fstab.d]
    [/etc/ImageMagick]
    [/etc/ImageMagick-6]
    [/etc/pango]
    [/etc/pkcs11]
    [/etc/pki]
        [/etc/pki/ca-trust]
            [/etc/pki/ca-trust/extracted]
                [/etc/pki/ca-trust/extracted/java]
                [/etc/pki/ca-trust/extracted/openssl]
                [/etc/pki/ca-trust/extracted/pem]
            [/etc/pki/ca-trust/source]
                [/etc/pki/ca-trust/source/anchors]
                [/etc/pki/ca-trust/source/blacklist]
        [/etc/pki/nssdb]
        [/etc/pki/tls]
    [/etc/postinstall]
    [/etc/preremove]
    [/etc/profile.d]
    [/etc/sasl2]
    [/etc/setup]
    [/etc/skel]
    [/etc/ssl]
    [/etc/texmf]
        [/etc/texmf/tlmgr]
        [/etc/texmf/web2c]
    [/etc/xml]
Sorry, I couldn't find a sensible way to use the other tools you mentioned, so this may not help you, but maybe it'll help others who have the same question without the requirement to use specific tools.
I have a directory containing files:
$> ls blender/output/celebAnim/
0100.png 0107.png 0114.png 0121.png 0128.png 0135.png 0142.png 0149.png 0156.png 0163.png 0170.png 0177.png 0184.png 0191.png 0198.png 0205.png 0212.png 0219.png 0226.png 0233.png 0240.png 0247.png 0254.png 0261.png 0268.png 0275.png 0282.png
0101.png 0108.png 0115.png 0122.png 0129.png 0136.png 0143.png 0150.png 0157.png 0164.png 0171.png 0178.png 0185.png 0192.png 0199.png 0206.png 0213.png 0220.png 0227.png 0234.png 0241.png 0248.png 0255.png 0262.png 0269.png 0276.png 0283.png
0102.png 0109.png 0116.png 0123.png 0130.png 0137.png 0144.png 0151.png 0158.png 0165.png 0172.png 0179.png 0186.png 0193.png 0200.png 0207.png 0214.png 0221.png 0228.png 0235.png 0242.png 0249.png 0256.png 0263.png 0270.png 0277.png 0284.png
0103.png 0110.png 0117.png 0124.png 0131.png 0138.png 0145.png 0152.png 0159.png 0166.png 0173.png 0180.png 0187.png 0194.png 0201.png 0208.png 0215.png 0222.png 0229.png 0236.png 0243.png 0250.png 0257.png 0264.png 0271.png 0278.png
0104.png 0111.png 0118.png 0125.png 0132.png 0139.png 0146.png 0153.png 0160.png 0167.png 0174.png 0181.png 0188.png 0195.png 0202.png 0209.png 0216.png 0223.png 0230.png 0237.png 0244.png 0251.png 0258.png 0265.png 0272.png 0279.png
0105.png 0112.png 0119.png 0126.png 0133.png 0140.png 0147.png 0154.png 0161.png 0168.png 0175.png 0182.png 0189.png 0196.png 0203.png 0210.png 0217.png 0224.png 0231.png 0238.png 0245.png 0252.png 0259.png 0266.png 0273.png 0280.png
0106.png 0113.png 0120.png 0127.png 0134.png 0141.png 0148.png 0155.png 0162.png 0169.png 0176.png 0183.png 0190.png 0197.png 0204.png 0211.png 0218.png 0225.png 0232.png 0239.png 0246.png 0253.png 0260.png 0267.png 0274.png 0281.png
For some script, I will need to find out what the number of the first missing file is. In the above output, it would be 0285.png. However, it is also possible that files in between are missing. In the end, I am only interested in the number 285, which is part of the file name.
This is part of recovery logic: The files should be created by the script, but this step can fail. Therefore I want to have a means to check which files are missing and try to create them in a second step.
This is what I got so far (from how to extract part of a filename before '.' or before extension):
ls blender/output/celebAnim/ | awk -F'[.]' '{print $1}'
What I cannot figure out is how to find the smallest number missing from that result above a certain offset. The offset in this case is 100.
You could loop over all numbers from 100 to 500 and check if the corresponding file exists; if it doesn't, print the number you're looking at:
for i in {100..500}; do
    [[ ! -f 0$i.png ]] && { echo "$i missing!"; break; }
done
This prints, for your example, 285 missing!.
This solution could be made a bit more flexible by, for example, looping over zero padded numbers and then extracting the unpadded number:
for i in {0100..0500}; do
    [[ ! -f $i.png ]] && { echo "${i##*(0)} missing!"; break; }
done
This requires extended globs (shopt -s extglob) for the *(0) pattern ("zero or more repetitions of 0").
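If you'd rather not enable extended globs, forcing base-10 arithmetic strips the leading zeros just as well (bash would otherwise treat a leading 0 as octal):
for i in {0100..0500}; do
    [[ ! -f $i.png ]] && { echo "$((10#$i)) missing!"; break; }
done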
begin=100
end=500
for i in `seq $begin 1 $end`; do
    fname="0${i}.png"
    if [ ! -f "$fname" ]; then
        echo "$fname is missing"
    fi
done
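If you only want the first missing number, as in the question, you can break out of the loop at the first gap and print just the number:
begin=100
end=500
for i in `seq $begin 1 $end`; do
    fname="0${i}.png"
    if [ ! -f "$fname" ]; then
        echo "$i"   # first missing number
        break
    fi
done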
#!/bin/bash
# Uses [[ ]] and $(( )), so run it with bash rather than plain sh.
search_dir=blender/output/celebAnim/
ls $search_dir > file_list
count=`wc -l file_list | awk '{ print $1 }'`
if [[ $count -eq 0 ]]
then
    echo "No files in given directory!"
    exit 1  # 'break' is only valid inside a loop
fi
file_extension=`head -1 file_list | awk -F "." '{ print $2 }'`
init_file_value=`head -1 file_list | awk -F "." '{ print $1 }'`
i=2
while [ $i -le $count ]
do
    next_file_value=`head -$i file_list | tail -1 | awk -F "." '{ print $1 }'`
    next_value=$((10#$init_file_value + 1))  # 10# forces base 10; a leading zero would otherwise mean octal
    if [ $((10#$next_file_value)) -ne $next_value ]
    then
        echo $next_value"."$file_extension
        break
    fi
    init_file_value=$next_value
    i=$((i+1))
done
Try this:
ls blender/output/celebAnim/ | sort -r | head -n1 | awk -F'.' '{print $1+1}'
The command returns 285.
If you need it to return 0285 instead, try:
ls blender/output/celebAnim/ | sort -r | head -n1 | awk -F'.' '{print 0($1+1)}'