I have a fairly simple script I'd like to write using just bash.
Given a folder with 0..N *.xml files, I want to sort those by name and remove N-10 files (leaving the last 10 in place).
I've been tinkering with find and tail/head but couldn't figure out a way:
find /mnt/user/Temporary/1 -name '*.xml' | tail -n +10 | rm
Please read carefully: it is about keeping the last 10. If there are 10 or fewer files, none should be deleted!
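For the record, the pipeline above fails because rm does not read file names from standard input, and tail -n +10 drops the first 9 entries rather than keeping the last 10. A minimal sketch of a working pipeline, assuming GNU head and xargs and file names without newlines, could look like this:
find /mnt/user/Temporary/1 -maxdepth 1 -name '*.xml' | sort | head -n -10 | xargs -r rm --
Here head -n -10 prints all but the last 10 lines, so nothing is deleted when there are 10 or fewer files, and xargs -r skips running rm entirely on empty input.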
EDIT:
As someone closed the question but did not reopen it, here is the solution for those getting here with the same question.
#!/bin/bash
files=()
while IFS= read -r -d $'\0'; do
    files+=("$REPLY")
done < <(find . -name '*.xml' -print0 | sort -z)
limit=$(( ${#files[@]} - 10 ))
count=0
while [ "$limit" -gt "$count" ]; do
    rm "${files[count]}"
    (( count++ ))
done
Maybe some Linux pro can optimize it, or give it some parameters (like limit, path, and file pattern) to make it callable anywhere.
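As a starting point, here is a hedged sketch of such a parameterized variant; the script name, argument order, and defaults are my own assumptions, not part of the original:
#!/bin/bash
# keep_last.sh <dir> <pattern> <keep> -- hypothetical parameterized variant
dir=${1:-.}            # path to clean up
pattern=${2:-*.xml}    # file name pattern to match
keep=${3:-10}          # how many of the last files (by name) to keep
files=()
while IFS= read -r -d '' f; do
    files+=("$f")
done < <(find "$dir" -maxdepth 1 -name "$pattern" -print0 | sort -z)
# delete everything except the last $keep entries; does nothing if there are $keep or fewer
for (( i = 0; i < ${#files[@]} - keep; i++ )); do
    rm -- "${files[i]}"
done
It could then be called as, e.g., keep_last.sh /mnt/user/Temporary/1 '*.xml' 10.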
EDIT: New answer
#!/usr/bin/env bash
files=$(find . -maxdepth 1 -name '*.xml' | wc -l)
[ "$files" -le 10 ] && echo "10 or fewer files, nothing to delete..." && exit 1
count=$(( files - 10 ))
# note: the word splitting here assumes file names without whitespace
for i in $(find . -maxdepth 1 -name '*.xml' | sort -V); do
    [ "$count" -eq 0 ] && echo "Done" && exit 0
    rm -- "$i"
    (( count-- ))
done
$files stores the number of *.xml files in the folder
if the number is 10 or fewer, exit
set a counter to the number of files to delete
loop through each file in order
if the counter is equal to 0, exit
if not, remove the file and decrement the counter
Related
I need to write a script that counts the number of files and directories and indicates which is greater and by how much, e.g.: there are 10 more files than directories.
I was trying something like the code below, but it just shows the file and directory counts, and I have no idea how to indicate which is greater. Thanks for any help.
shopt -s dotglob
count=0
for dir in *; do
    test -d "$dir" || continue
    (( count++ ))
done
echo $count
fcount=0
for fname in *; do
    test -f "$fname" || continue
    (( fcount++ ))
done
echo $fcount
Here is a recursive directory walk I used for something a while back, with counting of dirs and files added:
#!/bin/sh
# recursive directory walk
loop() {
    for i in *
    do
        if [ -d "$i" ]
        then
            dir=$((dir+1))
            cd "$i"
            loop
        else
            file=$((file+1))
        fi
    done
    cd ..
}
loop
echo dirs: $dir, files: $file
Paste it into script.sh and run it with:
$ sh script.sh
dirs: 1, files: 11
You can use the find command to make things simpler.
The following command will list all the files in the given path:
find "path" -mindepth 1 -maxdepth 1 -type f
Using -type d instead, you will get the directories.
Piping find into wc -l will give you the number instead of the actual file and directory names, so:
root="${1:-.}"
files=$(find "$root" -mindepth 1 -maxdepth 1 -type f | wc -l)
dirs=$(find "$root" -mindepth 1 -maxdepth 1 -type d | wc -l)
if [ "$files" -gt "$dirs" ]; then
    echo "there are $((files - dirs)) more files"
elif [ "$files" -lt "$dirs" ]; then
    echo "there are $((dirs - files)) more dirs"
else
    echo "there are the same number of each"
fi
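Since $root defaults to the current directory, this can be saved (the script name here is assumed) and called with an optional path argument:
$ sh countdiff.sh /var/log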
You could use find to get the number of files/folders in a directory, and wc -l to count the number of found paths, which you can then use to calculate and show the result:
#!/bin/bash
# Path to search
search="/Users/me/Desktop"
# Get number of files
no_files=$(find "$search" -type f | wc -l)
# Number of folders
no_folders=$(find "$search" -type d | wc -l)
echo "Files: ${no_files}"
echo "Folders: ${no_folders}"
# Calculate diff
diff=$((no_files - no_folders))
# Check if there are more folders or files
if [ "$diff" -gt 0 ]; then
    echo "There are $diff more files than folders!"
else
    diff=$((diff * -1))  # Invert negative number to positive (-10 -> 10)
    echo "There are $diff more folders than files!"
fi
Files: 13
Folders: 2
There are 11 more files than folders!
How do I split a parent folder into 2 or more folders without creating subfolders?
For example, folder A into folderA1, folderA2, but with both in the same directory rather than being subfolders of folder A.
This is the script I currently use, but it only ends up creating subfolders:
let fileCount=3000
let dirNum=1
for f in *
do
    [ -d "$f" ] && continue
    [ $fileCount -eq 3000 ] && {
        dir=$(printf "%03d" $dirNum)
        mkdir "$dir"
        let dirNum=$dirNum+1
        let fileCount=0
    }
    mv "$f" "$dir"
    let fileCount=$fileCount+1
done
In the parent directory of folderA, run the following script:
#!/bin/bash
i=0             # counter for current file
j=0             # counter for current directory
batchsize=1000  # size of each batch
find folderA -type f -print0 | while IFS= read -r -d $'\0' file
do
    if (( i % batchsize == 0 ))
    then
        (( j++ ))
        mkdir "dir_$j"
    fi
    mv -- "$file" "dir_$j"
    (( i++ ))
done
If all files in folderA have "normal" names, i.e. no whitespace, no glob characters, etc., the script can be written as
#!/bin/bash
find folderA -maxdepth 2 -type f | xargs -n 1000 | while read files
do
    mkdir dir_$((++i))
    mv $files dir_$i/
done
This is briefer, and also much more performant.
I'm currently monitoring a number of dirs for log files, specifically those just created. It's been a long time since my Linux days, and after some trial and error I've hacked together what I need, but it takes a full 20 seconds or more to return. I'm hoping I can have an expert look at it and advise me on something a little more streamlined.
find . -type f -follow -print | xargs ls -ltr 2>/dev/null | grep '2\?10' | tail
So, for example: find the last 10 files matching the name. Optimally I'd like to turn this into a bash script that accepts one argument and replaces the grep expression, but I figure one thing at a time.
Thanks for your help in advance!
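For reference, a minimal sketch of the one-argument wrapper described above might look like this (the script name and usage message are assumptions; the grep pattern is passed as the sole argument):
#!/bin/bash
# lastlogs.sh <grep-pattern> -- hypothetical wrapper around the pipeline above
pattern=${1:?usage: lastlogs.sh <grep-pattern>}
find . -type f -follow -print | xargs ls -ltr 2>/dev/null | grep "$pattern" | tail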
I bit the bullet and wrote the script; I'll tinker with it more later.
#!/bin/bash
if [ $# != 2 ]; then
    echo "findLog usage: findLog [3 digit cluster] [pick 1: main message service detail soap]"
    exit 0
fi
if [ "$2" == "service" ]; then
    file="$2-time-"
elif [ "$2" == "detail" ]; then
    file="$2-time-"
else
    file="$2-"
fi
cluster="$1"
# store log paths for readability
a="/pathto/A"
b="/pathto/B"
c="/pathto/C"
d="/pathto/D"
e="/pathto/E"
f="/pathto/F"
g="/pathto/G"
h="/pathto/H"
logpaths=( "$a" "$b" "$c" "$d" "$e" "$f" "$g" "$h" )
for i in "${logpaths[@]}"
do
    ls -ltr "$i"/*.log | grep "$file${cluster:0:1}${i: -1}${cluster: -2}"
done
I have written a script to zip a set of files into one zip file if the number of files goes above a limit.
limit=1000  # limit on the number of files
files=( /mnt/md0/capture/dcn/*.pcap )  # files to be zipped
if (( ${#files[@]} > limit )); then  # if the number of files is above the limit
    zip -j /mnt/md0/capture/dcn/capture_zip-$(date "+%b_%d_%Y_%H_%M_%S").zip /mnt/md0/capture/dcn/*.pcap
fi
I need to modify this so that the script checks the number of files from the previous month rather than the whole set of files. How do I implement that?
This script, perhaps:
#!/bin/bash
[ -n "$BASH_VERSION" ] || {
    echo "You need Bash to run this script."
    exit 1
}
shopt -s extglob || {
    echo "Unable to enable extglob option."
    exit 1
}
LIMIT=1000
FILES=(/mnt/md0/capture/dcn/*.pcap)
ONE_MONTH_BEFORE=0
ONE_MONTH_OLD_FILES=()
read ONE_MONTH_BEFORE < <(date -d 'TODAY - 1 month' '+%s') && [[ $ONE_MONTH_BEFORE == +([[:digit:]]) && ONE_MONTH_BEFORE -gt 0 ]] || {
    echo "Unable to get timestamp one month before current day."
    exit 1
}
for F in "${FILES[@]}"; do
    read TIMESTAMP < <(date -r "$F" '+%s') && [[ $TIMESTAMP == +([[:digit:]]) && TIMESTAMP -le ONE_MONTH_BEFORE ]] && ONE_MONTH_OLD_FILES+=("$F")
done
if [[ ${#ONE_MONTH_OLD_FILES[@]} -gt LIMIT ]]; then
    # echo "Zipping ${ONE_MONTH_OLD_FILES[*]}."  ## Just an example message you can create.
    zip -j "/mnt/md0/capture/dcn/capture_zip-$(date '+%b_%d_%Y_%H_%M_%S').zip" "${ONE_MONTH_OLD_FILES[@]}"
fi
Make sure you save it in Unix file format and run it with bash script.sh.
You could also modify the script to take the files as arguments instead, by using:
FILES=("$@")
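It could then be invoked with the files as arguments, letting the shell expand the glob, e.g.:
bash script.sh /mnt/md0/capture/dcn/*.pcap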
Complete update:
#!/bin/bash
# Limit of your choice
LIMIT=1000
# Get the number of files matching *.pcap with a last modified time more than 30 days ago
NUMBER=$(find /yourdirectory -maxdepth 1 -name "*.pcap" -mtime +30 | wc -l)
if [[ $NUMBER -gt $LIMIT ]]
then
    FILES=$(find /yourdirectory -maxdepth 1 -name "*.pcap" -mtime +30)
    zip archive.zip $FILES
fi
The reason I am getting the files twice is that bash word splitting is done on spaces rather than \n, and I couldn't find a clean way to count the number of files; you might want to do some research on that so find only has to run once.
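For what it's worth, one possible way to run find only once is to read its output into an array with mapfile (a bash 4+ builtin; this sketch assumes file names without embedded newlines):
#!/bin/bash
LIMIT=1000
# read the matching paths into an array in a single pass
mapfile -t FILES < <(find /yourdirectory -maxdepth 1 -name "*.pcap" -mtime +30)
if [[ ${#FILES[@]} -gt $LIMIT ]]
then
    zip archive.zip "${FILES[@]}"
fi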
Just replace your if line with
if [[ "$(find $(dirname "$files") -maxdepth 1 -wholename "$files" -mtime -30 | wc -l)" -gt "$limit" ]]; then
From left to right this expression
searches (find)
in the path of your pattern ($(dirname "$files") strips away everything from the last "/")
but not in its subdirectories (-maxdepth 1)
for files matching your pattern (-wholename "$files")
that are newer than 30 days (-mtime -30)
and counts the number of those files (wc -l)
I prefer -gt for comparisons, but otherwise it is the same as in your example.
Note that this will only work when all your files are in the same directory!
I'm trying to write a function that will traverse the file directory and give me the value of the deepest directory. I've written the function and it seems to visit each directory, but my counter doesn't seem to work at all.
dir_depth(){
    local olddir=$PWD
    local dir
    local counter=0
    cd "$1"
    for dir in *
    do
        if [ -d "$dir" ]
        then
            dir_depth "$1/$dir"
            echo "$dir"
            counter=$(( $counter + 1 ))
        fi
    done
    cd "$olddir"
}
What I want it to do is feed the function a directory, say /home, and have it go down each subdirectory within and find the deepest value. I'm trying to learn recursion better, but I'm not sure what I'm doing wrong.
Obviously, find should be used for this:
find . -type d -exec bash -c 'echo $(tr -cd / <<< "$1"|wc -c):$1' -- {} \; | sort -n | tail -n 1 | awk -F: '{print $1, $2}'
At the end I use awk to just print the output, but if that were the output you wanted it would be better just to echo it that way to begin with.
Not that it helps learn about recursion, of course.
Here's a one-liner that's pretty fast:
find . -type d -printf '%d:%p\n' | sort -n | tail -1
Or as a function:
depth()
{
    find "$1" -type d -printf '%d:%p\n' | sort -n | tail -1
}
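Example call (the output is depth:path of the deepest directory; the path shown is illustrative):
$ depth /home
4:/home/user/projects/demo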
Here is a version that seems to work:
#!/bin/sh
dir_depth() {
    cd "$1"
    maxdepth=0
    for d in */.; do
        [ -d "$d" ] || continue
        depth=`dir_depth "$d"`
        maxdepth=$(($depth > $maxdepth ? $depth : $maxdepth))
    done
    echo $((1 + $maxdepth))
}
dir_depth "$@"
Just a few small changes to your script. I've added several explanatory comments:
dir_depth(){
    # don't need olddir, and counter needs to be "global"
    local dir
    cd -- "$1"   # the -- protects against dirnames that start with -
    # do this out here because we're counting depth, not visits
    ((counter++))
    for dir in *
    do
        if [ -d "$dir" ]
        then
            # we want to descend from where we are rather than where we started from
            dir_depth "$dir"
        fi
    done
    if ((counter > max))
    then
        max=$counter   # these are what we're after
        maxdir=$PWD
    fi
    ((counter--))   # decrement and test to see if we're back where we started
    if (( counter == 0 ))
    then
        echo $max $maxdir   # ta da!
        unset counter       # ready for the next run
    else
        cd ..   # go up one level instead of "olddir"
    fi
}
It prints the max depth (including the starting directory as 1) and the first directory name that it finds at that depth. You can change the test if ((counter > max)) to >= and it will print the last directory name it finds at that depth.
The AIX (6.1) find command seems to be quite limited (e.g. no -printf option). If you'd like to list all directories up to a given depth, try this combination of find and dirname. Save the script code as maxdepth.ksh. In contrast to the Linux find -maxdepth option, AIX find will not stop at the given maximum level, which results in a longer runtime depending on the size/depth of the scanned directory:
#!/usr/bin/ksh
# Param 1: maxdepth
# Param 2: directory name
max_depth=0
next_dir=$2
while [[ "$next_dir" != "/" ]] && [[ "$next_dir" != "." ]]; do
    max_depth=$(($max_depth + 1))
    next_dir=$(dirname "$next_dir")
done
if [ $1 -lt $max_depth ]; then
    ret=1
else
    ret=0
    ls -d "$2"
fi
exit $ret
Sample call:
find /usr -type d -exec maxdepth.ksh 2 {} \;
The traditional way to do this is to have dir_depth return the maximum depth too, so you return both the name and the depth.
You can't return an array, struct, or object in bash, but you can return, e.g., a comma-separated string instead:
dir_depth(){
    local dir
    local max_dir="$1"
    local max_depth=0
    for dir in "$1"/*
    do
        if [ -d "$dir" ]
        then
            cur_ret=$(dir_depth "$dir")
            cur_depth=$(expr "$cur_ret" : '\([^,]*\)')
            cur_dir=$(expr "$cur_ret" : '.*,\(.*\)')
            if [[ "$cur_depth" -gt "$max_depth" ]]; then
                max_depth="$cur_depth"
                max_dir="$cur_dir"
            fi
        fi
    done
    max_depth=$(($max_depth + 1))
    echo "$max_depth,$max_dir"
}
EDIT: Fixed now. It starts with the directory you passed in as level 1, then counts upwards. I removed the cd, as it isn't necessary. Note that this will fail if directory names contain commas.
You might want to consider using a programming language with more built-in data structures, like Python.