I need a file size alert script for Solaris/Linux

file=/root/stacktrace.log
minsize=100
filesize=$(wc -c <"$file")
echo $filesize
if [ $filesize -ge $minsize ]; then
    mailx -s 'File size is more than 10MB' example@gmail.com < /dev/null
fi
The above script works fine on CentOS, but it doesn't work on Solaris. Please help me with this.

Try this:
file=/root/stacktrace.log
maxsize=100
if [ -f "$file" ] && [ -n "$(find "$file" -size +"$maxsize"c)" ]; then
    mailx -s "File size is more than $maxsize" example@gmail.com < /dev/null
fi
This uses find to determine whether the file is larger than $maxsize bytes, in this case 100 bytes. I also required [ -f "$file" ] to ensure we're looking at a file rather than a directory, so that find's recursive search won't match some sufficiently large file inside that directory's structure.
BSD find and GNU find (but not Solaris find) support better units than just c for "characters" (bytes). Try k, M, or G, as in -size +"$maxsize"M, or else compute the bytes yourself with -size +$((maxsize*1048576))c.
(Never mind if the syntax highlighting looks odd: you are allowed to nest one level of double quotes inside a "$(…)" command substitution.)
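For Solaris specifically, here is a minimal sketch of the full alert, assuming the same hypothetical log path and address as the question and computing the 10 MB threshold in bytes, since Solaris find only understands the c suffix:

#!/bin/sh
file=/root/stacktrace.log
maxsize=$((10 * 1048576))   # 10 MB expressed in bytes
# -size +Nc matches files strictly larger than N bytes
if [ -f "$file" ] && [ -n "$(find "$file" -size +"$maxsize"c)" ]; then
    mailx -s "File size is more than 10MB" example@gmail.com < /dev/null
fi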

Related

extracting files that don't have a dir with the same name

Sorry for the odd title; I didn't know how to word it the right way.
I'm trying to write a script to split my wiki files into those that have directories with the same name and those that don't. I'll elaborate further.
Here is my file system:
What I need to do is print a list of the files which have directories with the same name, and another list of those without.
So my ultimate goal is getting:
with dirs:
Docs
Eng
Python
RHEL
To_do_list
articals
without dirs:
orphan.txt
orphan2.txt
orphan3.txt
I managed to get the files with dirs. Here is my code:
getname () {
    file=$( basename "$1" )
    file2=${file%%.*}
    echo "$file2"
}
for d in Mywiki/* ; do
    if [[ -f $d ]]; then
        file=$(getname "$d")
        for x in Mywiki/* ; do
            dir=$(getname "$x")
            if [[ -d $x ]] && [ "$dir" == "$file" ]; then
                echo "$dir"
            fi
        done
    fi
done
But I'm stuck on getting the ones without. If this is the wrong way of doing this, please show me the right one.
Any help appreciated. Thanks.
Here's a quick attempt.
for file in Mywiki/*.txt; do
    nodir=${file##*/}
    test -d "${file%.txt}" && printf "%s\n" "$nodir" >&3 || printf "%s\n" "$nodir"
done >without 3>with
This shamelessly uses standard output for the orphans. Maybe more robustly, open another separate file descriptor for those, too.
Also notice how everything needs to be quoted, unless you specifically require the shell to perform whitespace tokenization and wildcard expansion on the value of a token.
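As a quick illustration of why the quoting matters (a made-up variable, nothing from the question):

f="two words"
printf "%s\n" $f    # unquoted: split into two arguments, prints two lines
printf "%s\n" "$f"  # quoted: stays one argument, prints one line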
That may not be the most efficient way of doing it, but you could take all the files, remove the extension, and then check whether there is a directory with that name.
Like this (untested code, reusing getname from the question):
for file in Mywiki/* ; do
    if [ -f "$file" ]; then
        dirname=$(getname "$file")
        if [ ! -d "Mywiki/$dirname" ]; then
            echo "$file"
        fi
    fi
done
To list all the files in the current dir:
list1=`ls -p | grep -v /`
To list all the files in the current dir without extensions:
list2=`ls -p | grep -v / | sed 's/\.[a-z]*//g'`
To list all the directories in the current dir:
list3=`ls -d */ | sed -e "s/\///g"`
Now you can get the desired listing by taking the intersection of list2 and list3 (see "Intersection of two lists in Bash").
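A sketch of that intersection step using comm (assumes the list variables from above; the unquoted expansions are deliberate so each name becomes its own line, and comm -12 keeps only lines common to both sorted inputs):

comm -12 <(printf "%s\n" $list2 | sort -u) <(printf "%s\n" $list3 | sort -u)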

Bash scripting: wanting to find the size of a directory, and if the size is greater than x, then do a task

I have put the following together from a couple of other articles, but it does not seem to be working. What I eventually want it to do is check the directory size and, if the directory has new content above a certain total size, let me know.
#!/bin/bash
file=private/videos/tv
minimumsize=2
actualsize=$(du -m "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
    echo "nothing here to see"
else
    echo "time to sync"
fi
This is the output:
./sync.sh: line 5: [: too many arguments
time to sync
I am new to bash scripting, so thank you in advance.
The error:
[: too many arguments
seems to indicate that either $actualsize or $minimumsize is expanding to more than one argument.
Change your script as follows:
#!/bin/bash
set -x # Add this line.
file=private/videos/tv
minimumsize=2
actualsize=$(du -m "$file" | cut -f 1)
echo "[$actualsize] [$minimumsize]" # Add this line.
if [ $actualsize -ge $minimumsize ]; then
    echo "nothing here to see"
else
    echo "time to sync"
fi
The set -x will echo commands before attempting to execute them, something which assists greatly with debugging.
The echo "[$actualsize] [$minimumsize]" will assist in trying to establish whether these variables are badly formatted or not, before the attempted comparison.
If you do that, you'll no doubt find that some arguments will result in a lot of output from the du -m command since it descends into subdirectories and gives you multiple lines of output.
If you want a single line of output for all the subdirectories aggregated, you have to use the -s flag as well:
actualsize=$(du -ms "$file" | cut -f 1)
If instead you don't want any of the subdirectories taken into account, you can take a slightly different approach, limiting the depth to one and tallying up all the sizes:
actualsize=$(find . -maxdepth 1 -type f -print0 | xargs -0 ls -al | awk '{s += $5} END {print int(s/1024/1024)}')
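Putting it together, a minimal corrected sketch of the script above (same path and threshold as the question; the -s flag makes du emit a single line, and the variables are quoted):

#!/bin/bash
file=private/videos/tv
minimumsize=2
# -m reports megabytes, -s aggregates the whole tree into one line
actualsize=$(du -ms "$file" | cut -f 1)
if [ "$actualsize" -ge "$minimumsize" ]; then
    echo "nothing here to see"
else
    echo "time to sync"
fi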

bash script - I want to check if an XLS is empty. If it is, I don't want to do anything. If it is not, I want to do something

I have a bash script that includes an if-then-fi statement. The code block should execute only when the XLS is not empty. Currently I'm evaluating this using the following:
FILESIZE=`wc -c < $FILENAME`
It seems that the default file size generated is 4096 bytes if the file is empty. So...
if [ $FILESIZE -gt "4096" ]; then
    do something
fi
However, my boss isn't a huge fan of hard-coded numbers. Is there an alternative way to see whether an XLS has data?
Thanks!
if [ -r "$FILENAME" ] # If there is a readable file "$FILENAME"
then
    if [ -s "$FILENAME" ] # If file "$FILENAME" has a size greater than zero bytes
    then
        do something
    fi
fi
You could use the xls2csv command; if the result is 0, the file is empty.
xls2csv file.xls | wc -l
This command is usually in the "catdoc" package.
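For instance, a sketch of that check (assumes xls2csv from catdoc is installed and $FILENAME is set as in the question):

rows=$(xls2csv "$FILENAME" | wc -l)   # count the CSV rows xls2csv extracts
if [ "$rows" -gt 0 ]; then
    echo "XLS has data"               # replace with your real action
fi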

Linux Bash: reading lines and words in files

I apologize if this is a trivial question. I am learning how to use Linux bash, and this little task is giving me a headache...
So I need to write a script; let's call it count.sh. For each file in the working directory, it should print the filename, the number of lines, and the number of words to the console:
test.txt 100 1023
someOtherfiles 10 233
So far, I know that the following gives me all the file names in the directory. And thanks to all who helped me, I now have this working version:
for f in *; do
    echo -n "$f"
    cat "$f" | wc -wl
done
I would really appreciate your help! Thanks in advance!
P.S. If you know great resources (links to tutorials) for learning about scripting and are willing to share them with me: I think I really need to know these basics. Thanks again!
If you must have the file name as the first field in your output, try this:
for f in *; do
    if [ -f "$f" ]; then
        echo -n "$f"
        cat "$f" | wc -wl
    fi
done
for f in *; do
    if [[ -f $f ]]; then
        echo "$f $(wc -wl < "$f")"
    fi
done
[[ -f $f ]] processes only files (excludes subdirectories) and also handles the case where the directory is empty (in which case * is (by default) left unexpanded, i.e. assigned to $f as is).
echo "$f $(wc -wl < "$f")" uses command substitution ($( ... )) to directly include the output from the enclosed command in the output string passed to echo.
Note that the reason < is used to direct the content of file $f to wc via stdin is that wc would otherwise append the name of the input file to its output (thanks, @R Sahu).
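To see the difference (hypothetical file name and counts):

wc -l notes.txt     # name passed as an argument: prints something like "42 notes.txt"
wc -l < notes.txt   # content on stdin: prints just "42"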

How to move/copy a lot of files (not all files) in a directory?

I've got a directory which contains approx. 9000 files; the file names are ascending numbers (however, not necessarily consecutive).
Now I need to copy/move ~3000 files, from number xxxx to number yyyy, to another directory. How can I use the cp or mv command for that purpose?
find . -type f | while read file; do file=${file#./}; if [ "$file" -ge xxxx ] && [ "$file" -le yyyy ]; then echo "$file"; fi; done | xargs cp -t /destination/
If you want to limit to 3000 files, do:
i=0; find . -type f | while read file; do file=${file#./}; if [ "$file" -ge xxxx ] && [ "$file" -le yyyy ]; then echo "$file"; let i+=1; fi; if [ "$i" -ge 3000 ]; then break; fi; done | xargs cp -t /destination/
If the files have a common suffix after the number, use ${file%%suffix} inside the if (you can use globs in the suffix).
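For instance, with a hypothetical .png suffix (untested sketch; xxxx and yyyy are still your range bounds):

find . -type f | while read file; do
    n=${file##*/}    # drop the leading path
    n=${n%%.png}     # drop the suffix, leaving the bare number
    if [ "$n" -ge xxxx ] && [ "$n" -le yyyy ]; then echo "$file"; fi
done | xargs cp -t /destination/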
You can use the seq utility to generate numbers for this kind of operation:
for i in `seq 4073 7843` ; do cp file_${i}_name.png /destination/folder ; done
On the downside, this will execute cp far more often than QuantumMechanic's solution; but QuantumMechanic's solution may not run at all if the total length of all the file names exceeds the kernel's argv size limit (which could be between 128K and 2048K, depending on your kernel version and stack-size rlimits; see execve(2) for details).
If the range you want spans orders of magnitude (e.g., between 900 and 1010), then the seq -w option may be useful: it zero-pads the output numbers.
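For example, with GNU seq (each output line shown as a comment), -w pads every number to the width of the widest endpoint:

seq -w 998 1002
# 0998
# 0999
# 1000
# 1001
# 1002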
This isn't the most elegant, but how about something like:
cp 462[5-9] 46[3-9]? 4[7-9]?? 5??? 6[0-2]?? 63[0-4]? 635[0-3] otherDirectory
which would copy files named 4625 to 6353 inclusive to otherDirectory. (You wouldn't want to use something like 4*, since that would also copy files named 4, 42, 483, etc.)
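If your shell is bash, brace expansion can express the same inclusive range more readably. Caveats: it expands to every number whether or not a matching file exists (so cp will complain about the gaps), and it is subject to the same argv length limit mentioned above:

cp {4625..6353} otherDirectory   # bash only; expect "No such file" noise for missing numbers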
