Please see my code below. The script first checks the disk space available at a specific path, which must be greater than the allotted threshold; after that check it downloads a file with wget and then extracts all of the tar.gz files. When I execute the script, an error occurs: blah.blah.blah.tar.gz: Scheme missing.
#!/bin/bash
bundle=$(awk -F = '{print $2}' config.txt)
bundlename=$(echo "$bundle" | awk -F / '{print $11}')
diskspace=$(df -h /dev/shm | sed '1d' | awk '{print $5}' | cut -d'%' -f1)
allowed=0
if [ "${diskspace}" -gt "${allowed}" ]; then
    wget -A "$bundle"
    for file in *.tar.gz; do
        gunzip -c "$file" | tar xf -
    done
    rm -vf "$file"
else
    echo "Not enough space to download the bundle"
    exit
fi
My question is: what does this error mean, and can you correct my code if you see anything wrong? Thank you for the help.
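For what it's worth, wget prints "Scheme missing" when the string it tries to treat as a URL does not start with a protocol such as http://, https:// or ftp://, so whatever wget ends up seeing as the URL here (most likely the value pulled out of config.txt) is just the file name rather than a full address. Note also that -A is an accept-list filter, not the way to pass the URL. A minimal sketch of the intended flow, assuming config.txt holds a single line such as bundle=http://example.com/path/blah.blah.blah.tar.gz (a hypothetical URL) and treating the threshold as the maximum used percentage of /dev/shm, could look like this:

#!/bin/bash
# Assumption: config.txt contains one line of the form bundle=http://.../file.tar.gz
bundle=$(awk -F'=' '/^bundle=/ {print $2}' config.txt)

used=$(df /dev/shm | awk 'NR==2 {print $5}' | tr -d '%')   # Use% column of df
limit=90

if [ "$used" -lt "$limit" ]; then
    wget "$bundle"                         # the full URL, including its http:// scheme, as a plain argument
    for file in *.tar.gz; do
        tar xzf "$file" && rm -v "$file"   # remove each archive only after a successful extract
    done
else
    echo "Not enough space to download the bundle" >&2
    exit 1
fi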
I'm searching .docx content with this command:
unzip -p *.docx word/document.xml | sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g' | grep $1
But I need the name of the file that contains the word I searched for. How can I do that?
You can walk through the files with a for loop:
for file in *.docx; do
    unzip -p "$file" word/document.xml | sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g' | grep PATTERN && echo "$file"
done
The && echo "$file" part prints the filename when grep finds the pattern.
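If you only want the file names rather than the matching lines themselves, the same loop can use grep -q, which stays silent and only sets the exit status (PATTERN is still a placeholder for your search word):

for file in *.docx; do
    # grep -q prints nothing; the && only fires when the pattern was found
    unzip -p "$file" word/document.xml \
        | sed -e 's/<[^>]\{1,\}>//g; s/[^[:print:]]\{1,\}//g' \
        | grep -q PATTERN && echo "$file"
done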
Try with:
find . -name "*your_file_name*" | xargs grep your_word | cut -d':' -f1
If you're using GNU grep (likely, as you're on Linux), you might want to use this option:
--label=LABEL
Display input actually coming from standard input as input coming from file LABEL. This is especially useful when implementing tools like zgrep, e.g., gzip -cd foo.gz | grep --label=foo -H something. See also the -H option.
So you'd have something like
for f in *.docx
do unzip -p "$f" word/document.xml \
       | sed -e "$sed_command" \
       | grep -H --label="$f" "$1"
done
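With -H, grep prefixes every matching line with a file name, and --label substitutes the given string for the usual (standard input) placeholder, so each hit is reported against the .docx it came from; $sed_command here stands for the tag-stripping sed expression from the question.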
I created a script to download a file from a URL, and I want it downloaded into a specific directory, but when the download happens the file does not end up in the given directory, and the extraction does not happen in the given directory either.
diskspace=$(df -h /var/ | sed '1d' | awk '{print $5}' | cut -d'%' -f1)
bundle=$(awk -F = '{print $2}' config.txt)
allowed=10
if [ "${diskspace}" -gt "${allowed}" ]; then
    cd `/var/`
    wget $bundle
else
    echo "Not enough space to download the bundle"
    echo $output
    exit
fi
while true; do
    for f in *.tar.gz; do
        case $f in '*.tar.gz') exit 0;; esac
        tar zxf "$f"
        rm -v "$f"
    done
done
Can someone help me with this problem? What I want to happen is for the file to be downloaded into the given directory and also extracted there. Help is greatly appreciated.
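Two details usually explain this kind of behaviour: the backticks in cd `/var/` turn the path into a command substitution, so the shell tries to execute /var/ instead of simply changing into it, and wget saves into whatever the current directory happens to be when it runs. A sketch of the download-and-extract part only, keeping the question's config.txt and /var/ target (the space check is left as it is in the question):

target=/var/
bundle=$(awk -F'=' '{print $2}' config.txt)

# -P / --directory-prefix tells wget where to save the file,
# so no cd (and certainly no backticks around /var/) is needed for the download.
wget -P "$target" "$bundle"

# Extract and clean up inside the target directory.
cd "$target" || exit 1
for f in *.tar.gz; do
    [ -e "$f" ] || break        # nothing matched the glob; stop cleanly
    tar zxf "$f" && rm -v "$f"
done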
I bought a NAS box which has a cut down version of debian on it.
It ran out of space the other day and I did not realise. I basically want to write a bash script that will alert me whenever the disk gets over 90% full.
Is anyone aware of a script that will do this or give me some advice on writing one?
#!/bin/bash
source /etc/profile

# Device to check
devname="/dev/sdb1"

let p=`df -k $devname | grep -v ^File | awk '{printf ("%i",$3*100 / $2); }'`
if [ $p -ge 90 ]
then
    df -h $devname | mail -s "Low on space" my@email.com
fi
Crontab this to run however often you want an alert
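For example, a crontab entry along these lines (the script path and the hourly schedule are just placeholders) would run the check once an hour:

# m h dom mon dow  command
0 * * * * /home/user/check_disk_space.sh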
EDIT: For multiple disks
#!/bin/bash
source /etc/profile

# Devices to check
devnames="/dev/sdb1 /dev/sda1"

for devname in $devnames
do
    let p=`df -k $devname | grep -v ^File | awk '{printf ("%i",$3*100 / $2); }'`
    if [ $p -ge 90 ]
    then
        df -h $devname | mail -s "$devname is low on space" my@email.com
    fi
done
I tried to use Erik's answer but had issues with devices that have long names, which wraps the numbers and causes the script to fail; the math also looked wrong to me and didn't match the percentages reported by df itself.
Here's an update to his script:
#!/bin/bash
source /etc/profile

# Devices to check
devnames="/dev/sda1 /dev/md1 /dev/mapper/vg1-mysqldisk1 /dev/mapper/vg4-ctsshare1 /dev/mapper/vg2-jbossdisk1 /dev/mapper/vg5-ctsarchive1 /dev/mapper/vg3-muledisk1"

for devname in $devnames
do
    let p=`df -Pk $devname | grep -v ^File | awk '{printf ("%i", $5) }'`
    if [ $p -ge 70 ]
    then
        df -h $devname | mail -s "$devname is low on space" my@email.com
    fi
done
The key changes: df -k became df -Pk to avoid line wrapping, and the awk was simplified to use the percentage df already calculates instead of recalculating it.
You could also use Monit for this kind of job. It's a "free open source utility for managing and monitoring processes, programs, files, directories and filesystems on a UNIX system".
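For reference, a filesystem check in Monit's configuration is only a couple of lines; a sketch (the device path, label and threshold are assumptions, not values from the question) looks like:

# hypothetical entry in monitrc or /etc/monit/conf.d/
check filesystem datafs with path /dev/sdb1
    if space usage > 90% then alert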
Based on @Erik's answer, here is my version with variables:
#!/bin/bash
DEVNAMES="/ /home"
THRESHOLD=80
EMAIL=you@email.com
host=$(hostname)

for devname in $DEVNAMES
do
    current=$(df $devname | grep / | awk '{ print $5}' | sed 's/%//g')
    if [ "$current" -gt "$THRESHOLD" ] ; then
        mail -s "Disk space alert on $host" "$EMAIL" << EOF
WARNING: partition $devname on $host is $current% !!
To list big files (>100MB):
find $devname -xdev -type f -size +100M
EOF
    fi
done
And if you do not have the mail command on your server, you can send the email via SMTP with swaks:
swaks --from "$EMAIL" --to "$EMAIL" --server "TheServer" --auth LOGIN --auth-user "TheUser" --auth-password "ThePasswrd" --h-Subject "Disk space alert on $host" --body - << EOF
Based on previous answers, here's my version with the following changes:
Automatically checks all mounted devices
Sends only one mail per check, regardless of how many devices are over the threshold
Code generally tidied up

#!/bin/bash
DEVNAMES=$(df --output=source | grep ^/dev)
THRESHOLD=90
EMAIL=your@email
HOST=$(hostname)

for devname in $DEVNAMES
do
    current=$(df $devname | awk 'NR>1 {printf "%i",$5}')
    [ "$current" -gt "$THRESHOLD" ] && warn="WARNING: partition $devname on $HOST is $current% !! \n$warn"
done

[ "$warn" ] && echo -e "$warn" | mail -s "Disk space alert on $HOST" $EMAIL
I run bash scripts from time to time on my servers. I am trying to write a script that monitors log folders and compresses log files if the folder exceeds a defined capacity. I know there are better ways of doing what I am currently trying to do, so your suggestions are more than welcome. The script below throws an "unexpected end of file" error. Below is my script.
dir_base=$1
size_ok=5000000
cd $dir_base
curr_size=`du -s -D | awk '{print $1}' | sed 's/%//g'`
zipname=archive`date +%Y%m%d`
if (( $curr_size > $size_ok ))
then
    echo "Compressing and archiving files, Logs folder has grown above 5G"
    echo "oldest to newest selected."
    targfiles=( `ls -1rt` )
    echo "Process files."
    for tfile in ${targfiles[@]}
    do
        let `du -s -D | awk '{print $1}' | sed 's/%//g' | tail -1`
        if [ $curr_size -lt $size_ok ];
        then
            echo "$size_ok has been reached. Stopping processes"
            break
        else if [ $curr_size -gt $size_ok ];
        then
            zip -r $zipname $tfile
            rm -f $tfile
            echo "Added ' $tfile ' to archive'`date +%Y%m%d`'.zip and removed"
        else [ $curr_size -le $size_ok ];
            echo "files in $dir_base are less than 5G, not archiving"
        fi
Look into logrotate. Here is an example of putting it to use.
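A size-based rule for the scenario in the question might look like this sketch (the log path and limits are assumptions, not values from the question):

# hypothetical /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    # rotate once a log grows past 100 MB, keep four gzipped copies
    size 100M
    rotate 4
    compress
    missingok
    notifempty
}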
With what you give us, you lack a "done" to end the for loop and a "fi" to end the main if. Please reformat your code and you will get more precise answers...
EDIT:
Looking at your reformatted script, it is as I said: the "unexpected end of file" comes from the fact that you have closed neither your "for" loop nor your "if".
As it seems that you are mimicking the logrotate behaviour, check it out as suggested by @Hank...
my2c
My du -s -D does not show a % sign, so you can just do:
curr_size=$(du -s -D)
set -- $curr_size
curr_size=$1
This saves you a few overheads compared to du -s -D | awk '{print $1}' | sed 's/%//g'.
If it does show a % sign, you can get rid of it like this: du -s -D | awk '{print $1+0}'. No need to use sed.
Use $() syntax instead of backticks whenever possible
For targfiles=( `ls -1rt` ), you can omit the -1. So it can be:
targfiles=( $(ls -rt) )
Use quotes around your variables whenever possible, e.g. "$zipname", "$tfile".
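Putting those suggestions together, the size check from the question could be trimmed down to something like this sketch (the 5000000 KB threshold and the overall flow are kept from the question; this is a cleanup of the idea, not a drop-in replacement):

dir_base="$1"
size_ok=5000000
zipname="archive$(date +%Y%m%d)"

cd "$dir_base" || exit 1
curr_size=$(du -s -D | awk '{print $1+0}')        # +0 drops any stray % sign

if (( curr_size > size_ok )); then
    targfiles=( $(ls -rt) )                       # oldest first
    for tfile in "${targfiles[@]}"; do
        zip -r "$zipname" "$tfile" && rm -f "$tfile"
        curr_size=$(du -s -D | awk '{print $1+0}')
        (( curr_size < size_ok )) && break        # folder is small enough again, stop
    done
fi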
How can I get the size of a file into a variable?
ls -l | grep testing.txt | cut -f6 -d' '
gave the size, but how can I store it in a shell variable?
filesize=$(stat -c '%s' testing.txt)
You can do it this way with ls (check the man page for the meaning of -s)
var=$(ls -s1 testing.txt | awk '{print $1}')
Or you can use stat with -c '%s'.
Or you can use find (GNU):
var=$(find testing.txt -printf "%s")
size() {
    file="$1"
    if [ -b "$file" ]; then
        /sbin/blockdev --getsize64 "$file"
    else
        wc -c < "$file"   # Handles pseudo files like /proc/cpuinfo
        # stat --format %s "$file"
        # find "$file" -printf '%s\n'
        # du -b "$file" | cut -f1
    fi
}
fs=$(size testing.txt)
size=`ls -l | grep testing.txt | cut -f6 -d' '`
You can get the file size in bytes with the command wc, which is fairly common on Linux systems since it's part of GNU coreutils:
wc -c < file
In a Bash script you can read it into a variable like this:
FILESIZE=$(wc -c < file)
From man wc:
-c, --bytes
print the byte counts
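The redirection is what keeps the output clean: wc -c file prints the count followed by the file name, while wc -c < file reads from standard input and prints only the number, so it can be captured directly.

wc -c testing.txt       # prints something like "1234 testing.txt"
wc -c < testing.txt     # prints just the byte count
FILESIZE=$(wc -c < testing.txt)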
a=`stat -c '%s' testing.txt`
echo $a