My filesystem is as below:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-rootvol
20G 3.7G 15G 20% /
tmpfs 71G 8.0K 71G 1% /dev/shm
And my code is as below:
varone = df -h | awk ' {print $1 }'
vartwo = df -h | awk ' NR == 2 {print $2","$3","$4","$5","$6 }'
echo "$varone $vartwo" >> /home/jeevagan/test_scripts/sizes/excel.csv
I want to export the output of 'df -h' into a CSV file. The reason I print $1 alone into one variable is that the 'df -h' output wraps, putting the long device name on its own line. I want everything printed on a single line.
When I run the script, it throws errors like:
varone: command not found
vartwo: command not found
You can't put spaces around the equal sign, and you need backquotes to put the result of your command in a variable.
Try this:
varone=`df -h | awk ' {print $1 }'`
vartwo=`df -h | awk ' NR == 2 {print $2","$3","$4","$5","$6 }'`
echo "$varone $vartwo" >> /home/jeevagan/test_scripts/sizes/excel.csv
If you just wanted to capture all of df -h:
varOne=`df -h -P| awk '{print $1","$2","$3","$4","$5","$6 }'`
echo "$varOne" >> /home/jeevagan/test_scripts/sizes/excel.csv
Read about the -P option here:
How can I *only* get the number of bytes available on a disk in bash?
I need to show used disk space as (used + reserved). I have created the script below and plan to add used and reserved together. Is there a better way to do this?
I need to display "disk total used available" in this format, in GB.
#!/bin/sh
output=`df -h --output=source,size,used,avail /dev/vd* /dev/disk/* -x devtmpfs | grep -v 'Filesystem' | awk '{printf $1 "\t" $2 "\t" $3 "\t" $4 "\n" }'`
while read -r line; do
diskname=`echo $line|awk -F " " '{print $1}'`
reserved=`tune2fs -l $diskname|grep -i "Reserved block count"|awk -F ":" '{print $2}'`
reservedInGB=`echo "$((((( $reserved * 4096 ) / 1024 ) / 1024 )))"|bc -l`
total=`echo $line|awk -F " " '{print $2}'`
used=`echo $line|awk -F " " '{print $3}'`
free=`echo $line|awk -F " " '{print $4}'`
echo $diskname $total $used $free $reservedInGB
done <<< "$output"
My local environment doesn't support --output, but try something like this and tweak it to spec.
df -PB 1GB -x devtmpfs /tmp | grep -v ^Filesystem |
while read mnt size used avail cap disk
do printf "%-10s %4d %4d %4d\n" $disk $size $used $avail
done
Note that embedded spaces in the mount point break this, but it handles the conversion to GB right in the data generation with df. Since I can't use --output, I saw no reason not to use -P to make sure the mount point and its data come out on the same line. Because it does a read, reordering is easy as well, as long as the fields land correctly.
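To fold the reserved blocks from the original script back in, one hedged sketch; tune2fs only understands ext2/3/4 filesystems and usually needs root, and the variable names are just illustrative:
df -PB 1GB -x devtmpfs | grep -v ^Filesystem |
while read -r fs size used avail cap mnt
do
    # Reserved block count * block size gives bytes; divide by 10^9 to match df's 1GB unit.
    reserved=$(tune2fs -l "$fs" 2>/dev/null |
        awk -F: '/Reserved block count/ {r=$2} /^Block size/ {b=$2} END {print int(r*b/1000000000)}')
    printf "%-30s %4d %4d %4d %4d\n" "$fs" "$size" "$used" "$avail" "${reserved:-0}"
done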
Try something like this:
df -h --output=source,size,used,avail | tail -n +2 | \
while read line; \
do printf "%s\t%s\n" "$line" \
"your calc with tune2fs and ${line%%[[:space:]]*}";done
How do I extract only the /app/xxxx field from the lines below, run the df -P -T command on each extracted string, and save the output in the format shown below?
edlp_nps_app:x:23449:5000:EDLP_NPS_APP (HP):/app/edlp_nps_app:/bin/bash
genxp_app:x:23414:15887:GENXP_APP (HP):/app/genxp_app:/bin/bash
icegnapp:x:21697:15954:ICEGNAPP (HP):/app/icegnapp:/bin/bash
icegnftp:x:21554:15416:ICEGNFTP
(HP):/app/icegnftp:/usr/libexec/openssh/sftp-server
df -P -T /app/XXXXX
df -P -T /app/edlp_nps_app
Output:
Filesystem Type 1024-blocks Used Available Capacity Mounted on
/dev/mapper/rootvg-rootvol ext4 144365708 27057836 110769428 20% /
The output I require is Filesystem, Type, Mounted on, and appname:
/dev/mapper/rootvg-rootvol ext4 / /app/edlp_nps_app
I tried an awk command but it didn't work.
Using awk, test whether the 6th field starts with /app; if it does, print a df command for that field:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash
To save to a file:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash > outfile
To avoid printing the header multiple times, you can pipe through another awk, like below:
awk -F':' '$6~/^\/app/{print "df -P -T "$6}' infile | bash | awk 'FNR==1{print;next}/Filesystem/{next}1' >outfile
Test results:
Input:
$ cat file
edlp_nps_app:x:23449:5000:EDLP_NPS_APP (HP):/app/edlp_nps_app:/bin/bash
genxp_app:x:23414:15887:GENXP_APP (HP):/app/genxp_app:/bin/bash
icegnapp:x:21697:15954:ICEGNAPP (HP):/app/icegnapp:/bin/bash
icegnftp:x:21554:15416:ICEGNFTP
(HP):/app/icegnftp:/usr/libexec/openssh/sftp-server
Output:
$ awk -F: '$6~/^\/app/{print "df -P -T "$6}' file
df -P -T /app/edlp_nps_app
df -P -T /app/genxp_app
df -P -T /app/icegnapp
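If you also want only the Filesystem, Type, and Mounted on columns with the appname appended (the format asked for in the question), a hedged variation of the same idea, assuming the mount point contains no spaces:
awk -F':' '$6 ~ /^\/app/ {print $6}' file |
while read -r app
do
    # NR == 2 skips the df header; $1 = Filesystem, $2 = Type, $NF = Mounted on.
    df -P -T "$app" | awk -v app="$app" 'NR == 2 {print $1, $2, $NF, app}'
done > outfile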
extract /app/xxx
First we drop the lines that do not contain /app:
sed '/\/app/!d'
Then we extract the /app/... part:
's|.*\(/app/[^:]*\).*|\1|'
Combined:
sed '/\/app/!d;s|.*\(/app/[^:]*\).*|\1|'
The output I require is Filesystem, Type, Mounted on, and appname:
/dev/mapper/rootvg-rootvol ext4 / /app/edlp_nps_app
Let's tweak df's output a little by having it supply only the relevant fields:
df /app/foo --output=source,fstype,target
I do not believe we can get df to drop the header, so we strip it
df /app/foo --output=source,fstype,target | sed '1d'
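(Equivalently, tail -n +2 from the earlier answer strips the header:)
df /app/foo --output=source,fstype,target | tail -n +2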
I will append the app name manually and put it all together in a script named "magic.sh":
#!/bin/sh
if [ ! -f "$1" ]
then
echo "Input file does not exist!"
exit 1
fi
if [ -f "$2" ]
then
echo "Output file already exists!"
exit 2
fi
sed '/\/app/!d;s|.*\(/app/[^:]*\).*|\1|' "$1" | while read -r app
do
echo "$(df $app --output=source,fstype,target | sed '1d') $app" >> $2
done
Assuming the specified input is in a file named "input" and you want the result in a file named "output", this is how to call it:
sh magic.sh input output
My script is below. When it runs, it automatically saves the disk space usage into separate cells.
SIZES_1=`df -h | awk 'FNR == 1 {print $1","$2","$3","$4","$5","$6}'`
SIZES_2=`df -h | awk 'FNR == 2 {print $1","$2","$3","$4","$5","$6}'`
SIZES_3=`df -h | awk 'FNR == 3 {print $1","$2","$3","$4","$5","$6}'`
SIZES_4=`df -h | awk 'FNR == 4 {print $1","$2","$3","$4","$5","$6}'`
SIZES_5=`df -h | awk 'FNR == 5 {print $1","$2","$3","$4","$5","$6}'`
SIZES_6=`df -h | awk 'FNR == 6 {print $1","$2","$3","$4","$5","$6}'`
SIZES_7=`df -h | awk 'FNR == 7 {print $1","$2","$3","$4","$5","$6}'`
SIZES_8=`df -h | awk 'FNR == 8 {print $1","$2","$3","$4","$5","$6}'`
echo `date +%Z-%Y-%m-%d_%H-%M-%S` >>/home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_1" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_2" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_3" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_4" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_5" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_6" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_7" >> /home/jeevagan/test_scripts/sizes/excel.csv
echo "$SIZES_8" >> /home/jeevagan/test_scripts/sizes/excel.csv
This script works fine for my machine.
My concern is that if somebody else's machine has more file systems, my script won't fetch the usage for all of them. How do I make it grab all of them automatically?
Assuming you want all filesystems you can simplify that to:
printf '%s\n' "$(date +%Z-%Y-%m-%d_%H-%M-%S)" >> excel.csv
df -h | awk '{print $1","$2","$3","$4","$5","$6}' >> excel.csv
I would simplify this to
{ date +%Z-%F_%H-%M-%S; df -h | tr -s ' ' ','; } >> excel.csv
Group commands so only a single redirect is needed
Squeeze spaces and replace them with a single comma using tr
No need for echo `date` or similar: it's the same as just date
date +%Y-%m-%d is the same as date +%F
Notice that this has a little flaw in that the first line of the output of df -h, which looks something like this originally
Filesystem Size Used Avail Use% Mounted on
has a space in the heading of the last column, so it becomes
Filesystem,Size,Used,Avail,Use%,Mounted,on
with an extra comma. The original awk solution just cut off the last word of the line, though. Similarly, spaces in paths would trip up this solution.
To fix the comma problem, you could for example run
sed -i 's/Mounted,on$/Mounted on/' excel.csv
every now and then.
As an aside, to replace all field separators in awk, instead of
awk '{print $1","$2","$3","$4","$5","$6}'
you can use
awk 'BEGIN { OFS = "," } { $1 = $1; print }'
or, shorter,
awk -v OFS=',' '{$1=$1}1'
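Putting those pieces together, a hedged sketch of the whole logging step, assuming GNU df/awk and the same CSV path; the sub() glues "Mounted on" into one field before the commas are inserted:
{
    date +%Z-%F_%H-%M-%S
    # -P keeps each record on one line; without the sub() the header would
    # gain an extra comma between "Mounted" and "on".
    df -hP | awk -v OFS=',' '{sub(/Mounted on/, "Mounted_on")} {$1 = $1} 1'
} >> /home/jeevagan/test_scripts/sizes/excel.csv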
I have some bash that calculates 90% of the total system memory in KB and outputs this into a file:
cat /proc/meminfo | grep MemTotal | cut -d: -f2 | awk '{SUM += $1} END { printf "%d", SUM/100*90}' | awk '{print $1}' > mem.txt
I then want to copy the value into another file (/tmp/limits.conf) and append to a single line.
The command below searches for the string "soft memlock" and writes the contents of mem.txt created earlier into /tmp/limitstest.conf:
sed -i '/soft\smemlock/r mem.txt' /tmp/limitstest.conf
However, the script outputs this:
oracle soft memlock
1695949
I want it to output like this:
oracle soft memlock 1695949
I have tried quite a few things but can't get this to output correctly.
Edit: here is some of the text from the input file /proc/meminfo:
MemTotal: 18884388 kB
MemFree: 1601952 kB
MemAvailable: 1607620 kB
It's a bit of a guess since you didn't provide sample input/output but all you need is something like:
awk '
NR==FNR {
if (/MemTotal/) {
split($0,f,/:/)
$0 = f[2]
sum += $1
}
next
}
/soft[[:space:]]+memlock/ { $0 = $0 OFS int(sum/100*90) }
{ print }
' /proc/meminfo /tmp/limitstest.conf > tmp &&
mv tmp /tmp/limitstest.conf
I think your approach is overly complicated: there is no need to store the output in a file and then append it into another file.
What if you just store the value in a variable and then add it into your file?
var=$(command)
sed "/soft memlock/s/.*/& $var/" /tmp/limitstest.conf
Once you are confident with the output, add the -i in the sed operation.
Where, in fact, command can be something awk alone handles:
awk '/MemTotal/ {sum+=$2} END { printf "%d", sum/100*90}' /proc/meminfo
See a test on the sed part:
$ cat a
hello
oracle soft memlock
bye
$ var=2222
$ sed "/soft memlock/s/.*/& $var/" a
hello
oracle soft memlock 2222
bye
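Putting the two parts together (check the output first, then add -i to sed once it looks right):
var=$(awk '/MemTotal/ {sum+=$2} END {printf "%d", sum/100*90}' /proc/meminfo)
sed "/soft memlock/s/.*/& $var/" /tmp/limitstest.conf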
I am trying to write a shell script to monitor file systems. The script logic is:
for each file system in the df -H output, read the file system threshold file and get the critical threshold and warning threshold. Based on the condition, it sends a notification.
Here is my script:
#!/bin/sh
df -H | grep -vE '^Filesystem|none|boot|tmp|tmpfs' | awk '{ print $5 " " $6 }' | while read -r output
do
echo $output
fsuse=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
fsname=$(echo $output | awk '{ print $2 }' )
server=`cat /workspace/OSE/scripts/fs_alert|grep -w $fsname|awk -F":" '{print $2}'`
fscrit=`cat /workspace/OSE/scripts/fs_alert|grep -w $fsname|awk -F":" '{print $3}'`
fswarn=`cat /workspace/OSE/scripts/fs_alert|grep -w $fsname|awk -F":" '{print $4}'`
serenv=`cat /workspace/OSE/scripts/fs_alert|grep -w $fsname|awk -F":" '{print $5}'`
if [ $fsuse -ge $fscrit ]; then
message="CRITICAL:${server}:${serenv}:$fsname Is $fsuse Filled"
_notify;
elif [ $fsuse -gt $fswarn ] && [ $fsuse -lt $fscrit ]; then
message="WARNING: $fsname is $fsuse Filled"
_notify;
else
echo "File system space looks good"
fi
done
Here is /workspace/OSE/scripts/fs_alert:
/:hlpdbq001:90:80:QA:dba_mail
/dev/shm:hlpdbq001:90:80:QA:dba_mail
/boot:hlpdbq001:90:80:QA:dba_mail
/home:hlpdbq001:90:80:QA:dba_mail
/opt:hlpdbq001:90:80:QA:dba_mail
/opt/security:hlpdbq001:90:80:QA:dba_mail
/tmp:hlpdbq001:90:80:QA:dba_mail
/var:hlpdbq001:90:80:QA:dba_mail
/u01/app:hlpdbq001:90:80:QA:dba_mail
/u01/app/oracle:hlpdbq001:90:80:QA:dba_mail
/oratrace:hlpdbq001:90:80:QA:dba_mail
/u01/app/emagent:hlpdbq001:90:80:QA:dba_mail
/gg:hlpdbq001:90:80:QA:dba_mail
/workspace:hlpdbq001:90:80:QA:dba_mail
/dbaudit:hlpdbq001:90:80:QA:dba_mail
/tools:hlpdbq001:90:80:QA:dba_mail
My problem is that when the script tries to get crit_val and warn_val from the file for the /u01 file system, I get three results. How do I get/filter one file system at a time?
$ df -H|grep /u01
/dev/mapper/datavg-gridbaselv 53G 12G 39G 24% /u01/app
/dev/mapper/datavg-rdbmsbaselv 53G 9.6G 41G 20% /u01/app/oracle
/dev/mapper/datavg-oemagentlv 22G 980M 20G 5% /u01/app/emagent
What is the best way to handle this issue?
Do I need logic based on the Filesystem or the Mounted on column?
Don't reinvent the wheel. There are tools out there that can do this for you. Try monit, for example:
http://sysadminman.net/blog/2011/monit-disk-space-monitoring-1716
Well, monit is fine; if you need an alternative, take a look at df-check, a wrapper for the df utility that verifies thresholds are not exceeded on a per-partition basis. It seems very close to what you started to implement in your bash script, but it's written in Perl and has a neat and simple installation layout. It's a ready-to-use tool.
P.S. Disclosure: I am the tool's author.
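For completeness, if you do want to keep the shell-script approach: the triple match happens because grep -w treats / as a non-word character, so /u01/app also matches inside /u01/app/oracle and /u01/app/emagent. An exact comparison on the first field avoids that; a hedged sketch against the fs_alert format above:
# Exact string match on field 1 returns only the line for this mount point.
line=$(awk -F':' -v fs="$fsname" '$1 == fs' /workspace/OSE/scripts/fs_alert)
server=$(echo "$line" | awk -F':' '{print $2}')
fscrit=$(echo "$line" | awk -F':' '{print $3}')
fswarn=$(echo "$line" | awk -F':' '{print $4}')
serenv=$(echo "$line" | awk -F':' '{print $5}')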