Run the disk space checking script in Linux without login - linux

I wrote a shell script that checks disk space on a Linux machine while I am logged in.
Now, if I am not logged in to that machine but I still need an alert when disk usage goes above a threshold (e.g. 80%), how can I check it? The options I see are:
1. Using ssh (remote command execution)
2. Running it as a background script
Which one is more efficient? Or is there another way to do this?
I do not want to do any kind of login, directly or indirectly; I don't even want to use ssh-keygen to store keys. It should work like web pages do, with any new system as well, and without any kind of security tuning.
Please let me know.

The cron program is a good place to start. It is available on any Linux system and can be set up to run programs at regular intervals. If the program produces any output (including error messages), it is normally emailed to the account that created the cron job.
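As a sketch of how that can fit together (the script path and address below are placeholders, and mail delivery assumes a working local mailer):
# crontab entry: run the disk check at the top of every hour;
# anything the script prints is mailed to MAILTO by cron
MAILTO=admin@example.com
0 * * * * /home/user/check_disk.sh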

Script for checking disk usage and CPU usage
ALERT=60
The df command lists the available filesystems and their attributes. From its output we take the name of each filesystem and its usage with awk. Next, to get the value, we cut off the % symbol and keep only the integer. Then we check that number against the alert threshold to decide whether to print the alert message.
df -H | grep -vE '^Filesystem' | awk '{ print $5 " " $1 }' | while read output; do
    usep=$(echo "$output" | awk '{ print $1 }' | cut -d '%' -f1)
    partition=$(echo "$output" | awk '{ print $2 }')
    if [ "$usep" -ge "$ALERT" ]; then
        echo "Alert: Almost out of disk space $usep $partition $HOSTNAME"
    fi
done
This is the script I have written for checking the disk space.
Then I ran crontab -e in the terminal and added the script to the file, so the new cron job is registered. I set the cron job to run every hour to monitor the disk space.
The problem is that I am not getting the notification message. If I am not logged in on that machine, how do I get the information from the disk space check?
Is there anything in the script that needs to be improved?

Related

Pass variables out of an interactive session from bash script

Hello People of the world,
I am trying to write a bash script that will allow a user to fail over apps between sites.
Our applications are controlled by Pacemaker, and I thought I would be able to write a function that takes in the necessary variables and acts: stop on one site, start on another. Once I have ssh'd to the remote machine, I am unable to get the value of the grep/awk command back for the status of the application in PCS.
I am encountering a few issues and have tried answers from Stack Overflow and other sites.
I redirect the ssh command to /dev/null 2>&1 because banners that the unix admins have set on the local user pop up on screen, and -q does not deal with them - does this stop anything being returned?
When using awk '{print \\\\\\$4}' in the code, I get a "backslash not last character on line" error.
To get around this, I tried result=$(sudo pcs status | grep nds_$resource); however, this resulted in a password error on sudo.
I have tried >/dev/tty and >$(tty).
I tried not suppressing the ssh (removing the /dev/null 2>&1) and putting the output into a variable at the function call, removing the awk from the sudo pcs status entry.
result=$(pcs_call "$site1" "1" "2" "disable" "pmr")
echo $result | grep systemd
This was OK, but when I added | awk '{print \\\$4}' I then got the fourth word in the banner.
Any help would be appreciated as I have been going at this for a few days now.
I have been looking at this answer from Bruno, but I am unsure how to implement it as I have multiple sudo commands.
Below is a stripped-down version of the function code for testing on one machine:
site1=lon
site2=ire

function pcs_call()
{
    site=$1
    serverA=$2
    serverB=$3
    activity=$4
    resource=$5

    ssh -tt ${site}servername0${serverA} <<SSH > /dev/null 2>&1
sudo pcs resource ${activity} proc_${resource}
sleep 10
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
exit
SSH
    echo $output
}

echo ====================================================================================
echo Shutting Down PMR in $site1
pcs_call "$site1" "1" "2" "disable" "pmr"
I'd say start by pasting the whole thing into ShellCheck.net and fixing errors until there are no suggestions, but there are some serious issues here that shellcheck is not going to be able to handle alone.
> /dev/null says "throw away into the bit bucket any data that is returned". 2>&1 says "send any useful error reporting on stderr wherever stdout is going". Your initial statement, intended to retrieve information from a remote system, is immediately discarding it. Unless you just want something to occur on the remote system that you don't want to know more about locally, you're wasting your time with anything after that, because you've dumped whatever it had to say.
You only need one backslash in that awk statement to quote the dollar sign on $4.
Unless you have passwordless sudo on the remote system, this is not going to work out for you. I think we need more info on that before we discuss it any deeper.
As long as the ssh call is throwing everything to /dev/null, nothing inside the block of code being passed is going to give you any results on the calling system.
In your code you are using $output, but it looks as if you intend for tee to be setting it? That's not how that works. tee's argument is a filename into which it expects to write a copy of the data, which it also streams to stdout (tee as in a "T"-joint, in plumbing) but it does NOT assign variables.
(As an aside, you aren't even using serverB yet, but you can add that back in when you get past the current issues.)
At the end you echo $output, which is probably empty, so it's basically just echo which won't send anything but a newline, which would just be sent back to the origin server and dumped in /dev/null, so it's all kind of pointless....
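To make the tee point concrete (the file and variable names here are arbitrary):
# tee's argument is a file it writes a copy of stdin into; it does not set any variable
echo "hello" | tee /tmp/copy.txt    # prints "hello" and also writes it to /tmp/copy.txt
# to capture a command's output in a variable, use command substitution instead
output="$(echo "hello")"            # output now contains "hello"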
Let's clean up
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
and try it a little differently, yes?
First, I'm going to assume you have passwordless sudo, otherwise there's a whole other conversation to work that out.
Second, it's generally an antipattern to use both grep AND awk in a pipeline, as they are both basically regex engines at heart. Choose one. If you can make grep do what you want, it's pretty efficient. If not, awk is super flexible. Please read the documentation pages on the tools you are using when something isn't working. A quick search for "bash man grep" or "awk manual" will quickly give you great resources, and you're going to want them if you're trying to do things this complex.
So, let's look at a rework, making some assumptions...
function pcs_call() {
    local site="$1" serverA="$2" activity="$3" resource="$4"  # make local and quotes habits you only break on purpose
    ssh -qt ${site}servername0${serverA} "
    sudo pcs resource ${activity} proc_${resource}; sleep 10; sudo pcs status;
    " 2>&1 | awk -v resource="$resource" '$0~"proc_"resource { print $4 }'
}
pcs_call "$site1" 1 disable pmr # should print the desired field
If you want to catch the data in a variable to use later -
var1="$( pcs_call "$site1" 1 disable pmr )"
Addendum
Addressing your question - use $(seq 1 10) or just {1..10}.
ssh -qt chis03 '
for i in {1..10}; do sudo pcs resource disable ipa $i; done;
sleep 10; sudo pcs status;
' 2>&1 | awk -v resource=ipa '$0~"proc_"resource { print $2" "$4 }'
It's reporting the awk first, because order of elements in a pipeline is "undefined", but the stdout of the ssh is plugged into the stdin of the awk (and since it was duped to stdout, so is the stderr), so they are running asynchronously/simultaneously.
Yes, since these are using literals, single quotes are simpler and effectively "better". If you abstract with vars it doesn't change much, but switch back to double quotes.
# assuming my vars (svr, verb, target) preset in the context
ssh -qt $svr "
for i in {1..10}; do sudo pcs resource $verb $target \$i; done;
sleep 10; sudo pcs status;
" 2>&1 | awk -v resource="$target" '$0~"proc_"resource { print $2" "$4 }'
Does that help?

Monitoring multiple Linux Systems or Servers Script

I want to modify my script so that it can monitor the CPU, memory and RAM on 4 servers on my network. The script below monitors a single server. Is there a way to check or modify the script below, given that I have the hosts, username and password?
printf "Memory\t\tDisk\t\tCPU\n"
end=$((SECONDS+3600))
while [ $SECONDS -lt $end ]; do
MEMORY=$(free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
DISK=$(df -h | awk '$NF=="/"{printf "%s\t\t", $4}')
CPU=$(top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
echo "$MEMORY$DISK$CPU"
sleep 5
done
Any ideas or suggestions?
A simple, naive implementation might look like:
for server in host1 host2 host3 host4; do
    ssh "$server" bash -s <<'EOF'
...your script here...
EOF
done
...with RSA keys preconfigured for passwordless authentication. That could be made slightly less naive by leveraging ControlMaster/ControlSocket functionality in ssh, so you're keeping the same transport up between multiple ssh sessions and reusing it wherever possible.
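For reference, a minimal sketch of that in ~/.ssh/config (the host names are placeholders):
Host host1 host2 host3 host4
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
With that in place, repeated ssh invocations to the same host reuse the first connection instead of opening a new one each time.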
However -- rolling your own system monitoring tools is a fool's errand, at least until you've been around the block with the existing established ones, know their strengths, know their weaknesses, and can make a reasoned argument as to why they aren't a good fit for you. Use something off-the-shelf maintained by people who've been doing this for a while.

Cron to detect low available memory

Hello, I have a memory leak on my server which I am finding difficult to trace; apparently so is support. They told me to try writing a cron job to detect when my server is low on memory, but I have no idea how to do this.
I use PHP to build my apps on a VPS with CentOS 6 installed.
Quoting from https://cookbook.wdt.io/memory.html:
free is a standard unix command that displays used and available memory. Used with the option -m, it outputs the values in megabytes. The last value in the line labeled "-/+ buffers/cache:" shows the total available memory, so we can use grep and awk to get this value and turn it into a number.
free -m | grep cache: | awk '{ print int($NF) }'
*/5 * * * * ((`free -m | grep cache: | awk '{ print int($NF) }'` >= 50)) && curl -sm 30 http://any_monitoring_url
The "curl ... any_monitoring_url" in the above example is pinging an external monitoring system like the one we built (wdt.io) to catch memory leaks and then email / sms / slack you. This step is not strictly necessary. You could do something as simple as touch file_to_check_timestamp or echo "Low Memory!" >> file_to_check_for_low_memory_alerts. The problem is that if memory (or CPU or disk space) get pinned, you could hit deadlock and the scheduled cron task may not run. Hence the value of a third-party monitor.
Also see our articles on cron monitoring CPU and Disk Space and other recipes, in case they're of value as well.

Linux: Show a message on the connected display without logging in

Hello, I am trying to secure a Linux machine that I have built.
I want to prevent people from cloning the hard disk drive.
I have a script that runs from init on startup and every hour.
The script is something like this: it has a hard-coded hard drive serial number, motherboard serial number and a few other serials. It checks the serial numbers of the hard drive and motherboard against the hard-coded serials. If they do not match, it sends a halt message to the system and the system shuts down.
Now I want to display a message on the connected display / monitor before halting the system. Obviously no user will be logged on when this happens, so I am looking for a way to send the message to the screen without logging in.
Here is the script snippet.
#!/bin/bash
coded_hdserial="W4837486938473ASD534354"
coded_mbserial="XFD6345-32423-IRJDFJDF-234823729"
check_hdserial=$(smartctl -i /dev/sda | grep "Serial Number" | awk -F ":" '{print $2}' | sed -e 's/^[ \t]*//')
check_mbserial=$(lshw | grep -A 15 "*-core" | grep "serial:" | awk -F ":" '{print $2}' | sed -e 's/^[ \t]*//' | tr -d '.')

if [ "$coded_hdserial" = "$check_hdserial" ]; then
    :  # serial matches, nothing to do
else
    echo "This copy of the software cannot be validated; it may be a counterfeit. Therefore you cannot continue to enjoy the privilege of using this software. Please contact support for further information.
This system will now be shut down. Further attempts to make counterfeit or duplicate copies of this system will initiate self-destruction and you will lose all data."
    shutdown -h +1  # halt in 1 minute
fi

if [ "$coded_mbserial" = "$check_mbserial" ]; then
    :  # serial matches, nothing to do
else
    echo "This copy of the software cannot be validated; it may be a counterfeit. Therefore you cannot continue to enjoy the privilege of using this software. Please contact support for further information.
This system will now be shut down. Further attempts to make counterfeit or duplicate copies of this system will initiate self-destruction and you will lose all data."
    shutdown -h +1  # halt in 1 minute
fi
My question is: how can I display a nice-looking warning message on the connected monitor, or possibly a JPG warning image on the screen?
Look at kernel support for framebuffer devices, to which you may be able to dump raw pixel data, such as from a bitmap image file. This is how the GRand Unified Bootloader (GRUB) draws a graphical boot menu. There is no need for X, but you won't get actual dialog windows. If you want a lightweight ASCII-art interactive menu system, look at a curses implementation such as ncurses.
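As a rough sketch of the framebuffer route, assuming a 32-bit 1920x1080 /dev/fb0 and ImageMagick available to pre-convert the image (the resolution, pixel format and file names are assumptions):
# prepare a raw BGRA dump of the warning image once, at the framebuffer's resolution
convert warning.jpg -resize 1920x1080\! -depth 8 bgra:warning.raw
# at alert time, write the pixels straight to the framebuffer (needs root; no X required)
cat warning.raw > /dev/fb0
For a plain text warning, simply writing to a console device (for example echo "message" > /dev/tty1) may be enough.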

Monitor Linux user logins and logouts script

My problem is that I need to monitor, in real time, all the users that have logged on or logged out. I know there is the auth.log file, but I don't have permission to read it. Is there any way of displaying only the usernames and login/logout times?
To see which users are currently logged in, there are traditionally the commands who and w on Unix systems. Calling these is not restricted. For privacy reasons, however, normal users should not be allowed to see when which users logged on or off.
That is the reason why what you want cannot be achieved properly with what is available to you. You will have to use workarounds, each of which has caveats.
Michael's answer tries to achieve your goal by logging the list of current users (he uses ps, but I would prefer who or w for this task). If this is done regularly (every minute or every hour or so), then later you can scan the log file to find out when users appeared and disappeared. I'd use it like this:
#!/bin/bash

log() {
    line=$(who | cut -d' ' -f1 | sort -u)
    echo "$(date): " $line # _NO_ quotes around $line!
}

while sleep 3600
do
    log >> user.log
done & # do this in the background
Each hour this will log who is online into the file user.log.
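Later, to see roughly when a given user appeared and disappeared, you can grep the log (the username here is a placeholder):
grep 'alice' user.log | head -n 1    # first hourly snapshot that includes the user
grep 'alice' user.log | tail -n 1    # last hourly snapshot that includes the user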
You can also trace logged-in users by focusing on their running processes. The following script does it:
#!/bin/sh
mv current.log previous.log                            # keep two log files so the user lists can be compared
ps aux | awk '{print $1}' | sort | uniq > current.log  # unique list of users that own processes
diff current.log previous.log | grep '^[<>]'           # compare the two user lists
As a result, you might see something like this:
< avahi #logout user
> 123 #login user
> sfdfs #login user
Also, the last command may be what you can use.
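For example (last reads /var/log/wtmp, which is usually world-readable, unlike auth.log):
last -F | head -n 5    # recent sessions with full login and logout timestamps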
