Remote SSH command responses do not appear in the browser from CGI - Linux

I have a simple CGI page (running on Linux, Apache) that grabs some responses from remote servers. When I manually run the script (from a terminal) it echoes the complete web page correctly, including all remote responses. But when I open the page in a browser, the responses are not there!
Here's my script for reference.
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo "<html class=\"background\"><head><title>My Page"
echo "</title></head><body>"
echo ""
echo "<h2>Local Uptime :</h2>"
echo `uptime` #Local commands work normally
echo "<h2>Remote Uptime: </h2>"
echo `/usr/bin/ssh root@remote-server "uptime"`
echo "</body></html>"
Of course, I previously set keys for password-less logins.

Check to make sure SELinux is turned off (or set the correct options so the web server is allowed to execute ssh).
cat /selinux/enforce
If it's 1, set it to 0 as root:
echo 0 > /selinux/enforce
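If you'd rather not disable SELinux entirely, a more targeted option on RHEL/CentOS is to flip the boolean that lets httpd-spawned scripts open outgoing network connections (run as root; verify the boolean name on your system with `getsebool -a | grep httpd`):

```shell
# Allow httpd (and the CGI scripts it spawns) to make outbound network
# connections; -P makes the change persistent across reboots.
setsebool -P httpd_can_network_connect 1
```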

Related

How to get popup message after log into Linux server

We are using Linux servers (CentOS) and one backup server.
We only log in to a prod server if there is an issue; otherwise we check connectivity from the backup server.
We do not have any kind of monitoring tool.
I have created a simple bash script on backup server as below
#!/bin/bash
date
while read -r server
do
if ping -c 1 "$server" > /dev/null
then
echo "Server $server is UP"
else
echo "Server $server is Down"
fi
done < /tmp/servers.txt
How do we get the output of this script automatically after logging into this server?
~/.bash_profile runs every time you log in, so that seems like the place to put this code.
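For example, you could append something like this to ~/.bash_profile on the backup server (the script path here is an assumption; adjust it to wherever your script actually lives):

```shell
# Run the server check on interactive logins only; the -t 0 guard keeps
# non-interactive sessions (scp, sftp, rsync) from triggering it.
# /root/check_servers.sh is a hypothetical path.
if [ -t 0 ] && [ -x /root/check_servers.sh ]; then
    /root/check_servers.sh
fi
```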

How do I pass a set of bash commands through SSH? [duplicate]

This question already has answers here:
Pass commands as input to another command (su, ssh, sh, etc)
(3 answers)
Closed 4 years ago.
I'm writing a simple bash server health check that runs on a local machine and asks the user which server they want to look at. Given the name, the script should run a set of health-check commands on that server and return the output to the user. Currently, the script just logs the user into the server; the checks don't run until the user exits that SSH session, and then they run locally, not on the remote server as intended. I don't want the user to actually log on to the server; I just need the output from that server forwarded to the user's local console. Any clue what I'm doing wrong here? Thanks in advance!
#!/bin/bash
echo -n "Hello $USER"
echo "which server would you like to review?"
read var1
ssh -tt $var1
echo ">>> SYSTEM PERFORMANCE <<<"
top -b -n1 | head -n 10
echo ">>> STORAGE STATISTICS <<<"
df -h
echo ">>> USERS CURRENTLY LOGGED IN <<<"
w
echo ">>> USERS PREVIOUSLY LOGGED IN <<<"
lastlog -b 0 -t 100
Using a here-doc:
#!/bin/bash
echo -n "Hello $USER"
echo "which server would you like to review?"
read var1
ssh -t "$var1" <<'EOF'
echo ">>> SYSTEM PERFORMANCE <<<"
top -b -n1 | head -n 10
echo ">>> STORAGE STATISTICS <<<"
df -h
echo ">>> USERS CURRENTLY LOGGED IN <<<"
w
echo ">>> USERS PREVIOUSLY LOGGED IN <<<"
lastlog -b 0 -t 100
EOF
There is a tool that is used extensively for all classes of remote access (via ssh and/or telnet, http, etc.) called expect(1). It allows you to send commands and wait for the responses, even with interactive commands (like vi(1) in screen mode), and even to supply passwords over the line. Do some googling on it and you'll see how useful it is.

Instructions in an ssh remote execution block are evaluated locally on the client side first. Why?

Why are the instructions in an ssh remote execution block evaluated locally on the client side first?
Consider the following code:
ssh -tt serverhostname "
if [ `grep -cm 1 "string" /serverside/file` != "1" ]; then
echo "Doing some action on" `date`
fi
" 2> /dev/null
One might expect for the instructions listed in the ssh block to be executed on the remote. This is not the case. The if statement is actually executed on the client side. Why? Is there a syntactically correct way to process the instructions server side (on the remote)?
Unquoted and double-quoted arguments are expanded before the command is invoked:
ssh server "echo $HOSTNAME"
is the same as
ssh server "echo clienthostname"
and sends the literal string echo clienthostname to the server. The server therefore writes out the client's hostname.
Single quoted and escaped arguments, meanwhile, are not expanded:
ssh server 'echo $HOSTNAME'
ssh server "echo \$HOSTNAME"
will both send the literal string echo $HOSTNAME to the server, and therefore cause the server to write out the server's hostname.
This applies to all commands, and has nothing to do with ssh.
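The distinction is easy to see without a server at all; below, `sh -c` plays the role of the remote shell (ssh applies exactly the same local-expansion rules to its argument):

```shell
demo=client                      # set locally, deliberately not exported
double=$(sh -c "echo $demo")     # double quotes: expanded locally before sh -c runs
single=$(sh -c 'echo $demo')     # single quotes: sent literally; the inner shell expands it
echo "double-quoted result: '$double'"   # inner shell received: echo client
echo "single-quoted result: '$single'"   # inner shell expands unset $demo to empty
```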
I asked this question to perhaps help someone else having the same dilemma. You want to execute something on the server, and use the value of that instruction as the basis for something else. But you find the server isn't performing the instructions.
Bash is in fact performing the injected instruction client-side. The only workaround I've found is to issue separate commands and store the result client-side, i.e.:
# Step 1
a=$(ssh server 'grep -cm 1 "something" /serverside/file')
# Step 2
if [ "$a" != "1" ]; then
ssh -tt server '
echo "perform some series of tasks"
'
fi

Linux FTP put success output

I have a bash script that creates backups, incremental (daily) and full (on Mondays). Every 7 days the script combines the week of backups (full and incremental) and sends them off to an FTP server. The problem I am having is that I want to delete the files from my backup directory after the FTP upload is finished, but I can't do that until I know the file was successfully uploaded. I need to figure out how to capture the '226 Transfer complete' message so I can use it in an 'if' statement to delete the backup files. Any help is greatly appreciated. Also, here is the FTP portion of the script:
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST << EOF
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
I could use whatever mean i needed i suppose, FTP was something already setup for another backup function for something else, thanks
2nd EDIT: Ahmed, the rsync works great in a test from the command line; it's a lot faster than FTP. The server is on the local network, so SSH is not that big of a deal, but it's nice to have for added security. I will finish implementing it in my script tomorrow. Thanks again.
FTP OPTION
The simple solution would be to do something like this:
ftp -inv $HOST >ftp-results-$YEAR-$MONTH-$DAY.out 2>&1 <<-EOF
user $USER $PASS
cd /baks
bin
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
Also, a note on the here-document syntax: the delimiter word (EOF in your case) follows `<<`, and I added a `-` (i.e. `<<-`) because there is whitespace before the ACTUAL closing delimiter (it's tabbed in for the `if`/`fi` block); `<<-` strips leading tabs so the delimiter is still recognized.
Now when you do this, you can parse the output file to look for a successful put of the file. For example:
if grep -qi '226 transfer complete' ftp-results-$YEAR-$MONTH-$DAY.out; then
echo "It seems that FTP transfer completed fine, we can schedule a delete"
echo "rm -f $PWD/weekending-$YEAR-$MONTH-$DAY.tar.gz" >> scheduled_cleanup.sh
fi
and just run scheduled_cleanup.sh from cron at a given time; this way you will have some margin before the files are cleaned up.
If your remote FTP server has good SITE or PROXY options you may be able to get the remote FTP to run a checksum on the uploaded file after successful upload and return the result.
SCP / RSYNC OPTION
Using FTP is clunky and dangerous, you should really try and see if you can have scp or ssh access to the remote system.
If you can, then generate an SSH key (if you don't have one) using ssh-keygen:
ssh-keygen -N "" -t rsa -f ftp-rsa
append the contents of ftp-rsa.pub to ~/.ssh/authorized_keys for $USER on $HOST, and you have a much nicer method for uploading files:
if scp -B -C weekending-$YEAR-$MONTH-$DAY.tar.gz $USER@$HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
Or better yet using rsync:
if rsync --progress -a weekending-$YEAR-$MONTH-$DAY.tar.gz $HOST:/baks/ ; then
echo Upload successful 1>&2
else
echo Upload failed 1>&2
fi
et voilà, you are done. Since rsync works over SSH, you are happy and secure.
Try the following:
#!/bin/bash
runifok() { echo "will run this when the transfer is OK"; }
if [ -a "$WKBKDIR/weekending-$YEAR-$MONTH-$DAY.tar.gz" ]; then
HOST=192.168.40.30 #This is the FTP servers host or IP address.
USER=username #This is the FTP user that has access to the server.
PASS=password #This is the password for the FTP user.
ftp -inv $HOST <<EOF | grep -qi '226 transfer complete' && runifok
user $USER $PASS
cd /baks
lcd $WKBKDIR
put weekending-$YEAR-$MONTH-$DAY.tar.gz
bye
EOF
fi
Test it, and when it runs OK, replace the echo in the runifok function with whatever commands you want to execute after the upload is successful.

How to check if a server is running

I want to use ping to check to see if a server is up. How would I do the following:
ping $URL
if [ $? -eq 0 ]; then
echo "server live"
else
echo "server down"
fi
How would I accomplish the above? Also, how would I make it return 0 upon the first ping response, or an error if the first ten pings fail? Or would there be a better way to accomplish what I am trying to do?
I'd recommend not using only ping. It can check whether a server is online in general, but you cannot check a specific service on that server.
Better use these alternatives:
curl
man curl
You can use curl and check the http_response for a webservice like this
check=$(curl -s -w "%{http_code}\n" -L "${HOST}${PORT}/" -o /dev/null)
if [[ $check == 200 || $check == 403 ]]
then
# Service is online
echo "Service is online"
exit 0
else
# Service is offline or not working correctly
echo "Service is offline or not working correctly"
exit 1
fi
where
HOST = [ip or dns-name of your host]
(optional) PORT = [a port; don't forget to start it with :]
200 is the normal success http_code
403 (Forbidden) is also acceptable, e.g. the service answers but requires a login, which most probably means the service runs correctly
-s Silent or quiet mode.
-L Follow redirects (Location headers)
-w In which format you want to display the response
-> %{http_code}\n we only want the http_code
-o the output file
-> /dev/null redirects any output to /dev/null so it isn't written to stdout or into the check variable. Usually you would get the complete HTML source code before the http_code, so you have to silence that, too.
nc
man nc
While curl to me seems the best option for Webservices since it is really checking if the service's webpage works correctly,
nc can be used to rapidly check only if a specific port on the target is reachable (and assume this also applies to the service).
Advantage here is the settable timeout of e.g. 1 second, while curl might take a bit longer to fail; and of course you can also check services which are not webpages, like port 22 for SSH.
nc -4 -d -z -w 1 ${HOST} ${PORT} &> /dev/null
if [[ $? == 0 ]]
then
# Port is reached
echo "Service is online!"
exit 0
else
# Port is unreachable
echo "Service is offline!"
exit 1
fi
where
HOST = [ip or dns-name of your host]
PORT = [NOT optional the port]
-4 force IPv4 (or -6 for IPv6)
-d Do not attempt to read from stdin
-z Only scan for listening daemons; don't send any data
-w timeout
If a connection and stdin are idle for more than timeout seconds, then the connection is silently closed. (In this case nc will exit 1 -> failure.)
(optional) -n If you only use an IP: Do not do any DNS or service lookups on any specified addresses, hostnames or ports.
&> /dev/null Don't print out any output of the command
You can use something like this -
serverResponse=$(wget --server-response --max-redirect=0 "${URL}" 2>&1)
if [[ $serverResponse == *"Connection refused"* ]]
then
echo "Unable to reach given URL"
exit 1
fi
Use the -c option with ping; it will send only the given number of pings and then stop:
if ping -c 10 $URL; then
echo "server live"
else
echo "server down"
fi
Short form:
ping -c5 $SERVER || echo 'Server down'
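To get the "return 0 on the first reply, fail after ten tries" behaviour asked about above, a small retry wrapper works (first_success is a made-up helper name, not a standard utility):

```shell
#!/bin/sh
# first_success CMD [ARGS...]: run CMD up to 10 times and return 0 on the
# first success, or 1 if all ten attempts fail.
first_success() {
    for _ in 1 2 3 4 5 6 7 8 9 10; do
        "$@" && return 0
    done
    return 1
}

# Against a real host it would be used like this (assuming $SERVER is set):
#   first_success ping -c 1 -W 1 "$SERVER" > /dev/null 2>&1 && echo "server live"
```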
Do you need it for some other script? Or are you trying to build a simple monitoring tool? In that case, you may want to take a look at Pingdom: https://www.pingdom.com/.
I use the following script function to check whether servers are online or not. It's useful when you want to check multiple servers. The function hides the ping output, and you can handle the server-live and server-down cases separately.
#!/bin/bash
#retry count of ping request
RETRYCOUNT=1;
#pingServer: implement ping server functionality.
#Param1: server hostname to ping
function pingServer {
#echo Checking server: $1
ping -c "$RETRYCOUNT" "$1" > /dev/null 2>&1
if [ $? -ne 0 ]
then
echo $1 down
else
echo $1 live
fi
}
#usage example, pinging some host
pingServer google.com
pingServer server1
One good solution is to use MRTG (a simple graphing tool for *NIX) with the ping-probe script; look it up on Google.