In my code I have the following to run a remote script.
ssh root@host.domain.com "sh /home/user/backup_mysql.sh"
For some reason it keeps 255'ing on me. Any ideas?
I can SSH into the box just fine (passwordless keys are set up).
REMOTE SCRIPT:
MUSER='root'
MPASS='123123'
MHOST="127.0.0.1"
VERBOSE=0
### Set bins path ###
GZIP=/bin/gzip
MYSQL=/usr/bin/mysql
MYSQLDUMP=/usr/bin/mysqldump
RM=/bin/rm
MKDIR=/bin/mkdir
MYSQLADMIN=/usr/bin/mysqladmin
GREP=/bin/grep
### Setup dump directory ###
BAKRSNROOT=/.snapshots/tmp
#####################################
### ----[ No Editing below ]------###
#####################################
### Default time format ###
TIME_FORMAT='%H_%M_%S%P'
### Make a backup ###
backup_mysql_rsnapshot(){
local DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
local db="";
[ ! -d $BAKRSNROOT ] && ${MKDIR} -p $BAKRSNROOT
${RM} -f $BAKRSNROOT/* >/dev/null 2>&1
# [ $VERBOSE -eq 1 ] && echo "*** Dumping MySQL Database ***"
# [ $VERBOSE -eq 1 ] && echo -n "Database> "
for db in $DBS
do
local tTime=$(date +"${TIME_FORMAT}")
local FILE="${BAKRSNROOT}/${db}.${tTime}.gz"
# [ $VERBOSE -eq 1 ] && echo -n "$db.."
${MYSQLDUMP} --single-transaction -u ${MUSER} -h ${MHOST} -p${MPASS} $db | ${GZIP} -9 > $FILE
done
# [ $VERBOSE -eq 1 ] && echo ""
# [ $VERBOSE -eq 1 ] && echo "*** Backup done [ files wrote to $BAKRSNROOT] ***"
}
### Die on demand with message ###
die(){
echo "$#"
exit 999
}
### Make sure bins exists.. else die
verify_bins(){
[ ! -x $GZIP ] && die "File $GZIP does not exist. Make sure the correct path is set in $0."
[ ! -x $MYSQL ] && die "File $MYSQL does not exist. Make sure the correct path is set in $0."
[ ! -x $MYSQLDUMP ] && die "File $MYSQLDUMP does not exist. Make sure the correct path is set in $0."
[ ! -x $RM ] && die "File $RM does not exist. Make sure the correct path is set in $0."
[ ! -x $MKDIR ] && die "File $MKDIR does not exist. Make sure the correct path is set in $0."
[ ! -x $MYSQLADMIN ] && die "File $MYSQLADMIN does not exist. Make sure the correct path is set in $0."
[ ! -x $GREP ] && die "File $GREP does not exist. Make sure the correct path is set in $0."
}
### Make sure we can connect to server ... else die
verify_mysql_connection(){
$MYSQLADMIN -u $MUSER -h $MHOST -p$MPASS ping | $GREP 'alive'>/dev/null
[ $? -eq 0 ] || die "Error: Cannot connect to MySQL Server. Make sure username and password are set correctly in $0"
}
### main ####
verify_bins
verify_mysql_connection
backup_mysql_rsnapshot
This usually happens when the remote host is down or unreachable, when the remote machine doesn't have sshd installed or running, or when a firewall doesn't allow a connection to be established to the remote host.
ssh returns 255 either when an error occurs on its own side or when 255 is returned by the remote command:
EXIT STATUS
ssh exits with the exit status of the remote command or
with 255 if an error occurred.
Usually you would see an error message similar to:
ssh: connect to host host.domain.com port 22: No route to host
Or
ssh: connect to host HOSTNAME port 22: Connection refused
Check-list:
What happens if you run the ssh command directly from the command line?
Are you able to ping that machine?
Does the remote host have ssh installed?
If installed, then is the ssh service running?
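A quick way to work through that list from the client side (host.domain.com stands in for your host, and -z support varies between nc builds):
ssh -v root@host.domain.com 'exit 7'; echo "exit: $?"
ping -c 3 host.domain.com
nc -vz host.domain.com 22
If the first command prints 7, the session works end to end; 255 plus the verbose handshake log points at ssh itself.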
This error will also occur when using pdsh to hosts which are not contained in your "known_hosts" file.
I was able to correct this by SSH'ing into each host manually and answering yes to the "Do you want to add this host to known_hosts?" prompt.
If there's a problem with authentication or connection, such as not being able to read a password from the terminal, ssh will exit with 255 without ever running your actual script. Verify that you can run 'true' instead, to see whether the ssh connection is established successfully.
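For example, with the question's host:
ssh root@host.domain.com true; echo $?
If that alone prints 255, the connection or authentication is failing before any script runs.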
Isn't the problem in the lines:
### Die on demand with message ###
die(){
echo "$#"
exit 999
}
Correct me if I'm wrong, but I believe exit 999 is out of range for an exit code: statuses are limited to 8 bits, so the shell reduces the value modulo 256 instead of passing 999 through.
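You can see what the shell actually does with an out-of-range status:
bash -c 'exit 999'; echo $?
This prints 231, i.e. 999 mod 256.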
I was stumped by this. Once I got past the 255 problem... I ended up with a mysterious error code 1. This is what finally resolved it:
pssh -x '-tt' -h HOSTFILELIST -P "sudo yum -y install glibc"
-P means write the output out as you go and is optional. But the -x '-tt' trick is what forces a pseudo-tty to be allocated.
You can get a clue about what error code 1 means if you try:
ssh AHOST "sudo yum -y install glibc"
You may see:
[slc@bastion-ci ~]$ ssh MYHOST "sudo yum -y install glibc"
sudo: sorry, you must have a tty to run sudo
[slc@bastion-ci ~]$ echo $?
1
Notice the return code for this is 1, which is what pssh is reporting to you.
I found this -x -tt trick here. Also note that turning on verbose mode (pssh --verbose) for these cases does nothing to help you.
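The same workaround applies with plain ssh, where forcing pseudo-tty allocation is spelled -tt:
ssh -tt AHOST "sudo yum -y install glibc"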
It can very much be an ssh-agent issue.
Check whether an ssh-agent is currently running with eval "$(ssh-agent -s)" (this also starts one for the current shell if none is running).
Check whether your identity is added with ssh-add -l and if not, add it with ssh-add <pathToYourRSAKey>.
Then retry your ssh command (or any other command that spawns ssh connections, autossh for example) that returned 255.
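Put together (the key path is an example; substitute your own):
eval "$(ssh-agent -s)"          # start an agent for this shell if needed
ssh-add -l                      # list loaded identities
ssh-add ~/.ssh/id_rsa           # example key path; use your own
ssh root@host.domain.com true   # retry the command that returned 255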
If the above didn't help: check whether the locale is valid on both client and server:
https://www.linuxbabe.com/linux-server/fix-ssh-locale-environment-variable-error
How to not pass locale through ssh
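To quickly rule the locale out, you can force a neutral one on the client for a single run (LC_ALL=C is plain POSIX):
LC_ALL=C ssh root@host.domain.com "sh /home/user/backup_mysql.sh"
If that fixes it, look at the SendEnv LANG LC_* line shipped in many distros' /etc/ssh/ssh_config.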
### Die on demand with message ###
die(){
echo "$#"
exit 999
}
I don't have the rep to comment on Alex's answer, but the exit 999 line returns code 231 on my WSL Ubuntu 20.04.4 box, which fits the modulo behaviour: 999 mod 256 = 231, so the out-of-range value wraps around instead of clamping to 255.
Related
I am working with the latest Manjaro with kernel x86_64 Linux 5.10.15-1-MANJARO.
I am connected to my company network via VPN.
For this I use SNX, build version 800010003.
When I start a Docker container (Docker version 20.10.3, build 48d30b5b32) that should connect to a machine on the company network, I get the following message.
[maurice@laptop ~]$ docker run --rm alpine ping company-server
ping: bad address 'company-server'
Using the IP of the 'company-server' doesn't work either.
A ping outside the container works, whether using the name or the IP.
The resolv.conf looks correct to me.
[maurice@laptop ~]$ docker run --rm alpine cat /etc/resolv.conf
# Generated by NetworkManager
search lan
nameserver 10.1.0.250
nameserver 10.1.0.253
nameserver 192.168.86.1
What I have found out so far.
If I downgrade the packages glibc and lib32-glibc to version 2.32-5, the ping out of the container works again. Because of the dependencies I also had to downgrade gcc, gcc-libs and lib32-gcc-libs to version 10.2.0-4.
I tried the whole thing with a fresh Pop OS 20.10 installation, same problem.
I also did a test with another VPN (OpenVPN) which worked fine. However, this was only a test scenario and cannot be used as an alternative.
I have been looking for a solution for several days but have not found anything. It would be really nice if someone could help me with this.
TL;DR:
On kernels >5.8 the tunsnx interface is no longer created with global scope and needs to be recreated. Small script to the rescue: https://gist.github.com/Fahl-Design/ec1e066ec2ef8160d101dff96a9b56e8
Longer version:
Here are my findings and the solution to fix it (temporarily):
Steps to reproduce:
Connect your snx tunnel.
See that a ping to a server behind the tunnel fails:
docker run --rm -ti --net=company_net busybox /bin/sh -c "ping 192.168.210.210"
Run this command to check the IP and scope of the "tunsnx" interface:
ip -o address show "tunsnx" | awk -F ' +' '{print $4 " " $6 " " $8}'
If you get something like
192.168.210.XXX 192.168.210.30/32 247
or (Thx Timz)
192.168.210.XXX 192.168.210.30/32 nowhere
then the scope is not set to "global" and no connection can be established.
To fix this, as "ronan lanore" suggested, you need to change the scope to global.
This can be done with a little helper script like this one:
#!/usr/bin/env bash
#
# Usage: [dry_run=1] [debug=1] [interface=tunsnx] docker-fix-snx
#
# Credits to: https://github.com/docker/for-linux/issues/288#issuecomment-825580160
#
# Env Variables:
# interface - Defaults to tunsnx
# dry_run - Set to 1 to have a dry run, just printing out the iptables command
# debug - Set to 1 to see bash substitutions
set -eu
_log_stderr() {
echo "$*" >&2
}
if [ "${debug:=0}" = 1 ]; then
set -x
dry_run=${dry_run:=1}
fi
: ${dry_run:=0}
: ${interface:=tunsnx}
data=($(ip -o address show "$interface" | awk -F ' +' '{print $4 " " $6 " " $8}'))
LOCAL_ADDRESS_INDEX=0
PEER_ADDRESS_INDEX=1
SCOPE_INDEX=2
if [ "$dry_run" = 1 ]; then
echo "[-] DRY-RUN MODE"
fi
if [ "${data[$SCOPE_INDEX]}" == "global" ]; then
echo "[+] Interface ${interface} is already set to global scope. Skip!"
exit 0
else
echo "[+] Interface ${interface} is set to scope ${data[$SCOPE_INDEX]}."
tmpfile=$(mktemp --suffix=snxwrapper-routes)
echo "[+] Saving current IP routing table..."
if [ "$dry_run" = 0 ]; then
sudo ip route save >$tmpfile
fi
echo "[+] Deleting current interface ${interface}..."
if [ "$dry_run" = 0 ]; then
sudo ip address del ${data[$LOCAL_ADDRESS_INDEX]} peer ${data[$PEER_ADDRESS_INDEX]} dev ${interface}
fi
echo "[+] Recreating interface ${interface} with global scope..."
if [ "$dry_run" = 0 ]; then
sudo ip address add ${data[$LOCAL_ADDRESS_INDEX]} dev ${interface} peer ${data[$PEER_ADDRESS_INDEX]} scope global
fi
echo "[+] Restoring routing table..."
if [ "$dry_run" = 0 ]; then
sudo ip route restore <$tmpfile 2>/dev/null
fi
echo "[+] Cleaning temporary files..."
rm $tmpfile
echo "[+] Interface ${interface} is set to global scope. Done!"
if [ "$dry_run" = 0 ]; then
echo "[+] Result:"
ip -o address show "tunsnx" | awk -F ' +' '{print $4 " " $6 " " $8}'
fi
exit 0
fi
[ "$debug" = 1 ] && set +x
Same problem for me. Nothing big changed, but the tunsnx interface scope changed from global to 247. Deleting it and recreating it with global scope fixed it.
Just for the collection of possible solutions: I had the same problem but found that the "tunsnx" interface was configured properly with the "global" keyword. In my case the problem was that snx was started after the Docker daemon, and restarting Docker with service docker restart helped.
This question already has answers here:
Test if remote TCP port is open from a shell script
I need to check a port on a remote server in a bash script before the script continues.
I searched here and on the internet, but I can't find an answer which works for me.
I'm using a RHEL 7.2 virtual machine, so I don't have the -z parameter in the nc command or the /dev/tcp/ trick.
Also, nc remote.host.com 1284 < /dev/null doesn't work, because every time I get exit code 1.
Basically I need something like that:
/bin/something host port
if [ $? -eq 0 ]; then
echo "Great, remote port is ready."
else
exit 1
fi
How about nmap?
SERVER=google.com
PORT=443
state=$(nmap -p "$PORT" "$SERVER" | grep "$PORT" | grep open)
if [ -z "$state" ]; then
    echo "Connection to $SERVER on port $PORT has failed"
    exit 1
else
    echo "Connection to $SERVER on port $PORT was successful"
fi
Please note you have to install nmap:
yum install nmap #Centos/RHEL
apt-get install nmap #Debian/Ubuntu
Or you can compile and install from source.
You can do this with Bash itself, using its built-in /dev/tcp device file.
The following will throw a connection refused message if a port is closed.
: </dev/tcp/remote.host.com/1284
Can be scripted like this:
(: </dev/tcp/remote.host.com/1284) &>/dev/null && echo "OPEN" || echo "CLOSED"
Details of /dev/tcp in bash reference manual: https://www.gnu.org/software/bash/manual/html_node/Redirections.html
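Note that if the port is silently filtered rather than actively refused, the redirection can hang for a long time. A variant wraps it in timeout from GNU coreutils:
timeout 5 bash -c ': </dev/tcp/remote.host.com/1284' && echo "OPEN" || echo "CLOSED"
On timeout, timeout exits with status 124, so a filtered port also reports CLOSED.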
I have written the script below to check whether my server is running fine or not, but it is not working properly. It always shows "Not running" even if the server is running fine. Also, the telnet in the script is not working properly. Can anyone help?
#!/bin/sh
export smtp=smtprelay.intra.xxx.com:25
Connect_redmine(){
telnet redmine.intra.xxx.com 443 <<EOF
exit 1;
EOF
}
Connect_redmine>/home/ssx00001/log_connect.txt
grep "Connected" /home/ssx00001/log_connect.txt
status=$?
if [ $status == 0 ]; then
echo `date` "Redmine PROD server is running fine"|mailx -r Redmine@xxx -s "Redmine PROD server is running" 777.p@xxx.com
else
echo "Redmine PROD server is not Running"|mailx -r redmine#xxx.com -s "Redmine PROD server is not running" 777.p#xxx.com
fi
A couple of questions first:
1] What does redmine do? Is it just an HTTPS server?
2] If [1] is true, can you do a wget of the index page, and use the result of that? It should be a lot easier to parse.
3] Telnetting into an HTTPS server, as far as I know, won't work, because telnet doesn't do any of the handshaking necessary for an SSL connection (which has to happen before any content is sent).
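If you do want to probe the TLS port directly, openssl s_client performs the handshake that telnet cannot (a sketch, assuming the stock openssl command-line tool):
echo | openssl s_client -connect redmine.intra.xxx.com:443 >/dev/null 2>&1 && echo "up" || echo "down"
The leading echo closes stdin so s_client exits right after the handshake instead of waiting for input.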
Using wget, you can do something like this:
wget https://redmine.intra.xxx.com/index.html
if [ -f "index.html" ] && [ -s "index.html" ]
then
# The service is live
else
# Something is wrong
fi
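If you prefer not to leave index.html on disk, a variant (a sketch under the same assumptions) leans on wget's exit status, which is 0 on success:
if wget -q -O /dev/null https://redmine.intra.xxx.com/index.html
then
    echo "The service is live"
else
    echo "Something is wrong"
fi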
I have this script :
#!/bin/bash
./process-list $1
det=$?
echo $det
if [ $det -eq 1 ]
then
echo "!!!"
ssh -n -f 192.0.2.1 "/usr/local/bin/sshfs -r 192.0.2.2:/home/sth/rootcheck_redhat /home/ossl7/r"
rk=$(ssh -n -f 192.0.2.1 'cd /home/s/r/rootcheck-2.4; ./ossec-rootcheck >&2; echo $?' 2>res)
if [ $rk -eq 0 ]
then
echo "not!"
fi
fi
exit;
I ssh to system 192.0.2.1 and run the sshfs command on it. Actually, I want to mount a directory of system 192.0.2.2 on system 192.0.2.1 and then run a program (which is located in that directory) on system 192.0.2.1. All these ssh and sshfs commands work properly when I run them manually, and the output of ossec-rootcheck is written to the file res; but when I run this script, the mount is done yet no output is written to res. I guess ossec-rootcheck is run, but I don't know why the output isn't written!
This script used to work properly before; I don't know what happened suddenly!
As far as I understand the script, the remote machine redirects stdout to stderr (>&2), but how does that get back to the local machine where ssh is being evaluated?
The closing ' on the rk= line means the 2>res happens locally (and there is no error from ssh; the remote error, if any, is lost when ssh completes successfully). You could try >res instead; it will capture whatever ssh prints out, unfortunately including non-errors.
Sometimes when connecting to a remote SSH server I get Connection Closed By *IP*; Couldn't read packet: Connection reset by peer. But after trying one or two more times it connects properly.
This presents a problem with a few bash scripts I use to automatically upload my archived backups to the SSH server, like so;
export SSHPASS=$sshpassword
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.sql.gz
bye
!
How can I have this part loop until it actually properly connects?
UPDATE: (Solution)
RETVAL=1
while [ $RETVAL -ne 0 ]
do
export SSHPASS=$sshpassword
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.tgz
bye
!
RETVAL=$?
[ $RETVAL -eq 0 ] && echo Success
[ $RETVAL -ne 0 ] && echo Failure
done
Try something like this :
export SSHPASS=$sshpassword
sshpassFunc() {
sshpass -e sftp -oBatchMode=no -b - root@$sshaddress << !
cd $remotefolder
put $backupfolder/Qt_$date.sql.gz
bye
!
}
until sshpassFunc; do
sleep 1
done
(not tested)
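If you would rather not retry forever, a capped variant is straightforward (a sketch; the limit of 5 is arbitrary):
tries=0
until sshpassFunc || [ $((tries += 1)) -ge 5 ]; do
    sleep 1
done
Check $tries afterwards if you need to tell success from exhaustion.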
I am not a shell scripting expert, but I would check the return value of sshpass when it exits.
From man ssh:
ssh exits with the exit status of the remote command or
with 255 if an error occurred.
From man sshpass:
Return Values
As with any other program, sshpass returns 0 on success. In case of
failure, the following return codes are used:
1  Invalid command line argument
2  Conflicting arguments given
3  General runtime error
4  Unrecognized response from ssh (parse error)
5  Invalid/incorrect password
6  Host public key is unknown. sshpass exits without confirming the new key.
In addition, ssh might be complaining about a man in the middle
attack. This complaint does not go to the tty. In other words, even
with sshpass, the error message from ssh is printed to standard error.
In such a case ssh's return code is reported back. This is typically
an unimaginative (and non-informative) "255" for all error cases.
So try to run the command, and check its return value. If the return value was not 0 (for SUCCESS) then try again. Repeat using a while loop until you succeed.
Sidenote: why are you using sshpass instead of public-key (passwordless) authentication? It is more secure (you don't have to write down your password) and makes logging in via regular ssh as easy as ssh username@host.
There's even an easy tool to set it up: ssh-copy-id.
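For example, reusing the question's host:
ssh-copy-id root@host.domain.com
After that, plain ssh root@host.domain.com logs in without a password prompt.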