Jenkins execute bash shell command - linux

Hello all, I have built a CI server with Jenkins. I want to execute a remote bash shell script on a testing server (Ubuntu Server 12.04 LTS) via the SSH plugin, using this sample script:
#!/bin/bash
cd "$(dirname "$0")"
echo "[INFO] Stopping service mix service"
cd /home/setup/Development/apache-servicemix-4.5.0/bin
./stop
echo "[INFO] Wait few secs for stopping service"
sleep 5
echo "[INFO] Start service"
./start
sleep 1
exit;
But ServiceMix does not start. If I log in manually over SSH from a bash shell and run this script (stored on the test server), it works fine.
Any ideas?
Thank you

I have resolved this. The error was that JAVA_HOME failed to load from the script. The script actually runs in a different shell environment, one that has not loaded the configuration from my .profile. – TienHoang
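A minimal sketch of one way around this, assuming the environment lives in ~/.profile on the test server (the commented JAVA_HOME path is only an example, not the asker's actual JDK location): load the login environment explicitly at the top of the script, since the shell started by the SSH plugin is non-interactive and skips .profile.
#!/bin/bash
# Load the login environment; the SSH plugin starts a non-login shell that skips .profile
[ -f "$HOME/.profile" ] && . "$HOME/.profile"
# Or set the variables directly (example path, adjust to the real JDK install):
# export JAVA_HOME=/usr/lib/jvm/java-7-oracle
# export PATH="$JAVA_HOME/bin:$PATH"
cd /home/setup/Development/apache-servicemix-4.5.0/bin
./stop
sleep 5
./start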

Related

How to execute a local bash script on a remote server via ssh with nohup

I can run a local script on a remote server using the -s option, like so:
# run a local script without nohup...
ssh $SSH_USER#$SSH_HOST "bash -s" < myLocalScript.sh;
And I can run a remote script using nohup, like so:
# run a script on server with nohup...
ssh $SSH_USER#$SSH_HOST "nohup bash myRemoteScript.sh > results.out 2>&1 &"
But can I run my local script with nohup on the remote server? I expect the script to take many hours to complete, so I need something like nohup. I know I could copy the script to the server and execute it, but then I would have to make sure I delete it once the script is complete; I would rather avoid that if possible.
I've tried the following but it's not working:
# run a local script with nohup...
ssh $SSH_USER@$SSH_HOST "nohup bash -s > results.out 2>&1 &" < myLocalScript.sh;
You shouldn't have to do anything special: once you kick off a script on another machine, it should keep running even if you terminate the connection. For example:
ssh $SSH_USER#$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
# Please wait a few seconds for the connection to be established
kill $! # Optional: Kill the last process
If you want to test it, try a simple script like this:
# myLocalScript.sh
echo 'File created - sleeping'
sleep 30
echo 'Finally done!'
The results.out file should immediately be created on the other machine with "File created - sleeping" in it. You can actually kill the local ssh process with kill <your_pid>; the script will keep running on the other machine, print "Finally done!" into the file after 30 seconds, and exit.
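To check this from the local side, here is a small sketch reusing the same variable names and the results.out path from the commands above:
# start the remote job, drop the local connection, then confirm it finished
ssh $SSH_USER@$SSH_HOST "bash -s > results.out 2>&1" < myLocalScript.sh &
sleep 5
kill $!                                       # terminate the local ssh process
sleep 35                                      # give the remote script time to finish its 30-second sleep
ssh $SSH_USER@$SSH_HOST "cat results.out"     # should show both echo lines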

Wifi disconnected before init.d script is run

I've set up a simple init.d script, "S3logrotate", to run on shutdown. The script works fine when run manually from the command line, but it does not function correctly on shutdown.
The script uploads logs from my PC to an Amazon S3 bucket and requires a Wi-Fi connection to run correctly.
Debugging showed that the script does run, but the upload fails.
The problem seems to be that the script runs after the Wi-Fi connection has already been terminated.
These are the blocks I used to test my internet connection in the script.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
    echo "IPv4 is up" >> x.txt
else
    echo "IPv4 is down" >> x.txt
fi
if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up" >> x.txt
else
    echo "The network is down" >> x.txt
fi
The output for this block is:
IPv4 is down
The network is down
Is there any way to set the priority of an init.d script? As in, can I make my script run before the network connection is terminated? If not, is there any alternative to init.d?
I use Ubuntu 16.04 and have dual booted with Windows 10 if that's significant.
Thanks,
sganesan7
You should place your script in:
/etc/NetworkManager/dispatcher.d/pre-down.d
and change its owner and group to root:
chown root:root S3logrotate
and it should work. If you need to do this for a specific interface instead, create a script inside
/etc/NetworkManager/dispatcher.d/
and name it, for example:
wlan0-down
and that should work too.
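A minimal sketch of what such a dispatcher script might look like. NetworkManager passes the interface name as $1 and the event ("pre-down" here) as $2, and the script must be owned by root and executable; the interface name, upload helper, and log path below are placeholders, not taken from the question.
#!/bin/bash
# /etc/NetworkManager/dispatcher.d/pre-down.d/S3logrotate
# $1 = interface (e.g. wlan0), $2 = event ("pre-down" for scripts in pre-down.d)
IFACE="$1"
# Only act for the wireless interface (the interface name is an assumption)
if [ "$IFACE" = "wlan0" ]; then
    # The network is still up during pre-down, so the S3 upload can succeed
    /usr/local/bin/s3-upload-logs.sh >> /var/log/S3logrotate.log 2>&1
fi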

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed; it stops after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Red Hat
Destination OS: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file, both stdout and stderr, with > /some/output/file 2>&1, and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from your install script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
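If you prefer the nohup route mentioned above, a sketch of the equivalent change inside ciInstallAndRun.sh (the log path is a placeholder, as in the answer):
# detach the service start from the SSH session's terminal
nohup ./tctl start test </dev/null >/some/log/file/path.log 2>&1 &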

Shell Script not running properly

I have a Linux shell script that works perfectly when run from the command line, but when scheduled via crontab it does not give the desired results.
The script is quite simple: it checks whether mysql-proxy is running by looking for its PID with the pidof command. If it is not running, the script attempts to start the proxy.
# Check if mysql proxy is off
# if found off, attempt to start it
if pidof mysql-proxy
then
    echo "Proxy running."
else
    echo "Proxy off ... attempting to restart"
    /usr/local/mysql-proxy/bin/mysql-proxy -P 172.20.10.196:3306 --daemon --proxy-backend-addresses=172.20.10.194 --proxy-backend-addresses=172.20.10.195
    if pidof mysql-proxy
    then
        echo "Proxy started"
    else
        echo "Proxy restart failed"
    fi
fi
echo "==============================================="
The script is saved in a file check-sql-proxy.sh and has permissions set to 777. When I run the script from the command line (sh check-sql-proxy.sh) it gives the desired output:
4066
Proxy running.
===============================================
The script is also scheduled to run every 5 minutes in crontab as
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /dev/sql-proxy-restart-log.log
However, when I look at the sql-proxy-restart-log.log file, it contains the output:
Proxy off ... attempting to restart
Proxy restart failed
===============================================
It seems that the pidof command fails to return the PID of the running application, which sends the script down the else branch.
I am unable to figure out how to resolve this, since the script works fine when I run it manually.
Can anyone help me figure out what I am missing with regard to permissions or settings?
Thanks in advance.
Mudasser
Check that the shell is what you think it is (usually /bin/sh, not bash).
Also check the PATH environment variable. For cron jobs it is good practice to fully qualify all paths to binaries, e.g.:
#!/bin/bash
# Check if mysql proxy is off
# if found off, attempt to start it
if /bin/pidof mysql-proxy
etc.
Try pidof /usr/local/mysql-proxy/bin/mysql-proxy (the full path to the executable).
In general, use the same command name that was used to start the mysql-proxy instance.
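A minimal sketch of that check with fully qualified paths; the /sbin/pidof location is an assumption and may differ on your system:
# look for the proxy by the full path it was started with
if /sbin/pidof /usr/local/mysql-proxy/bin/mysql-proxy >/dev/null
then
    echo "Proxy running."
else
    echo "Proxy off ... attempting to restart"
    /usr/local/mysql-proxy/bin/mysql-proxy -P 172.20.10.196:3306 --daemon --proxy-backend-addresses=172.20.10.194 --proxy-backend-addresses=172.20.10.195
fi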
The problem seems to be that the crontab environment is not the same as your interactive shell environment.
You have two simple and proper solutions:
In the first lines of the crontab:
PATH=/foo:/bar:/qux
SHELL=/bin/bash
or
source ~/.bashrc
in your scripts.
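For example, a crontab along these lines; the PATH entries and log path are placeholders to adapt, while the script path and schedule are the asker's own:
SHELL=/bin/bash
PATH=/usr/local/mysql-proxy/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /var/log/sql-proxy-restart-log.log 2>&1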

Getting environment from .profile while executing a command through ssh

I am new to the Linux/Unix world. I would like to trigger a make on a remote machine. For this purpose I created a little script on the remote machine which I want to execute via ssh.
The script looks something like this:
echo "loading .profile"
. ~/.profile
echo "profile loaded"
echo "starting gmake"
cd ~/helloWorld/
gmake all
I invoke the script with the following ssh command:
ssh user@remotehost "cd ~/helloWorld && ./myscript.sh"
When I execute this command, my machine connects to the remote machine. It tells me that the profile is loaded, and then I have to press [CTRL]+[D] to continue with the script. So it seems that the . ./myscript.sh command creates something like a new terminal. I don't want this behaviour. I would like to use the ssh command to automate the build process without having to close the terminal manually. Is there a way to do this?
Thanking you in anticipation,
John
Sourcing a script (. ./myscript.sh) does not invoke a new shell. It runs the commands in the script in the current shell. Something else is going wrong.
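One quick check, a guess not confirmed by the answer above: the need to press [CTRL]+[D] suggests something in .profile is waiting on stdin, so running the remote command with stdin detached should make it finish on its own:
# -n redirects ssh's stdin from /dev/null, so nothing on the remote side can wait for keyboard input
ssh -n user@remotehost "cd ~/helloWorld && ./myscript.sh"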
