Execute a shell script on servers using Ansible - Linux

I have to run a shell script on multiple servers using Ansible. I am using the following code.
- name: start script
  hosts: list_of_host
  become: yes
  gather_facts: no
  roles:
    - startscript
In the startscript role I have the code below:
- name: start script
  shell: /bin/bash /home/ansible/test_script.sh
  changed_when: false
test_script.sh is a forever-running process on the server, so when I execute this command Ansible never returns from that host. I need it to start the script on the server, return, and move on to start the script on the next server.

If /home/ansible/test_script.sh is "forever running", you should convert it to a service and start it via systemctl.
Ansible will, of course, wait for the shell: command to complete; shell should not be used for forever-running scripts.
You can also start /home/ansible/test_script.sh inside a screen session, so it continues to run after Ansible exits, or you can push the script into background execution by adding & at the end of your shell: command and prefixing it with nohup, so it is not terminated when Ansible's SSH connection disconnects.
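A minimal sketch of the nohup variant (the log path is an assumption; redirecting stdin and stdout lets the SSH session close cleanly):

- name: start script in background
  shell: nohup /bin/bash /home/ansible/test_script.sh </dev/null >/tmp/test_script.log 2>&1 &
  changed_when: false

Ansible's built-in alternative is a fire-and-forget async task (the async value is illustrative; it is the maximum allowed runtime in seconds):

- name: start script, fire and forget
  shell: /bin/bash /home/ansible/test_script.sh
  async: 2592000  # let the job run for up to 30 days
  poll: 0         # do not wait; move on to the next host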

Related

The subtle difference between an Ansible "shell" task and directly running shell commands on an Ubuntu 20.04 EC2 machine?

I have a Vagrant script that pulls down a Bionic (Ubuntu 18.04) box, which then runs an Ansible playbook that creates two Ubuntu 20.04 EC2s; those successfully run through every task I assign. I am able to run everything I want in a largely automated download and execution for a publisher/subscriber setup. Here is the issue: I can run my .sh and .py scripts manually and the system works, but when I use the Ansible methods I must be doing something wrong, much like these solutions point to:
Shell command works on direct hosts but fail on Ansible
ansible run command on remote host in background
https://superuser.com/questions/870871/run-a-remote-script-application-in-detached-mode-in-ansible
What I want to do is simply correct the issue with this, and run it in the background.
- name: Start Zookeeper
  shell: sudo /usr/local/kafka-server/bin/zookeeper-server-start.sh /usr/local/kafka-server/config/zookeeper.properties </dev/null >/dev/null 2>&1 &

- name: Sleep for 15 seconds and continue with play
  wait_for:
    timeout: 15

- name: Start Kafka broker
  shell: sudo /usr/local/kafka-server/bin/kafka-server-start.sh /home/ubuntu/usr/local/kafka-server/config/server.properties </dev/null >/dev/null 2>&1 &
I have tried it with just a single & at the end, as well as passing in explicit calls to my user account "ubuntu". I've used become: yes. I really don't want to use a daemon, especially since others seem to have used this approach successfully before.
I do want to note a glaring sign that I can't seem to think through: it hangs when I don't include the &, but if I do include the & it just outright fails. That made me think it was running, but the play won't proceed because these are listener processes.
# - name: Start Zookeeper
#   become: yes
#   script: /usr/local/kafka-server/bin/zookeeper-server-start.sh
#   args:
#     chdir: /usr/local/kafka-server/config/zookeeper.properties
This failed, and I'd rather not create another script to copy over the directories and localize it if there is a simple solution to the first block of code.
There are multiple ways to skin this cat, but I'd rather just have my mistake in the shell Ansible command fixed, and I don't see it.
As explained in the answers in your third link:
https://superuser.com/questions/870871/run-a-remote-script-application-in-detached-mode-in-ansible
this happens because the script process is a child process of the shell spawned by Ansible. To keep the process running after Ansible has finished, you would need to disown this child process.
The proper way to do this is to configure the software (ZooKeeper in your case) as a service. There are plenty of examples of this, such as:
https://askubuntu.com/questions/979498/how-to-start-a-zookeeper-daemon-after-booting-under-specific-user-in-ubuntu-serv
Once you have configured it as a service, you can start or stop it using Ansible's service module.
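As a hedged sketch, a minimal systemd unit for ZooKeeper might look like this (the unit name, paths, and user are assumptions based on the question, not a canonical config):

# /etc/systemd/system/zookeeper.service
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
User=ubuntu
ExecStart=/usr/local/kafka-server/bin/zookeeper-server-start.sh /usr/local/kafka-server/config/zookeeper.properties
ExecStop=/usr/local/kafka-server/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

With that in place, the playbook tasks shrink to plain service calls:

- name: Start Zookeeper via systemd
  become: yes
  service:
    name: zookeeper
    state: started
    enabled: yes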

Running forever node tasks from a shell script using Ansible on EC2 instances

I'm new to Ansible and was trying to configure my multiple EC2 instances using the configuration management tool.
I was able to run it. However, I am stuck at the tail end. I have a shell script (to be run by the playbook) that in turn contains the following tasks:
forever start app.js
forever start serverScripts.js
The script does run, but the app is not started with forever. When I do forever list it shows
"No forever processes running"
but node processes do run in the background (shown below).
root 9725 65.0 6.1 933920 125432 ? Sl 17:35 0:02 /usr/bin/nodejs app.js
Command used in the playbook:
- name: Execute the script
  command: sh {{ path }}/developmentProcessScript.sh
Kindly guide me on how to run the script in exactly the same manner with forever!
You'll only be able to see the forever processes with forever list when you run it as the same user that started them. In this case that's root.
sudo su - root -c "forever list"
If you want forever to run as a different user, set the become_user to something else inside your playbook.
- name: Execute the script
  command: sh {{ path }}/developmentProcessScript.sh
  become_user: some_other_user
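Note that become_user only takes effect when privilege escalation is enabled, so the task (or play) usually also needs become: yes:

- name: Execute the script
  command: sh {{ path }}/developmentProcessScript.sh
  become: yes
  become_user: some_other_user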

Node.js app server deployment

I have a dedicated GoDaddy server.
I need to run a Node app on it.
I can do that by SSHing in and running
node app.js
The problem is that when the SSH connection is disconnected, the app stops working.
How do I run it so that it does not stop?
Create a shell script (e.g. yourScript.sh) and put your command "node app.js" inside it.
Example yourScript.sh:
#!/usr/bin/env bash
node app.js
Make sure you have execute permission:
chmod +x yourScript.sh
Then run with:
nohup ./yourScript.sh &
This means the process doesn't exit when you disconnect: nohup catches the HUP signal. nohup doesn't put the job in the background automatically; we need to say that explicitly using &.
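By default nohup appends the script's output to a file named nohup.out in the current directory; to put it somewhere else, redirect explicitly (the log path here is just an example):

nohup ./yourScript.sh >/var/log/yourScript.log 2>&1 &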
I use Supervisor to run Node.js apps in production. It has a convenient command-line API to show the status of the process and start/stop it, supports restarting the app on reboot, etc.
The config file looks like this:
[program:myapp]
directory=/home/myapp/app/current
command=node server/index.js
autostart=true
autorestart=true
environment=
    PORT=3000,
    MY_ANOTHER_VAR="something"
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
user=myapp
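Day-to-day control then goes through supervisorctl (standard Supervisor commands; "myapp" matches the program name in the config above):

sudo supervisorctl reread          # pick up the new config file
sudo supervisorctl update          # start newly added programs
sudo supervisorctl status myapp    # check whether the app is running
sudo supervisorctl restart myapp   # restart after a deploy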

Cannot make a cronjob get the status of a service

I'm using Upstart to run a couple of services when the system reboots. Those services should always be running. I have noticed that some of them crash eventually, so I'm trying (without success) to create a watchdog script.
This script checks the status of the service. If the service is down, it should start the service and send me an email about the issue. The email script is in PHP and works fine.
The problem with the watchdog bash script is that I'm only able to execute the script and read the status of the service when I launch the script manually. When using a cronjob to execute the script, I get an "empty status" output.
I'll show you the script:
#!/bin/bash
# Check the service
status=$(status SERVICE | awk '{print $2}')
echo "Status of the SERVICE: $status"
When I execute it manually I get:
Status of the SERVICE: stop/waiting
And If execute it with a cronjob I get:
Status of the SERVICE:
As you can see, I'm not getting any output when executing the script from a cronjob. In short, the cronjob is running, but without providing me with the status of the service.
Hope your X-ray vision can spot the error that I can't.
The difference is usually environment variables, likely PATH. Run
which status
to see which executable is being run, and then put that full path into the cron invocation.
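A hedged sketch of the fix, assuming which status reports /sbin/status (the usual location of Upstart's status binary, but verify on your system):

#!/bin/bash
# Use the absolute path so cron's minimal PATH doesn't matter
status=$(/sbin/status SERVICE | awk '{print $2}')
echo "Status of the SERVICE: $status"

Alternatively, set PATH explicitly at the top of the crontab so the script sees the same tools it finds in an interactive shell:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin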

Upstart failing to initialize a gunicorn server

I am trying to start an instance of a server running gunicorn. Here is my upstart script:
expect daemon

script
    cd /opt/app/live/srv/poi_proxy
    exec /usr/local/bin/gunicorn server:app -c /etc/gunicorn.conf
end script
And here is the gunicorn config file:
bind = '0.0.0.0:80'
workers = 3
worker_class = 'gevent'
The problem I'm having is that when running the command from the command prompt, the server starts without issue. However, when using the upstart script, it generates a defunct process for each of the children.
I also believe the path has something to do with it. When starting the server from the command line, if I do:
cd /opt/app/live/srv/poi_proxy
sudo /usr/local/bin/gunicorn server:app -c /etc/gunicorn.conf
it works fine; however, with:
sudo /usr/local/bin/gunicorn /opt/app/live/srv/poi_proxy/server:app -c /etc/gunicorn.conf
I am faced with the same problem as when using upstart.
Any idea of what could be wrong or how to fix it would be greatly appreciated.
It looks like part of the problem here is that you have expect daemon in the upstart config file, but you are not calling gunicorn in daemon mode. This causes the PID to be tracked incorrectly and the initctl stop command to hang.
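A sketch of the matching fix: gunicorn has a -D/--daemon flag, so either make it daemonize to match expect daemon, or remove the expect daemon stanza and keep gunicorn in the foreground so Upstart tracks the process directly (paths are from the question):

expect daemon

script
    cd /opt/app/live/srv/poi_proxy
    exec /usr/local/bin/gunicorn server:app -c /etc/gunicorn.conf --daemon
end script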
