I've got a server on which Supervisord is managing my processes. I normally start supervisord with the following command:
sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf
I'm now trying to set things up with Ansible, but I'm unsure of how I should start supervisord from it. I can of course do it using something like:
- name: run supervisord
  command: "/var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf"
This works, but only the first time you run it. The second time you run the same playbook, supervisord is of course already running, which causes the following error:
TASK [run supervisord] *******************************************************
fatal: [ansible-test1]: FAILED! => {"changed": true, "cmd":
["/var/www/imd/venv/bin/supervisord", "-c",
"/var/www/imd/deploy/supervisord.conf"], "delta": "0:00:00.111700",
"end": "2016-06-03 11:57:38.605804", "failed": true, "rc": 2, "start":
"2016-06-03 11:57:38.494104", "stderr": "Error: Another program is
already listening on a port that one of our HTTP servers is configured
to use. Shut this program down first before starting
supervisord.\nFor help, use /var/www/imd/venv/bin/supervisord -h",
"stdout": "", "stdout_lines": [], "warnings": []}
Does anybody know how I can correctly run supervisord with Ansible? All tips are welcome!
[EDIT]
Because the solution in the answer by mbarthelemy doesn't work for socket files, I've now managed to get it working with the following:
- name: run supervisord
  shell: if [ ! -S /var/run/supervisor.sock ]; then sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf; fi
This of course is not very "ansibleish". If anybody has a real Ansible-based solution that would still be really welcome.
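One more Ansible-native variant of the same check, a sketch I haven't verified against this exact setup: probe the socket with the stat module and make the start command conditional on the result:

- name: check for supervisord socket
  stat:
    path: /var/run/supervisor.sock
  register: supervisor_sock

- name: run supervisord
  command: /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf
  become: true
  when: not supervisor_sock.stat.exists

This keeps the check and the start as separate, readable tasks instead of hiding the logic inside a shell one-liner.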
You can use the supervisorctl module:
- supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf
or
- name: Restart my_app
  supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf
Full documentation is at https://docs.ansible.com/ansible/2.7/modules/supervisorctl_module.html#examples
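Note that the supervisorctl module talks to an already-running supervisord; it manages programs under the daemon rather than launching the daemon itself. So for the original question you would still need to start supervisord first, and then a sketch like this (program name as in the examples above) ensures the program is up:

- name: ensure my_app is started
  supervisorctl:
    name: my_app
    state: started
    config: /var/opt/my_project/supervisord.conf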
Your situation is specific, since you don't seem to be using a regular Supervisor installed as a normal system package; in that case you would start/stop/restart it like any other regular system service, using Ansible's service module.
By default, upon starting, Supervisor creates a socket on which it listens for administration commands from supervisorctl. When it stops, it is supposed to remove it.
Try to find where this socket is created in your specific setup (default would be /var/run/supervisor.sock).
Then, let Ansible's command module know that the socket exists whenever the supervisord process is already running, using the creates option (documentation). This way it won't try to run the command if supervisord is already running:
- name: run supervisord
  command: "./venv/bin/supervisord -c ./deploy/supervisord.conf"
  args:
    chdir: /var/www/imd
    creates: /var/run/supervisor.sock
Edit: while this would be the right answer if /var/run/supervisor.sock were a regular file, it won't work because it's a socket, and Ansible's creates parameter won't detect it.
The most Ansible-ish solution I can think of is using an external Ansible module like one of these, to check whether your process already exists (test_process) or is already listening (test_tcp).
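Without external modules, a rough built-in equivalent (a sketch, untested here) is to probe for the process with pgrep and gate the start command on the result:

- name: check whether supervisord is running
  command: pgrep -f supervisord
  register: supervisord_check
  failed_when: false
  changed_when: false

- name: run supervisord
  command: "./venv/bin/supervisord -c ./deploy/supervisord.conf"
  args:
    chdir: /var/www/imd
  when: supervisord_check.rc != 0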
Related
I am using VS Code's feature to create development containers for my services. Using the following layout, I've defined a single service (for now). I'd like my node project to run automatically after the container is configured, so that it listens for HTTP requests, but I haven't found the best way to do so.
My Project Directory
project-name/
    .devcontainer.json
    package.json (etc)
docker-compose.yaml
Now in my docker-compose.yaml, I've defined the following structure:
version: '3'
services:
  project-name:
    image: node:14
    command: /bin/sh -c "while sleep 1000; do :; done"
    ports:
      - 4001:4001
    restart: always
    volumes:
      - ./:/workspace:cached
Note how I need /bin/sh -c "while sleep 1000; do :; done" as the service command; according to the VS Code docs this is required so that the service doesn't exit.
Within my .devcontainer.json:
{
    "name": "My Project",
    "dockerComposeFile": [
        "../docker-compose.yaml"
    ],
    "service": "project-name",
    "shutdownAction": "none",
    "postCreateCommand": "npm install",
    "postStartCommand": "npm run dev", // this causes the project to hang while configuring?
    "workspaceFolder": "/workspace/project-name"
}
I've added a postCreateCommand to install dependencies, but I also need to run npm run dev so that my server listens for requests. If I add this command as the postStartCommand, the project does build and run, but it hangs on "Configuring Dev Server" (with a spinner at the bottom of VS Code), since this starts my server and the script never exits. I feel like there should be a better way to trigger the server to run after the container is set up?
See https://code.visualstudio.com/remote/advancedcontainers/start-processes
In other cases, you may want to start up a process and leave it running. This can be accomplished by using nohup and putting the process into the background using &. For example:
"postStartCommand": "nohup bash -c 'your-command-here &'"
I just tried it, and it works for me - it removes the spinning "Configuring Dev Container" that I also saw. However, it does mean the process is running in the background so your logs will not be surfaced to the devcontainer terminal. I got used to watching my ng serve logs in that terminal to know when compilation was done, but now I can't see them. Undecided if I'll switch back to the old way. Having the Configuring Dev Container spinning constantly was annoying but otherwise did not obstruct anything that I could see.
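If you want the process in the background but still want its logs, one workaround (a sketch; the log path is arbitrary) is to redirect the command's output to a file:

"postStartCommand": "nohup bash -c 'npm run dev > /tmp/dev.log 2>&1 &'"

Then, from any terminal inside the container, you can follow the output with:

tail -f /tmp/dev.log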
I'm trying to start graphhopper using pm2... graphhopper is a Java application, and the way I start it in the terminal is by going to its folder and entering the following command:
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
This application works fine running from the command line, but I haven't succeeded on running it as a service with pm2. The config file I'm using is this one (pm2 start config.json):
{
    "apps": [
        {
            "name": "graphhopper",
            "cwd": ".",
            "script": "/usr/bin/java",
            "args": [
                "-jar",
                "/home/myyser/graphhopper/map-matching/matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar",
                "server",
                "config.yml"
            ],
            "log_date_format": "YYYY-MM-DD HH:mm Z",
            "exec_interpreter": "",
            "exec_mode": "fork"
        }
    ]
}
I'm 100% sure that what I'm getting wrong here is the way I'm passing the "server", "config.yml" parameters... Looking into pm2 logs graphhopper I can see that those parameters are not being recognized at all... I've tried tweaking the way it's done, but I didn't manage to figure out the right solution. I know how to start a Java application using pm2 when it takes no parameters. But how can I do it with a Java application that has parameters, as in the case of graphhopper?
As stated in the comments, this issue can be solved by creating a bash script and running it with pm2, instead of running the Java application directly... The bash script used was the file graphhopper.sh, as follows:
#!/bin/bash
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
And to start it as a service with pm2:
pm2 start graphhopper.sh --name=graphhopper
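Note that both the script and the original config pass config.yml as a relative path, so the working directory matters: it resolves against wherever the process starts. A hedged variant (paths taken from the question) is to pin cwd in the process file so the wrapper always runs from the graphhopper directory:

{
    "apps": [
        {
            "name": "graphhopper",
            "cwd": "/home/myyser/graphhopper/map-matching",
            "script": "./graphhopper.sh"
        }
    ]
}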
You can also run a fat jar directly in pm2 - use two dashes to separate the command args:
pm2 start java -- -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
My Java app worked fine - just use a single args string, not an array.
apps:
  - name: 'admin'
    script: '/opt/homebrew/opt/openjdk@11/bin/java'
    args: '-jar ./wweevvAdmin/target/wweevvAdmin-1.0-SNAPSHOT.jar'
    instances: '1'
    autorestart: true
I am trying to use Ansible to install a .run file (created using Makeself 2.1.5) using the following task in a playbook:
- name: Install Program
  command: /home/user/folder/program.run -- /S /D=/home/user/folder/destination/
Here, /S is a switch to run a silent installation and the parameter /D sets the destination for the installation. Running this command in the console succeeds.
Ansible claims to run the task without error:
changed: [127.0.0.1] => {
    "changed": true,
    "cmd": [
        "/home/user/folder/program.run",
        "--",
        "/S",
        "/D=/home/user/folder/destination/"
    ],
    "delta": "0:00:00.065261",
    "end": "2017-01-06 09:08:43.114265",
    "invocation": {
        "module_args": {
            "_raw_params": "/home/user/folder/program.run -- /S /D=/home/user/folder/destination/",
            "_uses_shell": false,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "warn": true
        },
        "module_name": "command"
    },
    "rc": 0,
    "start": "2017-01-06 09:08:43.049004",
    "stderr": "",
    "stdout": "",
    "stdout_lines": [],
    "warnings": []
}
So somehow the additional parameters cause the execution to fail without Ansible noticing. I've tried using the shell module and various ways of quoting my command, but to no avail.
If I do not pass parameters to the .run file, that is use command: /home/user/folder/program.run, an installation prompt is opened asking for user input, which defeats the purpose of Ansible.
Does anybody have a solution for this? A possible workaround might be to use the expect module, but I would prefer to be able to use the command line arguments, as this is not the only file I would like to install.
I am using Ansible 2.2.0.0 on Ubuntu 16.04.1 LTS.
EDIT:
Following techraf's advice, I found a simple solution using the shell module. Using shell: konsole -e /home/user/folder/program.run /S /D=/home/user/folder/destination/ caused the installation to complete correctly. It is also possible to put the command in a script file and run it using the script module.
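For completeness, the expect-module workaround mentioned above would look roughly like this; the prompt pattern is hypothetical and depends on what the installer actually asks, and the module requires pexpect on the target machine:

- name: Install Program interactively
  expect:
    command: /home/user/folder/program.run
    responses:
      'Destination directory.*': '/home/user/folder/destination/'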
Try using the shell module instead of command:
- name: Install Program
  shell: /home/user/folder/program.run -- /S /D=/home/user/folder/destination/
You are using -- in the command execution, which actually prevents the shell from parsing the arguments that follow. It's a shell convention for ending option parsing, not a parameter of the command.
I can't test it now (and frankly, without the real program you run it's impossible to be sure), but I bet it should work.
If the above won't work, you'd probably have to put this line in a script and run it with the script module.
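The script-module route would look something like this (the script path and name are illustrative); the module copies a local script to the remote host and runs it there:

- name: Install Program via wrapper script
  script: files/install_program.sh

with files/install_program.sh containing:

#!/bin/sh
/home/user/folder/program.run -- /S /D=/home/user/folder/destination/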
I've installed an npm package / script in a JAIL on FreeNAS 9.10. (FreeBSD based)
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically all I need to do is run "npm start" in the correct directory on startup. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
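For the quick-and-dirty route, the jail's /etc/rc.local could be as small as this sketch (the application path is an assumption; adjust to where your project and npm actually live):

#!/bin/sh
# start the npm app in the background, logging to a file
cd /path/to/your/app && /usr/local/bin/npm start > /var/log/myapp.log 2>&1 &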
I don't know about npm start, but for node.js I made an rc script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry JS file.
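If you need npm start itself rather than invoking node directly, one hedged adaptation (assuming npm is at /usr/local/bin/npm) is to have daemon(8) run a shell that changes into the project directory first, instead of patching the JS entry file:

command_args="-u USERNAME -P /var/run/SERVICENAME.pid /bin/sh -c 'cd /home/USERNAME/PROGDIR && exec /usr/local/bin/npm start'"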
I'm setting up a container with the following Dockerfile
# Start with project/baseline
FROM project/baseline # => image with mongo / nodejs / sailsjs
# Create folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
ADD src/ /var/project/ # src contains a list of folders, each one being a sails app
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
    mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}

# Start services
function start {
    for service in $(ls "$PROJECT_PATH"); do
        cd "$PROJECT_PATH/$service"
        npm start # Runs sails lift on each service
    done
}

# start mongodb
start_mongo

# start web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I got the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started so that the container remains in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false

[program:mongodb]
command=/usr/bin/mongod

[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"

[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
it is called in the Dockerfile like:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend upon mongo starting correctly, and supervisord does not wait that long, the services fail to start. Any idea how to solve that?
By the way, is it really best practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (but let's say it's temporary until I fix the problem I have with supervisor), so I'm doing the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are run... how can it be kept active?
If you want to cleanly start multiple services inside a container, you'll need a process supervisor of some sort. One option is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
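For the mongo-dependency problem specifically: supervisord's priority setting only orders startup, it does not wait for a program to become ready, so a common pattern is to poll mongo before exec'ing each service. A sketch (the ping loop assumes the legacy mongo shell is available in the container):

[supervisord]
nodaemon=true   ; keep supervisord in the foreground so the container stays alive

[program:mongodb]
command=/usr/bin/mongod
priority=1

[program:process1]
command=/bin/bash -c "while ! mongo --eval 'db.runCommand({ping: 1})' >/dev/null 2>&1; do sleep 1; done; cd /var/project/service1 && exec node app.js"
priority=10

Note also that with CMD ["/usr/bin/supervisord"] the container only stays up if supervisord runs in the foreground, hence nodaemon=true rather than the nodaemon=false from the question's config.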