I want to be able to run node inside a docker container, and then be able to run docker stop <container>. This should stop the container on SIGTERM rather than timing out and doing a SIGKILL. Unfortunately, I seem to be missing something, and the information I have found seems to contradict other bits.
Here is a test Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
ADD test.js /
ENTRYPOINT ["/node-v0.11.14-linux-x64/bin/node", "/test.js"]
Here is the test.js referred to in the Dockerfile:
var http = require('http');
var server = http.createServer(function (req, res) {
  console.log('exiting');
  process.exit(0);
}).listen(3333, function (err) {
  console.log('pid is ' + process.pid);
});
I build it like so:
$ docker build -t test .
I run it like so:
$ docker run --name test -p 3333:3333 -d test
Then I run:
$ docker stop test
Whereupon the SIGTERM apparently doesn't work: the stop times out after 10 seconds and the container is then killed.
I've found that if I start the node task through sh -c then I can kill it with ^C from an interactive (-it) container, but I still can't get docker stop to work. This contradicts comments I've read saying that sh doesn't pass the signal on, but it might agree with other comments saying that PID 1 doesn't get SIGTERM (since node is started via sh, it will be PID 2).
The end goal is to be able to run docker start -a ... in an upstart job and have stopping the service actually stop the container.
My way to do this is to catch SIGINT (interrupt signal) in my JavaScript.
process.on('SIGINT', () => {
  console.info('Interrupted');
  process.exit(0);
});
This should do the trick when you press Ctrl+C.
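Note that docker stop sends SIGTERM (not SIGINT) by default, so if the goal is for docker stop to work as well, a handler along the same lines for SIGTERM should help. A minimal sketch, assuming the server variable from the test.js above:
// handle the SIGTERM that `docker stop` sends by default
process.on('SIGTERM', () => {
  console.info('Received SIGTERM, shutting down');
  server.close(() => process.exit(0)); // `server` is the http server from test.js
});
This only helps if the signal actually reaches node, which is the other half of the question: node needs to be PID 1 (or be exec'd by its wrapper script) for docker stop's SIGTERM to be delivered, and as PID 1 it gets no default kill-on-SIGTERM behaviour, so an explicit handler like this is what makes docker stop exit promptly.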
Ok, I figured out a workaround myself, which I'll venture as an answer in the hope it helps others. It doesn't completely answer why the signals weren't working before, but it does give me the behaviour I want.
Using baseimage-docker seems to solve the issue. Here's what I did to get this working with the minimal test example above:
Keep test.js as is.
Modify Dockerfile to look like the following:
FROM phusion/baseimage:0.9.15
# disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
# install curl and node as before
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
# the baseimage init process
CMD ["/sbin/my_init"]
# create a directory for the runit script and add it
RUN mkdir /etc/service/app
ADD run.sh /etc/service/app/run
# install the application
ADD test.js /
baseimage-docker includes an init process (/sbin/my_init) which handles starting other processes and dealing with zombie processes. It uses runit for service supervision. The Dockerfile therefore sets my_init as the command to run on boot and adds a script under /etc/service for runit to pick up.
The run.sh script is simple:
#!/bin/sh
exec /node-v0.11.14-linux-x64/bin/node /test.js
Don't forget to chmod +x run.sh!
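If you prefer, that step can also live in the Dockerfile so the image doesn't depend on the executable bit being set on the host; a one-line sketch using the path from the Dockerfile above:
# ensure the runit run script is executable inside the image
RUN chmod +x /etc/service/app/run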
By default, runit will automatically restart the service if it goes down.
Following these steps (and build, run, and stop as before), the container properly responds to requests to shut down, in a timely fashion.
This might look similar to other questions floating around on the internet, but I couldn't find one that matches, so I'm asking here.
The thing is, I have a Go program named abc.go which contains two functions, Run() and Stop(), that are meant to run and stop the someScript.sh script; they are called on API hits. I run the program with sudo go run abc.go someFolder/someScript.sh, passing the path of someScript.sh as an argument. In Stop(), I save the process group ID and then kill the whole process group.
But when I call run and then stop functions, it gives me this output
pid=5844 duration=13.667µs err=exec: already started
and doesn't actually stop the running docker container (I am checking using docker container ls -a ).
The someScript.sh file is:
#!/bin/bash
docker container run --rm --name someContainerName nginx
The abc.go file is:
func Run() {
    someVar = true
    execCMD = exec.Command("/bin/sh", "-c", commandFromTerminal)
    output, err = execCMD.CombinedOutput()
    fmt.Println("Output()=", bp.Output())
    someVar = false
}
func Stop() {
    execCMD.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
    start := time.Now()
    syscall.Kill(-execCMD.Process.Pid, syscall.SIGKILL)
    err := execCMD.Run()
    fmt.Printf("pid=%d duration=%s err=%s\n", execCMD.Process.Pid, time.Since(start), err)
}
As per my understanding, it seems like the docker command written in someScript.sh didn't run the docker container as a child/grandchild of /bin/bash, but rather as a separate process, which the code in my Stop() is unable to actually stop.
Below is the flow as I understand it: calling abc.go internally invokes /bin/bash, which runs sudo as its child, and sudo in turn has someScript.sh as a child. Finally there is the docker container, which is not running as any child/grandchild of this hierarchy, but as a separate process.
My question, finally, is: how do I stop this docker container when Stop() is called? Or how do I make the container run as a child of this hierarchy so that I can kill it with the process group ID method I used above?
PS: I have also tried
err := execCMD.Process.Kill()
if err != nil {
    panic(err.Error())
}
execCMD.Process.Release()
but it too didn't help.
docker is just a client for the docker daemon. docker run simply sends a few HTTP requests to the daemon, and the daemon sets up the container and executes it.
So docker run is a grandchild of your Go program, but the nginx processes are descendants of the Docker daemon, and entirely unrelated to your Go program. Mind you, the docker daemon can even be on a different machine, in principle at least.
That being said,
Assigning SysProcAttr after a process has been started has no effect.
You're calling Run in Stop (very suspicious) and you cannot Run a process that has already been started, even after it terminated.
Sending SIGKILL gives docker run no chance to terminate the container. After fixing the other errors, it's possible that the docker daemon takes care of the cleanup due to the --rm flag (I forget how this works, exactly). If not, send SIGTERM instead.
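Putting those points together, here is a rough, assumption-laden sketch (not the asker's actual code) of how Run and Stop could be restructured: execCMD and commandFromTerminal stay package-level as in the question, Setpgid is set before Start, Start/Wait replace CombinedOutput and the second Run, SIGTERM replaces SIGKILL, and as a belt-and-braces measure Stop also shells out to docker stop someContainerName (the name from someScript.sh):
package main

import (
    "log"
    "os"
    "os/exec"
    "syscall"
    "time"
)

// Package-level state, mirroring the question's code.
var (
    execCMD             *exec.Cmd
    commandFromTerminal string
)

// Run starts the script in its own process group so Stop can signal it later.
func Run() error {
    execCMD = exec.Command("/bin/sh", "-c", commandFromTerminal)
    execCMD.SysProcAttr = &syscall.SysProcAttr{Setpgid: true} // must be set before Start
    if err := execCMD.Start(); err != nil {
        return err
    }
    return execCMD.Wait() // blocks until the script exits and reaps the child
}

// Stop sends SIGTERM to the whole process group (giving `docker run` a chance
// to clean up) and also asks the daemon to stop the container directly, since
// the container's processes are children of the Docker daemon, not of this program.
func Stop() error {
    if execCMD != nil && execCMD.Process != nil {
        _ = syscall.Kill(-execCMD.Process.Pid, syscall.SIGTERM)
    }
    return exec.Command("docker", "stop", "someContainerName").Run()
}

func main() {
    commandFromTerminal = "bash " + os.Args[1] // e.g. someFolder/someScript.sh
    go func() {
        if err := Run(); err != nil {
            log.Println("run:", err)
        }
    }()
    time.Sleep(10 * time.Second) // stand-in for the API hit that triggers Stop
    if err := Stop(); err != nil {
        log.Println("stop:", err)
    }
}
Because the nginx process belongs to the daemon, it is docker stop (or the --rm cleanup once docker run exits) that actually removes the container; killing the local process group alone is not enough.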
I have the following script:
#!/bin/bash
su newuser
node index.js
This script is an entrypoint for a docker container.
When I run the container, I see that the script gets executed and I switch to newuser. However, index.js does not get called. But as soon as I type "exit" to exit newuser, index.js starts running.
Can someone explain what the problem is here, please?
su newuser starts a new interactive shell. That shell is a process that doesn't exit until you exit it, and only once it exits will the next command in your original bash script execute.
If you want to run node as newuser, use this command instead:
su newuser -c "node index.js"
Probably you want to include the full path to node as well, because launching scripts this way often doesn't bring up the full environment that you might expect (PATH might not be complete compared to running a full shell):
su newuser -c "/path/to/node index.js"
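One refinement worth considering (my addition, not part of the original answer): exec the command so the wrapper shell is replaced instead of lingering as an extra parent process. Whether a SIGTERM sent to the container then actually reaches node still depends on how your su implementation treats its child, so treat this as a sketch rather than a guarantee:
#!/bin/bash
# replace this entrypoint shell with su, which runs node as newuser
exec su newuser -c "/path/to/node index.js"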
If the script is an entrypoint script, you shouldn’t need to set the username at all; a USER directive in the Dockerfile can set the (default) user name or user ID.
For this simple setup I wouldn’t use an entrypoint script at all. I’d put in my Dockerfile
USER newuser
CMD ["node", "index.js"]
In general I’d avoid entrypoint scripts or ENTRYPOINT directives that run fixed commands (and prefer CMD over ENTRYPOINT) because they make it difficult to do the otherwise very-useful-when-things-are-broken
docker run --rm -it myimage sh
I'm following a tutorial on PluralSight regarding vagrant and hubot slack setup.
The only difference is that I'm using hubot-slack.
If I start the hubot by invoking hubot script from terminal - everything works fine - the bot connects and responds to commands.
Unfortunately, when hubot is started as a service by upstart, I get this logged to /var/log/upstart/myhubot.log: Cannot load adapter slack - Error: Cannot find module 'hubot-slack'
my /bin/hubot file looks like this (this works just fine when executed from cli):
#!/bin/sh
set -e
npm install
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:$PATH"
export HUBOT_SLACK_TOKEN={}
exec node_modules/.bin/hubot --name "hubot" --adapter slack "$@"
my .conf file that's executed as a service looks like this (can't find module):
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
setuid vagrant
env HOME="/home/vagrant"
chdir /vagrant/my-awesome-hubot
console log
script
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
export HUBOT_SLACK_TOKEN={}
echo "DEBUG: `set`" >> /tmp/myhubot.log
exec node_modules/.bin/hubot --name "hubot" --adapter slack
end script
respawn
Keep in mind that the slack token is excluded from these scripts.
Debug reveals that chdir does the correct thing and the pwd is exactly the same as when I execute the script manually.
I've tried removing the entire nodejs project and regenerating it with yeoman from scratch, and I've also tried installing hubot-slack both globally and locally, but to no avail.
The .conf file doesn't run npm install; instead, in provision.sh I cd (as the vagrant user) into the project directory, run npm install, and only then restart the service. I also make sure to clean everything up before another round of testing with vagrant provision. The relevant part of provision.sh is:
cp /vagrant/upstart/myhubot.conf /etc/init/myhubot.conf
sudo -u vagrant -i sh -c 'cd /vagrant/my-awesome-hubot; npm install'
service myhubot restart
Do you have any suggestions?
I've just spent the day working through the same issue as this unanswered question so thought I would update with my solution.
The currently generated hubot app is started from the cli, in the folder where hubot was generated, with HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack, so it uses the default bin/hubot script.
Your conf file needs to do the same, and should therefore run the following:
description "My hubot"
author "Me bla#bla.com"
start on runlevel [2345]
stop on runlevel [016]
script
chdir /vagrant/my-awesome-hubot
export PATH="node_modules:node_modules/.bin:node_modules/hubot/node_modules/.bin:/usr/bin/coffee:/usr/bin/node:$PATH"
HUBOT_SLACK_TOKEN=xoxb-YOUR-TOKEN-HERE ./bin/hubot --adapter slack --name "hubot" >> /tmp/myhubot.log
end script
respawn
After updating strongloop to v2.10, slc stopped writing logs.
Also, I couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
exec slc run /home/ubuntu/app/ \
-l /home/ubuntu/app/app.log \
-p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Are you writing the pid to a file so that you can send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), then you would be better off letting slc run log to its stdout and let Upstart take care of writing it to a file so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
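For example (the path shown is an assumption; use whatever your system actually reports), I look the path up once and hard-code it in the job's exec line:
$ which slc
/usr/local/bin/slc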
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run
I'm setting up a container with the following Dockerfile
# Start with project/baseline (image with mongo / nodejs / sailsjs)
FROM project/baseline
# Create folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project
# Start mongodb
function start_mongo {
  mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}
# Start services
function start {
  for service in $(ls); do
    cd $PROJECT_PATH/$service
    npm start # Runs sails lift on each service
  done
}
# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I get the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started so that the container remains in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
it is called in the Dockerfile like:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, and supervisord does not wait long enough for it, the services are not started. Any idea how to solve that?
By the way, is it even a good practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container is running. I know this is not clean (but I'll say it's temporary so I can fix the problem I have with supervisor), but I'm doing the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are started... how can it be kept active?
If you want to cleanly start multiple services inside a container, you can use a process supervisor of some sort. One option is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
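To tie this back to the supervisord attempt in the question, a minimal sketch of a config that keeps the container alive follows. The key points are nodaemon=true, so supervisord stays in the foreground as the container's main process, and bash -c for the compound commands. The program paths are taken from the question; the priority, startsecs and autorestart values are illustrative assumptions:
[supervisord]
; keep supervisord in the foreground so it remains the container's main process
nodaemon=true

[program:mongodb]
; run mongod in the foreground (no --fork); supervisord handles the supervision
command=/usr/bin/mongod
; lower priority values are started first
priority=1

[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
; if the app exits because mongo isn't ready yet, start it again
autorestart=true
startsecs=10

[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
autorestart=true
startsecs=10
As for whether mongo belongs in the same container: the usual recommendation is one concern per container, with mongo in its own container, which also sidesteps the start-up ordering problem entirely.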