Run node.js after switching user in a script - node.js

I have the following script:
#!/bin/bash
su newuser
node index.js
This script is an entrypoint for a docker container.
When I run the container, I see that the script gets executed and I switch to newuser. However, index.js does not get called. But as soon as I type "exit" to exit newuser, index.js starts running.
Can someone explain what the problem is here, please?

su newuser will create a new shell. That command launches an interactive shell process and then waits for it to exit; only once it exits will the next command in your original bash script execute.
If you want to run node as newuser, use this command instead:
su newuser -c "node index.js"
You probably want to include the full path to node as well, because launching a command this way often doesn't bring up the full environment you might expect (PATH may not be as complete as in a full interactive shell):
su newuser -c "/path/to/node index.js"

If the script is an entrypoint script, you shouldn’t need to set the username at all; a USER directive in the Dockerfile can set the (default) user name or user ID.
For this simple setup I wouldn’t use an entrypoint script at all. I’d put in my Dockerfile
USER newuser
CMD ["node", "index.js"]
In general I’d avoid entrypoint scripts or ENTRYPOINT directives that run fixed commands (and prefer CMD over ENTRYPOINT), because they make it difficult to run the otherwise very-useful-when-things-are-broken
docker run --rm -it myimage sh


Docker - unable to run script

What I'm doing
I am using AWS Batch to run a docker container for a large compute job. I have configured ECR/ECS successfully to the best of my knowledge, but I am having issues running the required commands for reasons that are beyond my level of understanding of docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I send the submit job to AWS Batch; this service pulls the image from ECR (Amazon Elastic Container Registry) and spins up a compute environment. The issue comes when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands that I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
Set the permissions for logging
Run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated to the python3 application
Questions
Why can I not create a directory called "logging" here? Where am I?
Am I running these properly, as defined by AWS Batch or by docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
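For instance, with the image and environment variables configured on the job definition itself, the remaining setup and the application start could be combined into a single shell invocation along these lines (a sketch reusing the placeholder values from the question):
command: ["/bin/sh", "-c", "mkdir -p logging && chmod 777 logging && python my_app.py -t APP_USER -e APP_ENVIRONMENT -u APP_USERNAME -p APP_PASSWORD -i IN_PATH -o OUT_PATH -b tmp/"]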
Take a look at some of these example job definitions.

Node.js permission denied public key

I'm trying to run a Git command via node over SSH but I keep getting the error:
Permission denied (publickey)
I guess this is because Node does not have access to my SSH keys since it is running in a child process?
How can I get past this?
On Windows, if your credentials are correct then it should work.
I guess you are running something like this -
require('child_process').spawn('git', ['push', 'origin', 'master']);
It works for me in both cases, ssh and https.
I fixed this by running a Bash script from Node as follows:
"scripts": {
"start": "npm-run-all -p server update",
"server": "dyson rest 7070",
"update": "sh update.sh"
}
update.sh looks like:
#!/usr/bin/env bash
set -e
set -o pipefail
SSH_KEY=/path/.ssh/id_rsa
function update {
  eval $(ssh-agent -s)
  ssh-add ${SSH_KEY}
  git submodule update --recursive --remote
}
update
The main thing is to start the ssh-agent in Node and then add your ssh key to the agent before running your git command.
UPDATE: The above works, but I found a better answer. The reason it was failing is that inside the executing script the HOME environment variable was not pointing to the same HOME as outside the script. I fixed this by putting my ssh keys in both HOME folders. Then the script was able to find the right key, and voilà.
PS You can determine the value of the HOME variable in Node by logging process.env.HOME or in a shell script with echo "${HOME}".
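For example, you can compare the two from the same place (a minimal check):
echo "shell HOME: ${HOME}"
node -e "console.log('node HOME: ' + process.env.HOME)"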

Docker & PM2: String based CMD with environment variables

I'm currently using the shell-form of CMD in Docker for launching my node app:
CMD /usr/src/app/node_modules/.bin/trifid --config $TRIFID_CONFIG
The env-var TRIFID_CONFIG is set to a default in the Dockerfile:
ENV TRIFID_CONFIG config.customer.json
This makes it easy to pass another config file for dev-environments for example.
Now I am trying to switch this to PM2 for production. However, it looks like all the PM2 samples use the "exec" form, which from what I understood does not evaluate env-vars. I tried the shell form with PM2:
CMD pm2-docker /usr/src/app/node_modules/trifid/server.js --config $TRIFID_CONFIG
But it looks like the variable is not evaluated this way; it falls back to the default on execution.
What would be the proper way to handle this with PM2 inside a Docker image?
I had a discussion on GitHub and meanwhile figured it out:
CMD pm2-docker /usr/src/app/node_modules/.bin/trifid -- --config $TRIFID_CONFIG
So the trick is to use -- after the command; the rest will be passed as arguments. If I use the shell form, env-vars do seem to get evaluated properly.
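With that in place, the ENV default can still be overridden when starting the container, for instance (config.dev.json and myimage are placeholder names):
docker run -e TRIFID_CONFIG=config.dev.json myimage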

Making an npm script auto start in a FreeBSD Jail

I've installed an npm package / script in a jail on FreeNAS 9.10 (FreeBSD based).
It works perfectly if I run "npm start" in the directory where the scripts are installed.
However, I need this to auto-start when the jail starts, and I don't know how to do that. Do I need to create an rc script?
Basically, all I need to do is run "npm start" in the correct directory on start up. How do I do that?
thanks
Yes, you can place an rc script within the jail and enable it using the jail's /etc/rc.conf file.
But, for a quick and dirty solution, you could create a /etc/rc.local script (also within the jail's environment) and put your startup commands in there.
See the manual page here.
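A minimal rc.local along those lines might look like this (a sketch; the app directory, user and log path are placeholders to adjust to your setup):
#!/bin/sh
# quick-and-dirty autostart inside the jail
su -m appuser -c 'cd /usr/local/myapp && /usr/local/bin/npm start >> /var/log/myapp.log 2>&1 &'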
I don't know about npm start, but for node.js I made an rc script like this:
#!/bin/sh
# $FreeBSD: 340872 2014-01-24 00:14:07Z mat $
#
# PROVIDE: SERVICENAME
# REQUIRE: NETWORKING
# KEYWORD: shutdown
#
# Add the following line to /etc/rc.conf to enable SERVICENAME:
#
# SERVICENAME_enable="YES"
#
. /etc/rc.subr
name="SERVICENAME"
rcvar=SERVICENAME_enable
pidfile=${SERVICENAME_pidfile:-"/var/run/SERVICENAME.pid"}
command="/usr/sbin/daemon"
#command_args="-r -u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR" # cjayho: restart if crashed
command_args="-u USERNAME -P /var/run/SERVICENAME.pid /usr/local/bin/node /home/USERNAME/PROGDIR"
load_rc_config $name
: ${SERVICENAME_enable:="NO"}
run_rc_command "$1"
Name this file something like SERVICENAME and put it in /usr/local/etc/rc.d.
To enable automatic startup, run this command as root:
sysrc SERVICENAME_enable="YES"
Do not forget to replace SERVICENAME, USERNAME and PROGDIR with your values, and add
process.chdir('/home/USERNAME/PROGDIR')
to your entry js file.
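Once enabled, the service can then be managed like any other rc.d service (assuming the file was named SERVICENAME as above):
service SERVICENAME start
service SERVICENAME status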

docker stop doesn't work for node process

I want to be able to run node inside a docker container, and then be able to run docker stop <container>. This should stop the container on SIGTERM rather than timing out and doing a SIGKILL. Unfortunately, I seem to be missing something, and the information I have found seems to contradict other bits.
Here is a test Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
ADD test.js /
ENTRYPOINT ["/node-v0.11.14-linux-x64/bin/node", "/test.js"]
Here is the test.js referred to in the Dockerfile:
var http = require('http');
var server = http.createServer(function (req, res) {
  console.log('exiting');
  process.exit(0);
}).listen(3333, function (err) {
  console.log('pid is ' + process.pid);
});
I build it like so:
$ docker build -t test .
I run it like so:
$ docker run --name test -p 3333:3333 -d test
Then I run:
$ docker stop test
Whereupon the SIGTERM apparently doesn't work, causing it to time out 10 seconds later and then die.
I've found that if I start the node task through sh -c then I can kill it with ^C from an interactive (-it) container, but I still can't get docker stop to work. This contradicts comments I've read saying sh doesn't pass on the signal, but might agree with other comments saying that PID 1 doesn't get SIGTERM (since node is started via sh, it will be PID 2).
The end goal is to be able to run docker start -a ... in an upstart job and be able to stop the service and it actually exits the container.
My way to do this is to catch SIGINT (interrupt signal) in my JavaScript.
process.on('SIGINT', () => {
  console.info("Interrupted");
  process.exit(0);
});
This should do the trick when you press Ctrl+C.
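You can test that handler without an interactive terminal by sending the signal to the container directly (assuming the container from the question is running under the name test):
docker kill --signal=SIGINT test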
OK, I figured out a workaround myself, which I'll venture as an answer in the hope it helps others. It doesn't completely answer why the signals weren't working before, but it does give me the behaviour I want.
Using baseimage-docker seems to solve the issue. Here's what I did to get this working with the minimal test example above:
Keep test.js as is.
Modify Dockerfile to look like the following:
FROM phusion/baseimage:0.9.15
# disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
# install curl and node as before
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
# the baseimage init process
CMD ["/sbin/my_init"]
# create a directory for the runit script and add it
RUN mkdir /etc/service/app
ADD run.sh /etc/service/app/run
# install the application
ADD test.js /
baseimage-docker includes an init process (/sbin/my_init) which handles starting other processes and dealing with zombie processes. It uses runit for service supervision. The Dockerfile therefore sets the my_init process as the command to run on boot, and adds a script under /etc/service for runit to pick up.
The run.sh script is simple:
#!/bin/sh
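# exec replaces this shell with node, so node becomes the service process
# and receives signals from runit directly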
exec /node-v0.11.14-linux-x64/bin/node /test.js
Don't forget to chmod +x run.sh!
By default, runit will automatically restart the service if it goes down.
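If you ever need to stop it manually inside the container without runit bringing it back up, runit's sv tool can be pointed at the service directory created above (a quick sketch):
sv stop /etc/service/app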
Following these steps (and build, run, and stop as before), the container properly responds to requests for it to shutdown, in a timely fashion.
