child_process is not executing on ec2 nodejs server - node.js

I have a node server which is on an ec2 instance running Ubuntu.
I maintain code on Gitlab and push updates there.
The manual update process is simply to pull the changes and restart the server. I want to automate this, so I am using a Gitlab webhook and created a simple endpoint which executes:
childProcess.exec('git pull && bash deploy.sh', { cwd: '/home/ubuntu/someXyzFolder' }, function (err, stdout, stderr) {
    if (err) {
        return res.status(500).send(err);
    }
    res.status(200).send("OK");
});
childProcess is not able to execute these commands; it sends a 500 status with this error:
{"killed":false,"code":1,"signal":null,"cmd":"git pull && bash deploy.sh"}
Several people have run into this problem and solved it by creating a swap file due to low memory.
I ran the free command on my server, which printed:
ubuntu@ip-xxx-xxx-xxx-xxx:/var/somefolder/somefolder$ free
              total        used        free      shared  buff/cache   available
Mem:        1007532      404276      381512         772      221744      452944
Swap:             0           0           0
According to this, I should have enough RAM, right?
I am not sure whether the Linux OOM killer is terminating it or something else.
Do let me know your thoughts on it.
Thanks
EDIT: Here is my deploy.sh:
git pull
npm install
gulp build
sudo killall forever
sudo killall node
# remove the symlink pointing at the current build folder
rm /removeTheSymlinkOfTheCurrentBuildFolder
# link the newest timestamped build folder as the current one
ln -s /makeaNewFolderWithCurrentTimeStamp/`ls -ltr /some | tail -n 1 | awk '{print $9}'` /andMakeItCurrent
sudo forever /currentBuildFolder/server.js &
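Worth noting: a webhook-spawned shell has no TTY, so any sudo in this script that prompts for a password will fail with exit code 1. A sketch of a sudoers entry (illustrative path and command) that lets the deploy user run killall without a password:
# /etc/sudoers.d/deploy (edit with visudo; illustrative only)
ubuntu ALL=(ALL) NOPASSWD: /usr/bin/killall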

So, I finally fixed the issue.
The SSH key had been generated while I was logged in as the user ubuntu, and then stored in Gitlab.
I was starting my second server (the one that kills and restarts my main server) using sudo, which was causing the issue.
Now everything seems to be working fine.

Related

Accessing logs of a node js code running as a server

There is a node js app running as a process on a linux server.
I found the process id of the process using
ps -aef | grep node
which gives
amit 20897 1 0 Sep26 ? 03:07:06 node energyMonitor/newBroker.js
I want to access the stdout/stderr of this process, to view what the console.log() statements are printing. I tried
tail -f /proc/20897/fd/1
but to no avail. Can someone help me with this?
Thanks
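Tailing /proc/<pid>/fd/1 only works when stdout points at a regular file; if it points at a pipe, socket, or /dev/null, nothing useful comes out. A common workaround (a sketch, not from the original thread; the log path is illustrative) is to restart the process with its output redirected, so future logs land in a file:
nohup node energyMonitor/newBroker.js > ~/newBroker.log 2>&1 &
tail -f ~/newBroker.log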

Proper way to keep a process running in a container

I'm not sure whether this could be considered a duplicate, since it's a problem for a specific case.
Currently, I have created a docker-outside-of-docker image for handling my Jenkins agent, which performs auto restarts without using supervisor (for lack of Python 3.7 support). Since I'm using openjdk:slim as the base image and don't want to install any additional dependencies, I compensated for the lack of tools like lsof and ps for checking whether the process is running: I write the started process's pid to a file and use it to check whether the process exists under the path /proc/<pid>/status. Currently this works, and it is the main reason for creating this solution for handling the auto start of the agents.
But my question is: is this the best or most appropriate approach?
Please find the following code with the implementation:
#!/bin/bash
set -e

agent_runner() {
    while :; do
        # if the recorded pid no longer exists under /proc, (re)start the agent
        if [ ! -f "/proc/$(cat /tmp/agent.pid)/status" ]; then
            curl $JNLP_AGENT_DOWNLOAD_URL -o agent.jar
            java \
                -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=300 \
                -Dhttps.protocols=TLSv1.2 \
                -jar agent.jar \
                -jnlpUrl $JNLP_AGENT_URL \
                -secret $JENKINS_SECRET \
                -workDir "$JENKINS_WORKDIR" &
            echo $! > /tmp/agent.pid
        fi
        sleep 10
    done
}

while :; do
    # probe TCP connectivity to the Jenkins master ($TARGET is host/port);
    # note: [ cat < ... ] is not valid test syntax, a subshell redirect is used instead
    if (echo > "/dev/tcp/$TARGET") 2>/dev/null; then
        echo "Starting Agent"
        agent_runner
    else
        echo "Jenkins master is offline, waiting...."
    fi
    sleep 10
done
Link for the repository: https://github.com/thcp/jenkins-agent-dod
If the main process in the container dies, you should let the container die with it.
Docker and the various layers above it have functionality to restart whole containers. There is a docker run --restart option for the basic Docker CLI, an equivalent Docker Compose option, and restarting dying containers after some backoff is the default behavior for Kubernetes pods.
So, if you just let a container die on its own, you get out-of-the-box support from the container engine to restart it, without adding any special support to your image; just set the CMD to the thing you actually need the container to do, and go. This approach also has the benefit that if the process detects its environment has become unstable ("I depend on a database and it's unreachable"), it can choose to abort itself and be restarted later, when hopefully the environment has improved.
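A minimal sketch of this approach (image and container names are illustrative): run the agent as the container's main process and let the engine restart it on failure:
docker run -d --restart=on-failure --name jenkins-agent my-jenkins-agent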

Stopping nodejs process started by npm?

I am in the process of creating some scripts to deploy my node.js based application via continuous integration, and I am having trouble finding the right way to stop the node process.
I start the application via a start-dev.sh script:
#!/bin/sh
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
npm start &
echo $! > app.pid
And then I was hoping to stop it via:
#!/bin/sh
scripts_dir=`dirname $0`
cd "${scripts_dir}/"..
echo killing pid `cat app.pid`
kill -9 `cat app.pid`
The issue I am finding is that npm is no longer running at this point, so the pid isn't useful for stopping the process tree. The only workaround I can think of is to skip npm completely for launch and simply call node directly.
Can anyone suggest an appropriate way to deal with this? Is foregoing npm for launching a good approach, in this context?
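One way to make the recorded pid point at the actual node process (a sketch; the entry file name is illustrative) is to bypass npm in the start script:
#!/bin/sh
# start node directly so $! is the node pid, not npm's
node server.js &
echo $! > app.pid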
Forever can do the process management stuff for you.
forever start app.js
forever stop app.js
Try to avoid relying on npm start outside of development; it just adds an additional layer between you and node.
Just use supervisor. An example conf looks like:
[program:long_script]
command=/usr/bin/node SOURCE_FOLDER/EXECUTABLE_JAVASCRIPT_FILE.js
autostart=true
autorestart=true
stderr_logfile=/var/log/long.err.log
stdout_logfile=/var/log/long.out.log
where
SOURCE_FOLDER is the folder for your project
EXECUTABLE_JAVASCRIPT_FILE is the file to be run
You can check the post here.
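Once the conf is in place, typical usage looks like this (a sketch; the program name matches the conf above):
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start long_script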

docker stop doesn't work for node process

I want to be able to run node inside a docker container, and then be able to run docker stop <container>. This should stop the container on SIGTERM rather than timing out and doing a SIGKILL. Unfortunately, I seem to be missing something, and the information I have found seems to contradict other bits.
Here is a test Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
ADD test.js /
ENTRYPOINT ["/node-v0.11.14-linux-x64/bin/node", "/test.js"]
Here is the test.js referred to in the Dockerfile:
var http = require('http');
var server = http.createServer(function (req, res) {
    console.log('exiting');
    process.exit(0);
}).listen(3333, function (err) {
    console.log('pid is ' + process.pid)
});
I build it like so:
$ docker build -t test .
I run it like so:
$ docker run --name test -p 3333:3333 -d test
Then I run:
$ docker stop test
Whereupon the SIGTERM apparently doesn't work, causing it to time out 10 seconds later and then die from the SIGKILL.
I've found that if I start the node task through sh -c then I can kill it with ^C from an interactive (-it) container, but I still can't get docker stop to work. This contradicts comments I've read saying sh doesn't pass on the signal, but it might agree with other comments saying that PID 1 doesn't get SIGTERM (since node is started via sh, it'll be PID 2).
The end goal is to be able to run docker start -a ... in an upstart job and be able to stop the service and it actually exits the container.
My way to do this is to catch SIGINT (the interrupt signal) in my JavaScript.
process.on('SIGINT', () => {
    console.info("Interrupted");
    process.exit(0);
})
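// Note: docker stop sends SIGTERM, not SIGINT, so to make docker stop
// exit cleanly you likely want the equivalent handler for SIGTERM too:
process.on('SIGTERM', () => {
    console.info("Terminated");
    process.exit(0);
})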
This should do the trick when you press Ctrl+C (and, with the SIGTERM handler, when the container is asked to stop).
Ok, I figured out a workaround myself, which I'll venture as an answer in the hope it helps others. It doesn't completely answer why the signals weren't working before, but it does give me the behaviour I want.
Using baseimage-docker seems to solve the issue. Here's what I did to get this working with the minimal test example above:
Keep test.js as is.
Modify the Dockerfile to look like the following:
FROM phusion/baseimage:0.9.15
# disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
# install curl and node as before
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
# the baseimage init process
CMD ["/sbin/my_init"]
# create a directory for the runit script and add it
RUN mkdir /etc/service/app
ADD run.sh /etc/service/app/run
# install the application
ADD test.js /
baseimage-docker includes an init process (/sbin/my_init) which handles starting other processes and dealing with zombie processes. It uses runit for service supervision. The Dockerfile therefore sets the my_init process as the command to run on boot, and adds a script under /etc/service for runit to pick up.
The run.sh script is simple:
#!/bin/sh
exec /node-v0.11.14-linux-x64/bin/node /test.js
Don't forget to chmod +x run.sh!
By default, runit will automatically restart the service if it goes down.
Following these steps (and build, run, and stop as before), the container properly responds to requests for it to shutdown, in a timely fashion.

Cannot run nodejs app and mongo within a docker container

I'm setting up a container with the following Dockerfile
# Start with project/baseline => image with mongo / nodejs / sailsjs
# (Dockerfile comments must sit on their own line, not inline after FROM)
FROM project/baseline
# Create folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of webapp and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
    mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}

# Start services
function start {
    for service in $(ls); do
        cd $PROJECT_PATH/$service
        npm start # runs sails lift on each service
    done
}

# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I get the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started, and the container remain in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
; nodaemon=true is required when supervisord is the container's main process,
; otherwise the container exits as soon as supervisord daemonizes
nodaemon=true
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
it is called in the Dockerfile like:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, supervisord does not wait that long and the services fail to start. Any idea how to solve that?
By the way, is it really a best practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (but I'll say it's temporary while I fix the problem I have with supervisor). It does the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are run... how can it be kept active?
If you want to cleanly start multiple services inside a container, one approach is to use a process supervisor of some sort. One option is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
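Alternatively, a minimal wrapper script can serve as the container's CMD without a supervisor (a sketch; paths and the app entry point are illustrative): start mongod in the background, wait until it answers instead of sleeping a fixed 60 seconds, then run one node app in the foreground so the container stays alive as long as the app does:
#!/bin/bash
# start mongod in the background
mongod --fork --logpath /var/log/mongodb.log
# wait until mongod accepts connections
until mongo --eval 'db.runCommand({ ping: 1 })' >/dev/null 2>&1; do
    sleep 1
done
# run the app in the foreground; the container lives as long as this process
exec node /var/project/service1/app.js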
