Docker: can't access env variables inside pm2 Node.js code

So I am facing a weird problem. I am starting a Docker container with an entrypoint file and an env variable edgeboxId. The entrypoint.sh contains some pm2 commands to start my Node.js application.
Entrypoint.sh
#!/bin/bash
sudo -u deploy bash << EOF
cd /home/deploy/ce-edgebox-agent
pm2 start ecosystem.json
EOF
echo "out of the deploy user"
echo "Entering entrypoint file"
echo "export edgeboxId=$edgeboxId">> /etc/bash.bashrc
sudo -u edgebox bash << EOF
pm2 start ce-edgebox-application --update-env
EOF
echo "out of application user"
sleep infinity
Inside the Dockerfile, I am using the following lines to set up the entrypoint.sh file:
RUN ["chmod", "+x", "./entrypoint.sh"]
ENTRYPOINT [ "/bin/bash", "./entrypoint.sh" ]
Then I enter the container using:
docker exec -it <containerId> bash
Expectation:
I expect edgeboxId to be accessible inside the Node.js application running in the pm2 process as soon as the container starts with my entrypoint.sh file.
What's actually happening:
edgeboxId appears undefined inside the Node.js application, but when I run pm2 restart ce-edgebox-application --update-env manually inside the container, edgeboxId becomes accessible.
Question:
How do I make edgeboxId accessible inside Node.js as soon as the container starts with my entrypoint.sh file?
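For reference, the Node.js side would read the variable off process.env; here is a minimal sketch (the helper name is an assumption, only the variable name comes from the question):

```javascript
// Minimal sketch: pm2 snapshots the environment when a process is
// (re)started, so this only sees edgeboxId if it was present in the
// environment pm2 started (or restarted) the app from.
function readEdgeboxId(env = process.env) {
  return env.edgeboxId; // undefined when the variable was never exported
}

console.log(readEdgeboxId({ edgeboxId: 'eb-42' })); // prints "eb-42"
console.log(readEdgeboxId({}));                     // prints "undefined"
```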

Related

How to run multiple feed-consumers and consumers for Kafka in Docker?

So I have this Dockerfile, and I want to run feed-consumers and consumers multiple times, and I tried to do so. We have a Node.js application for the feed-consumers and the consumer, and we pass user_level values to it.
I just want to ask: is this the right approach?
FROM ubuntu:18.04
# Set Apt to noninteractive mode
ENV DEBIAN_FRONTEND noninteractive
# Install Helper Commands
ADD scripts/bin/* /usr/local/bin/
RUN chmod +x /usr/local/bin/*
RUN apt-install-and-clean curl \
build-essential \
git >> /dev/null 2>&1
RUN install-node-12.16.1
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
#RUN yarn init-cache
#RUN yarn init-temp
#RUN yarn init-user
RUN yarn install
RUN yarn build
RUN node ./feedsconsumer/consumer.js user_level=0
RUN for i in {1..10}; do node ./feedsconsumer/consumer.js user_level=1; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=2; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=3; done
RUN for i in {1..30}; do node ./feedsconsumer/consumer.js user_level=4; done
RUN for i in {1..40}; do node ./feedsconsumer/consumer.js user_level=5; done
RUN for i in {1..10}; do node ./consumer/consumer.js; done
ENTRYPOINT ["tail", "-f", "/dev/null"]
Or is there another way to do this?
Thanks
A container runs exactly one process. Your container's is
ENTRYPOINT ["tail", "-f", "/dev/null"]
This translates to "do absolutely nothing, in a way that's hard to override". I typically recommend using CMD over ENTRYPOINT, and the main container command shouldn't ever be an artificial "do nothing but keep the container running" command.
Before that, you're trying to RUN the processes that should be the main container process. RUN only happens during the image-build phase: the running processes aren't persisted in the image, the build blocks until they complete, and they can't connect to other containers or data stores. These are the lines you want to become the CMD.
A container only runs one process, but you can run multiple containers off the same image. It's somewhat easier to add parameters by setting environment variables than by adjusting the command line (you have to replace the whole thing), so in your code look for process.env.USER_LEVEL. Also make sure the process stays in the foreground and doesn't use a package to daemonize itself.
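Reading the level from the environment might look like this (the function name and default are assumptions, not from the original code):

```javascript
// Hypothetical consumer bootstrap: the per-container level comes from
// the environment (set with `docker run -e USER_LEVEL=...`) instead of
// a command-line argument, defaulting to 0 when unset or malformed.
function getUserLevel(env = process.env) {
  const level = parseInt(env.USER_LEVEL || '0', 10);
  return Number.isNaN(level) ? 0 : level;
}

console.log(getUserLevel({ USER_LEVEL: '3' })); // prints 3
```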
Then the final part of the Dockerfile just needs to set a default CMD that launches one copy of your application:
...
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build
CMD node ./feedsconsumer/consumer.js
Now you can start a single container running this process
docker build -t my/consumer .
docker run -d --name consumer my/consumer
And you can start multiple containers to run the whole set of them
for user_level in `seq 5`; do
for i in `seq 10`; do
docker run -d \
--name "feed-consumer-$user_level-$i" \
-e "USER_LEVEL=$user_level" \
my/consumer
done
done
for i in `seq 10`; do
docker run -d --name "consumer-$i" \
my/consumer \
node ./consumer/consumer.js
done
Notice this last invocation overrides the CMD to run the alternate script; this becomes a more contorted invocation if it needs to override ENTRYPOINT instead. (docker run --entrypoint node my/consumer ./consumer/consumer.js)
If you're looking ahead to cluster environments like Kubernetes, it's often straightforward to run multiple identical copies of a container, which is what you're trying to do here. A Kubernetes Deployment object has a replicas: count; you can kubectl scale deployment feed-consumer-5 --replicas=40 to change the count from the question, or potentially configure a HorizontalPodAutoscaler to set it dynamically based on the topic length (this last is involved, but possible and rewarding).
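A minimal Deployment sketch along those lines (the names and image are placeholders matching the examples above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feed-consumer-5
spec:
  replicas: 40   # change later with: kubectl scale deployment feed-consumer-5 --replicas=...
  selector:
    matchLabels:
      app: feed-consumer-5
  template:
    metadata:
      labels:
        app: feed-consumer-5
    spec:
      containers:
        - name: consumer
          image: my/consumer
          env:
            - name: USER_LEVEL
              value: "5"
```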

Dockerized React App failed to bind to $PORT on Heroku

I'm trying to deploy a Dockerized React app to Heroku, but keep getting the "R10: Failed to bind to $PORT" error on Heroku.
The Dockerized app runs perfectly fine when I docker run it locally.
My Dockerfile looks like the following:
FROM node:10.15.3
RUN mkdir -p /app
WORKDIR /app
COPY . .
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /app/package.json
RUN npm install --verbose
RUN npm install serve -g -silent
# start app
RUN npm run build
CMD ["serve", "-l", "tcp://0.0.0.0:${PORT}", "-s", "/app/build"]
I followed the online solution of changing the port serve listens on to Heroku's $PORT. According to the logs the application is now served on Heroku's port, but I still get the "Failed to bind to $PORT" error.
Please help!
Variable substitution does not happen in the exec form of CMD; that is why ${PORT} is treated as literal text instead of being expanded to its value.
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
(Dockerfile reference: CMD)
Change CMD to
CMD ["sh", "-c", "serve -l tcp://0.0.0.0:${PORT} -s /app/build"]
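You can see the same distinction outside Docker; with `sh -c`, it is the shell doing the expansion:

```shell
# Without a shell, ${PORT} would be passed through as literal text;
# `sh -c` performs the expansion, just like the corrected CMD above.
PORT=3000 sh -c 'echo "serve would bind to tcp://0.0.0.0:${PORT}"'
# prints: serve would bind to tcp://0.0.0.0:3000
```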

How to stop running node in docker

I have just installed Docker and installed Node.
I am able to run a basic Express site. My issue now is that I can't stop it; Ctrl+C is not doing anything.
Temporarily, what I did to exit was:
Close the Docker terminal.
Open a new one.
Search for all Docker containers that are running.
Then docker stop [container]
Is this the proper way?
As described in the docker-node best practices on GitHub:
You can add the --init flag to your docker run command.
docker run -it --init -p 3000:3000 --name nodetest mynodeimage
I don't know if this is too late, but the correct way to do this is to catch SIGINT (the interrupt signal) in your JavaScript.
const process = require('process')

process.on('SIGINT', () => {
  console.info("Interrupted")
  process.exit(0)
})
This should do the trick when you press Ctrl+C
I came across this same problem today, and struggled to find an explanation/solution. I discovered (through trial and error) that this only occurs when the CMD in the Dockerfile is set to:
CMD [ "node", "server.js" ]
However, Ctrl+C works fine when the CMD is changed to:
CMD [ "npm", "start" ]
The npm start script in my package.json file is set to node server.js, so I have no idea why this change works, but hopefully this helps. (A plausible explanation: in the exec form, node runs as PID 1, and PID 1 receives no default signal handling, so SIGINT is ignored unless the app installs a handler; run via npm, node is a child process and keeps its default behaviour.)
A docker run should have given you back the prompt, avoiding the need for Ctrl+C or closing the Docker terminal.
Once you log back into that terminal, a docker ps -a plus docker stop should be enough to make your container exit (you still need to remove it before trying to launch it again).
If you just want to stop node without stopping the container, you could go inside the container and run:
$ ps aux | grep node #to obtain process ID (value in second column)
$ kill <process ID>
As part of the solution, you can open your package.json and add three new commands/scripts:
"scripts": {
"docker-build-and-run": "docker build -t image-dev-local . && docker run -p 3001:3001 --name container-dev-local image-dev-local",
"docker-stop-and-clear": "(docker stop container-dev-local || true) && (docker rm container-dev-local || true)",
"docker-run": "npm run docker-stop-and-clear && npm run docker-build-and-run"
}
and simply run in the terminal:
npm run docker-run
to bring your app up on port 3001 in Docker and have fun. Every subsequent run will clear the previous container and build/run it again.
To stop and delete it, just run :
npm run docker-stop-and-clear
docker stop <containerName/containerId>
docker kill --signal=SIGINT <containerName/containerId>
docker rm -f <containerName/containerId>
From what I can gather, you need both -t and -i for Ctrl+C to work as expected. A command like the following should help, I believe.
Here is a simple example:
Case 1 to retain container:
$ ID=$(sudo docker run -t -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-C
$ sudo docker ps
Case 2 to terminate the container:
$ ID=$(sudo docker run -t -i -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-C
$ sudo docker ps
The solution is to use the following command in the Dockerfile to start the application:
CMD ["/bin/sh", "-c", "node app.js"]
Then we can listen for the signal in app.js with:
process.on('SIGTERM', () => {
console.log('SIGTERM received, shutting down...');
});
and we have to run the image with the --init flag:
docker run --init -p 3000:3000 --name nodetest mynodeimage
or, beginning with version 3.7 of the Compose file format, we can add the entry
init: true
to the desired service.
For the app to receive the signal, you should use docker stop nodetest or docker-compose down; shutting down with Ctrl+C does not send the SIGTERM signal.
Inside the node console, after running docker run -it node, you can exit with any of the following:
Enter .exit
Press Ctrl+C twice
Press Ctrl+D
If the node container is started in detached mode docker run -d node,
you can stop it with docker stop <CONTAINER_ID or CONTAINER_NAME>.
For example, assuming you want to stop the newest node container:
docker stop $(docker ps | grep node | awk 'NR == 1 { print $1}')

cannot pm2 list in docker containers

I built a Docker image with Node.js and pm2. I started the container with:
docker run -d --name test -p 22 myImage
Then I went inside the container with:
docker exec -it test /bin/bash
In the container, I executed the command:
pm2 list
And it just got stuck there.
P.S.: My application works well in the Docker container if I add CMD pm2 start app.js to the Dockerfile.
If your Dockerfile CMD is a pm2 command, you have to include the --no-daemon option so pm2 runs in the foreground and your Docker container continues to run.
An example Dockerfile CMD:
CMD ["pm2", "start", "app.js", "--no-daemon"]
Otherwise, without --no-daemon, pm2 launches as a background process, and Docker thinks the pm2 command has finished running and stops the container.
See https://github.com/Unitech/PM2/issues/259
CMD ["pm2-docker", "pm2.yaml"]
This is the newer approach (recent pm2 releases also ship this command as pm2-runtime).
Please do not use the previous approaches.
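A minimal pm2.yaml for the command above might look like this (the app name and script path are placeholders, not from the original answer):

```yaml
apps:
  - name: app
    script: ./app.js
```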

Docker cannot run on build when running container with a different user

I don't know the specifics of why the Node application does not run. Basically, I added a Dockerfile to a Node.js app, and here is my Dockerfile:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV=${NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, so I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems like it is starting, but when I check the status of the container with docker ps -a, it is stopped.
Here is the repo for this, but the start.bash and Dockerfile there are different, because I haven't committed the latest versions, since both are not working:
JoeyHipolito/Ghost
I managed to make it work. There is no error in the start.bash file nor in the Dockerfile; it's just that I had failed to rebuild the image.
With that said, you can check out the final Dockerfile and start.bash file in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time I'm writing this answer, you can see them in the feature branch, feature/dockerize.
