Node.js Kubernetes Deployment keeps crashing

I've been pulling my hair out for a week and I'm close to giving up. Please share your wisdom.
This is my Docker file:
FROM node
RUN apt-get update
RUN mkdir -p /var/www/stationconnect
RUN mkdir -p /var/log/node
WORKDIR /var/www/stationconnect
COPY stationconnect /var/www/stationconnect
RUN chown node:node /var/log/node
COPY ./stationconnect_fromstage/api/config /var/www/stationconnect/api/config
COPY ./etc/stationconnect /etc/stationconnect
WORKDIR /var/www/stationconnect/api
RUN cd /var/www/stationconnect/api
RUN npm install
RUN apt-get install -y vim nano
RUN npm install supervisor forever -g
EXPOSE 8888
USER node
WORKDIR /var/www/stationconnect/api
CMD ["bash"]
It works fine in docker alone running e.g.
docker run -it 6bcee4528c7c
Any advice?

When you create a container, you should have a foreground process to keep the container alive.
What I've done is add the shell script line
while true; do sleep 1000; done
at the end of my docker-entrypoint.sh, and refer to it with ENTRYPOINT ["/docker-entrypoint.sh"].
Take a look at this issue to find out more.
There's an example of how to write a Node.js Dockerfile, so be sure to check it out.
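For illustration, a minimal sketch of that approach, assuming the app's entry file is api/server.js (the file name is an assumption - adjust to your actual start command):
#!/bin/bash
# docker-entrypoint.sh - start the node app in the background, then block forever
node /var/www/stationconnect/api/server.js &
while true; do sleep 1000; done
and in the Dockerfile:
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]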

This is kind of obvious. You are running it in an interactive terminal bash session with docker run -it <container>. When you run a container in kube (or in docker without -it), bash exits immediately, and that is exactly what it is doing in your kube deployment. It's not crashing per se, just terminating as expected.
Change your command to some long-lasting process. Even sleep 1d will do - the container will no longer die. Your node app won't work either, though... for that you need your magic command to launch the app in the foreground.
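As a sketch, assuming the app's entry point is server.js in the api directory (the file name is an assumption), the last line of the Dockerfile would become something like:
CMD ["node", "server.js"]
instead of CMD ["bash"], so that the container's main process is the app itself.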

You could add an ENTRYPOINT command to your Dockerfile that executes something which runs in the background indefinitely - say, for example, a script my_service.sh. This, in turn, could start a webserver like nginx as a service or simply do a tail -f /dev/null. This keeps your pod running in Kubernetes, because the main task of the container is not done yet. In your Dockerfile above, bash is executed, but once it runs it finishes and the container completes. Therefore, when you try to do kubectl run NAME --image=YOUR_IMAGE it fails to connect, because k8s terminates the pod running your container almost immediately after the new pod is started, and this repeats indefinitely.
Please see this answer here for an in-line command that can help you run your image as-is for debugging purposes...
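One way to do that kind of debugging (the pod name is a placeholder) is to override the command so the pod stays up, then exec into it:
kubectl run stationconnect-debug --image=YOUR_IMAGE --command -- sleep infinity
kubectl exec -it stationconnect-debug -- bash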

Related

uWSGI server does not run with & argument

I build a docker image which runs flask on an uWSGI server. I have a uwsgi.ini file with configuration. When I use
CMD ["uwsgi", "--ini", "uwsgi.ini"]
in the Dockerfile, everything works fine and the server runs, I can access my flask app via the browser. The only problem is that I cannot use the container's bash while it's running, as it is blocked by the uWSGI process. I found that appending an & should make the uWSGI run in the background. However, when I use
CMD ["uwsgi", "--ini", "uwsgi.ini", "&"]
I get an error saying
unable to load configuration from &
I get that when I try this, the uWSGI thinks I'm passing another argument that it should process. However, I cannot find any way to tell it that it is not the case. Using docker run with the -d argument also only detaches the container from the current terminal on the host, but when I use docker attach, I get a bash that I can't do anything with.
Is there a way to tell uWSGI explicitly that I want it to run in the background? Am I missing something?
You can execute a command on your container by using the exec command. For example, docker ps here shows:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9fb488c77d55 nginx "/docker-entrypoint.…" 18 minutes ago Up 18 minutes 80/tcp distracted_brown
So here I have an nginx container image running in a container called distracted_brown, and I can ask the container to run a command using exec. In this case the command I want to run is the shell sh. I also pass the -it flag so I can run it interactively with STDIN and STDOUT.
docker container exec -it distracted_brown sh
This will give me shell access to the container where nginx is running as PID 1. As a side note, you don't normally want to run your CMD process in the background, because when PID 1 exits the container will close.

start docker container interactively

I have a very simple Dockerfile with only one row, namely "FROM ubuntu". I created an image from this Dockerfile with the command docker build -t ubuntu_ .
I know that I can create a new docker container from this image and run it interactively with the command
docker run -it my_new_container
I can later start this new container with the command
docker start my_new_container
As I understand it, I should also be able to use this container interactively with
docker start -i my_new_container
But, it does not work. It just runs and exits. I don't get to the container's command prompt as I do when I use run. What am I doing wrong?
If I understood correctly, you want to see the logs from the container in the terminal, the same as when you run the image with docker run. If that's the case, then try:
docker start -a my_docker_container
You can enter a running container with:
docker exec -it <container name> /bin/bash
example:
docker exec -it my_new_container /bin/bash
You can replace bash with sh if bash is not available in the container.
And if you need to explicitly use a UID, like root = UID 0, you can specify this:
docker exec -it -u 0 my_new_container /bin/bash
which will log you in as root.
Direct answer:
To run an interactive shell for a non-running container, first find the image that the container is based on.
Then:
docker container run -it [yourImage] bash
If your eventual container is based on an alpine image, replace bash with sh.
Technically, this will create a NEW container, but it gets the job done.
EDIT [preferred method]:
An even better way is to give the container something irrelevant to do. A nice solution from the VSCode docs is to put the following command into your service definition of the docker-compose.yml file:
services:
  my-app-service:
    command: ["sleep", "infinity"]
    # other relevant parts of your service def...
The idea here is that you're telling your container to sleep for an infinite amount of time. The container has to maintain this state, which forces it to keep running.
This is how I run containers. Best wishes to whomever needs this nugget of info. We're all learning :)
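With that in place, a typical workflow (using the service name from the example above) would be to bring the service up detached and then open a shell in it:
docker compose up -d my-app-service
docker compose exec my-app-service sh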
You cannot get a shell into a container in its stopped state, or restart it directly with another entry point. If the container keeps exiting and you need to examine it, the only option I know of is to commit the container as a new image and then start a new container from that image, as per a related answer.
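A rough sketch of that commit-and-inspect workflow (the container ID and image name are placeholders):
docker commit <crashed-container-id> debug/snapshot
docker run -it debug/snapshot sh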
If you don't need that container anymore and just want it to stay up, you should run it with a process that will not exit. An example with an Ubuntu image would be (you don't need a Dockerfile for this):
docker run -d --name carrot ubuntu tail -f /dev/null
You will see that this container stays up and you can now run bash on it, to access the CLI:
docker exec -ti carrot bash
If the container has stopped for whatever reason, such as a machine restart, you can bring it back up:
docker start carrot
And it will continue to stay up again.

Running bash script in a dockerfile

I am trying to run multiple js files in a bash script like this. This doesn't work. The container comes up but doesn't run the script. However, when I ssh into the container and run this script, the script runs fine and the node service comes up. Can anyone tell me what I am doing wrong?
Dockerfile
FROM node:8.16
MAINTAINER Vivek
WORKDIR /a
ADD . /a
RUN cd /a && npm install
CMD ["./node.sh"]
Script is as below
node.sh
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
As @hmm mentions, your script might be run, but your container is not waiting for your two sub-processes to finish.
You could change your node.sh to:
#!/bin/bash
set -e
node /a/b/c/d.js &
pid1=$!
node /a/b/c/e.js &
pid2=$!
wait $pid1
wait $pid2
Check out https://stackoverflow.com/a/356154/1086545 for a more general solution for waiting for sub-processes to finish.
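For instance, a variant of the script that exits as soon as either process dies (wait -n requires bash 4.3 or newer), which is usually what you want so the container can be restarted on failure:
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
# block until any background job exits, then propagate its exit status
wait -n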
As @DavidMaze mentions, a container should generally run one "service". It is of course up to you to decide what constitutes a service in your system. As described officially by docker:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.
See https://docs.docker.com/config/containers/multi-service_container/ for more details.
Typically you should run only a single process in a container. However, you can run any number of containers from a single image, and it's easy to set the command a container will run when you start it up.
Set the image's CMD to whatever you think the most common path will be:
CMD ["node", "b/c/d.js"]
If you're using Docker Compose for this, you can specify build: . for both containers, but in the second container, specify an alternate command:
version: '3'
services:
  node-d:
    build: .
  node-e:
    build: .
    command: node b/c/e.js
Using bare docker run, you can specify an alternate command after the image name:
docker build -t me/node-app .
docker run -d --name node-d me/node-app
docker run -d --name node-e me/node-app \
node b/c/e.js
This lets you do things like independently set restart policies for each container; if you run this in a clustered environment like Docker Swarm or Kubernetes, you can independently scale the two containers/pods/processes as well.
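For example, a rough sketch of what the second process could look like as its own Kubernetes Deployment (reusing the me/node-app image built above; the labels and replica count are arbitrary):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-e
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-e
  template:
    metadata:
      labels:
        app: node-e
    spec:
      containers:
      - name: node-e
        image: me/node-app
        command: ["node", "b/c/e.js"]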

docker RUN/CMD is possibly not executed

I'm trying to build a docker file in which I first download and install the Cloud SQL Proxy, before running nodejs.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the docker file, I don't get any errors. Also, the file serviceaccount.json is included and is found.
When running the docker file and checking the logs, I see that the connection in my nodejs app is refused. So there must be a problem with the Cloud SQL proxy. Also, I don't see any output of the Cloud SQL proxy in the logs, only from the nodejs app. When I create a VM and install both packages separately, it works. I get output like "ready for connections".
So somehow, my docker file isn't correct, because the Cloud SQL proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the "sleep 5", it does not work because the server is already running before the cloud_sql_proxy connection is established. With sleep 5, it works.
Is there any other/better way to wait until the first command is completely done?
RUN commands are used to do things that change the file system of the image, like installing packages. They are not meant to start a process when you run a container from the resulting image, as you are trying to do. A Dockerfile is only used to build a static container image. When you run this image, only the command you give to the CMD instruction (node index.js) is executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of CMD instruction.
See Run multiple services in a container
You should ideally have a separate container per process. I'm not sure what cloud_sql_proxy does, but you can probably run it in its own container and run your node process in its own container, linking them over a docker network if required.
You can use docker-compose to manage, start, and stop these multiple containers with a single command. docker-compose also takes care of setting up the network between the containers automatically. You can also declare that your node app depends on the cloud_sql_proxy container, so that docker-compose starts the cloud_sql_proxy container first and then starts the node app.
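A rough docker-compose sketch of that layout (the proxy image name, mount path, and flags are assumptions to verify against the Cloud SQL Proxy docs; the instance placeholders are kept from the question):
version: '3'
services:
  cloud-sql-proxy:
    # assumed image name for the Cloud SQL Proxy
    image: gcr.io/cloudsql-docker/gce-proxy
    command: /cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:0.0.0.0:5432 -credential_file=/config/serviceaccount.json
    volumes:
      - ./serviceaccount.json:/config/serviceaccount.json
  app:
    build: .
    command: node index.js
    ports:
      - "8000:8000"
    depends_on:
      - cloud-sql-proxy
The node app would then connect to the host cloud-sql-proxy on port 5432 instead of localhost.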

Automatically Start Services in Docker Container

I'm doing some initial tests with docker. At the moment I have my images and I can put some containers running with:
docker ps
I do docker attach container_id and start the apache2 service.
Then from the main console I commit the container to the image.
After exiting the container, if I try to start the container or try to run a new container from the committed image, the service is always stopped.
How can I create or restart a container with the services started, for example apache?
EDIT:
I've learned a lot about Docker since originally posting this answer. "Starting services automatically in Docker containers" is not a good usage pattern for Docker. Instead, use something like fleet, Kubernetes, or even Monit/SystemD/Upstart/Init.d/Cron to automatically start services that execute inside Docker containers.
ORIGINAL ANSWER:
If you are starting the container with the command /bin/bash, then you can accomplish this in the manner outlined here: https://stackoverflow.com/a/19872810/2971199
So, if you are starting the container with docker run -i -t IMAGE /bin/bash and if you want to automatically start apache2 when the container is started, edit /etc/bash.bashrc in the container and add /usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf (or whatever your apache2 start command is) to a newline at the end of the file.
Save the changes to your image and restart it with docker run -i -t IMAGE /bin/bash and you will find apache2 running when you attach.
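For example, a sketch of that edit from inside the running container (the apachectl path is the one used in the answer above):
echo '/usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf' >> /etc/bash.bashrc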
Another option would be to use a process manager such as Supervisord to run multiple processes. Someone accomplished this with sshd and mongodb: https://github.com/justone/docker-mongodb
I guess you can't. What you can do is create an image using a Dockerfile and define a CMD in that, which will be executed when the container starts. See the builder documentation for the basics (https://docs.docker.com/reference/builder/) and see Run a service automatically in a docker container for information on keeping your service running.
You don't need to automate this using a Dockerfile. You could also create the image via a manual commit as you do, and run it from the command line. Then you supply the command it should run (which is exactly what the Dockerfile CMD does). You can also override the Dockerfile's CMD in this way: only the latest CMD will be executed, which is the command-line command if you start the container with one. The basic docker run -i -t base /bin/bash command from the documentation is an example. If your command becomes too long, you could of course create a convenience script.
By design, containers started in detached mode exit when the root process used to run the container exits.
You need to start the Apache service in FOREGROUND mode.
docker run -p 8080:80 -d ubuntu/apache apachectl -D FOREGROUND
Reference: https://docs.docker.com/engine/reference/run/#detached-vs-foreground
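For a Dockerfile-based image, the equivalent is to make the foreground Apache process the default command; a minimal sketch, assuming a plain Ubuntu base with apache2 installed during the build:
FROM ubuntu
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y apache2
EXPOSE 80
# apachectl -D FOREGROUND keeps Apache as the container's PID 1
CMD ["apachectl", "-D", "FOREGROUND"]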
Try adding a start script to the ENTRYPOINT in your Dockerfile, like this:
ENTRYPOINT service apache2 restart && bash
