docker RUN/CMD is possibly not executed - node.js

I'm trying to build a Dockerfile in which I first download and install the Cloud SQL Proxy before running the Node.js app.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the Dockerfile, I don't get any errors. The file serviceaccount.json is also included and found.
When running the container and checking the logs, I see that the connection in my Node.js app is refused, so there must be a problem with the Cloud SQL Proxy. I also don't see any output from the Cloud SQL Proxy in the logs, only from the Node.js app. When I create a VM and install both packages separately, it works, and I get output like "ready for connections".
So somehow my Dockerfile isn't correct, because the Cloud SQL Proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the "sleep 5", it does not work, because the Node server is already running before the cloud_sql_proxy connection is established. With "sleep 5", it works.
Is there any other/better way to wait until the first command is completely done?

RUN commands are used to do things that change the file system of the image, like installing packages. They are not meant to start a process when you start a container from the resulting image, which is what you are trying to do. A Dockerfile is only used to build a static container image; when you run that image, only the command you give in the CMD instruction (node index.js) is executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of the CMD instruction.
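For example, a minimal sketch of such a wrapper (an assumption on my part, polling until the proxy listens on localhost:5432 instead of sleeping a fixed time):
#!/bin/bash
set -e
# Start the proxy in the background.
./cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
# Poll until the proxy accepts TCP connections (bash's /dev/tcp, no extra tools needed).
until (exec 3<>/dev/tcp/127.0.0.1/5432) 2>/dev/null; do sleep 1; done
# Run node as the foreground process so the container stays alive.
exec node index.js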
See Run multiple services in a container
You should ideally have a separate container per process. I'm not sure what cloud_sql_proxy does, but you can probably run it in its own container, run your node process in another, and link them using a Docker network if required.
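With plain docker that could look roughly like this (a sketch; the image names are hypothetical placeholders):
docker network create sqlnet
# Run the proxy in its own container on the shared network.
docker run -d --name cloud-sql-proxy --network sqlnet proxy-image
# Run the node app on the same network; it can reach the proxy at the hostname "cloud-sql-proxy".
docker run -d --name node-app --network sqlnet -p 8000:8000 node-app-image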
You can use docker-compose to manage, start, and stop these multiple containers with a single command. docker-compose also takes care of setting up the network between the containers automatically. You can also declare that your node app depends on the cloud_sql_proxy container, so that docker-compose starts the cloud_sql_proxy container first and then the node app.

Related

How to know which slash (forward or backslash) will be used as folder separator on a Cloud Run Node.js dockerized environment?

I'm running my express server on a Node.js environment on Cloud Run (docker container).
I need to access the __filename variable in one of my functions.
How can I know which slash will be returned as the folder separator, forward or backslash?
Is this defined only by Node itself, or should I check which OS the Node environment runs on?
On my local Powershell Windows, it comes back as a backslash \.
Before you upload your image to Google's Docker registry, try to run it locally and see how it behaves. It should work the same way in your Cloud Run container.
Cloud Run supports only Linux containers, so it should be a forward slash: /
You can try to run it locally with the following commands:
Navigate to the folder containing your Dockerfile
Build the container with docker build -t myimage .
Wait for the build to complete...
Now run the container with: docker run myimage
You may also want to expose ports from the container on your machine. You can do that with this command: docker run -p 3000:3000 myimage (it will expose your container at http://localhost:3000)
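Once it is running locally, you can also check the separator Node sees inside the (Linux) container directly, for example (assuming node is on the image's PATH, as it is in Node.js base images):
docker run myimage node -e "console.log(require('path').sep)"
# Prints / on a Linux-based image.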

Running bash script in a dockerfile

I am trying to run multiple JS files from a bash script, like this. It doesn't work: the container comes up but doesn't run the script. However, when I SSH into the container and run the script manually, it runs fine and the node service comes up. Can anyone tell me what I am doing wrong?
Dockerfile
FROM node:8.16
MAINTAINER Vivek
WORKDIR /a
ADD . /a
RUN cd /a && npm install
CMD ["./node.sh"]
Script is as below
node.sh
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
As @hmm mentions, your script does run, but your container is not waiting for your two sub-processes to finish.
You could change your node.sh to:
#!/bin/bash
set -e
node /a/b/c/d.js &
pid1=$!
node /a/b/c/e.js &
pid2=$!
wait $pid1
wait $pid2
Check out https://stackoverflow.com/a/356154/1086545 for a more general solution for waiting for sub-processes to finish.
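If you instead want the container to exit as soon as either script dies (rather than waiting for both), one option is bash's wait -n (available in bash 4.3+); a small sketch:
#!/bin/bash
set -e
node /a/b/c/d.js &
node /a/b/c/e.js &
# Return as soon as the first background job exits; the script then ends, so the container stops.
wait -n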
As @DavidMaze mentions, a container should generally run one "service". It is of course up to you to decide what constitutes a service in your system. As described officially by Docker:
It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.
See https://docs.docker.com/config/containers/multi-service_container/ for more details.
Typically you should run only a single process in a container. However, you can run any number of containers from a single image, and it's easy to set the command a container will run when you start it up.
Set the image's CMD to whatever you think the most common path will be:
CMD ["node", "b/c/d.js"]
If you're using Docker Compose for this, you can specify build: . for both containers, but in the second container, specify an alternate command:
version: '3'
services:
  node-d:
    build: .
  node-e:
    build: .
    command: node b/c/e.js
Using bare docker run, you can specify an alternate command after the image name:
docker build -t me/node-app .
docker run -d --name node-d me/node-app
docker run -d --name node-e me/node-app \
node b/c/e.js
This lets you do things like independently set restart policies for each container; if you run this in a clustered environment like Docker Swarm or Kubernetes, you can independently scale the two containers/pods/processes as well.
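For instance, restart policies can be adjusted per container after they are started (a small sketch using docker update on the containers created above):
docker update --restart unless-stopped node-d
docker update --restart on-failure node-e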

Running Spark history server in Docker to view AWS Glue jobs

I have set up AWS Glue to output Spark event logs so that they can be imported into Spark History Server. AWS provides a CloudFormation stack for this, but I just want to run the history server locally and import the event logs. I want to use Docker for this so colleagues can easily run the same thing.
I'm running into problems because the history server is a daemon process, so the container starts and immediately shuts down.
How can I keep the Docker image alive?
My Dockerfile is as follows
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
I start it using:
docker run -v ${PWD}/events:/tmp/spark-events -p 18080:18080 sparkhistoryserver
You need the SPARK_NO_DAEMONIZE environment variable; see here. This will keep the container alive.
Just modify your Dockerfile as follows:
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENV SPARK_NO_DAEMONIZE TRUE
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
See here for a repo with more detailed readme.
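If you would rather not modify the Dockerfile, setting the variable at run time should have the same effect, reusing the run command from the question:
docker run -e SPARK_NO_DAEMONIZE=true -v ${PWD}/events:/tmp/spark-events -p 18080:18080 sparkhistoryserver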

Flask container is not up and running using docker

So my issue is fairly simple, but I still can't get hold of it, and it's not behaving the way I want.
sample docker file:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
COPY ./requirements.txt /requirements.txt
WORKDIR /
RUN pip3 install -r requirements.txt
COPY . /
RUN chmod a+x start.sh
EXPOSE 5000
CMD ["./start.sh"]
sample start.sh
#!/usr/bin/env bash
# sleep 600
nohup python3 /code/app.py &
python3 /code/helloworld_extract.py
sample flask app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def index():
    return """
    <h1>Python Flask in Docker!</h1>
    <p>A sample web-app for running Flask inside Docker.</p>
    """

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
So my issue is that as soon as I build the image and run it with
docker run --name flaskapp -p 5000:5000 docker-flask:latest
I can't reach localhost:5000.
But if I get inside the container and explicitly run nohup python3 app.py &, I can reach localhost.
So why can't I reach localhost with the run command?
The thing is, I need to run two scripts: one is the Flask app and the other is helloworld_extract.py, which eventually exits after writing some information to files.
When your start.sh script says
#!/bin/sh
do_some_stuff_in_the_background &
some_foreground_process
The entire lifecycle of the container is tied to the some_foreground_process. In your case, since you clarify that it's doing some initial data load and exits, once it exits, the start.sh script is finished, and so the container exits.
(As a general rule, try to avoid nohup and & in Docker land, since it leads to confusing issues like this.)
I would suggest making the main container process be only the Flask server.
CMD ["python3", "/code/app.py"]
You don't say what's in the loader script. Since its lifecycle is completely different from the main application, it makes sense to run it separately; you can replace the CMD with docker run options. Say you need to populate some shared data in the filesystem. You can:
# Build the image
docker build -t myimage .
# Create a (named) shared filesystem volume
docker volume create extract
# Start the Flask server
docker run -d -p 5000:5000 -v extract:/data myimage
# Run the script to prepopulate the data
docker run -v extract:/data myimage python3 /code/helloworld_extract.py
Notice that the same volume name extract is used in all the commands. The path name /data is an arbitrary choice, though since both commands run on the same image it makes sense that they'd have the same filesystem layout.
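To double-check that the loader actually wrote its data, you can run a throwaway container against the same volume, for example:
docker run --rm -v extract:/data myimage ls -l /data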

Nodejs Kubernetes Deployment keeps crashing

I've been pulling my hair out for a week and I am close to giving up. Please share your wisdom.
This is my Docker file:
FROM node
RUN apt-get update
RUN mkdir -p /var/www/stationconnect
RUN mkdir -p /var/log/node
WORKDIR /var/www/stationconnect
COPY stationconnect /var/www/stationconnect
RUN chown node:node /var/log/node
COPY ./stationconnect_fromstage/api/config /var/www/stationconnect/api/config
COPY ./etc/stationconnect /etc/stationconnect
WORKDIR /var/www/stationconnect/api
RUN cd /var/www/stationconnect/api
RUN npm install
RUN apt-get install -y vim nano
RUN npm install supervisor forever -g
EXPOSE 8888
USER node
WORKDIR /var/www/stationconnect/api
CMD ["bash"]
It works fine in docker alone running e.g.
docker run -it 6bcee4528c7c
Any advice?
When you create a container, you should have a foreground process to keep the container alive.
What I've done is add a shell script line
while true; do sleep 1000; done
at the end of my docker-entrypoint.sh, and refer to it in ENTRYPOINT ["/docker-entrypoint.sh"].
Take a look at this issue to find out more.
There's an example of how to make a Node.js Dockerfile, so be sure to check it out.
This is kind of obvious: you are running it with an interactive terminal bash session via docker run -it <container>. When you run a container in kube (or in docker without -it), bash will exit immediately, so this is what it is doing in the kube deployment. It's not crashing per se, just terminating as expected.
Change your command to some long-lasting process. Even sleep 1d will do - it will no longer die. Nor will your node app work, though... for that you need your magic command to launch your app in the foreground.
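For example, a minimal sketch of that last step, replacing the Dockerfile's final CMD (server.js is a hypothetical entry point; use whatever file actually starts your API):
# Run the app itself in the foreground instead of an interactive shell.
CMD ["node", "server.js"]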
You could add an ENTRYPOINT command to your Dockerfile that executes something that keeps running indefinitely, say, for example, a script my_service.sh. This, in turn, could start a webserver like nginx as a service or simply do a tail -f /dev/null. This will keep your pod running in Kubernetes, as the main task of this container is not done yet. In your Dockerfile above, bash is executed, but once it runs it finishes and the container completes. Therefore, when you try to do kubectl run NAME --image=YOUR_IMAGE it fails to connect, because k8s terminates the pod that runs your container almost immediately after the new pod is started, and this process continues indefinitely.
Please see this answer here for an in-line command that can help you run your image as-is for debugging purposes...
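One common pattern for that kind of debugging (a sketch on my part; the linked answer may differ) is to override the command so the pod stays up long enough to exec into it:
# Pod name is arbitrary; YOUR_IMAGE as above.
kubectl run stationconnect-debug --image=YOUR_IMAGE --restart=Never --command -- sleep 1d
kubectl exec -it stationconnect-debug -- bash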
