Flask container is not up and running using Docker (python-3.x)

So my issue is very simple, but I still can't get a hold of it: the container isn't behaving the way I want.
sample docker file:
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
COPY ./requirements.txt /requirements.txt
WORKDIR /
RUN pip3 install -r requirements.txt
COPY . /
RUN chmod a+x start.sh
EXPOSE 5000
CMD ["./start.sh"]
sample start.sh
#!/usr/bin/env bash
# sleep 600
nohup python3 /code/app.py &
python3 /code/helloworld_extract.py
sample flask app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return """
    <h1>Python Flask in Docker!</h1>
    <p>A sample web-app for running Flask inside Docker.</p>
    """

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')
So, my issue is that as soon as I build the image and run it with
docker run --name flaskapp -p 5000:5000 docker-flask:latest, I can't reach localhost:5000.
However, if I get inside the container and run an explicit nohup command with python3 app.py, I can reach localhost.
So, why can't I reach localhost with the run command?
The thing is I need to run 2 scripts one is flask and another one is helloworld_extract.py which eventually exit after writing some information to the files.

When your start.sh script says
#!/bin/sh
do_some_stuff_in_the_background &
some_foreground_process
The entire lifecycle of the container is tied to some_foreground_process. In your case, since you clarify that it does some initial data load and exits, once it exits the start.sh script is finished, and so the container exits.
(As a general rule, try to avoid nohup and & in Docker land, since it leads to confusing issues like this.)
I would suggest making the main container process be only the Flask server.
CMD ["python3", "/code/app.py"]
You don't say what's in the loader script. Since its lifecycle is completely different from the main application's, it makes sense to run it separately; you can override the CMD with docker run options. Say you need to populate some shared data in the filesystem. You can:
# Build the image
docker build -t myimage .
# Create a (named) shared filesystem volume
docker volume create extract
# Start the Flask server
docker run -d -p 5000:5000 -v extract:/data myimage
# Run the script to prepopulate the data
docker run -v extract:/data myimage python3 /code/helloworld_extract.py
Notice that the same volume name extract is used in all the commands. The path name /data is an arbitrary choice, though since both commands run on the same image it makes sense that they'd have the same filesystem layout.
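If you'd rather not repeat the volume and port flags by hand every time, roughly the same setup can be sketched in docker-compose (service names are illustrative; the extract volume and /data path match the commands above):

```yaml
version: "3.8"
services:
  web:
    build: .
    command: ["python3", "/code/app.py"]
    ports:
      - "5000:5000"
    volumes:
      - extract:/data
  # One-off loader; run it on demand rather than with `up`
  loader:
    build: .
    command: ["python3", "/code/helloworld_extract.py"]
    volumes:
      - extract:/data
volumes:
  extract:
```

Then docker-compose up -d web starts the Flask server, and docker-compose run --rm loader prepopulates the shared volume, mirroring the two docker run commands above.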

Related

Dockerfile USER cmd vs Linux su command

I am trying to deploy db2 express image to docker using non-root user.
The below code is used to start the db2engine using the root user; it works fine.
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
RUN su - db2inst1 -c "db2start"
CMD ["db2start"]
The below code is used to start the db2engine from the db2inst1 profile, but it gives the below exception during the image build. Please help to resolve this. (I am trying to avoid the su - command.)
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
USER db2inst1
RUN /bin/bash -c ~db2inst1/sqllib/adm/db2start
CMD ["db2start"]
SQL1641N The db2start command failed because one or more DB2 database manager program files was prevented from executing with root privileges by file system mount settings.
Can you show us your Dockerfile please?
It's worth noting that a Dockerfile is used to build an image. You can execute commands while building, but once an image is published, running processes are not maintained in the image definition.
This is the reason that the CMD directive exists, so that you can tell the container which process to start and encapsulate.
If you're using the pre-existing db2 image from IBM on DockerHub (docker pull ibmcom/db2), then you will not need to start the process yourself.
Their quickstart guide demonstrates this with the following example command:
docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=<choose an instance password> -e DBNAME=testdb -v <db storage dir>:/database ibmcom/db2
As you can see, you only specify the image, and leave the default ENTRYPOINT and CMD, resulting in the DB starting.
Their recommendation for building your own container on top of theirs (FROM) is to load all custom scripts into /var/custom, and they will be executed automatically after the main process has started.
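For example, building on their image could look roughly like this (a sketch; seed-data.sh is a hypothetical script name):

```dockerfile
FROM ibmcom/db2
# Scripts placed in /var/custom are picked up by the image's own
# entrypoint and executed after the database manager has started,
# so there is no need to override ENTRYPOINT or CMD.
RUN mkdir -p /var/custom
COPY seed-data.sh /var/custom/
RUN chmod a+x /var/custom/seed-data.sh
```

The container is then started with the same docker run command shown above.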

docker RUN/CMD is possibly not executed

I'm trying to build a docker file in which I first download and install the Cloud SQL Proxy, before running nodejs.
FROM node:13
WORKDIR /usr/src/app
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
COPY . .
RUN npm install
EXPOSE 8000
RUN cloud_sql_proxy -instances=[project-id]:[region]:[instance-id]=tcp:5432 -credential_file=serviceaccount.json &
CMD node index.js
When building the docker file, I don't get any errors. Also, the file serviceaccount.json is included and is found.
When running the docker file and checking the logs, I see that the connection in my nodejs app is refused. So there must be a problem with the Cloud SQL proxy. Also, I don't see any output of the Cloud SQL proxy in the logs, only from the nodejs app. When I create a VM and install both packages separately, it works. I get output like "ready for connections".
So somehow, my docker file isn't correct, because the Cloud SQL proxy is not installed or running. What am I missing?
Edit:
I got it working, but I'm not sure this is the correct way to do it.
This is my dockerfile now:
FROM node:13
WORKDIR /usr/src/app
COPY . .
RUN chmod +x wrapper.sh
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
RUN chmod +x cloud_sql_proxy
RUN npm install
EXPOSE 8000
CMD ./wrapper.sh
And this is my wrapper.sh file:
#!/bin/bash
set -m
./cloud_sql_proxy -instances=phosphor-dev-265913:us-central1:dev-sql=tcp:5432 -credential_file=serviceaccount.json &
sleep 5
node index.js
fg %1
When I remove the "sleep 5", it does not work, because the server is already running before the cloud_sql_proxy connection is established. With the sleep 5, it works.
Is there any other/better way to wait until the first command is completely ready?
RUN commands are used for things that change the file system of the image, like installing packages. They are not meant to start a process when you run a container from the resulting image, which is what you are trying to do. A Dockerfile is only used to build a static container image. When you run this image, only the arguments you give to the CMD instruction (node index.js) are executed inside the container.
If you need to run both cloud_sql_proxy and node inside your container, put them in a shell script and run that shell script as part of CMD instruction.
See Run multiple services in a container
You should ideally have a separate container per process. I'm not sure what cloud_sql_proxy does, but probably you can run it in its own container and run your node process in its own container and link them using docker network if required.
You can use docker-compose to manage, start and stop these multiple containers with single command. docker-compose also takes care of setting up the network between the containers automatically. You can also declare that your node app depends on cloud_sql_proxy container so that docker-compose starts cloud_sql_proxy container first and then it starts the node app.
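As for the sleep 5: one possible improvement is to have the wrapper poll the proxy's TCP port until it actually accepts connections, instead of guessing a delay. This is a sketch assuming bash is available in the image (the node:13 base does include it); it uses bash's /dev/tcp pseudo-device, so no extra tools are needed:

```shell
#!/bin/bash
# Poll HOST:PORT until a TCP connection succeeds or TIMEOUT seconds pass.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}"
  local start
  start=$(date +%s)
  # Opening /dev/tcp/HOST/PORT attempts a real TCP connection; the
  # subshell closes the probe socket again as soon as it exits.
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# In wrapper.sh, the fixed delay could then be replaced with:
#   ./cloud_sql_proxy -instances=... -credential_file=serviceaccount.json &
#   wait_for_port 127.0.0.1 5432 30 || exit 1
#   node index.js
```

This starts node as soon as the proxy is listening, and fails fast (instead of hanging forever) if the proxy never comes up.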

Running Spark history server in Docker to view AWS Glue jobs

I have set up AWS Glue to output Spark event logs so that they can be imported into Spark History Server. AWS provides a CloudFormation stack for this, but I just want to run the history server locally and import the event logs. I want to use Docker for this so colleagues can easily run the same thing.
I'm running into problems because the history server is a daemon process, so the container starts and immediately shuts down.
How can I keep the Docker image alive?
My Dockerfile is as follows
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
I start it using:
docker run -v ${PWD}/events:/tmp/spark-events -p 18080:18080 sparkhistoryserver
You need the SPARK_NO_DAEMONIZE environment variable, see here. This will keep the container alive.
Just modify your Dockerfile as follows:
ARG SPARK_IMAGE=gcr.io/spark-operator/spark:v2.4.4
FROM ${SPARK_IMAGE}
RUN apk --update add coreutils
RUN mkdir /tmp/spark-events
ENV SPARK_NO_DAEMONIZE TRUE
ENTRYPOINT ["/opt/spark/sbin/start-history-server.sh"]
See here for a repo with more detailed readme.

Nodejs Kubernetes Deployment keeps crashing

I've been pulling my hair out for a week and I am close to giving up. Please share your wisdom.
This is my Docker file:
FROM node
RUN apt-get update
RUN mkdir -p /var/www/stationconnect
RUN mkdir -p /var/log/node
WORKDIR /var/www/stationconnect
COPY stationconnect /var/www/stationconnect
RUN chown node:node /var/log/node
COPY ./stationconnect_fromstage/api/config /var/www/stationconnect/api/config
COPY ./etc/stationconnect /etc/stationconnect
WORKDIR /var/www/stationconnect/api
RUN cd /var/www/stationconnect/api
RUN npm install
RUN apt-get install -y vim nano
RUN npm install supervisor forever -g
EXPOSE 8888
USER node
WORKDIR /var/www/stationconnect/api
CMD ["bash"]
It works fine in docker alone running e.g.
docker run -it 6bcee4528c7c
Any advice?
When you create a container, you should have a foreground process to keep the container alive.
What I've done is add a shell script line,
while true; do sleep 1000; done, at the end of my docker-entrypoint.sh, and refer to it with ENTRYPOINT ["/docker-entrypoint.sh"]
Take a look at this issue to find out more.
There’s an example how to make a Nodejs dockerfile, be sure to check it out.
This is kind of obvious. You are running it with an interactive terminal bash session via docker run -it <container>. When you run a container in kube (or in docker without -it), bash will exit immediately, so this is what it is doing in the kube deployment. It is not crashing per se, just terminating as expected.
Change your command to some long-lasting process. Even sleep 1d will do; it will no longer die. Nor will your node app work, though; for that you need your magic command to launch your app in the foreground.
You could add an ENTRYPOINT command to your Dockerfile that executes something that is run in the background indefinitely, say, for example, you run a script my_service.sh. This, in turn, could start a webserver like nginx as a service or simply do a tail -f /dev/null. This will keep your pod running in kubernetes as the main task of this container is not done yet. In your Dockerfile above, bash is executed, but once it runs it finishes and the container completes. Therefore, when you try to do kubectl run NAME --image=YOUR_IMAGE it fails to connect because k8s is terminating the pod that runs your container almost immediately after the new pod is started. This process will continue like this infinitely.
Please see this answer here for an in-line command that can help you run your image as-is for debugging purposes.
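Concretely, the last line of the question's Dockerfile could be changed to start the Node app in the foreground (a sketch; server.js stands in for whatever the entry file in the api directory actually is):

```dockerfile
# Run the app itself as PID 1 instead of an interactive bash session.
# If automatic restarts are wanted, forever (installed above) can also
# run in the foreground: CMD ["forever", "server.js"]
CMD ["node", "server.js"]
```

With a real foreground process as the command, the same image works both under plain docker run and in the Kubernetes deployment.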

Docker ENTRYPOINT to run after volume mount

My Dockerfile has a script to run on ENTRYPOINT.
The container is planned to run with a volume mount where my code resides, and it needs to run a couple of commands once the container is up with the volume mount.
But from the errors I get while running the container, I believe the Docker volume mount happens after the ENTRYPOINT script.
I sure can run the commands with docker exec options once the container is up, but that means more lines of commands to run. Is there any work-around, even by using docker-compose?
Dockerfile :
FROM my-container
WORKDIR /my-mount-dir
ADD startup-script.sh /root/startup-script.sh
ENTRYPOINT ["/root/startup-script.sh"]
Docker Run :
docker run -itd -v /home/user/directory:/my-mount-dir build-container
Note : startup-script.sh includes commands supposed to run on the mounted directory files.
I'm not sure if this is the solution you want, but I've been using this run command, which uses the cat command to supply my script.sh to the container:
docker run -it --name=some_name --rm \
-v "host/path:/path/inside/container" \
image_name \
/bin/bash -c "$(cat ./script.sh)"
In this case the script runs after the mount is complete. I am sure of this as I've used the files from the mounted volumes in the script.
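Since the question mentions docker-compose: the same run can also be captured in a compose file, so the mount and options don't have to be retyped each time (a sketch; the image name and paths are taken from the docker run command in the question):

```yaml
version: "3.8"
services:
  app:
    image: build-container
    volumes:
      - /home/user/directory:/my-mount-dir
    stdin_open: true   # equivalent of -i
    tty: true          # equivalent of -t
```

The ENTRYPOINT from the Dockerfile is kept as-is, and the volume is mounted before the entrypoint script starts, just as with docker run.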
I have seen that in some of my scripts, and it looks like a file system cache problem to me. I use the following hack in my Dockerfile and it works like a charm:
ENTRYPOINT ls /my-mount-dir && /root/startup-script.sh
But then you cannot use the exec (list) form for the ENTRYPOINT.
The entrypoint script does run after the volume mount. I encountered a similar issue but it was actually due to using single quotes around the entrypoint instead of double quotes. Because of this, the container was falling back to using the default entrypoint of /bin/sh. As your question is already answered, I'm leaving this for others who end up here via Google.
This:
ENTRYPOINT ["entrypoint.sh"]
Not this:
ENTRYPOINT ['entrypoint.sh']
