I'm trying to run a bash script on initialization, when the container starts, before launching a CRA application. The script, docker-entrypoint.sh, copies the environment variable GCP_PROJECT_ID into a .env file. I tried using an ENTRYPOINT to run the file, but it doesn't work.
Dockerfile
FROM node:9-alpine
RUN npm install yarn
WORKDIR /usr/app
COPY ./yarn.lock /usr/app
COPY ./package.json /usr/app
RUN yarn install
COPY . /usr/app/
RUN yarn build
FROM node:9-alpine
WORKDIR /usr/app
COPY --from=0 /usr/app/build /usr/app/build
COPY --from=0 /usr/app/package.json /usr/app/package.json
COPY .env /usr/app/.env
COPY docker-entrypoint.sh /usr/app/docker-entrypoint.sh
RUN chmod +x /usr/app/docker-entrypoint.sh
RUN yarn global add serve
RUN npm prune --production
# SAMPLE ENVIRONMENT VARIABLE (Dockerfile instructions don't support trailing comments)
ARG GCP_PROJECT_ID=xxxxx
ENV GCP_PROJECT_ID=$GCP_PROJECT_ID
ENTRYPOINT [ "/usr/app/docker-entrypoint.sh" ]
CMD [ "serve", "build" ]
docker-entrypoint.sh
#!/bin/sh
printf "\n" >> /usr/app/.env
printf "REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID" >> /usr/app/.env
I can verify that the environment variable does exist: running docker run -it --entrypoint sh <IMAGE NAME> and then echo $GCP_PROJECT_ID does print xxxxx.
How can I run a bash script before starting up my CRA application in docker?
The ENTRYPOINT script gets passed the CMD as its arguments. You need to include a line telling it to actually run that command, typically exec "$@".
#!/bin/sh
if [ -n "$GCP_PROJECT_ID" ]; then
echo "REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID" >> /usr/app/.env
fi
exec "$@"
If you use this pattern, you don't need --entrypoint; any command you pass to docker run is executed by the entrypoint script:
sudo docker run -e GCP_PROJECT_ID=... imagename cat /usr/app/.env
# should include the REACT_APP_GCP_PROJECT_ID line
Depending on what else is in the file, it's common enough to use docker run -v to inject a config file wholesale, instead of trying to construct it at startup time.
sudo docker run -v $PWD/dot-env:/usr/app/.env imagename
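Here dot-env is just a plain key=value file on the host, e.g.:
REACT_APP_GCP_PROJECT_ID=xxxxx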
You can use && in the CMD in your Dockerfile:
CMD sh /usr/app/docker-entrypoint.sh && serve build
or put the serve build command into your script, as sketched below.
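For the second variant, a minimal sketch of the script (assuming the same paths as in the question; exec replaces the shell, so serve runs as PID 1 and receives signals):
#!/bin/sh
printf "\n" >> /usr/app/.env
printf "REACT_APP_GCP_PROJECT_ID=%s\n" "$GCP_PROJECT_ID" >> /usr/app/.env
# start the static file server directly instead of relying on CMD
exec serve build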
My code is in the directory /test-scripts; its structure is as follows.
/test-scripts
├── Dockerfile
├── requirements.txt
├── ...
└── IssueAnalyzer.py
Run the following command in directory /test-scripts.
/test-scripts(master)>
docker run --rm -p 5000:5000 --name testdockerscript testdockerscript:1.0
python3: can't open file '/testdocker/IssueAnalyzer.py -u $user -p $pwd': [Errno 2] No such file or directory
And my Dockerfile content is as follows.
FROM python:3.6-buster
ENV PROJECT_DIR=/testdocker
WORKDIR $PROJECT_DIR
COPY requirements.txt $PROJECT_DIR/.
RUN apt-get update \
&& apt-get -y install libmariadb-dev
RUN pip3 install -r requirements.txt
COPY . $PROJECT_DIR/.
CMD ["python3", "/testdocker/IssueAnalyzer.py -u $user -p $pwd"]
($user and $pwd above stand in for the real values in this question.)
I expected the file IssueAnalyzer.py to be copied from the current directory /test-scripts into /testdocker, but apparently it is not. Please let me know how to change this Dockerfile. Thanks!
There are two different ways to define CMD: the exec form and the shell form.
You're using the exec form, but you're not splitting the command correctly.
For this specific case, I suggest using the shell form:
[...]
CMD python3 /testdocker/IssueAnalyzer.py -u $user -p $pwd
If you really want to use the exec form, each argument must be its own array element:
[...]
CMD ["python3", "/testdocker/IssueAnalyzer.py", "-u", "$user", "-p", "$pwd"]
Note that the exec form does not run a shell, so $user and $pwd will not be expanded; they are passed literally.
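If you need the variables expanded at run time while keeping the exec form, one option (a sketch) is to invoke the shell explicitly as the executable:
CMD ["/bin/sh", "-c", "python3 /testdocker/IssueAnalyzer.py -u $user -p $pwd"]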
Reference: https://docs.docker.com/engine/reference/builder/#cmd
I'm trying to bind-mount the image's original /docker-entrypoint.sh in read/write mode, in order to be able to change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in Ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory named /app/docker-entrypoint.sh (not a file, as intended) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request (\"OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/app/docker-entrypoint.sh\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container, the container fails to start and keeps restarting (I assume because /app/docker-entrypoint.sh, and therefore the internal /docker-entrypoint.sh, is empty).
How can I mount the original content of container's /docker-entrypoint.sh to the outside?
If you want to override the entrypoint, the mounted script must be executable: run chmod +x your_mount_entrypoint.sh on the host before mounting, otherwise you will get a permission error, since an entrypoint script has to be executable.
Second, as mentioned in the comments, rather than mounting a single file it is better to keep the entrypoint script in a directory, e.g. docker-entrypoint/entrypoint.sh.
If you do want to mount a specific file, the source and target file names must match, otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or, equivalently (rw is the default mount mode):
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint script is generated inside the Dockerfile, and you can override it with any script, provided the script is executable and is mounted over the docker-entrypoint path.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it, you will see:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now, if you want to override it, create host_path/entrypoint/entrypoint.sh and make it executable. For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory over the entrypoint directory, it will hide that directory's contents from the Docker image.
The workaround:
Mount some directory other than the entrypoint directory (e.g. /for_hostsystem in the example below)
Add an instruction to the entrypoint that copies the entrypoint script to that location at run time
This way, a new file is created in the host directory instead:
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now if you run it, you will have the Docker entrypoint copied to the host, which is the opposite direction to the one you asked about:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image
I am trying to build an API, wrapped in a Docker image, that serves an OpenVINO model. How do I run setupvars.sh from the Dockerfile itself so that my application can access it?
I have tried running the script using RUN, e.g. RUN /bin/bash setupvars.sh
or RUN ./setupvars.sh. However, neither works, and I get ModuleNotFoundError: No module named 'openvino'.
RUN $INSTALL_DIR/install_dependencies/install_openvino_dependencies.sh
RUN cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites && sudo ./install_prerequisites_tf.sh
COPY . /app
WORKDIR /app
RUN apt autoremove -y && \
rm -rf /openvino /var/lib/apt/lists/*
RUN /bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh"
RUN echo "source $INSTALL_DIR/bin/setupvars.sh" >> /root/.bashrc
CMD ["/bin/bash"]
RUN python3 -m pip install opencv-python
RUN python3 test.py
I want OpenVINO to be accessible to the gunicorn application that will serve the model inside the Docker image.
The following commands work for me.
ARG OPENVINO_DIR=/opt/intel/computer_vision_sdk
# Unzip the OpenVINO installer
RUN cd ${APP_DIR} && tar -xvzf l_openvino_toolkit*
# installing OpenVINO dependencies
RUN cd ${APP_DIR}/l_openvino_toolkit* && \
./install_cv_sdk_dependencies.sh
# installing OpenVINO itself
RUN cd ${APP_DIR}/l_openvino_toolkit* && \
sed -i 's/decline/accept/g' silent.cfg && \
./install.sh --silent silent.cfg
# Setup the OpenVINO environment
RUN /bin/bash -c "source ${OPENVINO_DIR}/bin/setupvars.sh"
You need to re-run it every time you start the container: a RUN source ... only sets those variables for that single build step's shell session, so they are not persisted into the image.
Option 1:
Run your application something like this:
CMD /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && python test.py"
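Since the goal is a gunicorn application serving the model, the same pattern applies there; a sketch, where app:app is a hypothetical module:variable pair to replace with your own:
CMD /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && exec gunicorn --bind 0.0.0.0:8000 app:app"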
Option 2 (not tested):
Add the source command to your .bashrc so it runs on every shell startup. Note that the appended line should be the source command itself (wrapping it in /bin/bash -c would source the file in a throwaway subshell), and that .bashrc is only read by interactive bash shells, so the command needs to run through one:
# Assuming running as root
RUN echo "source /opt/intel/openvino/bin/setupvars.sh" >> /root/.bashrc
CMD ["/bin/bash", "-ic", "python test.py"]
For the rest of the Dockerfile, there is a guide here (also not tested, and it doesn't cover the above):
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_docker_linux.html
As mentioned in the two previous answers, the setupvars.sh script sets the environment variables required by OpenVINO.
But rather than sourcing it every time, you can bake the variables into your Dockerfile. While writing your Dockerfile, run:
CMD /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && printenv"
This will give you the values that the environment variables are set to. You might also want to run printenv without setting the OpenVINO variables:
CMD /bin/bash -c "printenv"
Comparing the two outputs will let you figure out exactly what the setupvars.sh script is setting.
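One way to capture that difference (a sketch; my-openvino-image is a placeholder for your image tag):
docker run --rm my-openvino-image /bin/bash -c "printenv | sort" > before.txt
docker run --rm my-openvino-image /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && printenv | sort" > after.txt
diff before.txt after.txt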
Once you know the values set by the script, you can set these as part of the Dockerfile using the ENV instruction. I wouldn't copy this because it's likely to be specific to your setup, but in my case, this ended up looking like:
ENV PATH=/opt/intel/openvino/deployment_tools/model_optimizer:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LD_LIBRARY_PATH=/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
ENV INTEL_CVSDK_DIR=/opt/intel/openvino
ENV OpenCV_DIR=/opt/intel/openvino/opencv/cmake
ENV TBB_DIR=/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/cmake
# The next one will be whatever your working directory is
ENV PWD=/workspace
ENV InferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share
ENV ngraph_DIR=/opt/intel/openvino/deployment_tools/ngraph/cmake
ENV SHLVL=1
ENV PYTHONPATH=/opt/intel/openvino/python/python3.6:/opt/intel/openvino/python/python3:/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit:/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker:/opt/intel/openvino/deployment_tools/model_optimizer
ENV HDDL_INSTALL_DIR=/opt/intel/openvino/deployment_tools/inference_engine/external/hddl
ENV _=/usr/bin/printenv
I'm learning Docker and wrote a Dockerfile to fill a container with apps. Step 1 is the Dockerfile:
FROM golang:alpine as builder
ADD /src/common $GOPATH/src/common
ADD /src/ins_signal_node $GOPATH/src/ins_signal_node
WORKDIR $GOPATH/src/ins_signal_node
RUN go build -o /go/bin/signal_server .
ADD /src/ins_full_node $GOPATH/src/ins_full_node
WORKDIR $GOPATH/src/ins_full_node
RUN go build -o /go/bin/full_node .
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
No questions here, it's OK. After this I run my script to rebuild and run this container and enter its shell (Step 2):
#!/bin/bash
docker container rm -f full
docker image rm -f ss
docker build -t ss .
winpty docker run -it --name full ss
So at this moment I'm in the container's console, and as scripted I run two commands (Step 3):
cd go/bin/
./run_test.sh
It works!
But after Step 2, when I'm in the console, I want Step 3, running the starter script, to be automated. So at the end of my Dockerfile from Step 1 I add the line:
CMD ["cd go/bin/ && ./run_test.sh"]
And after running Step 2, now with the full start, I get the error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"cd go/bin/ && ./run_test.sh\": stat cd go/bin/ && ./run_test.sh: no such file or directory": unknown.
And if I run this command, cd go/bin/ && ./run_test.sh, manually from the container's console, it works!
So my question: what's wrong with my
CMD ["cd go/bin/ && ./run_test.sh"]
UPDATE
OK. I tried now with ["/go/bin/run_test.sh"] and ["./go/bin/run_test.sh"] and got:
initializing…
/go/bin/run_test.sh: line 2: ./signal_server: not found
starting…
/go/bin/run_test.sh: line 10: ./full_node: not found
/go/bin/run_test.sh: line 9: ./full_node: not found
/go/bin/run_test.sh: line 8: ./full_node: not found
/go/bin/run_test.sh: line 7: ./full_node: not found
UPDATE 2
So in my Dockerfile I create
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
COPY entry_point.sh /
ENTRYPOINT ["./entry_point.sh"]
and I have entry_point.sh in the root of the build context. If I use this ENTRYPOINT, it says:
standard_init_linux.go:190: exec user process caused "no such file or directory"
Use a Docker entrypoint in your Dockerfile.
Create an entrypoint.sh with:
#!/bin/sh
set -e
cd /go/bin
./run_test.sh
exec "$@"
Note the #!/bin/sh shebang: your final image is based on alpine, which does not ship bash, and a #!/bin/bash shebang (or Windows CRLF line endings) is the usual cause of the standard_init_linux.go "no such file or directory" error from your update.
In your Dockerfile, on the last line:
ENTRYPOINT ["/entrypoint.sh"]
(Keep a CMD, or pass a command to docker run, so that exec "$@" has something to run.)
Just add the run command to your entrypoint script:
cd /go/bin \
&& ./run_test.sh
As per the documentation, the CMD syntax is:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
Docker treats your composite command as the name of a single executable because you passed it as one element of the exec-form array. You can easily solve this, for example, by putting all the commands in a script such as /bin/myscript.sh; then you just need CMD /bin/myscript.sh.
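Alternatively, the shell form from the list above runs the string through /bin/sh -c, so the composite command works as-is:
CMD cd /go/bin && ./run_test.sh
or, equivalently, you can invoke the shell explicitly in exec form:
CMD ["/bin/sh", "-c", "cd /go/bin && ./run_test.sh"]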
I'd like to run CMD ["npm", "run", "dev"] if a dev script is present in package.json, otherwise CMD ["npm", "start"]
Is there an idiomatic Docker way of doing this? I suppose my CMD could be a shell script that checks for the presence of an 'npm run dev' script. Is there a cleaner way than that?
You could dynamically create, at build time, a script containing the appropriate command:
RUN bash -c "if npm run | grep -q dev ; then echo npm run dev > run.sh; else echo npm start > run.sh; fi; chmod 777 run.sh;"
CMD ./run.sh
if npm run | grep -q dev checks whether a dev script is present in package.json: running npm run with no arguments lists all runnable scripts, and grep -q dev returns true only if dev is among them.
echo command > run.sh creates a file containing the provided command.
chmod 777 run.sh makes run.sh executable.
CMD ./run.sh executes the created file.
The Dockerfile syntax does not cover such a use case; you have to rely on scripts to achieve it.
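A runtime variant of the same idea is a small entrypoint script that performs the check when the container starts instead of at build time; a sketch, assuming it is copied into the image as /entrypoint.sh, made executable, and set as the ENTRYPOINT:
#!/bin/sh
# pick the npm script at container start: prefer dev when it exists
if npm run | grep -q dev; then
  exec npm run dev
else
  exec npm start
fi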
If CMD is the only place you want to be different based on the node environment, then it is pretty straightforward. This is what I use sometimes:
CMD yarn "$(if [ $NODE_ENV = 'production' ] ; then echo 'start' ; else echo 'dev'; fi)"
Note that this uses yarn, but you can change it to npm.
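For example, the npm equivalent would be:
CMD npm run "$(if [ "$NODE_ENV" = 'production' ] ; then echo 'start' ; else echo 'dev'; fi)"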
Then the mentioned scripts (yarn start and yarn dev) will be triggered based on the value of NODE_ENV.
NODE_ENV is set to 'production' by default in most cloud providers BTW.
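For local testing you can set it explicitly (imagename is a placeholder):
docker run -e NODE_ENV=production imagename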
Reading your package.json from within your Dockerfile doesn't make much sense. What you can do instead is keep separate Dockerfiles: one for the dev environment and another that doesn't contain the "dev" command. This approach also gives you more control over the Docker images you build, as sketched below.
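A minimal sketch of that split (file names, base image, and build commands are illustrative):

# Dockerfile.dev
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

# Dockerfile
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["npm", "start"]

Build them with docker build -f Dockerfile.dev -t myapp:dev . and docker build -t myapp:prod . respectively.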