Dockerfile simplest automation - Linux

I'm learning Docker and created a Dockerfile to fill a container with apps. Step 1 is the Dockerfile:
FROM golang:alpine as builder
ADD /src/common $GOPATH/src/common
ADD /src/ins_signal_node $GOPATH/src/ins_signal_node
WORKDIR $GOPATH/src/ins_signal_node
RUN go build -o /go/bin/signal_server .
ADD /src/ins_full_node $GOPATH/src/ins_full_node
WORKDIR $GOPATH/src/ins_full_node
RUN go build -o /go/bin/full_node .
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
No questions here - it's OK. After this I run my script to rebuild and run this container and enter its shell - Step 2:
#!/bin/bash
docker container rm -f full
docker image rm -f ss
docker build -t ss .
winpty docker run -it --name full ss
So at this moment I'm in the container's console, and as scripted I run two commands - Step 3:
cd go/bin/
./run_test.sh
It works!
But after Step 2, when I'm in the console, I want Step 3 - running the starter script - to be automated. So at the end of my Dockerfile from Step 1 I add the line:
CMD ["cd go/bin/ && ./run_test.sh"]
And after I ran Step 2 - now with the full start - I got this error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"cd go/bin/ && ./run_test.sh\": stat cd go/bin/ && ./run_test.sh: no such file or directory": unknown.
And if I run this CMD - cd go/bin/ && ./run_test.sh - manually when I'm in the container's console, it works!
So my question is: what's wrong with my
CMD ["cd go/bin/ && ./run_test.sh"]
UPDATE
OK. I tried now with ["/go/bin/run_test.sh"] and ["./go/bin/run_test.sh"] and got:
initializing…
/go/bin/run_test.sh: line 2: ./signal_server: not found
starting…
/go/bin/run_test.sh: line 10: ./full_node: not found
/go/bin/run_test.sh: line 9: ./full_node: not found
/go/bin/run_test.sh: line 8: ./full_node: not found
/go/bin/run_test.sh: line 7: ./full_node: not found
UPDATE 2
So in my Dockerfile I now have:
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
COPY entry_point.sh /
ENTRYPOINT ["./entry_point.sh"]
and I have entry_point.sh in the root of the build context. If I use ENTRYPOINT, it says:
standard_init_linux.go:190: exec user process caused "no such file or directory"

Use a Docker ENTRYPOINT in your Dockerfile.
Create an entrypoint.sh with the following (note: your final image is alpine, which ships sh but not bash; a #!/bin/bash shebang, or Windows CRLF line endings, is the classic cause of the "no such file or directory" error from UPDATE 2):
#!/bin/sh
set -e
cd /go/bin
./run_test.sh
exec "$@"
In your Dockerfile, on the last line:
ENTRYPOINT ["/entrypoint.sh"]

Just add the run command to your entrypoint script:
cd /go/bin \
&& ./run_test.sh

As per the documentation, the CMD syntax is:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
In the exec (JSON array) form there is no shell, so Docker treats your whole composite command as the name of a single executable - cd and && are never interpreted. You can easily solve this, for example, by putting all the commands in a script such as /bin/myscript.sh; then you just need CMD /bin/myscript.sh.
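Concretely, a minimal sketch of both working variants for this question's paths (assuming run_test.sh sits in /go/bin, as in the Dockerfile above):
CMD cd /go/bin && ./run_test.sh
or, as an exec form with an explicit shell:
CMD ["/bin/sh", "-c", "cd /go/bin && ./run_test.sh"]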

Related

In GitLab, where is the Dockerfile located when it's configured to be generated dynamically?

I have an enhancement to use Kaniko for Docker builds on GitLab, but the pipeline is failing to locate the dynamically generated Dockerfile, with this error:
$ echo "Docker build"
Docker build
$ cd ./src
$ pwd
/builds/group/subgroup/labs/src
$ cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
cp: can't stat '/builds/group/subgroup/labs/src/Dockerfile': No such file or directory
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
For context, the pipeline was designed to generate a Dockerfile dynamically for any particular project:
ci-scripts.yml
.create_dockerfile:
  script: |
    echo "checking dockerfile existence"
    if ! [ -e Dockerfile ]; then
      echo "dockerfile doesn't exist. Trying to create a new dockerfile from csproj."
      docker_entrypoint=$(grep -m 1 AssemblyName ./src/*.csproj | sed -r 's/\s*<[^>]*>//g' | sed -r 's/\r$//g').dll
      cat > Dockerfile << EOF
    FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
    WORKDIR /app
    COPY ./publish .
    ENTRYPOINT dotnet $docker_entrypoint
    EOF
      echo "dockerfile created"
    else
      echo "dockerfile exists"
    fi
And in the main pipeline, all that was needed was to reference ci-scripts.yml as appropriate and do a docker push.
After switching to Kaniko for Docker builds, Kaniko itself expects a Dockerfile at the location ${CI_PROJECT_DIR}/Dockerfile. In my context this is the path /builds/group/subgroup/labs.
And the main pipeline looks like this:
build-push.yml
docker_build_dev:
  tags:
    - aaa
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  only:
    - develop
  stage: docker
  before_script:
    - echo "Docker build"
    - pwd
    - cd ./src
    - pwd
  extends: .create_dockerfile
  variables:
    DEV_TAG: dev-latest
  script:
    - cp /builds/group/subgroup/labs/src/Dockerfile /builds/group/subgroup/labs
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${DEV_TAG}"
In the pipeline above I maintained the dynamically generated Dockerfile at the same path (./src) by switching from the default Docker build directory (/builds/group/subgroup/labs) to /builds/group/subgroup/labs/src. The assumption is that even with dynamic generation the Dockerfile should still be maintained at ./src.
Expected
The dynamically generated Dockerfile should be available at the default Docker build path /builds/group/subgroup/labs after the script ci-script.yml finishes executing.
When I maintain a Dockerfile at the project root (at /src) without Kaniko, the Docker build runs successfully, but once I switch to dynamically generating the Dockerfile (with Kaniko) the pipeline cannot find it. When the Dockerfile is maintained at the project root this way, as opposed to being generated dynamically, I have to copy the file to the Kaniko load path via:
script:
  - cp ./src/Dockerfile /builds/group/subgroup/labs/Dockerfile
  - mkdir -p /kaniko/.docker
I'm drawing a blank on how ci-scripts.yml works (it was done by someone no longer around). I have tried to pwd in the script itself to check which directory it executes from:
.create_dockerfile:
  script: |
    pwd
    echo "checking dockerfile existence"
    ....
    ....
but I get an error:
$ - pwd # collapsed multi-line command
/scripts-1175-34808/step_script: eval: line 123: -: not found
My questions:
Where exactly does GitLab store Dockerfiles that are being generated on the fly?
Is the generated Dockerfile treated as an artifact, and if so, at which path will it be?

docker run failed at "python3: can't open file"

My code is in the directory /test-scripts; the detailed structure is as follows.
/test-scripts
│___Dockerfile
│
└───requirements.txt
....
|
└───IssueAnalyzer.py
Run the following command in directory /test-scripts.
/test-scripts(master)>
docker run --rm -p 5000:5000 --name testdockerscript testdockerscript:1.0
python3: can't open file '/testdocker/IssueAnalyzer.py -u $user -p $pwd': [Errno 2] No such file or directory
And my Dockerfile content is as follows.
FROM python:3.6-buster
ENV PROJECT_DIR=/testdocker
WORKDIR $PROJECT_DIR
COPY requirements.txt $PROJECT_DIR/.
RUN apt-get update \
&& apt-get -yy install libmariadb-dev
RUN pip3 install -r requirements.txt
COPY . $PROJECT_DIR/.
CMD ["python3", "/testdocker/IssueAnalyzer.py -u $user -p $pwd"]
I use $user and $pwd above as stand-ins for the real values in this question.
In my opinion, the file IssueAnalyzer.py will be copied from the current directory /test-scripts to /testdocker, but apparently it is not. Please help me figure out how to change this Dockerfile. Thanks!
You have two different ways to define CMD: the exec form and the shell form.
You're using the exec form, but you're not splitting the command correctly.
For this specific case, I suggest using the shell form:
[...]
CMD python3 /testdocker/IssueAnalyzer.py -u $user -p $pwd
If you really want to use the exec form, every argument must be its own array element. Note that the exec form does not invoke a shell, so $user and $pwd are not expanded there; this only works if they stand for literal values:
[...]
CMD ["python3", "/testdocker/IssueAnalyzer.py", "-u", "$user", "-p", "$pwd"]
Reference: https://docs.docker.com/engine/reference/builder/#cmd

Running a bash script before running a React instance in docker?

I'm trying to run a bash script in a Docker container on initialization, when the container starts, before starting a CRA application. This bash script copies the environment variable GCP_PROJECT_ID into a .env file, as defined in the docker-entrypoint.sh file. I tried using an ENTRYPOINT to run the file, but it doesn't work.
Dockerfile
FROM node:9-alpine
RUN npm install yarn
WORKDIR /usr/app
COPY ./yarn.lock /usr/app
COPY ./package.json /usr/app
RUN yarn install
COPY . /usr/app/
RUN yarn build
FROM node:9-alpine
WORKDIR /usr/app
COPY --from=0 /usr/app/build /usr/app/build
COPY --from=0 /usr/app/package.json /usr/app/package.json
COPY .env /usr/app/.env
COPY docker-entrypoint.sh /usr/app/docker-entrypoint.sh
RUN chmod +x /usr/app/docker-entrypoint.sh
RUN yarn global add serve
RUN npm prune --production
ARG GCP_PROJECT_ID=xxxxx # SAMPLE ENVIRONMENT VARIABLES
ENV GCP_PROJECT_ID=$GCP_PROJECT_ID
ENTRYPOINT [ "/usr/app/docker-entrypoint.sh" ]
CMD [ "serve", "build" ]
docker-entrypoint.sh
#!/bin/sh
printf "\n" >> /usr/app/.env
printf "REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID" >> /usr/app/.env
I can verify that the environment variables do exist, i.e. running docker run -it --entrypoint sh <IMAGE NAME> and then echo $GCP_PROJECT_ID does print xxxxx.
How can I run a bash script before starting up my CRA application in docker?
The ENTRYPOINT script gets passed the CMD as arguments. You need to include a line telling it to actually run that command, typically exec "$@".
#!/bin/sh
if [ -n "$GCP_PROJECT_ID" ]; then
  echo "REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID" >> /usr/app/.env
fi
exec "$@"
If you use this pattern, you don't need --entrypoint:
sudo docker run -e GCP_PROJECT_ID=... imagename cat /usr/app/.env
# should include the REACT_APP_GCP_PROJECT_ID line
Depending on what else is in the file, it's common enough to use docker run -v to inject a config file wholesale, instead of trying to construct it at startup time.
sudo docker run -v $PWD/dot-env:/usr/app/.env imagename
You can use && in the CMD in your Dockerfile:
CMD sh /usr/app/docker-entrypoint.sh && serve build
or put the serve build command into your script, as sketched below.
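A minimal sketch of that second variant, reusing the docker-entrypoint.sh from the question (same paths as above; exec replaces the shell so serve receives signals directly, and no separate CMD is needed):
#!/bin/sh
printf "\n" >> /usr/app/.env
printf "REACT_APP_GCP_PROJECT_ID=$GCP_PROJECT_ID" >> /usr/app/.env
exec serve build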

Docker: mount image's original /docker-entrypoint.sh to a volume in read/write mode

I'm trying to mount the image's original /docker-entrypoint.sh to a volume in read/write mode, in order to be able to change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in Ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory /app/docker-entrypoint.sh (not a file, as I wish) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request (\"OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/app/docker-entrypoint.sh\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container, the container fails to start up and keeps restarting (I assume because /app/docker-entrypoint.sh, and therefore the internal /docker-entrypoint.sh, are empty).
How can I mount the original content of container's /docker-entrypoint.sh to the outside?
If you want to override the docker entrypoint, it must be executable; in other words, you have to chmod +x your_mount_entrypoint.sh on the host before you mount it, otherwise it will throw a permission error, since an entrypoint script must be executable.
Second, as mentioned in the comment, rather than mounting a single file it is better to keep the entrypoint script in a directory, like docker-entrypoint/entrypoint.sh.
Or, if you do want to mount a specific file, both names should be the same, otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint is generated inside the Dockerfile, and you can override it from any script, but the script must be executable and must be mounted over docker-entrypoint.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it, you will see:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now if you want to override it:
Create the script on the host and set its permissions:
host_path/entrypoint/entrypoint.sh
For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory over it, it will hide the content of the docker image.
The workaround:
Mount some directory other than the entrypoint's - name it, say, backup
Add an instruction in the entrypoint to copy the entrypoint to that location at run time
So it will create a new file in the host directory instead
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now if you run this, you will have the docker entrypoint on the host - the opposite direction of what you asked for:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image
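As an aside (not part of the answer above), if the goal is just to get the image's original entrypoint onto the host, docker cp from a created-but-not-started container avoids the mount problem entirely (the container name tmp is arbitrary):
docker create --name tmp my_image
docker cp tmp:/docker-entrypoint/entrypoint.sh ./entrypoint.sh
docker rm tmp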

FileNotFound exception when deploying application to Azure

So, I have a Spring Boot application, and it in part takes in a file and reads the contents. It runs perfectly locally, but when I put it in a Docker image and deploy to Azure, I get a file-not-found error. Here is my Dockerfile:
FROM [place]
VOLUME /tmp
ARG jarFileName
RUN mkdir /app
COPY $jarFileName /app/app.jar
RUN sh -c 'touch /app/app.jar'
ENV JAVA_OPTS=""
COPY startup.sh /app/startup.sh
RUN ["chmod", "+x", "/app/startup.sh"]
CMD ["startup.sh"]
With the Dockerfile you posted, I think there are several places that need attention.
The first is that the line COPY $jarFileName /app/app.jar will fail if you do not pass the variable jarFileName in the docker build command (via --build-arg); ARG values are supplied at build time, not to docker run.
The second is that you should check the build context for the line COPY startup.sh /app/startup.sh, i.e. that the file startup.sh is actually there.
The last is the line CMD ["startup.sh"]: this Dockerfile sets no WORKDIR, so a bare startup.sh is looked up on PATH and a relative ./startup.sh would resolve against /; the safest change is an absolute path, CMD ["/app/startup.sh"]. Usually we execute a shell script with sh script.sh or ./script.sh when the script has the 'x' permission.
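For completeness, a hypothetical build command that supplies the ARG (the jar name here is just an example):
docker build --build-arg jarFileName=myservice.jar -t myapp .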
