FileNotFound exception when deploying application to Azure - azure

So, I have a Spring Boot application, and part of it takes in a file and reads the contents. It runs perfectly locally, but when I put it in a Docker image and deploy it to Azure, I get a file-not-found error. Here is my Dockerfile:
FROM [place]
VOLUME /tmp
ARG jarFileName
RUN mkdir /app
COPY $jarFileName /app/app.jar
RUN sh -c 'touch /app/app.jar'
ENV JAVA_OPTS=""
COPY startup.sh /app/startup.sh
RUN ["chmod", "+x", "/app/startup.sh"]
CMD ["startup.sh"]

With the Dockerfile you posted, I think there are several things to pay attention to.
The first is that the line COPY $jarFileName /app/app.jar will fail if you do not pass the variable jarFileName at build time with docker build --build-arg jarFileName=<your-jar>.
The second is that you should check that the file startup.sh exists in the build context, otherwise the line COPY startup.sh /app/startup.sh will fail.
The last is the line CMD ["startup.sh"]: I think you should change it to CMD ["/app/startup.sh"] (or set WORKDIR /app and use CMD ["./startup.sh"]). Usually, we execute a shell script with sh script.sh, or with ./script.sh if the script has the execute permission.
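Putting those points together, a minimal corrected sketch (the base image placeholder is kept from the question, and the jar path in the example build command is only an illustration):
FROM [place]
VOLUME /tmp
ARG jarFileName
RUN mkdir /app
COPY $jarFileName /app/app.jar
COPY startup.sh /app/startup.sh
RUN chmod +x /app/startup.sh
ENV JAVA_OPTS=""
WORKDIR /app
CMD ["./startup.sh"]
Built with, for example: docker build --build-arg jarFileName=target/myapp.jar -t myapp .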

Related

Execute two node commands from a bash file as Docker CMD

I am trying to run a nodejs app from a file named start.sh.
I am using this file because I want to execute two processes serially as the CMD in a Dockerfile.
This is the content:
#!/bin/sh
node /get-secrets.mjs -c /secrets-config.json -t /.env.production
npm run start -p 8000
As you can notice, I want to perform two things:
First, execute the get-secrets.mjs file, which is a small script that internally uses commander.js to read the flags -c and -t (--config and --target respectively). These flags receive string arguments used to locate files.
The second command just starts my Node.js app.
I have no idea how I should write this file, because those commands work on my machine, but in the container it seems my format is wrong.
This is the problem so far:
How should I pass the arguments to the mjs script?
If someone else finds this useful, I solved my issue using this:
FROM node:16.14-alpine
# Dependencies that my script needs
RUN npm i -g @aws-sdk/client-secrets-manager@3.121.0 commander@9.3.0
# Folder for my source code
WORKDIR /app
# SOME OTHER COMMANDS
# ...
# In this folder I have the start.sh and the get-secrets.mjs files
COPY ${SOURCE_PATH}/server-helpers/* ./
# This was required since I am working on windows
RUN sed -i -e 's/\r$//' start.sh
RUN chmod +x start.sh
ENTRYPOINT ["/app/start.sh" ]

docker RUN mkdir does not work when folder exist in prev image

The only difference between the two versions is that the "dev" folder already exists in the centos image.
Check the comments in this piece of code (the error appears while executing docker build). I'd appreciate it if anyone can explain why.
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
# /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
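For illustration, the two usual ways to run something from a specific directory in a Dockerfile (the paths and script name are only placeholders):
# Option 1: chain the cd and the command in one RUN, so they share a shell
RUN cd /some/directory && ./do-something.sh
# Option 2: let WORKDIR change the directory for all later instructions
WORKDIR /some/directory
RUN ./do-something.sh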
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
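A minimal sketch of that workaround, assuming an entrypoint script is added to the image (the script name and the staging path /docker are my choices, not from the question):
FROM centos:latest
# stage the file somewhere that survives between build steps
COPY text.txt /docker/text.txt
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/bin/bash"]
with entrypoint.sh copying the file into /dev at runtime:
#!/bin/sh
# /dev exists now that the container is running, so the copy persists for its lifetime
mkdir -p /dev/script
cp /docker/text.txt /dev/script/text.txt
exec "$@"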

Why do I get "s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory"?

I'm learning about s6 and I've come to a point where I want to use s6-log. I have the following Dockerfile
FROM alpine:3.10
RUN apk --no-cache --update add s6
WORKDIR /run/service
COPY \
./rootfs/run \
./rootfs/app /run/service/
CMD ["s6-supervise", "."]
with ./rootfs/app being just a simple sh script
#!/bin/sh
while true;
do
sleep 1
printf "Hello %s\n" "$(date)"
done
and run being
#!/bin/execlineb -P
fdmove -c 2 1
s6-log -b n20 s1000000 t /var/log/app/
/run/service/app
Why do I keep getting
s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory
? Without the s6-log line it all works fine.
So it seems that I've been doing this incorrectly. Namely, I should've used s6-svscan instead of s6-supervise.
Using s6-svscan I can create a log/ subdirectory in my service's directory so that my app's stdout is redirected to logger's stdin as described on s6-svscan's website:
For every new subdirectory dir it finds, the scanner spawns a s6-supervise process on it. If dir/log exists, it spawns a s6-supervise process on both dir and dir/log, and maintains a never-closing pipe from the service's stdout to the logger's stdin.
I've written the logger's run script like so:
#!/bin/execlineb -P
s6-log -b n20 s512 T /var/log/app
and with that I've changed the CMD to
CMD ["s6-svscan", "/run/"]
where /run/service/ contains both the run script for my service (without the s6-log call) and a log subdirectory with the run script above.
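For reference, the resulting layout and Dockerfile look roughly like this (the file name log-run for the copied logger script is my own assumption; the answer does not name it):
# /run/service/run      - the service's run script (no s6-log call)
# /run/service/log/run  - the s6-log run script shown above
FROM alpine:3.10
RUN apk --no-cache --update add s6
RUN mkdir -p /run/service/log /var/log/app
COPY ./rootfs/run ./rootfs/app /run/service/
COPY ./rootfs/log-run /run/service/log/run
CMD ["s6-svscan", "/run"]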

Docker: mount image's original /docker-entrypoint.sh to a volume in read/write mode

I am trying to mount the image's original /docker-entrypoint.sh to a volume in read/write mode, in order to be able to change it easily from outside (without entering the container) and then restart the container to observe the changes.
I do it (in ansible) like this:
/app/docker-entrypoint.sh:/docker-entrypoint.sh:rw
If /app/docker-entrypoint.sh doesn't exist on the host, a directory /app/docker-entrypoint.sh (not a file, as desired) is created, and I get the following error:
Error starting container e40a90eef1525f554e6078e20b3ab5d1c4b27ad2a7d73aa3bf4f7c6aa337be4f: 400 Client Error: Bad Request (\"OCI runtime create failed: container_linux.go:348: starting container process caused \"process_linux.go:402: container init caused \\\"rootfs_linux.go:58: mounting \\\\\\\"/app/docker-entrypoint.sh\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs\\\\\\\" at \\\\\\\"/var/lib/docker/devicemapper/mnt/4d3e4f4082ca621ab5f3a4ec3f810b967634b1703fd71ec00b4860c59494659a/rootfs/docker-entrypoint.sh\\\\\\\" caused \\\\\\\"not a directory\\\\\\\"\\\"\": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
If I touch /app/docker-entrypoint.sh (and set proper permissions) before launching the container - the container fails to start up and keeps restarting (I assume because the /app/docker-entrypoint.sh and therefore internal /docker-entrypoint.sh are empty).
How can I mount the original content of container's /docker-entrypoint.sh to the outside?
If you want to override the entrypoint, the script must be executable: run chmod +x your_mount_entrypoint.sh on the host before mounting it, otherwise you will get a permission error, because an entrypoint script has to be executable.
Second, as mentioned in the comments, rather than mounting a single file it is better to keep the entrypoint script in a directory, e.g. docker-entrypoint/entrypoint.sh, and mount that directory.
Or, if you do want to mount a specific file, both file names should be the same, otherwise the entrypoint script will not be overridden.
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh --rm my_image
or
docker run --name test -v $PWD/entrypoint.sh:/docker-entrypoint/entrypoint.sh:rw --rm my_image
See this example: the entrypoint is generated inside the Dockerfile, and you can override it with any script, but the script must be executable and must be mounted over /docker-entrypoint.
Dockerfile
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN echo -e $'#!/bin/sh \n\
echo "hello from docker generated entrypoint" >> /test.txt \n\
tail -f /test.txt ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
If you build and run it, you will see:
docker build -t my_image .
docker run -t --rm my_image
#output
hello from docker generated entrypoint
Now, if you want to override it:
Create and set permissions on host_path/entrypoint/entrypoint.sh.
For example, entrypoint.sh:
#!/bin/sh
echo "hello from entrypoint using mounted"
Now run
docker run --name test -v $PWD/:/docker-entrypoint/ --rm my_image
#output
hello from entrypoint using mounted
Update:
If you mount a host directory over /docker-entrypoint, it will hide the content of the docker image.
The workaround:
Mount some directory other than the entrypoint directory; name it backup.
Add an instruction in the entrypoint to copy the entrypoint to that location at run time.
This way a new file is created in the host directory instead.
FROM alpine
RUN mkdir -p /docker-entrypoint
RUN touch /docker-entrypoint/entrypoint.sh
RUN echo -e $'#!/bin/sh \n\
echo "hello from entrypoint" \n\
cp /docker-entrypoint/entrypoint.sh /for_hostsystem/ ' >> /docker-entrypoint/entrypoint.sh
RUN chmod +x /docker-entrypoint/entrypoint.sh
ENTRYPOINT ["/docker-entrypoint/entrypoint.sh"]
Now, if you run it, you will have the docker entrypoint copied to the host, which is the opposite direction from what you originally wanted:
docker run --name test -v $PWD/:/for_hostsystem/ --rm my_image

dockerfile simplest automation

I'm learning docker and created a Dockerfile to fill a container with apps. Step 1 is the Dockerfile:
FROM golang:alpine as builder
ADD /src/common $GOPATH/src/common
ADD /src/ins_signal_node $GOPATH/src/ins_signal_node
WORKDIR $GOPATH/src/ins_signal_node
RUN go build -o /go/bin/signal_server .
ADD /src/ins_full_node $GOPATH/src/ins_full_node
WORKDIR $GOPATH/src/ins_full_node
RUN go build -o /go/bin/full_node .
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
No questions here - it's OK. After this I run my script to rebuild and run this container and enter its shell - Step 2:
#!/bin/bash
docker container rm -f full
docker image rm -f ss
docker build -t ss .
winpty docker run -it --name full ss
So at this moment I'm in the container's console. And as scripted, I run two commands - Step 3:
cd go/bin/
./run_test.sh
It works!
But. After Step 2 - when I'm in the console - I want Step 3, running the starter script, to be automated. So at the end of my Dockerfile from Step 1 I add the line
CMD ["cd go/bin/ && ./run_test.sh"]
And after I run Step 2 - with a full start now - I get the error message:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"cd go/bin/ && ./run_test.sh\": stat cd go/bin/ && ./run_test
.sh: no such file or directory": unknown.
And if I run this CMD - cd go/bin/ && ./run_test.sh - manually when I'm in the container's console, it works!
So my question - what's wrong with my
CMD ["cd go/bin/ && ./run_test.sh"]
UPDATE
OK. I tried now with ["/go/bin/run_test.sh"] and ["./go/bin/run_test.sh"] and got
initializing…
/go/bin/run_test.sh: line 2: ./signal_server: not found
starting…
/go/bin/run_test.sh: line 10: ./full_node: not found
/go/bin/run_test.sh: line 9: ./full_node: not found
/go/bin/run_test.sh: line 8: ./full_node: not found
/go/bin/run_test.sh: line 7: ./full_node: not found
UPDATE 2
So in my Dockerfile I create
FROM alpine
COPY --from=builder /go/bin/signal_server /go/bin/signal_server
COPY --from=builder /go/bin/full_node /go/bin/full_node
COPY run_test.sh /go/bin
COPY entry_point.sh /
ENTRYPOINT ["./entry_point.sh"]
and I have entry_point.sh in the project root. If I use ENTRYPOINT, it says
standard_init_linux.go:190: exec user process caused "no such file or directory"
Use a Docker entrypoint in your Dockerfile:
Create an entrypoint.sh with:
#!/bin/sh
# use /bin/sh here: the alpine image in the question has no bash
set -e
cd /go/bin
./run_test.sh
exec "$@"
In your Dockerfile, on the last line:
ENTRYPOINT ["/entrypoint.sh"]
Just add the run command to your entrypoint script:
cd /go/bin \
&& ./run_test.sh
As per documentation, CMD syntax is
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
Docker treats your composite command as the name of a single executable because you are using the exec (JSON array) form, which does not invoke a shell. You can easily solve this by, for example, putting all the commands in a script such as /bin/myscript.sh; then you just need CMD /bin/myscript.sh.
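In other words, either use the shell form or call the shell explicitly in the exec form; for example, with the paths from the question:
# shell form - Docker wraps this in /bin/sh -c
CMD cd /go/bin && ./run_test.sh
# exec form that invokes the shell explicitly
CMD ["/bin/sh", "-c", "cd /go/bin && ./run_test.sh"]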
