Execute two node commands from a bash file as Docker CMD - node.js

I am trying to run a Node.js app from a file named start.sh.
I am using this file because I want to execute two processes serially as the CMD in a Dockerfile.
This is the content:
#!/bin/sh
node /get-secrets.mjs -c /secrets-config.json -t /.env.production
npm run start -p 8000
As you can see, I want to perform two things:
First, execute the get-secrets.mjs file, a small script that internally uses commander.js to read the flags -c and -t (--config and --target, respectively). These flags take string arguments used to locate files.
The second command just starts my Node.js app.
I have no idea how I should write this file, because these commands work on my machine, but inside the container it seems my format is wrong.
This is the problem so far:
How should I pass the arguments to the mjs script?

In case someone else finds this useful, I solved my issue using this:
FROM node:16.14-alpine
# Dependencies that my script needs
RUN npm i -g @aws-sdk/client-secrets-manager@3.121.0 commander@9.3.0
# Folder for my source code
WORKDIR /app
# SOME OTHER COMMANDS
# ...
# In this folder I have the start.sh and the get-secrets.mjs files
COPY ${SOURCE_PATH}/server-helpers/* ./
# This was required since I am working on Windows (strips CRLF line endings)
RUN sed -i -e 's/\r$//' start.sh
RUN chmod +x start.sh
ENTRYPOINT ["/app/start.sh"]
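For reference, the start.sh itself can stay essentially as posted; a minimal sketch (the set -e, exec, and -- are my additions, not part of the original script):
#!/bin/sh
set -e                      # stop here if fetching the secrets fails
node /get-secrets.mjs -c /secrets-config.json -t /.env.production
exec npm run start -- -p 8000
Note that npm only forwards flags to the underlying script after a literal --, and exec replaces the shell so the container's main process receives signals directly.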

Related

docker RUN mkdir does not work when folder exist in prev image

The only difference between the two versions is that the "dev" folder already exists in the CentOS image.
Check the comments in this piece of code (run while executing docker build); I'd appreciate it if anyone could explain why:
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
# fails with: /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
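You can see this for yourself with a pair of RUN steps (a sketch):
RUN mkdir -p /dev/script && ls -d /dev/script   # succeeds and prints /dev/script
RUN ls -d /dev/script                           # fails: No such file or directory
The directory exists only for the duration of the first RUN's container.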
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
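If the intent of the cd was to affect later instructions, WORKDIR is the instruction designed for that; it persists across instructions:
WORKDIR /oop/script
ADD text.txt .          # lands in /oop/script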
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then copy it into the appropriate location in /dev at runtime, via your CMD or ENTRYPOINT.
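A sketch of that pattern (the file name and my-main-process are illustrative placeholders):
COPY test.txt /docker/test.txt
CMD ["sh", "-c", "cp /docker/test.txt /dev/test.txt && exec my-main-process"]
The cp runs at container startup, after the runtime has set up /dev.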

Why do I get "s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory"?

I'm learning about s6 and I've come to a point where I want to use s6-log. I have the following Dockerfile
FROM alpine:3.10
RUN apk --no-cache --update add s6
WORKDIR /run/service
COPY \
./rootfs/run \
./rootfs/app /run/service/
CMD ["s6-supervise", "."]
with ./rootfs/app being just a simple sh script
#!/bin/sh
while true;
do
sleep 1
printf "Hello %s\n" "$(date)"
done
and run being
#!/bin/execlineb -P
fdmove -c 2 1
s6-log -b n20 s1000000 t /var/log/app/
/run/service/app
Why do I keep getting
s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory
? Without the s6-log line it all works fine.
So it seems that I've been doing this incorrectly. Namely, I should've used s6-svscan instead of s6-supervise.
Using s6-svscan, I can create a log/ subdirectory in my service's directory so that my app's stdout is redirected to the logger's stdin, as described on s6-svscan's website:
For every new subdirectory dir it finds, the scanner spawns a s6-supervise process on it. If dir/log exists, it spawns a s6-supervise process on both dir and dir/log, and maintains a never-closing pipe from the service's stdout to the logger's stdin.
I've written the log/run script like so:
#!/bin/execlineb -P
s6-log -b n20 s512 T /var/log/app
and with that I've changed the CMD to
CMD ["s6-svscan", "/run/"]
where /run/service/ contains both the run script for my service (without the s6-log call) and the log subdirectory with the run script above.
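The resulting layout looks roughly like this (a sketch based on the description above):
/run/                    # scan directory passed to s6-svscan
└── service/             # the supervised service
    ├── run              # starts the app; stdout is piped to the logger
    └── log/
        └── run          # the s6-log script shown above
s6-svscan spawns one s6-supervise per subdirectory and maintains the pipe from service/run's stdout to service/log/run's stdin.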

FileNotFound exception when deploying application to Azure

So, I have a Spring Boot application that, in part, takes in a file and reads its contents. It runs perfectly locally, but when I put it on a Docker image and deploy to Azure, I get a file not found error. Here is my Dockerfile:
FROM [place]
VOLUME /tmp
ARG jarFileName
RUN mkdir /app
COPY $jarFileName /app/app.jar
RUN sh -c 'touch /app/app.jar'
ENV JAVA_OPTS=""
COPY startup.sh /app/startup.sh
RUN ["chmod", "+x", "/app/startup.sh"]
CMD ["startup.sh"]
With the Dockerfile you posted, I think there are several places you need to pay attention to.
The first is that the line COPY $jarFileName /app/app.jar will fail if you do not pass the variable jarFileName when you build the image (ARG values are supplied to docker build via --build-arg, not to docker run).
The second is that you should check that the file startup.sh actually exists in the build context for the line COPY startup.sh /app/startup.sh.
The last is the line CMD ["startup.sh"]: I think you should change it to CMD ["/app/startup.sh"] (or ./startup.sh after setting WORKDIR /app). Usually we execute a shell script with sh script.sh or ./script.sh when the script has the 'x' permission; a bare startup.sh is only found if it is on the PATH.
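For the first point, the build-time argument would be passed like this (the image name is illustrative):
docker build --build-arg jarFileName=myapp.jar -t myimage .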

Execute configuration bash script during docker build

During docker build I need to run a bash script, which sets up some environment variables.
The script looks something like this:
#!/bin/bash
export ENVVAR=TEST
export HOST=local
export PORT=port
I have tried to call this script in my Dockerfile in different ways, but none of them work. I tried these:
ADD ./myscript.sh
RUN chmod +x /myscript.sh;\
/bin/bash -c 'source ./myscript.sh';\
/bin/bash -c 'source /myscript.sh';\
/bin/bash -c source ./myscript.sh;\
/bin/bash -c source /myscript.sh;\
source ./myscript.sh;\
source /myscript.sh;\
/myscript.sh;\
./myscript.sh;\
ENTRYPOINT ["/bin/bash"]
Of course, I only had one of these commands in my RUN at a time; I have just grouped them here.
If I run the container and use source ./myscript.sh, it works as expected.
Because of multiple restrictions and other reasons, it is not possible for me to use Docker Compose, the -e argument, ENV KEY VALUE in the Dockerfile, or similar approaches. I need to set up the environment variables during the docker build process.
The script is never sourced in the shell that your ENTRYPOINT actually starts; each RUN executes in its own throwaway shell, so nothing it sources survives the build step. Just add your myscript.sh to your image (use COPY instead of ADD).
COPY myscript.sh /usr/local/bin
Then source it in the shell that is actually started by your entrypoint.
docker run myimage source /usr/local/bin/myscript.sh
By the way, myscript.sh is pretty nondescript; you could call it env.sh, for instance.
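If the variables need to be visible to the main process at runtime, another common pattern is a small entrypoint wrapper that sources the file and then execs the real command. A sketch, assuming the script was copied to /usr/local/bin as above (the wrapper name is illustrative):
#!/bin/bash
# entrypoint.sh: load the environment, then hand off to the real command
source /usr/local/bin/myscript.sh
exec "$@"
With ENTRYPOINT ["/path/to/entrypoint.sh"] in the Dockerfile, every command passed to docker run inherits the variables.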

Unable to start node app with shell script

I created a Node.js Docker image.
Using CMD node myapp.js at the end of my Dockerfile, it starts.
But when I use CMD /root/start.sh, it fails.
This is what my start.sh looks like:
#!/bin/bash
node myapp.js
And here are the important lines of my Dockerfile:
FROM debian:latest
COPY config/start.sh /root/start.sh
RUN chmod +x /root/start.sh
WORKDIR /my/app/directory
RUN apt-get install -y wget && \
wget https://nodejs.org/dist/latest-v5.x/node-v5.12.0-linux-x64.tar.gz && \
tar -C /usr/local --strip-components 1 -xzf node-v5.12.0-linux-x64.tar.gz && \
rm -f node-v5.12.0-linux-x64.tar.gz && \
ln -s /usr/bin/nodejs /usr/bin/node
# works:
CMD node myapp.js
# doesn't work:
CMD /root/start.sh
Using docker logs I get: standard_init_linux.go:175: exec user process caused "no such file or directory"
But I don't understand, because if I add RUN ls /root in my Dockerfile, I can see the file exists.
I also tried with full paths in my script:
#!/bin/bash
/usr/bin/node /my/app/directory/myapp.js
but nothing changed. So what could the problem be?
Use docker run --entrypoint="/bin/bash" -i your_image to debug it interactively.
What you used is the shell form of the Dockerfile CMD. As described in the docs, the default shell is /bin/sh, not the /bin/bash that line 1 of your start.sh expects.
Or try using the exec form, that is, CMD ["/root/start.sh"].
The most common error I've seen is creating the start.sh on a Windows system and saving the file either with a different character encoding or with Windows line endings. The shebang /bin/bash^M is not the same as /bin/bash, but you won't see that carriage return on Windows. You also want to save the file in ASCII encoding, not one of the multi-character UTF encodings.
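The line endings can be stripped during the build, the same way the accepted answer at the top of this page does it:
RUN sed -i -e 's/\r$//' /root/start.sh
Alternatively, configure your editor or git (core.autocrlf) to keep the file with LF endings.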
