I have a Dockerfile that installs several services, such as npm, Node.js and SSH, on an Ubuntu base image.
I want to be able to SSH into the container and also run a Node Express application.
Running either one of them works perfectly, but I can't figure out how to start both services!
To run SSH I did:
CMD ["/usr/sbin/sshd","-D"]
For the node application I clone a git repo and run:
CMD ["node", "app.js"]
Each of these runs perfectly on its own.
But how can I execute both commands?
I tried putting them both in the CMD directive:
CMD ["/usr/sbin/sshd","-D", "node", "app.js"]
I also tried to execute one of them with RUN:
RUN node app.js
CMD ["/usr/sbin/sshd","-D"]
It executes, but then gets stuck at that point and doesn't continue building the image.
How can I execute /usr/sbin/sshd -D (which I need to run SSH) and also node app.js?
Here's the full Dockerfile:
FROM ubuntu:latest
RUN apt update && apt install openssh-server sudo -y
RUN apt install git -y
RUN apt install nodejs -y
RUN apt install npm -y
RUN npm install express
RUN npm install better-sqlite3
RUN npm install morgan
RUN echo "PermitRootLogin yes">/etc/ssh/sshd_config
RUN echo 'root:root' | chpasswd
RUN git clone https://github.com/mauriceKalevra/Web2-Projekt.git
WORKDIR Web2-Projekt
RUN npm install
RUN service ssh start
EXPOSE 22
#CMD ["/usr/sbin/sshd", "-D", "&&", "node", "app.js"]
CMD ["node", "app.js"]
There are two options to do this.
Option 1
Use a shell and && to execute two commands.
FROM debian:stretch-slim as builder
CMD touch test.txt && ls
Option 2
Put all the commands you want to execute in an executable script, entrypoint.sh, and run that script from CMD:
FROM debian:stretch-slim as builder
COPY entrypoint.sh /entrypoint.sh
CMD ./entrypoint.sh
entrypoint.sh
#!/bin/sh
touch test.txt
ls
EDIT
Please note that by default the commands are executed sequentially, so the second command only runs after the first one has finished. If your first process never terminates, the second one will never start. Use & to execute a command in the background. For more information on how to run commands in parallel or sequentially, please see this thread.
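Applied to the question in this thread, a minimal entrypoint.sh sketch could look like this (an assumption, based on the question's Dockerfile: sshd is configured and app.js exists in the working directory; sshd without -D daemonizes itself):

```shell
#!/bin/sh
# Start the SSH daemon in the background (without -D it forks on its own)
/usr/sbin/sshd
# Run the node app as the foreground process so the container keeps running;
# exec replaces the shell, letting node receive signals from "docker stop"
exec node app.js
```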
Related
Hi, I have 3 Docker containers: A, B and C. I need to execute a .sh file embedded in A, which should go something like this:
#!/bin/bash
ssh root@containerIP "mkdir /path/to/dir"
ssh root@containerIP "touch someFile.txt"
....
My Dockerfiles for B and C are:
FROM node
RUN apt-get update
# RUN apk add --update --no-cache openssh
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
RUN apt-get install -y openssh-server
ENTRYPOINT ["sh", "/docker-entrypoint.sh"]
WORKDIR /home/debian
COPY ./ /home/debian
RUN git init
RUN git clone --branch master https://github.com/SamLProgrammer/LAB1SD.git
WORKDIR /home/debian/LAB1SD
RUN npm install
CMD ["node", "index.js"]
and the Dockerfile for middleware A is:
FROM node
RUN apt-get update
WORKDIR /home/debian
COPY ./ /home/debian
RUN git init
RUN git clone --branch another https://github.com/SamLProgrammer/DockerMiddlewareLab1.git
WORKDIR /home/debian/DockerMiddlewareLab1
RUN npm install
CMD ["node", "index.js"]
My problem is that I don't know the remote servers' (B & C) SSH key, and the whole process needs to be automated.
So, as far as I know, generating an SSH key pair on the remote servers is not an option, since I would then need to copy that key file to the "middleware server" (A), and in the process I would be asked for a password, which I won't be able to enter manually because I don't have access to B & C's terminals.
That's why I'm thinking of creating a .txt file that contains the SSH key for B & C before I run the Docker images as containers, and setting it as their key, so my middleware (A) knows that key and can execute SSH commands using sshpass or something like that.
How can I achieve this? How can I set the SSH key from a .txt file when I run the Docker images?
Or am I planning this in a very wrong way? What would be the correct way to automate those SSH commands just by calling the bash script in my situation?
I'm fairly new to these automation infrastructures and workarounds; any suggestion would help me a lot. Thanks!
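One common pattern for this (a sketch; file names, image names and paths below are assumptions, not from the question) is to generate the key pair on the host before starting any containers, then mount the public key into B and C and the private key into A, so no password prompt is ever involved:

```shell
#!/bin/sh
# Generate a key pair on the host ahead of time (no passphrase, quiet mode)
ssh-keygen -t ed25519 -f ./shared_key -N "" -q

# Not run here -- at container start time (image names are placeholders):
#   docker run -d --name b -v "$PWD/shared_key.pub:/root/.ssh/authorized_keys:ro" b-image
#   docker run -d --name a -v "$PWD/shared_key:/root/.ssh/id_ed25519:ro" a-image
# A's script can then run, without any password prompt:
#   ssh -o StrictHostKeyChecking=no root@b "mkdir -p /path/to/dir"
```

This avoids sshpass entirely, since key-based authentication needs no password at all.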
I am currently located in the folder of my Vue.js project. The source files are in the ./src/ folder and the test files in ./tests/.
Usually, to run the unit tests locally, I run the following commands:
npm install
npm ci
npm run test:ci
and it produces, among others, the file ./report/coverage.lcov.
However, I want to use the node:12-alpine Docker image to run the unit tests inside of it. DO NOT suggest using a Dockerfile. I want to run it using docker run --rm node:12-alpine ... and copy the contents of the ./report folder into my local folder when the docker run command completes. However, I could not figure out how to do that. Which docker run arguments should I use?
Why not mount your report target as a volume while running?
I am able to run it locally by creating a script.sh file:
cd /tmp/run
npm install
npm ci
npm run test:ci
in the project directory and running:
docker run --rm -v ${PWD}/:/tmp/run -u 0 node:12-alpine sh /tmp/run/script.sh
This is great, as I do not need to install another Node version locally and the container is deleted after the run. However, I could not replace the last line with
cd /tmp/run && npm install ..., as it raised an error, and I did not really want to introduce an extra script. Using --entrypoint produced another error.
Yes, you can do this, and it is very simple. Note that the && chain must be wrapped in sh -c; otherwise everything after the first && is executed by your host shell rather than inside the container:
docker run --rm -v ${PWD}/:/tmp/run -u 0 --workdir=/tmp/run node:14-alpine sh -c "npm install && npm ci && npm run test:ci"
I am building a Docker version of my Rails 6.1.3.1 app.
The original app runs fine on Ubuntu 20.04.2 with Node, and I took the setup roughly from there.
In Docker, Node installs fine via NVM, but unfortunately Rails does not find it at startup. It seems that the ExecJS gem somehow does not recognize it; a look at its source code did not help, though.
The version and path checks during the Docker build run as expected and without errors.
Thanks a lot :)
Here is the error message:
/usr/local/bundle/gems/execjs-2.7.0/lib/execjs/runtimes.rb:58:in `autodetect': Could not find a JavaScript runtime. See https://github.com/rails/execjs for a list of available runtimes. (ExecJS::RuntimeUnavailable)
from /usr/local/bundle/gems/execjs-2.7.0/lib/execjs.rb:5:in `<module:ExecJS>'
from /usr/local/bundle/gems/execjs-2.7.0/lib/execjs.rb:4:in `<main>'
from /usr/local/bundle/gems/bootsnap-1.7.4/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
... etc. ...
from /usr/local/bundle/gems/spring-2.1.1/lib/spring/application.rb:139:in `run'
from /usr/local/bundle/gems/spring-2.1.1/lib/spring/application/boot.rb:19:in `<top (required)>'
from /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require'
from /usr/local/lib/ruby/2.6.0/rubygems/core_ext/kernel_require.rb:54:in `require'
The dockerfile:
# Dockerfile
# Use ruby image to build our own image
FROM ruby:2.6.6
# We specify everything will happen within the /app folder inside the container
WORKDIR /app
# We copy these files from our current application to the /app container
COPY Gemfile Gemfile.lock ./
# install Node with NVM
# was a pain to make nvm run after install then followed https://stackoverflow.com/a/60137919/10297304
SHELL ["/bin/bash", "--login", "-i", "-c"] # different way of execution sources .bashrc..
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.0/install.sh | bash
SHELL ["/bin/bash", "--login", "-c"] # back to normal
RUN nvm install 14.15.0
RUN nvm alias default v14.15.0
RUN npm install --global yarn
RUN gem install bundler:2.2.11 # otherwise version mismatch
RUN bundle install
RUN yarn install
RUN which node
RUN node -v
RUN nvm -v
RUN yarn -v
# We copy all the files from our current application to the /app container
COPY . .
# We expose the port
EXPOSE 3000
# Start the main process.
CMD ["rails", "server", "-b", "0.0.0.0"]
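For reference, one common cause of this symptom (an assumption on my part, not confirmed by this thread) is that nvm only adjusts PATH inside the login shell used for the RUN steps, while the exec-form CMD starts rails without that shell, so ExecJS never sees node. Making the node binary visible at runtime would look roughly like this (the version directory is an assumption matching the `nvm install 14.15.0` step above):

```dockerfile
# Put nvm's node on PATH for all subsequent steps and for the runtime CMD
ENV NVM_DIR=/root/.nvm
ENV PATH=$NVM_DIR/versions/node/v14.15.0/bin:$PATH
```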
I had the same problem.
It was solved after upgrading the Docker engine from 18 to 20.
I have a Node service which is running in a Docker container.
The service gets stopped after some time due to the following exception:
events.js:141 throw er; // Unhandled 'error' event
Error: read ECONNRESET
at exports._errnoException (util.js:870:11)
at TLSWrap.onread (net.js:544:26)
I am not aware of why this exception comes up.
I am looking for a workaround that restarts the service once it stops.
I am using a shell file to run the service, so is there something I can add to the shell file to restart the stopped service?
Here is a sample of my shell file:
#!/bin/bash
ORGANISATION="$1"
SERVICE_NAME="$2"
VERSION="$3"
ENVIRONMENT="$4"
INTERNAL_PORT_NUMBER="$5"
EXTERNAL_PORT_NUMBER="$6"
NETWORK="$7"
docker build -t ${ORGANISATION}/${SERVICE_NAME}:${VERSION} --build-arg PORT=${INTERNAL_PORT_NUMBER} --build-arg ENVIRONMENT=${ENVIRONMENT} --no-cache .
docker stop ${SERVICE_NAME}
docker rm ${SERVICE_NAME}
sudo npm install
sudo npm install -g express
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} --network ${NETWORK} --restart always --name ${SERVICE_NAME} -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
Here is my Dockerfile
FROM ubuntu
ARG ENVIRONMENT
ARG PORT
ENV PORT $PORT
ENV ENVIRONMENT $ENVIRONMENT
RUN apt-get update -qq
RUN apt-get install -y build-essential nodejs npm nodejs-legacy vim
RUN mkdir /database_service
ADD . /database_service
WORKDIR /database_service
RUN npm install -g path
RUN npm cache clean
EXPOSE $PORT
ENTRYPOINT [ "node", "server.js" ]
CMD [ $PORT, $ENVIRONMENT ]
Thanks in advance.
You can use docker run --restart always .... Docker will then restart the container every time it stops.
The error comes from a TCP connection that was abruptly closed, maybe by a database or a websocket.
I don't know why you use npm in your script, because it runs outside of the container. If you want the dependencies installed inside the container, add the npm install to a RUN instruction in your Dockerfile.
Maybe take a look at docker-compose. With it you can write your configuration in a docker-compose.yml file and simply use docker-compose up --build to get the same functionality as this script.
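As a sketch, a docker-compose.yml for the service above might look like this (the service name, network name, ports and build-arg values are assumptions standing in for the shell script's variables):

```yaml
version: "3"
services:
  database_service:
    build:
      context: .
      args:
        PORT: "3000"          # INTERNAL_PORT_NUMBER
        ENVIRONMENT: "test"   # ENVIRONMENT
    ports:
      - "8080:3000"           # EXTERNAL_PORT_NUMBER:INTERNAL_PORT_NUMBER
    networks:
      - app_net
    restart: always           # replaces the --restart always flag
networks:
  app_net:
```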
I am having some trouble mounting a directory on my machine into my Docker container. I would like to mount a directory containing the files necessary to run a node server. So far, I have successfully been able to run and access my server in the browser using the Dockerfile below:
# Use an ubuntu base image
FROM ubuntu:14.04
# Install Node.js and npm (this will install the latest version for ubuntu)
RUN apt-get update
RUN apt-get -y install curl
RUN curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
RUN apt-get -y install nodejs
RUN apt-get -y install git
# Bundle app source (note: all of my configuration files/folders are in the current directory along with the Dockerfile)
COPY . /src
# Install app dependencies
#WORKDIR /src
RUN npm install
RUN npm install -g bower
RUN bower install --allow-root
RUN npm install -g grunt
RUN npm install -g grunt-cli
#What port to expose?
EXPOSE 1234
#Run grunt on container start
CMD grunt
And these commands:
docker build -t test/test .
docker run -p 1234:1234 -d test/test
However, I figured that I would like the configuration files and so on to persist, and thought to do this by mounting the directory (with the files and the Dockerfile) as a volume. Using other solutions on StackOverflow, I arrived at this command:
docker run -p 1234:1234 -v //c/Users/username/directory:/src -d test/test
My node server seems to start up fine (no errors in the log), but it takes significantly longer to do so, and when I try to access my webpage in the browser I just get a blank page.
Am I doing something incorrectly?
EDIT: I have gotten my server to run; it seems to have been a strange error in my configuration. However, it still takes a long time (around a minute or two) for my server to start when I mount a volume from my host machine. Does anyone have insight into why this is?