I'm setting up a Dockerfile to install the Node prerequisites and then set up supervisor so it can run the final npm install command. I'm running Docker in CoreOS under VirtualBox.
I have a Dockerfile that sets everything up correctly:
FROM ubuntu
MAINTAINER <<Me>>
# Install docker basics
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs
# Install git
RUN apt-get install -y git
# Install supervisor
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# Add supervisor config file
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Bundle app source
ADD . /src
# create supervisord user
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN chown -R nonroot: /src
# set install script to executable
RUN /bin/chmod +x /src/etc/install.sh
#set up .env file
RUN echo "NODE_ENV=development\nPORT=5000\nRIAK_SERVERS={SERVER}" > /src/.env
#expose the correct port
EXPOSE 5000
# start supervisord when container launches
CMD ["/usr/bin/supervisord"]
And then I want to set up supervisord to launch one of a few possible processes, including an installation shell script that I've confirmed to work correctly, install.sh, which is located in the application's /etc directory:
#!/bin/bash
cd /src; npm install
export PATH=$PATH:node_modules/.bin
However, I'm very new to supervisor syntax, and I can't get it to launch the shell script correctly. This is what I have in my supervisord.conf file:
[supervisord]
nodaemon=true
[program:install]
command=install.sh
directory=/src/etc/
user=nonroot
When I build the image from the Dockerfile, everything runs correctly, but when I launch the image, I get the following:
2014-03-15 07:39:56,854 CRIT Supervisor running as root (no user in config file)
2014-03-15 07:39:56,856 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2014-03-15 07:39:56,913 INFO RPC interface 'supervisor' initialized
2014-03-15 07:39:56,913 WARN cElementTree not installed, using slower XML parser for XML-RPC
2014-03-15 07:39:56,914 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-03-15 07:39:56,915 INFO supervisord started with pid 1
2014-03-15 07:39:57,918 INFO spawnerr: can't find command 'install.sh'
2014-03-15 07:39:58,920 INFO spawnerr: can't find command 'install.sh'
Clearly, I have not set up supervisor correctly to run this shell script -- is there part of the syntax that I'm screwing up?
The best way that I found was setting this:
[program:my-program-name]
command = /path/to/my/command.sh
startsecs = 0
autorestart = false
startretries = 1
I think I got this sorted: I needed the full path in command, and instead of having user=nonroot in the .conf file, I put su nonroot into the install.sh script.
I had a quick look at the source code for supervisor and noticed that if the command does not contain a forward slash /, it will look for the file in the directories listed in the PATH environment variable. This imitates the behaviour of execution via a shell.
The following methods should fix your initial problem:
Specify the full path of the script (like you have done in your own answer)
Prefix the command with ./, i.e. ./install.sh (in theory, but untested)
Prefix the command with the shell executable, i.e. /bin/bash install.sh
I do not understand why user= does not work for you (have you tried it again after fixing the execution?), but the problem you encountered in your own answer was probably due to incorrect usage of su, which does not work like sudo: su creates its own interactive shell and will therefore hang while waiting for standard input. To run commands with su, use the -c flag, e.g. su -c "some-program" nonroot. An explicit shell can also be specified with the -s flag if necessary.
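For reference, a minimal corrected config combining those fixes might look something like this (a sketch only, assuming the script lives at /src/etc/install.sh as in the question, and borrowing the run-once settings from the other answer):
[supervisord]
nodaemon=true
[program:install]
; full path (or a /bin/bash prefix) so supervisord doesn't depend on a PATH lookup
command=/bin/bash /src/etc/install.sh
directory=/src
user=nonroot
; the script is a one-shot install, so a quick clean exit shouldn't be treated as a crash
startsecs=0
autorestart=false
startretries=1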
I had this issue too. For me, the root cause was failing to set the shebang line. Even if the script can run in bash fine, for supervisord to be able to exec() it, it has to begin with e.g. #!/bin/bash.
Related
I have a Dockerfile that looks like this:
FROM alpine:3.7
WORKDIR /home/tmp
RUN apk add autoconf && apk add py-pip && apk add python3 &&\
pip install --upgrade pip && pip install wheel
Originally I wanted to execute a .sh script on startup (via ENTRYPOINT) and immediately destroy the container. However, since it failed to find the file, I decided to do that manually.
I run the container like this:
docker run -it --rm -v c:/projects/mega-nz-sdk:/home/tmp mega_sdk_python
And it connects me to bash in the container.
In the list of files I can see the script I want to execute
/home/tmp # ls
Dockerfile compile.sh sdk-develop
/home/tmp #
However, when I try to run it, it cannot find the script:
/home/tmp # ./compile.sh
/bin/sh: ./compile.sh: not found
/home/tmp #
What is the problem?
Script compile.sh looks like this
#!/bin/bash
cd sdk-develop
sh autogen.sh
./configure --disable-silent-rules --enable-python --disable-examples &&\
make
cd /bindings/python
python setup.py bdist_wheel
Ideally I would like to execute it when the container is instantiated, so the container is already configured on startup (without needing to run the script each time I run the container).
It seems that in order to execute my .sh file I need to run it like this:
sh compile.sh
So I added
CMD ["sh", "compile.sh"]
And it started to work (though it then failed with other errors, like missing make, but that's down to missing packages in Alpine Linux itself, so a separate matter).
I guess it is something to do with Alpine Linux itself, but I am not sure.
Alpine Linux is a very minimal distribution; it includes a minimal version of most Unix tools that conform to the POSIX specification, but no more. In particular it does not include GNU Bash.
Your script doesn't actually use any special Bash features, so it would be enough to change the first line of the script to run the default system Bourne shell
#!/bin/sh
Using the Alpine apk package manager to install bash would work too, but it's not necessary for what you're showing here.
Usually you'd run the sorts of "compile" commands you show during the course of building an image, not when the image starts up. I'd expect a much more typical Dockerfile to COPY the application source code in and then RUN the commands you show. That would happen just once, when you docker build the image, and not every time you want to run the packaged application.
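For illustration only, a rough sketch of that pattern might look like the following (the extra build packages and the COPY layout are guesses, not taken from the question):
FROM alpine:3.7
# toolchain for autogen/configure/make, plus the Python pieces from the original image
RUN apk add --no-cache autoconf automake libtool build-base python3 py-pip && \
    pip install --upgrade pip wheel
WORKDIR /home/tmp
# bake the source into the image instead of bind-mounting it at run time
COPY . .
# run the compile steps once, at build time
RUN sh compile.sh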
You mounted the directory from your Windows host into the Docker machine. I believe this is a permission problem: the file does not have its executable flag set.
Show a detailed listing:
# ls -lh
Copy the folder to an internal directory and add the executable bit:
cp -r /home/tmp /home/tmp2
chmod +x /home/tmp2/*.sh
/home/tmp2/compile.sh
Hello, I have a problem with Docker. I recently made a Dockerfile to create a "mosquitto-mqtt" image so I can run my own MQTT broker with SSL protection. I build the Dockerfile and all is good, no problems, but if I run a new container with "docker run -itd --name broken ce69ee4b2f4e", the container runs and exits immediately, and if I check the logs everything looks fine: "[ ok .] Starting network daemon:: mosquitto.". I don't see why. Check my Dockerfile; I need help to solve this, thank you.
#Download base image debian
FROM debian:latest
#Update system
RUN apt-get update -y
#Install Wget and gnup2
RUN apt-get install wget -y && apt-get install gnupg2 -y
#Download and add key
RUN wget http://repo.mosquitto.org/debian/mosquitto-repo.gpg.key
RUN apt-key add mosquitto-repo.gpg.key
RUN rm mosquitto-repo.gpg.key
## append apt mirror for debian
RUN echo "# mirror" >> /etc/apt/source.list
RUN echo "deb http://repo.mosquitto.org/debian stretch main" >> /etc/apt/source.list
#Update and upgrade system
RUN apt-get update -y && apt-get upgrade -y
#install mosquitto
RUN apt-get install mosquitto -y
#Copy file configuration
COPY mosquitto.conf /etc/mosquitto
#Copy certificate folder
COPY certs/mosquitto-ca.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.crt /etc/mosquitto/certs
COPY certs/mosquitto-server.key /etc/mosquitto/certs
#Run command
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
The log prints:
[ ok .] Starting network daemon:: mosquitto.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d00bd23ae2d6 ce69ee4b2f4e "/etc/init.d/mosquit…" 9 minutes ago Exited (0) 9 minutes ago broken
Containers are a wrapper around a process, and when that process exits, the container exits. In this case:
ENTRYPOINT ["/etc/init.d/mosquitto", "start"]
That process is /etc/init.d/mosquitto, which almost certainly runs, spawns a daemon in the background, and exits (standard for anything in init.d). You should instead run mosquitto directly with foreground options if available.
If that's not possible, something like supervisord would be a less than optimal fallback, with the ability to watch a background daemon.
And if neither of those works, you can run your command from a script that ends with a tail -f /dev/null, but that would be the worst option since it ignores any errors.
It works! I found the solution: I just needed to run mosquitto directly, adding "-c" to the command and specifying the configuration file.
This is a good method:
ENTRYPOINT ["mosquitto", "-c", "/etc/mosquitto/mosquitto.conf"]
Thanks all for helping me!
The three lines below are part of my shell script; it executes the first line and copies the file properly.
In order to execute this rpm file, I need to switch to the root user, hence I wrote the second step. But it is not executing, so I'm not able to install the rpm file.
aws s3 cp s3://mybucket/oracle-instantclient12.2-basiclite.rpm /home/user1/
sudo su
yum -y install /home/user1/oracle-instantclient12.2-basiclite.rpm
So, is there any alternative to this (sudo su), or can you tell me how to switch to the root user in order to install the mentioned rpm file?
Thanks
You could try using sudo -s or
sudo yum -y install /home/user1/oracle-instantclient12.2-basiclite-12.2.0.1.0-1.x86_64.rpm
The first option switches you to the root user, while the second allows you to run the command as root.
aws s3 cp s3://mybucket/oracle-instantclient12.2-basiclite.rpm /home/user1/ && sudo -i yum -y install /home/user1/oracle-instantclient12.2-basiclite.rpm
You'd have to add && (see this answer) in between the two commands and install with sudo yum:
aws s3 cp s3://mybucket/oracle-instantclient12.2-basiclite.rpm /home/user1/ && sudo yum -y install /home/user1/oracle-instantclient12.2-basiclite.rpm
sudo rpm -i /home/user1/oracle-instantclient12.2-basiclite.rpm should also work.
There is no other way to run two commands from a single command line ...
Are you sure the second half of the command line even runs on the remote host? I would rather expect it to need to be prefixed with send-command (if you are running this from a local shell and not on the remote host). It is also not indicated which Linux distribution you are trying to run the command against; adding the relevant RPM repository and then installing from there might be the most reliable method.
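As a sketch of that last suggestion (the baseurl below is a placeholder, not Oracle's actual repository), you would drop a .repo file on the host and let yum handle the download and dependencies:
sudo tee /etc/yum.repos.d/oracle-instantclient.repo <<'EOF'
[oracle-instantclient]
name=Oracle Instant Client (placeholder repo definition)
baseurl=https://example.com/path/to/instantclient/repo/
gpgcheck=0
enabled=1
EOF
sudo yum -y install oracle-instantclient12.2-basiclite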
#!/bin/bash
yum -y install gcc-c++
wget https://nodejs.org/dist/v0.12.7/node-v0.12.7.tar.gz
tar -xvzf node-v0.12.7.tar.gz
cd node-v0.12.7
./configure
make
sudo make install
yum -y install git
/usr/local/bin/npm install pm2 -g
cd /home/admin/Order-Management/
/usr/local/lib/node_modules/pm2/bin/pm2 start processes.json
The above script runs perfectly when I run it locally, but when I try to execute it with Puppet on the client machine, the last line throws the following error.
/usr/bin/env: node: No such file or directory
I am using a RedHat 6 master and a RedHat 6 client. I saw a solution here:
Node forever /usr/bin/env: node: No such file or directory. However, it is not working for me. Any help will be much appreciated.
I had to add an environment variable to the Puppet exec resource. The problem was not with Node itself.
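For anyone hitting the same error, a hedged sketch of what that can look like (the resource name is illustrative, not the poster's actual manifest): /usr/bin/env: node: No such file or directory usually means the exec's environment doesn't have /usr/local/bin, where make install put node, on its PATH.
exec { 'start-order-management':
  command     => '/usr/local/lib/node_modules/pm2/bin/pm2 start processes.json',
  cwd         => '/home/admin/Order-Management',
  # make node, installed under /usr/local/bin, visible to /usr/bin/env
  environment => ['PATH=/usr/local/bin:/usr/bin:/bin'],
}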
I'm trying to switch to the tomcat7 user in order to set up SSH certificates.
When I do su tomcat7, nothing happens.
whoami still returns root after doing su tomcat7.
Doing a more /etc/passwd, I get the following result which clearly shows that a tomcat7 user exists:
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
messagebus:x:101:104::/var/run/dbus:/bin/false
colord:x:102:105:colord colour management daemon,,,:/var/lib/colord:/bin/false
saned:x:103:106::/home/saned:/bin/false
tomcat7:x:104:107::/usr/share/tomcat7:/bin/false
What I'm trying to work around is this error in Hudson:
Command "git fetch -t git#________.co.za:_______/_____________.git +refs/heads/*:refs/remotes/origin/*" returned status code 128: Host key verification failed.
This is my Dockerfile. It takes an existing Hudson war file and config that are tarred up and builds an image. Hudson runs fine; it just can't access git because the certificates don't exist for the tomcat7 user.
FROM debian:wheezy
# install java on image
RUN apt-get update
RUN apt-get install -y openjdk-7-jdk tomcat7
# install hudson on image
RUN rm -rf /var/lib/tomcat7/webapps/*
ADD ./ROOT.tar.gz /var/lib/tomcat7/webapps/
# copy hudson config over to image
RUN mkdir /usr/share/tomcat7/.hudson
ADD ./dothudson.tar.gz /usr/share/tomcat7/
RUN chown -R tomcat7:tomcat7 /usr/share/tomcat7/
# add ssh certificates
RUN mkdir /root/.ssh
ADD ssh.tar.gz /root/
# install some dependencies
RUN apt-get update
RUN apt-get install -y maven
RUN apt-get install -y git
RUN apt-get install -y subversion
# background script
ADD run.sh /root/run.sh
RUN chmod +x /root/run.sh
# expose port 8080
EXPOSE 8080
CMD ["/root/run.sh"]
I'm using the latest version of Docker (Docker version 1.0.0, build 63fe64c/1.0.0). Is this a bug in Docker, or am I missing something in my Dockerfile?
You should not use su in a Dockerfile; instead, use the USER instruction.
At each stage of the Dockerfile build, a new container is created so any change you make to the user will not persist on the next build stage.
For example:
RUN whoami
RUN su test
RUN whoami
This would never report the user as test, because a new container is spawned for the second whoami. The output would be root both times (unless of course you run USER beforehand).
If however you do:
RUN whoami
USER test
RUN whoami
You should see root then test.
Alternatively you can run a command as a different user with sudo with something like
sudo -u test whoami
But it seems better to use the official supported instruction.
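Applied to this question, a minimal sketch could look like this (assuming, as on Debian, that installing the tomcat7 package creates the tomcat7 user):
FROM debian:wheezy
RUN apt-get update && apt-get install -y openjdk-7-jdk tomcat7
# everything from here on, including the runtime CMD, runs as tomcat7
USER tomcat7
RUN whoami   # prints "tomcat7"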
As a different approach from the other answer: instead of setting the user at image-build time in the Dockerfile, you can do so via the command line on a particular container, on a per-command basis.
With docker exec, use --user to specify which user account the interactive terminal will use (the container should be running and the user has to exist in the containerized system):
docker exec -it --user [username] [container] bash
See https://docs.docker.com/engine/reference/commandline/exec/
In case you need to perform privileged tasks, like changing the permissions of folders, you can perform them as the root user, then create a non-privileged user and switch to it.
FROM <some-base-image:tag>
# Switch to root user (usually you won't need this; it depends on the base image)
USER root
# Run privileged command
RUN apt install <packages>
RUN apt <privileged command>
# Set user and group
ARG user=appuser
ARG group=appuser
ARG uid=1000
ARG gid=1000
RUN groupadd -g ${gid} ${group}
RUN useradd -u ${uid} -g ${group} -s /bin/sh -m ${user} # the '-m' flag creates a home directory for the user
# Switch to user
USER ${uid}:${gid}
# Run non-privileged command
RUN apt <non-privileged command>
Add this line to the Dockerfile:
USER <your_user_name>
Use the Docker instruction USER.
You should also be able to do:
apt install sudo
sudo -i -u tomcat
Then you should be the tomcat user. It's not clear which Linux distribution you're using, but this works with Ubuntu 18.04 LTS, for example.
There's no real way to do this. As a result, things like mysqld_safe fail, and you can't install mysql-server in a Debian Docker container without jumping through 40 hoops, because... well... it aborts if it's not root.
You can use USER, but you won't be able to apt-get install if you're not root.