Run a shell script from docker-compose command, inside the container - linux

I am attempting to run a shell script inside the docker container using docker-compose. I am using a Dockerfile to build the container environment and install all dependencies, and I then copy all the project files to the container. This works well as far as I can determine. (I am still fairly new to docker and docker-compose.)
My Dockerfile:
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
    python3 python3-dev gcc \
    gfortran musl-dev \
    libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
What I am currently attempting is this:
docker-compose file:
version: "2"
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "8000:8000"
      - "443:443"
    volumes:
      - ./:/app
      - ./config/nginx:/etc/nginx/conf.d
      - ./config/nginx/ssl/certs:/etc/ssl/certs
      - ./config/nginx/ssl/private:/etc/ssl/private
    depends_on:
      - api
  api:
    build: .
    container_name: app
    command: /bin/sh -c "entrypoint.sh"
    expose:
      - "5000"
This results in the container not starting up, and from the log I get the following:
/bin/sh: 1: entrypoint.sh: not found
For more reference and information this is my entrypoint.sh script:
python manage.py db init
python manage.py db migrate --message 'initial database migration'
python manage.py db upgrade
gunicorn -w 1 -b 0.0.0.0:5000 manage:app
Basically, I know I could run the container with only the gunicorn line above as the Dockerfile's command. But I am using a sqlite db inside the app container, and I really need to run the db commands for the database to initialise/migrate.
Just for reference, this is a basic Flask python web app behind an nginx reverse proxy, served with gunicorn.
Any insight will be appreciated. Thanks.

First thing: you are copying entrypoint.sh into $APP (the path set in your Dockerfile), but your command never references that location. Second thing: you need to set execute permission on entrypoint.sh. Better to add these three lines to the Dockerfile; then you will not need the command entry in your docker-compose file at all.
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
    python3 python3-dev gcc \
    gfortran musl-dev \
    libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# These lines set up /entrypoint.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
The docker-compose entry for api will then be:
api:
  build: .
  container_name: app
  expose:
    - "5000"
Or you can keep your own command; with the full path it will also work fine:
version: "2"
services:
  api:
    build: .
    container_name: app
    command: /bin/sh -c "/entrypoint.sh"
    expose:
      - "5000"
Now you can check with a plain docker run command too:
docker build -t myapp .
docker run -it --rm myapp

entrypoint.sh needs to be specified with its full path.
It's not clear from your question where exactly you install it; if it's in the current directory, ./entrypoint.sh should work.
(Tangentially, the -c option to sh is superfluous if you want to run a single script file.)
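A quick way to see the difference the path makes, using a hypothetical entrypoint.sh created in a temp directory just for the demo:

```shell
# Create a throwaway script in a fresh temp directory.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho ok\n' > entrypoint.sh
chmod +x entrypoint.sh

# A bare name is resolved via $PATH, which normally excludes the current
# directory -- hence "entrypoint.sh: not found" in the container log.
sh -c "entrypoint.sh" || echo "bare name: not found"

# An explicit path works.
sh -c "./entrypoint.sh"   # → ok
```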

Related

docker-compose does not forward the user

There is a problem with the docker-compose file.
The task is to run an ansible playbook in a docker-compose container: mount a local directory with playbooks/config/ssh into the container and run the playbook.
In that form everything works. But when I add user forwarding, the container stops forwarding the keys.
What am I doing wrong?
Dockerfile:
FROM alpine:3.15.3
RUN apk update && apk add --no-cache musl-dev openssl-dev make gcc \
    python3 py3-pip py3-cryptography python3-dev
RUN pip3 install cffi
RUN pip3 install ansible
RUN apk add --update openssh \
    && rm -rf /tmp/* /var/cache/apk/*
WORKDIR /etc/ansible
Working docker-compose:
version: "3.3"
services:
  ansible:
    build: .
    volumes:
      - ./inventory:/etc/ansible
      - ~/.ssh:/root/.ssh
Non-working docker-compose:
version: "3.3"
services:
  ansible:
    build: .
    volumes:
      - ./inventory:/etc/ansible
      - ~/.ssh:/root/.ssh
    user: ${UID}:${GID}
Lastly, this is how I run docker-compose:
sudo UID=${UID} GID=${GID} docker-compose run --rm ansible
Found a solution:
version: "2.3"
services:
  ansible:
    build: .
    user: ${UID}:${GID}
    volumes:
      - /etc/passwd:/etc/passwd:ro
      - /etc/group:/etc/group:ro
      - /home/${USER}/.ssh:/home/${USER}/.ssh
Run:
UID="$(id -u)" GID="$(id -g)" USER="$(id -u -n)" docker-compose run --rm ansible
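For reference, the id calls above do the heavy lifting; a small sketch of how those values resolve, and why the read-only /etc/passwd mount matters inside the container:

```shell
# Resolve the host user's numeric IDs and name, as the run line does.
uid="$(id -u)"
gid="$(id -g)"
user="$(id -un)"
echo "container will run as ${user} (${uid}:${gid})"

# Name lookups for that uid need a matching /etc/passwd entry; inside a
# container without the bind mount, the user has a uid but no name.
getent passwd "$uid" || echo "no passwd entry for uid $uid"
```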

/bin/sh: /usr/sbin/sshd-keygen: No such file or directory

I'm completely new to linux and docker concepts.
On my Windows machine I boot up centos7 in VirtualBox.
While running docker-compose build I get:
/bin/sh: /usr/sbin/sshd-keygen: No such file or directory
How do I rectify it?
I tried to create a remote user.
docker-compose.yml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - "$PWD/jenkins_home:/var/jenkins_home"
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
Dockerfile
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
    echo "Thevenus987$" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
CMD /usr/sbin/sshd -D
In the Dockerfile, change
RUN /usr/sbin/sshd-keygen (CentOS 8 doesn't ship this command)
to
RUN ssh-keygen -A (this works).
I hope this solution works for you.
Change the FROM line to centos:7 and install the missing packages (or, if you stay on CentOS 8, replace RUN /usr/sbin/sshd-keygen with RUN ssh-keygen -A).
The Dockerfile should be like this:
FROM centos:7
# passwd and initscripts added; note that a comment after a trailing
# backslash would break the line continuation, so they live up here
RUN yum -y install openssh-server && \
    yum install -y passwd && \
    yum install -y initscripts
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen
#CMD /usr/sbin/sshd -D
CMD ["/usr/sbin/sshd", "-D"]
Just use FROM centos:7 (instead of the centos8 base image)
and
yum install -y initscripts
Note: an updated initscripts bug fix and enhancement package, fixing several bugs and adding one enhancement, is available for Red Hat Enterprise Linux 6/7.
You don't need to remove or tweak the line below at all:
RUN /usr/sbin/sshd-keygen
It will work.
To learn more about the initscripts bug fix enhancement:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/6.5_technical_notes/initscripts
Change the base image from FROM centos to FROM centos:7 and it will work.
The problem is with this line in your Dockerfile:
RUN /usr/sbin/sshd-keygen
This is what you get when that line is executed: /etc/rc.d/init.d/functions: No such file or directory, followed by
/usr/sbin/sshd-keygen: command not found.
The init.d/functions file is different for different linux distros; it is specific to whatever distribution you're running, and it contains functions used by most or all of the shell scripts stored in the /etc/init.d directory.
To try this yourself simply pull the CentOS:7 image from docker hub and test your RUN steps from your Dockerfile as follows:
docker container run -i -t -d --name test centos:7
docker exec -it test bash
cd /etc/rc.d/init.d
ls -a
There is no file called functions in this directory.
In the CentOS:7 Docker image you simply have to install the initscripts package for this script to be available, so add these lines to your Dockerfile:
FROM centos:7
RUN yum install -y initscripts
FROM centos pulls the latest tag by default, which no longer includes sshd-keygen.
You need to change your Dockerfile to:
FROM centos:7
...
&& yum install -y initscripts \
&& /usr/sbin/sshd-keygen
CMD ["/usr/sbin/sshd", "-D"]
Just change FROM centos to
FROM centos:7
That error happens because the centos image used to resolve to CentOS 7 and now resolves to CentOS 8.
Alternatively, as others have pointed out, try the command below instead of RUN /usr/sbin/sshd-keygen:
FROM centos:7
RUN ssh-keygen -A
1) In the Dockerfile, change:
RUN /usr/sbin/sshd-keygen
to
RUN /usr/bin/ssh-keygen
2) Or try:
RUN sshd-keygen
If it is included and exists anywhere in your $PATH, it will execute.
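All the working fixes above come down to generating the SSH host keys at build time, which is what ssh-keygen -A does: it creates any missing host key types. A sketch of the effect, pointed at a throwaway prefix directory with -f so the demo doesn't touch the real /etc/ssh:

```shell
tmp="$(mktemp -d)"
mkdir -p "${tmp}/etc/ssh"    # -A expects <prefix>/etc/ssh to exist
ssh-keygen -A -f "${tmp}"    # generate all missing host key types
ls "${tmp}/etc/ssh"          # typically ssh_host_rsa_key, ssh_host_ed25519_key, ...
```

In the Dockerfile itself you would simply write RUN ssh-keygen -A, since there the real /etc/ssh is exactly where sshd will look.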

Docker-compose EACCESS error when spawning executable

I have a Dockerfile where I bring in some files and chmod some stuff. It's a node server that spawns an executable file.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends curl sudo
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs && \
    apt-get install --yes build-essential
RUN apt-get install --yes npm
#VOLUME "/usr/local/app"
# Set up C++ dev env
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install gcc-multilib g++-multilib cmake wget -y && \
    apt-get clean autoclean && \
    apt-get autoremove -y
#wget -O /tmp/conan.deb -L https://github.com/conan-io/conan/releases/download/0.25.1/conan-ubuntu-64_0_25_1.deb && \
#dpkg -i /tmp/conan.deb
#ADD ./scripts/cmake-build.sh /build.sh
#RUN chmod +x /build.sh
#RUN /build.sh
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs sudo
RUN mkdir -p /usr/local/app
WORKDIR /usr/local/app
COPY package.json /usr/local/app
RUN ["npm", "install"]
COPY . .
RUN echo "/usr/local/app/dm" > /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/lib/x86_64-linux-gnu" >> /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/local/lib64" >> /etc/ld.so.conf.d/mythrift.conf
RUN ldconfig
EXPOSE 9090
RUN chmod +x dm/dm3
RUN ldd dm/dm3
RUN ["chmod", "+x", "dm/dm3"]
RUN ["chmod", "777", "policy"]
RUN ls -al .
CMD ["nodejs", "app.js"]
It all works fine, but when I use docker-compose for the purpose of having an autoreload dev environment in docker, I get an EACCES error when spawning the executable process.
version: '3'
services:
  web:
    build: .
    command: npm run start
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
I'm using nodemon to restart the server on changes, hence the volumes in the compose file. Would love to get that workflow going again.
I think the problem is how you wrote the docker-compose.yml file.
The command line shouldn't be necessary, because you already specified how to start the program in the Dockerfile.
Could you try these lines?
version: '3'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
Otherwise, I think the issue is that the volumes property doesn't actually preserve /usr/app/node_modules. Mounting the source tree over the app directory is arguably bad practice anyway; you can run npm install in your Dockerfile instead.
I hope that makes sense =)
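One more thing worth checking (a guess, since the full layout isn't shown in the question): the anonymous volume path /usr/app/node_modules doesn't sit under the bind mount target /usr/local/app, so it isn't shielding the node_modules directory the app actually uses. A sketch with the paths lined up:

```yaml
version: '3'
services:
  web:
    build: .
    volumes:
      - .:/usr/local/app/
      # the anonymous volume must live inside the bind mount target to
      # preserve the node_modules installed in the image
      - /usr/local/app/node_modules
    ports:
      - "3000:3000"
```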

docker mounting volume with permission denied

I am trying to set up a docker container that mounts a volume from the host. No matter what I try, it always says permission denied when I remote into the docker container. These are some of the commands I have tried adding to my docker file:
RUN su -c "setenforce 0"
and
chcon -Rt svirt_sandbox_file_t /app
Still I get the following error when I remote into my container:
Error: EACCES: permission denied, scandir '/app'
at Error (native)
Error: EACCES: permission denied, open 'npm-debug.log.578996924'
at Error (native)
And, as a directory listing showed (screenshot omitted here), the app directory is assigned to a user with uid 1000.
Here is my docker file:
FROM php:5.6-fpm
# Install modules
RUN apt-get update && apt-get install -y \
    git \
    unzip \
    libmcrypt-dev \
    libicu-dev \
    mysql-client \
    freetds-dev \
    libxml2-dev
RUN apt-get install -y freetds-dev php5-sybase
# This symlink fixes the pdo_dblib install
RUN ln -s /usr/lib/x86_64-linux-gnu/libsybdb.a /usr/lib/
RUN docker-php-ext-install pdo \
    && docker-php-ext-install pdo_mysql \
    && docker-php-ext-install pdo_dblib \
    && docker-php-ext-install iconv \
    && docker-php-ext-install mcrypt \
    && docker-php-ext-install intl \
    && docker-php-ext-install opcache \
    && docker-php-ext-install mbstring
# Override the default php.ini with a custom one
COPY ./php.ini /usr/local/etc/php/
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 4.4.7
# install nvm
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer --version
# Configure freetds
ADD ./freetds.conf /etc/freetds/freetds.conf
WORKDIR /app
# Gulp install
RUN npm install -g gulp
RUN npm install -g bower
CMD ["php-fpm"]
Here is my docker-compose:
nginx_dev:
  container_name: nginx_dev
  build: docker/nginx_dev
  ports:
    - "80:80"
  depends_on:
    - php_dev
  links:
    - php_dev
  volumes:
    - ./:/app
php_dev:
  container_name: php_dev
  build: docker/php-dev
  volumes:
    - ./:/app
Are there any commands I can run to give the root user permission to access the app directory? I am using docker-compose as well.
From the directory listing, it appears that you have selinux configured (that's the trailing dots on the permission bits). In Docker with selinux enabled, you need to mount volumes with an extra flag, :z. Docker describes this as a volume label but I believe this is an selinux term rather than a docker label on the volume.
Your resulting docker-compose.yml should look like:
version: '2'
services:
  nginx_dev:
    container_name: nginx_dev
    build: docker/nginx_dev
    ports:
      - "80:80"
    depends_on:
      - php_dev
    links:
      - php_dev
    volumes:
      - ./:/app:z
  php_dev:
    container_name: php_dev
    build: docker/php-dev
    volumes:
      - ./:/app:z
Note, I also updated the syntax to version 2. Version 1 of the docker-compose.yml is being phased out. Version 2 will result in the containers being run in their own network by default which is usually preferred but may cause issues if you have other containers trying to talk to these.
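If you want to confirm that SELinux is actually in play before relabeling, a quick host-side check (a sketch: getenforce and the -Z flag ship with the SELinux userland, so on a non-SELinux host this simply reports that labels are not the issue):

```shell
if command -v getenforce >/dev/null 2>&1; then
    getenforce      # "Enforcing" means volume labels matter
    ls -dZ .        # shows the SELinux context of the bind-mount source
else
    echo "SELinux tooling not installed; labels are not the problem here"
fi
```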

Correct Dockerfile syntax for pyramid app for use with Python 3.5?

I want to run a pyramid app in a docker container, but I'm struggling with the correct syntax in the Dockerfile. Pyramid doesn't have an official Dockerfile, but I found this site that recommends using an Ubuntu base image.
https://runnable.com/docker/python/dockerize-your-pyramid-application
But this is for Python 2.7. Any ideas how I can change this to 3.5? This is what I tried:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev && \
    pip3 install --upgrade pip setuptools
# We copy this file first to leverage docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "pserve development.ini" ]
and I run this from the command line:
docker build -t testapp .
but that generates a slew of errors ending with this
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.5/dist-packages/appdirs-1.4.3.dist-info/METADATA'
The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 2
And even if that did build, how will pserve execute in 3.5 instead of 2.7? I tried modifying the Dockerfile to create a virtual environment to force execution in 3.5, but still, no luck. For what it's worth, this works just fine on my machine with a 3.5 virtual environment.
So, can anyone help me build the proper Dockerfile so I can run this Pyramid application with Python 3.5? I'm not married to the Ubuntu image.
If it can help, here's my Dockerfile for a Pyramid app that we develop using Docker. It's not running in production using Docker, though.
FROM python:3.5.2
ADD . /code
WORKDIR /code
ENV PYTHONUNBUFFERED 0
RUN echo deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main >> /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update
RUN apt-get install -y \
    gettext \
    postgresql-client-9.5
RUN pip install -r requirements.txt
RUN python setup.py develop
As you may notice, we use Postgres and gettext, but you can install whatever dependencies you need.
The line ENV PYTHONUNBUFFERED 0: I think we added that because Python would otherwise buffer all output, so nothing would be printed in the console. (Per the Python docs, any non-empty value, even 0, enables unbuffered mode.)
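A small check of what the variable actually does; this sketch relies on sys.stdout.write_through, which reflects unbuffered mode on CPython 3.7+:

```shell
# When stdout is a pipe (not a TTY), Python block-buffers it by default.
python3 -c 'import sys; print(sys.stdout.write_through)' | cat
# → False (block-buffered)

# Any non-empty PYTHONUNBUFFERED disables the buffering.
PYTHONUNBUFFERED=1 python3 -c 'import sys; print(sys.stdout.write_through)' | cat
# → True (writes go straight through)
```

In a container this matters because server logs otherwise sit in the buffer instead of appearing promptly in docker logs.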
And we use Python 3.5.2 for now. We tried a version a bit more recent, but we ran into issues. Maybe that's fixed now.
Also, if it can help, here's an edited version of the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.5
    ports:
      - "15432:5432"
  rabbitmq:
    image: "rabbitmq:3.6.6-management"
    ports:
      - '15672:15672'
  worker:
    image: image_from_dockerfile
    working_dir: /code
    command: command_for_worker development.ini
    env_file: .env
    volumes:
      - .:/code
  web:
    image: image_from_dockerfile
    working_dir: /code
    command: pserve development.ini --reload
    ports:
      - "6543:6543"
    env_file: .env
    depends_on:
      - db
      - rabbitmq
    volumes:
      - .:/code
We build the image by doing
docker build -t image_from_dockerfile .
We do this instead of putting the Dockerfile path directly in the docker-compose.yml config because we use the same image for the web app and the worker; otherwise we would have to rebuild twice every time.
And one last thing: if you run locally for development like we do, you have to run
docker-compose run web python setup.py develop
once in the console; otherwise you'll get an error as if the app were not accessible when you docker-compose up. This happens because mounting the volume with the code in it hides the code baked into the image, so the package files (like the .egg) are "removed".
Update
Instead of running docker-compose run web python setup.py develop to generate the .egg locally, you can tell Docker to use the .egg directory from the image by including the directory in the volumes.
E.g.
volumes:
  - .:/code
  - /code/packagename.egg-info
