There is a problem with my docker-compose file.
The task is to run an Ansible playbook inside a docker-compose container: mount a local directory with the playbooks, config, and SSH keys into the container, and run the playbook.
In that form everything works. But when I add a user mapping, the container can no longer use the mounted keys.
What am I doing wrong?
Dockerfile:
FROM alpine:3.15.3
RUN apk update && apk add --no-cache musl-dev openssl-dev make gcc \
    python3 py3-pip py3-cryptography python3-dev
RUN pip3 install cffi
RUN pip3 install ansible
RUN apk add --update openssh \
    && rm -rf /tmp/* /var/cache/apk/*
WORKDIR /etc/ansible
Working docker-compose:
version: "3.3"
services:
  ansible:
    build: .
    volumes:
      - ./inventory:/etc/ansible
      - ~/.ssh:/root/.ssh
Not working docker-compose:
version: "3.3"
services:
  ansible:
    build: .
    volumes:
      - ./inventory:/etc/ansible
      - ~/.ssh:/root/.ssh
    user: ${UID}:${GID}
This is the command I run:
sudo UID=${UID} GID=${GID} docker-compose run --rm ansible
Found a solution:
version: "2.3"
services:
  ansible:
    build: .
    user: ${UID}:${GID}
    volumes:
      - /etc/passwd:/etc/passwd:ro
      - /etc/group:/etc/group:ro
      - /home/${USER}/.ssh:/home/${USER}/.ssh
Run:
UID="$(id -u)" GID="$(id -g)" USER="$(id -u -n)" docker-compose run --rm ansible
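The explicit assignments in the run command matter: bash does not export `UID` to child processes by default and does not set `GID` at all, so docker-compose would otherwise substitute empty strings. A quick sanity check of the values being passed (a minimal sketch, not part of the original setup):

```shell
# Capture the caller's identity the same way the run command does; these are
# what ${UID}, ${GID}, and ${USER} must expand to in the compose file above.
uid="$(id -u)"
gid="$(id -g)"
user="$(id -un)"
echo "${user} ${uid}:${gid}"
```

These are also the values that the read-only `/etc/passwd` and `/etc/group` mounts resolve inside the container, which is what makes the `~/.ssh` paths line up.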
I have a containerised Django Rest Framework application, along with a Celery container for background task processing and a Redis container as a broker.
I want to deploy this application to Azure Serverless. I have been trying to find solutions, but have had no luck so far.
Here are the two questions I have:
Is it possible to deploy a multi-container group application to Azure Serverless?
Is it possible to deploy the tech stack stated above to Azure Serverless? (Please provide an example or a link to the documentation to guide me.)
Project's Docker Configurations -
docker-compose
version: "3"
services:
  redis:
    container_name: redis-container
    image: "redis:latest"
    ports:
      - "6379:6379"
    restart: always
    command: "redis-server"
  snowflake-optimizer:
    container_name: snowflake-optimizer
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/snowflake-backend
    ports:
      - "80:80"
    expose:
      - "80"
    restart: always
    command: "python manage.py runserver 0.0.0.0:80"
    links:
      - redis
  celery:
    container_name: celery-container
    build:
      context: .
      dockerfile: Dockerfile.celery
    command: "celery -A snowflake_optimizer worker -l INFO"
    volumes:
      - .:/snowflake-optimizer
    restart: always
    links:
      - redis
    depends_on:
      - snowflake-optimizer
Dockerfile (Django Rest Framework)
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /snowflake-backend
WORKDIR /snowflake-backend
COPY ./requirements.txt /requirements.txt
COPY . /snowflake-backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
RUN python manage.py makemigrations &&\
python manage.py migrate
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80"]
Dockerfile.celery (Celery)
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /snowflake-backend
WORKDIR /snowflake-backend
COPY ./requirements.txt /requirements.txt
COPY . /snowflake-backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
CMD [ "celery", "-A", "snowflake_optimizer", "worker", "-l", "INFO"]
The question is what we mean here by Azure Serverless. People usually associate serverless with serverless functions. As of November 2020, Azure Functions do not support Docker deployment, at least not in a way that lets you deploy a Django application.
That said, there are other options for deploying containers to Azure:
Azure Container Instances: these do support multi-container deployments. The container orchestration is limited, and you are billed by the second.
Azure WebApp: it has a preview feature supporting multi-container deployments. Since it is in preview, the pricing may change and there is not much information about orchestration. I think this is the recommended approach for the deployment described in your question; I would give it a try.
AKS - Azure Kubernetes Service: this should be used for serious production workloads; it obviously requires Kubernetes knowledge, which may be a huge overhead for a fairly simple deployment.
To deploy a multi-container app on Azure Container Instances from a Docker Compose file using the Azure CLI:
az container create --resource-group myResourceGroup --file docker-compose.yaml
Documentation
You can also use the docker-compose CLI directly.
Documentation
I am attempting to run a shell script inside the docker container by using docker-compose. I am using the Dockerfile to build the container environment and install all dependencies. I then copy all the project files to the container. This works well as far as I can determine. (I am still fairly new to docker and docker-compose.)
My Dockerfile:
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
What I am currently attempting is this:
docker-compose file:
version: "2"
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "8000:8000"
      - "443:443"
    volumes:
      - ./:/app
      - ./config/nginx:/etc/nginx/conf.d
      - ./config/nginx/ssl/certs:/etc/ssl/certs
      - ./config/nginx/ssl/private:/etc/ssl/private
    depends_on:
      - api
  api:
    build: .
    container_name: app
    command: /bin/sh -c "entrypoint.sh"
    expose:
      - "5000"
This results in the container not starting up, and from the log I get the following:
/bin/sh: 1: entrypoint.sh: not found
For reference, this is my entrypoint.sh script:
python manage.py db init
python manage.py db migrate --message 'initial database migration'
python manage.py db upgrade
gunicorn -w 1 -b 0.0.0.0:5000 manage:app
Basically, I know I could run the container with only the gunicorn line above as the command in the Dockerfile. But I am using a sqlite db inside the app container, and I really need to run the db commands for the database to initialise/migrate.
Just for reference, this is a basic Flask python web app behind an nginx reverse proxy, served by gunicorn.
Any insight will be appreciated. Thanks.
First, you are copying entrypoint.sh into $APP (via COPY . .), but that directory is not on the container's PATH. Second, entrypoint.sh needs execute permission. It is better to add the three lines below; then you will not need the command in the docker-compose file.
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# These lines are for /entrypoint.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
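The chmod step is the part most often missed: without the execute bit, the kernel refuses to run the script even when the path is correct. A minimal illustration of the failure mode (using a hypothetical /tmp path, not the original setup):

```shell
# Create a script without the execute bit and try to run it by path.
printf '#!/bin/sh\necho ready\n' > /tmp/ep.sh
/tmp/ep.sh 2>/dev/null || echo "denied without +x"
# After chmod +x the same invocation succeeds and prints "ready".
chmod +x /tmp/ep.sh
/tmp/ep.sh
```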
The docker-compose service for api will then be:
api:
  build: .
  container_name: app
  expose:
    - "5000"
Or you can keep your own compose file; it will also work, provided the command uses an explicit path:
version: "2"
services:
  api:
    build: .
    container_name: app
    command: /bin/sh -c "./entrypoint.sh"
    expose:
      - "5000"
Now you can check with the docker run command too.
docker run -it --rm myapp
entrypoint.sh needs to be specified with its full path.
It's not clear from your question where exactly you install it; if it's in the current directory, ./entrypoint.sh should work.
(Tangentially, the -c option to sh is superfluous if you want to run a single script file.)
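The difference is easy to reproduce outside Docker: `sh -c "entrypoint.sh"` performs a PATH lookup, while any name containing a slash is executed directly. A small sketch (hypothetical file in a temp directory, assuming the current directory is not on PATH):

```shell
# Set up a throwaway directory with an executable script in it.
dir="$(mktemp -d)"
cd "$dir"
printf '#!/bin/sh\necho ok\n' > entrypoint.sh
chmod +x entrypoint.sh
# Bare name: resolved via PATH, which does not include the current directory.
sh -c "entrypoint.sh" 2>/dev/null || echo "not found via PATH"
# Explicit relative path: no PATH lookup, runs directly and prints "ok".
sh -c "./entrypoint.sh"
```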
I have a Dockerfile where I bring in some files and chmod some stuff. It's a node server that spawns an executable file.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends curl sudo
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs && \
apt-get install --yes build-essential
RUN apt-get install --yes npm
#VOLUME "/usr/local/app"
# Set up C++ dev env
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install gcc-multilib g++-multilib cmake wget -y && \
apt-get clean autoclean && \
apt-get autoremove -y
#wget -O /tmp/conan.deb -L https://github.com/conan-io/conan/releases/download/0.25.1/conan-ubuntu-64_0_25_1.deb && \
#dpkg -i /tmp/conan.deb
#ADD ./scripts/cmake-build.sh /build.sh
#RUN chmod +x /build.sh
#RUN /build.sh
RUN curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
RUN apt-get install -y nodejs sudo
RUN mkdir -p /usr/local/app
WORKDIR /usr/local/app
COPY package.json /usr/local/app
RUN ["npm", "install"]
COPY . .
RUN echo "/usr/local/app/dm" > /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/lib/x86_64-linux-gnu" >> /etc/ld.so.conf.d/mythrift.conf
RUN echo "/usr/local/lib64" >> /etc/ld.so.conf.d/mythrift.conf
RUN ldconfig
EXPOSE 9090
RUN chmod +x dm/dm3
RUN ldd dm/dm3
RUN ["chmod", "+x", "dm/dm3"]
RUN ["chmod", "777", "policy"]
RUN ls -al .
CMD ["nodejs", "app.js"]
It all works fine, but when I use docker-compose for the purpose of having an auto-reloading dev environment in docker, I get an EACCES error when spawning the executable process.
version: '3'
services:
  web:
    build: .
    command: npm run start
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
I'm using nodemon to restart the server on changes, hence the volumes in the compose file. Would love to get that workflow up again.
I think the problem is in how you wrote the docker-compose.yml file. The command line shouldn't be necessary, because you already specified how to start the program in the Dockerfile. Could you try running with this?
version: '3'
services:
  web:
    build:
      context: ./
      dockerfile: Dockerfile
    volumes:
      - .:/usr/local/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
Otherwise, note that the anonymous volume path /usr/app/node_modules does not match the app path used in your Dockerfile (/usr/local/app), so it is not actually preserving the image's node_modules. Relying on an anonymous volume for this is arguably bad practice anyway; you can simply run "npm install" in your Dockerfile.
I hope that makes sense =)
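If the goal of the anonymous volume is to keep the image's node_modules from being hidden by the bind mount, its path has to sit under the same directory the Dockerfile installs into (here /usr/local/app, per the WORKDIR above). A sketch of the corrected volumes section:

```yaml
volumes:
  - .:/usr/local/app/
  # Anonymous volume under the same path as the image's node_modules,
  # so the bind mount above does not hide the packages installed at build time.
  - /usr/local/app/node_modules
```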
I am trying to set up a docker container that mounts a volume from the host. No matter what I try, it always says permission denied when I remote into the docker container. These are some of the commands I have tried adding to my Dockerfile:
RUN su -c "setenforce 0"
and
chcon -Rt svirt_sandbox_file_t /app
Still I get the following error when I remote into my container:
Error: EACCES: permission denied, scandir '/app'
at Error (native)
Error: EACCES: permission denied, open 'npm-debug.log.578996924'
at Error (native)
And as you can see, the app directory is assigned to some user with uid 1000:
Here is my docker file:
FROM php:5.6-fpm
# Install modules
RUN apt-get update && apt-get install -y \
git \
unzip \
libmcrypt-dev \
libicu-dev \
mysql-client \
freetds-dev \
libxml2-dev
RUN apt-get install -y freetds-dev php5-sybase
# This symlink fixes the pdo_dblib install
RUN ln -s /usr/lib/x86_64-linux-gnu/libsybdb.a /usr/lib/
RUN docker-php-ext-install pdo \
&& docker-php-ext-install pdo_mysql \
&& docker-php-ext-install pdo_dblib \
&& docker-php-ext-install iconv \
&& docker-php-ext-install mcrypt \
&& docker-php-ext-install intl \
&& docker-php-ext-install opcache \
&& docker-php-ext-install mbstring
# Override the default php.ini with a custom one
COPY ./php.ini /usr/local/etc/php/
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 4.4.7
# install nvm
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer --version
# Configure freetds
ADD ./freetds.conf /etc/freetds/freetds.conf
WORKDIR /app
# Gulp install
RUN npm install -g gulp
RUN npm install -g bower
CMD ["php-fpm"]
Here is my docker-compose:
nginx_dev:
  container_name: nginx_dev
  build: docker/nginx_dev
  ports:
    - "80:80"
  depends_on:
    - php_dev
  links:
    - php_dev
  volumes:
    - ./:/app
php_dev:
  container_name: php_dev
  build: docker/php-dev
  volumes:
    - ./:/app
Are there any commands I can run to give the root user permission to access the app directory? I am using docker-compose as well.
From the directory listing, it appears that you have selinux configured (that's the trailing dots on the permission bits). In Docker with selinux enabled, you need to mount volumes with an extra flag, :z. Docker describes this as a volume label but I believe this is an selinux term rather than a docker label on the volume.
Your resulting docker-compose.yml should look like:
version: '2'
services:
  nginx_dev:
    container_name: nginx_dev
    build: docker/nginx_dev
    ports:
      - "80:80"
    depends_on:
      - php_dev
    links:
      - php_dev
    volumes:
      - ./:/app:z
  php_dev:
    container_name: php_dev
    build: docker/php-dev
    volumes:
      - ./:/app:z
Note, I also updated the syntax to version 2. Version 1 of the docker-compose.yml is being phased out. Version 2 will result in the containers being run in their own network by default which is usually preferred but may cause issues if you have other containers trying to talk to these.
I want to run a pyramid app in a docker container, but I'm struggling with the correct syntax in the Dockerfile. Pyramid doesn't have an official Dockerfile, but I found this site, which recommends using an Ubuntu base image.
https://runnable.com/docker/python/dockerize-your-pyramid-application
But this is for Python 2.7. Any ideas how I can change this to 3.5? This is what I tried:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev && \
pip3 install --upgrade pip setuptools
# We copy this file first to leverage docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "pserve development.ini" ]
and I run this from the command line:
docker build -t testapp .
but that generates a slew of errors ending with this
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.5/dist-packages/appdirs-1.4.3.dist-info/METADATA'
The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 2
And even if that did build, how will pserve execute in 3.5 instead of 2.7? I tried modifying the Dockerfile to create a virtual environment to force execution in 3.5, but still, no luck. For what it's worth, this works just fine on my machine with a 3.5 virtual environment.
So, can anyone help me build the proper Dockerfile so I can run this Pyramid application with Python 3.5? I'm not married to the Ubuntu image.
If that can help, here's my Dockerfile for a Pyramid app that we develop using Docker. It's not running in production using Docker though.
FROM python:3.5.2
ADD . /code
WORKDIR /code
ENV PYTHONUNBUFFERED 0
RUN echo deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main >> /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update
RUN apt-get install -y \
gettext \
postgresql-client-9.5
RUN pip install -r requirements.txt
RUN python setup.py develop
As you may notice, we use Postgres and gettext, but you can install whatever dependencies you need.
The line ENV PYTHONUNBUFFERED 0 is there because, I think, Python would otherwise buffer all output, so nothing would be printed to the console.
And we use Python 3.5.2 for now. We tried a version a bit more recent, but we ran into issues. Maybe that's fixed now.
Also, if that can help, here's an edited version of the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.5
    ports:
      - "15432:5432"
  rabbitmq:
    image: "rabbitmq:3.6.6-management"
    ports:
      - '15672:15672'
  worker:
    image: image_from_dockerfile
    working_dir: /code
    command: command_for_worker development.ini
    env_file: .env
    volumes:
      - .:/code
  web:
    image: image_from_dockerfile
    working_dir: /code
    command: pserve development.ini --reload
    ports:
      - "6543:6543"
    env_file: .env
    depends_on:
      - db
      - rabbitmq
    volumes:
      - .:/code
We build the image by doing
docker build -t image_from_dockerfile .
We do this instead of passing the Dockerfile path directly in the docker-compose.yml config because we use the same image for the web app and the worker, and we would otherwise have to rebuild twice every time.
And one last thing: if you run locally for development like we do, you have to run
docker-compose run web python setup.py develop
once in the console; otherwise you'll get an error as if the app were not accessible when you docker-compose up. This happens because mounting the volume with the code in it hides the code baked into the image, so the package files (like the .egg) are "removed".
Update
Instead of running docker-compose run web python setup.py develop to generate the .egg locally, you can tell Docker to use the .egg directory from the image by including the directory in the volumes.
E.g.
volumes:
  - .:/code
  - /code/packagename.egg-info