Deploying a multi-container group to Azure Serverless - azure

I have a containerised Django Rest Framework application, along with a Celery container for background task processing and a Redis container as the broker.
I want to deploy this application to Azure Serverless. I have been looking for solutions but have had no luck so far.
Here are the two questions I have:
Is it possible to deploy a multi-container group application to Azure Serverless?
Is it possible to deploy the tech stack I have stated above to Azure Serverless? (please provide an example or a link to the documentation to guide me.)
Project's Docker Configurations -
docker-compose
version: "3"
services:
redis:
container_name: redis-container
image: "redis:latest"
ports:
- "6379:6379"
restart: always
command: "redis-server"
snowflake-optimizer:
container_name: snowflake-optimizer
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/snowflake-backend
ports:
- "80:80"
expose:
- "80"
restart: always
command: "python manage.py runserver 0.0.0.0:80"
links:
- redis
celery:
container_name: celery-container
build:
context: .
dockerfile: Dockerfile.celery
command: "celery -A snowflake_optimizer worker -l INFO"
volumes:
- .:/snowflake-optimizer
restart: always
links:
- redis
depends_on:
- snowflake-optimizer
Dockerfile (Django Rest Framework)
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /snowflake-backend
WORKDIR /snowflake-backend
COPY ./requirements.txt /requirements.txt
COPY . /snowflake-backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
RUN python manage.py makemigrations &&\
python manage.py migrate
CMD [ "python", "manage.py", "runserver", "0.0.0.0:80"]
Dockerfile.celery (Celery)
FROM python:3.7-slim
ENV PYTHONUNBUFFERED 1
RUN apt-get update &&\
apt-get install python3-dev default-libmysqlclient-dev gcc -y &&\
apt-get install -y libssl-dev libffi-dev &&\
python -m pip install --upgrade pip &&\
mkdir /snowflake-backend
WORKDIR /snowflake-backend
COPY ./requirements.txt /requirements.txt
COPY . /snowflake-backend
EXPOSE 80
RUN pip install -r /requirements.txt
RUN pip install -U "celery[redis]"
CMD [ "celery", "-A", "snowflake_optimizer", "worker", "-l ", "INFO"]

The question is what we mean here by Azure Serverless. Usually people associate serverless with serverless functions. As of November 2020, Azure Functions do not support Docker deployment, at least not in a way that allows deploying a Django application like this one.
That said, there are other options for deploying containers to Azure:
Azure Container Instances: they do support multi-container deployments. The container orchestration is limited and you are billed by the second.
Azure WebApp: it has a preview feature supporting multi-container deployments. Since it is in preview, the pricing may change and there is not much information about orchestration. I think this is the recommended approach for the deployment described in your question, and I would give it a try (a CLI sketch follows this list).
AKS - Azure Kubernetes Service: this should be used for serious production workloads; obviously it requires Kubernetes knowledge, which may be a huge overhead for a simple one-off deployment.
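
For the WebApp route, a rough sketch of the CLI steps might look like this (the resource group, plan, app name and SKU are placeholders, and the multi-container flags were still in preview at the time of writing):
az appservice plan create --resource-group myResourceGroup --name myPlan --is-linux --sku S1
az webapp create --resource-group myResourceGroup --plan myPlan --name my-multi-app \
    --multicontainer-config-type COMPOSE \
    --multicontainer-config-file docker-compose.yml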

To deploy a multi-container group on Azure Container Instances with the Azure CLI, you describe the group in a YAML deployment file (the az CLI does not read docker-compose files directly) and pass it to az container create:
az container create --resource-group myResourceGroup --file deploy-aci.yaml
Documentation
You can also use the docker-compose CLI directly.
Documentation
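As a minimal sketch of that route, assuming the Docker/ACI integration that shipped with Docker Desktop around that time (the context name myacicontext is a placeholder):
docker login azure
docker context create aci myacicontext
docker context use myacicontext
docker compose up
The last command reads the docker-compose.yml in the current directory and deploys the services as a single ACI container group.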

Related

Install Linux dependencies in Azure App Service permanently using Azure DevOps

I created a CI/CD DevOps pipeline for deploying the Django app. After every deployment I manually SSH into the Azure App Service and install the Linux dependencies below:
apt-get update && apt install -y libxrender1 libxext6
apt-get install -y libfontconfig1
After every deployment, these packages are removed automatically. Is there any way to install these Linux dependencies permanently?
I suppose you are using Azure App Service Linux. Azure App Service Linux uses its own customized Docker image to host your application.
Unfortunately you cannot customize the built-in Azure Linux App Service Docker image, but you can run App Service on Linux with your own custom Docker image that includes your Linux dependencies.
https://github.com/Azure-Samples/docker-django-webapp-linux
Dockerfile example:
# For more information, please refer to https://aka.ms/vscode-docker-python
FROM python:3.8-slim
EXPOSE 8000
EXPOSE 27017
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
# Install pip requirements
COPY requirements.txt .
RUN python -m pip install -r requirements.txt \
&& apt-get update && apt install -y libxrender1 libxext6 \
&& apt-get install -y libfontconfig1
WORKDIR /app
COPY . /app
# init.sh must be copied somewhere on the PATH so the ENTRYPOINT below can find it
COPY init.sh /usr/local/bin/init.sh
RUN chmod u+x /usr/local/bin/init.sh
ENTRYPOINT ["init.sh"]
init.sh example:
#!/bin/bash
set -e
python /app/manage.py runserver 0.0.0.0:8000
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-containerize-deploy-python-web-app-azure-01
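To actually put that image in front of App Service, a rough sketch of the CLI steps might look like this (registry, resource group, plan and app names are placeholders, and az acr build assumes you use Azure Container Registry):
az acr build --registry myregistry --image django-app:v1 .
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name my-django-app \
    --deployment-container-image-name myregistry.azurecr.io/django-app:v1
Because libxrender1, libxext6 and libfontconfig1 are baked into the image at build time, they survive every subsequent deployment.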

docker-compose does not forward the user

There is a problem with my docker-compose file.
The task is to run an Ansible playbook in a docker-compose container: mount a local directory with the playbooks/config/ssh into the container and run the playbook.
In that case everything works. But when I add user forwarding, the container stops forwarding the keys.
What am I doing wrong?
Dockerfile:
FROM alpine:3.15.3
RUN apk update && apk add --no-cache musl-dev openssl-dev make gcc \
    python3 py3-pip py3-cryptography python3-dev
RUN pip3 install cffi
RUN pip3 install ansible
RUN apk add --update openssh \
    && rm -rf /tmp/* /var/cache/apk/*
WORKDIR /etc/ansible
Working docker-compose:
version: "3.3"
services:
ansible:
build: .
volumes:
- ./inventory:/etc/ansible
- ~/.ssh:/root/.ssh
Not working docker-compose
version: "3.3"
services:
ansible:
build: .
volumes:
- ./inventory:/etc/ansible
- ~/.ssh:/root/.ssh
user: ${UID}:${GID}
Lastly, this is how I run docker-compose:
sudo UID=${UID} GID=${GID} docker-compose run --rm ansible
Found a solution:
version: "2.3"
services:
ansible:
build: .
user: ${UID}:${GID}
volumes:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
- /home/${USER}/.ssh:/home/&{USER}/.ssh
Run:
UID="$(id -u)" GID="$(id -g)" USER="$(id -u -n)" docker-compose run --rm ansible

Run a shell script from docker-compose command, inside the container

I am attempting to run a shell script via the docker-compose command, inside the Docker container. I am using the Dockerfile to build the container environment and install all dependencies. I then copy all the project files to the container. This works well as far as I can determine. (I am still fairly new to docker and docker-compose.)
My Dockerfile:
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
What I am currently attempting is this:
docker-compose file:
version: "2"
services:
nginx:
image: nginx:latest
container_name: nginx
ports:
- "8000:8000"
- "443:443"
volumes:
- ./:/app
- ./config/nginx:/etc/nginx/conf.d
- ./config/nginx/ssl/certs:/etc/ssl/certs
- ./config/nginx/ssl/private:/etc/ssl/private
depends_on:
- api
api:
build: .
container_name: app
command: /bin/sh -c "entrypoint.sh"
expose:
- "5000"
This results in the container not starting up, and from the log I get the following:
/bin/sh: 1: entrypoint.sh: not found
For more reference and information this is my entrypoint.sh script:
python manage.py db init
python manage.py db migrate --message 'initial database migration'
python manage.py db upgrade
gunicorn -w 1 -b 0.0.0.0:5000 manage:app
Basically, I know I could run the container with only the gunicorn line above in the command line of the dockerfile. But, I am using a sqlite db inside the app container, and really need to run the db commands for the database to initialise/migrate.
Just for reference this is a basic Flask python web app with a nginx reverse proxy using gunicorn.
Any insight will be appreciated. Thanks.
First, entrypoint.sh is copied into $APP (via COPY . .), but your command never references it by that path; second, you need to set execute permission on entrypoint.sh. It is better to add these three lines, so you will not need the command entry in the docker-compose file.
FROM python:3.6-alpine3.7
RUN apk add --no-cache --update \
python3 python3-dev gcc \
gfortran musl-dev \
libffi-dev openssl-dev
RUN pip install --upgrade pip
ENV PYTHONUNBUFFERED 1
ENV APP /app
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN mkdir $APP
WORKDIR $APP
ADD requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# These lines add /entrypoint.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
The docker-compose entry for api will then be:
api:
  build: .
  container_name: app
  expose:
    - "5000"
or you can keep your own, which will also work fine once the script is referenced by its absolute path:
version: "2"
services:
  api:
    build: .
    container_name: app
    command: /bin/sh -c "/entrypoint.sh"
    expose:
      - "5000"
Now you can check with the docker run command too:
docker run -it --rm myapp
entrypoint.sh needs to be specified with its full path.
It's not clear from your question where exactly you install it; if it's in the current directory, ./entrypoint.sh should work.
(Tangentially, the -c option to sh is superfluous if you want to run a single script file.)
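For completeness, a sketch of the script itself with a shebang and an exec, using the same commands from the question, so gunicorn runs as PID 1 and receives container stop signals:
#!/bin/sh
# run the migrations, then hand the process over to gunicorn
python manage.py db init
python manage.py db migrate --message 'initial database migration'
python manage.py db upgrade
exec gunicorn -w 1 -b 0.0.0.0:5000 manage:app
Note that db init will complain if the migrations directory already exists, so this is really only meant for the first run against a fresh sqlite database.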

gitlab.com CI - build a NodeJS app using docker in docker

I'm currently facing a problem with the gitlab.com shared runners. What I'm trying to achieve in my pipeline is:
- NPM install and using grunt to make some uncss, minimize and compress tasks
- Cleaning up
- Building a docker container with the app included
- Moving the container to gitlab registry
Unfortunately I haven't been able to get this running for a long time! I have tried a lot of different gitlab-ci configs - without success. The problem is that I have to use "image: docker:latest" to have all the docker tools available, but then I don't have node and grunt installed in the container.
The other way around does not work either: I tried using image: centos:latest and installing Docker manually, but that also fails - I always just get Failed to get D-Bus connection: Operation not permitted.
Does anyone have more experience with running docker build commands on a Docker-based GitLab shared runner?
Any help is highly appreciated!!
Thank you
Jannik
GitLab can be a bit tricky :) I don't have an example based on CentOS, but I have one based on Ubuntu, if that helps. Here is a copy-paste of a working GitLab pipeline of mine which uses gulp (you should easily be able to adjust it to work with your grunt).
The .gitlab-ci.yml looks like this (adjust the CONTAINER... variables at the beginning):
variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/psono/psono-client:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/psono/psono-client:latest

stages:
  - build

docker-image:
  stage: build
  image: ubuntu:16.04
  services:
    - docker:dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
  script:
    - sh ./var/build-ubuntu.sh
    - docker info
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
In addition I have this ./var/build-ubuntu.sh, which you can adjust to your needs - replace some Ubuntu dependencies or switch gulp for grunt as needed:
#!/usr/bin/env bash
apt-get update && \
apt-get install -y libfontconfig zip nodejs npm git apt-transport-https ca-certificates curl openssl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get install -y docker-ce && \
ln -s /usr/bin/nodejs /usr/bin/node && \
npm install && \
node --version && \
npm --version
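
Another option, instead of installing Docker inside an Ubuntu job, is to split the pipeline into two jobs with different images and hand the build output over as an artifact. A sketch only; the grunt task name, the dist/ path and the node image tag are placeholders, and the registry variables follow the usual GitLab CI conventions:
stages:
  - build
  - dockerize

frontend:
  stage: build
  image: node:16
  script:
    - npm ci
    - npx grunt build          # placeholder task name
  artifacts:
    paths:
      - dist/                  # placeholder output directory

docker-image:
  stage: dockerize
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
The artifact from the first job is downloaded automatically in the second one, so the Dockerfile can simply COPY the prebuilt dist/ directory into the image.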

Correct Dockerfile syntax for pyramid app for use with Python 3.5?

I want to run a Pyramid app in a Docker container, but I'm struggling with the correct syntax in the Dockerfile. Pyramid doesn't have an official Dockerfile, but I found this site that recommended using an Ubuntu base image.
https://runnable.com/docker/python/dockerize-your-pyramid-application
But this is for Python 2.7. Any ideas how I can change this to 3.5? This is what I tried:
Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev && \
pip3 install --upgrade pip setuptools
# We copy this file first to leverage docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "pserve development.ini" ]
and I run this from the command line:
docker build -t testapp .
but that generates a slew of errors ending with this
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.5/dist-packages/appdirs-1.4.3.dist-info/METADATA'
The command '/bin/sh -c pip3 install -r requirements.txt' returned a non-zero code: 2
And even if that did build, how will pserve execute in 3.5 instead of 2.7? I tried modifying the Dockerfile to create a virtual environment to force execution in 3.5, but still, no luck. For what it's worth, this works just fine on my machine with a 3.5 virtual environment.
So, can anyone help me build the proper Dockerfile so I can run this Pyramid application with Python 3.5? I'm not married to the Ubuntu image.
If that can help, here's my Dockerfile for a Pyramid app that we develop using Docker. It's not running in production using Docker though.
FROM python:3.5.2
ADD . /code
WORKDIR /code
ENV PYTHONUNBUFFERED 0
RUN echo deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main >> /etc/apt/sources.list.d/pgdg.list
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
RUN apt-get update
RUN apt-get install -y \
gettext \
postgresql-client-9.5
RUN pip install -r requirements.txt
RUN python setup.py develop
As you may notice, we use Postgres and gettext, but you can install whatever dependencies you need.
We added the line ENV PYTHONUNBUFFERED 0, I think, because Python would otherwise buffer all output so nothing would be printed in the console.
And we use Python 3.5.2 for now. We tried a slightly more recent version but ran into issues; maybe that's fixed by now.
Also, if that can help, here's an edited version of the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres:9.5
    ports:
      - "15432:5432"
  rabbitmq:
    image: "rabbitmq:3.6.6-management"
    ports:
      - '15672:15672'
  worker:
    image: image_from_dockerfile
    working_dir: /code
    command: command_for_worker development.ini
    env_file: .env
    volumes:
      - .:/code
  web:
    image: image_from_dockerfile
    working_dir: /code
    command: pserve development.ini --reload
    ports:
      - "6543:6543"
    env_file: .env
    depends_on:
      - db
      - rabbitmq
    volumes:
      - .:/code
We build the image by doing
docker build -t image_from_dockerfile .
instead of referencing the Dockerfile directly in the docker-compose.yml config, because we use the same image for both the web app and the worker and would otherwise have to rebuild it twice every time.
And one last thing, if you run locally for development like we do, you have to run
docker-compose run web python setup.py develop
one time in the console; otherwise you'll get an error as if the app were not installed when you docker-compose up. This happens because mounting the volume with the code in it hides the code baked into the image, so the package files (like the .egg) are "removed".
Update
Instead of running docker-compose run web python setup.py develop to generate the .egg locally, you can tell Docker to use the .egg directory from the image by including the directory in the volumes.
E.g.
volumes:
  - .:/code
  - /code/packagename.egg-info
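
To circle back to the original question, here is a minimal sketch of a Python 3.5 Dockerfile without the Ubuntu base, assuming requirements.txt pulls in Pyramid so pserve ends up on the PATH; it also collapses the split ENTRYPOINT/CMD from the attempt into a single CMD so pserve is actually invoked:
FROM python:3.5-slim

WORKDIR /app

# copy requirements first so the dependency layer is cached between builds
COPY requirements.txt /app/requirements.txt
RUN pip install --upgrade pip setuptools && \
    pip install -r requirements.txt

COPY . /app
RUN python setup.py develop

EXPOSE 6543
CMD ["pserve", "development.ini"]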
