Deploying Docker Container Registry on Azure App Service Issue

I am unable to run a Docker Container Registry image on the Azure App Service. I have a Flask app, and the following is its Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy project
WORKDIR /usr/src/app
COPY . /usr/src/app/
# expose port 80
EXPOSE 80
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:80", "app:app"]
I have pushed the Docker image to the Container Registry. I have also set WEBSITES_PORT to 80 under App Service -> Application Settings.
Even after doing that, I get the following error:
ERROR - Container XYZ didn't respond to HTTP pings on port: 80, failing site start.
I have tried running it locally and it works fine, but it just does not seem to work on the Azure App Service. Any help is highly appreciated.

I don't see an issue in the code you posted but to verify, here is a configuration for a Flask app with a Gunicorn server that works on a containerized Azure App Service:
app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello_world():
    return "<p>Hello World!</p>"
Dockerfile
FROM python:3.8-slim-buster
ADD app.py app.py
ADD requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
EXPOSE 80
CMD ["gunicorn", "--bind=0.0.0.0:80", "app:app"]
requirements.txt
flask
gunicorn
I assume you selected "Docker Container" when you created the Azure App Service?
And then simply chose your image?
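For completeness, here is roughly how I build, push, and configure such an image from the CLI; the registry, resource group, and web app names below are placeholders, so substitute your own:
docker build -t myregistry.azurecr.io/flask-demo:v1 .
az acr login --name myregistry
docker push myregistry.azurecr.io/flask-demo:v1
# tell App Service which port the container listens on
az webapp config appsettings set --resource-group my-rg --name my-flask-app --settings WEBSITES_PORT=80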

Related

Docker Container Azure WebAPP Flask error while deploying due to port

I have implemented a Flask app in Python and built an image of it using Docker Desktop. I pushed that image to my private Docker Hub. In the Azure portal I created a Docker-container-based App Service, including the username and password.
The dockerized app works perfectly on my laptop with this Dockerfile:
FROM python:3.7.3
RUN apt-get update -y
RUN apt-get install -y python3-pip \
    python3-dev \
    build-essential \
    cmake
ADD . /demo
WORKDIR /demo
RUN pip install -r requirements.txt
EXPOSE 80 5000
Here is the docker-compose.yml:
version: "3.8"
services:
  app:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/demo
Here is a small part of app.py:
if __name__ == '__main__':
    app.run(host="0.0.0.0")
As said, with these files the container works on my laptop, but during the deployment stage on Azure I received the following error:
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
INFO - Initiating warmup request to container xxx for site xxx
ERROR - Container xxx for site xxx has exited, failing site start
ERROR - Container xxx didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
INFO - Stopping site xxx because it failed during startup.

Stop Flask app when not in use on Azure Container Instance

I made a test flask application that looks like the following:
from flask import Flask
from flask_cors import CORS
import os
app = Flask(__name__)
CORS(app)
@app.route('/')
def hello_word():
    return 'hello', 200

if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0', port=int(os.environ.get("PORT", 8080)))
However, if I host this application on Azure Container Instances, the application never "stops". The memory usage is always around 50 MB and I'm constantly getting charged. If I host the same application on Google Cloud Run, I'm only charged for the request time (20 ms or so). The following is my Dockerfile:
FROM python:3.9-slim
RUN apt-get update -y
RUN apt-get install -y python-pip python-dev build-essential
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN pip install Flask gunicorn
ENV PORT=80
CMD exec gunicorn --bind :$PORT --workers 3 --threads 3 --timeout 100 main:app --access-logfile -
Any thoughts on how to stop the container instance once the request is served on Azure?
Actually, ACI just runs the image for you and nothing else. That means if your image has an application that keeps running, then the ACI keeps running. So it seems you need to schedule stopping the ACI; maybe you can try an Azure Logic App. You can use it to create the ACI and then stop it after whatever period you need.
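As a rough sketch of what that schedule could call (the resource names here are placeholders), the Azure CLI can stop and start a container group on demand:
# stop the container group when it is not needed
az container stop --name my-aci-app --resource-group my-rg
# start it again before the next period of use
az container start --name my-aci-app --resource-group my-rg
A Logic App on a recurrence trigger can invoke the equivalent operations on the schedule you need.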

How do I connect to a Redis server from a Python script inside a Docker container

Here is my Python file 'app.py':
import redis
cache = redis.Redis(host='redis', port=6379)
for i in range(8):
    cache.set(i, i)
for i in range(8):
    print(cache.get(i))
Here is my Dockerfile
FROM python:3.7-alpine
COPY . /code
WORKDIR /code
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
But when I build and run the Docker image, I get an error that it is not able to connect.
The container you run based on the image does not know what 'redis' is. You can tell it by using the --add-host option of docker run.
Find out the public IP of your redis server. Then use that IP to map it to the redis hostname that your script tries to connect to.
docker run --add-host redis:<public_ip> ....
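If the Redis server also runs in a container on the same host, a user-defined Docker network is another option, because containers on such a network resolve each other by container name; a minimal sketch with placeholder names:
docker network create demo-net
docker run -d --name redis --network demo-net redis:alpine
docker build -t redis-demo .
docker run --rm --network demo-net redis-demo
Here the script's host='redis' resolves to the container named redis, with no --add-host needed.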

Azure Container generates error '503' reason 'site unavailable'

The issue
I have built a docker image on my local machine running windows 10
running docker-compose build, docker-compose up -d and docker-compose logs -f generates the expected results (no errors)
the app runs correctly by running winpty docker container run -i -t -p 8000:8000 --rm altf1be.plotly.docker-compose:2019-12-17
I upload the docker image on a private Azure Container Registry
I deploy the web app based on the docker image Azure Portal > Container registry > Repositories > altf1be.plotly.docker-compose > v2019-12-17 > context-menu > deploy to web app
I run the web app and I get "The service is unavailable"
What is wrong with my method?
Thank you in advance for the time you will invest in this issue.
docker-compose.yml
version: '3.7'
services:
  twikey-plot_ly_service:
    # container_name: altf1be.plotly.docker-container-name
    build: .
    image: altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    ports:
      - 8000:8000
    env_file: .env.staging
.env.staging
apiUrl=https://api.beta.alt-f1.be
authorizationUrl=/api/auth/authorization/code
serverUrl=https://dunningcashflow-api.alt-f1.be
transactionFeedUrl=/creditor/tx
api_token=ANICETOKEN
Dockerfile
# read the Dockerfile reference documentation
# https://docs.docker.com/engine/reference/builder
# build the docker
# docker build -t altf1be.plotly.docker-compose:2019-12-17 .
# https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#use-a-docker-image-from-any-private-registry-optional
# Use the docker images used by Microsoft on Azure
FROM mcr.microsoft.com/oryx/python:3.7-20190712.5
LABEL Name=altf1.be/plotly Version=1.19.0
LABEL maintainer="abo+plotly_docker_staging@alt-f1.be"
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
# copy the code from the local drive to the docker
ADD . /code/
# non interactive front-end
ARG DEBIAN_FRONTEND=noninteractive
# update the software repository
ENV SSH_PASSWD 'root:!astrongpassword!'
RUN apt-get update && apt-get install -y \
    apt-utils \
    # enable SSH
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "$SSH_PASSWD" | chpasswd
RUN chmod u+x /code/init_container.sh
# update the python packages and libraries
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install -r requirements.txt
# copy sshd_config file. See https://man.openbsd.org/sshd_config
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222
ENV PORT 8000
ENV SSH_PORT 2222
# install dependencies
ENV ACCEPT_EULA=Y
ENV APPENGINE_INSTANCE_CLASS=F2
ENV apiUrl=https://api.beta.alt-f1.be
ENV serverUrl=https://dunningcashflow-api.alt-f1.be
ENV DOCKER_REGISTRY altf1be.azurecr.io
ENTRYPOINT ["/code/init_container.sh"]
/code/init_container.sh
gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
app/conf/gunicorn.conf.docker.staging.py
# -*- coding: utf-8 -*-
workers = 1
# print("workers: {}".format(workers))
bind = '0.0.0.0'
timeout = 600
log_level = "debug"
reload = True
print(
f"workers={workers} bind={bind} timeout={timeout} --log-level={log_level} --reload={reload}"
)
Screenshots (not reproduced here): container settings, application settings, web app running ('the service is unavailable'), Kudu ('the service is unavailable'), Kudu HTTP pings on port 8000 (the app was not running). The container logs show:
ERROR - Container altf1be-plotly-docker_0_ee297002 for site altf1be-plotly-docker has exited, failing site start
ERROR - Container altf1be-plotly-docker_0_ee297002 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
What I can see is that you used the wrong environment variable: it should be WEBSITES_PORT; you are missing the S after WEBSITE. Add it and try to deploy the image again.
Also, I don't think Azure Web App will set the environment variables for you the way you did in the docker-compose file with the env_file option. So I suggest you build the image with docker build and test it locally, setting the environment variables through -e. When the image runs well, you can push it to ACR and deploy it from ACR, also setting the environment variables in the Web App.
Or you can still use a docker-compose file to deploy your image in the Web App; replace env_file and build with environment and image once the image is in ACR.
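As a quick local check of that suggestion (the image tag and variables are taken from your question, shown here only as examples), the values from .env.staging can be passed with -e before pushing to ACR:
docker build -t altf1be.plotly.docker-compose:2019-12-17 .
docker run -p 8000:8000 -e apiUrl=https://api.beta.alt-f1.be -e serverUrl=https://dunningcashflow-api.alt-f1.be altf1be.plotly.docker-compose:2019-12-17
docker run also accepts --env-file .env.staging if you prefer to reuse the file as-is.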

Docker Compose: Accessing my webapp from the browser

I'm finding the docs sorely lacking for this (or, I'm dumb), but here's my setup:
The web app is running on Node and Express, on port 8080. It also connects to a MongoDB container (hence why I'm using docker-compose).
In my Dockerfile, I have:
FROM node:4.2.4-wheezy
# Set correct environment variables.
ENV HOME /root
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install;
# Bundle app source
COPY . /usr/src/app
CMD ["node", "app.js"]
EXPOSE 8080
I run:
docker-compose build web
docker-compose build db
docker-compose up -d db
docker-compose up -d web
When I run docker-machine ip default I get 192.168.99.100.
So when I go to 192.168.99.100:8080 in my browser, I would expect to see my app - but I get connection refused.
What silly thing have I done wrong here?
Fairly new here as well, but did you publish your ports in your docker-compose file?
Your Dockerfile simply exposes the ports but does not open access from your host. Publishing (with the -p flag on a docker run command, or ports in a docker-compose file) enables access from outside the container (see ports in the docs).
Something like this may help:
ports:
  - "8080:8080"
