Azure Container generates error '503' reason 'site unavailable' - azure

The issue
I have built a Docker image on my local machine running Windows 10.
Running docker-compose build, docker-compose up -d, and docker-compose logs -f produces the expected results (no errors).
The app runs correctly when started with winpty docker container run -i -t -p 8000:8000 --rm altf1be.plotly.docker-compose:2019-12-17.
I upload the Docker image to a private Azure Container Registry.
I deploy the web app based on the Docker image: Azure Portal > Container registry > Repositories > altf1be.plotly.docker-compose > v2019-12-17 > context menu > deploy to web app.
I run the web app and I get "The service is unavailable".
What is wrong with my method?
Thank you in advance for the time you will invest in this issue.
docker-compose.yml
version: '3.7'
services:
  twikey-plot_ly_service:
    # container_name: altf1be.plotly.docker-container-name
    build: .
    image: altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    ports:
      - 8000:8000
    env_file: .env.staging
.env.staging
apiUrl=https://api.beta.alt-f1.be
authorizationUrl=/api/auth/authorization/code
serverUrl=https://dunningcashflow-api.alt-f1.be
transactionFeedUrl=/creditor/tx
api_token=ANICETOKEN
Dockerfile
# read the Dockerfile reference documentation
# https://docs.docker.com/engine/reference/builder
# build the docker
# docker build -t altf1be.plotly.docker-compose:2019-12-17 .
# https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#use-a-docker-image-from-any-private-registry-optional
# Use the docker images used by Microsoft on Azure
FROM mcr.microsoft.com/oryx/python:3.7-20190712.5
LABEL Name=altf1.be/plotly Version=1.19.0
LABEL maintainer="abo+plotly_docker_staging@alt-f1.be"
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
# copy the code from the local drive to the docker
ADD . /code/
# non interactive front-end
ARG DEBIAN_FRONTEND=noninteractive
# update the software repository
ENV SSH_PASSWD 'root:!astrongpassword!'
RUN apt-get update && apt-get install -y \
    apt-utils \
    # enable SSH
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "$SSH_PASSWD" | chpasswd
RUN chmod u+x /code/init_container.sh
# update the python packages and libraries
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install -r requirements.txt
# copy sshd_config file. See https://man.openbsd.org/sshd_config
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222
ENV PORT 8000
ENV SSH_PORT 2222
# install dependencies
ENV ACCEPT_EULA=Y
ENV APPENGINE_INSTANCE_CLASS=F2
ENV apiUrl=https://api.beta.alt-f1.be
ENV serverUrl=https://dunningcashflow-api.alt-f1.be
ENV DOCKER_REGISTRY altf1be.azurecr.io
ENTRYPOINT ["/code/init_container.sh"]
/code/init_container.sh
#!/bin/sh
gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
app/conf/gunicorn.conf.docker.staging.py
# -*- coding: utf-8 -*-
workers = 1
# print("workers: {}".format(workers))
bind = '0.0.0.0'
timeout = 600
log_level = "debug"
reload = True
print(
f"workers={workers} bind={bind} timeout={timeout} --log-level={log_level} --reload={reload}"
)
[screenshot: container settings]
[screenshot: application settings]
[screenshot: web app running, showing 'the service is unavailable']
[screenshot: Kudu, showing 'the service is unavailable']
[screenshot: Kudu, HTTP pings on port 8000 (the app was not running)]
ERROR - Container altf1be-plotly-docker_0_ee297002 for site altf1be-plotly-docker has exited, failing site start
ERROR - Container altf1be-plotly-docker_0_ee297002 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.

From what I can see, you are using the wrong environment variable: it should be WEBSITES_PORT; you are missing the s after WEBSITE. Add it and try to deploy the image again.
Also, Azure Web App will not set the environment variables for you the way the docker-compose file does with the env_file option. So I suggest you create the image with docker build and test it locally, setting the environment variables with -e. When the image runs well, push it to ACR and deploy it from ACR, setting the same environment variables in the Web App.
Or you can still use a docker-compose file to deploy your image to the Web App: once the image is in ACR, replace env_file and build with the environment and image options.
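For the docker-compose route, a minimal sketch of what the compose file could look like once the image is in ACR, replacing env_file and build with environment and image. The registry prefix and variable values are taken from the question; verify them against your own setup:

```yaml
version: '3.7'
services:
  twikey-plot_ly_service:
    # image now pulled from ACR instead of built locally
    image: altf1be.azurecr.io/altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    # environment replaces env_file, since Web App does not read .env files
    environment:
      - apiUrl=https://api.beta.alt-f1.be
      - serverUrl=https://dunningcashflow-api.alt-f1.be
    ports:
      - 8000:8000
```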

Related

Deploying Docker Container Registry on Azure App Service Issue

I am unable to run Docker Container Registry on the Azure App Service. I have a Flask app, and the following is its Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# copy project
WORKDIR /usr/src/app
COPY . /usr/src/app/
# expose port 80
EXPOSE 80
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:80", "app:app"]
I have deployed the docker image on the Container Registry. I have also set WEBSITES_PORT to 80 under App Service -> Application Settings.
Even after doing that, I get the following error:-
ERROR - Container XYZ didn't respond to HTTP pings on port: 80, failing site start.
I have tried running it locally and it works fine. But, it just does not seem to work on the Azure App service. Any help is highly appreciated.
I don't see an issue in the code you posted but to verify, here is a configuration for a Flask app with a Gunicorn server that works on a containerized Azure App Service:
app.py
from flask import Flask
app = Flask(__name__)
@app.route("/")
def hello_world():
    return "<p>Hello World!</p>"
Dockerfile
FROM python:3.8-slim-buster
ADD app.py app.py
ADD requirements.txt requirements.txt
RUN pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
EXPOSE 80
CMD ["gunicorn", "--bind=0.0.0.0:80", "app:app"]
requirements.txt
flask
gunicorn
I assume you selected "Docker Container" when you created the Azure App Service?
And then simply chose your image?

ASP.NET Unable to configure HTTPS endpoint - Docker & Linux

I'm having an issue with my Docker images: a month ago they worked well, but I made a minor change (an HTML change on a minor page) and tried to rebuild a new Docker image.
When I deploy the new image, I get the following error message:
System.InvalidOperationException: Unable to configure HTTPS endpoint. No server certificate was specified, and the default developer certificate could not be found or is out of date.
To generate a developer certificate run 'dotnet dev-certs https'. To trust the certificate (Windows and macOS only) run 'dotnet dev-certs https --trust'.
For more information on configuring HTTPS see https://go.microsoft.com/fwlink/?linkid=848054.
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions, Action`1 configureOptions)
at Microsoft.AspNetCore.Hosting.ListenOptionsHttpsExtensions.UseHttps(ListenOptions listenOptions)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IEnumerable`1 listenOptions, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
Here is a summary of the setup
I tried to build the Docker image manually, and also automatically with my CI/CD (Azure DevOps), but both produce the same error.
I checked for any change in the Git history... nothing.
Here is the Dockerfile I use:
### >>> GLOBALS
ARG ENVIRONMENT="Production"
ARG PROJECT="SmartPixel.SoCloze.Web"
# debian buster - AMD64
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
ARG NUGET_CACHE=https://api.nuget.org/v3/index.json
ARG NUGET_FEED=https://api.nuget.org/v3/index.json
# Copy sources
COPY src/ /app/src
ADD common.props /app
WORKDIR /app
RUN apt-get update
RUN apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash -
RUN apt-get install -y nodejs
RUN npm install /app/src/SmartPixel.Core.Blazor/
# Installs the required dependencies on top of the base image
# Publish a self-contained image
RUN apt-get update && apt-get install -y libgdiplus libc6-dev && dotnet dev-certs https --clean;\
dotnet dev-certs https && dotnet dev-certs https --trust;\
dotnet publish --self-contained --runtime linux-x64 -c Debug -o out src/${PROJECT};
# Execute
# Start a new image from aspnet runtime image
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS runtime
### >>> IMPORTS
ARG ENVIRONMENT
ARG PROJECT
ENV ASPNETCORE_ENVIRONMENT=${ENVIRONMENT}
ENV ASPNETCORE_URLS="http://+:80;https://+:443;https://+:44390"
ENV PROJECT="${PROJECT}.dll"
# Make logs a volume for persistence
VOLUME /app/Logs
# App directory
WORKDIR /app
# Copy our build from the previous stage in /app
COPY --from=build /app/out ./
RUN apt-get update && apt-get install -y ffmpeg libgdiplus libc6-dev
# Ports
EXPOSE 80
EXPOSE 443
EXPOSE 44390
# Execute
ENTRYPOINT dotnet ${PROJECT}
What is strange is that old images (more than a month old) all work, but not when I rebuild them.
Here is also the docker-compose file:
version: '3.3'
services:
  web:
    image: registry.gitlab.com/mycorp/socloze.web:1.1.1040
    volumes:
      - keys-vol:/root/.aspnet
      - logs-vol:/app/Logs
      - sitemap-vol:/data/sitemap/
    networks:
      - haproxy-net
      - socloze-net
    configs:
      - source: socloze-web-conf
        target: /app/appsettings.json
    logging:
      driver: json-file
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  haproxy-net:
    external: true
  socloze-net:
    external: true
volumes:
  keys-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/keys
      o: bind
      type: none
  logs-vol:
    driver: local
    driver_opts:
      device: /data/socloze/web/logs
      o: bind
      type: none
  sitemap-vol:
    driver: local
    driver_opts:
      device: /data/sitemap
      o: bind
      type: none
configs:
  socloze-web-conf:
    external: true
What can be the cause, given that:
old images work perfectly
new images produce this error
there is no code change and no change in the Dockerfile
the OS is Debian, and the Docker images' system is Ubuntu
Do you have any idea? I have been searching for a solution for weeks!
You need to add a couple more environment variables, and you might also have to mount a certificate volume:
Environment Variables and their values:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=https://+:443;http://+:80
- ASPNETCORE_Kestrel__Certificates__Default__Password=password
- ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
and volumes:
- ~/.aspnet/https:/https:ro
Depending on how you plan to add these environment variables and mount the volumes (docker run or docker-compose), you may have to experiment with adding double quotes to the parameter lists in the right spots.
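A sketch of how these could be wired into a compose service. The image name reuses the one from the compose file above; the certificate password and path are placeholders that must match the .pfx you mount:

```yaml
services:
  web:
    image: registry.gitlab.com/mycorp/socloze.web:1.1.1040
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
      # placeholder password; must match the one used to export the .pfx
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      # read-only mount of the host's dev-certs directory
      - ~/.aspnet/https:/https:ro
```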

Docker Container Azure WebAPP Flask error while deploying due to port

I have implemented a Flask app in Python and, using Docker Desktop, built an image of the app. I pushed that image to my private Docker Hub. In the Azure portal I created a Docker-container-based App Service, including the user and password.
The dockerized app works perfectly on my laptop with this Dockerfile:
FROM python:3.7.3
RUN apt-get update -y
RUN apt-get install -y python3-pip \
python3-dev \
build-essential \
cmake
ADD . /demo
WORKDIR /demo
RUN pip install -r requirements.txt
EXPOSE 80 5000
here is the docker-compose.yml
version: "3.8"
services:
  app:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/demo
here is a small part of app.py
if __name__ == '__main__':
    app.run(host="0.0.0.0")
As said, with these files the container works on my laptop, but during the deployment stage on Azure I received the following error:
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
INFO - Initiating warmup request to container xxx for site xxx
ERROR - Container xxx for site xxx has exited, failing site start
ERROR - Container xxx didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
INFO - Stopping site xxx because it failed during startup.
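The error above points to a port mismatch between what the container listens on and what App Service pings. A minimal sketch (an assumption, not code from the thread) of making the port explicit; resolve_port is a hypothetical helper, not part of the original app:

```python
import os

def resolve_port(default: int = 5000) -> int:
    """Pick the port the container should listen on.

    App Service pings port 80 unless WEBSITES_PORT overrides it, while
    Flask's development server defaults to 5000: hence the mismatch above.
    """
    for var in ("WEBSITES_PORT", "PORT"):
        value = os.environ.get(var)
        if value:
            return int(value)
    return default

# In app.py this would be used as (hypothetical wiring):
#   app.run(host="0.0.0.0", port=resolve_port())
```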

running docker container is not reachable by browser

I have started working with Docker and dockerized a simple Node.js app, but I'm not able to access my container from the outside world (i.e. from a browser).
Stack:
node.js app with 4 endpoints (I used hapi server).
macOS
docker desktop community version 2.0.0.2
Here is my dockerfile:
FROM node:10.13-alpine
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
RUN npm install -g nodemon
COPY . .
EXPOSE 8000
CMD ["npm","run", "start-server"]
I did the following steps.
I ran the following from the command line in my working directory:
docker image build -t ares-maros .
docker container run -d --name rest-api -p 8000:8000 ares-maros
I checked that the container is running via docker container ps; the result shows the container running.
I open the browser and type 0.0.0.0:8000 (also tried 127.0.0.1:8000 and localhost:8000).
Result: the running Docker container is not reachable from the browser.
I also went into the container with docker exec -it 81b3d9b17db9 sh and tried to reach my Node app inside the container via wget/curl, and that works. I get responses from all Node.js endpoints.
Where could the problem be? Maybe my Mac blocks the connection?
Thanks for the help.
Please check the order of the parameters in the following command:
docker container run -d --name rest-api -p 8000:8000 ares-maros
I faced a similar issue. I was using -p port:port at the end of the command; simply moving it to just after docker run solved it for me.

Inside Docker container NetworkingError: connect ECONNREFUSED 127.0.0.1:8002

I'm building a Node.js app running in a Docker container, and I get the following error:
NetworkingError: connect ECONNREFUSED 127.0.0.1:8000
And if I try with dynamodb-local:8000, it gives me the following error:
NetworkingError: write EPROTO
140494555330368:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:252:
I am using the following docker-compose.yml
version: "3"
services:
  node_app:
    build: .
    container_name: 'node_app'
    restart: 'always'
    command: 'npm run start:local'
    ports:
      - "3146:3146"
    links:
      - dynamodb-local
  dynamodb-local:
    container_name: 'dynamodb-local'
    build: dynamodb-local/
    restart: 'always'
    ports:
      - "8000:8000"
The Node.js Docker configuration (node_app) is as follows:
FROM node:latest
RUN mkdir -p /app/node_app
WORKDIR /app/node_app
# Install app dependencies
COPY package.json /app/node_app
#RUN npm cache clean --force && npm install
RUN npm install
# Bundle app source
COPY . /app/node_app
# Build the built version
EXPOSE 3146
#RUN npm run dev
CMD ["npm", "start"]
The DynamoDB Local Docker configuration (dynamodb-local) is as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8000
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
tar xfz /tmp/dynamodb_local_latest && \
rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8000"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
But when I try to connect to the local DynamoDB from outside the Docker container, it works perfectly.
Please help me sort out this issue.
Inside the Docker container, the DB will be available at the host dynamodb-local:8000.
It might be an SSL issue; please check your Apache configuration in case you have used the port for another application.
In that case, you can run DynamoDB Local on another port as follows:
#
# Dockerfile for DynamoDB Local
#
# https://aws.amazon.com/blogs/aws/dynamodb-local-for-desktop-development/
#
FROM openjdk:7-jre
RUN mkdir -p /var/dynamodb_local
RUN mkdir -p /var/dynamodb_picstgraph
# Create working space
WORKDIR /var/dynamodb_picstgraph
# Default port for DynamoDB Local
EXPOSE 8004
# Get the package from Amazon
RUN wget -O /tmp/dynamodb_local_latest https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz && \
tar xfz /tmp/dynamodb_local_latest && \
rm -f /tmp/dynamodb_local_latest
# Default command for image
ENTRYPOINT ["/usr/bin/java", "-Djava.library.path=.", "-jar", "DynamoDBLocal.jar", "-sharedDb", "-dbPath", "/var/dynamodb_local"]
CMD ["-port", "8004"]
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/var/dynamodb_local", "/var/dynamodb_nodeapp"]
Now in your docker container, the DB will be available with the host dynamodb-local:8004.
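If DynamoDB Local moves to port 8004 as sketched above, the compose file has to follow. Only the changed service is shown; the rest stays as in the question:

```yaml
  dynamodb-local:
    container_name: 'dynamodb-local'
    build: dynamodb-local/
    restart: 'always'
    ports:
      # must match the -port value in the image's CMD
      - "8004:8004"
```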

Resources