Docker never runs on Azure - Waiting for response to warmup request for container

I'm trying to deploy a dockerized app on Azure's App Service. I enter all the fields correctly and my image gets pulled, but I keep getting this error until something times out:
Waiting for response to warmup request for container -<container name > Elapsed time = 154.673506 sec
I did set WEBSITE_PORT to 8080 (the port used by my app).
Here is the Dockerfile:
FROM google/dart
WORKDIR /app
ADD pubspec.* /app/
RUN pub get --no-precompile
ADD . /app/
RUN pub get --offline --no-precompile
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["pub", "run", "aqueduct:aqueduct", "serve", "--port", "8080"]
It was working fine; I had it working last night. I wanted to refresh my image, so I restarted. Nothing worked. After multiple hours I deleted my app and started again... no luck. Any ideas?
EDIT 1:
Tried changing the port to 80, no luck. (This was the port I was using at first, when it was working fine.)
RESOLVED (partially)
I changed everything to port 8000. I realized that Linux and Windows do not like a non-system process listening on a port below 1024, such as 80. So I moved everything to port 8000 and set the application setting on Azure to {WEBSITE_PORT, 8000}. It now seems to work fine. I don't know if this is an official fix, but it does warm up after 30-ish seconds.

You can also try setting WEBSITES_CONTAINER_START_TIME_LIMIT to 1800
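The value is in seconds; as far as I know, 1800 is the documented maximum (the default is 230). For reference, it can also be set from the CLI instead of the portal; a sketch assuming the Azure CLI, with resource names as placeholders:
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800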

Depending on which App Service plan you have: if there is an 'Always on' option, try enabling it in the configuration of your app in the Azure portal.
If you are using a Premium App Service plan, you can set the number of pre-warmed instances. Try setting that to 2-3 and see if it gets any better.
I had the same experience as you, but my container was really big since it contained an ML model, so in the end I switched to AKS because it performed better.

What actually worked for me was a combination of the answers above by Ethiene and kgalic: setting all ports to 8000 in the Dockerfile,
EXPOSE 8000
CMD gunicorn -w 4 -b :8000 app:app
then, in the Azure configuration under Application settings, adding
"WEBSITES_PORT" : "8000"
and in the Azure configuration under General settings, setting
"Always on" : "on"

App Service - Docker container deploy
In my case, this slowdown was caused by automatic port detection.
Setting WEBSITES_PORT in the application settings solved the problem.
WEBSITES_PORT=8000
Note that if you have multiple deployment slots (production/staging), you have to set this environment variable in the other slots too.
From: Azure App Service on Linux FAQ - Custom Containers
We have automatic port detection. You can also specify an app setting called WEBSITES_PORT and give it the value of the expected port number. Previously, the platform used the PORT app setting. We are planning to deprecate this app setting and to use WEBSITES_PORT exclusively.
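To cover the slot caveat above: the same app setting can be applied per slot from the CLI; a sketch assuming the Azure CLI and a slot named staging (the names are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --slot staging --settings WEBSITES_PORT=8000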

I had this same problem with a Node.js application. I built the dist folder with npm run build during creation of the Docker image, so it is part of the image, rather than having the Docker CMD build it on the first execution of the app. Perhaps the RAM and CPU were not enough for npm run build to run at initial startup.
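For illustration, the point is to run the build while the image is being built rather than when the container starts; a minimal sketch assuming a standard Node project with a build script that emits dist/ (the paths and scripts are illustrative, not from the original post):
# Build happens at image build time, on the build machine's resources
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# At container start there is nothing left to build
CMD ["node", "./dist/index.js"]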

Related

Issue with running Node application in AWS ECS container

I have created a Docker image and pushed it to the AWS ECR repository.
I'm creating a task with 3 containers: one for Redis, one for PostgreSQL, and another for the given image, which is my Node project.
In the Dockerfile I added a CMD to run the app with the node command. Here is the Dockerfile content:
# build stage
FROM node:16-alpine as build
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage: only the built output and production dependencies
FROM node:16-alpine as production
ARG ENV_ARG=production
ENV NODE_ENV=${ENV_ARG}
WORKDIR /usr/token-manager/app
COPY package*.json ./
RUN npm install --production
COPY --from=build /usr/token-manager/app/dist ./dist
CMD ["node", "./dist/index.js"]
This image works in docker-compose locally without any issue.
The issue is that when I run the task in the ECS cluster, it does not run the Node project; it seems it's not running the CMD command.
I tried to override that CMD by adding a new command to the task definition.
When I run the task with this command, there is nothing in the CloudWatch log and obviously the Node app is not running; there is no log at all for api-container.
When I change the command to something else, for example ls, it gets executed and I can see the result in the CloudWatch log.
When I change it to a wrong command, I get an error in the log.
But when I change it to the right command, the one that should run the app, nothing happens; it does not even show anything in the log as an error.
I have added inbound rules to allow the port needed for connecting to the app, but it seems it's not running at all!
What should I do? How can I check what the issue is?
UPDATE: I changed the app container configuration to make it Essential, which means the whole task fails and stops if this container exits with any error. Then I started the task again and it stopped, so now I'm sure the app container is crashing and exiting somehow, but there is nothing in the log!
First: make sure your Docker image is deployed to ECR (you can use CodePipeline), because that is where ECS will look for the Docker image.
Second: specify your launch type; in the case of EC2, make sure you are using the latest Node image when adding the container.
You can find the latest Docker image for Node here: https://hub.docker.com/_/node
Third: create the task definition and run the task; then navigate to the cluster and check whether the task is running and what its status is.
Fourth: make sure you allow all inbound traffic in the security group and open HTTP to 0.0.0.0/0.
You can test it using curl, e.g. http://ec2-52-38-113-251.us-west-2.compute.amazonaws.com
If that fails, I would recommend deploying a simple Node app first, getting that running, and then deploying your project.
I found the issue; I'll post it here, as it may help someone else.
If you go to the cluster details screen > Tasks tab > Stopped > Task ID, you can see a brief status message for each container in the Containers list.
It says the container was killed due to a memory issue. You can fix it by increasing the memory you specify for the containers when adding a new task definition.
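The same stopped reason is available from the CLI; a sketch assuming the AWS CLI, with cluster and task IDs as placeholders:
aws ecs describe-tasks --cluster <cluster-name> --tasks <task-id> --query 'tasks[0].{stoppedReason: stoppedReason, containerReasons: containers[].reason}'
For an out-of-memory kill, the container reason typically mentions the memory usage explicitly.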
There is a total amount of memory you give to the whole task, which is shared between all of its containers.
When you are adding a new container, there is also a place to specify the per-container memory limits:
Hard Limit: if you specify a hard limit, your container is killed when it attempts to exceed that amount of memory.
Soft Limit: if you specify a soft limit, ECS reserves that memory for your container, but the container can request more, up to the hard limit.
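In the task definition JSON these correspond to the memory (hard limit) and memoryReservation (soft limit) fields of a container definition, both in MiB; a minimal fragment for illustration (the values are placeholders):
"containerDefinitions": [
  {
    "name": "api-container",
    "memory": 1024,
    "memoryReservation": 512
  }
]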
So the main point here: when a container fails during initialization, there won't be any log in CloudWatch. If there is an issue but nothing shows up in the log, check possibilities like memory, or anything else that could prevent the container from starting at all.

How to add additional options to docker run on Azure Service App

I am trying to run docker run -i --rm -d --cap-add=SYS_ADMIN --name <azure-container> <azure-container-registry>/<image-name>:<tag>
By default the container is created using docker run -p port1:port2
I want to remove the -p option and add --cap-add=SYS_ADMIN every time my container gets created from the Azure Container Registry by App Service.
Any help appreciated.
Unfortunately, the docker run command cannot be customized when you deploy your image to an Azure Web App; it is run by Azure. You can add environment variables in the App Settings, but you cannot change the docker command.
You cannot run a container without exposing a port on App Service. It needs to run a server process in order to become 'healthy'. You can use port 80 (the default) or 8080.
Also, as Charles Xu said, you cannot add capabilities at this time.
In Service Fabric you have more port mapping options, but you should still expose a port for the liveness probe.
No cap-add support here either.

Azure App Service Container keeps on restarting

I created an Azure App Service running a Docker container,
but the container seems to be restarting all the time:
2020-01-09 07:21:56.543 INFO - Container XXX for site xxx initialized successfully and is ready to serve requests.
2020-01-09 07:22:01.559 ERROR - Container for xxx site xxx is unhealthy, Stopping site.
2020-01-09 07:22:01.559 INFO - Stoping site xxx because it is not healthy.
As it is a resource-intensive application, it may be that the service does not respond quickly.
I already tried adding the following setting:
{
  "name": "CONTAINER_AVAILABILITY_CHECK_MODE",
  "value": "Off",
  "slotSetting": false
}
but it had no effect.
According to the Dockerfile of the apache/drill:1.17.0 image you used, it does not expose port 8047 for outside access. There are two ways to expose the port; the second one suits this situation: set the environment variable WEBSITES_PORT with the value 8047, and you can then access the web app from outside. This works fine on my side.
Update:
It seems the image needs an interactive mode, and the docker command to run it should be docker run -i --name drill-1.17.0 -p 8047:8047 --detach -t apache/drill:1.17.0 /bin/bash, or a similar command with an interactive mode; otherwise it stops a few minutes later. But you cannot change the command that runs the image in a Web App, so you also cannot use an interactive mode in a Web App for this image.
So the solution is to create a custom image, based on apache/drill:1.17.0, that keeps the container in a running state. Then it will work well.
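As a sketch of that idea, assuming Drill is installed under /opt/drill in the base image (the path and start script are assumptions I have not verified against the image):
FROM apache/drill:1.17.0
# Start the drillbit in the background, then block forever so the
# container's main process never exits
CMD ["/bin/bash", "-c", "/opt/drill/bin/drillbit.sh start && tail -f /dev/null"]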

Opening Azure Web App for Containers ports?

I have a 3rd-party Tomcat server image/container running in a Linux VM on Azure. The Linux VM actually started as 2 images (NGINX load balancer + Tomcat) running via a docker-compose script, but to test this on a web app I've whittled it down to just the single Tomcat image. Now, the compose script uses the key:
ports:
  - 80:8090
  - 8445:8443
In the VM I can run the docker-compose script and hit http://mypage:80 and it works just fine. I can also run:
docker run -d -p <somePort>:8090 --name tomcat_1 <myrepo/myimage>
I can then access my site with http://mypage:<somePort>, regardless of which port I map to the container. This all works great.
Now, with the Azure Web App, I'm using an Azure Web App for Containers --> Docker Compose (Preview). My compose script looks something like:
version: "3.0"
services:
pdfd-tomcat:
image: <myrepo/myimage>
build:
context: .
args:
INCLUDE_DEMO: 'true'
LIBRE_SUPPORT: 'false'
HTML_SUPPORT: 'false'
container_name: Blackbox
environment:
TRN_PDFNET_KEY:
TRN_DB_TYPE: SQLITE
TRN_DB_LINK:
TRN_BALANCER_COOKIE_NAME: HAPROXID
TRN_DEBUG_MODE: 'false'
ports:
- 80:8090
- 8445:8443
I've exposed 80:8090 because I've read that Azure Web Apps only expose ports 80 and 443 by default. However, I cannot access the site on any port once the web app is spun up. I've verified that this same compose script works in a VM. Now, when I check the web app logs, I see this:
Status: Image is up to date for <myrepo/myimage>
2018-06-17 05:38:41.298 INFO - Starting container for site
2018-06-17 05:38:41.298 INFO - docker run -d -p 18455:8090 --name tomcat_1 -e WEBSITE_SITE_NAME=<mywebsite> -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=<stuff goes here>
2018-06-17 05:38:41.298 INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2018-06-17 05:38:56.994 INFO - Started multi-container app
2018-06-17 05:38:57.008 INFO - Container tomcat_1 for site <mywebsite> initialized successfully.
So, it seems that it's mapping external port 18455 to my internal port 8090. Why? Also, if I try to hit my site via that port, I can't. Each time the app deploys/restarts, it maps a different external port.
Also, if I retroactively go to 'Application Settings' and set the key/value WEBSITES_PORT:<current externally mapped port>, it has literally no effect. If the app gets restarted/redeployed, I can see that WEBSITES_PORT:<port> still holds the previous deployment's port; since the mapping changes on every deployment, the current external port and the WEBSITES_PORT value never match. I don't even know whether the setting works to begin with.
How the heck do I get this working in a deterministic manner? I can supply other material if needed.
This boiled down to a permissions issue when using Tomcat 9.0+ with Docker.
Permission problem while running tomcat as a non-root user
The Dockerfile would create a new user group and add a user, then give that user permissions on the folders where Tomcat lives. If you jumped into the container via docker exec -it <container> /bin/bash and checked permissions, they all seemed perfectly fine. However, the logs showed that Tomcat couldn't gain access to those folders.
Once I implemented the fix described by Empinator in the link, everything worked (using root also worked).
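The gist of that kind of fix, as a sketch (assuming the official tomcat base image, where the installation lives at /usr/local/tomcat; the exact ownership changes needed may vary):
FROM tomcat:9.0
# Create an unprivileged user and hand it the Tomcat tree so that
# Tomcat 9.0+ can write to conf/, logs/, temp/ and work/
RUN groupadd -r tomcat && useradd -r -g tomcat tomcat \
    && chown -R tomcat:tomcat /usr/local/tomcat
USER tomcat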

asp.net core docker + App Service On Linux

According to the docs, it is now possible to run an ASP.NET Core Docker container on a Linux App Service:
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image
I created a simple ASP.NET Core 1.1 web API and followed the steps to run it. The log says my app started, but I can't access it.
Has anyone tried to run the same?
My Dockerfile:
FROM microsoft/dotnet:latest
COPY src/WebApi/bin/Release/netcoreapp1.1/publish/ /root/
EXPOSE 5000/tcp
WORKDIR /root
ENTRYPOINT ["dotnet", "WebApi.dll"]
In Application Settings I added an entry called PORT and gave it the value 5000 (note that per the FAQ quoted earlier, the PORT setting is being deprecated in favor of WEBSITES_PORT). Let me know if that works.
Best of luck!
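One more thing worth checking, as an assumption rather than a confirmed cause: by default, Kestrel may bind only to localhost inside the container, which makes the app unreachable from outside even when the port mapping is right. ASPNETCORE_URLS is the standard ASP.NET Core variable for changing the binding; a hedged tweak to the Dockerfile above:
FROM microsoft/dotnet:latest
COPY src/WebApi/bin/Release/netcoreapp1.1/publish/ /root/
# Make Kestrel listen on all interfaces, not just localhost
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp
WORKDIR /root
ENTRYPOINT ["dotnet", "WebApi.dll"]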
