Azure App Service Container keeps on restarting - azure

I created an Azure App Service running a Docker container,
but the container seems to be restarting all the time:
2020-01-09 07:21:56.543 INFO - Container XXX for site xxx initialized successfully and is ready to serve requests.
2020-01-09 07:22:01.559 ERROR - Container for xxx site xxx is unhealthy, Stopping site.
2020-01-09 07:22:01.559 INFO - Stoping site xxx because it is not healthy.
As it is a resource-intensive application, it may be that the service does not respond quickly.
I already tried adding the following setting:
{
"name": "CONTAINER_AVAILABILITY_CHECK_MODE",
"value": "Off",
"slotSetting": false
}
but with no effect.

According to the Dockerfile of the apache/drill:1.17.0 image you used, it does not expose port 8047 for outside access. There are two ways to expose the port, and the second one is suitable for this situation: set the app setting WEBSITES_PORT to 8047, and then you can reach the web app from outside. This works fine on my side.
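For reference, a sketch of that app setting in the same JSON format as the one above (the 8047 value is specific to this Drill setup):
{
"name": "WEBSITES_PORT",
"value": "8047",
"slotSetting": false
}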
Update:
It seems the image needs an interactive mode, and the docker command to run it should be docker run -i --name drill-1.17.0 -p 8047:8047 --detach -t apache/drill:1.17.0 /bin/bash or another similar command with an interactive mode; otherwise it stops a few minutes later. But you cannot change the command that runs the image in a Web App, so you also cannot use an interactive mode in a Web App for this image.
So the solution is to create a custom image, based on apache/drill:1.17.0, that keeps the container in a running state. Then it will work well.
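A minimal sketch of such a custom image, assuming Drill lives under /opt/drill in the base image and that backgrounding drill-embedded keeps the web UI on 8047 alive (both are assumptions; adjust to your setup):
FROM apache/drill:1.17.0
# Keep PID 1 as a long-running, non-interactive process so App Service
# does not stop the container; substitute whatever actually starts Drill here.
CMD ["/bin/bash", "-c", "/opt/drill/bin/drill-embedded & tail -f /dev/null"]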

Related

How to add additional options to docker run on Azure App Service

I am trying to run docker run -i --rm -d --cap-add=SYS_ADMIN --name <azure-container> <azure-container-registry>/<image-name>:<tag>
By default the container is created using docker run -p port1:port2
I want to remove the -p option and add --cap-add=SYS_ADMIN every time my container gets created from Azure Container Registry using App Service.
Any help appreciated.
Regards,
Aarushi
Unfortunately, the docker command cannot be customized when you deploy your image to an Azure Web App; it is run by Azure. You can add environment variables in the App Settings, but you cannot change the docker command.
On App Service you cannot run a container without exposing a port; it needs to run a server process in order to become 'healthy'. You can use port 80 (the default) or 8080.
Also, as Charles Xu said, you cannot add capabilities at this time.
In Service Fabric you have more port mapping options, but you should still expose a port for the liveness probe.
No cap-add support here either.
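Since app settings are the only knob you do control here, a sketch of setting the exposed port through the Azure CLI (the resource group and app name are placeholders):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITES_PORT=8080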

Docker never runs on Azure - Waiting for response to warmup request for container

I'm trying to deploy a dockerized app on Azure's App Service. I enter all the fields correctly and my image gets pulled, but I keep getting this error until something times out:
Waiting for response to warmup request for container -<container name > Elapsed time = 154.673506 sec
I did set WEBSITE_PORT 8080 (used by my app)
Here is the Dockerfile:
FROM google/dart
WORKDIR /app
ADD pubspec.* /app/
RUN pub get --no-precompile
ADD . /app/
RUN pub get --offline --no-precompile
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["pub", "run", "aqueduct:aqueduct", "serve", "--port", "8080"]
It was working fine. I had it working last night. I wanted to refresh my image so I restarted. Nothing worked. After multiple hours I deleted my app and started again... no luck. Any ideas?
EDIT 1:
Tried changing the port to 80, no luck (this was the port I was using at first when it was working fine).
RESOLVED (partially)
I changed everything to port 8000. I realized that Linux and Windows did not like having something non-system listening on port 80. Therefore I changed everything to 8000 and set the app setting on Azure {WEBSITE_PORT, 8000}. It now seems to work fine. I don't know if this is an official fix... but it does warm up after 30-ish seconds.
You can also try setting WEBSITES_CONTAINER_START_TIME_LIMIT to 1800.
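In app-settings JSON form, matching the format shown earlier, that setting would look like:
{
"name": "WEBSITES_CONTAINER_START_TIME_LIMIT",
"value": "1800",
"slotSetting": false
}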
Depending on which App Service plan you have, if there is an 'Always on' option, try enabling 'Always on' in the configuration of your app in the Azure portal.
If you are using a Premium App Service plan, you can set the number of pre-warmed instances. Try setting that to 2-3 and see if it gets any better.
I had the same experience as you, but my container was really big since it contained an ML model, so in the end I switched to AKS because it performed better.
What actually worked for me was a combination of the answers above by Ethiene and kgalic: setting all ports to 8000 in the Dockerfile,
EXPOSE 8000
CMD gunicorn -w 4 -b :8000 app:app
in the Azure configuration, under Application settings, adding
"WEBSITES_PORT" : "8000"
and in the Azure configuration, under General settings, setting
"Always on" : "on"
App Service - Docker container deploy
In my case, this slowdown was caused by automatic port detection.
Setting WEBSITES_PORT in the application settings solved the problem.
WEBSITES_PORT=8000
Pay attention: if you have more slots (production/staging), you have to set this app setting in the other slots too.
From: Azure App Service on Linux FAQ - Custom Containers
We have automatic port detection. You can also specify an app setting called WEBSITES_PORT and give it the value of the expected port number. Previously, the platform used the PORT app setting. We are planning to deprecate this app setting and to use WEBSITES_PORT exclusively.
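For the multi-slot case mentioned above, a sketch of setting the port per slot with the Azure CLI (the resource group, app, and slot names are placeholders):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --slot staging \
  --settings WEBSITES_PORT=8000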
I had the same problem with a Node.js application, so I built the dist folder with npm build when creating the Docker image; this way it is part of the image rather than having the container's CMD create the build on the initial execution of the app. Maybe the RAM and CPU weren't enough for the npm build to happen at initial runtime.
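A minimal sketch of that approach for a Node.js app (the base image, port, and entry point are assumptions; adjust to your project):
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Build at image build time, not when the container starts
RUN npm run build
EXPOSE 8000
CMD ["node", "dist/server.js"]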

How to hit through HTTP a container created from another container mounted with /var/run/docker.sock?

This is for a CI server setup. The CI server doesn't have tools like Node installed, only Docker, so I have to run my tests inside a container.
This container will, in turn, create a second container to run the integration tests against.
The first container has /var/run/docker.sock mounted so that it can create the second container. Both containers live side by side.
My build steps are the following:
Clone source code
Build docker image and tag it my-app
Run unit tests: docker run ..... my-app yarn test
Run integration tests, which fire up a container: docker run -v /var/run/docker.sock:/var/run/docker.sock ..... my-app yarn test:integration
The problem is in the integration tests:
In summary, the first container calls yarn:integration which fires up the 2nd container running the app on port 3001, and then runs the tests against the 2nd container. Finally, it stops the second container.
The problem is that my integration tests in the first container attempt to hit the 2nd container through localhost:3001, but localhost is not the right host for reaching the second container.
How can I access the second container from the first one, considering they are side by side (and not one within the other)?
localhost within the container doesn't point to the host machine; it points to the container itself. If you want to reach another container you need to use that container's actual IP, which can be discovered with docker inspect <CONTAINER ID>, and the internal port (i.e. not the one mapped to your host).
Alternatively, you can create a user-defined network and connect your containers to it. Then you will be able to use container names as hostnames, e.g. my-app:3001. Note that container name is the one specified by --name parameter of docker run. Also you need to use the container's internal port and not the one published with -p parameter.
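A minimal sketch of the user-defined network approach (the network name, container names, and the API_URL variable are placeholders for illustration):
docker network create ci-net
# Start the app under test; its --name becomes its hostname on the network
docker run -d --name my-app-under-test --network ci-net my-app
# Run the tests on the same network, reaching the app by name and internal port
docker run --rm --network ci-net \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e API_URL=http://my-app-under-test:3001 \
  my-app yarn test:integration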

Opening Azure Web App for Containers ports?

I have a 3rd party Tomcat server image/container running in a Linux VM on Azure. The Linux VM setup actually started as 2 images (an NGINX load balancer and Tomcat) running via a docker-compose script, but to test this on a web app I've whittled it down to just the single Tomcat image. Now, the compose script uses the key:
ports:
  - 80:8090
  - 8445:8443
In the VM I can run the docker-compose script and hit http://mypage:80 and it works just fine. I can also run:
docker run -d -p <somePort>:8090 --name tomcat_1 <myrepo/myimage>
I can then access my site with http://mypage:<somePort> regardless of which port I want to map to the container. This all works great.
Now, with the Azure Web App, I'm using an Azure Web App for Containers --> Docker Compose (Preview). My compose script looks something like:
version: "3.0"
services:
pdfd-tomcat:
image: <myrepo/myimage>
build:
context: .
args:
INCLUDE_DEMO: 'true'
LIBRE_SUPPORT: 'false'
HTML_SUPPORT: 'false'
container_name: Blackbox
environment:
TRN_PDFNET_KEY:
TRN_DB_TYPE: SQLITE
TRN_DB_LINK:
TRN_BALANCER_COOKIE_NAME: HAPROXID
TRN_DEBUG_MODE: 'false'
ports:
- 80:8090
- 8445:8443
I've exposed 80:8090 because I've read that Azure Web Apps only expose ports 80 and 443 by default. However, I cannot access this site on any port once the web app is spun up. I've verified that this same compose script works in a VM. Now, when I check the web app logs, I see this:
Status: Image is up to date for <myrepo/myimage>
2018-06-17 05:38:41.298 INFO - Starting container for site
2018-06-17 05:38:41.298 INFO - docker run -d -p 18455:8090 --name tomcat_1 -e WEBSITE_SITE_NAME=<mywebsite> -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=<stuff goes here>
2018-06-17 05:38:41.298 INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2018-06-17 05:38:56.994 INFO - Started multi-container app
2018-06-17 05:38:57.008 INFO - Container tomcat_1 for site <mywebsite> initialized successfully.
So, it seems that it's trying to map external port 18455 to my internal port 8090. Why? Also, if I try to hit my site via that port, I can't. Each time the app deploys/restarts it maps a different external port.
Also, if I retroactively go to 'Application Settings' and use the key/value WEBSITES_PORT:<current externally mapped port>, it has literally no effect. Then if the app gets restarted/redeployed, I can see that WEBSITES_PORT:<port> is what the previous port was mapped with, but since that changes every deployment, the current external port and the WEBSITES_PORT value never match. I don't even know if it works to begin with.
How the heck do I get this working in a deterministic manner? I can supply other material if needed.
This boiled down to a permissions issue when using Tomcat 9.0+ with Docker.
Permission problem while running tomcat as a non-root user
The Dockerfile would create a new user group and add a user, then give that user permissions on the folders where Tomcat lived. If you jumped into the container via docker exec -it <container> /bin/bash and checked the permissions, they all seemed perfectly fine. However, the logs would show that Tomcat couldn't gain access to those folders.
Once I implemented the fix as described by Empinator in the link, everything worked (using root also worked).
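For illustration, a sketch of the kind of non-root permission fix meant here (not necessarily Empinator's exact steps; the user name and the /usr/local/tomcat path are assumptions based on the official Tomcat image layout):
FROM tomcat:9.0
# Create a non-root user and give it ownership of the directories Tomcat writes to
RUN groupadd -r tomcat && useradd -r -g tomcat tomcat \
    && chown -R tomcat:tomcat /usr/local/tomcat
USER tomcat
CMD ["catalina.sh", "run"]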

How to run a docker container as a windows service

I have a Windows service that I want to run in a Docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same Docker container locally as a Windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The docker restart flag actually only deals with starting containers once Docker is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to make a Windows service that runs Docker.
A good explanation of this part of the problem can be found here (no good solution has been found yet without paying for it; the Docker team has so far ignored requests to make this work without third-party tools):
How to start Docker daemon (windows service) at startup without the need to log-in?
You can use the flag --restart=unless-stopped with the docker run command and the Docker container will start automatically even if the server was shut down.
The Docker documentation on restart policies covers this flag in more detail.
But conditions apply: Docker itself should run on startup, which is its default setting.
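A minimal sketch of that restart policy in use (image and container names are placeholders):
docker run -d --name my-service --restart=unless-stopped myrepo/my-service:latest
# Confirm the policy that was applied to the container
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' my-service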
