ASP.NET Core Docker + App Service on Linux - azure-web-app-service

According to the docs, it is now possible to run an ASP.NET Core Docker container on a Linux App Service.
https://learn.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image
I created a simple ASP.NET Core 1.1 Web API and followed the steps to run it. The log says my app started, but I can't access it.
Has anyone tried to run the same thing?
My Dockerfile:
FROM microsoft/dotnet:latest
COPY src/WebApi/bin/Release/netcoreapp1.1/publish/ /root/
EXPOSE 5000/tcp
WORKDIR /root
ENTRYPOINT dotnet WebApi.dll

In Application Settings I added an entry called PORT and gave it the 5000 value. Let me know if that works.
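Before digging further into App Service settings, it can help to confirm the container responds locally. This is a hedged sketch assuming the image is built from the Dockerfile above; the image tag, the test route, and the ASPNETCORE_URLS override are placeholders/assumptions, not taken from the question:
docker build -t webapi .
docker run -d --name webapi-test -p 5000:5000 -e ASPNETCORE_URLS=http://+:5000 webapi
curl -i http://localhost:5000/api/values
The ASPNETCORE_URLS value makes Kestrel listen on all interfaces inside the container; if the app only listens on localhost, it will not be reachable from outside the container even with the port exposed.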
Best of luck!

Related

Docker never runs on Azure - Waiting for response to warmup request for container

I'm trying to deploy a dockerized app on Azure's App Service. I enter all the fields correctly and my image gets pulled, but I keep getting this error until something times out.
Waiting for response to warmup request for container <container name>. Elapsed time = 154.673506 sec
I did set WEBSITE_PORT 8080 (used by my app)
Here is the Dockerfile:
FROM google/dart
WORKDIR /app
ADD pubspec.* /app/
RUN pub get --no-precompile
ADD . /app/
RUN pub get --offline --no-precompile
WORKDIR /app
EXPOSE 8080
ENTRYPOINT ["pub", "run", "aqueduct:aqueduct", "serve", "--port", "8080"]
It was working fine. I had it working last night. I wanted to refresh my image so I restarted. Nothing worked. After multiple hours I deleted my app and started again... no luck. Any ideas?
EDIT 1:
Tried changing port to 80, no luck (This was the port I was using at first when it was working fine)
RESOLVED (partially)
I changed everything to port 8000. I realized that Linux and Windows did not like having a non-system process listening on port 80, so I changed everything to 8000 and set the app setting on Azure (WEBSITE_PORT = 8000). It now seems to work fine. I don't know if this is an official fix, but it does warm up after 30-ish seconds.
You can also try setting WEBSITES_CONTAINER_START_TIME_LIMIT to 1800
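If you prefer the CLI over the portal, a hedged sketch for applying that setting (app and resource group names are placeholders):
az webapp config appsettings set --name <app-name> --resource-group <rg-name> --settings WEBSITES_CONTAINER_START_TIME_LIMIT=1800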
Depending on which App Service plan you have, if there is an 'Always on' option, try enabling it in your app's configuration in the Azure portal.
If you are using a Premium App Service plan, you can set the number of pre-warmed instances. Try setting that to 2-3 and see if it gets any better; see here.
I had the same experience as you, but my container was really big since it contained an ML model, so in the end I switched to AKS because it performed better.
What actually worked for me was a combination of the answers above by Ethiene and kgalic: setting all ports to 8000 in the Dockerfile
EXPOSE 8000
CMD gunicorn -w 4 -b :8000 app:app
in the Azure configuration, under Application settings, adding
"WEBSITES_PORT" : "8000"
and in the Azure configuration, under General settings, setting
"Always on" : "on"
App Service - Docker container deploy
In my case, this slowdown was caused by automatic port detection.
Setting the WEBSITES_PORT in the application setting solved the problem.
WEBSITES_PORT=8000
Note that if you have multiple slots (production/staging), you have to set this env variable in the other slots too.
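For a deployment slot, the setting has to be applied per slot; a hedged sketch assuming a slot named staging:
az webapp config appsettings set --name <app-name> --resource-group <rg-name> --slot staging --settings WEBSITES_PORT=8000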
From: Azure App Service on Linux FAQ - Custom Containers
We have automatic port detection. You can also specify an app setting called WEBSITES_PORT and give it the value of the expected port number. Previously, the platform used the PORT app setting. We are planning to deprecate this app setting and to use WEBSITES_PORT exclusively.
I had this same problem with a Node.js application, so I built the dist folder with npm build during creation of the Docker image, so that it is part of the image rather than having the Docker CMD create the build on the first execution of the app. Maybe the RAM and CPU weren't enough for the npm build to happen at initial runtime.

Angular CLI app not running when deploying to Linux App Service

I'm trying to deploy an Angular CLI app to Azure App Service on Linux, using Azure DevOps, but with no success. I get the result shown in Image 1. No errors in the server or application logs.
This is what I have done so far:
Built the Angular CLI app using a DevOps build and placed the resulting "dist" folder in the "drop" folder. See below (Image 2) for the tasks that compose my build. This is working fine and creates the expected files.
Created a release in DevOps, deploying all the dist files to the wwwroot folder of the Azure App Service on Linux. Shown below are both the wwwroot folder (left) and my local dist folder (right) after I run ng build --prod.
I suspect that I need to kickstart the Angular app by feeding it some kind of command during deployment. I have tried running "ng serve --host 0.0.0.0" but that didn't work.
Check the Azure App Service > Linux section on this page. Essentially, you have to serve the app. You can configure this with an ecosystem.config.js PM2 file in the root directory, with this inside:
module.exports = {
  apps: [
    {
      script: "npx serve -s"
    }
  ]
};
I also added npx serve -s in the App Service Configuration > General Settings > Startup Command
See also: https://burkeholland.github.io/posts/static-site-azure
I had to set npx serve -s as the startup command.
Then set the runtime stack to the Node framework 10.16 (NODE|10.16). See below.
Then everything started working.
If you still want to use App Service - Web App, you can just use the Windows OS instead of Linux.
Here are the parameters I used:
Since the output of building angular is a static web app, IIS will serve the site right away.
When using a Linux App Service container, you may also select the PHP stack, which contains an Apache2 server. Since the Angular build output consists of static files (JS, CSS, HTML), you just need some web server to serve this content.
Example configuration:
If you look at the default 'Deploy Node.js App to Azure App Service' release template, it uses the 'Deploy Azure App Service' task and adds the following commands to the task. This might help you out.
There is a subtle but big difference between the Linux and Windows App Service: IIS. On Windows, IIS is actively looking to serve any app, whereas on Linux you have to spin up something on your own to serve it, Express for example.
After some research I discovered that I don't need a full App Service dedicated to running a static app (such as Angular or React). It can be done just as efficiently and much more cheaply with something like Azure Storage. -> https://code.visualstudio.com/tutorials/static-website/getting-started
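For reference, a hedged CLI sketch of that Storage approach; the account name and the dist folder path are placeholders:
az storage blob service-properties update --account-name <storage-account> --static-website --index-document index.html --404-document index.html
az storage blob upload-batch --account-name <storage-account> --destination '$web' --source ./dist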
I had the same problem on Azure App Service with Linux and Node, and solved it using the startup command below:
pm2 serve /home/site/wwwroot --no-daemon --spa
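The same startup command can also be set from the CLI instead of the portal; a hedged sketch with placeholder names:
az webapp config set --name <app-name> --resource-group <rg-name> --startup-file "pm2 serve /home/site/wwwroot --no-daemon --spa"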

Redirect after authentication is to http when it should be https

Starting point here is a standard template from Microsoft for ASP.NET Core: I used Visual Studio => New Project => .NET Core => ASP.NET Core Web Application
I then ticked the box for HTTPS support and configured Work or School account authentication against an existing Azure AD instance.
Then no matter whether I use http or https to run it locally, the redirect url during authentication always points to https - which is exactly as it should be.
When I deploy this into regular Azure App Service, it has the same behavior, which is fine.
BUT: if I create a Docker image from this and deploy it to an Azure App Service with container support (Linux-based in this case), then the authentication redirect always goes to http, which is really not what I want.
For reference, this is the Dockerfile used to build the image:
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY HttpsTest/HttpsTest.csproj HttpsTest/
RUN dotnet restore HttpsTest/HttpsTest.csproj
COPY . .
WORKDIR /src/HttpsTest
RUN dotnet build HttpsTest.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish HttpsTest.csproj -c Release -o /app
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "HttpsTest.dll"]
Please note that I am not fiddling around with custom certificates inside the container, as I am using Azure App Service to terminate the SSL connection (and bring its own certificate with it along the way).
I am pretty sure there is something I am overlooking here.
Take a look at Configure ASP.NET Core to work with proxy servers and load balancers; the configuration discussed there might help you solve the HTTP redirect issue.
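One way to apply that guidance without code changes (an assumption on my part, and only available in newer ASP.NET Core versions, so the 2.1 project in the question may instead need to add the forwarded headers middleware in Startup) is the documented ASPNETCORE_FORWARDEDHEADERS_ENABLED app setting, which tells the app to trust the X-Forwarded-Proto header sent by the App Service front end; placeholder names again:
az webapp config appsettings set --name <app-name> --resource-group <rg-name> --settings ASPNETCORE_FORWARDEDHEADERS_ENABLED=true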
Hope it helps!

How to run a docker container as a windows service

I have a Windows service that I want to run in a Docker container on Azure.
I would like to have the same setup when running the service locally, so I would like to run the same Docker container locally as a Windows service (I think?).
How would I do that? Or is there a better approach?
Thanks,
Michael
IMHO Michael asked how to start Docker containers without the need to have a user logged in. The Docker restart flag only deals with starting containers once Docker itself is running. To get Docker to run without a logged-in user (or after automatic Windows updates), it seems to me you will also need to make a Windows service that runs Docker.
A good explanation of this part of the problem can be found here (no good solution has been found yet without paying for it; the Docker team has so far ignored requests to make this work without a third party):
How to start Docker daemon (windows service) at startup without the need to log-in?
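If the host is Windows Server running the standalone Docker Engine rather than Docker Desktop, the daemon itself can be registered as a Windows service; a hedged sketch, run from an elevated prompt:
dockerd --register-service
net start docker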
You can use the --restart=unless-stopped flag with the docker run command, and the Docker container will start automatically even after the server has been shut down and restarted.
Further reading on the restart policy and flag is here.
But conditions apply: Docker itself should run on startup, which is its default setting.
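A hedged sketch of what that looks like; the image and container names are placeholders:
docker run -d --name my-service --restart=unless-stopped myimage:latest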

Azure Linux App Service: Just start one container

I am using this Dockerfile on Azure Linux App Service:
FROM ruby:2.3.3
ENV "GEM_HOME" "/home/gems"
ENV "BUNDLE_PATH" "/home/gems"
EXPOSE 3000
WORKDIR /home/webapp
CMD bundle install && bundle exec rails server puma -b 0.0.0.0 -e production
As you can see, the gems folder is located in the home folder. The home folder is shared with the host system of the App Service. Now my problem: the App Service LogFiles/docker/docker_***_out.log indicates that bundle install is called multiple times (probably from different containers). This leads to some gems never being installed successfully.
Is there some setting which runs just one container, so that my gems can be installed successfully without interfering with each other? Or am I making wrong assumptions here? Maybe the problem isn't that multiple containers are started?
Is there an easier way to install the gems the first time in the shared folder of the host system?
