Dockerized .NET Core app doesn't load on Azure

I'm trying the following steps:
Create a .NET Core web application in VS2017
Add Docker support via Right click > Add > Docker Support
Publish to Azure App Service (Linux) via Right click > Publish
Leave everything at the default values and publish
The site doesn't load.
This is my Dockerfile (I didn't change anything):
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WebApplication3.dll"]
What am I doing wrong?
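One way to narrow this down is to confirm that the image the tooling produced actually serves on port 80 when run locally, since App Service for Linux expects the container to listen on port 80 by default. A quick local sanity check (a sketch; the image tag is an assumption, and the publish path matches the COPY default in the Dockerfile above):
dotnet publish -c Release -o obj/Docker/publish
docker build -t webapplication3 .
docker run --rm -p 8080:80 webapplication3
# then browse http://localhost:8080 to confirm the app responds before publishing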

Related

Node.js App Engine GitHub trigger builds not getting published to development?

I have a Node.js website that runs fine locally with node server.js. I added a Dockerfile:
FROM node:carbon
VOLUME ["/root"]
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
If I deploy my app with gcloud app deploy, I can access it online via a URL. I believe my project is an 'App Engine' project. If I run subsequent gcloud app deploy commands, my new code gets pushed to the online site. But I can't get GitHub master commits to trigger and publish a new build.
I tried adding a trigger so that every time new code gets added to the master branch of my public GitHub repo, it gets sent to my production URL.
Full Trigger:
So I merge a PR into the master branch of my GitHub repo. I look at my build history and see there is a new build; clicking the commit takes me to the PR I just merged into master.
But if I access my website URL, the new code is not there. If I run gcloud app deploy again, it will eventually appear. My trigger seems to be working fine judging by the logs, so why is my build not getting published?
I think the problem might be that you're using a Dockerfile instead of a Cloud Build configuration file... unless there's something else I'm not seeing.
Look here under the fourth step, fifth bullet, for the solution. It says:
Under Build configuration, select Cloud Build configuration file.
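For reference, a minimal Cloud Build configuration that just runs the same deploy command used manually above could look like the following sketch (the cloud-sdk builder image and the timeout value are assumptions, not taken from the linked answer):
# create cloudbuild.yaml in the repository root
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    args: ["gcloud", "app", "deploy", "--quiet"]
timeout: "1600s"
EOF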

Azure storage emulator does not work with Azure Function docker run

I am trying to run my timer-trigger Azure Function sample using Docker. It works fine in Visual Studio in both Debug and Release mode. But when I use Docker:
docker run -d -p 8080:80 testdockercors
The application starts and prints the message below, but my timer-trigger Azure Function is not running.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down..
The timer trigger works fine when running from Visual Studio. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Testdockercors/Testdockercors.csproj", "Testdockercors/"]
RUN dotnet restore "Testdockercors/Testdockercors.csproj"
COPY . .
WORKDIR "/src/Testdockercors"
RUN dotnet build "Testdockercors.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Testdockercors.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobsStorage="UseDevelopmentStorage=true"
When I change the AzureWebJobsStorage value to an existing Azure Storage connection string, it works in Docker too. But I want to use the local storage emulator, not a storage account in Azure.
I fixed the issue by changing the value of AzureWebJobsStorage as below; appending DevelopmentStorageProxyUri=http://host.docker.internal to the connection string did the trick.
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
You can read more here: https://www.maneu.net/blog/use-local-storage-emulator-remote-container/
Note that this will not work in a Linux VM.
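As a run-time alternative to baking the value into the image, the same connection string can also be passed when starting the container (a sketch using the image name from the question; host.docker.internal resolves to the Docker host on Docker Desktop for Windows/macOS):
docker run -d -p 8080:80 \
  -e AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal" \
  testdockercors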
I can confirm that using:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
works for the Python runtime (in particular mcr.microsoft.com/azure-functions/python:3.0-python3.7) as well.
In effect this resolves to the Azurite endpoints running on the host where the Docker runtime is running (in my case macOS). Note that this does not imply that it will work if you run Azurite in a container.

Azure App Service setting ASPNETCORE_ENVIRONMENT=Development results in 404

I noticed this issue when I deployed my ASP.NET Core MVC application Docker image to Azure App Service. It happens whether I set ASPNETCORE_ENVIRONMENT in the Dockerfile with ENV ASPNETCORE_ENVIRONMENT Development or in docker-compose:
environment:
- ASPNETCORE_ENVIRONMENT=Development
I always get a 404 when accessing the website.
The weird part is that if I set ASPNETCORE_ENVIRONMENT to any other value (Staging, Production, etc.), the 404 goes away and the website can be accessed normally.
How to reproduce:
Create an ASP.NET Core MVC project (just a bare-bones project; don't change any code or add any logic)
Build this project on the local machine (the Dockerfile is below)
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
# Copy necessary files and restore
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
COPY --from=build-env /app/out .
# Start
ENV ASPNETCORE_ENVIRONMENT Development
ENTRYPOINT ["dotnet", "CICDTesting.dll"]
Push this image to Azure Container Registry
I have a webhook attached to the App Service, so deployment is triggered automatically
From the Log Stream I can see the image is pulled successfully and the container is up and running
Access the website; it gives a 404
If I change ENV ASPNETCORE_ENVIRONMENT Development to ENV ASPNETCORE_ENVIRONMENT Staging and repeat the build and deploy steps, the website is accessible.
It's the same if I remove ENV ASPNETCORE_ENVIRONMENT Development from the Dockerfile and configure it in docker-compose instead.
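One way to isolate whether this is an App Service issue or something in the image itself is to run the same image locally with the environment variable set and see if the 404 reproduces (a sketch; the local image tag is an assumption):
docker run --rm -p 8080:80 -e ASPNETCORE_ENVIRONMENT=Development cicdtesting
curl -i http://localhost:8080/   # compare with the response when the variable is Staging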

.NET Core 3.0 Docker container crashing with no status

I have an ASP.NET Core 3.0 preview application that runs fine on my Windows box. I moved it to Linux (Ubuntu 18.04), running on Docker there. It builds and runs fine. I'm using the NETGeographic library, which relies on Geographic.dll. When it gets to the point of executing a function that relies on NETGeographic, it crashes silently. No exception thrown, nothing. Just a full-on crash.
I've tried Release and Debug, publishing as a self-contained application, messing with hint and other paths, and I just can't get it to work on Linux for whatever reason. For some reason I thought it was working before, and I see new files published as of today. Is this a possible regression? Is there an open bug referencing this anywhere? I haven't been able to find one.
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["Test.csproj", ""]
RUN dotnet restore "Test.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "Test.csproj" -c Debug -o /app
FROM build AS publish
RUN dotnet publish "Test.csproj" -c Debug --self-contained --runtime ubuntu.18.04-x64 -o /app
FROM base AS final
WORKDIR "/src/"
COPY Geographic.dll /app
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Test.dll"]
Output when run:
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
Before LLAtoXYZ
user@server:~$
The function call is just an XYZtoLLA conversion from NETGeographic. I've put the code in and out of a try/catch block with no change. I've put logging in the catch, but it never hits it, and so on. It just crashes to the console. Other API endpoints that do other things work great and never crash, including ones that use other DLLs. But whenever I use NETGeographic, it crashes, I assume because it's a wrapper around the C++ Geographic.dll library? Shouldn't this work?
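A silent exit like this usually points to the process dying on a native fault rather than a managed exception, so checking the native library inside the container is a reasonable first step (a sketch; the container name is a placeholder):
docker exec -it <container-name> /bin/bash
ls -l /app/Geographic.dll   # confirm the file the Dockerfile copied is present
ldd /app/Geographic.dll     # a Windows PE DLL is not a valid Linux shared object, so ldd will refuse it
dmesg | tail                # on the host, look for segfault records from the dotnet process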

How to publish a .NET Core Docker application from Windows to a Linux machine?

I created a .NET Core application (a console application) with Linux Docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can use the following command to run it:
docker run MyApp
I have another Linux machine with Docker installed. How do I publish the .NET Core application to the Linux machine? I need to publish and run the dockerized application there.
The Linux machine has the following Docker packages installed:
$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64 17.05.0.ce-1.el7.centos @dockerrepo
docker-engine-selinux.noarch 17.05.0.ce-1.el7.centos @dockerrepo
There are many ways to do this; just search for any CI/CD tool.
The easiest way is to do it manually: connect to your Linux server, do a git pull of the code, and then run the same commands you run locally.
Another option is to push your Docker image to a container registry, then pull it on your Docker server and you are ready to go.
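The registry round trip described in that second option looks roughly like this (a sketch; the registry host, repository name and tag are placeholders):
# on the Windows development machine
docker tag myapp myregistry.example.com/myapp:1.0
docker push myregistry.example.com/myapp:1.0
# on the Linux machine
docker pull myregistry.example.com/myapp:1.0
docker run -d myregistry.example.com/myapp:1.0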
Edit:
You should really take a look at some CI service. For example, in our environment we use GitLab: when we push to master, a .gitlab-ci.yml builds the project and then pushes the image:
image: docker:latest
services:
  - docker:dind
stages:
  - build
api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN ${IMAGE_NAME}
    - docker push ${IMAGE_NAME}
With this we only need to do a pull of the latest version on our server.
It's worth noting that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool like GitLab, Jenkins, CircleCI, AWS CodePipeline, etc.). If you are just starting out, I would recommend doing it manually first and then integrating a CI tool.
Edit 2
About the Visual Studio tooling: I would not recommend using it for anything other than local development, since it only works on Windows and only in Visual Studio (Rider added integration only very recently). So, to deploy to a Linux environment we use our own Dockerfiles and docker-compose files. They are based on the defaults anyway, and look something like this:
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]
This Dockerfile copies all the related projects (I hate the copying part, but it is the same as what Microsoft does in their default file), then builds and publishes the app. Alongside it we have a docker-compose file to add some services (these files must be in the solution folder to access all the related projects):
version: '3.4'
services:
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis
  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll
With these parts in place, all that is left is to commit, then do a pull on the server and run docker-compose up to start the app (you could do it from the Dockerfile directly, but it is easier and more manageable with docker-compose).
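On the server, that update step is then roughly the following (a sketch; paths and flags depend on your checkout and compose file):
# on the Linux server, inside the solution folder
git pull
docker-compose pull          # if the images come from the registry
docker-compose up -d --build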
Edit 3
To make the deployment to the server we use two tools.
First, GitLab CI runs after the commit is pushed.
It makes the build specified in the Dockerfile and pushes the image to our GitLab container registry; it would be the same if it were the container registry of Amazon, Google, Azure, etc.
Then it makes a POST request to the production server, which runs a special tool on a separate port (a sketch of that call follows below).
The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner).
The script receives the request, checks the login, and if it is valid, it simply pulls from our GitLab container registry and runs docker-compose up.
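The notification step is just an authenticated HTTP call from CI to that tool, something along these lines (a sketch; the URL, port and token variable are placeholders, not the tool's actual API):
curl -X POST "https://production.example.com:9000/deploy" \
  -H "Authorization: Bearer $DEPLOY_TOKEN"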
Notes
The tool is not perfect. We are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from a CI integration and do the deploys directly. Whatever solution you choose, I recommend you start looking at how Kubernetes can help you; sadly it is one more layer to learn, but it is very promising, and you will be able to publish to almost any cloud or bare metal painlessly, with fallbacks, scaling and other features.
Also
If you do not want to, or cannot, use a container registry (I strongly recommend going the registry route), you can use the same tool; in the .sh it executes, just do a git pull and then a docker build or docker-compose.
The simplest scenario would be to create a script yourself where you ssh to the server, upload the files as a zip, and then run it on the server. Remember, Ubuntu is in the Microsoft Store and could run this script, but the other solutions are more independent and scalable, so make your choice!
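That manual path might look roughly like this (a sketch; host names, paths and the image tag are placeholders):
# on the Windows machine (for example from the Ubuntu shell mentioned above)
tar czf myapp.tar.gz .                      # the project folder, including the Dockerfile
scp myapp.tar.gz user@linux-server:/tmp/
# on the Linux server
ssh user@linux-server
mkdir /tmp/myapp && tar xzf /tmp/myapp.tar.gz -C /tmp/myapp
cd /tmp/myapp && docker build -t myapp . && docker run -d myapp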
