Azure App Service setting ASPNETCORE_ENVIRONMENT=Development results in 404 - azure-web-app-service

I noticed this issue when deploying my ASP.NET Core MVC application's Docker image to Azure App Service. Whether I set ASPNETCORE_ENVIRONMENT in the Dockerfile (ENV ASPNETCORE_ENVIRONMENT Development) or in docker-compose:
environment:
- ASPNETCORE_ENVIRONMENT=Development
I always get a 404 when accessing the website.
The weird part is that if I set ASPNETCORE_ENVIRONMENT to any other value (Staging, Production, etc.), the 404 goes away and the website can be accessed normally.
How to reproduce:
Create an ASP.NET Core MVC project (just a bare-bones project; don't change any code or add any logic).
Build this project on the local machine (the Dockerfile is shown below).
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
# Copy necessary files and restore
COPY *.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
COPY --from=build-env /app/out .
# Start
ENV ASPNETCORE_ENVIRONMENT Development
ENTRYPOINT ["dotnet", "CICDTesting.dll"]
Push this image to Azure Container Registry.
I have a webhook attached to the App Service, so deployment is triggered automatically.
From the Log Stream I can see the image is pulled successfully and the container is up and running.
Access the website; it returns a 404.
If I change ENV ASPNETCORE_ENVIRONMENT Development to ENV ASPNETCORE_ENVIRONMENT Staging and repeat the build and deploy steps, the website is accessible.
It's the same if I remove ENV ASPNETCORE_ENVIRONMENT Development from the Dockerfile and configure it in docker-compose instead.
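One way to narrow this down is to run the same image locally with the same environment variable; if the site loads locally, the 404 is App-Service-specific (App Service app settings are injected into the container as environment variables, so a conflicting ASPNETCORE_ENVIRONMENT app setting in the portal is worth checking). A minimal local check, assuming the image built from the Dockerfile above is tagged cicdtesting (the tag is hypothetical):
docker build -t cicdtesting .
docker run --rm -p 8080:80 -e ASPNETCORE_ENVIRONMENT=Development cicdtesting
# then browse http://localhost:8080 and compare the Development and Staging behavior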

Related

Node.js App Engine GitHub-triggered builds not getting published to development?

I have a Node.js website that runs fine locally with node server.js. I added a Dockerfile:
FROM node:carbon
VOLUME ["/root"]
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
And if I deploy my app with gcloud app deploy I can get it accessible online via a URL. I believe my project is an 'App Engine' project? If I run subsequent gcloud app deploy commands, my new code gets pushed to the online site. But I can't get GitHub master commits to trigger and publish a new build.
I tried adding a trigger so that every time new code gets added to the master branch of my public GitHub repo, it gets sent to my production URL.
[Screenshot: full trigger configuration]
So I merge a PR into the master branch of my GitHub repo. I look in my build history and see there is a new build; clicking the commit takes me to the PR I just merged into the master branch of my GitHub repo.
But if I access my website URL, the new code is not there. If I run gcloud app deploy again, it eventually appears. My trigger seems to be working fine from the logs, so why is my build not getting published?
I think the problem might be that you're using a Dockerfile instead of a Cloud Build configuration file... unless there's something else I'm not seeing.
Look here under the fourth step, fifth bullet, for the solution. It says:
Under Build configuration, select Cloud Build configuration file.
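As a sketch of what that option ends up running, a minimal cloudbuild.yaml that deploys to App Engine from a trigger could look like this (assuming the Cloud Build service account has App Engine deploy permissions; the timeout value is illustrative):
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
timeout: 1600s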

Azure storage emulator does not work with Azure Function docker run

I am trying to run my timer-trigger Azure Function sample using Docker. It works fine in Visual Studio in both Debug and Release mode. But when I use Docker:
docker run -d -p 8080:80 testdockercors
The application starts and prints the message below, but my timer-trigger Azure Function is not running.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
Again, the timer trigger works fine when running from Visual Studio. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Testdockercors/Testdockercors.csproj", "Testdockercors/"]
RUN dotnet restore "Testdockercors/Testdockercors.csproj"
COPY . .
WORKDIR "/src/Testdockercors"
RUN dotnet build "Testdockercors.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Testdockercors.csproj" -c Release -o /app/publish
# The "base" stage was omitted from the post; the standard Functions runtime image is the likely base:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0 AS base
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobsStorage="UseDevelopmentStorage=true"
When I change AzureWebJobsStorage to an existing Azure Storage connection string, it works in Docker too. But I want to use the storage emulator that is part of my Docker setup, not one in Azure.
I fixed the issue by changing the value of AzureWebJobsStorage as below; appending DevelopmentStorageProxyUri=http://host.docker.internal to the connection string did the trick:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
You can read more at https://www.maneu.net/blog/use-local-storage-emulator-remote-container/
Note that this will not work in a Linux VM.
I can confirm that using:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
works for the Python runtime (in particular mcr.microsoft.com/azure-functions/python:3.0-python3.7) as well.
In effect, this resolves to the Azurite endpoints running on the host where the Docker runtime is running (in my case, macOS). Note that this does not imply that it will work if you run Azurite in a container.
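An alternative to baking the value into the Dockerfile is passing it at run time, which keeps a single image usable against both the emulator and a real storage account. A sketch using the image name from the question:
docker run -d -p 8080:80 -e AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal" testdockercors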

Problem deploying MERN app with Docker to GCP App Engine - should a deploy take multiple hours?

I am inexperienced with DevOps, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# Entrypoint.sh
#!/bin/sh
node /app/index.js &   # start the Node API in the background
cd /app/client
npm start              # run the React client in the foreground
The React front end is in a client folder located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy them together rather than separately. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing gcloud and running gcloud app deploy, things kicked off; however, the deploy has been sitting in my terminal window for the last several hours without completing.
Hours seems like a higher magnitude of time than is reasonable for deploying an application, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload.
You may need to change the file structure of your current project.
Additionally, it might be worth researching the standard nodejs10 runtime. It uploads and starts much faster than the flexible alternative (runtime: custom is part of App Engine Flex, so it cannot be combined with env: standard). You could then deploy each part to a different service.
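.gcloudignore uses gitignore-style syntax; a minimal sketch for the layout described above (the client/node_modules path is an assumption based on the described folder structure):
# .gcloudignore
.git
.gitignore
node_modules/
client/node_modules/
npm-debug.log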

GCP Cloud Build ignores timeout settings

I use Cloud Build to copy the configuration file from storage and deploy the app to App Engine Flex.
The problem is that the build fails every time it lasts more than 10 minutes. I've specified a timeout in my cloudbuild.yaml, but it looks like it's ignored. I also configured app/cloud_build_timeout and set it to 1000. Could somebody explain to me what is wrong here?
My cloudbuild.yaml looks in this way:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["cp", "gs://myproj-dev-247118.appspot.com/.env.cloud", ".env"]
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
  timeout: 1000s
timeout: 1600s
My app.yaml uses a custom env that is built from a Dockerfile and looks like this:
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  NODE_ENV: dev
The Dockerfile also contains nothing special, just dependency installation and the app build:
FROM node:10 as front-builder
WORKDIR /app
COPY front-end .
RUN npm install
RUN npm run build:web
FROM node:12
WORKDIR /app
COPY api .
RUN npm install
RUN npm run build
COPY .env .env
EXPOSE 8080
COPY --from=front-builder /app/web-build web-build
CMD npm start
When running gcloud app deploy directly for an App Engine Flex app, from your local machine for example, under the hood it spawns a Cloud Build job to build the image that is then deployed to GAE (you can see that build in Cloud Console > Cloud Build). This build has a 10min timeout that can be customized via:
gcloud config set app/cloud_build_timeout 1000
Now, the issue here is that you're issuing the gcloud app deploy command from within Cloud Build itself. Since each individual Cloud Build step is running in its own Docker container, you can't just add a previous step to customize the timeout since the next one will use the default gcloud setting.
You've got several options to solve this:
Add a build step to first build the image with docker build and upload it to Google Container Registry; you can set a custom timeout on these steps to fit your needs. Finally, deploy your app with gcloud app deploy --image-url=IMAGE-URL (see the sketch after this list).
Create your own custom gcloud builder where app/cloud_build_timeout is set to your custom value. You can derive it from the default gcloud builder Dockerfile and add /builder/google-cloud-sdk/bin/gcloud config set app/cloud_build_timeout 1000.
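A sketch of the first option as a cloudbuild.yaml (the image name myapp is hypothetical; $PROJECT_ID is a built-in Cloud Build substitution, and the timeout values are illustrative):
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/myapp", "."]
  timeout: 1600s
- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/myapp"]
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "--image-url=gcr.io/$PROJECT_ID/myapp"]
timeout: 1800s
Because the image is already built and pushed, the gcloud app deploy step no longer spawns its own ten-minute image build.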
In case you are using Google Cloud Build with Skaffold, remember to check skaffold.yaml and set the timeout option inside the googleCloudBuild section under build. For example:
build:
  googleCloudBuild:
    timeout: 3600s
Skaffold will ignore the gcloud config of the machine from which you are running the deploy. For example, it will ignore this CLI command: gcloud config set app/cloud_build_timeout 3600

Dockerized .NET Core app doesn't load on Azure

I'm trying the following steps:
Create a .NET Core web application in VS2017
Add Docker support via Right click > Add > Docker Support
Publish to Azure App Service (Linux) via Right click > Publish
Leave everything at the default values and publish
The site doesn't load
This is my Dockerfile (I didn't change anything):
FROM microsoft/aspnetcore:1.1
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WebApplication3.dll"]
What am I doing wrong?
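A first diagnostic step, assuming the Azure CLI is available (the angle-bracket values are placeholders), is to enable and tail the container logs, which usually show whether the container started and what it is listening on:
az webapp log config --name <app-name> --resource-group <rg> --docker-container-logging filesystem
az webapp log tail --name <app-name> --resource-group <rg>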
