ASP.NET Core 3.0 Docker container crashing with no status - Linux

I have an ASP.NET Core 3.0 preview application that runs fine on my Windows box. I moved it to Linux (Ubuntu 18.04), running on Docker there. It builds and runs fine. I'm using the NETGeographic library, which relies on Geographic.dll. When it gets to the point of executing a function that relies on NETGeographic, it crashes silently. No exception thrown, no anything. Just a full-on crash.
I've tried Release and Debug, publishing as a self-contained application, messing with hint paths and other paths, and I just can't get it to go on Linux for whatever reason. For some reason I thought it was working before, and I see new files published as of today. Is this a possible regression? Is there an open bug about this kind of reference anywhere? I haven't been able to find one.
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["Test.csproj", ""]
RUN dotnet restore "Test.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "Test.csproj" -c Debug -o /app
FROM build AS publish
RUN dotnet publish "Test.csproj" -c Debug --self-contained --runtime ubuntu.18.04-x64 -o /app
FROM base AS final
WORKDIR "/src/"
COPY Geographic.dll /app
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Test.dll"]
Output when run:
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
Before LLAtoXYZ
user#server:~$
The function call is just the XYZtoLLA conversion from NETGeographic. I've put the code in and out of a try/catch block with no change; I've put logging in the catch, and it never gets hit. Just a crash to the console. Other API endpoints that do other things work great and never crash, including ones that use other DLLs. But whenever I use NETGeographic it crashes, I assume because it's a wrapper around the C++ Geographic library? Shouldn't this work?
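A native interop failure like this usually terminates the process before any managed catch block can run, which would explain the silent exit above. As a minimal diagnostic sketch (assuming the image is tagged test-image and the library sits in /app; both names are placeholders), one can shell into the image and inspect the native binary directly:

docker run --rm -it --entrypoint /bin/bash test-image
# inside the container (the file utility may first need: apt-get update && apt-get install -y file)
file /app/Geographic.dll    # reports whether this is a Linux ELF shared object or a Windows PE binary
ldd /app/Geographic.dll     # for an ELF library, lists any missing native dependencies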

Related

Unit testing in Docker image in Azure DevOps Pipeline

I have my PR pipeline with the following stages, each dependent on the previous one:
1. Changed: basically determines which services have changed, so it skips testing and building the ones that haven't.
2. UnitTesting: installs the services into the agent pool and runs the unit tests if the services have changes.
3. BuildAndPush: builds and pushes the test-<version> image to ACR if the unit tests pass.
4. IntegrationTesting: honestly haven't figured this out yet, but it should pull all the images and deploy them somehow.
5. SeleniumTesting: same as 4.
Anyway, I had UnitTesting before BuildAndPush because I figured it was the quickest way to determine whether a build is failing, since it cuts out the time of building the image.
It is quicker:
UnitTesting one service takes about 1.5 minutes. The bulk of that is npm install and npm build. The tests themselves are only about 10-15 seconds.
BuildAndPush for one service takes about 2.25 minutes, but that doesn't include the testing portion.
That being said, I feel like the UnitTesting stage adds an extra 1.25 minutes per service that really isn't necessary: just running BuildAndPush and unit testing inside the image would actually be faster overall for the pipeline, and the developer would still know whether the build is failing in well under 5 minutes.
So that is what I can't figure out:
How do you run unit tests in a Docker image in an Azure Pipeline?
Since Docker volumes are not supported when building a container, we need a task that builds the image via the script below, running the unit tests as part of the build, and then copies the test results file from the container to a folder on the build server. We use the docker cp command to do this:
# Build only as far as the 'build' stage, which compiles the code and runs the unit tests
docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) .
# Create (without starting) a container from that image so files can be copied out of it
docker create -ti --name testcontainer webapplication1:$(build.buildid)
# Copy the .trx test results out of the container onto the build server
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults
# Remove the temporary container
docker rm -fv testcontainer
When you create an ASP.NET Core project in Visual Studio and add Docker support to it, you get a Dockerfile that looks something like this:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
COPY . .
WORKDIR "/src/WebApplication1"
RUN dotnet build "WebApplication1.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
If we want to build and run the tests inside the container, we need to extend the Dockerfile. For example, add an xUnit test project called WebApplication1.UnitTests.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj"
COPY . .
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app
RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx"
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
Then add a Publish Test Results task to publish the test results.
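In pipeline YAML, a minimal sketch of that step might look like this (the searchFolder matches the docker cp destination above; the exact inputs are an assumption about your setup):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'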
You could check these blogs for more details:
We start the container with unit tests in Azure DevOps (VSTS)
Running .NET Core Unit Tests with Docker and Azure Pipelines

Deploying Angular Universal 9 to Google App Engine

I am not sure whether the situation has changed, but it seems I am stuck with the versions I am using.
Previously, in Angular 7, we were able to generate the server files for Angular Universal at the root level, so we could have node main.js in app.yaml and Google App Engine just found a way to run our web application. It seems this is not possible anymore in Angular 9.
We are using Angular SSR for our production web site. It compiles all the server files into the /dist-server folder. There is a Dockerfile to deploy it on Google App Engine:
FROM node:12-alpine as buildContainer
WORKDIR /app
COPY ./package.json ./package-lock.json /app/
RUN npm install
COPY . /app
# This will create dist/ and dist-server/ folders in the image
RUN npm run build:ssr
FROM node:12-alpine
WORKDIR /app
COPY --from=buildContainer /app/package.json /app
COPY --from=buildContainer /app/dist /app/dist
COPY --from=buildContainer /app/dist-server /app/dist-server
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]
In package.json we have:
"serve:ssr": "node dist-server/main.js",
To start the deployment, we type gcloud app deploy in the terminal, and everything works fine in this process. The main problem is that it takes almost 25 minutes to finish, and the main bottleneck is the compilation step.
I thought we could compile the repo on our local dev machine and copy only the dist/ and dist-server/ folders into the Docker image, adding node dist-server/main.js to the Dockerfile to run our web application. But whenever I tried to copy only the dist and dist-server folders, I got the error message below:
COPY failed: stat /var/lib/docker/tmp/docker-builder{random numbers}/dist: no such file or directory
I also tried to compile main.js, the main server file for Angular Universal, at the same level as app.yaml. I assumed this is required by the Google App Engine Node.js deployment rules, since there is an example repo from Google. But I cannot compile our main.js file into the root folder; it gives the error message below:
An unhandled exception occurred: Output path MUST not be project root directory!
So I am looking for a solution which does not require Google App Engine to rebuild our repo (since we can build on our dev machine and upload the compiled files to save time), making the deployment process faster.
Thanks for your help
I found that the .dockerignore file had the dist and dist-server folders in it. I removed those entries, and I am now able to build and deploy the Docker image on Google App Engine.
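For reference, a minimal sketch of the resulting setup (assuming the build is now done on the dev machine; this trimmed Dockerfile is an illustration, not the exact file used):

# .dockerignore: remove these two entries so COPY can see the local build output
#   dist
#   dist-server

FROM node:12-alpine
WORKDIR /app
COPY package.json package-lock.json /app/
RUN npm install --production
# Copy the output built locally with 'npm run build:ssr' instead of rebuilding in the image
COPY dist /app/dist
COPY dist-server /app/dist-server
EXPOSE 4000
CMD ["npm", "run", "serve:ssr"]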

Azure storage emulator does not work with azure function docker run

I am trying to run my timer-trigger Azure Function sample using Docker. It works fine in Visual Studio in both Debug and Release mode, but when I use Docker:
docker run -d -p 8080:80 testdockercors
The application starts and prints the message below, but my timer-trigger Azure Function is not running.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down..
But the timer trigger works fine when running from Visual Studio. Please find my Dockerfile below.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Testdockercors/Testdockercors.csproj", "Testdockercors/"]
RUN dotnet restore "Testdockercors/Testdockercors.csproj"
COPY . .
WORKDIR "/src/Testdockercors"
RUN dotnet build "Testdockercors.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Testdockercors.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobsStorage="UseDevelopmentStorage=true"
When I change the AzureWebJobsStorage value to an existing Azure Storage connection string, it works in Docker too. But I want to use the local storage emulator, not a storage account in Azure.
I fixed the issue by changing the value of AzureWebJobsStorage as below; appending DevelopmentStorageProxyUri=http://host.docker.internal to the connection string did the trick.
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
You can read more at https://www.maneu.net/blog/use-local-storage-emulator-remote-container/
Note that this will not work in a Linux VM.
I can confirm that using:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
works for the Python runtime (in particular mcr.microsoft.com/azure-functions/python:3.0-python3.7) as well.
In effect this resolves to the Azurite endpoints running on the host where the Docker runtime is running (in my case, macOS). Note that this does not imply it will work if you run Azurite in a container.
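If you would rather not bake the connection string into the image, the same value can be passed at run time instead (a sketch reusing the image name from the question):

docker run -d -p 8080:80 \
  -e AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal" \
  testdockercors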

Project dependency not found when running DockerFile build command on Azure DevOps

This is the Dockerfile generated by VS2017. I changed it a little to use it on Azure DevOps.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApi.csproj", "WebApi/"]
COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProj.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure
MyProject.sln
-MyProject.Common
...
-MyProject.WebApi
...
Dockerfile
I have created a Build Pipeline in Azure DevOps to run the Docker build with these steps:
Get Sources Step from Azure Repos Git
Agent Job (Hosted Ubuntu 1604)
Command Line script docker build -t WebApi .
I get this error:
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
Attached is a screenshot showing the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common, which is on the same level, you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image it collects a context: a list of files which are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . it runs inside the MyProject.WebApi directory, and all files in the current directory (.), which is MyProject.WebApi in this case, are included in the context (unless you have a .dockerignore file). But MyProject.Common is not part of the context, so you can't copy anything from it.
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command would change to:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use the Dockerfile located inside MyProject.WebApi and include all files belonging to the solution in the context.
You can also read about the build context in the extended description of the docker build command in the official documentation.

How to run an ASP.NET Core image on Linux?

I am using this Dockerfile to build it:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
WORKDIR /app
COPY Packt.HelloWorld.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o /published
RUN ls -alrth /published
FROM microsoft/dotnet:2.1-runtime-deps-alpine3.7 AS base
WORKDIR /app
COPY --from=build /published .
RUN ls -alrth /app
# ENTRYPOINT ["dotnet", "TestApp.dll"] <-- I can run in Windows like this
ENTRYPOINT ["Packt.HelloWorld"]
The debug ls shows all the files under the /app folder.
But if I use this
ENTRYPOINT ["/app/Packt.HelloWorld"]
It complained there is no such path:
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"/app/Packt.HelloWorld\": stat /app/Packt.HelloWorld: no such file or directory": unknown.
If I use
ENTRYPOINT ["Packt.HelloWorld"]
It gave me
docker: Error response from daemon: OCI runtime create failed:
container_linux.go:348: starting container process caused "exec:
\"Packt.HelloWorld\": executable file not found in $PATH": unknown.
The image seems to build properly and the content is all in there; it's just that running it is not successful.
Can anyone give me a hint what I am missing?
Thanks
# ENTRYPOINT ["dotnet", "TestApp.dll"] <-- I can run in Windows like this
This is pretty much exactly how it should work on Linux as well.
It looks like you are doing a framework-dependent deployment (no -r or --runtime argument to dotnet publish), so the same steps to build and run on Windows should work on Linux as well.
You need to specify the runtime for the publish subcommand of the dotnet CLI. Also, the publishing process must create a self-contained application. Check the link for more information.
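As a minimal sketch of that self-contained route (assuming the Alpine base image from the question, whose runtime identifier is linux-musl-x64; on .NET Core 2.1, passing -r already implies a self-contained publish, it is just spelled out here):

# Publish self-contained so the output includes a native apphost launcher
RUN dotnet publish -c Release -r linux-musl-x64 --self-contained true -o /published

# the runtime-deps base stage can then start the apphost directly:
ENTRYPOINT ["/app/Packt.HelloWorld"]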
