I have my PR pipeline with the following stages, each dependent on the previous one (a YAML sketch follows the list):
Changed: basically determines which services have changed, so it skips testing and building the ones that haven't.
UnitTesting: installs the services on the agent and runs the unit tests for any service that has changes.
BuildAndPush: builds and pushes the test-<version> image to ACR if the unit tests pass.
IntegrationTesting: honestly haven't figured this out yet, but it should pull all the images and deploy them somehow.
SeleniumTesting: same situation as IntegrationTesting.
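For reference, a rough Azure Pipelines YAML sketch of how those stages chain together; the stage names come from the list above, while the jobs are omitted and the layout is only illustrative:
stages:
- stage: Changed                 # determine which services changed
  jobs: []                       # jobs omitted in this sketch
- stage: UnitTesting
  dependsOn: Changed
  jobs: []
- stage: BuildAndPush
  dependsOn: UnitTesting
  jobs: []
- stage: IntegrationTesting
  dependsOn: BuildAndPush
  jobs: []
- stage: SeleniumTesting
  dependsOn: IntegrationTesting
  jobs: []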
Anyway, I had UnitTesting before BuildAndPush because I figured it was the quickest way to determine whether a build is failing, by cutting out the time spent building the image.
It is quicker:
UnitTesting one service takes about 1.5 minutes. The bulk of that is npm install and npm build. The tests themselves are only about 10-15 seconds.
BuildAndPush for one service takes about 2.25 minutes, but that doesn't include the testing portion.
That being said, I feel like the UnitTesting stage adds an extra ~1.25 minutes per service (the ~1.5-minute stage minus the 10-15 seconds of actual tests) that really isn't necessary: just running BuildAndPush and unit testing inside the image would actually be faster overall for the pipeline, and the developer would still know whether the build is failing well under 5 minutes.
So that is what I can't figure out:
How do you run unit tests in a Docker image in an Azure Pipeline?
Since Docker volumes are not supported when building a container, we need to add a task that builds the image with the script below, runs the unit tests as part of the build, and then copies the test results file from the container to a folder on the build server. We use the docker cp command to do this:
docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) .
docker create -ti --name testcontainer webapplication1:$(build.buildid)
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults
docker rm -fv testcontainer
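In an Azure Pipelines YAML build, those commands could live in a single script step; this is only a sketch (the displayName and step layout are assumptions), with the commands copied from above:
steps:
- script: |
    docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) .
    docker create -ti --name testcontainer webapplication1:$(build.buildid)
    docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults
    docker rm -fv testcontainer
  displayName: 'Build image, run unit tests and extract the test results'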
When you create an ASP.NET Core project in Visual Studio and add Docker support to it, you get a Dockerfile that looks something like this:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
COPY . .
WORKDIR "/src/WebApplication1"
RUN dotnet build "WebApplication1.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
If we want to build and run the tests inside the container, we need to extend the Dockerfile. For this example, add an xUnit test project called WebApplication1.UnitTests.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj"
COPY . .
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app
RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx"
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
Then add a Publish Test Results task to publish the test results, for example as sketched below.
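A minimal sketch of that task in YAML, assuming the .trx file was copied into the testresults folder as in the script above:
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'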
You could check these blogs for more details:
We start the container with unit tests in Azure DevOps (VSTS)
Running .NET Core Unit Tests with Docker and Azure Pipelines
I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), I generate them not in Cloud Build itself but in the Dockerfile (a simple Node script executed via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I run this Dockerfile through Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and found no error (such as a permission error or similar).
Is there any flag or setting I am missing that prevents my Dockerfile from generating those files and storing them in the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the Cloud Build YAML in which the docker build command is located. Afterwards the image is pushed to Artifact Registry and deployed to Cloud Run. On Cloud Run itself the files are gone. I can't inspect the steps in between, but when building locally the files are generated and persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build step (the steps before and after are just the normal deploy-to-Cloud-Run steps)
...
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact it could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
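For illustration, the same Cloud Build step with the flag added; everything except --network=cloudbuild is copied from the step above:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']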
I created a new docker container for a Node.js app.
My Dockerfile is:
FROM node:14
# app directory
WORKDIR /home/my-username/my-proj-name
# Install app dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 3016
CMD ["node", "src/app.js"]
After this I ran:
docker build . -t my-username/node-web-app
Then I ran: docker run -p 8160:3016 -d -v /home/my-username/my-proj-name:/my-proj-name my-username/node-web-app
The app is successfully hosted at my-public-ip:8160.
However, any changes I make on my server do not propagate to the Docker container. For example, if I touch test.txt on my server, I will not be able to GET /test.txt online or see it in the container. The only way I can make changes appear is to rebuild the image, which is quite tedious.
Did I miss something when binding the volume? How can I make the changes I make locally also appear in the container?
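A bind mount only shadows the exact container path it is mounted at, and in the run command above the target (/my-proj-name) is not the directory the app was copied to and runs from (the WORKDIR in the Dockerfile). A hedged sketch of a run command where the mount target matches that directory:
# Sketch only: mount the host project over the same path the Dockerfile uses as WORKDIR,
# so the running app reads the live files instead of the copy baked into the image.
docker run -p 8160:3016 -d \
  -v /home/my-username/my-proj-name:/home/my-username/my-proj-name \
  my-username/node-web-app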
I am trying to run my timer-trigger Azure Function sample using Docker. It works fine in Visual Studio in both debug and release mode. But when I use Docker:
docker run -d -p 8080:80 testdockercors
The application starts and prints the message below, but my timer-trigger Azure Function does not run.
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down..
But the timer trigger works fine when running from Visual Studio. Please find my Dockerfile below.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["Testdockercors/Testdockercors.csproj", "Testdockercors/"]
RUN dotnet restore "Testdockercors/Testdockercors.csproj"
COPY . .
WORKDIR "/src/Testdockercors"
RUN dotnet build "Testdockercors.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Testdockercors.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENV AzureWebJobsScriptRoot=/app
ENV AzureWebJobsStorage="UseDevelopmentStorage=true"
When I change the AzureWebJobsStorage environment value to an existing Azure Storage connection string, it works in Docker too. But I want to use the local (emulated) storage that runs with Docker, not a storage account in Azure.
I fixed the issue by changing the value of AzureWebJobsStorage as below; appending DevelopmentStorageProxyUri=http://host.docker.internal to the connection string did the trick.
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
You can read more about this at https://www.maneu.net/blog/use-local-storage-emulator-remote-container/
Note: this will not work in a Linux VM.
I can confirm that using:
ENV AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal"
works for the Python runtime (in particular mcr.microsoft.com/azure-functions/python:3.0-python3.7) as well.
In effect this resolves to the Azurite endpoints running on the host where the Docker runtime is running (in my case, macOS). Note that this does not imply that it will work if you run Azurite in a container.
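For illustration, a rough sketch of that setup with Azurite started on the host (not in a container) and the function container pointed at it through host.docker.internal; the 0.0.0.0 bindings and the install step are assumptions so that the container can reach the emulator:
# Sketch only: run Azurite on the host, listening on all interfaces so containers can reach it.
npm install -g azurite
azurite --silent --location ./azurite-data --blobHost 0.0.0.0 --queueHost 0.0.0.0
# Then start the function image with the proxied development-storage connection string.
docker run -d -p 8080:80 \
  -e AzureWebJobsStorage="UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://host.docker.internal" \
  testdockercors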
This is the Dockerfile generated by VS2017.
I changed it a little bit to use it on Azure DevOps:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApi.csproj", "WebApi/"]
COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProj.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure
MyProject.sln
-MyProject.Common
...
-MyProject.WebApi
...
Dockerfile
I have created a Build Pipeline under Azure DevOps to run Docker Build with these steps:
Get Sources Step from Azure Repos Git
Agent Job (Hosted Ubuntu 1604)
Command Line script docker build -t WebApi .
I get this error:
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
Attached is a screenshot with the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common which is on the same level, then you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image it collects a context: a list of files which are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . it runs inside the MyProject.WebApi directory, and all files under . (unless excluded by a .dockerignore file), i.e. everything in MyProject.WebApi, are included in the context. But MyProject.Common is not part of the context, and thus you can't copy anything from it.
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command would change to:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use the Dockerfile located inside MyProject.WebApi and include all files belonging to the solution in the context.
You can also read about context in the Extended description for the docker build command in the official documentation.
I am using this Dockerfile to build it:
FROM microsoft/dotnet:2.1-sdk-alpine3.7 AS build
WORKDIR /app
COPY Packt.HelloWorld.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o /published
RUN ls -alrth /published
FROM microsoft/dotnet:2.1-runtime-deps-alpine3.7 AS base
WORKDIR /app
COPY --from=build /published .
RUN ls -alrth /app
# ENTRYPOINT ["dotnet", "TestApp.dll"] <-- I can run in Windows like this
ENTRYPOINT ["Packt.HelloWorld"]
The debug ls shows all the files under the /app folder.
But if I use this
ENTRYPOINT ["/app/Packt.HelloWorld"]
It complains there is no such path:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/app/Packt.HelloWorld\": stat /app/Packt.HelloWorld: no such file or directory": unknown.
If I use
ENTRYPOINT ["Packt.HelloWorld"]
It gave me
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"Packt.HelloWorld\": executable file not found in $PATH": unknown.
The image seems to build properly and the content is in there; it's just that running it is not successful.
Can anyone give me a hint what I am missing?
Thanks
# ENTRYPOINT ["dotnet", "TestApp.dll"] <-- I can run in Windows like this
This is pretty much exactly how it should work on Linux as well.
It looks like you are doing a Framework Dependent Deployment (no -r or --runtime argument for dotnet publish), so the same steps to build and run on Windows should work on Linux as well. Note, though, that the runtime-deps base image used in the final stage contains only the native dependencies, not the .NET runtime itself, so a framework-dependent deployment also needs a final image that actually includes the runtime (a runtime or aspnetcore-runtime variant rather than runtime-deps).
You need to specify a runtime identifier for the publish subcommand of the dotnet CLI. Also, the publishing process must create a self-contained application. Check the link for more information.
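A minimal sketch of what that change to the build stage might look like for the Alpine-based runtime-deps image above; the linux-musl-x64 runtime identifier and the --self-contained flag are assumptions based on that base image:
# Sketch only: publish a self-contained build for Alpine so the app ships its own launcher
# and does not need the dotnet runtime in the final image.
RUN dotnet publish -c Release -r linux-musl-x64 --self-contained true -o /published
# The publish output then contains a native executable named after the project,
# which can be used directly as the entrypoint in the final stage.
ENTRYPOINT ["./Packt.HelloWorld"]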