I would like to build my Java code into a pipeline artifact and then put it into a Docker image.
I can't figure out a way to add the artifact to a Dockerfile in one pipeline.
I would like one pipeline that builds the Java code and builds a Docker image containing the jar.
Azure DevOps Pipeline - build Docker with artifacts
You could set the target path to $(Build.ArtifactStagingDirectory) for the task that builds the Java code.
Then add a Docker task above the publish task, configured for the "Build" command, and set the "Build Context" in the task to $(Build.ArtifactStagingDirectory). That's the root path Docker will use for commands like COPY in a Dockerfile.
And set the Dockerfile path in the task to match its location:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
WORKDIR /app
COPY . .
ENTRYPOINT ["dotnet", "myAppNameHere.dll"]
Since you've set the Docker Build Context to $(Build.ArtifactStagingDirectory), where your app has been published, the COPY command will use that as a "current working directory." The translation of the COPY is "copy everything in $(Build.ArtifactStagingDirectory) to the /app folder inside the container."
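As a minimal YAML sketch of that arrangement, assuming a Maven build (the task inputs, repository name and paths here are illustrative, not taken from the question):
steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'
- task: CopyFiles@2
  inputs:
    Contents: 'target/*.jar'   # the built jar
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
    flattenFolders: true
- task: Docker@2
  inputs:
    command: build
    repository: 'my-java-app'  # hypothetical repository name
    Dockerfile: '$(Build.SourcesDirectory)/Dockerfile'  # the Dockerfile can stay in the repo
    buildContext: '$(Build.ArtifactStagingDirectory)'   # COPY resolves relative to this folder
With that context, a COPY *.jar /app/ line in the Dockerfile picks the jar up from the staging directory.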
Related
I have my PR pipeline with the following stages, one being dependent on the previous one:
Changed: basically determines which services have changed, so it skips testing and building the ones that haven't.
UnitTesting: installs the services on the agent and runs the unit tests if the services have changes to them.
BuildAndPush: builds and pushes the test-<version> image to ACR if unit tests are passing.
IntegrationTesting: honestly haven't figured this out yet, but it should pull all the images and deploy them somehow.
SeleniumTesting: same as IntegrationTesting, not figured out yet.
Anyway, I had UnitTesting before BuildAndPush because I figured it was the quickest way to determine whether a build is failing, by cutting out the time of building the image.
It is quicker:
UnitTesting one service takes about 1.5 minutes. The bulk of that is npm install and npm build. The tests themselves are only about 10-15 seconds.
BuildAndPush one service takes about 2.25 minutes, but that doesn't include the testing portion.
That being said, I feel like the UnitTesting stage adds an extra 1.25 minutes per service that really isn't necessary: just doing BuildAndPush and unit testing inside the image would actually be faster overall for the pipeline. The developer would still know whether the build is failing well under 5 minutes.
So that is what I can't figure out:
How do you run unit tests in a Docker image in an Azure Pipeline?
Since Docker volumes are not supported when building a container, we need to add a task that builds the image via the script below, running the unit tests inside the build and then copying the test results file from the container to a folder on the build server. We use the docker cp command to do this:
docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) .
docker create -ti --name testcontainer webapplication1:$(build.buildid)
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults
docker rm -fv testcontainer
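In a YAML pipeline these commands could sit in one script step; a sketch, assuming the same file layout as above:
- script: |
    docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(Build.BuildId) .
    docker create -ti --name testcontainer webapplication1:$(Build.BuildId)
    docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults
    docker rm -fv testcontainer
  displayName: 'Build image and extract test results'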
When you create an ASP.NET Core project in Visual Studio and add Docker support to it, you will get a Dockerfile that looks something like this:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
COPY . .
WORKDIR "/src/WebApplication1"
RUN dotnet build "WebApplication1.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
If we want to build and run the tests inside the container, we need to extend the Dockerfile. For example, add an xUnit test project called WebApplication1.UnitTests.
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"]
RUN dotnet restore "WebApplication1/WebApplication1.csproj"
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj"
COPY . .
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app
RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx"
FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]
Then add a Publish Test Results task to publish the test results.
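In YAML, that publishing step could look like this (a sketch; the search folder matches the docker cp destination above):
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'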
You could check these blogs for more details:
We start the container with unit tests in Azure DevOps (VSTS)
Running .NET Core Unit Tests with Docker and Azure Pipelines
Is there a good way to create an authentication mechanism for DevOps to be able to access the Artifacts NuGet feed? I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our DevOps NuGet feed. Ideally people wouldn't have to have the same stock Dockerfile code in every single project that grabs a PAT from their host build system. This would also allow us to CI/CD this a little more nicely.
My current solution
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
ARG IT_PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\": \"https://pkgs.dev.azure.com/MNPIT/_packaging/MNP/nuget/v3/index.json\",\"username\": \"build\",\"password\": \"${IT_PAT}\"}]}"
RUN mkdir -p $HOME/.nuget/plugins
WORKDIR /deps
# Downloads and installs the NuGet credential plugin so we can login to the private NuGet feed
RUN curl https://github.com/microsoft/artifacts-credprovider/releases/download/v0.1.24/Microsoft.NetCore2.NuGet.CredentialProvider.tar.gz -L -o creds.tar.gz -s
RUN tar -xzf creds.tar.gz
RUN cp -r plugins/netcore/ ~/.nuget/plugins
Downsides of this approach:
Stock code in every build file.
Each user configuring their environment variables with a PAT.
Passing the PAT on every build.
It does not work with an automated build system.
YAML
Run the NuGetAuthenticate task to add VSS_NUGET_ACCESSTOKEN to the environment variables
Pass the token to the Docker task as an argument
- task: NuGetAuthenticate@0

- task: Docker@2
  displayName: 'build docker image'
  inputs:
    command: build
    containerRegistry: 'happycodeacr'
    repository: 'hc-app-sample-api-dev'
    buildContext: '$(Pipeline.Workspace)/app'
    Dockerfile: '$(Pipeline.Workspace)/app/src/HappyCode.Api/Dockerfile'
    arguments: '--build-arg FEED_ACCESSTOKEN=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      latest
      $(Build.BuildId)
Dockerfile
Download and install the artifacts credential provider
Receive the token
Set the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable with the feed URL and token for the NuGet restore process
Copy the NuGet.config file
Run dotnet restore
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /work
RUN curl -L https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | sh
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
"{\"endpointCredentials\": [{\"endpoint\":\"https://happycode.pkgs.visualstudio.com/_packaging/hc-nuget-feed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY ["NuGet.config", "./"]
COPY ["src/*/*.csproj", "./"]
RUN for projectFile in $(ls *.csproj); \
do \
mkdir -p ${projectFile%.*}/ && mv $projectFile ${projectFile%.*}/; \
done
RUN dotnet restore /work/HappyCode.Api/HappyCode.Api.csproj
# further instructions
I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our devops nuget feed.
You can include the credentials inside your image to achieve this, but for security reasons you had better add some extra steps or code to pass the credentials in from outside the image.
Based on your current solution, you can use the predefined system variable $(System.AccessToken) to get the security token in the Azure DevOps CI/CD pipeline. Then, in the Docker build task, you pass the access token to the ARG IT_PAT as an argument:
--build-arg IT_PAT=$(System.AccessToken)
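In a Docker@2 task that would be wired up via the arguments input; a minimal sketch (other inputs omitted):
- task: Docker@2
  inputs:
    command: build
    arguments: '--build-arg IT_PAT=$(System.AccessToken)'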
Besides using the NuGet credential plugin, you can also use the dotnet CLI to add credentials to the NuGet source, and then pass $(System.AccessToken) in the build arguments. See below:
ARG PAT
COPY . .
RUN dotnet nuget add source "your-source-url" --name "source-name" --username "useless" --password "$PAT" --store-password-in-clear-text
RUN dotnet restore
Another workaround is to include the nuget.config in the build context. But you need to include a nuget.config file without the credentials first, and then add an extra NuGet task to add the credentials to the config file. Then copy the nuget.config in your Dockerfile. See below:
Add a NuGet task that runs the custom command below to add the credentials to the nuget.config file.
sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText
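In YAML, that could be a NuGetCommand@2 task running a custom command; a sketch reusing the arguments above:
- task: NuGetCommand@2
  inputs:
    command: 'custom'
    arguments: 'sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText'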
Copy the nuget.config in the Dockerfile, and do not forget to delete the nuget.config file when the restore is complete:
COPY *.csproj .
COPY ./nuget.config .
RUN dotnet restore
RUN rm nuget.config
If you are using a YAML-based pipeline, you can also check out container jobs. You use your private container by setting up the container endpoints, and then you can directly use the restore tasks in your pipeline. In the example below, the NuGet restore task will run in your private container, and it can access your Azure Artifacts feeds directly by setting the vstsFeed attribute to your NuGet feed:
When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection

steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-azure-nuget-feed'
    restoreSolution: '**/*.sln'
For more information you can check out this thread.
Adding to the other replies: you can avoid modifying nuget.config at runtime and use an environment variable instead.
During authentication NuGet checks environment variables of the format NuGetPackageSourceCredentials_'name', where 'name' is the key of the feed in the nuget.config file.
For example
NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=$(System.AccessToken)"
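In an Azure Pipelines YAML step, the variable could be supplied like this (a sketch; MyPackages has to match the feed key in nuget.config):
- script: dotnet restore
  env:
    NuGetPackageSourceCredentials_MyPackages: 'Username=unused_but_required_by_nuget;Password=$(System.AccessToken)'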
I cannot build the Docker image in the CI/CD pipeline: the path used by COPY is not found.
Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-./obj/Docker/publish} .
# I am choosing the Azure Pipelines agent windows-2019
Step 4/4 : COPY ${source:-./obj/Docker/publish} .
COPY failed: CreateFile \\?\C:\ProgramData\docker\tmp\docker-builder174212468\obj\Docker\publish: The system cannot find the path specified.
##[error]C:\Program Files\Docker\docker.exe failed with return code: 1
I had the same issue and finally solved it. I'm using the new YAML pipeline with the Docker@2 task. The solution was to specify the "buildContext" property. It must be the same as the WORKDIR in your Dockerfile.
Dockerfile:
WORKDIR /src
Azure Pipeline Yaml:
buildContext: 'src/'
Full task:
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    containerRegistry: '$(dockerRegistryServiceConnection)'
    repository: '$(imageRepository)'
    command: 'buildAndPush'
    Dockerfile: 'src/Dockerfile'
    buildContext: 'src/'
    tags: '$(tag)'
The error is very clear: COPY cannot find the file you are trying to copy during the build.
The ${source} build argument specifies the path to copy from; if source is empty or absent, the default path ./obj/Docker/publish is used.
You can check this issue (Is it possible to change default directory (obj/docker/publish) to copy files) on GitHub, or issues-6.
When building the image, it copies the content from the path specified in the source argument to the current directory within the container. If there is no source argument specified, the contents from the path obj/Docker/publish are used.
understanding-docker-with-visual-studio
This is the Dockerfile generated by VS2017.
I changed it a little bit to use it on Azure DevOps:
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["WebApi.csproj", "WebApi/"]
COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProj.Common/"]
RUN dotnet restore "MyProject.WebApi/MyProject.WebApi.csproj"
COPY . .
WORKDIR "/src/MyProject.WebApi"
RUN dotnet build "MyProject.WebApi.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "MyProject.WebApi.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "MyProject.WebApi.dll"]
Solution structure
MyProject.sln
-MyProject.Common
...
-MyProject.WebApi
...
Dockerfile
I have created a Build Pipeline under Azure DevOps to run Docker Build with these steps :
Get Sources Step from Azure Repos Git
Agent Job (Hosted Ubuntu 1604)
Command Line script docker build -t WebApi .
I get this error:
2019-02-02T18:14:33.4984638Z ---> 9af3faec3d9e
2019-02-02T18:14:33.4985440Z Step 7/17 : COPY ["./MyProject.Common/MyProject.Common.csproj", "MyProject.Common/"]
2019-02-02T18:14:33.4999594Z COPY failed: stat /var/lib/docker/tmp/docker-builder671248463/MyProject.Common/MyProject.Common.csproj: no such file or directory
2019-02-02T18:14:33.5327830Z ##[error]Bash exited with code '1'.
2019-02-02T18:14:33.5705235Z ##[section]Finishing: Command Line Script
Attached is a screenshot of the working directory used.
I don't understand whether I have to change something inside the Dockerfile or in the Command Line script step on DevOps.
This is just a hunch, but considering your Dockerfile is located under MyProject.WebApi and you want to copy files from MyProject.Common, which is on the same level, you might need to specify a different context root directory when running docker build:
docker build -t WebApi -f Dockerfile ../
When Docker builds an image it collects a context: a list of files which are accessible during the build and can be copied into the image.
When you run docker build -t WebApi . inside the MyProject.WebApi directory, all files in the directory . (unless you have a .dockerignore file), which is MyProject.WebApi in this case, are included in the context. But MyProject.Common is not part of the context, so you can't copy anything from it.
Hope this helps
EDIT: Perhaps you don't need to specify the Working Directory (shown in the screenshot); then the command would change into:
docker build -t WebApi -f MyProject.WebApi/Dockerfile .
In this case Docker will use Dockerfile located inside MyProject.WebApi and include all files belonging to the solution into the context.
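If you run the build through the Docker@2 task instead of a Command Line script, the same fix might look like this sketch (the repository name is illustrative):
- task: Docker@2
  inputs:
    command: build
    repository: 'webapi'                       # hypothetical repository name
    Dockerfile: 'MyProject.WebApi/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'  # solution root, so MyProject.Common is inside the context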
You can also read about context in the Extended description for the docker build command in the official documentation.
I'm working on a Node.js application for which my current Dockerfile looks like this:
# Stage 0
# =======
FROM node:10-alpine as build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# Stage 1
# =======
FROM nginx:mainline-alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
I'd like to integrate this into a GitLab CI pipeline but I'm not sure if I got the basic idea. So far I know that I need to create a .gitlab-ci.yml file which will be later picked up by GitLab.
My basic idea is:
I push my code changes to GitLab.
GitLab builds a new Docker image based on my Dockerfile.
GitLab pushes this newly created image to a "production" server (later).
So, my question is:
My .gitlab-ci.yml should then contain something like a build job which triggers... what? The docker build command? Or do I need to "copy" the Dockerfile content to the CI file?
GitLab CI executes the pipeline in Runners that need to be registered with the project using generated tokens (Settings / CI/CD / Runners). You can also use Shared Runners across multiple projects. The pipeline is configured with the .gitlab-ci.yml file, and you can build, test, push and deploy Docker images from that YAML file when something happens in the repo (a push to a branch, a merge request, etc.).
It's also useful when your application already has the Dockerfile that can be used to create and test an image.
So basically you need to install the runner, register it with the token of your project (or use Shared Runners), and configure your CI YAML file. The recommended approach is Docker-in-Docker, but it is up to you. You can also check this basic example. Finally, you can deploy your container directly to Kubernetes, Heroku or Rancher. Remember to safely configure your credentials and secrets in Settings/Variables.
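A minimal .gitlab-ci.yml sketch using Docker-in-Docker (the CI_* variables are GitLab's predefined CI variables; stage, job names and image tags are illustrative):
stages:
  - build

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind   # provides the Docker daemon inside the job
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"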
Conclusion
GitLab CI is awesome, but I recommend that you first think about the git workflow to use in order to set the stages in the .gitlab-ci.yml file. This will allow you to configure your Node project as a pipeline, and then it would be easy to port to other tools such as Jenkins pipelines or Travis, for example.
build job trigger:
option 1:
add when: manual in the job and you can run the job manually under CI/CD > Pipelines
option 2:
only:
  - <branchname>
in this case the job starts when you push to the defined branch
(this is my personal suggestion)
option 3:
add nothing, and the job will run every time you push code
Of course you can combine the options above.
In addition, you may start the job with a web request by using the job token.
The docker build command will work in the pipeline, in the script section. It requires a Docker engine on the GitLab runner that picks up the job.
Or do I need to "copy" the Dockerfile content to the CI file?
no