Is there a good way to create an authentication mechanism to Azure DevOps to be able to access the Artifacts NuGet feed? I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our DevOps NuGet feed. Ideally, people wouldn't have to have the same stock Dockerfile code in every single project that grabs a PAT from their host build system. This would also allow us to do CI/CD a little more nicely.
My current solution
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
ARG IT_PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\": \"https://pkgs.dev.azure.com/MNPIT/_packaging/MNP/nuget/v3/index.json\",\"username\": \"build\",\"password\": \"${IT_PAT}\"}]}"
RUN mkdir -p $HOME/.nuget/plugins
WORKDIR /deps
# Downloads and installs the NuGet credential plugin so we can login to the private NuGet feed
RUN curl https://github.com/microsoft/artifacts-credprovider/releases/download/v0.1.24/Microsoft.NetCore2.NuGet.CredentialProvider.tar.gz -L -o creds.tar.gz -s
RUN tar -xzf creds.tar.gz
RUN cp -r plugins/netcore/ ~/.nuget/plugins
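For context, each project currently builds the image by passing the PAT as a build argument, roughly like this (the image name and PAT value are placeholders):

docker build --build-arg IT_PAT=<personal-access-token> -t myregistry.azurecr.io/team-base:latest .

Problems with this approach: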
- Stock code in every build file
- Each user configuring their environment variables with a PAT
- Passing the PAT on every build
- Does not work with an automated build system
YAML
- Run the NuGetAuthenticate task to add VSS_NUGET_ACCESSTOKEN to the environment variables
- Pass the token to the Docker task as an argument
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: 'build docker image'
  inputs:
    command: build
    containerRegistry: 'happycodeacr'
    repository: 'hc-app-sample-api-dev'
    buildContext: '$(Pipeline.Workspace)/app'
    Dockerfile: '$(Pipeline.Workspace)/app/src/HappyCode.Api/Dockerfile'
    arguments: '--build-arg FEED_ACCESSTOKEN=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      latest
      $(Build.BuildId)
Dockerfile
- Download and install the artifacts credential provider
- Receive the token
- Set the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable with the feed URL and token for the NuGet restore process
- Copy the NuGet.config file
- Run dotnet restore
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /work
RUN curl -L https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | sh
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
"{\"endpointCredentials\": [{\"endpoint\":\"https://happycode.pkgs.visualstudio.com/_packaging/hc-nuget-feed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY ["NuGet.config", "./"]
COPY ["src/*/*.csproj", "./"]
RUN for projectFile in $(ls *.csproj); \
do \
mkdir -p ${projectFile%.*}/ && mv $projectFile ${projectFile%.*}/; \
done
RUN dotnet restore /work/HappyCode.Api/HappyCode.Api.csproj
# further instructions
I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our devops nuget feed.
You can include the credentials inside your image to achieve this, but for security reasons you had better add some extra steps or code to pass the credentials in from outside the image.
Based on your current solution, you can use the predefined system variable $(System.AccessToken) to get the security token in the Azure DevOps CI/CD pipeline. Then, in the Docker build task, pass the access token to the ARG IT_PAT as an argument:
--build-arg IT_PAT=$(System.AccessToken)
Besides using the NuGet credential plugin, you can also use the dotnet CLI to add credentials to the NuGet source, and then pass $(System.AccessToken) in the build arguments. See below:
ARG PAT
COPY . .
RUN dotnet nuget add source "your-source-url" --name "source-name" --username "useless" --password "$PAT" --store-password-in-clear-text
RUN dotnet restore
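If you drive this from an Azure DevOps pipeline, a minimal sketch of the matching Docker task could look like the following (the service connection and repository names are placeholders):

- task: Docker@2
  inputs:
    command: build
    containerRegistry: 'my-acr-connection'
    repository: 'my-app'
    Dockerfile: '**/Dockerfile'
    arguments: '--build-arg PAT=$(System.AccessToken)'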
Another workaround is to include the nuget.config in the build context. You first need a nuget.config file without the credentials, then add an extra NuGet task to add the credentials to the config file, and then copy the nuget.config in your Dockerfile. See below:
Add a NuGet task that runs the custom command below to add the credentials to the nuget.config file.
sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText
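A rough sketch of that step in YAML, assuming the classic NuGetCommand@2 task and the Source/Nuget.config path used in the command above:

- task: NuGetCommand@2
  displayName: 'Add feed credentials to Nuget.config'
  inputs:
    command: 'custom'
    arguments: 'sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText'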
Copy the nuget.config in the Dockerfile, and do not forget to delete the nuget.config file when the restore is complete:
COPY *.csproj .
COPY ./nuget.config .
RUN dotnet restore
RUN rm nuget.config
If you are using a YAML-based pipeline, you can also check out container jobs. You can use your private container by setting up a container service connection, and then use the restore task directly in your pipeline. In the example below, the NuGet restore task runs inside your private container and can access your Azure feed directly by pointing the vstsFeed attribute at your NuGet feed:
When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection

steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-azure-nuget-feed'
    restoreSolution: '**/*.sln'
For more information you can check out this thread.
Adding to the other replies: you can avoid modifying nuget.config at runtime and use an environment variable instead.
During authentication, NuGet checks environment variables of the format NuGetPackageSourceCredentials_'name', where 'name' is the key of the feed in the nuget.config file.
For example
NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=$(System.AccessToken)"
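A minimal sketch of how this could look inside the Dockerfile, assuming the feed is registered under the key MyPackages in the NuGet.config you copy in (the build argument name is illustrative):

ARG FEED_ACCESSTOKEN
# NuGet picks up credentials for the source named "MyPackages" from this variable
ENV NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=${FEED_ACCESSTOKEN}"
COPY NuGet.config .
COPY *.csproj .
RUN dotnet restore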
Related
I want to access my private feed from a pipeline (Project A), but every time I try to run dotnet restore to restore a .NET project with NuGet packages from a private feed (a project-scoped feed in Project B), I get
error NU1301: Unable to load the service index for source <<url_to_my_feed>>
My pipeline.yml looks like:
[previous jobs]
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: Build image
  inputs:
    command: build
    containerRegistry: $(dockerRegistryServiceConnection)
    repository: $(contentHostRepositoryName)
    Dockerfile: "$(Build.SourcesDirectory)/src/modules/content/src/Dockerfile"
    arguments: '--build-arg PAT=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      $(tag)
[next jobs]
My dockerfile looks like:
...
ARG PAT
RUN dotnet nuget add source <<url_to_my_feed>> --name <<name>> --username <<username>> --password $PAT --store-password-in-clear-text
...
When I replace '--build-arg PAT=$(VSS_NUGET_ACCESSTOKEN)' with '--build-arg PAT=<<pat_token>>', where <<pat_token>> is a token I generated manually for my personal account in Azure DevOps, everything works fine.
What I tried:
- using $(System.AccessToken) instead of $(VSS_NUGET_ACCESSTOKEN)
- setting the Contributor permission on the feed for the Project A build service
- disabling Limit job authorization scope to current project for non-release pipelines and Limit job authorization scope to current project for release pipelines in Project B
- using NuGetAuthenticate@0 and NuGetAuthenticate@1
- allowing the Project Build Service everything under Pipelines Security in Project A
- mapping $(System.AccessToken) into a variable before using it
Add the Project Collection Build Service (Organization) account to the feed and give it the related permissions.
Since you use a YAML pipeline and you disabled Limit job authorization scope to current project for non-release pipelines, the project build service account changes to the project collection build service account.
And please use $(System.AccessToken) to authenticate.
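With that, the Docker task argument from the question would simply become (sketch):

arguments: '--build-arg PAT=$(System.AccessToken)'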
I'm building a Windows container and want to copy some files from GitHub into a container built inside the VSTS agent (Azure DevOps).
I created a copy task from the list of built-in pipeline tasks, but I'm not sure how it will work.
YAML is:
steps:
- task: CopyFiles@2
  displayName: 'Copy Files'
  inputs:
    SourceFolder: ..
    Contents: |
      default.aspx
      web.config
    TargetFolder: '$(build.artifactstagingdirectory)'
I just need to copy default.aspx and web.config from GitHub to the container inside my VSTS agent.
The target path is c:\inetpub\wwwroot inside the container.
Or, instead of this, can I use some copy steps in my Dockerfile?
My Dockerfile is simple:
FROM mcr.microsoft.com/windows/servercore/iis
SHELL ["powershell"]
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
Install-WindowsFeature Web-Asp-Net45
EXPOSE 80
Assuming you have this folder structure:
root-folder:
  DOCKERFILE
  content:
    default.aspx
    web.config
According to the documentation, you should be good to go by adding these lines:
WORKDIR /inetpub/wwwroot
COPY content/ .
Putting it all together:
FROM mcr.microsoft.com/windows/servercore/iis
SHELL ["powershell"]
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
Install-WindowsFeature Web-Asp-Net45
WORKDIR /inetpub/wwwroot
COPY content/ .
EXPOSE 80
But my assumption is the folder structure above. If yours is different, you should first copy your files into place using the CopyFiles@2 task.
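For example, a sketch of a CopyFiles@2 step that stages the two files into a content/ folder next to the Dockerfile before the image is built (the source and target paths are assumptions about your repository layout):

- task: CopyFiles@2
  displayName: 'Stage files for the image'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: |
      default.aspx
      web.config
    TargetFolder: '$(Build.SourcesDirectory)/content'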
I am trying to install a pip package from Azure Artifacts as part of a Docker image (with the Docker@2 task), but whatever I try does not work.
It looks like pip inside Docker cannot authenticate against Azure Artifacts, whatever I try. The closest I got is with
RUN pip install keyring artifacts-keyring
ENV ARTIFACTS_KEYRING_NONINTERACTIVE_MODE true
RUN pip install <> --index-url https://pkgs.dev.azure.com/<>/_packaging/<>/pypi/simple/
but in my Azure DevOps pipeline, I keep getting
ERROR: Could not find a version that satisfies the requirement <> (from versions: none)
ERROR: No matching distribution found for <>
Also, the Azure documentation on this seems to be very poor; if I switch ENV ARTIFACTS_KEYRING_NONINTERACTIVE_MODE to false, it prompts my Azure DevOps pipeline to authenticate interactively, which is not what I want.
How can I install a Python package published in Azure Artifacts as part of my Azure Pipeline Docker task automatically?
How can I install a Python package published in Azure Artifacts as part of my Azure Pipeline Docker task automatically?
We could use the PipAuthenticate task to populate the PIP_EXTRA_INDEX_URL environment variable:
It authenticates with your artifacts feed and, per the docs, will store the location of a config file that can be used to connect in the PYPIRC_PATH environment variable.
Then pass it in the build arg:
arguments: --build-arg INDEX_URL=$(PIP_EXTRA_INDEX_URL)
You could check this document Consuming Azure Pipelines Python artifact feeds in Docker for some more details.
Hope this helps.
To add to the accepted answer, here is a somewhat more complete code example:
azure-pipelines.yml
- task: PipAuthenticate@1
  inputs:
    artifactFeeds: 'my_artifacts_feed'
    # 'onlyAddExtraIndex' populates the PIP_EXTRA_INDEX_URL env variable
    onlyAddExtraIndex: True

- task: Docker@2
  displayName: 'Build Docker Image'
  inputs:
    command: build
    dockerfile: $(dockerfilePath)
    tags: |
      $(tag)
    arguments: --build-arg INDEX_URL=$(PIP_EXTRA_INDEX_URL)
Dockerfile
FROM python:3.8-buster
# add a URL that pip automatically searches (e.g., the Azure Artifacts feed URL)
ARG INDEX_URL
ENV PIP_EXTRA_INDEX_URL=$INDEX_URL
COPY requirements.txt .
RUN pip install -r requirements.txt
I cannot build the Docker image in the CI/CD pipeline; the path it copies from is not found.
Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-./obj/Docker/publish} .
# I am choosing the Azure Pipelines agent windows-2019
Step 4/4 : COPY ${source:-./obj/Docker/publish} .
COPY failed: CreateFile
\?\C:\ProgramData\docker\tmp\docker-builder174212468\obj\Docker\publish:
The system cannot find the path specified.
[error]C:\Program Files\Docker\docker.exe failed with return code: 1
I had the same issue and finally solved it. I'm using the new YAML pipeline with the Docker@2 task. The solution was to specify the buildContext property. It must be the same as the WORKDIR in your Dockerfile.
Dockerfile:
WORKDIR /src
Azure Pipeline Yaml:
buildContext: 'src/'
Full task:
- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    containerRegistry: '$(dockerRegistryServiceConnection)'
    repository: '$(imageRepository)'
    command: 'buildAndPush'
    Dockerfile: 'src/Dockerfile'
    buildContext: 'src/'
    tags: '$(tag)'
The error is very clear: COPY cannot find the path you are trying to copy from during the build.
The source build argument specifies the directory to copy from; if source is empty or not supplied, the default path obj/Docker/publish is used.
You can check this issue (Is it possible to change the default directory (obj/docker/publish) to copy files) on GitHub.
When building the image it copies the content from the path specified
in the source argument to the current directory within the container.
If there is no source argument specified, the contents from the path
obj/Docker/publish are used.
understanding-docker-with-visual-studio
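If your published output lives somewhere else in the build context, a sketch of overriding the default by passing the build argument in the Docker task (the path is hypothetical):

arguments: '--build-arg source=bin/Release/Publish'

so that COPY ${source:-./obj/Docker/publish} . points at a folder that actually exists in the context.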
In GitLab, is it possible to transfer caches or artifacts between pipelines?
I am building a library in one pipeline and I want to build an application with the library in another pipeline.
Yes, it is possible. There are a couple of options to achieve this:
Using Job API and GitLab Premium
The first option is to use the Job API to fetch artifacts. This method is available only if you have GitLab Premium. In this option, you use CI_JOB_TOKEN with the Job API to fetch artifacts from another pipeline. Read more here.
Here is a quick example of a job you would put in your application pipeline configuration:
build_application:
  image: debian
  stage: build
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/master/download?job=build&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
Using S3
The second option is to use some third-party intermediate storage, for instance AWS S3. To pass artifacts, follow the example below.
In your library pipeline configuration create the following job:
variables:
  TARGET_PROJECT_TOKEN: [get token from Settings -> CI/CD -> Triggers]
  TARGET_PROJECT_ID: [get project id from project main page]

publish-artifact:
  image: "python:latest"
  stage: publish
  before_script:
    - pip install awscli
  script:
    - aws s3 cp output/artifact.zip s3://your-s3-bucket-name/artifact.zip.${CI_JOB_ID}
    - "curl -X POST -F token=${TARGET_PROJECT_TOKEN} -F ref=master -F variables[ARTIFACT_ID]=${CI_JOB_ID} https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/trigger/pipeline"
Then, in your application pipeline configuration, retrieve the artifact from the S3 bucket:
fetch-artifact-from-s3:
  image: "python:latest"
  stage: prepare
  artifacts:
    paths:
      - artifact/
  before_script:
    - pip install awscli
  script:
    - mkdir artifact
    - aws s3 cp s3://your-s3-bucket-name/artifact.zip.${ARTIFACT_ID} artifact/artifact.zip
  only:
    variables:
      - $ARTIFACT_ID
Once the fetch-artifact-from-s3 job is complete, your artifact will be available in the artifact/ directory. It can now be consumed by other jobs within the application pipeline.