Azure DevOps: install Python package from Azure Artifacts inside Docker

I am trying to install a pip package from Azure Artifacts as part of a Docker image (with the Docker@2 task), but whatever I try does not work.
It looks like my pip inside Docker cannot authenticate against Azure Artifacts. The closest I got is with:
RUN pip install keyring artifacts-keyring
ENV ARTIFACTS_KEYRING_NONINTERACTIVE_MODE true
RUN pip install <> --index-url https://pkgs.dev.azure.com/<>/_packaging/<>/pypi/simple/
but in Azure DevOps I keep getting:
ERROR: Could not find a version that satisfies the requirement <> (from versions: none)
ERROR: No matching distribution found for <>
Also, the Azure documentation on this seems to be very poor. If I switch ENV ARTIFACTS_KEYRING_NONINTERACTIVE_MODE false, it prompts my Azure DevOps pipeline to authenticate interactively, which is not what I want.
How can I install a Python package published in Azure Artifacts as part of my Azure Pipeline Docker task automatically?

How can I install a Python package published in Azure Artifacts as part of my Azure Pipeline Docker task automatically?
We could use the PipAuthenticate task to populate the PIP_EXTRA_INDEX_URL environment variable:
It authenticates with your Artifacts feed and, per the docs, stores the location of a config file that can be used to connect in the PYPIRC_PATH environment variable.
Then pass it in the build arg:
arguments: --build-arg INDEX_URL=$(PIP_EXTRA_INDEX_URL)
You could check this document Consuming Azure Pipelines Python artifact feeds in Docker for some more details.
Hope this helps.

To add to the accepted answer, here is a somewhat more complete code example:
azure-pipelines.yml
- task: PipAuthenticate@1
  inputs:
    artifactFeeds: 'my_artifacts_feed'
    # 'onlyAddExtraIndex' populates the PIP_EXTRA_INDEX_URL env variable
    onlyAddExtraIndex: True

- task: Docker@2
  displayName: 'Build Docker Image'
  inputs:
    command: build
    dockerfile: $(dockerfilePath)
    tags: |
      $(tag)
    arguments: --build-arg INDEX_URL=$(PIP_EXTRA_INDEX_URL)
Dockerfile
FROM python:3.8-buster

# add a URL that pip automatically searches (e.g., the Azure Artifacts feed URL)
ARG INDEX_URL
ENV PIP_EXTRA_INDEX_URL=$INDEX_URL

COPY requirements.txt .
RUN pip install -r requirements.txt

Related

Running a Docker container in an Azure pipeline that runs on a self-hosted agent which itself runs in a container

I am trying to pull a Docker container in my Azure pipeline. The pipeline runs on a self-hosted agent that is itself running in a Docker container. I get the following error:
Is it possible to run a container in the pipeline when the pipeline itself runs on a containerized self-hosted agent?
Pipeline YAML:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

pool:
  vmImage: ubuntu-latest
  name: SYS-DEV-Self-hosted
  demands:
  - agent.name -equals SYSDEV-agent

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '15.x'
  displayName: 'Install Node.js'

- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'

- script: docker pull qmate
  workingDirectory: ./
  displayName: 'Docker Pull'

- script: |
    cd tests/QmateE2E/regression
    npm install
    npx wdio config.js
  displayName: 'npm install and build'
You may configure the self-hosted agent inside the Docker container. You don't need to run the Docker container in the pipeline; you could install the self-hosted agent in the Docker instance, and then register that container as a self-hosted agent in the agent pool.
You can specify multiple containers to run with container jobs, if you want another container to interact with. The container that you specify in the pipeline is pulled and started automatically by Azure DevOps. I would normally specify the container to run in at the top level with container:, or under the specific job if multiple jobs are present.
The way it is done currently, the safe option in case more containers are added, is to set target: qmate on each of the steps that should run in the container.
For the error you had here: for steps that interact with Docker, like docker build, you can also set target: host on the specific task (Azure DevOps seems to mount enough of the context for most of it to be shared). In this case, the image that you are trying to pull was likely already pulled when the pipeline started.
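A rough sketch of how that can look in YAML, assuming the qmate container resource from the question (the build step is illustrative, not part of the original pipeline):

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

steps:
# steps that should run inside the qmate container
- script: |
    cd tests/QmateE2E/regression
    npm install
    npx wdio config.js
  target: qmate
  displayName: 'Run tests inside the container'

# steps that talk to the Docker daemon run on the host agent
- script: docker build -t my-image .   # illustrative step
  target: host
  displayName: 'Build an image on the host'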

Changes made in Databricks Repos not displaying in the destination branch after merge with main

My workflow is: a developer creates a feature branch from main in Databricks Repos -> they make changes on it -> they raise a pull request to merge into main in Azure DevOps -> this triggers the CI/CD pipeline, which pushes the code with the changes into the destination branch.
In the above process I am able to push the code from feature branch to feature branch, but not into main, because the CI/CD does not have write permissions on main. I am confused about how to achieve this. Below is my YAML pipeline. I also want to know whether it is possible to update the code in a feature branch from main. My target is to update main, and then move the code from main to the feature branch with the same pipeline.
- script: |
    pip install --upgrade databricks-cli
  displayName: 'Install dependencies'

- script: |
    echo "Checking out the $(branchName) branch"
    databricks repos update --path $(Production) --branch $(branchName)
  env:
    DATABRICKS_HOST: $(databricks-url)
    DATABRICKS_TOKEN: $(databricks-pat)
  displayName: 'Update Staging project'
I also tried to create repos with the Databricks Repos API on the higher-level folders prod/staging, but this also failed, throwing the error:
Generating script.
Script contents:
databricks repos create --url https://harish.alagandula@royhill.com.au:***@dev.azure.com/royhill/Enterprise%20Data%20Platform/_git/EDP_Core --provider azureDevOpsServices --path /Repos/Staged/testrepo
========================== Starting Command Output ===========================
"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "C:\agents\_work\_temp\7554fb9a-2bb8-4c75-98c3-a4d9819e4420.cmd""
Error: b'{"error_code":"BAD_REQUEST","message":"Remote repo not found. Please ensure that:\\n1. Your remote Git repo URL is valid.\\n2. Your personal access token or app password has the correct repo access."}'
##[error]Cmd.exe exited with code '1'.
Finishing: Update Production project
Below are the commands I tried to create repos with the Databricks Repos API in the CI/CD pipeline:
- task: UsePythonVersion@0
  displayName: Configuring Python Version
  inputs:
    versionSpec: '3.9'
    addToPath: true
    architecture: 'x64'

- script: |
    pip install --upgrade databricks-cli
  displayName: 'Install dependencies'

- script: |
    databricks repos create --url https://$(userName):$(devOpsPat)@dev.azure.com/<org>/<projectname>/_git/<reponame> --provider azureDevOpsServices --path /Repos/Staged/reponame
  env:
    DATABRICKS_HOST: $(databricks-url)
    DATABRICKS_TOKEN: $(databricks-pat)
  displayName: 'Update Production project'
I also tried the command below, but I am getting the same error:
databricks repos create --url https://github.com/<github_profile>/glow
--provider gitHub --path /Repos/Production/<reponame>
Please can someone help me sort out these two issues.

Sending Azure Build Artifacts to Feed

I have been having issues with sending build artifacts to my feed and can't figure out where my issue is at.
I forked over this repository from an Azure document since I am new to this and learning to create a CI/CD pipeline (https://github.com/Azure-Samples/python-docs-hello-world).
With the twine or universal package publishing setup guides there are steps for connecting to the feed, such as creating a .pypirc file in your home directory, but is that done locally or somewhere within the pipeline YAML?
Universal Publishing YAML
steps:
- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    vstsFeed: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    publishDirectory: '$(Pipeline.Workspace)'
    vstsFeedPublish: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    vstsFeedPackagePublish: drop
Twine Method
twine upload -r {Feed} --config-file $(PYPIRC_PATH) $(Pipeline.Workspace)
With Universal Publishing I receive an error about the provided path being invalid.
With Twine I get an error about InvalidDistribution: Cannot find file (or expand pattern).
The $(Pipeline.Workspace) that I have written above was created as a path in my build pipeline to copy all files over from an Archive step. I see the artifact being made in the build pipeline and then downloaded in the first step of the release pipeline, so I'm not sure what is going on or whether it's something as simple as using the incorrect path.
With Twine I get an error about InvalidDistribution: Cannot find file (or expand pattern)
You need to specify the specific artifacts path instead of using $(Pipeline.Workspace).
$(Pipeline.Workspace) is equal to $(Agent.BuildDirectory). You could refer to this doc.
From the GitHub link, it seems that you want to publish a Python package to a feed.
You could refer to the following steps to create the CI/CD.
In CI, you could build the sdist and publish the artifact to the pipeline.
Here is the sample:
steps:
- task: UsePythonVersion@0
  displayName: 'Use Python 3.6'
  inputs:
    versionSpec: 3.6

- script: 'python setup.py sdist'
  displayName: 'Build sdist'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: dist'
  inputs:
    PathtoPublish: dist
    ArtifactName: dist
In CD, you could set the build artifacts as a resource and use twine to upload the Python package to the feed.
Here is an example:
twine upload -r AzureTest23 --config-file $(PYPIRC_PATH) D:\a\r1\a\{Source alias}\dist\*
The TwineAuthenticate task provides the $(PYPIRC_PATH) variable.
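As a minimal sketch, the authenticate-and-upload steps could look roughly like this in YAML, assuming the TwineAuthenticate@1 task and the AzureTest23 feed from the example above (the project/feed name and artifact path are placeholders):

- task: TwineAuthenticate@1
  displayName: 'Twine Authenticate'
  inputs:
    artifactFeed: 'MyProject/AzureTest23'   # placeholder project/feed

- script: |
    pip install twine
    # {Source alias} is the artifact source alias, as in the example above
    twine upload -r AzureTest23 --config-file $(PYPIRC_PATH) $(System.ArtifactsDirectory)/{Source alias}/dist/*
  displayName: 'Upload package to feed'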
If you want to determine your correct path, you can find it in the release log.
Note: If there are spaces or special characters in the path, they need to be escaped in cmd, otherwise they will not be recognized.
The name relates to the source alias; you can change it in the artifacts source.
By the way, if you use the Universal Publish task, you also need to give the correct path.
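For example, the Universal publish task could point at the specific directory that contains the package files rather than the whole workspace; a sketch reusing the inputs from the question (the exact sub-path depends on your source alias and archive layout, so treat it as a placeholder):

- task: UniversalPackages@0
  displayName: 'Universal publish'
  inputs:
    command: publish
    vstsFeed: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    # point at the downloaded artifact folder, not the whole workspace
    publishDirectory: '$(Pipeline.Workspace)/{Source alias}/dist'
    vstsFeedPublish: 'cd75ead1-7beb-42f9-9477-e958501bb986'
    vstsFeedPackagePublish: drop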

Azure DevOps NuGet artifact feed and Docker

Is there a good way to create an authentication mechanism to Azure DevOps to be able to access the Artifacts NuGet feed? I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our DevOps NuGet feed. Ideally people wouldn't have to have the same stock Dockerfile code in every single project that grabs a PAT from their host build system. This would also allow us to CI/CD this a little more nicely.
My current solution
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
ARG IT_PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\": \"https://pkgs.dev.azure.com/MNPIT/_packaging/MNP/nuget/v3/index.json\",\"username\": \"build\",\"password\": \"${IT_PAT}\"}]}"
RUN mkdir -p $HOME/.nuget/plugins
WORKDIR /deps
# Downloads and installs the NuGet credential plugin so we can login to the private NuGet feed
RUN curl https://github.com/microsoft/artifacts-credprovider/releases/download/v0.1.24/Microsoft.NetCore2.NuGet.CredentialProvider.tar.gz -L -o creds.tar.gz -s
RUN tar -xzf creds.tar.gz
RUN cp -r plugins/netcore/ ~/.nuget/plugins
This approach still has drawbacks:
- Stock code in every build file
- Each user configuring their environment variables with a PAT
- Passing the PAT on every build
- Does not work with an automated build system
YAML
- Run the NuGetAuthenticate task to add VSS_NUGET_ACCESSTOKEN to the environment variables (more info)
- Pass the token to the Docker task as an argument
- task: NuGetAuthenticate@0

- task: Docker@2
  displayName: 'build docker image'
  inputs:
    command: build
    containerRegistry: 'happycodeacr'
    repository: 'hc-app-sample-api-dev'
    buildContext: '$(Pipeline.Workspace)/app'
    Dockerfile: '$(Pipeline.Workspace)/app/src/HappyCode.Api/Dockerfile'
    arguments: '--build-arg FEED_ACCESSTOKEN=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      latest
      $(Build.BuildId)
Dockerfile
- Download and install the artifacts credential provider (more info)
- Receive the token
- Set the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable with the feed URL and token for the nuget restore process
- Copy the NuGet.config file
- Run dotnet restore
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /work
RUN curl -L https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | sh
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
"{\"endpointCredentials\": [{\"endpoint\":\"https://happycode.pkgs.visualstudio.com/_packaging/hc-nuget-feed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY ["NuGet.config", "./"]
COPY ["src/*/*.csproj", "./"]
RUN for projectFile in $(ls *.csproj); \
    do \
      mkdir -p ${projectFile%.*}/ && mv $projectFile ${projectFile%.*}/; \
    done
RUN dotnet restore /work/HappyCode.Api/HappyCode.Api.csproj
# further instructions
I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our devops nuget feed.
You can include the credentials inside your image to achieve this, but for security reasons you had better add some extra steps or code to pass the credentials in from outside the image.
Based on your current solution, you can use the predefined system variable $(System.AccessToken) to get the security token in the Azure DevOps CI/CD pipeline. Then, in the Docker build task, you pass the access token to the ARG IT_PAT as an argument:
--build-arg IT_PAT=$(System.AccessToken)
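In a YAML pipeline that could look roughly like the following Docker@2 task (the repository name and Dockerfile path are placeholders, not the poster's values):

- task: Docker@2
  displayName: 'Build image with feed credentials'
  inputs:
    command: build
    repository: 'my-app'            # placeholder
    Dockerfile: '**/Dockerfile'     # placeholder
    arguments: '--build-arg IT_PAT=$(System.AccessToken)'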
Besides using the NuGet credential plugin, you can also use the dotnet CLI to add credentials to the NuGet source, and then pass $(System.AccessToken) in the build arguments. See below:
ARG PAT
COPY . .
RUN dotnet nuget add source "your-source-url" --name "source-name" --username "useless" --password "$PAT" --store-password-in-clear-text
RUN dotnet restore
Another workaround is to include the nuget.config in the build context. But you need to include a nuget.config file without the credentials first, and then add an extra NuGet task to add the credentials to the config file. Then copy the nuget.config in your Dockerfile. See below:
Add a NuGet task that runs the custom command below to add the credentials to the nuget.config file.
sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText
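A minimal sketch of that task in YAML, assuming the NuGetCommand@2 task with a custom command and reusing the feed URL and config path from the snippet above:

- task: NuGetCommand@2
  displayName: 'Add feed credentials to nuget.config'
  inputs:
    command: custom
    arguments: 'sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText'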
Copy the nuget.config in the Dockerfile; do not forget to delete the nuget.config file when the restore is complete:
COPY *.csproj .
COPY ./nuget.config .
RUN dotnet restore
RUN rm nuget.config
If you are using a YAML-based pipeline, you can also check out container jobs. Then you use your private container by setting up the container endpoints, and you can directly use the restore tasks in your pipeline. See the example below: the NuGet restore task will run in your private container, and it can access your Azure Artifacts feeds directly by specifying the vstsFeed attribute with your NuGet feed.
When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection

steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-azure-nuget-feed'
    restoreSolution: '**/*.sln'
For more information you can check out this thread.
Adding to the other replies: you can avoid modifying nuget.config at runtime and use an environment variable instead.
During authentication, NuGet checks environment variables of the format NuGetPackageSourceCredentials_'name', where 'name' is the key of the feed in the nuget.config file.
For example:
NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=$(System.AccessToken)"
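As a sketch, in a pipeline this could be set on the restore step itself; the source key MyPackages must match the key in your nuget.config, and the solution name is a placeholder:

- script: dotnet restore MySolution.sln   # placeholder solution
  displayName: 'Restore using credentials from the environment'
  env:
    # picked up by NuGet for the "MyPackages" source during restore
    NuGetPackageSourceCredentials_MyPackages: 'Username=unused_but_required_by_nuget;Password=$(System.AccessToken)'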

Is it possible to transfer caches/artifacts between pipelines?

In Gitlab, is it possible to transfer caches or artifacts between pipelines?
I am building a library in one pipeline and I want to build an application with the library in another pipeline.
Yes, it is possible. There are a couple of options to achieve this:
Using Job API and GitLab Premium
The first option is to use the Job API to fetch artifacts. This method is available only if you have GitLab Premium. In this option, you use CI_JOB_TOKEN in the Job API to fetch artifacts from another pipeline. Read more here.
Here is a quick example of a job you would put in your application pipeline configuration:
build_application:
  image: debian
  stage: build
  script:
    - apt update && apt install -y unzip
    - curl --location --output artifacts.zip "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/jobs/artifacts/master/download?job=build&job_token=$CI_JOB_TOKEN"
    - unzip artifacts.zip
Using S3
The second option is to use some third-party intermediate storage, for instance AWS S3. To pass artifacts, follow the example below.
In your library pipeline configuration, create the following job:
variables:
  TARGET_PROJECT_TOKEN: [get token from Settings -> CI/CD -> Triggers]
  TARGET_PROJECT_ID: [get project id from project main page]

publish-artifact:
  image: "python:latest"
  stage: publish
  before_script:
    - pip install awscli
  script:
    - aws s3 cp output/artifact.zip s3://your-s3-bucket-name/artifact.zip.${CI_JOB_ID}
    - "curl -X POST -F token=${TARGET_PROJECT_TOKEN} -F ref=master -F variables[ARTIFACT_ID]=${CI_JOB_ID} https://gitlab.com/api/v4/projects/${TARGET_PROJECT_ID}/trigger/pipeline"
Then, in your application pipeline configuration, retrieve the artifact from the S3 bucket:
fetch-artifact-from-s3:
  image: "python:latest"
  stage: prepare
  artifacts:
    paths:
      - artifact/
  before_script:
    - pip install awscli
  script:
    - mkdir artifact
    - aws s3 cp s3://your-s3-bucket-name/artifact.zip.${ARTIFACT_ID} artifact/artifact.zip
  only:
    variables:
      - $ARTIFACT_ID
Once the fetch-artifact-from-s3 job is completed, you will have your artifact available in the artifact/ directory. It can now be consumed by other jobs within the application pipeline, as in the sketch below.
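As a sketch, a later job in the application pipeline could then consume it like this (the job name, stage, and paths are illustrative):

build_application:
  image: debian
  stage: build
  script:
    # artifact/ is downloaded automatically because fetch-artifact-from-s3 ran in an earlier stage
    - apt update && apt install -y unzip
    - unzip artifact/artifact.zip -d library/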
