I want to access my private feed from a pipeline (in Project A), but every time I run dotnet restore to restore a .NET project that uses NuGet packages from the private feed (a project-scoped feed in Project B) I get:
error NU1301: Unable to load the service index for source <<url_to_my_feed>>
My pipeline.yml looks like:
[previous jobs]
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: Build image
  inputs:
    command: build
    containerRegistry: $(dockerRegistryServiceConnection)
    repository: $(contentHostRepositoryName)
    Dockerfile: "$(Build.SourcesDirectory)/src/modules/content/src/Dockerfile"
    arguments: '--build-arg PAT=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      $(tag)
[next jobs]
My dockerfile looks like:
...
ARG PAT
RUN dotnet nuget add source <<url_to_my_feed>> --name <<name>> --username <<username>> --password $PAT --store-password-in-clear-text
...
When I replace '--build-arg PAT=$(VSS_NUGET_ACCESSTOKEN)' with '--build-arg PAT=<<pat_token>>', where <<pat_token>> is a token I generated manually for my personal account in Azure DevOps, everything works fine.
What I have tried:
- using $(System.AccessToken) instead of $(VSS_NUGET_ACCESSTOKEN)
- setting the Contributor permission on the feed for the Project A Build Service
- in Project B, disabling Limit job authorization scope to current project for non-release pipelines and Limit job authorization scope to current project for release pipelines
- using NuGetAuthenticate@0 and NuGetAuthenticate@1
- in Pipelines Security in Project A, allowing the Project Build Service everything
- mapping $(System.AccessToken) into a variable before using it
Add the Project Collection Build Service (Organization) account to the feed and give it the related permissions.
Since you use a YAML pipeline and have disabled Limit job authorization scope to current project for non-release pipelines, the job's identity changes from the project build service account to the project collection build service account.
Also, please use $(System.AccessToken) to authenticate; a minimal sketch follows.
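For example, keeping the task inputs from the question and only swapping the token that is passed to Docker:
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: Build image
  inputs:
    command: build
    containerRegistry: $(dockerRegistryServiceConnection)
    repository: $(contentHostRepositoryName)
    Dockerfile: "$(Build.SourcesDirectory)/src/modules/content/src/Dockerfile"
    # pass the job's OAuth token instead of VSS_NUGET_ACCESSTOKEN
    arguments: '--build-arg PAT=$(System.AccessToken)'
    tags: |
      $(tag)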
Related
Hello, I'm currently working with Azure Synapse Studio. My situation can be described this way:
I have 3 environments: Dev, Test and Prod. Each of them has an Azure Synapse workspace, but I can only access the Dev one. I need to make changes from Dev for the other 2 environments as well (SQL scripts, pipelines, etc.) and then publish them to those environments without touching them directly.
So I think Azure DevOps can be the solution.
From the Dev Synapse Studio workspace I created 3 branches, one per environment, all of them linked to an Azure DevOps repo. Test and Prod are also linked to the same repo.
The problem is that the code in the Test and Prod workspaces could be different from the code in Dev, so I can't use the same ARM template (generated by publishing on the publish branch of the workspace) for all 3 environments. A good approach could be a way to hit the Publish button on the other environments without using the portal, for example via a REST API. Is that possible?
For now I have only set up the 3-branch solution, so I can manage the 3 environments directly from the Dev environment, but I don't think this is the right solution. Are changes applied to the other environments? Can I run SQL scripts or pipelines manually from the other environments?
This is my current situation: on the other environments I asked to set the collaboration and publish branches to the same value as the environment branch name (test-test-test and prod-prod-prod).
With the new version (V2) of the Synapse workspace deployment task (in preview as of June 2022), it is now possible to deploy from any branch using Azure DevOps, so there is no need for a workspace_publish branch or the Publish button.
Just make the object JSON files available as artifacts to the release pipeline, and select "Validate and deploy" as the Operation Type, as in the sketch below.
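A hedged YAML sketch of that step, assuming the V2 "Synapse workspace deployment" marketplace task; the operation value and the subscription/resource group input names are assumptions and may differ in your version of the extension:
- task: Synapse workspace deployment@2
  displayName: Validate and deploy Synapse artifacts
  inputs:
    operation: 'validateDeploy'                              # the "Validate and deploy" Operation Type (assumed value)
    ArtifactsFolder: '$(Pipeline.Workspace)/drop/synapse'    # folder with the object JSON files (example path)
    TargetWorkspaceName: 'your-target-workspace'
    azureSubscription: 'your-service-connection'             # assumed input name
    ResourceGroupName: 'your-resource-group'                 # assumed input name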
I am working with Microsoft directly, building a Synapse warehouse myself for a large corporation. We have the same issue, in that the Publish button must be pressed manually for the ARM templates to be generated. Microsoft have confirmed that there is no automatic method for this available right now; we had hoped to receive a Preview AzDevOps deployment task this month, but it turns out that it simply allows us to validate the JSON assets - it still deploys using the ARM template.
We have also looked at using Azure Data Factory tools to deploy from the JSON component files, but we run into issues with the dedicated pool stored procedure tasks being unsupported. :(
The only standard option to achieve this is to create a GitHub repository, set up continuous integration, and create a self-hosted Azure DevOps VM agent or use an Azure DevOps hosted agent.
Then you can set up release pipelines in Azure DevOps to work with the different environments. But you still need to commit the changes to the GitHub repository for each environment; there is no equivalent of the Publish button available.
Refer to Continuous integration and delivery for an Azure Synapse Analytics workspace for more details.
This was bothering me as well, so I put together the following, to be run once any PR is approved to merge into the Synapse collaboration branch, in our case "main".
For your case, you can modify to target the relevant workspaces.
See below Azure DevOps pipeline code.
What it does is:
Runs the Synapse workspace validation task, which also generates the workspace template JSONs as an artifact; these need to be published to the workspace_publish branch.
It then checks out your publish branch and commits and pushes the templates generated by the previous task.
Finally, so that the workspace UI does not think there are any unpublished changes when you click the "Publish" button, it updates the workspace configuration to reflect the latest commit ID of the workspace COLLABORATION branch (main in this example) that was used to generate what we pushed to the PUBLISH branch in the previous step.
Any suggestions/improvements welcome. Hope this helps.
name: $(TeamProject)_$(Build.DefinitionName)_$(SourceBranchName)_$(Date:yyyyMMdd)$(Rev:.r) # sets Build.BuildNumber

trigger:
  branches:
    include:
      - main
  paths:
    include:
      - synapse/*

resources:
  repositories:
    - repository: 'Synapse-Publish'
      type: git
      name: Synapse # update to the name of your repo
      ref: workspace_publish # update to the name of your synapse PUBLISH branch

variables:
  repoName: $(Build.Repository.Name)
  azureSubscription: your_subscription
  azureTenantId: your_tenant_guid
  adoOrg: your_azure_devops_org_name
  adoProject: your_azure_devops_project_name
  SourceWorkspaceName: your_synapse_workspace_name
  workspacePublishBranch: workspace_publish # should be the same for you but update if not

stages:
  - stage: build_stage
    displayName: Build, Run Validations, Publish NonProd if merged to main
    jobs:
      # other jobs excluded from this snippet
      - job: publish_workspace_artifacts_job
        displayName: Publish for $(SourceWorkspaceName) $(workspacePublishBranch)
        # only kick off workspace publish job for non-PR builds
        condition: and(not(or(failed(), canceled())), ne(variables['Build.Reason'], 'PullRequest'))
        pool:
          name: 'linux-vmss' # update this for whatever you need
        steps:
          - checkout: self # main
            clean: true
            persistCredentials: true

          - task: Synapse workspace deployment@2
            displayName: Generate workspace artifact templates
            condition: true
            continueOnError: false
            inputs:
              operation: 'validate' # despite this name, it also generates the templates
              ArtifactsFolder: '$(Build.SourcesDirectory)/$(repoName)/synapse'
              TargetWorkspaceName: $(SourceWorkspaceName)

          - checkout: 'Synapse-Publish' # workspace_publish
            clean: true
            persistCredentials: true

          - task: CmdLine@2
            displayName: 'Set git user'
            inputs:
              workingDirectory: '$(System.DefaultWorkingDirectory)'
              failOnStderr: true
              script: |
                git config --global user.email "whatever.you.want@your_org.com"
                git config --global user.name "Whatever You Want"

          - task: AzurePowerShell@5
            displayName: Publish to $(SourceWorkspaceName) $(workspacePublishBranch)
            condition: true
            inputs:
              azureSubscription: '$(azureSubscription)'
              ScriptType: InlineScript
              Inline: |
                # the output from the workspace validate step above is saved here, and also published as an artifact with name = the synapse workspace name
                # Get-ChildItem $(Build.SourcesDirectory)/ExportedArtifacts -Name
                cd $(Build.SourcesDirectory)/$(repoName)
                git pull origin $(workspacePublishBranch)
                git switch $(workspacePublishBranch)
                Move-Item -Path $(Build.SourcesDirectory)/ExportedArtifacts/*.json -Destination $(Build.SourcesDirectory)/$(repoName)/$(SourceWorkspaceName) -Force -Verbose
                git add $(Build.SourcesDirectory)/$(repoName)/$(SourceWorkspaceName)/*.json
                $diff = git diff --cached
                $status = git status
                if (!($status.ToLower() -like "*nothing to commit*"))
                {
                  echo "##[section]git push changes to repo";
                  git commit -m "Update $(workspacePublishBranch) for source workspace $(SourceWorkspaceName) [skip ci]";
                  git pull --rebase;
                  git push origin $(workspacePublishBranch);
                }
                else
                {
                  echo "##[warning]No new changes to push for source workspace $(SourceWorkspaceName) templates";
                  git reset --hard origin/$(workspacePublishBranch)
                  git clean -fxd
                }
              azurePowerShellVersion: 'LatestVersion'

          - task: AzurePowerShell@5
            displayName: Update $(SourceWorkspaceName) Git Config # required so that when you click "Publish" in the workspace it doesn't think there are changes vs. what's already published
            inputs:
              azureSubscription: '$(azureSubscription)'
              ScriptType: InlineScript
              Inline: |
                # get the latest version of this module, which has the LastCommitId parameter that we need
                Install-Module -Name Az.Synapse -Confirm:$false -RequiredVersion 1.5.0 -Force
                Import-Module -Name Az.Synapse -MinimumVersion 1.5.0
                cd $(Build.SourcesDirectory)/$(repoName)
                [String] $latestCommitHash = git log -n 1 origin/main --pretty=format:"%H" # format to get only the hash value of the latest commit
                $config = New-AzSynapseGitRepositoryConfig `
                  -RepositoryType AzureDevOpsGit `
                  -TenantId $(azureTenantId) `
                  -AccountName $(adoOrg) `
                  -ProjectName $(adoProject) `
                  -RepositoryName $(repoName) `
                  -CollaborationBranch main `
                  -RootFolder "/synapse" `
                  -LastCommitId $latestCommitHash
                echo "##[section] Updating $(SourceWorkspaceName) git configuration to point to the latest main branch commit ID"
                # see https://learn.microsoft.com/en-us/powershell/module/az.synapse/update-azsynapseworkspace?view=azps-8.0.0
                Update-AzSynapseWorkspace -Name $(SourceWorkspaceName) -GitRepository $config
              azurePowerShellVersion: 'LatestVersion'
I'm setting up a pipeline on Azure DevOps. Previously, I used to build and publish in VS.
I created a YAML file using the Azure Functions template and added lines to deploy to a slot. It looks like the release configuration is also set up. How would I be able to declare my target framework, deployment mode, and target runtime in the YAML? I also need to enable ReadyToRun and remove additional files at the destination.
We can use the Arguments field to configure the Visual Studio publish-profile settings in the YAML build.
Sample:
- task: DotNetCoreCLI@2
  displayName: 'Build project'
  inputs:
    projects: '**/*.csproj'
    arguments: '--configuration Release --framework <framework> --runtime <runtime>'
Enable ReadyToRun.
Add a .NET Core task, change the command to publish, and pass the MSBuild property in the Arguments field, e.g. -p:PublishReadyToRun=true (see the sketch below).
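A minimal sketch of such a publish step, assuming win-x64 as the target runtime (ReadyToRun requires a concrete runtime identifier; adjust to yours):
- task: DotNetCoreCLI@2
  displayName: 'Publish with ReadyToRun'
  inputs:
    command: 'publish'
    publishWebProjects: false
    projects: '**/*.csproj'
    # PublishReadyToRun is an MSBuild property, so it is passed with -p:
    arguments: '--configuration Release --runtime win-x64 -p:PublishReadyToRun=true --output $(Build.ArtifactStagingDirectory)'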
Removing additional files at destination.
When using the Azure App Service Deploy task with the Publish using Web Deploy option, there is an additional option, Remove additional files at destination.
Steps:
Additional Deployment Options -> enable Select deployment method -> enable the option Remove additional files at destination (a YAML sketch follows).
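In YAML, the same options map to inputs on the Azure App Service Deploy task; a hedged sketch, with the service connection and app name as placeholders:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy App Service (Web Deploy, remove additional files)'
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'your-service-connection'   # placeholder
    appType: 'webApp'
    WebAppName: 'your-app-name'                    # placeholder
    packageForLinux: '$(Build.ArtifactStagingDirectory)/**/*.zip'
    enableCustomDeployment: true                   # "Select deployment method"
    DeploymentType: 'webDeploy'
    RemoveAdditionalFilesFlag: true                # "Remove additional files at destination"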
Is there a good way to create an authentication mechanism to DevOps to be able to access the Artifacts NuGet feed? I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our DevOps NuGet feed. Ideally people wouldn't have to have the same stock Dockerfile code in every single project that grabs a PAT from their host build system. This would also allow us to CI/CD this a little more nicely.
My current solution
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
ARG IT_PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\": \"https://pkgs.dev.azure.com/MNPIT/_packaging/MNP/nuget/v3/index.json\",\"username\": \"build\",\"password\": \"${IT_PAT}\"}]}"
RUN mkdir -p $HOME/.nuget/plugins
WORKDIR /deps
# Downloads and installs the NuGet credential plugin so we can login to the private NuGet feed
RUN curl https://github.com/microsoft/artifacts-credprovider/releases/download/v0.1.24/Microsoft.NetCore2.NuGet.CredentialProvider.tar.gz -L -o creds.tar.gz -s
RUN tar -xzf creds.tar.gz
RUN cp -r plugins/netcore/ ~/.nuget/plugins
Downsides of this approach:
- Stock code in every build file
- Each user configuring their environment variables with a PAT
- Passing the PAT on every build
- Does not work with an automated build system
YAML
Run NuGetAuthenticate task to add VSS_NUGET_ACCESSTOKEN to environment variables (more info)
Pass token to Docker task as an argument
- task: NuGetAuthenticate@0
- task: Docker@2
  displayName: 'build docker image'
  inputs:
    command: build
    containerRegistry: 'happycodeacr'
    repository: 'hc-app-sample-api-dev'
    buildContext: '$(Pipeline.Workspace)/app'
    Dockerfile: '$(Pipeline.Workspace)/app/src/HappyCode.Api/Dockerfile'
    arguments: '--build-arg FEED_ACCESSTOKEN=$(VSS_NUGET_ACCESSTOKEN)'
    tags: |
      latest
      $(Build.BuildId)
Dockerfile
Download and install the artifacts credential provider (more info)
Receive the token
Set the VSS_NUGET_EXTERNAL_FEED_ENDPOINTS environment variable with the feed URL and token for the NuGet restore process
Copy the NuGet.config file
Run dotnet restore
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /work
RUN curl -L https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | sh
ARG FEED_ACCESSTOKEN
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS \
"{\"endpointCredentials\": [{\"endpoint\":\"https://happycode.pkgs.visualstudio.com/_packaging/hc-nuget-feed/nuget/v3/index.json\", \"password\":\"${FEED_ACCESSTOKEN}\"}]}"
COPY ["NuGet.config", "./"]
COPY ["src/*/*.csproj", "./"]
RUN for projectFile in $(ls *.csproj); \
do \
mkdir -p ${projectFile%.*}/ && mv $projectFile ${projectFile%.*}/; \
done
RUN dotnet restore /work/HappyCode.Api/HappyCode.Api.csproj
# further instructions
I would like to create a base image for my team that would allow them to just pull an image from our Azure Container Registry that has access to our devops nuget feed.
You could include the credentials inside your image to achieve this, but for security reasons you had better add some extra steps or code to pass the credentials in from outside the image.
Based on your current solution, you can use the predefined system variable $(System.AccessToken) to get the security token in the Azure DevOps CI/CD pipeline. Then, in the Docker build task, pass the access token to the ARG IT_PAT as a build argument:
--build-arg IT_PAT=$(System.AccessToken)
Besides using the NuGet credential plugin, you can also use the dotnet CLI to add credentials to the NuGet source, and then pass $(System.AccessToken) in the build arguments. See below:
ARG PAT
COPY . .
RUN dotnet nuget add source "your-source-url" --name "source-name" --username "useless" --password "$PAT" --store-password-in-clear-text
RUN dotnet restore
Another workaround is to include the nuget.config in the build context. You need to include a nuget.config file without the credentials first, and then add an extra NuGet task to add the credentials to the config file. Then copy the nuget.config in your Dockerfile. See below:
Add a NuGet task that runs the custom command below to add the credentials to the nuget.config file (a sketch of the task is shown after the command).
sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText
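As a sketch, the same step in YAML could look like this, using the feed name, URL, and config path from the command above:
- task: NuGetCommand@2
  displayName: 'Add feed credentials to nuget.config'
  inputs:
    command: 'custom'
    arguments: 'sources Add -Name "MyPackages" -Source "https://my.pkgs.visualstudio.com/_packaging/MyPackages/nuget/v3/index.json" -username any -password $(System.AccessToken) -ConfigFile Source/Nuget.config -StorePasswordInClearText'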
Copy the nuget.config in the Dockerfile; do not forget to delete the nuget.config file when the restore is complete:
COPY *.csproj .
COPY ./nuget.config .
RUN dotnet restore
RUN rm nuget.config
If you are using a YAML pipeline, you can also check out container jobs. You use your private container by setting up a container endpoint, and then you can use the restore tasks directly in your pipeline. In the example below, the NuGet restore task runs in your private container and can access your Azure Artifacts feed directly by pointing the vstsFeed attribute at your NuGet feed:
When you specify a container in your pipeline, the agent will first fetch and start the container. Then, each step of the job will run inside the container.
container:
  image: myprivate/registry:ubuntu1604
  endpoint: private_dockerhub_connection

steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-azure-nuget-feed'
    restoreSolution: '**/*.sln'
For more information you can check out this thread.
Adding to the other replies: you can avoid modifying nuget.config at runtime and use an environment variable instead.
During authentication, NuGet checks environment variables of the format NuGetPackageSourceCredentials_'name', where 'name' is the key of the feed in the nuget.config file.
For example
NuGetPackageSourceCredentials_MyPackages="Username=unused_but_required_by_nuget;Password=$(System.AccessToken)"
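As a sketch, in an Azure DevOps YAML pipeline this could be set as an environment variable on the restore step (assuming the feed key in nuget.config is MyPackages, as above):
- task: DotNetCoreCLI@2
  displayName: 'Restore using credentials from an environment variable'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
    feedsToUse: 'config'
    nugetConfigPath: 'nuget.config'
  env:
    # NuGet reads this for the source named "MyPackages" in nuget.config
    NuGetPackageSourceCredentials_MyPackages: 'Username=unused_but_required_by_nuget;Password=$(System.AccessToken)'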
I have a very weird (and I suppose easy to fix) problem :) I am trying to get a working CI/CD pipeline in Azure. For this purpose, I have a repository in Azure DevOps and build and release pipelines created. I am publishing Docker images to Azure Container Registry, and during the release I am pulling this image (or at least trying to, because it doesn't work) and publishing it to Web App for Containers. The "app" in my case is a SignalR hub on .NET Core 3.1 (but I don't suppose that makes a difference to the problem I am having).
If somebody wants to know in detail how I configured it, here is the tutorial I used:
https://wikiazure.com/devops/azure-devops-automate-your-release-pipeline-to-provision-a-docker-container-to-azure-web-app-for-containers/
There were some doubts/differences with the tutorial (for example: why the web app is initially configured against Docker Hub in the tutorial when in fact it is using ACR; why the tutorial uses an Azure Resource Manager connection to connect to ACR and not a dedicated Docker container --> ACR connection; and why later in the build pipeline there is some weird id set for dockerRegistryServiceConnection - I put the name of my ACR Docker service connection there).
But the whole build pipeline is working. It publishes the image to ACR. Everything is fine up to this step.
The problem starts when I want to publish the Azure web app with this image. The problem is with ... TAGS :) They are mismatched. I have automatic CI/CD, so when I push a change to the repo I see that the build pipeline runs and creates the image in ACR. Then I see that the release pipeline runs. Everything looks "correct" - meaning no errors are shown and the release is green.
But when I go to the App Service Container settings, I see from the logs:
2020-04-21 18:02:28.321 INFO - Pulling image: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a
2020-04-21 18:02:28.761 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"manifest for myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a not found: manifest unknown: manifest unknown"}
2020-04-21 18:02:28.761 ERROR - Pulling docker image myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a failed:
2020-04-21 18:02:28.762 INFO - Pulling image from Docker hub: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a
2020-04-21 18:02:28.867 ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://myAcrName.azurecr.io/v2/mobile/signalr/manifests/c7aead0c46b66afc4131935efc7e6a51280dfb1a: unauthorized: authentication required"}
2020-04-21 18:02:28.870 ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
A very sophisticated error, but the root cause is that it is trying to pull the image with a non-existent tag, which is the GIT COMMIT hash. It is supposed to get the image by $(Build.BuildId) (this was my first attempt) or by $(Build.BuildNumber) (this was my second attempt).
Here is what this pipeline step (Deploy Azure App Service) looks like:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy Azure App Service'
  inputs:
    azureSubscription: mySubcsriptionARM
    appType: webAppContainer
    WebAppName: myProductsignalr
    DockerNamespace: myAcrName.azurecr.io
    DockerRepository: mobile/signalr
    DockerImageTag: '$(Build.BuildNumber)'
When I go to the release pipeline logs for the "Deploy Azure App Service" step, I see:
2020-04-21T18:41:01.6012767Z ##[section]Starting: Deploy Azure App Service
2020-04-21T18:41:01.6367124Z ==============================================================================
2020-04-21T18:41:01.6367787Z Task : Azure App Service deploy
2020-04-21T18:41:01.6368381Z Description : Deploy to Azure App Service a web, mobile, or API app using Docker, Java, .NET, .NET Core, Node.js, PHP, Python, or Ruby
2020-04-21T18:41:01.6368765Z Version : 4.163.5
2020-04-21T18:41:01.6369158Z Author : Microsoft Corporation
2020-04-21T18:41:01.6369603Z Help : https://aka.ms/azureappservicetroubleshooting
2020-04-21T18:41:01.6369976Z ==============================================================================
2020-04-21T18:41:03.8970184Z Got service connection details for Azure App Service:'myProductsignalr'
2020-04-21T18:41:04.5534864Z Trying to update App Service Configuration settings. Data: {"appCommandLine":null,"linuxFxVersion":"DOCKER|myAcrName.azurecr.io/mobile/signalr:1f283100"}
2020-04-21T18:41:05.5465725Z Updated App Service Configuration settings.
2020-04-21T18:41:05.5495890Z Trying to update App Service Application settings. Data: {"DOCKER_CUSTOM_IMAGE_NAME":"myAcrName.azurecr.io/mobile/signalr:1f283100"}
2020-04-21T18:41:06.2703349Z Updated App Service Application settings and Kudu Application settings.
2020-04-21T18:41:32.4715682Z Updated App Service Application settings and Kudu Application settings.
2020-04-21T18:41:33.4179962Z Successfully updated deployment History at https://myProductsignalr.scm.azurewebsites.net/api/deployments/111587494492765
2020-04-21T18:41:33.5945654Z App Service Application URL: http://myProductsignalr.azurewebsites.net
2020-04-21T18:41:33.6180118Z ##[section]Finishing: Deploy Azure App Service
What amazes me is that it shows everything was OK - when it was far from "ok" :)
When I go to the Container settings after:
a) new code is published
b) the build pipeline fires
c) the release pipeline fires
I see this:
The tag is empty. If I pick some tag manually:
and choose "SAVE", everything works correctly (SignalR is up and running).
Clearly, I am missing something :/ Help me see what ;)
The root cause for me is that this fragment:
DockerImageTag: '$(Build.BuildNumber)'
should insert the build number (as stated), and the info from the container settings should be:
Pulling image: myAcrName.azurecr.io/mobile/signalr:20200421.09 (for BuildNumber 20200421.09). Instead it inserts the GIT COMMIT there as the tag and ends up with: Pulling image: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a. Why oh why? :)
[UPDATE 22.04 10:56]
I am posting the build pipeline I am currently using. I don't suppose it is important, as it works correctly, and the problem is more with the deployment of the correctly created Docker image (in ACR) than with the build pipeline creating the image. Nevertheless, here is the pipeline:
# Docker
# Build a Docker image
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'MyProductDockerACR'
  imageRepository: 'mobile/signalr'
  containerRegistry: 'myAcrName.azurecr.io'
  dockerfilePath: '**/Dockerfile'
  tag: '$(Build.BuildNumber)'
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push image to container registry
      inputs:
        containerRegistry: $(dockerRegistryServiceConnection)
        repository: $(imageRepository)
        command: 'buildAndPush'
        Dockerfile: $(dockerfilePath)
        tags: |
          $(tag)
I see the release you are using is configured via the UI. Its working logic is quite different from a release configured with YAML.
In fact, what you are seeing is just the different behavior produced when the release is triggered for a different reason.
I guess this release has an artifact source that targets Repos, right? You can confirm this by checking its icon.
When the release source comes from Repos, Build.BuildNumber is the short form of the commit id (8 characters), and Build.BuildId is the complete commit id.
If you want the release to keep using the Build.BuildNumber value that the corresponding build (which created/pushed the image) was using, you must make sure the release source targets that build. Also, that build needs to have artifacts generated. According to the YAML you shared, you clearly haven't done that.
Only when the release is triggered by a build together with its artifact will Build.BuildNumber have a value like 20200422.1, the one the build was using.
So please go to your release definition and re-configure its source to make sure it comes from a build artifact instead of the repository.
Yes, you are right, you have a mismatch in tags.
In the Docker@2 task you can define tags:
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: devopsmanual-acr

- task: Docker@2
  displayName: Build and Push
  inputs:
    repository: $(imageName)
    command: buildAndPush
    Dockerfile: build-docker-image/SampleAppForDocker/DOCKERFILE
    tags: |
      $(Build.BuildNumber)

- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: devopsmanual-acr
Your definition should be pretty much like this one, where devopsmanual-acr is the connection to your ACR.
I recently made a blog post about creating Docker images on Azure DevOps, so maybe this will also be helpful for you.
If this isn't enough to solve your issue, please edit your question and show how you create and push your images.
I have a free GitLab account.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper on EventStore.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests, and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the event store container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit#127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1 on port 1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure EventStore in my personal CI pipeline on my free account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided many details, but it seems you're using the Docker executor. In that case, services are not available on localhost but are only accessible via service aliases.
This is an extract from a working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using an appsettings file whose configuration points to localhost, you can override the setting using an environment variable in the CI file. Just remember to add environment variables as a configuration source. The CI snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in something like 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. Just noticed that your setup works with one account and doesn't work with another. My guess would be that you either use different executors (Docker doesn't work and Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).