Unable to connect to container in GitLab CI on my free account

I have a free GitLab account.
I also have a company account (not sure which plan).
I have the exact same project in both: a wrapper around EventStore.
In the CI pipeline I want to spin up an EventStore container so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests, and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the EventStore container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit@127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1:1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why would it differ?
And more importantly, how can I configure EventStore in the CI pipeline of my free account so that I can connect to it, if localhost is not a valid host URI for whatever reason?

Well, you have not provided many details, but it seems you're using the Docker executor. In that case, services are not available on localhost; they are only accessible through their service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using an appsettings file that points to localhost, you can override the setting with an environment variable in the CI file. Just remember to add environment variables as a configuration source. The YAML above has such an override (EventStore__ConnectionString, where the double underscore maps to the nested JSON key), matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land around 12.9 or 13, so pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
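For illustration, if you did move to the Kubernetes executor, the override could look like this minimal sketch (the job name is an assumption; the point is that the host becomes localhost because service containers share the build pod):

test-on-k8s:
  stage: test
  script:
    - dotnet test
  variables:
    # Services run in the same pod as the build, so use localhost instead of the alias.
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@localhost:1113
  services:
    - name: eventstore/eventstore:latest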
P.S. I just noticed that your setup works with one account and not with the other. My guess is that you either use different executors (Docker failing while Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).

Related

Running a Docker container in an Azure pipeline that runs on a self-hosted agent itself running in a container

I am trying to pull a Docker container in my Azure pipeline. The pipeline runs on a self-hosted agent, which itself runs in a Docker container. I get the following error:
Is it possible to run a container in the pipeline when the pipeline itself runs on a containerized self-hosted agent?
Pipeline YAML:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

pool:
  vmImage: ubuntu-latest
  name: SYS-DEV-Self-hosted
  demands:
  - agent.name -equals SYSDEV-agent

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '15.x'
  displayName: 'Install Node.js'

- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'

- script: docker pull qmate
  workingDirectory: ./
  displayName: 'Docker Pull'

- script: |
    cd tests/QmateE2E/regression
    npm install
    npx wdio config.js
  displayName: 'npm install and build'
You can configure the self-hosted agent inside the Docker container.
You don't need to run the Docker container in the pipeline. Instead, install the self-hosted agent inside the Docker instance and register that container as a self-hosted agent in the agent pool.
You can specify multiple containers to run with container jobs, if you want another container to interact with. Any container you specify in the pipeline is pulled and started automatically by Azure DevOps. I would normally specify the container to run on in a top-level container: section, or under a specific job if multiple jobs are present.
The safe option, in case more containers are added, is to set target: qmate on each of the steps that should run in the container.
As for the error you had here: for steps that interact with Docker, like docker build, you can also set target: host on the specific task. Azure DevOps seems to mount enough to share most of the context between host and container; in this case, the image you are trying to pull was likely already pulled when the pipeline started.
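A minimal sketch of what that could look like (the step layout is an assumption, not your exact pipeline; only the target property is the point here):

resources:
  containers:
  - container: qmate
    image: qmate.int.repositories.cloud.sap/qmate-executor:latest

steps:
# Runs inside the 'qmate' container resource.
- script: npm install
  target: qmate
# Runs on the host so it can talk to the host's Docker daemon.
- script: docker pull qmate.int.repositories.cloud.sap/qmate-executor:latest
  target: host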

Azure DevOps - release pipeline with Docker and Azure Container Registry (ACR) - problem with tags

I have a very weird (and, I suppose, easy to fix) problem :) I am trying to get a working CI/CD pipeline in Azure. For this purpose, I have a repository in Azure DevOps with a build and a release pipeline. I publish Docker images to Azure Container Registry and, during release, I pull this image (or at least I try to, because it doesn't work) and publish it to Web App for Containers. The "app" in my case is a SignalR hub on .NET Core 3.1 (but I don't suppose that makes a difference for the problem I am having).
If somebody wants to know in detail how I configured it, here is the tutorial I used:
https://wikiazure.com/devops/azure-devops-automate-your-release-pipeline-to-provision-a-docker-container-to-azure-web-app-for-containers/
There were some doubts/differences with the tutorial (for example, why the web app is initially configured against Docker Hub when in fact it uses ACR; why the tutorial uses an Azure Resource Manager connection to reach ACR and not a dedicated Docker container --> ACR connection; and why, later in the build pipeline, some odd ID is set for dockerRegistryServiceConnection, where I put the name of my ACR Docker service connection).
But the whole build pipeline works. It publishes the image to ACR. Everything is fine up to this step.
The problem starts when I want to publish the Azure WebApp with this image. The problem is with ... TAGS :) They are mismatched. I have automatic CI/CD, so when I push a change to the repo the build pipeline runs and creates the image in ACR. Then the release pipeline runs. Everything looks "correct", meaning no errors are shown and the release is green.
But when I go to the App Service container settings, I see in the logs:
2020-04-21 18:02:28.321 INFO - Pulling image: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a
2020-04-21 18:02:28.761 ERROR - DockerApiException: Docker API responded with status code=NotFound, response={"message":"manifest for myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a not found: manifest unknown: manifest unknown"}
2020-04-21 18:02:28.761 ERROR - Pulling docker image myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a failed:
2020-04-21 18:02:28.762 INFO - Pulling image from Docker hub: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a
2020-04-21 18:02:28.867 ERROR - DockerApiException: Docker API responded with status code=InternalServerError, response={"message":"Get https://myAcrName.azurecr.io/v2/mobile/signalr/manifests/c7aead0c46b66afc4131935efc7e6a51280dfb1a: unauthorized: authentication required"}
2020-04-21 18:02:28.870 ERROR - Image pull failed: Verify docker image configuration and credentials (if using private repository)
A very sophisticated error, but the root cause is that it is trying to pull the image with a non-existing tag, namely the GIT COMMIT hash. It is supposed to get the image by $(Build.BuildId) (my first attempt) or by $(Build.BuildNumber) (my second attempt).
Here is what this pipeline step (Deploy Azure App Service) looks like:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy Azure App Service'
  inputs:
    azureSubscription: mySubcsriptionARM
    appType: webAppContainer
    WebAppName: myProductsignalr
    DockerNamespace: myAcrName.azurecr.io
    DockerRepository: mobile/signalr
    DockerImageTag: '$(Build.BuildNumber)'
When I go to the release pipeline logs, under "Deploy Azure App Service" I see:
2020-04-21T18:41:01.6012767Z ##[section]Starting: Deploy Azure App Service
2020-04-21T18:41:01.6367124Z ==============================================================================
2020-04-21T18:41:01.6367787Z Task : Azure App Service deploy
2020-04-21T18:41:01.6368381Z Description : Deploy to Azure App Service a web, mobile, or API app using Docker, Java, .NET, .NET Core, Node.js, PHP, Python, or Ruby
2020-04-21T18:41:01.6368765Z Version : 4.163.5
2020-04-21T18:41:01.6369158Z Author : Microsoft Corporation
2020-04-21T18:41:01.6369603Z Help : https://aka.ms/azureappservicetroubleshooting
2020-04-21T18:41:01.6369976Z ==============================================================================
2020-04-21T18:41:03.8970184Z Got service connection details for Azure App Service:'myProductsignalr'
2020-04-21T18:41:04.5534864Z Trying to update App Service Configuration settings. Data: {"appCommandLine":null,"linuxFxVersion":"DOCKER|myAcrName.azurecr.io/mobile/signalr:1f283100"}
2020-04-21T18:41:05.5465725Z Updated App Service Configuration settings.
2020-04-21T18:41:05.5495890Z Trying to update App Service Application settings. Data: {"DOCKER_CUSTOM_IMAGE_NAME":"myAcrName.azurecr.io/mobile/signalr:1f283100"}
2020-04-21T18:41:06.2703349Z Updated App Service Application settings and Kudu Application settings.
2020-04-21T18:41:32.4715682Z Updated App Service Application settings and Kudu Application settings.
2020-04-21T18:41:33.4179962Z Successfully updated deployment History at https://myProductsignalr.scm.azurewebsites.net/api/deployments/111587494492765
2020-04-21T18:41:33.5945654Z App Service Application URL: http://myProductsignalr.azurewebsites.net
2020-04-21T18:41:33.6180118Z ##[section]Finishing: Deploy Azure App Service
What amazes me is that it reports everything was OK, when it was far from OK :)
When I go to the container settings after:
a) new code is published
b) the build pipeline fires
c) the release pipeline fires
the tag shown there is empty. If I pick some tag manually and choose "SAVE", everything works correctly (SignalR is up and running).
Clearly, I am missing something :/ Help me see what ;)
The root cause for me is this fragment:
DockerImageTag: '$(Build.BuildNumber)'
It should insert the build number (as stated), and the container settings log should read:
Pulling image: myAcrName.azurecr.io/mobile/signalr:20200421.09 (for BuildNumber 20200421.09), but instead it inserts the GIT COMMIT hash as the tag and ends up with: Pulling image: myAcrName.azurecr.io/mobile/signalr:c7aead0c46b66afc4131935efc7e6a51280dfb1a. Why, oh why? :)
[UPDATE 22.04 10:56]
I am posting the build pipeline I am currently using. I don't suppose it matters much, as it works correctly; the problem lies more with the deployment of a correctly created Docker image (in ACR) than with the build pipeline creating that image. Nevertheless, here is the pipeline:
# Docker
# Build a Docker image
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- master

resources:
- repo: self

variables:
  dockerRegistryServiceConnection: 'MyProductDockerACR'
  imageRepository: 'mobile/signalr'
  containerRegistry: 'myAcrName.azurecr.io'
  dockerfilePath: '**/Dockerfile'
  tag: '$(Build.BuildNumber)'
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push image to container registry
      inputs:
        containerRegistry: $(dockerRegistryServiceConnection)
        repository: $(imageRepository)
        command: 'buildAndPush'
        Dockerfile: $(dockerfilePath)
        tags: |
          $(tag)
I saw that the release you are using is configured through the UI. Its logic differs quite a bit from one configured in YAML.
What you are seeing is simply different behavior depending on what triggered the release.
I guess this release has an artifact source targeting Repos, right? You can confirm this by checking its icon.
When the release source comes from Repos, Build.BuildNumber is the short commit ID (8 characters) and Build.BuildId is the complete commit ID.
If you want the release to use the Build.BuildNumber value that the corresponding build (which created/pushed the image) used, you must make sure the release source targets that build. That build also needs to have generated artifacts; according to the YAML you shared, it obviously hasn't.
Only when the release is triggered by a build along with its artifact can Build.BuildNumber look like the 20200422.1 that the build was using.
So please go to your release definition and re-configure its source so that it comes from the build artifact instead of the repository.
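As a minimal sketch, the build could publish an artifact with the standard PublishBuildArtifacts@1 task appended to the build job's steps (the path and artifact name here are illustrative; adapt them to what your build actually produces):

    # Publish an artifact so a release can use this build as its source.
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'drop'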
Yes, you are right: you have a mismatch in tags.
In the Docker@2 task you can define tags:
steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: devopsmanual-acr

- task: Docker@2
  displayName: Build and Push
  inputs:
    repository: $(imageName)
    command: buildAndPush
    Dockerfile: build-docker-image/SampleAppForDocker/DOCKERFILE
    tags: |
      $(Build.BuildNumber)

- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: devopsmanual-acr
Your definition should look pretty much like this one, where devopsmanual-acr is the connection to your ACR.
I recently made a blog post about creating Docker images on Azure DevOps, so maybe it will also be helpful for you.
If this isn't enough to solve your issue, please edit your question and show how you create and push your images.

Are containers available between stages in GitLab CI

Is a container that is used in the build stage accessible in the next stage? I have YAML like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container, if possible.
I was trying to avoid building a new container and pushing it to a registry only to pull it down again, but maybe there is no other way? If I did this, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) are passed between stages by default and can also be passed explicitly between jobs.
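As a minimal sketch of the artifacts mechanism (the dist/ path is illustrative):

build_backend:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - dist/

test_frontend:
  stage: test
  script:
    - make run-tests   # files in dist/ from build_backend are restored here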
If you need to run tests against a container, you should indeed pull it down again from a registry. Then you can use the Docker-in-Docker (dind) service to run your tests.
I think this blog post explains a similar use case nicely. The testing job described there is the following:
test:
  stage: test
  script:
    - docker run -d --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
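Adapted to the build_backend/test_frontend layout from the question, a dind-based test job might look like this sketch (it assumes the backend image was pushed to the project registry by an earlier job and that the runner allows privileged mode; image names and the test-runner command are illustrative):

test_frontend:
  stage: test
  image: docker:stable
  services:
    - docker:dind
  script:
    # Pull the previously pushed backend image, start it, then run the
    # test suite in a linked container so tests can reach the backend.
    - docker pull $CI_REGISTRY_IMAGE/web-app:latest
    - docker run -d --name backend $CI_REGISTRY_IMAGE/web-app:latest
    - docker run --link backend:backend $CI_REGISTRY_IMAGE/test-runner:latest make run-tests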

Possible solution for the Bitbucket Pipelines docker-run limitation

My integration tests depend heavily on Elasticsearch, so to run them in a Bitbucket pipeline I would have to execute the docker run command to spin up my Elasticsearch instance during the integration tests.
But as some of you probably know, there's a limitation in Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how I can spin up my ES cluster with all the configuration I need inside it (Painless scripts, mappings, exposed ports) so that it is available for my integration tests.
Does someone have any idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
It was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, the Bitbucket pipeline was not showing error logs for this failure, so I was completely blind and had to try many things to find the cause. Since I can't build and run my own image in Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is how my YAML looked in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your Elasticsearch image as a service, as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution: I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
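For instance, the Pipelines service definition could then reference the pushed image like this (the Docker Hub repository name is a placeholder):

definitions:
  services:
    elasticsearch:
      # Replace with your own Docker Hub repository and tag.
      image: yourdockerhubuser/custom-elasticsearch:7.0.1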
You could also supply some more configuration inside your elasticsearch.yml.
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
You can use my Docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add the service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # Waiting for elasticsearch to start. In a real pipeline you should poll for readiness instead of sleeping.
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests

GitLab CI deploy on multiple servers

I use a GitLab runner and it works fine for a single server. The gitlab-ci.yml is simple:
stages:
  - test
  - deploy

test:
  stage: test
  image: php
  tags:
    - docker
  script:
    - echo "Run tests..."

deploy:
  stage: deploy
  tags:
    - shell
  script:
    - sh deploy.sh
As I said, this is fine for a single server, but how do I deploy the same app to another server? I tried the same gitlab-runner config (the same config.toml), but then only one of the servers got updated, at random.
Is there a way for GitLab CI to be triggered by more than one runner and deploy to all of the servers according to the gitlab-ci.yml?
You can register several runners (e.g. tagged serverA and serverB) from different servers and have multiple deployment jobs, each of them performed by a different runner. This works because you can set more than one tag on a job, and only a runner having all of those tags will pick it up.
stages:
  - test
  - deploy

test:
  stage: test
  image: php
  tags:
    - docker
  script:
    - echo "Run tests..."

deployA:
  stage: deploy
  tags:
    - shell
    - serverA
  script:
    - sh deploy.sh

deployB:
  stage: deploy
  tags:
    - shell
    - serverB
  script:
    - sh deploy.sh
However, take into account the situation where one of the deployment jobs fails: you would end up with two different versions of the code on the two servers. Depending on your situation this might or might not be a problem.
Yes there is, just set up two jobs for the same stage:
stages:
  - deploy

deploy:one:
  stage: deploy
  script:
    - echo "Hello CI one"

deploy:two:
  stage: deploy
  script:
    - echo "Hello CI two"
If necessary you can use tags on your runners to choose which one to use.
Since 2016, you now have Environments and deployments
Environments describe where code is deployed.
Each time GitLab CI/CD deploys a version of code to an environment, a deployment is created.
GitLab:
Provides a full history of deployments to each environment.
Tracks your deployments, so you always know what is deployed on your servers.
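A minimal sketch of declaring an environment on the deploy job from the question, so each run is recorded as a deployment (the environment name and URL are illustrative):

deploy:
  stage: deploy
  tags:
    - shell
  script:
    - sh deploy.sh
  environment:
    name: production
    url: https://example.com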
It integrates well with Prometheus, and with GitLab 13.11 (April 2021) you even have:
Update a deploy freeze period in the UI
In GitLab 13.2, we added the ability to create a deploy freeze period in the project's CI/CD settings.
This capability helps teams avoid unintentional deployments, reduce uncertainty, and mitigate deployment risks.
However, it was not possible to update deploy freezes.
In GitLab 13.11, we are adding the ability to edit an existing deploy freeze. This way, you can update the freeze period to match your business needs.
See Documentation and Issue.
As shown in "gitlab-ci.yml deployment on multiple hosts", you can use YAML anchors to trigger parallel deployments to multiple environments, meaning multiple servers.
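A sketch of that anchor-based approach, adapted to the deploy.sh job from the question (the tags and environment names are illustrative):

.deploy_template: &deploy_definition
  stage: deploy
  script:
    - sh deploy.sh

deploy:serverA:
  <<: *deploy_definition
  tags: [shell, serverA]
  environment:
    name: production/serverA

deploy:serverB:
  <<: *deploy_definition
  tags: [shell, serverB]
  environment:
    name: production/serverB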
