GitLab pipeline failing with "remote: HTTP Basic: Access denied"

I'm new to GitLab pipelines and want to set one up for one of my Python projects.
I'm using the Docker gitlab-runner container with this configuration file:
version: '3'
services:
  runner:
    container_name: runner
    image: gitlab/gitlab-runner:latest
    restart: unless-stopped
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./data:/etc/gitlab-runner/
      - /var/run/docker.sock:/var/run/docker.sock
Whenever a pipeline is executed I get this error message:
Running with GitLab-runner 14.10.1 (f761588f)
on docker xxxxxxx
Preparing the "docker" executor
Using Docker executor with image python:latest ...
Pulling docker image python:latest ...
Using docker image sha256:8dec8e39f2eca1ee1f1b668619023da929039a39983de4433d42d25a7b79267c for python:latest with digest python@sha256:567018293e51a89db96ce4c9679fdefc89b3d17a9fe9e94c0091b04ac5bb4e89 ...
Preparing environment
Running on runner-xxxxxxxxx-project-38-concurrent-0 via xxxxxxxx...
Getting source from Git repository
Fetching changes with git depth set to 20...
Reinitialized existing Git repository in /builds/group/project/.git/
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'http://mygitlab.de/group/projekt.git/'
Cleaning up a project directory and file-based variables
ERROR: Job failed: exit code 1
The GitLab Runner is assigned to the project. I already tried to reset everything and register it with my IP address, my DNS name, my local IP, and my local device name, but nothing has worked yet.
I read about others having the same problem, mostly from 2016 or earlier. Is there anything I'm missing? Is there a setting I have to configure?
EDIT:
Thanks, @Vadim, for correcting my tags.
After some more testing, I tried the same with a public repository, and to my surprise it worked. The problem is the authorization. I still need to add as much as possible back to my configuration, test whether it affects the public repo, and then try it with a private repo again.
I will keep this updated, as I've heard of others having the same problem.

In my case, GitLab was behind a proxy other than the built-in Traefik proxy. I believe this is what made the following setting necessary. After registering your runner, edit the config.toml and add the clone_url:
[[runners]]
  url = "https://gitlab.example.com"
  clone_url = "https://gitlab.example.com"
This solved the issue for me.

One thing that might help is to pass the actual IP in the extra hosts for the runner.
It goes into the runner's config.toml, under the Docker executor section, as something like extra_hosts = ["mygitlab.de:192.1xx.x.x"] (the format is "hostname:IP", as with Docker's --add-host).
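A minimal sketch of the surrounding config.toml section, assuming the Docker executor (the hostname and IP here are placeholders):
[[runners]]
  url = "https://mygitlab.de"
  executor = "docker"
  [runners.docker]
    image = "python:latest"
    # hostname:IP mapping injected into job containers, same format as docker run --add-host
    extra_hosts = ["mygitlab.de:192.168.1.10"]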

Related

How to select a shared gitlab runner with docker installed?

It's my first time setting up CI/CD using gitlab.com, and I would like to use GitLab's shared runners for my project.
My CI/CD process uses a docker-compose file which I build and deploy.
I saw that there is an option to select a gitlab-runner instance that has Docker installed, so I will be able to run my CI/CD process on it.
With that being said, I can't find a way to configure a specific shared gitlab-runner for my project.
I'd be happy to get your help here: how can I assign a specific shared gitlab-runner to my project?
Thanks
You can use tags to select runners, but in your case this is not necessary. You can just specify docker as the image, attach a docker:dind service container, and your project will build on the shared runners.
image: docker:latest
services:
  - docker:dind
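For completeness, if you do want to pin a job to specific runners, a tag-based sketch looks like this (the tag name is hypothetical):
build:
  image: docker:latest
  services:
    - docker:dind
  tags:
    - my-shared-docker-runner   # hypothetical tag; the job only runs on runners carrying it
  script:
    - docker info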

Access denied when pushing docker image to gitlab's (on prem) integrated docker registry

When pushing a Docker image with a modified tag (to contain the registry) to the GitLab integrated registry, I get an access denied error.
The GitLab registry is used per project. Once the registry is enabled for a project, there is a hint on how to push images to it at https://gitlab.mydomain.com/**path/to/project**/container_registry.
The problem was solved when the full path was included in the tag name.
When I changed the tag name to [registryUrl]:[registryPort]/path/to/project/[imageNameWithTags], I was able to push to the repository/registry.
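A sketch of the full sequence with placeholder values (the registry host, port, and project path are hypothetical):
docker login gitlab.mydomain.com:5050
docker tag myimage:latest gitlab.mydomain.com:5050/path/to/project/myimage:latest
docker push gitlab.mydomain.com:5050/path/to/project/myimage:latest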
Indeed you need to do docker login ... as described on the /container_registry page.
You can also rely on some GitLab predefined environment variables to make the code generic and reusable across many projects.
Here is an example of doing it in .gitlab-ci.yml:
build-image:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
  script:
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker login -u $CI_REGISTRY_USER -p "$CI_JOB_TOKEN" $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE
See the full example in one of our projects.

GitLab CI/CD configuration problem using shared runners

I have problems with the GitLab CI/CD configuration - I'm using the free shared runners on GitLab itself.
I have a Joomla (test) project using Docker - I'm learning how it works.
I created .gitlab-ci.yml with:
image: docker:latest
services:
  - docker:dind
at the top of the file.
In the test stage I want to run the Docker image created in the build stage.
When I add:
services:
  - mariadb:latest
to the test stage, I always get
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? at the docker pull command. Without it, I get an error at the docker run command during Joomla image initialization because there is no MySQL server.
Any help will be appreciated.
If you set
services:
  - mariadb:latest
in your test job, this will override the globally defined services. Therefore, the docker daemon is not running during test. This also explains why you do not get the Docker daemon error when you omit the services definition for the test job.
Either specify the docker:dind service also for the test job, or remove the local services definition and add mariadb to your global services definition.
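A minimal sketch of the first option, keeping both services on the test job (job name and image are placeholders):
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
    - mariadb:latest
  variables:
    MYSQL_ROOT_PASSWORD: example   # hypothetical credentials for the mariadb service
  script:
    - docker info   # confirms the Docker daemon is reachable in this job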

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on an Elasticsearch instance, so to run them on a Bitbucket pipeline I would have to execute a docker run command to spin up my Elasticsearch instance during the integration tests.
But as some of you probably know, there's a limitation on Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
So, given that limitation, I don't know how I can spin up my ES cluster with all the configuration I need inside it - painless scripts, mappings, exposed ports - so that it's available to my integration tests.
Does someone have any idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, the Bitbucket pipeline was not showing error logs for this, so I was completely blind and had to try many things until I found out. Since I can't build and run my own image in Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is what my YAML looked like in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your Elasticsearch image as a service, as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
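A sketch of the build-and-push steps, assuming a hypothetical Docker Hub repository name:
# build the image from the Dockerfile above and push it to Docker Hub
docker build -t <your-dockerhub-user>/elasticsearch-bitbucket:7.0.1 .
docker login
docker push <your-dockerhub-user>/elasticsearch-bitbucket:7.0.1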
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You could also supply some more configuration inside your elasticsearch.yml:
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add the service to your pipeline as shown below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # wait for Elasticsearch to come up; a real pipeline should not rely on a fixed sleep
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests

Gitlab CI cannot pull image from private docker registry

I'd like to create a Docker-based GitLab CI runner which pulls the Docker images for the build from a private Docker registry (v2). I cannot make the GitLab Runner pull the image from a local registry; it tries to GET something from a /v1 API. I get the following error message:
ERROR: Build failed: Error while pulling image: Get http://registry:5000/v1/repositories/maven/images: dial tcp: lookup registry on 127.0.1.1:53: no such host
Here's a minimal example, using docker-compose and a web browser.
I have the following docker-compose.yml file:
version: "2"
services:
gitlab:
image: gitlab/gitlab-ce
ports:
- "22:22"
- "8080:80"
links:
- registry:registry
gitlab_runner:
image: gitlab/gitlab-runner
volumes:
- /var/run/docker.sock:/var/run/docker.sock
links:
- registry:registry
- gitlab:gitlab
registry:
image: registry:2
After the first GitLab login, I register the runner with the GitLab instance:
root@130d08732613:/# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/ci):
http://192.168.61.237:8080/ci
Please enter the gitlab-ci token for this runner:
tE_1RKnwkfj2HfHCcrZW
Please enter the gitlab-ci description for this runner:
[130d08732613]: docker
Please enter the gitlab-ci tags for this runner (comma separated):
Registering runner... succeeded runner=tE_1RKnw
Please enter the executor: docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh, virtualbox, docker+machine:
docker
Please enter the default Docker image (eg. ruby:2.1):
maven:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
After this, I see the GitLab Runner in my GitLab instance.
After this, I push a simple Maven image to my newly created Docker registry:
vilmosnagy@vnagy-dell:~/$ docker tag maven:3-jdk-7 172.19.0.2:5000/maven:3-jdk7
vilmosnagy@vnagy-dell:~/$ docker push 172.19.0.2:5000/maven:3-jdk7
The push refers to a repository [172.19.0.2:5000/maven]
79ab7e0adb89: Pushed
f831784a6a81: Pushed
b5fc1e09eaa7: Pushed
446c0d4b63e5: Pushed
338cb8e0e9ed: Pushed
d1c800db26c7: Pushed
42755cf4ee95: Pushed
3-jdk7: digest: sha256:135e7324ccfc7a360c7641ae20719b068f257647231d037960ae5c4ead0c3771 size: 1794
(I got the 172.19.0.2 IP address from the output of a docker inspect command.)
After this I create a test project in GitLab and add a simple .gitlab-ci.yml file:
image: registry:5000/maven:3-jdk-7

stages:
  - build
  - test
  - analyze

maven_build:
  stage: build
  script:
    - "mvn -version"
And during the build, GitLab gives the error seen at the beginning of the post.
If I enter the running gitlab-runner container, I can access the registry under the given URL:
vilmosnagy@vnagy-dell:~/$ docker exec -it comptest_gitlab_runner_1 bash
root@c0c5cebcc06f:/# curl http://registry:5000/v2/maven/tags/list
{"name":"maven","tags":["3-jdk7"]}
root@c0c5cebcc06f:/# exit
exit
vilmosnagy@vnagy-dell:~/$
But the error stays the same.
Do you have any idea how to force the gitlab-runner to use the v2 API of the private registry?
Current GitLab and GitLab Runner versions support this; see: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#use-a-private-container-registry
On older GitLab versions I solved this by copying an auth entry into ~/.docker/config.json:
{
  "auths": {
    "my.docker.registry.url": {
      "auth": "dmlsbW9zLm5hZ3k6VGZWNTM2WmhC"
    }
  }
}
I logged in to the registry from my computer and copied the resulting auth entry into the GitLab Runner's Docker container.
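On current GitLab versions, the approach in the linked docs amounts to exposing the same JSON to the runner through a DOCKER_AUTH_CONFIG CI/CD variable set on the project; a sketch of its value, with the registry host and base64 credentials as placeholders:
# value of the DOCKER_AUTH_CONFIG project CI/CD variable
{
  "auths": {
    "registry:5000": {
      "auth": "<base64 of username:password>"
    }
  }
}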
What version of Docker do you run on GitLab?
Also, for a v2 registry you have to explicitly allow an insecure registry with a command-line switch, or secure your registry using a certificate.
Otherwise Docker falls back to the v1 registry if it gets a security exception.
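For reference, a minimal sketch of allowing an insecure registry on the Docker host that runs the jobs (the registry address is a placeholder; restart the Docker daemon after changing it):
# /etc/docker/daemon.json
{
  "insecure-registries": ["registry:5000"]
}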
