I recently switched on GitLab's Docker Registry feature and it works quite well on my desktop. The next step for us is to use the registry when building our software via GitLab CI. Unlike building a Docker image, which is described in various ways in the documentation and in several medium.com posts that the official Twitter account mentions, I cannot find a way to pull our own images from GitLab's registry.
If we specify an image from the registry via the image directive in .gitlab-ci.yml, we get an authorization error (I expected the workers to authenticate themselves transparently, but this is not the case):
Running with gitlab-ci-multi-runner 1.4.1 (fae8f18)
Using Docker executor with image registry.host.tld/NAMESPACE/PROJECT:latest ...
Pulling docker image registry.host.tld/NAMESPACE/PROJECT:latest ...
ERROR: Build failed (system failure): API error (500): Get https://registry.host.tld/v2/NAMESPACE/PROJECT/manifests/latest: error parsing HTTP 403 response body: no error details found in HTTP response body: "{\"message\":\"forbidden\",\"status\":\"error\",\"http_status\":403}"
So someone on the worker would have to call docker login. Two problems with that:
We authenticate our users via LDAP. Should we really create a static user just for the CI server?
We have quite a few workers running here. That's a lot of work to SSH into every single one.
Have I overlooked something obvious, or should I go the route of running docker login on every worker node?
We run GitLab 8.10.6.
I had this problem, too.
There is also an issue about that on the gitlab-ci-multi-runner project:
https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/1663
According to the docs, you should just log in to the private registry on the machine where the gitlab-runner runs.
If you are running the gitlab-runner inside a Docker container, you have to mount the file with the login credentials into the gitlab-runner container.
So the command to run the gitlab container looks like this:
docker run -d --name gitlab-runner --restart always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /srv/gitlab-runner/config:/etc/gitlab-runner \
-v /root/.docker/config.json:/root/.docker/config.json \
gitlab/gitlab-runner
In this example, root must log in to the registry:
docker login https://registry.host.tld:5000
The important part here is the URI for the registry. I got an error similar to yours and overcame it by running:
docker login https://registry.host.tld:5000/v2/
This may differ on other registries; you have to look at the error message and use the part before 'NAMESPACE/PROJECT' as the login path.
E.g. if the error message looks like this:
ERROR: Build failed (system failure): API error (500):
Get https://registry.host.tld/v1/NAMESPACE/PROJECT/manifests/latest:
error parsing HTTP 403 response body: no error details found in HTTP response body: "
{\"message\":\"forbidden\",\"status\":\"error\",\"http_status\":403}"
The login must be:
docker login https://registry.host.tld/v1/
Hope this helps.
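As an aside: newer runner versions can also read registry credentials from a DOCKER_AUTH_CONFIG CI/CD variable holding a Docker config.json, which avoids logging in on every worker. A minimal sketch for generating that JSON; the credentials (ci-user/s3cret) are placeholders, not values from this setup:

```shell
# Docker stores credentials as base64("user:password"); "ci-user" and
# "s3cret" are placeholder credentials.
AUTH=$(printf '%s' 'ci-user:s3cret' | base64)

# Emit the config.json to paste into the DOCKER_AUTH_CONFIG variable.
cat <<EOF
{
  "auths": {
    "registry.host.tld:5000": {
      "auth": "${AUTH}"
    }
  }
}
EOF
```

The registry key must match the URI the runner pulls from, including the port.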
Related
While learning how to use Azure Container Registry with the official tutorial: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli
I tried to push images to my registry. The Hello World image from the tutorial works fine, but when I try to use my own images it fails. It also fails when I pull images from Docker Hub and try to push them to my Azure registry.
Of course, the images are correctly tagged and the CLI connection works fine.
I'm also following another Azure course in which I build the image with GitHub Actions (https://learn.microsoft.com/en-us/azure/aks/kubernetes-action); it also works great on that course's repo, but once I try with my own projects, it fails. This time the error is about the URL / the credentials:
After investigating, I'm sure that the credentials are correct, but the URL may be wrong, because it never gets created. That's why I was trying to push manually in the first place.
EDIT: I managed to make it work by changing the Wi-Fi network I used, but I still don't understand how this is possible, why it doesn't work on GitHub Actions, and what I should change in my configuration to make it work with the original Wi-Fi again.
I tried to reproduce the same issue in my environment and got the below output.
I created the Dockerfile and wrote a small script:
vi dockerfile
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
I built the image using the below command:
docker build -t my-apache2 .
I ran the image using the below command:
docker run -d -p 80:80 image_id
Created the container registry
After creating the registry, we should enable the access key; otherwise we will not be able to pull the image into a container instance.
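If you prefer the CLI over the portal, enabling the admin account (the credentials shown under "Access keys") can be sketched like this; "myregistry" is a placeholder name:

```shell
# Enable the registry's admin user; "myregistry" is a placeholder.
az acr update --name myregistry --admin-enabled true

# Print the username and passwords to use with `docker login`.
az acr credential show --name myregistry
```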
I have logged into the registry server
docker login login_server
Username:XXXX
password:XXXXX
After the login succeeded, I tagged the image and pushed it into the container registry:
docker tag image_name login_server/image_name
docker push login_server/image_name
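The crucial detail in the two commands above is the name: ACR only accepts pushes whose repository name is prefixed with the registry's login server. A small sketch of the naming convention, with placeholder names (myregistry.azurecr.io, my-apache2):

```shell
# Compose the fully qualified target name; both values are placeholders.
LOGIN_SERVER=myregistry.azurecr.io
IMAGE=my-apache2
TARGET="${LOGIN_SERVER}/${IMAGE}:v1"
echo "$TARGET"   # myregistry.azurecr.io/my-apache2:v1

# Then tag and push under that name:
#   docker tag "$IMAGE" "$TARGET"
#   docker push "$TARGET"
```

Pushing an image that lacks the login-server prefix is a common cause of failed pushes with otherwise correct credentials.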
Here we can find the pushed image under Repositories.
I created the container instance; when creating it, we have to set the image source to the container registry, as only then will the pushed image be available.
Problem:
We are trying to run a self-hosted agent on my Windows 10 (Enterprise) machine using the docker-container approach explained in the article. We can create the Docker image successfully (for Windows) as explained in the article, but while executing the created image with the run command we get the error below. We tried to google it but didn't find any resolution.
Error:
Determining matching Azure Pipelines agent...
Invoke-RestMethod : The remote name could not be resolved: 'dev.azure.com'
Steps Followed:
Installed docker engine on my Windows 10 laptop
Followed the instructions in the aforementioned article and was able to create the Docker image with the docker build command.
But while running the below command to run the created image, we get the above error.
docker run -e AZP_URL="https://dev.azure.com/MyOrg/" -e AZP_TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXX" -e AZP_AGENT_NAME="LocalSelfHostTest1" -e AZP_POOL="LocalSelfHostTest" dockeragent:latest
XXXXXXXXX – PAT generated for my project.
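Since the error says the container cannot resolve dev.azure.com, a first step could be checking DNS from inside a throwaway container; the busybox image and the 8.8.8.8 DNS server below are illustrative choices, not part of the original setup (on a Windows-container host you would use a Windows base image instead):

```shell
# Check name resolution from inside a container (Linux example).
docker run --rm busybox nslookup dev.azure.com

# If lookups fail only inside containers, try passing explicit DNS servers:
docker run --dns 8.8.8.8 \
  -e AZP_URL="https://dev.azure.com/MyOrg/" \
  -e AZP_TOKEN="XXXXXXXXXXXXXXXXXXXXXXXXXX" \
  -e AZP_AGENT_NAME="LocalSelfHostTest1" \
  -e AZP_POOL="LocalSelfHostTest" \
  dockeragent:latest
```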
We’ll appreciate your help.
Regards
arvind
I tried installing docker on a server of mine using this tutorial.
I want to run docker images remotely and use the portainer web-interface to administrate everything.
However, when I get to the point where I need to test my installation and I enter the command $ sudo docker run hello-world, I only get the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"join session keyring: create session key: disk quota exceeded\"": unknown. ERRO[0000] error waiting for container: context canceled
I tried the following methods:
"Install Docker CE / Install using the convenience script"
"Install Docker CE / Install using the repository"
This also happens when I try to run other images (e.g. portainer).
I hope this is enough information.
I am new to docker, so I don't know how I should debug it efficiently.
Try increasing the maxkeys kernel parameter:
echo 50000 > /proc/sys/kernel/keys/maxkeys
see: https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/2
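Note that the echo above only lasts until reboot. A sketch for making the setting persistent via sysctl (the file name under /etc/sysctl.d is an arbitrary choice):

```shell
# Persist the larger key limit across reboots (requires root).
cat >/etc/sysctl.d/99-docker-keys.conf <<'EOF'
kernel.keys.maxkeys = 50000
EOF

# Apply without rebooting.
sysctl --system
```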
So, as it turns out, I connected to the wrong vServer.
The one I was connected to is using LXD (as you might have seen in my previous comment), which doesn't support Docker (at least not the way this guide advises).
When I ran the same setup on a vServer using a bare-metal(type 1) hypervisor, it worked without a problem.
I think this has to do with automatic storage allocation under LXD, but this is just a guess.
Artifactory 6.0.1 PRO
$ docker --version
Docker version 18.03.1-ce, build 9ee9f40
We have set up Artifactory with several Docker repositories. We do have SSO enabled against our SAML authentication system, but now we cannot seem to log in to the Docker repository:
I do have DOCKER_OPTS=" --insecure-registry"
$ docker login artifactory.foo.internal
Username: admin
Password:
Error response from daemon: Get https://artifactory.foo.internal/v2/: unknown: Unsupported docker repository request for 'v2'
No matter what I use in username/password I get the same error.
I have also tried using artifactory.foo.internal:443
We're using Tomcat as a direct connection. The Docker repository is configured to use the repository path method.
Any suggestions would be greatly appreciated.
This is the output I get by clicking "show complete raw", because the normal view is just one black line (with no log inside).
I tried what https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/ci/ssh_keys/README.md says, but with no luck.
I have a server with two Docker containers: one with GitLab, the other with gitlab-runner. I have tested with a shared runner and with a specific runner (like the one shown below). In both cases the result is the same: the job fails because it cannot fetch my repository.
It only works (with both shared and specific runners) if I set my repo to public (neither private nor internal). And I would like to work with a private repo.
Running with gitlab-runner 10.1.0 (c1ecf97f)
  on runner-myrepo-js (900c71a4)
Using Docker executor with image node:latest ...
Using docker image sha256:46da793b60ee30f8df451729e315f531f0acd24a92c8fba10401513530beff99 for predefined container...
Pulling docker image node:latest ...
Using docker image node:latest ID=sha256:cbea0ebe4f3cf07daecf22b196ba6b5d76d1fe1b6548098e5bc9dd9b0736f262 for build container...
Running on runner-900c71a4-project-32-concurrent-0 via a16b07f5f84b...
Cloning repository...
Cloning into '/builds/gitlab/myuser/myrepo-js'...
remote: HTTP Basic: Access denied
fatal: Authentication failed for 'https://example.com/gitlab/myuser/myrepo-js.git/'
ERROR: Job failed: exit code 1
What should I do?
I have both gitlab and gitlab-runner in version 10.
We were struggling with this today, and it is definitely an SSL problem:
Our frontend used SSL certificates, but the backend (GitLab) used plain HTTP (behind a proxy). This broke the real-time log of jobs and the runners' access to private repos.
Adding the SSL certificates and forcing HTTPS in the backend solved the problem.
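For an Omnibus-packaged GitLab behind a TLS-terminating proxy, the change can be sketched like this; the hostname and the assumption that the proxy forwards plain HTTP on port 80 are placeholders for your setup:

```shell
# Append the HTTPS external URL to the Omnibus config (run as root);
# example.com is a placeholder hostname.
cat >>/etc/gitlab/gitlab.rb <<'EOF'
external_url 'https://example.com'
# The proxy terminates TLS, so GitLab's bundled nginx keeps serving HTTP.
nginx['listen_port'] = 80
nginx['listen_https'] = false
EOF

gitlab-ctl reconfigure
```

With external_url on HTTPS, the clone URLs handed to runners and the job-log endpoints match the frontend scheme again.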