Runners cannot pull from private GitLab Container Registry

I tried to update our infra to pull our pipeline images from our self-hosted GitLab's Container Registry (previously we used DigitalOcean's Container Registry, and everything worked).
The pipeline image is in a separate repo on the same GitLab instance and pushed to the Container Registry.
The application's pipeline sometimes fails with a permission error saying it cannot pull the image. This is weird, because if I restart the pipeline it works and can pull the image. I'm not using any variable like DOCKER_AUTH_CONFIG, since according to the docs the runner should be able to access images from the private project's Container Registry. Also, it always works on the master branch.
If the image is pulled, it usually works for around 1-2 hours, then it starts to fail.
We are using auto-scaled workers, if that's important.
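For reference, the job pulls the image roughly like this (a minimal sketch; the registry host and image path are placeholders). As far as I understand the docs, the runner authenticates with the job's CI_JOB_TOKEN on the same instance, which is why DOCKER_AUTH_CONFIG shouldn't be needed:

```yaml
# Hedged sketch of the job definition; host and image path are placeholders.
# On the same GitLab instance the runner should authenticate with the job's
# CI_JOB_TOKEN, so no DOCKER_AUTH_CONFIG variable is set.
image: registry.gitlab.example.com/infra/pipeline-images/builder:latest

test:
  stage: test
  script:
    - ./run-tests.sh
```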

The issue was a syntax error in config.toml. GitLab Runner did not throw any error; it just randomly failed to load it.

Related

GitLab Build Design: Tests from local image?

I’m working on a build pipeline using docker-in-docker (through a docker:20.10-dind service) that should:
build a docker image from a base image + plugin files
run unit and integration tests using that image (they require a mariadb service, so I'd like to cleanly separate that out into a test phase)
then publish the image by pushing it to the registry if the tests were successful
During build I tag the image as all of:
name:latest
registry/projectid/name:latest
registry/projectid/name:base-image-version
In the test phase I tell the job to use image: name:latest (i.e. without remote registry information) as the image for running the job.
I expected it to use the image existing in the local D-in-D service, but it doesn't, and I get the following error:
ERROR: Job failed (system failure): failed to pull image "name:latest" with specified policies [always]: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login' (manager.go:205:0s)
Is there any way to change the pull policy for just one pipeline, or even better for just one phase/job in a pipeline?
The only place I could find was config.toml for a whole build runner, which is really not the granularity I am looking for.
If it's absolutely not possible, I could tag the image as registry/project/name:candidate during build, push it, and then pull it again for test.
That would, however, occasionally leave broken images lying around, and would also be extremely wasteful and make my build much slower, so I'd really prefer not to pull an image that already has to exist in the Docker service from the build.
Sorry, the answer is no.
The only way is to tag the image, push it to the registry, and then pull it again for the tests.
After the tests you can delete that very tag from the registry, or set up a cleanup policy which removes these tags occasionally.
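A minimal sketch of that push-then-pull pattern, assuming a docker:20.10-dind service and GitLab's predefined CI_REGISTRY_* variables ("name" and "candidate" follow the question's own naming; a per-commit tag would avoid races between pipelines):

```yaml
# Hedged sketch: build -> test -> publish via a throwaway registry tag.
# Assumes the predefined CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD
# and CI_REGISTRY_IMAGE variables; "name" and "candidate" are placeholders.
stages: [build, test, publish]

build:
  stage: build
  image: docker:20.10
  services: [docker:20.10-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/name:candidate" .
    - docker push "$CI_REGISTRY_IMAGE/name:candidate"

test:
  stage: test
  image: $CI_REGISTRY_IMAGE/name:candidate
  services: [mariadb:10.5]
  variables:
    MYSQL_ALLOW_EMPTY_PASSWORD: "1"   # placeholder DB config for the job
  script:
    - ./run-tests.sh

publish:
  stage: publish
  image: docker:20.10
  services: [docker:20.10-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE/name:candidate"
    - docker tag "$CI_REGISTRY_IMAGE/name:candidate" "$CI_REGISTRY_IMAGE/name:latest"
    - docker push "$CI_REGISTRY_IMAGE/name:latest"
```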

Missing images in gke private images registry from gitlab ci/cd build

GKE private image registry is missing images. No changes to the environment have been made; this process was working fine until about two weeks ago. Here's the process.
(This environment was handed to me; it is my first time working with the CI/CD process, and I am a newbie in the GKE environment as well.)
I have a GitLab pipeline that builds and deploys my app to a GKE dev environment when triggered. There are no errors reported in this process, and it completes using gitlab.com in 4-5 minutes.
The issue that manifested is that many of the images in the Google private registry are no longer there; the current version is gone. The pod is trying to pull that image and failing with an ImagePullBackoff error, which makes sense given the missing images. (That is, most of them have disappeared: over 40 past versions are no longer in the registry, though some older images are still there.)
First, I cannot tell how the images from the CI/CD process get placed into the private registry. There is only a reference to pull from registry.gitlab.com and no corresponding push to eu.gcr.io (the location of the GKE image registry) anywhere in the CI/CD files.
There are 3 files related to the ci/cd process:
gitlab-ci.yaml
kube-init.sh
migration.sh
All the secrets are in place and none have been changed. It seems there is a piece missing which moves/saves the images to the private Google image registry... where would that be defined?
I can post the files in this process but since there are no errors there, I am not sure that would help. (Let me know if they are needed.)
Thanks in advance...I can't wait to get a DevOps engineer:)
-glen
As a summary of the conclusion reached in the comments:
The images are hosted on GitLab and aren't pushed to the GKE registry, as can be seen here.
The issue the OP had was related to the token created for the pipeline from Google Cloud Platform to GitLab: it was linked to a previous account which is no longer associated. A new token was issued, and the images can now be pulled from GitLab.
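For illustration, a hedged sketch of a GKE workload pulling straight from GitLab's registry; all names and the image path are placeholders, and the Secret would typically be created from a GitLab deploy token with read_registry scope:

```yaml
# Hedged sketch: GKE Deployment pulling directly from registry.gitlab.com.
# "gitlab-registry" is a kubernetes.io/dockerconfigjson Secret created from
# a GitLab deploy token (read_registry scope); all names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: app
          image: registry.gitlab.com/group/project/app:latest
```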

Deploying docker images

I have a Node.js server app and a separate React client app.
I have created Docker images for both, and a docker-compose at the top level to build and run both.
I'm struggling to understand how I can deploy/host these somewhere.
Do I deploy both images separately to a Docker registry? Or is there a way of hosting this on its own as an entire Docker container?
If you've already built the Docker images locally, you can use Docker Hub for hosting the images. If you're using GitHub Actions, this gist script can be helpful.
A Docker registry is storage for built images. Think of it as a location for compiled "binaries", by analogy with regular software.
Typically you have some kind of CI for your source code, and when you trigger it, for example by committing to the 'master' branch, a new image is built on the CI. The CI can push it into a registry for long-term storage, or push it directly to your hosting server (or a registry on your server).
You can configure your docker-compose to pull the latest images from the private registry whenever you rerun it on your server.
Basically, hosting happens when you just run docker-compose up on some server, provided you have done the required configuration. It really depends on where you are going to host them.
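A minimal docker-compose.yml sketch of that setup, assuming both images have already been pushed to a registry (the image paths and ports are placeholders):

```yaml
# Hedged sketch; image paths and ports are placeholders.
# On the server, `docker-compose pull && docker-compose up -d`
# fetches the latest pushed images and (re)starts both containers.
services:
  server:
    image: registry.example.com/myapp/server:latest
    ports:
      - "3000:3000"
  client:
    image: registry.example.com/myapp/client:latest
    ports:
      - "80:80"
    depends_on:
      - server
```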
Maybe helpful:
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
https://medium.com/@stoyanov.veseline/pushing-docker-images-to-a-private-registry-with-docker-compose-d2797097751

What is GitLab Runner?

I think I'm fundamentally missing something. I'm new to CI/CD and trying to set up my first pipeline ever with GitLab.
The project is a pre-existing PHP project.
I don't want to clean it up just yet; at the moment I've pushed the whole thing into a Docker container, and it's running fine, talking to Google Cloud's MySQL databases etc. as it should, both locally and on a remote Google Cloud testing VM.
The dream is to be able to push to the development branch, then merge the dev branch into the test branch, which then TRIGGERS automated tests (the easy part) and also causes the remote test VM (hosted on Google Cloud) to PULL the newest changes, rebuild the image from the latest Dockerfile (or pull the latest image from the GitLab image registry), and then rebuild the container with the newest image.
I'm playing around with GitLab's runner, but I'm not understanding what it's actually for, despite looking through almost all the online content about it.
Do I just install it on the Google Cloud VM, and then when I push to GitLab from my development machine, the repo will 'signal' the runner (which is running on the VM) to execute a bunch of scripts (which might include a git pull of the newest changes)?
Because I already pre-package my app into a container locally (and push the image to the image registry), do I need to use Docker as my executor on the runner? Or can I just use shell and run the commands in a shell?
What am I missing?
TLDR and extra:
Questions:
What is the runner actually for,
and where is it meant to be installed?
Does it care which directory it is run in?
If it doesn't care which directory it's run in,
where does it execute its script commands? At root?
If I am locally building my own images and uploading them to GitLab's registry,
do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it? (Assuming the runner is running on the remote VM.)
What is the runner actually for?
You have your project along with a .gitlab-ci.yml file. .gitlab-ci.yml defines what stages your CI/CD pipeline has and what to do in each stage. This typically consists of build, test and deploy stages. Within each stage you can define multiple jobs. For example, in the build stage you may have three jobs to build on Debian, CentOS and Windows (in GitLab glossary: build:debian, build:centos, build:windows). A GitLab runner clones the project, reads the .gitlab-ci.yml file, and does what it is instructed to do. So basically a GitLab runner is a Go process that executes some instructed tasks.
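A minimal .gitlab-ci.yml sketch of that stage/job layout (the script contents are placeholders):

```yaml
# Hedged sketch of the stages/jobs described above; scripts are placeholders.
stages:
  - build
  - test
  - deploy

build:debian:
  stage: build
  script:
    - make build

build:centos:
  stage: build
  script:
    - make build

test:
  stage: test
  script:
    - make test

deploy:
  stage: deploy
  script:
    - ./deploy.sh
```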
where is it meant to be installed?
You can install a runner in your desired environment; the options are listed here: https://docs.gitlab.com/runner/install/
or
you can use a shared runner that is already installed on GitLab's infrastructure.
Does it care which directory it is run in?
Yes. Every task executed by the runner is relative to CI_PROJECT_DIR, defined in https://gitlab.com/help/ci/variables/README. But you can alter this behaviour.
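For example, a hedged sketch of overriding the checkout location (GIT_CLONE_PATH must fall under the runner's builds directory and requires the runner's custom build directories feature to be enabled):

```yaml
# Hedged sketch: override where the runner checks out the project.
# Requires custom_build_dir to be enabled on the runner; the path
# must be under $CI_BUILDS_DIR, and "my-project" is a placeholder.
variables:
  GIT_CLONE_PATH: $CI_BUILDS_DIR/my-project
```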
where does it execute its script commands? At root?
Do I need to set my executor to docker? Shouldn't I just set it to shell, pull the image, and build it?
A runner can have multiple executors such as docker, shell, or virtualbox, with docker being the most common one. If you use docker as the executor you can pull any image from Docker Hub or your configured registry, and you can do loads of stuff with Docker images. In a Docker environment you normally run as the root user.
https://docs.gitlab.com/runner/executors/README.html
See the GitLab access logs; the runner is constantly polling the server.

How can I solve the deployment/updating of dockerized app on my VPS?

It's not easy to make a good title for this question, so if someone has a better idea please edit.
Here's what I have:
1. VPS (KVM)
2. Docker
3. Nginx-proxy, so all Docker containers that are supposed to be exposed are automatically exposed on the appropriate domain.
4. Some apps, like WordPress, just use a container with connected volumes which are accessible by FTP, so it is not an issue to manage/update them.
5. I have a SailsJS (Node.js) app which I have to dockerize. It will be updated quite often.
6. I will have some apps written in C# (ASP.NET) / Java (Spring) with a similar scenario as in point 5.
7. The source code for both 5 and 6 is stored on Bitbucket, but that can be changed if it would be better to have a self-hosted Git server to solve these issues.
What I am looking for is an automated process which will build the Docker image when I commit, and will make sure that Docker pulls the new image and restarts the container with the new content. I do not want to use Docker Hub, as there is only one private repository, so it will not work in the long term.
I thought I could do it with Jenkins somehow, but I have no idea how...
You can set up a private GitLab server.
It provides the three necessary things: a Git repository (managed by you as your own admin), a completely private Docker registry (so you can privately store your own Docker images), and its own CI, complete and sufficient to do what you request, all integrated seamlessly and working with the former two.
You would set up a GitLab runner so that when you commit, the image is rebuilt and pushed to the component-specific registry; there are also hooks and environments which allow you to set up the back connection.
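A hedged sketch of the kind of pipeline that does the rebuild-and-push on commit, using GitLab's predefined registry variables (the SSH deploy step is a placeholder and assumes key-based access to the VPS is already configured):

```yaml
# Hedged sketch: rebuild and push the image on every commit to master,
# then tell the VPS to pull and restart. Host and user are placeholders.
build-and-push:
  image: docker:20.10
  services: [docker:20.10-dind]
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
  only:
    - master

deploy:
  image: alpine:3.18
  script:
    # Placeholder: pull the new image and restart the container on the VPS.
    - apk add --no-cache openssh-client
    - ssh deploy@my-vps "docker-compose pull && docker-compose up -d"
  only:
    - master
```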
