GitLab Build Design: Tests from local image?

I’m working on a build pipeline using docker-in-docker (through a docker:20.10-dind service) that should:
build a docker image from a base image + plugin files
run unit and integration tests using that image (requiring a mariadb service, so I’d like to cleanly separate that out into a test phase)
then publish the image by pushing it to the registry if the tests were successful
During build I tag the image as all of:
name:latest
registry/projectid/name:latest
registry/projectid/name:base-image-version
In the test phase I specify image: name:latest (i.e. without remote registry information) as the image for running the job.
I expected it to use the image that already exists in the local docker-in-docker service, but it doesn't, and I get the following error:
ERROR: Job failed (system failure): failed to pull image "name:latest" with specified policies [always]: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login' (manager.go:205:0s)
Is there any way to change the pull policy only for one pipeline, or even better only for one phase/job in a pipeline?
The only place I could find was config.toml for a whole build runner, which is really not the granularity I am looking for.
If it's absolutely not possible, I could tag the image as registry/project/name:candidate in the build phase, push it, and then pull it again for the test phase.
That would, however, occasionally leave broken images lying around, and it would also be extremely wasteful and make my build much slower, so I'd really prefer not to pull an image that already has to exist in the Docker service used for the build.

Sorry, the answer is no.
The only way is to tag the image, push it to the registry, and then pull it again for the tests.
After the tests you can delete that tag from the registry, or set up a cleanup policy that removes such tags periodically.
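For illustration, the push-then-pull flow could look roughly like this in .gitlab-ci.yml (a minimal sketch, not a verified configuration; the job names, the :candidate tag, the mariadb version and the run-the-tests command are assumptions):

stages:
  - build
  - test
  - publish

build:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:candidate" .
    - docker push "$CI_REGISTRY_IMAGE:candidate"

test:
  stage: test
  # pulled back from the registry; depending on the runner setup this may rely on
  # the CI job token or DOCKER_AUTH_CONFIG for authentication
  image: $CI_REGISTRY_IMAGE:candidate
  services:
    - mariadb:10.6
  script:
    - run-the-tests        # placeholder for the real test command

publish:
  stage: publish
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:candidate"
    - docker tag "$CI_REGISTRY_IMAGE:candidate" "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"

The :candidate tag can afterwards be removed by the cleanup policy mentioned above.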

Related

Bitbucket Cloud: Can I use the self-hosted runner docker image as a base and augment it?

NOTE: I'm an embedded programmer, so devops stuff is mildly mysterious to me and I might be using the wrong terms.
When creating my Bitbucket self-hosted runners, do I HAVE to use docker-in-docker, or can I take the self-hosted runner container image and add my required tools and licenses to it?
I.e. the docker command it gives me when I create a self-hosted runner contains docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner; can I just create my own image whose Dockerfile uses that as a base, add my software packages, environment variables, etc., and invoke that instead of the original one?
Or do I necessarily need to do docker-in-docker?
As I mentioned in the beginning, a lot of the DevOps stuff is just what Google/StackExchange tells me to do and thus vaguely cargo-cultish. Getting credentials and other stuff from the self-hosted runner image into my docker-in-docker image (without baking credentials into the image) seems like more work to me.
Thanks for any insight

Runners cannot pull from private gitlab container registry

I tried to update our infra to use our pipeline images from our self-hosted GitLab's Container Registry (previously we used DigitalOcean's Container Registry, and everything worked).
The pipeline image lives in a separate repo on the same GitLab instance and is pushed to the Container Registry.
The application's pipeline sometimes fails with a permission error saying it cannot pull the image. This is weird, because if I restart the pipeline it works and can pull the image. I'm not using any env var like DOCKER_AUTH_CONFIG, since according to the docs the job should be able to access images from the private project's container registry. Also, it always works on the master branch.
Once the image has been pulled, it usually works for around 1-2 hours, then it starts to fail.
We are using auto-scaled workers if that's important.
The issue was a syntax error in config.toml. GitLab Runner did not throw any error; it just randomly failed to load the file.

Do artifacts have to upload to gitlab to be reused between stages?

I'm trying to copy some JARs in my Dockerfile after I run a Gradle build.
At a high level, the gitlab-ci.yml has a build, a docker build, and a deploy. I don't want to upload anything to the GitLab artifact server, but simply use the temporary files to build my image and publish it elsewhere. Is this possible? I'm wondering if artifacts implies an upload to the GitLab servers.
Artifacts are one way, but you can also use cache instead. Which one is the best option depends on your exact use case, but it seems like the cache should work fine here.
Alternatively, you can build the docker image, push it to the GitLab Container Registry, and pull it directly from there in later steps.
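A rough sketch of the cache variant (the image tags, the build/libs path and the registry URL are assumptions; the cache lives on the runner rather than the GitLab artifact server, but it is best-effort and can miss if the two jobs land on different runners):

stages:
  - build
  - dockerize

gradle_build:
  stage: build
  image: gradle:7-jdk17
  script:
    - gradle build
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - build/libs/

docker_build:
  stage: dockerize
  image: docker:20.10
  services:
    - docker:20.10-dind
  cache:
    key: "$CI_COMMIT_SHA"
    paths:
      - build/libs/
    policy: pull            # only restore the cache, don't upload it again
  script:
    # assumes the Dockerfile COPYs build/libs/*.jar and that a docker login was done beforehand
    - docker build -t registry.example.com/my-app:latest .
    - docker push registry.example.com/my-app:latest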

How to specify image platform in gitlab-ci.yml

I am trying to build a CI pipeline which builds a particular image. In the CI file, however, I could not find a way to specify the image platform.
stages:
  - build
  - deploy

build_j:
  image: customServer/debian/jessy
I checked the Docker Images doc and this, but could not find any example. An alternative, perhaps, is to pull the image explicitly and run commands using a script, for example:
docker pull --platform linux/386 debian:jessie
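Wrapped into a job, that explicit pull could look something like this (a sketch; the docker:20.10 image, the dind service and the linux/386 platform string are assumptions):

build_i386:
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker pull --platform linux/386 debian:jessie
    - docker run --rm debian:jessie dpkg --print-architecture   # should report i386 for the 32-bit variant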
Since each architecture/platform under a multi-arch tag of a Docker image has its own digest, you can pull a Docker image by digest (instead of by tag) to get the desired architecture/platform.
Take a multi-architecture tag of a Docker image (Ubuntu) on Docker Hub as an example: 20.04 is a multi-architecture tag, and there is a different digest for each architecture under it.
If you run the command docker pull ubuntu:20.04, Docker resolves the multi-arch manifest and pulls the variant that matches your host platform.
But the command
docker pull ubuntu@sha256:55e5613c8c7bcd8044aaf09d64d20518964a0d7a6e41af129f95b731301c2659
will pull just the linux/arm/v7 variant.
As I tried, it is possible to use a digest in .gitlab-ci.yml:
job_1:
  image: ubuntu@sha256:55e5613c8c7bcd8044aaf09d64d20518964a0d7a6e41af129f95b731301c2659
  script:
    - ...

job_2:
  image: alpine@sha256:71465c7d45a086a2181ce33bb47f7eaef5c233eace65704da0c5e5454a79cee5
  script:
    - ...
Speaking of image digest, GitLab 13.5 (October 2020) proposes:
Create release with image digest on new tag
Docker supports immutable image identifiers and we have adopted this best practice to update our cloud-deploy images.
When a new image is tagged, we also programmatically retrieve the image digest upon its build, and create a release note to effectively communicate this digest to users.
This guarantees that every instance of the service runs exactly the same code.
You can roll back to an earlier version of the image, even if that version wasn’t tagged (or is no longer tagged). This can even prevent race conditions if a new image is pushed while a deploy is in progress.
See Documentation and Issue.

Docker container/image is not rebuilding automatically on code amendment

In short, I want my Docker container/image to be rebuilt automatically whenever I write a new chunk of functionality.
I have created a Node app and run the server in a Docker container via Compose.
The container works fine; however, whenever I make changes to the files or directory, it doesn't pick up the changes automatically. I need to rebuild again via
$ docker-compose up --build
so that the changes take effect.
Is there any solution so that I don't need to rebuild the container manually?
Regards.
You either want to look at some kind of delivery pipeline tool, as Boynux suggests; by the way, Docker Hub can watch GitHub for check-ins and trigger automatic image builds.
Or you can mount the code into the container using a volume so that changes are picked up.
The option you pick depends on your philosophy / delivery pipeline.
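For the volume option, a minimal docker-compose.yml sketch could be (the service name, port and the nodemon-based command are assumptions):

services:
  app:
    build: .
    command: npx nodemon server.js      # restart the Node process whenever a file changes
    ports:
      - "3000:3000"
    volumes:
      - ./:/usr/src/app                 # bind-mount the source so edits show up inside the container
      - /usr/src/app/node_modules       # keep the node_modules installed in the image

With this in place, docker-compose up only needs --build when the Dockerfile or the dependencies change.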
