How to specify image platform in gitlab-ci.yml

I am trying to build a CI pipeline that builds a particular image. In the CI file, however, I could not find a way to specify the image platform.
stages:
  - build
  - deploy

build_j:
  image: customServer/debian/jessy
I checked the Docker Images doc and this, but could not find any example. An alternative might be to pull the image explicitly and run commands using script:
docker pull --platform linux/386 debian:jessie
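For completeness, a sketch of how that alternative could look inside .gitlab-ci.yml (assumptions: a docker:dind service is available to the job and a linux/386 variant of the image exists; dind TLS/variable setup is omitted):

build_j:
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    # pull the 32-bit x86 variant explicitly, then use it for subsequent docker commands
    - docker pull --platform linux/386 debian:jessie
    - docker run --rm debian:jessie uname -m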

Since the architectures/platforms behind a multi-arch tag of a Docker image each have their own digest, you can pull a Docker image by its digest (instead of by tag) to get the desired architecture/platform.
Here is an example of a multi-architecture/platform tag of a Docker image (Ubuntu) on Docker Hub:
As you can see, 20.04 is a multi-architecture tag and there is a different digest for each architecture in the tag.
If you run docker pull ubuntu:20.04, Docker resolves the tag and pulls the variant matching your host platform.
But the command
docker pull ubuntu@sha256:55e5613c8c7bcd8044aaf09d64d20518964a0d7a6e41af129f95b731301c2659
will pull just the linux/arm/v7 variant.
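To find out which digest belongs to which platform in the first place, one option (assuming a Docker CLI with buildx available, or the manifest command enabled) is to inspect the multi-arch tag:

docker buildx imagetools inspect ubuntu:20.04
# or
docker manifest inspect ubuntu:20.04

Both list the per-platform manifests together with their digests.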
As I tried, it is possible to use a digest in .gitlab-ci.yml:
job_1:
  image: ubuntu@sha256:55e5613c8c7bcd8044aaf09d64d20518964a0d7a6e41af129f95b731301c2659
  script:
    - ...
job_2:
  image: alpine@sha256:71465c7d45a086a2181ce33bb47f7eaef5c233eace65704da0c5e5454a79cee5
  script:
    - ...

Speaking of image digest, GitLab 13.5 (October 2020) proposes:
Create release with image digest on new tag
Docker supports immutable image identifiers and we have adopted this best practice to update our cloud-deploy images.
When a new image is tagged, we also programmatically retrieve the image digest upon its build, and create a release note to effectively communicate this digest to users.
This guarantees that every instance of the service runs exactly the same code.
You can roll back to an earlier version of the image, even if that version wasn’t tagged (or is no longer tagged). This can even prevent race conditions if a new image is pushed while a deploy is in progress.
See Documentation and Issue.

Related

Bitbucket Cloud: Can I use the self-hosted runner docker image as a base and augment it?

NOTE: I'm an embedded programmer, so devops stuff is mildly mysterious to me and I might be using the wrong terms.
When creating my BitBucket self hosted runners, do I HAVE to use docker in docker, or can I take the self-hosted runner container image and add my required tools and licenses to it?
i.e. the docker command it gives me when I create a self-hosted runner refers to docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner. Can I just create my own Dockerfile that uses that as its base image, adds my software packages, environment variables, etc., and invoke that instead of the original one?
Or do I necessarily need to do docker-in-docker?
As I mentioned in the beginning, a lot of the devops stuff is just what google/stackexchange tells me to do and is thus vaguely cargo-cultish. Getting credentials and other stuff from the self-hosted runner image into my docker-in-docker image (without building credentials into the image) seems like it's more work to me.
Thanks for any insight
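For illustration only, the kind of derived image described above would be a Dockerfile along these lines (the base tag, the added packages, and the environment variable are hypothetical; whether the runner image supports being extended this way is exactly the open question):

# base: the runner image from the setup command (use the tag that command shows)
FROM docker-public.packages.atlassian.com/sox/atlassian/bitbucket-pipelines-runner:latest
# hypothetical additions: the toolchain and licence configuration the embedded builds need
RUN apt-get update && apt-get install -y gcc-arm-none-eabi make && rm -rf /var/lib/apt/lists/*
ENV LICENSE_SERVER=license.example.com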

Gitlab Build Design: Tests from local image?

I’m working on a build pipeline using docker-in-docker (through a docker:20.10-dind service) that should:
build a docker image from a base image + plugin files
run unit and integration tests using that image (requiring a mariadb service, so I’d like to cleanly separate that out into a test phase)
then publish the image by pushing it to the registry if the tests were successful
During build I tag the image as all of:
name:latest
registry/projectid/name:latest
registry/projectid/name:base-image-version
In the test phase I tell it to use image: name:latest tag (i.e. without remote registry information) as the image for running the job.
I expected it to use the image existing in the local D-in-D service, but it doesn't, and I get the following error:
ERROR: Job failed (system failure): failed to pull image "name:latest" with specified policies [always]: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login' (manager.go:205:0s)
Is there any way to change the pull policy only for one pipeline, or even better only for one phase/job in a pipeline?
The only place I could find was config.toml for a whole build runner, which is really not the granularity I am looking for.
If it’s absolutely not possible, I could tag the image as registry/project/name:candidate in build, push it, and then pull it again for test.
That would however occasionally leave broken images lying around, and would also be extremely wasteful and make my build much slower, so I’d really prefer not to pull an image that has to already exist in the docker service for the build.
Sorry, the answer is no.
The only way is to tag the image and push it to the registry and then pull it again for the tests.
After the tests you can delete that tag from the registry, or set up a cleanup policy that removes such tags periodically.
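A sketch of that workaround in .gitlab-ci.yml terms (the candidate tag, service version, and test command are placeholders; registry login and dind setup are omitted):

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE/name:candidate" .
    - docker push "$CI_REGISTRY_IMAGE/name:candidate"

test:
  stage: test
  image: $CI_REGISTRY_IMAGE/name:candidate
  services:
    - mariadb:10.6
  script:
    - ./run-tests.sh   # placeholder for the unit/integration test command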

Azure DevOps deployment pipeline - how do I find value to use for Linux image name?

I have an Azure DevOps pipeline for a .NET Core app. The pipeline currently specifies ubuntu-latest as the Linux image name to use when building the Docker image. I want to change it to the latest version of Alpine. Is there a reference anywhere to the values that need to be used? And what is that value actually referring to - is it a reference name for a Linux image in the Docker Container Registry?
Actually I'd misunderstood the purpose of the vmImageName - I thought it was specifying the Linux image that would be used by the deployed container. It is actually specifying a name from the following table, which determines the Linux version used by the Azure DevOps build agent VM. It is a way of specifying a Microsoft-hosted agent for the build, one that is automatically updated and maintained by Microsoft. The values come from the following table:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#use-a-microsoft-hosted-agent
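In the pipeline YAML that value sits under the pool keyword, for example (ubuntu-latest here is one of the values from that table):

pool:
  vmImage: 'ubuntu-latest'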
To change the OS used by the docker container when building the image, I had to change the tag for the .NET Core SDK and runtime images specified in my dockerfile, as follows:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine3.12 AS build
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine3.12 AS runtime
The full tag listings, describing how to specify which OS will be used by the .NET Core images, are found on both of the following pages under the heading "Full Tag Listing":
Runtime: https://hub.docker.com/_/microsoft-dotnet-aspnet
SDK: https://hub.docker.com/_/microsoft-dotnet-sdk
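For context, a minimal multi-stage Dockerfile built on those Alpine tags might look like this (the project layout and the assembly name MyApp.dll are placeholders):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1-alpine3.12 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-alpine3.12 AS runtime
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]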

Software Bill of materials for docker image

I have a node 12.14 docker image that I use for my applications. Today I was asked to provide a Software Bill of Materials (SBOM) for this docker image, and I am not sure how to get that.
Any inputs you can provide to help me produce a Software Bill of Materials will be greatly appreciated.
I've personally not been tasked with something like this before, but I'd take a guess that looking at the history might be a good start:
# You may need to first run "docker pull node:12.14"
docker history --format '{{.CreatedBy}}' --no-trunc --human node:12.14
This will output the list of commands used to build the image and you'll have to decide what's appropriate for the team requesting the bill of materials from you.
Otherwise, you can look at the source for the Dockerfile directly at GitHub. This point in the history appears to be the latest commit that builds the 12.14 release (I could be wrong so please feel free to dig around that repository and its history yourself as well).
Docker just announced (Apr. 7th, 2022)
Introducing 'docker sbom' — the CLI command developed alongside #anchore (using their Syft project) that displays the SBOM of any Docker image.
CTO #justincormack explains how this functionality will help improve trust in the software supply chain by providing more visibility:
Announcing Docker SBOM: A step towards more visibility into Docker images
This command is just a first step that Docker is taking to make container images more self descriptive. We believe that the best time to determine and record what is in a container image is when you are putting the image together with docker build. To enable this, we are working on making it easy for partners and the community to add SBOM functionality to docker build using BuildKit’s extensibility.
As this information is generated at build time, we believe that it should be included as part of the image artifact. This means that if you move images between registries (or even into air gapped environments), you should still be able to read the SBOM and other image build metadata off of the image.
Example:
docker sbom neo4j:4.4.5
Result:
Syft v0.42.2
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [385 packages]

NAME              VERSION               TYPE
...
bsdutils          1:2.36.1-8+deb11u1    deb
ca-certificates   20210119              deb
...
log4j-api         2.17.1                java-archive
log4j-core        2.17.1                java-archive
...
Note that the output includes not only the Debian packages that have been installed inside the image but also the Java libraries used by the application.
Getting this information reliably and with minimal effort allows you to promptly respond and reduce the chance that you will be breached.
In the above example, we can see that Neo4j uses version 2.17.1 of the log4j-core library, which means it is not affected by Log4Shell.
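To check for a specific package like that directly, the table output can simply be filtered (assuming the docker sbom command is available, e.g. in a recent Docker Desktop):

docker sbom neo4j:4.4.5 | grep log4j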
Engin Diri adds (tweet)
The new #Docker sbom command is great in terms of UX.
Plenty of choices for the output format (#CycloneDX_Spec, #SyftProject, SPDX or even #github JSON!) Great collab with #anchore.
BTW: You can pipe docker sbom output via "--format syft-json | grype" into #GrypeProject to get the vulnerabilities displayed!
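Spelled out, that pipe looks something like this (it assumes grype is installed and reads Syft JSON from stdin):

docker sbom --format syft-json neo4j:4.4.5 | grype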

Deploying docker images

I have a nodejs server app and a separate React client app.
I have created docker images for both and a docker-compose file at the top level to build and run both.
I'm struggling to understand how I can deploy/host these somewhere?
Do I deploy both images separately to a docker registry? Or is there a way of hosting this on its own as an entire docker container?
If you've already built the docker images locally, you can use Docker Hub for hosting them. If you're using GitHub Actions, this gist script can be helpful.
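As a sketch of what pushing to Docker Hub looks like (your-dockerhub-user and the image names are placeholders):

docker login
docker tag server-app your-dockerhub-user/server-app:latest
docker push your-dockerhub-user/server-app:latest
docker tag client-app your-dockerhub-user/client-app:latest
docker push your-dockerhub-user/client-app:latest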
A Docker registry is storage for built images. Think of it as the place for compiled "binaries", if you compare it to regular software.
Typically you have some kind of CI for your source code, and when you trigger it, for example by committing to the 'master' branch, a new image is built on the CI. The CI can push it into a registry for long-term storage, or push it directly to your hosting server (or to a registry on your server).
You can configure your docker-compose file to pull the latest images from the private registry when you rerun it on your server.
Basically, hosting happens when you run docker-compose up on some server, once you have done the required configuration. It really depends on where you are going to host them.
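A minimal docker-compose.yml along those lines might look like this (registry host, image names and ports are placeholders):

services:
  server:
    image: registry.example.com/myproject/server-app:latest
    ports:
      - "3000:3000"
  client:
    image: registry.example.com/myproject/client-app:latest
    ports:
      - "80:80"

Running docker-compose pull && docker-compose up -d on the server then fetches the latest images and (re)starts the containers.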
Maybe helpful:
https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
https://medium.com/@stoyanov.veseline/pushing-docker-images-to-a-private-registry-with-docker-compose-d2797097751
