Azure pipeline not applying label to Docker build container

As the title says, I am trying to apply a label to my Docker build container so I can then reference said container in a later step in my pipeline. The end result I am going for is to be able to copy my test results out of the container, then publish and display those results.
azure-pipeline.yml
# Docker
# Build a Docker image
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- main
- develop

resources:
- repo: self

variables:
  tag: '$(Build.BuildId)'

stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: default
    steps:
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        arguments: '--build-arg BuildId=$(Build.BuildId)'
        dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
        tags: |
          $(tag)
    - powershell: |
        $id = docker images --filter "label=test=$(Build.BuildId)" -q | Select-Object -First 1
        docker create --name testcontainer $id
        docker cp testcontainer:/testresults ./testresults
        docker rm testcontainer
      displayName: 'Copy test results'
    - task: PublishTestResults@2
      displayName: 'Publish test results'
      inputs:
        testResultsFormat: 'xUnit'
        testResultsFiles: '**/*.trx'
        searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0-bullseye-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0-bullseye-slim-amd64 AS build
WORKDIR /src
COPY ["Forecast/Forecast.csproj", "Forecast/"]
WORKDIR "/src/Forecast"
RUN dotnet restore "Forecast.csproj"
COPY Forecast/ .
RUN dotnet build "Forecast.csproj" -c Release -o /app/build
FROM build AS publish
WORKDIR /src
RUN dotnet publish "Forecast/Forecast.csproj" --no-restore -c Release -o /app/publish
FROM build AS test
ARG BuildId
LABEL test=${BuildId}
WORKDIR /src
COPY ["ForecastXUnitTest/ForecastXUnitTest.csproj", "ForecastXUnitTest/"]
WORKDIR "/src/ForecastXUnitTest"
RUN dotnet restore "ForecastXUnitTest.csproj"
COPY ForecastXUnitTest/ .
RUN dotnet build "ForecastXUnitTest.csproj" -c Release -o /app
RUN dotnet test -c Release --results-directory /testresults --logger "trx;LogFileName=test_results.trx" "ForecastXUnitTest.csproj"
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Forecast.dll"]
Upon inspection of the build steps I can see that the label hasn't been applied. In the PowerShell step, specifically at the line docker create --name testcontainer $id, the variable $id is empty, which tells me the label is never applied, so I'm not able to go any further.

I ended up figuring out where my issue lay. I had cached ephemeral container images left over from earlier builds. With those in place, Docker no longer ran the steps I had added along the way, so because of what had preceded in earlier builds I wasn't getting anything from subsequent calls.
On top of this, I had been referring to a previous image with the FROM directive, which was preventing my directory from being found.
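A minimal sketch of the kind of change this implies (my reconstruction, not the exact commands from the build): if BuildKit is in use, stages the final stage never references (like test here) are skipped entirely, and either way a stale cache can hide newly added steps. Building the test stage explicitly with the cache bypassed guarantees the LABEL is applied (the forecast-test tag name is just an example):
docker build --no-cache --target test \
  --build-arg BuildId=$(Build.BuildId) \
  -t forecast-test:$(Build.BuildId) .
# the labelled image should now be found by the PowerShell step
docker images --filter "label=test=$(Build.BuildId)" -q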

Related

Cache task in Azure Pipeline to reduce build time

I would like to use the Azure Pipelines "Cache task" to cache npm, and later on, when I build my Docker image based on the Dockerfile, use that cache to decrease my build time. I rarely, if ever, change my package.json file nowadays, but the Docker image build in Azure Pipelines is very slow. If I look at the logs while building from my Dockerfile, the command 'npm install' makes up most of the build time. I've done some research but can't find such a case...
This is what I've come up with at the moment:
azure-pipeline.yml
- "test"
resources:
- repo: self
pool:
vmImage: "ubuntu-latest"
variables:
tag: "$(Build.BuildId)"
DOCKER_BUILDKIT: 1
npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache#2
inputs:
key: 'npm | "$(Agent.OS)" | package.json'
path: "$(npm_config_cache)"
cacheHitVar: "CACHE_RESTORED"
displayName: Cache npm
- script: npm ci
- task: Docker#2
displayName: Build an image
inputs:
command: "build"
Dockerfile: "**/Dockerfile.test"
arguments: "--cache-from=$(npm_config_cache)" <--- I WOULD LIKE TO USE THE CACHE IN THIS TASK
tags: "$(tag)"
Dockerfile
FROM node:19-alpine as build
WORKDIR /app
COPY package.json package-lock.json ./
ENV REACT_APP_ENVIRONMENT=test
RUN npm ci --cache $(npm_config_cache) # <-- This is not right but I would like to do something like this
COPY . .
RUN npm run build
#Stage 2
FROM nginx:1.23.2-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf *
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
So what I would like to do is use the Cache task from my azure-pipeline in the next step/task, where I build my Docker image based on the Dockerfile provided above. In my Dockerfile I have the line RUN npm ci --cache $(npm_config_cache), which somehow I would like to make work, or is that even possible? I can't really figure it out, and maybe I'm totally wrong about my approach?
Many thanks!
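One possible approach, not from the thread: since the pipeline already sets DOCKER_BUILDKIT: 1, a BuildKit cache mount can persist npm's cache directory between builds on the same agent, without wiring the pipeline Cache task into the image build at all. A sketch (note npm ci also needs the lock file copied in):
# syntax=docker/dockerfile:1
FROM node:19-alpine as build
WORKDIR /app
COPY package.json package-lock.json ./
ENV REACT_APP_ENVIRONMENT=test
# /root/.npm is kept between builds on this host, so npm ci can reuse downloaded packages
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
The caveat is that this cache lives on the Docker host, so on ephemeral hosted agents it starts empty each run; on a self-hosted agent it behaves much like the --cache flag the question is reaching for.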

How to combine docker creation and putting it in DEV Azure Container Registry?

Here is my scenario:
I create a docker image from an SQL dump with the following commands, executed from command prompt:
docker pull mariadb:10.4.26
docker run --name test_smdb -e MYSQL_ROOT_PASSWORD=<some_password> -p 3306:3306 -d mariadb:10.4.26
docker exec -it test_smdb mariadb --user root -p<some_password>
MariaDB [(none)]> CREATE DATABASE smdb_dev;
docker exec -i test_smdb mariadb -uroot -p<some_password> smdb_dev --force < C:\smdb-dev.sql
But my task now is to create a pipeline that creates this Docker image and puts it into the Azure Container Registry.
I found this link - Build and push Docker images to Azure Container Registry:
https://learn.microsoft.com/en-us/azure/devops/pipelines/ecosystems/containers/acr-template?view=azure-devops
And I see that the result should be a YAML file like this:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
But can someone show me how to combine the two things: the Docker image creation and putting it into the Azure Container Registry?
You would need to make a Dockerfile and put it in the repository.
The commands you specified at the top of your question should be your input.
It could look something like this (just threw something together; a rough sketch built on the image's documented init hooks):
# syntax=docker/dockerfile:1
FROM mariadb:10.4.26
# the image's entrypoint reads these on first start, replacing the docker run flags
ENV MYSQL_ROOT_PASSWORD=<some_password>
ENV MYSQL_DATABASE=smdb_dev
# .sql files in this directory are executed against the new database on first start
COPY smdb-dev.sql /docker-entrypoint-initdb.d/
EXPOSE 3306
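With that Dockerfile committed, the Docker@2 buildAndPush snippet from the question covers both halves: it builds the image and pushes it to the Azure Container Registry in a single task. A sketch of the variables it expects (all values here are placeholders to adapt):
variables:
  dockerRegistryServiceConnection: 'my-acr-service-connection' # placeholder
  imageRepository: 'smdb'                                      # placeholder
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
  vmImageName: 'ubuntu-latest'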

Docker multi platform builds extremely slow for ARM64 on Gitlab CI

I have the following dockerfile for a Node.js application
# ---> Build stage
FROM node:18-bullseye as node-build
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY . /usr/src/app/
RUN yarn install --silent --production=true --frozen-lockfile
RUN yarn build --silent
# ---> Serve stage
FROM nginx:stable-alpine
COPY --from=node-build /usr/src/app/dist /usr/share/nginx/html
Up until now I was building exclusively for AMD64, but now I need to build also for ARM64.
I edited my .gitlab-ci.yml to look like the following
image: docker:20

variables:
  PROJECT_NAME: "project"
  BRANCH_NAME: "main"
  IMAGE_NAME: "$PROJECT_NAME:$CI_COMMIT_TAG"

services:
  - docker:20-dind

build_image:
  script:
    # Push to Gitlab registry
    - docker login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
    - docker context create builder-context
    - docker buildx create --name builderx --driver docker-container --use builder-context
    - docker buildx build --tag $CI_REGISTRY/mygroup/$PROJECT_NAME/$IMAGE_NAME --push --platform=linux/arm64/v8,linux/amd64 .
Everything works relatively fine for AMD64 but it is extremely slow for ARM64. Almost 10x slower than AMD64, giving me timeouts on the Gitlab Job.
Is there any way to speed up the process?
I'm guessing your pipeline is executing on amd64 hardware and that docker buildx is performing emulation to build the arm64 target. You will likely see a large improvement if you break build_image into two jobs (one for amd64 and one for arm64) and then send them to two different gitlab runners so that they each can execute on their native hardware.
Even if you can't or don't want to stop using emulation, you could still break the build_image job into two jobs (one per image built) in hopes that running them in parallel will allow the jobs to finish before the timeout limit.
With changes to your Dockerfile and the use of image caching you can make some of your subsequent builds faster, but these changes won't help you until you get an initial image built (which can be used as the cache).
Updated Dockerfile:
# ---> Build stage
FROM node:18-bullseye as node-build
ENV NODE_ENV=production
WORKDIR /usr/src/app
# only COPY package.json and yarn.lock, so as not to break the cache if dependencies have not changed
COPY package.json yarn.lock /usr/src/app/
RUN yarn install --silent --production=true --frozen-lockfile
# once the dependencies are installed, then copy in the frequently changing source code files
COPY . /usr/src/app/
RUN yarn build --silent
# ---> Serve stage
FROM nginx:stable-alpine
COPY --from=node-build /usr/src/app/dist /usr/share/nginx/html
Updated gitlab-ci.yml:
image: docker:20

variables:
  PROJECT_NAME: "project"
  BRANCH_NAME: "main"
  IMAGE_NAME: "$PROJECT_NAME:$CI_COMMIT_TAG"
  REGISTRY_IMAGE_NAME: "$CI_REGISTRY/mygroup/$PROJECT_NAME/$IMAGE_NAME"
  CACHE_IMAGE_NAME: "$CI_REGISTRY/mygroup/$PROJECT_NAME/$PROJECT_NAME:cache"
  BUILDKIT_INLINE_CACHE: "1"

services:
  - docker:20-dind

stages:
  - build
  - push

before_script:
  - docker login $CI_REGISTRY -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  - docker context create builder-context
  - docker buildx create --name builderx --driver docker-container --use builder-context

build_amd64:
  stage: build
  script:
    - docker buildx build --cache-from "$CACHE_IMAGE_NAME" --tag "$CACHE_IMAGE_NAME" --push --platform=linux/amd64 .

build_arm64:
  stage: build
  script:
    - docker buildx build --cache-from "$CACHE_IMAGE_NAME" --tag "$CACHE_IMAGE_NAME" --push --platform=linux/arm64/v8 .

push:
  stage: push
  script:
    - docker buildx build --cache-from "$CACHE_IMAGE_NAME" --tag "$REGISTRY_IMAGE_NAME" --push --platform=linux/arm64/v8,linux/amd64 .
The build_amd64 and build_arm64 jobs each pull in the last image (of their arch) that was built and use it as a cache for docker image layers. These two build jobs then push their result back as the new cache.
The push stage runs docker buildx ... again, but they won't actually build anything new as they will just pull in the cached results from the two build jobs. This allows you to break up the builds but still have a single push command that results in the two different images ending up in a single multi-platform docker manifest.
I ran into the issue of slow builds on Google Cloud Build and ended up using native arm64 hardware to speed up the arm64 part of the build.
I wrote up a detailed tutorial on this, which uses Docker contexts to point to the remote arm64 VM.
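A sketch of that setup (assuming an arm64 VM reachable over SSH as user@arm-host; all names here are placeholders): each architecture is registered as a node of one buildx builder, so nothing is emulated:
# the local amd64 daemon becomes the first node of the builder
docker buildx create --name multibuilder --platform linux/amd64 default
# a Docker context pointing at the remote arm64 VM becomes the second node
docker context create arm-node --docker "host=ssh://user@arm-host"
docker buildx create --name multibuilder --append --platform linux/arm64 arm-node
# buildx now routes each platform to its native node
docker buildx build --builder multibuilder \
  --platform linux/amd64,linux/arm64 --push -t "$REGISTRY_IMAGE_NAME" .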

Authenticate to private npm registry from Dockerfile in azure?

I have the below Dockerfile:
FROM node:alpine
WORKDIR ./usr/local/lib/node_modules/npm/
COPY .npmrc ./
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start-prod"]
This file is used in an azure pipeline like so:
variables:
- name: NPMRC_LOCATION
  value: $(Agent.TempDirectory)

- stage: BuildPublishDockerImage
  displayName: Build and publish Docker image
  dependsOn: Build
  jobs:
  - job: BuildPublishDockerImage
    steps:
    - checkout: self
    - task: DownloadSecureFile@1
      name: npmrc
      inputs:
        secureFile: .npmrc
    - task: npmAuthenticate@0
      inputs:
        workingFile: $(NPMRC_LOCATION)/.npmrc
    - task: Docker@2
      displayName: Build a Docker image
      inputs:
        command: build
        arguments: --no-cache
I know .npmrc should be in that location (I ran RUN ls in the Dockerfile and it's there).
However when I run it I keep getting this error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/.npmrc" not found: not found
I just want to authenticate to a private npm registry. I'm mystified by this. Grateful for any help.
You can set the registry in the Dockerfile:
npm config set registry
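A sketch of what that could look like in this Dockerfile (the registry URL is a placeholder; a private registry would typically still need a token, e.g. supplied as a build secret rather than a COPYed .npmrc):
FROM node:alpine
WORKDIR /app
# point npm at the private registry before installing
RUN npm config set registry https://registry.example.com/
COPY package.json package-lock.json ./
RUN npm ci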

unable to prepare context: path " " not found for Github Actions Runner

I am trying to build and tag a docker image in Github Actions runner and am getting this error from the runner
unable to prepare context: path " " not found
Error: Process completed with exit code 1.
I have gone through all other similar issues on StackOverflow and implemented them but still, no way forward.
The interesting thing is, I have other microservices using similar workflow and Dockerfile working perfectly fine.
My workflow
name: some-tests

on:
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-east-1
      IMAGE_NAME: service
      IMAGE_TAG: 1.1.0
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create cluster
        uses: helm/kind-action@v1.2.0
      - name: Read secrets from AWS Secrets Manager into environment variables
        uses: abhilash1in/aws-secrets-manager-action@v1.1.0
        id: read-secrets
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
          secrets: |
            users-service/secrets
          parse-json: true
      - name: Build and Tag Image
        id: build-image
        run: |
          # Build a docker container and Tag
          docker build --file Dockerfile \
            --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
            -t $IMAGE_NAME:$IMAGE_TAG .
          echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
      - name: Push Image to Kind cluster
        id: kind-cluster-image-push
        env:
          KIND_IMAGE: ${{ steps.build-image.outputs.image }}
          CLUSTER_NAME: chart-testing
          CLUSTER_CONTROLLER: chart-testing-control-plane
        run: |
          kind load docker-image $KIND_IMAGE --name $CLUSTER_NAME
          docker exec $CLUSTER_CONTROLLER crictl images
Dockerfile
FROM node:14 AS base
WORKDIR /app
FROM base AS development
COPY .npmrc .npmrc
COPY package.json ./
RUN npm install --production
RUN cp -R node_modules /tmp/node_modules
RUN npm install
RUN rm -f .npmrc
COPY . .
FROM development AS builder
COPY .npmrc .npmrc
RUN yarn run build
RUN rm -f .npmrc
RUN ls -la
FROM node:14-alpine AS production
# Install curl
RUN apk update && apk add curl
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ARG APP_API
# set environmental variables
ENV APP_API=$APP_API
EXPOSE ${PORT}
CMD [ "yarn", "start" ]
I guess the problem is coming from the build command or something; these are the different things I have tried:
I used --file explicitly with a period (.):
docker build --file Dockerfile \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used only a period (.):
docker build \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the Dockerfile (./Dockerfile):
docker build --file ./Dockerfile \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the period (./):
docker build \
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG ./
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I have literally exhausted everything I've read from SO
The problem was basically a whitespace issue; nothing in the output could show this. Thanks to this GitHub answer.
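For reference, the usual shape of this bug (a reconstruction, not the asker's exact file): a space after a line-continuation backslash makes the shell escape the space instead of the newline, so docker build receives a literal " " as its context path, which is exactly the path " " in the error:
# broken: there is a trailing space after the backslash, so the command ends on this
# line and docker build gets " " as its context
docker build --file Dockerfile \ 
  -t $IMAGE_NAME:$IMAGE_TAG .

# fixed: the backslash is the very last character on the line
docker build --file Dockerfile \
  -t $IMAGE_NAME:$IMAGE_TAG .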
