Cache task in Azure Pipeline to reduce build time - azure

I would like to use the Azure Pipeline "Cache task" to cache npm packages and then, when I build my Docker image from the Dockerfile, use that cache to decrease my build time. I rarely, almost never, change my package.json file these days, but the Docker image build in Azure Pipelines is very slow. Looking at the build logs for the Dockerfile, the 'npm install' command makes up most of the build time. I've done some research but can't find a comparable case...
This is what I've come up with at the moment:
azure-pipeline.yml
- "test"
resources:
  - repo: self

pool:
  vmImage: "ubuntu-latest"

variables:
  tag: "$(Build.BuildId)"
  DOCKER_BUILDKIT: 1
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
  - task: Cache@2
    inputs:
      key: 'npm | "$(Agent.OS)" | package.json'
      path: "$(npm_config_cache)"
      cacheHitVar: "CACHE_RESTORED"
    displayName: Cache npm

  - script: npm ci

  - task: Docker@2
    displayName: Build an image
    inputs:
      command: "build"
      Dockerfile: "**/Dockerfile.test"
      arguments: "--cache-from=$(npm_config_cache)"  # <--- I WOULD LIKE TO USE THE CACHE IN THIS TASK
      tags: "$(tag)"
Dockerfile
FROM node:19-alpine as build
WORKDIR /app
COPY package.json .
ENV REACT_APP_ENVIRONMENT=test
RUN npm ci --cache $(npm_config_cache)  # <-- This is not right but I would like to do something like this
COPY . .
RUN npm run build
#Stage 2
FROM nginx:1.23.2-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf *
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
So what I would like to do is use the cache from the Cache task in the next step of my azure-pipeline, where I build my Docker image from the Dockerfile above. In my Dockerfile I have the line RUN npm ci --cache $(npm_config_cache), and somehow I would like to reach that cache from inside the build. Is that even possible? I can't really figure it out, and maybe I'm totally wrong about my approach?
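One idea I've come across, but have not verified, is a BuildKit cache mount, which would keep the npm cache on the build agent between image builds instead of going through the pipeline Cache task at all (I already set DOCKER_BUILDKIT: 1, so the --mount syntax should be available). A rough sketch:
# Untested sketch: requires BuildKit (DOCKER_BUILDKIT=1).
# The cache mount persists /root/.npm between builds on the same agent,
# so "npm ci" can reuse downloaded packages without baking them into a layer.
FROM node:19-alpine as build
WORKDIR /app
COPY package.json package-lock.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --cache /root/.npm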
Many thanks!

Related

Authenticate to private npm registry from Dockerfile in azure?

I have the below Dockerfile:
FROM node:alpine
WORKDIR ./usr/local/lib/node_modules/npm/
COPY .npmrc ./
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start-prod"]
This file is used in an azure pipeline like so:
variables:
  - name: NPMRC_LOCATION
    value: $(Agent.TempDirectory)

- stage: BuildPublishDockerImage
  displayName: Build and publish Docker image
  dependsOn: Build
  jobs:
    - job: BuildPublishDockerImage
      steps:
        - checkout: self
        - task: DownloadSecureFile@1
          name: npmrc
          inputs:
            secureFile: .npmrc
        - task: npmAuthenticate@0
          inputs:
            workingFile: $(NPMRC_LOCATION)/.npmrc
        - task: Docker@2
          displayName: Build a Docker image
          inputs:
            command: build
            arguments: --no-cache
I know .npmrc should be in that location (I run RUN ls in the Dockerfile and it's there).
However when I run it I keep getting this error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/.npmrc" not found: not found
I just want to authenticate to a private npm registry. I'm mystified by this. Grateful for any help.
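One thing I have been wondering, though I am not sure it is the right fix, is whether the downloaded secure file needs to be copied into the build context before the Docker task, so that COPY .npmrc ./ can actually find it. A rough sketch of what I mean:
- task: DownloadSecureFile@1
  name: npmrc
  inputs:
    secureFile: .npmrc
# Hypothetical extra step: place the secure file inside the build context
# so the Dockerfile's COPY .npmrc instruction can see it.
- script: cp $(npmrc.secureFilePath) $(Build.SourcesDirectory)/.npmrc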
You can set the registry in the Dockerfile:
npm config set registry
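For example, something roughly like this before npm ci (the registry URL and the NPM_TOKEN build argument are placeholders you would replace with your own values):
# Sketch only: <registry-url> and NPM_TOKEN are placeholders.
ARG NPM_TOKEN
RUN npm config set registry https://<registry-url>/ \
 && npm config set //<registry-url>/:_authToken "$NPM_TOKEN" \
 && npm ci \
 && npm config delete //<registry-url>/:_authToken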

Azure pipeline not applying label to Docker build container

As the title says, I am trying to apply a label to my Docker container so I can then reference said container in a further step of my pipeline. The end result I am going for is to copy my test results out of the container, then publish and display those results.
azure-pipeline.yml
# Docker
# Build a Docker image
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
  - main
  - develop

resources:
  - repo: self

variables:
  tag: '$(Build.BuildId)'

stages:
  - stage: Build
    displayName: Build image
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: default
        steps:
          - task: Docker@2
            displayName: Build an image
            inputs:
              command: build
              arguments: '--build-arg BuildId=$(Build.BuildId)'
              dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
              tags: |
                $(tag)
          - powershell: |
              $id=docker images --filter "label=test=$(Build.BuildId)" -q | Select-Object -First 1
              docker create --name testcontainer $id
              docker cp testcontainer:/testresults ./testresults
              docker rm testcontainer
            displayName: 'Copy test results'
          - task: PublishTestResults@2
            displayName: 'Publish test results'
            inputs:
              testResultsFormat: 'xUnit'
              testResultsFiles: '**/*.trx'
              searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0-bullseye-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0-bullseye-slim-amd64 AS build
WORKDIR /src
COPY ["Forecast/Forecast.csproj", "Forecast/"]
WORKDIR "/src/Forecast"
RUN dotnet restore "Forecast.csproj"
COPY Forecast/ .
RUN dotnet build "Forecast.csproj" -c Release -o /app/build
FROM build AS publish
WORKDIR /src
RUN dotnet publish "Forecast/Forecast.csproj" --no-restore -c Release -o /app/publish
FROM build as test
ARG BuildId
LABEL test=${BuildId}
WORKDIR /src
COPY ["ForecastXUnitTest/ForecastXUnitTest.csproj", "ForecastXUnitTest/"]
WORKDIR "/src/ForecastXUnitTest"
RUN dotnet restore "ForecastXUnitTest.csproj"
COPY ForecastXUnitTest/ .
RUN dotnet build "ForecastXUnitTest.csproj" -c Release -o /app
RUN dotnet test -c Release --results-directory /testresults --logger "trx;LogFileName=test_results.trx" "ForecastXUnitTest.csproj"
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Forecast.dll"]
Upon inspection of the build steps I can see that the label hasn't been applied. In the PowerShell step, specifically the line docker create --name testcontainer $id, the variable $id is empty, which tells me that the label is never applied, so I'm not able to go any further.
I ended up figuring out where my issue lay. I was ending up with cached ephemeral container images: with those in place, Docker no longer ran the corresponding steps, so even though I added the steps along the way, the earlier cached builds meant I wasn't getting anything from subsequent calls.
On top of this, I had been referring to a previous image with the FROM directive, which prevented my directory from being found.
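For anyone hitting the same thing, a rough sketch of the kind of change that forces the labeled stage to actually build (illustrative only, your exact values may differ):
- task: Docker@2
  displayName: Build test image
  inputs:
    command: build
    dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
    # --target test builds the stage that carries LABEL test=$(Build.BuildId),
    # and --no-cache avoids skipping it when earlier layers are cached.
    arguments: '--build-arg BuildId=$(Build.BuildId) --no-cache --target test'
    tags: |
      $(tag)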

Management of CI / CD vuejs deployments

I need help deploying a Vue.js project using GitLab CI/CD.
In this case I have 3 environment files:
.env.development.local
.env.staging.local
.env.production.local
And I'm using a different baseHref / publicPath for each:
/ development
/ stage
/ production
For this I followed the tutorial Here, but I am still confused about how to deploy with a different env for each environment.
Usually I use these commands:
Staging
npm run build -- --mode stage
Production
npm run build -- --mode production
Here is an example of the env that I made:
# Environment Local
NODE_ENV=development
BASE_URL = /development/
VUE_APP_TITLE=Website (development)
VUE_APP_END_POINT='http://localhost:8000/api/v1/'
VUE_APP_CLIENT_ID = 12341
VUE_APP_CLIENT_SECRET = 'asdASD1123s'
VUE_APP_SCOPE = '*'
VUE_APP_BASE_URL_LINK = 'http://localhost:8080'
VUE_APP_VERSION =
And this is the vue.config.js file that I have:
process.env.VUE_APP_VERSION = require('./package.json').version

module.exports = {
  publicPath: process.env.BASE_URL,
  "transpileDependencies": [
    "vuetify"
  ]
}
Branches that I made:
test
development
master
In gitlab-ci.yml:
build site:
  image: node:6
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:6
  stage: test
  script:
    - npm install --progress=false
    - npm run unit

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/var/www/website/
With the gitlab-ci.yml configuration above I can only deploy the local env.
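What I imagine I need is something like one build job per branch, each passing a different --mode, roughly like this (an untested sketch; the job names and branch mapping are just my guess):
build staging:
  image: node:6
  stage: build
  only:
    - development
  script:
    - npm install --progress=false
    - npm run build -- --mode stage
  artifacts:
    expire_in: 1 week
    paths:
      - dist

build production:
  image: node:6
  stage: build
  only:
    - master
  script:
    - npm install --progress=false
    - npm run build -- --mode production
  artifacts:
    expire_in: 1 week
    paths:
      - dist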
I hope someone is willing to share their knowledge and experience with CI/CD deployment of Vue.js, or point me to a CI/CD tutorial for Vue.js that covers multiple environments.
Thank you very much.
I found a Dockerfile in my project, in case it can help you:
FROM node:12.16.1 AS builder
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
RUN npm run build
FROM nginx:1.15.8
COPY --from=builder /app/dist/ /usr/share/nginx/html
EXPOSE 80:80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Don't forget to use Docker multi-stage builds ;)
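If you need the same image to be built for different environments, one option you could try (just a suggestion, I have not tested it with your setup) is a build argument that selects the Vue CLI mode:
# Sketch: BUILD_MODE is a hypothetical build argument, e.g.
#   docker build --build-arg BUILD_MODE=stage .
FROM node:12.16.1 AS builder
ARG BUILD_MODE=production
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
# Pass the mode through to "vue-cli-service build" via the npm script.
RUN npm run build -- --mode $BUILD_MODE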

Bitbucket Pipeline with docker-compose: Container ID 166535 cannot be mapped to a host ID

I'm trying to use docker-compose inside bitbucket pipeline in order to build several microservices and run tests against them. However I'm getting the following error:
Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID
As of now, my docker-compose.yml looks like this:
version: '2.3'

services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:@postgres/mydb
and my Dockerfile is as follows:
# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./
# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false
# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .
# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]
And in my bitbucket-pipelines.yml I define my pipeline as:
image: node:10.15.3

pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
However, this example works when I try to use docker without docker-compose, defining my pipeline as:
pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker
I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community, but I could not find a solution that fixes my broken use case. Any suggestions?
I would try to use an image with docker-compose already installed instead of installing it during the pipeline.
image: node:10.15.3

pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker

definitions:
  services:
    docker:
      image: docker/compose:1.25.4
Try adding this to your bitbucket-pipelines.yml. If it doesn't work, rename docker to customDocker in both the definitions and the services sections. If that doesn't work either, then, since you don't need Node.js in the pipeline directly, try this approach:
image: docker/compose:1.25.4

pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
TL;DR: Start from your base image and check for the ID that is causing the problem using commands in your Dockerfile. Use problem_id = error_message_id - 100000 - 65536 to find the uid or gid that is not supported. Note that chown copies every file it modifies, which inflates your Docker image.
The details:
We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).
Assumption 1: Bitbucket pipelines have definitions for the standard "linux" user ids and group ids.
Reality: Bitbucket pipelines only define a subset of the standard users and groups. Specifically, they do not define group "staff" with gid 50. Your Dockerfile base image may define group staff (in /etc/group), but the Bitbucket pipeline runs in a Docker container without that gid. DO NOT USE
RUN cat /etc/group && cat /etc/passwd
in the Dockerfile to check for ids. Execute these commands as Bitbucket pipeline commands in your script.
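For example, something along these lines in bitbucket-pipelines.yml (a sketch; the step name is illustrative):
- step:
    name: 'check which uids/gids the pipeline defines'
    script:
      # Run directly in the pipeline script, not as RUN lines in the Dockerfile,
      # so you see the ids the build environment actually defines.
      - cat /etc/group
      - cat /etc/passwd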
Assumption 2: It was something we were installing that was breaking the build.
Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.
We were able to find the files by using the relationship between the id in the error message and the id inside the Docker build:
problem_id = error_message_id - 100000 - 65536
and used the computed id value (50) to find the files early in our Dockerfile:
RUN find / -uid 50 -ls
RUN find / -gid 50 -ls
For example:
Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
Final solution (for us):
Adding this command early to our Dockerfile:
RUN chown -R root:root /usr/local/lib/python*
Fixed the Bitbucket pipeline build problem, but it also increases the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.

Error: Cannot find module '/app/__sapper__/build' on Cloud Build

I'm trying to set up an automated Cloud Build for a Sapper project that gets deployed to Cloud Run. However, I'm getting an error on the deploy.
This is my first attempt at a CI workflow, so I'm sure there are multiple things I'm doing wrong.
cloudbuild.yaml
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - kms
      - decrypt
      - --ciphertext-file=.env.enc
      - --plaintext-file=.env
      - --location=global
      - --keyring=jointcreative
      - --key=cloudbuild-env

  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]

  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/$PROJECT_ID"]

  - name: "gcr.io/cloud-builders/npm"
    args: ["ci", "--production"]

  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'jointcreative'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$PROJECT_ID'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'

  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ['deploy']
Dockerfile
FROM mhart/alpine-node:12
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=0 /app .
COPY . .
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD ["node", "__sapper__/build"]
Error logs
The reason you get this error is that you don't build the Sapper application with npm run build.
I published a repository with Sapper deployed to Cloud Run a few minutes ago on Github at https://github.com/mikenikles/sapper-on-cloud-run.
The Dockerfile I use is based on 3 stages to minimize the final image size.
# This stage builds the sapper application.
FROM mhart/alpine-node:12 AS build-app
WORKDIR /app
COPY . .
RUN npm install --no-audit --unsafe-perm
RUN npm run build
# This stage installs the runtime dependencies.
FROM mhart/alpine-node:12 AS build-runtime
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production --unsafe-perm
# This stage only needs the compiled Sapper application
# and the runtime dependencies.
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=build-app /app/__sapper__ ./__sapper__
COPY --from=build-app /app/static ./static
COPY --from=build-runtime /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "__sapper__/build"]
I also recommend the following .dockerignore file to copy only what is necessary for Sapper to run:
/*
!/package.json
!/package-lock.json
!/rollup.config.js
!/src
!/static
In your cloudbuild.yaml, you may want to consider adding the following to the Cloud Run deploy script if you plan on exposing the service publicly:
- 'managed'
- '--allow-unauthenticated'
It looks like you're missing the step (which can be placed into your cloudbuild/ci script, or your Dockerfile), to actually build the application.
Sapper uses Rollup or Webpack to bundle your app and places the output in the __sapper__/build directory. The COPY step in your Dockerfile copies this output into your final container.
Try adding a step which runs npm run build into your process, sometime before the Docker image is built.
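For instance, a rough sketch of what that could look like in cloudbuild.yaml, inserted before the Docker build step (untested; only the two npm steps are new, the rest follows your existing file):
steps:
  # ...kms decrypt step as before...
  - name: "gcr.io/cloud-builders/npm"
    args: ["install"]
  - name: "gcr.io/cloud-builders/npm"
    args: ["run", "build"]   # produces __sapper__/build for the Dockerfile's COPY step
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]
  # ...push and deploy steps as before...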
