unable to prepare context: path " " not found for GitHub Actions runner - node.js

I am trying to build and tag a Docker image in a GitHub Actions runner and I am getting this error from the runner:
unable to prepare context: path " " not found
Error: Process completed with exit code 1.
I have gone through all the similar issues on Stack Overflow and tried their suggestions, but still no way forward.
The interesting thing is that I have other microservices using a similar workflow and Dockerfile that work perfectly fine.
My workflow
name: some-tests
on:
  pull_request:
    branches: [ main ]
jobs:
  tests:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-east-1
      IMAGE_NAME: service
      IMAGE_TAG: 1.1.0
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Create cluster
        uses: helm/kind-action@v1.2.0
      - name: Read secrets from AWS Secrets Manager into environment variables
        uses: abhilash1in/aws-secrets-manager-action@v1.1.0
        id: read-secrets
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
          secrets: |
            users-service/secrets
          parse-json: true
      - name: Build and Tag Image
        id: build-image
        run: |
          # Build a docker container and Tag
          docker build --file Dockerfile \
            --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
            -t $IMAGE_NAME:$IMAGE_TAG .
          echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
      - name: Push Image to Kind cluster
        id: kind-cluster-image-push
        env:
          KIND_IMAGE: ${{ steps.build-image.outputs.image }}
          CLUSTER_NAME: chart-testing
          CLUSTER_CONTROLLER: chart-testing-control-plane
        run: |
          kind load docker-image $KIND_IMAGE --name $CLUSTER_NAME
          docker exec $CLUSTER_CONTROLLER crictl images
Dockerfile:
FROM node:14 AS base
WORKDIR /app
FROM base AS development
COPY .npmrc .npmrc
COPY package.json ./
RUN npm install --production
RUN cp -R node_modules /tmp/node_modules
RUN npm install
RUN rm -f .npmrc
COPY . .
FROM development AS builder
COPY .npmrc .npmrc
RUN yarn run build
RUN rm -f .npmrc
RUN ls -la
FROM node:14-alpine AS production
# Install curl
RUN apk update && apk add curl
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ARG APP_API
# set environmental variables
ENV APP_API=$APP_API
EXPOSE ${PORT}
CMD [ "yarn", "start" ]
I guess the problem is coming from the build command or something. These are the different things I have tried:
I used --file explicitly with a period (.):
docker build --file Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used only a period (.):
docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the Dockerfile (./Dockerfile):
docker build --file ./Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the context (./):
docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG ./
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I have literally exhausted everything I've read on SO.

The problem was basically a whitespace issue, and nothing in the output could show this. Thanks to this GitHub answer.
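For anyone else hitting this message, one common way it happens (a hedged illustration, not necessarily the exact cause in the workflow above) is a stray space after a line-continuation backslash: the backslash then escapes the space instead of the newline, the command ends early, and the lone space is what docker receives as its build-context path.
# Hypothetical reproduction (note the space after the trailing backslash):
# the backslash escapes that space instead of the newline, so the command
# ends here with " " as the build-context argument, and the next line is
# parsed as a separate command.
docker build --file Dockerfile \ 
  -t service:1.1.0 .
# => unable to prepare context: path " " not found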

Related

How can I create a custom Dockerfile with kaniko?

I am creating a pipeline in GitLab and need to make use of the image gcr.io/kaniko-project/executor:debug. My problem is that I cannot use artifacts because of some DNS-related issues, so I need to create a custom Dockerfile that includes git. I have come across a few example Dockerfiles, which I have tested by building the Dockerfile, pushing the image to AWS ECR and then using that image in the GitLab job, but each outputs the same error in the pipeline:
exec /bin/sh: exec format error
Dockerfile:
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3.14.2
RUN apk --update add git
# RUN setcap cap_ipc_lock= /usr/sbin/vault
COPY --from=kaniko /kaniko/ /kaniko/
ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
ENV SSL_CERT_DIR /kaniko/ssl/certs
ENTRYPOINT ["/kaniko/executor"]
GitLab CI job:
build dockerfile:
  stage: build
  image:
    name: $ECR_REGISTRY/$ECR_REPO:0.0.8
    entrypoint: [""]
  before_script:
    - git --version
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${REGISTRY_USERNAME}" "${REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "$ECR_REGISTRY/$ECR_REPO:$nextReleaseVersion"
Can anyone suggest what I am doing wrong?
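For context (not from the original post): exec /bin/sh: exec format error generally means a binary was built for a different CPU architecture than the machine trying to run it, so one hedged thing worth checking is which platform the pushed custom image was actually built for, for example:
# Hedged diagnostic sketch: print the OS/architecture of the custom image;
# a mismatch with the GitLab runner typically produces "exec format error".
docker inspect --format '{{.Os}}/{{.Architecture}}' "$ECR_REGISTRY/$ECR_REPO:0.0.8"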

Azure pipeline not applying label to Docker build container

As the title says, I am trying to apply a label to my Docker build container so I can then reference said container in a later step in my pipeline. The end result I am going for is being able to copy my test results out of the container and then publish and display those results.
azure-pipeline.yml
# Docker
# Build a Docker image
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker
trigger:
- main
- develop
resources:
- repo: self
variables:
  tag: '$(Build.BuildId)'
stages:
- stage: Build
  displayName: Build image
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: default
    steps:
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        arguments: '--build-arg BuildId=$(Build.BuildId)'
        dockerfile: '$(Build.SourcesDirectory)/Dockerfile'
        tags: |
          $(tag)
    - powershell: |
        $id=docker images --filter "label=test=$(Build.BuildId)" -q | Select-Object -First 1
        docker create --name testcontainer $id
        docker cp testcontainer:/testresults ./testresults
        docker rm testcontainer
      displayName: 'Copy test results'
    - task: PublishTestResults@2
      displayName: 'Publish test results'
      inputs:
        testResultsFormat: 'xUnit'
        testResultsFiles: '**/*.trx'
        searchFolder: '$(System.DefaultWorkingDirectory)/testresults'
Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:6.0-bullseye-slim-amd64 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:6.0-bullseye-slim-amd64 AS build
WORKDIR /src
COPY ["Forecast/Forecast.csproj", "Forecast/"]
WORKDIR "/src/Forecast"
RUN dotnet restore "Forecast.csproj"
COPY Forecast/ .
RUN dotnet build "Forecast.csproj" -c Release -o /app/build
FROM build AS publish
WORKDIR /src
RUN dotnet publish "Forecast/Forecast.csproj" --no-restore -c Release -o /app/publish
FROM build as test
ARG BuildId
LABEL test=${BuildId}
WORKDIR /src
COPY ["ForecastXUnitTest/ForecastXUnitTest.csproj", "ForecastXUnitTest/"]
WORKDIR "/src/ForecastXUnitTest"
RUN dotnet restore "ForecastXUnitTest.csproj"
COPY ForecastXUnitTest/ .
RUN dotnet build "ForecastXUnitTest.csproj" -c Release -o /app
RUN dotnet test -c Release --results-directory /testresults --logger "trx;LogFileName=test_results.trx" "ForecastXUnitTest.csproj"
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Forecast.dll"]
I can see that the label hasn't been applied upon inspection of the build steps. In the PowerShell step, specifically the line docker create --name testcontainer $id, the variable $id is empty, which tells me the label is never applied, so I'm not able to get any further.
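(For reference, a hedged way to confirm whether a label made it onto any image at all; the image reference below is a placeholder:)
# Hypothetical check: print the labels baked into a locally built image.
docker inspect --format '{{ json .Config.Labels }}' <image-id-or-tag>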
I ended up figuring out where my issue lay. I had ended up with cached ephemeral container images; with those in place, Docker no longer runs those steps, so even though I added the steps along the way, what had been cached from earlier builds meant I wasn't getting anything from subsequent calls.
On top of this, I had been referring to a previous image with the FROM directive, which was preventing my directory from being found.
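A hedged sketch of what that fix can look like in a script step (the stage name exists in the Dockerfile above, but the tag and flags here are illustrative, not from the original answer): build the test stage explicitly, bypassing the cache, so the LABEL is actually produced, then look it up.
# Run as a pipeline script step so $(Build.BuildId) is expanded by Azure.
# Build only the "test" stage, ignoring cached layers, so its LABEL exists.
docker build --target test --no-cache \
  --build-arg BuildId=$(Build.BuildId) \
  -t forecast-tests:$(Build.BuildId) .
# The label lookup used in the PowerShell step should now return an image id.
docker images --filter "label=test=$(Build.BuildId)" -q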

How to set environment variable for my docker image using azure pipeline variable?

I have an Azure pipeline that I want to use to deploy my Rails app.
The app has a Dockerfile and a docker-compose file.
I am trying to set the RAILS_MASTER_KEY as a secret variable in my pipeline and then reference it as an environment variable in my docker-compose file.
I can confirm that the agent is setting the variable correctly, using echo in the pipeline YAML. However, the environment variable is not getting passed/set properly in my docker-compose file and, ultimately, my Dockerfile.
I have been troubleshooting this for days, reading the Azure docs and Stack Overflow, and it is not working.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#secret-variables
Here is my code:
azure-pipelines.yaml:
steps:
- script: echo "test $(RAILS_MASTER_KEY)"
- script: echo "test $(testvar)"
- task: DockerCompose@0
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: 'de-identified'
    azureContainerRegistry: '{"loginServer": "de-identified", "id": "de-identified"}'
    dockerComposeFile: '**/docker-compose.yml'
    dockerComposeFileArgs: |
      RAILS_MASTER_KEY=$(RAILS_MASTER_KEY)
      testvar=$(testvar)
    action: 'Build services'
  env:
    RAILS_MASTER_KEY: $(RAILS_MASTER_KEY)
docker-compose.yml:
version: "3.3"
services:
web:
build: .
command: command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0' && echo ${RAILS_MASTER_KEY}"
volumes:
- .:/app
ports:
- "3000:3000"
environment:
RAILS_MASTER_KEY: ${RAILS_MASTER_KEY}
testvar: ${testvar}
Dockerfile:
FROM ruby:3.1.0-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libpq-dev \
postgresql-client \
git \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR /app
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
ENV BUNDLER_VERSION 2.3.5
ENV RAILS_ENV production
ENV RAILS_MASTER_KEY ${RAILS_MASTER_KEY}
ENV testvar ${testvar}
ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true
RUN echo "----------------key is 1. ${RAILS_MASTER_KEY}"
RUN echo "----------------key is 11. $( testvar )"
RUN echo "----------------key is 12. $testvar"
RUN echo "----------------key is 2. $[variables.RAILS_MASTER_KEY]"
RUN echo "----------------key is 4. $(RAILS_MASTER_KEY)"
RUN gem install bundler -v $BUNDLER_VERSION
RUN bundle config set --local without 'development test'
RUN bundle install
COPY . .
RUN rm -f /app/tmp/pids/server.pid
EXPOSE 3000
CMD ["bundle", "exec", "rails", "s", "-e", "production", "-b", "0.0.0.0"]
As you can see, I have echo statements left over from trying to debug this, so please ignore them.
Any help/guidance will be appreciated.
Thank you.
From the above snippets, it seems you're using dockerComposeFileArgs to specify environment variables.
One option that comes in handy in similar situations (and for local debugging purposes) is to use ARG in the Dockerfile.
e.g.
FROM ruby:3.1.0-slim
RUN ...
ARG RAILS_MASTER_KEY
ENV RAILS_MASTER_KEY=$RAILS_MASTER_KEY
...
so you'll have the possibility of specifying the RAILS_MASTER_KEY value at build time using secret variables from Azure DevOps.
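For the ARG to actually receive a value when the DockerCompose task builds the service, the compose file also needs to forward it as a build argument; a minimal hedged sketch, assuming the same web service as in the compose file above:
services:
  web:
    build:
      context: .
      args:
        # Populated from the environment that dockerComposeFileArgs sets up.
        RAILS_MASTER_KEY: ${RAILS_MASTER_KEY}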
Here you can find some useful posts about the ENV and ARG keywords:
https://vsupalov.com/docker-build-time-env-values/
https://blog.bitsrc.io/how-to-pass-environment-info-during-docker-builds-1f7c5566dd0e

Management of CI/CD Vue.js deployments

I need help with deploying a Vue.js project using GitLab CI/CD.
In this case I have 3 environment files:
.env.development.local
.env.staging.local
.env.production.local
And I'm using a different baseHref/publicPath for each:
/development
/stage
/production
At this stage I am following the tutorial here,
but I am still confused about how to deploy using a different env for each.
Usually I use the commands:
Staging:
npm run build -- --mode stage
Production:
npm run build -- --mode production
Here is an example of the env file that I made:
# Environment Local
NODE_ENV=development
BASE_URL = /development/
VUE_APP_TITLE=Website (development)
VUE_APP_END_POINT='http://localhost:8000/api/v1/'
VUE_APP_CLIENT_ID = 12341
VUE_APP_CLIENT_SECRET = 'asdASD1123s'
VUE_APP_SCOPE = '*'
VUE_APP_BASE_URL_LINK = 'http://localhost:8080'
VUE_APP_VERSION =
And this is the vue.config.js file that I have:
process.env.VUE_APP_VERSION = require('./package.json').version
module.exports = {
  publicPath: process.env.BASE_URL,
  "transpileDependencies": [
    "vuetify"
  ]
}
Branches that I made:
test
development
master
In gitlab-ci.yml:
build site:
  image: node:6
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:6
  stage: test
  script:
    - npm install --progress=false
    - npm run unit

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/var/www/website/
With the gitlab-ci.yml configuration above I can only deploy the local env.
I hope someone is willing to share their knowledge and experience of CI/CD deployment for Vue.js, or can point me to a CI/CD Vue.js tutorial that covers multiple environments.
Thank you very much.
I found a Dockerfile in my project, if it can help you:
FROM node:12.16.1 AS builder
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
RUN npm run build
FROM nginx:1.15.8
COPY --from=builder /app/dist/ /usr/share/nginx/html
EXPOSE 80:80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Don't forget to use multi-stage Docker builds ;)
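If the different environments matter at image build time, one hedged variation of the builder stage above (not from the original answer; BUILD_MODE is a hypothetical name) is to expose the Vue CLI mode as a build argument so one Dockerfile can produce staging or production bundles:
# Hypothetical variation of the builder stage: pick the Vue CLI mode at build time.
FROM node:12.16.1 AS builder
ARG BUILD_MODE=production
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
RUN npm run build -- --mode $BUILD_MODE

# Built, for example, with:
#   docker build --build-arg BUILD_MODE=stage -t website:stage .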

Bitbucket Pipeline with docker-compose: Container ID 166535 cannot be mapped to a host ID

I'm trying to use docker-compose inside a Bitbucket pipeline in order to build several microservices and run tests against them. However, I'm getting the following error:
Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID
As of now, my docker-compose.yml looks like this:
version: '2.3'
services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:@postgres/mydb
and my Dockerfile is as follows:
# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./
# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false
# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .
# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]
And in my bitbucket-pipelines.yml I define my pipeline as:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
However, this example works when I try to use docker without docker-compose, defining my pipeline as:
pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker
I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community; however, I could not find a solution that fixes my broken use case. Any suggestions?
I would try to use an image with docker-compose already installed instead of installing it during the pipeline.
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
definitions:
  services:
    docker:
      image: docker/compose:1.25.4
Try adding this to your bitbucket-pipelines.yml.
If it doesn't work, rename docker to customDocker in both the definitions and the services sections.
If that doesn't work either, then since you don't need Node.js in the pipeline directly, try this approach:
image: docker/compose:1.25.4
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
TL;DR: Start from your base image and check for the ID that is causing the problem using commands in your Dockerfile. Use problem_id = error_message_id - 100000 - 65536 to find the uid or gid that is not supported. chown copies the files it modifies, inflating your Docker image.
The details:
We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).
Assumption 1: Bitbucket Pipelines have definitions for the standard "linux" user IDs and group IDs.
Reality: Bitbucket Pipelines only define a subset of the standard users and groups. Specifically, they do not define the group "staff" with gid 50. Your Dockerfile base image may define group staff (in /etc/group), but the Bitbucket pipeline runs in a Docker container without that gid. DO NOT USE
RUN cat /etc/group && cat /etc/passwd
in your Dockerfile to check for IDs; execute these commands as Bitbucket pipeline commands in your script instead.
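As a hedged sketch of that advice (not from the original answer), the checks would simply become pipeline script commands:
pipelines:
  default:
    - step:
        script:
          # Inspect the pipeline environment itself, not the Dockerfile base image.
          - cat /etc/group
          - cat /etc/passwd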
Assumption 2: It was something we were installing that was breaking the build.
Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.
We were able to find the files by using the relationship between the ID in the error message and the ID inside the Docker build:
problem_id = error_message_id - 100000 - 65536
We then used the computed id value (50) to find the files early in our Dockerfile:
RUN find / -uid 50 -ls
RUN find / -gid 50 -ls
For example:
Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
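(Applying the same formula to the ID in the question above: 166535 - 100000 - 65536 = 999, so the unmapped owner there would be a uid or gid of 999 inside the image being built.)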
Final solution (for us):
Adding this command early to our Dockerfile:
RUN chown -R root:root /usr/local/lib/python*
This fixed the Bitbucket pipeline build problem, but it also increases the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.
