I have a Dockerfile for one of my Node.js services that I'm trying to push to my DigitalOcean registry using GitHub Actions.
My Node.js service requires a private package that I host myself on the npmjs registry.
In my Dockerfile, I have an ARG for that:
FROM node:14-slim
ARG NODE_ENV=production
EXPOSE 5000
WORKDIR /usr/src/app
ARG NPM_TOKEN
COPY .npmrc .npmrc
COPY package*.json ./
RUN npm install
RUN rm -f .npmrc
COPY src src
CMD ["npm", "start"]
and the following .npmrc file:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
In GitHub Actions I have two workflows. One for running tests:
name: tests-user-service
on:
  pull_request:
    paths:
      - 'user-service/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Install dependencies & run tests
        run: cd user-service && npm install && npm run test:ci
        env:
          NPM_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }}
and one for building the Docker image and pushing it to the registry:
name: deploy-user-service
on:
  push:
    branches:
      - main
    paths:
      - 'user-service/**'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v2
      - name: Install DigitalOcean Controller
        uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      - name: Set up Docker Builder
        uses: docker/setup-buildx-action@v1
      - name: Authenticate with DigitalOcean Container Registry
        run: doctl registry login --expiry-seconds 600
      - name: Build and Push to DigitalOcean Container Registry
        uses: docker/build-push-action@v2
        with:
          context: ./user-service
          push: true
          tags: registry.digitalocean.com/xxx/xxx:latest
        env:
          NPM_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }}
      - name: Logout from DigitalOcean Container Registry
        run: doctl registry logout
Now, the tests workflow works, so I know the NPM_ACCESS_TOKEN is set correctly.
The deployment workflow, however, fails. It tells me this:
#10 [5/7] RUN npm install
#10 sha256:28b0590a43c14b889983d16b5e375f0156f7fdacc29e32fc3c219bce54e61d69
#10 0.317 Error: Failed to replace env in config: ${NPM_TOKEN}
I tried the following in my Dockerfile:
FROM node:14-slim
ARG NODE_ENV=production
EXPOSE 5000
WORKDIR /usr/src/app
ARG NPM_TOKEN
ENV NPM_TOKEN ${NPM_TOKEN}
COPY .npmrc .npmrc
COPY package*.json ./
RUN npm install
RUN rm -f .npmrc
COPY src src
CMD ["npm", "start"]
and then it fails with this instead:
#10 2.784 npm ERR! code E404
#10 2.790 npm ERR! 404 Not Found - GET https://registry.npmjs.org/@xxx%2fxxx - Not found
#10 2.790 npm ERR! 404
#10 2.790 npm ERR! 404 '@xxx/xxx@^0.0.11' is not in the npm registry.
#10 2.790 npm ERR! 404 You should bug the author to publish it (or use the name yourself!)
So then it can't find the module. I assume this happens because NPM_TOKEN is now set to an empty string or similar, since the same token works properly in the tests workflow.
Any idea what else I could try?
Actually, I figured it out. You have to add build-args to the Build and Push step, and remove the env from there.
So instead of:
- name: Build and Push to DigitalOcean Container Registry
  uses: docker/build-push-action@v2
  with:
    context: ./user-service
    push: true
    tags: registry.digitalocean.com/xxx/xxx:latest
  env:
    NPM_TOKEN: ${{ secrets.NPM_ACCESS_TOKEN }}
it should be this:
- name: Build and Push to DigitalOcean Container Registry
  uses: docker/build-push-action@v2
  with:
    context: ./user-service
    push: true
    tags: registry.digitalocean.com/xxx/xxx:latest
    build-args: NPM_TOKEN=${{ secrets.NPM_ACCESS_TOKEN }}
Build args are a list type (CSV); check the readme:
build-args: |
  NPM_TOKEN=${{ secrets.NPM_TOKEN }}
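For completeness, a short sketch of the Dockerfile side that pairs with this (essentially the second Dockerfile from the question): npm expands ${NPM_TOKEN} in .npmrc from environment variables, so the build-arg has to be promoted to an ENV before npm install runs.
ARG NPM_TOKEN
# Promote the build-arg to an env var so npm can expand ${NPM_TOKEN} in .npmrc
ENV NPM_TOKEN=${NPM_TOKEN}
COPY .npmrc .npmrc
COPY package*.json ./
RUN npm install
# Remove the .npmrc once install is done, as in the original Dockerfile
RUN rm -f .npmrc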
Related
I would like to use the Azure Pipelines "Cache task" to cache npm, and later, when I build my Docker image from the Dockerfile, use that cache to decrease my build time. I rarely, if ever, change my package.json file nowadays, but the Docker image build in Azure Pipelines is very slow. Looking at the logs while building from the Dockerfile, the 'npm install' command makes up most of the build time. I've done some research but can't find such a case...
This is what I've come up with at the moment:
azure-pipeline.yml
- "test"
resources:
- repo: self
pool:
vmImage: "ubuntu-latest"
variables:
tag: "$(Build.BuildId)"
DOCKER_BUILDKIT: 1
npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache#2
inputs:
key: 'npm | "$(Agent.OS)" | package.json'
path: "$(npm_config_cache)"
cacheHitVar: "CACHE_RESTORED"
displayName: Cache npm
- script: npm ci
- task: Docker#2
displayName: Build an image
inputs:
command: "build"
Dockerfile: "**/Dockerfile.test"
arguments: "--cache-from=$(npm_config_cache)" <--- I WOULD LIKE TO USE THE CACHE IN THIS TASK
tags: "$(tag)"
Dockerfile
FROM node:19-alpine as build
WORKDIR /app
COPY package.json .
ENV REACT_APP_ENVIRONMENT=test
RUN npm ci --cache $(npm_config_cache) # <-- This is not right but I would like to do something like this
COPY . .
RUN npm run build
#Stage 2
FROM nginx:1.23.2-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf *
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
So what I would like to do is use the cache task from my azure-pipeline in the next step/task, where I build my Docker image based on the Dockerfile provided above. In my Dockerfile I have the line RUN npm ci --cache $(npm_config_cache), and somehow I would like to reach that cache. Is that even possible? I can't really figure it out, and maybe I'm totally wrong about my approach?
Many thanks!
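One approach that might get close to this (a sketch, not from the original pipeline, and it assumes BuildKit is enabled as in the DOCKER_BUILDKIT: 1 variable above): use a BuildKit cache mount so npm's cache persists across image builds on the agent, instead of trying to pass the pipeline cache path into the build.
# syntax=docker/dockerfile:1
FROM node:19-alpine as build
WORKDIR /app
COPY package.json package-lock.json ./
# BuildKit keeps /root/.npm between builds on this daemon, so npm ci can reuse it
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
Note that this cache lives with the Docker daemon on the build agent rather than in the Azure Pipelines Cache task, so it only helps when builds land on an agent that has built the image before.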
I am using a GitHub Actions pipeline and am trying to run "npm run build" to generate an artifact that will then be used for deployment to production. Right now, my application is loaded from the local /build folder containing the built production application. This folder is populated after I make my changes and manually run "npm run build". I would instead like my pipeline to build the artifact and then deploy the application from that artifact on Heroku. Here is my pipeline code:
# This workflow will do a clean installation of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Node.js CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      # - run: npm ci
      # - run: npm run build --if-present
      - run: cd react-frontend && npm install
      - run: cd react-frontend && npm run build
      # - run: npm test --if-present
No artifact is being built at present
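A sketch of one way to publish the build output as a workflow artifact (the artifact name and path are assumptions based on the react-frontend folder used above): add an upload step after the build.
      - run: cd react-frontend && npm run build
      - name: Upload production build
        uses: actions/upload-artifact@v3
        with:
          name: react-build
          path: react-frontend/build
A deployment job could then fetch it with actions/download-artifact@v3 before pushing to Heroku.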
I have the below Dockerfile:
FROM node:alpine
WORKDIR ./usr/local/lib/node_modules/npm/
COPY .npmrc ./
COPY package.json ./
COPY package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start-prod"]
This file is used in an azure pipeline like so:
variables:
  - name: NPMRC_LOCATION
    value: $(Agent.TempDirectory)

- stage: BuildPublishDockerImage
  displayName: Build and publish Docker image
  dependsOn: Build
  jobs:
    - job: BuildPublishDockerImage
      steps:
        - checkout: self
        - task: DownloadSecureFile@1
          name: npmrc
          inputs:
            secureFile: .npmrc
        - task: npmAuthenticate@0
          inputs:
            workingFile: $(NPMRC_LOCATION)/.npmrc
        - task: Docker@2
          displayName: Build a Docker image
          inputs:
            command: build
            arguments: --no-cache
I know .npmrc should be in that location (I ran a RUN ls in the Dockerfile and it's there).
However when I run it I keep getting this error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/.npmrc" not found: not found
I just want to authenticate to a private npm registry. I'm mystified by this. Grateful for any help.
You can set the registry in the Dockerfile:
npm config set registry <registry-url>
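For example, a minimal sketch of that idea (the registry URL and the NPM_TOKEN build-arg are placeholders, not from the original post):
FROM node:alpine
WORKDIR /usr/src/app
ARG NPM_TOKEN
# Point npm at the registry explicitly instead of relying on a copied .npmrc
RUN npm config set registry https://registry.npmjs.org/
# The auth token still has to reach npm somehow, e.g. via a build-arg
RUN npm config set //registry.npmjs.org/:_authToken ${NPM_TOKEN}
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
CMD ["npm", "run", "start-prod"]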
I am trying to configure Angular with a Docker image and run it with docker-compose.
Here is my Dockerfile:
# Use official node image as the base image
FROM node:16-alpine3.12 as build
# Set the working directory
WORKDIR /usr/local/app
# Add the source code to app
COPY ./ ./
# Install all the dependencies
RUN npm install
# Generate the build of the application
RUN npm run build:prod
# Stage 2: Serve app with nginx server
# Use official nginx image as the base image
FROM nginx:latest
# Copy the build output to replace the default nginx contents.
COPY --from=build /usr/local/app/dist/ /usr/share/nginx/html
#COPY /nginx.conf /etc/nginx/conf.d/default.conf
# Assign permission
# https://stackoverflow.com/questions/49254476/getting-forbidden-error-while-using-nginx-inside-docker
# RUN chown nginx:nginx /usr/share/nginx/html/*
# Expose port 80
EXPOSE 80
EXPOSE 443
It runs correctly if I run docker build and docker run locally. But after pushing to GitHub with the yml below:
name: Docker Image CI For Web
on:
  push:
    tags:
      - 'web-v*'
jobs:
  push_to_registry:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: edward8520/meekou
      - name: Build and push Docker image
        uses: docker/build-push-action@ad44023a93711e3deb337508980b4b5e9bcdc5dc
        with:
          context: ./angular/
          file: ./angular/Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
And then I run docker-compose with the file below:
version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"
    environment:
      - NODE_ENV=production
    volumes:
      - ./meekou-web/:/usr/share/nginx/html
    expose:
      - "80"
      - "443"
It throws a 403 Forbidden error.
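One thing worth checking (an assumption on my part, not stated in the question): the volumes entry mounts a local ./meekou-web/ directory over /usr/share/nginx/html, which hides the build output that was copied into the image; if that host directory is empty, nginx has nothing to serve and can return 403 Forbidden. A sketch of the compose service without that mount:
version: '3.9'
services:
  meekou-web:
    container_name: 'meekou-web-test'
    image: 'edward8520/meekou:web-v0.0.4'
    ports:
      - "8888:80"
      - "8889:443"
    environment:
      - NODE_ENV=production
    # no volume over /usr/share/nginx/html, so the files baked into the image are served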
I'm trying to setup an automated Cloud Build for a sapper project that gets deployed to Cloud Run. However I'm getting an error on the deploy.
This is my first attempt at CI work flow so I'm sure there are multiple things I'm doing wrong.
cloudbuild.yaml
steps:
- name: "gcr.io/cloud-builders/gcloud"
args:
- kms
- decrypt
- --ciphertext-file=.env.enc
- --plaintext-file=.env
- --location=global
- --keyring=jointcreative
- --key=cloudbuild-env
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]
- name: "gcr.io/cloud-builders/docker"
args: ["push", "gcr.io/$PROJECT_ID/$PROJECT_ID"]
- name: "gcr.io/cloud-builders/npm"
args: ["ci", "--production"]
- name: 'gcr.io/cloud-builders/gcloud'
args:
- 'run'
- 'deploy'
- 'jointcreative'
- '--image'
- 'gcr.io/$PROJECT_ID/$PROJECT_ID'
- '--region'
- 'us-central1'
- '--platform'
- 'managed'
- name: "gcr.io/$PROJECT_ID/firebase"
args: ['deploy']
Dockerfile
FROM mhart/alpine-node:12
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=0 /app .
COPY . .
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD ["node", "__sapper__/build"]
Error logs
The reason you get this error is that the Sapper application is never built with npm run build.
I published a repository with Sapper deployed to Cloud Run a few minutes ago on Github at https://github.com/mikenikles/sapper-on-cloud-run.
The Dockerfile I use is based on 3 stages to minimize the final image size.
# This stage builds the sapper application.
FROM mhart/alpine-node:12 AS build-app
WORKDIR /app
COPY . .
RUN npm install --no-audit --unsafe-perm
RUN npm run build
# This stage installs the runtime dependencies.
FROM mhart/alpine-node:12 AS build-runtime
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production --unsafe-perm
# This stage only needs the compiled Sapper application
# and the runtime dependencies.
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=build-app /app/__sapper__ ./__sapper__
COPY --from=build-app /app/static ./static
COPY --from=build-runtime /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "__sapper__/build"]
I also recommend the following .dockerignore file to copy only what is necessary for Sapper to run:
/*
!/package.json
!/package-lock.json
!/rollup.config.js
!/src
!/static
In your cloudbuild.yaml, you may want to consider adding the following to the Cloud Run deploy script if you plan on exposing the service publicly:
- 'managed'
- '--allow-unauthenticated'
It looks like you're missing the step to actually build the application (which can go in your cloudbuild/CI script or your Dockerfile).
Sapper uses Rollup or Webpack to bundle your app and places the output in the __sapper__/build directory. The COPY step in your Dockerfile copies this output into your final container.
Try adding a step which runs npm run build into your process, sometime before the Docker image is built.
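For example, a sketch of how that step could slot into the cloudbuild.yaml (the ordering is an assumption based on the advice above; the npm cloud builder is the same one already used in the question):
steps:
  # ... kms decrypt step as before ...
  - name: "gcr.io/cloud-builders/npm"
    args: ["install"]
  - name: "gcr.io/cloud-builders/npm"
    args: ["run", "build"]
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]
  # ... push and deploy steps as before ...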