I need help deploying a Vue.js project using GitLab CI/CD.
In this case I have three environment files:
.env.development.local
.env.staging.local
.env.production.local
And I'm using a different baseHref / publicPath for each:
/development
/stage
/production
At this stage I am following the tutorial here, but I am still confused about how to deploy using a different env.
Usually I use these commands:
Staging:
npm run build -- --mode stage
Production:
npm run build -- --mode production
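Note that Vue CLI matches env files to the --mode flag by file name, so (if I read the docs right) --mode stage would load .env.stage / .env.stage.local rather than .env.staging.local; the loading rules are roughly:

.env                # loaded in all cases
.env.local          # loaded in all cases, ignored by git
.env.[mode]         # only loaded in the specified mode
.env.[mode].local   # only loaded in the specified mode, ignored by git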
Here is an example of the env file that I made:
# Environment Local
NODE_ENV=development
BASE_URL=/development/
VUE_APP_TITLE=Website (development)
VUE_APP_END_POINT='http://localhost:8000/api/v1/'
VUE_APP_CLIENT_ID=12341
VUE_APP_CLIENT_SECRET='asdASD1123s'
VUE_APP_SCOPE='*'
VUE_APP_BASE_URL_LINK='http://localhost:8080'
VUE_APP_VERSION=
And this is the vue.config.js file that I have:
process.env.VUE_APP_VERSION = require('./package.json').version

module.exports = {
  publicPath: process.env.BASE_URL,
  transpileDependencies: [
    'vuetify'
  ]
}
Branches that I made:
test
development
master
In gitlab-ci.yml:
build site:
  image: node:6
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:6
  stage: test
  script:
    - npm install --progress=false
    - npm run unit

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/var/www/website/
With the gitlab-ci.yml configuration above I can only deploy the local env.
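A minimal sketch of how the build job could be split per environment, assuming the branches above map onto the --mode values (job names and branch mapping are illustrative, not verified against this project):

build staging:
  image: node:6
  stage: build
  only:
    - development
  script:
    - npm install --progress=false
    - npm run build -- --mode stage
  artifacts:
    expire_in: 1 week
    paths:
      - dist

build production:
  image: node:6
  stage: build
  only:
    - master
  script:
    - npm install --progress=false
    - npm run build -- --mode production
  artifacts:
    expire_in: 1 week
    paths:
      - dist

The deploy job can be duplicated the same way, with only: pointing at the matching branch and rsync targeting /var/www/website/stage/ or /var/www/website/production/ on the server.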
I hope someone will share their knowledge and experience with deploying Vue.js via CI/CD, or point me to some CI/CD tutorials for Vue.js with multiple environments.
Thank you very much.
I found a Dockerfile in my project, in case it can help you:
FROM node:12.16.1 AS builder
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
RUN npm run build

FROM nginx:1.15.8
COPY --from=builder /app/dist/ /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Don't forget to use Docker multi-stage builds ;)
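If it helps, one hedged way to reuse that Dockerfile across environments is a build arg feeding the --mode flag (the MODE arg is my own invention, not part of the project):

FROM node:12.16.1 AS builder
# MODE is a hypothetical build arg; defaults to production
ARG MODE=production
RUN mkdir /app
COPY *.json /app/
COPY src /app/src
WORKDIR /app
RUN npm install
RUN npm run build -- --mode $MODE

FROM nginx:1.15.8
COPY --from=builder /app/dist/ /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]

and build per environment with, for example: docker build --build-arg MODE=stage -t website:stage .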
I would like to use the Azure Pipelines "Cache task" to cache npm, and later, when I build my Docker image from the Dockerfile, use that cache to decrease my build time. I rarely (almost never) change my package.json file nowadays, but the Docker image build in Azure Pipelines is very slow. If I look at the logs while building from the Dockerfile, the command 'npm install' makes up most of the build time. I've done some research but can't find such a case...
This is what I've come up with at the moment:
azure-pipeline.yml
trigger:
  - "test"

resources:
  - repo: self

pool:
  vmImage: "ubuntu-latest"

variables:
  tag: "$(Build.BuildId)"
  DOCKER_BUILDKIT: 1
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
  - task: Cache@2
    inputs:
      key: 'npm | "$(Agent.OS)" | package.json'
      path: "$(npm_config_cache)"
      cacheHitVar: "CACHE_RESTORED"
    displayName: Cache npm
  - script: npm ci
  - task: Docker@2
    displayName: Build an image
    inputs:
      command: "build"
      Dockerfile: "**/Dockerfile.test"
      arguments: "--cache-from=$(npm_config_cache)" # <-- I would like to use the cache in this task
      tags: "$(tag)"
Dockerfile
FROM node:19-alpine as build
WORKDIR /app
COPY package.json .
ENV REACT_APP_ENVIRONMENT=test
# This is not right, but I would like to do something like this:
RUN npm ci --cache $(npm_config_cache)
COPY . .
RUN npm run build

# Stage 2
FROM nginx:1.23.2-alpine
WORKDIR /usr/share/nginx/html
RUN rm -rf *
COPY nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
So what I would like to do is use the npm cache from the earlier task in the next step of the azure-pipeline, where I build my Docker image based on the Dockerfile provided above. In my Dockerfile I have the line RUN npm ci --cache $(npm_config_cache); somehow I would like to reach that cache from there. Is that even possible, or am I totally wrong about my approach?
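For what it's worth, a sketch of an alternative that may be simpler (not verified against this exact pipeline): since DOCKER_BUILDKIT: 1 is already set, a BuildKit cache mount persists the npm cache across image builds on the same agent, with no need to pass the pipeline cache path into the build at all:

# syntax=docker/dockerfile:1
FROM node:19-alpine as build
WORKDIR /app
COPY package.json .
ENV REACT_APP_ENVIRONMENT=test
# /root/.npm (npm's default cache dir) persists across builds on this builder
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build

Also note that --cache-from expects an image reference (such as a previously pushed build), not a directory path, so pointing it at $(npm_config_cache) likely won't do what you want.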
Many thanks!
I am trying to build and tag a Docker image in a GitHub Actions runner and am getting this error from the runner:
unable to prepare context: path " " not found
Error: Process completed with exit code 1.
I have gone through all the similar issues on Stack Overflow and applied their fixes, but still no way forward.
The interesting thing is, I have other microservices using a similar workflow and Dockerfile that work perfectly fine.
My workflow
name: some-tests

on:
  pull_request:
    branches: [ main ]

jobs:
  tests:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-east-1
      IMAGE_NAME: service
      IMAGE_TAG: 1.1.0

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Create cluster
        uses: helm/kind-action@v1.2.0

      - name: Read secrets from AWS Secrets Manager into environment variables
        uses: abhilash1in/aws-secrets-manager-action@v1.1.0
        id: read-secrets
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
          secrets: |
            users-service/secrets
          parse-json: true

      - name: Build and Tag Image
        id: build-image
        run: |
          # Build a docker container and tag it
          docker build --file Dockerfile \
            --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
            -t $IMAGE_NAME:$IMAGE_TAG .
          echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"

      - name: Push Image to Kind cluster
        id: kind-cluster-image-push
        env:
          KIND_IMAGE: ${{ steps.build-image.outputs.image }}
          CLUSTER_NAME: chart-testing
          CLUSTER_CONTROLLER: chart-testing-control-plane
        run: |
          kind load docker-image $KIND_IMAGE --name $CLUSTER_NAME
          docker exec $CLUSTER_CONTROLLER crictl images
Dockerfile
FROM node:14 AS base
WORKDIR /app

FROM base AS development
COPY .npmrc .npmrc
COPY package.json ./
RUN npm install --production
RUN cp -R node_modules /tmp/node_modules
RUN npm install
RUN rm -f .npmrc
COPY . .

FROM development AS builder
COPY .npmrc .npmrc
RUN yarn run build
RUN rm -f .npmrc
RUN ls -la

FROM node:14-alpine AS production
# Install curl
RUN apk update && apk add curl
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ARG APP_API
# set environment variables
ENV APP_API=$APP_API
EXPOSE ${PORT}
CMD [ "yarn", "start" ]
I guess the problem is coming from the build command or something. These are the different things I have tried:
I used --file explicitly with a period (.):
docker build --file Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used only a period (.):
docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the Dockerfile (./Dockerfile):
docker build --file ./Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I used a relative path for the context (./):
docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG ./
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
I have literally exhausted everything I've read on SO.
The problem was basically a whitespace issue, and nothing would show it. Thanks to this GitHub answer.
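For anyone hitting the same error: the usual culprit (and, if I recall the linked answer correctly, what it points at) is an invisible trailing space after a line-continuation backslash, which makes the shell pass an escaped space (" ") as the build context and end the command there:

# Broken: a single space follows the backslash below, so the shell sees "\ "
# (an escaped space argument), the command ends, and docker receives " " as
# its build context -- hence: path " " not found
docker build --file Dockerfile \ 
  -t $IMAGE_NAME:$IMAGE_TAG .

# Works: the backslash is the very last character on the line
docker build --file Dockerfile \
  -t $IMAGE_NAME:$IMAGE_TAG .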
I am using a GitLab CI/CD pipeline to deploy my application to an Ubuntu server. I have different .env files for local and dev environments, and they are not part of the git repo (included in .gitignore). How do I get the env variables into my app when it is deployed to the Ubuntu server?
My gitlab-ci.yml:
stages:
  - deploy

cache:
  paths:
    - node_modules/

deploy:
  stage: deploy
  script:
    - npm install
    - sudo pm2 delete lknodeapi || true
    - sudo pm2 start server.js --name lknodeapi
I guess you are looking for this: Create Variables in GitLab. You can create your environment variables in the UI and then change your gitlab-ci.yml like below:
stages:
  - deploy

cache:
  paths:
    - node_modules/

deploy:
  stage: deploy
  script:
    - echo "NGINX_REPO_KEY=$NGINX_REPO_KEY" >> .env
    - npm install
    - sudo pm2 delete lknodeapi || true
    - sudo pm2 start server.js --name lknodeapi
This will create a .env file in the root folder and put your variables in it.
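If the .env has many entries, GitLab also supports CI/CD variables of type "File"; a sketch assuming a file-type variable named ENV_FILE that holds the entire .env content (the variable name is my own):

deploy:
  stage: deploy
  script:
    # a file-type variable expands to the path of a temp file holding its value
    - cp "$ENV_FILE" .env
    - npm install
    - sudo pm2 delete lknodeapi || true
    - sudo pm2 start server.js --name lknodeapi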
I'm trying to use docker-compose inside a Bitbucket pipeline in order to build several microservices and run tests against them. However, I'm getting the following error:
Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID
As of now, my docker-compose.yml looks like this:
version: '2.3'

services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:@postgres/mydb
and my Dockerfile is as follows:
# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./
# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false
# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .
# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]
And in my bitbucket-pipelines.yml I define my pipeline as:
image: node:10.15.3

pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
However, this example works when I try to use docker without docker-compose, defining my pipeline as:
pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker
I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community; however, I could not find a solution for my broken use case. Any suggestions?
I would try to use an image with docker-compose already installed instead of installing it during the pipeline.
image: node:10.15.3

pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker

definitions:
  services:
    docker:
      image: docker/compose:1.25.4
Try adding this to your bitbucket-pipelines.yml. If it doesn't work, rename docker to customDocker in both the definitions and the services sections. If that doesn't work either, then, since you don't need Node.js in the pipeline directly, try this approach:
image: docker/compose:1.25.4

pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
TL;DR: Start from your base image and, with commands in your Dockerfile, check for the ID that is causing the problem. Use problem_id = error_message_id - 100000 - 65536 to find the uid or gid that is not supported. Note that chown copies every file it modifies, inflating your Docker image.
The details:
We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).
Assumption 1: Bitbucket pipelines have definitions for the standard "linux" user ids and group ids.
Reality: Bitbucket pipelines only define a subset of the standard users and groups. Specifically, they do not define group "staff" with gid 50. Your Dockerfile base image may define group staff (in /etc/group), but the Bitbucket pipeline runs in a Docker container without that gid. Do not use
RUN cat /etc/group && cat /etc/passwd
in your Dockerfile to check for ids; execute these commands as Bitbucket pipeline commands in your script.
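For example, a hedged sketch of such a check as a pipeline step (the step name is mine; the image matches the base image we were using, described below):

- step:
    name: 'inspect ids'
    image: tensorflow/tensorflow:2.2.0-gpu
    script:
      - cat /etc/group
      - cat /etc/passwd
      - find / -uid 50 -ls || true
      - find / -gid 50 -ls || true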
Assumption 2: It was something we were installing that was breaking the build.
Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.
We were able to find the files by using the relationship between the id in the error message and the uid/gid seen during the docker build:
problem_id = error_message_id - 100000 - 65536
And we used the computed id value (50) to find the files early in our Dockerfile:
RUN find / -uid 50 -ls
RUN find / -gid 50 -ls
For example:
Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
Final solution (for us): adding this command early in our Dockerfile:
RUN chown -R root:root /usr/local/lib/python*
fixed the Bitbucket pipeline build problem, but it also increases the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.
I'm trying to set up an automated Cloud Build for a Sapper project that gets deployed to Cloud Run. However, I'm getting an error on the deploy.
This is my first attempt at a CI workflow, so I'm sure there are multiple things I'm doing wrong.
cloudbuild.yaml
steps:
  - name: "gcr.io/cloud-builders/gcloud"
    args:
      - kms
      - decrypt
      - --ciphertext-file=.env.enc
      - --plaintext-file=.env
      - --location=global
      - --keyring=jointcreative
      - --key=cloudbuild-env
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/$PROJECT_ID", "."]
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/$PROJECT_ID"]
  - name: "gcr.io/cloud-builders/npm"
    args: ["ci", "--production"]
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - 'run'
      - 'deploy'
      - 'jointcreative'
      - '--image'
      - 'gcr.io/$PROJECT_ID/$PROJECT_ID'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
  - name: "gcr.io/$PROJECT_ID/firebase"
    args: ['deploy']
Dockerfile
FROM mhart/alpine-node:12
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production

FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=0 /app .
COPY . .
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE 8080
CMD ["node", "__sapper__/build"]
Error logs
The reason you get this error is that you don't build the Sapper application with npm run build.
I published a repository with Sapper deployed to Cloud Run a few minutes ago on GitHub at https://github.com/mikenikles/sapper-on-cloud-run.
The Dockerfile I use is based on 3 stages to minimize the final image size.
# This stage builds the sapper application.
FROM mhart/alpine-node:12 AS build-app
WORKDIR /app
COPY . .
RUN npm install --no-audit --unsafe-perm
RUN npm run build
# This stage installs the runtime dependencies.
FROM mhart/alpine-node:12 AS build-runtime
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production --unsafe-perm
# This stage only needs the compiled Sapper application
# and the runtime dependencies.
FROM mhart/alpine-node:slim-12
WORKDIR /app
COPY --from=build-app /app/__sapper__ ./__sapper__
COPY --from=build-app /app/static ./static
COPY --from=build-runtime /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "__sapper__/build"]
I also recommend the following .dockerignore file to copy only what is necessary for Sapper to run:
/*
!/package.json
!/package-lock.json
!/rollup.config.js
!/src
!/static
In your cloudbuild.yaml, you may want to consider adding the following to the Cloud Run deploy script if you plan on exposing the service publicly:
- 'managed'
- '--allow-unauthenticated'
It looks like you're missing the step (which can be placed in your cloudbuild/CI script or your Dockerfile) that actually builds the application.
Sapper uses Rollup or Webpack to bundle your app and places the output in the __sapper__/build directory. The COPY step in your Dockerfile copies this output into your final container.
Try adding a step which runs npm run build to your process, sometime before the Docker image is built.
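For instance, a minimal sketch of what that could look like in the cloudbuild.yaml above, placed before the docker build step (assuming the npm builder is acceptable for installing dependencies too):

  - name: "gcr.io/cloud-builders/npm"
    args: ["install"]
  - name: "gcr.io/cloud-builders/npm"
    args: ["run", "build"]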