I hope you are doing well! I'm facing a problem when trying to deploy using a Bitbucket pipeline.
The project is a React 18.2.0 project, and its files live in the frontend folder.
bitbucket-pipelines.yml
image: atlassian/default-image:3

# Workflow Configuration
pipelines:
  branches:
    staging:
      - parallel:
          - step:
              name: Build and Test
              script:
                - npm install --prefix ./frontend/ --legacy-peer-deps
                - npm audit fix --force --prefix ./frontend/
                - npm audit fix --force --prefix ./frontend/
                - npm run build --prefix ./frontend/
              artifacts:
                - ./frontend/build/**
          - step:
              name: Deploy to Staging
              deployment: Staging
              script:
                - pipe: atlassian/scp-deploy:0.3.3
                  variables:
                    USER: $USER
                    SERVER: $SERVER
                    REMOTE_PATH: '/var/www/html'
                    LOCAL_PATH: './'
The error that I am not able to solve is related to the LOCAL_PATH value.
I tried some variations:
./frontend/build/* fails with "No such file or directory"
./build/* fails with "No such file or directory"
./ fails with "error: unexpected filename: ."
Thanks in advance for any help!
TL;DR
In your build step, use:
artifacts:
  - frontend/build/**
In the scp deploy, use the same value for the LOCAL_PATH variable:
variables:
  ...
  LOCAL_PATH: 'frontend/build/*'
Explanation
The reason why the usual syntax does not work is that Bitbucket uses glob patterns for its paths. For example, from the documentation on artifacts:
You can use glob patterns to define artifacts. Glob patterns that start with a * will need to be put in quotes. Note: As these are glob patterns, path segments "." and ".." won't work. Use paths relative to the build directory.
That means you don't want or need the leading ./. Check the Bitbucket documentation on scp deployment for a concrete example matching your case.
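Putting both changes together, the relevant part of the pipeline looks like this (a sketch based on the config above, with the audit commands omitted; note the two steps are listed sequentially rather than under parallel, so the deploy step runs after the build and can pick up its artifact):

pipelines:
  branches:
    staging:
      - step:
          name: Build and Test
          script:
            - npm install --prefix ./frontend/ --legacy-peer-deps
            - npm run build --prefix ./frontend/
          artifacts:
            - frontend/build/**
      - step:
          name: Deploy to Staging
          deployment: Staging
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: '/var/www/html'
                LOCAL_PATH: 'frontend/build/*'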
Related
I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy-paste my files there, but I don't get how to move the files into my actual server directory. I've been searching for days for good docs/explanations, so as always, Stack Overflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS and a SaaS GitLab repository, if that info is needed. This is my .gitlab-ci.yml:
image: timbru31/node-alpine-git

before_script:
  - git fetch origin

stages:
  - setup
  - test
  - build
  - deploy

# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/

# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint

Tests:
  stage: test
  script: npx nx run nx-fun:test

Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would SSH into my server, run the build, and then copy the build to the corresponding web directory.
TL;DR: How do I get files from a GitLab Runner to an actual directory on the server?
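One way to do this (a sketch, not an official recipe) is to add a deploy job that copies the build to the server over SSH, using a private key stored in a CI/CD variable. The variable name, server address, and paths below are placeholders:

Deploy to Server:
  stage: deploy
  only:
    refs:
      - master
  script:
    # the node-alpine image has no SSH client, so install one plus rsync
    - apk add --no-cache openssh-client rsync
    # $SSH_PRIVATE_KEY is assumed to be a CI/CD variable holding a key
    # whose public half is in the deploy user's authorized_keys
    - echo "$SSH_PRIVATE_KEY" > /tmp/deploy_key
    - chmod 600 /tmp/deploy_key
    - npx nx build --prod --output-path=dist/
    - rsync -r -e "ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no" dist/ deploy@your-server:/var/www/html/neostax/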
Following suggestions from several articles, we decided to use "npm ci" to install the node dependencies from the package-lock.json file and avoid breaking changes, instead of the "npm install" script.
But after making this change in the .gitlab-ci.yml file, the builds are taking much longer to install the dependencies: the time has increased from 7 minutes to around 23 minutes.
As per the attached screenshot, most of the extra time appears to be spent removing the existing node_modules folder before installation.
Below are some details from the script file:
image: docker:latest

# When using dind, it's wise to use the overlayfs driver for
# improved performance.
variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

stages:
  - test and build
  # - documentation-server
  - deploy

variables:
  GIT_STRATEGY: clone
  # ELECTRON_SKIP_BINARY_DOWNLOAD: 1

build:library:
  image: trion/ng-cli-karma
  stage: test and build
  only:
    - master
    - /^.*/#library_name
  script:
    - echo _auth=${NPM_TOKEN} >> .npmrc
    - mkdir -p dist/core
    - cd dist/core
    - npm init -y
    - cd ../..
    - ls -al /hugo
    - npm ci
Any help or suggestions to fix this issue would be really appreciated.
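One common mitigation (a suggestion on my part, not from the original post) is to cache npm's download cache instead of node_modules: npm ci always deletes node_modules before installing, but with a warm cache it reinstalls from disk rather than from the network:

build:library:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/
  script:
    # npm ci still removes node_modules, but pulls packages from the
    # project-local .npm cache instead of re-downloading them
    - npm ci --cache .npm --prefer-offline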
I'm setting up the CI for an existing Express server project that lives in my repo's backend/core folder, starting with just basic setup and linting. I was able to get npm install and linting to work, but I want to cache the dependencies so that it doesn't take 4 minutes to install them on each push.
I used the caching scheme they describe here, but it still seemed to run the full install each time; or, if it was using cached dependencies, it still installed grpc each time, which took a while. Any ideas what I can do?
My config.yml for reference:
# Use the latest 2.1 version of CircleCI pipeline process engine. See: https://circleci.com/docs/2.0/configuration-reference

# default executors
executors:
  core-executor:
    docker:
      - image: 'cimg/base:stable'

commands:
  init_env:
    description: initialize environment
    steps:
      - checkout
      - node/install
      - restore_cache:
          keys:
            # when lock file changes, use increasingly general patterns to restore cache
            - node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}
            - node-v1-{{ .Branch }}-
            - node-v1-
      - run: npm --prefix ./backend/core install
      - save_cache:
          paths:
            - ~/backend/core/usr/local/lib/node_modules # location depends on npm version
          key: node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}

jobs:
  install-node:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - run: node --version
      - run: pwd
      - run: ls -A
      - run: npm --prefix ./backend/core install
  lint:
    executor: core-executor
    steps:
      - init_env
      - run: pwd
      - run: ls -A
      - run: ls backend
      - run: ls backend/core -A
      - run: npm --prefix ./backend/core run lint

orbs:
  node: circleci/node@4.1.0

version: 2.1

workflows:
  test_my_app:
    jobs:
      # - install-node
      - lint
      #     requires:
      #       - install-node
I think the best thing to do is to use npm ci, which is faster. The best explanation of this is here: https://stackoverflow.com/a/53325242/4410223. Even though it reinstalls every time, it is consistent, which makes it better than caching. When using it, I am unsure what the point of continuing to use a cache in your pipeline is, although caching still seems to be recommended with npm ci.
However, the best way to do this is to just use the node orb you already have in your config. A single - node/install-packages step will do all that work for you: it can replace your restore_cache, npm install, and save_cache steps. You can see all the steps it performs here: https://circleci.com/developer/orbs/orb/circleci/node#commands-install-packages. Just open the command source and look at the steps on line 71.
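For example, the lint job could be reduced to something like this (a sketch; app-dir and cache-version are documented parameters of install-packages):

jobs:
  lint:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - node/install-packages:
          app-dir: backend/core
          cache-version: v1
      - run: npm --prefix ./backend/core run lint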
I use GitLab CI to test, compile, and deploy a small Go application, but the stages take longer than necessary because they have to fetch all of the dependencies every time.
How can I keep the Go dependencies between two stages (test and build)?
This is part of my current gitlab-ci config:
test:
  stage: test
  script:
    # get dependencies
    - go get github.com/foobar/...
    - go get github.com/foobar2/...
    # ...
    - go tool vet -composites=false -shadow=true *.go
    - go test -race $(go list ./... | grep -v /vendor/)

compile:
  stage: build
  script:
    # getting the same dependencies again
    - go get github.com/foobar/...
    - go get github.com/foobar2/...
    # ...
    - go build -race -ldflags "-extldflags '-static'" -o foobar
  artifacts:
    paths:
      - foobar
As mentioned by Yan Foto, you can only use paths within the project workspace. But you can move $GOPATH to be inside your project, as suggested by the extrawurst blog.
test:
  image: golang:1.11
  cache:
    paths:
      - .cache
  script:
    - mkdir -p .cache
    - export GOPATH="$CI_PROJECT_DIR/.cache"
    - make test
This is a pretty tricky task, as GitLab does not allow caching outside the project directory. A quick and dirty workaround would be to copy the contents of $GOPATH under some directory inside the project (say _GO), cache it, and copy it back to $GOPATH at the start of each stage:
after_script:
  - cp -R $GOPATH ./_GO || :

before_script:
  - cp -R _GO $GOPATH

cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - _GO/
WARNING: This is just a (rather ugly) workaround and I haven't tested it myself; it is only meant to sketch a possible solution.
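As a side note, with Go modules the same idea is usually applied to the module cache instead: keep GOPATH inside the project and cache pkg/mod. A minimal sketch (the image tag and job layout are assumptions, not from the original answers):

test:
  image: golang:1.13
  variables:
    GOPATH: $CI_PROJECT_DIR/.go
  cache:
    paths:
      # Go downloads modules into $GOPATH/pkg/mod, which now lives
      # inside the project and can therefore be cached by GitLab
      - .go/pkg/mod/
  script:
    - go test ./...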
I have this pipeline file to unit test my project:
image: jameslin/python-test

pipelines:
  default:
    - step:
        script:
          - service mysql start
          - pip install -r requirements/test.txt
          - export DJANGO_CONFIGURATION=Test
          - python manage.py test
but is it possible to switch to another Docker image to deploy?
image: jameslin/python-deploy

pipelines:
  default:
    - step:
        script:
          - ansible-playbook deploy
I cannot seem to find any documentation saying either Yes or No.
You can specify an image for each step, like this:

pipelines:
  default:
    - step:
        name: Build and test
        image: node:8.6
        script:
          - npm install
          - npm test
          - npm run build
        artifacts:
          - dist/**
    - step:
        name: Deploy
        image: python:3.5.1
        trigger: manual
        script:
          - python deploy.py
Finally found it:
https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_stepstep(required)
step (required): Defines a build execution unit. Steps are executed in the order in which they appear in the pipeline. Currently, each pipeline can have only one step (one for the default pipeline and one for each branch). You can override the main Docker image by specifying an image in a step.
I have not found any information saying yes or no either. Since the image can be configured with all the languages and technology you need, I would suggest this method:
Create your Docker image with all the utilities you need for both the default and deployment steps.
Use the branching method they show in their examples: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_branchesbranches(optional)
Use shell scripts or other scripts to run the specific tasks you need, for example:
image: yourusername/your-image

pipelines:
  branches:
    master:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for master"
            - chmod +x your-task-configs.sh # necessary to get the shell script to run in BB Pipelines
            - ./your-task-configs.sh
    feature/*:
      - step:
          script: # Modify the commands below to build your repository.
            - echo "Starting pipelines for feature/*"
            - npm install
            - npm install -g grunt-cli
            - npm install grunt --save-dev
            - grunt build
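For reference, your-task-configs.sh is whatever sequence of tasks you need to run; a short hypothetical example:

#!/bin/sh
# your-task-configs.sh (hypothetical example): run this branch's tasks
set -e
npm install
npm test
npm run build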