What's the easiest way to run a Node.js script as a step in Azure DevOps Pipeline (release step)?
I tried a few approaches and it seems like this is not as straightforward as I'd like it to be. I'm looking for a general approach idea.
My last attempt was adding the script to the source repository under tools/script.js. I need to run it in a release pipeline (after the build), from which I can't access the repo directly, so I added the entire repo as a second build artifact. Now I can reach the script file from the release agent, but I haven't found a way to actually run the script; there seems to be no general option to run a Node.js script on an agent.
Option 1: Visual Pipeline Editor
1) Create the script file you want to run in your build/release steps
You can add the file to your Azure project repository, e.g. under tools/script.js, and add any node modules it needs to run to your package.json, for example:
npm install --save-dev <module>
Commit and push, so your changes are online and Azure can see them.
2) Add your repo as an artifact for your release pipeline
You can skip this for build pipelines as they already have access to the repository.
3) Edit your release pipeline to set up the environment
Add a step to make sure the correct Node version is on the agent (Node.js Tool Installer).
Add a step to install all required node modules (npm).
4) Add your node script step
Use the Bash step to run your node script; make sure the working directory is set to the root of the project (where package.json is located).
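For example, assuming the layout from step 1 (the script at tools/script.js), the body of the Bash step might be as simple as:

# hypothetical inline script for the Bash step; run from the project root so installed node modules are resolvable
node tools/script.js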
Option 2: YAML
You have a script/shell step where you can execute custom commands; use that to achieve the goal. Agents have Node installed on them. One thing you might need to do is use the Node version picker step (Node.js Tool Installer) to get the right Node version for your script.
Example:
trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- checkout: self
  persistCredentials: true
  clean: true

- bash: |
    curl $BEDROCK_BUILD_SCRIPT > build.sh
    chmod +x ./build.sh
  displayName: My script download
  env:
    BEDROCK_BUILD_SCRIPT: https://url/yourscript.sh

- task: ShellScript@2
  displayName: My script execution
  inputs:
    scriptPath: build.sh
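If the goal is to run a Node.js script rather than a downloaded shell script, a hedged variant of the steps section might look like this (the Node version and the script path are assumptions, not something prescribed by Azure):

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '16.x'       # assumed version; pick whatever your script needs
- script: |
    npm install
    node tools/script.js      # hypothetical path, as in Option 1
  displayName: Run Node.js script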
Related
I'm struggling to make a simple CI/CD pipeline with dependencies between stages work. This is my .gitlab-ci.yml file:
image: gcc

stages:
  - build
  - test

build_app:
  stage: build
  script:
    - make
    - make install

test_app:
  stage: test
  script:
    - run app ...
  dependencies:
    - build_app
So the build stage compiles the application and installs it under /usr/local/bin/. The above example fails because the test stage does not find the executable, even though it is stated as dependent on the build step (it seems the binary is not attached by default).
If I define /usr/local/bin/app as an artifact it will also fail, because these must be relative and child of $CI_PROJECT_DIR (reference).
So I'm now trying some modifications, but I feel I do not really understand what is going on. I finally tried attaching the compiled file (APP) in the repository directory as an artifact (without using make install) and calling that binary for testing (e.g. ./APP instead of APP). This works, but I feel like I had to give up the make install instruction, and that there is probably a much better and more straightforward way to implement this.
Is there a recommended way to perform this task?
cache won't work either; it is subject (though not documented) to the same limitations as artifacts.
These are both security measures, since they would otherwise provide access to the entire filesystem the gitlab-ci-runner is installed on.
The solution would be to install the binary in a path relative to your build directory instead of /usr/local/bin/app and use artifacts.
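A minimal sketch of that idea, assuming the Makefile honors a DESTDIR override (the staging path and binary location are assumptions):

build_app:
  stage: build
  script:
    - make
    - make install DESTDIR=$CI_PROJECT_DIR/staging   # assumes the Makefile supports DESTDIR
  artifacts:
    paths:
      - staging/

test_app:
  stage: test
  script:
    - ./staging/usr/local/bin/app ...                # run the installed binary from the artifact
  dependencies:
    - build_app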
I am fairly new to GitLab CI and I've been trying different approaches to use the node_modules directory in my entire pipeline. From what I've read in the official docs, cache and artifacts seem to be valid approaches to pass on files between jobs:
cache is used to specify a list of files and directories which should
be cached between jobs. You can only use paths that are within the
project workspace.
However, my issue with the caching method is that the node_modules would be persisted between pipelines by default:
cache can be set globally and per-job.
from GitLab 9.0, caching is enabled and shared between pipelines and jobs by default.
I do not want to persist the node_modules between pipelines. What I actually want is to trigger a fresh install with npm in my setup stage and then allow all further jobs in the pipeline to use these modules. Hence, I started using artifacts instead of cache, which is described similarly:
artifacts is used to specify a list of files and directories which
should be attached to the job after success. [...]
The artifacts will be sent to GitLab after the job finishes
successfully and will be available for download in the GitLab UI.
The dependency feature should be used in conjunction with artifacts
and allows you to define the artifacts to pass between different jobs.
The artifact-dependency method seems to be usable in my case. However, both cache and artifacts are extremely inefficient and slow. The node_modules are installed and usable, but the entire directory then gets uploaded somewhere and is re-downloaded between each job. (I would really love to know what happens here... Where do the modules go?)
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime? I do not want to keep the node_modules after all jobs are finished so they don't need to be uploaded or downloaded anywhere.
Sample pipeline configuration file to reproduce the behavior:
image: node:lts

stages:
  - setup
  - build
  - test

node:
  stage: setup
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/

build:
  stage: build
  script:
    - npm run build
  dependencies:
    - node

test:
  stage: test
  script:
    - npm run lint
    - npm run test
  dependencies:
    - node
Where do the modules go?
By default, artifacts are saved on the main GitLab machine:
/var/opt/gitlab/gitlab-rails/shared/artifacts
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime?
There are some options that you can try:
Merge the setup and build stages into one stage (see the sketch after this list of options).
Use a local npm cache on the builder machines for faster npm install times, or use a private npm proxy registry (for example, Nexus or Artifactory).
Check whether the main GitLab machine and the builders are on the same network, so that the upload/download is faster.
Consider packaging your build in Docker; you get reusable Docker images between your GitLab stages (of course, there is the overhead of pushing the images to a Docker registry).
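A minimal sketch of the first option (merging setup and build), assuming the same npm scripts as in the question; the dist/ path is an assumption about the build output:

image: node:lts

stages:
  - build
  - test

build:
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - dist/           # hypothetical build output
      - node_modules/   # still needed by the test job below

test:
  stage: test
  script:
    - npm run lint
    - npm run test
  dependencies:
    - build

This does not avoid uploading node_modules entirely, but it removes one full upload/download cycle compared to a separate setup stage.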
I've configured a build pipeline, and upon successful CI completion it triggers the release artifact, i.e. the release pipeline. Now in the release pipeline I want to run integration tests. The solution build itself is failing.
Git repository: Git repo link
Build CI pipeline and release CD pipeline: (screenshots omitted)
We are running tests in the release pipeline.
The reason for that is that we want to do system tests with the newly released code.
In your pipeline the tests come earlier, so it would be better to have them in the build pipeline.
The way we run .NET Core tests in a release is done in two steps (a hedged YAML sketch follows below):
Publish folder with test project into artifacts
In release pipeline add two .NET Core steps
Command: restore, Path: path to your test.csproj
Command: test, Path: path to your test.csproj, Arguments: --no-build -c Release
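For reference, a hedged YAML-style rendering of those two steps using the .NET Core CLI task; the path to the published test project is an assumption:

- task: DotNetCoreCLI@2
  displayName: Restore test project
  inputs:
    command: restore
    projects: '$(System.DefaultWorkingDirectory)/**/YourTests.csproj'   # hypothetical path within the artifact

- task: DotNetCoreCLI@2
  displayName: Run tests
  inputs:
    command: test
    projects: '$(System.DefaultWorkingDirectory)/**/YourTests.csproj'   # hypothetical path within the artifact
    arguments: '--no-build -c Release'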
We're using VSTS to build and release our front end code (JS + WebPack)
We now have 2 separate builds for Dev and Test.
Build tasks:
Get sources
npm install
npm build dev
Archive dist files
Copy Publish Artifact: drop
(+release pipelines)
In the "Triggers" section in VSTS, it is posible to listen to multiple branches.
It seems unnecessary to have to so similar builds (?) when we have individual release pipelines.
The only different is step 3 (npm build dev and npm build test)
My question is: Is it possible to dynamically at build time determin the build environment based on source branch that triggered the build? And dynamically set arg in step 3?
Sure, you can add a PowerShell task to check the source branch (using a built-in variable such as Build.SourceBranch), then add or modify a variable through Logging Commands (e.g. Write-Host "##vso[task.setvariable variable=currentEnv;]Dev").
After that you can use that variable (currentEnv) in npm task (e.g. Command and arguments: run $(currentEnv))
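A minimal PowerShell sketch of that idea; the branch names and the currentEnv values are assumptions:

# inline PowerShell task: map the triggering branch to a build environment
$branch = "$(Build.SourceBranch)"
if ($branch -eq "refs/heads/master") {
    Write-Host "##vso[task.setvariable variable=currentEnv]test"
}
else {
    Write-Host "##vso[task.setvariable variable=currentEnv]dev"
}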
Thank you @starian :+1:
Ended up creating a branch selector shell script (.sh)
The script
VSTS Build tasks
VSTS Triggers (default development)
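A hedged sketch of what such a branch selector script (.sh) might look like; the branch-to-environment mapping is an assumption:

#!/bin/bash
# map the triggering branch to the npm build target, defaulting to development
case "$BUILD_SOURCEBRANCHNAME" in
  master) echo "##vso[task.setvariable variable=currentEnv]test" ;;
  *)      echo "##vso[task.setvariable variable=currentEnv]dev"  ;;
esac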
I have my .gitlab-ci.yml file set up in the typical three stages: test, build, deploy. During the build stage, I run a command that compiles my project and puts it in a tarball. The build stage appears to execute successfully because it moves on to the deploy stage, but the deploy stage then says it can't find the tarball. Is it in another directory? What happened to it? Thanks.
For each job, gitlab-ci cleans the build folder, so the output files of the build stage are not available in the deploy stage.
You need to rebuild your project in the deploy stage as well.
The "stages" are only useful for ordering your jobs, i.e. to avoid attempting a deploy if the build failed.
EDIT:
Since GitLab 8.6, this is possible using the dependencies feature.
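A minimal sketch of that, assuming the build produces a tarball in the project directory (the file name and deploy command are hypothetical):

build:
  stage: build
  script:
    - make
    - tar czf app.tar.gz output/        # hypothetical packaging step
  artifacts:
    paths:
      - app.tar.gz

deploy:
  stage: deploy
  script:
    - ./deploy.sh app.tar.gz            # hypothetical deploy command
  dependencies:
    - build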
I was surprised to see the same behaviour (on GitLab 8.4).
I use cmake to create makefiles, then make to build, and then make test to run the test. I run all these in a build/ directory.
I don't want to repeat myself, and I want to easily identify which steps are failing. As such, I've created different gitlab-ci stages: cmake, make, test, etc. I then tell gitlab-ci to keep the build directory using the cache option:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - build/
I think that the key option will keep the same build directory for all stages acting on the same branch. See the gitlab-ci doc here: http://doc.gitlab.com/ce/ci/yaml/README.html#cache
EDIT: Don't use the cache for this! GitLab implemented reusable artifacts between stages in 8.4: https://gitlab.com/gitlab-org/gitlab-ce/issues/3423
The CI runners will have to be adapted to support this. See: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/336