We're using VSTS to build and release our front-end code (JS + webpack).
We now have 2 separate builds for Dev and Test.
Build tasks:
Get sources
npm install
npm build dev
Archive dist files
Copy Publish Artifact: drop
(+release pipelines)
In the "Triggers" section in VSTS, it is posible to listen to multiple branches.
It seems unnecessary to have two such similar builds when we have individual release pipelines.
The only difference is step 3 (npm build dev vs. npm build test).
My question is: is it possible to dynamically determine the build environment at build time, based on the source branch that triggered the build, and to set the argument in step 3 accordingly?
Sure, you can add a PowerShell task to check the source branch (using a built-in variable, such as Build.SourceBranch), then add or modify a variable through Logging Commands (e.g. Write-Host "##vso[task.setvariable variable=currentEnv;]Dev").
After that you can use that variable (currentEnv) in the npm task (e.g. Command and arguments: run $(currentEnv)).
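As a concrete illustration, a minimal sketch of the inline script for such a PowerShell task (the branch name test and the environment names Dev/Test are assumptions, not part of the original answer):

```powershell
# Derive the target environment from the branch that triggered the build,
# then expose it to later tasks as $(currentEnv) via a logging command.
if ($env:BUILD_SOURCEBRANCH -eq "refs/heads/test") {
    Write-Host "##vso[task.setvariable variable=currentEnv;]Test"
} else {
    Write-Host "##vso[task.setvariable variable=currentEnv;]Dev"
}
```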
Thank you #starian :+1:
Ended up creating a branch selector shell script (.sh)
The script
VSTS Build tasks
VSTS Triggers (default development)
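(The screenshots themselves aren't reproduced here. For illustration only, such a branch selector might look roughly like this; the branch name test and the npm script names are assumptions:)

```sh
#!/bin/bash
# Hypothetical branch selector: map the triggering branch to a build flavor.
# BUILD_SOURCEBRANCHNAME is the environment-variable form of the built-in
# Build.SourceBranchName variable.
if [ "$BUILD_SOURCEBRANCHNAME" = "test" ]; then
  npm run build:test
else
  npm run build:dev
fi
```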
Related
What's the easiest way to run a Node.js script as a step in Azure DevOps Pipeline (release step)?
I tried a few approaches and it seems like this is not as straightforward as I'd like it to be. I'm looking for a general approach idea.
My last attempt was adding the script to the source repository under tools/script.js. I need to run it in a release pipeline (after build), from where I can't access the repo directly, so I have added the entire repo as a second build artifact. Now I can reach the script file from the release agent, but I haven't found a way to actually run the script, there seems to be no option to run a Node.js script on an agent in general.
Option 1: Visual Pipeline Editor
1) Create the script file you want to run in your build/release steps
You can add the file to your Azure project repository, e.g. under tools/script.js and add any node modules it needs to run to your package.json, for example:
npm install --save-dev <module>
Commit and push, so your changes are online and Azure can see them.
2) Add your repo as an artifact for your release pipeline
You can skip this for build pipelines as they already have access to the repository.
3) Edit your release pipeline to set up the environment
Add a step to make sure the correct Node version is on the agent (Node.js Tool Installer).
Add a step to install all required node modules (npm).
4) Add your node script step
Use the Bash step to run your node script; make sure the working directory is set to the root of the project (where package.json is located).
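For illustration, the inline script of that Bash step can be as simple as this (tools/script.js is the path used earlier in this answer):

```sh
# runs with the Node version set up by the tool installer step
node tools/script.js
```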
Option 2: YAML
You have a script/shell step where you can execute custom commands; just use that to achieve the goal. Agents have Node installed on them; one thing you might need to do is use the pick-node-version step (Node.js Tool Installer) to use the right Node version for your script.
Example:
```yaml
trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- checkout: self
  persistCredentials: true
  clean: true

- bash: |
    curl $BEDROCK_BUILD_SCRIPT > build.sh
    chmod +x ./build.sh
  displayName: My script download
  env:
    BEDROCK_BUILD_SCRIPT: https://url/yourscript.sh

- task: ShellScript@2
  displayName: My script execution
  inputs:
    scriptPath: build.sh
```
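If the custom command is a Node script rather than a downloaded shell script, a hedged variant of the same idea (reusing the tools/script.js path from Option 1; the Node version is an assumption) could look like:

```yaml
steps:
- task: NodeTool@0              # Node.js Tool Installer: pick the Node version
  inputs:
    versionSpec: '10.x'
- script: |
    npm install                 # install the script's dependencies
    node tools/script.js        # run the Node script itself
  displayName: Run Node script
```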
I am fairly new to GitLab CI and I've been trying different approaches to use the node_modules directory in my entire pipeline. From what I've read in the official docs, cache and artifacts seem to be valid approaches to pass on files between jobs:
cache is used to specify a list of files and directories which should
be cached between jobs. You can only use paths that are within the
project workspace.
However, my issue with the caching method is that the node_modules would be persisted between pipelines by default:
cache can be set globally and per-job.
from GitLab 9.0, caching is enabled and shared between pipelines and jobs by default.
I do not want to persist the node_modules between pipelines. What I actually want is to trigger a fresh install with npm in my setup stage and then allow all further jobs in the pipeline to use these modules. Hence, I started using artifacts instead of cache, which is described similarly:
artifacts is used to specify a list of files and directories which
should be attached to the job after success. [...]
The artifacts will be sent to GitLab after the job finishes
successfully and will be available for download in the GitLab UI.
The dependency feature should be used in conjunction with artifacts
and allows you to define the artifacts to pass between different jobs.
The artifact-dependency method seems to be usable in my case. However, both cache and artifacts are extremely inefficient and slow. The node_modules are installed and usable, but the entire directory then gets uploaded somewhere and is re-downloaded between each job. (I would really love to know what happens here... Where do the modules go?)
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime? I do not want to keep the node_modules after all jobs are finished so they don't need to be uploaded or downloaded anywhere.
Sample pipeline configuration file to reproduce the behavior:
```yaml
image: node:lts

stages:
  - setup
  - build
  - test

node:
  stage: setup
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/

build:
  stage: build
  script:
    - npm run build
  dependencies:
    - node

test:
  stage: test
  script:
    - npm run lint
    - npm run test
  dependencies:
    - node
```
Where do the modules go?
By default, artifacts are saved on the main GitLab machine:
/var/opt/gitlab/gitlab-rails/shared/artifacts
Is there a better approach to run npm install only once at the beginning of the pipeline and then keep the node_modules in the pipeline during its entire runtime?
There are some options that you can try:
Merge the setup and build stages into one stage (see the sketch after this list).
Use a local npm cache on the builder machines for faster npm install times, or use a private npm proxy registry (for example, Nexus/Artifactory).
Check whether the GitLab main machine and the builders are on the same network, so the upload/download will be faster.
Consider packaging your build in Docker; you will get reusable Docker images between your GitLab stages. (Of course, there is an overhead of uploading the images to a Docker registry.)
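As an example of the first option, a minimal sketch that merges setup and build into a single job, so node_modules is never uploaded between those two (the dist/ path is an assumption; the test job then installs its own modules instead of downloading them):

```yaml
build:
  stage: build
  script:
    - npm install      # install and build in one job: node_modules stays local
    - npm run build
  artifacts:
    paths:
      - dist/          # assumption: only the build output is passed on

test:
  stage: test
  script:
    - npm install      # reinstall here rather than transferring node_modules
    - npm run lint
    - npm run test
```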
AppVeyor does not pass secure environment variables to PR builds, so how can you split the yml file to do different things?
For example, on a PR build I only want to run test_scripts; on the master branch I want it to run build_scripts so as to make artifacts.
I tried
```yaml
branches:
  only:
    - master
```
but I can't seem to run build_scripts specifically there.
Basically, on a merge into master I do a yarn release that builds the exe, but right now a PR build runs both test_scripts and build_scripts.
I'm building a Node project in AppVeyor, specific to Windows.
You can use the APPVEYOR_PULL_REQUEST_NUMBER environment variable in your script logic. For example, IF ($env:APPVEYOR_PULL_REQUEST_NUMBER) will evaluate to false in a non-PR build.
For a full list of built-in environment variables, please look here.
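For example, a minimal appveyor.yml sketch of that check (yarn release comes from the question; the yarn test command and everything else here are assumptions):

```yaml
test_script:
  - yarn test             # runs on every build, PRs included
build_script:
  - ps: |
      # APPVEYOR_PULL_REQUEST_NUMBER is unset in non-PR builds,
      # so the release build only runs on branch builds
      if ($env:APPVEYOR_PULL_REQUEST_NUMBER) {
        Write-Host "PR build - skipping yarn release"
      } else {
        yarn release
      }
```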
I have my .gitlab-ci.yml file set up in the typical three stages: test, build, deploy. During the build stage, I run a command that compiles my project and puts it in a tarball. The build stage appears to execute successfully because it moves on to the deploy stage, but the deploy stage then says it can't find the tarball. Is it in another directory? What happened to it? Thanks.
For each job, gitlab-ci cleans the build folder, so the output files of the build stage are not available in the deploy stage.
You need to rebuild your project in the deploy stage as well.
The "stages" are only useful to order your jobs, i.e. to avoid attempting the deploy stage if the build stage failed.
EDIT:
Since GitLab 8.6, it is possible using the dependencies feature together with artifacts; a sketch follows.
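A minimal sketch of that approach (the job names, the release.tar.gz filename, and the build/deploy commands are assumptions; the point is the artifacts/dependencies pairing):

```yaml
build:
  stage: build
  script:
    - make dist                 # assumption: produces release.tar.gz
  artifacts:
    paths:
      - release.tar.gz          # uploaded when the job succeeds

deploy:
  stage: deploy
  dependencies:
    - build                     # fetch the build job's artifacts
  script:
    - ./deploy.sh release.tar.gz
```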
I was surprised to see the same behaviour (on GitLab 8.4).
I use cmake to create makefiles, then make to build, and then make test to run the test. I run all these in a build/ directory.
I don't want to repeat myself, and I want to easily identify which steps are failing. As such, I've created different gitlab-ci stages: cmake, make, test, etc. I then tell gitlab-ci to keep the build directory using the cache option:
```yaml
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - build/
```
I think that the key option will keep the same build directory for all stages acting on the same branch. See the gitlab-ci doc here: http://doc.gitlab.com/ce/ci/yaml/README.html#cache
EDIT: Don't use the cache for this! GitLab implemented reusable artifacts between stages in 8.4: https://gitlab.com/gitlab-org/gitlab-ce/issues/3423
The CI runners will have to be adapted to support this. See: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/issues/336
I have a TeamCity (4.something) install that creates a .wsp file for deployment to SharePoint. Currently I have to copy the .wsp out of the build artifacts directory and into a little deploy folder I have created. In that folder I run a .bat that deploys the new .wsp to our test server.
What steps can I take to automate this?
Either copy the .bat into the artifacts folder and update the paths etc., or copy from the artifacts folder into the 'deploy' folder and run the .bat from there.
I am a neophyte when it comes to the intricacies (or basics!) of MSBuild and the like... so hand holding is appreciated!
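(For context, and purely as a hypothetical sketch since the question doesn't show the script: a .bat that deploys a .wsp with SharePoint's stsadm tool typically looks something like this; the solution name is made up.)

```bat
rem hypothetical deploy.bat: add and deploy the new solution package
stsadm -o addsolution -filename MySolution.wsp
stsadm -o deploysolution -name MySolution.wsp -immediate -allowgacdeployment
stsadm -o execadmsvcjobs
```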
In more recent versions of TeamCity...
In the build definition you can identify artifacts which can be copied/zipped. Artifacts can then be downloaded manually or referenced from another build (Artifact Dependency).
You can set up a 'build configuration' to do your deployment directly from the artifacts produced by your CI build.
Create a build to do your deployment
Build Step
Run: Executable with parameters
Command executable: .bat file (make sure it is part of the generated CI build artifacts)
Command parameters: whatever parameters your batch file needs
Dependencies
Add New Artifact dependency
Depend on: select the ci build you want to deploy
Get artifacts from: Last successful build
Artifact rules: +:**/*.*
So, given artifacts (like your batch file) are in the CI build... You now have a 'deploy' build. When you run it (manually, or via a Build Trigger) it will copy all the CI build artifacts to its working directory (Artifact Dependency) and then run your batch file to do the deployment.
Pretty slick.
note: just make sure that the account running the TeamCity BuildAgent has permissions to do all the deployment stuff.
Hope this helps somebody as it took me a while to sort this out ;)
I've done this by creating a NAnt task, and then having TeamCity execute the NAnt task. It's more of a pain than it should be. You should be able to do the same as a post-build event with MSBuild.