IBM Cloud DevOps continuous delivery pipeline Node version too low - node.js

We need to build an Angular 6 front-end project on IBM Cloud using the DevOps pipeline, but the project requires at least Node v8, while the IBM Cloud DevOps pipeline only supports v6.7. How can we build our project? Is it possible to upgrade or customize the Node version in the DevOps pipeline environment?

It is true that IBM's DevOps Toolchains currently export Node versions only up to 6.7.0, as detailed here: https://console.bluemix.net/docs/services/ContinuousDelivery/pipeline_deploy_var.html#deliverypipeline_environment
But you are free to install any version of Node. To do so, add the following to your build job and remove any existing export of the pipeline-supplied Node path.
#!/bin/bash
npm config delete prefix    # avoid an npm prefix warning from nvm
# install nvm, then use it to install the Node version the project needs
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 8.9.0
node --version              # verify the active Node version
# build instructions follow
This approach uses Node Version Manager.

A different solution is to use a custom Docker image that provides the Node version required by your service.
To configure your pipeline job this way, select "Custom Docker image" as the builder type and then, in the input field for the container name, specify an image with your Node version, for example node:10.15.2.
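For illustration only, the build script run inside such a job could look like the following minimal sketch (it assumes an Angular project whose package.json defines a build script; the image tag is just an example):
#!/bin/bash
# Runs inside the selected image (for example node:10.15.2), so that
# image's Node and npm are already on the PATH.
node --version    # should report the version baked into the image
npm ci            # install dependencies from package-lock.json
npm run build     # run the project's build script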

Related

Building different versions of angular in same machine at same time

I'm aware that I can select the Node version to use with nvm, but can I build two Angular projects (with different ng versions and Node versions) at the same time without issues? My scenario is a self-hosted build server (Windows) with two agents. Each of these might, at the same time, be in charge of building an Angular app with a different version.
Regards
Sure you can. Instead of running the globally installed ng, run the local one with npx, like this: npx ng build. npx will use the locally installed @angular/cli ng command found under ./node_modules/.bin/ of your project; npx comes installed with npm.
Another option is to add a script in package.json:
"scripts": {
"build": "ng build --prod=true --build-optimizer=true --aot=true",
},
And run it with npm run build.
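As a sketch (the project paths are placeholders), each agent can build its own checkout with the locally installed CLI, even concurrently:
# agent 1, first project checkout
cd /path/to/project-a && npx ng build
# agent 2, second project checkout, running at the same time
cd /path/to/project-b && npx ng build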
As @Andrei stated, it is not possible to use nvm to set the Node version at the beginning of the pipeline, because it would change the version in all open consoles (so if another pipeline with a different Node version were running, it would be affected).
Luckily, I found an easy workaround which does not require installing additional tools or changing package.json:
Download the Node version you want as a zip file
Unzip it to a folder on the agent (like D:\LocalNode\node-v17.8.0-win-x64)
Add the Node path to the PATH environment variable only for the current pipeline
To add the Node path only for the current pipeline, add a PowerShell task as the first task of the pipeline with the following command:
Write-Host ("##vso[task.setvariable variable=Path;]D:\LocalNode\node-v17.8.0-win-x64;$Env:Path")
The rest of the tasks in the pipeline will use the Node version from D:\LocalNode\node-v17.8.0-win-x64.
If you want to create a pipeline for a different Node version, just add that version under D:\LocalNode and use the above command with the right path as the first task of the pipeline. There is no problem if both pipelines run at the same time.

Is it safe to run "Node.js Tool installer task" multiple times on Azure Devops hosted agent?

We have a dedicated server for Azure DevOps hosted agents. The server runs all the pipelines we have. Now we have run into an issue where one project requires Node.js version 8, and another project requires version 10 or 12.
So we can't just update the Node.js installation on the server.
Microsoft offers the Node.js Tool Installer task, but the description says it will add the installed version to the PATH environment variable. We haven't run the installer task yet (we don't want to break the builds).
Has anyone tried installing multiple versions of Node.js on one server? Is it possible? Or is the task only meant to be used on a hosted (short-lived) build agent?
This is fine, and it is also mentioned in the troubleshooting section:
If you're using nvm to manage different versions of Node.js, consider switching to the Node Tool Installer task instead. (nvm is installed for historical reasons on the macOS image.) nvm manages multiple Node.js versions by adding shell aliases and altering PATH, which interacts poorly with the way Azure Pipelines runs each task in a new process. The Node Tool Installer task handles this model correctly. However, if your work requires the use of nvm, you can add the following script to the beginning of each pipeline:
steps:
- bash: |
    NODE_VERSION=12  # or whatever your preferred version is
    npm config delete prefix  # avoid a warning
    . ${NVM_DIR}/nvm.sh
    nvm use ${NODE_VERSION}
    nvm alias default ${NODE_VERSION}
    VERSION_PATH="$(nvm_version_path ${NODE_VERSION})"
    echo "##vso[task.prependPath]$VERSION_PATH"
Then node and other command-line tools will work for the rest of the pipeline job. In each step where you need to use the nvm command, you'll need to start the script with:
- bash: |
    . ${NVM_DIR}/nvm.sh
    nvm <command>
So to sum up, it is fine to use the Node Tool Installer, but if you decide to use nvm, please keep in mind the comments above.
How about using nvm?
It's a good tool for supporting multiple versions of Node.js, and it is widely used.
For example:
nvm install v8
nvm install v10
nvm install v12
# first terminal
nvm use 8
npm install
node node_v8_server.js
# second terminal
nvm use 10
npm install
node node_v10_server.js
# third terminal
nvm use 12
npm install
node node_v12_server.js

IBM Cloud DevOps pipeline deploy with NodeJS and private registry

Using a private registry in connection with the IBM Cloud DevOps pipeline, we've got modules published. In the DevOps pipeline, building is also possible using the following tactic:
#!/bin/bash
export PATH=/opt/IBM/node-v6.7.0/bin:$PATH
npm config set @<scope>:registry <registry-url>
echo "//<registry-url-short>:_authToken=$NPM_TOKEN" >> ~/.npmrc
npm install
This way both public and private modules are found and installed. However, when it comes time to deploy to the Node.js runtime, the npm install is done on the platform side.
How can we make that platform-side install work with the setup above?
Another approach is to package your .npmrc file along with your app when you push it. More info here: https://github.com/cloudfoundry/nodejs-buildpack/issues/79
The approach here is to create a .npmrc as part of your build stage and add it to the root of your artifact folder. In the next stage, when you deploy the app from the artifact folder, your npm configuration will be correctly set as per-project config (see https://docs.npmjs.com/files/npmrc), and the npm install that the Cloud Foundry Node.js buildpack performs will work correctly.
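A minimal sketch of that build-stage step, assuming the artifact folder is the current directory and reusing the placeholders from the question:
#!/bin/bash
# Write a per-project .npmrc into the artifact root so the buildpack's
# own `npm install` can resolve the scoped private packages.
cat > .npmrc <<EOF
@<scope>:registry=<registry-url>
//<registry-url-short>:_authToken=${NPM_TOKEN}
EOF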
One possible way is to have your private modules downloaded into a different directory using npm's postinstall script. Here is a good explanation of how to achieve this:
https://github.com/pmuellr/bluemix-private-packages
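As a rough, hypothetical illustration of that idea (the script name, target directory, and tarball URL are placeholders, not taken from the linked repository):
#!/bin/bash
# postinstall.sh - wired up in package.json as "postinstall": "bash postinstall.sh",
# so npm runs it after the platform-side install of the public packages.
# It pulls the private package from a location the runtime can reach and
# installs it into its own directory.
npm install --prefix ./private_modules https://example.com/artifacts/my-private-module.tgz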

Deployment of node.js app in one step

We just finished developing a Node.js application which includes a RESTful API and an MSSQL database.
I will need to deploy the app on in-house servers of companies running Windows Server 2008 and Windows Server 2012 environments.
What I want to achieve:
The best case is to make the deployment a one-step procedure.
What I'm currently doing:
Clone the project into a directory
Run npm install (the best-case scenario is to have all dependencies in a folder, to avoid problems with versions or npm)
Deploy the db using a script
Start processes with the pm2 process manager
Is there any way to pack all these steps into a single step?
Something alternative to Docker, for example? (I can't use Docker because it is not compatible with most of these OSes.)
Is there any way to pack all these steps into a single step?
Yes. You can write a script that does all of that for you.
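For example, a minimal sketch of such a script (the repository URL, database script, and pm2 config file are placeholders; it assumes Git, Node/npm, and pm2 are already installed and a bash-compatible shell is available, e.g. Git Bash on Windows):
#!/bin/bash
set -e                                # stop on the first failing step
git clone https://example.com/your/repo.git app && cd app
npm install                           # install dependencies
node scripts/deploy-db.js             # deploy the database (placeholder script)
pm2 start ecosystem.config.js         # start the processes with pm2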
Even without writing such a script, you can combine the first two steps easily by making your project globally installable with npm install --global. You could install your project hosted on GitHub just by doing:
npm install -g username/repo
and it will install your project and all of its dependencies.
You could also use a private package on npm or even a private npm registry for that.
You could install both the start scripts and the db deploy scripts that way as well in a single step, because your module can install multiple executables.

gcloud component update fails

I've deployed to VMs running Debian on GCE and have cron scripts that use gcloud commands.
I noticed that gcloud components update returns this error:
ERROR: (gcloud.components.update) The component manager is disabled for this installation
My Mac works fine to update gcloud and add new components.
The built-in gcloud tools that were in the VM image won't update. I have not found out how to enable the component manager.
UPDATED
Now you can use the sudo apt-get install google-cloud-sdk command to install or update the Google Cloud SDK.
You may need to add the Cloud SDK repository on your Linux machine first; see the Cloud SDK installation instructions.
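Roughly, adding the repository and installing looks like this (these commands follow the Debian/Ubuntu instructions current at the time of writing; check the Cloud SDK documentation for the up-to-date steps):
# add the Cloud SDK apt repository and its signing key, then install
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install google-cloud-sdk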
Note: The following workaround should not be used anymore.
The component manager is enabled on the latest images, and the gcloud components update command should be working now.
In case you're still experiencing this issue, use the following command to enable the updater:
sudo sed -i -e 's/true/false/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
You cannot update components using the built-in SDK tools on a Compute Engine instance. However, you can download another local copy of the SDK from https://cloud.google.com/sdk/ (curl https://sdk.cloud.google.com | bash), update your path accordingly to use the new SDK install, and you will have the component manager enabled.
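A rough sketch of that, assuming the installer's default location of ~/google-cloud-sdk:
# install a user-local copy of the SDK alongside the image-provided one
curl https://sdk.cloud.google.com | bash
# put the new copy first on PATH (the installer can also update your shell profile)
export PATH="$HOME/google-cloud-sdk/bin:$PATH"
gcloud components update    # now runs against the user-local install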
Came here while trying to run gcloud components install [x] in a Docker container from google/cloud-sdk and getting the same error (I am probably not the only one in this situation).
Unfortunately, apt-get install google-cloud-sdk (as suggested in the most upvoted answer) didn't help.
But the ugly sed on the config file did the trick. A dirty but efficient fix (for the moment):
RUN sed -i -e 's/"disable_updater": true,/"disable_updater": false,/' /usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/config.json
Building off of Vilas's explanation above: you can't run the updater for the built-in gcloud install. However, you can install a copy of gcloud outside of the package manager and run the updater on that gcloud install.
You can now run sudo apt-get install google-cloud-sdk on the Google Compute Engine default images to update the Cloud SDK.
