How to use Bazel in Azure Pipelines?

I tried to set up an Azure Build Pipeline that uses Bazel (0.26.0).
My pipeline YAML definition file looks like this:
trigger:
- master

pool:
  vmImage: 'windows-2019'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    bazel version
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'
Currently, I am only trying to find out which Bazel version is installed by calling bazel version, but Azure DevOps reports:
'bazel' is not recognized as an internal or external command,
operable program or batch file.
Cmd.exe exited with code '9009'.
I wonder how I can install and run Bazel in an Azure pipeline - any hints on this?
It seems that this project got it working, but I do not understand how.

Update in case anyone comes across this thread as of 2022:
Bazel is now installed on Microsoft-hosted agents for macOS-latest, ubuntu-latest, and windows-latest.
Try running this pipeline; you'll be able to see which version of Bazel is installed on each OS.
strategy:
  matrix:
    linux:
      imageName: ubuntu-latest
    mac:
      imageName: macOS-latest
    windows:
      imageName: windows-latest

pool:
  vmImage: $(imageName)

steps:
- script: bazel version
  displayName: Show bazel version

You got this error because you are using a Microsoft-hosted agent, and Bazel is not installed on those agents. In the example you linked, they use a self-hosted (private) agent and install Bazel on the agent machine themselves. You have two options:
1) Install a self-hosted agent on your private machine and install Bazel on that machine.
2) Install Bazel during the build pipeline with Chocolatey (a simple script task):
choco install bazel
Once it is installed, you can use it in subsequent steps.
P.S. I tried to install via choco and got an error, but Bazel was in fact installed and bazel version produced results in the next step, so set continueOnError: true on the installation task. (The error is in the Python step of the package; if your project does not use Python, it is fine.)
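A minimal sketch of such an install step, assuming the Chocolatey package name bazel; the step names are my own:
steps:
- script: choco install bazel --yes
  displayName: 'Install Bazel via Chocolatey'
  continueOnError: true  # the package's Python step may fail even though Bazel installs fine
- script: bazel version
  displayName: 'Verify the Bazel install'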

This is how I install Bazel in an Azure Pipeline on a windows-2019 image:
steps:
- script: |
    echo 'Install Bazel via Choco'
    choco install bazel
  displayName: 'Install dependencies'
- script: |
    bazel version
  displayName: 'Show bazel version'

In the meantime, Bazel and Bazelisk come preinstalled. It seems Bazel is taking over the world.
Tested and works with: macOS-10.15, macOS-11, macOS-12, ubuntu-20.04, ubuntu-22.04, windows-2019, windows-2022

Related

Selfhosted Azure DevOps Agent running as NetworkService not able to complete npm install and npm build

The agent is not able to build and generate the build folder for a React project on the Windows platform, with both Microsoft-hosted and self-hosted agents, but it works fine on Ubuntu.
This is the Linux YAML file with the Microsoft-hosted agent:
https://gist.github.com/yogeswaran-gnrgy/0354d455e6c85d387281eb75d1a326f1
This is the Windows YAML file with the Microsoft-hosted agent:
https://gist.github.com/yogeswaran-gnrgy/816b9f06dbe0039c07ad1293d2fce141
This is the log generated during the build step using the Microsoft-hosted agent:
https://gist.github.com/yogeswaran-gnrgy/acbc3c2a268ea3b514cc423726b0a751
The self-hosted agent has both npm and node installed. What can the problem be?
On Linux:
- script: |
    echo Executing install
    npm install
    echo Executing build
    npm run build
  displayName: 'Building the project'
When run as a plain script, npm install and npm run build work fine only on Linux. For Windows, it must look like this:
- task: Npm@1
  displayName: 'Installing dependencies'
  inputs:
    verbose: false
    customCommand: install
- task: Npm@1
  displayName: 'Building the project'
  inputs:
    command: custom
    verbose: false
    customCommand: run build

Azure pipeline for npm publish does not work as expected

I am a bit new to Azure, and today I am trying to create a pipeline for publishing an npm package to Azure Artifacts.
The issue is that after the pipeline builds successfully, I can see the published package in the artifacts. However, the published package is almost empty.
There is only package.json and readme.md. No dist folder at all.
Here is my Pipeline:
# Node.js
# Build a general Node.js project with npm.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
  displayName: 'Install Node.js'

- script: |
    npm install
    npm run build
    npm publish
  displayName: 'npm install and build and publish'
Also, when I build the project locally and run npm publish, the package is published as it should be, with all files in place.
Is there something I am doing wrong?
Finally I found the issue.
The pipeline definition was actually right, apart from one little thing:
versionSpec: '10.x'
The Node version was incorrect! A pretty old one. The definition was originally copied from one of the official Azure manuals, so the version dated from some really old year. Bumping it fixed everything:
versionSpec: '14.x'
And the build was successful, with all files in their place.
Hope that will be helpful for somebody here.
When publishing packages to npm, you need to authenticate with your credentials. You can run it successfully locally because of the .npmrc file saved on your computer. When running npm publish on CI, that file doesn't exist, which results in an error. Please try the following steps:
1) Generate an automation access token from https://www.npmjs.com/
2) Go to your repo and add a file named ".npmrc" with the content //registry.npmjs.org/:_authToken={your-token-value}
Notice:
It is recommended to set the access token as an environment variable in the pipeline library.
Please use lowercase words as the package's name in package.json; otherwise you will receive a 400 error.
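A minimal sketch of that setup, assuming the token is stored as a secret pipeline variable named NPM_TOKEN (a name of my own choosing):
# .npmrc, committed to the repo; npm expands the environment variable at run time
//registry.npmjs.org/:_authToken=${NPM_TOKEN}

# azure-pipelines.yml publish step; secret variables must be mapped into env explicitly
- script: npm publish
  displayName: 'npm publish'
  env:
    NPM_TOKEN: $(NPM_TOKEN)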

How to use CircleCI Environment Variables in Firebase Functions

I'm new to Continuous Integration and recently I set up my first project in CircleCI.
Unfortunately, it seems like it's not completely working as expected.
I want to deploy my application to Firebase (Hosting and Functions).
Of course I added environment variables to the project in CircleCI.
But Firebase Functions can't access my environment variables, so it runs into errors.
In the functions folder I created a new Node.js application, including the dotenv package, and I'm reading the variables with process.env.CIRCLECI_VARIABLE.
It would be great if someone could give me a hint about what's missing.
config.yml:
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run:
          name: Install packages
          command: yarn install
      - run:
          name: Build project
          command: yarn build
      - run:
          name: Install functions packages
          command: cd ./functions && yarn install
  deploy:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run:
          name: Install packages
          command: yarn install
      - run:
          name: Build project
          command: yarn build
      - run:
          name: Install functions packages
          command: cd ./functions && yarn install
      - run:
          name: Installing Firebase-Tools
          command: yarn add firebase-tools
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"

workflows:
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
I've found the solution.
I didn't know that I had to add the environment variables to the Google Cloud Function itself.
Now everything is working as expected.
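For anyone hitting the same wall, one way to pass a value through is the Functions runtime config via the Firebase CLI; the key someservice.api_key and the variable MY_API_KEY below are illustrative names of my own:
# In the deploy job, copy the CircleCI environment variable into the function's config
./node_modules/.bin/firebase functions:config:set someservice.api_key="$MY_API_KEY" --token "$FIREBASE_TOKEN"
# Inside the function, read it back with functions.config().someservice.api_key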

Zip directory in BitBucket pipeline using image microsoft/dotnet:sdk

In a bitbucket-pipelines.yml BitBucket Pipelines file, I am trying to publish a .NET Core solution, zip it into the correct form to be understood by AWS, and then upload it to S3.
My build is based on the image microsoft/dotnet:sdk.
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet publish MyProj/MyProj.csproj -o ../output
          - 7z a output.zip .\output\*
          - 7z a MyPackage.zip service.zip aws-windows-deployment-manifest.json
This step fails on the first 7z command because 7-Zip isn't installed. What is the best way from the command line to zip these files? Alternatively, is there a different Docker image I should be using?
I'm using Amazon.Lambda.Tools to deploy, and I had a similar issue where I needed to install zip. You could use zip to do it, or install 7z and use that; it just needs a couple of extra apt-get commands.
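For reference, a minimal sketch of script lines replacing the 7z commands with zip (the paths and archive names mirror the question and are illustrative):
- apt-get update && apt-get install -y zip
- cd output && zip -r ../output.zip . && cd ..   # zip the publish output
- zip MyPackage.zip service.zip aws-windows-deployment-manifest.json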
If you use a deployment step you'll also get CI/CD metrics and visuals in BitBucket (this is my config)
image: microsoft/dotnet:sdk

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script:
          - dotnet restore
          - dotnet build
          - dotnet test
    - step:
        deployment: test
        script:
          - dotnet tool install -g Amazon.Lambda.Tools
          - export PATH="$PATH:/root/.dotnet/tools"
          - apt-get update
          - apt-get install zip -y # or install 7z instead
          - dotnet lambda deploy-serverless --region $...... # or manually upload to S3

Build and deploy node app to Openshift using Gitlab CI

I just set up a GitLab instance on DigitalOcean to keep track of versions of some projects, but now that I've read a little about GitLab, I wonder whether I can configure GitLab CI so that each commit automatically triggers a build of the application and, if the build succeeds, deploys it to OpenShift.
I think my .gitlab-ci.yml should look something like this:
stages:
  - build
  - deploy

before_script:
  - npm install

job_build:
  stage: build
  script:
    - grunt build

job_deploy:
  stage: deploy
But I really do not know whether this is valid, nor how to tell GitLab CI that it only has to do a git push to the OpenShift repository.
After much reading and searching I finally found documentation about this [1]. In the end I solved it with the following .gitlab-ci.yml:
stages:
  - build
  - deploy

job_build:
  stage: build
  script:
    - npm install -g grunt-cli
    - npm rebuild node-sass
    - npm install
    - grunt build

job_deploy:
  stage: deploy
  script:
    - apt-get update -yq
    - apt-get install -y ruby-dev rubygems
    - gem install dpl
    - dpl --provider=openshift --user=$OPENSHIFT_USER --password=$OPENSHIFT_PASS --domain=mydomain --app=example
  only:
    - master
The magic happens with a Travis CI library called dpl [2] that supports a lot of providers [3].
[1] http://doc.gitlab.com/ce/ci/deployment/README.html
[2] https://github.com/travis-ci/dpl
[3] https://github.com/travis-ci/dpl#supported-providers
