Deploy Node.js application on Azure Linux VM using Azure release pipeline - node.js

I am creating a CI & CD pipeline for a Node.js application using Azure DevOps.
I deployed the build code to an Azure Linux VM using an Azure release pipeline, where I configured a deployment group job.
In the deployment group I used the Extract Files task to unzip the build files.
The unzip works fine and my code is deployed to this path: $(System.DefaultWorkingDirectory)/LearnCab-Manage(V1.5)-CI (1)/coreservices/*.zip
After that I would like to run the pm2 command using the Azure release pipeline. For this task I added a Bash step to the deployment group job and wrote the commands:
cd $(System.DefaultWorkingDirectory)/LearnCab-Manage(V1.5)-CI (1)/coreservices/*.zip
cd coreservices
pm2 start server.js
But the Bash step does not execute; it fails with exit code 2.

This error is caused by the parentheses in the path on your first line. In bash, unescaped parentheses act as grouping operators, so they cannot be used as ordinary characters in a command.
To solve it, escape each parenthesis with a backslash \:
cd $(System.DefaultWorkingDirectory)/LearnCab-Manage\(V1.5\)-CI \(1\)/coreservices/*.zip
Now \(V1.5\) and \(1\) are interpreted as the literal characters (V1.5) and (1).
Alternatively, you can wrap the whole path in single or double quotes:
cd "$(System.DefaultWorkingDirectory)/LearnCab-Manage(V1.5)-CI (1)/coreservices/*.zip"
Or
cd '$(System.DefaultWorkingDirectory)/LearnCab-Manage(V1.5)-CI (1)/coreservices/*.zip'
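Note also that cd expects a directory, while the path above ends in /*.zip. Assuming the Extract Files task unpacked the archive into the coreservices folder and that server.js sits there, a minimal sketch of the whole Bash step could look like this (the pm2 process name is just an illustrative label):
#!/bin/bash
# $(System.DefaultWorkingDirectory) is expanded by the Azure DevOps agent
# before bash runs; quoting the path keeps the parentheses and space literal.
cd "$(System.DefaultWorkingDirectory)/LearnCab-Manage(V1.5)-CI (1)/coreservices"
# Start the app under pm2, or restart it if that name is already registered.
pm2 start server.js --name coreservices || pm2 restart coreservices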

Related

Gitlab job failure: Cannot overwrite variable Host because it is read-only or constant

Hi people of the internet.
Basically I am unable to run even the simplest job and I keep getting the same error no matter what I put in the .gitlab-ci.yml file. See example below:
Here is the .gitlab-ci.yml file:
stages:
  - test

job1:
  stage: test
  tags:
    - testing
  script:
    - echo "Hello world!"
Here is the output ("?" corresponds to intentionally blacked out information):
Running with gitlab-runner 14.10.0 (c6bb62f6)
on runner_test ????????
Preparing the "shell" executor
00:00
Using Shell executor...
Preparing environment
00:00
Running on LAPTOP-????????...
Getting source from Git repository
00:01
WriteError:
Line |
219 | $HOST="[MASKED]"
| ~~~~~~~~~~~~~~~~~~~~~~
| Cannot overwrite variable Host because it is read-only or constant.
ERROR: Job failed: exit status 1
I know that $HOST is a reserved variable in PowerShell, but I don't see the link between the error and the code. It may have something to do with the configuration of the runner on Windows. Has anyone encountered this error on GitLab before? Or any suggestions on how to debug it?
Here are the steps that I took to install the runner on GitLab for Windows (see https://docs.gitlab.com/runner/install/windows.html):
Create a folder somewhere in the system: C:\GitLab-Runner.
Download the binary for 64-bit and put it into the folder (see https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-amd64.exe).
Run a command prompt as an administrator
Run the following commands:
cd C:\GitLab-Runner
gitlab-runner.exe register
Enter your GitLab instance URL (see GitLab > Settings > CI/CD > Runners > Specific runners)
Enter the token to register the runner (see GitLab > Settings > CI/CD > Runners > Specific runners)
Enter a description for the runner: runner_test for instance
Enter the tags associated with the runner, separated by commas: testing, windows for instance
Provide the runner executor: shell (a non-interactive equivalent of the whole registration is sketched after these steps)
Install GitLab Runner as a service and start it
cd C:\GitLab-Runner
gitlab-runner.exe install
gitlab-runner.exe start
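For reference, the same registration can be scripted in one command; this is only a sketch with a placeholder URL and token, not values from the question:
gitlab-runner.exe register --non-interactive --url "https://gitlab.example.com/" --registration-token "YOUR_TOKEN" --executor "shell" --shell "powershell" --description "runner_test" --tag-list "testing,windows"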
I also had to install the latest version of pwsh in Windows (see gitlab-runner: prepare environment failed to start process pwsh in windows):
Run a command prompt as an administrator
Install the newer pwsh.exe:
winget install Microsoft.PowerShell
Restart the runner
cd C:\GitLab-Runner
gitlab-runner.exe restart
This issue was due to my choice of shell, for some reason. A GitLab runner can use one of the following shells: bash, sh, powershell, pwsh, and cmd (the last one now being deprecated).
As I stated above, I had been using pwsh. So I went after the config.toml file inside the C:\GitLab-Runner directory to manually make the change from pwsh to powershell:
...
[[runners]]
  name = "runner_test"
  executor = "shell"
  shell = "powershell"
...
I then restarted the runner and got the job to complete properly:
cd C:\GitLab-Runner
gitlab-runner restart
I still get the error (more like a warning now) but it does not prevent the job from finishing anymore. If anyone has a better answer with a proper explanation I would gladly accept it as the answer to this question.
Note that pwsh and powershell are both PowerShell shells (see https://docs.gitlab.com/runner/shells/index.html):
powershell - Fully supported. All commands are executed in PowerShell Desktop context. In GitLab Runner 12.0-13.12, this is the default when registering a new runner.
pwsh - Fully supported. All commands are executed in PowerShell Core context. In GitLab Runner 14.0 and later, this is the default when registering a new runner.

Azure startup script is not executed

I've learned how to deploy .sh scripts to Azure with the Azure CLI, but it seems I have no clear understanding of how they work.
I'm creating a script that simply unarchives a .tgz archive in the current directory of an Azure Web App and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
And then I deploy the script like this:
az webapp deploy --resource-group Group `
    --name Name `
    --src-path ./startup.sh `
    --target-path /home/site/wwwroot/startup.sh `
    --type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it would just get executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not unarchived at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter when you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described when talking about the az webapp deploy command itself, so it may just be an error in the documentation, but it may work:
az webapp deploy --resource-group Group `
    --name Name `
    --src-path ./startup.sh `
    --path /home/site/wwwroot/startup.sh `
    --type=startup
Note that the path you are providing is the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group `
    --name Name `
    --src-path ./startup.sh `
    --type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs could be helpful as well.
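For instance, a traced version of the script might look like this (only a sketch; the log path assumes the standard /home/LogFiles folder of a Linux App Service):
#!/bin/sh
# Echo every command and send all output to a log file that can be
# inspected later from the Kudu console.
set -x
exec > /home/LogFiles/startup.log 2>&1
echo "startup.sh running in $(pwd) as $(whoami)"
tar zxvf archive.tgz && rm -rf ./archive.tgz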

Azure Powershell - load variables from another script file

In Azure DevOps, I have an Azure PowerShell task that creates some resources using a .ps1 script from the repo. This script works fine.
Now I need to split the script and the variables into different files.
I created the files SB-Config.ps1 for variables and ServiceBus.ps1 with the main script, and moved all vars into SB-Config.ps1.
Both files are in the same folder and in ServiceBus.ps1 I added:
. .\SB-Config.ps1
But Azure DevOps fails with an error:
What am I doing wrong, and how do I get the variables from the SB-Config.ps1 script when running the ServiceBus.ps1 file?
I was able to reproduce your situation on my side and got the same issue as yours.
You can run this command to output the current working directory:
Get-Location
I noticed the PowerShell script files on your side are in a subfolder of the default working directory. So set the working directory at the top of the PowerShell script you run first:
Set-Location $env:System_DefaultWorkingDirectory\subfolders
In your situation, the issue is that the current working directory is System_DefaultWorkingDirectory rather than the subfolder containing the scripts, so the error output means the script can't find the file you want. (As an aside, dot-sourcing relative to the script file, . "$PSScriptRoot\SB-Config.ps1", works regardless of the current directory.) This issue only occurs when you select 'file path' to run.

Databricks init scripts not working sometimes

OK, it is very strange. I have some init scripts that I would like to run when a cluster starts.
The cluster has the init script, which is in a file (in DBFS), basically this:
dbfs:/databricks/init-scripts/custom-cert.sh
Now, when I create the init script like this, it works (no SSL errors for my endpoints). Also, the event log for the cluster shows the duration as 1 second for the init script:
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
However, if I just put the init script in a bash script and upload it to DBFS through a pipeline, the init script does not do anything. It executes, per the event log, but the execution duration is 0 seconds.
I have the shell script in a file named
custom-cert.sh
with the same contents as above, i.e.
#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt"
but when I check /usr/local/share/ca-certificates/, it does not contain /dbfs/orgcertificates/orgcerts.crt, even though the cluster init script has run.
Also, I have compared the contents of the init script in both cases and, at least to the naked eye, I can't figure out any difference,
i.e.
%sh
cat /dbfs/databricks/init-scripts/custom-cert.sh
shows the same contents in both scenarios. What is the problem in the 2nd case?
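A byte-level comparison can reveal differences that are invisible to the naked eye, such as line endings; as a sketch, using the same %sh magic:
%sh
# Show non-printing characters (a carriage return shows up as ^M)
cat -A /dbfs/databricks/init-scripts/custom-cert.sh
# Checksum the file so the two variants can be compared exactly
md5sum /dbfs/databricks/init-scripts/custom-cert.sh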
EDIT: I read a bit more about init scripts and found that the logs of init scripts are written here
%sh
ls /databricks/init_scripts/
Looking at the err file in that location, it seems there is an error:
sudo: update-ca-certificates
: command not found
Why is update-ca-certificates found in the first case but not when I put the same script in a .sh file and upload it to DBFS (instead of executing the dbutils.fs.put within a notebook)?
EDIT 2: In response to the first answer. After running the command
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/orgcertificates/orgcerts.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
the output is the file custom-cert.sh, and then I restart the cluster with the init script location set to dbfs:/databricks/init-scripts/custom-cert.sh, and then it works. So it is essentially the same content that the init script is reading (the generated .sh script). Why can't it read it if I do not use dbutils.fs.put but just put the contents in a bash file and upload it during the CI/CD process?
As we are aware, an init script is a shell script that runs during startup of each cluster node, before the Apache Spark driver or worker JVM starts. In case 2, running a bash command with the %sh magic command executes it on the local driver node only, so the worker nodes are not able to access the result. In case 1, using dbutils.fs.put writes the file to the DBFS root, so the worker nodes, along with the driver node, can access the path.
Ref : https://docs.databricks.com/data/databricks-file-system.html#summary-table-and-diagram
It seems that the observations I made in the comments section of my question are the way to go.
I now create the init script using a Databricks job that I run during the CI/CD pipeline from Azure DevOps.
The notebook has the commands
dbutils.fs.rm("/databricks/init-scripts/custom-cert.sh")
dbutils.fs.put("/databricks/init-scripts/custom-cert.sh", """#!/bin/bash
cp /dbfs/internal-certificates/certs.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> /databricks/spark/conf/spark-env.sh
""")
I then create a Databricks job pointing to this notebook; the cluster is a job cluster, which is just temporary. Of course, in my case even this job creation is automated using a PowerShell script.
I then call this Databricks job in the release pipeline, again using a PowerShell script.
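(As an aside, the same trigger could be issued with the Databricks CLI instead of PowerShell; a sketch, with 123 standing in for the real job id:)
# Run the job that (re)creates the init script in DBFS.
databricks jobs run-now --job-id 123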
This creates the file
/databricks/init-scripts/custom-cert.sh
I then use this file in any other cluster that accesses my org's endpoints (without certificate errors).
I do not know (or still do not understand) why the same script file can't just be part of a repo and uploaded during the release process (instead of having this Databricks job call a notebook). I would love to know the reason. The other answer on this question does not hold true, as you can see: the init script is created by a job cluster and then accessed from another cluster as part of its init script.
It simply boils down to how the init script gets created.
But I get my job done. Posting this just in case it helps someone get their job done too.
I have raised a support case, though, to understand the reason.

How to open D:\a\r1\a\ on Azure?

I'm running Cypress in one of my release stages and it gives me this output:
Finished processing: D:\a\r1\a\_ClientWeb-Build-CI\ShellArtifact\tests\integration\cypress\videos\onboarding.spec.js.mp4 (0 seconds)
I have 2 questions:
Is the path name relative to the app service? If I have an app service called randomname and run the Cypress stage on that randomname app service, should I be able to find the Cypress output on randomname.scm.azurewebsites.net?
If I go into the SCM debug console and do cd D:\a\, I get:
cd : Cannot find path 'D:\a\' because it does not exist.
So how do I actually access my Cypress test results?
I've also tried archiving the files into a zip file:
In the output of the task step I see:
Creating archive: d:\home\testing\somefile.zip
But when I try to access the D:/home/testing folder on my appname.scm.azurewebsites.net I get:
cd : Cannot find path 'D:\home\testing' because it does not exist.
The path D:\a\r1\a is inside the hosted agent that runs the release pipeline; it is not in your application.
The same is true for the zip file: when you specify d:/home/..., that path is on the agent.
After the release is finished, all the files are deleted, so during the pipeline you need to save the files in another place (maybe in Azure?), for example with the "Azure File Copy" task.
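As an illustration of that last point, a script step could push the Cypress videos to blob storage before the agent is recycled; a sketch with placeholder storage account and container names, assuming the Azure CLI is already authenticated:
# Upload the Cypress output from the agent before it is wiped.
az storage blob upload-batch --account-name mystorageaccount --destination cypress-results --source "$(System.DefaultWorkingDirectory)/_ClientWeb-Build-CI/ShellArtifact/tests/integration/cypress/videos"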
