I am forming a JSON dynamically during the pipeline run, based on a few pipeline parameters and pre-defined environment variables, and I am trying to pass this JSON as an input to a subsequent pipeline task.
jobs:
- job: PayloadCreation
  pool: linux-agent # (or windows)
  steps:
  - ${{ each app in apps }}:
    - bash: |
        payload=$(jq '.artifacts += [{"name": "${{ app.name }}", "version": "$(Build.BuildId)"}]' artifact.json)
        echo $payload > artifact.json
        echo "##vso[task.setvariable variable=payload]$payload"
The content of artifact.json, as well as the $payload variable, comes out as follows:
"artifacts": [
{
"name":"service-a",
"version":"1.0.0"
},
{
"name":"service-b",
"version": "1.0.1"
}
]
}
Subsequently, I am trying to pass this JSON variable as an input to the following job, but I am unable to do so.
- job: JobB
  steps:
  - task: ServiceNow-DevOps-Agent-Artifact-Registration@1
    inputs:
      connectedServiceName: 'test-SC'
      artifactsPayload: $(payload)
The task is unable to read the JSON input variable, and I get the error below:
Artifact Registration could not be sent due to the exception: Unexpected token $ in JSON at position 0
Is there any other way a JSON could be passed as an input variable?
By default, variables are not available between jobs. In JobB, the $(payload) variable is not defined.
When setting the variable, you need to provide isOutput: echo "##vso[task.setvariable variable=payload;isOutput=true]$payload"
When referencing the variable in JobB, you need to use the appropriate runtime expression. Note that the step that sets the variable must have a name, and JobB needs dependsOn: PayloadCreation:
variables:
  payload: $[ dependencies.PayloadCreation.outputs['<stepName>.payload'] ]
Ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#share-variables-across-pipelines
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=bash#setvariable-initialize-or-modify-the-value-of-a-variable
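Putting the pieces together, a minimal sketch of the two jobs might look like the following (the step name setPayload is made up here, but some name is required for the output variable to resolve; for simplicity the ${{ each }} loop is collapsed into a single step, and jq -c keeps the value on one line so the logging command can capture it):
jobs:
- job: PayloadCreation
  steps:
  - bash: |
      # -c emits compact single-line JSON; the setvariable command only captures one line
      payload=$(jq -c '.artifacts += [{"name": "service-a", "version": "$(Build.BuildId)"}]' artifact.json)
      echo "$payload" > artifact.json
      echo "##vso[task.setvariable variable=payload;isOutput=true]$payload"
    name: setPayload
- job: JobB
  dependsOn: PayloadCreation
  variables:
    payload: $[ dependencies.PayloadCreation.outputs['setPayload.payload'] ]
  steps:
  - task: ServiceNow-DevOps-Agent-Artifact-Registration@1
    inputs:
      connectedServiceName: 'test-SC'
      artifactsPayload: $(payload)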
Is there any other way a JSON could be passed as an input variable?
Strictly speaking, no. Azure DevOps pipeline variables don't support JSON objects.
Why not?
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops#variables
Variables are always strings.
But this doesn't mean you can't pass the JSON information; if you want to, passing it as a string is the only way.
Did you design the task yourself?
Converting a string to a JSON object is not difficult:
//convert string object to json object
var str = `
{
"artifacts": [
{
"name":"service-a",
"version":"1.0.0"
},
{
"name":"service-b",
"version": "1.0.1"
}
]
}
`;
var obj = JSON.parse(str);
console.log(obj.artifacts[0].name);
console.log(obj.artifacts[0].version);
I'm not sure how your task is designed, but Daniel's method of passing the variable is correct.
You can do whatever operations you need in your extension's task code after converting the string to a JSON object.
Here is some other relevant information on the logging commands:
Set Variables
Variables Level
By the way, the JSON in your question is:
"artifacts": [
{
"name":"service-a",
"version":"1.0.0"
},
{
"name":"service-b",
"version": "1.0.1"
}
]
}
Shouldn't it be like this?
{
"artifacts": [
{
"name":"service-a",
"version":"1.0.0"
},
{
"name":"service-b",
"version": "1.0.1"
}
]
}
Related
I'm trying to create an Azure DevOps pipeline for deploying an Azure Blueprint. There are some fields in the parameters file (JSON) which I want to be configurable. How can I pass these values as pipeline variables and use them in the parameters file?
I tried defining a pipeline variable and referencing it in the parameters file like this: "$(var-name)", but it didn't work. Is there a way to solve this?
Below is my pipeline definition, I'm using AzureBlueprint extension for creating and assigning blueprint:
steps:
- task: CreateBlueprint@1
  inputs:
    azureSubscription: $(serviceConnection)
    BlueprintName: $(blueprintName)
    BlueprintPath: '$(blueprintPath)'
    AlternateLocation: false
    PublishBlueprint: true
- task: AssignBlueprint@1
  inputs:
    azureSubscription: $(serviceConnection)
    AssignmentName: '$(blueprintName)-assignment'
    BlueprintName: $(blueprintName)
    ParametersFile: '$(blueprintPath)/assign.json'
    SubscriptionID: $(subscriptionId)
    Wait: true
    Timeout: 500
and my parameters file:
"parameters":{
"organization" : {
"value": "xxxx"
},
"active-directory-domain-services_ad-domain-admin-password" : {
"reference": {
"keyVault": {
"id": "/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.KeyVault/vaults/xxxx"
},
"secretName": "xxxx"
}
},
"jumpbox_jumpbox-local-admin-password" : {
"reference": {
"keyVault": {
"id": "/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.KeyVault/vaults/xxxx"
},
"secretName": "xxxx"
}
},
"keyvault_ad-domain-admin-user-password" : {
"value" : "xxxx"
},
"keyvault_deployment-user-object-id" : {
"value" : "xxxx"
},
"keyvault_jumpbox-local-admin-user-password" : {
"value" : "xxxx"
}
}
Since the tasks you are using (CreateBlueprint and AssignBlueprint) don't support overriding parameters, you have two options:
Use the Azure CLI az blueprint command to directly create and assign blueprints.
Change the parameters file by either using JSON variable substitution or by using a small PowerShell script (see below):
Sample:
$paramFile = Get-Content ./azuredeploy.parameters.json | ConvertFrom-Json
$paramFile.parameters.organization.value = "your-org-name"
# -Depth is needed because ConvertTo-Json only serializes two levels deep by default
$paramFile | ConvertTo-Json -Depth 10 | Set-Content ./azuredeploy.parameters.json
Be aware that the Task you are using hasn't received an update within the last 17 months (here is the GitHub repository).
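If you go the script route, one way to wire it in is a PowerShell step that patches assign.json right before AssignBlueprint runs. This is only a sketch; $(orgName) is a hypothetical pipeline variable holding the value to inject:
steps:
- task: PowerShell@2
  displayName: 'Patch blueprint parameters'
  inputs:
    targetType: inline
    script: |
      $paramFile = Get-Content '$(blueprintPath)/assign.json' | ConvertFrom-Json
      $paramFile.parameters.organization.value = '$(orgName)'
      $paramFile | ConvertTo-Json -Depth 10 | Set-Content '$(blueprintPath)/assign.json'
# ...followed by the AssignBlueprint@1 task from the question, pointing at the patched assign.json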
AssignBlueprint@1 doesn't support this natively. However, you can modify assign.json using JSON variable substitution.
It comes down to having Azure Pipelines variables whose names are the path to the leaf in the JSON file which you want to replace.
Here is an example:
variables:
  Data.DebugMode: disabled
  Data.DefaultConnection.ConnectionString: 'Data Source=(prodDB)\MSDB;AttachDbFilename=prod.mdf;'
  Data.DBAccess.Users.0: Admin-3
  Data.FeatureFlags.Preview.1.NewWelcomeMessage: AllAccounts

# Update appsettings.json via FileTransform task.
- task: FileTransform@1
  displayName: 'File transformation: appsettings.json'
  inputs:
    folderPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
    targetFiles: '**/appsettings.json'
    fileType: json
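Applied to the assign.json from the question, the variable names mirror the JSON path of each leaf you want to overwrite. A rough sketch (the values are placeholders):
variables:
  parameters.organization.value: 'my-org-name'
  parameters.keyvault_deployment-user-object-id.value: '00000000-0000-0000-0000-000000000000'
steps:
- task: FileTransform@1
  displayName: 'File transformation: assign.json'
  inputs:
    folderPath: '$(blueprintPath)'
    targetFiles: 'assign.json'
    fileType: json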
I have an Azure DevOps pipeline and want to reference another pipeline that my pipeline will fetch the artefacts from. I am struggling to find a way to actually do it over the REST API.
https://learn.microsoft.com/en-us/rest/api/azure/devops/pipelines/runs/run%20pipeline?view=azure-devops-rest-6.1 specifies there is a BuildResourceParameters or PipelineResourceParameters but I cannot find a way to get it to work.
For example:
Source pipeline A produces an artefact B in run C. I want to tell API to reference the artefact B from run C of pipeline A rather than refer to the latest.
Anyone?
In your current situation, we recommend following the request body below to select the referenced pipeline version.
{
  "stagesToSkip": [],
  "resources": {
    "repositories": {
      "self": {
        "refName": "refs/heads/master"
      }
    },
    "pipelines": {
      "myresourcevars": {
        "version": "1313"
      }
    }
  },
  "variables": {}
}
Note: the name 'myresourcevars' is the pipeline resource name you defined in your YAML file:
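For reference, such a pipeline resource is declared in the consuming pipeline's YAML roughly like this (the source pipeline name is a placeholder):
resources:
  pipelines:
  - pipeline: myresourcevars   # resource name used in the request body above
    source: 'pipeline-A'       # name of the pipeline that produces the artifact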
I am currently trying to update a pipeline variable at the DEV scope; however, I am having a hard time updating that variable. Is it possible to update the variable at a scope other than "Release"? If so, how? Below is the code that I used and the error that I received.
let reqLink = ' https://vsrm.dev.azure.com/'+ organization +'/'+project+'/_apis/release/releases?api-version=5.1';
let reqBody = {
"definitionId": definitionId,
"variables": {
"someVar":
{
"value": "foo",
"scope": "DEV"
}
}
};
sendHttpRequest('POST',reqLink,reqBody).then(response => {
let data = JSON.parse(response);
console.log(data);
});
This is the error that I am receiving:
{"$id":"1","innerException":null,"message":"Variable(s) someVar do not exist in the release pipeline at scope: Release
Scoped variables are defined not at the root level but at the stage level, so you must modify them there:
Here you have the variable SomeVar scoped to Stage 1. The easiest way to achieve this will be to hit the endpoint with GET, manipulate the JSON, and hit the endpoint with PUT.
Also, I noticed you are hitting release/releases, whereas you should rather hit a specific release, release/releases/{releaseId}. Or maybe your goal is to update the definition itself?
Is it possible to update the variable at a scope other than "Release"? If so, how?
The answer is yes.
The REST API you are using creates a release. If you want to update a release pipeline, use:
PUT https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?api-version=6.0-preview.4
The request body of that REST API needs the detailed information of the release pipeline. Use the following REST API to get it:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions/{definitionId}?api-version=6.0-preview.4
Then you can modify its response body and use it as the request body of the first REST API.
The variables property doesn't have a property called scope. If you want to move a variable from the 'Release' scope to a stage scope, you need to delete the variable's original definition in variables and redefine it in the target environment. Here is an example.
Original script:
{
  ...
  "variables": {
    "somevar": {
      "value": "foo"
    }
  },
  ...
};
The modified script:
{
  ...
  "environments": [
    {
      "id": {stage id},
      "name": "DEV",
      ...
      "variables": {
        "somevar": {
          "value": "foo"
        }
      },
      ...
    }
  ],
  ...
  "variables": {},
  ...
};
Here is the summary: To change the scope of a variable, just move the variable definition to target scope.
How can I add Bindings in an "IIS web app manage" task using YAML?
I tried putting the bindings like in the classic pipeline and it doesn't work.
The accepted answer doesn't give a great example of usage. The Bindings input accepts a multiline string formatted as a particular JSON object. Also, be sure to set AddBinding: true, as it appears the task will ignore the Bindings input without it.
On a related note, if you are storing your certificates in WebHosting (as opposed to MY), the deployment will fail as the task won't be able to find your certificate. Here's the relevant GitHub enhancement to fix this.
- task: IISWebAppManagementOnMachineGroup@0
  displayName: 'IIS Web App Manage'
  inputs:
    IISDeploymentType: 'IISWebsite'
    ActionIISWebsite: 'CreateOrUpdateWebsite'
    ...
    AddBinding: true
    Bindings: |
      {
        "bindings": [
          {
            "protocol":"http",
            "ipAddress":"*",
            "hostname":"mywebsite.com",
            "port":"80",
            "sslThumbprint":"",
            "sniFlag":false
          },
          {
            "protocol":"https",
            "ipAddress":"*",
            "hostname":"mywebsite.com",
            "port":"443",
            "sslThumbprint":"...",
            "sniFlag":true
          }
        ]
      }
You need to create a JSON with all the information, like this:
{
  "bindings": [
    {
      "protocol":"http",
      "ipAddress":"*",
      "port":"xxxxx",
      "sslThumbprint":"",
      "sniFlag":false
    },
    {
      "protocol":"http",
      "ipAddress":"*",
      "hostname":"yyyyyy.com",
      "port":"80",
      "sslThumbprint":"",
      "sniFlag":false
    },
    {
      "protocol":"http",
      "ipAddress":"*",
      "hostname":"xxxxxxxx.com",
      "port":"80",
      "sslThumbprint":"",
      "sniFlag":false
    }
  ]
}
In CloudFormation I have two stacks (one nested).
Nested stack "ec2-setup":
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    // (...) some parameters here
    "userData" : {
      "Description" : "user data to be passed to instance",
      "Type" : "String",
      "Default": ""
    }
  },
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Ref" : "userData" },
        // (...) some other properties here
      }
    }
  },
  // (...)
}
Now in my main template I want to refer to the nested template presented above and pass a bash script using the userData parameter. Additionally, I do not want to inline the content of the user data script, because I want to reuse it for a few EC2 instances (so I do not want to duplicate the script each time I declare an EC2 instance in my main template).
I tried to achieve this by setting the content of the script as a default value of a parameter:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters" : {
    "myUserData": {
      "Type": "String",
      "Default" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",
        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}
    }
  },
  (...)
  "myEc2": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "s3://path/to/ec2-setup.json",
      "TimeoutInMinutes": "10",
      "Parameters": {
        // (...)
        "userData" : { "Ref" : "myUserData" }
      }
But I get the following error while trying to launch the stack:
"Template validation error: Template format error: Every Default
member must be a string."
The error seems to be caused by the fact that the declaration { Fn::Base64 (...) } is an object, not a string (although it results in a base64-encoded string).
Everything works OK if I paste my script directly into the parameters section (as an inline script) when calling my nested template (instead of referring to a string set as a parameter):
"myEc2": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL": "s3://path/to/ec2-setup.json",
"TimeoutInMinutes": "10",
"Parameters": {
// (...)
"userData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash \n",
"yum update -y \n",
"# Install the files and packages from the metadata\n",
"echo 'tralala' > /tmp/hahaha"
]]}}
}
but I want to keep the content of the userData script in a parameter/variable to be able to reuse it.
Any chance to reuse such a bash script without a need to copy/paste it each time?
Here are a few options on how to reuse a bash script in user-data for multiple EC2 instances defined through CloudFormation:
1. Set default parameter as string
Your original attempted solution should work, with a minor tweak: you must declare the default parameter as a string, as follows (using YAML instead of JSON makes it possible/easier to declare a multi-line string inline):
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  myUserData:
    Type: String
    Default: |
      #!/bin/bash
      yum update -y
      # Install the files and packages from the metadata
      echo 'tralala' > /tmp/hahaha
# (...)
Resources:
  myEc2:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: "s3://path/to/ec2-setup.yml"
      TimeoutInMinutes: 10
      Parameters:
        # (...)
        userData: !Ref myUserData
Then, in your nested stack, apply any required intrinsic functions (Fn::Base64, as well as Fn::Sub which is quite helpful if you need to apply any Ref or Fn::GetAtt functions within your user-data script) within the EC2 instance's resource properties:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  userData:
    Description: user data to be passed to instance
    Type: String
    Default: ""
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": !Ref userData
      # (...) some other properties here
# (...)
2. Upload script to S3
You can upload your single Bash script to an S3 bucket, then invoke the script by adding a minimal user-data script in each EC2 instance in your template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  ScriptBucket:
    Description: S3 bucket containing user-data script
    Type: String
  ScriptKey:
    Description: S3 object key containing user-data script
    Type: String
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": |
            #!/bin/bash
            aws s3 cp s3://${ScriptBucket}/${ScriptKey} - | bash -s
      # (...) some other properties here
# (...)
3. Use preprocessor to inline script from single source
Finally, you can use a template-preprocessor tool like troposphere or your own to 'generate' verbose CloudFormation-executable templates from more compact/expressive source files. This approach will allow you to eliminate duplication in your source files - although the templates will contain 'duplicate' user-data scripts, this will only occur in the generated templates, so should not pose a problem.
You'll have to look outside the template to provide the same user data to multiple templates. A common approach here would be to abstract your template one step further, or "template the template". Use the same method to create both templates, and you'll keep them both DRY.
I'm a huge fan of cloudformation and use it to create most all my resources, especially for production-bound uses. But as powerful as it is, it isn't quite turn-key. In addition to creating the template, you'll also have to call the coudformation API to create the stack, and provide a stack name and parameters. Thus, automation around the use of cloudformation is a necessary part of a complete solution. This automation can be simplistic ( bash script, for example ) or sophisticated. I've taken to using ansible's cloudformation module to automate "around" the template, be it creating a template for the template with Jinja, or just providing different sets of parameters to the same reusable template, or doing discovery before the stack is created; whatever ancillary operations are necessary. Some folks really like troposphere for this purpose - if you're a pythonic thinker you might find it to be a good fit. Once you have automation of any kind handling the stack creation, you'll find it's easy to add steps to make the template itself more dynamic, or assemble multiple stacks from reusable components.
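For example, a minimal Ansible task driving a reusable template might look like this (module and parameter names as in the amazon.aws collection; the stack name, region, file paths and the userData parameter are placeholders):
- name: Create or update the EC2 stack from a reusable template
  amazon.aws.cloudformation:
    stack_name: my-ec2-stack
    state: present
    region: us-east-1
    template: templates/ec2-setup.json
    template_parameters:
      # the shared user-data script lives in one file and is injected here
      userData: "{{ lookup('file', 'scripts/user-data.sh') | b64encode }}"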
At work we use cloudformation quite a bit and are tending these days to prefer a compositional approach, where we define the shared components of the templates we use, and then compose the actual templates from components.
The other option would be to merge the two stacks, using conditionals to control the inclusion of the defined resources in any particular stack created from the template. This works OK in simple cases, but the combinatorial complexity of all those conditions tends to make this a difficult solution in the long run, unless the differences are really simple.
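As a rough sketch of that conditional approach (the parameter and resource names here are made up):
Parameters:
  IncludeJumpbox:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "false"
Conditions:
  CreateJumpbox: !Equals [!Ref IncludeJumpbox, "true"]
Resources:
  Jumpbox:
    Type: AWS::EC2::Instance
    Condition: CreateJumpbox   # resource is only created when the condition is true
    Properties:
      ImageId: ami-12345678    # placeholder
      InstanceType: t3.micro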
Actually, I found one more solution beyond those already mentioned. On the one hand this solution is a little "hackish", but on the other hand I found it really useful for the "bash script" use case (and also for other parameters).
The idea is to create an extra stack - a "parameters stack" - which will output the values. Since the outputs of a stack are not limited to strings (as default values are), we can define the entire base64-encoded script as a single output from a stack.
The drawback is that every stack needs to define at least one resource, so our parameters stack also needs to define at least one resource. The solution for this issue is either to define the parameters in another template which already defines an existing resource, or to create a "fake resource" which will never be created because of a Condition which will never be satisfied.
Here I present the solution with the fake resource. First we create our new paramaters-stack.json as follows:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Outputs/returns parameter values",
  "Conditions" : {
    "alwaysFalseCondition" : {"Fn::Equals" : ["aaaaaaaaaa", "bbbbbbbbbb"]}
  },
  "Resources": {
    "FakeResource" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Condition" : "alwaysFalseCondition",
      "Properties" : {
        "AllocationId" : { "Ref": "AWS::NoValue" },
        "NetworkInterfaceId" : { "Ref": "AWS::NoValue" }
      }
    }
  },
  "Outputs": {
    "ec2InitScript": {
      "Value":
        { "Fn::Base64" : { "Fn::Join" : ["", [
          "#!/bin/bash \n",
          "yum update -y \n",
          "# Install the files and packages from the metadata\n",
          "echo 'tralala' > /tmp/hahaha"
        ]]}}
    }
  }
}
Now in the main template we first declare our parameters stack and later we refer to the output from that parameters stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "myParameters": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/paramaters-stack.json",
        "TimeoutInMinutes": "10"
      }
    },
    "myEc2": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/ec2-setup.json",
        "TimeoutInMinutes": "10",
        "Parameters": {
          // (...)
          "userData" : {"Fn::GetAtt": [ "myParameters", "Outputs.ec2InitScript" ]}
        }
      }
    }
  }
}
Please note that one can create up to 60 outputs in one stack file, so it is possible to define 60 variables/parameters per single stack file using this technique.