Conditionally Set Environment in Azure DevOps

I am working with an Azure Pipeline in which I need to conditionally set the environment property. I am invoking the pipeline from a REST API call by passing Parameters in the request body, as documented for the Build API. However, when I try to access that parameter at compile time to set the environment conditionally, the value comes through as empty (presumably because it is not accessible at compile time?).
Does anybody know a good way to solve for this via the pipeline or the API call?

After some digging I have found the answer to my question and I hope this helps someone else in the future.
As it turns out, the Build REST API does support template parameters that can be used at compile time; the documentation just doesn't explicitly say so. The Runs endpoint supports this as well.
My payload for my request ended up looking like:
{
  "Parameters": "{\"Env\":\"QA\"}",
  "templateParameters": { "SkipApproval": "Y" },
  "Definition": {
    "Id": 123
  },
  "SourceBranch": "main"
}
and my pipeline consumed the template parameter at compile time via the following (abbreviated) pipeline definition:
parameters:
- name: SkipApproval
  default: ''
  type: string
...
${{ if eq(parameters.SkipApproval, 'Y') }}:
  environment: NoApproval-All
${{ if ne(parameters.SkipApproval, 'Y') }}:
  environment: digitalCloud-qa
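For context, here is a hedged sketch of where such conditional keys typically sit, in this case under a deployment job (the job name, stage layout, and steps are illustrative, not taken from the original pipeline):

parameters:
- name: SkipApproval
  type: string
  default: ''

jobs:
- deployment: deploy_app   # hypothetical job name
  # The ${{ if }} blocks below are resolved at compile time, so the environment key
  # is already in place before any approvals/checks on the environment are evaluated.
  ${{ if eq(parameters.SkipApproval, 'Y') }}:
    environment: NoApproval-All
  ${{ if ne(parameters.SkipApproval, 'Y') }}:
    environment: digitalCloud-qa
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "deploying"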

This is a common area of confusion for YAML pipelines. Run-time variables need to be accessed using a different syntax:
$[ variables.myVariable ]
YAML pipelines go through several phases.
Compilation - This is where all of the YAML documents (templates, etc.) comprising the final pipeline are compiled into a single document. Final values for parameters and for variables using the ${{ }} syntax are inserted into the document.
Runtime - Run-time variables using the $[] syntax are plugged in.
Execution - The final pipeline is run by the agents.
This is a simplification; Microsoft's own explanation is a bit more precise:
First, expand templates and evaluate template expressions.
Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
For each stage selected to run, two things happen:
All resources used in all jobs are gathered up and validated for authorization to run.
Evaluate dependencies at the job level to pick the first job(s) to run.
For each job selected to run, expand multi-configs (strategy: matrix or strategy: parallel in YAML) into multiple runtime jobs.
For each runtime job, evaluate conditions to decide whether that job is eligible to run.
Request an agent for each eligible runtime job.
...
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step 1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that step. After step 1, template parameters have been resolved and no longer exist.
[ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops]
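A small illustrative snippet showing the two syntaxes side by side (the variable names here are made up):

parameters:
- name: SkipApproval
  type: string
  default: ''

variables:
  # Resolved during template expansion (compile time); empty if the parameter was not supplied.
  compileTimeValue: ${{ parameters.SkipApproval }}
  # Resolved at run time, just before the job executes.
  runTimeValue: $[ variables['Build.Reason'] ]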

Shall we include ForceNew or throw an error during "terraform apply" for a new TF resource?

Context: implementing a Terraform Provider via TF Provider SDKv2 by following an official tutorial.
For example, the underlying AWS API does not support updating launch configurations, so all of the schema attributes of the corresponding Terraform Provider resource aws_launch_configuration are marked as ForceNew: true. This behavior instructs Terraform to first destroy and then recreate the resource if any of the attributes change in the configuration, as opposed to trying to update the existing resource.
The TF tutorial suggests adding ForceNew: true for every non-updatable field, like:
"base_image": {
    Type:     schema.TypeString,
    Required: true,
    ForceNew: true,
},

resource "example_instance" "ex" {
  name       = "bastion host"
  base_image = "ubuntu_17.10" # base_image updates are not supported
}
However, one might run into the following situation.
Let's consider an "important" resource foo_db_instance (a DB instance that should only be deleted / recreated in exceptional scenarios) that has a name attribute:
resource "foo_db_instance" "ex" {
  name = "bar" # name updates are not supported
  ...
}
However, its underlying API was written in a peculiar way and doesn't support updates for the name attribute. There are two options:
Following the approach of the tutorial, we might add ForceNew: true. Then, if a user doesn't pay attention to the terraform plan output, accidentally updating the name attribute will recreate foo_db_instance.ex and cause an outage.
Don't follow the approach from the tutorial and don't add ForceNew: true. As a result, terraform plan will not report an error and will make it look like the update is possible. However, when running terraform apply, the user will run into an error if we add custom code to resourceUpdate() like this:
func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    if d.HasChanges("name") {
        return diag.Errorf("error updating foo_db_instance: name attribute updates are not supported")
    }
    ...
}
There are two disadvantages to this approach:
the non-failing output of terraform plan
we might need some hack to restore the Terraform state, e.g. overriding the value with d.Set("name", oldValue)
Which approach is preferable?
I know there's the prevent_destroy = true lifecycle attribute, but it seems like it won't prevent this specific scenario (it only protects against an accidental terraform destroy).
The most typical answer is to follow your first option, and then allow Terraform to report in its UI that the change requires replacement and allow the user to decide how to proceed.
It is true that if someone does not read the plan output then they can potentially make a change they did not intend to make, but in that case the user is not making use of the specific mechanism that Terraform provides to help users avoid making undesirable changes.
You mentioned prevent_destroy = true, and indeed that setting is relevant to this situation; in fact this is exactly what the option is for: it causes Terraform to raise an error if the plan includes a "replace" action for a resource annotated with that setting, thereby preventing the user from accepting the plan and thus from destroying the object.
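For reference, a minimal sketch of that lifecycle block on the question's hypothetical resource:

resource "foo_db_instance" "ex" {
  name = "bar"

  lifecycle {
    # Terraform will now refuse to apply any plan that would destroy (including replace) this object.
    prevent_destroy = true
  }
}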
Some users also wrap Terraform in automation which will perform more complicated custom policy checks on the generated plan, either achieving a similar effect as prevent_destroy (blocking the operation altogether) or alternatively just requiring an additional confirmation to help ensure that the operator is aware that something unusual is happening. For example, in Terraform Cloud a programmatic policy can report a "soft failure" which causes an additional confirmation step that might be approvable only by a smaller subset of operators who are better equipped to understand the impact of what's being proposed.
It is in principle possible to write logic in either the CustomizeDiff function (which runs during planning) or the Update function (which runs during the apply step) to return an error in this or any other situation you can write logic for in the Go programming language. Of these two options I would say that CustomizeDiff would be preferable since that would then prevent creating a plan at all, rather than allowing the creation of a plan and then failing partway through the apply step, when some other upstream changes may have already been applied.
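As a rough sketch only (assuming SDKv2 and the hypothetical foo_db_instance resource, not any real provider's code), the CustomizeDiff variant could look like this:

import (
    "context"
    "fmt"

    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func resourceFooDBInstance() *schema.Resource {
    return &schema.Resource{
        // ... Schema and Create/Read/Update/Delete functions elided ...
        CustomizeDiff: func(ctx context.Context, diff *schema.ResourceDiff, meta interface{}) error {
            // Refuse the plan only for changes to an existing object; creation is fine.
            if diff.Id() != "" && diff.HasChange("name") {
                return fmt.Errorf("foo_db_instance: the name attribute cannot be updated in place")
            }
            return nil
        },
    }
}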
However, to do either of these would be inconsistent with the usual behavior users expect for Terraform providers. The intended model is for a Terraform provider to describe the effect of a change as accurately as possible and then allow the operator to make the final decision about whether the proposed change is acceptable, and to cancel the plan and choose another strategy if not.

What is the difference between Override Attribute Initializers and Set Run State in Enterprise Architect and why does it behave differently?

This is my first question on SO, so please exercise some kindness on my path to phrasing perfect questions.
On my current project I am trying to model deployments in EA v14.0, where I want components to be deployed on execution environments and additionally to set some of their values.
However, depending on how I deploy (as a Deployment Artifact or as a Component Instance) I get different behaviour. On a Deployment Artifact I am offered Override Attribute Initializers; on a Component Instance I am offered Set Run State. When I try to set an attribute on the Deployment Artifact I get an error message that there is no initializer to override. When I try to set the run state on the Component Instance I can set a value, but then I get a UML validation error saying that I must not link a component instance to an execution environment:
MVR050002 - error ( (Deployment)): Deployment is not legal for Instance: Component1 --> ExecutionEnvironment1
This is how I started: I created a component with a deployment specification.
I then created a deployment diagram to deploy my component, once as a Deployment Artifact and once as a Component Instance.
When I try to Override Attribute Initializers, I get the error message "DeploymentArtifact has no attribute initializers to override".
When I try to Set Run State I can actually enter values.
However, when I then validate my package, I get the aforementioned error message.
Can anyone explain what I am doing wrong or how this is supposed to work?
Thank you very much for your help!
Actually there are multiple questions here.
Your 2nd diagram is invalid (and probably EA should have moaned here already since it does so in V12).
You can deploy an artifact on a node instance and use a deployment spec as an association class, as shown on p. 654 of the UML 2.5 specification.
But you cannot deploy something onto something abstract; you need instances on both sides.
You can silence EA about these warnings by turning off strict connector checking in the options.
To answer the question in the title: Override Attribute Initializers looks into the attribute list of the object's classifier and offers those attributes so that you can set run states for them (that is, values of attributes at runtime). Set Run State, in addition, allows you to set arbitrary key/value pairs that are not classifier attributes, e.g. to express the RAM size of a Node or the like.

How the if() function executes in Azure Resource Manager Templates

I am using if() functions in my ARM template to conditionally set some connection string values in my Web App resource. The current condition looks like this:
"[if(equals(parameters('isProduction'), 'Yes'), concat(variables('redisCacheName'),'.redis.cache.windows.net:6380|', listKeys(resourceId('Microsoft.Cache/Redis', variables('redisCacheName')), '2015-08-01').primaryKey, '|', variables('resourcePrefix')), parameters('redisSessionStateConnection'))]"
Simplified, the condition looks like this:
[if(equals(arg1, arg2), true_expression, false_expression)]
When I deploy the ARM template with the isProduction parameter set to No, the deployment throws an exception; when isProduction is set to Yes, the template works fine. The exception is related to ARM trying to find the Redis cache resource, which is not deployed in non-production environments.
My guess is that even if the isProduction parameter value is No, the true_expression in the above condition, which references the Redis cache resource, is still evaluated, and since the Redis cache resource is not created in non-production, it throws the exception.
So my question is: when we have a condition like the above, are both the true_expression and the false_expression evaluated before the actual condition of the if() function is applied?
If so, what are possible workarounds for this problem?
Both sides of if() are evaluated regardless (in ARM templates), so you have to work around that in "clever" ways.
You could use nested deployments or variables to try to work around it.
Update: this has since been fixed; only the relevant branch of the if() function is evaluated.
My guess would be: yes, both expressions are evaluated, regardless of the outcome of the if statement.
To solve your issue, you can use environment-specific parameter files. This lets you include only the parameters for the environment you're deploying to.
See the documentation on parameters in the 'Understand the structure and syntax of Azure Resource Manager templates' article.
In the parameters section of the template, you specify which values you can input when deploying the resources. These parameter values enable you to customize the deployment by providing values that are tailored for a particular environment (such as dev, test, and production). You do not have to provide parameters in your template, but without parameters your template would always deploy the same resources with the same names, locations, and properties.
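As a hedged illustration of that approach (the file names and values below are made up), you might keep one parameter file per environment and pass the matching one at deployment time, e.g. az deployment group create ... --parameters @azuredeploy.parameters.dev.json:

azuredeploy.parameters.dev.json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "isProduction": { "value": "No" },
    "redisSessionStateConnection": { "value": "<non-production session state connection string>" }
  }
}

azuredeploy.parameters.prod.json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "isProduction": { "value": "Yes" }
  }
}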

Pass variables between step definitions in Cucumber groovy

I am wondering how we can pass variables between two step definition files.
I found the question "How to share variables across multiple cucumber step definition files with groovy", but its structure is different from mine because I am not using classes in my step definitions.
The following are my two step definition files.
Feature File 1
Scenario: Consumer registration
When I try to register with my details with "memberNo" mem no.
Then I should be able to get success response
stepDef1
When(~'^I try to register with my details with "([^"]*)" mem no.$') { String memdNo ->
    sMemdNo = memdNo + getRanNo()
    // more code here
}
Feature File 2
Scenario: Event Generation
When I activate my account
Then I can see the file having "logName" event
stepDef2
Then(~'^I can see the file having "([^"]*)" event$') { String logName ->
    eventFile = GetLogtData(logName, sMemdNo)
    // more code here
}
So, as per the above, I want to get the value of sMemdNo from stepDef1 and use it in stepDef2.
I recommend that you use the World to store global variables needed across step definitions.
You can see an example here: cucumber-jvm-groovy-example.
You can combine the World with a Factory and/or dependency injection pattern.
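A rough Groovy sketch of the World idea (a sketch only, assuming the older cucumber.api.groovy packages; the class and helper names are illustrative):

// env.groovy -- registers a new World object for each scenario
import static cucumber.api.groovy.Hooks.*

class CustomWorld {
    String sMemdNo   // visible to all step definition files within one scenario
}

World {
    new CustomWorld()
}

// stepDef1.groovy -- the step closure's delegate is the World, so the assignment lands on it
import static cucumber.api.groovy.EN.*

When(~'^I try to register with my details with "([^"]*)" mem no.$') { String memdNo ->
    sMemdNo = memdNo + getRanNo()
}

// stepDef2.groovy -- same scenario, same World instance, so sMemdNo is visible here
import static cucumber.api.groovy.EN.*

Then(~'^I can see the file having "([^"]*)" event$') { String logName ->
    eventFile = GetLogtData(logName, sMemdNo)
}

Note that the World is recreated for each scenario, so for values that must survive across scenarios (as in the two feature files above) you would combine it with a factory or singleton, as suggested.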
To share variables between steps you can also declare the variable at the top of the step definition file (Groovy or Java); the value set in one step will then be available to the other steps, as in the sketch below.
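A minimal sketch of that simpler approach, assuming both step definitions live in the same Groovy script file (otherwise the variable is not shared):

// registration_steps.groovy
import static cucumber.api.groovy.EN.*

def sMemdNo = ''   // script-level variable captured by both step closures below

When(~'^I try to register with my details with "([^"]*)" mem no.$') { String memdNo ->
    sMemdNo = memdNo + getRanNo()
}

Then(~'^I can see the file having "([^"]*)" event$') { String logName ->
    eventFile = GetLogtData(logName, sMemdNo)
}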

How to remember parameter values used on last build in Jenkins/Hudson

I need to remember the last parameter values when I begin a new build with parameters.
I have two string parameters:
${BRANCH}
${ServerSpecified}
On the first build execution I need those values to be blank, but for the second execution I need the values from the first execution, for the third execution the values from the second, and so on...
Do I need to install a plugin? I have tried using a dynamic parameter with Groovy, but I can't extract the last value. Does anybody know how to do this, or have any other idea?
There is a Rebuild plugin that would allow you to re-build any job of interest. It also allows you to modify one or more of the original build parameters.
In order to retrieve parameters from previous executions, you can follow this approach in your pipeline:
def defaultValueForMyParameter = "My_Default_Value"

// Pick up the value from the previous run, if there was one; otherwise use the default.
node('master') {
    parameterValue = params.MY_PARAMETER ?: defaultValueForMyParameter
}

pipeline {
    parameters {
        // Re-declare the parameter with the previous value as its default for the next run.
        string(name: 'MY_PARAMETER', defaultValue: parameterValue, description: "whatever")
    }
    ...
}
This code keeps track of the last value used for the parameter, allowing you to change it before or during the run. If the parameter does not exist in the job yet, it will be created and the default value will be assigned to it.
Yes, it looks like you are trying to reinvent something like the Version Number Plugin:
This plugin creates a new version number and stores it in the
environment variable whose name you specify in the configuration.
So you can have as many variables as you want.
No one mentions the Persistent Parameter plugin, which is the one I use.
It supports string parameters, choice parameters, and more.
