How does the if() function execute in Azure Resource Manager templates?

I am using if() functions in my ARM template to conditionally set some connection string values in my Web App resource. The current condition looks like this:
"[if(equals(parameters('isProduction'), 'Yes'), concat(variables('redisCacheName'),'.redis.cache.windows.net:6380|', listKeys(resourceId('Microsoft.Cache/Redis', variables('redisCacheName')), '2015-08-01').primaryKey, '|', variables('resourcePrefix')), parameters('redisSessionStateConnection'))]"
Simplified, the condition looks like this:
[if(equals(arg1, arg2), true_expression, false_expression)]
When I deploy the ARM template with the isProduction parameter set to No, the execution throws an exception. When isProduction is set to Yes, the template works fine. The exception is related to ARM trying to find the Redis cache resource, which is not deployed in the non-production environment.
My guess is that even when the isProduction parameter value is No, the true_expression in the above condition, which references the Redis cache resource, is still executed; and since the Redis cache resource is not created in a non-production deployment, it throws the exception.
So my question is: when we have a condition like the above, are the true_expression and the false_expression in the if() function evaluated before the actual condition is?
If so, what would be possible workarounds for this problem?

Both sides of if() are evaluated regardless (in ARM templates), so you have to work around that in "clever" ways.
You could use nested deployments/variables to work around it.
Update: this has been fixed some time ago; only the relevant branch of the if() function is evaluated now.
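Before the fix, one such workaround was to select between two linked templates by name, so that the listKeys() call only exists in the template that actually gets evaluated. A sketch (the templateBaseUrl variable and file names are hypothetical):

{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "name": "connectionStrings",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[concat(variables('templateBaseUrl'), if(equals(parameters('isProduction'), 'Yes'), 'prod', 'nonprod'), '-connections.json')]"
    }
  }
}

Here if() only ever computes a string, and the listKeys() reference lives solely inside prod-connections.json, so it is never evaluated for a non-production deployment.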

My guess would be: yes, both expressions are evaluated, regardless of the outcome of the if statement.
To solve your issue, you can use environment-specific parameter files. This enables you to include only the parameters for the environment you're deploying to.
See the documentation on parameters in the 'Understand the structure and syntax of Azure Resource Manager templates' article.
In the parameters section of the template, you specify which values you can input when deploying the resources. These parameter values enable you to customize the deployment by providing values that are tailored for a particular environment (such as dev, test, and production). You do not have to provide parameters in your template, but without parameters your template would always deploy the same resources with the same names, locations, and properties.
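For example, a non-production parameter file could supply the connection string directly, so the template never needs to touch the Redis resource (file layout assumed; the placeholder value is yours to fill in):

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "isProduction": { "value": "No" },
    "redisSessionStateConnection": { "value": "<non-production connection string>" }
  }
}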

Related

Shall we include ForceNew or throw an error during "terraform apply" for a new TF resource?

Context: implementing a Terraform provider via the TF Provider SDKv2, following an official tutorial.
As a result, all of the schema elements in the corresponding Terraform provider resource aws_launch_configuration are marked as ForceNew: true. This behavior instructs Terraform to first destroy and then recreate the resource if any of the attributes change in the configuration, as opposed to trying to update the existing resource in place.
The TF tutorial suggests we should add ForceNew: true for every non-updatable field, like:
"base_image": {
Type: schema.TypeString,
Required: true,
ForceNew: true,
},
resource "example_instance" "ex" {
name = "bastion host"
base_image = "ubuntu_17.10" # base_image updates are not supported
}
However, one might run into the following issue. Let's consider an "important" resource foo_db_instance (a DB instance that should be deleted / recreated only in exceptional scenarios) (related unanswered question) that has a name attribute:
resource "foo_db_instance" "ex" {
name = "bar" # name updates are not supported
...
}
However, its underlying API was written in a weird way and doesn't support updates for the name attribute. There are 2 options:
Follow the approach of the tutorial and add ForceNew: true. Then, if a user doesn't pay attention to the terraform plan output, they might accidentally recreate foo_db_instance.ex when updating the name attribute, which will create an outage.
Don't follow the approach from the tutorial and don't add ForceNew: true. As a result, terraform plan will not output an error and will make it look like the update is possible. However, when running terraform apply, the user will run into an error if we add custom code to resourceUpdate() like this:
func resourceUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    if d.HasChanges("name") {
        return diag.Errorf("error updating foo_db_instance: name attribute updates are not supported")
    }
    ...
}
There are 2 disadvantages of this approach:
the non-failing output of terraform plan
we might need some hack to restore the tf state, i.e. to override it with d.Set("name", oldValue)
Which approach should be preferred?
I know there's a prevent_destroy = true lifecycle attribute, but it seems like it won't prevent this specific scenario (it only prevents accidental terraform destroy).
The most typical answer is to follow your first option, and then allow Terraform to report in its UI that the change requires replacement and allow the user to decide how to proceed.
It is true that if someone does not read the plan output then they can potentially make a change they did not intend to make, but in that case the user is not making use of the specific mechanism that Terraform provides to help users avoid making undesirable changes.
You mentioned prevent_destroy = true, and indeed this is a setting that's relevant to this situation; in fact, it is exactly what that option is for: it will cause Terraform to raise an error if the plan includes a "replace" action for the resource annotated with that setting, thereby preventing the user from accepting the plan and thus from destroying the object.
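For reference, that looks like this in configuration (a minimal sketch using the resource from the question):

resource "foo_db_instance" "ex" {
  name = "bar"

  lifecycle {
    # Terraform will refuse to create any plan that would destroy
    # (and therefore also replace) this object.
    prevent_destroy = true
  }
}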
Some users also wrap Terraform in automation which will perform more complicated custom policy checks on the generated plan, either achieving a similar effect as prevent_destroy (blocking the operation altogether) or alternatively just requiring an additional confirmation to help ensure that the operator is aware that something unusual is happening. For example, in Terraform Cloud a programmatic policy can report a "soft failure" which causes an additional confirmation step that might be approvable only by a smaller subset of operators who are better equipped to understand the impact of what's being proposed.
It is in principle possible to write logic in either the CustomizeDiff function (which runs during planning) or the Update function (which runs during the apply step) to return an error in this or any other situation you can write logic for in the Go programming language. Of these two options I would say that CustomizeDiff would be preferable since that would then prevent creating a plan at all, rather than allowing the creation of a plan and then failing partway through the apply step, when some other upstream changes may have already been applied.
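A minimal sketch of the CustomizeDiff variant, assuming SDKv2 and the usual context/fmt/helper-schema imports (resource and attribute names taken from the question):

CustomizeDiff: func(ctx context.Context, d *schema.ResourceDiff, meta interface{}) error {
    // d.Id() is empty during create; only block changes to existing objects.
    if d.Id() != "" && d.HasChange("name") {
        return fmt.Errorf("foo_db_instance: name attribute updates are not supported")
    }
    return nil
},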
However, to do either of these would be inconsistent with the usual behavior users expect for Terraform providers. The intended model is for a Terraform provider to describe the effect of a change as accurately as possible and then allow the operator to make the final decision about whether the proposed change is acceptable, and to cancel the plan and choose another strategy if not.

Using variables in a Terraform destroy provisioner

In the past I have managed to work around the limitation of not being able to use variables inside a destroy provisioner by using things such as terraform.workspace; per the error message you get when attempting to use variables, you can also use self, count.index, or each.key. I've only ever seen self used in the context of referencing values from a map associated with the triggers block of a null_resource, but what I want to reference, a URL and an API token, isn't applicable for use inside a triggers block. Is there some elegant way I'm missing of working around this limitation?
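For context, the triggers/self pattern referred to above looks roughly like this (attribute names hypothetical); note that anything placed in triggers is persisted in the state file, which is presumably what makes it unsuitable for an API token:

resource "null_resource" "teardown" {
  triggers = {
    # Stashed here only so the destroy provisioner can read it via self.
    api_url = var.api_url
  }

  provisioner "local-exec" {
    when    = destroy
    command = "curl -X DELETE ${self.triggers.api_url}"
  }
}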

Conditionally Set Environment Azure DevOps

I am working with an Azure pipeline in which I need to conditionally set the environment property. I am invoking the pipeline from a REST API call, passing Parameters in the body as documented here. When I try to access that parameter at compile time to set the environment conditionally, though, the variable comes through as empty (assuming it is not accessible at compile time?).
Does anybody know a good way to solve for this via the pipeline or the API call?
After some digging I have found the answer to my question and I hope this helps someone else in the future.
As it turns out, the Build REST API does support template parameters that can be used at compile time; the documentation just doesn't explicitly tell you. This is also supported in the Runs endpoint.
My payload for my request ended up looking like:
{
  "Parameters": "{\"Env\":\"QA\"}",
  "templateParameters": { "SkipApproval": "Y" },
  "Definition": {
    "Id": 123
  },
  "SourceBranch": "main"
}
and my pipeline consumed those values at compile time via the following (abbreviated) pipeline:
parameters:
- name: SkipApproval
  default: ''
  type: string
...
${{ if eq(parameters.SkipApproval, 'Y') }}:
  environment: NoApproval-All
${{ if ne(parameters.SkipApproval, 'Y') }}:
  environment: digitalCloud-qa
This is a common area of confusion for YAML pipelines. Run-time variables need to be accessed using a different syntax.
$[ variables['var'] ]
YAML pipelines go through several phases.
Compilation - This is where all of the YAML documents (templates, etc) comprising the final pipelines are compiled into a single document. Final values for parameters and variables using ${{}} syntax are inserted into the document.
Runtime - Run-time variables using the $[] syntax are plugged in.
Execution - The final pipeline is run by the agents.
This is a simplification; another explanation from Microsoft is a bit better:
First, expand templates and evaluate template expressions.
Next, evaluate dependencies at the stage level to pick the first stage(s) to run.
For each stage selected to run, two things happen:
All resources used in all jobs are gathered up and validated for authorization to run.
Evaluate dependencies at the job level to pick the first job(s) to run.
For each job selected to run, expand multi-configs (strategy: matrix or strategy: parallel in YAML) into multiple runtime jobs.
For each runtime job, evaluate conditions to decide whether that job is eligible to run.
Request an agent for each eligible runtime job.
...
This ordering helps answer a common question: why can't I use certain variables in my template parameters? Step 1, template expansion, operates solely on the text of the YAML document. Runtime variables don't exist during that step. After step 1, template parameters have been resolved and no longer exist.
[ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runs?view=azure-devops]
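Putting the two syntaxes side by side (variable and parameter names hypothetical):

variables:
  # Resolved during step 1, template expansion, before anything runs:
  compileTimeValue: ${{ parameters.SkipApproval }}
  # Resolved at runtime, once the pipeline is executing:
  runTimeValue: $[ variables['Env'] ]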

When should I use resource or @property_hash in a Puppet provider?

When writing a Puppet provider, there are two ways to access properties of the resource: the resource variable and the @property_hash variable. I'm trying to use a property foo in a setter, and started by using resource[:foo]. This works when doing
puppet apply
and it works when doing
puppet resource thing thingname
but if I try
puppet resource thing thingname foo=Foo
then resource[:foo] is unset, while @property_hash[:foo] has the right value.
I can print out the value of foo right before calling new in self.instances, and it is correct in both cases.
This article shows resource being used all over the place. It's in a function called from flush, so I changed all my setters to work with flush, but still resource[:foo] isn't set.
I can use @property_hash[:foo], but a colleague found that it didn't work when creating a resource. That's not a problem in my case, as the resource is only managed, not created, but I really need to understand this properly to avoid problems in the future. When should I use resource and when @property_hash? And why does resource work in that example but not for me?
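For reference, a minimal provider sketch, assuming a type named thing with a foo property, illustrating where each of the two hashes gets its data:

Puppet::Type.type(:thing).provide(:ruby) do
  # Generates getters/setters for all properties, backed by @property_hash.
  mk_resource_methods

  def self.instances
    # Discovered system state: whatever is passed to new() here becomes
    # @property_hash on the returned provider instances.
    [new(name: 'thingname', ensure: :present, foo: 'current value')]
  end

  def self.prefetch(resources)
    instances.each do |prov|
      if (res = resources[prov.name])
        res.provider = prov # attaches the discovered @property_hash
      end
    end
  end

  def foo=(value)
    # resource[:foo] holds the desired value from the catalog or from a
    # `puppet resource thing thingname foo=...` command line, while
    # @property_hash[:foo] holds the value discovered by self.instances.
    @property_hash[:foo] = value
  end

  def flush
    # Write @property_hash back to the system here.
  end
end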

How to get a value from a Puppet resource

I have a problem with my Puppet script. I would like to get a value set in my resource file. I declare a resource like this:
define checkxml (
  $account = '',
  $pwd     = template('abc/abc.erb'),
) {
  if empty($pwd) {
    fail('pwd empty')
  }
}
I call it via:
checkxml{"$agtaccount":
account => $agtaccount,
}
I want to get the value of $pwd. $pwd gets its value from the template; if I try to show the value inside my resource definition, I get the right value, so the template works fine.
My problem is getting access to this value after calling the resource. I saw getparam from stdlib, but it doesn't work for me:
getparam(Checkxml["$agtaccount"], "pwd")
If I try to get the account parameter instead of pwd, it works. I think that since I don't explicitly declare pwd, I can't get it back.
How can I get it?
Thanks for your help.
Ugh, this looks dangerous. First off, I'd recommend steering clear of that function and the concept it embodies: it exposes you to evaluation-order dependencies, which can always lead to inconsistent manifest behavior.
As for the retrieval of the value itself: that will likely not work when the default is used. That's because at catalog-building time, no value has yet been bound to the parameter, if that makes any sense.
The resolution of final parameter values is rather involved, so there are lots of things that can go wrong with a manifest that relies on such introspective functionality.
I recommend retrieving the desired value in a more central location (where exactly depends on your manifest structure) and using it both when declaring the Checkxml["$agtaccount"] resource and for the other uses for which you are currently trying to extract it.
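Concretely, that could look something like this (a sketch; where the value lives in your manifests is up to you):

# Compute the value once, in the calling scope...
$pwd = template('abc/abc.erb')

checkxml { "$agtaccount":
  account => $agtaccount,
  pwd     => $pwd,
}

# ...and use $pwd directly wherever getparam() was being attempted.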