Unable to utilise variable passed from previous step in an Azure release pipeline

I have an Azure release pipeline that I am using for a Windows Virtual Desktop deployment.
I have created a simple PowerShell task that obtains some information about the existing session hosts, captures it to a list, and then sets it as an output variable using
$sessions = "vm_name1 vm_name2 vm_name3"
write-host "##vso[task.setvariable variable=sessions;isOutput=true;]$sessions"
I can then retrieve this fine in later tasks by using
write-host $(taskreference.sessions)
result: vm_name1 vm_name2 vm_name3
However, I need to be able to parse this variable $(sessions) to obtain the individual host names to be used in later steps of my release pipeline.
i.e.
$vms = $(sessions).Split(" ")
$vms[0]
$vms[1]
and so on.
I have tried various methods to access and expand this variable but am struggling to get anything working, as it always returns null. When using an Azure CLI task, I am able to successfully access the variable using the exact same code. I suspect this has something to do with how the tasks handle run-time versus compile-time expansion.
Is there any way that I could parse the variable properly to obtain the individual host names?

The approach you described would work fine if the variable were passed in as a string parameter.
Otherwise, this might work:
$vms = "$(sessions)".Split(" ")
$vms[0]
$vms[1]
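Separately, a minimal sketch of another way to read the output variable from a later PowerShell task, assuming the setting task uses the reference name taskreference (output variables are also surfaced to later script tasks as environment variables, with the dot replaced by an underscore and the name upper-cased):
# Hypothetical later PowerShell task; TASKREFERENCE_SESSIONS is assumed to exist because
# the earlier task set the variable with isOutput=true under the reference name "taskreference"
$vms = $env:TASKREFERENCE_SESSIONS -split " "
$vms[0]
$vms[1]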

Related

Terraform Inconsistent Azure VM Names

I have the following Terraform code that creates multiple instances of an Azure VM based on a variable called nb_instances:
module "create-servers" {
  source              = "../../Modules/Create-Vms"
  resource_group_name = azurerm_resource_group.rg.name
  vm_hostname         = "testvm"
  nb_instances = 2
On my create-servers module, I have the following code
resource "azurerm_virtual_machine" "vm-windows" {
  count                         = var.nb_instances 
  name                          = "${var.vm_hostname}${format("%02d",count.index+1)}"
  resource_group_name           = data.azurerm_resource_group.vm.name
This works great; however, I'm getting an inconsistent naming convention between the Azure resource and the actual server name of the VM.
Azure displays the server as testvm01 and testvm02, but the server's actual OS name is testvm-1 and testvm-2. My desired name is testvm02, which is what I expected from the module. Is there a workaround to keep both names consistent?
You also need to set the os_profile.computer_name property; see the sketch below the documentation link.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine#os_profile
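A minimal sketch of the idea, reusing the naming expression from the module in the question (other required arguments of azurerm_virtual_machine, such as the network interface, VM size, OS disk and image reference, are omitted, and the admin variables are assumptions):
resource "azurerm_virtual_machine" "vm-windows" {
  count               = var.nb_instances
  name                = "${var.vm_hostname}${format("%02d", count.index + 1)}" # Azure resource name
  resource_group_name = data.azurerm_resource_group.vm.name

  os_profile {
    # The in-guest hostname is set here; reuse the same expression so the OS name
    # matches the Azure resource name (testvm01, testvm02, ...)
    computer_name  = "${var.vm_hostname}${format("%02d", count.index + 1)}"
    admin_username = var.admin_username
    admin_password = var.admin_password
  }
}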

Terraform provisioner trigger only for new instances / only run once

I have conditional provision steps I want to run for all compute instances created, but only run once.
I know I can put the provisioning within the compute resource, but then it cannot be conditional.
If I put it in a null_resource, I need a trigger, and I don't know how to trigger on only the newly created resources (i.e. if I already have 1 instance, and want to scale to 2, I want to only run provisioning on the 2nd being created, not run again on the 1st which is already provisioned).
How can I get a variable that only gives me the id or ip of the instance just created, as opposed to all of them?
Below is an example of the provisioner.
resource "null_resource" "provisioning" {
count = var.condition ? length(var.instance_ips) : 0
triggers = {
instance_ids = join(",", var.instance_ips)
}
connection {
agent = false
timeout = "4m"
host = var.instance_ips[count.index]
user = "user"
private_key = var.ssh_private_key
}
provisioner "remote-exec" {
inline = [ do something, then remove the public key from authorized_keys ]
}
}
PS: the reason I can only run it once (as opposed to running it again and doing nothing if already provisioned) is that I want to destroy the provisioning public key after I'm done. Since it is a TF-generated key pair and the private key ends up in the state file, I want to make sure someone who gets access to the key pair still cannot access the instance.
Once the public key is removed from authorized_keys, a provisioner running a second time will just fail to connect, time out and fail.
I found that I can use on_failure = continue, but then if it actually fails for legitimate reasons it would continue too.
I could also use a key pair that is generated locally with a local-exec provisioner so it doesn't show up in the state file, but then the key is a file, which is not much different if someone gets access to it; the file needs to stay on the machine, which may not work well with a cloud resource manager environment that is recreated on a need-to-run basis.
And then I'm sure there are other ways to provision a file or script, but in this case it contains instance dependency data generated by TF, that I don't want left in a cloud-init.
So I come down to needing to figure out a way to use a trigger that only contains the new instance(s).
Any ideas how to do this?
https://www.terraform.io/docs/provisioners/
This documentation lists provisioners as a last resort and provides some suggestions on how to avoid having to use them for various common resources.
Execute the script from user_data, which is specifically designed for run-once provisioning actions. Since defining the user_data supports all regular Terraform interpolation, you can use that opportunity to pass environment variables or selectively include/exclude parts of a script if you need conditional logic.
The downside is that any change in user_data results in recreating the instances, or creating a new launch configuration/template.
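A minimal sketch of that approach, assuming an AWS instance (matching the user_data/launch-template terminology above; on Azure the equivalent argument is custom_data) and a hypothetical bootstrap.sh.tpl template:
resource "aws_instance" "example" {
  count         = var.nb_instances
  ami           = var.ami_id      # assumed variables
  instance_type = "t3.micro"

  # user_data runs once at first boot, so each newly created instance provisions
  # itself and instances that already exist are left untouched.
  user_data = var.condition ? templatefile("${path.module}/bootstrap.sh.tpl", {
    instance_index = count.index  # per-instance data generated by Terraform
  }) : null
}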

Error "BadRequest" when calling Azure Function in ADF

I am creating an extensive Data Factory workflow that will create and fill a data warehouse for multiple customers automatically; however, I'm running into an error. I am going to post the questions first, since the remaining info is a bit long. Keep in mind I'm new to Data Factory and JSON.
Questions & comments
How do I correctly pass the parameter through to an Execute Pipeline activity?
How do I add said parameter to an Azure Function activity?
The issue may lie with correctly passing the parameter through, or it may lie in picking it up; I can't seem to determine which one. If you spot an error with the current setup, don't hesitate to let me know - all help is appreciated.
The Error
{
  "errorCode": "BadRequest",
  "message": "Operation on target FetchEntries failed: Call to provided Azure function '' failed with status-'BadRequest' and message - '{\"Message\":\"Please pass 'customerId' on the query string or in the request body\"}'.",
  "failureType": "UserError",
  "target": "ExecuteFullLoad"
}
The Setup:
The whole setup starts with a function call to get new customers from an online economic platform. It then writes them to a SQL table, from which they are processed and loaded into the final table, after which a new pipeline is executed. This process works perfectly. From there the following pipeline is executed:
As you can see, it all works well until the ForEach loop tries to execute another pipeline that contains an Azure Function, which calls a .NET scripted function that fills said warehouse (complex, I know). This Azure Function needs a customerId to retrieve tokens and load the data into the warehouse. I'm trying to pass those tokens from the InternalCustomerID lookup through the ForEach into the pipeline and into the function. The ForEach actually works, but fails "Because an inner activity failed".
The Execute Pipeline activity contains the following settings, where I'm trying to pass through the parameter that comes from the ForEach loop. This part of the process also works, since it executes twice (as it should in this test phase):
I don't know whether it fails to pass the parameter through or fails at adding it to the body of the Azure Function.
The child pipeline (FullLoad) contains the following parameters. I'm not sure if I should set a default value to be overwritten or how that actually works. The guides I've looked at on the internet haven't had a default value.
Finally, there are the settings for the Azure Function activity. I'm not sure what I need to write in order to correctly capture the parameter and/or what to fill in - the header or the body - regarding the error message. I know a POST cannot be executed without a body.
If I run this specific function by hand (using the Function App part of portal.azure.com) it works fine, using the following settings:
I viewed all of your detailed question and I think the key to the issue is the format of the Azure Function request body.
I'm afraid the current one is incorrect. Please see my steps below, based on your description:
Work Flow:
Inside ForEach Activity, only one Azure Function Activity:
The preview data of LookUp Activity:
Then the configuration of the ForEach Activity: @activity('Lookup1').output.value
The configuration of the Azure Function Activity: @json(concat('{"name":"', item().name, '"}'))
From the Azure Function, I only output the input data. Sample output as below:
Tips: I saw your step is executing the Azure Function in another pipeline via an Execute Pipeline Activity (I don't know why you have to follow such steps), but I think it doesn't matter, because you only need to focus on the body format: if the acceptable format is JSON, you could use @json(....); if the acceptable format is String, you could use @concat(....). Besides, you could check the sample from the ADF UI portal which uses pipeline().parameters.
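Adapted to the question, a minimal sketch of the expressions involved (the parameter name customerId and the lookup column name InternalCustomerID are assumptions based on the description above). In the Execute Pipeline activity, set the child pipeline's customerId parameter to
@item().InternalCustomerID
and in the child pipeline's Azure Function activity, build the request body from that parameter:
@json(concat('{"customerId":"', pipeline().parameters.customerId, '"}'))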

Substitute Service Fabric application parameters during deployment

I'm setting up my production environment and would like to secure my environment-related variables.
For the moment, every environment has its own application parameters file, which works well, but I don't want every dev in my team knowing the production connection strings and other sensitive stuff that could appear in there.
So I'm looking for every possibility available.
I've seen that in Azure DevOps, which I'm using at the moment for my CI/CD, there is some possible variable substitution (XML transformation). Is it usable in an SF project?
I've seen in another project something similar through Octopus.
Are there any other tools that would help me manage my variables by environment safely (and easily)?
Can I do that with my KeyVault eventually?
Any recommendations?
Thanks
EDIT: an example of how I'd like to manage those values; this is a screenshot from Octopus:
so something similar to this that separates and injects the values is what I'm looking for.
You can do XML transformation to the ApplicationParameter file to update the values in there before you deploy it.
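For illustration, a minimal sketch of an ApplicationParameters file with a placeholder token that a token-replacement step in the pipeline could substitute before deployment (the application name, parameter name and the #{...}# token style are assumptions, not part of the question):
<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <!-- Replaced by the release pipeline, e.g. from a secret variable or Key Vault -->
    <Parameter Name="ConnectionString" Value="#{ConnectionString}#" />
  </Parameters>
</Application>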
The other option is to use PowerShell to update the application and pass the parameters as arguments to the script.
The Start-ServiceFabricApplicationUpgrade command accepts a hashtable of parameters; technically, the built-in task in VSTS\DevOps transforms the application parameters into a hashtable, so the script would be something like this:
# Get the existing parameters
$app = Get-ServiceFabricApplication -ApplicationName "fabric:/AzureFilesVolumePlugin"
# Create a temp hashtable and populate it with the existing values
$parameters = @{ }
$app.ApplicationParameters | ForEach-Object { $parameters.Add($_.Name, $_.Value) }
# Replace the desired parameters
$parameters["test"] = "123test" # Here you would replace with your variable, like $env:username
# Upgrade the application
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/AzureFilesVolumePlugin" -ApplicationParameter $parameters -ApplicationTypeVersion "6.4.617.9590" -UnmonitoredAuto
Keep in mind that the existing VSTS task also performs other operations, like copying the package to SF and registering the application version in the image store; you will need to replicate those. You can copy the full script from the Deploy-FabricApplication.ps1 file in the Service Fabric project and adapt it with your changes. The other approach is to get the source for the VSTS task here and add your changes.
If you are planning to use KeyVault, I would recommend the application access the values directly from KeyVault instead of passing them to SF; this way, you can change the values in KeyVault without redeploying the application. In the deployment, you would only pass the KeyVault credentials\configuration.
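If you do end up substituting values at deployment time instead, a minimal sketch of pulling a secret from Key Vault into the parameter hashtable above (the vault name, secret name and parameter name are assumptions; requires a recent Az.KeyVault module and an authenticated Az context):
# Hypothetical vault/secret names; -AsPlainText returns the secret value as a string in newer Az.KeyVault versions
$secret = Get-AzKeyVaultSecret -VaultName "my-prod-vault" -Name "DbConnectionString" -AsPlainText
$parameters["ConnectionString"] = $secret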

Azure ARM Template Testing with Pester

I have been following the link Azure ARM Template Testing on how to carry out ARM testing with Pester.
Unfortunately, I'm unable to get a successful test.
For example, the script contains the following code:
It "Does Availability Set Have Correct SKU" {
    $av = $deploymentOutput.validatedResources | Where-Object { $_.type -eq 'Microsoft.Compute/availabilitySets' }
    $av.sku.name | Should Be 'Align'
}
However, even though the result of the ARM template is 'Align', I get the following error:
error
Whereas I should be getting the following successful output:
success
For a complete look at the code, it can be found here.
Any guidance will be greatly appreciated.
Regards
While this isn't a direct answer to your question, it is an indirect answer to your question :)
Just don't do this. Test-AzureRmResourceGroupDeployment doesn't do any real good. If you insist on using it you can always use a one-liner to do that, or use VSCode tasks or whatever to kick off this cough test cough.
There is really no point in validating whether this particular resource type is the one that you expect, because you don't really change resource types after the resource is created. Also, Test-AzureRmResourceGroupDeployment returning success doesn't mean your deployment will work; it only checks basic sanity. Just create a PowerShell script\task to deploy the template and kick it off automatically after commit. Pester adds nothing of value to this process, it only complicates things.
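A minimal sketch of that kind of one-liner, assuming the AzureRM PowerShell module and hypothetical resource group and template file names:
# Basic sanity check only; success here does not guarantee the deployment will work
Test-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"
# The actual deployment, kicked off automatically after commit
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"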
