Properly using Terraform External Data Source - terraform

I am using Terraform from the bash cloud shell in Azure. I am trying to add an external data source to my Terraform configuration file that will use az cli to query for the virtualip object on a Microsoft.Web/hostingEnvironment the template deploys.
AZ CLI command line:
az resource show --ids /subscriptions/<subscription Id>/resourceGroups/my-ilbase-rg/providers/Microsoft.Web/hostingEnvironments/my-ilbase/capacities/virtualip
Output when run from command line:
{
  "additionalProperties": {
    "internalIpAddress": "10.10.1.11",
    "outboundIpAddresses": [
      "52.224.70.119"
    ],
    "serviceIpAddress": "52.224.70.119",
    "vipMappings": []
  },
  "id": null,
  "identity": null,
  "kind": null,
  "location": null,
  "managedBy": null,
  "name": null,
  "plan": null,
  "properties": null,
  "sku": null,
  "tags": null,
  "type": null
}
In my Terraform config I create a variable for the --ids value:
variable "ilbase_resourceId" {
  default = "/subscriptions/<subscription Id>/resourceGroups/my-ilbase-rg/providers/Microsoft.Web/hostingEnvironments/my-ilbase/capacities/virtualip"
}
I then have the data source structured this way:
data "external" "aseVip" {
  program = ["az", "resource", "show", "--ids", "${var.ilbase_resourceId}"]
}
When I execute my configuration, I get the error below:
data.external.aseVip: data.external.aseVip: command "az" produced invalid JSON: json: cannot unmarshal object into Go value of type string
Any ideas what I am doing wrong?

I discovered the problem: the Terraform external data source is not yet able to handle the nested structure of what the command returns. I was able to get around this by adding an az CLI block at the beginning of the script I use to deploy the Application Gateway; it grabs the IP address and passes it into the Terraform config as a variable. Below is the script block I am using:
ilbase_virtual_ip=$(
az resource show \
--ids "/subscriptions/$subscription_id/resourceGroups/$ilbase_rg_name/providers/Microsoft.Web/hostingEnvironments/$ilbase_name/capacities/virtualip" \
--query "additionalProperties.internalIpAddress"
)

That command will succeed when you are working in an interactive session; I guess that when you run it from your shell, you have already done az login. When Terraform executes your command, it does not reuse your existing session. You would need to create a PS1 script where you are prompted for login, or where you provide your credentials, so your request can succeed.
Whichever your choice, take into account that the ONLY output of that script should be JSON. If any other command adds something to the output (for example, when you do a login, you get output with information about your subscription), then you will hit the same error, because the combined output is no longer valid JSON. You will need to pipe that kind of extra output to Out-Null to make it "silent", and write only the JSON from your request to the output.
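Putting both points together, a minimal wrapper sketch (Python here; the function name, stubbed call, and values are placeholders — any executable that prints only a flat JSON map of strings will satisfy the external data source):

```python
#!/usr/bin/env python3
"""Hypothetical external-program wrapper for Terraform.

Everything except the final flat JSON map must stay off stdout;
chatty output (login banners, progress messages) goes to stderr.
"""
import json
import sys


def fetch_vip() -> dict:
    # A real wrapper would log in and call `az resource show` here;
    # the call is stubbed so the output discipline stays visible.
    raw = {"additionalProperties": {"internalIpAddress": "10.10.1.11"}}
    # Flatten to the one-level map of strings Terraform expects.
    return {"internalIpAddress": raw["additionalProperties"]["internalIpAddress"]}


if __name__ == "__main__":
    # Diagnostics go to stderr so stdout stays pure JSON.
    print("fetching virtual IP...", file=sys.stderr)
    json.dump(fetch_vip(), sys.stdout)
```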
I hope this can help.

While there is an accepted answer, it is really a workaround; the underlying issue can be addressed directly.
The error occurs because Terraform's external data source expects a one-level JSON map, like { "a": "b", "c": "d" }, while your az command returns a multi-level map (a map of maps).
You can improve your az command with --query so that it returns only the inner map:
data "external" "aseVip" {
  program = ["az", "resource", "show", "--ids", "${var.ilbase_resourceId}", "--query", "additionalProperties"]
}

output "internalIpAddress" {
  value = data.external.aseVip.result.internalIpAddress
}

output "outboundIpAddresses" {
  value = data.external.aseVip.result.outboundIpAddresses
}
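For context, the shape the external data source will accept can be sketched as a small predicate (Python for illustration; the provider performs the equivalent check when it unmarshals the program's stdout):

```python
import json


def is_valid_external_result(payload: str) -> bool:
    """The external data source accepts only a single-level JSON
    object whose values are all strings."""
    data = json.loads(payload)
    return isinstance(data, dict) and all(isinstance(v, str) for v in data.values())


# Nested map, as returned by the plain `az resource show` call: rejected.
assert not is_valid_external_result('{"additionalProperties": {"internalIpAddress": "10.10.1.11"}}')
# Flat map of strings: accepted.
assert is_valid_external_result('{"internalIpAddress": "10.10.1.11"}')
# Note that list values (e.g. outboundIpAddresses) are also rejected.
assert not is_valid_external_result('{"outboundIpAddresses": ["52.224.70.119"]}')
```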
I hope this may help other people.

Related

Unable to query Azure Table Storage using Azure CLI

I wanted to filter the entries in my Azure Storage Table, whose structure looks like the following. I wanted to filter the entries based on a given Id, for example JD.98755. How can we achieve this?
{
  "items": [
    {
      "selectionId": {
        "Id": "JD.98755",
        "status": 0
      },
      "Consortium": "xxxxxx",
      "CreatedTime": "2019-09-06T09:34:07.551260+00:00",
      "RowKey": "yyyyyy",
      "PartitionKey": "zzzzzz-zzzzz-zz-zzzzz-zz",
      "Timestamp": "2019-09-06T09:41:34.660306+00:00",
      "etag": "W/\"datetime'2019-09-06T09%3A41%3A34.6603060Z'\""
    }
  ],
  "nextMarker": {}
}
I can filter other elements like the Consortium using the below query but not the Id
az storage entity query -t test --account-name zuhdefault --filter "Consortium eq 'test'"
I tried something like the following to filter based on the given ID but it has not returned any results.
az storage entity query -t test --account-name zuhdefault --filter "Id eq 'JD.98755'"
{
  "items": [],
  "nextMarker": {}
}
I do agree with @Gaurav Mantri, and I guess one other approach you can use is the following, which I have reproduced in my environment with the expected results.
Firstly, store the output of the query command in a variable, for example $x.
Then convert the output from JSON:
$r = $x | ConvertFrom-Json
Then you can read the nested value through $r.items and filter for the items whose selectionId.Id is JD.98755.
If you have more data, store the first output in a variable, convert it into objects with ConvertFrom-Json, and repeat the steps above.
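The same filter can be sketched in code (Python here, using the item shape from the question); the key point is that the match on the nested Id happens client-side, after parsing the JSON:

```python
import json

# Response shape taken from the question's `az storage entity query` output
# (values abbreviated).
response = json.loads("""
{
  "items": [
    {
      "selectionId": {"Id": "JD.98755", "status": 0},
      "Consortium": "xxxxxx",
      "RowKey": "yyyyyy"
    }
  ],
  "nextMarker": {}
}
""")

# Filter client-side on the nested Id; Table Storage itself cannot
# query inside this value.
matches = [item for item in response["items"]
           if item.get("selectionId", {}).get("Id") == "JD.98755"]
assert len(matches) == 1
assert matches[0]["RowKey"] == "yyyyyy"
```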
The reason you are not getting any data back is because Azure Table Storage is a simple key/value pair store and you are storing a JSON there (in all likelihood, the SDK serialized JSON data and stored it as string in Table Storage).
Considering there is no key named Id, you will not be able to search for that.
If you need to store a JSON document, one option is to make use of Cosmos DB (with SQL API) instead of Table Storage. The other option is to flatten your JSON so that you store it as key/value pairs. In this scenario, your data would look something like:
{
  "selectionId_Id": "JD.98755",
  "selectionId_status": 0,
  "Consortium": "xxxxxx",
  "CreatedTime": "2019-09-06T09:34:07.551260+00:00",
  "RowKey": "yyyyyy",
  "PartitionKey": "zzzzzz-zzzzz-zz-zzzzz-zz",
  "Timestamp": "2019-09-06T09:41:34.660306+00:00",
  "etag": "W/\"datetime'2019-09-06T09%3A41%3A34.6603060Z'\""
}
then you should be able to filter by selectionId_Id.
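The flattening step can be sketched with a hypothetical helper (not part of any SDK; the underscore separator matches the example above):

```python
def flatten(obj: dict, prefix: str = "") -> dict:
    """Flatten nested keys with underscores: selectionId.Id -> selectionId_Id."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat


entity = {"selectionId": {"Id": "JD.98755", "status": 0}, "Consortium": "xxxxxx"}
assert flatten(entity) == {
    "selectionId_Id": "JD.98755",
    "selectionId_status": 0,
    "Consortium": "xxxxxx",
}
```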

Azure VM creation failed using downloaded template (The entity name 'vmImageName' is invalid according to its validation rule)

I want to create a vm on azure using az cli.
So I created interactively a VM and downloaded the template and its parameters.
I added some missing parameters, the final script is
az deployment group create --resource-group my-rg --template-file ./azure-32Go.template.json --parameters @./azure-32Go.template.parameters --parameters publicIpAddressName=my-ip networkInterfaceName=my-nic adminPublicKey="$(cat ~/.ssh/id_rsa.pub)" imageName=MsOps-Demo1-image-20210212-175858 --debug
I get an error
msrestazure.azure_exceptions.CloudError: Azure Error: DeploymentFailed
Message: At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.
Exception Details:
Error Code: BadRequest
Message: {'error': {'code': 'InvalidParameter', 'message': "The entity name 'vmImageName' is invalid according to its validation rule: ^[^_\\W][\\w-._]{0,79}(?<![-.])$.", 'target': 'vmImageName'}}
Googling did not turn up anyone with the same issue.
I looked in the Azure portal under Resource Group / Activity log for more information, but I only see the same issue as above with no further detail.
How can I fix my code?
To answer my own question: I have redone the whole process, and starting the VM with the template works fine.
As I wanted to set a specific image name, I added a parameter imageName.
I updated the template.json by adding the parameter:
"parameters": {
  "location": {
    "type": "string"
  },
  "imageName": {
    "type": "string"
  },
  "networkInterfaceName": {
and updated it where it is used:
"storageProfile": {
  "osDisk": {
    "createOption": "fromImage",
    "managedDisk": {
      "storageAccountType": "[parameters('osDiskType')]"
    }
  },
  "imageReference": {
    "id": "/subscriptions/14b6e880-753e-4a5f-8618-c0786702aa1c/resourceGroups/MsOps-Horizon-rg/providers/Microsoft.Compute/images/[parameters('imageName')]"
  }
},
But the creation process then fails for the same reason:
Deployment failed. Correlation ID: 39b18e50-ff0d-4ecd-a3e8-e40b988a341b.
{
  "error": {
    "code": "InvalidParameter",
    "message": "The entity name 'vmImageName' is invalid according to its validation rule: ^[^_\\W][\\w-._]{0,79}(?<![-.])$.",
    "target": "vmImageName"
  }
}
Creation of vm failed
My workaround is therefore to run sed on the template.json to insert the image name I want :(
My guess is that [parameters('imageName')] is never expanded, because the expression is embedded in the middle of the string, so the literal text is used as the name and fails the validation rule.
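For what it's worth, ARM template expressions are only evaluated when the bracketed expression spans the entire string value, so [parameters('imageName')] embedded mid-string stays literal text. An alternative to the sed workaround (untested here) is to build the whole id with concat():

```json
"imageReference": {
  "id": "[concat('/subscriptions/14b6e880-753e-4a5f-8618-c0786702aa1c/resourceGroups/MsOps-Horizon-rg/providers/Microsoft.Compute/images/', parameters('imageName'))]"
}
```

The resourceId() template function would be the more idiomatic way to construct such an id.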

How to get nested properties using Azure CLI

I had been using the line below to grab the value of the internalIpAddress property from an ILB App Service Environment in Azure:
az resource show `
--ids "/subscriptions/$subscription_id/resourceGroups/$ilbase_rg_name/providers/Microsoft.Web/hostingEnvironments/$ilbase_name/capacities/virtualip" `
--query "internalIpAddress"
The format of the virtualip resource was:
{
  "internalIpAddress": "10.30.0.139",
  "outboundIpAddresses": [
    "13.72.76.135"
  ],
  "serviceIpAddress": "13.72.76.135",
  "vipMappings": []
}
Seems like in the past day or so, the format of the virtualip resource has now changed to this:
{
  "additionalProperties": {
    "internalIpAddress": "10.30.0.139",
    "outboundIpAddresses": [
      "13.72.76.135"
    ],
    "serviceIpAddress": "13.72.76.135",
    "vipMappings": []
  },
  "id": null,
  "identity": null,
  "kind": null,
  "location": null,
  "managedBy": null,
  "name": null,
  "plan": null,
  "properties": null,
  "sku": null,
  "tags": null,
  "type": null
}
And now my command no longer works; it returns nothing. I can modify my command to get the entire additionalProperties object, but then I don't know how to parse through it to get just the value of the internalIpAddress property.
Another interesting note: if you go to the Azure Resource Explorer and navigate to the virtualip resource, it still shows the old format. If you try the PowerShell code the Azure Resource Explorer gives you to query the resource, it returns nothing.
Here is the PowerShell the Azure Resource Explorer said to use:
Get-AzureRmResource -ResourceGroupName MyRG -ResourceType Microsoft.Web/hostingEnvironments/capacities -ResourceName "myilbase/virtualip" -ApiVersion 2018-02-01
Looking for some help on how to parse the nested internalIpAddress property out of the additionalProperties object.
Just traverse the object like you normally would:
--query "additionalProperties.internalIpAddress"
(Add --output tsv if you want the bare value without JSON quoting.)
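For readers parsing the full payload in code instead, the JMESPath query is just a dotted traversal; an equivalent sketch in Python, using the values from the question:

```python
# Payload shape from the question (abbreviated).
payload = {
    "additionalProperties": {
        "internalIpAddress": "10.30.0.139",
        "outboundIpAddresses": ["13.72.76.135"],
        "serviceIpAddress": "13.72.76.135",
        "vipMappings": [],
    },
    "properties": None,
}

# `--query "additionalProperties.internalIpAddress"` walks the same path:
internal_ip = payload["additionalProperties"]["internalIpAddress"]
```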

Groovy: Executing azure CLI Command with JSON - Parsing Issue?

Currently I fail to run an Azure CLI command from Groovy because of the JSON part of the command.
There is an Azure command to run custom scripts on a virtual machine; the commandToExecute for the machine is passed as JSON.
WORKING example:
REQUEST call in console: az vm extension set -g demo --vm-name demo-cfg01 --name CustomScript --publisher Microsoft.Azure.Extensions --settings '{"commandToExecute":"ls"}'
RESPONSE:
{
  "autoUpgradeMinorVersion": true,
  "forceUpdateTag": null,
  "id": "/subscriptions/xxxxxxxxxx-xxxxxx-xxxx-xxxx-xxxxxxxxxxxxx/resourceGroups/demo/providers/Microsoft.Compute/virtualMachines/demo-cfg01/extensions/CustomScript",
  "instanceView": null,
  "location": "germanycentral",
  "name": "CustomScript",
  "protectedSettings": null,
  "provisioningState": "Succeeded",
  "publisher": "Microsoft.Azure.Extensions",
  "resourceGroup": "demo",
  "settings": {
    "commandToExecute": "ls"
  },
  "tags": null,
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "typeHandlerVersion": "2.0",
  "virtualMachineExtensionType": "CustomScript"
}
This script works fine.
The "same" command executed with Groovy leads to the following:
def process
StopWatch.withTimeRecording("EXECUTING COMMAND '" + cargs + "'", _logger, Level.ALL) {
    process = (cargs).execute(null, null)
    process.waitForProcessOutput(sout, serr)
}
Please notice the StopWatch, which logs the String array containing the params:
EXECUTING COMMAND '[az, vm, extension, set, -g, demo, --vm-name, demo-cfg01, --name, CustomScript, --publisher, Microsoft.Azure.Extensions, --settings, '{"commandToExecute":"ls"}']'
The params look the same as in the console.
The Response from Azure is:
VM has reported a failure when processing extension 'CustomScript'.
Error message: "Enable failed: failed to get configuration: error
reading extension configuration: error parsing settings file: error
parsing json: json: cannot unmarshal string into Go value of type
map[string]interface {}
I think Groovy somehow escapes the characters before execution; I cannot figure out what goes wrong. Any suggestions?
When you call execute on an array, Groovy (actually Java) double-quotes each parameter.
Just build your command line as you need in a single string; a String in Groovy has the same execute method as an array:
def cmd = """az vm extension set -g demo --vm-name demo-cfg01 --name CustomScript --publisher Microsoft.Azure.Extensions --settings '{"commandToExecute":"ls"}' """
def process = cmd.execute()
When you use execute on a String, Groovy will execute the exact command you have provided.
I found a "workaround": the az command also accepts a *.json file as the settings parameter. Therefore I first write the settings into a temporary JSON file and pass that file as the parameter. Works!
You are quoting for an .execute() call. You don't need to quote there, because no shell or command parser is involved.
Your command gets '{"commandToExecute":"ls"}', which is a valid JSON string (not a map), and this is also what the error message states:
error parsing json: json: cannot unmarshal string into Go value of type map[string]interface {}
Just use {"commandToExecute": "ls"} (no surrounding ') as the argument.
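The effect of that extra quoting is easy to reproduce outside Groovy; the sketch below (in Python, with list-style exec standing in for Groovy's array execute) shows the argument arriving verbatim when no shell is involved, and how one extra layer of quotes turns the JSON object into a JSON string, matching the "cannot unmarshal string" error:

```python
import json
import subprocess
import sys

arg = '{"commandToExecute": "ls"}'

# List-style exec (like Groovy's array execute): no shell parsing,
# so the child process receives the element exactly as written.
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", arg],
    capture_output=True, text=True,
).stdout.strip()
assert json.loads(out) == {"commandToExecute": "ls"}

# Wrapping the JSON in one more layer of quotes makes it a JSON
# *string*, which a consumer expecting a map will reject.
double_quoted = json.dumps(arg)
assert isinstance(json.loads(double_quoted), str)
```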

Passing credentials to DSC script from arm template

I am trying to deploy a VM with a DSC extension from an ARM template. According to various sources, and even this SO question, I am following the correct way to pass a credential object to my script:
"properties": {
  "publisher": "Microsoft.Powershell",
  "type": "DSC",
  "typeHandlerVersion": "2.19",
  "autoUpgradeMinorVersion": true,
  "settings": {
    "modulesUrl": "[concat(parameters('_artifactsLocation'), '/', variables('ConfigureRSArchiveFolder'), '/', variables('ConfigureRSArchiveFileName'), '/', parameters('_artifactsLocationSasToken'))]",
    "configurationFunction": "[variables('rsConfigurationConfigurationFunction')]",
    "properties": {
      "SQLSAAdminAuthCreds": {
        "UserName": "[parameters('SSRSvmAdminUsername')]",
        "Password": "PrivateSettingsRef:SAPassword"
      }
    }
  },
  "protectedSettings": {
    "Items": {
      "SAPassword": "[parameters('SSRSvmAdminPassword')]"
    }
  }
}
However, when I deploy it, I get this error message:
Error message: "The DSC Extension received an incorrect input: The password element of
argument 'SQLSAAdminAuthCreds' to configuration 'PrepareSSRSServer' does not
match the required format. It should be as follows
{
  "UserName" : "MyUserName",
  "Password" : "PrivateSettingsRef:MyPassword"
}.
Please correct the input and retry executing the extension.".
As far as I can see, my format is correct. What am I missing? Thanks
It seems that the function tries to use the parameters, which causes the issue. So please check the function in the .ps1 file where SQLSAAdminAuthCreds is used. I can't repro the issue you mentioned. I made a demo for it; the following are my detailed steps.
1. Prepare a ps1 file; I got the demo code from an article:
configuration Main
{
    param(
        [Parameter(Mandatory=$true)]
        [ValidateNotNullorEmpty()]
        [PSCredential]
        $SQLSAAdminAuthCreds
    )

    Node localhost {
        User LocalUserAccount
        {
            Username = $SQLSAAdminAuthCreds.UserName
            Password = $SQLSAAdminAuthCreds
            Disabled = $false
            Ensure = "Present"
            FullName = "Local User Account"
            Description = "Local User Account"
            PasswordNeverExpires = $true
        }
    }
}
2. Zip the ps1 file.
3. Download the ARM template and parameter files from the Azure portal.
4. Edit the template and parameter files.
5. Try to deploy the ARM template with VS or PowerShell.
6. Check the result from the Azure portal or the deployment output.
