I want to use PowerShell splatting to conditionally control which parameters are used for some Azure CLI calls, specifically for creating Cosmos DB collections.
The target was something like this:
$params = @{
    "db-name" = "test";
    "collection-name" = "test2";
    # makes no difference if I prefix with '-' or '--'
    "-key" = "secretKey";
    "url-connection" = "https://myaccount.documents.azure.com:443"
    "-url-connection" = "https://myaccount.documents.azure.com:443"
}
az cosmosdb collection create @params
Unfortunately, this only works for db-name and collection-name. The other parameters fail with this error:
az : ERROR: az: error: unrecognized arguments: --url-connection:https://myaccount.documents.azure.com:443
--key:secretKey
After some back and forth, I ended up using array splatting:
$params = "--db-name", "test", "--collection-name", "test2",
"--key", "secretKey",
"--url-connection", "https://myaccount.documents.azure.com:443"
az cosmosdb collection create #params
Now I can do things like this:
if ($collectionExists) {
    az cosmosdb collection update @colParams @colCreateUpdateParams
} else {
    # note that the partition key cannot be changed by update
    if ($partitionKey -ne $null) {
        $colCreateUpdateParams += "--partition-key-path", $partitionKey
    }
    az cosmosdb collection create @colParams @colCreateUpdateParams
}
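As an aside, here is a hedged sketch of how this pattern can be wrapped in a small helper (the function name and its parameter are my own, not from the original post): build the argument array conditionally, skipping entries whose values are null or empty, then splat it once.

# Hypothetical helper: turn a hashtable into an az-style argument array,
# dropping null/empty values so optional parameters can be added conditionally.
function ConvertTo-AzCliArgs {
    param([hashtable] $Parameters)
    $cliArgs = @()
    foreach ($key in $Parameters.Keys) {
        $value = $Parameters[$key]
        if (-not [string]::IsNullOrEmpty("$value")) {
            $cliArgs += "--$key", "$value"
        }
    }
    return $cliArgs
}

$colParams = ConvertTo-AzCliArgs @{
    "db-name"            = "test"
    "collection-name"    = "test2"
    "partition-key-path" = $partitionKey   # dropped when $partitionKey is $null
}
az cosmosdb collection create @colParams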
I need to get the repository ID of an existing project in order to work on that repo. There seems to be no way other than using the Azure DevOps REST API.
I tried to use the REST API to get the repo ID in my Terraform code:
data "http" "example" {
url = "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0"
request_headers = {
"Authorization" = "Basic ${base64encode("PAT:${var.personal_access_token}")}"
}
}
output "repository_id" {
value = data.http.example.json.value[0].id
}
It yields an error when I run terraform plan:
Error: Unsupported attribute
line 29, in output "repository_id":
29: value = data.http.example.json.value[0].id
I also tried with jsondecode (jq is already installed):
resource "null_resource" "example" {
provisioner "local-exec" {
command = "curl -s -H 'Authorization: Bearer ${var.pat}' https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0 | jq '.value[0].id'"
interpreter = ["bash", "-c"]
}
}
output "repo_id" {
value = "${jsondecode(null_resource.example.stdout).id}"
}
That did not work either.
The Azure DevOps REST API itself works fine; I just cannot get the value from the response into Terraform. What would be the right code, or can this be done without using the REST API?
Thank you!
The proper way to interact with an external API and return its output to Terraform is through an external data source. The Terraform docs for the external data source provide examples of how to create and use one.
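As a hedged sketch (the script name, environment variable, and output key are my own assumptions, and the {organization}/{project} placeholders are kept as in the question), the program invoked by such an external data source could call the same REST endpoint and print the single flat JSON object that the external data source protocol expects:

# get-repo-id.ps1 (hypothetical script name)
# Prints one flat JSON object to stdout, as the external data source protocol requires.
$pat  = $env:AZDO_PAT   # assumed environment variable holding the PAT
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$uri  = "https://dev.azure.com/{organization}/{project}/_apis/git/repositories?api-version=6.0"
$resp = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Basic $auth" }
@{ repository_id = $resp.value[0].id } | ConvertTo-Json -Compress

The script would then be wired up with a data "external" block like the ones shown further down on this page, and read in Terraform as data.external.<name>.result.repository_id.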
I tried querying a Cosmos DB container using Azure PowerShell (the CosmosDB PowerShell module) from the Azure portal. The result I am getting contains only system-defined columns instead of the custom-defined columns as well. What should I do in order to get all the columns listed in the SELECT query?
Below is the code that was executed via an Azure runbook.
$CosmosDB_Name = "dbname"
$Abt_RG_Name = "RGName"
$accountName="Acc"
$cosmosDbContext = New-CosmosDbContext -Account $accountName -Database $CosmosDB_Name -ResourceGroup $Abt_RG_Name
$query="SELECT s.visibleSalesOrderId,c.requestTimestamp,c.acquirerAction,c.statusCode,(c.statusCode='2200')? 'Success': (c.statusCode='2109')? 'Declined': (c.statusCode='2115')? 'Timeout': (c.statusCode='2110')? 'Could not Process': (c.statusCode='2111')? 'ConcurrentProcess':(c.statusCode='2300')? 'JSON_message_isEmpty': ((c.statusCode='2102')? 'InternalServerError':'Unknown') AS StatusDescription, s.acquirerRetryCounter
from salesorder s join c IN s.acquirerLog where
s.visibleSalesOrderId in ('64738389hdh')"
$Data= Get-CosmosDbDocument -Context $cosmosDbContext -CollectionId 'salesorder' -Query $query -QueryEnableCrossPartition $true -verbose
$Data
The result I got contains only the system properties below:
id :
_etag :
_rid :
_ts :
_attachments :
The expected result is the custom-defined columns (present in the SELECT list of the query above), shown below:
"visibleSalesOrderId": "57P702M2hdhsdGC",
"requestTimestamp": "2021-12-15T18:03:42.000Z",
"acquirerAction": "endCycldde",
"statusCode": "2188",
"StatusDescription": "Declined"
I am learning about Bicep and how to use it to deploy Azure resources.
I have been working on a key vault deployment as follows:
resource keyVault 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
  name: keyVaultName
}

// Create key vault keys
resource keyVaultKeys 'Microsoft.KeyVault/vaults/keys@2021-06-01-preview' = [for tenantCode in tenantCodes: {
  name: '${keyVault.name}/${keyVaultKeyPrefix}${tenantCode}'
  properties: {
    keySize: 2048
    kty: 'RSA'
    // a storage key should only need these operations
    keyOps: [
      'unwrapKey'
      'wrapKey'
    ]
  }
}]
What I would like to do now is create a GUID for each deployment, in this format for example:
b402c7ed-0c50-4c07-91c4-e075694fdd30
I couldn't find anything describing how to achieve this.
Can anyone please point me in the right direction?
Thank you very much for any help, and for your patience with a beginner.
You can use the newGuid function, as per documentation:
Returns a value in the format of a globally unique identifier. This function can only be used in the default value for a parameter.
// parameter with default value
param deploymentId string = newGuid()
...
output deploymentId string = deploymentId
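As a usage note (a hedged sketch; the resource group and file names are placeholders): because newGuid() is only allowed as a parameter default, you simply deploy without passing deploymentId and a fresh GUID is generated for each deployment.

# Deploy without passing deploymentId, so the newGuid() default is evaluated
New-AzResourceGroupDeployment `
    -ResourceGroupName 'my-rg' `
    -TemplateFile './main.bicep'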
I am trying to fetch values from Azure using an external data source in Terraform. However, I don't understand what I am doing wrong when I try to export values using Write-Output; I am getting this error:
data.external.powershell_test: data.external.powershell_test: command "Powershell.exe" produced invalid JSON: invalid character 'l' looking for beginning of object key string"
Below is my script
$vm=(Get-AzureRmVM -ResourceGroupName MFA-RG -Name vm2).name | convertTo-json
Write-Output "{""first"" : ""$vm""}"
Main.tf file
data "external" "powershell_test" {
program = ["Powershell.exe", "./vm.ps1"]
}
output "value" {
value = "${data.external.powershell_test.result.first}"
}
Can someone tell me what is wrong with the script, and whether I am using Write-Output properly?
Edit:
Below is the screenshot of running vm.ps1 directly.
Also, when I directly assign a value to the variable as below, Terraform is able to execute the code:
$vm = "testvm"
Write-Output "{""first"" : ""$vm""}"
For your issue, you should change your PowerShell command like this:
$vm=(Get-AzureRmVM -ResourceGroupName MFA-RG -Name vm2).name | convertTo-json
Write-Output "{""first"" : $vm}"
And you could change the code in the data source like this (you don't have to, but I suggest you do):
data "external" "powershell_test" {
program = ["Powershell.exe", "${path.module}/vm.ps1"]
}
The result on my side is below:
I use the new Azure PowerShell module Az, and my code is shown here:
PowerShell:
$vm=(Get-AzVM -ResourceGroupName charles -Name azureUbuntu18).name | convertTo-json
Write-Output "{""first"" : $vm}"
Terraform:
data "external" "powershell_test" {
program = ["Powershell.exe", "${path.module}/vm.ps1"]
}
output "value" {
value = "${data.external.powershell_test.result.first}"
}
data.external.powershell_test.result is the only valid attribute, and it is a map.
So the code becomes:
output "value" {
value = "${data.external.powershell_test.result['first']}"
}
Reference:
https://www.terraform.io/docs/configuration-0-11/interpolation.html#user-map-variables
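As an aside, a slightly more robust sketch (my own variation, not from the answer above) is to build the result as a hashtable and let ConvertTo-Json produce the whole object, which avoids hand-crafting the quoting in the Write-Output string:

# Build the result object and serialize it in one step (Az module assumed)
$vm = (Get-AzVM -ResourceGroupName MFA-RG -Name vm2).Name
@{ first = $vm } | ConvertTo-Json -Compress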
Thanks Charles XU for the answer. I was looking for Azure Application Gateway, and after lots of digging I ended up here, as Terraform does not yet provide a data source for Azure Application Gateway. However, the same can be done using a shell script and the Azure REST API.
Using Shell Script
appgw.sh
#!/bin/bash
#Linux: Requires Azure cli and jq to be available
#Get the Application Gateway ID using the Azure REST API via az cli, as a Terraform data source doesn't exist for it.
appgwid=$(az rest -m get --header "Accept=application/json" -u 'https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/newrg/providers/Microsoft.Network/applicationGateways?api-version=2020-07-01' | jq '.value[].id')
#A Terraform external data source requires an output, else it fails with an 'unmarshal JSON' error
echo "{\"appgw_id\" : $appgwid}"
data.tf
data "external" "appgw_id_sh" {
program = ["/bin/bash", "${path.module}/appgw.sh"]
}
outputs.tf
output "appgw_id_sh" {
value = data.external.appgw_id_sh.result.appgw_id
}
Using Powershell
appgw.ps1
#Windows: Require Azure Powershell to be available
#1. Install-Module -Name PowerShellGet -Force
#2. Install-Module -Name Az -AllowClobber
#Get the Application Gateway ID using Get-AzApplicationGateway, as a Terraform data source doesn't exist for it.
$appGw = (Get-AzApplicationGateway -Name "appgw-name" -ResourceGroupName "appgw-rg-name").id | convertTo-json
#A Terraform external data source requires an output, else it fails with an 'unmarshal JSON' error
Write-Output "{""appgw_id"" : $appgw}"
data.tf
data "external" "appgw_id_ps" {
program = ["Powershell.exe", "${path.module}/appgw.ps1"]
}
outputs.tf
output "appgw_id_ps" {
value = data.external.appgw_id_ps.result.appgw_id
}
FYI: when I use PowerShell 7, I need to use pwsh.exe instead of powershell.exe in the program list.
data "external" "powershell_test" {
program = ["**pwsh.exe**", "${path.module}/vm.ps1"]
}
I have an application which deploys a VM with a storage account and disks, and I want to convert it to use managed disks, as this is the future of Azure storage. I am looking at the REST API and I am missing two things:
1. How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it creates an empty one or one from an old unmanaged disk.
2. Can I choose the LUN on which the disk is created?
How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it creates an empty one or one from an old unmanaged disk.
Based on your description, I have created a test demo to create a snapshot of an existing managed disk (the OS disk), and it works well.
I created a Windows VM that uses a managed disk as the OS disk, then created another managed disk and added it to the VM.
The result is like below:
If you want to create a snapshot of an existing managed disk (one that already has data), I suggest you send a request to the URL below.
Url: https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/snapshots/{snapshotName}?api-version={api-version}
Method: PUT
Parameters:
subscriptionId: The identifier of your subscription where the snapshot is being created.
resourceGroup: The name of the resource group that will contain the snapshot.
snapshotName: The name of the snapshot that is being created. The name can't be changed after the snapshot is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters.
api-version: The version of the API to use. The current version is 2016-04-30-preview.
Request content:
{
  "properties": {
    "creationData": {
      "createOption": "Copy",
      "sourceUri": "/subscriptions/{subscriptionId}/resourceGroups/{YourResourceGroup}/providers/Microsoft.Compute/disks/{YourManagedDiskName}"
    }
  },
  "location": "eastasia"
}
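If you prefer to stay in PowerShell rather than C#, here is a minimal hedged sketch of the same PUT call; token acquisition is omitted, and the subscription, resource group, snapshot, and disk names are placeholders rather than values from this post:

# Assumes $token already holds a valid ARM bearer token
$uri = "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/snapshots/{snapshotName}?api-version=2016-04-30-preview"

$body = @{
    location   = "eastasia"
    properties = @{
        creationData = @{
            createOption = "Copy"
            sourceUri    = "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/disks/{managedDiskName}"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri $uri -Body $body `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json'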
For more details, you could refer to the following C# code:
json.txt:
{
  "properties": {
    "creationData": {
      "createOption": "Copy",
      "sourceUri": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/BrandoSecondTest/providers/Microsoft.Compute/disks/BrandoTestVM"
    }
  },
  "location": "eastasia"
}
Code:
static void Main(string[] args)
{
    // Read the JSON request body prepared in json.txt
    string body = File.ReadAllText(@"D:\json.txt");

    string tenantId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientSecret = "xxxxxxxxxxxxxxxxxxxx";
    string authContextURL = "https://login.windows.net/" + tenantId;
    var authenticationContext = new AuthenticationContext(authContextURL);
    var credential = new ClientCredential(clientId, clientSecret);
    var result = authenticationContext.AcquireTokenAsync(resource: "https://management.azure.com/", clientCredential: credential).Result;
    if (result == null)
    {
        throw new InvalidOperationException("Failed to obtain the JWT token");
    }
    string token = result.AccessToken;

    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxxx/providers/Microsoft.Compute/snapshots/BrandoTestVM_snapshot2?api-version=2016-04-30-preview");
    request.Method = "PUT";
    request.Headers["Authorization"] = "Bearer " + token;
    request.ContentType = "application/json";

    try
    {
        // Write the JSON body to the request stream
        using (var streamWriter = new StreamWriter(request.GetRequestStream()))
        {
            streamWriter.Write(body);
            streamWriter.Flush();
            streamWriter.Close();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Get the response
    var httpResponse = (HttpWebResponse)request.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        Console.WriteLine(streamReader.ReadToEnd());
    }
    Console.ReadLine();
}
Result:
Can I choose the LUN on which the disk is created?
Do you mean you want to select the LUN of the disk in your deployment template (azuredeploy)?
If so, I suggest you refer to the following JSON example to see how to build the deployment content of the VM and select its LUN.
For more details, you could refer to the deployment template JSON below (partial):
"diskArray": [
{
"name": "datadisk1",
"lun": 0,
"vhd": {
"uri": "[concat('http://', variables('storageAccountName'),'.blob.core.windows.net/vhds/', 'datadisk1.vhd')]"
},
"createOption": "Empty",
"caching": "[variables('diskCaching')]",
"diskSizeGB": "[variables('sizeOfDataDisksInGB')]"
},
]
For more details, you could refer to the following link:
201-vm-dynamic-data-disks-selection/azuredeploy.json
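As an aside, if you manage the VM with Azure PowerShell rather than an ARM template, the LUN can also be chosen when attaching a data disk. A minimal hedged sketch (the resource group, VM, and disk names are placeholders):

# Attach a new empty managed data disk at a specific LUN
$vm = Get-AzVM -ResourceGroupName 'my-rg' -Name 'my-vm'
$vm = Add-AzVMDataDisk -VM $vm -Name 'datadisk1' -Lun 0 `
        -CreateOption Empty -DiskSizeInGB 128 -Caching ReadWrite
Update-AzVM -ResourceGroupName 'my-rg' -VM $vm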
New VM’s can be created in the same resource group or in different resource group using a snapshot.
In case the VM is to be created is in a different RG, make sure the locations are same. Azure supports only within the same location.
Take a snap shot of the OS disk
disks-> click on os disk -> create snapshot
Enter the name
Select the resource group, where the snapshot has to be created.
(In case of different RG, make sure locations are same)
Once snap shot is created, Update the below commands as required and run from terminal (Azure cli - ref: [link][1]) .
# Provide the subscription Id of the subscription where the managed disk will be created
subscriptionId=88888-8888-888888-8888-8888888

# Provide the name of your resource group
resourceGroupName=RG_name

# Provide the name of the snapshot that will be used to create the managed disk (created in the previous step)
snapshotName=OSDISK_snapshot

# Provide the name of the managed disk which will be created
osDiskName=MY_OSDISK

# Provide the size of the disk in GB; it should be greater than the VHD file size
diskSize=30

# Provide the storage type for the managed disk: Premium_LRS or Standard_LRS
storageType=Premium_LRS

# Set the context to the subscription Id where the managed disk will be created
az account set --subscription $subscriptionId

# Get the snapshot Id
snapshotId=$(az snapshot show --name $snapshotName --resource-group $resourceGroupName --query [id] -o tsv)

# Create the managed disk from the snapshot
az disk create -n $osDiskName -g $resourceGroupName --size-gb $diskSize --sku $storageType --source $snapshotId
The above command will create a managed disk in the RG.
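For completeness, a hedged Azure PowerShell (Az module) equivalent of the same step; the names mirror the variables above and are placeholders:

# Look up the snapshot and create a managed disk from it
$snapshot   = Get-AzSnapshot -ResourceGroupName 'RG_name' -SnapshotName 'OSDISK_snapshot'
$diskConfig = New-AzDiskConfig -Location $snapshot.Location -SkuName 'Premium_LRS' `
                -CreateOption Copy -SourceResourceId $snapshot.Id -DiskSizeGB 30
New-AzDisk -ResourceGroupName 'RG_name' -DiskName 'MY_OSDISK' -Disk $diskConfig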
Select the created disk from the list under the RG and click on Create VM.
Enter the name, select the RG, select the size, etc., and click on Create.
Make sure the NSG has the required inbound ports open:
HTTP - 80, SSH - 22
Once the VM is created, select the VM from the RG's resource list.
Scroll down to "Run command" and run the commands below in case SSH and HTTP are not accessible:
sudo service apache2 restart
sudo service ssh restart
This should resolve the issue with access from the browser and the terminal.
In case SSH is still not working, run the command below:
rm /run/nologin
Now access the VM from the terminal via SSH.