I'm trying to bulk upload custom domains to Azure Web Apps (specifically to deployment slots). I currently loop through them with the Azure CLI, but that adds 15-20 minutes to the CI/CD process.
I've also tried ARM templates, but they report a "Conflict" during bulk upload. Each hostname requires a condition or dependency on the previous one, which means the domains are still being configured one by one.
Does anyone know of a way to bulk configure?
PowerShell and Azure CLI
# Set host names and SSL bindings
$WebApp_HostNames = Import-Csv "$(ConfigFiles_Path)\$WebApp-Hostnames.csv"
if ($null -ne $WebApp_HostNames) {
    foreach ($HostName in $WebApp_HostNames) {
        az webapp config hostname add --hostname $HostName.Name --resource-group "$(ResourceGroup)" --webapp-name "$WebApp" --slot "$WebApp$(WebApp_Slot_Suffix)"
        az webapp config ssl bind --certificate-thumbprint $HostName.Thumbprint --ssl-type SNI --resource-group "$(ResourceGroup)" --name "$WebApp" --slot "$WebApp$(WebApp_Slot_Suffix)"
    }
}
EDIT: Got it! I'll write out the process and provide the answer here for future reference.
TL;DR
I've included the entire PowerShell script here, including getting your auth token and pushing the JSON config. There's nothing else you'll need to bulk upload.
Hope this helps
# Get auth token from existing Context
$currentAzureContext = Get-AzContext
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
# Authorisation header
$authHeader = @{
    'Content-Type'  = 'application/json'
    'Authorization' = 'Bearer ' + ($token.AccessToken)
}
# Request URL
$url = "https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}?api-version=2018-11-01"
# JSON body (this example targets a slot; for the production site use
# "{WebAppName}" as the name and "Microsoft.Web/sites" as the type)
$body = '
{
    "id": "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}",
    "kind": "app",
    "location": "Australia East",
    "name": "{WebAppSlotName}",
    "type": "Microsoft.Web/sites/slots",
    "properties": {
        "hostNameSslStates": [
            {
                "name": "sub1.domain.com",
                "sslState": "SniEnabled",
                "thumbprint": "IUAHSIUHASD78ASDIUABFKASF79ASUASD8ASFHOANKSL",
                "toUpdate": true,
                "hostType": "Standard"
            },
            {
                "name": "sub2.domain.com",
                "sslState": "SniEnabled",
                "thumbprint": "FHISGF8A7SFG9SUGBSA7G9ASHIAOSHDF08ASHDF08AS",
                "toUpdate": true,
                "hostType": "Standard"
            }
        ],
        "hostNames": [
            "sub1.domain.com",
            "sub2.domain.com",
            "{Default WebApp Domain}"
        ]
    }
}
'
# Push to Azure
Invoke-RestMethod -Uri $url -Body $body -Method PUT -Headers $authHeader
Method
In the end I just pressed F12 to capture the traffic in the Network tab of the browser developer tools.
When clicking "Add Binding" in the portal there is a single PUT request whose payload is exactly what you need to replicate. I found that the request contains ALL of the custom domains and SSL bindings, not just the change, which is what makes bulk upload possible. The only trick was to set the toUpdate property to true on every domain. I stripped out any null and notConfigured values to keep it tidier.
My next progression is to fetch my hostnames and certificate thumbprints and dynamically build the body before pushing the change to Azure, along the lines of the sketch below.
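A minimal sketch of that next step, reusing the CSV columns (Name, Thumbprint) from the loop in the question and the $url and $authHeader values built above; the placeholder IDs and the {Default WebApp Domain} entry are assumptions carried over from the snippets in this answer:

# Build hostNameSslStates and hostNames from the CSV (assumed columns: Name, Thumbprint)
$WebApp_HostNames = Import-Csv "$(ConfigFiles_Path)\$WebApp-Hostnames.csv"

$sslStates = foreach ($HostName in $WebApp_HostNames) {
    @{
        name       = $HostName.Name
        sslState   = 'SniEnabled'
        thumbprint = $HostName.Thumbprint
        toUpdate   = $true
        hostType   = 'Standard'
    }
}

$bodyObject = @{
    id         = "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}"
    kind       = 'app'
    location   = 'Australia East'
    name       = '{WebAppSlotName}'
    type       = 'Microsoft.Web/sites/slots'
    properties = @{
        hostNameSslStates = @($sslStates)
        hostNames         = @($WebApp_HostNames.Name) + '{Default WebApp Domain}'
    }
}

# -Depth is needed so the nested properties are serialised fully
$body = $bodyObject | ConvertTo-Json -Depth 10
Invoke-RestMethod -Uri $url -Body $body -Method PUT -Headers $authHeader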
Related
Is there not a way to create an application gateway with the WAF_v2 SKU and have a WAF policy attached using the REST API?
With this code I can deploy the application gateway:
"webApplicationFirewallConfiguration" = #{
"disabledRuleGroups" = #()
"enabled" = $true
"exclusions" = #()
"fileUploadLimitInMb" = 100
"firewallMode" = "Detection"
"maxRequestBodySizeinKb" = 128
"requestBodyCheck" = $true
"ruleSetType" = "OWASP"
"ruleSetVersion" = "3.1"
}
But if I remove that and instead put:
"firewallPolicy" = @{
    "id" = "path to WAF Policy"
}
I am getting the following return:
{
    "error": {
        "code": "ApplicationGatewayFirewallNotConfiguredForSelectedSku",
        "message": "Application Gateway (Path to gateway) with the selected SKU tier WAF_v2 must have a valid WAF policy or configuration",
        "details": []
    }
}
I have added "forceFirewallPolicyAssociation" = $true but that hasn't seemed to help. Has anyone worked with the REST API and application gateways? I have been at this for about 16 hours and am at my wits' end. AKS wasn't this hard to deploy via the REST API... Any help is appreciated.
https://learn.microsoft.com/en-us/rest/api/application-gateway/application-gateways/create-or-update?tabs=HTTP#code-try-0
I solved it by putting all three items together: the webApplicationFirewallConfiguration block, the firewallPolicy reference, and forceFirewallPolicyAssociation. It then deployed with the external policy.
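As an illustration, a hedged sketch of what that fragment of the request body might look like, in the same hashtable style as the snippets above (the policy id is a placeholder):

$properties = @{
    "webApplicationFirewallConfiguration" = @{
        "disabledRuleGroups"     = @()
        "enabled"                = $true
        "exclusions"             = @()
        "fileUploadLimitInMb"    = 100
        "firewallMode"           = "Detection"
        "maxRequestBodySizeinKb" = 128
        "requestBodyCheck"       = $true
        "ruleSetType"            = "OWASP"
        "ruleSetVersion"         = "3.1"
    }
    # External policy reference alongside the inline configuration
    "firewallPolicy" = @{
        "id" = "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/{PolicyName}"
    }
    # The flag that forces the policy association on the WAF_v2 SKU
    "forceFirewallPolicyAssociation" = $true
}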
Seems like an issue in the 2022-01-01 API version. At least I get the same error in Bicep. Rolling back to 2021-08-01 works.
I'm trying to create azurerm backend_http_settings in an Azure Application Gateway v2 using Terraform and Let's Encrypt via the ACME provider.
I can successfully create a cert and import the .pfx into the frontend HTTPS listener; the acme and azurerm providers provide everything you need to handle PKCS12.
Unfortunately the backend wants a .cer file, presumably base64-encoded rather than DER, and I can't get it to work no matter what I try. My understanding is that a Let's Encrypt .pem file should be fine for this, but when I attempt to use the acme provider's certificate_pem as the trusted_root_certificate, I get the following error:
Error: Error Creating/Updating Application Gateway "agw-frontproxy" (Resource Group "rg-mir"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ApplicationGatewayTrustedRootCertificateInvalidData" Message="Data for certificate .../providers/Microsoft.Network/applicationGateways/agw-frontproxy/trustedRootCertificates/vnet-mir-be-cert is invalid." Details=[]
terraform plan works fine; the above error happens during terraform apply, when the azurerm provider rejects the cert data as invalid. I have written the certs to disk and they look as I'd expect. Here is a snippet with the relevant code:
locals {
  https_setting_name       = "${azurerm_virtual_network.vnet-mir.name}-be-tls-htst"
  https_frontend_cert_name = "${azurerm_virtual_network.vnet-mir.name}-fe-cert"
  https_backend_cert_name  = "${azurerm_virtual_network.vnet-mir.name}-be-cert"
}

provider "azurerm" {
  version = "~>2.7"
  features {
    key_vault {
      purge_soft_delete_on_destroy = true
    }
  }
}

provider "acme" {
  server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"
}

resource "acme_certificate" "certificate" {
  account_key_pem           = acme_registration.reg.account_key_pem
  common_name               = "cert-test.example.com"
  subject_alternative_names = ["cert-test2.example.com", "cert-test3.example.com"]
  certificate_p12_password  = "<your password here>"

  dns_challenge {
    provider = "cloudflare"
    config = {
      CF_API_EMAIL      = "<your email here>"
      CF_DNS_API_TOKEN  = "<your token here>"
      CF_ZONE_API_TOKEN = "<your token here>"
    }
  }
}

resource "azurerm_application_gateway" "agw-frontproxy" {
  name                = "agw-frontproxy"
  location            = azurerm_resource_group.rg-mir.location
  resource_group_name = azurerm_resource_group.rg-mir.name

  sku {
    name     = "Standard_v2"
    tier     = "Standard_v2"
    capacity = 2
  }

  trusted_root_certificate {
    name = local.https_backend_cert_name
    data = acme_certificate.certificate.certificate_pem
  }

  ssl_certificate {
    name     = local.https_frontend_cert_name
    data     = acme_certificate.certificate.certificate_p12
    password = "<your password here>"
  }

  # Create HTTPS listener and backend
  backend_http_settings {
    name                           = local.https_setting_name
    cookie_based_affinity          = "Disabled"
    port                           = 443
    protocol                       = "Https"
    request_timeout                = 20
    trusted_root_certificate_names = [local.https_backend_cert_name]
  }

  # ... remaining required blocks (gateway_ip_configuration, frontend_port,
  # frontend_ip_configuration, backend_address_pool, http_listener,
  # request_routing_rule) omitted from this snippet
}
How do I get the AzureRM Application Gateway to accept an ACME .pem cert as trusted_root_certificates in an AGW SSL end-to-end config?
For me the only thing that worked was using tightly coupled Windows tools. If you follow the documentation below it's going to work. I spent 2 days fighting the same issue :)
Microsoft Docs
If you don't specify any certificate, the Azure v2 application gateway will default to using the certificate on the backend web server it is directing traffic to. This eliminates the redundant installation of certificates: one on the web server (in this case a Traefik edge router) and one in the AGW backend.
This works around the question of how to get the certificate formatted correctly altogether. Unfortunately, I never could get a certificate installed, even with a Microsoft support engineer on the phone. He was like "yeah, it looks good, it should work, don't know why it doesn't, can you just avoid it by using a v2 gateway and not installing a cert at all on the backend?"
A v2 gateway requires a static public IP and the "Standard_v2" SKU type and tier to work as shown above.
It seems that if you set the password to an empty string, as in this issue: https://github.com/vancluever/terraform-provider-acme/issues/135
it suddenly works. This is because the certificate format already includes the password. It is unfortunate that this is not written in the documentation. I am going to try it now and give my feedback on that.
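If you want to sanity-check that locally before wiring it into Terraform, here is a small PowerShell sketch; cert-p12.b64 is a hypothetical file holding the provider's base64-encoded certificate_p12 output:

# Try loading the ACME provider's PKCS12 bundle with an empty password
# (cert-p12.b64 is a hypothetical file containing the base64 certificate_p12 value)
$pfxBytes = [Convert]::FromBase64String((Get-Content -Raw "cert-p12.b64"))
$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxBytes, "")
$cert.Subject
$cert.Thumbprint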
I am trying to fetch values from Azure using an external data source in Terraform. However, I don't understand what I am doing wrong when I try to export values using Write-Output; I get this error:
data.external.powershell_test: data.external.powershell_test: command "Powershell.exe" produced invalid JSON: invalid character 'l' looking for beginning of object key string"
Below is my script
$vm=(Get-AzureRmVM -ResourceGroupName MFA-RG -Name vm2).name | convertTo-json
Write-Output "{""first"" : ""$vm""}"
Main.tf file
data "external" "powershell_test" {
program = ["Powershell.exe", "./vm.ps1"]
}
output "value" {
value = "${data.external.powershell_test.result.first}"
}
Can someone tell me what's wrong with the script, and whether I am using Write-Output properly?
EDIT:
Below is the screenshot when I run vm.ps1 directly.
Also, when I directly assign a value to a variable as below, Terraform is able to execute the code:
$vm = "testvm"
Write-Output "{""first"" : ""$vm""}"
For your issue, you should change your PowerShell command like this. ConvertTo-Json already returns a quoted JSON string, so you should not wrap $vm in an extra pair of quotes:
$vm=(Get-AzureRmVM -ResourceGroupName MFA-RG -Name vm2).name | convertTo-json
Write-Output "{""first"" : $vm}"
You could also change the data source to use a module-relative path (optional, but I suggest it):
data "external" "powershell_test" {
  program = ["Powershell.exe", "${path.module}/vm.ps1"]
}
The result on my side is below:
I use the new Azure PowerShell module Az, and my code is below:
PowerShell:
$vm=(Get-AzVM -ResourceGroupName charles -Name azureUbuntu18).name | convertTo-json
Write-Output "{""first"" : $vm}"
Terraform:
data "external" "powershell_test" {
program = ["Powershell.exe", "${path.module}/vm.ps1"]
}
output "value" {
value = "${data.external.powershell_test.result.first}"
}
data.external.powershell_test.result is the only valid attribute, and it is a map.
So the code becomes:
output "value" {
  value = "${data.external.powershell_test.result['first']}"
}
Reference:
https://www.terraform.io/docs/configuration-0-11/interpolation.html#user-map-variables
Thanks Charles Xu for the answer. I was looking for an Azure Application Gateway data source and after lots of digging I ended up here, as Terraform has yet to provide one. However, the same can be done using a shell script and the Azure REST API.
Using Shell Script
appgw.sh
#!/bin/bash
# Linux: requires the Azure CLI and jq to be available
# Get the Application Gateway ID via the Azure REST API, since an az CLI data source doesn't exist for it
appgwid=$(az rest -m get --header "Accept=application/json" -u 'https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/newrg/providers/Microsoft.Network/applicationGateways?api-version=2020-07-01' | jq '.value[].id')
# Terraform's external data source requires JSON output, otherwise it fails with an unmarshal JSON error
echo "{\"appgw_id\" : $appgwid}"
data.tf
data "external" "appgw_id_sh" {
program = ["/bin/bash", "${path.module}/appgw.sh"]
}
outputs.tf
output "appgw_id_sh" {
value = data.external.appgw_id_sh.result.appgw_id
}
Using Powershell
appgw.ps1
# Windows: requires Azure PowerShell to be available
# 1. Install-Module -Name PowerShellGet -Force
# 2. Install-Module -Name Az -AllowClobber
# Get the Application Gateway ID using Get-AzApplicationGateway, since a Terraform data source doesn't exist for it
$appGw = (Get-AzApplicationGateway -Name "appgw-name" -ResourceGroupName "appgw-rg-name").id | ConvertTo-Json
# Terraform's external data source requires JSON output, otherwise it fails with an unmarshal JSON error
Write-Output "{""appgw_id"" : $appGw}"
data.tf
data "external" "appgw_id_ps" {
program = ["Powershell.exe", "${path.module}/appgw.ps1"]
}
outputs.tf
output "appgw_id_ps" {
value = data.external.appgw_id_ps.result.appgw_id
}
FYI - when I use PowerShell 7, I need to use pwsh.exe instead of powershell.exe in the program argument:
data "external" "powershell_test" {
  program = ["pwsh.exe", "${path.module}/vm.ps1"]
}
I am trying to use the information in this article:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-template#default-configuration-script
to onboard a VM to Azure Automation at deployment time and apply a configuration.
I am using Terraform to do the deployment; below is the code I am using for the extension:
resource "azurerm_virtual_machine_extension" "cse-dscconfig" {
name = "${var.vm_name}-dscconfig-cse"
location = "${azurerm_resource_group.my_rg.location}"
resource_group_name = "${azurerm_resource_group.my_rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.my_vm.name}"
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.76"
depends_on = ["azurerm_virtual_machine.my_vm"]
settings = <<SETTINGS
{
"configurationArguments": {
"RegistrationUrl": "${var.endpoint}",
"NodeConfigurationName": "VMConfig"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"configurationArguments": {
"registrationKey": {
"userName": "NOT_USED",
"Password": "${var.key}"
}
}
}
PROTECTED_SETTINGS
}
I am getting the RegistrationURL value at execution time by running the command below and passing the value into Terraform:
$endpoint = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).Endpoint
I am getting the Password value at execution time by running the command below and passing the value into Terraform:
$key = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).PrimaryKey
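For completeness, one way to hand those two values to Terraform from the same PowerShell session (a sketch; it assumes the var.endpoint and var.key variables referenced in the extension code above are declared as Terraform input variables):

# Fetch the registration endpoint and key, then pass them in as input variables
$endpoint = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).Endpoint
$key = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).PrimaryKey

terraform apply -var "endpoint=$endpoint" -var "key=$key"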
I can tell from the logs on the VM that the extension is getting installed, but it never registers with the Automation account.
Figured out what the problem was. The documentation is thin on details in some areas, so it really was by trial and error that I discovered what was causing the problem. I had the wrong value in the NodeConfigurationName property. The documentation says this about the property: "Specifies the node configuration in the Automation account to assign to the node." Not having much experience with DSC, I interpreted this to mean the name of the configuration as seen in the Configurations section of the State configuration (DSC) blade of the Automation account in the Azure portal.
What the NodeConfigurationName property is really referring to is the node definition inside the configuration, and it should be in the format ConfigurationName.NodeName. As an example, the name of my configuration is VMConfig, and in the config source I have a Node block defined called localhost. So, with this, the value of the NodeConfigurationName property should be VMConfig.localhost, as in the sketch below.
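For illustration, a minimal sketch of such a DSC configuration source (the WindowsFeature resource is just a placeholder; only the configuration and node names matter here):

Configuration VMConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    # This Node block is the "NodeName" part, so the compiled
    # node configuration is named "VMConfig.localhost"
    Node localhost {
        WindowsFeature WebServer {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}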
I have an application which deploys a VM with a storage account and disks. I want to convert it to use managed disks, as this is the future of Azure storage. I am looking at the REST API and I am missing two things:
1. How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it only creates an empty one or one from old unmanaged storage.
2. Can I choose the LUN on which the disk is created?
How can I create a snapshot from an existing managed disk? There is an API to create a snapshot, but it only creates an empty one or one from old unmanaged storage.
According to your description, I have created a test demo to create a snapshot of an existing managed disk (the OS disk), and it works well.
I created a Windows VM that uses a managed disk as the OS disk, then created another managed disk and added it to the VM.
The result is like below:
If you want to create a snapshot of an existing managed disk (one that has data), I suggest you send the request to the URL below.
Url: https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/snapshots/{snapshotName}?api-version={api-version}
Method: PUT
Parameters:
subscriptionId - The identifier of your subscription where the snapshot is being created.
resourceGroup - The name of the resource group that will contain the snapshot.
snapshotName - The name of the snapshot that is being created. The name can't be changed after the snapshot is created. Supported characters for the name are a-z, A-Z, 0-9 and _. The max name length is 80 characters.
api-version - The version of the API to use. The current version is 2016-04-30-preview.
Request content:
{
    "properties": {
        "creationData": {
            "createOption": "Copy",
            "sourceUri": "/subscriptions/{subscriptionId}/resourceGroups/{YourResourceGroup}/providers/Microsoft.Compute/disks/{YourManagedDiskName}"
        }
    },
    "location": "eastasia"
}
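If you would rather issue this PUT from PowerShell than C#, here is a hedged sketch that reuses the $authHeader pattern from the bulk-hostname answer earlier in this thread (the placeholder IDs are assumptions):

# Assumes $authHeader was built as in the bulk-hostname answer above
$url = "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/snapshots/{snapshotName}?api-version=2016-04-30-preview"
$body = @{
    location   = "eastasia"
    properties = @{
        creationData = @{
            createOption = "Copy"
            sourceUri    = "/subscriptions/{subscriptionId}/resourceGroups/{YourResourceGroup}/providers/Microsoft.Compute/disks/{YourManagedDiskName}"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Uri $url -Method PUT -Headers $authHeader -Body $body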
For more details, you could refer to the following C# code.
json.txt:
{
    "properties": {
        "creationData": {
            "createOption": "Copy",
            "sourceUri": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/BrandoSecondTest/providers/Microsoft.Compute/disks/BrandoTestVM"
        }
    },
    "location": "eastasia"
}
Code:
static void Main(string[] args)
{
    // Read the JSON request body from disk
    string body = File.ReadAllText(@"D:\json.txt");

    string tenantId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientId = "xxxxxxxxxxxxxxxxxxxxxxxx";
    string clientSecret = "xxxxxxxxxxxxxxxxxxxx";
    string authContextURL = "https://login.windows.net/" + tenantId;

    // Acquire a token for the ARM endpoint using a service principal
    var authenticationContext = new AuthenticationContext(authContextURL);
    var credential = new ClientCredential(clientId, clientSecret);
    var result = authenticationContext.AcquireTokenAsync(resource: "https://management.azure.com/", clientCredential: credential).Result;
    if (result == null)
    {
        throw new InvalidOperationException("Failed to obtain the JWT token");
    }
    string token = result.AccessToken;

    // PUT the snapshot definition to the ARM REST API
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("https://management.azure.com/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxxxxxxxxxxxxx/providers/Microsoft.Compute/snapshots/BrandoTestVM_snapshot2?api-version=2016-04-30-preview");
    request.Method = "PUT";
    request.Headers["Authorization"] = "Bearer " + token;
    request.ContentType = "application/json";

    try
    {
        using (var streamWriter = new StreamWriter(request.GetRequestStream()))
        {
            streamWriter.Write(body);
            streamWriter.Flush();
            streamWriter.Close();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    // Get the response
    var httpResponse = (HttpWebResponse)request.GetResponse();
    using (var streamReader = new StreamReader(httpResponse.GetResponseStream()))
    {
        Console.WriteLine(streamReader.ReadToEnd());
    }
    Console.ReadLine();
}
Result:
Can I choose the LUN on which the disk is created?
Do you mean you want to use an azuredeploy template to select the LUN of the disk?
If so, the partial deployment template JSON below shows how to build the VM's data disk content and select its LUN:
"diskArray": [
{
"name": "datadisk1",
"lun": 0,
"vhd": {
"uri": "[concat('http://', variables('storageAccountName'),'.blob.core.windows.net/vhds/', 'datadisk1.vhd')]"
},
"createOption": "Empty",
"caching": "[variables('diskCaching')]",
"diskSizeGB": "[variables('sizeOfDataDisksInGB')]"
},
]
For more details, you could refer to the following link:
201-vm-dynamic-data-disks-selection/azuredeploy.json
New VMs can be created from a snapshot, in the same resource group or in a different one.
If the VM is to be created in a different RG, make sure the locations are the same; Azure supports this only within the same location.
Take a snapshot of the OS disk:
Disks -> click on the OS disk -> Create snapshot
Enter the name.
Select the resource group where the snapshot has to be created.
(In case of a different RG, make sure the locations are the same.)
Once the snapshot is created, update the commands below as required and run them from a terminal using the Azure CLI.
# Provide the subscription Id of the subscription
subscriptionId=88888-8888-888888-8888-8888888

# Provide the name of your resource group
resourceGroupName=RG_name

# Provide the name of the snapshot that will be used to create the managed disk (created in the previous step)
snapshotName=OSDISK_snapshot

# Provide the name of the managed disk which will be created
osDiskName=MY_OSDISK

# Provide the size of the disk in GB. It should be greater than the VHD file size.
diskSize=30

# Provide the storage type for the managed disk: Premium_LRS or Standard_LRS.
storageType=Premium_LRS
# Set the context to the subscription Id where the managed disk will be created
az account set --subscription $subscriptionId

# Get the snapshot Id
snapshotId=$(az snapshot show --name $snapshotName --resource-group $resourceGroupName --query [id] -o tsv)

# Create the managed disk from the snapshot
az disk create -n $osDiskName -g $resourceGroupName --size-gb $diskSize --sku $storageType --source $snapshotId
The above command will create a managed disk in the RG.
Select the created disk from the list under the RG and click on Create VM.
Enter the name, select the RG, select the size, and click on Create.
Make sure the NSG has the required inbound ports open:
HTTP - 80, SSH - 22
Once the VM is created, select the VM from the RG's resource list.
Scroll down to "Run command" and run the commands below in case SSH and HTTP are not accessible:
sudo service apache2 restart
sudo service ssh restart
This should resolve the issue with accessing the VM from the browser and terminal.
In case SSH is still not working, run the command below:
rm /run/nologin
Now access the VM from the terminal via SSH.