Is there not a way to create an application gateway with the WAF_v2 SKU and have a WAF policy attached using the REST API?
With this code I can deploy the application gateway:
"webApplicationFirewallConfiguration" = #{
"disabledRuleGroups" = #()
"enabled" = $true
"exclusions" = #()
"fileUploadLimitInMb" = 100
"firewallMode" = "Detection"
"maxRequestBodySizeinKb" = 128
"requestBodyCheck" = $true
"ruleSetType" = "OWASP"
"ruleSetVersion" = "3.1"
}
But if I remove that and instead put
"firewallPolicy" = @{
"id" = "path to WAF Policy"
}
I get the following error in response:
{
"error": {
"code": "ApplicationGatewayFirewallNotConfiguredForSelectedSku",
"message": "Application Gateway (Path to gateway) with the selected SKU tier WAF_v2 must have a valid WAF policy or configuration",
"details": []
}
}
I have added "forceFirewallPolicyAssociation" = $true but that hasnt seemed to help. Has anyone worked with the REST API and the application gateways? I have been at this for about 16 hours and am at my wits end. AKS wasn't this hard to deploy via rest api... Any help is appreciated
https://learn.microsoft.com/en-us/rest/api/application-gateway/application-gateways/create-or-update?tabs=HTTP#code-try-0
I solved it by putting all 3 items together (webApplicationFirewallConfiguration, firewallPolicy, and forceFirewallPolicyAssociation). It then deployed with the external policy.
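Roughly, the properties section of the request body ends up looking like this (a sketch; the policy ID and the rest of the gateway definition are placeholders):
# Sketch of the relevant properties only, not a full gateway definition
$gatewayProperties = @{
    "webApplicationFirewallConfiguration" = @{
        "enabled" = $true
        "firewallMode" = "Detection"
        "ruleSetType" = "OWASP"
        "ruleSetVersion" = "3.1"
        "disabledRuleGroups" = @()
        "exclusions" = @()
        "fileUploadLimitInMb" = 100
        "maxRequestBodySizeinKb" = 128
        "requestBodyCheck" = $true
    }
    "firewallPolicy" = @{
        # resource ID of the WAF policy, e.g.
        # /subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Network/ApplicationGatewayWebApplicationFirewallPolicies/<policyName>
        "id" = "path to WAF Policy"
    }
    "forceFirewallPolicyAssociation" = $true
}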
Seems like an issue in the 2022-01-01 API version. At least I get the same error in Bicep. Rolling back to 2021-08-01 works.
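If you hit the same behaviour, pinning the older api-version in the request URI is an easy check (a sketch; the subscription, resource group, and gateway names are placeholders, and $authHeader/$gatewayDefinition are assumed to already hold your bearer-token header and gateway body):
# Same PUT, but pinned to the older API version
$uri = "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<gatewayName>?api-version=2021-08-01"
Invoke-RestMethod -Uri $uri -Method PUT -Headers $authHeader -Body ($gatewayDefinition | ConvertTo-Json -Depth 20)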
I want to find out the availability status of a resource in Azure.
There is a REST API available for Resource Health.
How is it possible to call this endpoint with the Azure Java SDK?
I found a solution using com.azure.resourcemanager:azure-resourcemanager:2.2.0.
The following code loads the availability status of an Azure Key Vault.
import com.azure.core.credential.TokenCredential;
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.InteractiveBrowserCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;
import com.azure.resourcemanager.resources.models.GenericResource;

TokenCredential credentials = new InteractiveBrowserCredentialBuilder().build();
AzureResourceManager resourceManager = AzureResourceManager
    .configure()
    .authenticate(credentials, new AzureProfile(AzureEnvironment.AZURE))
    .withDefaultSubscription();

String resourceGroup = "resource-group-name";
String keyVaultName = "key-vault-name";

// Query the Resource Health availability status of the Key Vault as a generic resource
GenericResource resource = resourceManager.genericResources().get(
    resourceGroup,
    "Microsoft.KeyVault",
    "vaults/" + keyVaultName + "/providers/Microsoft.ResourceHealth",
    "availabilityStatuses",
    "/current",
    "2020-05-01"
);

// properties() returns a Map containing the availabilityState
resource.properties();
I'm trying to bulk upload custom domains to Azure Web Apps (specifically the deployment slots). I currently loop through with the Azure CLI, but it adds 15-20 minutes to the CI/CD process.
I've also tried ARM Templates, but they report a "conflict" during bulk upload. They require a condition or dependency on the previous hostname, which means the domains are still configured one by one.
Does anyone know of a way to bulk configure?
PowerShell and Azure CLI
#Set Host Names and SSL
$WebApp_HostNames = Import-Csv "$(ConfigFiles_Path)\$WebApp-Hostnames.csv"
if ($null -ne $WebApp_HostNames) {
foreach ($HostName in $WebApp_HostNames) {
az webapp config hostname add --hostname $HostName.Name --resource-group "$(ResourceGroup)" --webapp-name "$WebApp" --slot "$WebApp$(WebApp_Slot_Suffix)"
az webapp config ssl bind --certificate-thumbprint $HostName.Thumbprint --ssl-type SNI --resource-group "$(ResourceGroup)" --name "$WebApp" --slot "$WebApp$(WebApp_Slot_Suffix)"
}
}
EDIT: Got it! Will write out the process and provide the answer here for future reference
TL;DR
I've included the entire PowerShell script here, including getting your auth token and pushing the JSON config. There's nothing else you'll need for the bulk upload.
Hope this helps
# Get auth token from existing Context
$currentAzureContext = Get-AzContext
$azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
$profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azProfile)
$token = $profileClient.AcquireAccessToken($currentAzureContext.Tenant.TenantId)
# Authorisation Header
$authHeader = @{
'Content-Type' = 'application/json'
'Authorization' = 'Bearer ' + ($token.AccessToken)
}
# Request URL
$url = "https://management.azure.com/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}?api-version=2018-11-01"
# JSON body (this example targets a slot; for the production site use type "Microsoft.Web/sites", name "{WebAppName}", and drop the /slots/{WebAppSlotName} segment from the id and request URL)
$body = '
{
"id":"/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}",
"kind":"app",
"location":"Australia East",
"name":"{WebAppName} or {WebAppSlotName}",
"type":"Microsoft.Web/sites/slots", or "Microsoft.Web/sites",
"properties":{
"hostNameSslStates":[
{
"name":"sub1.domain.com",
"sslState":"SniEnabled",
"thumbprint":"IUAHSIUHASD78ASDIUABFKASF79ASUASD8ASFHOANKSL",
"toUpdate":true,
"hostType":"Standard"
},
{
"name":"sub2.domain.com",
"sslState":"SniEnabled",
"thumbprint":"FHISGF8A7SFG9SUGBSA7G9ASHIAOSHDF08ASHDF08AS",
"toUpdate":true,
"hostType":"Standard"
}
],
"hostNames":[
"sub1.domain.com",
"sub2.domain.com",
"{Default WebApp Domain}"
]
}
}
'
# Push to Azure
Invoke-RestMethod -Uri $url -Body $body -Method PUT -Headers $authHeader
Method
In the end I just pressed F12 to capture the traffic in the Network tab of the browser developer tools.
When clicking "Add Binding" in portal there's one PUT request which has the payload required to replicate. I found that the request contains ALL of the Custom Domains and SSL bindings for bulk upload, not just the change. The only trick here was to set the toUpdate property to true for all domains. I stripped out any null and notConfigured values to keep it tidier.
My next progression for this is to fetch my hostnames and certificate thumbprints and dynamically build the body before pushing the change to Azure.
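As a sketch of that next step (assuming the same CSV columns, Name and Thumbprint, used in the CLI loop above, and reusing $url and $authHeader from the script):
# Build the JSON body from the hostname/thumbprint CSV
$HostNames = Import-Csv "$(ConfigFiles_Path)\$WebApp-Hostnames.csv"
$sslStates = foreach ($HostName in $HostNames) {
    @{
        "name" = $HostName.Name
        "sslState" = "SniEnabled"
        "thumbprint" = $HostName.Thumbprint
        "toUpdate" = $true
        "hostType" = "Standard"
    }
}
$bodyObject = @{
    "id" = "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroupName}/providers/Microsoft.Web/sites/{WebAppName}/slots/{WebAppSlotName}"
    "kind" = "app"
    "location" = "Australia East"
    "name" = "{WebAppName}/{WebAppSlotName}"
    "type" = "Microsoft.Web/sites/slots"
    "properties" = @{
        "hostNameSslStates" = @($sslStates)
        "hostNames" = @($HostNames.Name) + "{Default WebApp Domain}"
    }
}
# Reuse $url and $authHeader from the script above
Invoke-RestMethod -Uri $url -Body ($bodyObject | ConvertTo-Json -Depth 10) -Method PUT -Headers $authHeader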
I'm trying to create azurerm backend_http_settings in an Azure Application Gateway v2.0 using Terraform and Let's Encrypt via the ACME provider.
I can successfully create a cert and import the .pfx into the frontend HTTPS listener; the acme and azurerm providers provide everything you need to handle PKCS12.
Unfortunately the backend wants a .cer file, presumably encoded in base64, not DER, and I can't get it to work no matter what I try. My understanding is that a Let's Encrypt .pem file should be fine for this, but when I attempt to use the acme provider's certificate_pem as the trusted_root_certificate, I get the following error:
Error: Error Creating/Updating Application Gateway "agw-frontproxy" (Resource Group "rg-mir"): network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ApplicationGatewayTrustedRootCertificateInvalidData" Message="Data for certificate .../providers/Microsoft.Network/applicationGateways/agw-frontproxy/trustedRootCertificates/vnet-mir-be-cert is invalid." Details=[]
terraform plan works fine; the above error happens during terraform apply, when the azurerm provider complains that the cert data are invalid. I have written the certs to disk and they look as I'd expect. Here is a code snippet with the relevant code:
locals {
https_setting_name = "${azurerm_virtual_network.vnet-mir.name}-be-tls-htst"
https_frontend_cert_name = "${azurerm_virtual_network.vnet-mir.name}-fe-cert"
https_backend_cert_name = "${azurerm_virtual_network.vnet-mir.name}-be-cert"
}
provider "azurerm" {
version = "~>2.7"
features {
key_vault {
purge_soft_delete_on_destroy = true
}
}
}
provider "acme" {
server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"
}
resource "acme_certificate" "certificate" {
account_key_pem = acme_registration.reg.account_key_pem
common_name = "cert-test.example.com"
subject_alternative_names = ["cert-test2.example.com", "cert-test3.example.com"]
certificate_p12_password = "<your password here>"
dns_challenge {
provider = "cloudflare"
config = {
CF_API_EMAIL = "<your email here>"
CF_DNS_API_TOKEN = "<your token here>"
CF_ZONE_API_TOKEN = "<your token here>"
}
}
}
resource "azurerm_application_gateway" "agw-frontproxy" {
name = "agw-frontproxy"
location = azurerm_resource_group.rg-mir.location
resource_group_name = azurerm_resource_group.rg-mir.name
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 2
}
trusted_root_certificate {
name = local.https_backend_cert_name
data = acme_certificate.certificate.certificate_pem
}
ssl_certificate {
name = local.https_frontend_cert_name
data = acme_certificate.certificate.certificate_p12
password = "<your password here>"
}
# Create HTTPS listener and backend
backend_http_settings {
name = local.https_setting_name
cookie_based_affinity = "Disabled"
port = 443
protocol = "Https"
request_timeout = 20
trusted_root_certificate_names = [local.https_backend_cert_name]
}
# remaining required application gateway blocks (gateway_ip_configuration, frontend, listener, routing rule) omitted for brevity
}
How do I get AzureRM Application Gateway to take ACME .PEM cert as trusted_root_certificates in AGW SSL end-to-end config?
For me, the only thing that worked was using tightly coupled Windows tools. If you follow the documentation below, it's going to work. I spent 2 days fighting the same issue :)
Microsoft Docs
If you don't specify any certificate, the Azure v2 application gateway will default to using the certificate of the backend web server that it is directing traffic to. This eliminates the redundant installation of certificates: one in the web server (in this case a Traefik edge router) and one in the AGW backend.
This works around the question of how to get the certificate formatted correctly altogether. Unfortunately, I never could get a certificate installed, even with a Microsoft Support Engineer on the phone. He was like "yeah, it looks good, it should work, don't know why it doesn't, can you just avoid it by using a v2 gateway and not installing a cert at all on the backend?"
A v2 gateway requires a static public IP and the "Standard_v2" SKU type and tier to work as shown above.
It seems that if you set the password to an empty string, as described here: https://github.com/vancluever/terraform-provider-acme/issues/135
it suddenly works. This is because the certificate format already includes the password. It is unfortunate that this is not written in the documentation. I am going to try it now and give my feedback on that.
I am trying to use the information in this article:
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-template#default-configuration-script
to onboard a VM to Azure Automation at deployment time and apply a configuration.
I am using Terraform to do the deployment; below is the code I am using for the extension:
resource "azurerm_virtual_machine_extension" "cse-dscconfig" {
name = "${var.vm_name}-dscconfig-cse"
location = "${azurerm_resource_group.my_rg.location}"
resource_group_name = "${azurerm_resource_group.my_rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.my_vm.name}"
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.76"
depends_on = ["azurerm_virtual_machine.my_vm"]
settings = <<SETTINGS
{
"configurationArguments": {
"RegistrationUrl": "${var.endpoint}",
"NodeConfigurationName": "VMConfig"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"configurationArguments": {
"registrationKey": {
"userName": "NOT_USED",
"Password": "${var.key}"
}
}
}
PROTECTED_SETTINGS
}
I am getting the RegistrationURL value at execution time by running the command below and passing the value into Terraform:
$endpoint = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).Endpoint
I am getting the Password value at execution time by running the command below and passing the value into Terraform:
$key = (Get-AzureRmAutomationRegistrationInfo -ResourceGroupName $tf_state_rg -AutomationAccountName $autoAcctName).PrimaryKey
I can tell from the logs on the VM that the extension is getting installed but never registers with the Automation Account.
Figured out what the problem was. The documentation is thin on details in some areas, so it really was by trial and error that I discovered what was causing the problem. I had the wrong value in the NodeConfigurationName property. What the documentation says about this property: "Specifies the node configuration in the Automation account to assign to the node." Not having much experience with DSC, I interpreted this to mean the name of the configuration as seen in the Configurations section of the State configuration (DSC) blade of the Automation Account in the Azure portal.
What the NodeConfigurationName property really refers to is the Node definition inside the configuration, and it should be in the format ConfigurationName.NodeName. As an example, the name of my configuration is VMConfig, and in the config source I have a Node block defined called localhost. So the value of the NodeConfigurationName property should be VMConfig.localhost.
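To make that concrete, a minimal sketch of a configuration whose compiled node configuration would be named VMConfig.localhost (the WindowsFeature resource is only an example):
# Example configuration; once compiled in the Automation account,
# the node configuration is named "VMConfig.localhost"
Configuration VMConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost {
        WindowsFeature WebServer {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}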
Within API Management, I created an API that enables calling a serverless function app. Now I would like to deploy this functionality automatically. Here are the possibilities I saw on the internet:
Create and configure the API Management instance through the portal (this is not what I call an automatic deployment)
Use PowerShell commands (unfortunately I am working with Linux)
ARM (Azure Resource Manager): this is not easy and I did not find how to create an API backed by an Azure function app
Terraform: same as ARM, it is not clear to me how to create an API backed by an Azure function app
If someone has experience, links, or ideas, I would be very thankful.
Regards,
We are currently using Terraform for all our Azure infrastructure including API Management and I would strongly recommend it.
It creates and updates everything we want, including API policies, and has a relatively small learning curve.
You can start learning here:
https://learn.hashicorp.com/terraform?track=azure#azure
The docs for APIM are here:
https://www.terraform.io/docs/providers/azurerm/r/api_management.html
Once the initial learning curve is done the rest is easy and the benefits are many.
Azure PowerShell is 100% cross-platform now, so that's an option. Here are some samples: https://learn.microsoft.com/en-us/azure/api-management/powershell-samples
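For example, importing an OpenAPI definition into an existing APIM instance looks roughly like this (a sketch; the resource group, service name, spec path, and API id are placeholders, and wiring the API to the function app backend is a separate step):
# Sketch: import an OpenAPI definition into an existing APIM instance
$context = New-AzApiManagementContext -ResourceGroupName "my-rg" -ServiceName "my-apim"
Import-AzApiManagementApi -Context $context `
    -SpecificationFormat "OpenApi" `
    -SpecificationPath "C:\specs\my-function-api.yaml" `
    -Path "myfunctions" `
    -ApiId "my-function-api"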
You can also use ARM Templates to spin it up. Configuring it is a lot harder. You can map any of these calls to the ARM Template.
Terraform - I think it's still in the works. https://github.com/terraform-providers/terraform-provider-azurerm/issues/1177. But I wouldn't go that way.
ARM is the way to go.
You can combine it with:
Azure resource manager API for deployment
API Management API for things that ARM doesn't support (yet); see the sketch below
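Calling the API Management REST API directly is just an authenticated call against the management endpoint; a minimal sketch (subscription, resource group, service name, and api-version are placeholders):
# Sketch: list the APIs in an APIM instance via the management REST API
$token = (Get-AzAccessToken).Token
$headers = @{ "Authorization" = "Bearer $token" }
$uri = "https://management.azure.com/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apimName>/apis?api-version=2021-08-01"
(Invoke-RestMethod -Uri $uri -Headers $headers -Method GET).value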
Have a look at the Azure API Management DevOps Resource Kit:
https://github.com/Azure/azure-api-management-devops-resource-kit
I believe the most convenient way to automate the deployment of Azure APIM is dotnet-apim. It's a cross-platform solution that you can easily use on your dev machine or in a CI/CD pipeline.
Make sure you have .NET Core installed.
Install the dotnet-apim tool.
In a YAML file, you define the list of APIVersionSets, APIs, Products, Backends, Tags, etc. This YAML file defines what you want to deploy to APIM. You can keep it in source control to track the history of changes. The following YAML file defines two version sets, APIs, and products along with their policies.
version: 0.0.1 # Required
apimServiceName: $(apimServiceName) # Required, must match name of an apim service deployed in the specified resource group
apiVersionSets:
  - name: Set1
    displayName: API Set 1
    description: Contains Set 1 APIs.
    versioningScheme: Segment
  - name: Set2
    displayName: API Set 2
    description: Contains Set 2 APIs.
    versioningScheme: Segment
apis:
  - name: API1
    displayName: API v1
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample1
    apiVersion: v1
    apiVersionSetId: Set1
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    operations:
      customer_get: # it's operation id
        policy: $(apimBasePath)\Apis\HealthCheck\HealthCheckPolicy.xml:::BackendUrl=$(attachmentServiceUrl)
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey
  - name: API2
    displayName: API2 v1 [Staging]
    openApiSpec: $(apimBasePath)\Apis\OpenApi.json # Required, can be url or local file
    policy: $(apimBasePath)\Apis\ApiPolicy.xml
    path: api/sample2
    apiVersion: v1
    apiVersionSetId: Set2
    apiRevision: 1
    products: AutomationTests, SystemMonitoring
    protocols: https
    subscriptionRequired: true
    isCurrent: true
    subscriptionKeyParameterNames:
      header: ProviderKey
      query: ProviderKey
products:
  - name: AutomationTests
    displayName: AutomationTests
    description: Product for automation tests
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\AutomationTests\policy.xml
  - name: SystemMonitoring
    displayName: SystemMonitoring
    description: Product for system monitoring
    subscriptionRequired: true
    approvalRequired: true
    subscriptionsLimit: 1
    state: published
    policy: $(apimBasePath)\Products\SystemMonitoring\policy.xml
outputLocation: $(apimBasePath)\output
linkedTemplatesBaseUrl: $(linkedTemplatesBaseUrl) # Required if 'linked' property is set to true
The $(variableName) syntax defines variables inside the YAML file, which makes customization easier in CI/CD scenarios.
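If nothing in your pipeline substitutes those tokens for you, a plain find-and-replace before invoking the tool works; a minimal sketch (the variable names and file paths below are examples only, not part of dotnet-apim itself):
# Sketch: resolve $(name) tokens in the YAML before running dotnet-apim
$variables = @{
    "apimServiceName" = "my-apim"
    "apimBasePath" = "C:\apim"
    "attachmentServiceUrl" = "https://attachments.example.com"
    "linkedTemplatesBaseUrl" = "https://storage.example.com/templates"
}
$definition = Get-Content "C:\apim\definition.yml" -Raw
foreach ($name in $variables.Keys) {
    $definition = $definition.Replace('$(' + $name + ')', $variables[$name])
}
Set-Content -Path "C:\apim\definition.resolved.yml" -Value $definition
# then: dotnet-apim --yamlConfig "C:\apim\definition.resolved.yml"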
The next step is to transform the YAML file into ARM templates, which Azure can understand.
dotnet-apim --yamlConfig "c:/apim/definition.yml"
Then you have to deploy the generated ARM templates to Azure, which is explained here.
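If you deploy from the same script, a regular resource group deployment works; a sketch (the template and parameter file names are placeholders for whatever ends up in the output folder):
# Sketch: deploy a generated template to the resource group that hosts the APIM instance
New-AzResourceGroupDeployment `
    -ResourceGroupName "my-apim-rg" `
    -TemplateFile "C:\apim\output\master.template.json" `
    -TemplateParameterFile "C:\apim\output\master.parameters.json"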
Using the test-api (if you do the Microsoft demo for API Management, you should recognize it), here's a snippet of Terraform that does work. It does not include the resource group (thisrg).
resource "azurerm_api_management" "apimgmtinstance" {
name = "${var.base_apimgmt_name}-${var.env_name}-apim"
location = azurerm_resource_group.thisrg.location
resource_group_name = azurerm_resource_group.thisrg.name
publisher_name = "Marc Pub"
publisher_email = "marc#trash.com"
sku_name = var.apimgmt_size
/* policy {
xml_content = <<XML
<policies>
<inbound />
<backend />
<outbound />
<on-error />
</policies>
XML
} */
}
resource "azurerm_api_management_product" "apiMgmtProductContoso" {
product_id = "contoso-marc"
display_name = "Contoso Marc"
description = "this is a test"
subscription_required = true
approval_required = true
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
published = true
subscriptions_limit = 2
terms = "you better accept this or else... ;-)"
}
resource "azurerm_api_management_api" "testapi" {
description = "this is a mock test"
display_name = "Test API"
name = "test-api"
protocols = ["https"]
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
// version = "0.0.1"
revision = "1"
path = ""
subscription_required = true
}
data "azurerm_api_management_api" "testapi_data" {
name = azurerm_api_management_api.testapi.name
api_management_name = azurerm_api_management.apimgmtinstance.name
resource_group_name = azurerm_resource_group.thisrg.name
revision = "1"
}
resource "azurerm_api_management_api_operation" "testapi_getop" {
operation_id = "test-call"
api_name = data.azurerm_api_management_api.testapi_data.name
api_management_name = data.azurerm_api_management_api.testapi_data.api_management_name
resource_group_name = data.azurerm_api_management_api.testapi_data.resource_group_name
display_name = "Test call"
method = "GET"
url_template = "/test"
description = "test of call"
response {
status_code = 200
description = ""
representation {
content_type = "application/json"
sample = "{\"sampleField\": \"test\"}"
}
}
}
resource "azurerm_api_management_api_operation_policy" "testapi_getop_policy" {
api_name = azurerm_api_management_api_operation.testapi_getop.api_name
api_management_name = azurerm_api_management_api_operation.testapi_getop.api_management_name
resource_group_name = azurerm_api_management_api_operation.testapi_getop.resource_group_name
operation_id = azurerm_api_management_api_operation.testapi_getop.operation_id
xml_content = <<XML
<policies>
<inbound>
<mock-response status-code="200" content-type="application/json"/>
</inbound>
</policies>
XML
}
Terraform now mostly supports Azure API Management. I've been implementing most of an existing Azure API Management instance in Terraform using a combination of the azurerm resources (https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/api_management) and terraform import. Just do the import in a separate folder: terraform import writes into a terraform.tfstate file, and if you mix up the state of the resource you are importing with the terraform.tfstate file generated (via terraform plan/apply) for the .tf files you are creating, you could accidentally delete the resource you are importing from. Yay...
It mostly does the job, except for an API where you modified the policy for "All operations". I can do it for specific operations (GET blah, or POST blahblah), but for all operations... I don't yet know.