What fields am I supposed to populate in test_configurations.json? - azure

I am very new to Azure and cloud services in general. I am experimenting with the Azure Storage SDK for C++ (https://github.com/Azure/azure-storage-cpp) for use in my application and am trying to get familiar with it. I want to run the tests to make sure that I built the library correctly.
Looking at the README, it says to fill in the test_configurations.json file, but I don't know what should be filled in or how to obtain the information that actually goes in there. I signed up for a free Azure account and created a storage account, but I don't see a client_id, tenant_id, or client_secret anywhere in the portal. Here is the config file:
{
    "target": "production",
    "premium_target": "premium_account",
    "blob_storage_target": "blob_storage_account",
    "tenants": [
        {
            "name": "devstore",
            "type": "devstore",
            "connection_string": "UseDevelopmentStorage=true"
        },
        {
            "name": "production",
            "type": "cloud",
            "connection_string": "DefaultEndpointsProtocol=https;"
        },
        {
            "name": "premium_account",
            "type": "cloud",
            "connection_string": "DefaultEndpointsProtocol=https;"
        },
        {
            "name": "blob_storage_account",
            "type": "cloud",
            "connection_string": "DefaultEndpointsProtocol=https;"
        }
    ],
    "token_information": {
        "account_name": "",
        "tenant_id": "",
        "client_id": "",
        "client_secret": "",
        "resource": "https://storage.azure.com"
    }
}
How do I obtain the ids and secrets that I need?

You need to register an application as a storage client; you can then obtain those details from the app registration, as mentioned here:

The first step in using Azure AD to authorize access to storage resources is registering your client application with an Azure AD tenant.
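In practice, that means creating an App registration under Azure Active Directory in the portal: its Overview blade shows the Application (client) ID and the Directory (tenant) ID, and a client secret can be generated under Certificates & secrets. The storage connection strings come from the storage account's Access keys blade, and for the token-based tests the app will likely also need a data-plane role such as Storage Blob Data Contributor on the account. As a rough sketch, with every value below being a placeholder you substitute yourself, the relevant parts of test_configurations.json would end up looking something like:

{
    "name": "production",
    "type": "cloud",
    "connection_string": "DefaultEndpointsProtocol=https;AccountName=<storage-account-name>;AccountKey=<storage-account-key>;EndpointSuffix=core.windows.net"
},
...
"token_information": {
    "account_name": "<storage-account-name>",
    "tenant_id": "<Directory (tenant) ID from the app registration>",
    "client_id": "<Application (client) ID from the app registration>",
    "client_secret": "<client secret value from Certificates & secrets>",
    "resource": "https://storage.azure.com"
}

The premium_account and blob_storage_account tenants presumably need connection strings for a premium-performance storage account and a BlobStorage-kind account if you want those test groups to run.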

Related

Publish Bot - Composer - Import from Existing Resources

I had a few permission-related issues because my account is an organizational one, so I decided to make use of "Import using existing resources". Please find below the .json that was used to create the publishing profile.
{
"name": "convAISubAuto",
"environment": "dev",
"tenantId": "TENANT_ID",
"hostname": "subAutoConvAI", #Same as azure bot name
"runtimeIdentifier": "win-x64",
"resourceGroup": "chatbot",
"botName": "subAutoConvAI", #Same as azure bot name
"subscriptionId": "SUB_ID",
"region": "westus",
"appServiceOperatingSystem": "windows",
"scmHostDomain": "",
"luisResource": "convAIBoth",
"settings": {
"applicationInsights": {
"InstrumentationKey": "",
"connectionString": ""
},
"cosmosDb": {
"cosmosDBEndpoint": "",
"authKey": "",
"databaseId": "botstate-db",
"containerId": "botstate-container"
},
"blobStorage": {
"connectionString": "",
"container": ""
},
"luis": {
"authoringKey": "",
"authoringEndpoint": "",
"endpointKey": "",
"endpoint": "",
"region": "eastus"
},
"qna": {
"subscriptionKey": "",
"endpoint": ""
},
"MicrosoftAppId": "APP_ID", #Taken from Azure bot configurations screen
"MicrosoftAppPassword": "***" #Taken from secret generated during app registration
}
}
The bot is getting published successfully. However, when I click the "Test in Web Chat" option inside the Azure Bot resource, I get a blank screen.
Also, in the Azure bot's configuration, the messaging endpoint field was blank, so I had to enter it myself as https://host_name/api/messages.
Is there any issue with my publishing profile's JSON file?
The solution is as follows:
1. Do an app registration and generate a secret.
2. Create an Azure bot; during the first step there is an option "Use existing app registration", where you enter the app ID and the secret (app password).
3. Use the same app ID and app password in the JSON file above.
This solved the issue.
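In other words, the publishing profile should reference the same app registration that the Azure bot was created with; the two fields in the profile's settings block would be filled like this (placeholders):

"MicrosoftAppId": "<Application (client) ID of that app registration>",
"MicrosoftAppPassword": "<client secret generated for that same app registration>"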

Data factory SQL connection String with keyvault

I exported an ARM template from Data Factory V2. When importing the template, it asks me to manually enter the SQL database connection string. To minimize human interaction, I made the following changes.
{
"name": "[concat(parameters('factoryName'), '/myFactory')]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"type": "AzureSqlDatabase",
"typeProperties": {
"connectionString": "[concat('Server=tcp:',parameters('sqlServerName'),'.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'), ';Persist Security Info=False;User ID=',parameters('sqlServerUserName'),';Password=(password)',';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30')]",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "AzureKeyVault1",
"type": "LinkedServiceReference"
},
"secretName": "sql-password"
}
}
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/AzureKeyVault1')]"
]
},
Currently, when I deploy to Data Factory V2 and test the connection to this SQL server, I get:
Cannot connect to SQL Database: 'tcp:mysqlserver.database.windows.net,1433',
Database: 'mydatabase', User: 'admin'. Check the linked service configuration
is correct, and make sure the SQL Database firewall allows the integration runtime to access.
Login failed for user 'admin'., SqlErrorNumber=18456,
If I manually input all the connection details in the portal UI, I can easily connect to the database and test successfully, so it is not a firewall issue.
I think there could be two issues:
1. How the password from Key Vault is consumed in the connection string. I didn't find much information about this online.
2. When I open the created SQL linked service, I notice the fully qualified domain name is missing; if I manually add it in, the connection works.
[Screenshot: the SQL connection UI]
Throwing this out as an alternative answer/approach.
Store the connection string in its entirety in Key Vault. If you do this, the reference would look like:
{
"name": "[concat(parameters('factoryName'), '/',parameters('connectionNameAdventureWorks'))]",
"type": "Microsoft.DataFactory/factories/linkedServices",
"apiVersion": "2018-06-01",
"properties": {
"annotations": [],
"type": "AzureSqlDatabase",
"typeProperties": {
"connectionString": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "[variables('azkDataAnalyticsReferenceName')]",
"type": "LinkedServiceReference"
},
"secretName": "[variables('azkAdventureWorksSecretName')]"
}
}
},
"dependsOn": [
"[concat(variables('factoryId'), '/linkedServices/',variables('azkDataAnalyticsReferenceName'))]"
]
}
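With this approach the secret itself holds the complete connection string, credentials included. Mirroring the connection string from the question, the secret's value would look something like this (all values are placeholders):

Server=tcp:<sql-server-name>.database.windows.net,1433;Initial Catalog=<database-name>;User ID=<user>;Password=<password>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30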
An even more secure approach would be to use Data Factory's managed identity and then run a SQL script that adds that identity as a database user; if you do this, no credentials need to be passed at all (see the sketch below).
One downside is that if the Data Factory is deleted and recreated, the managed identity's permissions would need to be reassigned on the SQL database.
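A minimal sketch of the linked service in that case, assuming the factory's managed identity has already been created as a database user (e.g. CREATE USER [<data-factory-name>] FROM EXTERNAL PROVIDER plus an appropriate role grant). The linked service name AzureSqlManagedIdentity is purely illustrative; the point is that the connection string carries no credentials at all:

{
    "comments": "Illustrative only: authentication uses the factory's managed identity, so no User ID/Password appears in the connection string.",
    "name": "[concat(parameters('factoryName'), '/AzureSqlManagedIdentity')]",
    "type": "Microsoft.DataFactory/factories/linkedServices",
    "apiVersion": "2018-06-01",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "[concat('Server=tcp:', parameters('sqlServerName'), '.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'), ';Encrypt=True;Connection Timeout=30')]"
        }
    }
}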

Enable HTTPS on Azure Front Door custom domain with ARM template deployment

I am deploying an Azure Front Door via an ARM template, and attempting to enable HTTPS on a custom domain.
According to the Azure documentation for Front Door, there is a quick start template to "Add a custom domain to your Front Door and enable HTTPS traffic for it with a Front Door managed certificate generated via DigiCert." However, while this adds a custom domain, it does not enable HTTPS.
Looking at the ARM template reference for Front Door, I can't see any obvious way to enable HTTPS, but perhaps I'm missing something?
Notwithstanding the additional information below, I'd like to be able to enable HTTPS on a Front Door custom domain via an ARM template deployment. Is this possible at this time?
Additional information
Note that there is a REST operation to enable HTTPS, but this does not seem to work with a Front Door managed certificate -
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/frontDoors/{frontDoorName}/frontendEndpoints/{frontendEndpointName}/enableHttps?api-version=2019-05-01
{
"certificateSource": "FrontDoor",
"protocolType": "ServerNameIndication",
"minimumTLSVersion": "1.2"
}
There is also an Az PowerShell cmdlet to enable HTTPS, which does work.
Enable-AzFrontDoorCustomDomainHttps -ResourceGroupName "lmk-bvt-accounts-front-door" -FrontDoorName "my-front-door" -FrontendEndpointName "my-front-door-rg"
UPDATE: This implementation currently seems to be unstable and is working only intermittently, which indicates it may not be production ready yet.
This now actually seems to be possible with ARM templates, after tracking down the latest Front Door API (2020-01-01) specs (which don't appear to be fully published in the MS reference websites yet):
https://github.com/Azure/azure-rest-api-specs/tree/master/specification/frontdoor/resource-manager/Microsoft.Network/stable/2020-01-01
There's a new customHttpsConfiguration property in the frontendEndpoint properties object:
"customHttpsConfiguration": {
"certificateSource": "AzureKeyVault" // or "FrontDoor",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
// Depending on "certificateSource" you supply either:
"keyVaultCertificateSourceParameters": {
"secretName": "<secret name>",
"secretVersion": "<secret version>",
"vault": {
"id": "<keyVault ResourceID>"
}
}
// Or:
"frontDoorCertificateSourceParameters": {
"certificateType": "Dedicated"
}
}
KeyVault Managed SSL Certificate Example
Note: I have tested this and it appears to work.
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-01-01",
"properties": {
"frontendEndpoints": [
{
"name": "[variables('frontendEndpointName')]",
"properties": {
"hostName": "[variables('customDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"webApplicationFirewallPolicyLink": {
"id": "[variables('wafPolicyResourceId')]"
},
"resourceState": "Enabled",
"customHttpsConfiguration": {
"certificateSource": "AzureKeyVault",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
"keyVaultCertificateSourceParameters": {
"secretName": "[parameters('certKeyVaultSecret')]",
"secretVersion": "[parameters('certKeyVaultSecretVersion')]",
"vault": {
"id": "[resourceId(parameters('certKeyVaultResourceGroupName'),'Microsoft.KeyVault/vaults',parameters('certKeyVaultName'))]"
}
}
}
}
}
],
...
}
}
Front Door Managed SSL Certificate Example
It looks like for a Front Door managed certificate you would need to set the following:
Note: I have not tested this
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-01-01",
"properties": {
"frontendEndpoints": [
{
"name": "[variables('frontendEndpointName')]",
"properties": {
"hostName": "[variables('customDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"webApplicationFirewallPolicyLink": {
"id": "[variables('wafPolicyResourceId')]"
},
"resourceState": "Enabled",
"customHttpsConfiguration": {
"certificateSource": "FrontDoor",
"minimumTlsVersion":"1.2",
"protocolType": "ServerNameIndication",
"frontDoorCertificateSourceParameters": {
"certificateType": "Dedicated"
}
}
}
}
],
...
}
}
I was able to successfully make an enableHttps REST Call using the Azure Management API.
I got a successful response and can see the resource results in the portal.azure.com and resource.azure.com sites.
However, I am pretty sure the Management API and PowerShell methods are the only supported ways right now. Since there is likely some validation required around certificate handling, it hasn't been included in ARM templates yet. Given that validation can be quite important, it is best to confirm your configuration works in the UI first before automating it (IMHO).
According to this discussion this seems only possible via the REST API (see e.g. this answer) and not (yet) via ARM.
I managed to get this working with an ARM template. The below link shows you how to do this using Azure Front Door as a certificate source:
https://github.com/Azure/azure-quickstart-templates/blob/master/101-front-door-custom-domain/azuredeploy.json
I drew inspiration from this for deploying a certificate from Azure Key Vault for a custom domain. Here are the relevant elements from the ARM template that I am using:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"hubName": {
"type": "string",
"metadata": {
"description": "Name to assign to the hub. This name will prefix all resources contained in the hub."
}
},
"frontdoorName": {
"type": "string",
"metadata": {
"description": "Name to assign to the Frontdoor instance"
}
},
"frontdoorCustomDomain": {
"type": "string",
"metadata": {
"description": "The custom domain name to be applied to the provisioned Azure Frontdoor instance"
}
},
"keyVaultCertificateName": {
"type": "string",
"metadata": {
"description": "Name of the TLS certificate in the Azure KeyVault to be deployed to Azure Frontdoor for supporting TLS over a custom domain",
"assumptions": [
"Azure KeyVault containing the TLS certificate is deployed to the same resource group as the resource group where Azure Frontdoor will be deployed to",
"Azure KeyVault name is the hub name followed by '-keyvault' (refer to variable 'keyVaultName' in this template)"
]
}
},
...
},
"variables": {
"frontdoorName": "[concat(parameters('hubName'), '-', parameters('frontdoorName'))]",
"frontdoorEndpointName": "[concat(variables('frontdoorName'), '-azurefd-net')]",
"customDomainFrontdoorEndpointName": "[concat(variables('frontdoorName'), '-', replace(parameters('frontdoorCustomDomain'), '.', '-'))]",
"keyVaultName": "[concat(parameters('hubName'), '-keyvault')]",
"frontdoorHostName": "[concat(variables('frontdoorName'), '.azurefd.net')]",
...
},
"resources": [
{
"type": "Microsoft.Network/frontdoors",
"apiVersion": "2020-05-01",
"name": "[variables('frontdoorName')]",
"location": "Global",
"properties": {
"resourceState": "Enabled",
"backendPools": [...],
"healthProbeSettings": [...],
"frontendEndpoints": [
{
"id": "[concat(resourceId('Microsoft.Network/frontdoors', variables('frontdoorName')), concat('/FrontendEndpoints/', variables('frontdoorEndpointName')))]",
"name": "[variables('frontdoorEndpointName')]",
"properties": {
"hostName": "[variables('frontdoorHostName')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"resourceState": "Enabled"
}
},
{
"id": "[concat(resourceId('Microsoft.Network/frontdoors', variables('frontdoorName')), concat('/FrontendEndpoints/', variables('customDomainFrontdoorEndpointName')))]",
"name": "[variables('customDomainFrontdoorEndpointName')]",
"properties": {
"hostName": "[parameters('frontdoorCustomDomain')]",
"sessionAffinityEnabledState": "Enabled",
"sessionAffinityTtlSeconds": 0,
"resourceState": "Enabled"
}
}
],
"loadBalancingSettings": [...],
"routingRules": [...],
"backendPoolsSettings": {
"enforceCertificateNameCheck": "Enabled",
"sendRecvTimeoutSeconds": 30
},
"enabledState": "Enabled",
"friendlyName": "[variables('frontdoorName')]"
}
},
{
"type": "Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration",
"apiVersion": "2020-07-01",
"name": "[concat(variables('frontdoorName'), '/', variables('customDomainFrontdoorEndpointName'), '/default')]",
"dependsOn": [
"[resourceId('Microsoft.Network/frontdoors', variables('frontdoorName'))]"
],
"properties": {
"protocolType": "ServerNameIndication",
"certificateSource": "AzureKeyVault",
"minimumTlsVersion": "1.2",
"keyVaultCertificateSourceParameters": {
"secretName": "[parameters('keyVaultCertificateName')]",
"vault": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.KeyVault/vaults', variables('keyVaultName'))]"
}
}
}
}
]
}
Azure Front Door classic now seems to support both managed certificates and custom certificates for custom domains. At least there are quickstart templates in the official repo from Microsoft exactly for these cases:
managed certificate
custom certificate
They both use the Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration subresource of the Front Door, currently with API version 2020-07-01. Only the parent subresource is documented in the templates reference, though.
The name of the customHttpsConfiguration resource is "default", so when the resource is specified as a top-level resource in the template, its complete name is something like "myfrontdoorafd/www-example-com/default".
Using Bicep (which transpiles to JSON ARM templates and which I highly recommend), the important part of the template looks like this:
param frontDoorName string
param customDomainName string

var frontEndEndpointCustomName = replace(customDomainName, '.', '-')

resource frontDoor 'Microsoft.Network/frontDoors@2020-01-01' = {
  name: frontDoorName
  properties: {
    frontendEndpoints: [
      {
        name: frontEndEndpointCustomName
        properties: {
          hostName: customDomainName
          ...
        }
      }
      ...
    ]
    ...
  }
  ...

  resource frontendEndpoint 'frontendEndpoints' existing = {
    name: frontEndEndpointCustomName
  }
}

// This resource enables a Front Door-managed TLS certificate on the frontend.
resource customHttpsConfiguration 'Microsoft.Network/frontdoors/frontendEndpoints/customHttpsConfiguration@2020-07-01' = {
  parent: frontDoor::frontendEndpoint
  name: 'default'
  properties: {
    protocolType: 'ServerNameIndication'
    certificateSource: 'FrontDoor'
    frontDoorCertificateSourceParameters: {
      certificateType: 'Dedicated'
    }
    minimumTlsVersion: '1.2'
  }
}
Note that the deployment will remain in progress until the certificate is actually issued and deployed to all Azure points of presence (PoPs). This may take a long time and may even fail with a RequestTimeout. If you want to just start the operation and let it complete asynchronously, use e.g. the enable-https subcommand of the Azure CLI. Even after such a failure, the customHttpsProvisioningState is Pending and the certificate provisioning process may still complete successfully.
Also note that when you have many frontend endpoints and changes happen frequently but most frontend endpoints stay unchanged, the pattern from this template cannot be generalized just by specifying multiple customHttpsConfiguration instances for multiple frontend endpoints. Such a generalization is not efficient and likely hits the rate limit of the underlying API (429 TooManyRequests) because the API is called even when the endpoint already has the HTTPS configuration.
In such a case, I was able to use nested templates and conditional deployment to deploy the customHttpsConfiguration subresource only when the frontend endpoint's property customHttpsProvisioningState has the value of Disabled. This works OK even with tens of frontend endpoints when a new frontend endpoint is added (and it should get a managed certificate). Even in deployment mode Complete, the once-applied configuration persists.

Pass Service Principal Client Id and Secret to ARM Template

I have an ARM template that creates an Azure Key Vault followed by an Azure Kubernetes Service. The problem is that the Azure Kubernetes Service needs a service principal's client ID and client secret passed in the first time I create it. So I run application.json without the kubernetes_servicePrincipalClientId and kubernetes_servicePrincipalClientSecret parameters in the production.parameters.json file:
application.json
{
"comments": "Kubernetes Service Principal Client ID",
"type": "Microsoft.KeyVault/vaults/secrets",
"name": "[concat(parameters('key_vault_name'), '/KubernetesServicePrincipalClientId')]",
"apiVersion": "2018-02-14",
"properties": {
"contentType": "text/plain",
"value": "[parameters('kubernetes_servicePrincipalClientId')]"
}
},
{
"comments": "Kubernetes Service Principal Client Secret",
"type": "Microsoft.KeyVault/vaults/secrets",
"name": "[concat(parameters('key_vault_name'), '/KubernetesServicePrincipalClientSecret')]",
"apiVersion": "2018-02-14",
"properties": {
"contentType": "text/plain",
"value": "[parameters('kubernetes_servicePrincipalClientSecret')]"
}
}
The second time I run the ARM template, I add the following lines to my production.parameters.json file, so that the Client ID and Client Secret are retrieved from Azure Key Vault where they were stored the first time I ran the ARM template.
production.parameters.json
"kubernetes_servicePrincipalClientId": {
"reference": {
"keyVault": {
"id": "/subscriptions/[Subscription Id]/resourcegroups/[Resource Group Name]/providers/Microsoft.KeyVault/vaults/[Vault Name]"
},
"secretName": "KubernetesServicePrincipalClientId"
}
},
"kubernetes_servicePrincipalClientSecret": {
"reference": {
"keyVault": {
"id": "/subscriptions/[Subscription Id]/resourcegroups/[Resource Group Name]/providers/Microsoft.KeyVault/vaults/[Vault Name]"
},
"secretName": "KubernetesServicePrincipalClientSecret"
}
}
Unfortunately, it looks like you can't create a service principal in an ARM template. Is there a better way to configure all this in an automated fashion, so that I don't have to perform any manual steps regardless of whether I'm running the template for the first or the second time?
No, those APIs are not exposed to ARM; there is no way of managing a service principal with an ARM template. You can, however, create a script that provisions the service principal and passes its details to the ARM template, or you can use a tool that handles all of this for you (Pulumi/Terraform/Ansible).

How to programmatically know the current region in an Azure role?

I need to programmatically find the current region (e.g. "West US" or "East US") where my current role is running. Is there any API to find this?
Consider using Get Cloud Service in the service management API. When you supply the service that your roles are a part of, you can retrieve a response similar to the following. Note the location field that I've starred.
<?xml version="1.0" encoding="utf-8"?>
<HostedService xmlns="http://schemas.microsoft.com/windowsazure">
<Url>hosted-service-url</Url>
<ServiceName>hosted-service-name</ServiceName>
<HostedServiceProperties>
<Description>description</Description>
<AffinityGroup>name-of-affinity-group</AffinityGroup>
**<Location>location-of-service</Location >**
<Label>base-64-encoded-name-of-service</Label>
<Status>current-status-of-service</Status>
<DateCreated>creation-date-of-service</DateCreated>
<DateLastModified>last-modification-date-of-service</DateLastModified>
<ExtendedProperties>
<ExtendedProperty>
<Name>name-of-property</Name>
<Value>value-of-property</Value>
</ExtendedProperty>
</ExtendedProperties>
<GuestAgentType>type-of-guest-agent</GuestAgentType>
</HostedServiceProperties>
<DefaultWinRmCertificateThumbprint>thumbprint-of-winrm-certificate</DefaultWinRmCertificateThumbprint>
</HostedService>
You can only get that information if you use the Management API, either via REST or the C# Windows Azure Management Libraries (prerelease on NuGet).
But do note that you need to set up management certificates to get the information.
An easier alternative is to create a setting in your cloud service and set its value when you create the deployment configuration. I do this and have deployment configurations for the regions I target.
using( var azure = CloudContext.Clients.CreateComputeManagementClient(...))
{
var service = await azure.HostedServices.GetDetailedAsync("servicename");
// service.Properties.Location
// service.Properties.AffinityGroup;
}
using(var azure = CloudContext.Clients.CreateManagementClient(...))
{
var affinityGroup = await azure.AffinityGroups.GetAsync("name",new CancellationToken());
// affinityGroup.Location
}
Here ... stands for the credentials, either a management certificate or your WAAD OAuth tokens (ADAL, the Active Directory Authentication Library, can be used for tokens).
Here is the code for getting credentials from a certificate:
public static CertificateCloudCredentials GetCertificateCloudCredentials(
string certificateThumbprint, string subscriptionId)
{
var certificate = CertificateHelper.LoadCertificate(
StoreName.My,
StoreLocation.LocalMachine,
certificateThumbprint);
if (certificate == null)
throw new Exception(
string.Format("Certificate with thumbprint '{0}' not found",
certificateThumbprint));
var cred = new CertificateCloudCredentials(
subscriptionId,
certificate
);
return cred;
}
That information is available from the Azure Instance Metadata Service (IMDS). The REST endpoint for any VM running in the Azure public cloud is http://169.254.169.254/metadata/instance?api-version=2017-04-02. The metadata object contains two sub-objects, one for "compute" and one for "network". The region name appears in the "location" member of the "compute" object.
Sample code in multiple languages for accessing various elements of IMDS data is available from the Microsoft/azureimds repo on GitHub. Far more information than I show here is available through the 2018-10-01 version of the IMDS API; see the IMDS docs for details.
$ curl -s -H Metadata:True "http://169.254.169.254/metadata/instance?api-version=2017-04-02&format=json" | jq .
{
"compute": {
"location": "westus2",
"name": "samplevm",
"offer": "UbuntuServer",
"osType": "Linux",
"platformFaultDomain": "0",
"platformUpdateDomain": "0",
"publisher": "Canonical",
"sku": "18.04-LTS",
"version": "18.04.201904020",
"vmId": "(redacted)",
"vmSize": "Standard_D2s_v3"
},
"network": {
"interface": [
{
"ipv4": {
"ipAddress": [
{
"privateIpAddress": "10.0.0.7",
"publicIpAddress": ""
}
],
"subnet": [
{
"address": "10.0.0.0",
"prefix": "24"
}
]
},
"ipv6": {
"ipAddress": []
},
"macAddress": "(redacted)"
}
]
}
}
