I'm deploying a Storage Account and Function App via an ARM template, but after a successful deployment, I'm unable to use the Function App because no keys are being created.
We are not able to retrieve the runtime master key. Please try again later.
From what I understand (see this issue on GitHub), there should be a folder created in the associated Storage Account that contains host.json: storage-account/azure-webjobs-secrets/function-app. However, this folder does not exist.
I was hoping that host.json would be created upon deployment of a Function. I believe there should also be a file named FunctionName.json that contains Function-specific keys, but the aforementioned folder does not exist and neither does this file. This leads to an additional error being displayed.
We are not able to retrieve the keys for function FunctionName. This can happen if the runtime is not able to load your function. Check other function errors.
Just in case anyone from MS picks this up, both errors should hopefully be recorded under Session ID a814072a75d643d89c71d64d20c3dd55.
Is there some bug with Function App deployment via an ARM template, or am I doing something wrong?
ARM Template
Can be found in this Pastebin
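For comparison only (a sketch with assumed parameter names, not the template in the Pastebin): a common cause of the missing azure-webjobs-secrets container is a missing or malformed storage connection string in the Function App's settings. The app settings would normally wire the storage account in roughly like this:

```json
{
  "name": "AzureWebJobsStorage",
  "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', parameters('storageAccountName'), ';AccountKey=', listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2016-01-01').keys[0].value)]"
}
```

If this setting is absent or points at the wrong account, the runtime cannot write its secrets blob and the master key error appears.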
Example output
DeploymentName : DEV-FunctionAppName
ResourceGroupName : a-resource-group
ProvisioningState : Succeeded
Timestamp : 02/10/2017 10:30:28
Mode : Incremental
TemplateLink :
Parameters :
Name Type Value
=============== ========================= ==========
environment String DEV
environmentSettings Object {
"applicationInsightsName": "service-name-app-insights",
"functionAppName": "DEV-FunctionAppName",
"storageAccountName": "servicenamedevstorene"
}
serviceName String Service Name
location String northeurope
serverFarmName String NorthEuropePlan
storageAccountSku Object {
"name": "Standard_GRS",
"tier": "Standard"
}
tags Object {
"Application": "Service Name",
"Environment": "DEV"
}
Outputs :
DeploymentDebugLogLevel :
I have some problems that I am hoping someone can help me with.
I created a bunch of resources in the Data Factory "DevDataFactory": a few linked services, a few datasets, and a few pipelines.
One of the linked services connects to a SQL Database. In the JSON for that linked service, the connection string appears in this snippet:
"typeProperties": {
"connectionString": "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "KeyVaultLink",
"type": "LinkedServiceReference"
},
"secretName": "DBPassword"
},
"alwaysEncryptedSettings": {
"alwaysEncryptedAkvAuthType": "ManagedIdentity"
}
}
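For context, the linked-service parameters referenced above via #{linkedService().…} are declared alongside typeProperties with defaults roughly like this (a sketch; the default values are placeholders, not my actual ones):

```json
"parameters": {
  "cloudDbDomain": { "type": "String", "defaultValue": "my-server.database.windows.net" },
  "dbName": { "type": "String", "defaultValue": "MyDatabase" },
  "dbUserName": { "type": "String", "defaultValue": "db-user" }
}
```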
All parameters are in place, default values are set, and when the pipeline is run it prompts for values.
The problem is that when I then go and export the ARM template to use the Data Factory in another environment, there is a parameter for this linked service's connection string, and its value is blank. Every other parameter used in the ARM template has a default value; why doesn't this one, given that the parameters used inside the connection string are present and have their own default values?
If I use the Azure Portal to import that ARM template, I can go ahead and enter the connection string from the JSON snippet above:
Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}
... and everything will be imported, and all my pipelines will automatically work in the newly created Data Factory.
The PROBLEM IS that I need to do this from Azure DevOps pipelines, which automatically pick up the "adf-publish" branch of the Git repo, and this is where I don't know what to do. When the ADO pipeline runs automatically, I can't just substitute the connection string on the fly.
I am stuck, please help!
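One way to substitute the value at deploy time is the overrideParameters input of the ARM deployment task in the pipeline. A sketch, assuming the standard ARMTemplateForFactory.json from the adf-publish branch; the service connection, resource group, and parameter name (here MyDatabase_connectionString) are placeholders you would read from the exported parameters file:

```yaml
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'   # assumed name
    resourceGroupName: 'target-rg'                            # assumed name
    csmFile: '$(System.DefaultWorkingDirectory)/adf-publish/ARMTemplateForFactory.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/adf-publish/ARMTemplateParametersForFactory.json'
    overrideParameters: '-MyDatabase_connectionString "Integrated Security=False;Encrypt=True;Connection Timeout=30;Data Source=#{linkedService().cloudDbDomain};Initial Catalog=#{linkedService().dbName};User ID=#{linkedService().dbUserName}"'
```

The connection string can also come from a secret pipeline variable instead of being inlined.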
So, I'm trying to deploy a certificate to Azure using an ARM template (currently authored in Bicep).
I received my .cer files from Sectigo. Generating a pfx file using openssl seems to work fine, since the generated pfx can be added to my Function App through the Azure portal.
But when I try to deploy it via the ARM template, I get this error:
{
"code":"DeploymentFailed",
"message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.",
"details": [
{
"code":"InternalServerError",
"message":"There was an unexpected InternalServerError. Please try again later. x-ms-correlation-request-id: f25b9b70-e931-4e19-b010-cc1907cdcbcc"
}
]
}
The deployment looks like this:
{
"type": "Microsoft.Web/certificates",
"apiVersion": "2016-03-01",
"name": "xxx20220609",
"location": "[resourceGroup().location]",
"properties": {
"pfxBlob": "[parameters('certificatePfx')]",
"password": "[parameters('certificatePassword')]"
}
}
The certificatePassword is provided as a parameter and is the same as when I import it manually.
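For reference, and only a sketch of what I'd expect (the error may lie elsewhere), both values would normally be declared as securestring parameters so that neither the blob nor the password is logged:

```json
"parameters": {
  "certificatePfx": { "type": "securestring" },
  "certificatePassword": { "type": "securestring" }
}
```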
The certificatePfx is produced by reading the bytes from the pfx file and base64-encoding them, which I've done using C#:
Convert.ToBase64String(File.ReadAllBytes(@"[pfx-file-path]"))
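The same encoding can be sketched in Python, mirroring the C# one-liner (the file path is a placeholder, as in the original):

```python
import base64

def pfx_to_base64(path):
    """Read a PFX file's raw bytes and return them base64-encoded,
    equivalent to Convert.ToBase64String(File.ReadAllBytes(...)) in C#."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")
```

Either way, the parameter value is the plain base64 string of the raw file bytes, with no extra quoting or line breaks.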
Any idea on what the InternalServerError could be about?
Please check the points below; I was making these mistakes in my test application:
• In my environment I discovered that binding the certificate to the host name must be done via two templates instead of one, because we cannot have two operations against the same type within a single ARM template.
• I was also getting a subsequent validation error caused by the domain name containing upper-case letters. Once I fixed that, I was able to issue an App Service managed certificate via an ARM template.
Funny thing: I tried exporting the certificate again with another password, and this time it worked.
I'm developing an Azure Blueprint with three artifacts:
A log analytics workspace
Two policy assignments that reference this log analytics workspace
To be able to reference the log analytics workspace in the policy assignments, I exported its resource ID as an output:
"outputs": {
"id": {
"type": "string",
"value": "[resourceId('Microsoft.OperationalInsights/workspaces', variables('workspaces_manage_log_analytics_name'))]"
}
}
In the policy assignments, I tried to refer to the log analytics workspace ID with [artifacts('LogAnalyticsWorkspace').outputs.id], but ended up with an error.
The error says this reference is invalid. I checked the artifacts function documentation, but had no luck solving the problem.
In the portal, users can assign a "display name" to artifacts, but not a "name". To assign a "name" to an artifact, you need to use PowerShell to export the blueprint, rename the JSON file, and import the modified blueprint.
In your case, "LogAnalyticsWorkspace" is not a "name", only a "display name". You need to assign the "name" yourself.
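Following that approach (a sketch with assumed values): after exporting, each artifact's file name supplies its "name" on import, so renaming the file to LogAnalyticsWorkspace.json is what makes [artifacts('LogAnalyticsWorkspace').outputs.id] resolve. The artifact JSON itself carries only the display name:

```json
{
  "kind": "template",
  "properties": {
    "displayName": "LogAnalyticsWorkspace",
    "template": { }
  }
}
```

(The template body is omitted here for brevity.)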
I am trying to deploy an ARM template using the Azure DevOps release pipeline; Azure Key Vault is one of the resources in the template. The deployment succeeds when I use a PowerShell script. However, when the Azure DevOps release pipeline is used, the deployment fails with the error "Bad JSON content found in the request".
The key vault resource definition is as below.
{
"type": "Microsoft.KeyVault/vaults",
"apiVersion": "2018-02-14",
"name": "[parameters('keyVaultName')]",
"location": "[parameters('location')]",
"tags": {
"displayName": "KeyVault"
},
"properties": {
"enabledForDeployment": "[parameters('enabledForDeployment')]",
"enabledForTemplateDeployment": "[parameters('enabledForTemplateDeployment')]",
"enabledForDiskEncryption": "[parameters('enabledForDiskEncryption')]",
"tenantId": "[parameters('tenantId')]",
"accessPolicies": [],
"sku": {
"name": "[parameters('skuName')]",
"family": "A"
}
}
}
Update: I suspected it could be because of the tenant ID and hardcoded the tenant ID to test, but still no luck.
According to the log, you are specifying override parameters in the task. That's why you still face the Bad request error even though you are using the ARM template I provided: in the task's logic, the ARM file content becomes the request body of the API the task calls to create the specified resource in Azure. For a detailed description of the task logic, refer to my previous answer.
The parameter definition in the ARM template is correct; the error is now caused by the override parameters you specified.
More specifically, the error is caused by subscription().tenantId in your parameter override definition.
You can try Write-Host subscription().tenantId in an Azure PowerShell task to print its value; you will see that it returns nothing. In short, this expression can only be used inside the JSON file, not in the task.
Because no value comes from this expression, and you have overridden the value defined in the JSON file, the request body will lack the key parameter value (tenantId) when the task creates the Azure resource via the API.
There are two solutions:
1. Do not override parameters whose values use template expressions.
By this I mean the parameters related to the Azure subscription; most such expressions cannot be evaluated by the Azure ARM deploy task.
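With the first solution, the expression stays in the template's parameter definition (a sketch) and is simply not overridden in the task:

```json
"tenantId": {
  "type": "string",
  "defaultValue": "[subscription().tenantId]"
}
```

The expression is then evaluated by Azure Resource Manager at deployment time, where it is valid.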
2. If you still want to override these special parameters with such expressions, you must first add a task that retrieves the tenant ID, then pass it into the ARM deploy task.
You can add an Azure PowerShell task with the following sample script:
Write-Output "Getting tenantId using Get-AzureRmSubscription..."
$subscription = (Get-AzureRmSubscription -SubscriptionId $azureSubscriptionId)
Write-Output "Requested subscription: $azureSubscriptionId"
$subscriptionId = $subscription.Id
$subscriptionName = $subscription.Name
$tenantId = $subscription.tenantId
Write-Output "Subscription Id: $subscriptionId"
Write-Output "Subscription Name: $subscriptionName"
Write-Output "Tenant Id: $tenantId"
Write-Host "##vso[task.setvariable variable=TenantID;]$tenantId"
Then in the next task, you can use $(TenantID) to get its value.
Here you can refer to these two excellent blogs: Blog1 and Blog2.
I still recommend the first solution, since the pipeline will grow in size and complexity if you choose the second.
Is there a way to specify KeyVault References in Function App configuration within my ARM template?
I have an ARM template that deploys an Azure Function App with different deployment parameters for each environment. Currently I retrieve the secret in each environment via a Key Vault reference in the parameters:
{
"storageAccountSecret": {
"reference": {
"keyVault": {
"id": "/subscriptions/plan-id-goes-here/resourceGroups/group-name-goes-here/providers/Microsoft.KeyVault/vaults/vault-name-goes-here"
},
"secretName": "super-secret-name-goes-here"
}
}
}
I then reference the parameter in the ARM template, under resources -> properties -> siteConfig -> appSettings:
{
"name": "AzureWebJobsStorage",
"value": "[parameters('storageAccountSecret')]"
},
The above works fine! However, our team also periodically rotates our keys, which changes the underlying value of the secret. With my current approach, the secret in the function app config won't update until we run the ARM template again.
My workaround is to use a Key Vault reference in the config, with the following syntax:
@Microsoft.KeyVault(SecretUri=https://vault-name-goes-here.vault.azure.net/secrets/super-secret-name-goes-here/)
Now when the underlying secret changes, my Function App will still get the up-to-date value. However, this requires me to set it manually. I would love to achieve the same effect with the ARM template alone; is that possible? 🤔
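One way to build the reference in the template itself is to compose the @Microsoft.KeyVault(...) string from the secret resource. A sketch; the parameter names and API version are assumptions:

```json
{
  "name": "AzureWebJobsStorage",
  "value": "[concat('@Microsoft.KeyVault(SecretUri=', reference(resourceId(parameters('vaultResourceGroup'), 'Microsoft.KeyVault/vaults/secrets', parameters('vaultName'), parameters('secretName')), '2019-09-01').secretUri, ')')]"
}
```

For the reference to resolve at runtime, the Function App also needs a managed identity with "get" permission on the vault's secrets.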
For exactly that reason, you should always use the full URL to your secret in Key Vault, including the version. See here.
When you update a secret in Key Vault, you get a new URL. Once you update the app setting (through your ARM template or any other way), the Function App will restart. This is the desired behavior when updating app settings.