When I call the Graph API from my PowerShell script, it first removes all keyCredentials (certificates) from the Enterprise Application service principal in Azure AD, then uploads my custom certificate. How can I retain the certificates that are currently installed on the application and ALSO upload my new certificate in an inactive state?
Here is the body.
{
    "keyCredentials": [
        {
            "customKeyIdentifier": "<REDACTED>",
            "endDateTime": "<REDACTED>",
            "keyId": "<REDACTED>",
            "startDateTime": "<REDACTED>",
            "type": "X509CertAndPassword",
            "usage": "Sign",
            "key": "<REDACTED>",
            "displayName": "<REDACTED>"
        },
        {
            "customKeyIdentifier": "<REDACTED>",
            "endDateTime": "<REDACTED>",
            "keyId": "<REDACTED>",
            "startDateTime": "<REDACTED>",
            "type": "AsymmetricX509Cert",
            "usage": "Verify",
            "key": "<REDACTED>",
            "displayName": "<REDACTED>"
        }
    ],
    "passwordCredentials": [
        {
            "customKeyIdentifier": "<REDACTED>",
            "keyId": "<REDACTED>",
            "endDateTime": "<REDACTED>",
            "startDateTime": "<REDACTED>",
            "secretText": "<REDACTED>"
        }
    ]
}
Each key has a value; I have just removed them here for privacy.
Here is the call to the Graph API:
$response = Invoke-RestMethod -Method Patch -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/{AppID}" -Headers $global:Header -Body $certBody
All of the information is correct because it uploads the custom certificate correctly. I just want it to leave the other certs alone.
After some research, and discussions with Microsoft, the way to use this method and retain the certificates is to first query the service principal's key credentials with a regular GET call to https://graph.microsoft.com/v1.0/servicePrincipals/{id}. Then, build a new JSON payload containing both the CURRENT key credentials from the GET call and the NEW key credentials. Finally, make a PATCH call to the same URI with that payload; it will update the service principal's certificates without deleting the current ones.
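The GET/merge/PATCH round trip can be sketched in PowerShell. This is a hypothetical sketch, not the exact script I used: `Merge-KeyCredentials` is my own helper name, and `$Header`, `$newKeyCredentials`, and `$newPasswordCredentials` are assumed to be set up as in the question.

```powershell
# Hypothetical helper: combine the keyCredentials returned by GET with the new entries.
function Merge-KeyCredentials {
    param(
        [array]$Existing,   # keyCredentials array from the GET response
        [array]$New         # keyCredentials entries for the new certificate
    )
    # Existing entries keep their keyId, so Azure retains them instead of deleting them.
    return @($Existing) + @($New)
}

# Sketch of the round trip (uncomment with a real header and service principal id):
# $sp     = Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/{id}" -Headers $Header
# $merged = Merge-KeyCredentials -Existing $sp.keyCredentials -New $newKeyCredentials
# $body   = @{ keyCredentials = $merged; passwordCredentials = $newPasswordCredentials } | ConvertTo-Json -Depth 5
# Invoke-RestMethod -Method Patch -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/{id}" -Headers $Header -Body $body -ContentType "application/json"
```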
A couple of things to note:
The "customKeyIdentifier" field has a character limit. Since it is recommended to use a hash of the thumbprint, I first tried SHA-256, but the resulting hash was too long, so I used SHA-1 instead. SHA-1 is less secure, so another randomly generated value should work as long as it fits within the limit; a GUID did not work, though, and I suspect the dashes were the problem. I have not done enough testing to find the exact limit, but my 40-character identifier was accepted.
For the private key, you can use the following PowerShell commands to get the correctly encoded private key. (This is the only method I found that works; OpenSSL did not seem to output the private key in the format Azure is looking for.)
# -AsByteStream requires PowerShell 6+; on Windows PowerShell 5.1 use -Encoding Byte instead
$fileContentBytes = Get-Content '<PATH_TO_PFX_FILE>' -AsByteStream
$privKey = [System.Convert]::ToBase64String($fileContentBytes)
Here is an example of a successful JSON payload with existing certs and a new cert. The passwordCredentials section should contain the passphrase for the new cert as well as the keyId of its private key. This particular service principal has two current certificates.
Please note that while the current certs only have "AsymmetricX509Cert" as the type, the new cert has both "AsymmetricX509Cert" and "X509CertAndPassword", using the public key and private key respectively.
{
"keyCredentials": [
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"displayName": "<CN>",
"endDateTime": "<EXPIRY_DATE>",
"key": null <EXISTING_CERT>,
"keyId": "<GUID>",
"startDateTime": "<START_DATE>",
"type": "AsymmetricX509Cert",
"usage": "Verify"
},
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"displayName": "<CN>",
"endDateTime": "<EXPIRY_DATE>",
"key": null <EXISTING_CERT>,
"keyId": "<GUID>",
"startDateTime": "<START_DATE>",
"type": "AsymmetricX509Cert",
"usage": "Sign"
},
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"displayName": "<CN>",
"endDateTime": "<EXPIRY_DATE>",
"key": null <EXISTING_CERT>,
"keyId": "<GUID>",
"startDateTime": "<START_DATE>",
"type": "AsymmetricX509Cert",
"usage": "Verify"
},
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"displayName": "<CN>",
"endDateTime": "<EXPIRY_DATE>",
"key": null <EXISTING_CERT>,
"keyId": "<GUID>",
"startDateTime": "<START_DATE>",
"type": "AsymmetricX509Cert",
"usage": "Sign"
},
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"endDateTime": "<EXPIRY_DATE>",
"type": "X509CertandPassword",
"key": "<NEWCERT-PrivateKey>",
"displayName": "<CN>",
"startDateTime": "<START_DATE>",
"keyId": "<GUID>",
"usage": "Sign"
},
{
"customKeyIdentifier": "<HASH_OF_THUMBPRINT>",
"endDateTime": "<EXPIRY_DATE>",
"type": "AsymmetricX509Cert",
"key": "<NEWCERT-PublicKey>",
"displayName": "<CN>",
"startDateTime": "<START_DATE>",
"keyId": "<GUID>",
"usage": "Verify"
}
],
"passwordCredentials": [
{
"endDateTime": "<EXPIRY_DATE>",
"secretText": "<PASSPHRASE_FOR_NEW_CERT>",
"startDateTime": "<START_DATE>",
"keyId": "<KEY_ID_OF_PRIVATE_KEY>",
"customKeyIdentifier": "<HASH_OF_THUMBPRINT_OF_NEW_CERT>"
}
]
}
Use addKey instead of the Update method to add additional keyCredentials:
POST /servicePrincipals/{id}/addKey versus PATCH /servicePrincipals/{id}
But be aware that:
Service principals that don't have any existing valid certificates (i.e., no certificates have been added yet, or all certificates have expired) won't be able to use this service action. Update servicePrincipal can be used to perform an update instead.
I have a real chicken-and-egg situation. I am also quite new to ARM, so I may be missing something glaringly obvious.
Previously we had an ARM template that worked using a certificate from a key vault, which was fine.
The hostNameBindings resource created the custom domain and the binding.
However, we want to move to the explicit self-managed certificates in Azure for the web service, but we are hitting some issues at the last hurdle.
The certificate depends on the custom domain (without it, the deployment fails), but we cannot reference the same resource twice in the template without it erroring.
Order must be:
Create custom domain
Create certificate
Bind certificate
ARM template extract below.
{
"condition": "[equals(parameters('UseCustomDomain'),'True')]",
"Comments": "If custom domain is selected then add to the webapplication",
"type": "Microsoft.Web/sites/hostnameBindings",
"apiVersion": "2022-03-01",
"name": "[concat(variables('appName'), '/', variables('DomainName'))]",
"location": "[ResourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('appName'))]"
],
"properties": {
"domainId": null,
"hostNameType": "Verified",
"siteName": "variables('DomainName')"
}
},
{
"type": "Microsoft.Web/certificates",
"apiVersion": "2021-03-01",
"name": "[variables('DomainName')]",
"Comments": "Creating Subdomain Certificate",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Web/sites', variables('AppName'))]",
"[resourceId('Microsoft.Web/sites/hostnameBindings/',variables('appName'), variables('DomainName'))]"
],
"properties": {
"hostNames": [
"[variables('DomainName')]"
],
"canonicalName": "[variables('DomainName')]",
"serverFarmId": "[variables('ServerFarmID')]"
}
},
What I would like to do is add the following properties to the hostNameBindings resource after the certificate is created.
"sslState": "[if(variables('enableSSL'), 'SniEnabled', json('null'))]",
"thumbprint": "[if(variables('enableSSL'), reference(resourceId('Microsoft.Web/certificates', variables('DomainName'))).Thumbprint, json('null'))]"
Is there a way to make individual properties dependent on a resource? When I try the below in the hostNameBindings properties, I get "Deployment template validation failed: 'Circular dependency detected on resource'".
"properties": {
    "domainId": null,
    "hostNameType": "Verified",
    "siteName": "[variables('DomainName')]",
    "dependsOn": [
        "[resourceId('Microsoft.Web/certificates', variables('DomainName'))]"
    ],
    "sslState": "[if(variables('enableSSL'), 'SniEnabled', json('null'))]",
    "thumbprint": "[if(variables('enableSSL'), reference(resourceId('Microsoft.Web/certificates', variables('DomainName'))).Thumbprint, json('null'))]"
}
Any help greatly appreciated.
I am trying to add firewall rules for Azure Key Vault using ARM templates. It works as expected if the ipRules property, with multiple IPs, is defined directly in the template (not as a parameter).
However, if I try to define it as a parameter, I get "Bad JSON content found in the request."
Property defined in Template ("apiVersion": "2019-09-01"):
"kv-ipRules": {
"type": "array",
"metadata": {
"description": "The address space (in CIDR notation) to use for the Azure Key Vault to be deployed as Firewall rules."
}
}
"networkAcls": {
"defaultAction": "Deny",
"bypass": "AzureServices",
"virtualNetworkRules": [
{
"id": "[concat(parameters('kv-virtualNetworks'), '/subnets/','kv-subnet')]",
"ignoreMissingVnetServiceEndpoint": false
}
],
"ipRules": "[parameters('kv-ipRules')]"
}
Property defined in Parameters:
"kv-ipRules": {
"value": [
"xx.xx.xx.xxx",
"yy.yy.yy.yyy"
]
}
Given the documentation (https://learn.microsoft.com/en-us/azure/templates/Microsoft.KeyVault/vaults?tabs=json#IPRule), ipRules expects an array of objects with a "value" property rather than plain strings, so I would use:
"kv-ipRules": {
"value": [
{
"value": "xx.xx.xx.xxx"
},
{
"value": "yy.yy.yy.yyy"
}
]
}
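Alternatively, if you would rather keep plain strings in the parameter file, you could wrap them into that object shape before deployment. A hypothetical PowerShell sketch (the variable names and the sample IP placeholders are mine):

```powershell
# Hypothetical: convert plain IP strings into the object shape ipRules expects.
$plainIps = @('xx.xx.xx.xxx', 'yy.yy.yy.yyy')
$ipRules  = @($plainIps | ForEach-Object { @{ value = $_ } })

# $ipRules can then be passed at deployment time, e.g.:
# New-AzResourceGroupDeployment -TemplateFile .\keyvault.json -TemplateParameterObject @{ 'kv-ipRules' = $ipRules }
```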
I am trying to generate a SAS token from an ARM template, to allow my template to subsequently access resources in a blob storage (including linked templates). The SAS token is supposed to be stored in a vault I'm also creating in this template. The storage account exists independently (in another RG)
However, I get the following error:
{
"code": "InvalidValuesForRequestParameters",
"message": "Values for request parameters are invalid: signedPermission,signedExpiry,signedResourceTypes,signedServices."
}
My template had this variable and line to generate the SAS token:
"variables": {
"vaultName": "[concat('hpc',uniqueString(resourceGroup().id, parameters('keyVaultName')))]",
"accountSasProperties": {
"type": "object",
"defaultValue": {
"signedServices": "fb",
"signedPermission": "rwdlacup",
"signedExpiry": "2021-11-30T00:00:00Z",
"signedResourceTypes": "co"
}
}
},
(...)
{
"apiVersion": "2018-02-14",
"type": "Microsoft.KeyVault/vaults/secrets",
"dependsOn": [
"[concat('Microsoft.KeyVault/vaults/', variables('vaultName'))]"
],
"name": "[concat(variables('vaultName'), '/', 'StorageSaSToken')]",
"properties": {
"value": "[listAccountSas(resourceId(parameters('StorageAccountRg'),'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2018-07-01', variables('accountSasProperties')).accountSasToken]"
}
}
I tried several variations of the parameters but could not find what was wrong, and the error is not really helping.
I tried (among other things):
removing the 'signed' in front of the parameters (services instead of signedServices)
various combination of services, resource types and permission
various times (shorter, longer...)
When we define variables, we do not need to specify a data type for the variable; "type" and "defaultValue" are parameter properties, not variable properties. For more details, please refer to the documentation on ARM template variables.
So please update your template as follows:
"variables": {
"vaultName": "[concat('hpc',uniqueString(resourceGroup().id, parameters('keyVaultName')))]",
"accountSasProperties": {
"signedServices": "fb",
"signedPermission": "rwdlacup",
"signedExpiry": "2021-11-30T00:00:00Z",
"signedResourceTypes": "co"
}
},
(...)
{
"apiVersion": "2018-02-14",
"type": "Microsoft.KeyVault/vaults/secrets",
"dependsOn": [
"[concat('Microsoft.KeyVault/vaults/', variables('vaultName'))]"
],
"name": "[concat(variables('vaultName'), '/', 'sas')]",
"properties": {
"value": "[listAccountSas(resourceId(parameters('StorageAccountRg'),'Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2018-07-01', variables('accountSasProperties')).accountSasToken]"
}
}
Found the issue with the help of @jim-xu's answer, and it's the worst kind of solution: a stupid mistake.
I switched "accountSasProperties" from parameters to variables, and in the process I forgot to remove the "defaultValue" wrapper and put the values directly under "accountSasProperties".
The correct syntax for the variable in my case:
"accountSasProperties": {
"signedServices": "fb",
"signedPermission": "rwdlacup",
"signedExpiry": "2021-11-30T00:00:00Z",
"signedResourceTypes": "co"
}
I incidentally also removed the object type, as pointed out by @jim-xu in his answer.
I'm trying to create an automation variable from a Key Vault secret. I assume I can do the same thing that is currently done in the main template for retrieving the Windows password, but it fails with the non-descriptive error below. I'm not sure what to try next to troubleshoot.
Error
{
"code": "BadRequest",
"message": "{\"Message\":\"The request is invalid.\",\"ModelState\":{\"variable.properties.value\":[\"An error has occurred.\"]}}"
}
Template
{
"name": "mystring",
"type": "variables",
"apiVersion": "2015-10-31",
"dependsOn": [
"[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
],
"properties": {
"value": {
"reference": {
"keyVault": {
"id": "[resourceId(subscription().subscriptionId, 'Utility-RG', 'Microsoft.KeyVault/vaults', 'MyKeyVault')]"
},
"secretName": "WindowsPasswordSecret"
}
},
"description": "test var",
"isEncrypted": false
}
}
That error is indeed "helpful". While I have no idea what went wrong there, I can tell you how to work around it: pass the data from the Key Vault to the template (as an input parameter), not to the resource. Then, in the template, use the parameter to assign the value to the object in question.
Reference: https://github.com/4c74356b41/bbbb-is-the-word/blob/master/_arm/parent.json#L151
I am trying to add a custom script extension to an Azure VM using an ARM template, and I want to have it download files from a storage account using a SAS token.
Here is the template (simplified):
{
"name": "CustomScriptExtension"
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "eastus",
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.8",
"settings": {
"fileUris": [
"https://{storage-account}.blob.core.windows.net/installers/{installer}.msi?sv=2015-04-05&sig={signature}&st=2017-05-03T05:18:28Z&se=2017-05-10T05:18:28Z&srt=o&ss=b&sp=r"
],
"commandToExecute": "start /wait msiexec /package {installer}.msi /quiet"
}
}
}
And deploying it results in this error:
{
"name": "CustomScriptExtension",
"type": "Microsoft.Compute.CustomScriptExtension",
"typeHandlerVersion": "1.8",
"statuses": [
{
"code": "ProvisioningState/failed/3",
"level": "Error",
"displayStatus": "Provisioning failed",
"message": "Failed to download all specified files. Exiting. Error Message: Missing mandatory parameters for valid Shared Access Signature"
}
]
}
If I hit the URL with the SAS token directly it pulls down the file just fine so I know the SAS token is correct. Does the custom script extension not support URLs with SAS tokens?
I figured it out; this must be a bug in the custom script extension that causes it to not support storage-account-level SAS tokens. If I add &sr=b to the end of the SAS token (which isn't part of the account-level SAS token spec), it starts working.
I found this info here:
https://azureoperations.wordpress.com/2016/11/21/first-blog-post/
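The workaround above can be sketched as a small string fix-up before putting the URL into fileUris (a hypothetical sketch; the variable name and sample URL are mine, with a placeholder signature):

```powershell
# Hypothetical fix-up: append sr=b to an account-level SAS URL so the
# custom script extension accepts it (the workaround described above).
$fileUri = 'https://mystorage.blob.core.windows.net/installers/setup.msi?sv=2015-04-05&sig=XXX&srt=o&ss=b&sp=r'
if ($fileUri -notmatch '(\?|&)sr=') {
    # Only append if the URL does not already carry an sr= parameter.
    $fileUri += '&sr=b'
}
```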
As @4c74356b41 said, the custom script extension does not currently support SAS tokens in the template. If you want to download a file from a private storage account, you can use the storage account key instead. Please refer to this example:
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'),'/', variables('extensionName'))]",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Extensions",
"type": "CustomScript",
"typeHandlerVersion": "2.0",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": "[split(parameters('fileUris'), ' ')]",
"commandToExecute": "[parameters('commandToExecute')]"
},
"protectedSettings": {
"storageAccountName": "[parameters('customScriptStorageAccountName')]",
"storageAccountKey": "[parameters('customScriptStorageAccountKey')]"
}
}
}
Update: there is now support for SAS tokens in the VM custom script extension.
No, it does not support SAS tokens. Refer to this feedback item:
https://github.com/Azure/azure-linux-extensions/issues/105