Post updated. The issue has been solved. The scripts below will create a resource group, create a service principal, deploy a key vault, configure permissions, and write a secret to the vault. Hope this helps! :)
Problem:
I am logged into PowerShell as a Service Principal that has Owner permissions on a resource group.
I get permission errors when I try to create a vault, set permissions on the vault, and write secrets.
Solution:
Step 1: Create the resource group and service principal. You must be logged in as an administrator to run this script.
Clear-Host
Import-Module Azure
Import-Module AzureRM.Resources
Add-AzureRmAccount
Get-AzureRmSubscription
Set-AzureRmContext -SubscriptionId <Your subscription id goes here>
$ServicePrincipalDisplayName = "myServicePrincipalName"
$CertificateName = "CN=SomeCertName"
$cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject $CertificateName -KeySpec KeyExchange
$keyValue = [Convert]::ToBase64String($cert.GetRawCertData())
$ResouceGroupName = "myRessourceGroup"
$location = "North Central US"
# Create the resource group
New-AzureRmResourceGroup -Name $ResouceGroupName -Location $location
$ResouceGroupNameScope = (Get-AzureRmResourceGroup -Name $ResouceGroupName -ErrorAction Stop).ResourceId
# Create the Service Principal that logs in with a certificate
New-AzureRMADServicePrincipal -DisplayName $ServicePrincipalDisplayName -CertValue $keyValue -EndDate $cert.NotAfter -StartDate $cert.NotBefore
$myServicePrincipal = Get-AzureRmADServicePrincipal -SearchString $ServicePrincipalDisplayName
Write-Host "myServicePrincipal.ApplicationId " $myServicePrincipal.ApplicationId -ForegroundColor Green
Write-Host "myServicePrincipal.DisplayName " $myServicePrincipal.DisplayName
# Sleep here for a few seconds to allow the service principal application to become active (should only take a couple of seconds normally)
Write-Host "Waiting 10 seconds"
Start-Sleep -s 10
Write-Host "Make the Service Principal owner of the resource group " $ResouceGroupName
$NewRole = $null
$Retries = 0
While ($NewRole -eq $null -and $Retries -le 6)
{
New-AzureRMRoleAssignment -RoleDefinitionName Owner -ServicePrincipalName $myServicePrincipal.ApplicationId -Scope $ResouceGroupNameScope -ErrorAction SilentlyContinue
$NewRole = Get-AzureRMRoleAssignment -ServicePrincipalName $myServicePrincipal.ApplicationId
Write-Host "NewRole.DisplayName " $NewRole.DisplayName
Write-Host "NewRole.Scope: " $NewRole.Scope
$Retries++
Start-Sleep -s 10
}
Write-Host "Service principal created" -ForegroundColor Green
Step 2: ARM deployment of the vault. Create a file named keyvault2.parameters.json and update the IDs to reflect your installation (5479eaf6-31a3-4be3-9fb6-c2cdadc31735 is the service principal used by Azure Web Apps when accessing the vault).
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vaultName": {
"value": "valueFromParameterFile"
},
"vaultlocation": {
"value": "valueFromParameterFile"
},
"skumode": {
"value": "Standard"
},
"accessPolicyList": {
"value": [
{
"objectId": "The object ID for your AAD user goes here so that you can read secrets etc",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [
"Get",
"List"
],
"secrets": [
"Get",
"List"
],
"certificates": [
"Get",
"List"
]
}
},
{
"objectId": "The object ID for the service principal goes here Get-AzureRmADServicePrincipal -ServicePrincipalName <Service Principal Application ID>",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore"
],
"secrets": [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore"
],
"certificates": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"ManageContacts",
"ManageIssuers",
"GetIssuers",
"ListIssuers",
"SetIssuers",
"DeleteIssuers"
]
},
"applicationId": null
},
{
"objectId": "5479eaf6-31a3-4be3-9fb6-c2cdadc31735",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [],
"secrets": [
"Get"
],
"certificates": []
},
"applicationId": null
}
]
},
"tenant": {
"value": "Your Tenant Id goes here"
},
"isenabledForDeployment": {
"value": true
},
"isenabledForTemplateDeployment": {
"value": false
},
"isenabledForDiskEncryption": {
"value": false
}
}
}
Step 3: ARM deployment of the vault. Create a file named keyvault2.template.json.
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vaultName": {
"type": "string"
},
"vaultlocation": {
"type": "string"
},
"skumode": {
"type": "string",
"defaultValue": "Standard",
"allowedValues": [
"Standard",
"standard",
"Premium",
"premium"
],
"metadata": {
"description": "SKU for the vault"
}
},
"accessPolicyList": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The access policies defined for this vault."
}
},
"tenant": {
"type": "string"
},
"isenabledForDeployment": {
"type": "bool"
},
"isenabledForTemplateDeployment": {
"type": "bool"
},
"isenabledForDiskEncryption": {
"type": "bool"
}
},
"resources": [
{
"apiVersion": "2015-06-01",
"name": "[parameters('vaultName')]",
"location": "[parameters('vaultlocation')]",
"type": "Microsoft.KeyVault/vaults",
"properties": {
"enabledForDeployment": "[parameters('isenabledForDeployment')]",
"enabledForTemplateDeployment": "[parameters('isenabledForTemplateDeployment')]",
"enabledForDiskEncryption": "[parameters('isenabledForDiskEncryption')]",
"accessPolicies": "[parameters('accessPolicyList')]",
"tenantId": "[parameters('tenant')]",
"sku": {
"name": "[parameters('skumode')]",
"family": "A"
}
}
}
]
}
Step 4: Deploy the vault. Start a new PowerShell window and run this script, updating the three IDs.
Clear-Host
Import-Module Azure
Import-Module AzureRM.Resources
$ServicePrincipalApplicationId = "xxx"
$TenantId = "yyy"
$SubscriptionId = "zzz"
$CertificateName = "CN=SomeCertName"
$ResouceGroupName = "myRessourceGroup"
$location = "North Central US"
$VaultName = "MyVault" + (Get-Random -minimum 10000000 -maximum 1000000000)
$MySecret = ConvertTo-SecureString -String "MyValue" -AsPlainText -Force
$Cert = Get-ChildItem cert:\CurrentUser\My\ | Where-Object {$_.Subject -match $CertificateName }
Write-Host "cert.Thumbprint " $cert.Thumbprint
Write-Host "cert.Subject " $cert.Subject
Add-AzureRmAccount -ServicePrincipal -CertificateThumbprint $cert.Thumbprint -ApplicationId $ServicePrincipalApplicationId -TenantId $TenantId
Get-AzureRmSubscription
Set-AzureRmContext -SubscriptionId $SubscriptionId
Write-Host ""
Write-Host "Creating vault" -ForegroundColor Yellow
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResouceGroupName -vaultName $vaultName -vaultlocation $location -isenabledForDeployment $true -TemplateFile ".\keyvault2.template.json" -TemplateParameterFile ".\keyvault2.parameters.json"
Write-Host ""
Write-Host "Key Vault " $vaultName " deployed" -ForegroundColor green
Write-Host "Wait 5 seconds"
Start-Sleep -Seconds 5
Write-Host "Write Secret" -ForegroundColor Yellow
Set-AzureKeyVaultSecret -VaultName $VaultName -Name "MyKey" -SecretValue $MySecret
Write-Host "Wait 10 seconds"
Start-Sleep -Seconds 10
Write-Host "Read secret"
Get-AzureKeyVaultSecret -VaultName $VaultName -Name "MyKey"
Set-AzureRmKeyVaultAccessPolicy -VaultName $name -ObjectId $oId -PermissionsToSecrets get
returns the error:
Set-AzureRmKeyVaultAccessPolicy : Insufficient privileges to complete the operation.
The solution is to add the additional parameter -BypassObjectIdValidation:
Set-AzureRmKeyVaultAccessPolicy -BypassObjectIdValidation -VaultName $name -ObjectId $oId -PermissionsToSecrets get
This looks like a hack, but it works for me. After this, the object with $oId has access to the key vault. (To check the access policies, use Get-AzureRmKeyVault -VaultName $vaultName.)
The solution was to move the permission configuration into the ARM template instead of trying to do it with PowerShell. As soon as I did that, all the permission issues were resolved.
In the ARM template, the object ID I had specified for the service principal was wrong. I thought it was the object ID you find in the portal under App registrations, but no, it actually wants the object ID of the service principal of the Azure AD application.
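As a sketch (using the AzureRM cmdlets from the scripts above; the display name here is a placeholder), the two object IDs can be compared like this:

```powershell
# Object ID of the app registration -- the WRONG one for a Key Vault access policy:
$app = Get-AzureRmADApplication -DisplayNameStartWith "myServicePrincipalName"
$app.ObjectId

# Object ID of the service principal -- the one the access policy actually wants:
$sp = Get-AzureRmADServicePrincipal -SearchString "myServicePrincipalName"
$sp.Id
```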
The ARM template will deploy just fine even with the wrong ID, and everything looks correctly configured, until you start wondering why the icon for your service principal looks different from the other users'. Of course, you will not notice this until much later if, like me, you only had one user ...
Wrong ID (this icon is different):
Correct ID:
This post gave me that final solution.
How do I fix an "Operation 'set' not allowed" error when creating an Azure KeyVault secret programmatically?
I struggled with this issue a lot this week, as I don't have permission in my AAD to add API permissions for the service principal. I found a solution using the ARM Output marketplace task. Using it, I can retrieve the principal IDs of my objects from the ARM template and convert them into pipeline variables, which in turn can be consumed by an Azure PowerShell script to successfully update the key vault access policies.
In the ARM template, I added this output variable to return the website principal ID; this is the information I couldn't get by querying AD.
"outputs": {
"websitePrincipalId": {
"type": "string",
"value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webSiteName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2015-08-31-PREVIEW').principalId]"
}
}
Then I used the ARM Output task to return the output as pipeline variables, which I could then use in an Azure PowerShell script to populate my key vault with the correct access policies:
Set-AzKeyVaultAccessPolicy -VaultName "$(KeyVaultName)" -ObjectId "$(servicePrincipalId)" -PermissionsToSecrets list,get -PassThru -BypassObjectIdValidation
Following your description, I tested this in my lab; I also used my service principal to log in to my Azure subscription. Your cmdlet works for me.
Have you checked your service principal's role? You can check it in the Azure portal.
Please ensure your service principal has the Contributor or Owner role. For more information, please refer to this link.
Update:
I tested your PowerShell script in my lab and it works fine. I suggest you use PowerShell to create the key vault and grant permissions.
2022 update: refer to https://learn.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-cli
Run on Azure Cloud Shell, CLI version 2.43.0:
az keyvault set-policy --name myKeyVault --object-id --secret-permissions --key-permissions --certificate-permissions
Remove the flags you don't want and supply values for the ones you keep.
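For illustration, a filled-in call might look like this (the vault name, object ID, and permission lists are placeholders; keep only the permission flags you need):

```shell
# Grant an object read/write access to secrets and read access to keys and certificates
az keyvault set-policy --name myKeyVault \
    --object-id 11111111-2222-3333-4444-555555555555 \
    --secret-permissions get list set \
    --key-permissions get list \
    --certificate-permissions get list
```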
I've created a runbook to onboard Azure VMs to Update Management.
The idea was taken from the MS provided runbook (https://github.com/azureautomation/runbooks/blob/master/Utility/ARM/Enable-AutomationSolution.ps1).
The MS runbook uses old AzureRM modules, doesn't meet my needs and didn't work straight out of the box anyway.
My runbook in essence does the same thing, looks for VMs with a tag, installs the Microsoft Monitoring Agent and configures it to report to the workspace.
It then updates the query in the workspace to include the VM.
All this works and completes successfully, but there is a message in the Update Management portal saying "1 machine does not have 'Update Management' enabled" and "These machines are reporting to the Log Analytics workspace 'workspacename', but they do not have 'Update Management' enabled on them."
I'm not sure what other steps I am missing or whether something has changed and I can't see what the MS runbook does that mine doesn't.
Modules in runbook:
Az.Accounts
1/6/2021, 3:15 PM
Available
2.2.3
Az.Compute
1/6/2021, 3:17 PM
Available
4.8.0
Az.OperationalInsights
1/6/2021, 3:17 PM
Available
2.3.0
Az.Resources
1/6/2021, 3:17 PM
Available
3.1.1
Az.Storage
1/6/2021, 3:18 PM
Available
3.2.0
My runbook:
#Import-module -Name Az.Profile, Az.Automation, Az.OperationalInsights, Az.Compute, Az.Resources
#exit
#if ($oErr)
#{
# Write-Error -Message "Failed to load needed modules for Runbook, check that Az.Automation, Az.OperationalInsights, Az.Compute and Az.Resources are imported into the Automation Account" -ErrorAction Stop
#}
# Fetch AA RunAs account detail from connection object asset
$ServicePrincipalConnection = Get-AutomationConnection -Name "AzureRunAsConnection" -ErrorAction Stop
$Connection = Add-AzAccount -ServicePrincipal -TenantId $ServicePrincipalConnection.TenantId `
-ApplicationId $ServicePrincipalConnection.ApplicationId -CertificateThumbprint $ServicePrincipalConnection.CertificateThumbprint -ErrorAction Continue -ErrorVariable oErr
if ($oErr)
{
Write-Error -Message "Failed to connect to Azure" -ErrorAction Stop
}
else
{
write-output "connected to Azure"
}
#get the LA subscription
$LogAnalyticsSolutionSubscriptionId = Get-AutomationVariable -Name "LASolutionSubscriptionId"
#get the LA workspace RG
$LogAnalyticsSolutionWorkspaceRG = Get-AutomationVariable -Name "LASolutionWorkspaceRG"
#get the LA workspace Name
$LogAnalyticsSolutionWorkspaceName = Get-AutomationVariable -Name "LASolutionWorkspaceName"
#get the LA workspace Id
$LogAnalyticsSolutionWorkspace = Get-AzOperationalInsightsWorkspace -Name $LogAnalyticsSolutionWorkspaceName -ResourceGroupName $LogAnalyticsSolutionWorkspaceRG
#$LogAnalyticsSolutionWorkspaceId = Get-AutomationVariable -Name "LASolutionWorkspaceId"
#get the LA workspace key
$LogAnalyticsSolutionWorkspaceKey = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $LogAnalyticsSolutionWorkspaceRG -Name $LogAnalyticsSolutionWorkspaceName
#$PublicSettings = @{"workspaceId" = $LogAnalyticsSolutionWorkspace.CustomerId}
#write-output "public settings are: "
#write-output $PublicSettings
write-output "sid is $LogAnalyticsSolutionSubscriptionId, wid is $($LogAnalyticsSolutionWorkspace.CustomerId), key is $($LogAnalyticsSolutionWorkspaceKey.PrimarySharedKey)"
if ($null -eq $LogAnalyticsSolutionSubscriptionId -or $Null -eq $LogAnalyticsSolutionWorkspace -or $null -eq $LogAnalyticsSolutionWorkspaceKey)
{
Write-Error -Message "Unable to retrieve variables from automation account."
exit
}
#get VM list
$VMsWantingPatching=Get-AzResource -TagName "patching" -TagValue "true" -ResourceType "Microsoft.Compute/virtualMachines" -ErrorAction Continue -ErrorVariable oErr
if ($oErr)
{
Write-Error -Message "Could not retrieve VM list" -ErrorAction Stop
}
elseif ($Null -eq $VMsWantingPatching)
{
Write-Error -Message "No VMs need patching" -ErrorAction Stop
}
else
{
write-output "Successfully retrieved VM list"
}
foreach ($VM in $VMsWantingPatching)
{
Try
{
#Configure MMA on the VM to report to the UM LA workspace, unfortunately the workspace ID and key need to be hardcoded
#as passing a parameter does not work.
#Using this method preserves existing workspaces on the MMA agent, this method only adds, not replaces like using ARM would do
$CurrentVM=Get-AzVM -name $VM.name
Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
-ResourceGroupName $VM.ResourceGroupName `
-VMName $VM.Name `
-Publisher "Microsoft.EnterpriseCloud.Monitoring" `
-ExtensionType "MicrosoftMonitoringAgent" `
-TypeHandlerVersion "1.0" `
-Settings @{"workspaceId" = "xxx" } `
-ProtectedSettings @{"workspaceKey" = "xxx"} `
-Location $VM.location
#Add VM to string list for LA function
$VMList += "`"$($CurrentVM.vmid)`", "
}
catch
{
write-error -Message $_.Exception.message
}
<# if ($VMAgentConfig.StatusCode -ne 0)
{
Write-Error -Message "Failed to add workspace to $($VM.Name)"
exit
}
else
{
write-output "Successfully added the workspace to $($VM.Name)"
}#>
}
write-output "VMList to use is: " $($VMList)
#Get the saved queries in LA
$SavedGroups = Get-AzOperationalInsightsSavedSearch -ResourceGroupName $LogAnalyticsSolutionWorkspaceRG `
-WorkspaceName $LogAnalyticsSolutionWorkspaceName -AzureRmContext $LASubscriptionContext -ErrorAction Continue -ErrorVariable oErr
#Find the Update Management saved query from the entire list and put it into a variable
$UpdatesGroup = $SavedGroups.Value | Where-Object {$_.Id -match "MicrosoftDefaultComputerGroup" -and $_.Properties.Category -eq "Updates"}
write-output "group is " $UpdatesGroup | out-string -Width 1000
#set the query/function
#remove trailing comma from list of VMs
$VMList = $($VMList).TrimEnd(', ')
#we use Resource as UUID is not useful when troubleshooting the query and computer gets truncated
$NewQuery="Heartbeat | where VMUUID in ( $($VMList) ) | distinct Computer"
write-output "Newquery is" $($NewQuery)
#just left in for info but no longer use Powershell to update LA query, using ARM as this has etag parameter, which is needed to avoid 409 conflict.
#New-AzOperationalInsightsSavedSearch -ResourceGroupName $LogAnalyticsSolutionWorkspaceRG -WorkspaceName $LogAnalyticsSolutionWorkspaceName `
# -SavedSearchID "Updates|MicrosoftDefaultComputerGroup" -Category "Updates" -FunctionAlias "Updates__MicrosoftDefaultComputerGroup" `
# -Query $NewQuery -DisplayName "MicrosoftDefaultComputerGroup" -Force
#write arm template to a file then make a new resource deployment to update the LA function
#configure arm template
$ArmTemplate = @'
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": ""
},
"id": {
"type": "string",
"defaultValue": ""
},
"resourceName": {
"type": "string",
"defaultValue": ""
},
"category": {
"type": "string",
"defaultValue": ""
},
"displayName": {
"type": "string",
"defaultValue": ""
},
"query": {
"type": "string",
"defaultValue": ""
},
"functionAlias": {
"type": "string",
"defaultValue": ""
},
"etag": {
"type": "string",
"defaultValue": ""
},
"apiVersion": {
"defaultValue": "2017-04-26-preview",
"type": "String"
}
},
"resources": [
{
"apiVersion": "2017-04-26-preview",
"type": "Microsoft.OperationalInsights/workspaces/savedSearches",
"location": "[parameters('location')]",
"name": "[parameters('resourceName')]",
"id": "[parameters('id')]",
"properties": {
"displayname": "[parameters('displayName')]",
"category": "[parameters('category')]",
"query": "[parameters('query')]",
"functionAlias": "[parameters('functionAlias')]",
"etag": "[parameters('etag')]",
"tags": [
{
"Name": "Group", "Value": "Computer"
}
]
}
}
]
}
'@
#Endregion
# Create temporary file to store ARM template in
$TempFile = New-TemporaryFile -ErrorAction Continue -ErrorVariable oErr
if ($oErr)
{
Write-Error -Message "Failed to create temporary file for solution ARM template" -ErrorAction Stop
}
Out-File -InputObject $ArmTemplate -FilePath $TempFile.FullName -ErrorAction Continue -ErrorVariable oErr
if ($oErr)
{
Write-Error -Message "Failed to write ARM template for solution onboarding to temp file" -ErrorAction Stop
}
# Add all of the parameters
$QueryDeploymentParams = @{}
$QueryDeploymentParams.Add("location", $($LogAnalyticsSolutionWorkspace.Location))
$QueryDeploymentParams.Add("id", $UpdatesGroup.Id)
$QueryDeploymentParams.Add("resourceName", ($LogAnalyticsSolutionWorkspaceName+ "/Updates|MicrosoftDefaultComputerGroup").ToLower())
$QueryDeploymentParams.Add("category", "Updates")
$QueryDeploymentParams.Add("displayName", "MicrosoftDefaultComputerGroup")
$QueryDeploymentParams.Add("query", $NewQuery)
$QueryDeploymentParams.Add("functionAlias", $SolutionType + "__MicrosoftDefaultComputerGroup")
$QueryDeploymentParams.Add("etag", "*")#$UpdatesGroup.ETag) etag is null for the query and referencing null property doesn't work so * instead
#$QueryDeploymentParams.Add("apiVersion", $SolutionApiVersion)
# Create deployment name
$DeploymentName = "EnableMultipleAutomation" + (Get-Date).ToFileTimeUtc()
$ObjectOutPut = New-AzResourceGroupDeployment -ResourceGroupName $LogAnalyticsSolutionWorkspaceRG -TemplateFile $TempFile.FullName `
-Name $DeploymentName `
-TemplateParameterObject $QueryDeploymentParams `
-AzureRmContext $SubscriptionContext -ErrorAction Continue -ErrorVariable oErr
if ($oErr)
{
Write-Error -Message "Failed to add VM: $VMName to solution: $SolutionType" -ErrorAction Stop
}
else
{
Write-Output -InputObject $ObjectOutPut
Write-Output -InputObject "VM: $VMName successfully added to solution: $SolutionType"
}
# Remove temp file with arm template
Remove-Item -Path $TempFile.FullName -Force
As I say, the runbook seems to complete successfully, but could anyone advise which task I am missing, please?
Thanks,
Neil.
OK, so I think I have sorted it.
The answer lies in the LA query.
My query did not use Computer as a search field, since it truncates at 15 characters; I only used VMUUID. Even if the result set is identical, Update Management will fail unless your query is in the expected form.
The correct query should include Computer even though it is unused. You also need the tildes, as Update Management clearly expects a very precise query syntax.
Here is an example of a working query:
Heartbeat | where Computer in~ ("") or VMUUID in~ ("xxxxx") | distinct Computer
Then you must restart the MMA on the box and wait a long while for the console to update.
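On my machines the MMA runs as the Windows service named HealthService (worth verifying on your box); restarting it from an elevated PowerShell session looks like:

```powershell
# Restart the Microsoft Monitoring Agent service so it picks up the new workspace configuration
Restart-Service -Name HealthService -Force
```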
I am taking my first crack at writing a DSC (Desired State Configuration) file to go with an ARM (Azure Resource Manager) template that deploys a Windows Server 2016 VM. Everything was working great until I tried to pass a username/password so I can create a local Windows user account. I can't get this to work at all (see the error message below).
My question is, how do I use an ARM template to pull a password from an Azure key vault and pass it to a DSC powershell extension?
Here is my current setup:
azuredeploy.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"deployExecUsername": {
"type": "string",
"defaultValue": "DeployExec"
},
"deployExecPassword": {
"type": "securestring"
},
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "Auto-generated container in staging storage account to receive post-build staging folder upload"
}
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "Auto-generated token to access _artifactsLocation"
}
},
"virtualMachineName": {
"type": "string",
"defaultValue": "web-app-server"
}
},
"variables": {
"CreateLocalUserArchiveFolder": "DSC",
"CreateLocalUserArchiveFileName": "CreateLocalUser.zip"
},
"resources": [
{
"name": "[concat(parameters('virtualMachineName'), '/', 'Microsoft.Powershell.DSC')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "eastus2",
"apiVersion": "2016-03-30",
"dependsOn": [ ],
"tags": {
"displayName": "CreateLocalUser"
},
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.9",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('CreateLocalUserArchiveFolder'), '/', variables('CreateLocalUserArchiveFileName'))]",
"script": "CreateLocalUser.ps1",
"function": "Main"
},
"configurationArguments": {
"nodeName": "[parameters('virtualMachineName')]"
}
},
"protectedSettings": {
"configurationArguments": {
"deployExecCredential": {
"UserName": "[parameters('deployExecUsername')]",
"Password": "[parameters('deployExecPassword')]"
}
},
"configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
}
}
}],
"outputs": {}
}
azuredeploy.parameters.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"deployExecPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<GUID>/resourceGroups/<Resource Group Name>/providers/Microsoft.KeyVault/vaults/<Resource Group Name>-key-vault"
},
"secretName": "web-app-server-deployexec-password"
}
}
}
}
DSC/CreateLocalUser.ps1
Configuration Main
{
Param (
[string] $nodeName,
[PSCredential]$deployExecCredential
)
Import-DscResource -ModuleName PSDesiredStateConfiguration
Node $nodeName
{
User DeployExec
{
Ensure = "Present"
Description = "Deployment account for Web Deploy"
UserName = $deployExecCredential.UserName
Password = $deployExecCredential
PasswordNeverExpires = $true
PasswordChangeRequired = $false
PasswordChangeNotAllowed = $true
}
}
}
Deploy-AzureResourceGroup.ps1 (Default from Azure Resource Group template)
#Requires -Version 3.0
Param(
[string] [Parameter(Mandatory=$true)] $ResourceGroupLocation,
[string] $ResourceGroupName = 'AzureResourceGroup2',
[switch] $UploadArtifacts,
[string] $StorageAccountName,
[string] $StorageContainerName = $ResourceGroupName.ToLowerInvariant() + '-stageartifacts',
[string] $TemplateFile = 'azuredeploy.json',
[string] $TemplateParametersFile = 'azuredeploy.parameters.json',
[string] $ArtifactStagingDirectory = '.',
[string] $DSCSourceFolder = 'DSC',
[switch] $ValidateOnly
)
try {
[Microsoft.Azure.Common.Authentication.AzureSession]::ClientFactory.AddUserAgent("VSAzureTools-$UI$($host.name)".replace(' ','_'), '3.0.0')
} catch { }
$ErrorActionPreference = 'Stop'
Set-StrictMode -Version 3
function Format-ValidationOutput {
param ($ValidationOutput, [int] $Depth = 0)
Set-StrictMode -Off
return @($ValidationOutput | Where-Object { $_ -ne $null } | ForEach-Object { @(' ' * $Depth + ': ' + $_.Message) + @(Format-ValidationOutput @($_.Details) ($Depth + 1)) })
}
$OptionalParameters = New-Object -TypeName Hashtable
$TemplateFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateFile))
$TemplateParametersFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateParametersFile))
if ($UploadArtifacts) {
# Convert relative paths to absolute paths if needed
$ArtifactStagingDirectory = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $ArtifactStagingDirectory))
$DSCSourceFolder = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $DSCSourceFolder))
# Parse the parameter file and update the values of artifacts location and artifacts location SAS token if they are present
$JsonParameters = Get-Content $TemplateParametersFile -Raw | ConvertFrom-Json
if (($JsonParameters | Get-Member -Type NoteProperty 'parameters') -ne $null) {
$JsonParameters = $JsonParameters.parameters
}
$ArtifactsLocationName = '_artifactsLocation'
$ArtifactsLocationSasTokenName = '_artifactsLocationSasToken'
$OptionalParameters[$ArtifactsLocationName] = $JsonParameters | Select -Expand $ArtifactsLocationName -ErrorAction Ignore | Select -Expand 'value' -ErrorAction Ignore
$OptionalParameters[$ArtifactsLocationSasTokenName] = $JsonParameters | Select -Expand $ArtifactsLocationSasTokenName -ErrorAction Ignore | Select -Expand 'value' -ErrorAction Ignore
# Create DSC configuration archive
if (Test-Path $DSCSourceFolder) {
$DSCSourceFilePaths = @(Get-ChildItem $DSCSourceFolder -File -Filter '*.ps1' | ForEach-Object -Process {$_.FullName})
foreach ($DSCSourceFilePath in $DSCSourceFilePaths) {
$DSCArchiveFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.zip'
Publish-AzureRmVMDscConfiguration $DSCSourceFilePath -OutputArchivePath $DSCArchiveFilePath -Force -Verbose
}
}
# Create a storage account name if none was provided
if ($StorageAccountName -eq '') {
$StorageAccountName = 'stage' + ((Get-AzureRmContext).Subscription.SubscriptionId).Replace('-', '').substring(0, 19)
}
$StorageAccount = (Get-AzureRmStorageAccount | Where-Object{$_.StorageAccountName -eq $StorageAccountName})
# Create the storage account if it doesn't already exist
if ($StorageAccount -eq $null) {
$StorageResourceGroupName = 'ARM_Deploy_Staging'
New-AzureRmResourceGroup -Location "$ResourceGroupLocation" -Name $StorageResourceGroupName -Force
$StorageAccount = New-AzureRmStorageAccount -StorageAccountName $StorageAccountName -Type 'Standard_LRS' -ResourceGroupName $StorageResourceGroupName -Location "$ResourceGroupLocation"
}
# Generate the value for artifacts location if it is not provided in the parameter file
if ($OptionalParameters[$ArtifactsLocationName] -eq $null) {
$OptionalParameters[$ArtifactsLocationName] = $StorageAccount.Context.BlobEndPoint + $StorageContainerName
}
# Copy files from the local storage staging location to the storage account container
New-AzureStorageContainer -Name $StorageContainerName -Context $StorageAccount.Context -ErrorAction SilentlyContinue *>&1
$ArtifactFilePaths = Get-ChildItem $ArtifactStagingDirectory -Recurse -File | ForEach-Object -Process {$_.FullName}
foreach ($SourcePath in $ArtifactFilePaths) {
Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.length + 1) `
-Container $StorageContainerName -Context $StorageAccount.Context -Force
}
# Generate a 4 hour SAS token for the artifacts location if one was not provided in the parameters file
if ($OptionalParameters[$ArtifactsLocationSasTokenName] -eq $null) {
$OptionalParameters[$ArtifactsLocationSasTokenName] = ConvertTo-SecureString -AsPlainText -Force `
(New-AzureStorageContainerSASToken -Container $StorageContainerName -Context $StorageAccount.Context -Permission r -ExpiryTime (Get-Date).AddHours(4))
}
}
# Create or update the resource group using the specified template file and template parameters file
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Verbose -Force
if ($ValidateOnly) {
$ErrorMessages = Format-ValidationOutput (Test-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
-TemplateFile $TemplateFile `
-TemplateParameterFile $TemplateParametersFile `
@OptionalParameters)
if ($ErrorMessages) {
Write-Output '', 'Validation returned the following errors:', @($ErrorMessages), '', 'Template is invalid.'
}
else {
Write-Output '', 'Template is valid.'
}
}
else {
New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $TemplateFile).BaseName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm')) `
-ResourceGroupName $ResourceGroupName `
-TemplateFile $TemplateFile `
-TemplateParameterFile $TemplateParametersFile `
@OptionalParameters `
-Force -Verbose `
-ErrorVariable ErrorMessages
if ($ErrorMessages) {
Write-Output '', 'Template deployment returned the following errors:', @(@($ErrorMessages) | ForEach-Object { $_.Exception.Message.TrimEnd("`r`n") })
}
}
Do note that my original template deploys the entire server, but I am able to reproduce the issue I am having with the above template and any old Windows Server 2016 VM.
I am running the template through Visual Studio 2017 Community:
The template validates and runs, but when I run it I am getting the following error message:
22:26:41 - New-AzureRmResourceGroupDeployment : 10:26:41 PM - VM has reported a failure when processing extension
22:26:41 - 'Microsoft.Powershell.DSC'. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred
22:26:41 - while processing configuration 'Main'. Please review the errors reported in error stream and modify your configuration
22:26:41 - code appropriately. System.InvalidOperationException error processing property 'Password' OF TYPE 'User': Converting
22:26:41 - and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF
22:26:41 - file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
22:26:41 - At C:\Packages\Plugins\Microsoft.Powershell.DSC\2.77.0.0\DSCWork\CreateLocalUser.4\CreateLocalUser.ps1:12 char:3
22:26:41 - + User Converting and storing encrypted passwords as plain text is not recommended. For more information on securing
22:26:41 - credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729 Cannot find path
22:26:41 - 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist. Cannot find path
22:26:41 - 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
22:26:41 - Another common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a
22:26:41 - typed parameter in DSC Configuration, for example:
22:26:41 - configuration Example {
22:26:41 - param([PSCredential] $UserAccount)
22:26:41 - ...
22:26:41 - }.
22:26:41 - Please correct the input and retry executing the extension.".
22:26:41 - At F:\Users\Shad\Documents\Visual Studio 2017\Projects\AzureResourceGroup2\AzureResourceGroup2\bin\Debug\staging\AzureR
22:26:41 - esourceGroup2\Deploy-AzureResourceGroup.ps1:108 char:5
22:26:41 - + New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $Templat ...
22:26:41 - + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
22:26:41 - + CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
22:26:41 - + FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDep
22:26:41 - loymentCmdlet
What I Tried
I have already tried looking at this question:
Passing credentials to DSC script from arm template
but it seems to use the old ARM template format to construct the DSC call, and since I am not familiar with it, I am unable to work out what the extra parameters are for.
I also took a look at this question:
Securely pass credentials to DSC Extension from ARM Template
and the accepted answer is to just use PsDSCAllowPlainTextPassword = $true. While this doesn't seem like the best way, I tried adding the following configuration data file.
CreateLocalUser.psd1
#
# CreateLocalUser.psd1
#
@{
    AllNodes = @(
        @{
            NodeName = '*'
            PSDscAllowPlainTextPassword = $true
        }
    )
}
And changing Deploy-AzureResourceGroup.ps1 to pass these settings to the DSC configuration, as follows.
# Create DSC configuration archive
if (Test-Path $DSCSourceFolder) {
    $DSCSourceFilePaths = @(Get-ChildItem $DSCSourceFolder -File -Filter '*.ps1' | ForEach-Object -Process { $_.FullName })
    foreach ($DSCSourceFilePath in $DSCSourceFilePaths) {
        $DSCArchiveFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.zip'
        $DSCConfigDataFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.psd1'
        Publish-AzureRmVMDscConfiguration $DSCSourceFilePath -OutputArchivePath $DSCArchiveFilePath -ConfigurationDataPath $DSCConfigDataFilePath -Force -Verbose
    }
}
However, I am not getting any change in the error message when doing so.
I have been through loads of the Azure documentation to try to work this out. The link in the error message is entirely unhelpful as there are no examples of how to use the encryption with an ARM template. Most of the examples are showing running Powershell scripts rather than an ARM template. And there isn't a single example in the documentation anywhere on how to retrieve the password from a key vault and pass it into a DSC extension file from an ARM template. Is this even possible?
Note that I would be happy with simply using Visual Studio to do my deployment, if I could just get it working. But I have been working on this issue for several days and cannot seem to find a single solution that works. So, I thought I would ask here before throwing in the towel and just using the admin Windows account for web deployment.
UPDATE 2018-12-21
I noticed when running the deploy command through Visual Studio 2017 that the log contains an error message:
20:13:43 - Build started.
20:13:43 - Project "web-app-server.deployproj" (StageArtifacts target(s)):
20:13:43 - Project "web-app-server.deployproj" (ContentFilesProjectOutputGroup target(s)):
20:13:43 - Done building project "web-app-server.deployproj".
20:13:43 - Done building project "web-app-server.deployproj".
20:13:43 - Build succeeded.
20:13:43 - Launching PowerShell script with the following command:
20:13:43 - 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\Deploy-AzureResourceGroup.ps1' -StorageAccountName 'staged<xxxxxxxxxxxxxxxxx>' -ResourceGroupName 'web-app-server' -ResourceGroupLocation 'eastus2' -TemplateFile 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\web-app-server.json' -TemplateParametersFile 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\web-app-server.parameters.json' -ArtifactStagingDirectory '.' -DSCSourceFolder '.\DSC' -UploadArtifacts
20:13:43 - Deploying template using PowerShell script failed.
20:13:43 - Tell us about your experience at https://go.microsoft.com/fwlink/?LinkId=691202
After the error message occurs, it continues. I suspect it may be falling back to using another method and it is that method that is causing the failure and why others are not seeing what I am.
Sadly, no matter what I try, this does not work with DSC.
Workaround
For now, I am working around this issue using a custom script extension, like this:
{
  "name": "[concat(parameters('virtualMachineName'), '/addWindowsAccounts')]",
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2018-06-01",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": []
    },
    "protectedSettings": {
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -Command \"& { $secureDeployExecPassword = ConvertTo-SecureString -String ', variables('quote'), parameters('deployExecPassword'), variables('quote'), ' -AsPlainText -Force; New-LocalUser -AccountNeverExpires -UserMayNotChangePassword -Name ', variables('quote'), parameters('deployExecUsername'), variables('quote'), ' -Password $secureDeployExecPassword -FullName ', variables('quote'), parameters('deployExecUsername'), variables('quote'), ' -Description ', variables('quote'), 'Deployment account for Web Deploy', variables('quote'), ' -ErrorAction Continue ', '}\"')]"
    }
  }
}
And then using dependsOn to force the custom script extension to run before DSC by setting these on the DSC extension.
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]",
"addWindowsAccounts"
],
Not the ideal solution, but it is secure and gets me past this blocking issue without resorting to using the administrator password for website deployment.
Here's one of the recent ones I'm using:
Configuration:
Param(
    [System.Management.Automation.PSCredential]$Admincreds
)
and in the template I do this:
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.20",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": {
"url": "url.zip",
"script": "file.ps1",
"function": "configuration"
}
},
"protectedSettings": {
"configurationArguments": {
"adminCreds": {
"userName": "username",
"password": "password"
}
}
}
You don't need PSDscAllowPlainTextPassword, because configuration arguments placed in protectedSettings are automatically encrypted by the Microsoft.Powershell.DSC extension.
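To also answer the part of the question about retrieving the password from Key Vault: instead of hardcoding the credential, an ARM parameters file can reference a Key Vault secret, and Resource Manager resolves it at deployment time. A minimal sketch of such a parameters file; the vault resource id, secret name, and parameter name below are placeholders you would replace with your own:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "adminPassword"
      }
    }
  }
}
```

Note that the vault has to allow template deployment access (e.g. created or updated with -EnabledForTemplateDeployment) for Resource Manager to be permitted to read the secret during deployment.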
Running this template against the same resource group produces the error StorageAccountAlreadyTaken, but it is expected to perform an incremental deployment, i.e. not to create any new resources. How can I fix this?
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('prodSaName')]",
"tags": { "displayName": "Storage account" },
"apiVersion": "2016-01-01",
"location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
"kind": "Storage",
"properties": {}
},
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[ge.saName(parameters('brn'), parameters('environments')[copyIndex()])]",
"tags": { "displayName": "Storage account" },
"apiVersion": "2016-01-01",
"location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
"kind": "Storage",
"copy": {
"name": "EnvStorageAccounts",
"count": "[length(parameters('environments'))]"
},
Run New-AzureRmResourceGroupDeployment @args -debug with this template and it outputs:
New-AzureRmResourceGroupDeployment : 15:03:38 - Error: Code=StorageAccountAlreadyTaken; Message=The storage account named
UNIQUENAMEOFTHERESOURCE is already taken.
At C:\coding\gameo\gameoemergency\provision.run.ps1:101 char:9
+ New-AzureRmResourceGroupDeployment @args -debug
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdle
t
P.S. Somehow, running from VSTS does not produce this result; it only happens when run from my local machine. Strangely, there is no such error in the Activity Log either.
Update.
If I don't select the subscription with both cmdlets as below, but only with the RM cmdlet, there are no errors.
Select-AzureRmSubscription -SubscriptionID $Cfg.subscriptionId > $null
# this seems to be the reason of the error, if removed - works:
Select-AzureSubscription -SubscriptionId $Cfg.subscriptionId > $null
# ... in order to run:
exec { Start-AzureWebsiteJob @startArgs | Out-Null }
Not sure if it helps, but I had this issue with API versions prior to 2016-01-01, as there was a known bug. I updated to 2016-12-01, which was the latest at the time, and it worked for me.
Have you tried updating the API version to the latest and trying again? The latest is 2018-02-01.
This is the result of selecting a subscription twice for Resource Manager cmdlets and using Select-AzureSubscription as below:
Select-AzureRmSubscription -SubscriptionID $Cfg.subscriptionId > $null
# this seems to be the reason of the error, if removed - works:
# Select-AzureSubscription -SubscriptionId $Cfg.subscriptionId > $null
To avoid the old (pre-RM) cmdlets, we can use Invoke-AzureRmResourceAction. See the example at https://stackoverflow.com/a/51712321/511144
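For example, starting a triggered webjob through the Resource Manager API directly could look roughly like this. This is a sketch, not a drop-in replacement: the resource type, api-version, and the $rg, $webAppName and $webJobName variables are assumptions you would adjust for your own resources:

```powershell
# Hypothetical sketch: run a triggered webjob via the RM API
# instead of the classic Start-AzureWebsiteJob cmdlet.
Invoke-AzureRmResourceAction -ResourceGroupName $rg `
    -ResourceType Microsoft.Web/sites/triggeredwebjobs `
    -ResourceName "$webAppName/$webJobName" `
    -Action run `
    -ApiVersion 2015-08-01 `
    -Force
```

This keeps the whole script on the RM cmdlets, so the conflicting Select-AzureSubscription call can be removed entirely.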
A good thing to do before performing a deployment is to validate whether the resources can actually be created with the names provided. For Event Hubs and Storage Accounts there are built-in cmdlets that allow you to check this.
For EventHub
$availability = Test-AzureRmEventHubName -Namespace $eventHubNamespaceName | Select-Object -ExpandProperty NameAvailable
if ($availability -eq $true) {
    Write-Host -ForegroundColor Green "`r`nEntered Event Hub Namespace name is available"
}
For Storage Account
while ($true) {
    Write-Host -ForegroundColor Yellow "`r`nEnter Storage account name (Press Enter for default) : " -NoNewline
    $storageAccountName = Read-Host
    if ([string]::IsNullOrEmpty($storageAccountName)) {
        $storageAccountName = $defaultstorageAccountName
    }
    Write-Host -ForegroundColor Yellow "`r`nChecking whether the entered name for Storage account is available"
    $availability = Get-AzureRmStorageAccountNameAvailability -Name $storageAccountName | Select-Object -ExpandProperty NameAvailable
    if ($availability -eq $true) {
        Write-Host -ForegroundColor Green "`r`nEntered Storage account name is available"
        break
    }
    Write-Host "`r`nEnter valid Storage account name"
}
This will prevent deployment errors by validating before deployment and asking the user to enter a valid name again if the entered one is not present.
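You can also pre-check the name format locally before calling Azure at all: storage account names must be 3 to 24 characters of lowercase letters and digits only. The helper below is my own sketch, not a built-in cmdlet:

```powershell
# Checks only the *format* rules for storage account names
# (3-24 characters, lowercase letters and digits).
# Global availability still has to be verified with
# Get-AzureRmStorageAccountNameAvailability.
function Test-StorageAccountNameFormat {
    param([string]$Name)
    return $Name -cmatch '^[a-z0-9]{3,24}$'
}
```

Rejecting obviously invalid input locally saves a round trip to the availability API inside the prompt loop above.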
I need to create an Azure Container Service with a single Azure REST API call.
Is it possible? If yes, please help me.
Yes, if you only need a Container Service with no separately created Virtual Machine at all. The following PowerShell script calls such a REST API.
Add-Type -Path 'C:\Program Files\Microsoft Azure Active Directory Connect\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'
$tenantID = "<the tenant ID of your Subscription>"
$loginEndpoint = "https://login.windows.net/"
$managementResourceURI = "https://management.core.windows.net/"
$redirectURI = New-Object System.Uri ("urn:ietf:wg:oauth:2.0:oob")
$clientID = "1950a258-227b-4e31-a9cf-717495945fc2"
$subscriptionID = "<your subscription id>"
$authString = $loginEndpoint + $tenantID
$authenticationContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext ($authString, $false)
$promptBehaviour = [Microsoft.IdentityModel.Clients.ActiveDirectory.PromptBehavior]::Auto
$userIdentifierType = [Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifierType]::RequiredDisplayableId
$userIdentifier = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier ("<your azure account>", $userIdentifierType)
$authenticationResult = $authenticationContext.AcquireToken($managementResourceURI, $clientID, $redirectURI, $promptBehaviour, $userIdentifier);
# construct authorization header for the REST API.
$authHeader = $authenticationResult.AccessTokenType + " " + $authenticationResult.AccessToken
$headers = @{"Authorization"=$authHeader; "Content-Type"="application/json"}
# Invoke the REST API.
Invoke-RestMethod -Method PUT -Uri "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/<the resource group>/providers/Microsoft.ContainerService/containerServices/<the container>?api-version=2016-03-30" -Headers $headers -infile containerService.json
For the containerService.json, here is a sample of DCOS container.
{
"name": "containerservice-mooncaketeam",
"type": "Microsoft.ContainerService/ContainerServices",
"location": "eastasia",
"properties": {
"orchestratorProfile": {
"orchestratorType": "DCOS"
},
"masterProfile": {
"count": 1,
"dnsPrefix": "jackstestmgmt",
},
"agentPoolProfiles": [
{
"name": "agentpools",
"count": 1,
"vmSize": "Standard_D1",
"dnsPrefix": "jackstestagents",
}
],
"linuxProfile": {
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArgPsnnGrA2gbmXKEd0O1zWGmiRhfBgmGugAwC7IGcm71RjqoISHz0MKZyJbt/gvX6BKogdCAaN1rDisuOMSsd7LonkURtOJV3RszdAKtk3o+tBtrJy1RhGOIA76/5XQWaCFgoiQGGwF9KYn9VnwjwcQki2OOZIq1YJAkrZxgkNPkMKjVlmsyGJJkpSHyIpzVqZWOYVFP8mon8kll+ZUec+tPK+RYxNZQadxvUzRMvCGdHCGT274KpgnP0FgemrS9/SCJCHW4qZawANp8uBrjLwSTstqmA1uJddZ3RPZu+BgZ68EihF0wG3GsvB4tV0fBYnxRiElYn+FdaZlYbZDobw=="
}
]
},
"adminUsername": "admin"
}
}
}
This REST API call will create 1 Container Service, 1 availability set, 3 network security groups, and 2 public IP addresses.
I am trying to create RM template that creates a web site and configures this for logging into the blob storage.
I saw this post in StackOverflow, which shows how to configure this.
The json looks somewhat following:
{
"id": "/subscriptions/.../config/logs",
"name": "logs",
"type": "Microsoft.Web/sites/config",
"location": "North Central US",
"properties": {
"applicationLogs": {
"fileSystem": {
"level": "Off"
},
"azureBlobStorage": {
"level": "Information",
"sasUrl": "...",
"retentionInDays": 14
}
},
...
}
However, I couldn't figure out how the sasUrl should be calculated/resolved into this file?
To create a SAS URL for a container, you would use the New-AzureStorageContainerSASToken cmdlet.
A script like this should work:
$context = New-AzureStorageContext -StorageAccountName $name `
    -StorageAccountKey ((Get-AzureRmStorageAccountKey `
        -ResourceGroupName $rg -Name $name).Key1)
New-AzureStorageContainerSASToken -Name sql `
    -Permission rwdl -Context $context
You might need to add -FullUri to the end of the last one.
Moim, I don't think you can create a SAS token within the template. As MichaelB mentioned, you can create it before you do the deployment and either pass it in as a parameter or simply hardcode it in the template (not ideal, since this is a secret). A couple of things to add to Michael's code: 1) you need the full URL and a container, and 2) you'll want to set an expiry time on the token so it doesn't expire and prevent logging to it. For example:
$SasToken = New-AzureStorageContainerSASToken -Container 'logs' `
    -Context $context -Permission rwdl -ExpiryTime (Get-Date).AddYears(1) -FullUri
The other way you can do this is to create a sasToken, store it in Azure KeyVault, and reference that KeyVault secret in the template. This blog has a few posts that walk through setting that up: http://www.codeisahighway.com/how-to-refer-an-azure-key-vault-secret-in-an-azure-resource-manager-deployment-template/
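If you go the parameter route, wiring the generated token into the deployment could look like this. The sasUrl parameter name is my own assumption; your template would need to declare a matching (ideally securestring) parameter and use it in the azureBlobStorage block:

```powershell
# Pass the SAS URL generated above into the template at deployment time.
# 'sasUrl' is a hypothetical parameter name your template must declare.
New-AzureRmResourceGroupDeployment -ResourceGroupName $rg `
    -TemplateFile $templateFile `
    -sasUrl $SasToken
```

Declaring the parameter as securestring keeps the token out of the deployment's visible outputs.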