I need to create an Azure Container Service with a single Azure REST API call.
Is it possible? If yes, please help me.
Yes, if all you need is the Container Service itself, without defining any Virtual Machines yourself. The following PowerShell script calls such a REST API.
Add-Type -Path 'C:\Program Files\Microsoft Azure Active Directory Connect\Microsoft.IdentityModel.Clients.ActiveDirectory.dll'
$tenantID = "<the tenant ID of your Subscription>"
$loginEndpoint = "https://login.windows.net/"
$managementResourceURI = "https://management.core.windows.net/"
$redirectURI = New-Object System.Uri ("urn:ietf:wg:oauth:2.0:oob")
$clientID = "1950a258-227b-4e31-a9cf-717495945fc2"
$subscriptionID = "<your subscription id>"
$authString = $loginEndpoint + $tenantID
$authenticationContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext ($authString, $false)
$promptBehaviour = [Microsoft.IdentityModel.Clients.ActiveDirectory.PromptBehavior]::Auto
$userIdentifierType = [Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifierType]::RequiredDisplayableId
$userIdentifier = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.UserIdentifier ("<your azure account>", $userIdentifierType)
$authenticationResult = $authenticationContext.AcquireToken($managementResourceURI, $clientID, $redirectURI, $promptBehaviour, $userIdentifier);
# construct authorization header for the REST API.
$authHeader = $authenticationResult.AccessTokenType + " " + $authenticationResult.AccessToken
$headers = @{"Authorization"=$authHeader; "Content-Type"="application/json"}
# Invoke the REST API.
Invoke-RestMethod -Method PUT -Uri "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/<the resource group>/providers/Microsoft.ContainerService/containerServices/<the container>?api-version=2016-03-30" -Headers $headers -infile containerService.json
For containerService.json, here is a sample for a DC/OS container service.
{
"name": "containerservice-mooncaketeam",
"type": "Microsoft.ContainerService/ContainerServices",
"location": "eastasia",
"properties": {
"orchestratorProfile": {
"orchestratorType": "DCOS"
},
"masterProfile": {
"count": 1,
"dnsPrefix": "jackstestmgmt",
},
"agentPoolProfiles": [
{
"name": "agentpools",
"count": 1,
"vmSize": "Standard_D1",
"dnsPrefix": "jackstestagents",
}
],
"linuxProfile": {
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEArgPsnnGrA2gbmXKEd0O1zWGmiRhfBgmGugAwC7IGcm71RjqoISHz0MKZyJbt/gvX6BKogdCAaN1rDisuOMSsd7LonkURtOJV3RszdAKtk3o+tBtrJy1RhGOIA76/5XQWaCFgoiQGGwF9KYn9VnwjwcQki2OOZIq1YJAkrZxgkNPkMKjVlmsyGJJkpSHyIpzVqZWOYVFP8mon8kll+ZUec+tPK+RYxNZQadxvUzRMvCGdHCGT274KpgnP0FgemrS9/SCJCHW4qZawANp8uBrjLwSTstqmA1uJddZ3RPZu+BgZ68EihF0wG3GsvB4tV0fBYnxRiElYn+FdaZlYbZDobw=="
}
]
},
"adminUsername": "admin"
}
}
}
This REST API call will create 1 Container Service, 1 availability set, 3 network security groups, and 2 public IP addresses.
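If you want to check the result afterwards, the same endpoint also supports GET. The following is a minimal sketch that reuses $headers and $subscriptionID from the script above; the resource group and container service names are the same placeholders.
# Sketch: poll the container service until provisioning finishes.
$uri = "https://management.azure.com/subscriptions/$subscriptionID/resourceGroups/<the resource group>/providers/Microsoft.ContainerService/containerServices/<the container>?api-version=2016-03-30"
do {
    Start-Sleep -Seconds 30
    $cs = Invoke-RestMethod -Method GET -Uri $uri -Headers $headers
    Write-Host "Provisioning state:" $cs.properties.provisioningState
} while ($cs.properties.provisioningState -eq "Creating")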
Related
Can someone please help me with how to create a variable group in the Azure DevOps Library using PowerShell? The source file is in .json/.csv format. This is what I have so far; I am new to PowerShell, so please excuse any errors.
$file = get-content = C:\test\Variables.json
ForEach {
$name = $_.Variable;
$Issecret = $_.IsSecret;
$Value = $_.Value;
az pipelines variable-group create --name test -p $ProjectName --org $orgUrl --authorize --variables -$name -$issecret -value
}
I am trying to import all the values into the Azure DevOps Library.
Name Is Secret Value
-------------------------- ----------- -------------------------------------------------------------------------------
Test1 False https://test1.com
Test2 False https://test2.com
You can get the data from the CSV file with the script below (the CSV is assumed to have Name, Value, and IsSecret columns):
$body = @{
"variables" = @{};
"type" = "Vsts";
"name" = "TestVariableGroup1";
"description" = "A test variable group"
}
$employee_list = Import-Csv "D:\test\data.csv"
foreach ($employee in $employee_list){
$body.variables[$employee.name] = @{value = $employee.value; isSecret = $employee.isSecret}
}
$body | ConvertTo-Json
Then you can use the REST API to create the variable group.
Request url:
POST https://dev.azure.com/{organization}/_apis/distributedtask/variablegroups?api-version=6.0-preview.2
Request body:
{
"description": "xxxx",
"name": "xxx",
"providerData": null,
"type": "Vsts",
"variables": {"test1": {
"isSecret": true,
"value": "fortest1"
},
"test2": {
"isSecret": true,
"value": "fortest2"
}},
"variableGroupProjectReferences": [{
"description": "xxxx",
"name": "xxx",
"projectReference": {
"id": "projectId",
"name": ""
}
}]
}
Sample script:
$token = "PAT token"
$url = "https://dev.azure.com/{organization}/_apis/distributedtask/variablegroups?api-version=6.0-preview.2"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$JSON = @'
request body
'@
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -ContentType "application/json" -Body $JSON
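To wire the two snippets together, a sketch like the following converts the $body hashtable built from the CSV above into JSON and posts it with the same authorization header. The organization placeholder and PAT are assumed to be filled in; depending on the api-version you may also need to add variableGroupProjectReferences as in the request body above.
# Sketch: post the $body hashtable built from the CSV loop above.
# Assumes $token already holds the Base64-encoded PAT and $url the request URL from the sample script.
$JSON = $body | ConvertTo-Json -Depth 10
$response = Invoke-RestMethod -Uri $url -Headers @{Authorization = "Basic $token"} -Method Post -ContentType "application/json" -Body $JSON
Write-Output "Created variable group with id $($response.id)"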
I am taking my first crack at making a DSC (Desired State Configuration) file to go with an ARM (Azure Resource Manager) template to deploy a Windows Server 2016 and so far everything was working great until I tried to pass a username/password so I can create a local Windows user account. I can't seem to make this function at all (see the error message below).
My question is: how do I use an ARM template to pull a password from an Azure Key Vault and pass it to a DSC PowerShell extension?
Here is my current setup:
azuredeploy.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"deployExecUsername": {
"type": "string",
"defaultValue": "DeployExec"
},
"deployExecPassword": {
"type": "securestring"
},
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "Auto-generated container in staging storage account to receive post-build staging folder upload"
}
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "Auto-generated token to access _artifactsLocation"
}
},
"virtualMachineName": {
"type": "string",
"defaultValue": "web-app-server"
}
},
"variables": {
"CreateLocalUserArchiveFolder": "DSC",
"CreateLocalUserArchiveFileName": "CreateLocalUser.zip"},
"resources": [
{
"name": "[concat(parameters('virtualMachineName'), '/', 'Microsoft.Powershell.DSC')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"location": "eastus2",
"apiVersion": "2016-03-30",
"dependsOn": [ ],
"tags": {
"displayName": "CreateLocalUser"
},
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.9",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": {
"url": "[concat(parameters('_artifactsLocation'), '/', variables('CreateLocalUserArchiveFolder'), '/', variables('CreateLocalUserArchiveFileName'))]",
"script": "CreateLocalUser.ps1",
"function": "Main"
},
"configurationArguments": {
"nodeName": "[parameters('virtualMachineName')]"
}
},
"protectedSettings": {
"configurationArguments": {
"deployExecCredential": {
"UserName": "[parameters('deployExecUsername')]",
"Password": "[parameters('deployExecPassword')]"
}
},
"configurationUrlSasToken": "[parameters('_artifactsLocationSasToken')]"
}
}
}],
"outputs": {}
}
azuredeploy.parameters.json
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"deployExecPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<GUID>/resourceGroups/<Resource Group Name>/providers/Microsoft.KeyVault/vaults/<Resource Group Name>-key-vault"
},
"secretName": "web-app-server-deployexec-password"
}
}
}
}
DSC/CreateLocalUser.ps1
Configuration Main
{
Param (
[string] $nodeName,
[PSCredential]$deployExecCredential
)
Import-DscResource -ModuleName PSDesiredStateConfiguration
Node $nodeName
{
User DeployExec
{
Ensure = "Present"
Description = "Deployment account for Web Deploy"
UserName = $deployExecCredential.UserName
Password = $deployExecCredential
PasswordNeverExpires = $true
PasswordChangeRequired = $false
PasswordChangeNotAllowed = $true
}
}
}
Deploy-AzureResourceGroup.ps1 (Default from Azure Resource Group template)
#Requires -Version 3.0
Param(
[string] [Parameter(Mandatory=$true)] $ResourceGroupLocation,
[string] $ResourceGroupName = 'AzureResourceGroup2',
[switch] $UploadArtifacts,
[string] $StorageAccountName,
[string] $StorageContainerName = $ResourceGroupName.ToLowerInvariant() + '-stageartifacts',
[string] $TemplateFile = 'azuredeploy.json',
[string] $TemplateParametersFile = 'azuredeploy.parameters.json',
[string] $ArtifactStagingDirectory = '.',
[string] $DSCSourceFolder = 'DSC',
[switch] $ValidateOnly
)
try {
[Microsoft.Azure.Common.Authentication.AzureSession]::ClientFactory.AddUserAgent("VSAzureTools-$UI$($host.name)".replace(' ','_'), '3.0.0')
} catch { }
$ErrorActionPreference = 'Stop'
Set-StrictMode -Version 3
function Format-ValidationOutput {
param ($ValidationOutput, [int] $Depth = 0)
Set-StrictMode -Off
return @($ValidationOutput | Where-Object { $_ -ne $null } | ForEach-Object { @(' ' * $Depth + ': ' + $_.Message) + @(Format-ValidationOutput @($_.Details) ($Depth + 1)) })
}
$OptionalParameters = New-Object -TypeName Hashtable
$TemplateFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateFile))
$TemplateParametersFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateParametersFile))
if ($UploadArtifacts) {
# Convert relative paths to absolute paths if needed
$ArtifactStagingDirectory = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $ArtifactStagingDirectory))
$DSCSourceFolder = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $DSCSourceFolder))
# Parse the parameter file and update the values of artifacts location and artifacts location SAS token if they are present
$JsonParameters = Get-Content $TemplateParametersFile -Raw | ConvertFrom-Json
if (($JsonParameters | Get-Member -Type NoteProperty 'parameters') -ne $null) {
$JsonParameters = $JsonParameters.parameters
}
$ArtifactsLocationName = '_artifactsLocation'
$ArtifactsLocationSasTokenName = '_artifactsLocationSasToken'
$OptionalParameters[$ArtifactsLocationName] = $JsonParameters | Select -Expand $ArtifactsLocationName -ErrorAction Ignore | Select -Expand 'value' -ErrorAction Ignore
$OptionalParameters[$ArtifactsLocationSasTokenName] = $JsonParameters | Select -Expand $ArtifactsLocationSasTokenName -ErrorAction Ignore | Select -Expand 'value' -ErrorAction Ignore
# Create DSC configuration archive
if (Test-Path $DSCSourceFolder) {
$DSCSourceFilePaths = @(Get-ChildItem $DSCSourceFolder -File -Filter '*.ps1' | ForEach-Object -Process {$_.FullName})
foreach ($DSCSourceFilePath in $DSCSourceFilePaths) {
$DSCArchiveFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.zip'
Publish-AzureRmVMDscConfiguration $DSCSourceFilePath -OutputArchivePath $DSCArchiveFilePath -Force -Verbose
}
}
# Create a storage account name if none was provided
if ($StorageAccountName -eq '') {
$StorageAccountName = 'stage' + ((Get-AzureRmContext).Subscription.SubscriptionId).Replace('-', '').substring(0, 19)
}
$StorageAccount = (Get-AzureRmStorageAccount | Where-Object{$_.StorageAccountName -eq $StorageAccountName})
# Create the storage account if it doesn't already exist
if ($StorageAccount -eq $null) {
$StorageResourceGroupName = 'ARM_Deploy_Staging'
New-AzureRmResourceGroup -Location "$ResourceGroupLocation" -Name $StorageResourceGroupName -Force
$StorageAccount = New-AzureRmStorageAccount -StorageAccountName $StorageAccountName -Type 'Standard_LRS' -ResourceGroupName $StorageResourceGroupName -Location "$ResourceGroupLocation"
}
# Generate the value for artifacts location if it is not provided in the parameter file
if ($OptionalParameters[$ArtifactsLocationName] -eq $null) {
$OptionalParameters[$ArtifactsLocationName] = $StorageAccount.Context.BlobEndPoint + $StorageContainerName
}
# Copy files from the local storage staging location to the storage account container
New-AzureStorageContainer -Name $StorageContainerName -Context $StorageAccount.Context -ErrorAction SilentlyContinue *>&1
$ArtifactFilePaths = Get-ChildItem $ArtifactStagingDirectory -Recurse -File | ForEach-Object -Process {$_.FullName}
foreach ($SourcePath in $ArtifactFilePaths) {
Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.length + 1) `
-Container $StorageContainerName -Context $StorageAccount.Context -Force
}
# Generate a 4 hour SAS token for the artifacts location if one was not provided in the parameters file
if ($OptionalParameters[$ArtifactsLocationSasTokenName] -eq $null) {
$OptionalParameters[$ArtifactsLocationSasTokenName] = ConvertTo-SecureString -AsPlainText -Force `
(New-AzureStorageContainerSASToken -Container $StorageContainerName -Context $StorageAccount.Context -Permission r -ExpiryTime (Get-Date).AddHours(4))
}
}
# Create or update the resource group using the specified template file and template parameters file
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Verbose -Force
if ($ValidateOnly) {
$ErrorMessages = Format-ValidationOutput (Test-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
-TemplateFile $TemplateFile `
-TemplateParameterFile $TemplateParametersFile `
@OptionalParameters)
if ($ErrorMessages) {
Write-Output '', 'Validation returned the following errors:', @($ErrorMessages), '', 'Template is invalid.'
}
else {
Write-Output '', 'Template is valid.'
}
}
else {
New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $TemplateFile).BaseName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm')) `
-ResourceGroupName $ResourceGroupName `
-TemplateFile $TemplateFile `
-TemplateParameterFile $TemplateParametersFile `
@OptionalParameters `
-Force -Verbose `
-ErrorVariable ErrorMessages
if ($ErrorMessages) {
Write-Output '', 'Template deployment returned the following errors:', @(@($ErrorMessages) | ForEach-Object { $_.Exception.Message.TrimEnd("`r`n") })
}
}
Do note that my original template deploys the entire server, but I am able to reproduce the issue I am having with the above template and any old Windows Server 2016 VM.
I am running the template through Visual Studio 2017 Community.
The template validates and runs, but when I run it I am getting the following error message:
22:26:41 - New-AzureRmResourceGroupDeployment : 10:26:41 PM - VM has reported a failure when processing extension
22:26:41 - 'Microsoft.Powershell.DSC'. Error message: "The DSC Extension received an incorrect input: Compilation errors occurred
22:26:41 - while processing configuration 'Main'. Please review the errors reported in error stream and modify your configuration
22:26:41 - code appropriately. System.InvalidOperationException error processing property 'Password' OF TYPE 'User': Converting
22:26:41 - and storing encrypted passwords as plain text is not recommended. For more information on securing credentials in MOF
22:26:41 - file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729
22:26:41 - At C:\Packages\Plugins\Microsoft.Powershell.DSC\2.77.0.0\DSCWork\CreateLocalUser.4\CreateLocalUser.ps1:12 char:3
22:26:41 - + User Converting and storing encrypted passwords as plain text is not recommended. For more information on securing
22:26:41 - credentials in MOF file, please refer to MSDN blog: http://go.microsoft.com/fwlink/?LinkId=393729 Cannot find path
22:26:41 - 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist. Cannot find path
22:26:41 - 'HKLM:\SOFTWARE\Microsoft\PowerShell\3\DSC' because it does not exist.
22:26:41 - Another common error is to specify parameters of type PSCredential without an explicit type. Please be sure to use a
22:26:41 - typed parameter in DSC Configuration, for example:
22:26:41 - configuration Example {
22:26:41 - param([PSCredential] $UserAccount)
22:26:41 - ...
22:26:41 - }.
22:26:41 - Please correct the input and retry executing the extension.".
22:26:41 - At F:\Users\Shad\Documents\Visual Studio 2017\Projects\AzureResourceGroup2\AzureResourceGroup2\bin\Debug\staging\AzureR
22:26:41 - esourceGroup2\Deploy-AzureResourceGroup.ps1:108 char:5
22:26:41 - + New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $Templat ...
22:26:41 - + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
22:26:41 - + CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
22:26:41 - + FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDep
22:26:41 - loymentCmdlet
What I Tried
I have already tried looking at this question:
Passing credentials to DSC script from arm template
but it seems to be using the old ARM template format to construct the DSC call and since I am not familiar with it, I am unable to work out what the extra parameters are for.
I also took a look at this question:
Securely pass credentials to DSC Extension from ARM Template
and the accepted answer is to just use PsDSCAllowPlainTextPassword = $true. While this doesn't seem like the best way, I tried adding the following configuration data file.
CreateLocalUser.psd1
#
# CreateLocalUser.psd1
#
@{
AllNodes = @(
@{
NodeName = '*'
PSDscAllowPlainTextPassword = $true
}
)
}
And changing Deploy-AzureResourceGroup.ps1 to pass these settings to the DSC configuration, as follows.
# Create DSC configuration archive
if (Test-Path $DSCSourceFolder) {
$DSCSourceFilePaths = @(Get-ChildItem $DSCSourceFolder -File -Filter '*.ps1' | ForEach-Object -Process {$_.FullName})
foreach ($DSCSourceFilePath in $DSCSourceFilePaths) {
$DSCArchiveFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.zip'
$DSCConfigDataFilePath = $DSCSourceFilePath.Substring(0, $DSCSourceFilePath.Length - 4) + '.psd1'
Publish-AzureRmVMDscConfiguration $DSCSourceFilePath -OutputArchivePath $DSCArchiveFilePath -ConfigurationDataPath $DSCConfigDataFilePath -Force -Verbose
}
}
However, I am not getting any change in the error message when doing so.
I have been through loads of the Azure documentation to try to work this out. The link in the error message is entirely unhelpful as there are no examples of how to use the encryption with an ARM template. Most of the examples are showing running Powershell scripts rather than an ARM template. And there isn't a single example in the documentation anywhere on how to retrieve the password from a key vault and pass it into a DSC extension file from an ARM template. Is this even possible?
Note that I would be happy with simply using Visual Studio to do my deployment, if I could just get it working. But I have been working on this issue for several days and cannot seem to find a single solution that works. So, I thought I would ask here before throwing in the towel and just using the admin Windows account for web deployment.
UPDATE 2018-12-21
I noticed when running the deploy command through Visual Studio 2017 that the log contains an error message:
20:13:43 - Build started.
20:13:43 - Project "web-app-server.deployproj" (StageArtifacts target(s)):
20:13:43 - Project "web-app-server.deployproj" (ContentFilesProjectOutputGroup target(s)):
20:13:43 - Done building project "web-app-server.deployproj".
20:13:43 - Done building project "web-app-server.deployproj".
20:13:43 - Build succeeded.
20:13:43 - Launching PowerShell script with the following command:
20:13:43 - 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\Deploy-AzureResourceGroup.ps1' -StorageAccountName 'staged<xxxxxxxxxxxxxxxxx>' -ResourceGroupName 'web-app-server' -ResourceGroupLocation 'eastus2' -TemplateFile 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\web-app-server.json' -TemplateParametersFile 'F:\Projects\thepath\web-app-server\bin\Debug\staging\web-app-server\web-app-server.parameters.json' -ArtifactStagingDirectory '.' -DSCSourceFolder '.\DSC' -UploadArtifacts
20:13:43 - Deploying template using PowerShell script failed.
20:13:43 - Tell us about your experience at https://go.microsoft.com/fwlink/?LinkId=691202
After the error message occurs, it continues. I suspect it may be falling back to using another method and it is that method that is causing the failure and why others are not seeing what I am.
Sadly, no matter what I try this does not function with DSC.
Workaround
For now, I am working around this issue using a custom script extension, like this:
{
"name": "[concat(parameters('virtualMachineName'), '/addWindowsAccounts')]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2018-06-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('virtualMachineName'))]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.9",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": []
},
"protectedSettings": {
"commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -Command \"& { $secureDeployExecPassword = ConvertTo-SecureString -String ', variables('quote'), parameters('deployExecPassword'), variables('quote'), ' -AsPlainText -Force; New-LocalUser -AccountNeverExpires -UserMayNotChangePassword -Name ', variables('quote'), parameters('deployExecUsername'), variables('quote'), ' -Password $secureDeployExecPassword -FullName ', variables('quote'), parameters('deployExecUsername'), variables('quote'), ' -Description ', variables('quote'), 'Deployment account for Web Deploy', variables('quote'), ' -ErrorAction Continue ', '}\"')]"
}
}
}
And then I use dependsOn to force the custom script extension to run before DSC, by setting the following on the DSC extension:
"dependsOn": [
"[resourceId('Microsoft.Compute/virtualMachines', parameters('virtualMachineName'))]",
"addWindowsAccounts"
],
Not the ideal solution, but it is secure and gets me past this blocking issue without resorting to using the administrator password for website deployment.
Here's one of the recent ones I'm using:
Configuration:
Param(
[System.Management.Automation.PSCredential]$Admincreds
)
and in the template I do this:
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.20",
"autoUpgradeMinorVersion": true,
"settings": {
"configuration": {
"url": "url.zip",
"script": "file.ps1",
"function": "configuration"
}
},
"protectedSettings": {
"configurationArguments": {
"adminCreds": {
"userName": "username",
"password": "password"
}
}
}
You don't need PSDscAllowPlainTextPassword, because configuration arguments passed in protectedSettings are automatically encrypted by the Microsoft.Powershell.DSC extension.
This template, run against the same resource group, produced the error StorageAccountAlreadyTaken, even though it is expected to do an incremental deployment, i.e. not to create any new resources. How can I fix this?
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('prodSaName')]",
"tags": { "displayName": "Storage account" },
"apiVersion": "2016-01-01",
"location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
"kind": "Storage",
"properties": {}
},
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[ge.saName(parameters('brn'), parameters('environments')[copyIndex()])]",
"tags": { "displayName": "Storage account" },
"apiVersion": "2016-01-01",
"location": "[parameters('location')]",
"sku": { "name": "Standard_LRS" },
"kind": "Storage",
"copy": {
"name": "EnvStorageAccounts",
"count": "[length(parameters('environments'))]"
},
Run New-AzureRmResourceGroupDeployment @args -debug with this template and it outputs:
New-AzureRmResourceGroupDeployment : 15:03:38 - Error: Code=StorageAccountAlreadyTaken; Message=The storage account named
UNIQUENAMEOFTHERESOURCE is already taken.
At C:\coding\gameo\gameoemergency\provision.run.ps1:101 char:9
+ New-AzureRmResourceGroupDeployment @args -debug
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureRmResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.ResourceManager.Cmdlets.Implementation.NewAzureResourceGroupDeploymentCmdle
t
P.S. Somehow, running this from VSTS does not produce the error; the output above is from a run on my local machine. Also, strangely, there is no such error in the Activity Log.
Update.
If I don't select the subscription with both cmdlets as below, but only with the RM cmdlet, there are no errors.
Select-AzureRmSubscription -SubscriptionID $Cfg.subscriptionId > $null
# this seems to be the reason of the error, if removed - works:
Select-AzureSubscription -SubscriptionId $Cfg.subscriptionId > $null
# ... in order to run:
exec { Start-AzureWebsiteJob @startArgs | out-null }
Not sure if it helps, but I had this issue with API versions prior to 2016-01-01, as there was a known bug. I updated to 2016-12-01, which was the latest at the time, and it worked for me.
Have you tried updating the API version to the latest and trying again? The latest is 2018-02-01.
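If you are not sure which API versions your subscription actually exposes for storage accounts, a quick check with the AzureRM module (assuming you are already logged in) looks like this:
# List the API versions registered for Microsoft.Storage/storageAccounts.
$provider = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Storage
($provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq 'storageAccounts' }).ApiVersions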
This is the result of selecting the subscription twice: once for the Resource Manager cmdlets and once with Select-AzureSubscription, as below:
Select-AzureRmSubscription -SubscriptionID $Cfg.subscriptionId > $null
# this seems to be the reason of the error, if removed - works:
# Select-AzureSubscription -SubscriptionId $Cfg.subscriptionId > $null
To avoid the old (pre-RM) cmdlets we can use Invoke-AzureRmResourceAction instead. See the example at https://stackoverflow.com/a/51712321/511144 and the sketch below.
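As a rough illustration only (not the linked answer verbatim), running a triggered WebJob through the Resource Manager API could look like the sketch below; the $Cfg properties for the resource group, web app, and WebJob names are assumptions made for this example.
# Sketch: start a triggered WebJob via Invoke-AzureRmResourceAction instead of Start-AzureWebsiteJob.
# $Cfg.resourceGroupName, $Cfg.webAppName and $Cfg.webJobName are placeholder properties.
Invoke-AzureRmResourceAction -ResourceGroupName $Cfg.resourceGroupName `
    -ResourceType Microsoft.Web/sites/triggeredwebjobs `
    -ResourceName "$($Cfg.webAppName)/$($Cfg.webJobName)" `
    -Action run -ApiVersion 2018-02-01 -Force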
A good thing to do before performing a deployment is to validate whether the resources can actually be created with the names provided. For Event Hub and Storage Account there are built-in cmdlets that allow you to check this.
For Event Hub
$availability = Test-AzureRmEventHubName -Namespace $eventHubNamespaceName | Select-Object -ExpandProperty NameAvailable
if ($availability -eq $true) {
Write-Host -ForegroundColor Green `r`n"Entered Event Hub Namespace name is available"
}
For Storage Account
while ($true) {
Write-Host -ForegroundColor Yellow `r`n"Enter Storage account name (Press Enter for default) : " -NoNewline
$storageAccountName = Read-Host
if ([string]::IsNullOrEmpty($storageAccountName)) {
$storageAccountName = $defaultstorageAccountName
}
Write-Host -ForegroundColor Yellow `r`n"Checking whether the entered name for Storage account is available"
$availability = Get-AzureRmStorageAccountNameAvailability -Name $storageAccountName | Select-Object -ExpandProperty NameAvailable
if ($availability -eq $true ) {
Write-Host -ForegroundColor Green `r`n"Entered Storage account name is available"
break
}
Write-Host `r`n"Enter valid Storage account name"
}
This prevents deployment errors by validating the names up front and asking the user to enter a different name if the chosen one is not available.
I'm attempting to create an ARM Template that includes creating a Storage account.
I want to create a StorageV2 (general purpose v2) account but this seems to fail because StorageV2 does not exist in the schema.
{
"name": "[variables('xblobstorageName')]",
"type": "Microsoft.Storage/storageAccounts",
"location": "[resourceGroup().location]",
"apiVersion": "2016-01-01",
"sku": {
"name": "[parameters('xblobstorageType')]"
},
"dependsOn": [],
"tags": {
"displayName": "xblobstorage"
},
"kind": "StorageV2"
}
With apiVersion 2016-01-01, the only allowed values for kind are Storage and BlobStorage, so attempting to deploy the above template returns the following error:
"error": {
"code": "AccountPropertyIsInvalid",
"message": "Account property kind is invalid for the request."
}
Is it possible to create a V2 storage account using ARM Templates?
You have to update the apiVersion to 2018-02-01.
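Alternatively, if you just need the account and are not tied to the template, a sufficiently recent AzureRM.Storage module also accepts the v2 kind directly; here is a sketch with placeholder names.
# Sketch: create a general-purpose v2 storage account with the AzureRM cmdlet.
# Requires an AzureRM.Storage version that accepts -Kind StorageV2; names are placeholders.
New-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystoragev2acct" -Location "eastus" `
    -SkuName Standard_LRS -Kind StorageV2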
I wrote a PowerShell script to determine the latest API Version for a resource provider:
<#
.Synopsis
Gets the latest API version of a resource provider
.DESCRIPTION
The following cmdlet returns the latest API version for the specified resource provider.
You can also include pre-release (preview) versions using the -IncludePreview switch
.EXAMPLE
Using the Full Parameter Set:
Get-AzureRmResourceProviderLatestApiVersion -Type Microsoft.Storage/storageAccounts
.EXAMPLE
Using the Full Parameter Set with the -IncludePreview switch:
Get-AzureRmResourceProviderLatestApiVersion -Type Microsoft.Storage/storageAccounts -IncludePreview
.EXAMPLE
Using the ProviderAndType Parameter Set:
Get-AzureRmResourceProviderLatestApiVersion -ResourceProvider Microsoft.Storage -ResourceType storageAccounts
#>
function Get-AzureRmResourceProviderLatestApiVersion
{
[CmdletBinding()]
[Alias()]
[OutputType([string])]
Param
(
[Parameter(ParameterSetName = 'Full', Mandatory = $true, Position = 0)]
[string]$Type,
[Parameter(ParameterSetName = 'ProviderAndType', Mandatory = $true, Position = 0)]
[string]$ResourceProvider,
[Parameter(ParameterSetName = 'ProviderAndType', Mandatory = $true, Position = 1)]
[string]$ResourceType,
[switch]$IncludePreview
)
# retrieving the resource providers is time consuming therefore we store
# them in a script variable to accelerate subsequent requests.
if (-not $script:resourceProvider)
{
$script:resourceProvider = Get-AzureRmResourceProvider
}
if ($PSCmdlet.ParameterSetName -eq 'Full')
{
$ResourceProvider = ($Type -replace "\/.*")
$ResourceType = ($Type -replace ".*?\/(.+)", '$1')
}
$provider = $script:resourceProvider |
Where-Object {
$_.ProviderNamespace -eq $ResourceProvider -and
$_.ResourceTypes.ResourceTypeName -eq $ResourceType
}
if ($IncludePreview)
{
$provider.ResourceTypes.ApiVersions[0]
}
else
{
$provider.ResourceTypes.ApiVersions | Where-Object {
$_ -notmatch '-preview'
} | Select-Object -First 1
}
}
Usage:
Get-AzureRmResourceProviderLatestApiVersion -Type Microsoft.Storage/storageAccounts
And wrote a blog article about it:
Determine latest API version for a resource provider
Post updated. The issue has been solved. The scripts below will create a resource group, create a service principal, deploy a key vault, configure permissions, and write a secret to the vault. Hope this helps! :)
Problem:
I am logged into PowerShell as a Service Principal that has Owner permissions on a resource group.
I get permission errors when I try to create a vault, set permissions on the vault, and write secrets.
Solution:
Step 1: Create the resource group and Service Principal. You must be logged in as an administrator to execute this script.
Clear-Host
Import-Module Azure
Import-Module AzureRM.Resources
Add-AzureRmAccount
Get-AzureRmSubscription
Set-AzureRmContext -SubscriptionId <Your subscription id goes here>
$ServicePrincipalDisplayName = "myServicePrincipalName"
$CertificateName = "CN=SomeCertName"
$cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject $CertificateName -KeySpec KeyExchange
$keyValue = [Convert]::ToBase64String($cert.GetRawCertData())
$ResouceGroupName = "myRessourceGroup"
$location = "North Central US"
# Create the resource group
New-AzureRmResourceGroup -Name $ResouceGroupName -Location $location
$ResouceGroupNameScope = (Get-AzureRmResourceGroup -Name $ResouceGroupName -ErrorAction Stop).ResourceId
# Create the Service Principal that logs in with a certificate
New-AzureRMADServicePrincipal -DisplayName $ServicePrincipalDisplayName -CertValue $keyValue -EndDate $cert.NotAfter -StartDate $cert.NotBefore
$myServicePrincipal = Get-AzureRmADServicePrincipal -SearchString $ServicePrincipalDisplayName
Write-Host "myServicePrincipal.ApplicationId " $myServicePrincipal.ApplicationId -ForegroundColor Green
Write-Host "myServicePrincipal.DisplayName " $myServicePrincipal.DisplayName
# Sleep here for a few seconds to allow the service principal application to become active (should only take a couple of seconds normally)
Write-Host "Waiting 10 seconds"
Start-Sleep -s 10
Write-Host "Make the Service Principal owner of the resource group " $ResouceGroupName
$NewRole = $null
$Retries = 0
While ($NewRole -eq $null -and $Retries -le 6)
{
New-AzureRMRoleAssignment -RoleDefinitionName Owner -ServicePrincipalName $myServicePrincipal.ApplicationId -Scope $ResouceGroupNameScope -ErrorAction SilentlyContinue
$NewRole = Get-AzureRMRoleAssignment -ServicePrincipalName $myServicePrincipal.ApplicationId
Write-Host "NewRole.DisplayName " $NewRole.DisplayName
Write-Host "NewRole.Scope: " $NewRole.Scope
$Retries++
Start-Sleep -s 10
}
Write-Host "Service principal created" -ForegroundColor Green
Step 2: ARM deployment of the vault. Create a file named keyvault2.parameters.json. Update the IDs to reflect your installation (5479eaf6-31a3-4be3-9fb6-c2cdadc31735 is the service principal used by Azure Web Apps when accessing the vault).
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vaultName": {
"value": "valueFromParameterFile"
},
"vaultlocation": {
"value": "valueFromParameterFile"
},
"skumode": {
"value": "Standard"
},
"accessPolicyList": {
"value": [
{
"objectId": "The object ID for your AAD user goes here so that you can read secrets etc",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [
"Get",
"List"
],
"secrets": [
"Get",
"List"
],
"certificates": [
"Get",
"List"
]
}
},
{
"objectId": "The object ID for the service principal goes here Get-AzureRmADServicePrincipal -ServicePrincipalName <Service Principal Application ID>",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"Recover",
"Backup",
"Restore"
],
"secrets": [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore"
],
"certificates": [
"Get",
"List",
"Update",
"Create",
"Import",
"Delete",
"ManageContacts",
"ManageIssuers",
"GetIssuers",
"ListIssuers",
"SetIssuers",
"DeleteIssuers"
]
},
"applicationId": null
},
{
"objectId": "5479eaf6-31a3-4be3-9fb6-c2cdadc31735",
"tenantId": "Your Tenant Id goes here",
"permissions": {
"keys": [],
"secrets": [
"Get"
],
"certificates": []
},
"applicationId": null
}
]
},
"tenant": {
"value": "Your Tenant Id goes here"
},
"isenabledForDeployment": {
"value": true
},
"isenabledForTemplateDeployment": {
"value": false
},
"isenabledForDiskEncryption": {
"value": false
}
}
}
Step 3: ARM deployment of the vault. Create a file named keyvault2.template.json.
{
"$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vaultName": {
"type": "string"
},
"vaultlocation": {
"type": "string"
},
"skumode": {
"type": "string",
"defaultValue": "Standard",
"allowedValues": [
"Standard",
"standard",
"Premium",
"premium"
],
"metadata": {
"description": "SKU for the vault"
}
},
"accessPolicyList": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The access policies defined for this vault."
}
},
"tenant": {
"type": "string"
},
"isenabledForDeployment": {
"type": "bool"
},
"isenabledForTemplateDeployment": {
"type": "bool"
},
"isenabledForDiskEncryption": {
"type": "bool"
}
},
"resources": [
{
"apiVersion": "2015-06-01",
"name": "[parameters('vaultName')]",
"location": "[parameters('vaultlocation')]",
"type": "Microsoft.KeyVault/vaults",
"properties": {
"enabledForDeployment": "[parameters('isenabledForDeployment')]",
"enabledForTemplateDeployment": "[parameters('isenabledForTemplateDeployment')]",
"enabledForDiskEncryption": "[parameters('isenabledForDiskEncryption')]",
"accessPolicies": "[parameters('accessPolicyList')]",
"tenantId": "[parameters('tenant')]",
"sku": {
"name": "[parameters('skumode')]",
"family": "A"
}
}
}
]
}
Step 4: Deploy the vault. Start a new PowerShell window and execute this script. Update the three IDs ($ServicePrincipalApplicationId, $TenantId, $SubscriptionId).
Clear-Host
Import-Module Azure
Import-Module AzureRM.Resources
$ServicePrincipalApplicationId = "xxx"
$TenantId = "yyy"
$SubscriptionId = "zzz"
$CertificateName = "CN=SomeCertName"
$ResouceGroupName = "myRessourceGroup"
$location = "North Central US"
$VaultName = "MyVault" + (Get-Random -minimum 10000000 -maximum 1000000000)
$MySecret = ConvertTo-SecureString -String "MyValue" -AsPlainText -Force
$Cert = Get-ChildItem cert:\CurrentUser\My\ | Where-Object {$_.Subject -match $CertificateName }
Write-Host "cert.Thumbprint " $cert.Thumbprint
Write-Host "cert.Subject " $cert.Subject
Add-AzureRmAccount -ServicePrincipal -CertificateThumbprint $cert.Thumbprint -ApplicationId $ServicePrincipalApplicationId -TenantId $TenantId
Get-AzureRmSubscription
Set-AzureRmContext -SubscriptionId $SubscriptionId
Write-Host ""
Write-Host "Creating vault" -ForegroundColor Yellow
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResouceGroupName -vaultName $vaultName -vaultlocation $location -isenabledForDeployment $true -TemplateFile ".\keyvault2.template.json" -TemplateParameterFile ".\keyvault2.parameters.json"
Write-Host ""
Write-Host "Key Vault " $vaultName " deployed" -ForegroundColor green
Write-Host "Wait 5 seconds"
Start-Sleep -Seconds 5
Write-Host "Write Secret" -ForegroundColor Yellow
Set-AzureKeyVaultSecret -VaultName $VaultName -Name "MyKey" -SecretValue $MySecret
Write-Host "Wait 10 seconds"
Start-Sleep -Seconds 10
Write-Host "Read secret"
Get-AzureKeyVaultSecret -VaultName $VaultName -Name "MyKey"
Set-AzureRmKeyVaultAccessPolicy -VaultName $name -ObjectId $oId -PermissionsToSecrets get
returns the error:
Set-AzureRmKeyVaultAccessPolicy : Insufficient privileges to complete the operation.
The solution is to add the additional parameter -BypassObjectIdValidation:
Set-AzureRmKeyVaultAccessPolicy -BypassObjectIdValidation -VaultName $name -ObjectId $oId -PermissionsToSecrets get
This looks like a hack, but it works for me. After this, the object with $oId has access to the key vault. (To check the access policies, use Get-AzureRmKeyVault -VaultName $vaultName.)
The solution was to move the configuration of the permissions into the ARM template instead of trying to do it with PowerShell. As soon as I did that, all the permission issues were solved.
In the ARM template, the object ID I had specified for the Service Principal was wrong. I thought it was the Object ID you can find in the portal under App registrations, but no: what it actually wants is the object ID of the service principal of the Azure AD application.
It will let you deploy the ARM template just fine even with the wrong ID, and everything looks correctly configured, until you start wondering why the icon for your service principal looks different from the other users'. This, of course, you will not notice until much later if, like me, you only had one user...
Wrong ID (its icon is different) vs. correct ID: the two portal screenshots are not reproduced here.
This post gave me the final solution:
How do I fix an "Operation 'set' not allowed" error when creating an Azure KeyVault secret programmatically?
I struggled with this issue a lot this week, as I don't have permission in my AAD to add API permissions for the service principal. I found a solution using the ARM Output marketplace task: with it, I can retrieve the principal IDs of my objects from the ARM template outputs and convert them into pipeline variables, which in turn can be consumed by an Azure PowerShell script to update the Key Vault access policies.
In the ARM template, I added this output variable, to return the website principal id - this is the information I couldn't query AD with.
"outputs": {
"websitePrincipalId": {
"type": "string",
"value": "[reference(concat(resourceId('Microsoft.Web/sites', variables('webSiteName')), '/providers/Microsoft.ManagedIdentity/Identities/default'), '2015-08-31-PREVIEW').principalId]"
}
}
Then I used the ARM Output task to return the outputs as pipeline variables, which I could then use in an Azure PowerShell script to populate my key vault with the correct access policies:
Set-AzKeyVaultAccessPolicy -VaultName "$(KeyVaultName)" -ObjectId "$(servicePrincipalId)" -PermissionsToSecrets list,get -PassThru -BypassObjectIdValidation
According to your description, I tested this in my lab. I also used my Service Principal to log in to my Azure subscription, and your cmdlet works for me.
Did you check your Service Principal's role? You can check it in the Azure portal.
Please ensure your Service Principal has the Contributor or Owner role. For more information, please refer to this link.
Update:
I tested your PowerShell script in my lab and it works fine. I suggest you use PowerShell to create the key vault and grant the permissions, for example as sketched below.
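A rough sketch of that approach (the vault name, resource group, and service principal application ID are placeholders, not values from the question):
# Create the vault with an account that has sufficient rights, then grant the
# service principal access to secrets and keys (AzureRM module assumed).
New-AzureRmKeyVault -VaultName "MyVault12345" -ResourceGroupName "myRessourceGroup" -Location "North Central US"
Set-AzureRmKeyVaultAccessPolicy -VaultName "MyVault12345" `
    -ServicePrincipalName "<service principal application id>" `
    -PermissionsToSecrets get,list,set -PermissionsToKeys get,list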
2022: refer to https://learn.microsoft.com/en-us/azure/key-vault/general/assign-access-policy?tabs=azure-cli
Run in Azure Cloud Shell (Azure CLI 2.43.0):
az keyvault set-policy --name myKeyVault --object-id <object-id> --secret-permissions <secret-permissions> --key-permissions <key-permissions> --certificate-permissions <certificate-permissions>
Remove the flags you don't want.